RE: (long) Entropy and "semantic domain"

Rolf Furuli (furuli@online.no)
Wed, 3 Jun 1998 14:16:30 +0200

Pete Phillips wrote:

>Yes, Rolf, I hear what you are saying - you talk in psycholinguistics and I
>translate into cognitive linguistics and in the end they are very similar
>ideas. I don't think I have a problem with anything you are saying now. I
>am simply wary of people saying that the lexeme denotes the referent rather
>than the concept. With the tie up between one word-one concept you seemed
>to be doing just this - rather than point to the concept and say that we
>can express this concept with a variety of different lexemes each pertinent
>to their own respective contexts, you seem to be pointing to the lexeme and
>then leading onto the concept. Would it be better to start with the
>concept and then begin to look at the context and work out which lexeme was
>best. For example, your concept "bird" might be expressed by any of your
>examples or e.g. "blackbird, penguin, ostrich" but if the context included
>the categories "Antarctic", "black and white", "Emperor" you would want to
>express the concept with "penguin" (I hope! :>))
>So in translating from Greek to English (or any other language) you need to
>look at the concept referred to by the Greek lexeme and choose from this
>concept pool the appropriate English lexeme. Hence, to use an example from
>my research but one which may cause problems if any one takes it up, the
>Greek lexeme LOGOS refers to a polysemous concept involving ideas, words,
>communication, reasoning, numeracy, philosophy, wisdom......etc. Here the
>idea of one word-one concept holds but is not too helpful for translation.
> All we can do is attempt to go from the lexeme to the concept and through
>a close reading of the specific context hope to draw the right connection
>to the right English concept and so produce from that concept pool the
>correct English lexeme.
>
>Yes?
>

Dear Pete,

Even when we communicate in our own language, we do not *hear* everything
the other party is saying; we fill in the gaps by *guessing* at what he or
she is saying. Because we share the same PP, this usually works very well.
It illustrates, however, that verbal or written communication does not
*perfectly* convey ideas, but does so only in an approximate way. Because
concepts have fuzzy edges, these "approximate" ideas (only a part of the
concept being made visible) are enough for understanding. From this point
of view I have no problem with your words that one concept can be
expressed "with a variety of different lexemes" (as synonyms), but I would
add that the different lexemes do so with different degrees of precision.

This means that in my model it is impossible (even a contradiction in
terms) to start with the *concept* and then look for the lexeme(s) that can
best express this concept. This is so because a *concept* has no value
in itself; it does not exist apart from a particular word, but is a
reaction or understanding activated in the brain when one hears the sounds
of a particular word or sees its letters (this is of course a result of
learning, experience and convention, but that is another matter). The
model therefore presumes that each word signals one particular concept, but
once we have gone the way via *this word* and found its concept, it is
meaningful to discuss which lexemes can (more or less accurately) express
this concept. So there is no concept signalled by "bird" in English
existing independently of the lexeme "bird", but once the concept is
activated, we can discuss synonyms such as "flying creatures", which is
broader and can also include insects.

My attacks on several of the ways the semantic domain model is applied are
not based on a downgrading of the context (or co-text). On the contrary, I
value these very highly, and would not dream of doing a word study without
working extensively with them. The questions I pose, however, are: "Who is
going to make use of the context?" and "Who is going to decide its
meaning?" When you do research on John, you have a legitimate reason to
use the context to the full, and so does the (intelligent) Bible reader.
But a Bible translator is in a different position. If he or she is going
to make an interpretative translation, there is freedom to make all kinds
of interpretations and decisions on behalf of the readers. But by doing
this, the readers are placed completely in the hands of the translators.
If the Bible translator, on the other hand, wants to make a version that
helps the reader come as close as possible to the original text by means
of his or her mother tongue, the possibilities for interpretations and
decisions on the part of the translator are greatly restricted.

We must never forget that the semantic domain model was created by the
foremost spokesmen for the idiomatic translation method, and that its very
purpose is to help translators make interpretative translations. No
problem with that for most target groups! My research has shown, I
believe, that it is possible to make a model for lexical semantics based
on modern scientific principles which can serve as a basis for literal
translation. I call it the "semantic-signal model" (because each word is
thought to signal one concept), from which we can make the slogans "More
power to the *readers*!" and "Help the *reader* to make as many decisions
as possible!"

My suggestion regarding LOGOS is: in a thesis, in an article, and in a
lecture, do your interpretations and show the nuances. In a literal Bible
translation, translate it consistently with "word", except in those cases
where the two PPs are so far apart that the *context* shows that LOGOS
signals two concepts in English.

Regards
Rolf

Rolf Furuli
lecturer in Semitic languages
University of Oslo