In the future, computers will translate our languages effortlessly. At first the technology will look like a hearing aid that can recognize speech, translate the words, and output the translation into your ear as a synthetic voice, roughly modulated to match the original signal (so people still sound like themselves, just speaking a different language).

This technology will allow an English speaker to have a fluent conversation with a Spanish speaker, a Mandarin speaker to converse with a Russian speaker, and so on.

That is just the first step.

At first, new languages will be added manually and made available for wireless download: you select a “translate to” language that you understand, which carries an array of “translate from” definition files, just as a text translator works now (English to Spanish, Spanish to English, and so on).

Next, professionals will grow tired of maintaining a quadratically growing repository of “to” and “from” definition files. They will invent an intermediate language made up of symbolic constituent parts (theoretically human-readable, but not practical for use as a direct language). This is the same concept used by modern programming platforms like Microsoft’s .NET. The reason it’s possible to use several different programming languages in the same system is that they are all translated to the same intermediate language; in the case of .NET, that language is called, creatively enough, “MSIL,” or “Microsoft Intermediate Language.” As with the language I’m talking about, you can technically read and write MSIL directly, but it’s not meant for that purpose, and it’s more difficult than just using the higher-level language.

This intermediate language (IL) for translation will allow new language definitions to be produced by linguists without regard for how they will be translated, because each language only has to be translated to IL, and then from IL to any other language. This will make the maintenance of language files at least an order of magnitude easier.
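The arithmetic behind that claim is simple: with N languages, direct pairwise translation needs N×(N−1) definition files, while a pivot (IL) scheme needs only 2N. A minimal sketch of the bookkeeping, with all names invented for illustration:

```python
# Pairwise vs. pivot ("IL") translation bookkeeping. All names here
# are hypothetical; real interlingua systems are far richer.

def pairwise_file_count(n_languages: int) -> int:
    # One "A to B" definition file for every ordered pair of languages.
    return n_languages * (n_languages - 1)

def pivot_file_count(n_languages: int) -> int:
    # One "to IL" and one "from IL" file per language.
    return 2 * n_languages

class PivotTranslator:
    """Translate via an intermediate representation instead of N^2 pairs."""

    def __init__(self):
        self.to_il = {}    # language name -> function mapping text to IL
        self.from_il = {}  # language name -> function mapping IL to text

    def add_language(self, name, encoder, decoder):
        self.to_il[name] = encoder
        self.from_il[name] = decoder

    def translate(self, text, src, dst):
        il = self.to_il[src](text)   # any source language -> IL
        return self.from_il[dst](il) # IL -> any target language

# With 20 languages: 380 pairwise files vs. only 40 pivot files.
print(pairwise_file_count(20), pivot_file_count(20))
```

Adding the twenty-first language to the pivot scheme means writing two new files, not forty.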

Either on the heels of that development or simultaneously with it, a new context-recognition engine will take hold that intelligently adds to and modifies existing language definitions. Suppose you speak an unusual dialect of an obscure language that has a phrase for which the universal IL has no conceptual match. A listener will ask: “What does that mean?” You will explain the meaning, and just as a human would, the engine will decipher the network of concepts you describe to arrive at a working definition of the previously unknown phrase. This definition (any definition, really) can be modified slightly over time given more information about the network of concepts that support it. That’s how real language works too.
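A toy sketch of that loop, with every concept name and method invented for illustration: an unknown phrase prompts the question, the explanation is grounded in concepts the engine already knows, and later explanations refine the working definition.

```python
# Toy sketch of the context-recognition idea: an unknown phrase gets a
# working definition built from already-known concepts, refined as more
# supporting concepts arrive. Everything here is hypothetical.

class ConceptEngine:
    def __init__(self, known_concepts):
        self.known = set(known_concepts)
        self.definitions = {}  # phrase -> set of supporting concepts

    def lookup(self, phrase):
        if phrase not in self.definitions:
            return "What does that mean?"
        return " + ".join(sorted(self.definitions[phrase]))

    def explain(self, phrase, concepts):
        # Keep only concepts the engine already understands, and merge
        # them into whatever working definition already exists.
        grounded = set(concepts) & self.known
        self.definitions.setdefault(phrase, set()).update(grounded)

engine = ConceptEngine({"greeting", "farewell", "love", "existence"})
print(engine.lookup("aloha"))  # unknown phrase -> the engine asks
engine.explain("aloha", ["greeting", "farewell", "love"])
engine.explain("aloha", ["existence", "volcano"])  # "volcano" unknown, dropped
print(engine.lookup("aloha"))
```

The second `explain` call shows the refinement: the definition grows, but only by concepts that already exist in the shared web.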

But here’s where it gets interesting: how will the engine synthesize that new sound? What will the new word sound like? There are many options here.

For example, what if the word in question is a noun the listener knows nothing about, like some kind of exotic animal? Should the new translation simply use the original word, since there is no analog, or should it try to translate relative to the accepted taxonomy of animal life? There’s a 25-foot-tall ape in the jungle. He’s called Kong. Should your translator call him “Kong” or “Very Large Gorilla”? Should it do something else? “Korilla”?

What if it’s a more complex web of ideas? In Hawaii the word “Aloha” is used for hello and goodbye. The actual translation of the word is “I love you because you exist,” which is a fascinating concept, and sheds much light on the culture given its common usage.

How will the engine translate it? Like a person, it could bear the definition in mind but continue using the word itself, appropriated directly from the original language. That would miss the subtle meaning, because unlike a human, the translator’s job is to convey a complete and culturally accurate picture of the meaning, and just because the translator knows the definition doesn’t mean it’s clear to the listener. It might use the whole English phrase, such that whenever a native speaker says “Aloha,” you hear “I love you because you exist.” But that isn’t correct either, because it doesn’t convey the salutatory meaning that way.

The answer may be individual. Right now, foreign phrases are often misunderstood or ignored entirely. A human can decide that the conceptual difference between “soy” and “estoy” in Spanish isn’t important, and that he’ll just memorize the situations in which one or the other should be used to mean “I am.” Others might not recognize that there was ever an important distinction to begin with. Thus, complexity of translation within an individual is scaled perfectly according to the cognitive complexity of the person himself.

A general translation engine will not have such an option: it will be tasked with precisely and completely translating language at a very fundamental level, for all possible listeners. A person can choose to think of Aloha as hello and goodbye. A person can fail to understand what the literal translation even hints at. This is a limitation of humankind that we call “Lost in Translation.”

Translation engines are going to end this phenomenon, but an essential difficulty is this: in communication there is a sender, a medium, and a receiver. Even assuming the sender is clear, and the medium is relatively noise free, the receiver ultimately decides upon the meaning conveyed based on cultural factors, physical factors, intelligence factors, and others that I’m not thinking of. That means that each person’s translator will have to be calibrated to their particular strengths and limitations in order to deliver unfettered meaning.

It also means that that meaning will be different per person. My engine will translate Aloha differently than your engine, so even though we’ll all be having a conversation about the same thing, even if we both speak English, we’ll be hearing different words.

Consequently, as time moves toward infinity, our languages will diverge, no longer inhibited by the previous physical limitation of convention: in the past, language depended on shared meaning through similar voice modulations that would produce decodable strings of “words” that roughly matched in conceptual meaning between sender and receiver. Now that rough matching is no longer necessary, the symbols connected to our shared concepts will be tailored to the person.

I would hope that such tailoring could bring about a new age of thought. Our language determines our world view in many ways, and if the complex concepts we use could be encapsulated in individualized language, then we could jump up a perhaps limitless hierarchy of concepts very quickly.

Essentially we’ll keep our own language definition, updated in real time to be translatable to IL.

A pleasant side effect of such a system would be implicit debugging of our concepts as they are communicated. Something that Eliezer Yudkowsky talks about frequently, as in this post on Overcoming Bias, is the “Great Idea,” which normally turns out to be not as great as we had hoped.

I can postulate an idea and call it something new like “God” or “Dragon,” but something curious will happen when I try to tell other people about it. Their translation engine will choke, and they will get the word “Dragon” with no additional meaning, and their answer will be this: “What do you mean?”

Here’s the cool thing, though. In our world right now, “What do you mean?” is not at all profound, because it’s hard to share meaning in our current system of language, and “what do you mean” is the primary way of forming the conceptual framework for whatever new concept we’re attempting to understand. But that will not be so with ubiquitous translation, because when someone makes a statement that is even remotely comprehensible given their current conceptual web, the translator will convey that meaning in a precise and penetrating way. That will all but eliminate “What do you mean?” as the translators do the work of conveying exactly what the speaker means without any additional effort; that will become the default condition.

That means that when a person has to ask “What do you mean?”, it will mark either a truly new leap in concepts or nonsense. It will also mean that “what do you mean” will be heard as a precise request for a specific set of information: even if a person doesn’t exactly recognize where the conceptual disconnect lies between his web of meaning and the new concept, the translation engine will know precisely that, and so when the listener asks for more information, the translator will be able to formulate a much more precise question.

The end result is that new concepts will quickly either be connected with the conceptual framework that exists in the intermediate-language space, or be sussed out as nonsense: an independent web of concepts with no bearing on reality. When such a web is a work of fiction, it’s interesting entertainment; when such a web is a culturally held belief system, it is dangerous.

Another side effect of this individualized language system is that, depending on a person’s expertise and interests, his concepts will diverge wildly from another person’s. His concepts will encapsulate what he has already learned and mastered.

For example, a modern person has the concept of “desk.” Spoken to a caveman, it becomes clear that this desk concept has many subconcepts, which in turn have their own subconcepts. Eventually, a person would be able to explain a desk to the caveman, because the caveman has concepts in his mind like wood, the use of tools, maybe labor: all the concepts encapsulated by “desk.” The problem gets immeasurably more difficult with higher-order concepts.
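That recursive division of a concept into subconcepts, stopping wherever meaning is already shared, can be sketched in a few lines; the concept names and structure below are invented for illustration only.

```python
# Toy sketch of "desk" decomposing into subconcepts until every leaf
# is something the listener already knows. All concept names invented.

CONCEPTS = {
    "desk": ["wood", "tools", "labor", "writing"],
    "writing": ["symbols", "records"],
}

def explain(concept, listener_knows, depth=0):
    """Recursively expand a concept until it bottoms out in shared meaning."""
    lines = ["  " * depth + concept]
    if concept not in listener_knows:
        for sub in CONCEPTS.get(concept, []):
            lines += explain(sub, listener_knows, depth + 1)
    return lines

# The caveman already shares some leaf concepts, so expansion stops there.
caveman = {"wood", "tools", "labor", "symbols", "records"}
print("\n".join(explain("desk", caveman)))
```

The richer the listener's existing web, the shallower the expansion; a modern listener who already knows “desk” would get a single line back.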

Right now, when a man says “desk” to a caveman, it is his responsibility to divide the concept until the meaning is shared. With the ubiquitous translation engine, the man will say “desk,” and the caveman will hear what he needs to hear to understand what the man means. But that’s problematic: it takes far longer to explain a desk than to simply say “desk,” so there would be some lag in communication. The man talking about a desk might have to wait days or weeks to be understood.

This difficulty will be resolved by moving away from the “hearing aid” form factor.

Eventually, it won’t be a hearing aid device at all. It will be implanted, then later genetically installed prenatally, then simply passed down through generations of modified humans. If human beings were to catastrophically lose their technology and history, new generations wouldn’t recognize their mode of language as “technology” at all, just a natural state of being.

Shortly after, it will bypass the auditory senses entirely, allowing us to pass vibrations to each other to be understood more directly, eventually giving way to a medium that isn’t as prone to noise, perhaps wireless, like computers now. This would, in effect, be indistinguishable from telepathy.

This would also allow for much faster transfer of conceptual webs, so that our desk man and caveman could have a similar exchange. Even though the caveman would still have to form the web of concepts in his mind, that formation wouldn’t be meaningfully constrained by time as it is with auditory sensing; the information would still take time to propagate through the brain, but at a rate several orders of magnitude faster than the ear hearing the vibrations in order, the brain decoding them, then translating them, then interpreting them relative to existing knowledge.

The other question is: what effect will our individualized language have on babies? It seems plausible to have shared meaning and then diverge on the symbols we use to represent it, but how will a being with no meaning at all create a symbolic system from scratch? Will the whines be translated as “I want something but I don’t know what,” and eventually as “I am hungry” or “I have a shitty diaper”? Will parents’ words in response be translated into comfortable cooing, or will it be necessary to calibrate a new translation engine to simply convey the sound offered, so that the child can form a basic foundation (just as humans do now) for future divergence? Can the noises that are currently nonsense to the infant be translated into nonsense that is tailored to be easily understood by his particular brain pattern?

One of the more titillating questions to me is whether such a thing has already happened. What aspects of our experience seem natural to us but at some point were invented and created by previous humans or other intelligent entities, only to be forgotten? What if what we think of as our immune system is an invention of nanotechnology that was seamlessly integrated with our DNA? What if our system of communication, or other sensory systems, are the result of ingenuity rather than nature?

What if we ourselves, in our entirety, were inventions of some intelligence that has since left, or exists in a way so fundamentally askew from our mode that we cannot perceive them readily?

Alright, so it’s a tangent, and it’s far from original, but when one traces the line from where we are to a possible future that looks an awful lot like the present, such a tangent seems all the more plausible.

Happy New Year.

This post is an outgrowth of the conversation between A. Carrozza and me in the comments of my opening post, The Plight of the Lonely Genius. The question is: why can’t wisdom be taught?

At this point in the dialogue, we’ve failed to disagree on any major points, but I wanted to give the conversation a chance to develop into something interesting, so I’m making a post.

Wisdom: The ability, developed through experience, insight and reflection, to discern truth and exercise good judgment. (Ironically enough, a definition from Wikipedia, which was the most appropriate for the circumstance.)

The definition is a useful starting point, but does not capture the breadth of the actual concept. Maybe this is a semantic debate in disguise. We’ll see.

Carrozza opens:

Human beings are not the logical, open-minded rationalists that they pretend to be. There’s an invisible dimension to human cognition that runs parallel to logical thought. For lack of a better term, let’s call it “belief” or “non-rational conditioning.”

To summarize the post, Carrozza sees wisdom in the context of a dichotomy between itself and mainstream beliefs. Humans delude themselves with the notion that we are primarily rational and logical, when in fact we are governed by superstition that is evolutionarily advantageous.

He provides the following example of wisdom, which he lifts from Zen philosophy:

  1. All things are interconnected and interdependent, therefore nothing exists independently from anything else.
  2. The universe is in a constant state of flux, and nothing exists in the same form for more than an instant at a time.
  3. Thoughts and the words that define them are static, grossly overly simplistic, cognitive “maps” of an infinite, multi-dimensional, dynamic reality. They have utility in the same way that a street map has utility.

He asks why, given these simple, largely self-evident principles, does Zen take a lifetime to master? His answer: “…all of these principles run counter to our culturally-defined and biologically-hardwired cognitive programming. Wisdom has to swim ‘upstream.'”

The reason, he supposes, is that the mind must be stripped of its assumptions and biases, and that this is the path to “wisdom.” Having described the rigors of Zen practice, he goes on:

My sense is that, most likely, these traditional practices serve to bend, fold, stretch and — hopefully — “crack open” the unconscious (innate and culturally conditioned) premises and assumptions that define and rule a zen novice’s life and thought processes.

My response:

I call that conditioning the “animal brain” — it’s very much at odds with enlightened thought, but I don’t see the dichotomy as being so adversarial.

My position is that the animal brain is programmed by evolution, or “the frenzied, fearful belief in some eternal enemy,” and that human behavior on the whole tends to be governed by this animal within. My position is that “wisdom” isn’t so much running against the mainstream per se, as it is conquering the animal and bringing its passions and focus in line with the higher values of interconnection and the like.

In summation: “…I don’t think it’s like two sides of a coin, as much as it is two steps in a process.”

Carrozza splits into two topics here, one about mysticism, and one continuing this line of discussion. I’ll focus on the latter, and split the other into a separate post.

He opens by drawing a distinction between the knowledge of wisdom and the thought process that allows it to be useful, likening it to a farmer using seeds (knowledge) but needing good soil, or, in the opposite case, to water poured onto a duck’s back.

He expounds on this distinction, then brings up a related issue: most of the information disseminated to us is wrong. He uses this point to argue that direct experience is necessary for deep, reliable knowledge.

My response is this:

I see what you mean about wisdom being able to be taught both in the Zen sense, and the sense of subject mastery altering one’s thought process, but I think that is a trick of semantics.

Maybe it’s by definition that the two are not the same. If we allow that “wisdom” related to a topic can be acquired by traditional study of the topic, then we’ve failed to discern between wisdom and expertise. The term becomes meaningless if wisdom is both the soil and the seed. There is clearly a correlation, but I think it’s worth separating the ideas in our mind.

Expertise does create changes in thought process. I remember clearly when I was very young, first learning to program: the moment it clicked was when I had to create a conditional statement with multiple clauses. For some reason, the act of differentiating between and combining “AND” and “OR” in my mind provided me with a clarity and insight that would inform my entire thought process.
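A conditional of the kind described, mixing AND and OR clauses, might look like this in Python (an illustrative reconstruction with invented names, not the original code):

```python
# Illustrative multi-clause conditional: AND narrows the condition,
# OR widens it, and parentheses control which happens first.

def should_send_alert(severity: int, is_business_hours: bool, on_call: bool) -> bool:
    # Alert if the problem is severe, OR if it's moderate AND someone
    # is positioned to act on it right away.
    return severity >= 9 or (severity >= 5 and (is_business_hours or on_call))

print(should_send_alert(9, False, False))  # True: severe overrides everything
print(should_send_alert(6, False, True))   # True: moderate, but someone's on call
print(should_send_alert(6, False, False))  # False: moderate and no one to act
```

Dropping the parentheses changes the meaning entirely, which is exactly the kind of distinction that forces the clarity described above.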

Now, many years later, I occasionally dream in code, which is a sensation that a person who hasn’t mastered a subject will probably not know. I have mastered several subjects, and in each case I have been hit with that same sort of clarity. Thinking in terms of movement and equations, thinking in terms of sounds and vibrations.

These epiphanies force one to realize that our mentally abstracted view of the world is both profoundly limited and entirely arbitrary. Were we born with a slightly different chemical makeup, we would have monumentally altered views on spatial relationships, color, tone; the list goes on.

So, I find that the act of using knowledge to break into new ways of thinking is a wizening experience, but the knowledge one gains in the subject matter isn’t inherently enlightening.

I think that’s where the seed and soil connect: the process of expanding outward into mental territory one hasn’t traveled before is precisely the process of becoming wise. One fuel for that expansion is subject matter knowledge. I would argue that the more dominant fuel is experience.

…Which brings us to the next point, which is that learned knowledge alone is not sufficient to produce wisdom, as you were saying with the young, misinformed genius. This is the realm of the mystic: to experience, as directly as one may, the fabric of the world surrounding him. To allow the universe to ply his mind, to let it bend his perception until he sees the back of his own head, and realizes that perception has nothing to do with the raw material of reality, and everything to do with the internal state of he that perceives it.

What a marvelous duality: an idea so radical as not to be believed or even comprehended by the everyman, yet so mundane and true that it is one of the most celebrated and ancient beliefs that we as humans still carry to this day, dating back to at least the dawn of history.

Unless a third person jumps in here to disagree, I can’t see this line of discourse going that much further, because I think that Carrozza and I are both of the mind that wisdom, by definition, is a measure of how mystically enlightened one is. Please, correct me if I’m wrong.

In either case, I’ll be posting the discussion about mysticism in its own post, so stay tuned for that.