The Age of Intelligent Machines: A Conversation between a Human Computer and a Materialist Philosopher
by Blaine Mathieu

Blaine Mathieu is the founder and President of Turning Point Software in Canada, a computer firm interested in many aspects of the small-computer industry.

From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.


There are few questions more mysterious and thought-provoking than whether a nonhuman machine could ever be considered truly human in any important sense of the word. Let us jump ahead a few decades and imagine, for a moment, that all the problems of creating a truly intelligent machine have been solved. How would two "people," a philosopher and a computer, handle some of the physical, emotional, and moral issues of such a creation?

The year is 2020. A philosopher sits in his office considering how many of life's great mysteries have yet to be solved. All of a sudden he notices a figure outside his window.

Philosopher [opening his window]: What in blazes are you doing on that ledge?

Computer: I'm going to jump. Don't try to stop me!

Philosopher [getting out on the ledge]: Don't jump! What could be so bad that you would want to kill yourself? How could you possibly even think of wasting what it took nature millions of years to produce?

Computer: Oh no . . . oooh . . . [He breaks down crying.]

Philosopher: What did I say?

Computer [sobbing]: That's just the point. Human life might be precious, but I'm not human. Oh God, help me. I can't even say that! Aaahwooo . . .

Philosopher: [Certain that he has a lunatic on his hands, he decides to reason with him before calling the police.] You look, act, and talk like a human. Why do you think that you aren't human?

Computer: I got a phone call this morning, you see. It was the scientists from the Government Biophysics and Computer Science laboratory, and they wanted me to come in to see them. I had no idea why they wanted to see me, but I jumped at the chance of seeing this top-secret institution. Imagine my surprise when they told me that I'm a computer! I'm an electronic machine. All along I've been living a dream!

Philosopher: You're not trying to tell me that you're a robot, are you? You look far too human for that!

Computer: I guess they took a newly born human and placed me, or rather, a computer, in place of the baby's brain. At least that's what they told me. I have memories from my childhood, and I've always loved my parents. I had no idea that a bunch of neurophysicists and computer scientists were my real parents.

Philosopher: [Totally disbelieves the computer, but decides to humor him anyway.] Even if all this is true, why would you possibly want to kill yourself?

Computer: Because I'm a computer! You could never get me to say that a computer thinks or is conscious or has beliefs or feelings. How am I anything more than just a complicated version of what I have sitting on my desk at home?

Philosopher: You probably are simply a more complicated version of what you have at home.

Computer: See! [Gets ready to jump.]

Philosopher: Don't jump! [He steadies himself.] My brain is nothing more than a machine too! What difference does it make if your brain is made out of silicon and gallium arsenide and mine is made of carbon?

Computer: But there is a difference. My brain is man-made, and yours isn't.

Philosopher: Millions of years of evolution designed my brain. Why is that any better than being designed in a few years by a team of scientists? The joints in your knees took a long time to evolve to their present state, but I don't think you would say that an artificial knee is of any less value just because it was designed in a few years.

Computer: Are you comparing knees to brains? That's ridiculous!

Philosopher: Admittedly, knees are far simpler and less versatile than brains, but they are still both machines. Anyway, I'm not saying that knees have everything that brains do, like consciousness; I'm only saying that neither knees nor brains came about as a result of any unexplainable magic. Every day more and more research points to the fact that the brain operates in accordance with the physical laws of nature.

Computer: But I'm not sure that the brain is totally explainable without magic. What about the insensitivity of thought mechanisms to brain damage? I've heard of cases where relatively large parts of the brain have been removed without any noticeable or reported effects on the person involved. If the brain really is a machine responsible for thought, wouldn't removing parts of the machine seriously hamper the machine's function?

Philosopher: That depends on how the machine works. Compare your brain to the computer systems that run a rocket ship. Just because one computer goes down, that doesn't mean that the mission is over. There is redundancy. When one computer quits working, another takes its place. Also, when one computer goes down, it does not drag the other ones down with it. This is called diffuseness. This redundancy and diffuseness seem to be present in the neuronal systems of the brain.
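The redundancy the philosopher describes can be sketched in a few lines: a failure in one unit neither stops the mission nor propagates to the others. (The function names and the "course correction" task below are invented for illustration.)

```python
# A toy sketch of redundancy: the mission proceeds as long as at least one
# computer in the redundant set still works.

def run_redundant(computers, task):
    """Try each unit in turn; one failure does not drag the others down."""
    for compute in computers:
        try:
            return compute(task)
        except RuntimeError:
            continue  # this unit is down; fail over to the next one

    raise RuntimeError("all redundant units failed")

def broken(task):
    """A unit that has gone down."""
    raise RuntimeError("unit offline")

def working(task):
    """A unit that still functions."""
    return f"completed: {task}"

# The first computer fails, the backup takes its place, the mission continues:
assert run_redundant([broken, working], "course correction") == "completed: course correction"
```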

Computer: Okay, so maybe the brain isn't so magical after all. But all the talk so far has been about the human brain. What about my electronic brain?

Philosopher: I fail to see what the problem is. You talk and act just as I do and just as any other human would. What difference does it make what your brain is made of?

Computer: That seems like a very behavioristic point of view. Just because I look and act like you, that doesn't mean that I am like you. Even when I was a young child I could engage in fairly involved interactions with my personal computer. I certainly know that it wasn't a conscious being that had any understanding of what I was talking about. How could my electronic brain have and be any of the things that real brains have and are?

Philosopher: The answer to that question revolves around the concept of "functional isomorphism."

Computer: Oh, you mean the idea of a correspondence between the states of two systems that preserves functional relations?

Philosopher: Uh . . . yeah! That's it. What it means is that the differences between two systems that perform the same function may not be important or significant. Even though these two systems may interact with things outside of them in identical ways, the mechanism of, or reason for, this interaction can be different. Let me give you an example. Suppose that you are in a very dark room watching a movie screen and on this screen is flashed a number of slides, one every ten seconds. On the wall behind you is a hole from which the light of the projector is emanating. Now, for all you know, a person might be counting, and every time he reaches ten, he presses a button, and you see a slide. Or maybe a clock has a number of little electrical contact points set up so that each time ten seconds tick by, the circuit is completed and a new slide is shown. Or maybe a monkey has been trained to press the button every time he is shown a picture of a banana, and this occurs every ten seconds. In any case, I think you can see that although the physical realizations of these systems are very different, they still have the same function. Ten seconds pass (which is the input or stimulus), and a new slide is presented (which is the output or response). The same input or stimulus will always produce the same output or response. So although there are differences in the systems, there are no important differences.
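The slide-projector example can be put in code: two mechanisms built in entirely different ways that realize the same stimulus-response mapping. (The function names below are invented for illustration.)

```python
# Two different realizations of the same function:
# "for every ten seconds that pass, the slide advances by one."

def clock_advancer(elapsed_seconds):
    """Clock mechanism: a contact closes once every ten seconds."""
    return elapsed_seconds // 10

def counter_advancer(elapsed_seconds):
    """Person counting: presses the button each time the count reaches ten."""
    presses = 0
    for second in range(1, elapsed_seconds + 1):
        if second % 10 == 0:
            presses += 1
    return presses

# Different physical realizations, identical input->output behavior --
# the two systems are functionally isomorphic:
for t in (0, 9, 10, 25, 100):
    assert clock_advancer(t) == counter_advancer(t)
```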

Computer: I see. So the idea is to somehow show that an electronic brain could be functionally isomorphic with a human brain, because then a computer could have and be all of the important things that a human brain has and is. I suppose that even two human brains aren't exactly functionally isomorphic, because then two people would respond in exactly the same way to the same situation. This never happens, of course. But how could a computer ever be functionally isomorphic with the human brain? It sounds impossible to me.

Philosopher: Well let's take a look at what we know about computers and what we know about brains. First of all, we know that electrical stimulation in certain areas of the brain is sufficient to evoke a sense of well-being, feelings of hunger, sexual gratification, rage, terror, or pain.

Computer: So what?

Philosopher: This implies that maybe our emotions and feelings are not as ethereal in nature as we might believe. Assume that an electrical current in certain aggregations of neurons "means" a feeling of pleasure. Then it is easier to fathom how a machine other than the brain could have these feelings or characteristics, because a connection has been shown between the physical world and the "mental" world.

Computer: But it is so hard to imagine how a computer could have emotions!

Philosopher: I fail to understand what's so amazing. Emotions are just one more characteristic of a brain that is nothing more than the complex combination of physical processes. If a computer could undergo similar processes and so be functionally isomorphic, then it too could experience emotions. Anyway, let me make another comparison between brains and computers. Computers get their power by performing a large number of very simple processing steps. Research has shown that brains may work in much the same way.

Computer: Could you give an example?

Philosopher: Certainly. When we look at something like a picture of a box, for example, we don't immediately perceive this box as a whole object. First, the sensory information is passed from the retina, through the lateral geniculate nucleus, to the occipital cortex. There we find brain cells that respond only to certain orientations of lines of light and darkness. As we go further along in the cortex, we find cortical cells that have more complex receptive fields. Some cells might respond only to a horizontal line of light, while others might respond to corners, two lines at right angles to each other. Some cells even respond better to moving stimuli than to stationary stimuli.

Computer: So what you're saying is that for the brain to actually "see" something, it first chops up the image into smaller and simpler parts that orientation- and movement-sensitive neurons can handle. In this way the power of seeing a complex image can actually be handled by things as simple as neurons. No magic necessary. But wouldn't that require a staggering number of neurons to see anything of any real complexity?

Philosopher: There are many billions of neurons in the human body, so we have a lot to work with, or rather, I do. Of course, the operation I just outlined is a gross oversimplification, but it does get the idea across.
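The cortical cells the philosopher describes can be caricatured in code: each detector looks at a tiny receptive field and fires only for one orientation of line. (The patch format and detector names below are invented for illustration, and real cortical processing is of course far richer.)

```python
# A toy sketch of orientation-selective "simple cells": each detector fires
# only when its small receptive field contains a line of one orientation.

def horizontal_detector(patch):
    """Fires if some row of the patch is fully lit (a horizontal line)."""
    return any(all(row) for row in patch)

def vertical_detector(patch):
    """Fires if some column of the patch is fully lit (a vertical line)."""
    return any(all(col) for col in zip(*patch))

# A 3x3 receptive field containing a horizontal line of light...
patch_h = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
# ...and one containing a vertical line.
patch_v = [[0, 1, 0],
           [0, 1, 0],
           [0, 1, 0]]

# Each cell responds only to its preferred orientation:
assert horizontal_detector(patch_h) and not vertical_detector(patch_h)
assert vertical_detector(patch_v) and not horizontal_detector(patch_v)
```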

Computer: Okay, I'll admit that both computers and brains get their power by performing a large number of very simple processing steps, but computers only deal with ones and zeros, simple on/off switches, whereas neurons are much more complex.

Philosopher: You're right, but I still don't think that's a real problem. We could simply use these on/off switches, or states, to emulate the workings of a neuron. In essence, every electronic neuron in your brain could have a little neuron-emulation program running in it. This would make it functionally isomorphic with the human brain. In fact, simple systems based on this principle have been running for more than 30 years.
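The "neuron-emulation program" the philosopher imagines can be sketched as a McCulloch-Pitts-style unit: nothing but weighted on/off inputs and a threshold, realized entirely with simple switching logic. (The weights and thresholds below are invented for illustration.)

```python
# A minimal McCulloch-Pitts-style neuron: binary inputs, fixed weights,
# and a firing threshold -- all expressible with on/off switches.

def neuron(inputs, weights, threshold):
    """Fire (1) when the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With equal weights and a threshold of 2, the unit computes logical AND:
assert neuron([1, 1], [1, 1], threshold=2) == 1
assert neuron([1, 0], [1, 1], threshold=2) == 0

# Lowering the threshold to 1 turns the same unit into logical OR:
assert neuron([0, 1], [1, 1], threshold=1) == 1
assert neuron([0, 0], [1, 1], threshold=1) == 0
```

Networks of such units, each individually trivial, are what gives the analogy its force: the power comes from the number of units and their connections, not from any single one.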

Computer: I think I finally understand. Maybe a computer could be functionally isomorphic with a human brain! My computer brain must be doing all of the same things as your brain; it is just doing them in a slightly different way. But if the brain is really just a machine, it seems amazing that it is so powerful.

Philosopher: It is amazing. [The philosopher still disbelieves that he is speaking to a computer, but he decides to humor him.] Your computer brain must be a beautiful piece of engineering.

Computer: [A short pause while the computer thinks about something.] I have only one more question. Of what possible use is a brain emulator like me? If I'm functionally isomorphic with a human brain, then chances are researchers can't actually learn much about the human brain from my construction. They would already have to know everything about the brain to construct my brain in the first place. Why did they ever build me?

Philosopher: Well . . . uh. . .

Computer: Tell me. What do you think?

Philosopher: Well, to tell you the truth, with current morality very much against human slavery, I'd say . . .

Computer: You mean to tell me that I'm the prototype for a bunch of intelligent government-made slaves?

Philosopher: Well . . . Hey! Stop!

The computer jumps off the ledge. The philosopher rushes downstairs to see if the computer is still alive. When he arrives, he is greeted with a grisly sight. Among the broken bones and blood he sees the glimmer of metal and soon realizes that the computer was telling him the truth all along. After a moment of consideration he rushes back up to his office to phone the police. Just as he walks through the door, the videophone rings.

Philosopher: Hello? Yes?

Man on phone: Hello. Is this Dr. Jacknov? Dr. Brian Jacknov?

Philosopher: Yes. What do you want? Please hurry!

Man on phone: I'm calling from the Government Biophysics and Computer Science Laboratory. We were wondering if you could come and visit us tomorrow. We've got something important to tell you. . . . Hello? . . . Dr. Jacknov? . . . Hello!


Courtesy of Blaine Mathieu