Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0363.html

    Ray Kurzweil Q&A with Darwin Magazine
by Raymond Kurzweil

Machine consciousness is the subject of this dialog with Darwin Magazine.


Originally published December 3, 2001 at darwinmag.com. Published on KurzweilAI.net December 3, 2001.

Darwin: Will robots ever become conscious?

Kurzweil: We have a basic assumption that other people are conscious, regardless of whether their subjective experiences are the same as our own. That's an assumption, and people think it's something only a philosopher would worry about. But it's actually a really practical question once we get outside this shared assumption that human beings are conscious, because outside that assumption the consensus breaks down. Take animals, for example. There is no consensus on whether or not animals are conscious. Some people feel that at least higher-order animals are conscious; other people say, "No, you can't be conscious unless you have a mastery of human language; animals just operate by instinct, instinct is a machinelike automatic response, so these animals just operate like machines."

If you go on to nonbiological entities, such as robots, there are more vexing questions. On the one hand, robots are more dissimilar to us than animals are because they're nonbiological. On the other hand, I'd predict that robotic entities we may meet 30 years from now will be displaying more humanlike behaviors than animals do because they'll be very explicitly patterned or modeled on human behavior. By that time, we will have completely reverse engineered the human brain; we'll be able to understand how it works, and program robots accordingly.

I'm not saying all artificial intelligence (AI) will be humanlike. Some will be created without human personalities because they'll have specific jobs to do where they won't require human characteristics. But some will have human personalities because we'll want to interact with them in a humanlike way, through human language. And in order to understand human language, you need to understand human nature and common sense--you have to be pretty human in order to understand human language.

So some of these machines will be humanlike, more humanlike than animals. They will be copies of humans, they will be making humanlike statements, and they'll be very convincing. Ultimately, they'll convince people that they are conscious.

Darwin: Won't people have a hard time accepting the notion that a robot is conscious?

Kurzweil: I get e-mails all the time that say, "But Ray, the computer, it's just a machine, and even if it's a complicated machine, and if it surprises you sometimes, there's really nobody home, it's not aware of its existence." And that seems like a pretty reasonable statement with regard to computers today, because computers today are still a million times simpler than the human brain, and they're constructed very, very differently.

But that's not an accurate description of machines 30 years from now. They will have the complexity of the human brain and the organization and design of the brain. They'll claim to be conscious and have feelings--to be happy, angry, sad, whatever--and they'll have the subtle cues that we associate with those claims.

True, today you can make a virtual character in a game that claims to be happy. But the character is not very convincing. So you might go along with the fantasy for a while, but you don't really believe that this virtual character is having an emotional experience; the complexity and depth of its experience is not on a human level. Thirty years from now, that won't be the case. Machines will have the subtlety, complexity, richness and depth of human behavior.

Darwin: How would we ever prove that a machine is--or isn't--conscious?

Kurzweil: It's not a scientifically resolvable question, in my view. You can't build any sort of consciousness detection machine that doesn't have some philosophical assumptions built into it. But my idea that machines will convince us they are conscious is not an ultimate philosophical, scientific statement. It's more of a political prediction. These machines will be intelligent and persuasive enough that we'll believe them. If we don't, they'll get mad at us, and we don't want them to get mad at us. But that's not philosophical proof that they're conscious. Ultimately, consciousness is a first-person phenomenon. Beyond that, I'm just making assumptions. But that assumption is going to be tested as we create entities that are humanlike. It's also going to be tested by another phenomenon, which is the redefinition of human intelligence itself.

Darwin: How is human intelligence going to change?

Kurzweil: Human and machine intelligence are going to become intertwined. There are quite a few human beings who already have computers in their brains, and it doesn't upset us too much because they're doing very narrow things today, like the Parkinson's implant that reduces tremors, and cochlear implants for deaf people. I envision a scenario where we'll be able to send billions of nanobots, which are tiny robots the size of blood cells, into the human brain, where they can communicate wirelessly with our biological neurons. Rather than an implant that's located in one position, these nanobots could be highly distributed, communicating with the brain in millions of places, and therefore become part of the brain. So when you deal with a human in 2035 or 2040, you'll be dealing with an entity that has a very complicated biological brain, intimately integrated with nonbiological thinking processes that will be equally complex and ultimately more complex. It's not going to be computers on the left side of the room, humans on the right; it's going to be a very intimate integration. By 2040, 2050, even biological people will be mostly nonbiological. That clearly raises the spiritual issue of what is a person.

Darwin: So you think nonbiological intelligence will dominate human intelligence?

Kurzweil: Yes--the crossover point is somewhere in the 2030s, certainly by 2040 or 2050. One of the reasons that a nonbiological intelligence can ultimately be superior to human intelligence is that it will combine the powers of human intelligence with certain nonbiological advantages. Computers today can already do some things better than human intelligence can. A thousand-dollar PC can remember billions of things fairly accurately; we're hard pressed to remember a handful of phone numbers. Computers are inherently much faster. The electrochemical information processing in the brain is literally 100 times slower than electronic circuits today. Most important, machines can share their knowledge. If you want some capability on your computer, you can just load the evolved learning of another computer onto it. Not so with humans. If I read War and Peace or learn French, I can't download that to you. Humans have an advantage today, in that our pattern recognition is much more profound than what machines can do. But machines will be able to encompass all the skills of humans, and combine that with these other advantages I mentioned--being able to think faster, having very large, accurate memories. And they'll keep growing in their capability, doubling every year.

Darwin: Are these computer-induced changes you predict a threat to human civilization as we know it?

Kurzweil: In my mind, this is very much a part of human civilization. This is very much a human endeavor. This is not an invasion of machines coming from outer space and taking us over. These machines are emerging from within our civilization. We're already very close to them; they're very intimately integrated into our civilization. If all the computers in the world stopped today, our civilization would grind to a halt.

That wasn't true very recently. Thirty years ago, if all the computers in the world stopped, hardly anyone would notice except a few research scientists, maybe IT professionals. Certainly, 40 years ago, nobody was using them. We're now very dependent on our computers. We no longer have our hand on the switch, so to speak, because our civilization is so dependent on the machines, and bit by bit the machines are getting more and more intelligent.

Mind·X Discussion About This Article:

thank you!
posted on 12/07/2001 9:54 AM by kimmygirl3@hotmail.com


Very interesting. I got a lot of insight and perspective. It motivated me to look beyond here and delve further into technology. Thank you for the inspiration!
Kim*

Re: thank you!
posted on 12/07/2001 10:41 AM by grantc4@hotmail.com


Here is the state of the art at the Neurosciences Institute in San Diego. I think their approach is likely to produce thinking machines.

Synthetic neural modeling

The unique multilevel aspect of brain function requires a new approach to neural modeling. Pioneered by researchers at the Institute, synthetic neural modeling represents such a multilevel theoretical approach to the problem of understanding the neuronal bases of adaptive behavior. It uses simultaneous large-scale computer simulations of the nervous system, the phenotype, and the environment of a particular organism to study events and their interactions at these three levels. The simulations are based on physiological and anatomical data. They incorporate detailed models for synaptic modification and for the organization of cells into neuronal groups and layers and brain regions to generate behavior in the context of a particular environment and the unique history of an organism. Synthetic neural modeling takes into account possible evolutionary origins and modes of development of the nervous system, permitting a wide range of psychophysical and behavioral phenomena to be studied within a common framework.
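The excerpt above says these simulations "incorporate detailed models for synaptic modification" that shape behavior through interaction with an environment. As a toy illustration only (my own sketch, not the Institute's actual model), a Hebbian synaptic-modification step of the general kind such simulations build on can be written in a few lines: repeated exposure to a sensory pattern strengthens the synapses between co-active cells, so the network's response to that pattern grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neuronal group": n_in sensory cells project to n_out output cells.
n_in, n_out = 8, 4
weights = rng.normal(0.0, 0.1, size=(n_out, n_in))

def step(x, weights, lr=0.05):
    """One simulation step: compute output activity, then apply a
    Hebbian synaptic-modification rule (strengthen connections
    between co-active pre- and post-synaptic cells)."""
    y = np.tanh(weights @ x)                 # output activity
    weights = weights + lr * np.outer(y, x)  # Hebbian update: dw ~ post * pre
    weights = weights / (1.0 + lr)           # mild decay keeps weights bounded
    return y, weights

x = rng.random(n_in)            # a fixed sensory pattern
before = np.tanh(weights @ x)   # response before experience
for _ in range(50):             # repeated exposure to the same pattern
    _, weights = step(x, weights)
after = np.tanh(weights @ x)    # response after experience

print(f"response magnitude before: {np.abs(before).sum():.3f}, "
      f"after: {np.abs(after).sum():.3f}")
```

The real Darwin automata are vastly more elaborate (50 networks, 50,000 cells, value-dependent selection among neuronal groups); the point here is only the shape of the mechanism: behavior-dependent weight change rather than explicit programming.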

A series of neurally based automata has been developed at the Institute. One such automaton, Darwin III, exists in an environment of simple shapes moving on a background; its phenotype comprises a sessile "creature" with an eye and a multijointed arm provided with senses of touch and kinesthesia; its nervous system consists of some 50 interconnected networks containing over 50,000 cells and 620,000 synaptic junctions. By interaction with its environment, Darwin III develops sensorimotor coordination, permitting it to track moving objects with its eye, to reach out and touch objects with its arm, to categorize certain objects according to combinations of visual, tactile and kinesthetic cues, and to respond to objects based on previous categorizations. These elementary behaviors provide a microcosm in which it is possible to analyze critical problems involving the acquisition and maturation of integrated sensory and motor behavior in animals. (See Publications 4, 17, 23, 28, 29, and 49.)



Copyright Neurosciences Research Foundation, Inc., 2001

www.nsi.edu


Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/03/2002 7:52 AM by instargazer@hotmail.com


1. My wife and I have a dog, a mutt, half Siberian Husky and half stew. He is now about 4 years old. He used to have a lot of fun chasing the red dot from a laser pointer around on the floor in the house or even outside in the grass at night. A few months ago Butch had a definite epiphany. He, all of a sudden, stopped chasing the dot and looked at the pointer in my hand. He pounced on the dot a couple of times and returned to looking at the pointer. There was more than just dumb animal going on in Butch's head. His actions were not instinctive or mechanical. Even now, when we play with the pointer, Butch will chase the dot around for only a few seconds before looking at the pointer or me. As we continue to play, if he loses where the dot is or if I stop pushing the button switch, Butch will look at me with a definite expression and tilt of the head as if to say, 'Hey! C'mon!' Sometimes he will be crouched down with his rear end up and head turned such that he may see both the floor and me.
Anyone who has ever had a non-human friend will surely agree that there are actions and true expressions in these animals which definitely show something is 'going on in there.'

2. I could write a book on why I dislike the term 'artificial intelligence.' An expert system (don't ya just love that one?) is artificial intelligence. Are you (collectively) familiar with the computer program (old verbal game) 'Is it a turtle?' That could be construed as artificial intelligence. Intelligence CANNOT be programmed! I don't want to go too deep into an explanation, as I am working on an intelligence allegory. Suffice it to say intelligence forms as data input is digested and then analyzed on multiple levels. If an awareness of causality occurs, the spark of intelligence will ignite. A dog which sits after being told to do so is conscious by definition. A squirrel figuring out how to get seed from a bird feeder is self-conscious by definition. Only by matter of degree can intelligence and consciousness be judged. One could argue that a plant is intelligent and conscious in that it 'knows' it needs sunlight and then consciously bends toward that light. Most humans are simply insecure and try to define themselves as the only life form which does 'this' or 'that.' The identifying aspects of humanity, I find, are not all characteristics to be proud of.


Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/03/2002 10:10 AM by john.b.davey@btinternet.com


The old ham argument about structural complexity generating mental states just doesn't wash.

Even the simplest computers ARE 'complex' in the sense that they are in a myriad of physical states, consisting of atoms as they do. To the non-knowledgeable observer, in fact, there is no difference between the computer's engineering-level 'memory and chip' combination 'running a program' and the atoms it consists of doing the same.

The complexity argument assumes the absolute existence of structural 'awareness' (by the universe) and some sort of quasi-religious concept enabling the computer designer to convey this 'complexity' to the universe, which then responds by generating a mental-physical phenomenon in the form of consciousness.

This is less tenable than the squawkings of the average bible-basher - no matter how many different and interesting ways you manipulate 1s and 0s (or any other derived mathematical object mapped from a physical observable), you will never, ever generate anything other than other mathematical objects from it.

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/03/2002 2:22 PM by grantc4@hotmail.com


What about the DNA code, which is just a chemical version of 1s and 0s? It led to your mind and mine. The kind of complexity we're talking about is complex adaptive systems and the process of emergence. That's a special kind of complexity that grows out of order and information. Complexity by itself is just a stage of existence between order and chaos. But complex adaptive systems tend to evolve to fit the environment in which they exist. They are more than just complex.

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/04/2002 11:21 AM by john.b.davey@btinternet.com


There are no 0s and 1s IN the DNA - they are in your head, believe me.

DNA is a physical chemical and as such distinct from the data derived from it, which depends upon the latest available theories regarding DNA's structure and methods of operation.

0s and 1s are mathematical objects that could be mapped to certain physical attributes of the DNA, but the fact that the chemical operation of DNA has a mathematical analogue does not make the analogue the principal cause of DNA reproduction - that is due to the chemical/physical nature of DNA. Is a painting of a duck the same thing as a duck? This is completely ridiculous. A mathematical representation of a chemical process is not the same thing as a chemical process.

In fact DNA is not 'mathematical' anyway: it changes and produces unpredictable results, as opposed to any form of computation.

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/04/2002 4:31 PM by grantc4@hotmail.com


I didn't say there are 1s and 0s in DNA; I said the "equivalent" of 1s and 0s is encoded in DNA, just as "on" and "off" are used in a computer to stand for 1 and 0. There are no 1s and 0s in a computer, either. 1 and 0 are used to symbolize a relationship that is carried out in some other medium.

In DNA, there are two types of steps on the ladder of information a cell uses to make another cell: one is adenine, which is always paired with thymine; the other is cytosine, which is always paired with guanine. So you can look at AT as one unit of information and CG as another. If we wish, we can call AT a 1 and CG a 0. This is no more arbitrary than calling a switch that is turned on a 1 and a switch that is turned off a 0.

Of course, for the DNA you have to add a symbol, such as a + or -, to show the orientation of the TA or GC, because the computer that is reading and interpreting the code is sensitive to that aspect. You might compare it to typed input that is case sensitive and reads "a" as a different symbol than "A". But in both cases you are using what amounts to a binary tape being processed one step at a time to carry out a program.
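To make the analogy above concrete, here is a minimal sketch (my own illustration, not taken from the posts) that reads a strand one base at a time, like a tape, and emits both a rung-plus-orientation symbol and an equivalent two-bit code. The particular mapping is arbitrary, exactly as the post says - any consistent assignment would do.

```python
# Each base identifies both its rung (AT vs CG) and its orientation
# (which strand the base sits on), so one base = one of four symbols.
RUNG = {"A": "AT+", "T": "AT-", "C": "CG+", "G": "CG-"}

# The same four cases viewed as a two-bit code: first bit says which
# pair, second bit says which orientation. (An arbitrary convention.)
BITS = {"A": "00", "T": "01", "C": "10", "G": "11"}

def encode(strand: str) -> tuple[list[str], str]:
    """Read a single strand base by base, like a tape being processed
    one step at a time, and return (rung symbols, bit string)."""
    strand = strand.upper()
    rungs = [RUNG[b] for b in strand]
    bits = "".join(BITS[b] for b in strand)
    return rungs, bits

rungs, bits = encode("GATTACA")
print(rungs)  # ['CG-', 'AT+', 'AT-', 'AT-', 'AT+', 'CG+', 'AT+']
print(bits)   # 11000101001000
```

Two bits per position rather than one reflects the orientation symbol the post describes: the pair identity alone is one bit, and the +/- orientation supplies the second.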

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/07/2002 8:28 AM by john.b.davey@btinternet.com


There is no structure inherent in DNA of the type you talk of: your talk of 'equivalence' either attributes to a molecule a level of structural understanding (which I doubt) or, more likely, assumes that there exists a 'structural information entity' with an independent existence as valid as an atom's, for instance. This 'structure' (an object of the same type as a mathematical object, I assume) you attribute with phenomenological qualities that can lead to the rise of intelligence, consciousness and other mental phenomena.


My personal opinion is that your information model of DNA is inherent in your UNDERSTANDING of DNA. Sure, DNA is a phenomenal molecule that appears to be a unique life signature: it lends itself to 'information' modelling, I suppose. As a model it's as useful as any other explicative device, but I presume it has to get very ad hoc when accounting for mutation, inadequate nutrition and the like. In short, I think an 'information model' of cell reproduction adds nothing to the currently perfectly adequate and considerably more accurate straightforward chemical/biological descriptions.

What I would strongly contest, however, is the claim that the reproduction of 'structural entities' LEADS to phenomena - this is no different from saying that there is no difference between a duck and a painting of a duck. That is total nonsense.

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/04/2002 11:27 AM by john.b.davey@btinternet.com


"Complexity by itself is just a stage of existence between order and chaos. But complex adaptive systems tend ... exist. They are more than just complex."
"Complexity" appears to me to be used as a magician's cloak by a lot of AI enthusiasts - a vague notion at best.

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/04/2002 4:39 PM by tomaz@techemail.com


If complexity doesn't do it - what then? What? A special kind of organization? (It's the same thing.)

An outside agent? Vis vitalis?

What makes us intelligent?

- Thomas

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/07/2002 8:32 AM by john.b.davey@btinternet.com


I'm not claiming I know what makes human brains tick - the science just isn't good enough yet, which is why I find grand claims that the origins of mental phenomena lie in the sentience of mathematical objects ('complexity') to be scarcely more credible than any other half-baked religious notions, of which the AI version is epistemologically equivalent in my view.

Re: Ray Kurzweil Q&A with Darwin Magazine
posted on 01/07/2002 2:39 PM by tomaz@techemail.com


I don't know which way is to Nogales. Have no idea. But I do know those cartographers are wrong.

- Thomas