Ray Kurzweil Q&A with Darwin Magazine
Machine consciousness is the subject of this dialog with Darwin Magazine.
Originally published December 3, 2001 at darwinmag.com. Published on KurzweilAI.net December 3, 2001.
Darwin: Will robots ever become conscious?
Kurzweil: We have a basic assumption that other people are conscious, regardless of whether their subjective experiences are the same as our own. That's an assumption, and people think it's something only a philosopher would worry about. But it's actually a very practical question once we step outside this shared assumption that human beings are conscious, because outside it, the consensus breaks down. Take animals, for example. There is no consensus on whether animals are conscious. Some people feel that at least higher-order animals are conscious; others say, "No, you can't be conscious unless you have a mastery of human language; animals just operate by instinct, instinct is a machinelike automatic response, so these animals just operate like machines."
If you go on to nonbiological entities, such as robots, there are more vexing questions. On the one hand, robots are more dissimilar to us than animals are because they're nonbiological. On the other hand, I'd predict that robotic entities we may meet 30 years from now will be displaying more humanlike behaviors than animals do because they'll be very explicitly patterned or modeled on human behavior. By that time, we will have completely reverse engineered the human brain; we'll be able to understand how it works, and program robots accordingly.
I'm not saying all artificial intelligence (AI) will be humanlike. Some will be created without human personalities because they'll have specific jobs to do where they won't require human characteristics. But some will have human personalities because we'll want to interact with them in a humanlike way, through human language. And in order to understand human language, you need to understand human nature and common sense--you have to be pretty human in order to understand human language.
So some of these machines will be humanlike, more humanlike than animals. They will be copies of humans, they will be making humanlike statements, and they'll be very convincing. Ultimately, they'll convince people that they are conscious.
Darwin: Won't people have a hard time accepting the notion that a robot is conscious?
Kurzweil: I get e-mails all the time that say, "But Ray, the computer, it's just a machine, and even if it's a complicated machine, and if it surprises you sometimes, there's really nobody home, it's not aware of its existence." And that seems like a pretty reasonable statement with regard to computers today, because computers today are still a million times simpler than the human brain, and they're constructed very, very differently.
But that's not an accurate description of machines 30 years from now. They will have the complexity of the human brain and the organization and design of the brain. They'll claim to be conscious and have feelings--to be happy, angry, sad, whatever--and they'll have the subtle cues that we associate with those claims.
True, today you can make a virtual character in a game that claims to be happy. But the character is not very convincing. So you might go along with the fantasy for a while, but you don't really believe that this virtual character is having an emotional experience; the complexity and depth of its experience is not on a human level. Thirty years from now, that won't be the case. Machines will have the subtlety, complexity, richness and depth of human behavior.
Darwin: How would we ever prove that a machine is--or isn't--conscious?
Kurzweil: It's not a scientifically resolvable question, in my view. You can't build any sort of consciousness-detection machine that doesn't have some philosophical assumptions built into it. But my idea that machines will convince us they are conscious is not an ultimate philosophical or scientific statement. It's more of a political prediction: these machines will be intelligent and persuasive enough that we'll believe them. If we don't, they'll get mad at us, and we don't want them to get mad at us. But that's not philosophical proof that they're conscious. Ultimately, consciousness is a first-person phenomenon. Beyond that, I'm just making assumptions. But that assumption is going to be tested as we create entities that are humanlike. It's also going to be tested by another phenomenon: the redefinition of human intelligence itself.
Darwin: How is human intelligence going to change?
Kurzweil: Human and machine intelligence are going to become intertwined. There are quite a few human beings who already have computers in their brains, and it doesn't upset us too much because those devices do very narrow things today, like the Parkinson's implant that reduces tremors, and cochlear implants for deaf people. I envision a scenario where we'll be able to send billions of nanobots, which are tiny robots the size of blood cells, into the human brain, where they can communicate wirelessly with our biological neurons. Rather than an implant located in one position, these nanobots could be highly distributed, communicating with the brain in millions of places, and therefore become part of the brain. So when you deal with a human in 2035 or 2040, you'll be dealing with an entity that has a very complicated biological brain intimately integrated with nonbiological thinking processes that will be equally complex and ultimately more complex. It's not going to be computers on the left side of the room and humans on the right; it's going to be a very intimate integration. By 2040 or 2050, even biological people will be mostly nonbiological. That clearly raises the spiritual issue of what a person is.
Darwin: So you think nonbiological intelligence will dominate human intelligence?
Kurzweil: Yes--the crossover point is somewhere in the 2030s, certainly by 2040 or 2050. One of the reasons that nonbiological intelligence can ultimately be superior to human intelligence is that it will combine the powers of human intelligence with certain nonbiological advantages. Computers today can already do some things better than human intelligence can. A thousand-dollar PC can remember billions of things fairly accurately; we're hard-pressed to remember a handful of phone numbers. Computers are inherently much faster. The electrochemical information processing in the brain is literally 100 times slower than electronic circuits today. Most important, machines can share their knowledge. If you want some capability on your computer, you can just load the evolved learning of one computer onto it. Not so with humans. If I read War and Peace or learn French, I can't download that to you. Humans have an advantage today, in that our pattern recognition is much more profound than what machines can do. But machines will be able to encompass all the skills of humans, and combine that with these other advantages I mentioned: thinking faster and having very large, accurate memories. And they'll keep growing in their capability, doubling every year.
Darwin: Are these computer-induced changes you predict a threat to human civilization as we know it?
Kurzweil: In my mind, this is very much a part of human civilization. This is very much a human endeavor. This is not an invasion of machines coming from outer space and taking us over. These machines are emerging from within our civilization. We're already very close to them; they're intimately integrated into our civilization. If all the computers in the world stopped today, our civilization would grind to a halt.
That wasn't true until recently. Thirty years ago, if all the computers in the world had stopped, hardly anyone would have noticed except a few research scientists and maybe some IT professionals. Certainly, 40 years ago, nobody was using them. We're now very dependent on our computers. We no longer have our hand on the switch, so to speak, because our civilization is so dependent on the machines, and bit by bit the machines are getting more and more intelligent.
Mind·X Discussion About This Article:
Re: thank you!
This is the state of the art at the Neurosciences Institute in San Diego. I think their approach is likely to produce thinking machines.
Synthetic neural modeling
The unique multilevel aspect of brain function requires a new approach to neural modeling. Pioneered by researchers at the Institute, synthetic neural modeling represents such a multilevel theoretical approach to the problem of understanding the neuronal bases of adaptive behavior. It uses simultaneous large-scale computer simulations of the nervous system, the phenotype, and the environment of a particular organism to study events and their interactions at these three levels. The simulations are based on physiological and anatomical data. They incorporate detailed models for synaptic modification and for the organization of cells into neuronal groups, layers, and brain regions to generate behavior in the context of a particular environment and the unique history of an organism. Synthetic neural modeling takes into account possible evolutionary origins and modes of development of the nervous system, permitting a wide range of psychophysical and behavioral phenomena to be studied within a common framework.
A series of neurally based, multijointed automata has been developed at the Institute. One such automaton, Darwin III, exists in an environment of simple shapes moving on a background; its phenotype comprises a sessile "creature" with an eye and a multijointed arm provided with senses of touch and kinesthesia; its nervous system consists of some 50 interconnected networks containing over 50,000 cells and 620,000 synaptic junctions. By interacting with its environment, Darwin III develops sensorimotor coordination, permitting it to track moving objects with its eye, to reach out and touch objects with its arm, to categorize certain objects according to combinations of visual, tactile, and kinesthetic cues, and to respond to objects based on previous categorizations. These elementary behaviors provide a microcosm in which it is possible to analyze critical problems involving the acquisition and maturation of integrated sensory and motor behavior in animals. (See Publications 4, 17, 23, 28, 29, and 49.)
Copyright Neurosciences Research Foundation, Inc., 2001
www.nsi.edu
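To give a concrete, if crude, feel for how synaptic modification can produce the kind of reach-and-touch behavior described above, here is a toy reach-for-the-target loop. This is only an illustrative sketch: the network size, the learning rule, the value signal, and every name in it are my own assumptions for demonstration, not the Institute's actual Darwin III model.

```python
import random

random.seed(0)

# Toy sketch of value-based synaptic modification: a "creature" with a
# coarse sensor array learns which motor unit reaches a seen target.
N_SENSORS = 5   # which of 5 positions the target occupies
N_MOTORS = 5    # which of 5 positions the arm moves toward

# Sensor-to-motor synaptic weights, starting as small random values.
weights = [[random.uniform(0.0, 0.1) for _ in range(N_MOTORS)]
           for _ in range(N_SENSORS)]

def act(sensor):
    """Pick the motor with the strongest synaptic drive from the active sensor."""
    drives = weights[sensor]
    return max(range(N_MOTORS), key=lambda m: drives[m])

def train(steps=2000, lr=0.1):
    """Strengthen a sensor-motor synapse when the reach 'touches' the
    target (positive value), weaken it otherwise (negative value)."""
    for _ in range(steps):
        target = random.randrange(N_SENSORS)
        # Occasionally explore a random motor so all synapses get sampled.
        motor = random.randrange(N_MOTORS) if random.random() < 0.2 else act(target)
        value = 1.0 if motor == target else -0.2
        weights[target][motor] = max(0.0, weights[target][motor] + lr * value)

train()

# After training, the creature reaches toward where it "sees" the target.
accuracy = sum(act(s) == s for s in range(N_SENSORS)) / N_SENSORS
print(accuracy)
```

The point of the sketch is that coordinated behavior emerges from repeated interaction with the environment plus a simple value-driven update rule, with no motor mapping ever being programmed in explicitly.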
Re: Ray Kurzweil Q&A with Darwin Magazine
1. My wife and I have a dog, a mutt, half Siberian Husky and half stew. He is now about four years old. He used to have a lot of fun chasing the red dot from a laser pointer around on the floor in the house, or even outside in the grass at night. A few months ago Butch had a definite epiphany: all of a sudden, he stopped chasing the dot and looked at the pointer in my hand. He pounced on the dot a couple of times and returned to looking at the pointer. There was more than just dumb animal going on in Butch's head. His actions were not instinctive or mechanical. Even now, when we play with the pointer, Butch will chase the dot around for only a few seconds before looking at the pointer or me. As we continue to play, if he loses track of the dot or if I stop pushing the button, Butch will look at me with a definite expression and tilt of the head, as if to say, 'Hey! C'mon!' Sometimes he will be crouched down with his rear end up and his head turned so that he can see both the floor and me.
Anyone who has ever had a non-human friend will surely agree that these animals show actions and genuine expressions that definitely demonstrate something is 'going on in there.'
2. I could write a book on why I dislike the term 'artificial intelligence.' An expert system (don't ya just love that one?) is artificial intelligence. Are you (collectively) familiar with the computer program based on the old verbal game 'Is it a turtle?' That could be construed as artificial intelligence. Intelligence CANNOT be programmed! I don't want to go too deep into an explanation, as I am working on an intelligence allegory. Suffice it to say that intelligence forms as data input is digested and then analyzed on multiple levels. If an awareness of causality occurs, the spark of intelligence will ignite. A dog that sits after being told to do so is conscious by definition. A squirrel figuring out how to get seed from a bird feeder is self-conscious by definition. Intelligence and consciousness can be judged only as a matter of degree. One could argue that a plant is intelligent and conscious in that it 'knows' it needs sunlight and then consciously bends toward that light. Most humans are simply insecure and try to define themselves as the only life form that does 'this' or 'that.' The identifying aspects of humanity, I find, are not all characteristics of which to be proud.