The Age of Intelligent Machines: Postscript
Inventor and futurist Ray Kurzweil surveys the complex and daunting initiative to create truly intelligent machines. Neural-net decision-making rivals that of experts, and pattern recognition mimics human capabilities. While true human intelligence dwarfs today's artificial intelligence, there is no fundamental barrier to the AI field's ultimately achieving this objective, he says. From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.
Let us review a few of the fundamental issues underlying the age of intelligent machines. As I noted in the last section of the previous chapter, the strengths of today's machine intelligence are quite different from those of human intelligence and in many ways complement it. Once we have defined the transformations and methods underlying intelligent processes, a computer can carry them out tirelessly and at great speed. It can call upon a huge and extremely reliable memory and keep track of billions of facts and their relationships. Human intelligence, on the other hand, though weak at mastering facts, still excels at turning information into knowledge. The ability to recognize, understand, and manipulate the subtle networks of abstraction inherent in knowledge continues to set human intelligence apart.
Yet computers are clearly advancing in these skills. Within narrow domains--diagnosing certain classes of disease, making financial judgments, and performing many other specialized tasks--computers already rival human experts. During the 1980s expert systems went from research experiments to commercially viable tools that are relied upon daily to perform important jobs. Computers have also begun in recent years to master the pattern-recognition tasks inherent in vision and hearing.
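The book contains no code, but a minimal sketch may help convey what an expert system of this era looked like under the hood. The rules and symptoms below are invented purely for illustration; they are not drawn from any real diagnostic system:

    # A minimal forward-chaining rule engine, sketched for illustration.
    # The rules and facts here are hypothetical, not from any real system.

    RULES = [
        # (antecedents, conclusion): if every antecedent is an established
        # fact, the conclusion becomes a new fact.
        ({"fever", "stiff neck"}, "suspect meningitis"),
        ({"fever", "cough"}, "suspect flu"),
        ({"suspect flu", "shortness of breath"}, "suspect pneumonia"),
    ]

    def forward_chain(observations):
        """Fire rules repeatedly until no new conclusions appear."""
        facts = set(observations)
        changed = True
        while changed:
            changed = False
            for antecedents, conclusion in RULES:
                if antecedents <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "shortness of breath"}))
    # prints a set including 'suspect flu' and 'suspect pneumonia'

Commercial systems of the 1980s elaborated this basic if-then idea with hundreds or thousands of rules, often weighted by measures of confidence in each rule.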
Though not yet up to human standards, pattern-recognition technology is sufficiently advanced to perform a wide variety of practical tasks. It is difficult to estimate when these capabilities will reach human levels, but there does not appear to be any fundamental barrier to achieving such levels. Undoubtedly, computers will achieve such levels gradually; no bell will ring when it happens.
What is clear is that by the time computers achieve human levels of performance in those areas of traditional human strength, they will also have greatly enhanced their areas of traditional superiority. (Not all experts agree with this. Doug Hofstadter, for example, speculates in Gödel, Escher, Bach that a future "actually intelligent" machine may not be able to do fast, accurate arithmetic, because it will get distracted and confused by the concepts triggered by the numbers, a dubious proposition in my view.1) Once a computer can read and understand what it is reading, there is no reason why it should not read everything ever written (encyclopedias, reference works, books, journals and magazines, databases, etc.) and thus master all knowledge.
As Norbert Wiener has pointed out, no human being has had a complete mastery of human knowledge for the past couple of centuries (and it is doubtful, in my view, that anyone has ever had such mastery). Even mere human levels of intelligence combined with a thorough mastery of all knowledge would give computers unique intellectual skills. Combine these attributes with computers' traditional strengths of speed, tireless operation, prodigious and unfailing memory, and extremely rapid communication, and the result will be formidable. We are, of course, not yet on the threshold of this vision. This early phase of the age of intelligent machines is providing us with obedient servants that are not yet intelligent enough to question our demands of them.
Minsky points out that we have trouble imagining machines achieving the capabilities we have because of a deficiency in our concept of a machine.2 The human race first encountered machines (of its own creation) as devices with a few dozen, and in some cases a few hundred, active parts. Today, our computerized machines have millions of active components, yet our concept of a machine as a relatively inflexible device with only a handful of behavioral options has not changed. By the end of this century chips with over a billion components are anticipated, and we will enter an era of machines with many billions of components.
Clearly, the subtlety and intelligence of the behavior of machines at those different levels of complexity are quite different. Emulating human levels of performance will require trillions, perhaps thousands of trillions, of components. At current rates of progress, we shall achieve such levels of complexity early in the next century. Human-level intelligence will not automatically follow, but reasonable extrapolations of the rate of progress of machine intelligence in a broad variety of skills--pattern recognition, fine motor coordination, decision making, and knowledge acquisition--lead to the conclusion that there is no fundamental barrier to the AI field's ultimately achieving this objective.
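To see why "early in the next century" is a plausible reading of the trend, consider a rough doubling argument. The figures below (a component count that doubles roughly every two years, starting from the billion-component chips anticipated at the end of the century) are my own illustrative assumptions, not claims from the text. Going from a billion components to a trillion requires

$$ \log_2\!\left(\frac{10^{12}}{10^{9}}\right) = \log_2 1000 \approx 10 \text{ doublings} \approx 20 \text{ years}, $$

which lands in the early decades of the next century; reaching thousands of trillions ($10^{15}$) takes roughly ten doublings more.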
This brings us back to an old question: Can a machine think? The question sounds innocuous enough, but our approach to it rests on the meanings we ascribe to the terms "machine" and "think." Consider first the question of whether or not a human being is a machine. A human being is certainly not like the early human-made machines, with only a handful of parts. Yet are we fundamentally different from a machine, one with, say, a few trillion parts? After all, our bodies and brains are presumably subject to the same natural laws as our machines.

As I stated earlier, this is not an easy question, and several thousand years of philosophical debate have failed to answer it. If we assume that the answer is no (humans are not fundamentally different from machines), then we have answered the original question: we presumably think, and if indeed we are machines, then we must conclude that machines can think. If, on the other hand, we assume that we are in some way fundamentally different from a machine, then our answer depends on our definition of the word "think."3
First, let us assume a behavioral definition, that is, a definition of thinking based on outwardly observable behavior. Under this definition, a machine should be considered to think if it appears to engage in intelligent behavior. This, incidentally, appears to be the definition used by the children I interviewed (see the section "Naive Experts" in chapter 2). Now the answer is simply a matter of the level of performance we expect. If we accept levels of performance in specific areas that would be considered intelligent if performed by human beings, then we have achieved intelligent behavior in our machines already, and thus we can conclude (as did the children I talked with) that today's computers are thinking.
If, on the other hand, we expect an overall level of cognitive ability comparable to the full range of human intelligence, then today's computers cannot be regarded as thinking. If one accepts my conclusion above that computers will eventually achieve human levels of intellectual ability, then we can conclude that it is inherently possible for a machine to think, but that machines on earth have not yet started to do so.
If one accepts instead an intuitive definition of thinking, that is, an entity is considered to be thinking if it "seems" to be thinking, then responses will vary widely with the person assessing the "seeming." The children I spoke to felt that computers seemed to think, but many adults would disagree. For myself, I would say that computers do not yet seem to be thinking most of the time, although occasionally a clever leap of insight by a computer program I am interacting with will make it seem, just for a moment, that thinking is taking place.
Now let us consider the most difficult approach. If we define thinking to involve conscious intentionality, then we may not be in a position to answer the question at all. I know that I am conscious, so I know that I think (hence Descartes's famous dictum "I think, therefore I am"). I assume that other people think (lest I go mad), but this assumption appears to be built in (what philosophers would call a priori knowledge), rather than based on my observations of the behavior of other people. I can imagine machines that can understand and respond to people and situations with the same apparent intelligence as real people (see some of the scenarios above).
The behavior of such machines would be indistinguishable from that of people; they would pass any behavioral test of intelligence, including the Turing test. Are these machines conscious? Do they have genuine intentionality or free will? Or are they just following their programs? Is there a distinction to be made between conscious free will and just following a program? Is this a distinction with a difference? Here we arrive once again at the crux of a philosophical issue that has been debated for several thousand years.
Some observers, such as Minsky and Dennett, maintain that consciousness is indeed an observable and measurable facet of behavior, that we can imagine a test that could in theory determine whether or not an entity is conscious. Personally, I prefer a more subjective concept of consciousness, the idea that consciousness is a reality appreciated only by its possessor. Or perhaps I should say that consciousness is the possessor of the intelligence, rather than the other way around. If this is confusing, then you are beginning to appreciate why philosophy has always been so difficult.
If we assume a concept of thinking based on consciousness and hold that consciousness is detectable in some way, then one has only to carry out the appropriate experiment and the answer will be at hand. (If someone does this, let me know.) If, on the other hand, one accepts a subjective view of consciousness, then only the machine itself could know whether it is conscious and thus thinking (assuming it can truly know anything). We could, of course, ask the machine if it is conscious, but we would not be protected from the possibility of the machine having been programmed to lie. (The philosopher Michael Scriven once proposed building an intelligent machine that could not lie and then simply asking it if it was conscious.)
One remaining approach to this question comes to us from quantum mechanics. In perhaps its most puzzling implication, quantum mechanics actually ascribes a physical reality to consciousness. Quantum theory states that a particle cannot have both a precise location and a precise velocity. If we measure its velocity precisely, then its location becomes inherently imprecise.
In other words, its location becomes a probability cloud of possible locations. The reverse is also true: measuring its precise location renders its velocity imprecise. It is important to understand exactly what quantum mechanics is trying to say. It is not saying that there is an underlying reality of an exact location and velocity and that we are simply unable to measure them both precisely. It is literally saying that if a conscious being measures the velocity of a particle, it actually renders the reality of the location of that particle imprecise.
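The relation paraphrased here is Heisenberg's uncertainty principle. The formula below is the standard textbook statement (it does not appear in the original text), with momentum p, mass times velocity, standing in for velocity:

$$ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} $$

Here $\Delta x$ and $\Delta p$ are the statistical spreads in a particle's position and momentum, and $\hbar$ is the reduced Planck constant; squeezing either spread toward zero forces the other to grow without bound.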
Quantum mechanics is addressing not simply limitations in observation but the impact of conscious observation on the underlying reality of what is observed. Thus, conscious observation actually changes a property of a particle. Observation of the same particle by a machine that was not conscious would not have the same effect. If this seems strange to you, you are in good company. Einstein found it absurd and rejected it.4 Quantum mechanics is consistent with a philosophical tradition that ascribes fundamental reality to knowledge, as opposed to knowledge simply being a reflection of some other fundamental reality.5
Quantum mechanics is more than just a philosophical viewpoint, however: its predictions have been consistently confirmed. Almost any electronic device of the past 20 years demonstrates its principles, since the transistor is an embodiment of the paradoxical predictions of quantum mechanics. Quantum mechanics is the only theory in physics to ascribe a specific role to consciousness beyond simply saying that consciousness is what may happen to matter that evolves to high levels of intelligence according to physical laws.
If one accepts its notions fully, then quantum mechanics may imply a way to physically detect consciousness. I would counsel caution, however, to anyone who would build a consciousness detector based on these principles. It might be upsetting to point a quantum-mechanical consciousness detector at ourselves and discover that we are not really conscious after all.
As a final note on quantum mechanics, let me provide a good illustration of the central role it ascribes to consciousness. According to quantum mechanics, observing the velocity of a particle affects not only the preciseness of its own location but also the preciseness of the location of certain types of "sister" particles that may have emerged from the same particle interaction that produced the particle whose velocity we just observed.
For example, if an interaction produces a pair of particles that emerge in opposite directions and if we subsequently observe the velocity of one of the particles, we will instantly affect the preciseness of the position of both that particle and its sister, which may be millions of miles away. This would appear to contradict a fundamental tenet of relativity: that effects cannot be transmitted faster than the speed of light. This paradox is currently under study.6
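In modern terminology these sister particles are entangled, and the effect described here is the Einstein-Podolsky-Rosen (EPR) paradox. A standard textbook illustration, using spin rather than position and velocity (and not drawn from the original text), is the two-particle singlet state

$$ |\psi\rangle = \frac{1}{\sqrt{2}}\left( |{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \right) $$

Measuring particle A and finding spin up instantly fixes particle B's result as spin down, no matter how far apart the pair has traveled, even though neither outcome was determined before the measurement.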
When computers were first invented in the mid-1940s, they were generally regarded as curiosities, though possibly of value to mathematics and a few engineering disciplines. Their value to science, business, and other fields soon became apparent, and exploration of their practical applications began in earnest.
Today, almost a half-century later, computers are ubiquitous and highly integrated into virtually all of society's institutions. If a law were passed banning all computers (and in the doubtful event that such legislation were adhered to), society would surely collapse. The orderly functioning of both government and business would break down in chaos. We are already highly dependent on these "amplifiers of human thought," as Ed Feigenbaum calls them.
As the intelligence of our machines improves and broadens, computer intelligence will become increasingly integrated into our decision-making, our economy, our work, our learning, our ability to communicate, and our lifestyles. Intelligent machines will be a driving force in shaping our future world. But the driving force in the growth of machine intelligence will continue to be human intelligence, at least for the next half century.

A Final Note

When I was a boy, I had a penchant for collecting magic tricks and was known to give magic shows for friends and family. I took pleasure in the delight of my audience in observing apparently impossible phenomena. It became apparent to me that organizing ordinary methods in just the right sequence could give rise to striking results that went beyond the methods I started with. I also realized that revealing these methods would cause the magic to disappear and leave only the ordinary methods.
As I grew older, I discovered a more powerful form of magic: the computer. By organizing ordinary methods in just the right sequences (that is, with the right algorithms), I could once again cause delight, though the delight of this more grown-up magic was more profound. Computerized systems that help overcome the handicaps of the disabled or give greater expressiveness and productivity to all of us offer measures of delight more lasting than the magic tricks of childhood.
The sequences of 1s and 0s that capture the designs and algorithms of our computers embody our future knowledge and wealth. And unlike more ordinary magic, any revelation of the methods underlying our computer magic does not tarnish its enchantment.

Notes
1. See pp. 677-678 of Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979) for a fuller account of his concept of potential computer weaknesses.
2. Marvin Minsky, The Society of Mind, pp. 186, 288.
3. The general reader will find Paul M. Churchland's philosophical and scientific examination in Matter and Consciousness pertinent to the topic.
4. See Einstein's letters of August 9, 1939, and December 22, 1950, to E. Schrödinger, in K. Przibram, ed., Letters on Wave Mechanics, pp. 35-36 and 39-40.
5. Admittedly, some disavow the applicability of subatomic metaphors to any other aspect of life. See Paul G. Hewitt, Conceptual Physics, 2nd ed., pp. 486-487.
6. A dense and pertinent discussion of the sister-particle paradox may be found in Abner Shimony, "Events and Processes in the Quantum World," in R. Penrose and C. J. Isham, eds., Quantum Concepts in Space and Time, pp. 182-196.
Originally published in The Age of Intelligent Machines ©1990 Raymond Kurzweil