
The Age of Intelligent Machines, Chapter 10: Visions
by Raymond Kurzweil

From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.


I never think of the future, it comes soon enough.
Albert Einstein
What is possible we can do now, what is impossible will take a little longer.
A modern-day proverb
Every electronic product being sold today is obsolete.
Fred Zieber, Senior Vice President, Dataquest
The problems of the world cannot possibly be solved by skeptics or cynics whose horizons are limited by the obvious realities. We need men who can dream of things that never were.
John F. Kennedy

Scenarios



[Photo by Lou Jones, www.fotojones.com]

Since the founding of the computer industry almost half a century ago, one of its most salient and consistent features has been change. Functionality per unit cost has been increasing exponentially for decades, a trend that shows no sign of abating. When I attended MIT in the late 1960s, thousands of students and professors shared a single computer, an IBM 7094 with 294,912 bytes of core storage (organized as 65,536 words of 36 bits each) and a speed of about 250,000 instructions per second. One needed considerable influence to obtain more than a few seconds of computer time per day. Today, one can buy a personal computer with ten times the speed and memory for a few thousand dollars. In Metamagical Themas, Doug Hofstadter cites an actual job that took ten people with electromechanical calculators ten months to perform in the early 1940s, was redone on an IBM 704 in the early 1960s in 20 minutes, and now would take only a few seconds on a personal computer.1 David Waltz points out that memory today, after adjustment for inflation, costs only one one-hundred-millionth of what it did in 1950.2 If the automotive industry had made as much progress in the past two decades, a typical automobile today would cost about two dollars (the doubling of price performance every 22 months on average has resulted in an improvement factor of about 2,000 in 20 years; this is comparable to the difference between the 7094 of the late 1960s and a personal computer with an Intel 80386 chip today).3 If we go back to the first relay-based computers, a personal computer today is nearly a million times faster at a tiny fraction of the cost. Many other examples of such progress abound.4
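To make the arithmetic of that parenthetical explicit, using only the figures cited above: twenty years is 240 months, so a doubling every 22 months compounds to

```latex
2^{240/22} = 2^{10.9} \approx 1.9 \times 10^{3},
```

which is the improvement factor of roughly 2,000 cited above.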

In addition to the basic power of computation as measured by speed and memory capacity, new hardware and software technologies have greatly improved our ability to interact with computer devices. Through the 1940s and 1950s most communication with computers was through boards with plug-in cables; through the 1960s and 1970s, with reels of punched paper tape, stacks of punched paper cards, and printouts from line printers. Today high-resolution graphic displays, the mouse, graphics tablets, laser printers, optical cameras, scanners, voice recognition, and other technologies provide a myriad of ways for humans and machines to communicate.

Advances in software have harnessed these increasingly potent hardware resources to expand the productivity of most professions. Twenty years ago computers were used primarily by large corporations for transaction processing and by scientists (occasionally by computer scientists to explore the power of computing). Today most workers (professionals, office workers, factory workers, farmers) have many occasions to use methods that rely on the computer. I can recall that fifteen years ago even thinking about changing my company's business projections was regarded as a very serious endeavor; it would take the finance department days to grind through the numbers to examine a single scenario. Today with spreadsheet programs it is possible to consider a dozen alternative plans and determine their implications in less than an hour. Twenty years ago the only people interacting with computers were computer experts and a small cadre of students learning the somewhat arcane new field of computation. Today computers appear ubiquitously on office desks, in kitchens, in playrooms, in grocery stores, and in elementary schools.

Will these trends continue? Some observers have pointed out that an exponential trend cannot continue forever. If a species, for example, happens upon a hospitable new environmental niche, it may multiply and expand its population exponentially for a period of time, but eventually its own numbers exhaust the available food supply or other resources and the expansion halts or even reverses. On this basis, some feel that after four decades, exponential improvement in the power of computing cannot go on for much longer. Predicting the end of this trend is, in my view, highly premature. It is, of course, possible that we will eventually reach a time when the rate of improvement slows down, but it does not appear that we are anywhere close to reaching that point. There are more than enough new computing technologies being developed to assure a continuation of the doubling of price performance (the level of performance per unit cost) every 18 to 24 months for many years.

With just conventional materials and methodologies, progress in the next ten years, at least in terms of computing speeds and memory densities, seems relatively assured. Indeed, chips with 64 million bits of RAM (random access memory) and processor chips sporting speeds of 100 million instructions per second are on the drawing board now and likely to be available in the early 1990s. Parallel-processing architectures, some including the use of analog computation, are an additional means of expanding the power of computers. Beyond the conventional methods, a broad variety of experimental techniques could further accelerate these trends. Superconductivity, for example, while challenging to implement in a practical way, has the potential to break the thermal barrier that currently constrains chip geometries. As I mentioned earlier, the resulting combination of smaller component geometries with the effective utilization of the third dimension could provide a millionfold improvement in computer power. A variety of new materials, such as gallium arsenide, also have the potential to substantially improve the speed and density of electronic circuits.5 And optical circuits (computing with light rather than electricity) may multiply computing speeds by factors of many thousands.6

Will software keep up? It is often said that the pace of advances in software engineering and applications lags behind the startling advance of hardware technology. Advances in software are perhaps more evolutionary than revolutionary, but in many instances software techniques are already available that are just waiting for sufficiently powerful hardware to make them practical. For example, techniques for large-vocabulary speech recognition can be adapted to recognize continuous speech but require substantially greater computational speed. Vision is another application with the same requirement. There are many techniques and algorithms that are already understood but are waiting for more powerful computers to make them economically feasible.7 In the meantime, our understanding of AI methods, the sophistication of our knowledge bases, the power of our pattern-recognition technologies, and many other facets of AI software continue to grow.

Where is all this taking us? People in the computer field are accustomed to hearing about the rapidly improving speed and density of semiconductors. People in other professions inevitably hear reports of the same progress. The extremely small numbers used to measure computer timings and the enormous numbers used to measure memory capacity are numbing: time is measured in trillionths of a second, memory in billions of characters. What impact are these developments going to have? How will society change? How will our daily lives change? What problems will be solved or created?

One can take several approaches in attempting to answer these questions. Perhaps most instructive is to consider specific examples of devices and scenarios that have the potential to profoundly change the way we communicate, learn, live, and work. These concrete examples represent only a few of the ways that computer and other advanced technologies will shape our future world. These examples are based on trends that are already apparent. In my view, it is virtually certain (barring a world calamity) that all of these scenarios will take place. The only uncertainty is precisely when. I will attempt to project current trends into the future and estimate when we are likely to see each example.

Obviously, the further into the future we look, the more uncertain the timing of these projections becomes. The history of AI is replete with examples of problems that were either underestimated or (less often) overestimated. A great irony in early AI history is that many of the problems thought most difficult (proving original theorems, playing chess) turned out to be easy, while the "easy" problems (pattern-recognition tasks that even a child can perform) turned out to be the most challenging.8 Nonetheless, I believe that we now have a more sophisticated appreciation of the difficulty of many of these problems, and so I will attempt the thankless task of making specific projections. Of course, by the time you discover that my predictions were altogether wrong, it will be too late to obtain a refund for the purchase price of this book.

As I mentioned, these projections are based on trends that are already evident. What is most difficult to anticipate are breakthroughs. Any attempt to predict the future at the beginning of this century would almost certainly have overlooked the computer, as well as atomic energy, television, the laser, and, indeed, most of electronics. After going through the scenarios, I shall discuss some possible breakthroughs that may result from current research. In the following chapter I offer a discussion of the overall impact these developments are likely to have on our educational, social, political, medical, military, and economic institutions.

The translating telephone

Koji Kobayashi, chairman of the powerful Japanese corporation NEC, has a dream. Someday, according to Kobayashi, people will be able to call anyone in the world and talk, regardless of the language that they speak. The words will be translated from language to language in real time as we speak.9

Three technologies are required to achieve Kobayashi's dream: automatic speech recognition (ASR), language translation (LT), and speech synthesis (SS). All three exist today, but not nearly in sufficiently advanced form. Let us consider each of these requirements.

ASR would have to be at the "holy grail" level, that is, combining a large (relatively unrestricted) vocabulary, accepting continuous speech input, and providing speaker independence (no training of the system on each voice). Conceivably, the last requirement could be eased in early versions of this system. Users of this capability may be willing to spend fifteen minutes or so training the system on their voice. Such enrollment would be required only once. Combining the first two elements (large vocabulary and continuous speech) will take us to the early to mid 1990s. Adding speaker independence will take several more years.

LT requires only the ability to translate text, not speech, since ASR technology would already have converted the speech input into written language.

[Figure: The translating telephone.]

The LT capability would not require literary-quality translations, but it would have to perform unassisted. LT systems today require human assistance. Completely automatic LT of sufficient quality will probably become available around the same time that the requisite ASR is available. It should be noted that every pair of languages requires different software (going from French to English is a different problem from going from English to French). While many aspects of translation will be similar from one set of languages to another, LT technology will vary in quality and availability according to the languages involved.

SS is the easiest of the three technologies Kobayashi requires. In fact, it is available today. While not entirely natural, synthetic speech generated by the better synthesizers is quite comprehensible without training. Naturalness is improving, and SS systems should be entirely adequate by the time the necessary ASR and LT systems are available.

Thus, we can expect translating telephones with reasonable levels of performance for at least the more popular languages early in the first decade of the next century. With continuing improvements in performance and reductions in cost, such services could become widespread by the end of that decade. The impact will be another major step in achieving the "global village" envisioned by Marshall McLuhan (1911-1980) over two decades ago.10 Overcoming the language barrier will result in a more tightly integrated world economy and society. We shall be able to talk more easily to more people, but our ability to misunderstand each other will remain undisturbed.

The intelligent assistant

You are considering purchasing an expensive item of capital equipment and wish to analyze the different means of financing available. Issues to consider are your company's current balance sheet, other anticipated cash flow requirements, the state of various financial markets, and future financial projections. You ask your intelligent assistant to write a report that proposes the most reasonable methods to finance the purchase and analyzes the impact of each. The computer engages you in sufficient spoken dialog to clarify the request and then proceeds to conduct its study. In the course of its research, it accesses the balance sheet and financial projections from the data bases of your company. It contacts the Dow Jones data base by telephone to obtain information on current financial markets. It makes several calls to other computers to obtain the most recent financing charges for different financial instruments. In one case, it speaks to a human to clarify certain details on a lease-repurchase plan. It speaks with your company's vice president of marketing to obtain her level of confidence in achieving the sales projections for the following two years. It then organizes and presents the information in a written report complete with color charts. The report is presented to you the following day, since it took that long to reach the two humans involved. (Had it been able to conduct the research through communication only with other computers, it would have required only a few minutes.)

Other services provided by your computerized assistant include keeping careful track of your schedule and planning your travel from one appointment to another. The system plans your work for you, doing as much of it as it is capable of itself and understanding what portion you need to do yourself.

When we shall see the above system depends on how intelligent an assistant we would like to have. Crude forerunners exist today. Large-vocabulary ASR has been integrated with natural-language understanding and data-base-management programs to provide systems that can respond to spoken commands such as "Compare the sales of our western region to our three largest competitors." Such systems are, of course, highly limited in their problem-solving ability, but efforts to integrate ASR with data-base access have already begun.

The most challenging aspect of the vision is problem solving: having sufficient commonsense knowledge and reasoning ability to understand what information is required to solve a particular problem. Required are expert systems in many areas of endeavor that are less narrowly focused than the expert systems of today. One of the first intelligent assistants is likely to be one that helps get information from data bases through telecommunications.11 It has become clear to a number of software developers that a need exists to substantially improve the ease of accessing information from such data-base systems as Compuserve, Delphi, The Source, Dialog, Dow Jones, Lexis, Nexis, and others. Such data-base systems are greatly expanding the volume and diversity of information available, but most people do not know where to find the information they need. The first generation of office-assistant programs is now being developed; these know how to obtain a broad variety of information without requiring precisely stated requests. I expect that within several years such systems will be generally available, and some of them will accept ASR for input.

Thus, in the early to mid 1990s we shall see at least part of the above vision in use: flexible access to information from increasingly varied information services around the world, accessed by systems that understand human speech as well as the syntax and (at least to some extent) the semantics of natural language. They will support their own data bases and be able to access organization-specific knowledge. You will be able to obtain information in a flexible manner without having to know which data-base service has what information or how to use any particular information utility. As the 1990s progress, these systems will be integrated with problem-solving expert systems in many areas of endeavor. The level of intelligence implied in the above scenario describing a capital-equipment purchase will probably be seen during the first decade of the next century.

The world chess championship

As noted earlier, the best machine chess players are now competing successfully at the national senior-master level, regularly defeating all but about 700 players.12 All chess machines use some variant of the recursive algorithm called minimax, a strategy whose computational requirements are multiplied by some constant for each additional move ahead that is analyzed. Without a totally new approach, we thus need to make exponential progress in computational power to make linear gains in game-playing performance (though we are indeed making exponential progress in hardware). The analysis I gave before estimated that the requisite computer power to achieve world-championship chess playing should become available between 9 and 54 years from now. This estimate was based on the continuing gains anticipated in the speeds of individual microprocessors. If we factor in the increasing popularity of parallel-processing architectures, the result will be much closer to the short end of this range.

Some of the other scenarios in this section require significant advances in both hardware power and software sophistication. In my view, the ability of a machine to play championship chess is primarily a function of the former. Some of the possible breakthroughs in electronic hardware discussed below will be directly applicable to the chess issue. For example, if we are successful in harnessing the third dimension in chip fabrication (that is, building integrated circuits with hundreds or thousands of layers of active circuitry rather than just one), we will see a major improvement in parallel processing: hundreds or thousands of processors on a single chip. Taking into consideration only anticipated progress in conventional circuit-fabrication methodologies and continued development of parallel-processing architectures, I feel that a computer world chess champion is a reasonable expectation by the end of the century.
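To make the exponential arithmetic of minimax concrete, here is a minimal depth-limited sketch. The `game` object, with its `legal_moves`, `apply`, and `evaluate` methods, is a hypothetical stand-in for a real engine's move generator and evaluation function, not any particular chess program's interface.

```python
# Minimal depth-limited minimax. Searching one additional ply multiplies
# the work by the branching factor, which is why linear gains in
# look-ahead demand exponential gains in computational power.

def minimax(game, position, depth, maximizing):
    """Return the best evaluation reachable from `position`,
    looking `depth` plies ahead."""
    moves = game.legal_moves(position)
    if depth == 0 or not moves:
        return game.evaluate(position)  # static score at the leaf
    results = (minimax(game, game.apply(position, m), depth - 1, not maximizing)
               for m in moves)
    return max(results) if maximizing else min(results)

# Chess averages roughly 35 legal moves per position, so the node
# count grows as 35**depth:
for depth in range(1, 7):
    print(depth, 35 ** depth)
```

Each added ply costs about 35 times more computation, so a doubling of hardware speed every two years buys roughly one extra ply per decade; parallel hardware shortens that considerably.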

What will be the impact of such a development? For many, such as myself, it will simply be the passing of a long anticipated milestone. Yes, chess is an intelligent game (that is, it requires intelligence to play well), but it represents a type of intelligence that is particularly well suited to the strengths of early machine intelligence, what I earlier called level-2 intelligence (see "The Recursive Formula and Three Levels of Intelligence"). While level-3 intelligence will certainly benefit from the increasing power of computer hardware, it will also require substantial improvements in the ability of computers to manipulate abstract concepts.

Defenders of human chess playing often say that though computers may eventually defeat all human players, computers are not able to use the more abstract and intuitive methods that humans use.13 For example, people can eliminate from consideration certain pieces that obviously have no bearing on the current strategic situation and thus do not need to consider sequences of moves involving those pieces. Humans are also able to draw upon a wealth of experience of previous similar situations. However, neither of these abilities is inconsistent with the recursive algorithm. The ability to eliminate from consideration branches of the expanding tree of move-countermove possibilities not worth pursuing is an important part, called pruning, of any minimax program. Drawing upon a data base of previous board positions is also a common strategy in the more advanced chess programs (particularly in the early game). It is estimated that human chess masters have memorized between 20,000 and 50,000 board positions.14 Impressive as this is, it is clearly again an area where machines have a distinct edge. There is little problem in a computer mastering millions of board positions (each of which can be analyzed in great depth in advance). Moreover, it is feasible for computers to modify such previously stored board positions so as to use them even if they do not precisely match a current position.
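The pruning described above is straightforward to illustrate. Alpha-beta search returns the same value as plain minimax while skipping branches that cannot affect the final choice; this is a minimal sketch, again assuming the hypothetical `game` interface from the previous example.

```python
# Alpha-beta pruning: identical result to plain minimax, but branches
# that cannot influence the outcome are never explored. `alpha` is the
# best value the maximizer can already guarantee, `beta` the best for
# the minimizer; when they cross, the remaining moves are irrelevant.

def alphabeta(game, pos, depth, alpha, beta, maximizing):
    moves = game.legal_moves(pos)
    if depth == 0 or not moves:
        return game.evaluate(pos)
    if maximizing:
        value = float("-inf")
        for m in moves:
            value = max(value, alphabeta(game, game.apply(pos, m),
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # the minimizer will never allow this line: prune
        return value
    value = float("inf")
    for m in moves:
        value = min(value, alphabeta(game, game.apply(pos, m),
                                     depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break  # the maximizer would never choose this line: prune
    return value
```

With good move ordering, alpha-beta examines roughly the square root of the nodes plain minimax would, nearly doubling the depth reachable for the same computational budget.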

It may very well be that human players deploy methods of abstraction other than recalling previous board positions, pruning, and move expansion. There is little evidence, however, that for the game of chess such heuristic strategies are inherently superior to a simple recursive strategy combined with massive computational power.15 Chess, in my view, is a good example of a type of intelligent problem solving well suited to the strengths of the first half century of machine intelligence. For other types of problem solving (level-3 problems), the situation is different.

Not everyone will cheerfully accept the advent of a computer chess champion. Human chess champions have been widely regarded as cultural heroes, especially in the Soviet Union; we regard the world chess championship as a high intellectual achievement. If someone could compute spreadsheets in his head as quickly as (or faster than) a computer, we would undoubtedly regard him as an amazing prodigy, but not as a great intellect (in fact, he would actually be an idiot savant). A computer chess champion is likely to cause a watershed change in how many observers view machine intelligence (though perhaps for the wrong reasons). More constructively, it may also foster a keener appreciation of the unique and different strengths (at least for the near future) of machine and human intelligence.

The intelligent telephone-answering machine

The intelligent telephone assistant answers your phone, converses with the calling party to determine the nature and importance of the call (according to your instructions), interrupts you if necessary, and finds you in an emergency. The latter may become relatively easy once cellular-phone technology is fully developed. If cellular phones become compact and inexpensive enough, most people would be able to carry them, perhaps in their wristwatches. This raises some interesting issues of privacy. Many people like the ability to be away from their phones; we do not necessarily want to be accessible to phone calls at all times. On the other hand, it could be considered irresponsible to be completely unavailable for contact in the event of an emergency. But addressing this issue will be more a matter of evolving social custom than artificial intelligence.

The other aspects of the above scenario require the same machine skills as the intelligent assistant: two-way voice communication, natural-language understanding, and automated problem solving. In some ways this application may be more challenging than the office assistant. Many people would be willing to spend time learning how to use a cybernetic assistant of their own if it really helped them to accomplish their work. We might not mind if it failed to handle every interaction gracefully, so long as it provided an overall productivity gain. On the other hand, we might set a higher standard for a machine intended to interact with our friends and associates.

The cybernetic chauffeur

When will computers drive our cars? Without major changes in the methods of traffic control, the advent of the self-driving car is not likely for a long time. Unlike character recognition, factory vision, and similar tasks that involve limited visual environments, driving on existing highways requires the full range of human vision and pattern recognition skills. Furthermore, because the consequences of failure are so serious, we would demand a high level of performance from such a system before relying on it. (On the other hand, with 50,000 traffic deaths each year on American highways, the human standard of performance is far from perfect.)

Yet there is a simpler form of the cybernetic chauffeur that is easier to build and could still accomplish a sharp reduction in driving fatalities as well as substantially reduce the tedium of driving. If specially designed sensors and communication devices were placed in major thoroughfares, cars could communicate with the road as well as with other cars. They could then be placed on automatic pilot. Highways would essentially become electronic railways that our cars could join and leave. The communication system required would be similar to the complex one already in place for cellular phones. Algorithms built into each car and into the roads would maintain safe distances between cars and would handle a variety of situations.16 Although no specific system has been developed, the underlying technology exists today to deal with the steady-state situation of cars driving on a highway and maintaining proper speed and distance.

But the situation gets more complicated when several other contingencies are considered. First, we need to be able to get our car onto a computer-controlled road. Presumably, we would drive to an access road, where our computer driver would lock onto the road's communication system, which would take over (in conjunction with the intelligence of our own car computer). Getting off the road is perhaps trickier. The system has to consider the possibility that the driver has fallen asleep or is otherwise not ready to resume manual control. The best protocol might be for the system to bring the car to a halt at an exit position, at which point the human driver would be expected to take over. Machine intelligence would also have to deal with the possibility of hardware failure, not just of the computer itself but also of the engine, tires, and other parts of the vehicle. Furthermore, the designers of such systems will also have to consider the possibility of people or animals straying onto the road.
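As a rough illustration of the steady-state case described above (cars maintaining proper speed and distance), here is a sketch of a constant-time-headway spacing rule of the kind such algorithms might use. The policy and every parameter value are illustrative assumptions, not drawn from any actual system.

```python
# A hypothetical car-following rule for an automated highway: keep a
# gap that grows with speed, and correct both the gap error and the
# closing speed. All constants are illustrative assumptions.

STANDSTILL_GAP_M = 5.0   # minimum gap when stopped (meters)
TIME_HEADWAY_S = 1.5     # desired following time (seconds)
GAP_GAIN = 0.4           # acceleration per meter of gap error
SPEED_GAIN = 0.8         # acceleration per (m/s) of closing speed

def following_accel(gap_m, own_speed, lead_speed):
    """Return a commanded acceleration (m/s^2) that keeps this car a
    safe, speed-dependent distance behind the car ahead."""
    desired_gap = STANDSTILL_GAP_M + TIME_HEADWAY_S * own_speed
    gap_error = gap_m - desired_gap        # positive: we are too far back
    closing = lead_speed - own_speed       # positive: the gap is opening
    accel = GAP_GAIN * gap_error + SPEED_GAIN * closing
    return max(-3.0, min(1.5, accel))      # comfort and safety limits

# Example: 30 m behind a lead car, both traveling at 28 m/s; the rule
# gently brakes to open the gap toward the desired 47 m.
print(following_accel(gap_m=30.0, own_speed=28.0, lead_speed=28.0))
```

The harder contingencies listed above (merging, exiting, hardware failures, strays on the road) are precisely the parts such a steady-state rule does not address.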

Even with all of these complications, from a technical standpoint, intelligent roads represent a substantially easier problem than creating an automatic driver that can cope with traffic situations as they currently exist. One major nontechnical barrier to creating an intelligent road system, however, is that it requires a large measure of cooperation between car manufacturers and the government agencies that manage our roads (and drivers). It is not an innovation that can be introduced to a small number of pioneering users; it needs to be implemented all at once in at least some roads and in all cars intending to access such roads. Presumably, cars not equipped with such automatic guidance equipment would not be allowed on intelligent roads. Again, cellular phone technology is similar: it was not feasible until both the portable phone equipment and the overall computerized communication systems were in place. Still, I would expect such a system to be introduced gradually. At first it would be featured on one or a few major highways on an experimental basis. If successful, it would spread from there.

The technology to accomplish this should be available in the first decade of the next century. But because political decision making is involved, it is difficult to predict when it will receive priority sufficient to warrant implementation. Though cellular-phone technology also involved the coordination of a complex system, it has grown rapidly because it created an entrepreneurial opportunity.

The more advanced scenario of the completely driverless car will take us well into the first half of the next century. Another approach might be to forget roads altogether and replace them with computer-controlled flying vehicles that can ascend and descend vertically. There is, after all, much more space in the three-dimensional open air than there is on our one-dimensional roads. There are already plans in place to install a satellite-based collision-avoidance system that will dramatically reduce airplane collisions by the end of the century. The flying vehicles envisioned here would be the size of today's cars and would be even easier to use.17

Invisible credit cards and keys

During the 1990s we shall see highly reliable person-identification systems that use pattern-recognition techniques applied to fingerprint scanning and voice patterns. Such systems will typically combine the recognition of a personal attribute (a fingerprint or voice pattern) with a password that the user types in. The password prevents a misrecognition from causing a problem. The recognition system prevents unauthorized access by a thief who acquires someone's password.
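The combination described above is essentially what we would now call two-factor verification: both the pattern match and the password must pass. A minimal sketch, with an assumed 0-to-1 biometric similarity score, an assumed threshold, and an illustrative password-hashing scheme:

```python
# Hypothetical two-factor check: a biometric match score (from a
# fingerprint or voice-pattern recognizer) plus a typed password.
# The 0.90 threshold and the PBKDF2 parameters are assumptions.

import hashlib
import hmac

MATCH_THRESHOLD = 0.90  # assumed minimum similarity for a biometric match

def verify_user(match_score, typed_password, stored_hash, salt):
    """Grant access only if BOTH factors pass: the recognizer's score
    clears the threshold AND the password hash matches."""
    biometric_ok = match_score >= MATCH_THRESHOLD
    candidate = hashlib.pbkdf2_hmac("sha256", typed_password.encode(),
                                    salt, 100_000)
    password_ok = hmac.compare_digest(candidate, stored_hash)
    return biometric_ok and password_ok
```

The two factors fail in complementary ways, just as the text notes: the password catches a misrecognition, and the biometric check stops a thief who has learned the password.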

Today we typically carry a number of keys and cards to provide us with access to, and use of, our homes, cars, offices, and a broad variety of financial services. All of these keys and cards could be eliminated by the widespread adoption of reliable person-identification technologies. The acceptance of this technology is likely to be hastened by the loss of confidence in hand-written signatures caused by the explosion of electronic publishing.

Unfortunately, this type of technology is also capable of helping Big Brother track and control individual transactions and movements.

Instant ASICs

One of the remarkable recent innovations in hardware technology is the advent of the application-specific integrated circuit (ASIC), in which an entire complex electronic system is placed on a single inexpensive chip. The ASIC has made possible products that are complex, diverse, customized, and highly miniaturized. As Allen Newell points out in the article following this chapter, one might regard it as an almost magic technology: once an ASIC is developed, it provides enormous computational power at very low cost, takes up almost no space, and uses almost no electricity. The major barrier to greater deployment of this technology is the very long and expensive engineering cycle required to design such chips. The promise of instant ASICs is the ability to design an integrated circuit as easily as one writes a high-level computer program and, once designed, to have the actual chip available immediately.

The development of instant-ASIC technology is a major goal of the U.S. Department of Defense. Aside from its obvious military applications, it will also greatly accelerate the availability of innovative consumer products. Just as the difficulty of programming the early computers was quickly recognized as a major bottleneck, so is the difficult design process behind today's advanced chips. Indeed, the latter is receiving intense attention from all of the major players in the semiconductor industry. It is expected that in the early 1990s designers will be able to write chip programs (whose output is a piece of silicon) as easily and as quickly as computer programs.18 The availability of instant-ASIC technology will eliminate for most purposes what little difference remains today between hardware and software engineering. It will accelerate the trend toward knowledge as the primary component of value in our products and services.

Artificial people and the media of the future

Rather than describe the vision I have in mind, I shall approach the idea of artificial people by starting with what is feasible today. Consider the artificial creatures that we interact with in computerized games and educational programs. An early and prime example is Pac Man, an artificial creature capable of responding and interacting with us in certain limited ways. One might not consider Pac Man to be much of a creature, let alone an artificial person. It is a kind of cartoon caricature of a fish with a limited repertoire of movements. Similarly, our range of emotional responses to it is very narrow, but it serves as a useful starting point.

Now consider what is feasible today, about a decade after the first introduction of Pac Man and other computer games. Reasonably lifelike video images of human faces can now be completely synthesized and animated. Experiments at advanced laboratories such as the MIT Media Laboratory have created completely synthetic yet realistic computer-generated images of human faces that can move and express a wide range of responses and emotions.

Let us imagine the next step: computer games and interactive educational programs that use synthetically generated human images. Rather than simply replaying a previously stored animated sequence, such programs would start with knowledge structures representing the concepts to be expressed and then translate each concept into spoken language and articulated facial and bodily movements. We would thus see and hear images of people (not prestored but created in real time) that are reasonably realistic. The motions and speech sounds would be computed as needed from an intent to express certain ideas and emotions. These artificial people would be responding to our actions within the context of the program.

Let us take several more steps. Add speech recognition and natural language understanding. Add another several generations of improved image resolution and computing power for greatly enhanced visual realism. Add a more sophisticated problem-solving capability and more intelligence to provide greater subtlety of personality. Our artificial person is becoming more like a real person and less like Pac Man.

Applications would include very realistic games, movies that could include the viewer as a participant, and educational programs that would engage the student to learn from direct experience. Benjamin Franklin could take a child on a guided tour of colonial Philadelphia. Rather than a canned visual tour, this artificial Ben Franklin could answer questions, engage the child in dialog, customize the tour on the basis of the child's own expressed interests, and otherwise provide an engaging experience. One could debate Abraham Lincoln or take Marilyn Monroe on a date. As with any creative medium, the possibilities are limited only by our imagination. As another example, the intelligent assistant could include a persona complete with appearance, accent, and personality. As time went on, such artificial persons would continue to grow in sophistication, realism, communicative and interactive ability, and, of course, intelligence. Ultimately, they would develop a sense of humor.

It should be noted that personality is not an attribute that can be stuck on an intelligent machine. A personality is almost certainly a necessary byproduct of any behavior complex enough to be considered intelligent. People already speak of the personalities of the software packages they use. Shaping the personality of intelligent machines will be as important as shaping their intelligence. After all, who wants an obnoxious machine?

Such artificial persons could eventually use three-dimensional projected holographic technology (a method for creating three-dimensional images that do not require the use of special glasses). Currently, most holograms are static three-dimensional pictures, although some use multiple images to provide a sense of movement. The MIT Media Lab has succeeded in creating the world's first three-dimensional holographic image generated entirely by computer.19 The ability to project a hologram entirely from computer data is an important step in imaging technology. If a computer can project one hologram, then it can be made to project any number. Ultimately, with sufficient computer power, the images could be generated fast enough to appear realistically to move. The movements would not be prestored but rather computed in real time to respond to each situation. Thus, our artificial people can ultimately be lifelike, life-size, three-dimensional images with sufficiently high resolution and subtlety of movement to be indistinguishable from real people. These future computer displays will also be able to project entire environments along with the people.

There is concern today regarding the power of television to shape our views and to engage our emotions and attention. Yet television is essentially noninteractive, of low resolution, and flat. A medium that provided nearly perfect resolution and three-dimensional images and interacted with us in an intelligent and natural way would be far more powerful in its emotional impact, possibly too powerful for many (real) people. Harnessing and regulating these media of the future will undoubtedly be an area of much debate and controversy.

Adoption of the advanced media technologies described here will begin in the late 1990s and mature over the first half of the next century. Applications include entertainment, education, conducting business transactions, even companionship.

Another approach

An entirely different approach to the concept of artificial people lies in the area of robotics. Robots of the first generation, just like the first generation of computer-generated creature images (essentially pictures of robots), were neither intelligent nor realistic. We were as unlikely to mistake an early factory robot for a natural creature, let alone a person, as we were to mistake Pac Man for an image of a real animal. Here again, successive generations of technology have provided greater intelligence, subtlety, and naturalness. The primary drive for robotic technology lies in practical applications in the manufacturing and service industries. Admittedly, for most of these applications, resemblance to humans or to any other natural creature is of little relevance. Yet there will be applications for natural robots (sometimes called androids) as teachers, entertainers, and companions. Primitive robotic pets have already created a niche in the toy industry.

Creating a reasonably natural robotic imitation of a person is even more challenging than creating a convincing media image of a person. Any autonomous robot, humanoid or otherwise, has to be able to ambulate in a natural environment; this requires general-purpose vision and a high degree of fine motor coordination. Autonomous robots for exploring hostile environments, such as nuclear reactors and the surfaces of other planets, exist today. Routine use of autonomous robots in more conventional settings is likely to begin by the end of the century. Robots that are reasonably convincing artificial people will not appear until well into the next century.

Marvin Minsky has often said that a good source of insights into the realities of tomorrow's computer science can be found in today's science fiction. Isaac Asimov, in his novel The Robots of Dawn, describes a society two centuries from now in which people live alongside a ubiquitous generation of robotic servants, companions, guards, and teachers. Two of the protagonists are a beautiful female scientist and her lover, a "male" humaniform robot.

Passing the Turing test

Scientists from the University of Clear Valley reported today that a computer program they had created was successful in passing the famous Turing test. Computer scientists around the world are celebrating the achievement of this long-awaited milestone. Reached from his retirement home, Marvin Minsky, regarded as one of the fathers of artificial intelligence (AI), praised the accomplishment and said that the age of intelligent machines had now been reached. Hubert Dreyfus, a persistent critic of the AI field, hailed the result, admitting that he had finally been proven wrong.

The advent of computers passing the Turing test will almost certainly not produce the above sort of coverage. We will more likely read the following:

Scientists from the University of Clear Valley reported today that a computer program they had created was successful in passing the famous Turing test. Computer scientists reached at press time expressed considerable skepticism about the accomplishment. Reached from his retirement home, Marvin Minsky, regarded as one of the fathers of artificial intelligence (AI), criticized the experiment, citing a number of deficiencies in method, including the selection of a human "judge" unfamiliar with the state of the art in AI. He also said that not enough time had been allowed for the judge to interview the computer foil and the human. Hubert Dreyfus, a persistent critic of the AI field, dismissed the report as the usual hype we have come to expect from the AI world and challenged the researchers to use him as the human judge.

Alan Turing was very precisely imprecise in stating the rules of his widely accepted test for machine intelligence.20 There is, of course, no reason why a test for artificial intelligence should be any less ambiguous than our definition of artificial intelligence. It is clear that the advent of the passing of the Turing test will not come on a single day. We can distinguish the following milestones:

Level 1 Computers arguably pass narrow versions of the Turing test of believability. A variety of computer programs are each successful in emulating human ability in some area: diagnosing illnesses, composing music, drawing original pictures, making financial judgements, playing chess, and so on.

Level 2 It is well established that computers can achieve human or higher levels of performance in a wide variety of intelligent tasks, and they are relied upon to diagnose illnesses, make financial judgements, etc.

Level 3 A single computer system arguably passes the full Turing test, although there is considerable controversy regarding test methodology.

Level 4 It is well established that computers are capable of passing the Turing test. No reasonable person familiar with the field questions the ability of computers to do this. Computers can engage in a relatively unrestricted range of intelligent discourse (and engage in many other intelligent activities) at human or greater levels of performance.

We are at level 1 today. A wide range of expert systems can meet or exceed human performance within narrowly defined (yet still intelligent) areas of expertise. The judgements of expert systems are beginning to be relied upon in a variety of technical and financial fields, although acceptance in the medical area is much slower. Also, computer success in a variety of artistic endeavors is beginning to be at least arguably comparable to human achievement.

Level 2 is within sight and should be attained around the end of the century. As expert systems grow in sophistication and achieve more natural human interfaces, we will begin to rely on their expertise as much as (if not more than) human society relies on their idiot savant forebears today.

We will probably begin to see reports of level 3, and newspaper articles similar to the second one given above, during the first decade of the next century, with continued controversy for at least several decades thereafter. The first reports will almost certainly involve significant limitations to Turing's originally proposed challenge. We are close to having the underlying technology (if not the actual program) today if we use sufficiently naive judges and provide them with relatively little time to make their determinations.

Level 4 is what Turing had in mind when he predicted success by the year 2000. Achieving this level is far more difficult than any of the other three. It requires advanced natural-language understanding, vast knowledge bases of commonsense information, and decision-making algorithms capable of great subtlety and abstraction. Turing's prediction, made in 1950, will almost certainly not be fulfilled by the year 2000. I place the achievement of level 4 sometime between 2020 and 2070. If this turns out to be the case, then Turing will have been off by a factor of between 1.4 and 2.4 (70 to 120 years versus his prediction of 50 years), which actually is not bad for such a long-term prediction. Of course, there is no assurance that my prediction will be any more accurate than Turing's.

As mentioned earlier (see "The Debate Goes On"), Hubert Dreyfus has indicated that he will concede that he was wrong (and has been wrong for his entire professional career) if he can be fooled as the human judge in a Turing test. Will this happen? If we assume that Dreyfus is in good health and further that continuing advances in bioengineering technology enable him (and the rest of us) to live longer than today's average life expectancy, then it is altogether possible. Personally, I would be willing to bet on it.

Conclusion

The above scenarios provide only a small sampling of the ways in which intelligent machines of the future can be expected to touch our lives. The computers of today, dumb as they are, have already infiltrated virtually every area of work and play. The bureaucracies of our society could hardly function without their extensive computer networks. If one adds just the sharply focused intelligence of the next phase of the age of intelligent machines to the already prodigious memory capacity and speed of today's computers, the combination will be a formidable one indeed. Our cars, watches, beds, chairs, walls, floors, desks, books, clothes, phones, homes, appliances, and virtually everything else we come into contact with will be intelligent, monitoring and servicing our needs and desires. The age of intelligent machines will not start on a particular day; it is a phenomenon that has started already, with the breadth and depth of machine intelligence growing each year. Turing predicted a time when people would talk naturally about machines thinking without expecting anyone to contradict them or be surprised. We are not there today, but the day will arrive so gradually that no one (except a few authors) will notice it when it does.

Breakthroughs

Ralph Gomory, IBM's chief scientist, predicted in 1987 that within a decade the central processing units of supercomputers would have to be concentrated within a space of three cubic inches. The supercomputer core of the 1990s will be suitable for a laptop.
George Gilder

Hardware

As I noted earlier, a continuation of the same rate of exponential improvement in computer hardware appears likely for the foreseeable future, even if we consider only improvements using conventional approaches. Progress continues to be made in manufacturing integrated circuits with ever smaller geometries, which thus allows ever larger numbers of components to be placed on a single chip. As this book was being written, designers were passing the mark of several million transistors per chip. The advent of parallel-processing architectures is also contributing to expanding the amount of computation we can devote to machine intelligence.21

At the same time, researchers are experimenting with a number of exotic materials and techniques that, if perfected, could provide a quantum leap in computing power. Rather than "just" the orderly doubling of computer power every 18 to 24 months, the possibility exists for a relatively sudden increase by a factor of thousands or even a million.

Probably the most promising is the potential for superconductors to virtually eliminate the thermal constraints that now govern chip geometries. As mentioned earlier, the heat generated by resistance in circuits limits how small we can build each transistor. But superconductors offer no electrical resistance, so superconducting circuits generate no heat. A superconducting integrated circuit could thus use transistors that are smaller by at least a factor of ten in each of the two dimensions, for a hundredfold improvement in the number of components. Since these components would also operate ten times faster, the overall improvement is approximately a factor of one thousand. Furthermore, in the absence of heat we could build a chip with a thousand layers of circuitry rather than just one. This use of the third spatial dimension provides another improvement factor of a thousand, for an overall potential improvement of one million to one.
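Written out, the compounding of the text's own factors of ten is

```latex
\underbrace{10 \times 10}_{\text{smaller in two dimensions}}
\times \underbrace{10}_{\text{faster switching}} = 10^{3},
\qquad
10^{3} \times \underbrace{10^{3}}_{\text{layers of circuitry}} = 10^{6}.
```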

The only problem so far is that no one yet knows a practical way to build tiny transistors using superconducting materials. Earlier attempts to harness superconducting integrated circuits, using a technology called the Josephson junction, never reached commercial viability.22 One of the problems with the earlier attempts was the need to cool the circuits so near to absolute zero that very expensive liquid helium had to be used. But recent advances in superconductivity have provided materials that become superconducting at much higher temperatures, so liquid nitrogen, at less than one-tenth the cost of liquid helium, can be used. There is even the possibility of developing materials that provide superconductivity at room temperature, the holy grail of the field. The latest materials exhibit considerable brittleness, however, which presents a major difficulty in creating the tiny wires needed.23 While the challenges are formidable, the potential payoff of superconducting integrated circuits is so enormous that major efforts are underway on three continents (North America, Europe, and Asia).24

Another approach being explored is the development of circuits that use light instead of electricity. There are two potential advantages of light. First, light is faster: the speed of light is regarded as a fundamental speed limit that the universe is obliged to follow, and electricity in wires moves only about one-third as fast. More important, laser light can contain millions of adjacent signals, each carrying independent information. Thus, the potential exists to provide truly massive parallel processing, with each light signal providing a separate computational path. Optical techniques are already revolutionizing communications (with optical fibers) and memory (optical disks); optical computing promises to revolutionize computation itself.25

There are disadvantages, however, in that the types of computations that can be performed optically are somewhat limited. A massively parallel optical computing machine would be well suited for certain pattern-recognition tasks, particularly those using low-level feature detection, but it could not be easily adapted to conventional types of computing. For this, superconductivity appears more promising: a massively parallel superconducting computer could provide an enormous number of parallel processors, each able to use conventional software techniques.

A third, even more exotic approach is to build circuits using bioengineering techniques, essentially to grow circuits rather than to make them. Such circuits would be constructed of the same proteins that form the basis of life on earth. These organic circuits would be three-dimensional, just as circuits in the brains of natural living creatures are three-dimensional. They would be cooled in the same way that natural brains are cooled: by bloodlike liquids circulating through a system of capillaries. We would not build these circuits; they would grow in much the same way that the brain of a natural organism grows: their growth and reproduction would be controlled by genes made up of ordinary DNA. Although still at an early stage, significant progress has been made in developing such organic circuits. Wires and simple circuits have already been grown.26

Still another approach, called molecular computing, has substantial overlap with all three of the above techniques.27 Molecular computing attempts to employ light and the finest possible grains of matter (for computing, probably molecules) to provide techniques for information manipulation and storage. The goal of molecular computing is massively parallel machines with three-dimensional circuitry grown like crystals.

As I mentioned, the objective of all of these investigations is to provide a great improvement in the capacity and speed of computation. It should be emphasized that the value of doing so is not primarily to speed up existing applications of computers. Speeding up a spreadsheet computation from a few seconds to a few microseconds is of little value. The real excitement is in being able to solve important problems that have heretofore proved intractable. I discussed earlier the massive amount of computation required to emulate human-level vision. Because the amount of computation performed by the human visual system is not yet available in a machine, all attempts at machine vision to date have been forced to make substantial compromises. The enormous improvements in speed and capacity discussed above would provide the computational power needed to emulate human-level functionality.

Software

If three-dimensional superconducting chips or one of the other breakthroughs just described were perfected and computer hardware were suddenly to expand in capability by a factor of thousands or even a million, the impact on many problem areas would not be immediate. While substantial expansion of computing power is one of the requirements to master problems such as vision, there are other requirements as well. We also need continued refinement of our algorithms, greater ability to represent and manipulate abstractions, more knowledge represented in computer form, and an enhanced ability to capture and manipulate knowledge.

Two key points are worth making about potential improvements in computer technology. First, breakthroughs and software appear to me to be incompatible notions, particularly with regard to machine intelligence. AI applications are so complex, and the requisite human-interface issues so demanding, that none of the underlying software technologies is likely to spring suddenly upon us. Inevitably, the various AI problems (image understanding, speech understanding, language understanding, problem solving) are solved incrementally and gradually, with each new generation of a technology providing greater subtlety and depth. Each new step forward reveals new limitations and issues that are addressed in the next generation of the technology or product. A point often comes when the level of performance in a certain area exceeds a critical threshold, and this enables the technology to move out of a research environment into practical commercial applications. In a sense, the overall AI movement started to pass that threshold during the 1980s, when it moved from a barely visible industry with a few million dollars per year of revenue in 1980 to a billion dollars in 1986, with four billion expected in 1990, according to DM Data. It often appears that a breakthrough must have been made, because a technology became sufficiently powerful to be commercially viable. But invariably the underlying technology was created over an extended period of time. AI is no exception.

The second point is that a hardware breakthrough of the type described above is not necessary for significant and sustained progress in each area of AI. There is no question that more powerful computers are needed for many problems, such as continuous speech recognition and human-level vision, but the exponential growth in computer power that will occur anyway is sufficient.
