The Age of Intelligent Machines, Chapter 11: The Impact On...
by Raymond Kurzweil

From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.


It's hard to predict, especially the future.
Niels Bohr, physicist

Employment and the Economy

If every instrument could accomplish its own work, obeying or anticipating the will of others.... if the shuttle could weave, and the pick touch the lyre, without a hand to guide them, chief workmen would not need servants, nor masters slaves.

Aristotle

If machines could be so improved and multiplied, then all of our corporeal necessities could be entirely gratified without the intervention of human labor, and there would be nothing to hinder all mankind from becoming philosophers and poets.
Timothy Walker, essayist, 1831
Machinery will perform all work; automata will direct all activities; and the only tasks of the human race will be to make love, study, and be happy.
The United States Review, 1853

When people consider the impact of computer intelligence, few areas generate as much controversy as its potential to influence patterns of employment. Other areas (education, medicine, even warfare) evoke less passion. In education, concern is sometimes expressed about computers replacing human instructors, but astute observers realize that computer-assisted instruction is intended to compete with books, not teachers.1 There is understandable hesitancy to rely on the diagnostic judgments of medical expert systems even when they have demonstrated superior skills in some areas, but few expect undue reliance on such systems without strenuous steps to verify reliability. There is certainly controversy surrounding military applications, but even here there is some recognition that the highly pinpointed targeting provided by "smart" weapons may cause less indiscriminate destruction than the shotgun tactics required by older generations of weapon systems (more about this later). The issue of jobs, on the other hand, strikes at a fundamental and immediate concern of most people. While machines have been competing with human labor for centuries in the physical domain, the more recent competition for mental work is more threatening, both economically and psychologically.

Views on the impact of this latest wave of automation vary dramatically with the observer. Some hail the ability of machines to eliminate mental drudgery in the same way that an earlier generation of machines released us from the bondage of hard physical labor. Others point to a bleak future in which employment opportunities remain only for an elite few.

This issue is rarely approached dispassionately. Most views are heavily influenced by a number of social and ideological assumptions. I do not pretend to be immune from such influence, but I do feel that an examination of long-term trends of the past and attempts to project such trends into the future can help to illuminate these issues. Most of the automation that has taken place in human history has occurred over the past one hundred years. As I pointed out in the prologue, the macroeconomic trends during that period were quite positive. Jobs grew 10-fold in the United States (from 12 million in 1870 to 116 million in 1985), with the percentage of the U.S. population gainfully employed increasing from 31 percent to 48 percent. More significantly, the per-capita gross national product, as well as the average earning power of jobs, increased 600 percent in constant dollars during the same period.2 The quality of jobs improved as well, with a much higher fraction of jobs providing gratification beyond the paycheck. Nonetheless, the impact of automation on jobs has been a controversial subject during this entire period. One impediment to sober reflection on the issue is that the reality of lost jobs and the resulting emotional and social impact are far easier to see than the possibility of new jobs and new industries. Early in this century, jobs in factories and in agriculture were disappearing at a rapid rate. Dire predictions of these trends spiraling to disaster were not uncommon. It was not possible for leaders at the time to say, "Don't worry, millions of jobs will be created in the electrical industry, and following that, the electronics and computer industries will create millions more." Indeed, at the time it would have been impossible to foresee even the existence of these industries. The phenomenon continues today, with new manufacturing technologies rapidly reducing the number of production jobs. Manufacturing today provides a quarter of all employment; by the beginning of the next century it is expected to provide relatively few jobs.3 The social and political impact of these lost jobs is felt far more strongly than the future opportunities that will undoubtedly be there.

More perspective can be gained by attempting as rigorously as possible to project these trends into the future. A comprehensive study using a detailed computer model of the U.S. economy was conducted recently by Wassily Leontief, Faye Duchin, and their colleagues at the Institute for Economic Analysis (IEA).4 The study indicates continued expansion of the per-capita gross national product and average earning power of the American worker. It projects a rapidly diminishing demand for clerical workers and other categories of semiskilled and unskilled workers and a reduction in factory workers, although the latter will be partially offset by the overall increase in economic activity. It projects a sharp reduction in the need for skilled metal workers due to increased use of factory automation, including robots and computerized machine tools. Conversely, it projects a sharp increase in the need for professionals, particularly engineers (including computer specialists), and for teachers.5

The most significant finding of the study is that the limiting factor on future economic growth will be the availability of a suitably trained workforce.6 There will be plenty of jobs in the early twenty-first century, but only if society provides a sufficient number of appropriately skilled workers to fill them. As factories are rebuilt for substantially fewer personnel, both blue and white collar, as agriculture continues to be mechanized, as the service industries begin to automate, there will be a corresponding increase in the demand for knowledge workers who can design, build, program, and run the intelligent machines of the future. At least as important as the knowledge workers are the teachers to train them. Since power and wealth in the age of intelligent machines will increasingly consist of knowledge and skill, the ability to develop and foster our human intellectual resources becomes society's most pressing challenge.

The IEA study examines a number of scenarios that differ in assumptions about the speed with which society can incorporate new computer technologies and provide the workforce with the necessary skills. Consequently, the scenarios differ dramatically in their projected employment levels and economic progress. Of course, the economic model used by the IEA, no matter how detailed, does not by any means eliminate uncertainty regarding the future. It is difficult to predict how quickly a given technology will be adopted and integrated into the workplace. Often predictions of rapid change are premature. For example, the advent of numerically controlled machine tools in the 1960s was expected to quickly revolutionize metalworking, but the new technology made little headway until it was integrated with computers in the late 1970s.7 On the other hand, some technologies move more quickly than any expert predicted, as was the case with the rapid computerization of offices during the 1980s. A comparison of the different scenarios of the IEA's study does make clear, however, that the primary variable that will determine the rate of continued economic progress and continued growth in the availability of employment is the quality and availability of appropriate education and training. Interestingly, the conclusion is the same for both advanced and underdeveloped nations.8

The Nature of Work

Most factories built today employ substantially fewer workers than the factories they replace. Even if robotic technology is not employed, the computerization of material handling and work flow has already substantially reduced the direct-labor content of most products. There is little question that this trend will continue and intensify over the next two decades. By early in the next century a relatively small number of technicians and other professionals will be sufficient to operate the entire production sector of society.

While employing substantially fewer people, the advent of computer controlled manufacturing technologies will permit a degree of individual customization of products not feasible today. Two centuries ago every item was inevitably a little different, since it was made by hand. During the first industrial revolution the innovation of mass production substantially reduced the individualization of products. During the second industrial revolution the innovation of extremely flexible manufacturing will increase it.9 For example, consumers will be able to sit down at their home computers and design their own clothes to their own precise measurements and style requirements using friendly, computer-assisted design software. When the user issues the command "Make clothes," the design parameters and measurements will be transmitted to a remote manufacturing facility, where the clothes will be made and shipped within hours.
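
To make the idea concrete, here is a minimal sketch, in Python, of what the "Make clothes" step might transmit: the style choices and measurements gathered into a structured order and serialized for a remote factory. The field names, values, and the omitted transmission step are all illustrative assumptions, not a description of any actual service.

```python
import json
from dataclasses import dataclass, asdict

# Minimal sketch of the "Make clothes" step: gather the user's style choices
# and measurements into a structured order and serialize it for transmission
# to a remote manufacturing facility. Field names and values are hypothetical;
# the transmission itself is omitted.

@dataclass
class GarmentOrder:
    garment: str
    fabric: str
    color: str
    measurements_cm: dict       # e.g. chest, waist, sleeve length

order = GarmentOrder(
    garment="shirt",
    fabric="cotton",
    color="deep blue",
    measurements_cm={"chest": 102, "waist": 88, "sleeve": 64},
)

payload = json.dumps(asdict(order))
print(payload)                  # this message would be sent to the factory
```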

As employment in the factory dwindles, employment in the office will be stable or increase. However, what we do in offices will substantially change.10 Clerical work will largely disappear. Completing a trend already under way, by early in the next century computers will type our letters and reports, intelligently maintain our files and records, and help to organize our work. The areas likely to continue to require significant human involvement, particularly during the first half of the next century, will be communication, teaching, learning, selling, strategic decision making, and innovation. While computers will certainly affect all of these areas, these activities will continue to be the primary focus of human effort in the office. As I pointed out above, the office worker of the next century will have sustained contact with both human and machine intelligence.

The concept of a document will undergo substantial change. Extremely high resolution, easy-to-view screens will become as common and as easy to read from as paper. As a result, we will routinely create, modify, handle, and read documents without their ever being converted to paper form. Documents will include a variety of types of information beyond mere text and pictures. They will routinely include voice, music, and other sound annotations. Even the graphic part of documents will become more flexible: it may be an animated three-dimensional picture. In addition, documents will be tailored in that they will include the underlying knowledge and flexibility to respond intelligently to the inputs and reactions of the reader.11 Finally, documents will not necessarily be ordered sequentially as they are in this book: they will be capable of flexible, intuitive patterns that reflect the complex web of relationships among ideas (this is Ted Nelson's "hypertext").12
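
A minimal sketch may make the hypertext idea concrete: a document becomes a set of nodes joined by named links, and a reading is whatever path a reader follows through them. The class and node names below are invented for illustration; this is not a description of Nelson's actual system.

```python
# Minimal illustration of a hypertext-style document: nodes of content
# connected by named links rather than a fixed page order.
# All names here are illustrative; this is not any real hypertext system.

class Node:
    def __init__(self, title, body):
        self.title = title
        self.body = body
        self.links = {}          # label -> target Node

    def link(self, label, target):
        self.links[label] = target

intro = Node("Intelligent documents", "Documents may respond to the reader...")
sound = Node("Sound annotations", "Voice and music can annotate the text...")
web   = Node("Webs of ideas", "Ideas relate in a web, not a line...")

intro.link("hear an example", sound)
intro.link("why non-sequential?", web)
web.link("back to overview", intro)

# A reader (or a program) follows whichever links reflect their interests,
# so no two traversals of the "document" need be the same.
def traverse(node, choices):
    path = [node.title]
    for label in choices:
        node = node.links[label]
        path.append(node.title)
    return path

print(traverse(intro, ["why non-sequential?", "back to overview"]))
```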

With the ongoing acceleration of the pace of change, the idea of training someone for a lifelong trade will become even less valid than it is today. Instead, we shall need to teach our children how to keep learning throughout their adult lives. It is estimated that typical workers of the twenty-first century will change the way they work once or twice each decade, shifts that we would now consider major career changes.13 Thus, the primary skill required for employment in the workplace of the future will be the ability to adapt and to continue growing intellectually.

A constructive change in our concept of work will be to think of the process of learning as part of work, rather than as just a prerequisite for work. The worker of the future may spend as much as half of his time learning rather than just doing. A trend toward this concept is already discernible among more enlightened employers. Providing on-the-job education and paying employees to acquire new skills is likely to emerge as a major trend among employers by the end of the century.

The trend toward work as a vital component of gratification and personal satisfaction is also likely to intensify in the decades ahead. It is to be hoped that the divisions between work on the one hand and learning, recreation, and social relationships on the other will dissolve as work becomes more fully integrated with the other facets of life.

Education

Common sense is not a simple thing. Instead, it is an immense society of hard-earned practical ideas: of multitudes of life-learned rules and exceptions, dispositions and tendencies, balances and checks.
Marvin Minsky, The Society of Mind
All instruction given or received by way of argument proceeds from pre-existent knowledge.
Aristotle, Posterior Analytics
The search for the truth is in one way hard and in another easy-for it is evident that no one of us can master it fully, nor miss it wholly. Each one of us adds a little to our knowledge of nature, and from all the facts assembled arises a certain grandeur.
Aristotle, 350 B.C.

My conclusion about the impact of intelligent machines on employment rests on education and its pivotal role in shaping the future world economy. This leads me to consider the impact of this new technology on education itself.14 Thus far the impact of computers has been modest. Of much greater effect have been the other media technologies of television, cinema, and recorded music. While it is true that hundreds of thousands of computers have found their way into schools, a much larger number have found their way into homes. In the home, videogames have made their presence felt, digital technology is evident in a broad variety of consumer electronic products, and calculators are ubiquitous. Yet the content and process of education remain largely unchanged; there is continued reliance on books that, as Minsky puts it, "do not talk to each other."15 The computer revolution, which is radically transforming the work of the office and the factory, has not yet made its mark in the schools.

The enormous popularity in American classrooms of the Apple IIe, which is now several generations behind in hardware technology, is a testament to the conservative nature of the educational establishment (and to the lack of adequate resources). Seymour Papert, developer of LOGO, the principal educational computer language, compares the specter of an entire classroom of children sharing one computer to that of a classroom sharing one pencil. One pencil, according to Papert, is probably better than none, but it is not likely to lead to a "pencil revolution."

Nonetheless, computers are infiltrating the schools. The importance of this has more to do with laying the groundwork for the future than with its current impact. Before any fundamental transformation of the learning process can take place, a critical mass needs to be reached in the capabilities of personal computers, their availability to the student population, their portability, the sophistication of educational software, and their integration into the learning process.16 Let us consider the situation when such a critical mass is reached. This situation will, in my view, include the following eight developments:

  • Every child has a computer. Computers are as ubiquitous as pencils and books.
  • They are portable laptop devices about the size of a large book.
  • They include very high resolution screens that are as easy to read as books.
  • They include a variety of devices for entering information, including a keyboard and a track ball (or possibly a mouse).
  • They support high quality two-way voice communication, including natural-language understanding.
  • They are extremely easy and intuitive to use.
  • A great variety of high-quality interactive intelligent and entertaining courseware is available.
  • Computers are integrated into wireless networks.

My emphasis on intelligent courseware needs explanation. Much of the available computer-assisted instruction (CAI) provides little more than repetitive exercises that could have been provided just as easily by books.17 The better programs do provide a measure of interaction, with sequences dependent on the specific areas of weakness of the student, but they still develop little sense of the student's true strengths and weaknesses. CAI now under development has a more ambitious goal. We all have models of the world that we use to understand and respond to events and to solve problems, whether they be real-world situations or classroom exercises. If we make a mistake, it may be simply a matter of a few missing or inaccurate facts. But more often it reflects a structural defect in the organization of our knowledge. A good teacher attempts to understand the model that the student is using. Then, if the student's model, and not just his data base, is faulty, the teacher devises a strategy to modify the model to more accurately reflect the subject matter. Such researchers as John Seely Brown at Xerox's Palo Alto Research Center are attempting to develop a CAI technology that will be able to model the relationships in the knowledge to be taught, diagnose the presumably weaker models that a student is starting with, develop a strategy to upgrade the student's models to the desired ones, and provide entertaining and engaging experiences to carry out the remedial strategy.18 The objective is to incrementally improve the world models of the student. As Seymour Papert points out, you cannot learn something unless "you already almost know it."
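
A toy sketch can make this student-modeling loop concrete, under the simplifying assumption that the knowledge to be taught can be represented as skills with prerequisites. The skills, exercises, and diagnosis rule below are invented for illustration and are far cruder than the systems Brown and his colleagues envision.

```python
# Toy sketch of the tutoring loop described above: represent the subject as
# skills with prerequisites, infer which underlying skill a wrong answer
# points to, and choose remedial material for that skill.
# The skills, prerequisites, and exercises are invented for illustration.

PREREQS = {
    "fractions": ["division"],
    "division": ["multiplication"],
    "multiplication": [],
}

EXERCISES = {
    "multiplication": "Practice set A: products of one-digit numbers",
    "division": "Practice set B: sharing objects into equal groups",
    "fractions": "Practice set C: naming parts of a whole",
}

def diagnose(skill, mastered):
    """Walk down the prerequisite chain to the deepest unmastered skill:
    the student's *model* may be broken below the level of the question."""
    for prereq in PREREQS[skill]:
        if prereq not in mastered:
            return diagnose(prereq, mastered)
    return skill

def next_exercise(failed_skill, mastered):
    weak = diagnose(failed_skill, mastered)
    return weak, EXERCISES[weak]

# A student misses a fractions question but has never shown mastery of division:
print(next_exercise("fractions", mastered={"multiplication"}))
# -> ('division', 'Practice set B: sharing objects into equal groups')
```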

Wireless networks will allow easy sharing of courseware; submission by students of papers, exams, courseware responses, and other creations; and electronic mail and other communications (e.g., love notes). By being plugged into international networks of information, children will have immediate access to the great libraries of the world right from their school bags. In addition to quick access to virtually all books, magazines, data bases, and other research materials, there will be intelligent software assistants to help students find the information they are looking for.

The above vision of an optimal educational workstation will obviously not come forth suddenly. Some aspects of it are becoming available now; others will emerge over the next decade.19 A personal computer with the necessary attributes should become available around the end of this century. With the historical ten-year lag of the educational field in adopting new computer technology, we can expect a critical mass level of ubiquitous utilization of such technology by the end of the first decade of the next century.20 If society wakes up to the pivotal role of education in determining our future economic well-being, then the time period for widespread implementation of this technology may be compressed. Yet it will still be well into the first decade of the next century before a broad transformation of the educational process is complete.

Let us envision education yet several decades farther into the future. By the second half of the next century, the future media technologies described earlier will be as widespread as the various video technologies of today. The greatest impact of the media of the future will be in education. A homework assignment, for example, might be to participate in the Constitutional Convention of 1787 and debate the founding fathers on the separation of powers among the three branches of the U.S. government. A subsequent assignment might be to negotiate the final language on behalf of the executive branch: see if you can get a better deal for the presidency on war powers. Your submission would be the actual debates that you participated in, and your teacher would watch them in the same way that you originally experienced them: in a totally realistic three-dimensional projected holographic environment with nearly perfect resolution. The founding fathers that you interact with would be extremely lifelike artificial people who can hear you, understand what you are saying, and respond to you just as the original founding fathers might have. They will be programmed to tolerate a young student barging in on their constitutional convention and engaging them in debate. They may also have better comprehension of contemporary language than the real founding fathers of two hundred years ago might be expected to have had.

For those of us who do not want to wait until these future media technologies are perfected, we can go back right now and engage the founding fathers in debate. We just have to use our imaginations.

Communications

Of what lasting benefit has been man's use of science and of the new instruments which his research brought into existence? First, they have increased his control of his material environment. They have improved his food, his clothing, his shelter; they have increased his security and released him partly from the bondage of bare existence. They have given him increased knowledge of his own biological processes so that he has had a progressive freedom from disease and an increased span of life. They are illuminating the interactions of his physiological and psychological functions, giving promise of an improved mental health.

Science has provided the swiftest communication between individuals; it has provided a record of ideas and has enabled man to manipulate and to make extracts from that record so that knowledge evolves and endures throughout the life of a race rather than that of an individual.
Vannevar Bush, As We May Think, 1945

By early in the next century, personal computers will be portable laptop devices containing cellular phone technology for wireless communication with both people and machines. Our portable computers will be gateways to international networks of libraries, data bases, and information services.

We can expect the personal computers of 2010 to have considerable knowledge of where to find knowledge. They will be familiar with the types of information contained in our own personal data bases, in the data bases of companies and organizations to which we have access, and in all subscription and public information services available through (wireless) telecommunications. As described earlier, we shall be able to ask our personal computers to find, organize, and present diverse types of information. The computer will have the intelligence to engage us in dialog to clarify our requests and needs, to access knowledge from other machines and people, and to make organized and even entertaining presentations.

Software should be highly standardized by the end of this century. In general, commercially available software packages will work on any computer with sufficient capability. Standard protocols and formats will be in place for all types of files: text, spreadsheets, documents, images, and sounds. Paper will be regarded as just another form of information storage. With highly accurate and rapid technology available for character and document recognition, paper will be a medium easily accessed by both people and machines.

Indeed, the advent of the electronic document has not caused the end of paper. The so-called "paperless office" actually uses more paper than its predecessor. U.S. consumption of paper for printed documents increased from 7 million tons in 1959 to 22 million in 1986.21 American business use grew from 850 billion pages in 1981 to about 2.5 trillion pages in 1986 and is expected to hit 4 trillion pages in 1990.22 While computers make it possible to handle documents without paper, they also greatly increase the productivity of producing paper documents.

Telephones will routinely include video capability. Later in the next century the video images will have improved to become highly realistic, moving, three dimensional, projected, holographic images with nearly perfect resolution. A phone call with a friend or business associate will be very much like visiting that person. It will appear that they are sitting with you in your living room or office. The only limitation will be that you cannot touch one another.

Even this limitation will eventually be overcome. Once we create robotic imitations of people that are extremely lifelike in terms of both look and feel, a robotic person imitator could be made to move in exactly the same way as the real person hundreds or thousands of miles away. Thus, if two people who are apart wanted to spend time together, they would each meet with a robotic imitator of the other. Each imitator would move in exactly the same way (and at nearly the same time) as the remote real person by means of high-speed communication and robotic techniques. In other words, you lift an arm, and your robotic imitator hundreds of miles away lifts its arm in exactly the same way. One problem to overcome will be the slight communications delay if the two humans are a few thousand miles apart. Artificial intelligence may be able to help out here by anticipating movements. Using this type of communication service of the late twenty-first century, couples may not necessarily have to be near each other to maintain their romantic relationships. (I have not yet checked with my wife on this, however.)
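
One simple way such anticipation might work is ordinary dead reckoning: extrapolate the remote person's recent joint positions forward by the communication delay and move the local imitator to the predicted spot. The sketch below illustrates the idea; the sampling rate, delay, and coordinates are invented.

```python
# Minimal sketch of masking communication delay by extrapolating motion:
# given the last two sampled positions of a remote joint, predict where it
# will be after the one-way delay and move the local imitator there.
# This is ordinary dead reckoning; the figures are illustrative only.

def predict_position(prev, curr, dt_sample, delay):
    """Linear extrapolation: assume the joint keeps its current velocity."""
    velocity = [(c - p) / dt_sample for p, c in zip(prev, curr)]
    return [c + v * delay for c, v in zip(curr, velocity)]

# Samples of an elbow joint (x, y, z in meters) taken 20 ms apart,
# with a 30 ms one-way transmission delay to compensate for:
prev_sample = [0.40, 1.10, 0.25]
curr_sample = [0.42, 1.12, 0.25]
predicted = predict_position(prev_sample, curr_sample, dt_sample=0.020, delay=0.030)
print(predicted)   # approximately [0.45, 1.15, 0.25]
```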

The advent of videophones, even of a conventional two-dimensional type, will place new demands on telephone etiquette. We may not always want to engage in a call with video. There will, of course, always be the option of turning the picture off (in either direction), but doing so may involve an explanation that we currently do not have to deal with. (In fact, the widespread adoption of cellular technology, even without pictures, will also put a strain on telephone etiquette. It is now feasible to be "away" from our telephones when we are busy. But if everyone has a phone in their wristwatch, it may become harder to avoid answering the phone.)

One major impact of advanced communications technology will be on the nature of our cities. Cities first developed to facilitate manufacturing and transportation and thus tended to be located near ports and rivers. With highways and railways providing greater flexibility in transporting goods and people, a primary purpose of the city shifted to communication. Congregating people in one place facilitated their ability to meet and conduct business. But if we can "meet" with anyone regardless of where we are and our computers can easily share information through wireless telecommunications networks, the need for cities will diminish. Already our cities are spreading out, and this trend will accelerate as the communication technologies described above become available. Ultimately, we will be able to live anywhere and still work, learn, and play with people anywhere else on the globe. The world will become one city, the ultimate realization of McLuhan's vision of the global village.

Warfare

When all else fails, the future still remains.
Christian Bovee
Knowledge is power and permits the wise to conquer without bloodshed and to accomplish deeds surpassing all others.
Sun Tzu (Chou dynasty philosopher and military strategist), The Art of War (fourth century B.C.)

Warfare and the potential for warfare are taking a paradoxical turn in the last half of the twentieth century. There is increasing reliance, at least by the more developed nations, on "smart" weapons and a rapid evolution of such weapons. Missiles can be launched from air, ground, or sea hundreds and in some cases thousands of miles from their intended targets. These weapons find their way to their destinations using a variety of pattern-recognition and other computer technologies. Pilot's Assistants, for example, are beginning to provide pilots with an electronic copilot that helps fly and navigate the aircraft, locate enemy targets, plot weapon trajectories, and perform other tasks. Recent military engagements that utilized such technology have resulted in more accurate destruction of enemy targets and substantially less unintended damage to neighboring civilian populations and facilities (although there are still a few bugs in these systems). Among military establishments that can afford routine use of these technologies, a profound reconsideration of military tactics is under way. The primary thrust is to replace shotgun strategies with the careful pinpointing of targets.

Not all nations have access to these new technologies. While Iran and Iraq do possess small numbers of such advanced weapons, they still primarily used weapons and battlefield tactics of World War II vintage during their recent war. The Soviet Union in Afghanistan also used relatively unsophisticated weapons and tactics. This reflects a fundamental reality in the balance of power between East and West: the Warsaw Pact forces are at least a decade behind the NATO forces in AI and computer technologies. Indeed, our primary strategy in countering the numerically superior forces of the Soviet Union and its allies is to rely on our superiority in the intelligence of our conventional (or nonnuclear) weapons as the first line of defense and on our nuclear threat as the second line.23

In accordance with this second line, the United States and NATO have been unwilling to make a declaration of no first use of nuclear weapons, stating that we may use nuclear weapons in the event of a conventional attack on Europe. However, if we can improve our intelligent but conventional weapons to a point where our confidence in the first-line strategy is sufficiently enhanced, then the western allies would be in a position to issue a no-first-use pledge and forgo the nuclear threat in Europe. Recent political changes in Eastern Europe and the apparent collapse of communism in many countries may hasten such a development. There are active development programs to create a new generation of, for example, ground-to-ground and air-to-ground antitank missiles capable of being launched from hundreds of miles away, following irregular trajectories, searching intelligently for their targets, and locating and destroying them.24 Once perfected, these missiles could be launched without precise knowledge of the location of the enemy positions. They are being designed to use a variety of artificial vision, pattern-recognition, and communication technologies to roam around and reliably locate enemy vehicles. Friendly forces would be avoided by a combination of electronic communication and pattern-recognition identification. To the extent that friendly targets are avoided by electronic communication, the reliability and security of the encoding protocols, another important area of advanced computer research, will obviously be crucial. Anticipated progress in intelligent weaponry was a major factor behind the recommendation of four former high-ranking American advisers, including Robert McNamara and McGeorge Bundy, for an American no-first-use pledge in the spring 1982 issue of Foreign Affairs.25

One result of these changes is the prospect of diminished civilian destruction from war, but few observers are heralding this development. The reason for this is, of course, the enormous increases in the destructive capability of weapons that have also occurred. As terrifying and destructive as the atomic weapons that ended World War II were, the superpowers now possess more than a million times more destructive power. Children growing up today belong to the first generation in history born into an era in which the complete destruction of the human race is at least plausible. Experts may debate whether or not "nuclear winter" (the catastrophic global change in climate that some scientists have predicted would follow a large scale exchange of nuclear weapons) really has the potential to end all human life. The end of the human race has never before been seriously debated as a possibility. Whether an all-out nuclear war would actually destroy all human life or not, the overwhelming destruction that would certainly ensue has created an unprecedented level of terror, under which all of the world's people now live. Ironically, the fear of nuclear conflict has kept the peace: there has not been a world war for nearly half a century. It is a peace from which we take limited comfort.

The most evident technologies behind this radical change in the potential destructiveness of warfare are, of course, atomic fission and fusion. The potential for worldwide catastrophe would not be possible, however, without weapon-delivery systems, which rely heavily on computer intelligence to reach their destinations. The power of conventional munitions has also grown substantially, and political and social inhibitions against their use are far less than those for nuclear weapons. Thus, the possibility of eliminating nuclear weapons from the European theater paradoxically evokes fear that such a development would make Europe "safe" for a conventional war that would still be far more destructive than World War II. This duality in the development of military technology (the advent of weapons for fighting weapons rather than civilian populations, together with the potential for greatly enhanced destruction) will continue.

Let us consider military technology and strategy several decades into the next century, at which time these trends should have fully matured. By that time flying weapons (missiles, robot planes, and flying munitions) will be highly self-reliant. They will be capable of being launched from virtually any place on earth or from space and still finding their targets by using a combination of advanced vision and pattern-recognition technologies. They will obviously need the ability to avoid or counteract defensive weapons intended for their destruction. Clearly, of primary strategic importance will be the sophistication, indeed the intelligence, of both the offensive and defensive systems of such weapons. Geography is already losing its strategic importance and should be a relatively minor factor several decades from now. Such slow-moving vehicles as tanks and ships, as well as battle stations, whether land-, sea-, air-, or space-based, will be vulnerable unless defended by arrays of intelligent weapons.

Most weapons today destroy their targets with explosions or, less often, bullets. Within the next few decades it is likely that laser and particle-beam weapons will be perfected. This will provide such fast-moving weapons as missiles with a variety of means for both offense and defense.

Planes, particularly those closest to combat, will not require pilots. With sophisticated enough electronic technology, there is no reason why planes cannot be directed from afar by either human or machine intelligence. Of course, reliable and secure communications will be essential to prevent an enemy from taking control of remote-controlled robot aircraft. Indeed, the three Cs (command, control, and communication) are emerging as the cornerstone of future military strategy.26

In general, the interactions of future weapons are likely to be so fast that human reflexes will not be the primary criterion of tactical success. Weapons will utilize a variety of their tactical offensive and defensive capabilities within seconds or even milliseconds when meeting comparable enemy systems. In such encounters, the most capable and reliable electronics and software will clearly prevail.

I remember as a child reading a tale about a very advanced civilization that had outlawed war and replaced it with a more refined form of conflict. Rather than resort to deadly weapons, two societies challenging each other for supremacy engaged in a game of chess. Each society could select its best master player or use a committee. As I recall, no one thought to use machine intelligence for this task. Whoever won the board conflict won the war and, apparently, the spoils of war. How this was enforced was not discussed, but one can imagine that warfare in the future may not be all that dissimilar from this tale. If human reflexes and eventually human decision making, at least on a tactical level, are replaced with machine intelligence, then two societies could let their machines fight out the conflict and then be told who had won (or perhaps it would be obvious who had prevailed). It would be convenient if the actual conflict took place in some remote place, like outer space. Here the enforcement of the winner's prerogatives is obvious: the losing society will have lost its machine defenders, which will render it defenseless. It will have no choice but to submit to the victor.

This scenario differs in one important respect from the story about conflict resolution through chess. In the terms I used earlier, chess represents level 2 intelligence and is thus amenable to recursive software techniques combined with massive amounts of computer power. Battling weapons, on the other hand, require level 3 intelligence (the ability to abstract) as well as advanced forms of pattern recognition. They also require reliability. One controversial aspect of this new technology is the extent to which we can rely on these extremely complex systems, considering the limited opportunity we will have to test them under realistic wartime conditions. This issue is particularly salient for the highly centralized communication networks needed for command and control.27
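
The classic example of such a recursive technique is minimax game-tree search: look ahead through the tree of possible moves, assume each side picks its best option, and propagate the resulting scores back up. The sketch below shows the bare recursion on a toy two-ply tree rather than on chess itself.

```python
# Stripped-down minimax: the recursive look-ahead technique the text refers to,
# shown on an abstract game tree rather than a real chess position.
# Leaves hold scores from the first player's point of view.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):          # leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game tree: the first player chooses a branch, the
# opponent then picks the worst outcome (for the first player) within it.
tree = [
    [3, 5],     # branch A: opponent will hold us to 3
    [2, 9],     # branch B: opponent will hold us to 2
    [4, 4],     # branch C: opponent will hold us to 4  <- best choice
]
print(minimax(tree, maximizing=True))   # 4
```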

Can we take any comfort from this vision? It is entirely possible that military engagements decades hence may involve relatively few casualties, particularly of a civilian nature. On the other hand, there is no guarantee that warfare will be constrained to weapons fighting weapons. The tactic of holding large civilian populations hostage will continue to have its adherents among military strategists. What is clear, however, is that a profound change in military strategy is starting to take place. The cornerstones of military power from the beginning of recorded history through recent times (geography, manpower, firepower, and battle-station defenses) are being replaced by the sophistication of computerized intelligence and communications. Humans will direct battlefield strategy, but even here computers will play a crucial role. Yet humans will still be the underlying determinants of military success. Military strength will be a function of the sophistication of the technology, but a society's leaders, scientists, engineers, technicians, and other professionals will create and use the technology. At least, that is likely to remain the case for the next half century.

Medicine

A self-balancing 28-jointed adapter-base biped; an electro-chemical reduction plant, integral with segregated stowages of special energy extract in storage batteries for subsequent actuation of thousands of hydraulic and pneumatic pumps with motors attached; sixty-two thousand miles of capillaries; . . . the whole extraordinary complex mechanism guided with exquisite precision from a turret in which are located telescopic and microscopic self-registering and recording range finders, a spectroscope, etc.; the turret control being closely allied with an air-conditioning intake and exhaust, and a main fuel intake . . .
R. Buckminster Fuller, "A Definition of a Man"

A projection of current trends gives us the following picture of medicine early in the next century: A variety of pattern-recognition systems will have become a vital part of diagnosis. Blood tests will be routinely analyzed by cybernetic technicians. Today's routine blood test generally involves a human technician examining only about 100 cells and distinguishing only a few cell types; the blood test of the early twenty-first century will involve automatic analysis of thousands or even a million blood cells as well as a thorough biochemical analysis. With such extensive analysis, precursor cells and chemicals that indicate the early stages of many diseases will be reliably detected. Most people will have such devices in their homes. A sample of blood will be painlessly extracted on a routine basis and quickly analyzed.
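
A toy sketch suggests what the pattern-recognition core of such an analyzer involves: classify each measured cell against reference profiles, tally the counts, and flag any cell population that falls outside an expected range. The features, reference values, and ranges below are invented for illustration and bear no relation to real hematology thresholds.

```python
# Toy sketch of the automated count such a home analyzer implies: classify
# each cell by the nearest reference profile (here just two features,
# size and granularity), then flag counts outside an expected range.
# The reference profiles, features, and ranges are invented for illustration.

REFERENCE = {                     # (size in um, granularity score)
    "red cell":   (7.0, 0.1),
    "lymphocyte": (9.0, 0.3),
    "neutrophil": (12.0, 0.8),
}
EXPECTED_FRACTION = {"red cell": (0.90, 0.99), "lymphocyte": (0.00, 0.05),
                     "neutrophil": (0.00, 0.06)}

def classify(cell):
    return min(REFERENCE,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(REFERENCE[k], cell)))

def screen(cells):
    counts = {name: 0 for name in REFERENCE}
    for cell in cells:
        counts[classify(cell)] += 1
    total = len(cells)
    flags = [name for name, (lo, hi) in EXPECTED_FRACTION.items()
             if not lo <= counts[name] / total <= hi]
    return counts, flags

# A simulated sample of 1,000 measured cells:
sample = [(7.1, 0.1)] * 950 + [(9.2, 0.3)] * 20 + [(11.8, 0.8)] * 30
print(screen(sample))
```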

Electrocardiograms will be analyzed entirely by computer; indeed, prototypes of this technology exist today. Our wristwatches will monitor cardiac functions and other biological processes that might require immediate attention, in addition to diagnosing less acute problems. Particularly powerful computerized monitoring will attend anyone in a special-care situation, such as a hospital, nursery, or old-age home.

Apart from blood tests there will be almost complete reliance on diagnosis by noninvasive imaging (like sonic and particle-resonance imaging). The instantly generated hard-copy images will include the computer's diagnostic findings. The images themselves will be highly realistic computer-generated views of the interiors of our bodies and brains, rather than the often confusing, hard-to-interpret pictures from some of today's imaging devices. While human diagnosticians will continue for many years to examine images from X-ray machines, CAT scanners, nuclear-magnetic-resonance scanners, and sonic scanners, a high degree of confidence will ultimately be placed in the ability of computerized systems to detect and diagnose problems automatically.

Lifetime patient records and histories will be maintained in nationally (or internationally) coordinated data banks in place of today's disorganized system of partial, fragmented, and often illegible records. These records will include all imaging data and the complete readouts of our home blood tests and wristwatch monitoring systems. Intelligent software will be available to enable this extensive data bank to be analyzed and accessed quickly by both human and machine experts.

Expert systems will influence virtually all diagnostic and treatment decisions. These expert systems will have access to the output of a patient's most recent imaging and biochemical analyses, as well as to the entire file of all such past exams and monitored data. They will also have access to all internationally available research data and will be updated on a daily basis with the latest research insights. The written reports of these expert systems will be reviewed by human doctors in critical or complex cases, but for more routine problems the machine's opinions will be relied upon with little or no review.28

The designing of drugs will be entirely different from present methods. Most drugs on the market today were discovered accidentally. In addition to their beneficial effects, they often cause a number of undesirable side effects. Further, their positive effects are often indirect and not fully effective. In contrast, most drugs of the early twenty-first century will have been specifically designed to accomplish their missions in the most effective and least harmful ways. Drug designers will work at powerful computer-assisted design workstations that will have access to relatively complete mathematical models of all of our biochemical processes. The human drug-design engineers will specify key design parameters, and the computer will perform most of the detailed design calculations. Human biochemical simulation software will allow drugs to be tested with software before any actual drugs are manufactured.29 Human trials will still be required in most cases (at least this will be true during the first half of the next century), but the simulators will ultimately be sufficiently reliable that the lengthy multistage process of animal and human testing now required will be substantially shortened.

One class of drugs will be similar to the smart weapons described in the previous section. These drugs will be actual living cells with a measure of intelligence. Like cells of our natural immune systems, they will be smart enough to identify an enemy pathogen (bacteria, virus, etc.) and destroy or pacify it. Again like immune cells, once they have completed their missions, they may either self-destruct or remain on call to defend against a future pathogen invasion. Another class of drugs will help overcome genetic diseases. Computer-designed genes will be distributed to our cells by specially designed viruses, which will essentially "infect" the body with the desired genetic information.30

Surgical operations will make extensive use of robotic assistants. In types of surgery requiring very precise performance, e.g., in eye surgery, the actual operation will be carried out by the robot, with human doctors overseeing the operation.

Research will be similar to drug design. Most research will be carried out on software that simulates human biochemical processes. Experiments that would now take years will be carried out in minutes. Reporting will be instantaneous, with key results feeding into data bases that allow access by other humans as well as by computer expert systems.31

Will these innovations improve our health and well-being? The answer is almost certainly yes. Heart disease and cancer are likely to be conquered by early in the next century. Of course, we have the opportunity right now to dramatically reduce the incidence of these diseases by applying what is already known about the crucial role of diet and life style. But that is a subject for a different book.

We have already doubled the average life expectancy in Europe, Japan, and the United States since the beginning of the first industrial revolution two centuries ago. The potential exists to substantially increase it again by the end of the twenty-first century.

With machines playing such crucial and diverse roles, what will the doctors and nurses of the twenty-first century do? Their major role will be in research and in the organization of medical knowledge. Committees with both human and machine intelligence will review all research findings with a view to incorporating new rules and recommendations into our expert systems. Today new research knowledge filters into the medical community in a slow, inconsistent, and haphazard fashion. The future dissemination of knowledge and recommendations will be very rapid. Doctors will continue to be involved in strategic medical decisions and will review diagnostic and treatment recommendations in complicated cases. Yet some of the new technology will bypass the doctor. There will be a trend toward individuals taking responsibility for their own health and utilizing computerized diagnostic and remedial methods directly.

One area that will still require human attention in the early twenty-first century will be comfort and caring. Machines will not play a significant role here until mature versions of the advanced media technologies described earlier become available later in the century.

The Handicapped

A primary interest of mine is the application of computer technology to the needs of the handicapped. Through such technology, handicaps associated with the major sensory and physical disabilities can largely be overcome during the next decade or two. I am confident of this development because of the fortunate matching of the strengths of early machine intelligence with the needs of the handicapped. The typical disabled person is missing a specific skill or capability but is otherwise a normally intelligent and capable human being. It is generally possible to apply the sharply focused intelligence of today's machines to ameliorate these handicaps. A reading machine, for example, addresses the inability of a blind or dyslexic person to read, probably the most significant handicap associated with the disability of blindness.

In the early twenty-first century the lives of disabled persons will be far different than they are today. For the blind, reading machines will be pocket-sized devices that can instantly scan not only pages of text but also signs and symbols found in the real world. These machines will be able to read with essentially perfect intonation and with a broad variety of voice styles. They will also be able to describe pictures and graphics, translate from one language to another, and provide access to on-line knowledge bases and libraries through wireless networks. Most blind and dyslexic persons will have them, and they may be ubiquitous among the rest of the population.

Blind persons will carry computerized navigational aids that will perform the functions of seeing-eye dogs, only with greater intelligence than today's canine navigators. Attempts up to now at electronic navigational assistants for the blind have not proved useful. Unless such a device incorporates a level of intelligence at least comparable to a seeing-eye dog, it is not of much value. This is particularly true since modern mobility training can provide a blind person equipped only with an ordinary cane with substantial travel skills. I personally know many blind people who can travel around town and even around the world with ease. With the intelligent navigational aids of the future, travel skills for the blind will become even easier.

Ultimately, compact devices will be built that combine both reading and navigational capabilities with the ability to provide intelligent descriptions of real-world scenes on a real-time basis. At that stage of machine evolution they will probably be more accurately called seeing machines.32 Such a machine would be like a friend that could describe what is going on in the visible world. The blind user could ask the device (verbally or in some other way) to elaborate on a description, or he could ask it questions. The visual sensors of such a device could be built into a pair of eyeglasses, although it may be just as well to pin it on the user's lapel. In fact, these artificial eyes need not look only forward; they might just as well look in all directions. And they may have better visual acuity than normal eyes. We may all want to use them.

The deaf will have hearing machines that can display what people are saying. The underlying technology required is the Holy Grail of voice recognition: combining large-vocabulary recognition with speaker independence and continuous speech. Early versions of speech-to-text aids for the deaf should appear over the next decade. Artificial hearing should also include the ability to intelligently translate other forms of auditory information, such as music and natural sounds, into other modalities, such as vision and touch.

Eventually we may find suitable channels of communication directly into the brain to provide truly artificial sight and hearing. But in any case, there will certainly be progress in restoring lost hearing and sight.

The physically handicapped (paraplegics and quadriplegics) will have their ability to walk and climb stairs restored, abilities that will overcome the severe access limitations wheelchairs impose. Methods to accomplish this will include exoskeletal robotic devices, or powered orthotic devices, as they are called. These devices will be as easy to put on as a pair of tights and will be controlled by finger motion, head motion, speech, and perhaps eventually thoughts. Another option, one that has shown promise in experiments at a number of research institutes, is direct electrical stimulation of limb muscles. This technique effectively reconnects the control link that was broken by spinal cord damage.

Those without use of their hands will control their environment, create written text, and interact with computers using voice recognition. This capability already exists. Artificial hand prostheses controlled by voice, head movement, and perhaps eventually by direct mental connection, will restore manual functionality.

Substantial progress will be made in courseware to treat dyslexia (difficulty in reading for neurophysical reasons other than visual impairment) and learning disabilities. Such systems will also provide richer learning experiences for the retarded.

Perhaps the greatest handicap associated with sensory and physical disabilities is a subtle and insidious one: the prejudice and lack of understanding often exhibited by the general public. Most handicapped persons do not want pity or charity; instead, they want to be respected for their own individuality and intelligence. We all have handicaps and limitations; those of a blind or deaf person may be more obvious, but they are not necessarily more pervasive or limiting. I have worked with many disabled persons, and I know from personal experience that they are as capable as other workers and students at most tasks. I cannot ask a blind person to drive a package across town, but I can ask him to give a speech or conduct a research project. A sighted worker may be able to drive a car, but he will undoubtedly have other limitations. The lack of understanding many people have of handicapped persons is evident in many ways, some obvious, some subtle. By way of example, I have had the following experience on many occasions while eating a meal with a blind person in a restaurant. The waiter or waitress will ask me if my blind friend wants dessert or if he wants cream in his coffee. While the waiter or waitress obviously intends no harm or disrespect, the message is clear. Since there is no indication that the blind person is also deaf, the implication is that he must not be intelligent enough to deal with human language.

A not unimportant side benefit of intelligent technology for the handicapped should be a substantial alteration of these negative perceptions. If the handicaps resulting from disabilities are significantly reduced, if blind people can read and navigate with ease, if deaf persons can hold normal conversations on the phone, then we can expect public perceptions to change as well. When blind, deaf, and other disabled persons take their place beside us in schools and the workplace and perform with the same effectiveness as their nondisabled peers, we shall begin to see these disabilities as mere inconveniences, as problems no more difficult to overcome than poor handwriting or fear of public speaking or any of the other minor challenges that we all face.

Music

I . . . begin to feel an irresistible drive to become a primitive and to create a new world.
August Strindberg, from a letter

Let us step well into the next century for a view of music when current trends have been fully established. There will still be acoustic instruments around, but they will be primarily of historical interest, much like harpsichords are today. Even concert pianists will accept electronic pianos as fully the equivalent of the best acoustic pianos. All nuances of sound, including interstring resonances, will be captured. Of course, to a pianist the feel of a piano is important, not just the sound. Piano actions driven by time-varying magnetic forces will faithfully emulate the feel of the finest top-of-the-line grand pianos. Pianists will prefer the electronic versions because they are more reliable, have extensive additional capabilities, and are always perfectly in tune. We are close to this vision today. Controllers emulating the playing techniques of all conventional instruments, such as guitar, violin, and drums, will have largely replaced their acoustic counterparts. Perfect synthesis of acoustic sounds will have long been established.

With the physical link between the control and generation of musical sound having long been broken, most musicians will have gravitated to a new generation of controllers. These new instruments will bear little resemblance to any instrument of today and will give musicians optimal expressive control.

While the historically desirable sounds of pianos and violins will continue to be used, most music will use sounds with no direct acoustic counterpart. Unlike many synthetic sounds of today, these new musical sounds will have greater complexity and musical depth than any acoustic instrument we are now familiar with.

The concept of a musical sound will include not only sounds that simply start and stop but also a class of sounds that changes characteristics according to a number of continuously controllable parameters. For example, we could use all of our fingers to control ten continuous parameters of a single sound. Such a sound will exist as a time-evolving entity that changes according to expression applied in ten different dimensions. With extremely powerful parallel computing devices, the distinction between real-time and non-real-time sound modification will largely disappear; virtually everything will happen in real-time. The ability to modify sound with real-time continuous controls will be regarded as more important than the individual sounds themselves.
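
As an illustration of a sound defined by continuous controls rather than a fixed note, the sketch below drives a simple additive tone with three time-varying parameters (pitch, brightness, vibrato depth), standing in for the ten dimensions described above. The parameter mappings are invented purely for illustration.

```python
import math

# Minimal sketch of a sound shaped by continuous control parameters rather
# than a fixed note: three of the "ten dimensions" (pitch, brightness,
# vibrato depth) are sampled over time and drive simple additive synthesis.
# The parameter mappings are invented purely for illustration.

SAMPLE_RATE = 22050

def render(duration, pitch_of_t, brightness_of_t, vibrato_of_t):
    samples = []
    phase = 0.0
    for n in range(int(duration * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        freq = pitch_of_t(t) * (1.0 + vibrato_of_t(t) * math.sin(2 * math.pi * 5 * t))
        phase += 2 * math.pi * freq / SAMPLE_RATE
        # brightness controls how much of a 3rd harmonic is mixed in
        b = brightness_of_t(t)
        samples.append((1 - b) * math.sin(phase) + b * math.sin(3 * phase))
    return samples

# The performer's gestures become time-varying functions: a rising pitch,
# a swell in brightness, vibrato that deepens toward the end.
tone = render(
    duration=1.0,
    pitch_of_t=lambda t: 220.0 + 40.0 * t,
    brightness_of_t=lambda t: 0.2 + 0.5 * t,
    vibrato_of_t=lambda t: 0.01 * t,
)
print(len(tone), min(tone), max(tone))
```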

Computers will almost always respond in real time, although people will not always create music in real time. There will continue to be a distinction between music composition and music performance. Composition will not mean writing down music notation. It will refer to the creation of music in which the process of creation takes substantially longer than the piece created. Sequencers that record all continuous controls with high resolution will allow the "massaging" of a work of music in the same way that we now work on a printed document. Music created in this way will certainly not be subject to the limitations of finger dexterity (this has already been largely achieved with contemporary sequencers). Once a piece is composed, high-quality notation will be instantly available. It is likely that forms of music notation more satisfactory than the current five-line staffs will be developed over the next several decades to keep pace with the added dimensions of control, added sounds, and added sound modifications.33
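
As an illustration only (no such design is specified here), the following Python sketch shows one way a sequencer might store notes alongside high-resolution continuous-control curves so that a recorded performance can be "massaged" after the fact. All class and method names are hypothetical.

```python
# A sketch of a sequencer track that keeps expressive control curves
# separate from notes, so both can be edited like text in a document.
from dataclasses import dataclass, field
from bisect import insort
from typing import Dict, List, Tuple

@dataclass
class Note:
    start: float      # seconds from the beginning of the piece
    duration: float   # seconds
    pitch: int        # MIDI-style note number

@dataclass
class Track:
    notes: List[Note] = field(default_factory=list)
    # Each named control (e.g. "pressure") is a list of (time, value)
    # breakpoints recorded at high resolution during the performance.
    controls: Dict[str, List[Tuple[float, float]]] = field(default_factory=dict)

    def record_control(self, name, time, value):
        insort(self.controls.setdefault(name, []), (time, value))

    def shift(self, seconds):
        """Slide the whole passage in time without re-playing it."""
        for n in self.notes:
            n.start += seconds
        for name, points in self.controls.items():
            self.controls[name] = [(t + seconds, v) for (t, v) in points]

    def scale_control(self, name, factor):
        """Exaggerate or soften an expressive gesture independently of the notes."""
        self.controls[name] = [(t, v * factor) for (t, v) in self.controls.get(name, [])]

# "Massage" a short recorded gesture the way one edits a printed document.
track = Track(notes=[Note(start=0.0, duration=1.5, pitch=60)])
for i in range(100):                      # 100 control samples across 1.5 seconds
    track.record_control("pressure", 0.015 * i, i / 100.0)
track.shift(2.0)                          # move the passage two seconds later
track.scale_control("pressure", 0.5)      # soften the expressive curve by half
print(track.notes[0].start, track.controls["pressure"][-1])
```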

Live music performance will continue to have the same appeal that it does today. While much of what is heard may have been previously programmed, a musician's sharing of musical expression with an audience through real-time (and live) control will continue to be a special form of human communication. The preparation of a musical performance will involve practice and learning of the musical material as well as preparation of the knowledge bases of the musical instruments. Cybernetic musicians generating lines of accompaniment and counterpoint will be commonplace. The intelligence of these software-based musical accompanists will be partially built into the instruments and partially programmed by the musicians as they prepare a performance.34

Intelligent software incorporating extensive knowledge of musical theory and styles will be widely used by professional musicians in creating musical compositions and preparing performances, by amateur musicians, who will be able to jam with their computerized instruments, and by students learning how to play. The advent of musically knowledgeable software-based accompanists will provide children with exciting musical experiences at early stages of their knowledge and skill acquisition.

Music will not necessarily take the form of fixed works that we listen to from beginning to end. One form of musical composition might be a set of rules (or a modification of a set of rules) and expression structures that together can generate an essentially unlimited number of actual pieces. This "work" would sound different every time we listened to it, although each listening would share certain qualities that would qualify it as a single work. Compositions will also allow the listener to control and modify what he is hearing. We could control the entry and exit of various instruments and lines of music or control the evolution and emotional content of a piece without necessarily having the musical knowledge of a musician. This type of musical work would allow us to explore a musical world that we could have some influence on. A musical work could respond to our physical movements or even our emotions, which the computer-based system generating the actual sounds could detect from the subtleties of our facial expressions or perhaps even our brainwaves.
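
A minimal sketch of what such a rule-based "work" might look like follows, assuming a toy rule set in Python; the scale, the tension control, and the closing cadence rule are invented for illustration. Each rendering differs, yet every rendering shares the qualities the rules impose.

```python
# A toy "composition" that is a set of rules plus a listener control,
# not a fixed sequence of notes. Every rendering differs, yet all share
# the same scale, contour rules, and closing cadence.
import random

SCALE = [0, 2, 3, 5, 7, 8, 10]   # a minor-ish mode shared by every rendering

def generate(length=32, tension=0.3, seed=None):
    """Return a list of (pitch, duration) pairs.

    `tension` is the listener's control: higher values allow wider leaps and
    shorter notes, changing the character without changing the rules.
    """
    rng = random.Random(seed)
    degree, piece = 0, []
    for _ in range(length):
        max_leap = 1 + int(tension * 4)    # rule: leaps grow with tension
        degree = max(0, min(len(SCALE) * 2 - 1, degree + rng.randint(-max_leap, max_leap)))
        pitch = 60 + SCALE[degree % len(SCALE)] + 12 * (degree // len(SCALE))
        duration = rng.choice([0.5, 1.0] if tension > 0.5 else [1.0, 2.0])
        piece.append((pitch, duration))
    piece.append((60, 4.0))                # rule: always cadence home
    return piece

# Two listenings of the "same" work: different notes, shared qualities.
print(generate(tension=0.2)[:5])
print(generate(tension=0.8)[:5])
```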

In this way music becomes potentially more than entertainment. It can have powerful effects on our emotional states, influencing our moods and affecting our learning. There will not be a sharp division between the musician and nonmusician. Increasingly and regardless of musical talent and training, we shall all be able to express our feelings through music.

Politics

Economics, sociology, geopolitics, art, religion all provide powerful tools that have sufficed for centuries to explain the essential surfaces of life. To many observers, there seems nothing truly new under the sun-no need for a deep understanding of man's new tools-no requirement to descend into the microcosm of modern electronics in order to comprehend the world. The world is all too much with us.

Nonetheless, studying economics and other social sciences, I began to realize that the old disciplines were breaking down, the familiar categories slipping away. An onslaught of technological progress was reducing much of economic and social theory to gibberish. For example, such concepts as land, labor, and capital, nation and society-solemnly discussed in every academic institution as if nothing had changed-have radically different meanings than before and drastically different values. Yet the vendors of old expertise continue on as if nothing had happened.

Laws get passed, editorials written, speeches delivered, soldiers dispatched, for all the world as if we still traveled in clipper ships and communicated chiefly by mail.

Jeane Kirkpatrick, for example, gave a speech, quoted respectfully in the Wall Street Journal, in which she said it was impossible to understand what is going on in the world without a comprehension of geography, "an idea of where things are." It is a common notion . . . . Visit the Pentagon, or the New York Times, and everywhere there are maps, solemnly defining national borders and sovereign territories. No one shows any signs of knowing that we no longer live in geographic time and space, that the maps of nations are fully as obsolete as the charts of a flat earth, that geography tells us virtually nothing of interest about where things are in the real world.

The worldwide network of satellites and fiber optic cables, linked to digital computers, television terminals, telephones and databases, sustains worldwide markets for information, currency and capital on line 24 hours a day. Boeing 747s constantly traversing the oceans foster a global community of commerce. The silicon in sand and glass forms a global ganglion of electronic and photonic media that leaves all history in its wake. With other new technologies of material science, bioengineering, robotics, and superconductivity, all also heavily dependent on the microchip, information systems are radically reducing the significance of so-called raw materials and natural endowments, nations and ethnic loyalties, material totems and localities.

Israel, a desert-bound society, uses microelectronic agricultural systems to supply eighty percent of the cut flowers in Europe and compete in avocado markets in New York. Japan, a set of barren islands, has used microelectronic devices to become one of the globe's two most important nations. In an age when men can inscribe worlds on grains of sand, conventional territory no longer matters....

Today the most important products are essentially made of the sand in chips of crystalline silicon. Their worth derives from the ideas they embody. They are information technologies and their value is invisible. Yet intellectuals, supposedly masters of ideas, refuse to believe in any value they cannot see or weigh. . . .

While the cost-effectiveness of computer components and related products has risen several million fold and the price of a transistor has sunk from $9.00 in 1955 to about eight ten-thousandths of a cent in 1987, the estimates of national productivity have entirely ignored the change. Once again, things that drop in price are assumed to be dropping in value. Yet it is the astronomical reduction in the price of computing that has made it the most important force and most valuable industry in the world economy.
George Gilder, The Message of the Microcosm
Where did that knowledge exist? . . . If all records told the same tale, then the lie passed into history and became truth.
George Orwell, 1984

As I have pointed out throughout this book, wealth and power in the age of intelligent machines are increasingly becoming a function of innovation and skill. The cornerstones of power during the first industrial revolution-geography, natural resources, and manual labor-are rapidly diminishing in importance and relevance. We are entering a world in which wealth can be beamed across the globe by satellite, smart weapons can reach their destinations from thousands of miles away, and some of the most powerful technologies in history require only tiny amounts of material resources and electricity. We can only conclude that the strategic variables controlling our future are becoming technology and, in particular, the human intellectual resources needed to advance technology.

For thousands of years governments have shown that people can be forced to perform manual labor (although even here productivity is certainly diminished by coercion). It is a fortunate truth of human nature that creativity and innovation cannot be forced. To create knowledge, people need the free exchange of information and ideas. They need free access to the world's accumulated knowledge bases. A society that restricts access to copiers and mimeograph machines for fear of the uncontrolled dissemination of knowledge will certainly fear the much more powerful communication technologies of personal computers, local area networks, telecommunication data bases, electronic bulletin boards, and all of the multifarious methods of instantaneous electronic communication.

Controlled societies like the Soviet Union are faced with a fundamental dilemma. If they provide their engineers and professionals in all disciplines with advanced workstation technology, they are opening the floodgates to free communication by methods far more powerful than the copiers they have traditionally banned.35 On the other hand, if they fail to do so, they will increasingly become ineffectual third-rate powers. Russia is already on a par with many third-world countries economically; it is a superpower only in the military sphere. If it continues to stagnate economically and fails to develop advanced computer technologies, this military power will dissipate as well.

Innovation requires more than just computer workstations and electronic communication technologies. It also requires an atmosphere of tolerance for new and unorthodox ideas, the encouragement of risk taking, and the ability to share ideas and knowledge. A society run entirely by government bureaucracies is not in a position to provide the incentives and environment needed for entrepreneurship and the rapid development of new skills and technologies.

From all appearances, some of the leaders of the Communist world have had similar thoughts. Mikhail Gorbachev's much-heralded campaigns of glasnost (openness) and perestroika (restructuring) have taken some initial steps to open communication and provide market incentives. Important steps have been taken in many of these societies toward achieving individual liberty. But these are only the first steps in what will need to be a long journey toward full transformation. Already the forces of reaction in China have taken a major step backward. What is not yet clear is whether these societies will succeed in moving deeply entrenched bureaucracies. What is clear, however, is that the pressures for such change will not go away.

Should these societies opt instead for a continuation of the controlled society, they will also find computers to be of value. Computers today play an indispensable role in legitimate law enforcement; there is no reason why they would not be equally useful in enforcing any form of state control. With the advanced vision and networking technologies of the early twenty-first century, the potential will exist to realize George Orwell's chilling vision in 1984.

Computer technology may lead to a flowering of individual expression, creativity, and communication or to an era of efficient and effective totalitarian control. It will all depend on who controls the technology. A hopeful note is that the nature of wealth and power in the age of intelligent machines will encourage the open society. Oppressive societies will find it hard to provide the economic incentives needed to pay for computers and their development.

Our Concept of Ourselves

We know what we are, but know not what we may be.
William Shakespeare

What will happen when all these artificially intelligent computers and robots leave us with nothing to do? What will be the point of living? Granted that human obsolescence is hardly an urgent problem. It will be a long, long time before computers can master politics, poetry, or any of the other things we really care about. But a "long time" is not forever; what happens when the computers have mastered politics and poetry? One can easily envision a future when the world is run quietly and efficiently by a set of exceedingly expert systems, in which machines produce goods, services, and wealth in abundance, and where everyone lives a life of luxury. It sounds idyllic-and utterly pointless.

But personally, I have to side with the optimists-for two reasons. The first stems from the simple observation that technology is made by people. Despite the strong impression that we are helpless in the face of, say, the spread of automobiles or the more mindless clerical applications of computers, the fact is that technology does not develop according to an immutable genetic code. It embodies human values and human choices. . . . My second reason for being optimistic stems from a simple question: What does it mean to be "obsolete"?
M. Mitchell Waldrop

As I discussed earlier, I believe that a computer will be able to defeat all human players at the game of chess within the next one or two decades. When this happens, I noted, we shall either think more of computers, less of ourselves, or less of chess. If history is a guide, we will probably think less of chess. Yet, as I hope this book has made clear, the world chess championship is but one of many accomplishments that will be attained by future machine intelligence. If our approach to coping with each achievement of machine intelligence is to downgrade the intellectual value of the accomplishment, we may have a lot of revision to do over the next half century.

Let us review some of the intellectual domains that machines are likely to master in the near future. Tasks that computers are now beginning to accomplish include accompanying musical performances, teaching us skills and areas of knowledge, diagnosing and recommending remedial treatment for classes of diseases, designing new bioengineered drugs, performing delicate medical operations, locating underground resources, and flying planes.

A more difficult task for a computer, one that we shall probably see during the first half of the next century, is reading a book, magazine, or newspaper and understanding its contents. This would require the computer to update its own knowledge bases to reflect the information it read. Such a system would be able to write a synopsis or a critique of its reading. Of comparable difficulty to this task is passing the Turing test, which requires a mastery of written language as well as extensive world knowledge.

Of at least comparable difficulty would be to watch a moving scene and understand what is going on. This task requires human-level vision and the ability to abstract knowledge from moving images. Add the ability for a robot to imitate humans with sufficient subtlety, and computers will be able to pass a more difficult form of the Turing test, in which communication takes place not through the written word transmitted by terminal but through live face-to-face interaction. For this achievement we shall have to wait at least until late in the next century.

It is clear that the strengths and weaknesses of machine intelligence today are quite different from those of human intelligence. The very first computers had prodigious and virtually unerring memories. In comparison, our memories are quite limited and of dubious reliability. Yet the early computers' ability to organize knowledge, recognize patterns, and render expert judgements-all elements of human intelligence-was essentially nonexistent. If we examine the trends that are already apparent, we can see that computers have progressed in two ways. They have gained even greater capacities in their areas of unique strength: today's computers are a million times more powerful in terms of both speed and memory than their forebears. At the same time, they have also moved toward the strengths of human intelligence.

[Cartoon by Alan Wallerstein]

They are nowhere near that goal yet, but they are certainly getting closer. Today computers can organize knowledge into networks of relationships, they are beginning to recognize patterns contained in visual, auditory, and other modalities, and they can render judgements that rival those of human experts. They have still not mastered the vast body of everyday knowledge we call common sense, and ironically, they remain particularly weak in the pattern recognition and fine motor skills that children and even animals handle so effortlessly.

Computer intelligence is not standing still. Radical new massively parallel computer architectures, together with emerging insights into the algorithms of vision, hearing, and physical skill acquisition, are propelling computers closer to human capabilities and also continuing to enhance their historical areas of superiority. While machine intelligence continues to evolve and move in our direction, human intelligence is moving very slowly, if at all. But since we have computers to serve us, human intelligence may not need to change.

Thousands of years ago, when the religious and philosophical traditions that still guide Western civilization were being formed, a human being was regarded as special. We were different from animals and certainly from material things. The ultimate intelligence in the universe, God, knew about us, and cared about us. Later on as we learned that the earth on which we stood was not the only celestial body in the world, we imagined that all the other entities in the sky revolved around us. In this world view we were special because of our central location. The sun, the moon, the stars, the comets, and other celestial objects all paid homage to us. Still later when we realized that the earth was just the third planet orbiting an unremarkable star located on an arm of an unremarkable galaxy, our view changed again. Then we were special because of our unique intelligence: We could derive knowledge from information. We could contemplate the relationships among the world's phenomena. We could create patterns with aesthetic qualities. We could appreciate those qualities. True, animals shared in this intelligence, but to a much lesser degree, which only reinforced the uniqueness of the level of intelligence we possessed.

Now we are entering an era in which this latest concept of our uniqueness will be challenged once again. To be sure, this challenge will not arrive on a single day. By the time one can seriously argue that computers possess intellectual capabilities comparable to the human species, it will have been at least a century from the invention of the electronic computer in the late 1940s. We should have time to adjust. Perhaps we shall return whence we started, with an appreciation of the inherent value of being human.
