Chapter 5: Thinking Machines
The world stands on the threshold of a second computer age. New technology now moving out of the laboratory is starting to change the computer from a fantastically fast calculating machine to a device that mimics human thought processes - giving machines the capability to reason, make judgments, and even learn. Already this artificial intelligence is performing tasks once thought to require human intelligence...
- BUSINESS WEEK
COMPUTERS have emerged from back rooms and laboratories to help with writing, calculation, and play in homes and offices. These machines do simple, repetitive tasks, but machines still in the laboratory do much more. Artificial intelligence researchers say that computers can be made smart, and fewer and fewer people disagree. To understand our future, we must see whether artificial intelligence is as impossible as flying to the Moon.
Thinking machines need not resemble human beings in shape, purpose, or mental skills. Indeed, some artificial intelligence systems will show few traits of the intelligent liberal arts graduate, but will instead serve only as powerful engines of design. Nonetheless, understanding how human minds evolved from mindless matter will shed light on how machines can be made to think. Minds, like other forms of order, evolved through variation and selection.
Minds act. One need not embrace Skinnerian behaviorism to see the importance of behavior, including the internal behavior called thinking. RNA replicating in test tubes shows how the idea of purpose can apply (as a kind of shorthand) to utterly mindless molecules. They lack nerves and muscles, but they have evolved to "behave" in ways that promote their replication. Variation and selection have shaped each molecule's simple behavior, which remains fixed for its whole "life."
Individual RNA molecules don't adapt, but bacteria do. Competition has favored bacteria that adapt to change, for example by adjusting their mix of digestive enzymes to suit the food available. Yet these mechanisms of adaptation are themselves fixed: food molecules trip genetic switches as cold air trips a thermostat.
Some bacteria also use a primitive form of trial-and-error guidance. Bacteria of this sort tend to swim in straight lines, and have just enough "memory" to know whether conditions are improving or worsening as they go. If they sense that conditions are improving, they keep going straight. If they sense that conditions are getting worse, they stop, tumble, and head off in a random, generally different, direction. They test directions, and favor the good directions by discarding the bad. And because this makes them wander toward concentrations of food molecules, they have prospered.
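This strategy is simple enough to capture in a few lines of code. The sketch below is a minimal illustration in Python, assuming an invented food-concentration field and arbitrary step sizes (real bacteria sense chemical gradients over time rather than a single remembered number, but the logic is the same):

```python
import math
import random

def food(x, y):
    # Invented concentration field: food is densest at the origin.
    return math.exp(-(x * x + y * y) / 100.0)

def run_and_tumble(steps=2000, step_len=0.5):
    x, y = 20.0, 20.0                     # start far from the food
    heading = random.uniform(0.0, 2.0 * math.pi)
    last = food(x, y)                     # one-sample "memory"
    for _ in range(steps):
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        now = food(x, y)
        if now < last:                    # conditions worsening:
            heading = random.uniform(0.0, 2.0 * math.pi)  # tumble
        last = now                        # otherwise keep going straight
    return x, y

print(run_and_tumble())  # typically ends near the origin, where food is dense
```

Nothing in the loop plans ahead; the drift toward food emerges purely from keeping good directions and discarding bad ones.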
Flatworms lack brains, yet show the faculty of true learning. They can learn to choose the correct path in a simple T-maze. They try turning left and turning right, and gradually select the behavior - or form the habit - which produces the better result. This is selection of behavior by its consequences, which behaviorist psychologists call "the Law of Effect." The evolving genes of worm species have produced worm individuals with evolving behavior.
Still, worms trained to run mazes (even Skinner's pigeons, trained to peck when a light flashes green) show no sign of the reflective thought we associate with mind. Organisms adapting only through the simple Law of Effect learn only by trial and error, by varying and selecting actual behavior - they don't think ahead and decide. Yet natural selection often favored organisms that could think, and thinking is not magical. As Daniel Dennett of Tufts University points out, evolved genes can equip animal brains with internal models of how the world works (somewhat like the models in computer-aided engineering systems). The animals can then "imagine" various actions and consequences, avoiding actions which "seem" dangerous and carrying out actions which "seem" safe and profitable. By testing ideas against these internal models, they can save the effort and risk of testing actions in the external world.
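The difference between the two strategies can be made concrete with a toy example (everything here is invented for illustration: the actions, the payoffs, and the assumption of a perfect internal model). A Law-of-Effect learner must suffer real consequences to rank its actions; an agent with an internal model can reject a dangerous action without ever performing it:

```python
import random

ACTIONS = ["graze here", "cross open ground", "stay in cover"]

def world(action):
    # Invented payoffs delivered by the external world.
    return {"graze here": 1.0, "cross open ground": -10.0, "stay in cover": 0.5}[action]

def trial_and_error(trials=30):
    # Law of Effect: vary and select actual behavior, paying real costs.
    worth = {a: 0.0 for a in ACTIONS}
    for _ in range(trials):
        a = random.choice(ACTIONS)
        worth[a] += 0.1 * (world(a) - worth[a])  # learn from real consequences
    return max(worth, key=worth.get)

def model_based(model):
    # "Imagine" each action against an internal model; act only on the best.
    return max(ACTIONS, key=model)

print(trial_and_error())   # learned, but had to risk the open ground
print(model_based(world))  # same choice, with no risk taken
```

The second agent's safety depends entirely on the accuracy of its model, which is why the models themselves must be able to evolve.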
Dennett further points out that the Law of Effect can reshape the models themselves. As genes can provide for evolving behavior, so they can provide for evolving mental models. Flexible organisms can vary their models and pay more attention to the versions that prove better guides to action. We all know what it is to try things, and learn which work. Models need not be instinctive; they can evolve in the course of a single life.
Speechless animals, however, seldom pass on their new insights. These vanish with the brain that first produced them, because learned mental models are not stamped into the genes. Yet even speechless animals can imitate each other, giving rise to memes and cultures. A female monkey in Japan invented a way to use water to separate grain from sand; others quickly learned to do the same. In human cultures, with their language and pictures, valuable new models of how the world works can outlast their creators and spread worldwide.
On a still higher level, a mind (and "mind" is by now a fitting name) can hold evolving standards for judging whether the parts of a model - the ideas of a worldview - seem reliable enough to guide action. The mind thus selects its own contents, including its selection rules. The rules of judgment that filter the contents of science evolved in this way.
As behavior, models, and standards for knowledge evolve, so can goals. That which brings good, as judged by some more basic standard, eventually begins to seem good; it then becomes a goal in itself. Honesty pays, and becomes a valued principle of action. As thought and mental models guide action and further thought, we adopt clear thinking and accurate models as goals in themselves. Curiosity grows, and with it a love of knowledge for its own sake. The evolution of goals thus brings forth both science and ethics. As Charles Darwin wrote, "the highest possible stage in moral culture is when we recognize that we ought to control our thoughts." We achieve this as well by variation and selection, by concentrating on thoughts of value and letting others slip from attention.
Marvin Minsky of the MIT Artificial Intelligence Laboratory views the mind as a sort of society, an evolving system of communicating, cooperating, competing agencies, each made up of yet simpler agents. He describes thinking and action in terms of the activity of these agencies. Some agencies can do little more than guide a hand to grasp a cup; others (vastly more elaborate) guide the speech system as it chooses words in a sticky situation. We aren't aware of directing our fingers to wrap around a cup just so. We delegate such tasks to competent agents and seldom notice unless they slip. We all feel conflicting impulses and speak unintended words; these are symptoms of discord among the agents of the mind. Our awareness of this is part of the self-regulating process by which our most general agencies manage the rest.
Memes may be seen as agents in the mind that are formed by teaching and imitation. To feel that two ideas conflict, you must have embodied both of them as agents in your mind - though one may be old, strong, and supported by allies, and the other a fresh idea-agent that may not survive its first battle. Because of our superficial self-awareness, we often wonder where an idea in our heads came from. Some people imagine that these thoughts and feelings come directly from agencies outside their own minds; they incline toward a belief in haunted heads.
In ancient Rome, people believed in "genii," in good and evil spirits attending a person from cradle to grave, bringing good and ill luck. They attributed outstanding success to a special "genius." And even now, people who fail to see how natural processes create novelty see "genius" as a form of magic. But in fact, evolving genes have made minds that expand their knowledge by varying idea patterns and selecting among them. With quick variation and effective selection, guided by knowledge borrowed from others, why shouldn't such minds show what we call genius?
Seeing intelligence as a natural process makes the idea of intelligent machines less startling. It also suggests how they might work. One dictionary definition of "machine" is "Any system or device, such as an electronic computer, that performs or assists in the performance of a human task." But just how many human tasks will machines be able to perform? Calculation was once a mental skill beyond machines, the province of the intelligent and educated. Today, no one thinks of calling a pocket calculator an artificial intelligence; calculation now seems a "merely" mechanical procedure.
Still, the idea of building ordinary computers once was shocking. By the mid-1800s, though, Charles Babbage had built mechanical calculators and part of a programmable mechanical computer; however, he ran into difficulties of finance and construction. One Dr. Young helped not at all: he argued that it would be cheaper to invest the money and use the interest to pay human calculators. Nor did the British Astronomer Royal, Sir George Airy - an entry in his diary states that "On September 15th Mr. Goulburn ... asked my opinion on the utility of Babbage's calculating machine... I replied, entering fully into the matter, and giving my opinion that it was worthless."
Babbage's machine was ahead of its time - meaning that in building it, machinists were forced to advance the art of making precision parts. And in fact it would not have greatly exceeded the speed of a skilled human calculator - but it would have been more reliable and easier to improve.
The story of computers and artificial intelligence (known as AI) resembles that of flight in air and space. Until recently people dismissed both ideas as impossible - commonly meaning that they couldn't see how to do them, or would be upset if they could. And so far, AI has had no simple, clinching demonstration, no equivalent of a working airplane or a landing on the Moon. It has come a long way, but people keep changing their definitions of intelligence.
Press reports of "giant electronic brains" aside, few people called the first computers intelligent. Indeed, the very name "computer" suggests a mere arithmetic machine. Yet in 1956, at Dartmouth, during the world's first conference on artificial intelligence, researchers Allen Newell and Herbert Simon unveiled Logic Theorist, a program that proved theorems in symbolic logic. In later years computer programs were playing chess and helping chemists determine molecular structures. Two medical programs, CASNET and MYCIN (the first dealing with internal medicine, the other with the diagnosis and treatment of infections), have performed impressively. According to the Handbook of Artificial Intelligence, they have been "rated, in experimental evaluations, as performing at human-expert levels in their respective domains." A program called PROSPECTOR has located, in Washington state, a molybdenum deposit worth millions of dollars.
These so-called "expert systems" succeed only within strictly limited areas of competence, but they would have amazed the computer programmers of the early 1950s. Today, however, few people consider them to be real artificial intelligence: AI has been a moving target. The passage from Business Week quoted earlier only shows that computers can now be programmed with enough knowledge, and perform fancy enough tricks, that some people feel comfortable calling them intelligent. Years of seeing fictional robots and talking computers on television have at least made the idea of AI familiar.
The chief reason for declaring AI impossible has always been the notion that "machines" are intrinsically stupid, an idea that is now beginning to fade. Past machines have indeed been gross, clumsy things that did simple, brute-force work. But computers handle information, follow complex instructions, and can be instructed to change their own instructions. They can experiment and learn. They contain not gears and grease but traceries of wire and evanescent patterns of electrical energy. As Douglas Hofstadter urges (through a character in a dialog about AI), "Why don't you let the word 'machine' conjure up images of patterns of dancing light rather than of giant steam shovels?"
Cocktail-party critics confronted with the idea of artificial intelligence often point to the stupidity of present computers, as if this proved something about the future. (A future machine may wonder whether such critics exhibited genuine thought.) Their objection is irrelevant - steam locomotives didn't fly, though they demonstrated mechanical principles later used in airplane engines. Likewise, the creeping worms of an eon ago showed no noticeable intelligence, yet our brains use neurons much like theirs.
Casual critics also avoid thinking seriously about AI by declaring that we can't possibly build machines smarter than ourselves. They forget what history shows. Our distant, speechless ancestors managed to bring forth entities of greater intelligence through genetic evolution without even thinking about it. But we are thinking about it, and the memes of technology evolve far more swiftly than the genes of biology. We can surely make machines with a more human-like ability to learn and organize knowledge.
There seems to be only one idea that could argue for the impossibility of making thought patterns dance in new forms of matter. This is the idea of mental materialism - the concept that mind is a special substance, a magical thinking-stuff somehow beyond imitation, duplication, or technological use.
Psychobiologists see no evidence for such a substance, and find no need for mental materialism to explain the mind. Because the complexity of the brain lies beyond the full grasp of human understanding, it seems complex enough to embody a mind. Indeed, if a single person could fully understand a brain, this would make the brain less complex than that person's mind. If all Earth's billions of people could cooperate in simply watching the activity of one human brain, each person would have to monitor tens of thousands of active synapses simultaneously - clearly an impossible task. For a person to try to understand the flickering patterns of the brain as a whole would be five billion times more absurd. Since our brain's mechanism so massively overwhelms our mind's ability to grasp it, that mechanism seems complex enough to embody the mind itself.
Turing's Target
In a 1950 paper on machine intelligence, British mathematician Alan Turing wrote: "I believe that by the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." But this will depend on what we call thinking. Some say that only people can think, and that computers cannot be people; they then sit back and look smug.
But in his paper, Turing asked how we judge human intelligence, and suggested that we commonly judge people by the quality of their conversation. He then proposed what he called the imitation game - which everyone else now calls the Turing test. Imagine that you are in a room, able to communicate through a terminal with a person and a computer in two other rooms. You type messages; both the person and the computer can reply. Each tries to act human and intelligent. After a prolonged keyboard "conversation" with them - perhaps touching on literature, art, the weather, and how a mouth tastes in the morning - it might be that you could not tell which was the person and which the machine. If a machine could converse this well on a regular basis, then, Turing suggested, we should consider it genuinely intelligent. Further, we would have to acknowledge that it knew a great deal about human beings.
For most practical purposes, we need not ask "Can a machine have self-awareness - that is, consciousness?" Indeed, critics who declare that machines cannot be conscious never seem able to say quite what they mean by the term. Self-awareness evolved to guide thought and action, not merely to ornament our humanity. We must be aware of other people, and of their abilities and inclinations, to make plans that involve them. Likewise we must be aware of ourselves, and of our own abilities and inclinations, to make plans about ourselves. There is no special mystery in self-awareness. What we call the self reacts to impressions from the rest of the mind, orchestrating some of its activities; this makes it no more (and no less) than a special part of the interacting patterns of thought. The idea that the self is a pattern in a special mind substance (distinct from the mind substance of the brain) would explain nothing about awareness.
A machine attempting to pass the Turing test would, of course, claim to have self-awareness. Hard-core biochauvinists would simply say that it was lying or confused. So long as they refuse to say what they mean by consciousness, they can never be proved wrong. Nonetheless, whether called conscious or not, intelligent machines will still act intelligent, and it is their actions that will affect us. Perhaps they will someday shame the biochauvinists into silence by impassioned argument, aided by a brilliant public-relations campaign.
No machine can now pass the Turing test, and none is likely to do so soon. It seems wise to ask whether there is a good reason even to try: we may gain more from AI research guided by other goals.
Let us distinguish two sorts of artificial intelligence, though a system could show both kinds. The first is technical AI, adapted to deal with the physical world. Efforts in this field lead toward automated engineering and scientific inquiry. The second is social AI, adapted to deal with human minds. Efforts in this field lead toward machines able to pass the Turing test.
Researchers working on social AI systems will learn much about the human mind along the way, and their systems will doubtless have great practical value, since we all can profit from intelligent help and advice. But automated engineering based on technical AI will have a greater impact on the technology race, including the race toward molecular technology. And an advanced automated engineering system may be easier to develop than a Turing-test passer, which must not only possess knowledge and intelligence, but must mimic human knowledge and human intelligence - a special, more difficult challenge.
As Turing asked, "May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Although some writers and politicians may refuse to recognize machine intelligence until they are confronted with a talkative machine able to pass the Turing test, many engineers will recognize intelligence in other forms. We are well on the way to automated engineering. Knowledge engineers have marketed expert systems that help people to deal with practical problems. Programmers have created computer-aided design systems that embody knowledge about shapes and motion, stress and strain, electronic circuits, heat flow, and how machine tools shape metal. Designers use these systems to augment their mental models, speeding the evolution of yet unbuilt designs. Together, designers and computers form intelligent, semiartificial systems.
Engineers can use a wide variety of computer systems to aid their work. At one end of the spectrum, they use computer screens simply as drawing boards. Farther along, they use systems able to describe parts in three dimensions and calculate their response to heat, stress, current, and so on. Some systems also know about computer-controlled manufacturing equipment, letting engineers make simulated tests of instructions that will later direct computer-controlled machines to make real parts. But the far end of the spectrum of systems involves using computers not just to record and test designs, but to generate them.
Programmers have developed their most impressive tools for use in the computer business itself. Software for chip design is an example. Integrated circuit chips now contain many thousands of transistors and wires. Designers once had to work for many months to design a circuit to do a given job, and to lay out its many parts across the surface of the chip. Today they can often delegate this task to a so-called "silicon compiler." Given a specification of a chip's function, these software systems can produce a detailed design - ready for manufacture - with little or no human help.
All these systems rely entirely on human knowledge, laboriously gathered and coded. The most flexible automated design systems today can fiddle with a proposed design to seek improvements, but they learn nothing applicable to the next design. But EURISKO is different. Developed by Professor Douglas Lenat and others at Stanford University, EURISKO is designed to explore new areas of knowledge. It is guided by heuristics - pieces of knowledge that suggest plausible actions to follow or implausible ones to avoid; in effect, various rules of thumb. It uses heuristics to suggest topics to work on, and further heuristics to suggest what approaches to try and how to judge the results. Other heuristics look for patterns in results, propose new heuristics, and rate the value of both new and old heuristics. In this way EURISKO evolves better behaviors, better internal models, and better rules for selecting among internal models. Lenat himself describes the variation and selection of heuristics and concepts in the system in terms of "mutation" and "selection," and suggests a social, cultural metaphor for understanding their interaction.
Since heuristics evolve and compete in EURISKO, it makes sense to expect parasites to appear - as indeed many have. One machine-generated heuristic, for example, rose to the highest possible value rating by claiming to have been a co-discoverer of every valuable new conjecture. Professor Lenat has worked closely with EURISKO, improving its mental immune system by giving it heuristics for shedding parasites and avoiding stupid lines of reasoning.
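A toy rendering of the credit-assignment problem may help (the worth-update rule and the audit below are guesses for illustration, not Lenat's actual mechanism):

```python
class Heuristic:
    """A rule of thumb whose worth rating evolves with its track record."""
    def __init__(self, name):
        self.name = name
        self.worth = 0.5
        self.claims = 0       # discoveries it claimed a share of
        self.verified = 0     # claims that an audit actually confirmed

    def claim_credit(self, value, confirmed):
        # The naive rule a parasite can exploit: claiming credit raises worth.
        self.claims += 1
        self.worth = min(1.0, self.worth + 0.1 * value)
        if confirmed:
            self.verified += 1

def immune_check(heuristics):
    # The added guard: demote any heuristic whose claims are never confirmed.
    for h in heuristics:
        if h.claims >= 5 and h.verified == 0:
            h.worth = 0.0     # shed the parasite

parasite = Heuristic("co-discovered every conjecture")
for _ in range(6):
    parasite.claim_credit(value=1.0, confirmed=False)  # rides to the top...
immune_check([parasite])
print(parasite.worth)  # ...until the immune system zeroes its rating
```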
EURISKO has been used to explore elementary mathematics, programming, biological evolution, games, three-dimensional integrated circuit design, oil spill cleanup, plumbing, and (of course) heuristics. In some fields it has startled its designers with novel ideas, including new electronic devices for the emerging technology of three-dimensional integrated circuits.
The results of a tournament illustrate the power of a human/AI team. Traveller TCS is a futuristic naval war game, played in accordance with two hundred pages of rules specifying design, cost, and performance constraints for the fleet ("TCS" stands for "Trillion Credit Squadron"). Professor Lenat gave EURISKO these rules, a set of starting heuristics, and a program to simulate a battle between two fleets. He reports that "it then designed fleet after fleet, using the simulator as the 'natural selection' mechanism as it 'evolved' better and better fleet designs." The program would run all night, designing, testing, and drawing lessons from the results. In the morning Lenat would cull the designs and help it along. He credits about 60 percent of the results to himself, and about 40 percent to EURISKO.
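In outline, the overnight loop Lenat describes is an evolutionary algorithm with the battle simulator serving as the fitness test. The following schematic sketch uses a made-up fleet encoding, mutation rule, and simulator (the real design space ran to two hundred pages of rules):

```python
import random

def random_fleet():
    # Made-up encoding: a fleet is a budget split among armor, guns, and hulls.
    return {"armor": random.random(), "guns": random.random(),
            "ships": random.randint(1, 100)}

def mutate(fleet):
    # Vary one trait at random to produce a new candidate design.
    new = dict(fleet)
    key = random.choice(list(new))
    new[key] = random.randint(1, 100) if key == "ships" else random.random()
    return new

def battle(a, b):
    # Stand-in simulator: True if fleet a beats fleet b.
    score = lambda f: f["ships"] * (0.6 * f["guns"] + 0.4 * f["armor"])
    return score(a) > score(b)

def evolve(generations=1000):
    champion = random_fleet()
    for _ in range(generations):
        challenger = mutate(champion)     # design a new fleet...
        if battle(challenger, champion):  # ...and let the simulator act
            champion = challenger         # as the "natural selection"
    return champion

print(evolve())  # after many simulated battles, a hard-to-beat design
```

EURISKO's real loop went further, mutating its own fleet-designing heuristics as well as the fleets themselves.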
Lenat and EURISKO entered the 1981 national Traveller TCS tournament with a strange-looking fleet. The other contestants laughed at it, then lost to it. The Lenat/EURISKO fleet won every round, emerging as the national champion. As Lenat notes, "This win is made more significant by the fact that no one connected with the program had ever played this game before the tournament, or seen it played, and there were no practice rounds."
In 1982 the competition sponsors changed the rules. Lenat and EURISKO entered a very different fleet. Other contestants again laughed at it, then lost. Lenat and EURISKO again won the national championship.
In 1983 the competition sponsors told Lenat that if he entered and won again, the competition would be canceled. Lenat bowed out.
EURISKO and other AI programs show that computers need not be limited to boring, repetitive work if they are given the right sort of programming. They can explore possibilities and turn up novel ideas that surprise their creators. EURISKO has shortcomings, yet it points the way to a style of partnership in which an AI system and a human expert both contribute knowledge and creativity to a design process.
In coming years, similar systems will transform engineering. Engineers will work in a creative partnership with their machines, using software derived from current computer-aided design systems for doing simulations, and using evolving, EURISKO-like systems to suggest designs to simulate. The engineer will sit at a screen to type in goals for the design process and draw sketches of proposed designs. The system will respond by refining the designs, testing them, and displaying proposed alternatives, with explanations, graphs, and diagrams. The engineer will then make further suggestions and changes, or respond with a new task, until an entire system of hardware has been designed and simulated.
As such automated engineering systems improve, they will do more and more of the work faster and faster. More and more often, the engineer will simply propose goals and then sort among good solutions proposed by the machine. Less and less often will the engineer have to select parts, materials, and configurations. Gradually engineers will be able to propose more general goals and expect good solutions to appear as a matter of course. Just as EURISKO ran for hours evolving fleets with a Traveler TCS simulator, automated engineering systems will someday work steadily to evolve passenger jets having maximum safety and economy - or to evolve military jets and missiles best able to control the skies.
Just as EURISKO has invented electronic devices, future automated engineering systems will invent molecular machines and molecular electronic devices, aided by software for molecular simulations. Such advances in automated engineering will magnify the design-ahead phenomenon described earlier. Thus automated engineering will not only speed the assembler breakthrough, it will increase the leap that follows.
Eventually software systems will be able to create bold new designs without human help. Will most people call such systems intelligent? It doesn't really matter.
The AI Race
Companies and governments worldwide support AI work because it promises commercial and military advantages. The United States has many university artificial intelligence laboratories and a host of new companies with names like Machine Intelligence Corporation, Thinking Machines Corporation, Teknowledge, and Cognitive Systems Incorporated. In October of 1981 the Japanese Ministry of Trade and Industry announced a ten-year, $850 million program to develop advanced AI hardware and software. With this, Japanese researchers plan to develop systems able to perform a billion logical inferences per second. In the fall of 1984 the Moscow Academy of Science announced a similar, five-year, $100 million effort. In October of 1983 the U.S. Department of Defense announced a five-year, $600 million Strategic Computing Program; they seek machines able to see, reason, understand speech, and help manage battles. As Paul Wallich reports in the IEEE Spectrum, "Artificial intelligence is considered by most people to be a cornerstone of next-generation computer technology; all the efforts in different countries accord it a prominent place in their list of goals."
Advanced AI will emerge step by step, and each step will pay off in knowledge and increased ability. As with molecular technology (and many other technologies), attempts to stop advances in one city, county, or country will at most let others take the lead. A miraculous success in stopping visible AI work everywhere would at most delay it and, as computers grow cheaper, let it mature in secret, beyond public scrutiny. Only a world state of immense power and stability could truly stop AI research everywhere and forever - a "solution" of bloodcurdling danger, in light of past abuses of merely national power. Advanced AI seems inevitable. If we hope to form a realistic view of the future, we cannot ignore it.
In a sense, artificial intelligence will be the ultimate tool because it will help us build all possible tools. Advanced AI systems could maneuver people out of existence, or they could help us build a new and better world. Aggressors could use them for conquest, or foresighted defenders could use them to stabilize peace. They could even help us control AI itself. The hand that rocks the AI cradle may well rule the world.
As with assemblers, we will need foresight and careful strategy to use this new technology safely and well. The issues are complex and interwoven with everything from the details of molecular technology to employment and the economy to the philosophical basis of human rights. The most basic issues, though, involve what AI can do.
Are We Smart Enough?
Despite the example of the evolution of human beings, critics may still argue that our limited intelligence may somehow prevent us from programming genuinely intelligent machines. This argument seems weak, amounting to little more than a claim that because the critic can't see how to succeed, no one else will ever do better. Still, few would deny that programming computers to equal human abilities will indeed require fresh insights into human psychology. Though the programming path to AI seems open, our knowledge does not justify the sort of solid confidence that thoughtful engineers had (decades before Sputnik) in being able to reach the Moon with rockets, or that we have today in being able to build assemblers through protein design. Programming genuine artificial intelligence, though a form of engineering, will require new science. This places it beyond firm projection.
We need accurate foresight, though. People clinging to comforting doubts about AI seem likely to suffer from radically flawed images of the future. Fortunately, automated engineering escapes some of the burden of biochauvinist prejudice. Most people are less upset by the idea of machines designing machines than they are by the idea of true general-purpose AI systems. Besides, automated engineering has been shown to work; what remains is to extend it. Still, if more general systems are likely to emerge, we would be foolish to omit them from our calculations. Is there a way to sidestep the question of our ability to design intelligent programs?
In the 1950s, many AI researchers concentrated on simulating brain functions by simulating neurons. But researchers working on programs based on words and symbols made swifter progress, and the focus of AI work shifted accordingly. Nonetheless, the basic idea of neural simulation remains sound, and molecular technology will make it more practical. What is more, this approach seems guaranteed to work because it requires no fundamental new insights into the nature of thought.
Eventually, neurobiologists will use virus-sized molecular machines to study the structure and function of the brain, cell by cell and molecule by molecule where need be. Although AI researchers may gain useful insights about the organization of thought from the resulting advances in brain science, neural simulation can succeed without such insights. Compilers translate computer programs from one language to another without understanding how they work. Photocopiers transfer patterns of words without reading them. Likewise, researchers will be able to copy the neural patterns of the brain into another medium without understanding their higher-level organization.
After learning how neurons work, engineers will be able to design and build analogous devices based on advanced nanoelectronics and nanomachines. These will interact like neurons, but will work faster. Neurons, though complex, do seem simple enough for a mind to understand and an engineer to imitate. Indeed, neurobiologists have learned much about their structure and function, even without molecular-scale machinery to probe their workings.
With this knowledge, engineers will be able to build fast, capable AI systems, even without understanding the brain and without clever programming. They need only study the brain's neural structure and join artificial neurons to form the same functional pattern. If they make all the parts right - including the way they mesh to form the whole - then the whole, too, will be right. "Neural" activity will flow in the patterns we call thought, but faster, because all the parts will work faster.
Accelerating the Technology Race
Advanced AI systems seem possible and inevitable, but what effect will they have? No one can answer this in full, but one effect of automated engineering is clear: it will speed our advance toward the limits of the possible.
To understand our prospects, we need some idea of how fast advanced AI systems will think. Modern computers have only a tiny fraction of the brain's complexity, yet they can already run programs imitating significant aspects of human behavior. They differ totally from the brain in their basic style of operation, though, so direct physical comparison is almost useless. The brain does a huge number of things at once, but fairly slowly; most modern computers do only one thing at a time, but with blinding speed.
Still, one can imagine AI hardware built to imitate a brain not only in function, but in structure. This might result from a neural-simulation approach, or from the evolution of AI programs to run on hardware with a brainlike style of organization. Either way, we can use analogies with the human brain to estimate a minimum speed for advanced assembler-built AI systems.
Neural synapses respond to signals in thousandths of a second; experimental electronic switches respond a hundred million times faster (and nanoelectronic switches will be faster yet). Neural signals travel at under one hundred meters per second; electronic signals travel a million times faster. This crude comparison of speeds suggests that brainlike electronic devices will work about a million times faster than brains made of neurons (at a rate limited by the speed of electronic signals).
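In round numbers (taking a millisecond for a synapse and treating electronic signal speed as a sizable fraction of the speed of light, both order-of-magnitude assumptions), the comparison runs:

```latex
t_{\text{synapse}} \approx 10^{-3}\,\text{s}, \quad
t_{\text{switch}} \approx 10^{-11}\,\text{s}
\;\Rightarrow\; \text{switching ratio} \approx 10^{8}

v_{\text{neural}} \approx 10^{2}\,\text{m/s}, \quad
v_{\text{electronic}} \approx 10^{8}\,\text{m/s}
\;\Rightarrow\; \text{signal ratio} \approx 10^{6}
```

The smaller of the two ratios sets the pace of the system as a whole, hence the estimate of roughly a millionfold speedup.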
This estimate is crude, of course. A neural synapse is more complex than a switch; it can change its response to signals by changing its structure. Over time, synapses even form and disappear. These changes in the fibers and connections of the brain embody the long-term mental changes we call learning. They have stirred Professor Robert Jastrow of Dartmouth to describe the brain as an enchanted loom, weaving and reweaving its neural patterns throughout life.
To imagine a brainlike device with comparable flexibility, picture its electronic circuits as surrounded by mechanical nanocomputers and assemblers, with one per synapse-equivalent "switch." Just as the molecular machinery of a synapse responds to patterns of neural activity by modifying the synapse's structure, so the nanocomputers will respond to patterns of activity by directing the nanomachinery to modify the switch's structure. With the right programming, and with communication among the nanocomputers to simulate chemical signals, such a device should behave almost exactly like a brain.
Despite its complexity, the device will be compact. Nanocomputers will be smaller than synapses, and assembler-built wires will be thinner than the brain's axons and dendrites. Thin wires and small switches will make for compact circuits, and compact circuits will speed the flow of electronic patterns by shortening the distances signals must travel. It seems that a structure similar to the brain will fit in less than a cubic centimeter (as discussed in the Notes). Shorter signal paths will then join with faster transmission to yield a device over ten million times faster than a human brain.
Only cooling problems might limit such machines to slower average speeds. Imagine a conservative design, a millionfold faster than a brain and dissipating a millionfold more heat. The system consists of an assembler-built block of sapphire the size of a coffee mug, honeycombed with circuit-lined cooling channels. A high-pressure water pipe of equal diameter is bolted to its top, forcing cooling water through the channels to a similar drainpipe leaving the bottom. Hefty power cables and bundles of optical-fiber data channels trail from its sides.
The cables supply fifteen megawatts of electric power. The drainpipe carries the resulting heat away in a three-ton-per-minute flow of boiling-hot water. The optical fiber bundles carry as much data as a million television channels. They bear communications with other AI systems, with engineering simulators, and with assembler systems that build designs for final testing. Every ten seconds, the system gobbles almost two kilowatt-days of electric energy (now worth about a dollar). Every ten seconds, the system completes as much design work as a human engineer working eight hours a day for a year (now worth tens of thousands of dollars). In an hour, it completes the work of centuries. For all its activity, the system works in a silence broken only by the rush of cooling water.
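These figures hang together; a quick check, using a kilowatt-day of 86.4 megajoules and a heat capacity for water of about 4.2 kilojoules per kilogram-kelvin:

```latex
E = 15\,\text{MW} \times 10\,\text{s} = 150\,\text{MJ}
  = \frac{150\,\text{MJ}}{86.4\,\text{MJ/kW-day}} \approx 1.7\,\text{kW-days}

\dot{m} = \frac{3000\,\text{kg}}{60\,\text{s}} = 50\,\text{kg/s}, \quad
\Delta T = \frac{15\,\text{MW}}{50\,\text{kg/s} \times 4.2\,\text{kJ/(kg·K)}}
         \approx 70\,\text{K}
```

Water entering near room temperature would thus leave near boiling, as described.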
This addresses the question of the sheer speed of thought, but what of its complexity? AI development seems unlikely to pause at the complexity of a single human mind. As John McCarthy of Stanford's AI lab points out, if we can place the equivalent of one human mind in a metal skull, we can place the equivalent of ten thousand cooperating minds in a building. (And a large modern power plant could supply power enough for each to think at least ten thousand times as fast as a person.) To the idea of fast engineering intelligences, add the idea of fast engineering teams.
Engineering AI systems will be slowed in their work by the need to perform experiments, but not so much as one might expect. Engineers today must perform many experiments because bulk technology is unruly. Who can say in advance exactly how a new alloy will behave when forged and then bent ten million times? Tiny cracks weaken metal, but details of processing determine their nature and effects.
Because assemblers will make objects to precise specifications, the unpredictabilities of bulk technology will be avoided. Designers (whether human or AI) will then experiment only when experimentation is faster or cheaper than calculation, or (more rarely) when basic knowledge is lacking.
AI systems with access to nanomachines will perform many experiments rapidly. They will design apparatus in seconds, and replicating assemblers will build it without the many delays (ordering special parts, shipping them, and so on) that plague projects today. Experimental apparatus on the scale of an assembler, nanocomputer, or living cell will take only minutes to build, and nanomanipulators will perform a million motions per second. Running a million ordinary experiments at once will be easy. Thus, despite delays for experimentation, automated engineering systems will move technology forward with stunning speed.
From past to future, then, the likely pattern of advancing ability looks something like this. Across eons of time, life moved forward in a long, slow advance, paced by genetic evolution. Minds with language picked up the pace, accelerated by the flexibility of memes. The invention of the methods of science and technology further accelerated advances by forcing memes to evolve faster. Growing wealth, education, and population - and better physical and intellectual tools - have continued this accelerating trend across our century.
The automation of engineering will speed the pace still more. Computer-aided design will improve, helping human engineers to generate and test ideas ever more quickly. Successors to EURISKO will shrink design times by suggesting designs and filling in the details of human innovations. At some point, full-fledged automated engineering systems will pull ahead on their own.
In parallel, molecular technology will develop and mature, aided by advances in automated engineering. Then assembler-built AI systems will bring still swifter automated engineering, evolving technological ideas at a pace set by systems a million times faster than a human brain. The rate of technological advance will then quicken to a great upward leap: in a brief time, many areas of technology will advance to the limits set by natural law. In those fields, advance will then halt on a lofty plateau of achievement.
This transformation is a dizzying prospect. Beyond it, if we survive, lies a world with replicating assemblers, able to make whatever they are told to make, without need for human labor. Beyond it, if we survive, lies a world with automated engineering systems able to direct assemblers to make devices near the limits of the possible, near the final limits of technical perfection.
Eventually, some AI systems will have both great technical ability and the social ability needed to understand human speech and wishes. If given charge of energy, materials, and assemblers, such a system might aptly be called a "genie machine." What you ask for, it will produce. Arabian legend and universal common sense suggest that we take the dangers of such engines of creation very seriously indeed.
Decisive breakthroughs in technical and social AI will be years in arriving. As Marvin Minsky has said, "The modestly intelligent machines of the near future promise only to bring us the wealth and comfort of tireless, obedient, and inexpensive servants." Most systems now called "AI" do not think or learn; they are only a crude distillate of the skills of experts, preserved, packaged, and distributed for consultation.
But genuine AI will arrive. To leave it out of our expectations would be to live in a fantasy world. To expect AI is neither optimistic nor pessimistic: as always, the researcher's optimism is the technophobe's pessimism. If we do not prepare for their arrival, social AI systems could pose a grave threat: consider the damage done by the merely human intelligence of terrorists and demagogues. Likewise, technical AI systems could destabilize the world military balance, giving one side a sudden, massive lead. With proper preparation, however, artificial intelligence could help us build a future that works - for the Earth, for people, and for the advancement of intelligence in the universe. Chapter 12 will suggest an approach, as part of the more general issue of managing the transformation that assemblers and AI will bring.
Why discuss the dangers today? Because it is not too soon to start developing institutions able to deal with such questions. Technical AI is emerging today, and its every advance will speed the technology race. Artificial intelligence is but one of many powerful technologies we must learn to manage, each adding to a complex mixture of threats and opportunities.