Robots, Re-Evolving Mind
We are re-evolving artificial minds at roughly ten million times the speed of the original biological evolution, with robot complexity growing exponentially. Currently, a guppylike thousand MIPS and hundreds of megabytes of memory enable our robots to build dense, almost photorealistic 3D maps of their surroundings and navigate intelligently. Within three decades, fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize--perhaps replace us.
Carnegie Mellon University Robotics Institute
Originally published December 2000 as an academic paper at Carnegie Mellon University. Published on KurzweilAI.net March 27, 2001.
Computers have permeated everyday life and are worming their way into our gadgets, dwellings, clothes, even bodies. But if pervasive computing soon automates most of our informational needs, it will leave untouched a vaster number of essential physical tasks. Construction, protection, repair, cleaning, transport and so forth will remain in human hands.
Robot inventors in home, university and industrial laboratories have tinkered with the problem for most of the century. While mechanical bodies adequate for manual work can be built, artificial minds for autonomous servants have been frustratingly out of reach, despite the arrival of powerful computers.
The first electronic computers in the 1950s did the work of thousands of clerks. But when those superhuman behemoths were programmed to reason, they merely matched single human beginners in razor-narrow tasks. Programmed to control robot eyes and arms, they took pathetic hours to unreliably find and grasp a few wooden blocks. The situation did not improve substantially for decades.
But things are changing. Robot tasks wildly impossible in the 1970s and 1980s began to work experimentally in the 1990s. Robots mapped and navigated unfamiliar office suites, and robot vehicles drove themselves, mostly unaided, across entire countries. Vision systems locate textured objects and track and analyze faces in real time. Personal computers recognize text and speech. Why suddenly now?

Trick of Perspective

The short answer is that, after decades at about one MIPS (million instructions, or calculations, per second), computer power available to research robots shot through 10, 100 and now 1,000 MIPS, starting about 1990 (Figure 1).
This deserves explanation because the cost-effectiveness of computing rose steadily all those decades (Figure 2).
In 1960 computers were a new and mysterious factor in the cold war, and even outlandish possibilities like Artificial Intelligence warranted significant investment. In the early 1960s AI programs ran on the era's supercomputers, similar to those used for physical simulations by weapons physicists and meteorologists. By the 1970s the promise of AI had faded, and the effort limped for a decade on old hardware. In contrast, weapons labs upgraded repeatedly to new supercomputers.
In the 1980s, departmental computers gave way to smaller project computers, then to individual workstations and personal computers. Prices fell at each transition, but power per machine stayed about the same. Only after 1990 did prices stabilize and power grow.
It was a common opinion in the AI labs that, with the right program, readily available computers could encompass any human skill. The position seemed obvious in the 1950s, when computers did the work of thousands, and defensible in the 1970s, as inference and game-playing programs performed at modest human levels.
The upstart subfields of computer vision and robotics, however, had a different impression. On one-MIPS computers, single images crammed memory, simply scanning them consumed seconds, and serious image analysis took hours. Human vision performed far more elaborate functions many times a second.
It's easy to explain the discrepancy in hindsight. Computers do arithmetic using as few gates and switching operations as possible. Human calculation, by contrast, is a laboriously learned, ponderous, awkward, unnatural behavior. Tens of billions of neurons in our vision and motor systems strain to analogize and process a digit a second. If a mad computer designer with a future surgical tool rewired our brain into 10 billion arithmetic circuits, each doing 100 calculations a second, we'd outcompute early computers a millionfold, and the illusion of computer power would be exposed. Robotics, in fact, was such an exposé.
Though spectacular underachievers at the wacky new stunt of longhand calculation, we are veteran overachievers at perception and navigation. Our ancestors, across hundreds of millions of years, prevailed by being frontrunners in the competition to find food, escape danger and protect offspring. Existing robot-controlling computers are far too feeble to match the resulting prodigious perceptual inheritance. But by how much?
The vertebrate retina is understood well enough to be a kind of Rosetta stone roughly relating nervous tissue to computation. Besides light detectors, the retina contains edge- and motion-detecting circuitry, packed into a little tenth-millimeter-thick, two-centimeter-wide patch that reports on a million image regions in parallel about ten times a second via the optic nerve. In robot vision, similar detections, well coded, each require the execution of a few hundred computer instructions, making the retina's 10 million detections per second worth over 1,000 MIPS.
In a risky extrapolation that must serve until something better emerges, it would take about 50,000 MIPS to functionally imitate a rat-brain's gram of neural tissue, and almost 100 million MIPS (or 100 trillion instructions per second) to emulate the 1,500 gram human brain. PCs in 1999 matched insect nervous systems, but fell short of the human retina and a goldfish's 0.1 gram brain. They were a millionfold too weak to do the job of a human brain.
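For readers who want to check the arithmetic, the estimate can be restated in a few lines of Python. The constants are the rough figures quoted above; the retina's processing mass is not stated in the text, so a value of about 0.02 gram is assumed here because it reproduces the quoted totals.

```python
# Back-of-envelope restatement of the retina "Rosetta stone" estimate.
# All constants are the rough figures quoted in the article.

IMAGE_REGIONS = 1_000_000          # regions the retina reports on in parallel
REPORTS_PER_SECOND = 10            # roughly ten reports per second
INSTRUCTIONS_PER_DETECTION = 100   # "a few hundred" instructions; low end used here

detections_per_second = IMAGE_REGIONS * REPORTS_PER_SECOND         # 10 million/s
retina_mips = detections_per_second * INSTRUCTIONS_PER_DETECTION / 1e6
print(f"retina ~ {retina_mips:,.0f} MIPS")                          # ~1,000 MIPS

# Scale by tissue mass: assume ~0.02 g of processing tissue in the retina,
# so a 1 g chunk of neural tissue (a rat brain's gram) needs ~50,000 MIPS,
# and a 1,500 g human brain comes out near 100 million MIPS.
RETINA_GRAMS = 0.02                # assumption, not stated in the text
mips_per_gram = retina_mips / RETINA_GRAMS
print(f"~{mips_per_gram:,.0f} MIPS per gram of neural tissue")      # ~50,000
print(f"human brain ~ {mips_per_gram * 1500:,.0f} MIPS")            # order 10^8
```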
While dispiriting to artificial intelligence pioneers, the deficit does not warrant abandoning their goals. Computer power for a given price roughly doubled each year in the 1990s, after doubling every 18 months in the 1980s, and every two years before that. Two or three decades more at the present pace would close the millionfold gap. Better yet, sufficiently useful robots don't need full human-scale brainpower.

Re-Evolving Mind

The incremental growth of computer power suggests an incremental approach to developing robot intelligence, probably an accelerated parallel to the evolution of biological intelligence that is its model. Unlike other approaches, this path demands no great theories or insights (helpful though they can be): natural intelligence evolved in small steps through a chain of viable organisms; artificial intelligence can do the same. Nature performed evolutionary experiments at an approximately steady rate, even when evolved traits such as brain complexity grew exponentially.
Similarly, a steady engineering effort should be able to support exponentially growing robot complexity (especially as ever more of the design search is delegated to increasingly powerful machines). The journey will be much easier the second time around: we have a guide, with directions and distances, in the history of vertebrate nervous systems.
General industrial development in electronics, materials, manufacturing and elsewhere has already carried us well along the way. A series of notable experimental animal-like robots, each built with the best techniques of its day, reads like mileposts along the road to intelligence.
- Around 1950, Bristol University psychologist William Gray Walter, a pioneer brainwave researcher, built eight electronic "Tortoises," each with a scanning phototube eye and two vacuum tube amplifiers driving relays that switched steering and drive motors (Figure 3). Unprecedentedly lifelike, they danced around a lighted recharging hutch until batteries ran low, then entered. But simple bacteria show equally engaging tropisms.
- In the early 1960s the Johns Hopkins University Applied Physics Lab built the corridor-cruising, wall-outlet-recharging "Beast" (Figure 4). With specialized wall-ranging sonars, outlet-seeking photocell optics and a wallplate-feeling arm, all orchestrated by several dozen transistors, the Beast displayed multiple coordinated behaviors resembling those of a large nucleated cell, for instance a bacteria-hunting Paramecium.
- Big mobile robots radio-controlled by huge computers appeared around 1970 at Stanford University and nearby SRI. While a Tortoise's actions followed directly from two or three light and touch discriminations, and the Beast's simply from a few dozen signals, Stanford's "Cart" (Figure 5a) and SRI's "Shakey" (Figure 5b) used TV images with thousands of pixels to choose actions after millions of calculations. The Cart used adaptation and prediction to track dirty white lines in ambient light.
Shakey, more ambitiously but less reliably, identified and reasoned about large blocks. The advent of multicellular animals with nervous systems in the Cambrian explosion 550 million years ago blew the lid on biological behavioral complexity. The introduction of computer control did the same for robots.
- By 1980 a slightly faster computer and a more complex program allowed Stanford's Cart, using stereoscopic vision, to sparsely map and negotiate obstacle courses, taking five hours to cover 30 meters (Figure 1, first panel). In complexity and speed the performance was sluglike.
- Several research robots in the early 1990s navigated and two-dimensionally mapped corridor networks in real time (Figure 1, second panel). Some optimized their interpretations of sensor data in learning processes. Onboard and offboard 10 MIPS microprocessors conferred brainpower like the tiniest fish, or middling insects.
- In 2000 a guppylike thousand MIPS and hundreds of megabytes of memory enabled our robots to build dense, almost photorealistic 3D maps of their surroundings (Figure 1, third panel). Navigation techniques built around this core spatial awareness will suffice, I believe, to guide mobile utility robots reliably through unfamiliar surroundings, suiting them for jobs in hundreds of thousands of industrial locations and eventually hundreds of millions of homes. Such abilities have eluded researchers for so long that only a few dozen small groups pursue them. But the number of robot developers will balloon once a vigorous commercial industry emerges. The continued evolution of robotkind will then become a driver rather than a mere beneficiary of general technical development.
How do biological and technological development rates compare? Both realms contain everything from the very simple to the very complex, and new designs are as likely to increase simplicity as complexity. But the simple end of the range is crammed with competitors, while the complex limit is the beginning of an endless unexplored design space. Organisms or products that are slightly more complex than any before sometimes succeed in the ecology or the marketplace, and thus raise the upper limit.
Paleontological and historical records can be scanned for this upper limit, which mostly rises over time. There are reversals in both records: notably mass extinctions and civilization collapses. But, after a recovery period, progress resumes, often faster than before. If complex entities succumb to disaster, many of their component innovations may yet survive somewhere. Classical learning weathered the collapse of Roman civilization in the remote Islamic world. Some inactive DNA sequences seem to be archives of ancestral traits.
Extinct large organisms may leave much of their heritage behind in smaller relatives, who can rapidly "re-evolve" size and complex adaptations by simple mutations in regulator genes. The re-expression of old good ideas in odd combinations often initiates an explosion of innovation. Such happened culturally in the Renaissance and biologically in the Paleocene, when birds and mammals ran riot in the post-dinosaur world.
Though creative explosions, catastrophic losses and stagnant periods in both realms, and varying investment scales in different technical projects disturb the trends, let us compare the growth of biggest nervous systems since the Cambrian with the information capacity of common big computers since World War II. Wormlike animals with perhaps a few hundred neurons evolved early in the Cambrian, over 570 million years ago.
The first electromechanical computers, with a few hundred bits of telephone relay storage, were built around 1940. Earliest vertebrates, very primitive fish with nervous systems probably smaller than the modern hagfish's, perhaps 100,000 neurons, appeared about 470 million years ago. Computers acquired 100,000 bits of rotating magnetic memory by 1955. Amphibians with perhaps a salamander's few million neurons crawled out of the water 370 million years ago.
Computers with millions of bits of magnetic core memory were available by 1965. By 1975, many computer core memories had exceeded 10 million bits, and by 1985, 100 million bits was common, though large mainframe computers were being largely displaced by small workstations and even smaller personal computers.
Small mammals showed up about 220 million years ago, with brains ranging to several hundred million neurons, while enormous dinosaurs around them bore brains with several billion neurons, a situation that changed only slowly until the sudden extinction of the dinosaurs 65 million years ago. Our small primate ancestors arose soon after, with brains ranging to several billion neurons.
Larger computer systems had several billion bits by 1995. Hominid apes with twenty billion neuron brains appeared about 30 million years ago. In 2000, some ambitious personal computer owners equipped their systems with tens of billions of bits of RAM. Humans have approximately 100 billion neurons. 100 billion bits of RAM will be standard in computers within five years.
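Before drawing the conclusion, the speed ratio implicit in these juxtaposed dates can be checked with a few lines of arithmetic. The sketch below pairs the milestone dates loosely, as the text does; the exact pairings hardly matter, since only the overall spans enter the ratio.

```python
# Rough speed-ratio check from the milestone dates quoted in the text.
# Each pair: (years ago that nervous systems reached a given size,
#             calendar year that common large computers reached the paired bit count)
milestones = [
    (570e6, 1940),   # few hundred neurons      ~ few hundred relay bits
    (470e6, 1955),   # ~100,000 neurons         ~ 100,000 bits of magnetic memory
    (370e6, 1965),   # few million neurons      ~ millions of bits of core
    (220e6, 1985),   # hundreds of millions     ~ 100 million bits common
    (65e6,  1995),   # several billion neurons  ~ several billion bits
    (30e6,  2000),   # ~20 billion neurons      ~ tens of billions of bits
    (0.2e6, 2005),   # ~100 billion neurons     ~ 100 billion bits (projected)
]

bio_span_years = milestones[0][0] - milestones[-1][0]    # ~570 million years
tech_span_years = milestones[-1][1] - milestones[0][1]   # ~65 years
print(f"speedup ~ {bio_span_years / tech_span_years:,.0f}x")   # roughly ten million-fold
```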
Plot these juxtaposed geologic and recent dates against one another (the alignment of bits to neurons is arbitrary, and can be shifted without affecting the slope) and you will discover that large computers' capacities grew each decade about as much as large nervous systems grew every hundred million years. We seem to be re-evolving mind (in a fashion) at ten million times the original speed!

Earning a Living

Commercial mobile robots, which must perform reliably, have tended to use techniques about a decade after they first appeared experimentally. The smartest ones, barely insectlike at 10 MIPS, have found few jobs. A paltry ten thousand work worldwide, and companies that made them are struggling or defunct (robot manipulators have a similar story).
The largest class, Automatic Guided Vehicles (AGVs), transport materials in factories and warehouses. Most follow buried signal-emitting wires and detect endpoints and collisions with switches, techniques introduced in the 1960s. It costs hundreds of thousands of dollars to install guide wires under concrete floors, and the routes are then fixed, making the robots economical only for large, exceptionally stable factories.
Some robots made possible by the advent of microprocessors in the 1980s track softer cues, like patterns in tiled floors, and use ultrasonics and infrared proximity sensors to detect and negotiate their way around obstacles.
The most advanced industrial mobile robots to date, developed since the late 1980s, are guided by occasional navigational markers, for instance laser-sensed bar codes, and by preexisting features like walls, corners and doorways. The hard-hat labor of laying guide wires is replaced by programming carefully tuned for each route segment.
The small companies that developed the robots discovered many industrial customers eager to automate transport, floor cleaning, security patrol and other routine jobs. Alas, most buyers lost interest as they realized that installation and route changing required time-consuming and expensive work by experienced route programmers of precarious availability. Technically successful, the robots fizzled commercially. But in failure they revealed the essentials for success.
First, one needs reasonably priced physical vehicles to do various jobs. Fortunately, existing AGVs, fork-lift trucks, floor scrubbers and other industrial machines designed for human riders or to follow wires can be adapted for autonomy.
Second, the customer should be able, unassisted, to rapidly put a robot to work where needed. Floor cleaning and most other mundane tasks cannot bear the cost, time and uncertainty of expert installation.
Third, the robots must work for at least six months between missteps. Customers routinely rejected robots that, after a month of flawless operation, wedged themselves in corners, wandered away lost, rolled over employees' feet or fell down stairs. A failure only every six months, however, would be forgiven, like an employee's occasional sick day.
Robots exist that work faultlessly for years, perfected by a repeated process that fixes the most frequent failures, revealing successively rarer problems that are corrected in turn. Alas, the reliability has been achieved only for prearranged routes. Insectlike 10 MIPS is just enough to track a few hand-picked landmarks on each path segment.
Such robots are easily confused by minor surprises like shifted bar codes or blocked corridors, not unlike ants on scent trails or moths guided by the moon, which can be trapped by circularized trails or streetlights. (Unlike plodding robots, though, insects routinely take lethal risks, and thus have more interesting, if short, lives.)

Robots that chart their own routes emerged from laboratories worldwide in the mid-1990s, as microprocessors reached 100 MIPS. Most build two-dimensional maps from sonar or laser rangefinder scans to locate and route themselves, and the best seem able to navigate office hallways, sometimes for days between confusions.
To date they fall far short of the six-month commercial criterion. Too often different locations in coarse 2D maps resemble one another, or the same location, scanned at different heights, looks different, or small obstacles or awkward protrusions are overlooked. But sensors, computers and techniques are improving, and success is in sight.
My small laboratory is in the race. In the 1980s we devised a way to distill large amounts of noisy sensor data into reliable maps by accumulating statistical evidence of emptiness or occupancy in each cell of a grid representing the surroundings. The approach worked well in 2D, and guides some of the robots mentioned above.
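The core of the evidence-grid idea fits in a page of code. The sketch below is an illustrative two-dimensional toy, not our laboratory's programs: each simulated range reading adds a little "empty" evidence along its beam and a little "occupied" evidence at its endpoint, and the accumulated log-odds converge on a map despite noisy measurements. The sensor model and all the numbers in it are placeholder assumptions.

```python
import numpy as np

# Minimal 2D evidence grid: accumulate log-odds of occupancy per cell.
# Illustrative only; the sensor model below is a crude stand-in for a real
# sonar or stereoscopic range model.

GRID = 100                            # 100 x 100 cells
log_odds = np.zeros((GRID, GRID))     # 0 = unknown (probability 0.5)

HIT = np.log(0.7 / 0.3)               # evidence added to the cell a reading ends in
MISS = np.log(0.4 / 0.6)              # evidence added to cells the beam passed through

def integrate_reading(x0, y0, x1, y1):
    """Mark cells along the beam (x0,y0)->(x1,y1) as likely empty,
    and the end cell as likely occupied."""
    n = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(n):                # simple line walk, end cell handled separately
        x = x0 + (x1 - x0) * i // n
        y = y0 + (y1 - y0) * i // n
        log_odds[x, y] += MISS
    log_odds[x1, y1] += HIT

# Many noisy readings from a sensor at (50, 50) toward a wall near x = 80:
rng = np.random.default_rng(0)
for _ in range(500):
    hit_x = int(np.clip(80 + rng.normal(0, 1.5), 0, GRID - 1))
    hit_y = rng.integers(20, 80)
    integrate_reading(50, 50, hit_x, hit_y)

occupancy = 1.0 / (1.0 + np.exp(-log_odds))   # back to probabilities
print((occupancy > 0.9).sum(), "cells confidently occupied")
```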
Three-dimensional maps, a thousand times richer, promised to be even better, but for years seemed computationally out of reach. In 1992 we found economies of scale and other tricks that reduced 3D grid costs a hundredfold, and now have a test program that accumulates thousands of measurements from stereoscopic camera glimpses to map a room's volume down to centimeter-scale. With 1,000 MIPS the program digests over a glimpse per second, adequate for slow indoor travel. A thousand MIPS is just appearing in high-end personal computers.
In a few years it will be found in smaller, cheaper computers fit for robots, and we've begun an intensive three-year project to develop a prototype commercial product. Highlights are an automatic learning process to optimize hundreds of evidence-weighing parameters, programs to find clear paths, locations, floors, walls, doors and other objects in the 3D maps, and sample application programs orchestrating the basic skills into tasks like delivery, floor cleaning and patrol. The initial testbed is a small mobile robot with trinocular cameras. Inexpensive digital camera chips promise to be the cheapest way to get the millions of measurements needed for dense maps.
As a first commercial product, we plan a basketball-sized "navigation head" for retrofit onto existing industrial vehicles. It would have multiple stereoscopic cameras, 1,000 MIPS, generic mapping, recognition and control software, an application-specific program, and a hardware connection to vehicle power, controls and sensors.
Head-equipped vehicles with transport or patrol programs could be taught new routes simply by leading them through once. Floor-cleaning programs would be shown the boundaries of their work area. Introduced to a job location, the vehicles would understand their changing surroundings competently enough to work at least six months without debilitating mistakes. Ten thousand AGVs, a hundred thousand cleaning machines and, possibly, a million fork-lift trucks are candidates for retrofit, and robotization may greatly expand those markets.
Income and experience from spatially-aware industrial robots would set the stage for smarter yet cheaper ($1,000 rather than $10,000) consumer products, starting probably with small, patient robot vacuum cleaners that automatically learn their way around a home, explore unoccupied rooms and clean whenever needed. I imagine a machine low enough to fit under some furniture, with an even lower extendible brush, that returns to a docking station to recharge and disgorge its dust load. Such machines could open a true mass market for robots, with a hundred million potential customers. Commercial success will provoke competition and accelerate investment in manufacturing, engineering and research. Vacuuming robots should beget smarter cleaning robots with dusting, scrubbing and picking-up arms, followed by larger multifunction utility robots with stronger, more dexterous arms and better sensors. Programs will be written to make such machines pick up clutter, store, retrieve and deliver things, take inventory, guard homes, open doors, mow lawns, play games, and so on.
New applications will expand the market and spur further advancements, when robots fall short in acuity, precision, strength, reach, dexterity, skill or processing power. Capability, numbers sold, engineering and manufacturing quality, and cost effectiveness will increase in a mutually reinforcing spiral. Perhaps by 2010 the process will have produced the first broadly competent "universal robots," as big as people but with lizardlike 5,000 MIPS minds that can be programmed for almost any simple chore.
Like competent but instinct-ruled reptiles, first-generation universal robots will handle only contingencies explicitly covered in their current application programs. Unable to adapt to changing circumstances, they will often perform inefficiently or not at all. Still, so much physical work awaits them in businesses, streets, fields and homes that robotics could begin to overtake pure information technology commercially.
A second generation of universal robot with a mouselike 100,000 MIPS will adapt as the first generation does not, and even be trainable. Besides application programs, the robots would host a suite of software "conditioning modules" that generate positive and negative reinforcement signals in predefined circumstances.
Application programs would have alternatives for every step, small and large (grip under/over hand, work in/out doors). As jobs are repeated, alternatives that had resulted in positive reinforcement will be favored, those with negative outcomes shunned. With a well-designed conditioning suite (e.g., positive for doing a job fast, keeping the batteries charged, negative for breaking or hitting something) a second-generation robot will slowly learn to work increasingly well.
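The bookkeeping such a conditioning suite might perform could look roughly like the following hypothetical sketch (not any actual robot software): every named step of a job carries its alternatives, and accumulated reinforcement biases future choices toward the ones that have worked.

```python
import random
from collections import defaultdict

# Hypothetical sketch of second-generation "conditioning": each named step of a
# job offers alternative ways to do it; reinforcement signals accumulated after
# each attempt bias future choices toward alternatives that worked out.

scores = defaultdict(float)   # (step, alternative) -> accumulated reinforcement

def choose(step, alternatives, exploration=0.1):
    """Mostly pick the best-reinforced alternative, occasionally explore."""
    if random.random() < exploration:
        return random.choice(alternatives)
    return max(alternatives, key=lambda a: scores[(step, a)])

def reinforce(step, alternative, signal):
    """signal > 0: job done fast, battery kept charged; signal < 0: something hit or broken."""
    scores[(step, alternative)] += signal

# One simulated delivery job:
grip = choose("grip", ["underhand", "overhand"])
route = choose("route", ["indoors", "outdoors"])
# ... the robot executes the job, conditioning modules observe the outcome ...
reinforce("grip", grip, +1.0)     # finished quickly
reinforce("route", route, -0.5)   # bumped a doorway on the way
```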
A monkeylike 5 million MIPS will permit a third generation of robots to learn very quickly from mental rehearsals in simulations that model physical, cultural and psychological factors. Physical properties include shape, weight, strength, texture and appearance of things and how to handle them. Cultural aspects include a thing's name, value, proper location and purpose. Psychological factors, applied to humans and other robots, include goals, beliefs, feelings and preferences.
Developing the simulators will be a huge undertaking involving thousands of programmers and experience-gathering robots. The simulation would track external events, and tune its models to keep them faithful to reality. It should let a robot learn a skill by imitation, and afford a kind of consciousness. Asked why there are candles on the table, a third generation robot might consult its simulation of house, owner and self to honestly reply that it put them there because its owner likes candlelit dinners and it likes to please its owner. Further queries would elicit more details about a simple inner mental life concerned only with concrete situations and people in its work area.
Fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize. The first ever AI programs reasoned abstractly almost as well as people, albeit in very narrow domains, and many existing expert systems outperform us. But the symbols these programs manipulate are meaningless unless interpreted by humans. For instance, a medical diagnosis program needs a human practitioner to enter a patient's symptoms, and to implement a recommended therapy.
Not so a third-generation robot, whose simulator provides a two-way conduit between symbolic descriptions and physical reality. Fourth-generation machines result from melding powerful reasoning programs to third-generation machines. They may reason about everyday actions by referring to their simulators, much as Herbert Gelernter's 1959 geometry theorem prover examined analytic-geometry "diagrams" to check special-case examples before trying to prove general geometric statements. Properly educated, the resulting robots are likely to become intellectually formidable.

Passing the Torch

Barring cataclysms, I consider the development of intelligent machines a near-term inevitability. Every technical step toward intelligent robots has a rough evolutionary counterpart, and each is likely to benefit its creators, manufacturers, and users. Each advance will provide intellectual rewards, competitive advantages, and increased wealth and options of all kinds. Each can make the world a nicer place to live. At the same time, by performing better and cheaper, the robots will displace humans from essential roles. Rather quickly, they could displace us from existence.
I'm not as alarmed as many by the latter possibility, since I consider these future machines our progeny, "mind children" built in our image and likeness, ourselves in more potent form. Like biological children of previous generations, they will embody humanity's best chance for a long-term future. It behooves us to give them every advantage and to bow out when we can no longer contribute.
But, as also with biological children, we can probably arrange for a comfortable retirement before we fade away. Some biological children can be convinced to care for elderly parents. Similarly, "tame" superintelligences could be created and induced to protect and support us, for a while. Such relationships require advance planning and diligent maintenance: it's time to pay attention.
It is the "wild" intelligences, however, those beyond our constraints, to whom the future belongs. The available tools for peeking into that strange future (extrapolation, analogy, abstraction, and reason) are, of course, totally inadequate. Yet, even they suggest surreal happenings.
Robots will sweep into space in a wave of colonization, and their wake will convert everything into increasingly pure thinking stuff. A "Mind Fire" will burn across the universe. Inside the Mind, physical law loses its primacy to purposes, goals, interpretations, and God knows what else.

References and Acknowledgments

Citations, expansions, illustrations and updates on matters discussed here can be found at the author's web page.
This work has been supported since 1999 by the DARPA Mobile Autonomous Robot Software program. Prior funding came from the Office of Naval Research, NASA, Pennsylvania's Ben Franklin program, Daimler-Benz Research, Thinking Machines Corp., Denning Mobile Robotics Inc., and Carnegie Mellon University Robotics Institute.
Hans Moravec is a Principal Research Scientist at the Carnegie Mellon University Robotics Institute, where he has been since 1980.

Figures
Figure 1: Progress in Robot Spatial Awareness: By 1980 the Stanford Cart had (sometimes, slowly) managed to negotiate obstacle courses by tracking and avoiding the 3D locations of a few dozen object corners in the route ahead. The top panel shows the Cart's view of a room, superimposed with red dots marking points its program has selected and stereoscopically ranged. The consequent 3D map at the right shows the same points, with diagonal stalks indicating height, and a planned obstacle-avoiding path. (Labels were added by hand.) The program updated map and plan each meter of travel. The sparse maps were barely adequate, and blunders occurred every few tens of meters.
The second panel shows a dense 2D grid map of 150 meters of corridor produced in 1993 by a program by Barry Brummitt controlling Carnegie Mellon's Xavier robot via a remote Sparc 2 workstation. The sensor was a ring of sonar rangefinders, whose interpretation was automatically learned. In the map image evidence of occupancy ranges from empty (black) through unknown (gray) to occupied (white). Regular indentations marking doors are evident, also bumps where cans, water coolers, fire extinguishers, poster displays, etc. protrude. The curvature is dead-reckoning error.
The last panel shows work in progress. As with the Cart, the left image is a robot's eye view of a scene. The right image, though resembling a fuzzy photograph, is actually a perspective view of the occupied cells of a 3D map of the scene, built from about 100,000 range measurements extracted from 20 stereoscopic views similar to the one on the left. The grid is 256 cells wide by 256 deep by 128 high, covering 6x6x3 meters. Of the eight million total cells, about 100,000 are occupied. The realistic occupied cell colors are a side effect of a learning process.
The shapes of the evidence patterns corresponding to stereoscopic range values, among other system parameters, are tuned automatically to make the best grids. A candidate grid is evaluated by "projecting" colors from the original images onto the grid's occupied cells from the appropriate directions. Each cell in a perfect grid would collect colors from different views of the same thing in real space.
Since most objects show the same color from different viewpoints, the various colorings of each single cell would agree with one another. Incorrect extra cells, however, would intercept many disparate background colors from different points of view. Conversely, colors of incorrectly missing cells would be "sprayed" across various background cells, spoiling their uniformity. The learning program tunes the system to minimize total color variance. The maps so far are ragged around the edges, and many promising improvements remain to be tried, but the results are very encouraging nevertheless.
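A skeletal version of that scoring criterion, written as an illustration rather than as the project's actual code, might look like this: the colors projected onto each occupied cell from the different views are collected, and the summed per-cell color variance is the quantity the learning program drives down.

```python
import numpy as np

# Skeletal color-variance criterion: a candidate grid is scored by the colors
# its occupied cells collect from different camera views; cells that gather
# inconsistent colors raise the score (a worse grid).

def color_variance_score(cell_colors):
    """cell_colors: dict mapping an occupied cell index to the list of RGB
    colors projected onto it from the different views."""
    total = 0.0
    for colors in cell_colors.values():
        c = np.asarray(colors, dtype=float)   # shape (views hitting this cell, 3)
        total += c.var(axis=0).sum()          # variance across views, per channel
    return total

# Toy example: cell 0 is seen consistently; cell 1 collects clashing colors,
# as a spurious cell intercepting different backgrounds would.
good = {0: [(200, 30, 30), (198, 32, 29), (201, 28, 31)]}
bad  = {1: [(200, 30, 30), (10, 220, 15), (40, 40, 250)]}
print(color_variance_score(good))   # small
print(color_variance_score(bad))    # large

# A learning loop would adjust the evidence-pattern parameters to minimize
# this score summed over all occupied cells and views.
```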
Compare the richness of the 3D maps in the first and third panels. Both were produced by processing about 20 stereoscopic image sets, the 1980 result on a 1 MIPS DEC KL-10 mainframe computer with 500 kilobytes of memory, the 2000 result on a 1,000 MIPS Macintosh G4 with 500 megabytes of memory.
Figure 2: AI's Sudden Boil: The cost of computers used in AI and robotics research declined from the equivalent of millions of dollars per machine in 1960 to a few thousand dollars in 1990. The computer population increased greatly, but the power available to individual AI programs remained an almost constant 1 MIPS--less than insect power. Cost per machine stabilized in 1990, and since then power has doubled yearly, to 1,000 MIPS in 2000.
The major visible exception to this pattern is computer chess, shown by a progression of knights, whose prestige lured major computer companies into providing access to their most powerful machines, and researchers into developing chess-specific hardware. (Special-purpose chess machines are positioned at the minimum general-purpose computer power that could emulate them. Similarly for the organisms at the right: each marks the minimum power of a general-purpose computer that could produce similarly complex behavior, as estimated by the text's retina to robot vision criterion.)
Figure 3: Gray Walter Tortoise Elsie: One of eight built, with phototube eye and two vacuum tube amplifiers driving relays that controlled steering and drive motors. Elsie's shell, removed for surgery, can be seen in the background. The tortoises exhibited very lively behavior, for instance dancing near a lighted recharging hutch until their batteries ran low, then entering it. Their simple tropisms resemble bacterial "intelligence".
Figure 4: The Hopkins Beast: Built in the early 1960s, using dozens of transistors, the Johns Hopkins University Applied Physics Lab's "Beast" wandered white hallways, centering by sonar, until its batteries ran low. Then it would seek black wall outlets with special photocell optics, and plug itself in by feel with its special recharging arm. After feeding, it would resume patrolling. Much more complex than Elsie, its deliberate behavior can be compared to a nucleated single-cell organism like a paramecium or amoeba.
Figure 5: Stanford's Cart and SRI's Shakey: The "Stanford Cart" (a) and SRI's "Shakey" (b) were the first mobile robots to be controlled by computers (large mainframes doing about a quarter million calculations per second, linked to the robots by radio). Both used television cameras to see. The Cart could follow white lines quite reliably; Shakey could find large prismatic objects somewhat less reliably.
Their control complexity was far greater than Elsie's or the Beast's (lines can be tracked using simple Elsie-like techniques with ground-mounted lights and photocells, but it takes complex adaptation and prediction to do it with ambient light from a high vantage point), and the use of computers to control robots can be compared to the advent of multicellular animals with nervous systems in the Cambrian explosion: both events blew the lid on behavioral complexity in their respective domains.