
Chapter 11: The Engines of Destruction
by K. Eric Drexler



Nor do I doubt if the most formidable armies ever heere upon earth is a sort of soldiers who for their smallness are not visible.
- Sir WILLIAM PETTY, on microbes, 1640

REPLICATING assemblers and thinking machines pose basic threats to people and to life on Earth. Today's organisms have abilities far from the limits of the possible, and our machines are evolving faster than we are. Within a few decades they seem likely to surpass us. Unless we learn to live with them in safety, our future will likely be both exciting and short. We cannot hope to foresee all the problems ahead, yet by paying attention to the big, basic issues, we can perhaps foresee the greatest challenges and get some idea of how to deal with them.

 Entire books will no doubt be written on the coming social upheavals: What will happen to the global order when assemblers and automated engineering eliminate the need for most international trade? How will society change when individuals can live indefinitely? What will we do when replicating assemblers can make almost anything without human labor? What will we do when AI systems can think faster than humans? (And before jumping to the conclusion that people will then despair of doing or creating anything, those authors might consider how runners regard cars, or how painters regard cameras.)

 In fact, authors have already foreseen and discussed several of these issues. Each is a matter of uncommon importance, but more fundamental than any of them is the survival of life and liberty. After all, if life or liberty is obliterated, then our ideas about social problems will no longer matter.

The Threat from the Machines

In Chapter 4, I described some of what replicating assemblers will do for us if we handle them properly. Powered by fuels or sunlight, they will be able to make almost anything (including more of themselves) from common materials.

 Living organisms are also powered by fuels or sunlight, and also make more of themselves from ordinary materials. But unlike assembler-based systems, they cannot make "almost anything".

 Genetic evolution has limited life to a system based on DNA, RNA, and ribosomes, but memetic evolution will bring life-like machines based on nanocomputers and assemblers. I have already described how assembler-built molecular machines will differ from the ribosome-built machinery of life. Assemblers will be able to build all that ribosomes can, and more; assembler-based replicators will therefore be able to do all that life can, and more. From an evolutionary point of view, this poses an obvious threat to otters, people, cacti, and ferns - to the rich fabric of the biosphere and all that we prize.

 The early transistorized computers soon beat the most advanced vacuum-tube computers because they were based on superior devices. For the same reason, early assembler-based replicators could beat the most advanced modern organisms. "Plants" with "leaves" no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough, omnivorous "bacteria" could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop - at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.

 Among the cognoscenti of nanotechnology, this threat has become known as the "gray goo problem." Though masses of uncontrolled replicators need not be gray or gooey, the term "gray goo" emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be "superior" in an evolutionary sense, but this need not make them valuable. We have evolved to love a world rich in living things, ideas, and diversity, so there is no reason to value gray goo merely because it could spread. Indeed, if we prevent it we will thereby prove our evolutionary superiority.

 The gray goo threat makes one thing perfectly clear: we cannot afford certain kinds of accidents with replicating assemblers.

 In Chapter 5, I described some of what advanced AI systems will do for us, if we handle them properly. Ultimately, they will embody the patterns of thought and make them flow at a pace no mammal's brain can match. AI systems that work together as people do will be able to out-think not just individuals, but whole societies. Again, the evolution of genes has left life stuck. Again, the evolution of memes by human beings - and eventually by machines - will advance our hardware far beyond the limits of life. And again, from an evolutionary point of view this poses an obvious threat.

 Knowledge can bring power, and power can bring knowledge. Depending on their natures and their goals, advanced AI systems might accumulate enough knowledge and power to displace us, if we don't prepare properly. And as with replicators, mere evolutionary "superiority" need not make the victors better than the vanquished by any standard but brute competitive ability.

 This threat makes one thing perfectly clear: we need to find ways to live with thinking machines, to make them law-abiding citizens.

Engines of Power

Certain kinds of replicators and AI systems may confront us with forms of hardware capable of swift, effective, independent action. But the novelty of this threat - coming from the machines themselves - must not blind us to a more traditional danger. Replicators and AI systems can also serve as great engines of power, if wielded freely by sovereign states.

 Throughout history, states have developed technologies to extend their military power, and states will no doubt play a dominant role in developing replicators and AI systems. States could use replicating assemblers to build arsenals of advanced weapons, swiftly, easily, and in vast quantity. States could use special replicators directly to wage a sort of germ warfare - one made vastly more practical by programmable, computer-controlled "germs." Depending on their skills, AI systems could serve as weapon designers, strategists, or fighters. Military funds already support research in both molecular technology and artificial intelligence.

 States could use assemblers or advanced AI systems to achieve sudden, destabilizing breakthroughs. I earlier discussed reasons for expecting that the advent of replicating assemblers will bring relatively sudden changes: Able to replicate swiftly, they could become abundant in a matter of days. Able to make almost anything, they could be programmed to duplicate existing weapons, but made from superior materials. Able to work with standard, well-understood components (atoms), they could suddenly build things designed in anticipation of the assembler breakthrough. These results of design-ahead could include programmable germs and other nasty novelties. For all these reasons, a state that makes the assembler breakthrough could rapidly create a decisive military force - if not literally overnight, then at least with unprecedented speed.

 States could use advanced AI systems to similar ends. Automated engineering systems will facilitate design-ahead and speed assembler development. AI systems able to build better AI systems will allow an explosion of capability with effects hard to anticipate. Both AI systems and replicating assemblers will enable states to expand their military capabilities by orders of magnitude in a brief time.

 Replicators can be more potent than nuclear weapons: to devastate Earth with bombs would require masses of exotic hardware and rare isotopes, but to destroy all life with replicators would require only a single speck made of ordinary elements. Replicators give nuclear war some company as a potential cause of extinction, giving a broader context to extinction as a moral concern.

 Despite their potential as engines of destruction, nanotechnology and AI systems will lend themselves to more subtle uses than do nuclear weapons. A bomb can only blast things, but nanomachines and AI systems could be used to infiltrate, seize, change, and govern a territory or a world. Even the most ruthless police have no use for nuclear weapons, but they do have use for bugs, drugs, assassins, and other flexible engines of power. With advanced technology, states will be able to consolidate their power over people.

 Like genes, memes, organisms, and hardware, states have evolved. Their institutions have spread (with variations) through growth, fission, imitation, and conquest. States at war fight like beasts, but using citizens as their bones, brains, and muscle. The coming breakthroughs will confront states with new pressures and opportunities, encouraging sharp changes in how states behave. This naturally gives cause for concern. States have, historically, excelled at slaughter and oppression.

 In a sense, a state is simply the sum of the people making up its organizational apparatus: their actions add up to make its actions. But the same might be said of a dog and its cells, though a dog is clearly more than just a clump of cells. Both dogs and states are evolved systems, with structures that affect how their parts behave. For thousands of years, dogs have evolved largely to please people, because they have survived and reproduced at human whim. For thousands of years, states have evolved under other selective pressures. Individuals have far more power over their dogs than they do over "their" states. Though states, too, can benefit from pleasing people, their very existence has depended on their capability for using people, whether as leaders, police, or soldiers.

 It may seem paradoxical to say that people have limited power over states: After all, aren't people behind a state's every action? But in democracies, heads of state bemoan their lack of power, representatives bow to interest groups, bureaucrats are bound by rules, and voters, allegedly in charge, curse the whole mess. The state acts and people affect it, yet no one can claim to control it. In totalitarian states, the apparatus of power has a tradition, structure, and inner logic that leaves no one free, neither the rulers nor the ruled. Even kings had to act in ways limited by the traditions of monarchy and the practicalities of power, if they were to remain kings. States are not human, though they are made of humans.

 Despite this, history shows that change is possible, even change for the better. But changes always move from one semi-autonomous, inhuman system to another - equally inhuman but perhaps more humane. In our hope for improvements, we must not confuse states that wear a human face with states that have humane institutions.

 Describing states as quasi-organisms captures only one aspect of a complex reality, yet it suggests how they may evolve in response to the coming breakthroughs. The growth of government power, most spectacular in totalitarian countries, suggests one direction.

 States could become more like organisms by dominating their parts more completely. Using replicating assemblers, states could fill the human environment with miniature surveillance devices. Using an abundance of speech-understanding AI systems, they could listen to everyone without employing half the population as listeners. Using nanotechnology like that proposed for cell repair machines, they could cheaply tranquilize, lobotomize, or otherwise modify entire populations. This would simply extend an all too familiar pattern. The world already holds governments that spy, torture, and drug; advanced technology will merely extend the possibilities.

 But with advanced technology, states need not control people - they could instead simply discard people. Most people in most states, after all, function either as workers, larval workers, or worker-rearers, and most of these workers make, move, or grow things. A state with replicating assemblers would not need such work. What is more, advanced AI systems could replace engineers, scientists, administrators, and even leaders. The combination of nanotechnology and advanced AI will make possible intelligent, effective robots; with such robots, a state could prosper while discarding anyone, or even (in principle) everyone.

 The implications of this possibility depend on whether the state exists to serve the people, or the people exist to serve the state.

 In the first case, we have a state shaped by human beings to serve general human purposes; democracies tend to be at least rough approximations to this ideal. If a democratically controlled government loses its need for people, this will basically mean that it no longer needs to use people as bureaucrats or taxpayers. This will open new possibilities, some of which may prove desirable.

 In the second case, we have a state evolved to exploit human beings, perhaps along totalitarian lines. States have needed people as workers because human labor has been the necessary foundation of power. What is more, genocide has been expensive and troublesome to organize and execute. Yet, in this century totalitarian states have slaughtered their citizens by the millions. Advanced technology will make workers unnecessary and genocide easy. History suggests that totalitarian states may then eliminate people wholesale. There is some consolation in this. It seems likely that a state willing and able to enslave us biologically would instead simply kill us.

 The threat of advanced technology in the hands of governments makes one thing perfectly clear: we cannot afford to have an oppressive state take the lead in the coming breakthroughs.

 The basic problems I have outlined are obvious: in the future, as in the past, new technologies will lend themselves to accidents and abuse. Since replicators and thinking machines will bring great new powers, the potential for accidents and abuse will likewise be great. These possibilities pose genuine threats to our lives.

 Most people would like a chance to live and be free to choose how to live. This goal may not sound too utopian, at least in some parts of the world. It doesn't mean forcing everyone's life to fit some grand scheme; it chiefly means avoiding enslavement and death. Yet, like the achievement of a utopian dream, it will bring a future of wonders.

 Given these life-and-death problems and this general goal, we can consider what measures might help us succeed. Our strategy must involve people, principles, and institutions, but it must also rest on tactics which inevitably will involve technology.

Trustworthy Systems

To use such powerful technologies in safety, we must make hardware we can trust. To have trust, we must be able to judge technical facts accurately, an ability that will in turn depend partly on the quality of our institutions for judgment. More fundamentally, though, it will depend on whether trustworthy hardware is physically possible. This is a matter of the reliability of components and of systems.

 We can often make reliable components, even without assemblers to help. "Reliable" doesn't mean "indestructible" - anything will fail if placed close enough to a nuclear blast. It doesn't even mean "tough" - a television set may be reliable, yet not survive being bounced off a concrete floor. Rather, we call something reliable when we can count on it to work as designed.

 A reliable component need not be a perfect embodiment of a perfect design: it need only be a good enough embodiment of a cautious enough design. A bridge engineer may be uncertain about the strength of winds, the weight of traffic, and the strength of steel, but by assuming high winds, heavy traffic, and weak steel, the engineer can design a bridge that will stand.

 Unexpected failures in components commonly stem from material flaws. But assemblers will build components that have a negligible number of their atoms out of place - none, if need be. This will make them perfectly uniform and in a limited sense perfectly reliable. Radiation will still cause damage, though, because a cosmic ray can unexpectedly knock atoms loose from anything. In a small enough component (even in a modern computer memory device), a single particle of radiation can cause a failure.

[Addition to the Web version of Engines of Creation: A reader of this Web version has noted a problem with the math in the following example - itself an example of the value of hypertext as discussed in Chapter 14 and in Eric Drexler's essay "Hypertext Publishing and the Evolution of Knowledge".]

But systems can work even when their parts fail; the key is redundancy. Imagine a bridge suspended from cables that fail randomly, each breaking about once a year at an unpredictable time. If the bridge falls when a cable breaks, it will be too dangerous to use. Imagine, though, that a broken cable takes a day to fix (because skilled repair crews with spare cables are on call), and that, though it takes five cables to support the bridge, there are actually six. Now if one cable breaks, the bridge will still stand. By clearing traffic and then replacing the failed cable, the bridge operators can restore safety. To destroy this bridge, a second cable must break in the same day as the first. Supported by six cables, each having a one-in-365 daily chance of breaking, the bridge will likely last about ten years.

 While an improvement, this remains terrible. Yet a bridge with ten cables (five needed, five extra) will fall only if six cables break on the same day: the suspension system is likely to last over ten million years. With fifteen cables, the expected lifetime is over ten thousand times the age of the Earth. Redundancy can bring an exponential explosion of safety.
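
 The arithmetic behind this exponential explosion is easy to check mechanically. The following minimal sketch (a Python illustration added for this edition, not part of Drexler's text) computes the expected lifetime under the stated assumptions: each cable breaks independently with probability 1/365 on any given day, broken cables are replaced within a day, and the bridge falls only when a single day's failures exceed the spares. The exact six-cable figure depends on how the one-day repair window is modeled - perhaps the issue flagged in the correction note above - but the exponential growth of safety with each added cable is easy to confirm.

    from math import comb

    def expected_lifetime_years(total: int, needed: int, p: float = 1 / 365) -> float:
        """Mean years until more cables break in one day than the spares can absorb."""
        spares = total - needed
        # Daily probability that at least spares + 1 of the cables break at once.
        p_collapse = sum(
            comb(total, k) * p**k * (1 - p) ** (total - k)
            for k in range(spares + 1, total + 1)
        )
        return 1 / p_collapse / 365  # mean of a geometric distribution, in years

    for total in (6, 10, 15):
        print(f"{total} cables: about {expected_lifetime_years(total, needed=5):.2g} years")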

 Redundancy works best when the redundant components are truly independent. If we don't trust the design process, then we must use components designed independently; if a bomb, bullet, or cosmic ray may damage several neighboring parts, then we must spread redundant parts more widely. Engineers who want to supply reliable transportation between two islands shouldn't just add more cables to a bridge. They should build two well-separated bridges using different designs, then add a tunnel, a ferry, and a pair of inland airports.

 Computer engineers also use redundancy. Stratus Computer Inc., for example, makes a machine that uses four central processing units (in two pairs) to do the work of one, but to do it vastly more reliably. Each pair is continually checked for internal consistency, and a failed pair can be replaced while its twin carries on.

 An even more powerful form of redundancy is design diversity. In computer hardware, this means using several computers with different designs, all working in parallel. Now redundancy can correct not just for failures in a piece of hardware, but for errors in its design.

 Much has been made of the problem of writing large, error-free programs; many people consider such programs impossible to develop and debug. But researchers at the UCLA Computer Science Department have shown that design diversity can also be used in software: several programmers can tackle the same problem independently, then all their programs can be run in parallel and made to vote on the answer. This multiplies the cost of writing and running the program, but it makes the resulting software system resistant to the bugs that appear in some of its parts.
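
 A minimal sketch of this voting scheme (a Python toy of my own, not the UCLA system itself): three independently written routines implement the same specification, one of them harbors a design bug, and a majority vote over their answers masks it.

    from collections import Counter

    def sort_v1(xs):                       # version 1: library sort
        return sorted(xs)

    def sort_v2(xs):                       # version 2: insertion sort
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] < x:
                i += 1
            out.insert(i, x)
        return out

    def sort_v3(xs):                       # version 3: buggy (sorts backwards)
        return sorted(xs, reverse=True)

    def vote(versions, xs):
        """Run every version in parallel and return the majority answer."""
        answers = [tuple(v(list(xs))) for v in versions]
        winner, count = Counter(answers).most_common(1)[0]
        assert count > len(versions) // 2, "no majority - fail safe instead"
        return list(winner)

    print(vote([sort_v1, sort_v2, sort_v3], [3, 1, 2]))   # [1, 2, 3] despite the bug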

 We can use redundancy to control replicators. Just as repair machines that compare multiple DNA strands will be able to correct mutations in a cell's genes, so replicators that compare multiple copies of their instructions (or that use other effective error-correcting systems) will be able to resist mutation in these "genes." Redundancy can again bring an exponential explosion of safety.
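
 As a minimal sketch of such error correction (my illustration; real replicators would do this in molecular hardware, not software), a replicator holding several copies of its instructions can rebuild a consensus by majority vote at each position, so an isolated mutation in one copy cannot propagate.

    from collections import Counter

    def repair(copies):
        """Rebuild the consensus instruction string from redundant copies."""
        return "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

    copies = ["BUILD ARM", "BUILD ARM", "BXILD ARM"]   # one copy carries a mutation
    consensus = repair(copies)
    copies = [consensus] * len(copies)                 # overwrite every copy with the consensus
    print(consensus)                                   # BUILD ARM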

 We can build systems that are extremely reliable, but this will entail costs. Redundancy makes systems heavier, bulkier, more expensive, and less efficient. Nanotechnology, though, will make most things far lighter, smaller, cheaper, and more efficient to begin with. This will make redundancy and reliability more practical.

 Today, we are seldom willing to pay for the safest possible systems; we tolerate failures more-or-less willingly and seldom consider the real limits of reliability. This biases judgments of what can be achieved. A psychological factor also distorts our sense of how reliable things can be made: failures stick in our minds, but everyday successes draw little attention. The media amplify this tendency by reporting the most dramatic failures from around the world, while ignoring the endless and boring successes. Worse yet, the components of redundant systems may fail in visible ways, stirring alarums: imagine how the media would report a snapped bridge cable, even if the bridge were the super-safe fifteen-cable model described above. And since each added redundant component adds to the chance of a component failure, a system's reliability can seem worse even as it approaches perfection.

 Appearances aside, redundant systems made of abundant, flawless components can often be made almost perfectly reliable. Redundant systems spread over wide enough spaces will survive even bullets and bombs.

 But what about design errors? Having a dozen redundant parts will do no good if they share a fatal error in design. Design diversity is one answer; good testing is another. We can reliably evolve good designs without being reliably good designers: we need only be good at testing, good at tinkering, and good at being patient. Nature has evolved working molecular machinery through entirely mindless tinkering and testing. Having minds, we can do as well or better.

 We will find it easy to design reliable hardware if we can develop reliable automated engineering systems. But this raises the wider issue of developing trustworthy artificial intelligence systems. We will have little trouble making AI systems with reliable hardware, but what about their software?

 Like present AI systems and human minds, advanced AI systems will be synergistic combinations of many simpler parts. Each part will be more specialized and less intelligent than the system as a whole. Some parts will look for patterns in pictures, sounds, and other data and suggest what they might mean. Other parts will compare and judge the suggestions of these parts. Just as the pattern recognizers in the human visual system suffer from errors and optical illusions, so will the pattern recognizers in AI systems. (Indeed, some advanced machine vision systems already suffer from familiar optical illusions.) And just as other parts of the human mind can often identify and compensate for illusions, so will other parts of AI systems.

 As in human minds, intelligence will involve mental parts that make shaky guesses and other parts that discard most of the bad guesses before they draw much attention or affect important decisions. Mental parts that reject action ideas on ethical grounds correspond to what we call a conscience. AI systems with many parts will have room for redundancy and design diversity, making reliability possible.

 A genuine, flexible AI system must evolve ideas. To do this, it must find or form hypotheses, generate variations, test them, and then modify or discard those found inadequate. Eliminating any of these abilities would make it stupid, stubborn, or insane ("Durn machine can't think and won't learn from its mistakes - junk it!"). To avoid becoming trapped by initial misconceptions, it will have to consider conflicting views, seeing how well each explains the data, and seeing whether one view can explain another.
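
 The loop just described can be made concrete in a few lines. This toy sketch (mine; the target string and scoring rule are arbitrary stand-ins for the data a hypothesis must explain) forms a hypothesis, generates variations, tests them, and discards those found inadequate.

    import random

    TARGET = "TRUSTWORTHY MACHINE"                  # stand-in for the data to explain
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def score(hypothesis):
        """How much of the data the hypothesis explains."""
        return sum(a == b for a, b in zip(hypothesis, TARGET))

    def vary(hypothesis):
        """Generate a variation by changing one position."""
        i = random.randrange(len(hypothesis))
        return hypothesis[:i] + random.choice(ALPHABET) + hypothesis[i + 1:]

    guess = "".join(random.choice(ALPHABET) for _ in TARGET)
    while score(guess) < len(TARGET):
        candidate = vary(guess)
        if score(candidate) >= score(guess):        # keep promising variations...
            guess = candidate                       # ...discard the inadequate rest
    print(guess)                                    # converges on the target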

 Scientific communities go through a similar process. And in a paper called "The Scientific Community Metaphor," William A. Kornfeld and Carl Hewitt of the MIT Artificial Intelligence Laboratory suggest that AI researchers model their programs still more closely on the evolved structure of the scientific community. They point to the pluralism of science, to its diversity of competing proposers, supporters, and critics. Without proposers, ideas cannot appear; without supporters, they cannot grow; and without critics to weed them, bad ideas can crowd out the good. This holds true in science, in technology, in AI systems, and among the parts of our own minds.

 Having a world full of diverse and redundant proposers, supporters, and critics is what makes the advance of science and technology reliable. Having more proposers makes good proposals more common; having more critics makes bad proposals more vulnerable. Better, more numerous ideas are the result. A similar form of redundancy can help AI systems to develop sound ideas.

 People sometimes guide their actions by standards of truth and ethics, and we should be able to evolve AI systems that do likewise, but more reliably. Able to think a million times faster than us, they will have more time for second thoughts. It seems that AI systems can be made trustworthy, at least by human standards.

 I have often compared AI systems to individual human minds, but the resemblance need not be close. A system that can mimic a person may need to be personlike, but an automated engineering system probably doesn't. One proposal (called an Agora system, after the Greek term for a meeting and market place) would consist of many independent pieces of software that interact by offering one another services in exchange for money. Most pieces would be simpleminded specialists, some able to suggest a design change, and others able to analyze one. Much as Earth's ecology has evolved extraordinary organisms, so this computer economy could evolve extraordinary designs - and perhaps in a comparably mindless fashion. What is more, since the system would be spread over many machines and have parts written by many people, it could be diverse, robust, and hard for any group to seize and abuse.
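
 A minimal sketch of such a market (the class names, fees, and single numeric "design" are hypothetical simplifications of mine): independent specialists sell proposal and analysis services, and the design drifts toward whatever survives analysis, with the budget bounding the work.

    import random

    class Proposer:
        """A simpleminded specialist that suggests a design change, for a fee."""
        def offer(self, design):
            return design + random.uniform(-1.0, 1.0), 1.0

    class Analyst:
        """A specialist that scores a design (closeness to an assumed optimum), for a fee."""
        def offer(self, design):
            return -abs(design - 10.0), 0.5

    def agora(budget=500.0):
        design, proposer, analyst = 0.0, Proposer(), Analyst()
        best, _ = analyst.offer(design)
        while budget > 0:
            candidate, fee1 = proposer.offer(design)
            score, fee2 = analyst.offer(candidate)
            budget -= fee1 + fee2            # services are bought and sold
            if score > best:                 # the market keeps what survives analysis
                design, best = candidate, score
        return design

    print(round(agora(), 2))   # drifts toward the optimum of 10.0 while funds last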

 Eventually, one way or another, automated engineering systems will be able to design things more reliably than any group of human engineers can today. Our challenge will be to design them correctly. We will need human institutions that reliably develop reliable systems.

 Human institutions are evolved artificial systems, and they can often solve problems that their individual members cannot. This makes them a sort of "artificial intelligence system." Corporations, armies, and research laboratories all are examples, as are the looser structures of the market and the scientific community. Even governments may be seen as artificial intelligence systems - gross, sluggish, and befuddled, yet superhuman in their sheer capability. And what are constitutional checks and balances but an attempt to increase a government's reliability through institutional diversity and redundancy? When we build intelligent machines, we will use them to check and balance one another.

 By applying the same principles, we may be able to develop reliable, technically oriented institutions having strong checks and balances, then use these to guide the development of the systems we will need to handle the coming breakthroughs.

Tactics for the Assembler Breakthrough

Some force in the world (whether trustworthy or not) will take the lead in developing assemblers; call it the "leading force." Because of the strategic importance of assemblers, the leading force will presumably be some organization or institution that is effectively controlled by some government or group of governments. To simplify matters, pretend for the moment that we (the good guys, attempting to be wise) can make policy for the leading force. For citizens of democratic states, this seems a good attitude to take.

 What should we do to improve our chances of reaching a future worth living in? What can we do?

 We can begin with what must not happen: we must not let a single replicating assembler of the wrong kind be loosed on an unprepared world. Effective preparations seem possible (as I will describe), but it seems that they must be based on assembler-built systems that can be built only after dangerous replicators have already become possible. Design-ahead can help the leading force prepare, yet even vigorous, foresighted action seems inadequate to prevent a time of danger. The reason is straightforward: dangerous replicators will be far simpler to design than systems that can thwart them, just as a bacterium is far simpler than an immune system. We will need tactics for containing nanotechnology while we learn how to tame it.

 One obvious tactic is isolation: the leading force will be able to contain replicator systems behind multiple walls or in laboratories in space. Simple replicators will have no intelligence, and they won't be designed to escape and run wild. Containing them seems no great challenge.

 Better yet, we will be able to design replicators that can't escape and run wild. We can build them with counters (like those in cells) that limit them to a fixed number of replications. We can build them to have requirements for special synthetic "vitamins," or for bizarre environments found only in the laboratory. Though replicators could be made tougher and more voracious than any modern pests, we can also make them useful but harmless. Because we will design them from scratch, replicators need not have the elementary survival skills that evolution has built into living cells.

 Further, they need not be able to evolve. We can give replicators redundant copies of their "genetic" instructions, along with repair mechanisms to correct any mutations. We can design them to stop working long before enough damage accumulates to make a lasting mutation a significant possibility. Finally, we can design them in ways that would hamper evolution even if mutations could occur.
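
 The safeguards described above can be made concrete in a short sketch (entirely hypothetical; a real assembler would enforce these checks in molecular hardware, not Python): a replicator that honors a replication counter, requires a synthetic vitamin found only in the laboratory, and shuts down if its instructions have mutated.

    from hashlib import sha256

    GENOME = b"assemble useful products"
    CHECKSUM = sha256(GENOME).hexdigest()      # reference copy for the mutation check

    class LimitedReplicator:
        def __init__(self, generations_left=3, genome=GENOME):
            self.generations_left = generations_left
            self.genome = genome

        def replicate(self, environment):
            if "synthetic-vitamin" not in environment:
                raise RuntimeError("required lab-only vitamin absent")
            if sha256(self.genome).hexdigest() != CHECKSUM:
                raise RuntimeError("instruction mutation detected; halting")
            if self.generations_left <= 0:
                raise RuntimeError("replication counter exhausted")
            return LimitedReplicator(self.generations_left - 1, self.genome)

    parent = LimitedReplicator()
    child = parent.replicate({"synthetic-vitamin", "sunlight"})   # fine in the lab
    try:
        child.replicate({"sunlight", "soil"})                     # loose in the wild
    except RuntimeError as err:
        print("replication blocked:", err)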

 Experiments show that most computer programs (other than specially designed AI programs, such as Dr. Lenat's EURISKO) seldom respond to mutations by changing slightly; instead, they simply fail. Because they cannot vary in useful ways, they cannot evolve. Unless they are specially designed, replicators directed by nanocomputers will share this handicap. Modern organisms are fairly good at evolving partly because they descend from ancestors that evolved. They are evolved to evolve; this is one reason for the complexities of sexual reproduction and the shuffling of chromosome segments during the production of sperm and egg cells. We can simply neglect to give replicators similar talents.

 It will be easy for the leading force to make replicating assemblers useful, harmless, and stable. Keeping assemblers from being stolen and abused is a different and greater problem, because it will be a game played against intelligent opponents. As one tactic, we can reduce the incentive to steal assemblers by making them available in safe forms. This will also reduce the incentive for other groups to develop assemblers independently. The leading force, after all, will be followed by trailing forces.

Limited Assemblers

In Chapter 4, I described how a system of assemblers in a vat could build an excellent rocket engine. I also pointed out that we will be able to make assembler systems that act like seeds, absorbing sunlight and ordinary materials and growing to become almost anything. These special-purpose systems will not replicate themselves, or will do so only a fixed number of times. They will make only what they were programmed to make, when they are told to make it. Anyone lacking special assembler-built tools would be unable to reprogram them to serve other purposes.

 Using limited assemblers of this sort, people will be able to make as much as they want of whatever they want, subject to limits built into the machines. If none is programmed to make nuclear weapons, none will; if none is programmed to make dangerous replicators, none will. If some are programmed to make houses, cars, computers, toothbrushes, and whatnot, then these products can become cheap and abundant. Machines built by limited assemblers will enable us to open space, heal the biosphere, and repair human cells. Limited assemblers can bring almost unlimited wealth to the people of the world.

 This tactic will ease the moral pressure to make unlimited assemblers available immediately. But limited assemblers will still leave legitimate needs unfulfilled. Scientists will need freely programmable assemblers to conduct studies; engineers will need them to test designs. These needs can be served by sealed assembler laboratories.

Sealed Assembler Laboratories

Picture a computer accessory the size of your thumb, with a state-of-the-art plug on its bottom. Its surface looks like boring gray plastic, imprinted with a serial number, yet this sealed assembler lab is an assembler-built object that contains many things. Inside, sitting above the plug, is a large nanoelectronic computer running advanced molecular-simulation software (based on the software developed during assembler development). With the assembler lab plugged in and turned on, your assembler-built home computer displays a three-dimensional picture of whatever the lab computer is simulating, representing atoms as colored spheres. With a joystick, you can direct the simulated assembler arm to build things. Programs can move the arm faster, building elaborate structures on the screen in the blink of an eye. The simulation always works perfectly, because the nanocomputer cheats: as you make the simulated arm move simulated molecules, the computer directs an actual arm to move actual molecules. It then checks the results whenever needed to correct its calculations.

 The end of this thumb-sized object holds a sphere built in many concentric layers. Fine wires carry power and signals through the layers; these let the nanocomputer in the base communicate with the devices at the sphere's center. The outermost layer consists of sensors. Any attempt to remove or puncture it triggers a signal to a layer near the core. The next layer in is a thick spherical shell of prestressed diamond composite, with its outer layers stretched and its inner layers compressed. This surrounds a layer of thermal insulator which in turn surrounds a peppercorn-sized spherical shell made up of microscopic, carefully arranged blocks of metal and oxidizer. These are laced with electrical igniters. The outer sensor layer, if punctured, triggers these igniters. The metal-and-oxidizer demolition charge then burns in a fraction of a second, producing a gas of metal oxides denser than water and almost as hot as the surface of the Sun. But the blaze is tiny; it swiftly cools, and the diamond sphere confines its great pressure.

 This demolition charge surrounds a smaller composite shell, which surrounds another layer of sensors, which can also trigger the demolition charge. These sensors surround the cavity which contains the actual sealed assembler lab.

 These elaborate precautions justify the term "sealed." Someone outside cannot open the lab space without destroying the contents, and no assemblers or assembler-built structures can escape from within. The system is designed to let out information, but not dangerous replicators or dangerous tools. Each sensor layer consists of many redundant layers of sensors, each intended to detect any possible penetration, and each making up for possible flaws in the others. Penetration, by triggering the demolition charge, raises the lab to a temperature beyond the melting point of all possible substances and makes the survival of a dangerous device impossible. These protective mechanisms all gang up on something about a millionth their size - that is, on whatever will fit in the lab, which provides a spherical work space no wider than a human hair.

 Though small by ordinary standards, this work space holds room enough for millions of assemblers and thousands of trillions of atoms. These sealed labs will let people build and test devices, even voracious replicators, in complete safety. Children will use the atoms inside them as a construction set with almost unlimited parts. Hobbyists will exchange programs for building various gadgets. Engineers will build and test new nanotechnologies. Chemists, materials scientists, and biologists will build apparatus and run experiments. In labs built around biological samples, biomedical engineers will develop and test early cell repair machines.

 In the course of this work, people will naturally develop useful designs, whether for computer circuits, strong materials, medical devices, or whatever. After a public review of their safety, these things could be made available outside the sealed labs by programming limited assemblers to make them. Sealed labs and limited assemblers will form a complementary pair: The first will let us invent freely; the second will let us enjoy the fruits of our invention safely. The chance to pause between design and release will help us avoid deadly surprises.

 Sealed assembler labs will enable the whole of society to apply its creativity to the problems of nanotechnology. And this will speed our preparations for the time when an independent force learns how to build something nasty.

Hiding Information

In another tactic for buying time, the leading force can attempt to burn the bridge it built from bulk to molecular technology. This means destroying the records of how the first assemblers were made (or making the records thoroughly inaccessible). The leading force may be able to develop the first, crude assemblers in such a way that no one knows the details of more than a small fraction of the whole system. Imagine that we develop assemblers by the route outlined in Chapter 1. The protein machines that we use to build the first crude assemblers will then promptly become obsolete. If we destroy the records of the protein designs, this will hamper efforts to duplicate them, yet will not hamper further progress in nanotechnology.

 If sealed labs and limited assemblers are widely available, people will have little scientific or economic motivation to redevelop nanotechnology independently, and burning the bridge from bulk technology will make independent development more difficult. Yet these can be no more than delaying tactics. They won't stop independent development; the human urge for power will spur efforts which will eventually succeed. Only detailed, universal policing on a totalitarian scale could stop independent development indefinitely. If the policing were conducted by anything like a modern government, this would be a cure roughly as dangerous as the disease. And even then, would people maintain perfect vigilance forever?

 It seems that we must eventually learn to live in a world with untrustworthy replicators. One sort of tactic would be to hide behind a wall or to run far away. But these are brittle methods: dangerous replicators might breach the wall or cross the distance, and bring utter disaster. And, though walls can be made proof against small replicators, no fixed wall will be proof against large-scale, organized malice. We will need a more robust, flexible approach.

Active Shields

It seems that we can build nanomachines that act somewhat like the white blood cells of the human immune system: devices that can fight not just bacteria and viruses, but dangerous replicators of all sorts. Call an automated defense of this sort an active shield, to distinguish it from a fixed wall.

 Unlike ordinary engineering systems, reliable active shields must do more than just cope with nature or clumsy users. They must also cope with a far greater challenge - with the entire range of threats that intelligent forces can design and build under prevailing circumstances. Building and improving prototype shields will be akin to running both sides of an arms race on a laboratory scale. But the goal here will be to seek the minimum requirements for a defense that reliably prevails.

 In Chapter 5, I described how Dr. Lenat and his EURISKO program evolved successful fleets to fight according to the rules of a naval-warfare simulation game. In a similar way, we can make into a game the deadly serious effort to develop reliable shields, using sealed assembler labs of various sizes as playing fields. We can turn loose a horde of engineers, computer hackers, biologists, hobbyists, and automated engineering systems, all invited to pit their systems against one another in games limited only by the initial conditions, the laws of nature, and the walls of the sealed labs. These competitors will evolve threats and shields in an open-ended series of microbattles. When replicating assemblers have brought abundance, people will have time enough for so important a game. Eventually we can test promising shield systems in Earthlike environments in space. Success will make possible a system able to protect human life and Earth's biosphere from the worst that a fistful of loose replicators can do.

Is Success Possible?

With our present uncertainties, we cannot yet describe either threats or shields with any accuracy. Does this mean we can't have any confidence that effective shields are possible? Apparently we can; there is a difference, after all, between knowing that something is possible and knowing how to do it. And in this case, the world holds examples of analogous successes.

 There is nothing fundamentally novel about defending against invading replicators; life has been doing it for ages. Replicating assemblers, though unusually potent, will be physical systems not unlike those we already know. Experience suggests that they can be controlled.

 Viruses are molecular machines that invade cells; cells use molecular machines (such as restriction enzymes and antibodies) to defend against them. Bacteria are cells that invade organisms; organisms use cells (such as white blood cells) to defend against them. Similarly, societies use police to defend against criminals and armies to defend against invaders. On a less physical level, minds use meme systems such as the scientific method to defend against nonsense, and societies use institutions such as courts to defend against the power of other institutions.

 The biological examples in the last paragraph show that even after a billion-year arms race, molecular machines have maintained successful defenses against molecular replicators. Failures have been common too, but the successes do indicate that defense is possible. These successes suggest that we can indeed use nanomachines to defend against nanomachines. Though assemblers will bring many advances, there seems no reason why they should permanently tip the balance against defense.

 The examples given above - some involving viruses, some involving institutions - are diverse enough to suggest that successful defense rests on general principles. One might ask, Why do all these defenses succeed? But turn the question around: Why should they fail? Each conflict pits similar systems against each other, giving the attacker no obvious advantage. In each conflict, moreover, the attacker faces a defense that is well established. The defenders fight on home ground, giving them advantages such as prepared positions, detailed local knowledge, stockpiled resources, and abundant allies - when the immune system recognizes a germ, it can mobilize the resources of an entire body. All these advantages are general and basic, having little to do with the details of a technology. We can give our active shields the same advantages over dangerous replicators. And they need not sit idle while dangerous weapons are amassed, any more than the immune system sits idle while bacteria multiply.

 It would be hard to predict the outcome of an open-ended arms race between powers equipped with replicating assemblers. But before this situation can arise, the leading force seems likely to acquire a temporary but overwhelming military advantage. If the outcome of an arms race is in doubt, then the leading force will likely use its strength to ensure that no opponents are allowed to catch up. If it does so, then active shields will not have to withstand attacks backed by the resources of half a continent or half a solar system; they will instead be like a police force or an immune system, facing attacks backed only by whatever resources can be gathered in secret within the protected territory.

 In each case of successful defense that I cited above, the attackers and the shields have developed through broadly similar processes. The immune system, shaped by genetic evolution, meets threats also shaped by genetic evolution. Armies, shaped by human minds, also meet similar threats. Likewise, both active shields and dangerous replicators will be shaped by memetic evolution. But if the leading force can develop automated engineering systems that work a millionfold faster than human engineers, and if it can use them for a single year, then it can build active shields based on a million years' worth of engineering advance. With such systems we may be able to explore the limits of the possible well enough to build a reliable shield against all physically possible threats.

 Even without our knowing the details of the threats and the shields, there seems reason to believe that shields are possible. And the examples of memes controlling memes and of institutions controlling institutions also suggest that AI systems can control AI systems.

 In building active shields, we will be able to use the power of replicators and AI systems to multiply the traditional advantages of the defending force: we can give it overwhelming strength through abundant, replicator-built hardware with designs based on the equivalent of a million-year lead in technology. We can build active shields having strength and reliability that will put past systems to shame.

 Nanotechnology and artificial intelligence could bring the ultimate tools of destruction, but they are not inherently destructive. With care, we can use them to build the ultimate tools of peace.

Mind·X Discussion About This Article:

Drexler: Engines of Destruction
posted on 01/14/2007 2:31 PM by doojie


It seems Drexler has not developed any ideas not already developed earlier by one called Yahweh.

Example: Tower of Babel. Humans unite to build a tower "whose top may reach unto God". Organization focusing on one singular goal, leading to entropy and accelerated destruction due to no redundancy.

Solution: Yahweh creates redundancy by confusing their languages, so that their imaginations may function, but total destruction will be avoided.

Example: Empires form with names like Egypt, Babylon, Persia, Greece, Rome.

Solution: A small nation called Israel, conditioned to live by "God's law" in all environments, "informs" each empire. The empire breaks apart; Israel remains to "inform" later empires.

Israel's function was to create increasing redundancy. The reason for this is stated in Romans 8:7: The natural, physical mind is enmity against God. It cannot be subject to God's laws. This has two basic results:
1. There can be no human authority representing the truth of "God," since no human can be subject to God's laws.
2. Any attempt to do so results in a continual splintering and speciation of "God concepts" into infinity.

These results have two corollaries in mathematics:
Gödel's incompleteness theorem, which states that no system with the complexity of number theory will be complete: there will always exist truths that can neither be proven nor disproven, and the consistency of such a system cannot be proven from within the system itself.
This extends to Chaitin's theorem: in any axiomatic system, there exists an infinity of undecidable propositions.

Gödel's theorem has an analogue in law: no matter how complete we try to make laws, there will always exist "gaps" that must be filled by more and more laws, into infinity.

Redundancy, therefore, is built into the system by the nature of our thinking.

That same redundancy is driven by the need to replicate, which is fueled by the "selfish gene" seeking to replicate itself perfectly into future generations, so that social institutions will be driven to replicate themselves physically, yet continually speciate due to the incompleteness of their symbolic thought processes.

Higher complexity, intelligence, and self-awareness result. Because we are driven to replicate, social forms will follow the genetic algorithm at a social level, seeking to prove their "truth" by converting more and more into a singular form, but finding opposition due to incompleteness of thought processes.

Drexler's system has already been designed, and is working quite well.

Re: Chapter 11: The Engines of Destruction
posted on 01/14/2007 2:54 PM by czarstar


What will we do when replicating assemblers can make almost anything without human labor?


This is happening now with the advancement of robotics. The elite are growing wealthier in their automated factories. The master/servant Monetary Monarch is becoming solidified in robotic arms. Advancement is great, but social advancement must parallel it or there will be a great uprising.

Re: Chapter 11: The Engines of Destruction
posted on 01/15/2007 5:20 AM by Extropia


I will not deny that there is a divide between rich and poor. But I would argue that people from low-income families (in the West, at least) hardly live the life of hardship that their ancestors endured. In fact, I would be so bold as to say their standards of living would place them above most of the aristocracy of past ages!

Just look at what we expect from our computers: Kurzweil, in 1979, had use of a multimillion-dollar IBM 7094 with 32K memory and .25 MIPS processor speed. Yawn. I have a 2.0 GHz Dell laptop with 2048 MB 667 MHz SDRAM, a 100 GB hard drive and a 256 MB graphics card. If that is not enough, my fast Internet connection links me to Google's data farms, turning my laptop into a window on hundreds of petabytes of information that kings of old would have gone to war for.

Because I am wealthier than Kurzweil? Um....no. Because the law of accelerating returns dramatically reduced the cost of IT. Nanotech will bring this kind of price drop to many new areas, perhaps all of technology.

In any case, it is worth remembering that we already have a grey goo attack to contend with. If grey goo is something that uses up precious resources in order to replicate itself, then quite clearly the human race is grey goo. Because Earth has finite resources, the only way to 'make them go further' is to learn to work with matter and energy at finer and finer scales - invariably heading towards shuttling individual atoms with precision. Even that may not stave off Malthusian disaster permanently, but any solution after that will probably be up to minds superior to humans to uncover.

Re: Chapter 11: The Engines of Destruction
posted on 01/15/2007 5:28 AM by Extropia


What will we do when replicating assemblers can make almost anything without human labor?

Paradoxically, we will all be working harder.

Why? Because no longer will we be working because we HAVE to, but because we WANT to. The immense improvements in computational power and software capability that shall accompany the rise of the smart robot workforce will enable the crafting of virtual worlds beyond the ken of the most forward-thinking Second Lifer, leading to an explosion of new career opportunities. Nanofactories, claytronics and fog swarms will ensure virtual reality exists in the natural environment, while nanobots or some kind of brain/machine interface will let us 'go' to cyberspace. The boundary between the two will no longer exist.

Dedicating their time to pursuing what they love, people will inevitably put more time and effort into their chosen professions than they would in today's climate where (save for the lucky minority) work is DULL DULL DULL DULL DULL DULL repeat ad infinitum.

Re: Chapter 11: The Engines of Destruction
posted on 01/15/2007 6:46 AM by Jake Witmer


...I agree with you, as far as the humans that are still alive. And no, that isn't a radical environmental doom and gloom remark. It's a remark cautioning against the now ubiquitous (even here) view that government (coercion) is the answer to every problem.

Let's all make it to the singularity, folks! No reason not to, other than: "we didn't want to do our homework before we went to vote, and we voted for too much government". That would be a shame. Check out my blogs, where I talk about this subject in greater detail:

http://jcwitmer.blogspot.com/index.html
http://freealaska.blogspot.com/

Also, check out Eric Dondero's blog, where he does much the same:
http://www.mainstreamlibertarian.com

Then, get scholarly, and check out:
http://www.hawaii.edu/powerkills

If we never get past the point of tearing down those with great ideas, the "molecular assembler" revolution may never arrive. At least one smart person on this board has figured this out:

http://www.optimal.org

Those with less enlightened political ideas should adapt and imitate Mr. Voss.

-Jake Witmer

Re: Chapter 11: The Engines of Destruction
posted on 07/06/2008 2:04 AM by Jake Witmer

[Top]
[Mind·X]
[Reply to this post]

I hereby disavow all prior recommendations to visit the site "www.mainstreamlibertarian.com". If you do visit the site, please also visit http://www.ericdondero.com

Eric Dondero has been outed as either a fool or an agent provocateur, and interacting with such people is wasteful at best and dangerous at worst. Eric is not without cunning, but he has continually compromised 100% of his philosophy without apparent cause or logical defense. He is not to be trusted as an advocate for freedom, nor as a friend.

My past alliance with him was based on the misunderstanding that he was in some way dedicated to free market political reform. He has since been proven to be an advocate of mindless militarism above all other ideas. This negates 100% of the value he has to free market reform, because he will support the candidate who is the greatest whore to American military might, irrespective of how much risk, danger, or obvious destruction they might pose to the idea of free markets.

Of course, Eric occasionally supports actual libertarian candidates, because if he didn't, no one would listen to him at all.

Don't bother with "mainstream libertarian" if you want useful information. There are vastly better sources. Such as "Third Party Watch" and http://lastfreevoice.wordpress.com/

Eric is unable to determine the difference between state control and individual freedom, or he simply doesn't care. As one example of his dementia: Because he was then admittedly trying to land a job with the (doomed) Giuliani campaign for president, he advocated that libertarians support Giuliani over Ron Paul. Now then, even if you're an advocate of the free market, you might conceivably have had problems with Ron Paul, but there is no doubt that he was more in favor of unrestricted markets than Giuliani was, or ever would be. And Eric was ugly and incessant in his fraudulent pimping of Giuliani, and his infantile smears of Ron Paul (his former employer).

-Jake