Chapter 12: Strategies and Survival
He that will not apply new remedies must expect new evils; for time is the greatest innovator.
- FRANCIS BACON
IN EARLIER CHAPTERS I have stuck close to the firm ground of technological possibility. Here, however, I must venture further into the realm of politics and human action. This ground is softer, but technological facts and evolutionary principles still provide firm points on which to stand and survey the territory.
The technology race, driven by evolutionary pressures, is carrying us toward unprecedented dangers; we need to find strategies for dealing with them. Since we see such great dangers ahead, it makes sense to consider stopping our headlong rush. But how can we?

Personal Restraint

As individuals, we could refrain from doing research that leads toward dangerous capabilities. Indeed, most people will refrain, since most are not researchers in the first place. But this strategy won't stop advances: in our diverse world, others will carry the work forward.

Local Suppression

A strategy of personal restraint (at least in this matter) smacks of simple inaction. But what about a strategy of local political action, of lobbying for laws to suppress certain kinds of research? This would be personal action aimed at enforcing collective inaction. Although it might succeed in suppressing research in a city, a district, a country, or an alliance, this strategy cannot help us guide the leading force; instead, it would let some force beyond our control take the lead. A popular movement of this sort can halt research only where the people hold the power, and its greatest possible success would merely open the way for a more repressive state to become the leading force.
Where nuclear weapons are concerned, arguments can be made for unilateral disarmament and nonviolent (or at least non-nuclear) resistance. Nuclear weapons can be used to smash military establishments and spread terror, but they cannot be used to occupy territory or rule people - not directly. Nuclear weapons have failed to suppress guerrilla warfare and social unrest, so a strategy of disarmament and resistance makes some degree of sense.
The unilateral suppression of nanotechnology and AI, in contrast, would amount to unilateral disarmament in a situation where resistance cannot work. An aggressive state could use these technologies to seize and rule (or exterminate) even a nation of Gandhis, or of armed and dedicated freedom fighters.
This deserves emphasis. Without some novel way to reform the world's oppressive states, simple research-suppression movements cannot have total success. And short of total success, a major success would mean disaster for the democracies. Even if they got nowhere, efforts of this sort would absorb the work and passion of activists, wasting scarce human resources on a futile strategy. Further, efforts at suppression would alienate concerned researchers, stirring fights between potential allies and wasting further human resources. Its futility and divisiveness make this a strategy to be shunned.
Nonetheless, suppression has undeniable appeal. It is simple and direct: "Danger coming? Let's stop it!" Further, successes in local lobbying efforts promise short-term gratification: "Danger coming? We can stop it here and now, for a start!" The start would be a false start, but not everyone will notice. The idea of simple suppression seems likely to seduce many minds. After all, local suppression of local dangers has a long, successful tradition; stopping a local polluter, for example, reduces local pollution. Efforts at local suppression of global dangers will seem similar, however different the effects may be. We will need local organization and political pressure, but they must be built around a workable strategy.

Global Suppression Agreements

In a more promising approach, we could apply local pressure for the negotiation of a verifiable, worldwide ban. A similar strategy might have a chance in the control of nuclear weapons. But stopping nanotechnology and artificial intelligence would pose problems of a different order, for at least two reasons.
First, these technologies are less well-defined than nuclear weapons: because current nuclear technology demands certain isotopes of rare metals, it is distinct from other activities. It can be defined and (in principle) banned. But modern biochemistry leads in small steps to nanotechnology, and modern computer technology leads in small steps to AI. No line defines a natural stopping point. And since each small advance will bring medical, military, and economic benefits, how could we negotiate a worldwide agreement on where to stop?
Second, these technologies are more potent than nuclear weapons: because reactors and weapons systems are fairly large, inspection could limit the size of a secret force and thus limit its strength. But dangerous replicators will be microscopic, and AI software will be intangible. How could anyone be sure that some laboratory somewhere isn't on the verge of a strategic breakthrough? In the long run, how could anyone even be sure that some hacker in a basement isn't on the verge of a strategic breakthrough? Ordinary verification measures won't work, and this makes negotiation and enforcement of a worldwide ban almost impossible.
Pressure for the right kinds of international agreements will make our path safer, but agreements simply to suppress dangerous advances apparently won't work. Again, local pressure must be part of a workable strategy.

Global Suppression by Force

If peaceful agreements won't work, one might consider using military force to suppress dangerous advances. But because of verification problems, military pressure alone would not be enough. To suppress advances by force would instead require that one power conquer and occupy hostile powers armed with nuclear weapons - hardly a safe policy. Further, the conquering power would itself be a major technological force with massive military power and a demonstrated willingness to use it. Could this power then be trusted to suppress its own advances? And even if so, could it be trusted to maintain unending, omnipresent vigilance over the whole world? If not, then threats will eventually emerge in secret, and in a world where open work on active shields has been prevented. The likely result would be disaster.
Military strength in the democracies has great benefits, but military strength alone cannot solve our problem. We cannot win safety through a strategy of conquest and research suppression.
These strategies for stopping research - whether through personal inaction, local inaction, negotiated agreement, or world conquest - all seem doomed to fail. Yet opposition to advances will have a role to play, because we will need selective, intelligently targeted delay to postpone threats until we are prepared for them. Pressure from alert activists will be essential, but to help guide advance, not to halt it.

Unilateral Advance

If attempts to suppress research in AI and nanotechnology seem futile and dangerous, what of the opposite course - an all-out, unilateral effort? But this too presents problems. We in the democracies probably cannot produce a major strategic breakthrough in perfect secrecy. Too many people would be involved for too many years. Since the Soviet leadership would learn of our efforts, their reaction becomes an obvious concern, and they would surely view a great breakthrough on our part as a great threat. If nanotechnology were being developed as part of a secret military program, their intelligence analysts would fear the development of a subtle but decisive weapon, perhaps based on programmable "germs." Depending on the circumstances, our opponents might choose to attack while they still could. It is important that the democracies keep the lead in these technologies, but we will be safest if we can somehow combine this strength with clearly nonthreatening policies.

Balance of Power

If we follow any of the strategies above, we will inevitably stir strong conflict. Attempts to suppress nanotechnology and AI will pit the would-be suppressors against the vital interests of researchers, corporations, military establishments, and medical patients. Attempts to gain unilateral advantage through these technologies will pit the cooperating democracies against the vital interests of our opponents. All strategies will stir conflict, but need all strategies split Western societies or the world so badly?
In search of a middle path, we might seek a balance of power based on a balance of technology. This would seemingly extend a situation that has kept a measure of peace for four decades. But the key word here is "seemingly": the coming breakthroughs will be too abrupt and destabilizing for the old balance to continue. In the past, a country could suffer a technological lag of several years and yet maintain a rough military balance. With swift replicators or advanced AI, though, a lag of a single day could be fatal. A stable balance seems too much to hope for.

Cooperative Development

There is, in principle, a way to ensure a technological balance between the cooperating democracies and the Soviet bloc: we could develop the technologies cooperatively, sharing our tools and information. Though this has obvious problems, it is at least somewhat more practical than it may sound.
Is cooperation possible to negotiate? Failed attempts to negotiate effective arms control treaties immediately leap to mind, and cooperation may seem even more complicated and difficult to arrange. But is it? In arms control, each side is attempting to hinder the other's actions; this reinforces their adversarial relationship. Further, it stirs conflicts within each camp between groups that favor arms limitation and groups that exist to build arms. Worse yet, the negotiations revolve around words and their meanings, but each side has its own language and an incentive to twist meanings to suit itself.
Cooperation, in contrast, involves both sides working toward a shared goal; this tends to blur the adversarial nature of the relationship. Further, it may lessen the conflicts within each camp, since cooperative efforts would create projects, not destroy them. Finally, both sides discuss their efforts in a shared language - the language of mathematics and diagrams used in science and engineering. Also, cooperation has clear-cut, visible results. In the mid-1970s, the U.S. and U.S.S.R. flew a joint space mission, and until political tensions grew they were laying tentative plans for a joint space station. These were not isolated incidents, in space or on the ground; joint projects and technical exchange have gone on for years. For all its problems, technological cooperation has proved at least as easy as arms control - perhaps even easier, considering the great effort poured into the latter.
Curiously, where AI and nanotechnology are concerned, cooperation and effective arms control would have a basic similarity. To verify an arms control agreement would require constant, intimate inspection of each side's laboratories by the other's experts - a relationship as close as the most thorough cooperation imaginable.
But what would cooperation accomplish? It might ensure balance, but balance will not ensure stability. If two gunmen face each other with weapons drawn and fears high, their power is balanced, but the one that shoots first can eliminate the other's threat. A cooperative effort in technology development, unless carefully planned and controlled, would give each side fearsome weapons while providing neither side with a shield. Who could be sure that neither side would find a way to strike a disarming blow with impunity?
And even if one could guarantee this, what about the problem of other powers - and hobbyists, and accidents?
In the last chapter I described a solution to these problems: the development, testing, and construction of active shields. They offer a new remedy for a new problem, and no one has yet suggested a plausible alternative to them. Until someone does, it seems wise to consider how they might be built and whether they might make possible a strategy that can work. Personal restraint, local action, selective delay, international agreement, unilateral strength, and international cooperation - all these strategies can help us in an effort to develop active shields.
Consider our situation today. The democracies have for decades led the world in most areas of science and technology; we lead today in computer software and biotechnology. Together, we are the leading force. There seems no reason why we cannot maintain that lead and use it.
As discussed in the last chapter, the leading force will be able to use several tactics to handle the assembler breakthrough. These include using sealed assembler labs and limited assemblers, and maintaining secrecy regarding the details of initial assembler development. In the time we buy through these (and other) policies, we can work to develop active shields able to give us permanent protection against the new dangers. This defines a goal. To reach it, a two-part strategy seems best.
The first part involves action within the cooperating democracies. We need to maintain a lead that is comfortable enough for us to proceed with caution; if we felt we might lose the race, we might well panic. Proceeding with caution means developing trustworthy institutions for managing both the initial breakthroughs and the development of active shields. The shields we develop, in turn, must be designed to help us secure a future worth living in, a future with room for diversity.
The second part of this strategy involves policies toward presently hostile powers. Here, our aim must be to keep the initiative while minimizing the threat we present. Technological balance will not work, and we cannot afford to give up our lead. This leaves strength and leadership as our only real choice, making a nonthreatening posture doubly difficult to achieve. Here again we have need for stable, trustworthy institutions: if we can give them a great built-in inertia regarding their objectives, then perhaps even our opponents will have a measure of confidence in them.
To reassure our opponents (and ourselves!) these institutions should be as open as possible, consistent with their mission. We may also manage to build institutions that offer a role for Soviet cooperation. By inviting their participation, even if they refuse the terms we offer, we would offer some measure of reassurance regarding our intentions. If the Soviets were to accept, they would gain a public stake in our joint success.
Still, if the democracies are strong when the breakthroughs approach, and if we avoid threatening any government's control over its own territory, then our opponents will presumably see no advantage in attacking. Thus, we can probably do without cooperation, if necessary.

Active Shields vs. Space Weapons

It may be useful to consider how we might apply the idea of active shields in more conventional fields. Traditionally, defense has required weapons that are also useful for offense. This is one reason why "defense" has come to mean "war-making ability," and why "defense" efforts give opponents reason for fear. Presently proposed space-based defenses will extend this pattern. Almost any defensive system that can destroy attacking missiles could also destroy an opponent's defenses - or enforce a space blockade, preventing an opponent from building defenses in the first place. Such "defenses" smell of offense, as seemingly they must, to do their job. And so the arms race gathers itself for another dangerous leap.
Must defense and offense be so nearly inseparable? History makes it seem so. Walls only halt invaders when defended by warriors, but warriors can themselves march off to invade other lands. When we picture a weapon, we naturally picture human hands aiming it and human whim deciding when to fire - and history has taught us to fear the worst.
Yet today, for the first time in history, we have learned how to build defensive systems that are fundamentally different from such weapons. Consider a space-based example. We now can design devices that sense (looks like a thousand missiles have just been launched), assess (this looks like an attempted first strike) and act (try to destroy those missiles!). If a system will fire only at massive flights of missiles, then it cannot be used for offense or a space blockade. Better yet, it could be made incapable of discriminating between attacking sides. Though serving the strategic interests of its builders, it would not be subject to the day-to-day command of anyone's generals. It would just make space a hazardous environment for an attacker's missiles. Like a sea or a mountain range in earlier wars, it would threaten neither side while providing each with some protection against the other.
Though it would use weapons technologies (sensors, trackers, lasers, homing projectiles, and such), this defense wouldn't be a weapons system, because its role would be fundamentally different. Systems of this sort need a distinctive name: they are, in fact, a sort of active shield - a term that can describe any automated or semiautomated system designed to protect without threatening. By defending both sides while threatening neither, active shields could weaken the cycle of the arms race.
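To make the sense-assess-act logic concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration invented for this example - the Track record, the threshold of one hundred, and the two responses - not a design drawn from any actual proposal:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A sensed object, reduced to the one fact the rule needs."""
    is_boosting_missile: bool  # sensor classification (hypothetical)

# Hypothetical threshold: respond only to a massive flight of missiles.
MASS_LAUNCH_THRESHOLD = 100

def assess(tracks: list) -> bool:
    """Assess: does this look like an attempted first strike?

    Note that the rule counts missiles without asking whose they are,
    so it cannot be aimed at one side or used to enforce a blockade.
    """
    missiles = sum(1 for t in tracks if t.is_boosting_missile)
    return missiles >= MASS_LAUNCH_THRESHOLD

def act(tracks: list) -> str:
    """Act only on a massive flight; lone launches pass unopposed."""
    if assess(tracks):
        return "engage all boosting missiles"
    return "stand by"

# A single test launch is ignored; only a mass attack triggers a response.
print(act([Track(is_boosting_missile=True)]))        # stand by
print(act([Track(is_boosting_missile=True)] * 150))  # engage all boosting missiles
```

The point lies in the trigger condition itself: the same rule that makes the system useful against a mass attack makes it useless for offense, because nothing in its logic distinguishes one side's missiles from the other's.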
The technical, economic, and strategic issues raised by active shields are complex, and they may or may not be practical in the preassembler era. If they are practical, then there will be several possible approaches to building them. In one approach, the cooperating democracies would build shields unilaterally. To enable other nations to verify what the system will and (more important) won't do, we could allow multilateral inspection of key designs, components, and production steps. We needn't give away all the technologies involved, because know-what isn't the same as know-how. In a different approach, we would build shields jointly, limiting technology transfer to the minimum required for cooperation and verification (using principles discussed in the Notes).
We have more chance of banning space weapons than we do of banning nanotechnology, and this might even be the best way to minimize our near-term risks. In choosing a long-term strategy for controlling the arms race, though, we must consider more than the next step. The analysis I have outlined in this chapter suggests that traditional arms control approaches, based on negotiating verifiable limitations, cannot cope with nanotechnology. If this is the case, then we need to develop alternative approaches. Active shields - which seem essential, eventually - may offer a new, stabilizing alternative to an arms race in space. By exploring this alternative, we can explore basic issues common to all active shields. If we then develop them, we will gain experience and build institutional arrangements that may later prove essential to our survival.
Active shields are a new option based on new technologies. Making them work will require a creative, interdisciplinary synthesis of ideas in engineering, strategy, and international affairs. They offer fresh choices that may enable us to avoid old impasses. They apparently offer an answer to the ancient problem of protecting without threatening - but not an easy answer.

Power, Evil, Incompetence & Sloth

I have outlined how nanotechnology and advanced AI will give great power to the leading force - power that can be used to destroy life, or to extend and liberate it. Since we cannot stop these technologies, it seems that we must somehow cope with the emergence of a concentration of power greater than any in history.
We will need a suitable system of institutions. To handle complex technologies safely, this system must have ways to judge the relevant facts. To handle great power safely, it must incorporate effective checks and balances, and its purposes and methods must be kept open to public scrutiny. Finally, since it will help us lay the foundations for a new world, it had best be guided by our shared interests, within a framework of sound principles.
We won't start from scratch; we will build on the institutions we have. They are diverse. Not all of our institutions are bureaucracies housed in massive gray buildings; they include such diffuse and lively institutions as the free press, the research community, and activist networks. These decentralized institutions help us control the gray, bureaucratic machines.
In part, we face a new version of the ancient and general problem of limiting the abuse of power. This presents no great, fundamental novelty, and the centuries-old principles and institutions of liberal democracy suggest how it may be solved. Democratic governments already have the physical power to blast continents and to seize, imprison, and kill their citizens. But we can live with these capabilities because these governments are fairly tame and stable.
The coming years will place greater burdens on our institutions. The principles of representative government, free speech, due process, the rule of law, and protection of human rights will remain crucial. To prepare for the new burdens, we will need to extend and reinvigorate these principles and the institutions that support them; protecting free speech regarding technical matters may be crucial. Though we face a great challenge, there is reason to hope that we can meet it.
There are also, of course, obvious reasons for doubting that we can meet it. But despair is contagious and obnoxious, and it leaves people depressed. Besides, despair seems unjustified, despite familiar problems: Evil - are we too wicked to do the right thing? Incompetence - are we too stupid to do the right thing? Sloth - are we too lazy to prepare?
While it would be rash to predict a rosy future, these problems do not seem insurmountable.
Democratic governments are big, sloppy, and sometimes responsible for atrocities, yet they do not seem evil, as a whole, though they may contain people who deserve the label. In fact, their leaders gain power largely by appearing to uphold conventional ideas of good. Our chief danger is that policies that seem good may lead to disaster, or that truly good policies won't be found, publicized, and implemented in time to take effect. Democracies suffer more from sloth and incompetence than from evil.
Incompetence will of course be inevitable, but need it be fatal? We human beings are by nature stupid and ignorant, yet we sometimes manage to combine our scraps of competence and knowledge to achieve great things. No single person knew how to get to the Moon, and no one ever learned, yet a dozen people have walked its surface. We have succeeded in technical matters because we have learned to build institutions that draw many people together to generate and test ideas. These institutions gain reliability through redundancy, and the quality of their results depends largely on how much we care and how hard we work. When we focus enough attention and resources on reliability, we often succeed. This is why the Moon missions succeeded without a fatality in space, and why no nuclear weapon has ever been launched or detonated by accident. And this is why we may manage to handle nanotechnology and advanced AI with sufficient care to ensure a competent job. Erratic people of limited competence can join to form stable, competent institutions.
Sloth - intellectual, moral, and physical - seems perhaps our greatest danger. We can only meet great challenges with great effort. Will enough people make enough effort? No one can say, because no one can speak for everyone else. But success will not require a sudden, universal enlightenment and mobilization. It will require only that a growing community of people strive to develop, publicize, and implement workable solutions - and that they have a good and growing measure of success.
This is not so implausible. Concern about technology has become widespread, as has the idea that accelerating change will demand better foresight. Sloth will not snare everyone, and misguided thinkers will not misdirect everyone's effort. Deadly pseudo-solutions (such as blocking research) will lose the battle of ideas if enough people debunk them. And though we face a great challenge, success will make possible the fulfillment of great dreams. Great hopes and fears may stir enough people to enable the human race to win through.
Passionate concern and action will not be enough; we will also need sound policies. This will require more than good intentions and clear goals: we must also trace the factual connections in the world that will relate what we do to what we get. As we approach a technological crisis of unprecedented complexity, it makes sense to try to improve our institutions for judging important technical facts. How else can we guide the leading force and minimize the threat of terminal incompetence?
Institutions evolve. To evolve better fact-finding institutions, we can copy, adapt, and extend our past successes. These include the free press, the scientific community, and the courts. All have their virtues, and some of these virtues can be combined.