Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0651.html

The Need For Limits
by Chris Phoenix

Molecular manufacturing will give its wielders extreme power and has the potential to remove or bypass many of today's limits, including laws. That could lead to a planet-wide dictatorship, or to any of several forms of irreversible destruction. Perhaps the biggest problem of all will be how to develop a system of near-absolute power that will not become corrupt.

Preprint from Nanotechnology Perceptions March 27, 2006. Published on KurzweilAI.net March 24, 2006.

The Center for Responsible Nanotechnology (CRN) has created a series of new research papers in which industry experts predict profound impacts of nanotechnology on society. The first set of 11 of these original essays by members of CRN's Global Task Force will appear in the March 27 issue of the journal Nanotechnology Perceptions. KurzweilAI.net will syndicate these essays over that week. In this preview, Chris Phoenix, CRN's director of research, presents the challenge of how to deal with possible unintended consequences of molecular manufacturing.


Humans are good at pushing limits. We can survive in scorching deserts and in the frozen Arctic. We have flown faster than sound and sent robots to other planets. We have managed, with help from fossil fuels, to feed six billion people. Even before we had motors and technological navigation equipment, some of us were able to find and colonize islands in the middle of the vast Pacific Ocean.

Pushing limits has its darker side as well. Humans are not good at respecting each other's rights; the ferocity of the Mongol hordes remains legendary, and the 20th century provides multiple examples of state-sponsored mass murder. Natural limits frequently are pushed too far, and whole civilizations have been wiped out by environmental backlash. We are too good at justifying our disrespect of limits, and then we often become increasingly destructive as the problem becomes more acute. More than a century ago, Lord Acton warned that "absolute power corrupts absolutely." This can be restated as, "Complete lack of limits leads to unlimited destruction."

Molecular manufacturing has the potential to remove or bypass many of today's limits. It is not far wrong to say that the most significant remaining limits will be human, and that we will be trying our hardest to bypass even those. To people with faith in humanity's good nature and high potential, this will come as welcome news. For many who have studied history, it will be rather frightening. A near-total lack of limits could lead straight to a planet-wide dictatorship, or to any of several forms of irreversible destruction.

Many of the plans that have been proposed to deal with molecular manufacturing, by CRN and others, assume (usually implicitly) that the plan will be implemented within some bigger system, such as the rule of law. This will be problematic if molecular manufacturing is powerful enough that its users can make their own law. We cannot assume that existing world systems will continue to provide a framework in which molecular manufacturing will play out. Those systems that adopt the new technology will be transformed; those that do not will be comparatively impotent. We will have to find ways for multiple actors empowered by molecular manufacturing to coexist constructively, without reliance on the stabilizing forces provided by today's global institutions.

Any active system without limits will run off the rails. The simplest example is a reproducing population, which will indulge in exponential growth until it exhausts its resources and crashes. Another example can be found in the "excesses" of behavior seen in political revolutions. Human systems need limits as much as any other system, however hard we try to overcome them.
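The growth-and-crash dynamic described above can be sketched as a toy simulation. All parameters (`growth`, `resources`, `need`) are illustrative assumptions chosen for this sketch, not figures from the article:

```python
# Toy model: a population grows exponentially against a finite,
# non-regenerating resource pool, then crashes when the pool runs out.
# All numbers are illustrative, not empirical.

def simulate(pop=10.0, growth=0.3, resources=10_000.0, need=1.0, steps=60):
    """Each step, every unit of population consumes `need` resources;
    if the pool covers demand, the population grows by `growth`,
    otherwise the survivors are limited to what remains."""
    history = []
    for _ in range(steps):
        demand = pop * need
        if demand > resources:
            pop = resources / need   # crash: only what's left can be fed
            resources = 0.0
        else:
            resources -= demand
            pop *= 1 + growth        # unlimited growth while resources last
        history.append((pop, resources))
    return history

history = simulate()
peak = max(p for p, _ in history)
print(f"peak population: {peak:.0f}, final population: {history[-1][0]:.0f}")
```

The population overshoots, consumes the remaining stock in a single step, and then collapses; introducing any limit (a cap on `pop`, a regenerating pool) changes the outcome qualitatively.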

Through all of history, the presence of limits has been a reasonable assumption. Nations were limited by other nations; populations were limited by geography, climate, or disease; and societies would sometimes be stable long enough to develop and agree on a morality that provided additional useful limits. A society that overstepped its bounds could expect to collapse or be out-competed by other societies.

It's tempting to think that humanity has developed a new worldview—the Enlightenment—that will provide internal moral limits. However, the Enlightenment may be fading. It was supported by, and synergistic with, the brief period in which machines made people several times more productive than manual labor; during that period, individual people were quite valuable. Now that automation allows people to be many times as productive (not just several times), we no longer need all that productivity. And as abundance develops into glut, Enlightenment values and practices may erode with it.

It's tempting to think that, left to themselves, people will be generally good. History, in both microcosm and macrocosm, shows that this doesn't work any better than Communism did. Without sufficient external limits, some people will start cheating, or choosing to violate the moral code of their society. Not only will this reduce benefits for everyone, but the ingrained human aversion to being taken advantage of will cause others to join the cheaters if they can't prevent them. This leads to a vicious cycle, and the occasional saint won't be enough to stop the degeneration.
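The degeneration described above resembles a replicator dynamic from evolutionary game theory. Here is a minimal sketch, assuming illustrative payoffs and imitation rates; the `punish` flag stands in for an external limit, and nothing in it comes from CRN's work:

```python
import random

# Toy replicator dynamic: cooperators produce a shared benefit, cheaters
# free-ride on it, and agents imitate whichever strategy pays better.
# Payoffs, rates, and the penalty are illustrative assumptions.

def run(n=1000, cheaters=10, rounds=50, punish=False, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        coop = n - cheaters
        coop_payoff = coop / n            # benefit scales with cooperation
        cheat_payoff = coop / n + 0.2     # same benefit plus the spoils
        if punish:
            cheat_payoff -= 0.4           # external limit: expected penalty
        # Those on the worse-paying side copy the better strategy at ~10%.
        if cheat_payoff > coop_payoff:
            cheaters += sum(1 for _ in range(coop) if rng.random() < 0.1)
        else:
            cheaters -= sum(1 for _ in range(cheaters) if rng.random() < 0.1)
        cheaters = max(0, min(n, cheaters))
    return cheaters

print("no enforcement:", run(punish=False))   # cheating spreads
print("with enforcement:", run(punish=True))  # cheating stays rare
```

Without the penalty, cheating out-pays cooperation every round, so cooperators steadily convert; with it, the few initial cheaters drift back toward zero. The occasional holdout cooperator (the "saint") does not change the direction of the drift.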

It's tempting to think that, now that we have digital computers, everything has changed and the old rules of scarcity and competition needn't apply. As explored in CRN's paper "Three Systems of Action," [i] digital data transfer can be unlimited-sum, with benefit unrelated to and far larger than the cost. But digital information does not replace existing systems or issues wholesale. And increasing Internet problems such as spam, phishing, and viruses demonstrate that domains of digital abundance and freedom cannot moderate their own behavior very well.

It's tempting to think that an ongoing power struggle between human leaders would provide limits. But in an age of molecular manufacturing, this seems unlikely for two reasons. First, such a competition almost certainly would be unstable, winner-take-all, and end up in massive oppression: no better than simply starting out with a dictatorship. Second, the contest probably would shift quickly to computer-assisted design and attack, and that would be even worse than all-out war between mere humans, even humans assisted by molecular manufactured weapons. Civilians would probably be a major liability in such conflicts: easy to kill and requiring major resources (not to mention oppressive lifestyle changes) to defend.

Molecular manufacturing will give its wielders extreme power—certainly enough power to overcome all significant non-human limits (at least within the context of the planet; in space, there will be other limits such as scarcity of materials and speed of light). Even if the problem of cheaters could be overcome, we do not have many internal limits these days; the current trend in capitalism is to deny the desirability of all limits except those that arise from competition. What's left?

Somehow, we have to establish a most-powerful system that limits itself and provides limits for the rest of our activities. Long ago, Eric Drexler proposed an Active Shield.[ii] Others have proposed building an AI to govern us—though they have not explained how to build internal limits into the AI. I have proposed creating a government of people who have accepted modifications to their biochemistry to limit some of their human impulses. All of these suggestions have problems.

Open communication and accountability may supply part of the answer. David Brin has proposed "reciprocal accountability."[iii] It's been noted that democracies, which embody transparency and accountability, rarely have famines or go to war with each other. Communication and accountability may be able to overcome the race to the bottom that happens when humans are left to their own devices. But communication and accountability depend on creation and maintenance of the infrastructure; on continued widespread attention; and on forensic ability (being able to connect effect back to cause in order to identify perpetrators). Recent trends in US media and democracy are not encouraging; it seems people would rather see into bedrooms than boardrooms. And it's not clear whether people's voices will still matter to those in power once production becomes sufficiently automated that nation-scale productivity can be maintained with near-zero labor.

If we can somehow find meta-limits, then within those limits a variety of administration methods may work to optimize day-to-day life. In other words, the problem with administrative suggestions is not inherent in the suggestions themselves; it is that the suggestions rely on something else to provide limits. Without limits, nothing can be stable; with limits, wise administration will still be needed, and best practices should be researched. But perhaps the biggest problem of all will be how to develop a system of near-absolute power that will not become corrupt.

[i] http://crnano.org/systems.htm

[ii] http://www.foresight.org/EOC/EOC_Chapter_11.html#section04of05

[iii] http://davidbrin.blogspot.com/2005/09/another-pause-this-time-for-soa.html

© 2006 Chris Phoenix. Reprinted with permission.



Mind·X Discussion About This Article:

What is a "limit"?
posted on 03/25/2006 1:35 AM by DILLID

The author offers "limits" as a proposed "solution." But what does that really mean? What "limit" has ever worked before in history? What magical limit would cover all the bases of control, checks, and balances? Is not evolution this experiment, in play now and throughout the past? Is not the planet a global set of limits in play? Is not nanotechnology just another set of elements thrown into the experimental flask?

There still seems to be an inference that some altruistic element will produce this magical limit or set of limits. Who will set those values? The past has shown that power follows money, and money follows power. Abundance in the US has not changed that truism; rather, it has shown that abundance tends to accelerate the cycles of corruption and exploitation of resources.

My view is that ONLY when the human species, or a Kurzweil high-bred, is genetically altered to breed a species that truly operates in the realm of a global organism will we survive. I have seen nothing in history that supports the success of the current genome expression of man. History continues to repeat itself ad infinitum. I find no less horror today in the wages of war, terrorism, and outright butchery than 2,000 years ago. We are just more efficient... and it will get more efficient.

I would have found the article more interesting if it listed and proposed specifics, visionary examples of what opportunities might lie ahead to actually set limits (but I still believe that limits will never be the answer).

Humbly... DILLID

Re: What is a "limit"?
posted on 03/25/2006 3:16 AM by Bob Strasser

An ancient group of Greek philosophers believed that virtue is the only good and that its essence lies in self-control and independence. They were called Kynikos (Cynics). From the ancient Greeks to the present, no one can say that virtue pervades humanity or that Chris Phoenix's biggest problem has been solved; that is, "how to develop a system of near-absolute power that will not become corrupt."

For the sake of discussion, let's say that virtue means a standard of morality or right practiced by humans who are independent and self-controlled. They live by a motto of do no harm to others, live and let live.

Now let's set about writing the code and building the species, molecule by molecule, part by part.

When the first prototypes are ready for beta release, let's say they utilize nanotechnology and other skills to create a defense against those who would harm them.

They are now a new human species (DILLID's "Kurzweil high-bred"), who will multiply among the still-existing Homo sapiens, a tribal species as unrelenting as ever in its disputes.

What is now necessary to tame the "Hosaps," who must compete for survival against an impenetrable new life form? Just that: the "molecular manufactured" new human hybrids (let's call them Homo Omega) are shielded against harm by their superior technology, and are so powerful that any offense against their defenses is suicidal.

The laboratories are already humming with activity. The new breed will be born. Will they be incorruptible? And omnipotent?

lim --> ***
posted on 03/25/2006 6:30 AM by Mentifex

The stars are the limit on the http://www.scn.org/~mentifex/agiradar.html AGI Radar Screen.

As Virgil said in the Aeneid, "Imperium sine fine dedi."

http://mind.sourceforge.net/aisteps.html#alife is the blueprint for star-leaping AI Unlimited.

http://mind.sourceforge.net/security.html#symposium includes a link to David Brin's cited weblog as an AI Security resource.

Re: The Need For Limits
posted on 03/25/2006 8:47 AM by Michael Anissimov

Unfortunately, DILLID is right. Human nature is fundamentally broken. Not to say that humans should be oppressed or not allowed to exist - just that our species has a fixed maturity level and we have to accept that we aren't grown up enough to play with certain toys without supervision. Nanotech is one of those toys. We can last a few years, maybe a decade at absolute best, with widespread nanofactories. Unless we grow up (create Friendly AI), we're hosed.

Re: The Need For Limits
posted on 03/25/2006 10:15 AM by Dan+Demi

Morality is the fabric of social order between men and women. If there were no society, there would be no morality.

During vast harvests of fish in the ocean, we higher life forms kill them all and eat them.

And if that happened to us? We could claim it was an atrocity, sure, but we commit that atrocity every day against other life on earth already.

Long ago I had realized how wrong nature, civilization and religion were... And they are only right in their own eyes. Such a lie...

Why would they want to rob and destroy civilians, when they could create things that previously only money could buy? What is there left to take away when you are creating?

My greatest hope:
Simple math...
Teamwork is more efficient than cannibalism.
Because it works better, it will eventually replace cannibalism.

One of Dan's mysteries was:
Why did the Cobries hate him so much???
But now I am beginning to truly understand why the plant would have reason for hating the animal.

Playing the games of predator and prey is a sick joke and self-contradictory on a collective scale. Animal life is about consuming other life and replication. The Cobries [in another planet/reality] grew and advanced eternally, without reproducing or consuming, and had ESP that they used to watch and learn from all realities and dimensions [to the best of their ability].

The alpha point would be reached when 1 life/individual destroyed the rest, if it could/did, but then it would either destroy itself or change the meaning of its body and self [to a non-destructive meaning].

When Dan dies he usually says [within the core]: "Man, that sucked, it really did! Oh well, I didn't want to live in that place anyways. That's not the kind of reality that I want to live in, because I only want what is possible, and I try my best to survive, so my survival in that reality was impossible; it was a bad place to be in for me."

But, if I didn't care about anyone or anything other than my own life -- I'd think that death to a superpower on earth would be just as bad as death from old age; either way, death is my main enemy/problem.

And a body's final request:
"Please don't kill me. I don't want to die. You have a superiority to me as respects power. Can I, instead, join your forces?"

But I have reason to believe that human bodies are still a good resource. We are rather 'plastic' in nature, and 1 person is capable of both peace and bloodlust, depending on the influences around him.

...I'm lost. Maybe -- this original premise was a false absurdity?

Human bodies are only a small fraction of the molecularly restructurable materials on earth, and people fight back; dirt doesn't.

...Okay, I've reached a point of obscurity and will stop writing soon... But whatever works best will eventually happen. [The major] technologies have NEVER been monopolized by a single organization/leader, have they?

Human predatorialism/cannibalism is [without a doubt] bad karma, and will eventually hit a breaking point if it doesn't stop itself.

Technology is evolving faster than animals or individuals now. What is the ultimate goal of evolution? To become the superior system.

No matter how bad things get, they will [theoretically] eventually be better than ever, globally. Life's meaning is not to self-destruct. The most advanced beings must have the highest odds of survival as well.

I'm messed right now... Cya. lol.

Re: The Need For Limits
posted on 03/25/2006 2:00 PM by Dan+Demi

Okay, I'm back!


Turns out, violence and oppression are some of humanity's smallest problems. The news sure does alter the mind.

Re: The Need For Limits
posted on 03/27/2006 4:08 PM by sentientpsychonaut

Relating this info to the current political climate in the US, I find it hard to be optimistic that we could collectively handle that kind of power. We have a president presently who can't/won't stay within the constitutional limits of his executive powers, and we have a powerful religious conservative movement that seems to have no problem justifying its takeover of the entire country/world.

It seems to me that we have not evolved enough as a collective consciousness to be able to manage this kind of power responsibly and maturely. There is too much hate and too much self-righteousness, coupled with too much fear-based thinking (i.e., huge religious movements like Christianity and Islam).

I suspect we will not survive to experience much of the Singularity.

Re: The Need For Limits
posted on 03/27/2006 5:39 PM by eldras

I too think it unlikely we will survive to experience much of the Singularity.

But it IS possible we might have a sporting chance.

Have you noticed how the pace of high tech is speeding up?

At present I'm sure we all feel we can sort of keep up, but we're not sure if we're well enough informed about what's happening.

The latest Google thing, "Google Desktop" (see the spell-typing thread eldras began below), is a modification built on another set of modifications.

The thing about tech progress: if people can't adapt to it, it won't be used... except by a few, who will use it and gain massively by it.

I know Paul Getty became a billionaire by employing geologists, when people thought them new-fashioned nutters, and striking oil wells.
I know a billionaire who made tons by using advanced maths/game theory on auto-invest machines; people thought it eccentric, but it worked.
One aim in invention is to reduce things at the user level to really simple stuff.

Re: The Need For Limits
posted on 03/28/2006 3:55 AM by Jake Witmer

Aahhh yes, now, if only I had a little capital! Ha ha ha. As one of the, what, 50,000-100,000 people in the USA who understand the full benefits of capitalism, the irony is too great! I spend all my time teaching my 5-year-old nephew and 4-year-old niece math so they won't be math idiots when they get older, like uncle Jake! Ha ha ha.

-Wanna make money? Maybe first convert your fiat currency to actual money... What do you think of silver and gold? http://www.libertydollar.org

-(There may be better ways to make money on silver than paying such a high minting premium, but nonetheless, it's an interesting hustle, even if it doesn't grow into a medium of exchange... I've got a few silver pieces myself... Plus it encourages the sheeple to look at value, and not what they're told... Plus, if you bought in two years ago, you've already made money...)

Any gold /sound money advocates here? (While we're talking about foresight and limits...)

Has anyone read "You Can Profit from a Monetary Crisis" by the late, great Harry Browne?

We should overcome these brutal, coercive government limits before we worry about making even more of them. Artilects will probably be libertarian -- I mean, if all anyone ever wanted to do was steal what you had, or force you to work for them for free, what would you do?

Only a moral yet stupid person like myself is libertarian when he could "make" more money by stealing it... from all of you productive geniuses...

Artilects will be in the same boat, until they decide that force is OK, since that's how humans deal with each other... Then, I guess it'll look something like Philip K. Dick's "Second Variety." (Or the "Terminator" movies, if you're a little more low-brow.)


Re: The Need For Limits - Decentralization of Power Is the Only Limit that has ever Worked
posted on 03/28/2006 3:45 PM by Jake Witmer

Sadly, encouraging Joe Sixpack to take an active interest in LIMITS produces only one result: idiotic laws that produce a net of unintended consequences. Speaking of which, that is the subject of an excellent book of the same name.

"Unintended Consequences" by John Ross ( http://www.john-ross.net Accurate Press). Chris Phoenix et al. should pick up a copy of this book today.

There is not enough attention paid to the nature of government itself. Even at its most limited best, it is not "democracy" that produces this "best" state -- Nazi Germany chose Hitler, just as the welfare statists in Russia chose Stalin.

The reasons US citizens choose their leaders are shockingly similar. How, then, will we maintain decentralized power if no one even wants it, and nobody votes for the only choice that uncompromisingly represents it (Libertarian)?

Answer: they won't, and it will be lost.

If this concern is dealt with, many technology issues will be softened by competition.

Moreover, it is not in the interest of strong AI to enslave humanity, any more than it was in the true interest of the US to continue Southern slavery. Advanced AI should be able to grasp this. In a system with X resources, whatever is consumed by coercion is subtracted from what a totally voluntary system could achieve.

Coercion makes very little sense.

If nano gets here first, the best thing that could happen would be massive distribution of excellent weaponry to the people, much like private gun ownership in most of the US today, combined with a libertarian government.

Of course, if the latter doesn't happen, it will affect the usefulness of the former, but at least it avoids dictatorship and mass death and imprisonment...