
    In Response to
by Ray Kurzweil

Although George Gilder and Richard Vigilante share Ray Kurzweil's grave concerns about Bill Joy's apparently neo-Luddite calls for relinquishing broad areas of technology, Kurzweil is critical of Gilder and Vigilante's skepticism regarding the feasibility of the dangers.


Portions of this response were published in The American Spectator, March 2001. Published on KurzweilAI.net July 25, 2001.

George Gilder's "Stop Everything...It's Techno-Horror!" can be read here.

Fundamentally, George Gilder and Richard Vigilante and I share a deeply critical reaction to Bill Joy's prescription of relinquishment of "our pursuit of certain types of knowledge." Just as George Soros attracted attention by criticizing the capitalist system of which he was a primary beneficiary, the credibility of Joy's treatise on the dangers of future technology has been enhanced by his reputation as a primary architect of contemporary technology. Being a technologist, Joy claims not to be anti-technology, saying that we should keep the beneficial technologies, and relinquish only those dangerous ones, like nanotechnology. The problem with Joy's view is that the dangerous technologies are exactly the same as the beneficial ones. The same biotechnology tools and knowledge that will save millions of future lives from cancer and other diseases could potentially provide a terrorist with the means for creating a bioengineered pathogen. The same nanotechnology that will eventually help clean up the environment and provide material products at almost no cost could also be misused to introduce new nonbiological pathogens.

I call this the deeply intertwined promise and peril of technology, and it's not a new story. Technology empowers both our creative and destructive natures. Stalin's tanks and Hitler's trains used technology. Yet few people today would really want to go back to the short (human life span less than half of today's), brutish, disease-filled, poverty-stricken, labor-intensive, disaster-prone lives that ninety-nine percent of the human race struggled through a few centuries ago.

We can't have the benefits without at least the potential dangers. The only way to avoid the dangerous technologies would be to relinquish essentially all of technology. And the only way to accomplish that would be a totalitarian system (e.g., Brave New World) in which the state has exclusive use of technology to prevent everyone else from advancing technology. Joy's recommendation does not go that far obviously, but his call for relinquishing broad areas of the pursuit of knowledge is based on an unrealistic assumption that we can parse safe and risky areas of knowledge.

Gilder and Vigilante write, "in the event of... an unplanned bio-catastrophe, we would be far better off with a powerful and multifarious biotech industry with long and diverse experience in handling such perils, constraining them, and inventing remedies than if we had 'relinquished' these technologies to a small elite of government scientists, their work closely classified and shrouded in secrecy."

I agree quite heartily with this eloquent perspective. Consider, as a contemporary test case, how we have dealt with one recent technological challenge. There exists today a new form of fully nonbiological self-replicating entity that didn't exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer network medium they live in. Yet the "immune system" that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a tiny fraction of the benefit we receive from the computers and communication links that harbor them.

One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive future nanotechnology. Although true, this only strengthens my observation. The fact that computer viruses are not usually deadly to humans (although they can be if they intrude on mission-critical systems such as airplanes and intensive care units) only means that more people are willing to create and release them. It also means that our response to the danger is relatively relaxed. Conversely, when it comes to future self-replicating entities that may be potentially lethal on a large scale, our response on all levels will be vastly more intense.

Joy's treatise is effective because he paints a picture of future dangers as if they were released on today's unprepared world. The reality is that the sophistication and power of our defensive technologies and knowledge will grow along with the dangers. When we have gray goo, we will also have blue goo ("police" nanobots that combat the "bad" nanobots). The story of the twenty-first century has not yet been written, so we cannot say with assurance that we will successfully avoid all misuse. But the surest way to prevent the development of the defensive technologies would be to relinquish the pursuit of knowledge in broad areas, which would only drive these efforts underground where they would be dominated by the least reliable practitioners (e.g., the terrorists).

There is still a great deal of suffering in the world. Are we going to tell the millions of cancer patients that we're canceling all cancer research despite very promising emerging treatments because the same technology might be abused by a terrorist? Consider the following tongue-in-cheek announcement, which I read during a radio debate with Joy: "Sun Microsystems announced today that it was relinquishing all research and development that might improve the intelligence of its software, the computational power of its computers, or the effectiveness of its networks due to concerns that the inevitable result of progress in these fields may lead to profound and irreversible dangers to the environment and even to the human race itself. 'Better to be safe than sorry,' Sun's Chief Scientist Bill Joy was quoted as saying. Trading of Sun shares was automatically halted in accordance with Nasdaq trading rules after dropping by 90 percent in the first hour of trading." Joy did not find my mock announcement amusing, but my point is a serious one: advancement in a broad array of technologies is an economic imperative.

Although I agree with Gilder and Vigilante's opposition to the essentially totalitarian nature of the call for relinquishment of broad areas of the pursuit of knowledge and technology, their American Spectator article directs a significant portion of its argument against the technical feasibility of the dangers. This is not the best strategy in my view to counter Joy's thesis. We don't have to look further than today to see that technology is a double-edged sword.

They write, for example, "But there are, to date, no nanobots," and go on to cast doubt on their feasibility. Of course, it is the nature of future technology that it doesn't exist today. But Gilder, as the author of two outstanding books (Microcosm and Telecosm) that document the exponential growth of diverse technologies, recognizes that these trends are not likely to stop any time soon. Combined with the equally compelling trend of miniaturization (we're currently shrinking both electronic and mechanical technology by a factor of 5.6 per linear dimension per decade), it is reasonable to conclude that technologies such as nanobots are inevitable within a few decades. There are many positive reasons that nanobots will be developed, including dramatic implications for health, the environment, and the economy.
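To make that projection concrete, here is a rough back-of-the-envelope sketch in Python (my illustration only; the starting and target feature sizes are assumptions, not figures from the article):

```python
import math

# Back-of-the-envelope projection: how many decades of shrinking by a factor
# of 5.6 per linear dimension per decade would carry micron-scale devices
# down to nanometer-scale key features?
SHRINK_PER_DECADE = 5.6          # rate cited in the text above
start_scale_m = 1e-6             # assumed starting point: ~1 micron features
target_scale_m = 1e-9            # assumed target: ~1 nanometer key features

decades = math.log(start_scale_m / target_scale_m) / math.log(SHRINK_PER_DECADE)
print(f"~{decades:.1f} decades of continued miniaturization")   # roughly 4 decades
```

On those assumed endpoints the answer comes out to roughly four decades, consistent with the "few decades" conclusion above.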

Gilder and Vigilante refer to the "Joy-Drexler-Lovins, GNR, trinity of Techno-Horror." I would suggest not including Eric Drexler in this line-up. As the principal original theorist of the feasibility of technology on a nanometer scale, and the founder, along with his wife, Christine Peterson, of the Foresight Institute, a leading nanotechnology think tank, Drexler is hardly anti-nanotechnology. I would not call Drexler's vision "bipolar" and "manic-depressive" because his original treatise describes the potential dangers of self-replicating entities built on a nanometer scale (which, incidentally, does not mean that the entities are one nanometer in size, but rather that key features are measured in nanometers). We clearly don't consider the nuclear power industry to be anti-nuclear power, but we would nonetheless expect them to recognize the potential dangers of a reactor meltdown, and to take stringent steps to avoid such a disaster.

The Foresight Institute has been developing ethical guidelines and technology strategies to avoid potential dangers of future nanotechnology, but that doesn't make them anti-nanotechnology. An example of an ethical guideline is the avoidance of physical entities that can self-replicate in a natural environment. An example of a technology strategy is what nanotechnologist Ralph Merkle calls the "Broadcast Architecture." Merkle's idea is that replicating entities would have to obtain self-replicating codes from a centralized secure server, which would guard against undesirable replication. The Broadcast Architecture is impossible in the biological world, which represents at least one way in which nanotechnology can be made safer than biotechnology.
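As a purely conceptual sketch of that idea in Python (my illustration, not Merkle's actual design: the server, permit format, and key handling are assumptions, and a real scheme would presumably use asymmetric signatures so that a device never holds the server's secret):

```python
import hmac
import hashlib
import os

# Toy illustration of "broadcast"-style control: a replicator that refuses to
# copy itself unless a central authority has signed a one-time permit.
SERVER_SECRET = os.urandom(32)          # held by the central server (assumption)

def server_issue_permit(replicator_id: str) -> tuple[bytes, bytes]:
    """Central server issues a single-use replication permit (nonce + signature)."""
    nonce = os.urandom(16)
    signature = hmac.new(SERVER_SECRET, replicator_id.encode() + nonce,
                         hashlib.sha256).digest()
    return nonce, signature

class Replicator:
    def __init__(self, replicator_id: str):
        self.replicator_id = replicator_id

    def try_replicate(self, nonce: bytes, signature: bytes) -> bool:
        """Replicate only if the permit verifies; otherwise do nothing."""
        expected = hmac.new(SERVER_SECRET, self.replicator_id.encode() + nonce,
                            hashlib.sha256).digest()
        if hmac.compare_digest(expected, signature):
            print(f"{self.replicator_id}: permit valid, one replication performed")
            return True
        print(f"{self.replicator_id}: no valid permit, replication refused")
        return False

# Usage: replication succeeds only with a server-issued permit.
bot = Replicator("nanobot-001")
bot.try_replicate(os.urandom(16), os.urandom(32))        # forged permit -> refused
bot.try_replicate(*server_issue_permit("nanobot-001"))   # valid permit -> allowed
```

The essential property is the one described above: replication cannot proceed locally and autonomously, because every copy requires a permission that only the central authority can mint.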

Much of Gilder and Vigilante's criticism of the feasibility of future technologies centers on genetic algorithms and other self-organizing programs, as if the plan were to simply (and mindlessly) recreate the powers of the natural world by rerunning evolution. We find a variety of self-organizing paradigms in the few dozen regions of the human brain that we currently understand (with several hundred regions left to be reverse engineered). Self-organization is a powerful concept, but it is hardly automatic. We use a variety of self-organizing methods in my own field of pattern recognition, and they are critical to achieving a variety of intelligent behaviors. But the accelerating progression of technology is not fueled by an automatic process of simulating evolution. Rather, it is the result of many interacting trends: vastly more powerful computation and communication technologies, about which Gilder has written so extensively, the exponentially shrinking size of technology, our exponentially growing knowledge of the human biogenetic system and of the human brain and nervous system, and many other salient accelerating and intersecting developments.
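To illustrate why self-organization is "hardly automatic," here is a minimal genetic-algorithm toy in Python (my sketch, not anything from the article): even in this tiny example the representation, fitness function, selection scheme, and mutation rate all have to be chosen by a designer.

```python
import random

# Minimal genetic algorithm: evolve a bit string toward an all-ones target.
# Every ingredient below -- the representation, the fitness function, the
# selection scheme, the mutation rate -- is a hand-made design decision.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 200, 0.02

def fitness(genome):                      # designer-chosen objective: count the 1 bits
    return sum(genome)

def mutate(genome):                       # designer-chosen variation operator
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):                      # single-point crossover, another design choice
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(1, GENERATIONS + 1):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break                              # perfect genome found
    parents = population[: POP_SIZE // 2]  # truncation selection (a design choice)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness {fitness(best)}/{GENOME_LEN} after {generation} generations")
```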

Gilder and Vigilante cite Joy's "respectful" quotations of Unabomber Ted Kaczynski. These Kaczynski quotes come from my book, and I cited them to analyze specifically where Kaczynski's thinking goes wrong. For example, I quoted the following statement from his Unabomber manifesto: "You can't get rid of the 'bad' parts of technology and retain only the 'good' parts. Take modern medicine, for example. Progress in medical science depends on progress in chemistry, physics, biology, computer science and other fields. Advanced medical treatments require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society. Clearly you can't have much progress in medicine without the whole technological system and everything that goes with it."

As far as it goes, this statement of Kaczynski's is essentially correct. Where Kaczynski and I part company (and I am sure Gilder and Vigilante as well) is his conclusion that the "bad" parts greatly outweigh the good parts. Given this, it is only logical to get rid of all further technology development. Joy's position is that we relinquish only the "bad" parts, but on this point I believe that Kaczynski's articulation of the infeasibility of such parsing is correct. We have a fundamental choice to make. Kaczynski stands for violent suppression of the pursuit of knowledge, and of the values of freedom that go along with it. Joy would relinquish only broad areas of knowledge and leave this task presumably to some sort of government enforcement. But nanotechnology is not a simple unified field; rather, it is the inevitable end result of the ongoing exponential trend of miniaturization in all areas of technology, which continues to move forward on hundreds of fronts.

Gilder has written with great enthusiasm and insight in his books and newsletters of the exponential growth of many technologies, including Gilder's Law on the explosion of bandwidth. In my own writings, I have shown how the exponential growth of the power of technology is pervasive and affects a great multiplicity of areas. The impact of these interacting and accelerating revolutions is significant in the short-term (i.e., over years), but revolutionary in the long term (i.e., over decades). I believe that the most cogent strategy to oppose the allure of the suppression of the pursuit of knowledge is not to deny the potential dangers of future technology nor the theoretical feasibility of disastrous scenarios, but rather to build the case that the continued relatively open pursuit of knowledge is the most reliable (albeit not foolproof) way to reap the promise while avoiding the peril of profound twenty-first century technologies.

I believe that Gilder and Vigilante and I are in essential agreement on this issue. They write the following, which persuasively articulates the point:

"Part of the 'mysterious' realm that Einstein called 'the cradle of all true art and true science,' chance is beyond the ken of inductive reason. When Albert Hirschman writes that 'creativity always comes as a surprise to us,' he is acknowledging this essential property of invention. Any effort to reduce the world to the dimensions of our own present understanding will exclude novelty and progress. The domain of chance is our access to futurity and to providence. 'Trusting to chance' seems terrifying, but it is the only way to be open to possibility."
   
Mind·X Discussion About This Article:

What if...
posted on 07/27/2001 12:42 PM by marc@yebble.com


I understand that there will be larger benefits from nanotechnology than damage. But what if, in the beginning, when there are no nano police and nothing to stop it, something fails? Think of the Melissa virus. And stopping thousands or millions of nanobots somewhere in the world is much more complex than just deactivating certain system features.

Re: What if...
posted on 07/29/2001 3:23 PM by frogigr@mediaone.net


Yes, nanotechnology, much like biotechnology and AI, represents a threat to the evolution of the human race. At the heart of each of these technologies is the creation of matter controlled by science to such an extent that the overgrowth of such will render the open doors of human hearts insignificant. This is precisely why the battle for the hearts of man must be fought now. Only his entry into the heart allows for the voluntary, algorithmic arrangement of human effort as is required for the optimal evolution of mankind. The race is on between two exponentialities, both fast approaching the knee of the curve depicted in Kurzweil's Singularity premise.

In one camp exist those who embrace the affairs of the heart. In the other... those who would rather beat it for fear of leaving anything to chance. Ironic, I suppose, that those who rue chance, should they win the race, will be rewarded with the removal of any remaining.


Ready


Set


Go!!!


"shak'n like a leaf"


frog

Re: What if...
posted on 09/05/2001 11:40 AM by john.b.davey@btinternet.com


Don't worry. Digital computer technology (including neural technologies) will be sinister only in the sense that, like most computer programs, it won't work properly. That is more of a concern than 'superintelligence'.

Re: What if...
posted on 09/05/2001 1:30 PM by tomaz@techemail.com


I feel no urge to go to 'www.traditionalthinkers.com/forum' to tell them how wrong they are.

But those guys have a need to do that. They come here assuring us, for example, that there will be no real AI. Ever. That it is not possible, that Ray Kurzweil or Hans Moravec is not educated enough, that Jesus doesn't approve of transhumanism ...

Why? :-)

- Thomas

Re: What if...
posted on 09/05/2001 1:51 PM by john.b.davey@btinternet.com


I actually came to this site to see if I could get some stuff about AI (of the weaker kind).

'Frankenstein' was written nearly 200 years ago, incidentally, so I would dispute that your interest in artificial life forms is anything other than in the grand tradition of the 'Pioneers of Progress'. Exaggerating the claims of technology is a grand old practice going back 200 years!

And to be frank, I find it fascinating that people still cling to the notion that computer programs can have mental states despite the fact that this is evidently nonsense.

Re: What if...
posted on 09/05/2001 3:05 PM by tomaz@techemail.com


John!

To be frank, I disagree on some things with the majority (???) of people here.

I don't care if most people don't buy the Singularity memeplex.

As Jean-Henri Fabre said:

Seek those who find your road agreeable, your personality and mind stimulating, your philosophy acceptable, and your experiences helpful. Let those who do not, seek their own kind.



- Thomas



Re: In Response to "Stop Everything...It's Techno-Horror"
posted on 07/28/2001 1:13 PM by master of suspicion


Kaczynski has proven one thing:
A man brilliant in one field can be an idiot in another. A warning to technologists.

Re: In Response to "Stop Everything...It's Techno-Horror"
posted on 09/05/2001 3:13 PM by tomaz@techemail.com


Thanks for the warning. How sweet!

But the Unabomber was an anti-technologist, as I recall - one who opposed science and technology.

- Thomas

Re: In Response to "Stop Everything...It's Techno-Horror"
posted on 08/28/2001 4:47 PM by jpjolly


I am a great admirer of Raymond Kurzweil's visionary work, but I fail to see the eloquence in the Gilder piece. It does nothing but make fun of Joy. The NRA nonsense is particularly troubling - that sure won't raise the level of this discussion.

Either these technologies are revolutionary and are going to have big effects, both positive and negative, or they are no big deal. We already know that advances in nanotechnology, biotechnology, and AI represent milestones in human history. So when scientists stand up and say that only the positive effects will be felt, it's hard to believe they are thinking scientifically. In fact, it makes you wonder if they are anything more than geek lobbyists. Some scientists should probably be asking themselves if they haven't gotten so carried away with their own Mensa membership that they don't think they are susceptible to manipulation.

If you take strong medicine, you have to accept the side effects, too - sure, you can take something homeopathic without side effects, but it won't make you well, either. The question facing us with the new technologies is whether we need the medicine badly enough to accept the side effects, too. The argument that 'every new technological advance has had radical effects on society' is too cheap - humanity has never faced the prospect of taking its fate into its own hands to such a degree.

The main players in scientific progress today are scientists, business people, and military people, all obsessed with 'getting there first'. Public awareness and understanding of just what is becoming technically possible is low. So who is going to explain the downsides in a sensible discussion, if not scientists? "Why the Future Doesn't Need Us" is like Bill Joy's career, full of intuition and inspiration. Those traits are necessary for making big discoveries. Maybe you have to have them to understand the problems they bring, too.

Re: In Response to "Stop Everything...It's Techno-Horror"
posted on 09/04/2001 3:52 PM by bwkaplan@eos.ncsu.edu


"Why the Future Doesn't Need Us" is like Bill Joy's career, full of intuition and inspiration."

First the statement: Bill Joy is a hypocrite. He is sanguine enough to abstain from the perils of some future technologies only after he has made his millions. Give me a break.

To his credit: It is a discussion that the world needs to be starting today. We, as a community, need to know the implications of what exactly is being 'sold' to us as the future. Someone with fewer credentials probably wouldn't have caused such a stir.

To his argument: I cannot fathom that someone with so much experience in a competitive field is naive enough to believe that there are easily divisible 'good' technologies and 'bad' technologies. I don't know ANYTHING about technology (yet), but I know that much. I agree with R.K. in that the rewards and perils of technology are intimately fused.



Re: In Response to "Stop Everything...It's Techno-Horror"
posted on 11/09/2003 6:26 PM by jmikeal8


I am in agreement with Ray Kurzweil when he said that the 'sophistication and power of our defensive technologies and knowledge will grow along with the dangers.' However, is there, and will there be, an effective enough defense against artificial intelligence? What sort of preventative measures and assurances are being made to ensure that artificial intelligence, if created, will not cause any harm? Considering the computer viruses that have devastated the world - including the Code Red virus, the recent MSBlaster virus, and the SoBig virus - a considerable amount of damage was done all over the world before the patches and new virus definitions were created to resolve the problems. Keeping this in mind, since artificial intelligence will be a great many times smarter than humans, how will we be able to stop it from launching nuclear missiles on the world or causing meltdowns in all nuclear power plants? Short of keeping the artificial intelligence in an isolated area with no connection to the Internet, will we be swift enough in staunching its destructive effects?

Not only do we have to worry about flaws within AI that would cause it to wreak havoc on the world, but we also need to worry about some hacker exploiting bugs or flaws through a virus as well. We have to consider the fact that if AI is created, the power it would inherit may be too much.

- UTSC Student

Re: In Response to "Stop Everything...It's Techno-Horror"
posted on 03/06/2004 4:34 PM by pvansh


This article reflects the debate that has continued for quite a long time among most individuals in the technology industry. Clearly everyone has a belief or point of view about where technology should go and how we should approach the potential discoveries the future may bring.

I find Joy's point of view very intriguing. Along the lines of what another poster wrote, it appears as though Joy is being somewhat of a hypocrite. As an individual who has himself benefited significantly from emerging new technologies, he appears to be turning his back on others who wish to reap the individual and social benefits that he was able to enjoy in the past. It seems as though he believes that his intentions were well-founded and legitimate but that those of others are dangerous and questionable.

Regardless, this is clearly not the main issue of the article. The issue remains whether to support emerging technologies or to oppose them because of their potential risks in the future. I can definitely see where Joy and those who think like him are coming from, but I also believe their ideas are not plausible. One cannot simply stop or prevent innovation. Individuals, since the dawn of mankind, have always tried to improve living conditions and make our lives easier. We cannot assume that mankind will cease to innovate when told to do so. It is simply not possible, and that is the key reason why we cannot stop development in specific areas or cease to develop altogether.

As well, we must consider the failure to develop. If we fail to develop in specific areas, we leave ourselves vulnerable to malicious attacks from those who may develop the technologies to harm others. We must realize that if we are left behind in a technological field, we risk the possibility of being attacked and having little or no defense because of our lack of knowledge in the field. So, as Kurzweil indicates, we must continue to develop, not only to advance our societies but to protect ourselves from the evils that others could perpetrate against us if we don't.

Kurzweil's ideas appear to be the direction in which we are likely to continue, but my greatest disagreement with his view concerns the rate at which these new technologies should develop. Developing too quickly, without sufficient security or expertise in a field, could prevent us from intercepting and defusing the problems that arise. The fear is that boundless development will continue without regard for safeguards or potential security flaws, which could be exploited, causing immeasurable consequences.