Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0136.html

Kurzweil vs. Dertouzos
by Ray Kurzweil and Michael L. Dertouzos

In this Technology Review article, Raymond Kurzweil and Michael Dertouzos debate Bill Joy's Wired article urging "relinquishment" of research in certain risky areas of nanotechnology, genetics, and robotics.

Originally published January 1, 2001 in High Technology. Published on KurzweilAI.net March 7, 2001.

Raymond Kurzweil

Although I agree with Michael Dertouzos' conclusion rejecting Bill Joy's prescription that we relinquish "our pursuit of certain kinds of knowledge," I come to this view by a very different route. I am often paired with Bill Joy as the technology optimist against his pessimism, yet I do share his concerns about the dangers of self-replicating technologies. Michael, I believe, is being shortsighted in his skepticism.

Michael writes that "just because chips...are getting faster doesn't mean they'll get smarter, let alone lead to self-replication." First of all, machines are already "getting smarter." As just one of many contemporary examples, I've recently held conversations with a person who speaks only German by translating my English speech in real time into human-sounding German speech (by combining speech recognition, language translation and speech synthesis) and similarly converting the spoken German replies into English speech. Although not perfect, this capability was not feasible at all just a few years ago. The intelligence of our technology does not need to be at human levels to be dangerous. Second, the implication that self-replication is harder than intelligence is not accurate. Software viruses, although not very intelligent, are self-replicating as well as potentially destructive. Bioengineered biological viruses are not far behind. As for nanotechnology-based self-replication, that is further out, but the consensus in that community is that it will be feasible in the 2020s, if not sooner.
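The translation example above is a pipeline of three components: speech recognition, language translation, and speech synthesis. As an illustration only, here is a minimal Python sketch of that pipeline shape; every function below is a toy stand-in I have invented for this sketch (a one-entry phrase table instead of real recognition, translation, or synthesis engines), not the system Kurzweil used.

```python
# Toy sketch of a speech-to-speech translation pipeline:
# speech recognition -> language translation -> speech synthesis.
# All three stages are hypothetical stand-ins, not real engines.

def recognize_speech(audio: bytes) -> str:
    """Stand-in recognizer: pretend any audio decodes to one fixed English phrase."""
    return "good morning"

def translate(text: str, source: str, target: str) -> str:
    """Stand-in translator backed by a tiny hand-written phrase table."""
    phrase_table = {("en", "de"): {"good morning": "guten Morgen"}}
    return phrase_table[(source, target)].get(text, text)

def synthesize_speech(text: str) -> bytes:
    """Stand-in synthesizer: return the text's UTF-8 bytes as fake 'audio'."""
    return text.encode("utf-8")

def translate_spoken(audio: bytes, source: str = "en", target: str = "de") -> bytes:
    """Chain the three stages, as the article describes."""
    text = recognize_speech(audio)
    translated = translate(text, source, target)
    return synthesize_speech(translated)

print(translate_spoken(b"...").decode("utf-8"))  # -> guten Morgen
```

The point of the sketch is only the composition: each stage is an independently imperfect component, and chaining them yields a capability none of them has alone.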

Many long-range forecasts of technical feasibility dramatically underestimate the power of future technology because they are based on what I call the "intuitive linear" view of technological progress rather than the "historical exponential" view. When people think of a future period, they intuitively assume that the current rate of progress will continue for the period being considered. Careful consideration of the pace of technology shows that the rate of progress is not constant; but it is human nature to adapt to the changing pace, so the intuitive view is that the pace will continue at the current rate. It is typical, therefore, that even sophisticated commentators, when considering the future, extrapolate the current pace of change over the next 10 or 100 years to determine their expectations. This is why I call this way of looking at the future the "intuitive linear" view.

But any serious consideration of the history of technology shows that technological change is at least exponential, not linear. There are a great many examples of this, including exponential trends in computation, communication, brain scanning, miniaturization and multiple aspects of biotechnology. One can examine this data in many different ways, on many different time scales and for a wide variety of different phenomena, and we find (at least) double exponential growth, a phenomenon I call the "law of accelerating returns." The law of accelerating returns does not rely on an assumption of the continuation of Moore's law, but is based on a rich model of diverse technological processes. What it clearly shows is that technology, particularly the pace of technological change, advances (at least) exponentially, not linearly, and has been doing so since the advent of technology. That is why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details) but underestimate what can be achieved in the long term (because exponential growth is ignored).

This observation also applies to paradigm shift rates, which are currently doubling (approximately) every decade. So the technological progress in the 21st century will be equivalent to what would require (in the linear view) on the order of 20,000 years.
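The "20,000 years" figure can be checked with a back-of-the-envelope sum. Under the assumption (mine, chosen for concreteness) that decade k of the 21st century runs at 2**k times the year-2000 rate of progress, the century accumulates a geometric series of decade-equivalents:

```python
# Back-of-the-envelope check of the "on the order of 20,000 years" claim,
# assuming the rate of progress doubles every decade as the article states.
# Convention assumed here: decade k (k = 1..10) of the 21st century proceeds
# at 2**k times the year-2000 rate, so it delivers 2**k decade-equivalents.

def century_progress_in_year2000_years() -> int:
    decade_equivalents = sum(2 ** k for k in range(1, 11))  # 2 + 4 + ... + 1024 = 2046
    return decade_equivalents * 10  # convert decade-equivalents to years

print(century_progress_in_year2000_years())  # -> 20460
```

With this convention the century works out to 20,460 year-2000 years of progress, consistent with the article's "on the order of 20,000 years"; a slightly different convention (counting the first decade at the base rate) gives roughly half that, which is why the figure is stated only as an order of magnitude.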

Michael's argument that we cannot always anticipate the effects of a particular technology is irrelevant here. These exponential trends in computation and communication technologies are greatly empowering the individual. Of course, that's good news in many ways. These trends are behind the pervasive trend we see toward democratization, and are reshaping power relations at all levels of society. But these technologies are also empowering and amplifying our destructive impulses. It's not necessary to anticipate all of the ultimate uses of a technology to see that there is danger in, for example, every college biotechnology lab having the ability to create self-replicating biological pathogens.

However, I do reject Joy's call for relinquishment of broad areas of technology (such as nanotechnology) despite my not sharing Michael's skepticism on the feasibility of these technologies. Technology has always been a double-edged sword. We don't need to look any further than today's technology to see this. If we imagine describing the dangers that exist today (enough nuclear explosive power to destroy all mammalian life, just for starters) to people who lived a couple of hundred years ago, they would think it mad to take such risks. On the other hand, how many people in the year 2001 would really want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through a couple of centuries ago?

People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age-old problems, then a sense of dread at a new set of grave dangers that accompany these new technologies, followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril.

The continued opportunity to alleviate human distress is one important motivation for continuing technological advancement. Also compelling are the already apparent economic gains, which will continue to hasten in the decades ahead. There is an insistent economic imperative to continue technological progress: relinquishing technological advancement would be economic suicide for individuals, companies and nations.

Which brings us to the issue of relinquishment, which is Bill Joy's most controversial recommendation and personal commitment. Forgoing fields such as nanotechnology is untenable. Nanotechnology is simply the inevitable end result of a persistent trend toward miniaturization that pervades all of technology. It is far from a single centralized effort but is being pursued by a myriad of projects with many diverse goals.

Furthermore, abandonment of broad areas of technology will only push them underground, where development would continue unimpeded by ethics and regulation. In such a situation, it would be the less stable, less responsible practitioners (for example, the terrorists) who would have all the expertise.

The constructive response to these dangers is not a simple one: It combines professional ethical guidelines (which already exist in biotechnology and are currently being drafted by nanotechnologists), oversight by regulatory bodies and the development of technology-specific "immune" responses, as well as computer-assisted surveillance by law enforcement organizations. As we go forward, balancing our cherished rights of privacy with our need to be protected from the malicious use of powerful 21st-century technologies will be one of many profound challenges.

Technology will remain a double-edged sword, and the story of the 21st century has not yet been written. It represents vast power to be used for all humankind's purposes. We have no choice but to work hard to apply these quickening technologies to advance our human values, despite what often appears to be a lack of consensus on what those values should be.

Michael Dertouzos

In my column, I observed that we have been incapable of judging where technologies are headed; hence we should not relinquish a new technology based strictly on reason. Ray agrees with my conclusion, but for a different reason: He sees technology growing exponentially, thereby offering us the opportunity to alleviate human distress and hasten future economic gains. From his perspective, my point is "irrelevant," and my views on the future of technology are "skeptical." Let's punch through to the underlying issues, which are vital, for they point at a fundamental and all-too-often ignored relationship between technology and humanity.

Ray's exponential-growth argument is half the story: No doubt, the number of transistors on a chip has grown and will continue to grow for a while. But transistors and the systems made with them are used by people. And that's where exponential change stops! Has word-processing software, running on millions of transistors, empowered humans to contribute better writings than Socrates, Descartes or Lao Tzu?

Technologies have undergone dramatic change in the last few centuries. But people's basic needs for food, shelter, nurturing, procreation and survival have not changed in thousands of years. Nor has the rapid growth of technology altered love, hate, spirituality or the building and destruction of human relationships. Granted, when we are in the frying pan, surrounded by the sizzling oil of rapidly changing technologies, we feel that everything around us is accelerating. But, from the longer range perspective of human history and evolution, change is far more gradual. The novelty of our modern tools is counterbalanced by the constancy of our ancient needs.

As a result, technological growth, regardless of its magnitude, does not automatically empower us. It does so only when it matches our ability to use it for human purposes. And that doesn't happen as often as we'd like. Just think of the growing millions of AIDS cases in Africa, beyond our control. Or, in the industrial world, ask yourself whether we are truly better off surrounded by hordes of complex digital devices that force us to serve them rather than the other way around.

Our humanity meets technology in other ways, too: In forecasting the future of technology, Ray laments that most people use "linear thinking" that builds on existing patterns, thereby missing the big "nonlinear" ideas that are the true drivers of change. Once again, this is only half the story: In the last three decades, as I witnessed the new ideas and the 50-some startups that arose from the MIT Laboratory for Computer Science, I observed a pattern: Every successful technological innovation is the result of two simultaneous forces: a controlled insanity needed to break away from the stranglehold of current reason and ideas, and a disciplined assessment of potential human utility, to filter out the truly absurd. Focusing only on the wild part is not enough: Without a check, it often leads to exhibitionistic thinking, calculated to shock. Wild ideas can be great. But I draw a hard line when such ideas are paraded in front of a lay population as inevitable, or even likely.

That is the case with much of the futurology in today's media, because of the high value we all place on entertainment. With all the talk about intelligent agents, most people think they can go buy them in the corner drugstore. Ray, too, brings up his experience with speech translation to demonstrate computer intelligence. The Lab for Computer Science is delightfully full of Victor Zue's celebrated systems that can understand spoken English, Spanish and Mandarin, as long as the context is restricted, for example to let you ask about the weather, or to book an airline flight. Does that make them intelligent? No. Conventionally, "intelligence" is centered on our ability to reason, even imperfectly, using common sense. If we dub as intelligent, often for marketing or wishful-thinking purposes, every technological advance that mimics a tiny corner of human behavior, we will be distorting our language and exaggerating the virtues of our technology. We have no basis today to assert that machine intelligence will or will not be achieved. Stating that it will go one way or the other is to assert a belief, which is fine, as long as we say so. Does this mean that machine intelligence will never be achieved? Certainly not. Does it mean that it will be achieved? Certainly not. All it means is that we don't know-an exciting proposition that motivates us to go find out.

Attention-seizing, outlandish ideas are easy and fun to concoct. Far more difficult is to pick future directions that are likely. My preferred way for doing this, which has served me well, though not flawlessly, for the last 30 years, is this: Put in a salad bowl the wildest, most forward-thinking technological ideas that you can imagine. (This is the craziness part.) Then add your best sense of what will be useful to people. (That's the rational part.) Start mixing the salad. If you are lucky, something will pop up that begins to qualify on both counts. Grab it and run with it, since the best way to forecast the future is to build it. This forecasting approach combines "nonlinear" ideas with the "linear" notion of human utility, and with a hopeful dab of serendipity.

Ray observes that technology is a double-edged sword. I agree, but I prefer to think of it as an axe that can be used to build a house or chop the head off an adversary, depending on intentions. The good news is that since the angels and the devils are inside us, rather than within the axe, the ratio of good to evil uses of a technology is the same as the ratio of good to evil people who use that technology...which stays pretty constant through the ages. Technological progress will not automatically cause us to be engulfed by evil, as some people fear.

But for the same reason, potentially harmful uses of technology will always be near us, and we will need to deal with them. I agree with Ray's suggestions that we do so via ethical guidelines, regulatory overviews, immune response and computer-assisted surveillance. These, however, are partial remedies, rooted in reason, which has repeatedly let us down in assessing future technological directions. We need to go further.

As human beings, we have a rational, logical dimension, but also a physical, an emotional and a spiritual one. We are not fully human unless we exercise all of these capabilities in concert, as we have done throughout the millennia. To rely entirely on reason is to ascribe omniscience to a few ounces of meat, tucked inside the skull bones of antlike creatures roaming a small corner of an infinite universe: hardly a rational proposition! To live in this increasingly complex, awesome and marvelous world that surrounds us, which we barely understand, we need to marshal everything we've got that makes us human.

This brings us back to the point of my column, which is also the main theme of this discussion: When we marvel at the exponential growth of an emerging technology, we must keep in mind the constancy of the human beings who will use it. When we forecast a likely future direction, we need to balance the excitement of imaginative "nonlinear" ideas with their potential human utility. And when we are trying to cope with the potential harm of a new technology, we should use all our human capabilities to form our judgment.

To render technology useful, we must blend it with humanity. This process will serve us best if, alongside our most promising technologies, we bring our full humanity, augmenting our rational powers with our feelings, our actions and our faith. We cannot do this by reason alone!

Kurzweil vs Dertouzos republished with permission of High Technology Magazine (c) 2001. Permission conveyed through Copyright Clearance Center, Inc.

Mind·X Discussion About This Article:

Kurzweil vs. Dertouzos article
posted on 11/27/2001 4:48 AM by evs@sanddollar.net


I agree with Raymond when he says that trying to prohibit technological advances will just drive them underground. Anything that is possible will happen eventually if someone or something has the desire to manifest that thing.

I think the subject of the eventual results of the development of technology can only be answered by answering some of the most basic philosophical questions, such as "what is the true nature of man and reality?" and "is there free will, can a man or men truly change the course of history or is it all preordained?".

I have a saying: knowing an algorithm does not imply knowing the effects of an algorithm. We can invent more tools and more powerful tools, but we cannot usually imagine what these tools will eventually combine to create. This is the situation technology puts us in. The real question is whether or not we, individually and/or collectively, can determine the outcome of the vehicles we've invented that we will ride on into the future . . .

One thing I have come to be sure of is this: in the end, the only thing we can change is the course of our own thinking. Every effect we have on our perceived universe of experience begins and ends with our thoughts. So it follows that we must ***intend*** to proceed to the kind of future we want. To do this, we need to know what it is that we want, and what principles are important (such as freedom, security, happiness). We have to be brave and make difficult choices such as freedom versus security, truth versus lies, and so forth.

To become more conscious, we as human beings and as various cultures have approached the path to greater knowledge and understanding via "spirituality" and "science". One method tries to reach our reality starting from creation, and the other tries to reach creation starting from our reality. Both are ways of generating models of our universes. We need both methods, the synthetic and the analytic, to function fully.

Combining algorithms to produce higher and higher level understanding, which is what science does, is blind to the real future. Only spirituality can jump ahead of the current "state of the art", and the combinatorial results of those products, quickly according to higher principles. This is the way the Greeks "discovered" atoms without having any way to experiment directly with the universe.

In the end, both approaches are needed to guide mankind into the future we really want to be in. So, instead of trying to figure out where technology is taking us, we should figure out where we want to take technology. This means we'd better start getting clear as to what kind of environment we want our children and their children to inhabit. We should create our future, rather than let the current set of technology create us. Creation is in the realm of spirituality; without spirituality, we have no real direction. Without science, we have no real means of moving in the direction we want to.

-Vance Socci

p.s. for those maddening folks who demand exact definitions of all terms like "spirituality", figure it out from the context!

p.p.s. I'm not talking about the "emotional devotion to religion" here, although that may be a means to what I'm talking about. Religions can be viewed as theoretical world paradigms . ..

Re: Kurzweil vs. Dertouzos article
posted on 08/09/2002 10:59 AM by trait70426


Well, how about a very hammers-and-nails approach, where you try to be very conservative? I know it sounds expensive, but if the nanoreplicator laboratories were located on the moon, maybe a dangerous accident could be contained.

Re: Kurzweil vs. Dertouzos article
posted on 08/09/2002 12:02 PM by evs@sanddollar.net


That is an excellent idea! Yes, at the moment it would be expensive, but what would the cost be if one of these bugs got loose and wreaked havoc upon us? We've had enough disaster (e.g. 9/11) for one century now . . .

Re: Kurzweil vs. Dertouzos article
posted on 06/10/2010 2:15 PM by poco424


While I agree that putting this facility on the moon would minimize contamination problems, I see a possible problem with the idea. If the facility is manned and contamination occurs, it would be inhuman of us to leave the crew in place to be affected by it; yet bringing those scientists home would defeat the point of isolating the facility in the first place. We would have to be ready either to leave those fellow humans to their fate and deny our humanity, or to bring them home and defeat the quarantine. I wonder which we would choose?
Just a thought. Thanx....JC

Re: Kurzweil vs. Dertouzos
posted on 01/01/2003 6:16 AM by sotiropoulos


I am interested in sending an important document to Mr. Michael Dertouzos.
Could you e-mail me his postal address?
Ion Sotiropoulos
independent research philosopher
45 Av du Maine, Paris 75014

Re: Kurzweil vs. Dertouzos
posted on 05/27/2007 12:22 AM by Jake Witmer


Dertouzos wrote:

But for the same reason, potentially harmful uses of technology will always be near us, and we will need to deal with them. I agree with Ray's suggestions that we do so via ethical guidelines, regulatory overviews, immune response and computer-assisted surveillance. These, however, are partial remedies, rooted in reason, which has repeatedly let us down in assessing future technological directions. We need to go further.

I strongly disagree that we should "deal with" new technologies using regulatory guidelines. Regulators are the least moral people in society. They are the initiators of brute force, without oversight, or checks and balances inherent in jury trials.

(Go to a traffic court sometime, and witness the way the judge treats any person who uses a "jury nullification" argument, or even asks for a jury trial. Scorn, derision, and instant threats of arrest! Logic and rationality are not tolerated in institutions of "law" that actually contradict the "higher law of the Constitution," because of stare decisis, voir dire, and other bad reasons...)

I find that often, technologists and scientists are mere infants when it comes to logically applying their ideas to a legal framework (Drexler and Freitas are two notable exceptions).

I strongly recommend that both Ray Kurzweil and Dertouzos make themselves intimately familiar with the intellectual arguments of Ayn Rand, Lysander Spooner, Milton Friedman, and Harry Browne. Then, I highly recommend that they come along with me, and work as petitioners for the Libertarian Party's ballot access for just one full day. Then, I highly recommend that these people either read Vin Suprynowicz's books or tour a prison and ask people why they are locked up.

When the magnitude of what America has lost by short-shrifting the ideas it was founded on comes into focus, the change in attitude is often gigantic.

You see, freedom and justice are both very difficult to secure, fleeting, and fragile.

If regulators are allowed to abort the synthetic John Galt or MycroftXXX before his conception, we will be surrendering humanity to the lowest power-lusters among us.

This is the default direction that collective humanity travels in.

At what point do we reverse course, and say "No more violation of our rights will be tolerated?"

Nobody stuck up for the individual property rights of the two-thirds of the US prison population who are jailed for selling illegal drugs. No one stuck up for the property rights of the Michigan Militia men who went to jail for owning guns! No one stuck up for the property rights of all the vitamin stores that the FDA has raided!

Just as the loss of property rights was incrementally caused by "regulators" in Nazi Germany and Soviet Russia.

But our government has distributed camera swarms to keep us in line. They have smarter computers, GPS, and complete control of the ground.

There will be no overcoming fascism when it comes to America, if all of our smartest inventors are working for the initiators of force (the government).

Who will then rebel? No one.

Do not hand the future of America to the regulators! You are making a "trade" and receiving NOTHING in return!

-Jake Witmer

Re: Kurzweil vs. Dertouzos
posted on 05/27/2007 8:51 AM by doojie


Jake, an excellent response. Toffler pointed out that technology breeds cults and sub-cults, because people try to form organizations that reduce "overchoice" from technology. Technology empowers, but humans form collective systems to slow down that empowerment.

Rules and regulations are part of the same urge.
Your example of traffic court is also good. Imagine challenging a seat belt violation in court. You have the right to face your accusers, but in the courtroom, the plaintiff/accuser works for the state, the prosecutor works for the state, and the judge works for the state. All have taken an oath to uphold the laws of the state.

Add to that, if the trooper takes the witness stand, s/he is paid by the state to give that testimony, which is likely to be biased.

So, in a framework of rules developed "for our own good", we must face the plaintiff, who is the state, the prosecutor, who is the state, and the judge, who is the state. Yet both Madison and Hamilton, in "The Federalist", stated that it is wrong for a man to be judge in a cause in which he has interest, because the most powerful force will prevail, that being the one who judges the case.

That's a violation going all the way back to Magna Carta!

The conclusion, I suppose, is that rules, algorithms, and laws make good guidelines, but they ignore so much of our humanity. The "human" parts of our evolution has not progressed because they have been bypassed by the focus on mechanical/legal principles that reduce humans to machines.

We only recognize the moral/ethical dilemma now because we try to make machines that reflect humanity.

Re: Kurzweil vs. Dertouzos
posted on 05/27/2007 5:13 PM by Jake Witmer


Thanks! If you want to support an organization that tries to prevent this injustice, I strongly recommend:
http://www.fija.org — I gave them $100 and it was the best money I ever spent. They sent me a volume of useful info and ways to fight institutionalized injustice. Also, they sent me precedent court cases that were decided in favor of freedom, in order to fight stare decisis using stare decisis. They are in a reorganization now, and updating most of their materials.

Every dollar spent there is well spent.

I also encourage people to support http://www.ij.org

It's not as if there aren't logically consistent people out there sticking up for individual rights, and THE PROPER LAW. (Natural law, Individual rights, Property rights, Constitutional Law, The highest law of the land, etc... whatever you want to call it.) It is merely unfortunate that the people who make the technology don't often seem to recognize that they are handing it over to ruffians who simply use it to initiate force.

Whereas, if those same technologists simply held onto their technology (in many cases), they would hold all of the cards, and the single argument the government actually has on its side:

"...or else!"

Just in case someone thinks I'm advocating that the scientists boss everyone around, my argument goes like this: "Leave us alone, (at most, protect basic property rights) ...or else!"

The books "Atlas Shrugged" by Ayn Rand, "No Treason" by Lysander Spooner, and "Why Government Doesn't Work" by Harry Browne all have good variants of the same idea: The person who initiates force is wrong.

That means:
1) The cop who pulls you over for speeding (since you, by speeding, have hurt no one)
2) The DEA agent who puts someone in jail for possession of illegal dead plant matter
3) The EPA agent who confiscates property because of living plant or animal matter
4) The FDA agent who confiscates Stevia (or books on Stevia) from a mom and pop healthfood store
5) The treasury agents who confiscate someone's paycheck
6) The ATF agent who confiscates guns from someone

All of these are initiations of force made all the more horrific because no trial by jury ever intervenes, or if it does, it does so seriously infringed to the point of uselessness (after voir dire, carrot-and-stick plea bargaining, threats of contempt of court unless the accused is silent about jury nullification, etc...).

Incidentally, I ALWAYS ASK FOR A JURY TRIAL FOR SPEEDING TICKETS, SEAT BELT TICKETS, ETC... No matter what plea bargain they offer me. A jury is the only chance for justice!!!

Never accept the lesser punishment if you are INNOCENT!

That kind of cowardice got us to this sorry state in the first place!


Re: Kurzweil vs. Dertouzos
posted on 05/27/2007 7:27 PM by doojie


You've touched a nerve with me, and just to deviate from this main discussion a bit, with apologies.

Law goes back to England stating that a witness is trustworthy only if s/he is disinterested, or has nothing to gain or lose by his/her testimony.

The cop/patrolman who gave you the ticket is biased because s/he has taken an oath to pull you over for infractions, and is a paid informant for the state.

His/her testimony can be challenged on the basis of bias.

If you challenge the traffic court on the basis of bias(plaintiff is the state, prosecutor is the state, judge is the state), you can then challenge the jury on the same principle.

Most every person called to jury duty has signed a W-4, which states that the information on the form is true under penalty of perjury.

This means that whoever signed the W-4 form is under oath to the state just the same as the cop, prosecutor, and judge in traffic court.

Even if you ask for a jury, you could argue biased jury.

The reason for asking for a jury is that both Madison and Hamilton, in "The Federalist," which has been quoted in Supreme Court decisions, plainly stated that it is wrong for any person to judge in his own interest — meaning that the jury cannot be trusted, having committed itself to the state in an oath given on the W-4 form.
It would be virtually impossible to select an unbiased jury on those grounds.

Re: Kurzweil vs. Dertouzos
posted on 05/27/2007 5:16 PM by richiemobile


The history of technological development, traced through anthropological, archaeological, carbon-dating, or historical sources, is determinable and unimpeachable. Human beings existed before the discovery of technologies such as fire, stone tools and weapons, agriculture, bronze, steel, the wheel, language, and writing — along with writing's stepchildren: the healing arts, politics, literature, and art. Add the later upgrades of these "realities," including the printing press, religion, the Enlightenment, the Reformation, modern education, the internal combustion engine, photography, cinema, radio, television, the personal computer, and the Internet, and the result is an organism that so little resembles the original, DNA-identical species, Homo sapiens, that one would almost fail to believe they are drawn from the same genetic pool.
That further augmentation of our species will continue at an accelerated pace is obvious to any objective observer. The only open questions are the pace of that augmentation and, of course, the obvious danger that our species might direct its own evolution down some inadvertent self-destructive pathway.
Of course I could be wrong.