Ubiquity Interviews Ray Kurzweil
"If it were up to the Luddites, human life expectancy would still be 37, and
we'd still be dying from bacterial infections," says Ray Kurzweil in this
wide-ranging interview. The anti-technology movement "is fundamentally
misguided, because it fails to appreciate the profound benefits technology
has brought."
Originally published in Ubiquity
January 11, 2006. Published on KurzweilAI.net January 18, 2006.
UBIQUITY: How is the new book doing?
KURZWEIL: Very well—it's in its fourth printing, and
has been number one both in science and in philosophy on Amazon.
UBIQUITY: It's an amazing, magisterial piece of work.
KURZWEIL: Thanks, I appreciate that.
UBIQUITY: Why don't you talk a little bit about the notion
of "singularity"? Set the premise for us.
KURZWEIL: Sure. It's actually a complicated premise, but
there are several key ideas. First of all, there's the idea that
technology in general is accelerating rapidly, and information technology
in particular is doubling its power, as measured in price-performance
and bandwidth capacity, every year. We will see the power of information
technology multiplied by a factor of a billion in 25 years. If you
imagine increasing computation, communication, and our knowledge
of biology and of the intelligence processes in the brain, all for
the same price, by a factor of a billion in 25 years, that is quite
a formidable result.
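[Ed. note: as a quick sanity check on the arithmetic of compound doubling (an illustration added here, not part of the interview), a few lines of Python suffice. Note that 2^30 is roughly a billion, so a billionfold gain in 25 years corresponds to a doubling time of about ten months; this is consistent with Kurzweil's argument elsewhere in the book that the doubling time itself is shrinking.]

    # Illustration only: compound growth under a steady doubling time.
    def growth_factor(years, doubling_time_years=1.0):
        """Factor by which a quantity grows after `years` of steady doubling."""
        return 2 ** (years / doubling_time_years)

    print(growth_factor(25))           # ~3.4e7 with a one-year doubling time
    print(growth_factor(25, 10 / 12))  # ~1.1e9 with a ten-month doubling time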
The second observation is that information technology is not just
computerized devices like MP3 players and cell phones, but is something
that is deeply influencing every aspect of our lives, including
our biology, our knowledge of intelligence, worldwide communications,
and so on. People say, well, exponential progressions can't go on
forever: like rabbits in Australia, they eat up the foliage and then
the exponential growth stops. But what we see actually in these
information technologies is that the exponential growth associated
with a specific paradigm (like, for example, shrinking transistors
on an integrated circuit, which underlies Moore's Law) may come
to an end, but that doesn't stop the ongoing exponential progression
of information technology—it just yields to another paradigm.
UBIQUITY: Example?
KURZWEIL: A good example is the fact that the integrated
circuit was not the first, but the fifth, paradigm to bring exponential
growth to computers. They were shrinking vacuum tubes in the 1950s,
and that came to an end. They couldn't shrink the vacuum tube and
keep the vacuum, but that just led to another paradigm: transistors.
In the book I ask, what are the ultimate limits of
matter and energy to support computation and communication? Yes,
there are limits, but they're not very limiting. One cubic inch
of nanotube circuitry would be 100 million times more powerful than
the human brain. So there's going to be plenty of capacity with
three-dimensional molecular computing to keep these trends going
for a long time.
UBIQUITY: How will this all play out?
KURZWEIL: There are two key aspects to the concept of singularity—the
hardware and software sides of emulating human intelligence. We'll
have sufficient hardware to recreate human intelligence pretty soon.
We'll have it in a supercomputer by 2010, and by 2020 a thousand
dollars of computation will equal the 10,000 trillion calculations
per second that I estimate is necessary to emulate the human brain.
The software side will take a little longer. In order to achieve
the algorithms of human intelligence, we need to actually reverse-engineer
the human brain, understand its principles of operation. And there
again, not surprisingly, we see exponential growth where we are
doubling the spatial resolution of brain scanning every year, and
doubling the information that we're gathering about the brain every
year. We're showing that we can turn this data into working models
and simulations. There are also two dozen regions of the brain that
we have modeled and simulated, including the cerebellum—which
is where we do our skill formation and which comprises more than
half the neurons in the brain. There's an effective simulation of
that.
UBIQUITY: And this leads to what?
KURZWEIL: I make the case that this exponential progression
will lead us to an understanding of human intelligence. And by understanding
I mean we will have detailed mathematical models and computer simulations
of all of the regions of the brain by the mid-2020s. So by the end
of the 2020s we'll be able to fully recreate human intelligence.
You may wonder: "OK, what's the big deal with that? We already
have human intelligence; in fact, we've got six billion human brains
running around, so why do we need more?" One of the answers
to that question is that it will be a very powerful combination
to combine the subtle and supple powers of human pattern recognition
with ways in which machines are already superior. Machines can think
more quickly than we can. They're much better at logical thinking
and much better at remembering things: a $1000 notebook computer
can remember billions of things accurately whereas we're hard-pressed
to remember a handful of phone numbers. And most importantly, machines
can share their knowledge, their skills, and their insights at electronic
speed, which is a million times faster than human language.
My second point is that nonbiological intelligence, once it achieves
human levels, will double in power every year, whereas human intelligence—biological
intelligence—is fixed. We have 10 to the 26th power calculations
per second in the human species today, and that's not going to change,
but ultimately the nonbiological side of our civilization's intelligence
will become by the 2030s thousands of times more powerful than human
intelligence and by the 2040s billions of times more powerful. And
that will be a really profound transformation.
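[Ed. note: the figures in this answer can be checked with a short script. The sketch below is an illustration only, using the interview's own numbers, and it assumes a fixed one-year doubling time; Kurzweil's own model has the doubling time itself shrinking, which pulls his dates in.]

    # Illustration only, using the figures quoted in the interview.
    CPS_PER_BRAIN = 1e16                 # ~10,000 trillion calculations per second
    BRAINS = 6e9                         # six billion human brains
    biological = CPS_PER_BRAIN * BRAINS  # ~6e25 cps, on the order of 10^26, and fixed

    # Assume parity at the end of the 2020s and a steady one-year doubling time.
    for years_after_parity in (10, 20, 30):
        print(f"{years_after_parity} years after parity: {2 ** years_after_parity:,}x")
    # 10 years: ~1,000x; 20 years: ~1,000,000x; 30 years: ~1,000,000,000x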
UBIQUITY: Why do you call this profound transformation the
"singularity"?
KURZWEIL: The "singularity" is a metaphor borrowed
from physics, really referring to the event horizon. We can't easily
see beyond the event horizon around the black hole in physics. And
here with regard to this historical singularity, we can't easily
see beyond that event horizon, because it's so profoundly transformative.
We will literally multiply the intelligence of our civilization
by a factor of billions, ultimately trillions, by merging our biological
intelligence with, and supplementing it by, this profoundly more
capable nonbiological intelligence. And that will dramatically
change the nature of human civilization. That in a nutshell is what
the singularity is all about.
UBIQUITY: Somewhere in the book you remark that people have
a very hard time understanding exponential growth, isn't that right?
KURZWEIL: Yes, that's a very good point, and a very important
one: it really underlies the difference between the seemingly radical
projections I'm making and people's linear perspectives of what
will happen. People intuitively assume that change will continue
at the current pace, and when I say "people"
I'm definitely including sophisticated people, scientists. I had
a debate recently with someone who is reverse-engineering the human
brain who was engaging in a linear extrapolation. He said, "Well
it's going to take me 18 months to finish modeling this one ion
channel. And there's five other ion channels, and that's five times
18 months. And then there's other details, and this other dendrite
has six more ion channels." And he's adding it all up in his
mind, and saying "Well, it be 100 years before we finish this
project, assuming the project is going to go for the next 100 years
at the same pace, with the same tools, with the same supercomputers
to do the simulations." And I'd add: without factoring in the
radically changing landscape. The fact is that the pace of progress
is dramatically increasing.
UBIQUITY: What could you cite as an example of this?
KURZWEIL: It took us 15 years to sequence HIV. We sequenced
SARS in 31 days. So someone doing the mental experiment in 1990
about how long it would take to do, for example, the genome project
also came up with centuries. But the amount of genetic data we
sequence has doubled every year. And that
has continued. We are doubling the spatial resolution of brain scanning
and so on. The future is exponential, not linear, and yet virtually
all government models used to track future trends are linear. They
actually work quite well for one year, two years, maybe three, since
linear projection is a very good approximation of an exponential
one for a short period of time—but it's a terrible one for
a long period of time. They radically diverge, because exponential
growth ultimately becomes explosive. And that is the nature of technological
evolution.
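[Ed. note: the divergence described here is easy to make concrete. The sketch below is an illustration with made-up numbers, not an actual government model: it fits a straight line to the first three years of a trend that doubles annually, then compares the two projections.]

    # Illustration only: linear extrapolation of an exponential trend.
    exponential = [2 ** t for t in range(31)]            # years 0 through 30

    # Straight line through the first three points (slope over years 0-2).
    slope = (exponential[2] - exponential[0]) / 2
    linear = [exponential[0] + slope * t for t in range(31)]

    for t in (1, 3, 10, 30):
        print(f"year {t:2}: exponential={exponential[t]:>13,}  linear={linear[t]:>6,.0f}")
    # Years 1-3 nearly agree; by year 10 the line is ~60x too low,
    # and by year 30 it is more than 20 million times too low.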
A related issue is whether we can really predict the future. The common
wisdom is that you cannot. But I maintain that you can reliably
predict these features of information technology. If you ask me
how much will a MIPS of computing cost in 2010, or how much will
it cost to sequence a base pair of DNA in 2010, or what will the
spatial resolution of brain scanning be in 2014, I can give you
a figure, and it's likely to be very accurate. And I say this now,
not just looking backwards at this data, but I've been making forward-looking
projections like this for 25 years that have proven to be quite
accurate, even though they were highly controversial when I made
them.
But someone might say: How could that be? How are we able to make
reliable predictions about the overall future of these technologies
when each individual project is very unpredictable? But we see the
same thing in other areas of science. Take thermodynamics. It's
impossible to predict the path of a single molecule in the air,
because it follows a random unpredictable path, and that's true
of all of the particles. Yet, the overall properties of the gas,
made up of all of these unpredictable particles, are very predictable
according to the laws of thermodynamics. And the whole process of
technology evolution is similarly a complex dynamic system where
each individual project is unpredictable, but the overall results
are very predictable. And that's another observation that is contrary
to common belief.
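[Ed. note: the thermodynamics analogy can be simulated directly. The toy sketch below, an illustration only, runs many independent random walks: any single walker's endpoint is unpredictable, but the average squared displacement reliably comes out close to the theoretical value, namely the number of steps.]

    # Illustration only: unpredictable individuals, predictable aggregate.
    import random

    STEPS, WALKERS = 1000, 10000
    positions = [0] * WALKERS
    for _ in range(STEPS):
        for i in range(WALKERS):
            positions[i] += random.choice((-1, 1))   # each step is a coin flip

    # One walker's endpoint is anyone's guess; the ensemble average is not.
    mean_squared = sum(p * p for p in positions) / WALKERS
    print("one walker:", positions[0], " mean squared displacement:", mean_squared)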
UBIQUITY: You see yourself as basically an expert in pattern
recognition, correct?
KURZWEIL: Yes, that's my field of interest. We developed
the first omni-font optical character recognition system, and the
first commercially marketed large-vocabulary speech recognition
system. We're now working on electrocardiogram automatic diagnosis
to create a smart undershirt for people with heart disease and conditions
like that. So pattern recognition is my field of expertise.
UBIQUITY: And so pattern recognition is the heart of...
what? Finish that sentence.
KURZWEIL: Pattern recognition is the heart of human intelligence.
We're in fact not very good at logical thinking, analytical thinking.
Computers are already much better at that than we are—as is
clear if you consider a math program like "Mathematica"
that's very hard even for professionals and mathematicians to keep
up with. And yet people are still better than machines at recognizing
patterns. However, machines are getting better, and ultimately machines
will be better than humans in all areas of pattern recognition.
Of course, at that point, in the late 2020s, computers will have
achieved human levels of intelligence. Human pattern recognition,
though, is basically hardwired for certain types of patterns. For
example, there's actually a region of the brain that recognizes
faces, and we're very good at that, because we have a built-in capability.
We're very good at recognizing language sounds, and language skill
is essentially a pattern recognition capability. Computers can apply
pattern recognition principles to the types of patterns that humans
are good at, and they're also learning how to do the kinds of pattern
recognition that humans are not good at. And ultimately, machines
will be able to exceed human intelligence.
UBIQUITY: Expand on the idea by using chess as an example.
KURZWEIL: In chess, a computer can do the logical work
of analyzing all of the move and counter-move sequences, looking
12 moves ahead and considering billions of board positions in a
few seconds. Garry Kasparov, the
chess master, was asked, "How many board positions can you
think of in a second?" He said, "Well, less than one."
So how is it that he can actually compete against a machine? It's
because of pattern recognition. He looks at the board and just instantly
recognizes a pattern. He sees: "This is like the board where
grandmaster So-and-So forgot to protect his trailing pawn two years
ago." And he's actually studied 100,000 board positions. That
is how humans think, largely by recognizing patterns.
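[Ed. note: the brute-force search described here is, at its core, a minimax recursion over move sequences. The sketch below is a generic illustration on a toy take-away game, not Deep Blue's actual program; chess engines use the same recursion with a branching factor of roughly 35 legal moves per position, which is why the number of positions examined grows exponentially with search depth.]

    # Illustration only: minimax on a toy game (take 1-3 stones; whoever
    # takes the last stone wins). Chess search uses the same recursion.
    def minimax(stones, maximizing, stats):
        stats["positions"] += 1              # count positions examined
        if stones == 0:                      # previous player took the last stone
            return -1 if maximizing else 1
        scores = [minimax(stones - take, not maximizing, stats)
                  for take in (1, 2, 3) if take <= stones]
        return max(scores) if maximizing else min(scores)

    stats = {"positions": 0}
    print(minimax(20, True, stats), stats)   # even this toy tree is sizable
    # At ~35 moves per chess position, 12 plies means on the order of 35**12
    # positions: brute force on one side, instant pattern recognition on the other.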
UBIQUITY: Is there some taxonomy of pattern recognition,
so that you could, for example, compare the pattern recognition
involved in different domains? For example, you, Ray Kurzweil, see
patterns from the perspective of an inventor, whereas sports fans
will see patterns in football strategies and football games.
KURZWEIL: Yes, that's a typically human observation, and
that is how we think: we see patterns. A better coach or sports strategist
will be able to have greater insights into those patterns, and be
able to anticipate the patterns of the opposition, and then think
of some way of superseding that. And historians see patterns in
events in the world. Pattern recognition is the essence of what
intelligence is.
UBIQUITY: Is pattern recognition, though, a generalizable
talent that can be replicated and transferred? You've had an astonishing
record as an inventor, and you seem to have started when you were—what?—age
five or something?
KURZWEIL: Well, five was when I fancied myself an inventor;
I decided I was going to be an inventor when I was five, and I never
really wavered from that. When other kids were wondering whether
they would be firemen or teachers, I always had this conceit, "Well,
I know I'm going to be an inventor," and that never changed.
UBIQUITY: I had the same conceit but I never invented anything,
so what I'm wondering now is what is the nature of your pattern
recognition talent? How do you actually go about inventing things?
What's the trick? Because I suspect that if you went into any environment
whatsoever, you would invent something for that environment. Is
that a fair assumption?
KURZWEIL: Yes, well, part of it is a belief in the power
of ideas, and a confidence that I can find the ideas to solve a
problem, and that these ideas exist. One technique is just to
use one's imagination. Imagine that a particular problem has been
solved, and imagine what the solution would have to look like. So
I'll fantasize that I'm giving a presentation four years from now,
and describing the invention to my audience, and then I'll imagine
what I would have to be saying, and what characteristics the
invention would have to have. And then I work backwards: OK, if it's a
reading machine, well it would have to somehow pick up the image
of the page—well how would it do that? And you use your imagination
to break it down into smaller and smaller problems.
UBIQUITY: And this isn't a poetic conceit now? You really
do work that way?
KURZWEIL: Yes, that is how I work. And I actually have
a specific mental technique where I do this at night. I've been
doing this for several decades. When I go to sleep I assign myself
a problem.
UBIQUITY: For example?
KURZWEIL: It might be some mathematical problem or some
practical issue for an invention or even a business strategy question
or an interpersonal problem. But I'll assign myself some problem
where there's a solution, and I try not to solve it before I go
to sleep but just try to think about what do I know about this?
What characteristics would a solution have? And then I go to sleep.
Doing this primes my subconscious to think about it. Sigmund Freud
said accurately that when we dream, some of the censors in our brain
are relaxed, so that you might dream about things that are socially
taboo or sexually taboo, because the various censors in our brain
that say "You can't think that thought!" are relaxed.
So we think about weird things that we wouldn't allow ourselves
to think about during the day.
There are also professional blinders that prevent people from thinking
creatively. Mental blocks such as "You can't solve a signal
processing problem that way" or "Linguistics is not supposed
to be done this way." Those assumptions are also relaxed in
your dream state, and so you'll think about new ways of solving
problems without being burdened by constraints like that. Another
thing that's not working when you're dreaming is your rational faculties
to evaluate whether an idea is reasonable, and that's why fantastic
things will happen in the dream, and the most amazing thing of all
is that you don't think these fantastic things are amazing. So,
let's say, an elephant walks through the wall, you don't say, "My
God, how did an elephant walk through the wall?" You just say,
"OK, an elephant walked through wall, no big deal." So
your rational faculties are also not working.
The next step comes in the morning, in that halfway state between
dreaming and being awake, what I call lucid dreaming, when I still
have access to the dream thoughts. But now I'm sufficiently conscious
to also have my rational faculties. And I can evaluate these ideas,
these new creative ideas that came to me during the night, and actually
see which ones make sense. After 15 to 20 minutes, generally, if
I stay in that state, I can have keen new insights into whatever
the problem was that I assigned myself. And I've come up with many
inventions this way. I've come up with solutions to problems. If
I have a key decision to make, I'll always go through this process.
And I'll then have a real confidence in the decision, as opposed
to just trying to guess at the answer. So this is the mental technique
I use to try to combine creative thinking with rational thinking.
UBIQUITY: What implications might your technique have for
education?
KURZWEIL: Well, I do think that for kids (or really for
people at any age) the best way to learn something is to try to
solve real problems that are meaningful to them. If, for example,
you're trying to create a reading machine, then you learn about
optics. And you learn about signal processing, and image enhancement
techniques and all of these different things that you need to know
in order to solve the problem. If you really have a compelling need
to solve these problems, you will learn about them. If you're trying
to create, let's say, a hip hop song, well you learn about the history
of hip hop, and how it emerged from other forms of music. And you
learn something about urban culture. So learning things in context,
where you're actually trying to make a contribution yourself, is
a very motivating way to learn—as opposed to just trying to
dryly learn facts out of context and without a purpose for learning
them.
UBIQUITY: Since you mention music, a review of "The
Singularity Is Near" by Kevin Shapiro in a recent issue of Commentary
magazine makes the observation that "computers can also compose
music, but, aside from computer scientists, not many humans enjoy
listening to it." Is that a true statement?
KURZWEIL: Well, yes and no. Computers, right now, are actually
collaborating with people, and very few musicians will create music
today without collaborating with machines that are doing sound enhancement,
sequencing, mixing, intelligent signal processing, and so on. But
that criticism is in the genre of observations made by commentators
who believe that since computers don't have fully human levels of
intelligence today, they never will. It's not my position that computers
are equal to humans today; the whole point of my book is that they
will have that ability in the future. And I make the case that by
2029 computers will be fully equal to humans and thereafter surpass
them because they'll be able to combine human levels of intelligence
with ways in which machines are superior. One machine will be able
to have the best human skills in every area, and multiple machines
will be able to share knowledge at electronic speeds. And the machines
will double their intelligence every year, which is the nature of
computer intelligence. I'm not saying that computers can do everything
humans can do today. Computers can't pass the Turing test today,
but I'm predicting that they'll be able to do it in 2029. [The Turing
test, conceived by British artificial intelligence pioneer Alan
Turing, suggests that if a person cannot distinguish between a
machine and a human simply by the answers they give to the person's
questions, then the machine might be considered to be intelligent.—ed.]
So the fact that there are still some things that humans can do
that computers can't today is not a criticism of my case.
UBIQUITY: One of the many interesting things in your book
is your collection of the most frequent criticisms that have been
made of your work, and your response to those critics. I'm wondering
if, as an exercise in pattern recognition, you can characterize
not so much the criticisms but the critics: do you see certain kinds
of people having certain kinds of responses to your work?
KURZWEIL: Well, that's a good question. I think one common
motivation of some people is a misguided but nonetheless earnest
attempt to defend the dignity of human intelligence: "It can't
be the case that computers could achieve human levels of intelligence,
therefore we've got to find some way in which this theory fails."
And so: it can't be done because of physics; or it can't be done
because of the way microtubules do quantum computing; or it can't
be done because Gödel's incompleteness theorem proves that machines
can't possibly do what humans can do; or it can't be done for biological
reasons. All this frantic searching is done to find some reason
it can't possibly be the case that machines could achieve the majesty
of human intelligence. These criticisms are creative ideas drawn
from various fields of science and philosophy, but they have this
common motivation: the ingrained belief that "it can't be so."
UBIQUITY: One of your critics called you a "materialist."
KURZWEIL: Yes, the philosopher John Searle. But I make
the case that Searle's view is really much more the materialist
view than mine. I mean he's saying—and it's surprising, actually—that
consciousness is just an ordinary biological function like lactation:
we don't understand what causes it yet, but we ultimately will find
the cause as some basic biological function. That's actually a reductionist
materialist perspective. But even if we were to take what he's saying
to be the case, my answer would be that, once we discover what that
is, there's no reason why that can't be replicated in the machine.
In fact, Searle acknowledges that a neuron is just a complicated
machine; well, if one neuron is a machine, then a set of 100 billion
neurons is also a machine.
UBIQUITY: So then you're not a materialist. I was a little
surprised, however, that none of your critics suggested that you
are a mystic. Are you a mystic?
KURZWEIL: Well, it depends what you mean by mystic. I describe
myself as a patternist, and believe that if you put matter and energy
in just the right pattern you create something that transcends it.
Technology is a good example of that: you put together lenses and
mechanical parts and some computers and some software in just the
right combination and you create a reading machine for the blind.
It's something that transcends the assemblage of parts you've put
together. That is the nature of technology, and it's the nature
of the human brain. Biological molecules put in a certain combination
create the transcending properties of human intelligence; you put
notes and sounds together in just the right combination, and you
create a Beethoven symphony or a Beatles song. So patterns have
a power that transcends the parts of that pattern.
UBIQUITY: Isn't transcendence another way of saying mystical?
KURZWEIL: Well, it could be. It is a reasonable use of
the word mystical or magical, but it can have other connotations,
namely that these ideas are not rooted in science, and with that
I would disagree. My views have come from a scientific analysis of
technology trends: I didn't start with these views and then try
to work backwards to justify them, I started by making a practical
effort to time my technology projects, because I realized that timing
actually is the most important issue in succeeding as an inventor,
and so I began to study these technology trends. When I did, I discovered
how predictable certain trends are, and I began making accurate
predictions based on my models, which can now anticipate 10 years,
20 years, 30 years into the future, and come up with fairly dramatic
scenarios of what will be feasible.
UBIQUITY: Have any of your critics caught your attention
to the extent that you changed the way you think about some of these
things?
KURZWEIL: That's a good question. The basic theory I put
forth, the Law of Accelerating Returns, has proven out over several
decades. My first book, "The Age of Intelligent Machines,"
which I wrote in the mid-1980s, contained hundreds of predictions
about the 1990s and the early 2000s that were controversial at the
time but have proven quite accurate. I've continued
to think about the implications of this theory and we've seen how
it's applicable to fields like biology, which is something that
really wasn't clear 10 years ago; we only got the full genome two
years ago. So I think the critics have actually illuminated various
issues that need consideration to really think through the implications
philosophically or in terms of different aspects of our biology.
And so it's caused me to think through and develop the theory
into other realms. But I think my basic theory is correct, and I haven't
changed my dates: I've been projecting the end of the 2020s for
machines passing the Turing test consistently for a long time.
UBIQUITY: Someone like H.G. Wells went from science and
technology into world government and large social issues and such.
Have you attempted to follow his example?
KURZWEIL: Well, I am involved with one important aspect,
and that is to study the downside to these technologies. I'm not
a utopian, and my view is not a utopian perspective. I've been articulating
the dangers and downsides of these technologies for a long time.
Are you familiar with Bill Joy's "Wired" cover story?
UBIQUITY: Yes. In fact, that was the main topic of his Ubiquity
interview.
KURZWEIL: Right. Well, you know, he got his views originally
from my book "The Age of Spiritual Machines," which came
out in 1999 and from some conversations we had in 1998. I articulated
the downsides in that book and in those conversations, and I articulate
the dangers again in my new book, "The Singularity Is Near."
Chapter Eight is "The Deeply Intertwined Promise and Peril
of GNR [GNR stands for the Genetics, Nanotechnology, and Robotics
age—ed.]." It is my view that the answer to the danger
is not to relinquish these technologies—the position advocated
by Bill McKibben, a noted environmentalist who brought global warming
to our attention. I have a lot of respect for him, but reject his
view that we should basically stop technology progress, and say
"enough is enough." In fact, his latest book is called
"Enough," his position is that technology has been very
good and brought us a lot of good things, but that now we have enough,
and that continuing technological development is going to create
too many dangers. But I'm opposed to that perspective, for two reasons.
Number one is that it would deprive humanity of the benefits, which
we very much need: I mean we're close to overcoming cancer and heart
disease, and stopping progress will allow the suffering in the world
to continue. But secondly, it wouldn't work in terms of ameliorating
the dangers, and would actually make them worse, because it would
drive these technologies underground where we would have even less
control over them. Responsible scientists would not have access
to the tools needed to defend society.
UBIQUITY: What would a good example of this be?
KURZWEIL: Software viruses. Here we have a new human-made,
self-replicating pathogen that didn't exist 30 years ago, and the
viruses get more and more sophisticated. Yet they have not destroyed
the Internet, they have not destroyed computer networks, and we
keep them at pretty much a nuisance level, because we have a technological
immune system that responds to the danger every time there's some
new sophisticated attack with some new virus. The defenses are created
and are distributed within a matter of hours. Now if we can do half
as well in the area of let's say biological viruses, or self-replicating
nanotechnology, we'll be doing well. However, we do need to invest
in the defensive technologies. And I've done a lot to advance that
idea. I gave testimony to the Congress recently proposing a $100
billion program to create a rapid response system for new biological
viruses. Some elements of that were in President Bush's $7 billion
program that he recently announced. It doesn't go far enough, and
is too small a program by an order of magnitude, but we do have technologies
like RNA interference that can actually destroy biological viruses.
I proposed a program that would rapidly sequence a new virus, create
an RNA interference medication quickly, and gear up manufacturing.
Within a week or two, we could respond to any new virus like bird
flu, which is a natural virus, or an unnatural virus like a terrorist
weapon. We don't have that system in place, but we could put it
in place. And I think we should do that.
UBIQUITY: What other issues have you been involved in?
KURZWEIL: I'm on the Army Science Advisory Group. The Army
is actually the institution responsible for combating bioterrorism,
and I've been advising them on that. We need to increase our investment
in developing the defensive technologies, because the biggest threat
we have right now is the specter of a bioengineered biological virus
that could be very disruptive. In fact, Bill Joy and I had a joint
op-ed piece in the New York Times a few weeks ago, called "Recipe
for Destruction," where we both criticized the publication
on the Web of the genome of the 1918 flu virus. We pointed out that
it's basically a recipe for a weapon of mass destruction. People
would not advocate putting the precise design of an atom bomb on
the Web—which is in fact, illegal—and we weren't happy
when A.Q. Khan of Pakistan was disseminating just that kind of information.
Yet here we have the design of a biological weapon that could be
even worse than an atomic bomb.
UBIQUITY: You call it a recipe. Is it a simple recipe?
KURZWEIL: Yes; in fact it is even easier now to create
the 1918 flu from the genome than it would be to create an atomic
bomb from its design. If I gave you the precise design for an atomic
bomb you still couldn't build one, and even if I gave you plutonium
and the precise designs you'd still have a hard time because building
an atomic bomb requires some pretty exotic industrial processes.
Yet if I gave you the genome of the 1918 flu (and in fact now you
have it, it's on the Web, you can download it), you can send in
a genetic sequence to a mail order house and get it built for you.
Now I'm not saying it's completely trivial to create the 1918 flu
—it has eight genes and they have to be organized in just
the right way; but it's actually not that hard either. So we criticized
the publication of that information. The genome was published to
provide the information to scientists who are trying to protect
us from bird flu, but the alternative would have been to provide
it just to those scientists, with some kind of security provisions,
and that is something we do all of the time with dangerous information.
So, anyway, I am involved quite heavily in these kinds of issues
on the safe use of these very powerful technologies.
UBIQUITY: Now is it fair to say that in spite of your concern
with the downside of many of these issues you remain basically an
optimist as far as the technological future is concerned?
KURZWEIL: Yes. Part of that is just my nature, and I think
you have to be an optimist to be an inventor and an entrepreneur—because
if you were aware of all of the obstacles you were going to face
you'd probably never start any project. So being optimistic, I think,
is actually self-fulfilling: it's not just an idle anticipation
of the future. You actually change the future if you're optimistic
in a positive way. But also it comes from just looking at the actual
history of technology, which has done an astonishing amount of good.
Although we've had a hundred wars in the 20th century, wars that
killed 180 million people, you can't necessarily blame technology
for creating the conflicts, even if it expanded the scale of destruction;
nonetheless, despite that, I would say technology has done more
good than harm. You know, 99.9 percent of humanity lived terrible
lives 200 or 300 years ago, and life was well-described by Thomas
Hobbes as "nasty, brutish, and short." Human life expectancy
was only 37 in 1800, and if someone got a simple bacterial infection
it would plunge that person's whole family into desperation, because
there were no social safety nets. Life was extremely difficult,
and labor-filled. For example, it took six hours to prepare the
evening meal. So we have liberated ourselves to a great extent from
these kinds of miseries. Though we still have a lot of suffering
in the world, only technology has the scale to solve problems like
environmental degradation and poverty. And the trends are very positive
in that. We wiped out half of poverty in Asia over the last 10 years.
According to the World Bank, at current rates, we'll cut poverty
rates by 90 percent in the next 10 years in Asia, and other areas
of the world have also made progress. So I am optimistic, even though
I am mindful of these downsides.
UBIQUITY: Do you have any thoughts about globalization,
and the anti-globalization resistance movement?
KURZWEIL: Well, globalization is a reflection of the fact
that the Internet is a worldwide phenomenon and has nothing to do
with national boundaries. A whole economy exists in this virtual
world, which is becoming a larger and larger portion of the world
economy. The power, and bandwidth, and reach of this virtual world
is growing exponentially, so the idea of, let's say, stopping outsourcing
is like trying to sweep back the ocean. I think there is a strong
anti-technology movement that started with the Luddites in the early 1800s.
I think that movement is fundamentally misguided, because it fails
to appreciate the profound benefits technology has brought. For
example, the anti-GMO movement has forced African nations to refuse
food aid because the food has been genetically modified—and
golden rice, which can save hundreds of thousands of children from
going blind, has been blocked because it involves genetically modified
crops. I'm not necessarily saying that every GMO [genetically
modified organism—ed.] is automatically safe, but the idea
that every GMO is automatically detrimental to the world is just
plain wrong.
UBIQUITY: Do you think the new Luddites will ever come to
see the light?
KURZWEIL: I think they're going to continue doing what
they're doing: trying to stop progress and trying to keep human
beings the way they are. But if you ask me what is a human being,
I'd say that we are the species that seeks to go beyond our limitations
and beyond our boundaries. We didn't stay on the ground. We didn't
stay on the planet. We didn't stay within the limits of our biology.
And I would point out that, if it were up to the Luddites, human
life expectancy would still be 37, and we'd still be dying from
bacterial infections.
Source: Ubiquity,
Volume 7, Issue 01, January 10-January 17, 2006.
© Copyright 2006 John Gehl. Reprinted with permission.