Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0451.html
After the Singularity: A Talk with Ray Kurzweil
John Brockman, editor of Edge.org, recently interviewed Ray Kurzweil on the Singularity and its ramifications. According to Ray, "We are entering a new era. I call it 'the Singularity.' It's a merger between human intelligence and machine intelligence that is going to create something bigger than itself. It's the cutting edge of evolution on our planet. One can make a strong case that it's actually the cutting edge of the evolution of intelligence in general, because there's no indication that it's occurred anywhere else. To me that is what human civilization is all about. It is part of our destiny and part of the destiny of evolution to continue to progress ever faster, and to grow the power of intelligence exponentially. To contemplate stopping that--to think human beings are fine the way they are--is a misplaced fond remembrance of what human beings used to be. What human beings are is a species that has undergone a cultural and technological evolution, and it's the nature of evolution that it accelerates, and that its powers grow exponentially, and that's what we're talking about. The next stage of this will be to amplify our own intellectual powers with the results of our technology."
Originally published on March 25, 2002 on Edge.
RAY KURZWEIL: My interest in the future really stems from my interest
in being an inventor. I've had the idea of being an inventor since
I was five years old, and I quickly realized that you had to have
a good idea of the future if you're going to succeed as an inventor.
It's a little bit like surfing; you have to catch a wave at the
right time. I quickly realized that by the time you finally get something
done, the world has become a different place than it was when you started.
Most inventors fail not because they can't get something to
work, but because all the market's enabling forces are not in place
at the right time.
So I became a student of technology trends, and have developed
mathematical models about how technology evolves in different areas
like computers, electronics in general, communication, storage devices,
biological technologies like genetic scanning, reverse engineering
of the human brain, miniaturization, the size of technology, and
the pace of paradigm shifts. This helped guide me as an entrepreneur
and as a technology creator so that I could catch the wave at the
right time.
This interest in technology trends took on a life of its own, and
I began to project some of them using what I call the law of accelerating
returns, which I believe underlies technology evolution, to future
periods. I did that in a book I wrote in the 1980s, which had a
road map of what the 1990s and the early 2000s would be like, and
that worked out quite well. I've now refined these mathematical
models, and have begun to really examine what the 21st century would
be like. It allows me to be inventive with the technologies of the
21st century, because I have a conception of what technology, communications,
the size of technology, and our knowledge of the human brain will
be like in 2010, 2020, or 2030. If I can come up with scenarios
using those technologies, I can be inventive with the technologies
of the future. I can't actually create these technologies yet, but
I can write about them.
One thing I'd say is that, if anything, the future will be more remarkable
than any of us can imagine, because although any of us can only
apply so much imagination, there'll be thousands or millions of
people using their imaginations to create new capabilities with
these future technology powers. I've come to a view of the future
that really doesn't stem from a preconceived notion, but really
falls out of these models, which I believe are valid both for theoretical
reasons and because they also match the empirical data of the 20th
century.
One thing that observers don't fully recognize, and that a lot
of otherwise thoughtful people fail to take into consideration adequately,
is the fact that the pace of change itself has accelerated. Centuries
ago people didn't think that the world was changing at all. Their
grandparents had the same lives that they did, and they expected
their grandchildren would do the same, and that expectation was
largely fulfilled.
Today it's an axiom that life is changing and that technology is
affecting the nature of society. But what's not fully understood
is that the pace of change is itself accelerating, and the last
20 years are not a good guide to the next 20 years. We're doubling
the paradigm shift rate, the rate of progress, every decade. So
this will actually match the amount of progress we made in the whole
20th century, because we've been accelerating up to this point.
The 20th century was like 25 years of change at today's rate of
change. In the next 25 years we'll make four times the progress
you saw in the 20th century. And we'll make 20,000 years of progress
in the 21st century, which is almost a thousand times more technical
change than we saw in the 20th century.
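A minimal sketch of the arithmetic behind those figures, assuming, as stated above, that the rate of progress doubles every decade and normalizing the year-2000 rate to one "year of progress" per calendar year. The constants behind the 25-year and 20,000-year figures in the text are fitted somewhat differently, but the shape of the calculation is the same:

    import math

    def equivalent_years(span_years, doubling_period_years):
        # Integral of 2**(t / doubling_period) dt over [0, span_years].
        # A positive doubling period models the accelerating future rate;
        # a negative one models the slower rates of the past.
        k = math.log(2.0) / doubling_period_years
        return (math.exp(k * span_years) - 1.0) / k

    print(round(equivalent_years(100, 10)))      # 21st century: roughly 15,000 "year-2000" years of progress
    print(round(equivalent_years(100, -10), 1))  # 20th century: ~14.4 "year-2000" years of progress
    print(round(equivalent_years(25, 10)))       # next 25 years: ~67, roughly four to five 20th centuries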
Specifically, computation is growing exponentially. The one exponential
trend that people are aware of is called Moore's Law. But Moore's
Law itself is just one method for bringing exponential growth to
computers. People are aware that we can put twice as many transistors
on an integrated circuit every two years. But in fact, those smaller
transistors also run twice as fast, so we double both the capacity and
the speed, which means that the power of computation quadruples every
two years - doubling roughly every 12 months.
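In other words, two independent doublings compound. A small sketch using the two-year figures above (real process generations vary):

    import math

    count_factor = 2.0   # transistor count: x2 per two-year generation (Moore's Law)
    speed_factor = 2.0   # per-transistor speed: x2 over the same period (smaller = faster)
    power_factor = count_factor * speed_factor        # 4x per two years
    doubling_time = 2.0 / math.log2(power_factor)     # = 1.0 year
    print(power_factor, doubling_time)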
What's not fully realized is that Moore's Law was not the first
but the fifth paradigm to bring exponential growth to computers.
We had electro-mechanical calculators, relay-based computers, vacuum
tubes, and transistors. Every time one paradigm ran out of steam
another took over. For a while there were shrinking vacuum tubes,
and finally they couldn't make them any smaller and still keep the
vacuum, so a whole different method came along. They weren't just
tiny vacuum tubes, but transistors, which constitute a whole different
approach. There's been a lot of discussion about Moore's Law running
out of steam in about 12 years because by that time the transistors
will only be a few atoms in width and we won't be able to shrink
them any more. And that's true, so that particular paradigm will
run out of steam.
We'll then go to the sixth paradigm, which is massively parallel
computing in three dimensions. We live in a 3-dimensional world,
and our brains organize in three dimensions, so we might as well
compute in three dimensions. The brain processes information using
an electrochemical method that's ten million times slower than electronics.
But it makes up for this by being three-dimensional. Every interneuronal
connection computes simultaneously, so you have a hundred trillion
things going on at the same time. And that's the direction we're
going to go in. Right now, chips, even though they're very dense,
are flat. Fifteen or twenty years from now computers will be massively
parallel and will be based on biologically inspired models, which
we will devise largely by understanding how the brain works.
We're already being significantly influenced by it. It's generally
recognized, or at least accepted by a lot of observers, that we'll
have the hardware to emulate human intelligence within a brief
period of time - I'd say about twenty years. A thousand dollars
of computation will equal the 20 million billion calculations per
second of the human brain. What's more controversial is whether
or not we will have the software. People acknowledge that we'll
have very fast computers that could in theory emulate the human
brain, but we don't really know how the brain works, and we won't
have the software, the methods, or the knowledge to create a human
level of intelligence. Without this you just have an extremely fast
calculator.
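A rough version of where that "20 million billion calculations per second" figure comes from, and how long $1,000 of hardware would take to reach it. The connection count and per-connection rate are the commonly quoted rough estimates; the year-2002 starting point for $1,000 of computation is an illustrative assumption, not a measured figure:

    import math

    connections = 1e14            # ~100 trillion interneuronal connections
    calcs_per_connection = 200.0  # ~200 calculations per second each (rough estimate)
    brain_cps = connections * calcs_per_connection
    print(brain_cps)              # 2e16, i.e. 20 million billion calculations per second

    thousand_dollar_cps_2002 = 1e9   # assumed starting point, for illustration only
    doublings_needed = math.log2(brain_cps / thousand_dollar_cps_2002)
    print(round(doublings_needed))   # ~24 doublings; at roughly one per year, a bit over twenty years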
But our knowledge of how the brain works is also growing exponentially.
The brain is not of infinite complexity. It's a very complex entity,
and we're not going to achieve a total understanding through one
simple breakthrough, but we're further along in understanding the
principles of operation of the human brain than most people realize.
The technology for scanning the human brain is growing exponentially,
our ability to actually see the internal connection patterns is
growing, and we're developing more and more detailed mathematical
models of biological neurons. We actually have very detailed mathematical
models of several dozen regions of the human brain and how they
work, and have recreated their methodologies using conventional
computation. The results of those re-engineered or re-implemented
synthetic models of those brain regions match the human brain very
closely.
We're also literally replacing sections of the brain that are degraded
or don't work any more because of disabilities or disease. There
are neural implants for Parkinson's Disease and well-known cochlear
implants for deafness. There's a new generation of those that are
coming out now that provide a thousand points of frequency resolution
and will allow deaf people to hear music for the first time. The
Parkinson's implant actually replaces the cortical neurons themselves
that are destroyed by that disease. So we've shown that it's feasible
to understand regions of the human brain, and reimplement those
regions in conventional electronics computation that will actually
interact with the brain and perform those functions.
If you follow this work and work out the mathematics of it, it's
a conservative scenario to say that within 30 years - possibly much
sooner - we will have a complete map of the human brain, we will
have complete mathematical models of how each region works, and
we will be able to re-implement the methods of the human brain,
which are quite different than many of the methods used in contemporary
artificial intelligence.
But these are actually similar to methods that I use in my own
field - pattern recognition - which is the fundamental capability
of the human brain. We can't think fast enough to logically analyze
situations very quickly, so we rely on our powers of pattern recognition.
Within 30 years we'll be able to create non-biological intelligence
that's comparable to human intelligence. Just like a biological
system, we'll have to provide it an education, but here we can bring
to bear some of the advantages of machine intelligence: Machines
are much faster, and much more accurate. A thousand-dollar computer
can remember billions of things accurately - we're hard-pressed
to remember a handful of phone numbers.
Once they learn something, machines can also share their knowledge
with other machines. We don't have quick downloading ports at the
level of our interneuronal connection patterns and our concentrations
of neurotransmitters, so we can't just download knowledge. I can't
just take my knowledge of French and download it to you, but machines
can. So we can educate machines through a process that can be hundreds
or thousands of times faster than the comparable process in humans.
It can provide a 20-year education to a human-level machine in maybe
a few weeks or a few days and then these machines can share their
knowledge.
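As a quick sanity check on that speedup, using the thousand-fold figure above as the assumed factor:

    education_years = 20
    speedup = 1000                    # "hundreds or thousands of times faster"
    days = education_years * 365 / speedup
    print(round(days, 1))             # ~7.3 days, i.e. "a few weeks or a few days"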
The primary implication of all this will be to enhance our own
human intelligence. We're going to be putting these machines inside
our own brains. We're starting to do that now with people who have
severe medical problems and disabilities, but ultimately we'll all
be doing this. Without surgery, we'll be able to introduce calculating
machines into the blood stream that will be able to pass through
the capillaries of the brain. These intelligent, blood-cell-sized
nanobots will actually be able to go to the brain and interact with
biological neurons. The basic feasibility of this has already been
demonstrated in animals.
One application of sending billions of nanobots into the brain
is full-immersion virtual reality. If you want to be in real reality,
the nanobots sit there and do nothing, but if you want to go into
virtual reality, the nanobots shut down the signals coming from
your real senses, replace them with the signals you would be receiving
if you were in the virtual environment, and then your brain feels as
if it's in the virtual environment. And you can go there yourself
- or, more interestingly you can go there with other people - and
you can have everything from sexual and sensual encounters to business
negotiations, in full-immersion virtual reality environments that
incorporate all of the senses.
People will beam their own flow of sensory experiences and the
neurological correlates of their emotions out into the Web, the
way people now beam images from web cams in their living rooms and
bedrooms. This will enable you to plug in and actually experience
what it's like to be someone else, including their emotional reactions,
à la the plot concept of Being John Malkovich. In virtual
reality you don't have to be the same person. You can be someone
else, and can project yourself as a different person.
Most importantly, we'll be able to enhance our biological intelligence
with non-biological intelligence through intimate connections. This
won't mean just having one thin pipe between the brain and a non-biological
system, but actually having non-biological intelligence in billions
of different places in the brain. I don't know about you, but there
are lots of books I'd like to read and Web sites I'd like to go
to, and I find my bandwidth limiting. So instead of having a mere
hundred trillion connections, we'll have a hundred trillion times
a million. We'll be able to enhance our cognitive pattern recognition
capabilities greatly, think faster, and download knowledge.
If you follow these trends further, you get to a point where change
is happening so rapidly that there appears to be a rupture in the
fabric of human history. Some people have referred to this as the
"Singularity." There are many different definitions of
the Singularity, a term borrowed from physics, which means an actual
point of infinite density and energy that's kind of a rupture in
the fabric of space-time.
Here, that concept is applied by analogy to human history, where
we see a point where this rate of technological progress will be
so rapid that it appears to be a rupture in the fabric of human
history. It's impossible in physics to see beyond a Singularity,
which creates an event boundary, and some people have hypothesized
that it will be impossible to characterize human life after the
Singularity. My question is, what will human life be like after
the Singularity, which I predict will occur somewhere right before
the middle of the 21st century?
A lot of the concepts we have of the nature of human life - such
as longevity - suggest a limited capability as biological, thinking
entities. All of these concepts are going to undergo significant
change as we basically merge with our technology. It's taken me
a while to get my own mental arms around these issues. In the book
I wrote in the 1980s, The Age of Intelligent Machines, I ended with
the specter of machines matching human intelligence somewhere between
2020 and 2050, and I basically have not changed my view on that
time frame, although I left behind my view that this is a final
specter. In the book I wrote ten years later, The Age of Spiritual
Machines, I began to consider what life would be like past the point
where machines could compete with us. Now I'm trying to consider
what that will mean for human society.
One thing that we should keep in mind is that innate biological
intelligence is fixed. We have about 10^26 calculations per second in the
whole human race and there are ten billion human minds. Fifty years
from now, the biological intelligence of humanity will still be
at that same order of magnitude. On the other hand, machine intelligence
is growing exponentially, and today it's a million times less than
that biological figure. So although it still seems that human intelligence
is dominating, which it is, the crossover point is around 2030 and
non-biological intelligence will continue its exponential rise.
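A sketch of that crossover arithmetic under the figures just given; the one-year doubling time for non-biological computation is an assumption, and a slower doubling pushes the crossover closer to the 2030 date mentioned above:

    import math

    biological_cps = 1e26                       # ~2e16 cps per brain x ~1e10 brains, order of magnitude
    machine_cps_today = biological_cps / 1e6    # "a million times less" than the biological figure
    doubling_time_years = 1.0                   # assumed; slower doubling pushes the date out
    years_to_crossover = math.log2(biological_cps / machine_cps_today) * doubling_time_years
    print(round(years_to_crossover, 1))         # ~19.9 years at annual doubling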
EDGE: This reminds me of a conversation I once had with John
Lilly about dolphins. I asked him, "How do you know they're
more intelligent than we are?" Isn't knowledge tautological?
How can we know more than we do know? Who would know it, except
us?
KURZWEIL: That's actually a very good point, because one response
is not to want to be enhanced, not to have nanobots. A lot of people
say that they just want to stay a biological person. But what will
the Singularity look like to people who want to remain biological?
The answer is that they really won't notice it, except for the fact
that machine intelligence will appear to biological humanity to
be their transcendent servants. It will appear that these machines
are very friendly, are taking care of all of our needs, and are really
our transcendent servants. But providing that service of meeting
all of the material and emotional needs of biological humanity will
comprise a very tiny fraction of the mental output of the non-biological
component of our civilization. So there's a lot that, in fact, biological
humanity won't actually notice.
There are two levels of consideration here. On the economic level,
mental output will be the primary criterion. We're already getting
close to the point that the only thing that has value is information.
Information has value to the extent that it really reflects knowledge,
not just raw data. There are a few products on this table - a clock,
a camera, a tape recorder - that are physical objects, but really
the value of them is in the information that went into their design:
the design of their chips and the software that's used to invent
and manufacture them. The actual raw materials - a bunch of sand
and some metals and so on - are worth a few pennies, but these products
have value because of all the knowledge that went into creating
them.
And the knowledge component of products and services is asymptoting
towards 100 percent. By the time we get to 2030 it will be basically
100 percent. With a combination of nanotechnology and artificial
intelligence, we'll be able to create virtually any physical product
and meet all of our material needs. When everything is software
and information, it'll be a matter of just downloading the right
software, and we're already getting pretty close to that.
On a spiritual level, the issue of what is consciousness is another
important aspect of this, because we will have entities by 2030
that seem to be conscious, and that will claim to have feelings.
We have entities today, like characters in your kids' video games,
that can make that claim, but they are not very convincing. If you
run into a character in a video game and it talks about its feelings,
you know it's just a machine simulation; you're not convinced that
it's a real person there. This is because that entity, which is
a software entity, is still a million times simpler than the human
brain.
In 2030, that won't be the case. Say you encounter another person
in virtual reality that looks just like a human but there's actually
no biological human behind it - it's completely an AI projecting
a human-like figure in virtual reality, or even a human-like image
in real reality using an android robotic technology. These entities
will seem human. They won't be a million times simpler than humans.
They'll be as complex as humans. They'll have all the subtle cues
of being humans. They'll be able to sit here and be interviewed
and be just as convincing as a human, just as complex, just as interesting.
And when they claim to have been angry or happy it'll be just as
convincing as when another human makes those claims.
At this point, it becomes a really deeply philosophical issue.
Is that just a very clever simulation that's good enough to trick
you, or is it really conscious in the way that we assume other people
are? In my view there's no real way to test that scientifically.
There's no machine you can slide the entity into where a green light
goes on and says okay, this entity's conscious, but no, this one's
not. You could make a machine, but it will have philosophical assumptions
built into it. Some philosophers will say that unless it's squirting
impulses through biological neurotransmitters, it's not conscious,
or that unless it's a biological human with a biological mother
and father it's not conscious. But it becomes a matter of philosophical
debate. It's not scientifically resolvable.
The next big revolution that's going to affect us right away is
biological technology, because we've merged biological knowledge
with information processing. We are in the early stages of understanding
life processes and disease processes by understanding the genome
and how the genome expresses itself in protein. And we're going
to find - and this has been apparent all along - that there's a
slippery slope and no clear definition of where life begins. Both
sides of the abortion debate have been afraid to get off the edges
of that debate: that life starts at conception on the one hand or
it starts literally at birth on the other. They don't want to get
off those edges, because they realize it's just a completely slippery
slope from one end to the other.
But we're going to make it even more slippery. We'll be able to
create stem cells without ever actually going through the fertilized
egg. What's the difference between a skin cell, which has all the
genes, and a fertilized egg? The only differences are some proteins
in the eggs and some signaling factors that we don't fully understand,
yet that are basically proteins. We will get to the point where
we'll be able to take some protein mix, which is just a bunch of
chemicals and clearly not a human being, and add it to a skin cell
to create a fertilized egg that we can then immediately differentiate
into any cell of the body. When I go like this and brush off thousands
of skin cells, I will be destroying thousands of potential people.
There's not going to be any clear boundary.
This is another way of saying also that science and technology
are going to find a way around the controversy. In the future, we'll
be able to do therapeutic cloning, which is a very important technology
that completely avoids the concept of the fetus. We'll be able to
take skin cells and create, pretty directly without ever going through
a fetus, all the cells we need.
We're not that far away from being able to create new cells. For
example, I'm 53 but with my DNA, I'll be able to create the heart
cells of a 25-year-old man, and I can replace my heart with those
cells without surgery just by sending them through my blood stream.
They'll take up residence in the heart, so at first I'll have a
heart that's one percent young cells and 99 percent older ones.
But if I keep doing this every day, a year later, my heart is 99
percent young cells. With that kind of therapy we can ultimately
replenish all the cell tissues and the organs in the body. This
is not something that will happen tomorrow, but these are the kinds
of revolutionary processes we're on the verge of.
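The "one percent a day" figure compounds rather than adds, so a quick sketch of the replacement arithmetic (the daily fraction is the illustrative number from above, not a clinical one) gives a result close to, though a bit under, the 99 percent quoted:

    old_fraction = 1.0
    daily_replacement = 0.01          # replace 1% of the remaining old cells each day (illustrative)
    for day in range(365):
        old_fraction *= (1.0 - daily_replacement)
    print(round(1.0 - old_fraction, 3))   # ~0.974: roughly 97% young cells after a year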
If you look at human longevity - which is another one of these
exponential trends - you'll notice that we added a few days every
year to the human life expectancy in the 18th century. In the 19th
century we added a few weeks every year, and now we're adding
over a hundred days a year, through all of these developments, which
are going to continue to accelerate. Many knowledgeable observers,
including myself, feel that within ten years we'll be adding more
than a year every year to life expectancy.
As we get older, human life expectancy will actually move out at
a faster rate than we're progressing in age, so if we can
hang in there, our generation is right on the edge. We have to watch
our health the old-fashioned way for a while longer so we're not
the last generation to die prematurely. But if you look at our kids,
by the time they're 20, 30, 40 years old, these technologies will
be so advanced that human life expectancy will be pushed way out.
There is also the more fundamental issue of whether or not ethical
debates are going to stop the developments that I'm talking about.
It's all very good to have these mathematical models and these trends,
but the question is whether they're going to hit a wall because people,
for one reason or another - through war or ethical debates such
as the stem cell issue controversy - thwart this ongoing exponential
development.
I strongly believe that's not the case. These ethical debates are
like stones in a stream. The water runs around them. You haven't
seen any of these biological technologies held up for one week by
any of these debates. To some extent, they may have to find some
other ways around some of the limitations, but there are so many
developments going on. There are dozens of very exciting ideas about
how to use genomic information and proteomic information. Although
the controversies may attach themselves to one idea here or there,
there's such a river of advances. The concept of technological advance
is so deeply ingrained in our society that it's an enormous imperative.
Bill Joy has gotten around - correctly - talking about the dangers,
and I agree that the dangers are there, but you can't stop ongoing
development.
The kinds of scenarios I'm talking about 20 or 30 years from now
are not being developed because there's one laboratory that's sitting
there creating a human-level intelligence in a machine. They're
happening because it's the inevitable end result of thousands of
little steps. Each little step is conservative, not radical, and
makes perfect sense. Each one is just the next generation of some
company's products. If you take thousands of those little steps
- which are getting faster and faster - you end up with some remarkable
changes 10, 20, or 30 years from now. You don't see Sun Microsystems
saying the future implication of these technologies is so dangerous
that they're going to stop creating more intelligent networks and
more powerful computers. Sun can't do that. No company can do that
because it would be out of business. There's enormous economic imperative.
There is also a tremendous moral imperative. We still have not
millions but billions of people who are suffering from disease and
poverty, and we have the opportunity to overcome those problems
through these technological advances. You can't tell the millions
of people who are suffering from cancer that we're really on the
verge of great breakthroughs that will save millions of lives from
cancer, but we're canceling all that because the terrorists might
use that same knowledge to create a bioengineered pathogen.
This is a true and valid concern, but we're not going to do that.
There's a tremendous belief in society in the benefits of continued
economic and technological advance. Still, it does raise the question
of the dangers of these technologies, and we can talk about that
as well, because that's also a valid concern.
Another aspect of all of these changes is that they force us to
re-evaluate our concept of what it means to be human. There is a
common viewpoint that reacts against the advance of technology and
its implications for humanity. The objection goes like this: we'll
have very powerful computers but we haven't solved the software
problem. And because the software's so incredibly complex, we can't
manage it.
I address this objection by saying that the software required to
emulate human intelligence is actually not beyond our current capability.
We have to use different techniques - different self-organizing
methods - that are biologically inspired. The brain is complicated
but it's not that complicated. You have to keep in mind that it
is characterized by a genome that compresses to only about 23 million
bytes. The genome is six billion bits - roughly eight hundred million
bytes - and there
are massive redundancies. One pretty long sequence called ALU is
repeated 300 thousand times. If you use conventional data compression
on the genome (at 23 million bytes, a small fraction of the size
of Microsoft Word), it's a level of complexity that we can handle.
But we don't have that information yet.
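A rough version of that compression arithmetic, using the figures quoted above; the ~300-base length assumed for an individual ALU element is an illustrative assumption:

    raw_bits = 6e9
    raw_bytes = raw_bits / 8                       # ~750 million bytes ("roughly eight hundred million")
    compressed_bytes = 23e6                        # the ~23 million byte figure above
    print(raw_bytes / compressed_bytes)            # compression ratio of roughly 30:1

    alu_length_bases = 300                         # assumed typical length of an Alu element
    alu_copies = 300_000                           # "repeated 300 thousand times"
    alu_bytes = alu_length_bases * alu_copies * 2 / 8   # at 2 bits per base
    print(alu_bytes)                               # ~22.5 million bytes of near-duplicate sequence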
You might wonder how something with 23 million bytes can create
a human brain that's a million times more complicated than itself.
That's not hard to understand. The genome creates a process of wiring
a region of the human brain involving a lot of randomness. Then,
when the fetus becomes a baby and interacts with a very complicated
world, there's an evolutionary process within the brain in which
a lot of the connections die out, others get reinforced, and it
self-organizes to represent knowledge about the world. It's a very
clever system, and we don't understand it yet, but we will, because
it's not a level of complexity beyond what we're capable of engineering.
In my view there is something special about human beings that's
different from what we see in any of the other animals. By happenstance
of evolution we were the first species to be able to create technology.
Actually there were others, but we are the only one that survived
in this ecological niche. But we combined a rational faculty, the
ability to think logically, to create abstractions, to create models
of the world in our own minds, and to manipulate the world. We have
opposable thumbs so that we can create technology, but technology
is not just tools. Other animals have used primitive tools, but
the difference is actually a body of knowledge that changes and
evolves itself from generation to generation. The knowledge that
the human species has is another one of those exponential trends.
We use one stage of technology to create the next stage, which
is why technology accelerates, why it grows in power. Today, for
example, a computer designer has these tremendously powerful computer
system design tools to create computers, so in a couple of days
they can create a very complex system and it can all be worked out
very quickly. The first computer designers had to actually draw
them all out in pen on paper. Each generation of tools creates the
power to create the next generation.
So technology itself is an exponential, evolutionary process that
is a continuation of the biological evolution that created humanity
in the first place. Biological evolution itself evolved in an exponential
manner. Each stage created more powerful tools for the next, so
when biological evolution created DNA it now had a means of keeping
records of its experiments so evolution could proceed more quickly.
Because of this, the Cambrian explosion only lasted a few tens of
millions of years, whereas the first stage of creating DNA and primitive
cells took billions of years. Finally, biological evolution created
a species that could manipulate its environment and had some rational
faculties, and now the cutting edge of evolution actually changed
from biological evolution into something carried out by one of its
own creations, Homo sapiens, and is represented by technology. In
the next epoch this species that ushered in its own evolutionary
process - that is, its own cultural and technological evolution,
as no other species has - will combine with its own creation and
will merge with its technology. At some level that's already happening,
even if most of us don't necessarily have them yet inside our bodies
and brains, since we're very intimate with the technology - it's in
our pockets. We've certainly expanded the power of the mind of the
human civilization through the power of its technology.
We are entering a new era. I call it "the Singularity."
It's a merger between human intelligence and machine intelligence
that is going to create something bigger than itself. It's the cutting
edge of evolution on our planet. One can make a strong case that
it's actually the cutting edge of the evolution of intelligence
in general, because there's no indication that it's occurred anywhere
else. To me that is what human civilization is all about. It is
part of our destiny and part of the destiny of evolution to continue
to progress ever faster, and to grow the power of intelligence exponentially.
To contemplate stopping that - to think human beings are fine the
way they are - is a misplaced fond remembrance of what human beings
used to be. What human beings are is a species that has undergone
a cultural and technological evolution, and it's the nature of evolution
that it accelerates, and that its powers grow exponentially, and
that's what we're talking about. The next stage of this will be
to amplify our own intellectual powers with the results of our technology.
What is unique about human beings is our ability to create abstract
models and to use these mental models to understand the world and
do something about it. These mental models have become more and
more sophisticated, and by becoming embedded in technology, they
have become very elaborate and very powerful. Now we can actually
understand our own minds. This ability to scale up the power of
our own civilization is what's unique about human beings.
Patterns are the fundamental ontological reality, because they
are what persists, not anything physical. Take myself, Ray Kurzweil.
What is Ray Kurzweil? Is it this stuff here? Well, this stuff changes
very quickly. Some of our cells turn over in a matter of days. Even
our skeleton, which you think probably lasts forever because we
find skeletons that are centuries old, changes over within a year.
Many of our neurons change over. But more importantly, the particles
making up the cells change over even more quickly, so even if a
particular cell is still there the particles are different. So I'm
not the same stuff, the same collection of atoms and molecules that
I was a year ago.
But what does persist is that pattern. The pattern evolves slowly,
but the pattern persists. So we're kind of like the pattern that
water makes in a stream; you put a rock in there and you'll see
a little pattern. The water is changing every few milliseconds;
if you come a second later, it's completely different water molecules,
but the pattern persists. Patterns are what have resonance. Ideas
are patterns, technology is patterns. Even our basic existence as
people is nothing but a pattern. Pattern recognition is the heart
of human intelligence. Ninety-nine percent of our intelligence is
our ability to recognize patterns.
There's been a sea change just in the last several years in the
public understanding of the acceleration of change and the potential
impact of all of these technologies - computer technology, communications,
biological technology - on human society. There's really been tremendous
change in popular public perception in the past three years because
of the onslaught of stories and news developments that document
and support this vision. There are now several stories every day
about significant developments that show the escalating power
of these technologies.
http://www.edge.org/3rd_culture/kurzweil_singularity/kurzweil_singularity_index.html
(Audio and video available)
Mind·X Discussion About This Article:
Re: After the Singularity: A Talk with Ray Kurzweil
Keep in mind what is happening with the environment; it could be an indicator for the Singularity:
The pattern I've seen in climatology is that it seems to be run by positive feedback mechanisms. The only reason we can't really predict very well in climate science is that as the earth heats up, it triggers these positive feedback mechanisms, which then trigger other positive feedback mechanisms that we couldn't have foreseen. Well, y'all know the story, so I'll cut it short.
I see no reason that this isn't also happening with the Rate of Accelerating Returns, which puts the Singularity much closer.
I'm gonna say Dec. 21, 2012 (no, not because I actually BELIEVE in prophecy, but I really love aesthetic resonance, so it certainly does no harm)
Re: After the Singularity: A Talk with Ray Kurzweil
There's another shortcut.
You develop human-equivalent machine intelligence software, run it on a supercomputer with human-equivalent speed (likely reachable around 2020), give it all the information it needs to engineer the next generation of microprocessors, and then set it to work.
While ve's working on the engineering, you and other humans set up the manufacturing facility so that ve can automatically change over all the production vimself without any human intervention.
In 12 months or so, after running for 24 hours a day, 7 days a week, ve will have engineered a processor twice as fast as the ones ve's running on. You power vim down, replace each of the chips ve's running on with one of the new chips, fire vim back up, and ve goes back to work.
Ve is now running TWICE as fast as previous. This allows vim to develop the NEXT generation of processors within - 6 months! Or, ve could allocate 50% of vis processor threads to the next-gen processor design, and the other 50% towards developing a better memory and motherboard architecture, in the same amount of time as it took vim to develop the last job!
Within several years, you've got the Singularity, running on superprocessors with several embedded distinct microprocessors, hooked up to RAM with massive amounts of bandwidth and latency of nanoseconds.....well. You've got Singularity.
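A toy sketch of that feedback loop (the 12-month first generation and the 2x speed gain per generation are the scenario's numbers above; everything else is illustrative):

    # Each chip generation doubles the AI's speed, so the wall-clock time to design
    # the next generation halves. The total elapsed time converges geometrically.
    design_effort_machine_years = 1.0   # effort per generation, measured at the initial speed
    speed = 1.0
    elapsed_months = 0.0
    for generation in range(1, 11):
        months = 12.0 * design_effort_machine_years / speed
        elapsed_months += months
        print(generation, round(months, 2), round(elapsed_months, 2))
        speed *= 2.0
    # Generation 1 takes 12 months, generation 2 takes 6, and the running total
    # approaches 24 months even as the hardware speed grows without bound.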
Now all the engineers need to do is tell the Singularity to develop Nanotechnology, and you're a few years away from the entire world being utopian. If, of course, that initial AI was based off Friendly AI.
And damnit, why isn't Friendly AI in Kurzweil's Brain?
Re: After the Singularity: A Talk with Ray Kurzweil
the pattern here is that change is accelerating. no one disputes this. ray asserts some outcomes, and is better able to do this with credibility than most. however, we may already be at the point of singularity, and therefore by defn no one's predictions could be more/less correct. what we do KNOW is that 30, 20, 15 years from now, the world will be unrecognizable -- and the attendant powershifts frightening. whatever the changes, they will be sudden, profound, and will make the horrors (and progress) of the 20th century appear trivial. adapt, migrate, or die... it may already be a brave new world.
Re: After the Singularity: A Talk with Ray Kurzweil
you mean something like this:
http://www.spinbitz.net/Articles/SZ_Earth2.0.htm
1. Introductory Resources: All of this is loose speculation based on a fully functional nanotechnological infrastructure, such as that envisioned by K. Eric Drexler, Ralph Merkle and others. See Drexler's 'Nanosystems' and 'Engines of Creation' as well as Robert Freitas' 'Nanomedicine' series, as well as the following resources.
http://www.foresight.org/Conferences/MNT05/Papers/McKendree/index.html
http://www.nanotech-now.com/utility-fog.htm
2. Introduction: An active polymorphic, Utility-Fluid* nanotechnological substrate is loosely outlined which could be used to form a multi-scale object, matter/energy transportation, transformation sensor/effector system which could span the Earth, fill and extend the atmosphere and transform it into an active participant in near and sub-terrestrial events. The Utility-Fluid system bears a behavioral macro-level resemblance to John Storrs Hall's (JoSH) "utility fog," (see http://www.nanotech-now.com/utility-fog.htm) but the similarities between the two systems rapidly diminish at the unit level. Rather, this system uses a "utility fluid" (UFL) model for its architecture as opposed to a gaseous or "utility fog" model, and it is possible that this would provide for a very different set of capabilities at the meso and macro levels.
3. Programmable Matter Units (PMU): The units in this Utility Fluid (UFL) system are similar in structure to vacuous single-celled organisms (e.g. amoebas, hydrae, and neurons). The units themselves have a muscular, polymorphic membrane which can expand and contract and modulate their shape at the object surface and subsurface levels, thus imbuing the PMU and its surface detail, texture and properties with rich polymorphic capabilities. When taken together, as a "fluid", the units can slip around each other in a controlled, continuous and fluid manner using sub-surface modulations. These could include: 1. Adaptive Electro-Magnetic Levitation (AEML, to be discussed below), where the surfaces would actually be separated some distance by magnetic levitation, similar to a gas; 2. Mechanical, 2a. Radial and/or rotary/circumferential movement of sub-surface sub-units and patternings. 2b. Utility Fog Emulation, the unit-surface can also form into radiating arms and thus the individual units can emulate JoSH's foglets in functionality if the necessity arises. The space-filling, "fluid" mode, however, is a welcome alternative to the cumbersome hand-to-hand motion-transfer from unit to unit required by the JoSH method.
3.1. Adaptive Electro-Magnetic Levitation (AEML): The polymorphic, surface modulating capabilities make the units necessarily much more complex than the JoSH model, but the basic inter-unit motion-control programming/protocol should be simpler and readily evolvable. The embedded polymorphic, multi-scalar multi-hub, information/power switching network (see below) gives rise to many interesting possibilities. Each fluid unit is a power-switching hub. Thus networks of these units would be able to form higher level adaptable electronic circuits and electricity flow patterns. A surface of densely packed columns of ~500 microns in diameter (50 or so units) could form helical or coil circuits giving rise to a magnetic field. The magnetic field formed from arrays of these AEML cells should be strong enough to provide magnetic levitation at the interface between the capsule and the surface of the muscular contraction. For larger systems the circuits could be formed at larger scales and more power could be drawn through the circuits to provide a more powerful magnetic field.
3.2. NOTE: Depending upon the possible strength of an AEML cell in the sub-micron region, the units, expanded to space-filling shape, could produce the AEML cells on their surfaces required for levitation. The precise manipulation and synchronization of patterns of AEML fields at the cell-to-cell interfaces could produce the necessary laminar flow forces (electro-magnetic peristalsis) required to actuate a laminar flow. One could imagine many scenarios arising from this capability ... the units could join together to construct machinery for the transportation and launching of other units, aggregates or macroscale objects.
Re: After the Singularity: A Talk with Ray Kurzweil
Convergence between machine and mind is very seductive. A machine works on principles of rules and logical execution. The mind works on the principles of inference based on memory and Bayesian probabilities related to what is stored and recalled.
There is an overlap between both machine and mind. Mind 'rules' are degenerative in that memory is not a perfect vehicle for repetitive actions while computer memory is infallible until power is turned off.
The weakness in the machine is in its dependence on a holistic and reliable framework, whether device, protocol, network communication or software process. A failure in one or more renders the outcome as less than useful.
The weakness in the mind is its pliability; it becomes more useful when chemical and physiological mechanisms are reinforced (learning) but less reliable when asked to recall specific instances.
For example, repetitive attacks on a machine can condition a response to learn from what the machine has to respond to. Whether it is a physical response, (light, temperature, shock, sound) or based on programmatic sequences (if this happened then this resulted, therefore...).
The mind works on multiple levels. For example, a fight-or-flight experience triggers multiple response events:
1. adrenaline pulses cause muscles and heart and blood flows to respond in nanoseconds.
2. visual inputs are detected at primitive levels such as rod and cone patterns within the retina, as well as edge detection matching to remembered patterns that are recalled as images.
3. reasoning responses based on audio, visual and sensual inputs that indicate a positive/negative experience.
4. in situ memory so that one person's recall leads to different conclusions than someone else with the same stimulus.
To infer that there is a singularity between man and machine is specious only in that the order of the experiment is wrong. Rather, one should look at how the mind can be conditioned by what we produce as systems that 'mimic' processes that are either tedious or complex for us to pursue.
A couple of examples:
1. Searching images of star quadrants is more than suited to mechanical methods. The human error is 3 orders of magnitude greater than equivalent machinery. Here is a well defined case of machines exceeding human perception - as long as rules for detection are well formulated.
2. Detecting fraud patterns in credit card usage. This problem consistently requires an external reference system, because as fraud is detected, new rules are enforced. However, as these rules work to minimize fraud, new fraudulent means are created - in other words, they mutate. At present, even neural models cannot react in real time to detect fraud mutation. It requires human rule intervention.
In summary, I would not call this a singularity between man and machine but more of a punctuated equilibrium where advances in machine learning and devices will stimulate a larger order of magnitude expansion of human ingenuity.
Andre Szykier, CTO
Proqueome
Re: After the Singularity: A Talk with Ray Kurzweil
Andre,
You seem to be saying two things at once. You imply that human mind and machine intelligence each may have their own strengths and weaknesses, taking perhaps complementary roles, and (optimistically) each can gain from the other to advance. This is an agreeable and "hopeful" vision.
However, I am not sure that "machine" capabilities will be as restricted to the "fragile, bit-error-like" ways it has taken to date. The multi-layered systems of the mind/brain can largely (in principle) be emulated in machines, and the flexibility of machines to "recompose" themselves is something that the human mind may be incapable of doing (without extreme augmentation.)
I am "multi-layered" when I search the house for a lost object. I apply my consciousness (main train of thought) to visual pattern-recognition as I search from room to room for the "matching image", and I do so, almost unaware of how I am navigating chairs and steps and not falling over. My "motor-movement-equilibrium" system is almost autonomically taking care of business, under only the slightest direction on my "conscious" part. But a well-designed robot, seeking a lost object, would also have delegated many autonomous activities to similar subsystems. The images gained by the visual system would be servicing both the "search" heuristics and the "navigation" heuristics, each making use of different aspects of the scenery.
I once argued that machines/AI would not develop any sense of "emotion", by virtue of their lack of "body" and its chemically-induced psychological states. I gave an example that I thought would be convincing, and then later thought otherwise.
We humans have likely inherited a healthy tendency to fear the low growl of a tiger, and will respond with adrenal flows well in time for our feet to start running. What if the growl is so distant that we are not conscious of having "heard" it, even though our auditory systems picked it up? I suspect that our adrenaline will pick up, and our heart begin to beat a bit faster, and THIS physiological manifestation we may become aware of. We might say we feel "spooked", "nervous", "fearful", and yet not (yet) know why. I asked myself, "how or why would a robot ever develop such a sensation"?
But then, it occurred to me that a robot's autonomous "movement-navigation" systems might learn over time to prepare to flee upon hearing certain sounds, and (like the visual system's images are shared for distinct functions) the robot's auditory systems would likewise be shared. The "main train of thought" process might be listening (focusing) upon a conversation, and not immediately be bothered by the sound of an approaching vehicle. Perhaps the "movement-system" may have learned to react to "approaching vehicle sound" by beginning to rev its motor systems (in anticipation of the need for quick movement.)
Might the robot's "main thought processes" become interrupted, NOT (yet) by the sound of the approaching vehicle, but by the sound and vibration-sensation of the autonomous motor-system? Might the robot be "surprised" that its "heart" is racing? Whether it actually could "feel fear" the way we do, or not, it might still react "as if nervous", momentarily seeking explanation for its body's sudden "fight or flight" posture?
I would grant you that the brain's "depth" and complexity is something that machine intelligence will take a very long time to match ... but I see no fundamental barriers to this status, and thence rapid advance beyond. The human mind still has room to "expand", but not (on its own) to any such degree for nearly unbounded expansion.
Singularity, for me, will be the point where robo-AI is able to control and adapt to its environment more effectively than humans can control or adapt to their own environment, however one cares to measure intelligence.
Cheers! ____tony b____
Re: After the Singularity: A Talk with Ray Kurzweil
Perhaps the universe is empty because every previous intelligence has reached singularity.
Perhaps, post singularity, the universe as we know it becomes unimportant.
Perhaps, we reach a level of understanding and perception where we see all new horizons that are far more compelling.
Perhaps the universe of mathematics, perhaps the universe of mind.
Perhaps, with technology to "amplify" our abilities, we are able to establish true communications with other dimensions. Would these 3 dimensions still be so exciting then?
Even more interesting to me is that, perhaps, we are able to link with our Creator. Perhaps, heaven is real after all. And we could all be just a few years from reaching it.
Perhaps, the end is near. And what a beautiful ending it could be.
Re: After the Singularity: A Talk with Ray Kurzweil
Nolan,
It is true that most fruits of the "stages of evolution" are still present today. Evolution is not so much replacement as it is augmentation.
Unlike (say) rodents, we have a degree of intellectual capability that allows us to consider the world in terms of time, species, and "superiority". Measured in terms WE create, humans are "superior" to the cockroach. In terms of evolution, neither human nor cockroach is superior, as both have "survived" to the present day.
As to which is more "ripe" for extinction, that is another matter.
But your real question, "Were it here, would we recognize it" is a good way to approach "are we expecting the right things".
Did a human put "a man on the moon"? Or did the US, or "Western Society", or simply humanity, perform that accomplishment? Where do you want to place the responsibility?
And is it evolution, per se, or simply "change" that is the center of people's concerns?
Cheers! ____tony b____
Re: After the Singularity: A Talk with Ray Kurzweil
Raymond,
You are missing a fundamental notion of consciousness that causes you to be completely wrong in several assertions you constantly make. For example you say:
>>> At this point, it becomes a really deeply philosophical issue. Is that just a very clever simulation that's good enough to trick you, or is it really conscious in the way that we assume other people are? In my view there's no real way to test that scientifically. There's no machine you can slide the entity into where a green light goes on and says okay, this entity's conscious, but no, this one's not. You could make a machine, but it will have philosophical assumptions built into it. Some philosophers will say that unless it's squirting impulses through biological neurotransmitters, it's not conscious, or that unless it's a biological human with a biological mother and father it's not conscious. But it becomes a matter of philosophical debate. It's not scientifically resolvable. <<<
If what David Chalmers has said about the "Hard Problem" of consciousness is true then this will most definitely be scientifically resolvable. This issue will be resolved by objectively "effing" the ineffable.
Take the currently ineffable taste of salt, for example. Theoretically a person could have a defect somewhere in his brain's ability to produce this sensation from birth, making it so this person has never known or experienced this taste. Once we have the ability to correct such defects, we will be able to produce in this person's mind this subjective experience (even without the presence of sodium chloride!). The person's response (or the response of an AI when it "effs" for the first time) will likely be something like "Oh THAT's what salt tastes like." Because of our understanding of the proper "neural correlate" that has this quality, we will know he is telling the truth when he says he now knows what the taste of salt is like.
Our brain uses these phenomenal qualities to represent everything we consciously know. Something in our brain has these phenomenal qualities that we still only know subjectively. We will soon discover which cause and effect process of nature also has these phenomenal properties. Just as the scientific process involves the identification and classification of fundamental elements into a periodic table of elements, the scientific process will also identify and classify all these subjective experiences, or qualia, that we are able to experience. The key difference of this table will be sufficient grounded information that will enable enhanced brains to "know" and experience or 'eff' the precise qualia represented by any particular entry in this table.
In this way, the "hard problem" of consciousness will be once and for all scientifically resolved. This discovery of which matter in our brain has these additional phenomenal and subjective qualities will surely be the most significant and earth changing scientific achievement to date.
You almost get it right when you say things like:
>>>People will beam their own flow of sensory experiences and the neurological correlates of their emotions out into the Web, the way people now beam images from web cams in their living rooms and bedrooms. This will enable you to plug in and actually experience what it's like to be someone else, including their emotional reactions, a'la the plot concept of Being John Malkovich. In virtual reality you don't have to be the same person. You can be someone else, and can project yourself as a different person.<<<
But you are still missing the simple idea of effing of the individual involuntary sensations our brain uses to represent this "flow of sensory experience" and the significance of all this in its ability to scientifically and objectively resolve the problem of other minds. The only reason "virtual reality" works is because our mind is able to recognize the patterns coming from our senses and from this produce conscious knowledge models made of qualia in our brain.
You foolishly make such a big deal about the Turing Test. But in 20 years or less, or after we achieve the ability to eff, people will look back and wonder how someone so smart could be so clueless about something that should be so simple and so obvious. The simple fact of the matter is the only important question to ask in a Turing Test is something like: "What is red like?"
I've written a more in-depth and complete paper on this issue and how you are mistaken for anyone interested. It is available here:
http://home.attbi.com/~brent.allsop/
Brent Allsop
allsop@extropy.org
Re: After the Singularity: A Talk with Ray Kurzweil
Brent,
> "Take the currently ineffable taste of salt, for example."
I still think this is more problematic than you make it.
Suppose that to me, the tastes of salt and sugar are reversed. Every time you place salt in my mouth, it will taste to me like sugar would to you, and I exclaim "that tastes like salt" (of course). Naturally, I am going to call that taste what I've been trained from birth to call it.
Now, suppose you have never performed this experiment upon me, and do not know of my "reversed taste" sensation. You give me a salt tablet, and study my "neural correlates". What makes you suppose that there are ANY correlates there that speak "is experiencing a universally-recognized sugary taste"?
Even if such correlates were possible, they would only be recognized because (we assume) most folks would "sense salt and sugar" in coincident ways, providing a sort of "baseline" from which to measure.
Even then, you assume that my "appreciation" of what I sense is identical to some universal (at least, among humans) neural activity, when it might well be several-fold layers above, (patterns of patterns) or encoded in relation to memorized neural-ion concentrations that might be different for every individual, yet translates (for each of them) into a "salt-taste-recognition".
Finally, even if "salt-taste" appreciation could be found to have a universally recognizable set of correlates, salt and sugar are rather close to the physical interface, chemistry to chemistry, so to speak. Very much of what we experience as "minds" is far more abstracted, "sense of confinement, sense of freedom, sense of foreboding, sense of boredom", etc.
You may seek correlates, but that does not guarantee that they will exist. Each "mind" may have found its own chaotic way to encode, reflect, and recognize these "senses".
It's not yet a "done deal", and I imagine it will be exceedingly hard to "establish" the existence, no less the universality and "template", for many of these qualities of experience.
(No reason not to try, 'though).
Cheers! ____tony b____
Re: After the Singularity: A Talk with Ray Kurzweil
Tony B,
>>>Suppose that to me, the tastes of salt and sugar are reversed. Every time you place salt in my mouth, it will taste to me like sugar would to you, and I exclaim "that tastes like salt" (of course). Naturally, I am going to call that taste what I've been trained from birth to call it.<<<
Yes, this is commonly called 'Inverted Qualia' where one brain uses different qualia to represent particular sensory data.
>>>What makes you suppose that there are ANY correlates there that speak "is experiencing a universally-recognized sugary taste"?<<<
It is my theory that it will not be something that is 'spoken'. Some of the matter in our brain has these particular phenomenal qualities. Once we discover them, and which matter has them when in particular states, we will understand more about them. 'Speaking' is an abstract cause and effect communication process. But phenomenal qualities are something that just are, rather than some cause or effect.
>>> Finally, even if "salt-taste" appreciation could be found to have a universally recognizable set of correlates, salt and sugar are rather close to the physical interface, chemistry to chemistry, so to speak. Very much of what we experience as "minds" is far more abstracted, "sense of confinement, sense of freedom, sense of foreboding, sense of boredom", etc.<<<
I believe it is a mistake to worry about these much more fleeting and complex voluntarily produced cognitions such as ideas and so on. First we must focus on the very solid involuntary ones that represent direct sense data. We must first know the alphabet of thoughts and what they are made of before we can hope to understand what complex voluntarily produced thoughts are.
>>>You may seek correlates, but that does not guarantee that they will exist. Each "mind" may have found its own chaotic way to encode, reflect, and recognize these "senses".<<<
I know, more than I know anything else (we could be nothing but a brain in a vat where nothing but these conscious representations exist), that my taste of salt, red, warm, and so on exist. It's only a matter of time before I can eff these precise sensations to you so you will be able to tell if you have different or identical representations. Once we can do this, we'll be able to move on to try to understand more complex emotions and thoughts and interpretations and so on using this as a base to work on.
Brent Allsop