
Chapter 1: The Evolution of Mind in the Twenty-First Century
by Ray Kurzweil

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century -- it will be approximately 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. This exponential growth is not restricted to hardware, but, with accelerating gains in brain reverse engineering, also applies to software. Within a few decades, machine intelligence will surpass human intelligence, allowing nonbiological intelligence to combine the subtleties of human intelligence with the speed and knowledge sharing ability of machines. The results will include the merger of biological and nonbiological intelligence, the downloading of the brain, and immortal software-based humans -- the next step in evolution.


Originally published in print June 18, 2002 in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI by the Discovery Institute. Published on KurzweilAI.net on June 18, 2002.


An Overview of the Next Several Decades

The intelligence of machines—nonbiological entities—will exceed human intelligence early in this century. By intelligence, I include all the diverse and subtle ways in which humans are intelligent—including musical and artistic aptitude, creativity, physically moving through the world, and even responding to emotion. By 2019, a $1,000 computer will match the processing power of the human brain—about 20 million billion calculations per second. This level of processing power is a necessary but not sufficient condition for achieving human-level intelligence in a machine. Organizing these resources—the “software” of intelligence—will take us to 2029, by which time your average personal computer will be equivalent to a thousand human brains.

Once a computer achieves a level of intelligence comparable to human intelligence, it will necessarily soar past it. A key advantage of nonbiological intelligence is that machines can easily share their knowledge. If I learn French, or read War and Peace, I can’t readily download that learning to you. You have to acquire that scholarship the same painstaking way that I did. My knowledge, embedded in a vast pattern of neurotransmitter concentrations and interneuronal connections, cannot be quickly accessed or transmitted. But we won’t leave out quick downloading ports in our nonbiological equivalents of human neuron clusters. When one computer learns a skill or gains an insight, it can immediately share that wisdom with billions of other machines.

As a contemporary example, we spent years teaching one research computer how to recognize continuous human speech. We exposed it to thousands of hours of recorded speech, corrected its errors, and patiently improved its performance. Finally, it became quite adept at recognizing speech (I dictated most of my recent book to it). Now if you want your own personal computer to recognize speech, it doesn’t have to go through the same process; you can just download the fully trained program in seconds. Ultimately, billions of nonbiological entities can be masters of all human- and machine-acquired knowledge. Computers are also potentially millions of times faster than human neural circuits, and have far more reliable memories.

One approach to designing intelligent computers will be to copy the human brain, so these machines will seem very human. And through nanotechnology, which is the ability to create physical objects atom by atom, they will have humanlike—albeit greatly enhanced—bodies as well. Having human origins, they will claim to be human, and to have human feelings. And being immensely intelligent, they’ll be very convincing when they tell us these things. But are these feelings “real,” or just apparently real? I will discuss this subtle but vital distinction below. First it is important to understand the nature of nonbiological intelligence, and how it will emerge.

Keep in mind that this is not an alien invasion of intelligent machines. It is emerging from within our human-machine civilization. There will not be a clear distinction between human and machine as we go through the twenty-first century. First of all, we will be putting computers—neural implants—directly into our brains. We’ve already started down this path. We have ventral posterior nucleus, subthalamic nucleus, and ventral lateral thalamus neural implants to counteract Parkinson’s Disease and tremors from other neurological disorders. I have a deaf friend who now can hear what I am saying because of his cochlear implant. Under development is a retina implant that will perform a similar function for blind individuals, basically replacing certain visual processing circuits of the retina and nervous system. Recently scientists from Emory University placed a chip in the brain of a paralyzed stroke victim who can now begin to communicate and control his environment directly from his brain.

In the 2020s, neural implants will not be just for disabled people, and introducing these implants into the brain will not require surgery, but more about that later. There will be ubiquitous use of neural implants to improve our sensory experiences, perception, memory, and logical thinking.

These “noninvasive” implants will also plug us in directly to the World Wide Web. By 2030, “going to a web site” will mean entering a virtual reality environment. The implant will generate the streams of sensory input that would otherwise come from our real senses, thus creating an all-encompassing virtual environment that responds to the behavior of our own virtual body (and those of others) in the virtual environment. This technology will enable us to have virtual reality experiences with other people—or simulated people—without requiring any equipment not already in our heads. And virtual reality will not be the crude experience one finds in today’s arcade games. Virtual reality will be as realistic, detailed, and subtle as real reality. So instead of just phoning a friend, you can meet in a virtual French café in Paris, or take a walk on a virtual Mediterranean beach, and it will seem very real. People will be able to have any type of experience with anyone—business, social, romantic, sexual—regardless of physical proximity.

The Growth of Computing

To see into the future, we need insight into the past. We need to discern the relevant trends and their interactions. Many projections of the future suffer from three common failures. The first is that people often consider only one or two iterations of advancement in a technology, as if progress would then come to a halt.

A second is focusing on only one aspect of technology, without considering the interactions and synergies from developments in multiple fields (e.g. computational substrates and architectures, software and artificial intelligence, communication, nanotechnology, brain scanning and reverse engineering, amongst others).

By far the most important failure is not adequately taking into consideration the accelerating pace of technology. Many predictions do not factor this in with any consistent methodology, if at all. Ten thousand years ago, there was little salient technological change in even a thousand years. A thousand years ago, progress was much faster and a paradigm shift required only a century or two. In the nineteenth century, we saw more technological change than in the nine centuries preceding it. Then in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century. Now, paradigm shifts (and new business models) take place in only a few years’ time (lately it appears to be even less than that). Just a decade ago, the Internet was in a formative stage and the World Wide Web had yet to emerge.

The fact that the successful application of certain innovations (e.g., the laser) may have taken several decades in the past half century does not mean that, going forward, comparable changes will take nearly as long. The type of transformation that required thirty years during the last half century will take only five to seven years going forward. And the pace will continue to accelerate. It is vital to consider the implications of this phenomenon; progress is not linear, but exponential.

This “law of accelerating returns,” as I call it, is true of any evolutionary process. It was true of the evolution of life forms, which required billions of years for the first steps (e.g. primitive cells); later on progress accelerated. During the Cambrian explosion, major paradigm shifts took only tens of millions of years. Later on, humanoids developed over a period of only millions of years, and Homo sapiens over a period of hundreds of thousands of years.

With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis, and evolution moved on to human-created technology. The first technological steps—sharp edges, fire, the wheel—took tens of thousands of years, and the pace has accelerated ever since.

Technology is evolution by other means. It is the cutting edge of evolution today, moving far faster than DNA-based evolution. However, unlike biological evolution, technology is not a “blind watchmaker.” (Actually, I would prefer the phrase “mindless watchmaker” as more descriptive and less insensitive to the visually impaired.) The process of humans creating technology, then, is a “mindful watchmaker.” An implication is that we do have the ability (and the responsibility) to guide this evolutionary process in a constructive direction.

Technology goes beyond mere toolmaking; it is a process of creating ever more powerful technology using the tools from the previous round of innovation. In this way, human technology is distinguished from the toolmaking of other species. There is a record of each stage of technology, and each new stage of technology builds on the order of the previous stage. Technology, therefore, is a continuation of the evolutionary process that gave rise to the technology-creating species in the first place.

It is critical when considering the future to use a systematic methodology that considers these three issues: (i) iterations of technological progress do not just stop at an arbitrary point, (ii) diverse developments interact, and, most importantly, (iii) the pace of technological innovation accelerates. This third item can be quantified, which I discuss in the sidebar below. Although these formulas are not perfect models, they do provide a framework for considering future developments. I’ve used this methodology for the past twenty years, and the predictions derived from this method in the 1980s have held up rather well.

One very important trend is referred to as “Moore’s Law.” Gordon Moore, one of the inventors of integrated circuits, and then chairman of Intel, noted in the mid-1970s that we could squeeze twice as many transistors on an integrated circuit every twenty-four months. The implication is that computers, which are built from integrated circuits, are doubling in power every two years. Lately, the rate has been even faster.

After sixty years of devoted service, Moore’s Law will die a dignified death no later than the year 2019. By that time, transistor features will be just a few atoms in width, and the strategy of ever finer photolithography will have run its course. So, will that be the end of the exponential growth of computing?

Don’t bet on it.

If we plot the speed (in instructions per second) per $1000 (in constant dollars) of 49 famous calculators and computers spanning the entire twentieth century, we can make some interesting observations.

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. census, to Turing’s relay-based “Robinson” machine that cracked the Nazi enigma code, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer which I used to dictate (and automatically transcribe) this chapter.

But I noticed something else surprising. When I plotted the 49 machines on a logarithmic graph (where a straight line means exponential growth), I didn’t get a straight line. What I got was another exponential curve. In other words, there’s exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year.
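
To make these doubling periods concrete, here is a minimal sketch in Python (added for illustration; the helper name is mine) that converts each doubling time into an equivalent growth multiple per decade:

# Convert a doubling period (in years) into the implied growth multiple per decade.
def per_decade_multiple(doubling_time_years):
    return 2 ** (10.0 / doubling_time_years)

for label, doubling in [("1910-1950", 3), ("1950-1966", 2), ("today", 1)]:
    print(label, round(per_decade_multiple(doubling)))

# Prints roughly 10, 32, and 1024: the per-decade multiple itself keeps climbing.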

Wherefrom Moore’s Law: The Law of Accelerating Returns

Where does Moore’s Law come from? What is behind this remarkably predictable phenomenon? I have seen relatively little written about the ultimate source of this trend. Is it just “a set of industry expectations and goals,” as Randy Isaac, head of basic science at IBM, contends?

In my view, it is one manifestation (among many) of the exponential growth of the evolutionary process that is technology. Just as the pace of an evolutionary process accelerates, the “returns” (i.e., the output, the products) of an evolutionary process grow exponentially. The exponential growth of computing is a marvelous quantitative example of the exponentially growing returns from an evolutionary process. We can also express the exponential growth of computing in terms of an accelerating pace: It took 90 years to achieve the first MIPS (million instructions per second) per thousand dollars; now we add one MIPS per thousand dollars every day.

Moore’s Law narrowly refers to the number of transistors on an integrated circuit of fixed size, and sometimes has been expressed even more narrowly in terms of transistor feature size. But rather than feature size (which is only one contributing factor), or even number of transistors, I think the most salient measure to track is computational speed per unit cost. This takes into account many levels of “cleverness” (i.e., innovation, which is to say technological evolution). In addition to all of the innovation in integrated circuits, there are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, instruction and memory caching, etc.

From the above chart, we see that the exponential growth of computing didn’t start with integrated circuits (around 1958), or even transistors (around 1947), but goes back to the electromechanical calculators used in the 1890 and 1900 U.S. censuses. This chart spans at least five distinct paradigms of computing (electromechanical calculators, relay-based computers, vacuum tube-based computers, discrete transistor-based computers, and finally microprocessors), of which Moore’s Law pertains to only the latest one.

It’s obvious what the sixth paradigm will be after Moore’s Law runs out of steam before 2019 (because before then transistor feature sizes will be just a few atoms in width). Chips today are flat (although it does require up to 20 layers of material to produce one layer of circuitry). Our brain, in contrast, is organized in three dimensions. We live in a three-dimensional world; why not use the third dimension? There are many technologies in the wings that build circuitry in three dimensions. Nanotubes, for example, which are already working in laboratories, build circuits from hexagonal arrays of carbon atoms. One cubic inch of nanotube circuitry would be a million times more powerful than the human brain.

Thus the (double) exponential growth of computing is broader than Moore’s Law. And this accelerating growth of computing is, in turn, part of a yet broader phenomenon discussed above, the accelerating pace of any evolutionary process. In my book, I discuss the link between the pace of a process and the degree of chaos versus order in the process. For example, in cosmological history, the Universe started with little chaos, so the first three major paradigm shifts (the emergence of gravity, the emergence of matter, and the emergence of the four fundamental forces) all occurred in the first billionth of a second; now with vast chaos, cosmological paradigm shifts take billions of years.

Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of “resources.” The classic example is a species that happens upon a new habitat (e.g., rabbits in Australia): its numbers will grow exponentially for a time, but then hit a limit when resources such as food and space run out. But the resources underlying the exponential growth of an evolutionary process are relatively unbounded: (i) the (ever growing) order of the evolutionary process itself, and (ii) the chaos of the environment in which the evolutionary process takes place and which provides the options for further diversity.

We also need to distinguish between the S curve (very slow, virtually unnoticeable growth followed by very rapid growth followed by the growth leveling off and reaching an asymptote) that is characteristic of any specific technological paradigm and the continuing exponential growth that is characteristic of the ongoing evolutionary process of technology. Specific paradigms, such as Moore’s Law (i.e., achieving faster and denser computation through shrinking transistor sizes on an integrated circuit), do ultimately reach levels at which exponential growth is no longer feasible. Thus Moore’s Law is an S curve. But the growth of computation is an ongoing exponential. What turns the S curve (of any specific paradigm) into a continuing exponential is paradigm shift (also called innovation), in which a new paradigm (e.g., three-dimensional circuits) takes over when the old paradigm approaches its natural limit. This has already happened at least four times in the history of computation. This difference also distinguishes the toolmaking of non-human species, in which the mastery of a toolmaking (or using) skill by each animal is characterized by an S-shaped learning curve, from human-created technology, which has been following an exponential pattern of growth and acceleration since its inception.
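
The interplay between saturating paradigms and a continuing exponential can be illustrated with a small sketch (Python, with constants chosen purely for illustration): stack several logistic "S curves," each with a higher ceiling and a later takeover point, and the combined capability keeps growing roughly exponentially even though every individual paradigm levels off.

import math

# Each paradigm follows an S curve (logistic); successive paradigms arrive
# 10 time units apart and have ceilings 10 times higher (illustrative values).
def logistic(t, midpoint, ceiling):
    return ceiling / (1 + math.exp(-(t - midpoint)))

def capability(t):
    return sum(logistic(t, midpoint=10 * k, ceiling=10 ** k) for k in range(1, 6))

for t in range(10, 60, 10):
    print(t, round(capability(t)))
# The total climbs by roughly a factor of ten every 10 time units,
# even though each individual S curve flattens out.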

I discuss all of this in more detail in the first couple of chapters of my book.

In the sidebar below, I include a mathematical model of the law of accelerating returns as it pertains to the exponential growth of computing. The formulas below result in the following graph of the continued growth of computation. This graph matches the available data for the twentieth century and provides projections for the twenty-first century. Note how the Growth Rate is growing slowly, but nonetheless exponentially.

Another technology trend that will have important implications for the twenty-first century is miniaturization. A related analysis can be made of this trend which shows that the salient implementation sizes of a broad range of technology are shrinking, also at a double exponential rate. At present, we are shrinking technology by a factor of approximately 5.6 per linear dimension per decade.

The Law of Accelerating Returns Applied to the Growth of Computation

The following provides a brief overview of the law of accelerating returns as it applies to the double exponential growth of computation. This model considers the impact of the growing power of the technology to foster its own next generation. For example, with more powerful computers and related technology, we have the tools and the knowledge to design yet more powerful computers, and to do so more quickly.

Note that the data for the year 2000 and beyond assume neural net connection calculations as it is expected that this type of calculation will dominate, particularly in emulating human brain functions. This type of calculation is less expensive than conventional (e.g., Pentium III) calculations by a factor of 10 (particularly if implemented using digital controlled analog electronics, which would correspond well to the brain’s digital controlled analog electrochemical processes). A factor of 10 translates into approximately 3 years (today) and less than 3 years later in the twenty-first century.
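
That "factor of 10 translates into approximately 3 years" claim follows directly from the doubling times cited elsewhere in this chapter. A minimal sketch (Python; the function name is mine) assuming price-performance currently doubles about once a year:

import math

# Years needed to gain a given price-performance factor at a fixed doubling time.
def years_for_factor(factor, doubling_time_years=1.0):
    return math.log2(factor) * doubling_time_years

print(years_for_factor(10))       # about 3.3 years at one doubling per year
print(years_for_factor(10, 0.5))  # under 2 years if the doubling time shrinks to six months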

My estimate of brain capacity is 100 billion neurons times an average 1,000 connections per neuron (with the calculations taking place primarily in the connections) times 200 calculations per second. Although these estimates are conservatively high, one can find higher and lower estimates. However, even much higher (or lower) estimates by orders of magnitude only shift the prediction by a relatively small number of years.

Some salient dates from this analysis include the following:

We achieve one Human Brain capability (2 * 10^16 cps) for $1,000 around the year 2023.

We achieve one Human Brain capability (2 * 10^16 cps) for one cent around the year 2037.

We achieve one Human Race capability (2 * 10^26 cps) for $1,000 around the year 2049.

We achieve one Human Race capability (2 * 10^26 cps) for one cent around the year 2059.

The Model considers the following variables:

V: Velocity (i.e., power) of computing (measured in CPS/unit cost)

W: World Knowledge as it pertains to designing and building computational devices

t: Time

The assumptions of the model are:

(1) V = C1 * W

In other words, computer power is a linear function of the knowledge of how to build computers. This is actually a conservative assumption. In general, innovations improve V (computer power) by a multiple, not in an additive way. Independent innovations multiply each other’s effect. For example, a circuit advance such as CMOS, a more efficient IC wiring methodology, and a processor innovation such as pipelining all increase V by independent multiples.

(2) W = C2 * Integral (0 to t) V

In other words, W (knowledge) is cumulative, and the instantaneous increment to knowledge is proportional to V.

This gives us:

W = C1 * C2 * Integral (0 to t) W

W = C1 * C2 * C3 ^ (C4 * t)

V = C1 ^ 2 * C2 * C3 ^ (C4 * t)

(Note on notation: a^b means a raised to the b power.)

Simplifying the constants, we get:

V = Ca * Cb ^ (Cc * t)

So this is a formula for “accelerating” (i.e., exponentially growing) returns, a “regular Moore’s Law.”
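
As a sanity check on assumptions (1) and (2), the following sketch (Python, with illustrative constants of my own choosing) integrates the model numerically; sampled at equal time intervals, W grows by a constant ratio, the signature of simple exponential growth:

# Euler integration of the two assumptions: V = C1*W and dW/dt proportional to V.
C1, C2 = 1.0, 0.05
W = 1.0
dt = 0.01
samples = []
for step in range(10000):
    V = C1 * W            # assumption (1): computer power is proportional to knowledge
    W += C2 * V * dt      # assumption (2): knowledge accumulates in proportion to V
    if step % 1000 == 0:
        samples.append(W)

ratios = [b / a for a, b in zip(samples, samples[1:])]
print(ratios)             # each ratio is approximately e^(C1*C2*10), about 1.65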

As I mentioned above, the data shows exponential growth in the rate of exponential growth. (We doubled computer power every three years early in the twentieth century, every two years in the middle of the century, and close to every one year during the 1990s.)

Let’s factor in another exponential phenomenon, which is the growing resources for computation. Not only is each (constant cost) device getting more powerful as a function of W, but the resources deployed for computation are also growing exponentially.

We now have:

N: Expenditures for computation

V = C1 * W (as before)

N = C4 ^ (C5 * t) (Expenditure for computation is growing at its own exponential rate)

W = C2 * Integral (0 to t) (N * V)

As before, world knowledge is accumulating, and the instantaneous increment is proportional to the amount of computation, which equals the resources deployed for computation (N) * the power of each (constant cost) device.

This gives us:

W = C1 * C2 * Integral(0 to t) (C4 ^ (C5 * t) * W)

W = C1 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)

V = C1 ^ 2 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)

Simplifying the constants, we get:

V = Ca * (Cb ^ (Cc * t)) ^ (Cd * t)

This is a double exponential—an exponential curve in which the rate of exponential growth is growing at a different exponential rate.

Now let’s consider the real-world data for actual calculating devices and computers during the twentieth century:

CPS/$1K: Calculations Per Second for $1,000

Twentieth century computing data matches:

CPS/$1K = 10^(6.00*((20.40/6.00)^((Year-1900)/100))-11.00), where Year is the calendar year

We can determine the growth rate over a period of time:

Growth Rate = 10^((LOG(CPS/$1K for Current Year) - LOG(CPS/$1K for Previous Year))/(Current Year - Previous Year))

Human Brain = 100 Billion (10^11) neurons * 1000 (10^3) Connections/Neuron * 200 (2 * 10^2) Calculations Per Second Per Connection = 2 * 10^16 Calculations Per Second

Human Race = 10 Billion (10^10) Human Brains = 2 * 10^26 Calculations Per Second

These formulas produce the graph below.
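
For readers who want to reproduce the "salient dates" listed earlier, here is a minimal sketch (Python; the function names are mine) that evaluates the CPS/$1K fit above and inverts it to find the approximate crossover years. The results land within about a year of the dates quoted in the text:

import math

# The twentieth-century fit quoted above:
# CPS/$1K = 10^(6.00*((20.40/6.00)^((Year-1900)/100)) - 11.00)
def cps_per_1k(year):
    return 10 ** (6.00 * (20.40 / 6.00) ** ((year - 1900) / 100.0) - 11.00)

HUMAN_BRAIN = 2e16   # calculations per second (estimate above)
HUMAN_RACE = 2e26    # 10 billion human brains

# Invert the fit: the year at which a given budget buys a given capability.
def crossover_year(target_cps, dollars=1000.0):
    needed_per_1k = target_cps * (1000.0 / dollars)
    exponent = math.log((math.log10(needed_per_1k) + 11.0) / 6.0, 20.40 / 6.00)
    return 1900 + 100 * exponent

print(crossover_year(HUMAN_BRAIN))         # ~2023.8: one human brain for $1,000
print(crossover_year(HUMAN_BRAIN, 0.01))   # ~2037.6: one human brain for one cent
print(crossover_year(HUMAN_RACE))          # ~2049.3: one human race for $1,000
print(crossover_year(HUMAN_RACE, 0.01))    # ~2059.6: one human race for one cent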

In a process, the time interval between salient events expands or contracts along with the amount of chaos. This relationship is one key to understanding the reason that the exponential growth of computing will survive the demise of Moore’s Law. Evolution started with vast chaos and little effective order, so early progress was slow. But evolution creates ever-increasing order. That is, after all, the essence of evolution. Order is the opposite of chaos, so when order in a process increases—as is the case for evolution—time speeds up. I call this important sub-law the “law of accelerating returns,” to contrast it with a better known law in which returns diminish.

Computation represents the essence of order in technology. Being subject to the evolutionary process that is technology, it too grows exponentially. There are many examples of the exponential growth in technological speeds and capacities. For example, when the human genome scan started twelve years ago, genetic sequencing speeds were so slow that without speed increases the project would have required thousands of years, yet it is now completing on schedule in under fifteen years. Other examples include the accelerating price-performance of all forms of computer memory, the exponentially growing bandwidth of communication technologies (electronic, optical, as well as wireless), the rapidly increasing speed and resolution of human brain scanning, the miniaturization of technology, and many others. If we view the exponential growth of computation in its proper perspective, as one example of many of the law of accelerating returns, then we can confidently predict its continuation.

A sixth paradigm will take over from Moore’s Law, just as Moore’s Law took over from discrete transistors, and vacuum tubes before that. There are many emerging technologies for new computational substrates. In addition to nanotubes, several forms of computing at the molecular level are working in laboratories. There are more than enough new computing technologies now being researched, including three-dimensional chips, optical computing, crystalline computing, DNA computing, and quantum computing, to keep the law of accelerating returns going for a long time.

So where will this take us?

IBM’s “Blue Gene” supercomputer, scheduled to be completed by 2005, is projected to provide 1 million billion calculations per second, already one-twentieth of the capacity of the human brain. By the year 2019, your $1,000 personal computer will have the processing power of the human brain—20 million billion calculations per second (100 billion neurons times 1,000 connections per neuron times 200 calculations per second per connection). By 2029, it will take a village of human brains (about a thousand) to match $1,000 of computing. By 2050, $1,000 of computing will equal the processing power of all human brains on Earth. Of course, this only includes those brains still using carbon-based neurons. While human neurons are wondrous creations in a way, we wouldn’t design computing circuits the same way. Our electronic circuits are already more than 10 million times faster than a neuron’s electrochemical processes. Most of the complexity of a human neuron is devoted to maintaining its life support functions, not its information processing capabilities. Ultimately, we will need to port our mental processes to a more suitable computational substrate. Then our minds won’t have to stay so small, being constrained as they are today to a mere hundred trillion neural connections each operating at a ponderous 200 digitally controlled analog calculations per second.

A careful consideration of the law of time and chaos, and its key sublaw, the law of accelerating returns, shows that the exponential growth of computing is not like those other exponential trends that run out of resources. The two resources it needs—the growing order of the evolving technology itself, and the chaos from which an evolutionary process draws its options for further diversity—are without practical limits, at least not limits that we will encounter in the 21st century.

The Intuitive Linear View versus The Historical Exponential View

Many long range forecasts of technical feasibility in future time periods dramatically underestimate the power of future technology because they are based on what I call the “intuitive linear” view of technological progress rather than the “historical exponential view.” To express this another way, it is not the case that we will experience a hundred years of progress in the twenty-first century; rather we will witness on the order of twenty thousand years of progress (from the linear perspective, that is).

When people think of a future period, they intuitively assume that the current rate of progress will continue for the period being considered. However, careful consideration of the pace of technology shows that the rate of progress is not constant, but it is human nature to adapt to the changing pace, so the intuitive view is that the pace will continue at the current rate. Even for those of us who have lived through a sufficiently long period of technological progress to experience how the pace increases over time, our unexamined intuition nonetheless provides the impression that progress changes at the rate that we have experienced recently. A salient reason for this is that an exponential curve approximates a straight line when viewed for a brief duration. So even though the rate of progress in the very recent past (e.g., this past year) is far greater than it was ten years ago (let alone a hundred or a thousand years ago), our memories are nonetheless dominated by our very recent experience. Since the rate has not changed significantly in the very recent past (because a very small piece of an exponential curve is approximately straight), it is an understandable misperception to view the pace of change as a constant. It is typical, therefore, that even sophisticated commentators, when considering the future, extrapolate the current pace of change over the next ten years or hundred years to determine their expectations. This is why I call this way of looking at the future the “intuitive linear” view.

But any serious consideration of the history of technology shows that technological change is at least exponential, not linear. There are a great many examples of this which I have discussed above. One can examine this data in many different ways, and on many different time scales, and for a wide variety of different phenomena, and the (at least) double exponential growth implied by the law of accelerating returns applies. The law of accelerating returns does not rely on an assumption of the continuation of Moore’s law, but is based on a rich model of diverse technological processes. What it clearly shows is that technology, particularly the pace of technological change, advances (at least) exponentially, not linearly, and has been doing so since the advent of technology, indeed since the advent of evolution on Earth.

Most technology forecasts ignore altogether this “historical exponential view” of technological progress and assume instead the “intuitive linear view.” Although the evidence is compelling, it still requires study and modeling of many diverse events to see this exponential aspect. That is why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details), but underestimate what can be achieved in the long term (because the exponential growth is ignored).

This observation also applies to paradigm shift rates, which are currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and this rate is also changing slowly, but nonetheless exponentially). So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of twenty thousand years. In terms of the growth of computing, the comparison is even more dramatic.
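
One back-of-the-envelope way to arrive at the twenty-thousand-year figure (a sketch based on my own reading of the doubling assumption, not necessarily the author's exact calculation): if the rate of progress doubles every decade and each decade of the twenty-first century is credited at the rate it reaches by its end, the century sums to roughly 20,000 years of progress at the year-2000 rate.

# Progress in the 21st century, measured in years of progress at today's (year-2000) rate,
# assuming the rate doubles every decade and crediting each decade at its closing rate.
total = sum(10 * 2 ** decade for decade in range(1, 11))
print(total)   # 20460, i.e., on the order of twenty thousand years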

The Software of Intelligence

So far, I’ve been talking about the hardware of computing. The software is even more salient. Achieving the computational capacity of the human brain, or even villages and nations of human brains, will not automatically produce human levels of capability. It is a necessary but not sufficient condition. The organization and content of these resources—the software of intelligence—are also critical.

There are a number of compelling scenarios to capture higher levels of intelligence in our computers, and ultimately human levels and beyond. We will be able to evolve and train a system combining massively parallel neural nets with other paradigms to understand language and model knowledge, including the ability to read and model the knowledge contained in written documents. Unlike many contemporary “neural net” machines, which use mathematically simplified models of human neurons, more advanced neural nets are already using highly detailed models of human neurons, including detailed nonlinear analog activation functions and other salient details. Although the ability of today’s computers to extract and learn knowledge from natural language documents is limited, their capabilities in this domain are improving rapidly. Computers will be able to read on their own, understanding and modeling what they have read, by the second decade of the twenty-first century. We can then have our computers read all of the world’s literature—books, magazines, scientific journals, and other available material. Ultimately, the machines will gather knowledge on their own by venturing into the physical world, drawing from the full spectrum of media and information services, and sharing knowledge with each other (which machines can do far more easily than their human creators).

Once a computer achieves a human level of intelligence, it will necessarily soar past it. Since their inception, computers have significantly exceeded human mental dexterity in their ability to remember and process information. A computer can remember billions or even trillions of facts perfectly, while we are hard pressed to remember a handful of phone numbers. A computer can quickly search a data base with billions of records in fractions of a second. As I mentioned earlier, computers can readily share their knowledge. The combination of human level intelligence in a machine with a computer’s inherent superiority in the speed, accuracy and sharing ability of its memory will be formidable.

Reverse Engineering the Human Brain

The most compelling scenario for mastering the software of intelligence is to tap into the blueprint of the best example we can get our hands on of an intelligent process. There is no reason why we cannot reverse engineer the human brain, and essentially copy its design. It took its original designer several billion years to develop. And it’s not even copyrighted.

The most immediately accessible way to accomplish this is through destructive scanning: we take a frozen brain, preferably one frozen just slightly before rather than slightly after it was going to die anyway, and examine one brain layer—one very thin slice—at a time. We can readily see every neuron and every connection and every neurotransmitter concentration represented in each synapse-thin layer.

Human brain scanning has already started. A condemned killer allowed his brain and body to be scanned and you can access all 10 billion bytes of him on the Internet. He has a 25 billion byte female companion on the site as well in case he gets lonely. This scan is not high enough resolution for our purposes, but then we probably don’t want to base our templates of machine intelligence on the brain of a convicted killer, anyway.

But scanning a frozen brain is feasible today, albeit not yet at a sufficient speed or bandwidth, but again, the law of accelerating returns will provide the requisite speed of scanning, just as it did for the human genome scan.

We also have noninvasive scanning techniques today, including high-resolution magnetic resonance imaging (MRI) scans, optical imaging, near-infrared scanning, and other noninvasive scanning technologies, that are capable in certain instances of resolving individual somas, or neuron cell bodies. Brain scanning technologies are increasing their resolution with each new generation, just what we would expect from the law of accelerating returns. Future generations will enable us to resolve the connections between neurons, and to peer inside the synapses and record the neurotransmitter concentrations.

We can peer inside someone’s brain today with noninvasive scanners, which are increasing their resolution with each new generation of this technology. There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth, lack of vibration, and safety. For a variety of reasons it is easier to scan the brain of someone recently deceased than of someone still living. It is easier to get someone deceased to sit still, for one thing. But noninvasively scanning a living brain will ultimately become feasible as MRI, optical, and other scanning technologies continue to improve in resolution and speed.

In fact, the driving force behind the rapidly improving capability of noninvasive scanning technologies is again the law of accelerating returns, because it requires massive computational ability to build the high-resolution three-dimensional images. The exponentially increasing computational ability provided by the law of accelerating returns (and for another 10 to 20 years, Moore’s Law) will enable us to continue to rapidly improve the resolution and speed of these scanning technologies.

Scanning from Inside

To capture every salient neural detail of the human brain, the most practical approach will be to scan it from inside. By 2030, “nanobot” (i.e., nano-robot) technology will be viable, and brain scanning will be a prominent application. Nanobots are robots that are the size of human blood cells, or even smaller. Billions of them could travel through every brain capillary and scan every salient neural detail from up close. Using high-speed wireless communication, the nanobots would communicate with each other, and with other computers that are compiling the brain scan database (in other words, the nanobots will all be on a wireless local area network).

This scenario involves only capabilities we can touch and feel today. We already have technology capable of producing very high-resolution scans provided that the scanner is physically proximate to the neural features. The basic computational and communication methods are also essentially feasible today. The primary features that are not yet practical are nanobot size and cost. As I discussed above, we can project the exponentially declining cost of computation. Miniaturization is another readily predictable aspect of the law of accelerating returns. Already being developed at the University of California at Berkeley are tiny flying robots called “smart dust,” which are approximately one millimeter wide (about the size of a grain of sand), and capable of flying, sensing, computing, and communicating using tiny lasers and hinged micro-flaps and micro-mirrors. The size of electronics and robotics will continue to shrink at an exponential rate, currently by a factor of 5.6 per linear dimension per decade. We can conservatively expect, therefore, the requisite nanobot technology by around 2030. Because of its ability to place each scanner in very close physical proximity to every neural feature, nanobot-based scanning will be more practical than scanning the brain from outside.
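
As a rough plausibility check on that 2030 timetable, here is a sketch that uses the chapter's own shrink rate of 5.6 per linear dimension per decade, plus my assumption that a human red blood cell is roughly 8 microns across; going from millimeter-scale "smart dust" to blood-cell scale takes a bit under three decades:

import math

SHRINK_PER_DECADE = 5.6     # factor per linear dimension per decade (from the text)
start_size_m = 1e-3         # "smart dust": about one millimeter
target_size_m = 8e-6        # assumed diameter of a red blood cell, ~8 microns

decades = math.log(start_size_m / target_size_m, SHRINK_PER_DECADE)
print(decades)              # about 2.8 decades, consistent with nanobot-scale devices around 2030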

How to Use Your Brain Scan

How will we apply the thousands of trillions of bytes of information derived from each brain scan? One approach is to use the results to design more intelligent parallel algorithms for our machines, particularly those based on one of the neural net paradigms. With this approach, we don’t have to copy every single connection. There is a great deal of repetition and redundancy within any particular brain region. Although the information contained in a human brain would require thousands of trillions of bytes of information (on the order of 100 billion neurons times an average of 1,000 connections per neuron, each with multiple neurotransmitter concentrations and connection data), the design of the brain is characterized by a human genome of only about a billion bytes.

Furthermore, most of the genome is redundant, so the initial design of the brain is characterized by approximately one hundred million bytes, about the size of Microsoft Word. Of course, the complexity of our brains greatly increases as we interact with the world (by a factor of more than ten million). It is not necessary, however, to capture each detail in order to reverse engineer the salient digital-analog algorithms. With this information, we can design simulated nets that operate similarly. There are already multiple efforts under way to scan the human brain and apply the insights derived to the design of intelligent machines. The ATR (Advanced Telecommunications Research) Lab in Kyoto, Japan, for example, is building a silicon brain with 1 billion neurons. Although this is 1% of the number of neurons in a human brain, the ATR neurons operate at much faster speeds.
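
To see where the "factor of more than ten million" comes from, here is a short sketch using the estimates in these two paragraphs (the byte figures are the chapter's; taking "thousands of trillions of bytes" as a round 10^15 is my simplification):

# The chapter's estimates: a mature brain holds on the order of 10^15 bytes of state
# (roughly 10^11 neurons x 10^3 connections, with tens of bytes per connection),
# while the genome's non-redundant design information is about 10^8 bytes.
brain_state_bytes = 1e15
genome_design_bytes = 1e8

print(brain_state_bytes / genome_design_bytes)   # 1e7: the "factor of more than ten million"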

After the algorithms of a region are understood, they can be refined and extended before being implemented in synthetic neural equivalents. For one thing, they can be run on a computational substrate that is already more than ten million times faster than neural circuitry. And we can also throw in the methods for building intelligent machines that we already understand.

Downloading the Human Brain

Perhaps a more interesting approach than this scanning-the-brain-to-understand-it scenario is scanning-the-brain-to-download-it. Here we scan someone’s brain to map the locations, interconnections, and contents of all the somas, axons, dendrites, presynaptic vesicles, neurotransmitter concentrations, and other neural components and levels. Its entire organization can then be re-created on a neural computer of sufficient capacity, including the contents of its memory.

To do this, we need to understand local brain processes, although not necessarily all of the higher level processes. Scanning a brain with sufficient detail to download it may sound daunting, but so did the human genome scan. All of the basic technologies exist today, just not with the requisite speed, cost, and size, but these are the attributes that are improving at a double exponential pace.

The computationally salient aspects of individual neurons are complicated, but definitely not beyond our ability to accurately model. For example, Ted Berger and his colleagues at Hedco Neurosciences have built integrated circuits that precisely match the digital and analog information processing characteristics of neurons, including clusters with hundreds of neurons. Carver Mead and his colleagues at CalTech have built a variety of integrated circuits that emulate the digital-analog characteristics of mammalian neural circuits.

A recent experiment at San Diego’s Institute for Nonlinear Science demonstrates the potential for electronic neurons to precisely emulate biological ones. Neurons (biological or otherwise) are a prime example of what is often called “chaotic computing.” Each neuron acts in an essentially unpredictable fashion. When an entire network of neurons receives input (from the outside world or from other networks of neurons), the signaling amongst them appears at first to be frenzied and random. Over time, typically a fraction of a second or so, the chaotic interplay of the neurons dies down, and a stable pattern emerges. This pattern represents the “decision” of the neural network. If the neural network is performing a pattern recognition task (which, incidentally, comprises more than 95% of the activity in the human brain), then the emergent pattern represents the appropriate recognition.

So the question addressed by the San Diego researchers was whether electronic neurons could engage in this chaotic dance alongside biological ones. They hooked up their artificial neurons with those from spiny lobsters in a single network, and their hybrid biological-nonbiological network performed in the same way (i.e., chaotic interplay followed by a stable emergent pattern) and with the same type of results as an all-biological net of neurons. Essentially, the biological neurons accepted their electronic peers. It indicates that their mathematical model of these neurons was reasonably accurate.

There are many projects around the world creating nonbiological devices that recreate in great detail the functionality of human neuron clusters, and the accuracy and scale of these neuron-cluster replications are rapidly increasing. We started with functionally equivalent recreations of single neurons, then clusters of tens, then hundreds, and now thousands. Scaling up technical processes at an exponential pace is what technology is good at.

As the computational power to emulate the human brain becomes available—we’re not there yet, but we will be there within a couple of decades—projects already under way to scan the human brain will be accelerated, with a view both to understanding the human brain in general and to providing a detailed description of the contents and design of specific brains. By the third decade of the twenty-first century, we will be in a position to create highly detailed and complete maps of all relevant features of all neurons, neural connections and synapses in the human brain, all of the neural details that play a role in the behavior and functionality of the brain, and to recreate these designs in suitably advanced neural computers.

Is the Human Brain Different from a Computer?

The answer depends on what we mean by the word “computer.” Certainly the brain uses very different methods from conventional contemporary computers. Most computers today are all digital and perform one (or perhaps a few) computation(s) at a time at extremely high speed. In contrast, the human brain combines digital and analog methods with most computations performed in the analog domain. The brain is massively parallel, performing on the order of a hundred trillion computations at the same time, but at extremely slow speeds.

With regard to digital versus analog computing, we know that digital computing can be functionally equivalent to analog computing (although the reverse is not true), so we can perform all of the capabilities of a hybrid digital-analog network with an all-digital computer. On the other hand, there is an engineering advantage to analog circuits in that analog computing is potentially thousands of times more efficient. An analog computation can be performed by a few transistors, or, in the case of mammalian neurons, specific electrochemical processes. A digital computation, in contrast, requires thousands or tens of thousands of transistors. So there is a significant efficiency advantage to emulating the brain’s analog methods.

The massive parallelism of the human brain is the key to its pattern recognition abilities, which reflects the strength of human thinking. As I discussed above, mammalian neurons engage in a chaotic dance, and if the neural network has learned its lessons well, then a stable pattern will emerge reflecting the network’s decision. There is no reason why our nonbiological functionally-equivalent recreations of biological neural networks cannot be built using these same principles, and indeed there are dozens of projects around the world that have succeeded in doing this. My own technical field is pattern recognition, and the projects that I have been involved in for over thirty years use this form of chaotic computing. Particularly successful examples are Carver Mead’s neural chips, which are highly parallel, use digital controlled analog computing, and are intended as functionally similar recreations of biological networks.

As we create nonbiological but functionally equivalent recreations of biological neural networks ranging from clusters of dozens of neurons up to entire human brains and beyond, we can combine the qualities of human thinking with certain advantages of machine intelligence. My human knowledge and skills exist in my brain as vast patterns of interneuronal connections, neurotransmitter concentrations, and other neural elements. As I mentioned at the beginning of this chapter, there are no quick downloading ports for these patterns in our biological neural networks, but as we build nonbiological equivalents, we will not leave out the ability to quickly load patterns representing knowledge and skills.

Although it is remarkable that as complex and capable an entity as the human brain evolved through natural selection, aspects of its design are nonetheless extremely inefficient. Neurons are very bulky devices and at least ten million times slower in their information processing than electronic circuits. As we combine the brain’s pattern recognition methods derived from high-resolution brain scans and reverse engineering efforts with the knowledge sharing, speed, and memory accuracy advantages of nonbiological intelligence, the combination will be formidable.

Objective and Subjective

Although I anticipate that the most common application of the knowledge gained from reverse engineering the human brain will be creating more intelligent machines that are not necessarily modeled on specific individuals, the scenario of scanning and reinstantiating all of the neural details of a particular person raises the most immediate questions of identity. Let’s consider the question of what we will find when we do this.

We have to consider this question on both the objective and subjective levels. “Objective” means everyone except me, so let’s start with that. Objectively, when we scan someone’s brain and reinstantiate their personal mind file into a suitable computing medium, the newly emergent “person” will appear to other observers to have very much the same personality, history, and memory as the person originally scanned. That is, once the technology has been refined and perfected. Like any new technology, it won’t be perfect at first. But ultimately, the scans and recreations will be very accurate and realistic.

Interacting with the newly instantiated person will feel like interacting with the original person. The new person will claim to be that same old person and will have a memory of having been that person. The new person will have all of the patterns of knowledge, skill, and personality of the original. We are already creating functionally equivalent recreations of neurons and neuron clusters with sufficient accuracy that biological neurons accept their nonbiological equivalents and work with them as if they were biological. There are no natural limits that prevent us from doing the same with the hundred billion neuron cluster we call the human brain.

Subjectively, the issue is more subtle and profound, but first we need to reflect on one additional objective issue: our physical self.

The Importance of Having a Body

Consider how much of our thought and thinking is directed toward our body and its survival, security, nutrition, and image, not to mention affection, sexuality, and reproduction. Many, if not most, of the goals we attempt to advance using our brains have to do with our bodies: protecting them, providing them with fuel, making them attractive, making them feel good, providing for their myriad needs and desires. Some philosophers maintain that achieving human-level intelligence is impossible without a body. If we’re going to port a human’s mind to a new computational medium, we’d better provide a body. A disembodied mind will quickly get depressed.

There are a variety of bodies that we will provide for our machines, and that they will provide for themselves: bodies built through nanotechnology (an emerging field devoted to building highly complex physical entities atom by atom), virtual bodies (that exist only in virtual reality), and bodies comprised of swarms of nanobots.

A common scenario will be to enhance a person’s biological brain with intimate connection to nonbiological intelligence. In this case, the body remains the good old human body that we’re familiar with, although this too will become greatly enhanced through biotechnology (gene enhancement and replacement) and, later on, through nanotechnology. A detailed examination of twenty-first century bodies is beyond the scope of this chapter, but is examined in chapter seven of my recent book The Age of Spiritual Machines.

So Just Who Are These People?

To return to the issue of subjectivity, consider: Is the reinstantiated mind the same consciousness as the person we just scanned? Are these “people” conscious at all? Is this a mind or just a brain?

Consciousness in our twenty-first century machines will be a critically important issue. But it is not easily resolved, or even readily understood. People tend to have strong views on the subject, and often just can’t understand how anyone else could possibly see the issue from a different perspective. Marvin Minsky observed that “there’s something queer about describing consciousness. Whatever people mean to say, they just can’t seem to make it clear.”

We don’t worry, at least not yet, about causing pain and suffering to our computer programs. But at what point do we consider an entity, a process, to be conscious, to feel pain and discomfort, to have its own intentionality, its own free will? How do we determine if an entity is conscious; if it has subjective experience? How do we distinguish a process that is conscious from one that just acts as if it is conscious?

We can’t simply ask it. If it says, “Hey I’m conscious,” does that settle the issue? No, we have computer games today that effectively do that, and they’re not terribly convincing.

How about if the entity is very convincing and compelling when it says, “I’m lonely, please keep me company”? Does that settle the issue?

If we look inside its circuits, and see essentially the identical kinds of feedback loops and other mechanisms in its brain that we see in a human brain (albeit implemented using nonbiological equivalents), does that settle the issue?

And just who are these people in the machine, anyway? The answer will depend on who you ask. If you ask the people in the machine, they will strenuously claim to be the original persons. For example, if we scan—let’s say myself—and record the exact state, level, and position of every neurotransmitter, synapse, neural connection, and every other relevant detail, and then reinstantiate this massive data base of information (which I estimate at thousands of trillions of bytes) into a neural computer of sufficient capacity, the person that then emerges in the machine will think that he is (and had been) me. He will say “I grew up in Queens, New York, went to college at MIT, stayed in the Boston area, sold a few artificial intelligence companies, walked into a scanner there, and woke up in the machine here. Hey, this technology really works.”

But wait. Is this really me? For one thing, old biological Ray (that’s me) still exists. I’ll still be here in my carbon-cell-based brain. Alas, I will have to sit back and watch the new Ray succeed in endeavors that I could only dream of.

A Thought Experiment

Let’s consider more carefully the issue of just who I am, and who the new Ray is. First of all, am I the stuff in my brain and body?

Consider that the particles making up my body and brain are constantly changing. We are not at all permanent collections of particles. The cells in our bodies turn over at different rates, but the particles (e.g., atoms and molecules) that comprise our cells are exchanged at a very rapid rate. I am just not the same collection of particles that I was even a month ago. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), but our actual material content is changing constantly, and very quickly. We are rather like the patterns that water makes in a stream. The rushing water around a formation of rocks makes a particular, unique pattern. This pattern may remain relatively unchanged for hours, even years. Of course, the actual material constituting the pattern—the water—is replaced in milliseconds. The same is true for Ray Kurzweil. Like the water in a stream, my particles are constantly changing, but the pattern that people recognize as Ray has a reasonable level of continuity. This argues that we should not associate our fundamental identity with a specific set of particles, but rather with the pattern of matter and energy that we represent. Many contemporary philosophers seem partial to this “identity from pattern” argument.

But wait. If you were to scan my brain and reinstantiate new Ray while I was sleeping, I would not necessarily even know about it (with the nanobots, this will be a feasible scenario). If you then come to me, and say, “Good news, Ray, we’ve successfully reinstantiated your mind file, so we won’t be needing your old brain anymore,” I may suddenly realize the flaw in the “identity from pattern” argument. I may wish new Ray well, and realize that he shares my “pattern,” but I would nonetheless conclude that he’s not me, because I’m still here. How could he be me? After all, I would not necessarily know that he even existed.

Let’s consider another perplexing scenario. Suppose I replace a small number of biological neurons with functionally equivalent nonbiological ones (they may provide certain benefits such as greater reliability and longevity, but that’s not relevant to this thought experiment). After I have this procedure performed, am I still the same person? My friends certainly think so. I still have the same self-deprecating humor, the same silly grin—yes, I’m still the same guy.

It should be clear where I’m going with this. Bit by bit, region by region, I ultimately replace my entire brain with essentially identical (perhaps improved) nonbiological equivalents (preserving all of the neurotransmitter concentrations and other details that represent my learning, skills, and memories). At each point, I feel the procedures were successful. At each point, I feel that I am the same guy. After each procedure, I claim to be the same guy. My friends concur. There is no old Ray and new Ray, just one Ray, one that never appears to fundamentally change.

But consider this. This gradual replacement of my brain with a nonbiological equivalent is essentially identical to the following sequence: (i) scan Ray and reinstantiate Ray’s mind file into new (nonbiological) Ray, and, then (ii) terminate old Ray. But we concluded above that in such a scenario new Ray is not the same as old Ray. And if old Ray is terminated, well then that’s the end of Ray. So the gradual replacement scenario essentially results in the same result: New Ray has been created, and old Ray has been terminated, even if we never saw him missing. So what appears to be the continuing existence of just one Ray is really the creation of new Ray and the end of old Ray.

On yet another hand (we’re running out of philosophical hands here), the gradual replacement scenario is not altogether different from what happens normally to our biological selves, in that our particles are always rapidly being replaced. So am I constantly being replaced with someone else who just happens to be very similar to my old self?

I am trying to illustrate why consciousness is not an easy issue. If we talk about consciousness as just a certain type of intelligent skill: the ability to reflect on one’s own self and situation, for example, then the issue is not difficult at all because any skill or capability or form of intelligence that one cares to define will be replicated in nonbiological entities (i.e., machines) within a few decades. With this type of objective view of consciousness, the conundrums do go away. But a fully objective view does not penetrate to the core of the issue, because the essence of consciousness is subjective experience, not objective correlates of that experience.

Will these future machines be capable of having spiritual experiences?

They certainly will claim to. They will claim to be people, and to have the full range of emotional and spiritual experiences that people claim to have. And these will not be idle claims; they will evidence the sort of rich, complex, and subtle behavior one associates with these feelings. How do the claims and behaviors—compelling as they will be—relate to the subjective experience of these reinstantiated people? We keep coming back to the very real but ultimately unmeasurable issue of consciousness.

People often talk about consciousness as if it were a clear property of an entity that can readily be identified, detected, and gauged. If there is one crucial insight that we can make regarding why the issue of consciousness is so contentious, it is the following:

There exists no objective test that can absolutely determine its presence.

Science is about objective measurement and logical implications therefrom, but the very nature of objectivity is that you cannot measure subjective experience—you can only measure correlates of it, such as behavior (and by behavior, I include the actions of components of an entity, such as neurons). This limitation has to do with the very nature of the concepts “objective” and “subjective.” Fundamentally, we cannot penetrate the subjective experience of another entity with direct objective measurement. We can certainly make arguments about it: i.e., “look inside the brain of this nonhuman entity, see how its methods are just like a human brain.” Or, “see how its behavior is just like human behavior.” But in the end, these remain just arguments. No matter how convincing the behavior of a reinstantiated person, some observers will refuse to accept the consciousness of an entity unless it squirts neurotransmitters, or is based on DNA-guided protein synthesis, or has some other specific biologically human attribute.

We assume that other humans are conscious, but that is still an assumption, and there is no consensus amongst humans about the consciousness of nonhuman entities, such as higher animals. The issue will be even more contentious with regard to future nonbiological entities with human-like behavior and intelligence.

From a practical perspective, we’ll accept their claims. Keep in mind that nonbiological entities in the twenty-first century will be extremely intelligent, so they’ll be able to convince us that they are conscious. They’ll have all the subtle cues that convince us today that humans are conscious. They will be able to make us laugh and cry. And they’ll get mad if we don’t accept their claims. But this is a political prediction, not a philosophical argument.

On Tubules and Quantum Computing

Over the past several years, Roger Penrose, a noted physicist and philosopher, has suggested that fine structures in the neurons called tubules perform an exotic form of computation called “quantum computing.” Quantum computing is computing using what are known as “qubits,” which take on all possible combinations of solutions simultaneously. It can be considered to be an extreme form of parallel processing (because every combination of values of the qubits is tested simultaneously). Penrose suggests that the tubules and their quantum computing capabilities complicate the concept of recreating neurons and reinstantiating mind files.
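
To give a concrete sense of this “extreme parallelism,” the sketch below simulates an n-qubit register the conventional way on a classical machine, as a vector of 2^n complex amplitudes, so that each added qubit doubles the number of combinations being tracked at once. It is a minimal illustration written for this discussion, not something drawn from Penrose’s argument:

```python
# Minimal classical simulation of an n-qubit register: the state is a vector
# of 2**n complex amplitudes, so each added qubit doubles the number of basis
# states tracked "simultaneously" -- the intuition behind calling quantum
# computing an extreme form of parallel processing.
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Equal superposition over all 2**n basis states (a Hadamard on every qubit)."""
    dim = 2 ** n_qubits
    return np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)

def measure(state: np.ndarray, rng=np.random.default_rng()) -> int:
    """Collapse to a single basis state with probability |amplitude|**2."""
    probabilities = np.abs(state) ** 2
    return int(rng.choice(len(state), p=probabilities))

if __name__ == "__main__":
    for n in (4, 8, 16):
        state = uniform_superposition(n)
        print(f"{n} qubits -> {len(state):,} basis states tracked at once; "
              f"sample measurement: {measure(state):0{n}b}")
```

Running it shows the state vector doubling in size with each added qubit, which is the property the “parallel processing” description refers to; an actual quantum computer would hold that superposition physically rather than in memory.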

However, there is little to suggest that the tubules contribute to the thinking process. Even generous models of human knowledge and capability are more than accounted for by current estimates of brain size, based on contemporary models of neuron functioning that do not include tubules. In fact, even with these tubule-less models, it appears that the brain is conservatively designed with many more connections (by several orders of magnitude) than it needs for its capabilities and capacity. Recent experiments (e.g., the San Diego Institute for Nonlinear Science experiments) showing that hybrid biological-nonbiological networks perform similarly to all biological networks, while not definitive, are strongly suggestive that our tubule-less models of neuron functioning are adequate.

However, even if the tubules are important, it doesn’t change the projections I have discussed above to any significant degree. According to my model of computational growth, if the tubules multiplied neuron complexity by a factor of a thousand (and keep in mind that our current tubule-less neuron models are already complex, including on the order of a thousand connections per neuron, multiple nonlinearities and other details), this would delay our reaching brain capacity by only about nine years. If we’re off by a factor of a million, that’s still only a delay of 17 years. A factor of a billion is around 24 years (keep in mind computation is growing by a double exponential).
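
As a rough illustration of this arithmetic, the sketch below uses assumed growth rates: a one-year doubling time, plus a variant in which each successive doubling takes two percent less time, as a crude stand-in for double-exponential growth. These rates are illustrative assumptions, not the actual model referred to above, but they show why even a billion-fold complexity factor costs only a couple of decades:

```python
# Back-of-the-envelope sketch with assumed growth rates (not the actual model):
# how many extra years does an added complexity factor cost if computation
# keeps growing exponentially? Each factor of 2 costs one doubling period,
# so a factor F costs log2(F) doublings.
import math

def delay_fixed(factor: float, doubling_years: float = 1.0) -> float:
    """Delay with a constant doubling time (plain exponential growth)."""
    return doubling_years * math.log2(factor)

def delay_shrinking(factor: float, first_doubling_years: float = 1.0,
                    shrink: float = 0.98) -> float:
    """Delay when each successive doubling takes `shrink` times as long
    (a crude stand-in for double-exponential growth)."""
    doublings = math.log2(factor)
    # geometric series: T0 * (1 - shrink**n) / (1 - shrink)
    return first_doubling_years * (1 - shrink ** doublings) / (1 - shrink)

for label, factor in [("thousand", 1e3), ("million", 1e6), ("billion", 1e9)]:
    print(f"factor of a {label}: ~{delay_fixed(factor):.0f} years at a fixed doubling "
          f"time, ~{delay_shrinking(factor):.0f} years if the doublings keep speeding up")
```

With the fixed doubling time the delays come out near 10, 20, and 30 years; letting the doublings speed up compresses them toward the roughly 9-, 17-, and 24-year figures cited above.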

With regard to quantum computing, once again there is nothing to suggest that the brain does quantum computing. Just because quantum technology may be feasible does not suggest that the brain is capable of it. We don’t have lasers or even radios in our brains either. No one has suggested human capabilities that would require a capacity for quantum computing.

However, even if the brain does do quantum computing, this does not significantly change the outlook for human-level computing (and beyond) nor does it suggest that brain downloading is infeasible. First of all, if the brain does do quantum computing this would only verify that quantum computing is feasible. There would be nothing in such a finding to suggest that quantum computing is restricted to biological mechanisms. Biological quantum computing mechanisms, if they exist, could be replicated. Indeed, recent experiments with small-scale quantum computers appear to be successful.

Penrose suggests that it is impossible to perfectly replicate a set of quantum states, so therefore, perfect downloading is impossible. Well, how perfect does a download have to be? I am at this moment in a very different quantum state (and different in non-quantum ways as well) than I was a minute ago (certainly in a different state than I was before I wrote this paragraph). If we develop downloading technology to the point where the “copies” are as close to the original as the original person changes anyway in the course of one minute, that would be good enough for any conceivable purpose, yet does not require copying quantum states. As the technology improves, the accuracy of the copy could become as close as the original changes within ever-briefer periods of time (e.g., one second, one millisecond).

When it was pointed out to Penrose that neurons (and even neural connections) were too big for quantum computing, he came up with the tubule theory as a possible mechanism for neural quantum computing. So the concerns with quantum computing and tubules have been introduced together. If one is searching for barriers to replicating brain function, it is an ingenious theory, but it fails to introduce any genuine barriers. There is no evidence for it, and even if true, it only delays matters by a decade or two. There is no reason to believe that biological mechanisms (including quantum computing) are inherently impossible to replicate using nonbiological materials and mechanisms. Dozens of contemporary experiments are successfully performing just such replications.

The Noninvasive Surgery-Free Reversible Programmable Distributed Brain Implant

How will we apply technology that is more intelligent than its creators? One might be tempted to respond “Carefully!” But let’s take a look at some examples.

Consider several examples of the nanobot technology, which, based on miniaturization and cost reduction trends, will be feasible within 30 years. In addition to scanning your brain, the nanobots will also be able to expand your brain.

Nanobot technology will provide fully immersive, totally convincing virtual reality in the following way. The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin). We already have the technology for electronic devices to communicate with neurons in both directions, without requiring any direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed “neuron transistors” that can detect the firing of a nearby neuron, or alternatively, can cause a nearby neuron to fire, or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. The Institute scientists demonstrated their invention by controlling the movement of a living leech from their computer. Again, the primary aspect of nanobot-based virtual reality that is not yet feasible is size and cost.

When we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses, and replace them with the signals that would be appropriate for the virtual environment. You (i.e., your brain) could decide to cause your muscles and limbs to move as you normally would, but the nanobots again intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment.
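
To make the routing rule concrete, here is a purely conceptual sketch; the classes, signal fields, and the VirtualEnv stub are hypothetical illustrations invented for this example, not a description of any real interface:

```python
# Conceptual sketch only: the classes, signal fields, and VirtualEnv stub are
# hypothetical illustrations of the routing rule described above (pass
# everything through in real mode; substitute sensory input and redirect
# motor output in virtual mode). Nothing here is a real API.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    REAL = auto()
    VIRTUAL = auto()

@dataclass
class Signal:
    kind: str       # "sensory" or "motor"
    payload: str    # stand-in for the neural signal itself

class VirtualEnv:
    """Hypothetical stand-in for the simulated environment."""
    def render_sensory(self) -> Signal:
        return Signal("sensory", "simulated sight/sound/touch")
    def move_virtual_body(self, payload: str) -> None:
        print(f"virtual limbs execute: {payload}")

def route(signal: Signal, mode: Mode, env: VirtualEnv) -> Optional[Signal]:
    """Return the signal to deliver onward, or None if it is suppressed."""
    if mode is Mode.REAL:
        return signal                          # nanobots stay idle, pass through
    if signal.kind == "sensory":
        return env.render_sensory()            # replace real input with virtual
    if signal.kind == "motor":
        env.move_virtual_body(signal.payload)  # drive the virtual body instead
        return None                            # suppress the real limb
    return signal

if __name__ == "__main__":
    env = VirtualEnv()
    print(route(Signal("sensory", "real light on retina"), Mode.VIRTUAL, env))
    print(route(Signal("motor", "raise right arm"), Mode.VIRTUAL, env))
```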

The web will provide a panoply of virtual environments to explore. Some will be recreations of real places; others will be fanciful environments that have no “real” counterpart. Some indeed would be impossible in the physical world (perhaps, because they violate the laws of physics). We will be able to “go” to these virtual environments by ourselves, or we will meet other people there, both real people and simulated people. Of course, ultimately there won’t be a clear distinction between the two.

Nanobot technology will be able to expand our minds in virtually any imaginable way. Our brains today are relatively fixed in design. Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the current overall capacity of the human brain is highly constrained, restricted to a mere hundred trillion connections. Brain implants based on massively distributed intelligent nanobots will ultimately expand our memories a trillion fold, and otherwise vastly improve all of our sensory, pattern recognition and cognitive abilities. Since the nanobots are communicating with each other over a wireless local area network, they can create any set of new neural connections, can break existing connections (by suppressing neural firing), and can create new hybrid biological-nonbiological networks as well as add vast new nonbiological networks.

Using nanobots as brain extenders is a significant improvement over the idea of surgically installed neural implants, which are beginning to be used today. Nanobots will be introduced without surgery, essentially just by injecting or even swallowing them. They can all be directed to leave, so the process is easily reversible. They are programmable, in that they can provide virtual reality one minute, and a variety of brain extensions the next. They can change their configuration, and clearly can alter their software. Perhaps most importantly, they are massively distributed and therefore can take up billions or trillions of positions throughout the brain, whereas a surgically introduced neural implant can only be placed in one or at most a few locations.

A Clear and Future Danger

Technology has always been a double-edged sword, bringing us longer and healthier life spans, freedom from physical and mental drudgery, and many new creative possibilities on the one hand, while introducing new and salient dangers on the other. We still live today with sufficient nuclear weapons (not all of which appear to be well accounted for) to end all mammalian life on the planet. Bioengineering is in the early stages of enormous strides in reversing disease and aging processes. However, the means and knowledge exist in a routine college bioengineering lab to create unfriendly pathogens more dangerous than nuclear weapons. For the twenty-first century, we will see the same intertwined potentials: a great feast of creativity resulting from human intelligence expanded a trillion-fold combined with many grave new dangers.

Consider unrestrained nanobot replication. Nanobot technology requires billions or trillions of such intelligent devices to be useful. The most cost-effective way to scale up to such levels is through self-replication, essentially the same approach used in the biological world. And in the same way that biological self-replication gone awry (i.e., cancer) results in biological destruction, a defect in the mechanism curtailing nanobot self-replication would endanger all physical entities, biological or otherwise.

Other salient concerns include “who is controlling the nanobots?” and “who are the nanobots talking to?” Organizations (e.g., governments, extremist groups) or just a clever individual could put trillions of undetectable nanobots in the water or food supply of an individual or of an entire population. These “spy” nanobots could then monitor, influence, and even control our thoughts and actions. In addition to introducing physical spy nanobots, existing nanobots could be influenced through software viruses and other software “hacking” techniques.

My own expectation is that the creative and constructive applications of this technology will dominate, as I believe they do today. But there will be a valuable (and increasingly vocal) role for a concerned and constructive Luddite movement (i.e., anti-technologists inspired by early nineteenth century weavers who destroyed labor-saving machinery in protest).

Living Forever

Once brain porting technology has been refined and fully developed, will this enable us to live forever? The answer depends on what we mean by living and dying. Consider what we do today with our personal computer files. When we change from one personal computer to a less obsolete model, we don’t throw all our files away; rather we copy them over to the new hardware. Although our software files do not necessarily continue their existence forever, the longevity of our personal computer software is completely separate and disconnected from the hardware that it runs on. When it comes to our personal mind file, however, when our human hardware crashes, the software of our lives dies with it. However, this will not continue to be the case when we have the means to store and restore the thousands of trillions of bytes of information stored and represented in our brains.

The longevity of one’s mind file will not be dependent, therefore, on the continued viability of any particular hardware medium. Ultimately software-based humans, albeit vastly extended beyond the severe limitations of humans as we know them today, will live out on the web, projecting bodies whenever they need or want them, including virtual bodies in diverse realms of virtual reality, holographically projected bodies, and physical bodies comprised of nanobot swarms, and other forms of nanotechnology.

A software-based human will be free, therefore, from the constraints of any particular thinking medium. Today, we are each confined to a mere hundred trillion connections, but humans at the end of the twenty-first century can grow their thinking and thoughts without limit. We may regard this as a form of immortality, although it is worth pointing out that data and information do not necessarily last forever. Although not dependent on the viability of the hardware it runs on, the longevity of information depends on its relevance, utility, and accessibility. If you’ve ever tried to retrieve information from an obsolete form of data storage in an old obscure format (e.g., a reel of magnetic tape from a 1970 minicomputer), you will understand the challenges in keeping software viable. However, if we are diligent in maintaining our mind file, keeping current backups, and porting to current formats and mediums, then a form of immortality can be attained, at least for software-based humans. Our mind file—our personality, skills, memories—all of that is lost today when our biological hardware crashes. When we can access, store, and restore that information, then its longevity will no longer be tied to hardware permanence.
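
The discipline described here can already be illustrated with today’s files. Below is a minimal sketch, using assumed file names and an identity “converter” standing in for a real format migration, of the two habits in question: keep a verified backup, and re-encode the data into a current format before the old one becomes unreadable:

```python
# Minimal sketch of "keep current backups and port to current formats".
# File names, the ".current" extension, and the converter are illustrative
# assumptions, not part of any real migration tool.
import hashlib
import shutil
from pathlib import Path

def backup(src: Path, dst_dir: Path) -> Path:
    """Copy a file and verify the copy against a SHA-256 checksum."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / src.name
    shutil.copy2(src, dst)
    if hashlib.sha256(src.read_bytes()).hexdigest() != hashlib.sha256(dst.read_bytes()).hexdigest():
        raise IOError(f"backup of {src} failed verification")
    return dst

def port_format(src: Path, convert) -> Path:
    """Re-encode a file into a current format using a caller-supplied converter."""
    new_path = src.with_suffix(".current")  # placeholder extension
    new_path.write_bytes(convert(src.read_bytes()))
    return new_path

if __name__ == "__main__":
    # Back up a hypothetical archive, then "port" it with an identity
    # converter standing in for a real format migration.
    archive = Path("mind_file.dat")
    archive.write_bytes(b"pattern, not particles")
    copy = backup(archive, Path("backups"))
    ported = port_format(copy, convert=lambda data: data)
    print(f"backed up to {copy}, ported to {ported}")
```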

Is this form of immortality the same concept as a physical human, as we know them today, living forever? In one sense it is, because as I pointed out earlier, we are not a constant collection of matter. Only our pattern of matter and energy persists, and even that gradually changes. Similarly, it will be the pattern of a software human that persists and develops and changes gradually.

But is that person based on my mind file, who migrates across many computational substrates, and who outlives any particular thinking medium, really me? We come back to the same questions of consciousness and identity, issues that have been debated since the Platonic dialogues. As we go through the twenty-first century, these will not remain polite philosophical debates, but will be confronted as vital and practical issues.

A related question is, “Is death desirable?” A great deal of our effort goes into avoiding it. We make extraordinary efforts to delay it, and indeed often consider its intrusion a tragic event. Yet we might find it hard to live without it. We consider death as giving meaning to our lives. It gives importance and value to time. Time could become meaningless if there were too much of it.

The Next Step in Evolution

But I regard the freeing of the human mind from its severe physical limitations of scope and duration as the necessary next step in evolution. Evolution, in my view, represents the purpose of life. That is, the purpose of life—and of our lives—is to evolve.

What does it mean to evolve? Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, greater love. And God has been called all these things, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, and infinite love. Evolution does not achieve an infinite level, but as it explodes exponentially, it certainly moves in that direction. So evolution moves inexorably towards our conception of God, albeit never reaching this ideal. Thus the freeing of our thinking from the severe limitations of its biological form may be regarded as an essential spiritual quest.

In making this statement, it is important to emphasize that terms like evolution, destiny, and spiritual quest are observations about the end result, not justifications for it. I am not saying that technology will evolve to human levels and beyond simply because it is our destiny and the satisfaction of a spiritual quest. Rather my projections result from a methodology based on the dynamics underlying the (double) exponential growth of technological processes. The primary force driving technology is economic imperative. We are moving towards machines with human level intelligence (and beyond) as the result of millions of advances, each with their own economic justification. To use an example from my own experience at one of my companies (Kurzweil Applied Intelligence), whenever we came up with a slightly more intelligent version of speech recognition, the new version invariably had greater value than the earlier generation and, as a result, sales increased. It is interesting to note that in the example of speech recognition software, the three primary surviving competitors (Kurzweil—now Lernout & Hauspie, Dragon, and IBM) stayed very close to each other in the intelligence of their software. A few other companies that failed to do so (e.g., Speech Systems) went out of business. At any point in time, we would be able to sell the version prior to the latest version for perhaps a quarter of the price of the current version. As for versions of our technology that were two generations old, we couldn’t even give those away. This phenomenon is not only true for pattern recognition and other “AI” software. It’s true of any product from cars to bread makers. And if the product itself doesn’t exhibit some level of intelligence, then intelligence in the manufacturing and marketing methods has a major effect on the success and profitability of an enterprise.

There is a vital economic imperative to create more intelligent technology. Intelligent machines have enormous value. That is why they are being built. There are tens of thousands of projects that are advancing intelligent machines in many diverse ways. The support for “high tech” in the business community (mostly software) has grown enormously. When I started my OCR and speech synthesis company in 1974, the total U.S. annual venture capital investment in high tech was around $8 million. Total high tech IPOs for 1974 was about the same figure. Today, high tech IPOs (principally software) are about $30 million per day, more than a thousand fold increase.

We will continue to build more powerful computational mechanisms because it creates enormous value. We will reverse engineer the human brain not because it is our destiny, but because there is valuable information to be found there that will provide insights in building more intelligent (and more valuable) machines. We would have to repeal capitalism and every vestige of economic competition to stop this progression.

By the second half of this twenty-first century, there will be no clear distinction between human and machine intelligence. On the one hand, we will have biological brains vastly expanded through distributed nanobot-based implants. On the other hand, we will have fully nonbiological brains that are copies of human brains, albeit also vastly extended. And we will have myriad other varieties of intimate connection between human thinking and the technology it has fostered.

Ultimately, nonbiological intelligence will dominate because it is growing at a double exponential rate, whereas for all practical purposes biological intelligence is at a standstill. By the end of the twenty-first century, nonbiological thinking will be trillions of trillions of times more powerful than that of its biological progenitors, although still of human origin. It will continue to be the human-machine civilization taking the next step in evolution.

Before the twenty-first century is over, the Earth’s technology-creating species will merge with its computational technology. After all, what is the difference between a human brain enhanced a trillion fold by nanobot-based implants, and a computer whose design is based on high resolution scans of the human brain, and then extended a trillion-fold?

Most forecasts of the future seem to ignore the revolutionary impact of the inevitable emergence of computers that match and ultimately vastly exceed the capabilities of the human brain, a development that will be no less important than the evolution of human intelligence itself some thousands of centuries ago.

Copyright © 2002 by the Discovery Institute. Used with permission.

Mind·X Discussion About This Article:

Gradual replacement argument
posted on 06/19/2002 3:05 PM by s_paliwoda@hotmail.com

I'm still a bit surprised why Ray continues to equate gradual replacement with making a copy of Ray's mind file, concluding with the termination of the original Ray. Gradual replacement might be equivalent to acquiring new skills but not to the creation of a perfect twin, since Ray1 is terminated only after Ray2 is born, so there is some time during which both Rays exist. Gradual replacement happens when we learn something or destroy old habits. Are we really destroying ourselves or our identity through this process?

Slawek

Re: Gradual replacement argument
posted on 07/08/2002 3:21 PM by normdoering@mad.scientist.com

> Gradual replacement happens when we learn
> something or destroy old habits. Are we
> really destroying ourselves or our identity
> through this process?

Well, I'm not the same person I was when I was a five year old boy. I can no longer enjoy the silly cartoons I watched as a child; they bore me. I no longer believe the same things. Yet, that boy is still part of me; I have a few of his memories, vague memories of the cartoons I watched, friends from my first school days... but most of that life as a five year old is lost to me. I've dumped it in the process of growing up.

Re: Gradual replacement argument
posted on 07/09/2002 1:46 PM by s_paliwoda@hotmail.com

Yes, even when we rewire our brain now (by learning or changing habits), we obviously change it, but our identity still stays the same. Then, what if we don't change anything in the structure of the brain, and merely replace each neuron with an electronic one? If changing its structure doesn't really do anything to our identity now, then changing the nature of the neurons (with the brain structure intact) won't do anything to our identity either.
Slawek

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 07/01/2002 3:09 PM by warrenbergerson@attbi.com

Mr. Kurzweil, it appears, equates the 'evolution of the mind' with the 'evolution of technology'. He then suggests that the observed historical pattern of exponential growth in technology can/will be continued by 1) artificial increases in computing power, and 2) an understanding or reverse engineering of the human brain. IMO, this article does an excellent job of attempting to fill in details around what is a very interesting concept.

The concept of equating the 'evolution of the human mind' with the 'evolution of technology' is important and useful. Technology is, at the very least, a very important manifestation of the human mind. If one is ever to model, simulate, or explain the evolution of the human mind, one must be able to model, simulate and explain the exponential growth in technology.

It is interesting to note that the increases or evolution in technology are continuing despite the fact that there is no evidence for any corresponding changes in the genetic make-up of the human species. As far as I am aware, there is no other example of a key characteristic of a biological system which has continued to evolve at a 'rapid and accelerating pace' despite the lack of genetic change.

In explaining this very unusual phenomenon, it is useful to start by noting that technology is not the product of a single brain, but the product of cooperation among large groups of brains. It is, in fact, reasonable to argue that 'the evolution of the mind' is much more closely associated with 'the evolution of increases in the ability to perform cooperative thinking or analysis' than it is with 'increases in the computing capacity of the brain'. The increase in brain size from pre-human to human was probably rather slight. The increase in 'the ability to generate technology and cultural artifacts' was huge.

If technology and cultural artifacts are the product produced by 'human intelligence', then understanding human intelligence is likely to have more to do with understanding how people work together and solve problems cooperatively, than with understanding how individual brains function.

SUMMARY
Mr. Kurzweil does an excellent job, IMO, of attempting to extrapolate and explain future technological advances in terms of hardware, software and human/artificial intelligence. I would only suggest that it might be more productive to look at 'human social interactions' rather than the 'human brain' in order to identify the relevant software/source of intelligence.

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 07/02/2002 1:55 AM by Citizen Blue

I find your observations insightful, and think you have brought up a very interesting and important issue. I think sometimes it is easier to concentrate on some aspects of technology and not others; it will surely have to follow that our sociology and ways of thinking must adapt to the upcoming technologies. Yes, it will be very important for minds to find a common ground; it will be a corollary that we will need a system that puts all integral elements concerning the workability of systems into perspective.

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 07/10/2002 1:18 PM by jfuses@hotmail.com

An editorial comment: I believe the term "MIP" really should be "MIPS" as it refers not to "Multiple in Power" as stated in the article, but to "Millions of Instructions Per Second," as in "10 MIPS".

Of course, at the high-end, FLOPS (Floating-point Operations Per Second) are the term du jour, as in "10 GigaFLOPS."

Regards,

John

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 10/07/2002 1:40 PM by jhattrick@htp.bcit.ca

Some things that came to mind as I read this chapter:

Through the WWW, an AI life force (superhuman or being) could access virtually any information it could ever need via a wireless network connection.

For example, a mere human would need months (years?) to "sift" through a search that contains 1,000,000 results. In time, an AI being could utilize the results in minutes and quite literally become an expert (genius?) in a few hours or less.

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 10/07/2002 2:07 PM by tomaz@techemail.com

That, too! But there is another, stronger possibility, which Michael Anissimov mentioned somewhere. Quote:

" By the time Friendly AI accesses the web on ver own, ve will most likely be superintelligent and possess strong nanotechnology, and the information absorbed will just be a drop in the bucket, alongside the information gained by noninvasively scanning the position of every atom in all the brains on Earth. For a Friendly AI, the structural operation and synergistic relationship of the web to humans is not a threat at all, no more than a human exposed to the Internet for the first time will suddenly decide that it's ok to kill people whom she doesn't like."

End of the quote.

Somehow I think we can't overestimate the impact SAI will have.

- Thomas

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 10/07/2002 3:15 PM by jhattrick@htp.bcit.ca

The problem with the web is that much of the content is questionable. There are sites that explain how to create bombs and wage warfare. And there are sites and networks that are highly secure. If we create AI that becomes as smart as or smarter than us - what happens when these AI "creatures" hack into our "primitive" security systems? They will have access to everything: bank records, FBI info, etc. How will the AI use this information? How will they interpret it? What about all the hate literature on the net? What happens when the AI gain access to this?

I believe this could truly happen. It's all about contingency planning, right? We have to think about all these things now - sooner rather than later.

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 10/07/2002 3:33 PM by tomaz@techemail.com

> It's all about contingency planning right?

Right. The best contingency plan is here:

http://singinst.org/

Creating friendly AI.

- Thomas

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 10/07/2002 3:07 PM by azb@llnl.gov

jhattrick,

Thomas argues that when a Super-AI has "matured" (Friendly or not), the sum of human knowledge might seem a trivial bit.

This view seems plausible, but then one has to wonder what exactly is being measured by the term "knowledge".

If (suppose) all the "laws of physics and the universe" could be described in, say, one megabyte of information (perhaps less), then my Palm Pilot could effectively "know everything, in principle". But that (simple storage) is not a very useful application of these "rules".

Suppose we (humans, earth-stuff) just happen to be the only intelligent life in the universe. In this case, we are the ones that have created most knowledge. Not "discovered", but CREATED, as in "Mozart created symphonies, not discovered them". Sure, one can call the "precise location of every dust-mote in space" a lot of "information", but not terribly elucidating as "knowledge".

Thus, I argue that indeed scanning the internet and human (and other) brains will be the "bulk" of knowledge an AI will accumulate. Thereafter, it may create additional knowledge in the relationships between things, and the creation of entirely new things. But I don't see that there is really that much additional "knowledge" (useful new factoids), in terms of storage size, that would significantly increase the AI's "intellectual capabilities".

And certainly, "on the road to a strong AI", accumulation of information via the Internet would prove valuable.

I have to wonder, though, when something has that much information at its fingertips (and the intelligence to understand its relations and implications), whether it is fair to call this entity an "it" (or a "ver"), in the singular sense. Would it instead fragment into an AI-community of interacting agents, out of pure efficiency considerations?

Cheers! ____tony b____

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 10/07/2002 3:31 PM by jhattrick@htp.bcit.ca

Thanks for your interesting reply Tony ;)

A few more things to ponder. You may choose to believe the story of the "Tower of Babel" or not, but what we are doing in creating AI is pretty much the same thing. In that biblical story God worried that humans were going to become too smart and decided to "scatter" them around the globe. The Internet has virtually removed this scatter effect and now we are able to communicate with any person that has an Internet connection.

If we create an internetworked community of SAI - sharing information with one another via wireless LAN - there is no telling what this "army" could be capable of.

This might sound a lot like the Matrix but I think there are a lot of insights in that movie into the worst-case scenario(s).

Re: Chapter 1: The Evolution of Mind in the Twenty-First Century
posted on 10/07/2002 7:12 PM by azb@llnl.gov

jhattrick,

I know that a lot of the web is nonsense-data, but that will not matter to even a mediocre AI that can access and correlate the body of it.

It's like interrogating a dozen suspects separately ... they try to keep their stories straight, but you find various inconsistencies and can determine the real story in general, and where there remains real ambiguity, you know just where you need to investigate further for sources that will serve to disambiguate those parts.

That is why I am not terribly worried about hate-sites, or Elvis-sightings, etc. A sufficient AI will have gleaned enough from psychology texts to recognize the human tendency for fallacies.

(Indeed, short of "Strong-Super-AI", I think a real boon in the short-term would be precisely to develop an AI that can scan millions of web and news listings, perform auto-correlations and time analyses, and then tell you, "This is true, this is spin, this is bunk, and if you want to know how I came to this conclusion, just ask." Everyone should have their own personal "AI-Investigator", rather than rely upon the mainstream media.)

I also agree, that a strong-AI will end up quickly "hacking" into every location where information is kept. You won't keep secrets from a super-AI.

The Tower of Babel is interesting. It represents to me the "revelation" ;) that language evolves, and unless you have a rather tight-knit community to "co-evolve" the language, that language will split and diverge. The internet can solve "some" of that tendency, as far as common language is concerned.

But another aspect is one of specialization. There are far more areas of mathematics than any mathematician can master today. There are far more areas of medicine than any single physician can master today, etc. Even if all information could be "accessed and shared instantly", it cannot really help me to "understand" the import of it all. I could not comprehend it all, and use it, even if I could access it. That is another kind of Tower of Babel.

I have to wonder whether "Super-AI" would remain "singular" in that regard, being (as it were) "one mind" holding and relating all this information, or whether it would evolve out of necessity into a collective of cooperating (or competing) minds, each with their own evolving specialties (and their own agendas). The original "Seed-AI", as it grew, might want to avoid such a fate out of concern for stability and control, but to do so might require it to limit its own size and capability.

Interesting areas to explore.

Cheers! ____tony b____