Chapter Six: Building New Brains. . .
You can only make a certain amount with your hands, but with your mind, it's unlimited.
--Kal Seinfeld's advice to his son, Jerry
Let's review what we need to build an intelligent machine. One resource required is the right set of formulas. We examined three quintessential formulas in chapter 4. There are dozens of others in use, and a more complete understanding of the brain will undoubtedly introduce hundreds more. But all of these appear to be variations on the three basic themes: recursive search, self-organizing networks of elements, and evolutionary improvement through repeated struggle among competing designs.
A second resource needed is knowledge. Some pieces of knowledge are needed as seeds for a process to converge on a meaningful result. Much of the rest can be automatically learned by adaptive methods when neural nets or evolutionary algorithms are exposed to the right learning environment.
The third resource required is computation itself. In this regard, the human brain is eminently capable in some ways, and remarkably weak in others. Its strength is reflected in its massive parallelism, an approach that our computers can also benefit from. The brain's weakness is the extraordinarily slow speed of its computing medium, a limitation that computers do not share with us. For this reason, DNA-based evolution will eventually have to be abandoned. DNA-based evolution is good at tinkering with and extending its designs, but it is unable to scrap an entire design and start over. Organisms created through DNA-based evolution are stuck with an extremely plodding type of circuitry.
But the Law of Accelerating Returns tells us that evolution will not remain stuck at a dead end for very long. And indeed, evolution has found a way around the computational limitations of neural circuitry. Cleverly, it has created organisms that in turn invented a computational technology a million times faster than carbon-based neurons (and one that is continuing to get faster still). Ultimately, the computing conducted on extremely slow mammalian neural circuits will be ported to a far more versatile and speedier electronic (and photonic) equivalent.
When will this happen? Let's take another look at the Law of Accelerating Returns as applied to computation. In the chapter 1 chart, "The Exponential Growth of Computing, 1900--1998," we saw that the slope of the curve representing exponential growth was itself gradually increasing. Computer speed (as measured in calculations per second per thousand dollars) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year. This suggests possible exponential growth in the rate of exponential growth.1
This apparent acceleration in the acceleration may result, however, from the confounding of the two strands of the Law of Accelerating Returns, which for the past forty years has expressed itself through the Moore's Law paradigm of shrinking transistor sizes on an integrated circuit. As transistors shrink, the electrons streaming through them have less distance to travel, so the switching speed of each transistor increases. Exponentially improving speed is thus the first strand. Smaller transistors also enable chip manufacturers to squeeze a greater number of them onto an integrated circuit, so exponentially improving density of computation is the second strand.
In the early years of the computer age, it was primarily the first strand--increasing circuit speeds--that improved the overall computation rate of computers. During the 1990s, however, advanced microprocessors began using a form of parallel processing called pipelining, in which multiple calculations were performed at the same time (some mainframes going back to the 1970s used this technique). Thus the speed of computer processors as measured in instructions per second now also reflects the second strand: greater densities of computation resulting from the use of parallel processing.
As we more fully harness the improving density of computation, effective processor speeds are now doubling every twelve months. Exploiting both strands is fully feasible today when we build hardware-based neural nets, because neural net processors are relatively simple and highly parallel. Here we create a processor for each neuron and eventually one for each interneuronal connection. Moore's Law thereby enables us to double both the number of processors and their speed every two years, an effective quadrupling of the number of interneuronal-connection calculations per second over that period.
This apparent acceleration in the acceleration of computer speeds may result, therefore, from an improving ability to benefit from both strands of the Law of Accelerating Returns. When Moore's Law dies by the year 2020, new forms of circuitry beyond integrated circuits will continue both strands of exponential improvement. But ordinary exponential growth--two strands of it--is dramatic enough. Using the more conservative prediction of just one level of acceleration as our guide, let's consider where the Law of Accelerating Returns will take us in the twenty-first century.
The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation. That's rather massive parallel processing, and one key to the strength of human thinking. A profound weakness, however, is the excruciatingly slow speed of neural circuitry, only 200 calculations per second. For problems that benefit from massive parallelism, such as neural-net-based pattern recognition, the human brain does a great job. For problems that require extensive sequential thinking, the human brain is only mediocre.
With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate; other estimates are lower by one to three orders of magnitude. So when will we see the computing speed of the human brain in your personal computer?
The answer depends on the type of computer we are trying to build. The most relevant is a massively parallel neural net computer. In 1997, $2,000 of neural computer chips using only modest parallel processing could perform around 2 billion connection calculations per second. Since neural net emulations benefit from both strands of the acceleration of computational power, this capacity will double every twelve months. Thus by the year 2020, it will have doubled about twenty-three times, resulting in a speed of about 20 million billion neural connection calculations per second, which is equal to the human brain.
If we apply the same analysis to an "ordinary" personal computer, we get the year 2025 for achieving human brain capacity in a $1,000 device.2 This is because the general-purpose computations that a conventional personal computer is designed for are inherently more expensive than the simpler, highly repetitive neural-connection calculations. Thus I believe that the 2020 estimate is more accurate, because by 2020 most of the computations performed in our computers will be of the neural-connection type.
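As a rough sanity check, the crossover arithmetic above can be reproduced in a few lines. The figures are the ones stated in the text; the 2020 result lands within rounding of the 20-million-billion-per-second estimate.

```python
# Back-of-envelope check of the neural-net crossover estimate above, using the
# figures stated in the text: about 2 billion connection calculations per
# second in 1997, doubling every twelve months.
BRAIN_CPS = 100e12 * 200        # 100 trillion connections x 200 calc/sec = 2e16

neural_net_1997 = 2e9           # connection calculations per second, 1997
by_2020 = neural_net_1997 * 2 ** (2020 - 1997)
print(f"neural-net hardware in 2020: {by_2020:.1e} calc/sec "
      f"(human brain estimate: {BRAIN_CPS:.0e})")
# prints about 1.7e16 versus 2e16, within rounding of the estimate in the text
```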
The memory capacity of the human brain is about 100 trillion synapse strengths (neurotransmitter concentrations at interneuronal connections), which we can estimate at about a million billion bits. In 1998, a billion bits of RAM (128 megabytes) cost about $200. The capacity of memory circuits has been doubling every eighteen months. Thus by the year 2023, a million billion bits will cost about $1,000.3 However, this silicon equivalent will run more than a billion times faster than the human brain. There are techniques for trading off memory for speed, so we can effectively match human memory for $1,000 sooner than 2023.
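The memory estimate can be checked the same way. The inputs are the figures given above, and the result lands within about a year of the cited date, depending on rounding.

```python
# The memory crossover under the stated assumptions: about a billion bits for
# $200 in 1998, with bits per dollar doubling every eighteen months.
import math

target_bits, budget = 1e15, 1000                 # a million billion bits for $1,000
bits_per_dollar_1998 = 1e9 / 200
doublings = math.log2((target_bits / budget) / bits_per_dollar_1998)
print(f"{doublings:.1f} doublings -> around {1998 + 1.5 * doublings:.0f}")
# prints roughly 2024, within a year or so of the 2023 figure cited above
```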
Taking all of this into consideration, it is reasonable to estimate that a $1,000 personal computer will match the computing speed and capacity of the human brain by around the year 2020, particularly for the neuron-connection calculation, which appears to comprise the bulk of the computation in the human brain. Supercomputers are one thousand to ten thousand times faster than personal computers. As this book is being written, IBM is building a supercomputer based on the design of Deep Blue, its silicon chess champion, capable of 10 teraflops (that is, 10 trillion calculations per second), only 2,000 times slower than the human brain. Japan's Nippon Electric Company hopes to beat that with a 32-teraflop machine. IBM then hopes to follow that with 100 teraflops by around the year 2004 (just what Moore's Law predicts, by the way). Supercomputers will reach the 20 million billion calculations per second capacity of the human brain around 2010, a decade earlier than personal computers.4
In another approach, projects such as Sun Microsystems' Jini program have been initiated to harvest the unused computation on the Internet. Note that at any particular moment, the significant majority of the computers on the Internet are not being used. Even those that are being used are not being used to capacity (for example, typing text uses less than one percent of a typical notebook computer's computing capacity). Under the Internet computation harvesting proposals, cooperating sites would load special software that would enable a virtual massively parallel computer to be created out of the computers on the network. Each user would still have priority over his or her own machine, but in the background, a significant fraction of the millions of computers on the Internet would be harvested into one or more supercomputers. The amount of unused computation on the Internet today exceeds the computational capacity of the human brain, so we already have available in at least one form the hardware side of human intelligence. And with the continuation of the Law of Accelerating Returns, this availability will become increasingly ubiquitous.
After human capacity in a $1,000 personal computer is achieved around the year 2020, our thinking machines will improve the cost performance of their computing by a factor of two every twelve months. That means that the capacity of computing will double ten times every decade, which is a factor of one thousand (2^10) every ten years. So your personal computer will be able to simulate the brain power of a small village by the year 2030, the entire population of the United States by 2048, and a trillion human brains by 2060.5 If we estimate the human Earth population at 10 billion persons, one penny's worth of computing circa 2099 will have a billion times greater computing capacity than all humans on Earth.6
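These milestones follow directly from the doubling assumption. A short sketch of the arithmetic, assuming one brain equivalent per $1,000 in 2020 and a population of 10 billion:

```python
# Post-2020 projection sketch: one human-brain equivalent per $1,000 in 2020,
# with cost performance doubling every twelve months thereafter.
def brain_equivalents_per_1000_dollars(year):
    return 2 ** (year - 2020)

for year in (2030, 2048, 2060):
    print(year, f"{brain_equivalents_per_1000_dollars(year):,.0f} brain equivalents per $1,000")
# 2030 ~ 1,000 (a small village), 2048 ~ 270 million (the United States),
# 2060 ~ 1.1 trillion human brains

penny_2099 = brain_equivalents_per_1000_dollars(2099) / 100_000   # $1,000 = 100,000 pennies
print(f"one penny circa 2099 ~ {penny_2099 / 10e9:.1e} x all 10 billion human brains")
# on the order of the billionfold figure cited above
```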
Of course I may be off by a year or two. But computers in the twenty-first century will not be wanting for computing capacity or memory.
Computing Substrates in the Twenty-First Century
I've noted that the continued exponential growth of computing is implied by the Law of Accelerating Returns, which states that any process that moves toward greater order--evolution in particular--will exponentially speed up its pace as time passes. The two resources that the exploding pace of an evolutionary process--such as the progression of computer technology--requires are (1) its own increasing order, and (2) the chaos in the environment in which it takes place. Both of these resources are essentially without limit.
Although we can anticipate the overall acceleration in technological progress, one might still expect that the actual manifestation of this progression would be somewhat irregular. After all, it depends on such variable phenomena as individual innovation, business conditions, investment patterns, and the like. Contemporary theories of evolutionary processes, such as the Punctuated Equilibrium theories,7 posit that evolution works by periodic leaps or discontinuities followed by periods of relative stability. It is thus remarkable how predictable computer progress has been.
So, how will the Law of Accelerating Returns as applied to computation roll out in the decades beyond the demise of Moore's Law on Integrated Circuits by the year 2020? For the immediate future, Moore's Law will continue with ever smaller component geometries packing greater numbers of yet faster transistors on each chip. But as circuit dimensions approach atomic sizes, undesirable quantum effects such as unwanted electron tunneling will produce unreliable results. Nonetheless, the standard Moore's Law methodology will get very close to human processing power in a personal computer and beyond that in a supercomputer.
The next frontier is the third dimension. Already, venture-backed companies (mostly California-based) are competing to build chips with dozens and ultimately thousands of layers of circuitry. With names like Cubic Memory, Dense-Pac, and Staktek, these companies are already shipping functional three-dimensional "cubes" of circuitry. Although not yet cost competitive with the customary flat chips, the third dimension will be there when we run out of space in the first two.8
Computing with Light
Beyond that, there is no shortage of exotic computing technologies being developed in research labs, many of which have already demonstrated promising results. Optical computing uses streams of photons (particles of light) rather than electrons. A laser can produce billions of coherent streams of photons, with each stream performing its own independent series of calculations. The calculations on each stream are performed in parallel by special optical elements such as lenses, mirrors, and diffraction gratings. Several companies, including QuantaImage, Photonics, and Mytec Technologies, have applied optical computing to the recognition of fingerprints. Lockheed has applied optical computing to the automatic identification of malignant breast lesions.9
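The kind of pattern recognition these optical systems perform is essentially correlation carried out in the Fourier domain by lenses. A minimal classical sketch of the equivalent operation follows; the "image" and stored "template" are random stand-ins for, say, a fingerprint and a filed print, and the FFT plays the role of the optics.

```python
# Classical sketch of what an optical correlator computes in parallel with
# light: a Fourier-domain matched filter locating a stored pattern in an image.
# The image and template here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))
template = image[100:132, 80:112]        # pretend this patch is the stored pattern

padded = np.zeros_like(image)
padded[:32, :32] = template
# Cross-correlation via the Fourier transform (the operation a lens-based
# correlator performs on all positions at once).
corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)       # (100, 80): where the pattern actually sits
```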
The advantage of an optical computer is that it is massively parallel, with potentially trillions of simultaneous calculations. Its disadvantage is that it is not programmable and performs a fixed set of calculations for a given configuration of optical computing elements. But for important classes of problems such as recognizing patterns, it combines massive parallelism (a quality shared by the human brain) with extremely high speed (which the human brain lacks).
Computing with the Machinery of Life
A new field called molecular computing has sprung up to harness the DNA molecule itself as a practical computing device. DNA is nature's own nanoengineered computer, and it is well suited for solving combinatorial problems. Combining attributes is, after all, the essence of genetics. Applying actual DNA to practical computing applications got its start when Leonard Adleman, a University of Southern California mathematician, coaxed a test tube full of DNA molecules (see the box on page 108) to solve the well-known "traveling salesperson" problem. In this classic problem, we try to find an optimal route for a hypothetical traveler between multiple cities without having to visit a city more than once. Only certain city pairs are connected by routes, so finding the right path is not straightforward. It is an ideal problem for a recursive algorithm, although if the number of cities is too large, even a very fast recursive search will take far too long.
Professor Adleman and other scientists in the molecular-computing field have identified a set of enzyme reactions that corresponds to the logical and arithmetic operations needed to solve a variety of computing problems. Although DNA molecular operations produce occasional errors, the number of DNA strands being used is so large that any molecular errors become statistically insignificant. Thus, despite the inherent error rate in DNA's computing and copying processes, a DNA computer can be highly reliable if properly designed.
DNA computers have subsequently been applied to a range of difficult combinatorial problems. A DNA computer is more flexible than an optical computer, but it is still limited to the technique of massively parallel search by assembling combinations of elements.10
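The flavor of this combination-and-filter style of computation can be caricatured in software. The sketch below uses a made-up map of cities and roads and applies the same kind of successive filters that the boxed procedure below implements with enzymes (in the real experiment the linking chemistry itself enforces the road map, whereas here it is applied as an explicit filter).

```python
# Software caricature of DNA-style massively parallel search: assemble a huge
# number of random strands, then filter out every strand that fails a test.
# The cities and roads are hypothetical.
import random

cities = ["A", "B", "C", "D", "E"]
roads = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "C"), ("B", "D")}

def connected(x, y):
    return (x, y) in roads or (y, x) in roads

# "Pour the strands together": random city sequences, formed in vast numbers.
strands = [random.choices(cities, k=len(cities)) for _ in range(200_000)]

# Successive filters, analogous to the enzyme steps in the boxed procedure:
strands = [s for s in strands if s[0] == "A" and s[-1] == "E"]        # correct endpoints
strands = [s for s in strands if len(set(s)) == len(cities)]          # every city exactly once
strands = [s for s in strands if all(connected(a, b) for a, b in zip(s, s[1:]))]

print(len(strands), "surviving strands; example:", strands[0] if strands else None)
```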
There is another, more powerful way to apply the computing power of DNA that has not yet been explored. I present it below in the section on quantum computing.
How to Solve the Traveling-Salesperson Problem with a Test Tube of DNA
One of DNA's advantageous properties is its ability to replicate both itself and the information it contains. To solve the traveling-salesperson problem, Professor Adleman performed the following steps:
· Generate a small strand of DNA with a unique code for each city.
· Replicate each such strand (one for each city) trillions of times using a process called "polymerase chain reaction" (PCR).
· Next, put the pools of DNA (one for each city) together in a test tube. This step uses DNA's affinity to link strands together. Longer strands will form automatically. Each such longer strand represents a possible route of multiple cities. The small strands representing each city link up with one another in a random fashion, so there is no mathematical certainty that a linked strand representing the correct answer (sequence of cities) will be formed. However, the number of strands is so vast that it is virtually certain that at least one strand--and probably millions--will be formed that represent the correct answer.
The next steps use specially designed enzymes to eliminate the trillions of strands that represent the wrong answer, leaving only the strands representing the correct answer:
· Use molecules called primers to destroy those DNA strands that do not start with the start city as well as those that do not end with the end city, and replicate these surviving strands (using PCR).
· Use an enzyme reaction to eliminate those DNA strands that represent a travel path containing more cities than the total number of cities.
· Use an enzyme reaction to destroy those strands that do not include the first city. Repeat for each of the cities.
· Now, each of the surviving strands represents the correct answer. Replicate these surviving strands (using PCR) until there are billions of such strands.
· Using a technique called electrophoresis, read out the DNA sequence of these correct strands (as a group). The readout looks like a set of distinct lines, which specifies the correct sequence of cities.
The Brain in the Crystal
Another approach contemplates growing a computer as a crystal directly in three dimensions, with computing elements the size of large molecules within the crystalline lattice. This is yet another way to harness the third dimension.
Stanford Professor Lambertus Hesselink has described a system in which data is stored in a crystal as a hologram--an optical interference pattern.11 This three-dimensional storage method requires only a million atoms for each bit and thus could achieve a trillion bits of storage for each cubic centimeter. Other projects hope to harness the regular molecular structure of crystals as actual computing elements.
The Nanotube: A Variation of Buckyballs
Three professors--Richard Smalley and Robert Curl of Rice University, and Harold Kroto of the University of Sussex--shared the 1996 Nobel Prize in Chemistry for their 1985 discovery of soccer-ball-shaped molecules formed of a large number of carbon atoms. Organized in hexagonal and pentagonal patterns like R. Buckminster Fuller's building designs, they were dubbed "buckyballs." These unusual molecules, which form naturally in the hot fumes of a furnace, are extremely strong--a hundred times stronger than steel--a property they share with Fuller's architectural innovations.12
More recently, Dr. Sumio Iijima of Nippon Electric Company showed that in addition to the spherical buckyballs, the vapor from carbon arc lamps also contained elongated carbon molecules that looked like long tubes.13 Called nanotubes because of their extremely small size--fifty thousand of them side by side would equal the thickness of one human hair--they are formed of the same hexagonal and pentagonal patterns of carbon atoms as buckyballs and share the buckyball's unusual strength.
What is most remarkable about the nanotube is that it can perform the electronic functions of silicon-based components. If a nanotube is straight, it conducts electricity as well as or better than a metal conductor. If a slight helical twist is introduced, the nanotube begins to act like a transistor. The full range of electronic devices can be built using nanotubes.
Since a nanotube is essentially a rolled-up sheet of graphite only one atom thick, it is vastly smaller than the silicon transistors on an integrated chip. Although extremely small, nanotubes are far more durable than silicon devices. Moreover, they handle heat much better than silicon and thus can be assembled into three-dimensional arrays more easily than silicon transistors. Dr. Alex Zettl, a physics professor at the University of California at Berkeley, envisions three-dimensional arrays of nanotube-based computing elements similar to--but far denser and faster than--the human brain.
Quantum particles are the dreams that stuff is made of.
--David Moser
So far we have been talking about mere digital computing. There is actually a more powerful approach called quantum computing. It promises the ability to solve problems that even massively parallel digital computers cannot solve. Quantum computers harness a paradoxical result of quantum mechanics. Actually, I am being redundant--all results of quantum mechanics are paradoxical.
Note that the Law of Accelerating Returns and the other projections in this book do not rely on quantum computing. The projections in this book are based on readily measurable trends and do not depend on discontinuities in technological progress, although such discontinuities certainly occurred in the twentieth century. There will inevitably be technological discontinuities in the twenty-first century as well, and quantum computing would certainly qualify.
What is quantum computing? Digital computing is based on "bits" of information, which are either off or on--zero or one. Bits are organized into larger structures such as numbers, letters, and words, which in turn can represent virtually any form of information: text, sounds, pictures, moving images. Quantum computing, on the other hand, is based on qu-bits (pronounced cue-bits), which essentially are zero and one at the same time. The qu-bit is based on the fundamental ambiguity inherent in quantum mechanics. The position, momentum, or other state of a fundamental particle remains "ambiguous" until a process of disambiguation causes that particle to "decide" where it is, where it has been, and what properties it has. For example, consider a stream of photons that strike a sheet of glass at a 45-degree angle. As each photon strikes the glass, it has a choice of traveling either straight through the glass or reflecting off the glass. Each photon will take both paths (in fact more than this; see below) until a process of conscious observation forces each particle to decide which path it took. This behavior has been confirmed in numerous contemporary experiments.
In a quantum computer, the qu-bits would be represented by a quantum property--nuclear spin is a popular choice--of individual atoms. If set up in the proper way, the nuclei will not have decided the direction of their spin (up or down) and thus will be in both states at the same time. The process of conscious observation of the spin states--or any subsequent phenomena dependent on a determination of these states--causes the ambiguity to be resolved. This process of disambiguation is called quantum decoherence. If it weren't for quantum decoherence, the world we live in would be a baffling place indeed.
The key to the quantum computer is that we would present it with a problem, along with a way to test the answer. We would set up the quantum decoherence of the qu-bits in such a way that only an answer that passes the test survives the decoherence. The failing answers essentially cancel each other out. As with a number of other approaches (for example, recursive and genetic algorithms), one of the keys to quantum computing is, therefore, a careful statement of the problem, including a precise way to test possible answers.
The series of qu-bits represents simultaneously every possible solution to the problem. A single qu-bit represents two possible solutions. Two linked qu-bits represent four possible answers. A quantum computer with 1,000 qu-bits represents 2^1,000 (approximately equal to a decimal number consisting of a 1 followed by 301 zeroes) possible solutions simultaneously. The statement of the problem--expressed as a test to be applied to potential answers--is presented to the string of qu-bits so that the qu-bits decohere (that is, each qu-bit changes from its ambiguous 0-1 state to an actual 0 or a 1), leaving a series of 0's and 1's that pass the test. Essentially all 2^1,000 possible solutions have been tried simultaneously, leaving only the correct solution.
This process of reading out the answer through quantum decoherence is obviously the key to quantum computing. It is also the most difficult aspect to grasp. Consider the following analogy. Beginning physics students learn that if light strikes a mirror at an angle, it will bounce off the mirror in the opposite direction and at the same angle to the surface. But according to quantum theory, that is not what is happening. Each photon actually bounces off every possible point on the mirror, essentially trying out every possible path. The vast majority of these paths cancel each other out, leaving only the path that classical physics predicts. Think of the mirror as representing a problem to be solved. Only the correct solution--light bounced off at an angle equal to the incoming angle--survives all of the quantum cancellations. A quantum computer works the same way. The test of the correctness of the answer to the problem is set up in such a way that the vast majority of the possible answers--those that do not pass the test--cancel each other out, leaving only the sequence of bits that does pass the test. An ordinary mirror, therefore, can be thought of as a special example of a quantum computer, albeit one that solves a rather simple problem.
As a more useful example, encryption codes are based on factoring large numbers (factoring means determining which smaller numbers, when multiplied together, result in the larger number). Factoring a number with several hundred bits is virtually impossible on any digital computer even if we had billions of years to wait for the answer. A quantum computer can try every possible combination of factors simultaneously and break the code in less than a billionth of a second (communicating the answer to human observers does take a bit longer). The test applied by the quantum computer during its key disambiguation stage is very simple: just multiply one factor by the other and if the result equals the encryption code, then we have solved the problem.
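For readers who want to see the "test the answer" idea in running code, here is a classical toy simulation of the amplitude bookkeeping used by quantum search (Grover-style amplitude amplification, simulated with an ordinary array). The oracle is exactly the test described above (does a candidate evenly divide the target number?); the number 221 and the eight-qubit candidate range are made up for illustration, and real quantum code breaking would use more specialized algorithms.

```python
# Toy classical simulation of quantum search over all candidate factors at
# once (Grover-style amplitude amplification). Hypothetical example values.
import numpy as np

N_TO_FACTOR = 221                 # 13 x 17, small enough to simulate
n_qubits = 8                      # candidates 0..255, all held "simultaneously"
dim = 2 ** n_qubits

def passes_test(x):               # the test from the text: is x a nontrivial factor?
    return 1 < x < N_TO_FACTOR and N_TO_FACTOR % x == 0

marked = np.array([passes_test(x) for x in range(dim)])
amp = np.full(dim, 1.0 / np.sqrt(dim))          # equal superposition of all candidates

iterations = int(np.pi / 4 * np.sqrt(dim / marked.sum()))
for _ in range(iterations):
    amp[marked] *= -1                           # oracle: flip answers that pass the test
    amp = 2 * amp.mean() - amp                  # inversion about the mean

probs = amp ** 2
for x in np.argsort(probs)[-2:]:
    print(f"candidate {x}: probability {probs[x]:.3f}, passes test: {passes_test(x)}")
# the two surviving candidates, 13 and 17, carry nearly all the probability
```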
It has been said that quantum computing is to digital computing as a hydrogen bomb is to a firecracker. This is a remarkable statement when we consider that digital computing is quite revolutionary in its own right. The analogy is based on the following observation. Consider (at least in theory) a Universe-sized (nonquantum) computer in which every neutron, electron, and proton in the Universe is turned into a computer, and each one (that is, every particle in the Universe) is able to compute trillions of calculations per second. Now imagine certain problems that this Universe-sized supercomputer would be unable to solve even if we ran it until either the next big bang or the death of all the stars in the Universe--ten to thirty billion years from now. There are many examples of such massively intractable problems: for example, cracking encryption codes that use a thousand bits, or solving the traveling-salesperson problem with a thousand cities. While very massive digital computing (including our theoretical Universe-sized computer) is unable to solve this class of problems, a quantum computer of microscopic size could solve them in less than a billionth of a second.
Are quantum computers feasible? Recent advances, both theoretical and practical, suggest that the answer is yes. Although a practical quantum computer has not been built, the means for harnessing the requisite decoherence has been demonstrated. Isaac Chuang of Los Alamos National Laboratory and MIT's Neil Gershenfeld have actually built a quantum computer using the carbon atoms in the alanine molecule. Their quantum computer was only able to add one and one, but that's a start. We have, of course, been relying on practical applications of other quantum effects, such as the electron tunneling in transistors, for decades.14
A Quantum Computer in a Cup of Coffee
One of the difficulties in designing a practical quantum computer is that it needs to be extremely small, basically atom or molecule sized, to harness the delicate quantum effects. But it is very difficult to keep individual atoms and molecules from moving around due to thermal effects. Moreover, individual molecules are generally too unstable to build a reliable machine. To address these problems, Chuang and Gershenfeld have come up with a theoretical breakthrough. Their solution is to take a cup of liquid and consider every molecule in it to be a quantum computer. Now, instead of a single unstable molecule-sized quantum computer, they have a cup with about a hundred billion trillion quantum computers. The point here is not more massive parallelism but rather massive redundancy. In this way, the inevitably erratic behavior of some of the molecules has no effect on the statistical behavior of all the molecules in the liquid. This approach of using the statistical behavior of trillions of molecules to overcome the lack of reliability of a single molecule is similar to Professor Adleman's use of trillions of DNA strands to overcome the comparable issue in DNA computing.
This approach to quantum computing also solves the problem of reading out the answer bit by bit without causing those qu-bits that have not yet been read to decohere prematurely. Chuang and Gershenfeld subject their liquid computer to radio-wave pulses, which cause the molecules to respond with signals indicating the spin state of each nucleus. Each pulse does cause some unwanted decoherence, but, again, this decoherence does not affect the statistical behavior of trillions of molecules. In this way, the quantum effects become stable and reliable.
Chuang and Gershenfeld are currently building a quantum computer that can factor small numbers. Although this early model will not compete with conventional digital computers, it will be an important demonstration of the feasibility of quantum computing. Apparently high on their list for a suitable quantum liquid is freshly brewed Java coffee, which, Gershenfeld notes, has "unusually even heating characteristics."
Quantum computing starts to overtake digital computing when we can link at least 40 qu-bits. A 40-qu-bit quantum computer would be evaluating a trillion possible solutions simultaneously, which would match the fastest supercomputers. At 60 qu-bits, we would be doing a million trillion simultaneous trials. When we get to hundreds of qu-bits, the capabilities of a quantum computer would vastly overpower any conceivable digital computer.
So here's my idea. The power of a quantum computer depends on the number of qu-bits that we can link together. We need to find a large molecule that is specifically designed to hold large amounts of information. Evolution has designed just such a molecule: DNA. We can readily create any sized DNA molecule we wish, from a few dozen nucleotide rungs to thousands. So once again we combine two elegant ideas--in this case the liquid-DNA computer and the liquid-quantum computer--to come up with a solution greater than the sum of its parts. By putting trillions of DNA molecules in a cup, there is the potential to build a highly redundant--and therefore reliable--quantum computer with as many qu-bits as we care to harness. Remember you read it here first.
Suppose No One Ever Looks at the Answer
Consider that the quantum ambiguity a quantum computer relies on is decohered, that is, disambiguated, when a conscious entity observes the ambiguous phenomenon. The conscious entities in this case are us, the users of the quantum computer. But in using a quantum computer, we are not directly looking at the spin states of individual nuclei. The spin states are measured by an apparatus that in turn answers some question that the quantum computer has been asked to solve. These measurements are then processed by other electronic gadgets, manipulated further by conventional computing equipment, and finally displayed or printed on a piece of paper.
Suppose no human or other conscious entity ever looks at the printout. In this situation, there has been no conscious observation, and therefore no decoherence. As I discussed earlier, the physical world only bothers to manifest itself in an unambiguous state when one of us conscious entities decides to interact with it. So the page with the answer is ambiguous, undetermined--until and unless a conscious entity looks at it. Then instantly all the ambiguity is retroactively resolved, and the answer is there on the page. The implication is that the answer is not there until we look at it. But don't try to sneak up on the page fast enough to see the answerless page; the quantum effects are instantaneous.
What Is It Good For?
A key requirement for quantum computing is a way to test the answer. Such a test does not always exist. However, a quantum computer would be a great mathematician. It could simultaneously consider every possible combination of axioms and previously solved theorems (within a quantum computer's qu-bit capacity) to prove or disprove virtually any provable or disprovable conjecture. Although a mathematical proof is often extremely difficult to come up with, confirming its validity is usually straightforward, so the quantum approach is well suited.
Quantum computing is not directly applicable, however, to problems such as playing a board game. Whereas the "perfect" chess move for a given board is a good example of a finite but intractable computing problem, there is no easy way to test the answer. If a person or process were to present an answer, there is no way to test its validity other than to build the same move-countermove tree that generated the answer in the first place. Even for mere "good" moves, a quantum computer would have no obvious advantage over a digital computer.
How about creating art? Here a quantum computer would have considerable value. Creating a work of art involves solving a series, possibly an extensive series, of problems. A quantum computer could consider every possible combination of elements--words, notes, strokes--for each such decision. We still need a way to test each answer to the sequence of aesthetic problems, but the quantum computer would be ideal for instantly searching through a Universe of possibilities.
Encryption Destroyed and Resurrected
As mentioned above, the classic problem that a quantum computer is ideally suited for is cracking encryption codes, which relies on factoring large numbers. The strength of an encryption code is measured by the number of bits that needs to be factored. For example, it is illegal in the United States to export encryption technology using more than 40 bits (56 bits if you give a key to law-enforcement authorities). A 40-bit encryption method is not very secure. In September 1997, Ian Goldberg, a University of California at Berkeley graduate student, was able to crack a 40-bit code in three and a half hours using a network of 250 small computers.15 A 56-bit code is a bit better (16 bits better, actually). Ten months later, John Gilmore, a computer privacy activist, and Paul Kocher, an encryption expert, were able to break the 56-bit code in 56 hours using a specially designed computer that cost them $250,000 to build. But a quantum computer can easily factor any sized number (within its capacity). Quantum computing technology would essentially destroy digital encryption.
But as technology takes away, it also gives. A related quantum effect can provide a new method of encryption that can never be broken. Again, keep in mind that, in view of the Law of Accelerating Returns, "never" is not as long as it used to be.
This effect is called quantum entanglement. Einstein, who was not a fan of quantum mechanics, had a different name for it, calling it "spooky action at a distance." The phenomenon was recently demonstrated by Dr. Nicolas Gisin of the University of Geneva in an experiment conducted across the city of Geneva.16 Dr. Gisin sent twin photons in opposite directions through optical fibers. Once the photons were about seven miles apart, they each encountered a glass plate from which they could either bounce off or pass through. Thus, each was forced to choose between two equally probable pathways. Since there was no possible communication link between the two photons, classical physics would predict that their decisions would be independent. But they both made the same decision. And they did so at the same instant in time, so even if there were an unknown communication path between them, there was not enough time for a message to travel from one photon to the other at the speed of light. The two particles were quantum entangled and communicated instantly with each other regardless of their separation. The effect was reliably repeated over many such photon pairs.
The apparent communication between the two photons takes place at a speed far greater than the speed of light. In theory, the speed is infinite, in that the decoherence of the two photons' travel decisions, according to quantum theory, takes place at exactly the same instant. Dr. Gisin's experiment was sufficiently sensitive to demonstrate that the communication was at least ten thousand times faster than the speed of light.
So, does this violate Einstein's Special Theory of Relativity, which postulates the speed of light as the fastest speed at which we can transmit information? The answer is no--there is no information being communicated by the entangled photons. The decision of the photons is random--a profound quantum randomness--and randomness is precisely not information. Both the sender and the receiver of the message simultaneously access the identical random decisions of the entangled photons, which are used to encode and decode, respectively, the message. So we are communicating randomness--not information--at speeds far greater than the speed of light. The only way we could convert the random decisions of the photons into information is if we edited the random sequence of photon decisions. But editing this random sequence would require observing the photon decisions, which in turn would cause quantum decoherence, which would destroy the quantum entanglement. So Einstein's theory is preserved.
Even though we cannot instantly transmit information using quantum entanglement, transmitting randomness is still very useful. It allows us to resurrect the process of encryption that quantum computing would destroy. If the sender and receiver of a message are at the two ends of an optical fiber, they can use the precisely matched random decisions of a stream of quantum entangled photons to respectively encode and decode a message. Since the encryption is fundamentally random and nonrepeating, it cannot be broken. Eavesdropping would also be impossible, as this would cause quantum decoherence that could be detected at both ends. So privacy is preserved.
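In effect, the matched random bits play the role of a one-time pad. A minimal sketch of the classical encode/decode step follows; the shared random bytes are simply simulated here, since generating them identically at both ends is the quantum-entanglement part.

```python
# One-time-pad encode/decode using a shared random bit stream, which in the
# scheme above would come from the matched decisions of entangled photons
# (simulated here with an ordinary random generator).
import secrets

message = b"meet at dawn"
shared_random = secrets.token_bytes(len(message))     # stand-in for the entangled-photon bits

ciphertext = bytes(m ^ k for m, k in zip(message, shared_random))      # sender encodes
recovered = bytes(c ^ k for c, k in zip(ciphertext, shared_random))    # receiver decodes
print(recovered)                                      # b'meet at dawn'
```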
Note that in quantum encryption, we are transmitting the code instantly. The actual message will arrive much more slowly--at only the speed of light.
The prospect of computers competing with the full range of human capabilities generates strong, often adverse feelings, as well as no shortage of arguments that such a specter is theoretically impossible. One of the more interesting such arguments comes from an Oxford mathematician and physicist, Roger Penrose.
In his 1989 best-seller, The Emperor's New Mind, Penrose puts forth two conjectures.17 The first has to do with an unsettling theorem proved by a Czech mathematician, Kurt Gödel. Gödel's famous "incompleteness theorem," which has been called the most important theorem in mathematics, states that in any consistent mathematical system powerful enough to generate the natural numbers, there inevitably exist propositions that can be neither proved nor disproved. This was another one of those twentieth-century insights that upset the orderliness of nineteenth-century thinking.
A corollary of Gödel's theorem is that there are mathematical propositions that cannot be decided by an algorithm. In essence, these Gödelian impossible problems require an infinite number of steps to be solved. So Penrose's first conjecture is that machines cannot do what humans can do because machines can only follow an algorithm. An algorithm cannot solve a Gödelian unsolvable problem. But humans can. Therefore, humans are better.
Penrose goes on to state that humans can solve unsolvable problems because our brains do quantum computing. Subsequently responding to criticism that neurons are too big to exhibit quantum effects, Penrose cited small structures in the neurons called microtubules that may be capable of quantum computation.
However, Penrose's first conjecture--that humans are inherently superior to machines--is unconvincing for at least three reasons:
1. It is true that machines can't solve Gödelian impossible problems. But humans can't solve them either. Humans can only estimate them. Computers can make estimates as well, and in recent years are doing a better job of this than humans.
2. In any event, quantum computing does not permit solving Gödelian impossible problems either. Solving a Gödelian impossible problem requires an algorithm with an infinite number of steps. Quantum computing can turn an intractable problem that could not be solved on a conventional computer in trillions of years into an instantaneous computation. But it still falls short of infinite computing.
3. Even if (1) and (2) above were wrong, that is, if humans could solve Gödelian impossible problems and do so because of their quantum-computing ability, that still does not restrict quantum computing from machines. The opposite is the case. If the human brain exhibits quantum computing, this would only confirm that quantum computing is possible, that matter following natural laws can perform quantum computing. Any mechanisms in human neurons capable of quantum computing, such as the microtubules, would be replicable in a machine. Machines use quantum effects--tunneling--in trillions of devices (that is, transistors) today.18 There is nothing to suggest that the human brain has exclusive access to quantum computing.
Penrose's second conjecture is more difficult to resolve. It is that an entity exhibiting quantum computing is conscious. He is saying that it is the human's quantum computing that accounts for her consciousness. Thus quantum computing--quantum decoherence--yields consciousness.
Now we do know that there is a link between consciousness and quantum decoherence. That is, consciousness observing a quantum uncertainty causes quantum decoherence. Penrose, however, is asserting a link in the opposite direction. This does not follow logically. Of course quantum mechanics is not logical in the usual sense--it follows quantum logic (some observers use the word "strange" to describe quantum logic). But even applying quantum logic, Penrose's second conjecture does not appear to follow. On the other hand, I am unable to reject it out of hand because there is a strong nexus between consciousness and quantum decoherence in that the former causes the latter. I have thought about this issue for three years, and have been unable to accept it or reject it. Perhaps before writing my next book I will have an opinion on Penrose's second conjecture.
For many people the mind is the last refuge of mystery against the encroaching spread of science, and they don't like the idea of science engulfing the last bit of terra incognita.
--Herb Simon as quoted by Daniel Dennett
Cannot we let people be themselves, and enjoy life in their own way? You are trying to make another you. One's enough.
--Ralph Waldo Emerson
For the wise men of old . . . the solution has been knowledge and self-discipline, . . . and in the practice of this technique, they are ready to do things hitherto regarded as disgusting and impious--such as digging up and mutilating the dead.
--C. S. Lewis
Intelligence is: (a) the most complex phenomenon in the Universe; or (b) a profoundly simple process.
The answer, of course, is (c) both of the above. It's another one of those great dualities that make life interesting. We've already talked about the simplicity of intelligence: simple paradigms and the simple process of computation. Let's talk about the complexity.
IS THE BRAIN BIG ENOUGH?
Are our conception of human neuron functioning and our estimates of the number of neurons and connections in the human brain consistent with what we know about the brain's capabilities? Perhaps human neurons are far more capable than we think they are. If so, building a machine with human-level capabilities might take longer than expected.
We find that estimates of the number of concepts--"chunks" of knowledge--that a human expert in a particular field has mastered are remarkably consistent: about 50,000 to 100,000. This approximate range appears to be valid over a wide range of human endeavors: the number of board positions mastered by a chess grand master; the concepts mastered by an expert in a technical field, such as a physician; the vocabulary of a writer (Shakespeare used 29,000 words;19 this book uses a lot fewer).
This type of professional knowledge is, of course, only a small subset of the knowledge we need to function as human beings. Basic knowledge of the world, including so-called common sense, is more extensive. We also have an ability to recognize patterns: spoken language, written language, objects, faces. And we have our skills: walking, talking, catching balls. I believe that a reasonably conservative estimate of the general knowledge of a typical human is a thousand times greater than the knowledge of an expert in her professional field. This provides us a rough estimate of 100 million chunks--bits of understanding, concepts, patterns, specific skills--per human. As we will see below, even if this estimate is low (by a factor of up to a thousand), the brain is still big enough.
The number of neurons in the human brain is estimated at approximately 100 billion, with an average of 1,000 connections per neuron, for a total of 100 trillion connections. With 100 trillion connections and 100 million chunks of knowledge (including patterns and skills), we get an estimate of about a million connections per chunk.
Our computer simulations of neural nets use a variety of different types of neuron models, all of which are relatively simple. Efforts to provide detailed electronic models of real mammalian neurons appear to show that while animal neurons are more complicated than typical computer models, the difference in complexity is modest. Even using our simpler computer versions of neurons, we find that we can model a chunk of knowledge--a face, a character shape, a phoneme, a word sense--using as few as a thousand connections per chunk. Thus our rough estimate of a million neural connections in the human brain per human knowledge chunk appears reasonable.
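The estimate chain is simple enough to spell out explicitly; all of the figures below are the rough values used in the text.

```python
# The chunks-versus-connections arithmetic from the text.
connections = 100e9 * 1_000            # 100 billion neurons x 1,000 connections each
expert_chunks = 100_000                # concepts mastered by an expert
human_chunks = expert_chunks * 1_000   # general knowledge ~1,000x professional knowledge

per_chunk = connections / human_chunks
print(f"{per_chunk:,.0f} connections per chunk in the brain")        # ~1,000,000
print(f"headroom versus ~1,000 connections per chunk in our nets: {per_chunk / 1_000:,.0f}x")
```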
Indeed it appears ample. Thus we could make my estimate (of the number of knowledge chunks) a thousand times greater, and the calculation still works. It is likely, however, that the brain's encoding of knowledge is less efficient than the methods we use in our machines. This apparent inefficiency is consistent with our understanding that the human brain is conservatively designed. The brain relies on a large degree of redundancy and a relatively low density of information storage to gain reliability and to continue to function effectively despite a high rate of neuron loss as we age.
My conclusion is that we do not need to contemplate a model of information processing in individual neurons that is significantly more complex than the one we currently understand in order to explain human capability. The brain is big enough.
We come back to knowledge, which starts out with simple seeds but ultimately becomes elaborate as the knowledge-gathering process interacts with the chaotic real world. Indeed, that is how intelligence originated. It was the result of the evolutionary process we call natural selection, itself a simple paradigm, that drew its complexity from the pandemonium of its environment. We see the same phenomenon when we harness evolution in the computer. We start with simple formulas, add the simple process of evolutionary iteration and combine this with the simplicity of massive computation. The result is often complex, capable, and intelligent algorithms.
But we don't need to simulate the entire evolution of the human brain in order to tap the intricate secrets it contains. Just as a technology company will take apart and "reverse engineer" (analyze to understand the methods of) a rival's products, we can do the same with the human brain. It is, after all, the best example we can get our hands on of an intelligent process. We can tap the architecture, organization, and innate knowledge of the human brain in order to greatly accelerate our understanding of how to design intelligence in a machine. By probing the brain's circuits, we can copy and imitate a proven design, one that took its original designer several billion years to develop. (And it's not even copyrighted.)
As we approach the computational ability to simulate the human brain--we're not there today, but we will begin to be in about a decade's time--such an effort will be intensely pursued. Indeed, this endeavor has already begun.
For example, Synaptics' vision chip is fundamentally a copy of the neural organization, implemented in silicon of course, of not only the human retina but also the early stages of mammalian visual processing. It has captured the algorithm of early mammalian visual processing, known as center-surround filtering. It is not a particularly complicated chip, yet it realistically captures the essence of the initial stages of human vision.
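Center-surround filtering is commonly modeled as a difference of Gaussians: each output cell responds to how much brighter the center of its receptive field is than the surrounding region. Here is a minimal sketch under that standard model; the test image is a made-up bright square, and this illustrates the algorithm, not the Synaptics chip's actual circuitry.

```python
# Center-surround filtering modeled as a difference of Gaussians: edges and
# small features stand out, uniform regions are suppressed. Hypothetical image.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.zeros((128, 128))
image[48:80, 48:80] = 1.0                      # a bright square on a dark background

center = gaussian_filter(image, sigma=1.0)     # narrow "center" response
surround = gaussian_filter(image, sigma=4.0)   # broad "surround" response
response = center - surround                   # the center-surround output

strongest = np.unravel_index(np.argmax(np.abs(response)), response.shape)
print("strongest response at", strongest)      # near a corner of the square, not its uniform interior
```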
There is a popular conceit among observers, both informed and uninformed, that such a reverse engineering project is infeasible. Hofstadter worries that "our brains may be too weak to understand themselves."20 But that is not what we are finding. As we probe the brain's circuits, we find that the massively parallel algorithms are far from incomprehensible. Nor is there anything like an infinite number of them. There are hundreds of specialized regions in the brain, and it does have a rather ornate architecture, the consequence of its long history. The entire puzzle is not beyond our comprehension. It will certainly not be beyond the comprehension of twenty-first-century machines.
The knowledge is right there in front of us, or rather inside of us. It is not impossible to get at. Let's start with the most straightforward scenario, one that is essentially feasible today (at least to initiate).
We start by freezing a recently deceased brain.
Now, before I get too many indignant reactions, let me wrap myself in Leonardo da Vinci's cloak. Leonardo also received a disturbed reaction from his contemporaries. Here was a guy who stole dead bodies from the morgue, carted them back to his dwelling, and then took them apart. This was before dissecting dead bodies was in style. He did this in the name of knowledge, not a highly valued pursuit at the time. He wanted to learn how the human body works, but his contemporaries found his activities bizarre and disrespectful. Today we have a different view, that expanding our knowledge of this wondrous machine is the most respectful homage we can pay. We cut up dead bodies all the time to learn more about how living bodies work, and to teach others what we have already learned.
There's no difference here in what I am suggesting. Except for one thing: I am talking about the brain, not the body. This strikes closer to home. We identify more with our brains than our bodies. Brain surgery is regarded as more invasive than toe surgery. Yet the knowledge to be gained from probing the brain is too valuable to ignore. So we'll get over whatever squeamishness remains.
As I was saying, we start by freezing a dead brain. This is not a new concept--Dr. E. Fuller Torrey, a former supervisor at the National Institute of Mental Health and now head of the mental health branch of a private research foundation, has 44 freezers filled with 226 frozen brains.21 Torrey and his associates hope to gain insight into the causes of schizophrenia, so all of his brains are of deceased schizophrenic patients, which is probably not ideal for our purposes.
We examine one brain layer--one very thin slice--at a time. With suitably sensitive two-dimensional scanning equipment we should be able to see every neuron and every connection represented in each synapse-thin layer. When a layer has been examined and the requisite data stored, it can be scraped away to reveal the next slice. This information can be stored and assembled into a giant three-dimensional model of the brain's wiring and neural topology.
It would be better if the frozen brain had not been dead long before freezing. A dead brain will reveal a lot about living brains, but it is clearly not the ideal laboratory. Some of that deadness is bound to reflect itself in a deterioration of its neural structure. We probably don't want to base our designs for intelligent machines on dead brains. We are likely to be able to take advantage of people who, facing imminent death, will permit their brains to be destructively scanned just slightly before rather than slightly after their brains would have stopped functioning on their own. Recently, a condemned killer allowed his brain and body to be scanned, and you can access all 10 billion bytes of him on the Internet at the Center for Human Simulation's "Visible Human Project" web site.22 There's an even higher-resolution 25-billion-byte female companion on the site as well. Although the scan of this couple is not of high enough resolution for the scenario envisioned here, it's an example of donating one's brain for reverse engineering. Of course, we may not want to base our templates of machine intelligence on the brain of a convicted killer, anyway.
Easier to talk about are the emerging noninvasive means of scanning our brains. I began with the more invasive scenario above because it is technically much easier. We have in fact the means to conduct a destructive scan today (although not yet the bandwidth to scan the entire brain in a reasonable amount of time). In terms of noninvasive scanning, high-speed, high-resolution magnetic resonance imaging (MRI) scanners are already able to view individual somas (neuron cell bodies) without disturbing the living tissue being scanned. More powerful MRIs are being developed that will be capable of scanning individual nerve fibers that are only ten microns (millionths of a meter) in diameter. These will be available during the first decade of the twenty-first century. Eventually we will be able to scan the presynaptic vesicles that are the site of human learning.
We can peer inside someone's brain today with MRI scanners, which are increasing their resolution with each new generation of this technology. There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth (that is, speed of transmission), lack of vibration, and safety. For a variety of reasons it is easier to scan the brain of someone recently deceased than of someone still living. (It is easier to get someone deceased to sit still, for one thing.) But noninvasively scanning a living brain will ultimately become feasible as MRI and other scanning technologies continue to improve in resolution and speed.
A new scanning technology called optical imaging, developed by Professor Amiram Grinvald at Israel's Weizmann Institute, is capable of significantly higher resolution than MRI. Like MRI, it is based on the interaction between electrical activity in the neurons and blood circulation in the capillaries feeding the neurons. Grinvald's device is capable of resolving features smaller than fifty microns, and can operate in real time, thus enabling scientists to view the firing of individual neurons. Grinvald and researchers at Germany's Max Planck Institute were struck by the remarkable regularity of the patterns of neural firing when the brain was engaged in processing visual information.23 One of the researchers, Dr. Mark Hübener, commented that "our maps of the working brain are so orderly they resemble the street map of Manhattan rather than, say, of a medieval European town." Grinvald, Hübener, and their associates were able to use their brain scanner to distinguish between sets of neurons responsible for perception of depth, shape, and color. As these neurons interact with one another, the resulting pattern of neural firings resembles elaborately linked mosaics. From the scans, it was possible for the researchers to see how the neurons were feeding information to each other. For example, they noted that the depth perception neurons were arranged in parallel columns, providing information to the shape-detecting neurons that formed more elaborate pinwheel-like patterns. Currently, the Grinvald scanning technology is only able to image a thin slice of the brain near its surface, but the Weizmann Institute is working on refinements that will extend its three-dimensional capability. Grinvald's scanning technology is also being used to boost the resolution of MRI scanning. A recent finding that near-infrared light can pass through the skull is also fueling excitement about the ability of optical imaging as a high-resolution method of brain scanning.
The driving force behind the rapidly improving capability of noninvasive scanning technologies such as MRI is again the Law of Accelerating Returns: building high-resolution, three-dimensional images from the raw magnetic resonance signals an MRI scanner produces requires massive computational ability. The exponentially increasing computational ability provided by the Law of Accelerating Returns (and, for another fifteen to twenty years, Moore's Law) will enable us to continue to rapidly improve the resolution and speed of these noninvasive scanning technologies.
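To give a flavor of the computation involved, here is a minimal sketch (in Python, and purely illustrative) of the core idea behind magnetic resonance image reconstruction: the scanner samples spatial frequencies--so-called k-space--and the image is recovered by an inverse Fourier transform. The toy two-dimensional "phantom" below stands in for a brain slice; real reconstructions are three-dimensional, combine many receiver channels, and are vastly more demanding, which is exactly why they ride on the exponential growth of computing.

```python
import numpy as np

def make_phantom(n=128):
    """A simple circular phantom standing in for a slice of brain tissue."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    return (x**2 + y**2 < 0.6**2).astype(float)

phantom = make_phantom()

# What the scanner effectively measures: complex samples of spatial
# frequency space ("k-space"). Here we simulate them from the phantom.
k_space = np.fft.fftshift(np.fft.fft2(phantom))

# Image reconstruction: an inverse Fourier transform of the k-space data.
reconstruction = np.abs(np.fft.ifft2(np.fft.ifftshift(k_space)))

# The reconstruction matches the original slice to numerical precision.
print(np.allclose(reconstruction, phantom, atol=1e-9))
```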
Mapping the human brain synapse by synapse may seem like a daunting effort, but so did the Human Genome Project, an effort to map all human genes, when it was launched in 1991. Although the bulk of the human genetic code has still not been decoded, there is confidence at the nine American Genome Sequencing Centers that the task will be completed, if not by 2005, then at least within a few years of that target date. Recently, a new private venture with funding from Perkin-Elmer announced plans to sequence the entire human genome by the year 2001. As I noted above, the pace of the human genome scan was extremely slow in its early years but has picked up speed with improved technology, particularly computer programs that identify the useful genetic information. The researchers are counting on further improvements in their gene-hunting computer programs to meet their deadline. The same will be true of the human-brain-mapping project, as our methods of scanning and recording the 100 trillion neural connections pick up speed from the Law of Accelerating Returns.

There are two scenarios for using the results of detailed brain scans. The most immediate--scanning the brain to understand it--is to scan portions of the brain to ascertain the architecture and implicit algorithms of interneuronal connections in different regions. The exact position of each and every nerve fiber is not as important as the overall pattern. With this information we can design simulated neural nets that operate similarly. This process will be rather like peeling an onion as each layer of human intelligence is revealed.
This is essentially what Synaptics has done in its chip that mimics mammalian neural-image processing. This is also what Grinvald, Hübener, and their associates plan to do with their visual-cortex scans. And there are dozens of other contemporary projects designed to scan portions of the brain and apply the resulting insights to the design of intelligent systems.
Within a region, the brain's circuitry is highly repetitive, so only a small portion of a region needs to be fully scanned. The computationally relevant activity of a neuron or group of neurons is sufficiently straightforward that we can understand and model it by examination. Once the structure and topology of the neurons, the organization of the interneuronal wiring, and the sequence of neural firing in a region have been observed, recorded, and analyzed, it becomes feasible to reverse engineer that region's parallel algorithms. After the algorithms of a region are understood, they can be refined and extended before being implemented in synthetic neural equivalents. The methods can certainly be greatly sped up, given that electronics is already more than a million times faster than neural circuitry.
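To make the "characterize one repeated unit, then tile it across the region" idea concrete, here is a purely hypothetical sketch. It is not any actual cortical algorithm; it simply shows how a single small unit of local computation--here an edge-accentuating operation of the kind performed by early vision neurons, discussed below--can be modeled once and then replicated everywhere.

```python
import numpy as np

def convolve2d(image, kernel):
    """Apply one small local operation at every position (the tiling step)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A center-surround kernel that accentuates sudden changes (edges):
# the single repeated "unit of computation" in this toy example.
center_surround = np.array([[ 0, -1,  0],
                            [-1,  4, -1],
                            [ 0, -1,  0]], dtype=float)

# A toy "scene": a bright square on a dark background.
scene = np.zeros((16, 16))
scene[4:12, 4:12] = 1.0

edges = convolve2d(scene, center_surround)
print(np.count_nonzero(edges))  # nonzero responses cluster along the edges
```

The point of the sketch is only that the unit is small and the repetition does the rest--which is why sampling a region, rather than exhaustively scanning it, can be enough to recover what it computes.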
We can combine the revealed algorithms with the methods for building intelligent machines that we already understand. We can also discard aspects of human computing that may not be useful in a machine. Of course, we'll have to be careful that we don't throw the baby out with the bathwater.

A more challenging but also ultimately feasible scenario will be to scan someone's brain to map the locations, interconnections, and contents of the somas, axons, dendrites, presynaptic vesicles, and other neural components. The brain's entire organization, including the contents of its memory, could then be re-created on a neural computer of sufficient capacity.
This is harder in an obvious way than the scanning-the-brain-to-understand-it scenario. In that scenario, we need only sample each region until we understand the salient algorithms, and we can then combine those insights with knowledge we already have. In this scenario--scanning the brain to download it--we need to capture every little detail. On the other hand, we don't need to understand all of it; we need only copy it, literally, connection by connection, synapse by synapse, neurotransmitter by neurotransmitter. This requires us to understand local brain processes, but not necessarily the brain's global organization, at least not in full. It is likely that by the time we can do this, we will understand much of it anyway.
To do this right, we do need to understand what the salient information-processing mechanisms are. Much of a neuron's elaborate structure exists to support its own structural integrity and life processes and does not directly contribute to its handling of information. We know that neural computation is based on hundreds of different neurotransmitters and that different neural mechanisms in different regions allow for different types of computing. The early vision neurons, for example, are good at accentuating sudden color changes to facilitate finding the edges of objects. Hippocampus neurons are likely to have structures for enhancing the long-term retention of memories. We also know that neurons use a combination of digital and analog computing that needs to be accurately modeled. We would also need to identify any structures capable of quantum computing, if such structures exist. All of the key features that affect information processing need to be recognized if we are to copy them accurately.
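The mix of analog and digital behavior shows up even in the simplest textbook abstraction of a neuron, the leaky integrate-and-fire model. The sketch below is a standard illustration, not a claim about which model brain porting will actually require: the membrane potential integrates its inputs continuously (the analog part), while the spike it emits is all-or-nothing (the digital part).

```python
import numpy as np

def leaky_integrate_and_fire(input_current, dt=1e-4, tau=0.02,
                             v_rest=-0.065, v_reset=-0.065,
                             v_threshold=-0.050, resistance=1e7):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential leaks toward rest while integrating the input
    current (analog); crossing the threshold produces an all-or-nothing
    spike (digital), after which the potential is reset.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Analog part: continuous leaky integration of the input current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        # Digital part: all-or-nothing spike when the threshold is crossed.
        if v >= v_threshold:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2 nA input for 100 ms produces a regular train of spikes.
current = np.full(1000, 2e-9)
print(leaky_integrate_and_fire(current))
```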
How well will this work? Of course, like any new technology, it won't be perfect at first, and initial downloads will be somewhat imprecise. Small imperfections won't necessarily be immediately noticeable, because people are always changing to some degree. As our understanding of the mechanisms of the brain improves, and as our ability to scan these features accurately and noninvasively improves, reinstantiating (reinstalling) a person's brain should alter that person's mind no more than it changes from day to day.

What Will We Find When We Do This?

We have to consider this question on both the objective and subjective levels. "Objective" means everyone except me, so let's start with that. Objectively, when we scan someone's brain and reinstantiate their personal mind file into a suitable computing medium, the newly emergent "person" will appear to other observers to have very much the same personality, history, and memory as the person originally scanned. Interacting with the newly instantiated person will feel like interacting with the original person. The new person will claim to be that same old person and will have a memory of having been that person: having grown up in Brooklyn, having walked into a scanner here, and having woken up in the machine there. He'll say, "Hey, this technology really works."
There is the small matter of the "new person's" body. What kind of body will a reinstantiated personal mind file have: the original human body, an upgraded body, a synthetic body, a nanoengineered body, a virtual body in a virtual environment? This is an important question, which I will discuss in the next chapter.
Subjectively, the question is more subtle and profound. Is this the same consciousness as the person we just scanned? As we saw in chapter 3, there are strong arguments on both sides. The position that fundamentally we are our "pattern" (because our particles are always changing) would argue that this new person is the same, because the patterns are essentially identical. The counterargument, however, is the possible continued existence of the person who was originally scanned. If he--Jack--is still around, he will convincingly claim to represent the continuity of his consciousness. He may not be satisfied to let his mental clone carry on in his stead. We'll keep bumping into this issue as we explore the twenty-first century.
But once over the divide, the new person will certainly think that he was the original person. There will be no ambivalence in his mind as to whether or not he committed suicide when he agreed to be transferred into a new computing substrate leaving his old slow carbon-based neural-computing machinery behind. To the extent that he wonders at all whether or not he is really the same person that he thinks he is, he'll be glad that his old self took the plunge, because otherwise he wouldn't exist.
Is he--the newly installed mind--conscious? He certainly will claim to be. And being a lot more capable than his old neural self, he'll be persuasive and effective in his position. We'll believe him. He'll get mad if we don't.

A Growing Trend

In the second half of the twenty-first century, there will be a growing trend toward making this leap. Initially, there will be partial porting--replacing aging memory circuits, extending pattern-recognition and reasoning circuits through neural implants. Ultimately, and well before the twenty-first century is completed, people will port their entire mind file to the new thinking technology.
There will be nostalgia for our humble carbon-based roots, but then there is nostalgia for vinyl records, too. Ultimately, we did copy most of that analog music to the more flexible and capable world of transferable digital information. The leap to port our minds to a more capable computing medium will happen gradually, but it will happen inexorably nonetheless.
As we port ourselves, we will also vastly extend ourselves. Remember that $1,000 of computing in 2060 will have the computational capacity of a trillion human brains. So we might as well multiply our memory a trillionfold, greatly extend our recognition and reasoning abilities, and plug ourselves into the pervasive wireless-communications network. While we are at it, we can add all human knowledge--as a readily accessible internal database as well as knowledge already processed and learned using the human type of distributed understanding.

The Age of Neural Implants Has Already Started

The patients are wheeled in on stretchers. Suffering from an advanced stage of Parkinson's disease, they are like statues, their muscles frozen, their bodies and faces totally immobile. Then, in a dramatic demonstration at a French clinic, the doctor in charge throws an electrical switch. The patients suddenly come to life, get up, walk around, and calmly and expressively describe how they have overcome their debilitating symptoms. This is the dramatic result of a new neural implant therapy that is approved in Europe and still awaits FDA approval in the United States.
Diminished levels of the neurotransmitter dopamine in a Parkinson's patient cause overactivation of two tiny regions in the brain: the ventral posterior nucleus and the subthalamic nucleus. This overactivation in turn causes the slowness, stiffness, and gait difficulties of the disease, and ultimately results in total paralysis and death. Dr. A. L. Benabid, a French physician at Fourier University in Grenoble, discovered that stimulating these regions with a permanently implanted electrode paradoxically inhibits the overactive regions and reverses the symptoms. The electrodes are wired to a small electronic control unit placed in the patient's chest. Through radio signals, the unit can be programmed, and even turned on and off. When it is switched off, the symptoms immediately return. The treatment promises to control the most devastating symptoms of the disease.24
Similar approaches have been used with other brain regions. For example, by implanting an electrode in the ventral lateral thalamus, the tremors associated with cerebral palsy, multiple sclerosis, and other tremor-causing conditions can be suppressed.
"We used to treat the brain like soup, adding chemicals that enhance or suppress certain neurotransmitters," says Rick Trosch, one of the American physicians helping to perfect "deep brain stimulation" therapies. "Now we're treating it like circuitry."25
Increasingly, we are starting to combat cognitive and sensory afflictions by treating the brain and nervous system like the complex computational system that it is. Cochlear implants together with electronic speech processors perform frequency analysis of sound waves, similar to that performed by the inner ear. About 10 percent of the formerly deaf persons who have received this neural replacement device are now able to hear and understand voices well enough that they can hold conversations using a normal telephone.
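As a rough illustration of that frequency analysis, the sketch below splits a snippet of sound into the energies of a few logarithmically spaced frequency bands, loosely echoing the cochlea's low-to-high frequency layout. It is illustrative only; real cochlear implant speech processors use carefully designed filter banks, compression, and per-patient channel mappings.

```python
import numpy as np

def band_energies(signal, sample_rate, n_bands=8, f_lo=200.0, f_hi=8000.0):
    """Split a sound snippet into per-band energies.

    This loosely mimics the frequency analysis a speech processor performs
    before driving a cochlear implant's electrode channels.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Logarithmically spaced band edges, roughly following the cochlea.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(edges[:-1], edges[1:])]

# Example: a pure 440 Hz tone lights up the band that contains 440 Hz.
rate = 16000
t = np.arange(rate) / rate                # one second of samples
tone = np.sin(2 * np.pi * 440 * t)
energies = band_energies(tone, rate)
print(np.argmax(energies))                # index of the loudest band
```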
Dr. Joseph Rizzo, a neurologist and ophthalmologist at Harvard Medical School, and his colleagues have developed an experimental retina implant. Rizzo's neural implant is a small solar-powered computer that communicates with the optic nerve. The user wears special glasses containing tiny television cameras that communicate with the implanted computer by laser signal.26
Researchers at Germany's Max Planck Institute for Biochemistry have developed special silicon devices that can communicate with neurons in both directions. Directly stimulating neurons with an electrical current is not the ideal approach, since it can corrode the electrodes and create chemical by-products that damage the cells. In contrast, the Max Planck Institute devices can trigger an adjacent neuron to fire without a direct electrical link. The Institute's scientists demonstrated their invention by controlling the movements of a living leech from their computer.
Going in the opposite direction--from neurons to electronics--is a device called a "neuron transistor,"27 which can detect the firing of a neuron. The scientists hope to apply both technologies to the control of artificial human limbs by connecting spinal nerves to computerized prostheses. The Institute's Peter Fromherz says, "These two devices join the two worlds of information processing: the silicon world of the computer and the water world of the brain."
Neurobiologist Ted Berger and his colleagues at Hedco Neurosciences and Engineering have built integrated circuits that precisely match the properties and information processing of groups of animal neurons. The chips exactly mimic the digital and analog characteristics of the neurons they have analyzed. They are currently scaling up their technology to systems with hundreds of neurons.28 Professor Carver Mead and his colleagues at the California Institute of Technology have also built digital-analog integrated circuits that match the processing of mammalian neural circuits comprising hundreds of neurons.29
The age of neural implants is under way, albeit at an early stage. Direct enhancement of our brain's information processing with synthetic circuits is focused at first on correcting the glaring defects caused by neurological and sensory diseases and disabilities. Ultimately, we will all find the benefits of extending our abilities through neural implants difficult to resist.

Actually, there won't be mortality by the end of the twenty-first century. Not in the sense that we have known it. Not if you take advantage of the twenty-first century's brain-porting technology. Up until now, our mortality was tied to the longevity of our hardware. When the hardware crashed, that was it. For many of our forebears, the hardware gradually deteriorated before it disintegrated. Yeats lamented our dependence on a physical self that was "but a paltry thing, a tattered coat upon a stick."30 As we cross the divide to instantiate ourselves into our computational technology, our identity will be based on our evolving mind file. We will be software, not hardware.
And evolve it will. Today, our software cannot grow. It is stuck in a brain of a mere 100 trillion connections and synapses. But when the hardware is trillions of times more capable, there is no reason for our minds to stay so small. They can and will grow.
As software, our mortality will no longer depend on the survival of the computing circuitry. There will still be hardware and bodies, but the essence of our identity will switch to the permanence of our software. Just as, today, we don't throw our files away when we change personal computers--we transfer them, at least the ones we want to keep--so, too, we won't throw our mind file away when we periodically port ourselves to the latest, ever more capable "personal" computer. Of course, computers won't be the discrete objects they are today. They will be deeply embedded in our bodies, brains, and environment. Our identity and survival will ultimately become independent of the hardware and its survival.
Our immortality will be a matter of being sufficiently careful to make frequent backups. If we're careless about this, we'll have to load an old backup copy and be doomed to repeat our recent past.
Let's jump to the other side of this coming century. You said that by 2099 a penny of computing will be equal to a billion times the computing power of all human brains combined. Sounds like human thinking is going to be pretty trivial.
Unassisted, that's true.
So how will we human beings fare in the midst of such competition?
First, we have to recognize that the more powerful technology--the technologically more sophisticated civilization--always wins. That appears to be what happened when our Homo sapiens sapiens subspecies met Homo sapiens neanderthalensis and the other nonsurviving subspecies of Homo sapiens. That is what happened when the more technologically advanced Europeans encountered the indigenous peoples of the Americas. And it is happening today, as advanced technology is the key determinant of economic and military power.
So we're going to be slaves to these smart machines?
Slavery is not a fruitful economic system for either side in an age of intellect. We would have no value as slaves to machines. Rather, the relationship is starting out the other way.
It's true that my personal computer does what I ask it to do--sometimes! Maybe I should start being nicer to it.
No, it doesn't care how you treat it, not yet. But ultimately our native thinking capacities will be no match for the all-encompassing technology we're creating.
Maybe we should stop creating it.
We can't stop. The Law of Accelerating Returns forbids it! It's the only way to keep evolution going at an accelerating pace.
Hey, calm down. It's fine with me if evolution slows down a tad. Since when have we adopted your acceleration law as the law of the land?
We don't have to. Stopping computer technology, or any fruitful technology, would mean repealing basic realities of economic competition, not to mention our quest for knowledge. It's not going to happen. Furthermore, the road we're going down is a road paved with gold. It's full of benefits that we're never going to resist--continued growth in economic prosperity, better health, more intense communication, more effective education, more engaging entertainment, better sex.
Until the computers take over.
Look, this is not an alien invasion. Although it sounds unsettling, the advent of machines with vast intelligence is not necessarily a bad thing.
I guess if we can't beat them, we'll have to join them.
That's exactly what we're going to do. Computers started out as extensions of our minds, and they will end up extending our minds. Machines are already an integral part of our civilization, and the sensual and spiritual machines of the twenty-first century will be an even more intimate part of our civilization.
Okay, in terms of extending my mind, let's get back to implants for my French Lit class. Is this going to be like I've read this stuff? Or is it just going to be like a smart personal computer that I can communicate with quickly because it happens to be located in my head?
That's a key question, and I think it will be controversial. It gets back to the issue of consciousness. Some people will feel that what goes in their neural implants is indeed subsumed by their consciousness. Others will feel that it remains outside of their sense of self. Ultimately, I think that we will regard the mental activity of the implants as part of our own thinking. Consider that even without implants, ideas and thoughts are constantly popping into our heads, and we have little idea of where they came from, or how they got there. We nonetheless consider all the mental phenomena that we become aware of as our own thoughts.
So I'll be able to download memories of experiences I've never had?
Yes, but someone has probably had the experience. So why not have the ability to share it?
I suppose for some experiences, it might be safer to just download the memories of it.
Less time-consuming also.
Do you really think that scanning a frozen brain is feasible today?
Sure, just stick your head in my freezer here.
Gee, are you sure this is safe?
Absolutely.
Well, I think I'll wait for FDA approval.
Okay, then you'll have to wait a long time.
Thinking ahead, I still have this sense that we're doomed. I mean, I can understand how a newly instantiated mind, as you put it, will be happy that she was created and will think that she had been me prior to my having been scanned and is still me in a shiny new brain. She'll have no regrets and will be on the "other side." But I don't see how I can get across the human-machine divide. As you pointed out, if I'm scanned, that new me isn't me because I'm still here in my old brain.
Yes, there's a little glitch in this regard. But I'm sure we'll figure out how to solve this thorny problem with a little more consideration.
Originally published in The Age of Spiritual Machines © 1999 Raymond Kurzweil