
Chapter 9: What Turing Fallacy?
Response to Thomas Ray
by Ray Kurzweil

Countering Thomas Ray's objections, Ray Kurzweil points out significant progress in modeling neural and neurochemical processes and the innate ability of biologically inspired self-organizing systems to realistically emulate natural processes, including, ultimately, human intelligence. In brain reverse engineering, according to Kurzweil, "we are approximately where we were in the genome project about ten years ago."


Originally published in print June 18, 2002 in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI by the Discovery Institute. Published on KurzweilAI.net on June 18, 2002.

Measurement Without Observation

In Thomas Ray’s articulated world, there is no such thing as consciousness, a view he makes clear in his reductionist view of quantum mechanics. Ray states that “it is the act of measurement that causes the collapse of the wave function, not conscious observation of the measurement.” In other words, according to Ray, the collapse of the wave function is caused by measurement without observation, but what could this refer to? We know that any quantum “collapsed” event involving one or more particles causes some reaction beyond those particles immediately involved. No particle is an “island,” so to speak. These inevitable reactions are properly considered measurements. The only conceivable circumstance in which an event would not cause a specific reaction (i.e., a measurement) is if the event was indeterminate because the wave function was not collapsed. It is only through the collapse of the wave function that an event becomes determinate and thereby causes a reaction in the world, which constitutes a measurement. It is, therefore, the collapse of the wave function that causes measurement, which Ray tells us causes collapse of the wave function. So what Ray is saying is that the collapse of the wave function causes the collapse of the wave function. By removing the concept of observation from measurement, Ray’s explanation of quantum mechanics devolves into this tautology.

Ray goes on to call any other view, even those of other mainstream scientists, a “glaring…error.” Ray’s rigid view regards any introduction of consciousness in the world to be an “error.” I would also point out that if we accept Ray’s view, then Penrose’s objection to the potential of consciousness in a nonbiological entity (i.e., Penrose’s argument to the effect that such an entity would have to recreate the precise quantum state of a biological conscious entity) becomes even less valid.

Colloquial Chaos Indeed

Ray casts doubt that there is increased chaos as an organism moves from conception as a single cell to a mature individual. Consider what we know about this process. The human genome contains 3 billion DNA rungs for a total of 6 billion bits of data. There is enormous redundancy in this information (e.g., a sequence known as “ALU” is repeated hundreds of thousands of times), so the amount of unique information is estimated at around 3% or about 23 megabytes. In contrast, the human brain contains on the order of 100 trillion connections. Just specifying this connection data would require trillions of bytes. Thus as we go from the genome, which specifies the brain among all other organs, to the fully expressed individual, the amount of information, considering just the brain connection patterns alone, increases by a factor of millions. We know that the genome specifies a wiring plan for the interneuronal connections that includes a great deal of randomness, i.e., chaos, at specific implementation stages. This includes the stage of fetal wiring, during which interneuronal connections essentially wire themselves with a significant element of randomness applied during the process, as well as the growing of new dendrites after birth (which is believed to be a critical part of the learning process). This is at least one source of the increasing chaos resulting from the development of the individual from a fertilized egg. Another source is the chaos inherent in the environment that the individual encounters.
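
A back-of-the-envelope check, sketched below in Python, makes this information gap concrete. The genome figures come from the paragraph above; the per-connection byte count is an illustrative assumption, not a figure from the text.

```python
# Rough arithmetic for the genome-to-brain information expansion.

GENOME_BASE_PAIRS = 3e9          # "3 billion DNA rungs"
BITS_PER_BASE = 2                # four possible bases -> 2 bits each
UNIQUE_FRACTION = 0.03           # ~3% after removing redundancy (e.g., ALU repeats)

total_bits = GENOME_BASE_PAIRS * BITS_PER_BASE          # 6e9 bits
unique_bytes = total_bits * UNIQUE_FRACTION / 8         # ~22.5 MB, "about 23 megabytes"

CONNECTIONS = 1e14               # "100 trillion connections"
BYTES_PER_CONNECTION = 10        # assumption: endpoints plus strength, order of magnitude

connection_bytes = CONNECTIONS * BYTES_PER_CONNECTION   # trillions of bytes

print(f"unique genome info: {unique_bytes / 1e6:.1f} MB")
print(f"connection data:    {connection_bytes:.2e} bytes")
print(f"expansion factor:   {connection_bytes / unique_bytes:.1e}")
```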

Ray states that I argue that “increasing rates of mutations and unpredictable events are, in part, driving the increasing frequency of ‘salient events’ in evolution.” That’s not my argument at all, and I never make this statement. My position is that the acceleration of an evolutionary process, including both biological and technological evolution, results from the greater power of each level of evolution to create the next level. For example, with the evolution of DNA, evolutionary experiments could proceed more rapidly and more effectively, with each stage of results recorded in the evolving DNA code. With the innovation of sexual reproduction, a more effective means for devising new combinations of genetic information became available. Within technological evolution, we also find that each generation of technology enables the next generation to proceed more rapidly. For example, the first generation of computers was designed with pen on paper and built with screwdrivers and wires. Compare that to the very rapid design of new computers using today’s computer-assisted design tools.

Ray’s Digital Fallacy

Thomas Ray starts his chapter by citing my alleged “failure to consider the unique nature of the digital medium.” However, my thesis repeatedly refers to combining analog and digital methods in the same way that the human brain does; for example, “more advanced neural nets [which] are already using highly detailed models of human neurons, including detailed nonlinear analog activation functions.” I go on to cite the advantages of emulating the brain’s “digital controlled analog” design, and conclude that “there is a significant efficiency advantage to emulating the brain’s analog methods.” Analog methods are not the exclusive province of biological systems. We used to refer to “digital computers” to distinguish them from the more ubiquitous analog computers which were widely used during World War II.

It is also worth pointing out that analog processes can be emulated with digital methods, whereas the reverse is not necessarily the case. However, there are efficiency advantages to analog processing, as I point out above. Analog methods are readily recreated by conventional transistors, which are essentially analog devices. It is only by adding the additional mechanism of comparing the transistor’s output to a threshold that it is made into a digital device.
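
To make the distinction concrete, here is a minimal sketch of how a continuous, transistor-like analog response becomes a digital one once a threshold comparison is added. The sigmoid response curve and the threshold value are illustrative assumptions.

```python
import math

def analog_response(v_in: float, gain: float = 4.0) -> float:
    """Continuous output, as in an analog device: every change in input matters."""
    return 1.0 / (1.0 + math.exp(-gain * v_in))

def digital_response(v_in: float, threshold: float = 0.5) -> int:
    """Comparing the analog output to a threshold turns it into a digital device."""
    return 1 if analog_response(v_in) >= threshold else 0

for v in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(f"v_in={v:+.1f}  analog={analog_response(v):.3f}  digital={digital_response(v)}")
```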

What Turing Fallacy?

Thomas Ray states:

The primary criticism that I wish to make of Kurzweil’s book, is that he proposes to create intelligent machines by copying human brains into computers. We might call this the Turing Fallacy. The Turing Test suggests that we can know that machines have become intelligent when we cannot distinguish them from human, in free conversation over a teletype. The Turing Test is one of the biggest red-herrings in science.

This paragraph contains several ideas, unrelated ones in my view, but concepts that appear to animate Ray’s discomfort with “strong AI.” I should make the caveat that Ray appears to accept the possibility of an advanced but “fundamentally alien intelligence” that is “rooted in and natural to the medium,” but he dismisses the possibility, as well as the desirability, of AIs sharing human-like attributes. He states that “AIs must certainly be non-Turing,” which he defines as “unlike human intelligences.” So this is what Ray means by the “Turing Fallacy.” He is maintaining that any intelligence that might emerge in nonbiological mediums would not and could not be a human intelligence, and would, therefore, be unable to pass the Turing Test.

It is fair to conclude from this that Thomas Ray accepts the Turing Test as a reasonable test for “human intelligence,” but makes the point that there could be an alien intelligence that is very capable in terms of performing intelligent tasks but unable to pass this particular test. I should point out that Turing made precisely the same point. Turing intended his Test specifically as a measure of human intelligence. An entity passing the Turing Test may be said to be intelligent, and moreover, to possess a human form of intelligence. Turing specifically states that the converse does not hold: failure to pass the Test does not indicate a lack of intelligence. I’ve made the same point. As I stated in The Age of Intelligent Machines, certain animals such as dolphins, giant squids, and certain species of whales appear to have relatively high levels of intelligence, but are in no position to pass the Turing Test (they can’t type, for one thing). Even a human would not be able to pass the Test if she didn’t speak the language of the Turing Test judge. The key point of the Turing Test is that human language is sufficiently broad that we can test for the full range of human intelligence through human-language dialogues. The Turing Test itself does not represent a fallacy, but rather a keen insight into the power of human communication through language to represent our thinking processes.

So where is the fallacy? Ray appears to be objecting to the concept of creating a human-like intelligence, one whose communication is sufficiently indistinguishable from human language-based communication such that it could pass the Turing Test, on grounds of both desirability and feasibility.

With regard to the first issue, the desirability of machines understanding our language is, I believe, clear from the entire history of the AI field. As machines have gained proficiency in aspects of human language, they have become more useful and more valuable. Language is our primary method of communication, and machines need to understand human language in order to interact with us in an efficacious manner. Ultimately, they will need to understand human knowledge in a deep way to fully manifest their potential to be our partners in the creation of new knowledge. And as Turing pointed out, understanding human knowledge, including our capacity for understanding and expressing higher-order emotions, is a prerequisite for effective human communication, and is, therefore, a necessary set of skills to pass the Turing Test.

With regard to the issue of feasibility, Thomas Ray states:

I accept that this level of computing power is likely to be reached, someday. But no amount of raw computer power will be intelligent in the relevant sense unless it is properly organized. This is a software problem, not a hardware problem.

I agree with this, of course, as I’ve had to state several times. A primary scenario that I describe in solving the “software problem” is to reverse engineer the methods deployed by the human brain. Although I also talk about the potential to copy specific human brains, I acknowledge that this is a more difficult task and will take longer. Reverse engineering the human brain is quite feasible, and we are further along in this project than most people realize. As I pointed out earlier, there are many contemporary examples that have demonstrated the feasibility of reverse engineering human neurons, neuron clusters, and entire brain regions, and then implementing the resulting detailed mathematical models in nonbiological mediums.

I mentioned Lloyd Watts’ work in which he has developed a detailed and working model of more than a dozen brain regions related to auditory processing. Carver Mead’s retina models, which are implemented as digital controlled analog processes on silicon chips, capture processes similar to those that take place in human visual processing. The complexity and level of detail in these models is expanding exponentially along with the growing capacity of our computational and communication technologies. This undertaking is similar to the genome project in which we scanned the genome and are now proceeding to understand the three-dimensional structures and processes described therein. In the mission to scan and reverse engineer the neural organization and information processing of the human brain, we are approximately where we were in the genome project about ten years ago. I estimate that we will complete this project within thirty years, which takes into account an exponential rather than linear projection of our anticipated progress.

Thomas Ray essentially anticipates the answer to his own challenge, when he states that:

In order for the metallic “copy” to have the same function, we would have to abstract the functional properties out of the organic neural elements, and find structures and processes in the new metallic medium that provide identical functions. This abstraction and functional-structural translation from the organic into the metallic medium would require a deep understanding of the natural neural processes, combined with the invention of many computing devices and processes which do not yet exist.

I’m not sure why Ray uses the word “metallic” repeatedly, other than to demonstrate his inclination to regard nonbiological intelligence as inherently exhibiting the brittle, mechanical, and unsubtle properties that we have traditionally associated with machines. However, in this paragraph, Ray describes the essential process that I have proposed. Many contemporary projects have shown the feasibility of developing and expressing a deep understanding of “natural neural processes,” and the language for that expression is mathematics.

We clearly will need to invent new computing devices to create the necessary capacity, and Ray appears to accept that this will happen. As for inventing new processes, new algorithms are continually being developed as well; however, what we have found thus far is that we have encountered few difficulties instantiating these models once they are revealed. The mathematical models derived from the reverse engineering process are readily implemented using available methods.

The revealed secrets of human intelligence will undoubtedly provide many enabling methods in the creation of the software of intelligence. An added bonus will be deep insight into our own nature, into human function and dysfunction.

Thomas Ray states that:

The structure and function of the brain or its components cannot be separated. The circulatory system provides life support for the brain, but it also delivers hormones that are an integral part of the chemical information processing function of the brain. The membrane of a neuron is a structural feature defining the limits and integrity of a neuron, but it is also the surface along which depolarization propagates signals. The structural and life-support functions cannot be separated from the handling of information.

Ray goes on to describe several of the “broad spectrum of chemical communication mechanisms” that the brain exhibits. However, all of these are readily modelable, and a great deal of progress has already been made in this endeavor. The intermediate language is mathematics, and translating the mathematical models into equivalent nonbiological mechanisms is the easiest step in this process. With regard to the delivery of hormones by the circulatory system, this is an extremely low-bandwidth phenomenon, which will not be difficult to model and replicate. The blood levels of specific hormones and other chemicals influence parameter levels that affect a great many synapses at once.
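
As a rough illustration of how low-bandwidth such a channel is, the sketch below treats the hormone level as a single global parameter that rescales every synapse in a small model layer. The network size and the multiplicative modulation rule are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(1000, 1000))   # synaptic weights for a small model layer

def modulated_output(inputs: np.ndarray, hormone_level: float) -> np.ndarray:
    """One scalar parameter (the 'hormone level') rescales every synapse at once."""
    effective_weights = weights * hormone_level
    return np.tanh(effective_weights @ inputs)

x = rng.normal(size=1000)
for level in (0.5, 1.0, 2.0):             # a few bytes of state modulate a million synapses
    y = modulated_output(x, level)
    print(f"hormone={level:.1f}  mean activation={y.mean():+.4f}")
```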

Ray concludes that “a metallic computation system operates on fundamentally different dynamic properties and could never precisely and exactly ‘copy’ the function of a brain.” If we follow closely the progress in the related fields of neurobiology, brain scanning, neuron and neural-region modeling, neuron-electronic communication, neural implants, and related endeavors, we find that our ability to replicate the salient functionality of biological information processing can meet any desired level of precision. In other words, the copied functionality can be “close enough” for any conceivable purpose or goal, including satisfying a Turing Test judge. Moreover, we find that efficient implementations of the mathematical models require substantially less computational capacity than the theoretical potential of the biological neuron clusters being modeled.

Ray goes on to describe his own creative proposal for creating nonbiological intelligence, which is to use evolutionary algorithms to allow a digital intelligence to “emerge,” one that is “rooted in the nature of the medium.” This would be, according to Ray, a “non-Turing intelligence,” but “one which would complement rather than duplicate our talents and abilities.”

I have no problem with this particular idea. Another of Ray’s mistaken claims is that I offer human brain reverse engineering as the only route to strong AI. The truth is that I strongly advocate multiple approaches. I describe the reverse engineering idea (among others) because it serves as a useful existence proof of the feasibility of understanding and replicating human intelligence. In my book The Age of Spiritual Machines, I describe a number of other approaches, including the one that Ray prefers (evolutionary algorithms). My own work in pattern recognition and other aspects of AI consistently utilizes multiple approaches, and it is inevitable that the ultimate path to strong AI will combine insights from a variety of paradigms. The primary role of developing mathematical models of biological neurons, scanning the human brain, and reverse engineering the hundreds of brain regions is to develop biologically-inspired models of intelligence, the insights of which we will then combine with other lines of attack.

With regard to Ray’s own preference for evolutionary or genetic algorithms, he misstates the scope of the problem. He suggests that the thousands of bits of genetic information in contemporary genetic algorithms may end up falling “ten orders of magnitude below organic evolution.” But the thousands of bits of genetic order represented by contemporary systems are already only four or five orders of magnitude below the amount of unique information contained in the genome.
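
The order-of-magnitude claim is easy to verify. In the sketch below, the size of a contemporary genetic-algorithm genome ("thousands of bits") is an illustrative assumption.

```python
import math

UNIQUE_GENOME_BYTES = 23e6                     # ~23 MB of unique genome information
genome_bits = UNIQUE_GENOME_BYTES * 8          # ~1.8e8 bits

for ga_bits in (1e3, 1e4):                     # assumed contemporary GA genome sizes
    gap = math.log10(genome_bits / ga_bits)
    print(f"GA genome of {ga_bits:.0e} bits -> {gap:.1f} orders of magnitude below")
```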

Ray makes several other curious statements. Ray says:

…the most complex of our creations are showing alarming failure rates. Orbiting satellites and telescopes, space shuttles, interplanetary probes, the Pentium chip, computer operating systems, all seem to be pushing the limits of what we can effectively design and build through conventional approaches… Our most complex software (operating systems and telecommunications control systems) already contains tens of millions of lines of code. At present it seems unlikely that we can produce and manage software with hundreds of millions or billions of lines of code.

First of all, what alarming failure rates is Ray referring to? Computerized mission-critical systems are remarkably reliable. Computerized systems of significant sophistication routinely fly and land our airplanes automatically. I am not aware of any airplane crash that has been attributed to a failure of these systems, yet many crashes are caused by the human errors of pilots and maintenance crews. Automated intensive-care monitoring systems in hospitals almost never malfunction, yet hundreds of thousands of people die from human medical errors. If there are alarming failure rates to worry about, they are human failures, not those of mission-critical computer systems. The Pentium chip problem that Ray alludes to was extremely subtle, had almost no repercussions, and was quickly rectified.

The complexity of computerized systems has indeed been scaling up exponentially. Moreover, the cutting edge of our efforts to emulate human intelligence will utilize the self-organizing paradigms that we find in the human brain. I am not suggesting that self-organizing methods such as neural nets and evolutionary algorithms are simple or automatic to use, but they represent powerful tools which will help to alleviate the need for unmanageable levels of complexity.
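
For readers unfamiliar with the paradigm, here is a minimal, purely illustrative sketch of an evolutionary algorithm: the designer specifies only a short fitness criterion, and the organization of the solution emerges from variation and selection. The toy target string and all parameters are assumptions for illustration; none of the specifics are drawn from Ray's systems or my own.

```python
import random

TARGET = "self-organizing"
ALPHABET = "abcdefghijklmnopqrstuvwxyz-"

def fitness(candidate: str) -> int:
    """Count positions matching the target: the only 'design' we supply."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Random variation: each character may be replaced with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from entirely random genomes; order emerges without explicit design.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:50]                                          # selection
    population = [mutate(random.choice(parents)) for _ in range(200)]  # variation

print(f"generation {generation}: best = {population[0]!r}")
```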

Most importantly, it is not the case that the human brain represents a complexity comparable to “billions of lines of code.” The human brain is created from a genome of only about 23 million bytes of unique information (less than Microsoft Word). It is through self-organizing processes that incorporate significant elements of randomness (as well as exposure to the real world) that this small amount of design information is expanded to the trillions of bytes of information represented in a mature human brain. The design of the human brain is not based on billions of lines of code or the equivalent thereof. Similarly, the task of creating human-level intelligence in a nonbiological entity will not involve creating a massive expert system comprising billions of rules or lines of code, but rather a learning, chaotic, self-organizing system, one that is ultimately biologically inspired.

Thomas Ray writes:

The engineers among us might propose nano-molecular devices with fullerene switches, or even DNA-like computers. But I am sure they would never think of neurons. Neurons are astronomically large structures compared to the molecules we are starting with.

This is exactly my own point. The purpose of reverse engineering the human brain is not to copy the digestive or other unwieldy processes of biological neurons, but rather to understand their salient information processing methods. The feasibility of doing this has already been demonstrated in dozens of contemporary projects. The scale and complexity of the neuron clusters being emulated is scaling up by orders of magnitude, along with all of our other technological capabilities.

Once we have completed the reverse engineering of the several hundred brain regions, we will implement these methods in nonbiological substrates. In this way, we will combine these human-like capacities with the natural advantages of machines, i.e., the speed, accuracy and scale of memory, and, most importantly, the ability to instantly share knowledge.

Ray writes:

Over and over again, in a variety of ways, we are shaping cyberspace in the form of the 3D material space that we inhabit. But cyberspace is not a material space and it is not inherently 3D. The idea of downloading the human mind into a computer is yet another example of failing to understand and work with the properties of the medium….Cyberspace is not a 3D Euclidean space. It is not a material world. We are not constrained by the same laws of physics, unless we impose them upon ourselves.

The reality is that we can do both. At times we will want to impose upon our virtual reality environments the three-dimensional gravity-bound reality we are used to. After all, that is the nature of the world we are comfortable in. At other times, we may wish to explore environments that have no earthly counterpart, ones that may indeed violate the laws of physics.

We can also do both with regard to emulating intelligence in our machines. We can apply Ray’s preferred genetic algorithm approach while we also benefit from the reverse engineering of biological information processes, among other methods.

In summing up, Thomas Ray writes:

Everything we know about intelligence is based on one example of intelligence, namely, human intelligence. This limited experience burdens us with preconceptions and limits our imaginations.

Actually, it is Thomas Ray who is limiting his imagination to his single idea of unleashing “evolution in the digital medium.” Certainly there will be new forms of intelligence as nonbiological intelligence continues to grow. It will draw upon many sources of knowledge, some biologically motivated, and some inspired by our own imagination and creativity, ultimately augmented by the creativity of our machines.

The power of our civilization has already been greatly augmented by our technology, with which we are becoming increasingly intimate, as devices slip into our pockets and dangle from our bodies like jewelry. Within this decade, computing and communications will appear to disappear, with electronics being woven into our clothing, images being written directly to our retinas, and extremely high bandwidth wireless communication continually augmenting our visual and auditory reality. Within a few decades, with technology that travels inside our bodies and brains, we will be in a position to vastly expand our intelligence, our experiences, and our capacity to experience. Nonbiological intelligence will ultimately combine the inherent speed and knowledge sharing advantages of machines with the deep and subtle powers of the massively parallel, pattern recognition-based, biological paradigm.

Today, our most sophisticated machines are still millions of times simpler than human intelligence. Similarly, the total capacity of all the computers in the world today remains at least six orders of magnitude below that of all human brains, which I estimate at 10^26 digitally controlled analog transactions per second. However, our biological capacity is fixed. Each of us is limited to a mere hundred trillion interneuronal connections. Machine intelligence, on the other hand, is growing exponentially in capacity, complexity, and capability. By the middle of this century, it will be nonbiological intelligence, representing an intimate panoply of paradigms, that predominates.
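
A rough projection shows how quickly exponential growth closes a six-order-of-magnitude gap. The capacity figures below come from the paragraph above; the doubling period is an illustrative assumption.

```python
import math

HUMAN_TOTAL = 1e26          # estimated total capacity of all human brains (transactions/sec)
GAP_ORDERS = 6              # machines are "at least six orders of magnitude below"
DOUBLING_YEARS = 2.0        # assumed doubling period for total machine capacity

machine_total = HUMAN_TOTAL / 10**GAP_ORDERS            # ~1e20 today
years_to_parity = DOUBLING_YEARS * math.log2(HUMAN_TOTAL / machine_total)

print(f"machine capacity today: {machine_total:.1e} transactions/sec")
print(f"years to parity at one doubling per {DOUBLING_YEARS:.0f} years: {years_to_parity:.0f}")
```

At one doubling every two years, closing six orders of magnitude takes roughly forty years, consistent with the mid-century horizon above.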

Copyright © 2002 by the Discovery Institute. Used with permission.
