Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0496.html

Chapter 5: Kurzweil's Turing Fallacy
by Thomas Ray

Reverse-engineering the human brain is doomed to failure because of the "Turing fallacy" -- a nonbiological computation system could never precisely copy the complex neural, structural, and chemical functions of a brain or achieve the required level of reliability, says Thomas Ray, who proposes evolution of "non-Turing" AIs as an alternative


Originally published in print June 18, 2002 in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI by the Discovery Institute. Published on KurzweilAI.net on June 18, 2002.

There are numerous directions from which to criticize Kurzweil’s proposal for strong AI. In this essay I will focus on his failure to consider the unique nature of the digital medium when discussing artificial intelligence. But before elaborating on this point, I would like briefly to call attention to some other issues.

Psychic Quantum Mechanics

Kurzweil’s interpretation of quantum mechanics leads him to the conclusion that “consciousness, matter, and energy are inextricably linked.” While this is true in the sense that consciousness arises from the interactions of matter and energy, it is not true in the sense that Kurzweil intends it: that quantum ambiguities are not resolved until they are forced to do so by a conscious observer.

Kurzweil’s error is most glaringly apparent in his description of the paper output from a quantum computer: “So the page with the answer is ambiguous, undetermined—until and unless a conscious entity looks at it. Then instantly all the ambiguity is retroactively resolved, and the answer is there on the page. The implication is that the answer is not there until we look at it.” He makes the same error in describing the evolution of the universe: “From one perspective of quantum mechanics—we could say that any Universe that fails to evolve conscious life to apprehend its existence never existed in the first place.”

Kurzweil does not understand that it is the act of measurement that causes the collapse of the wave function, not conscious observation of the measurement. In practice, the collapse is (probably always) caused by a completely unconscious measuring device. The printing of the result on paper could be such a measuring device. Subsequent conscious observation of the measurement is irrelevant.
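A standard textbook way to see why the observer drops out is the decoherence description of a measurement. The following is a minimal editorial sketch in that notation, not an argument taken from Ray or Kurzweil, assuming a two-state system and a measuring apparatus with pointer states |A_0> and |A_1>:

    \[
    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle
    \;\longrightarrow\;
    |\Psi\rangle = \alpha\,|0\rangle|A_0\rangle + \beta\,|1\rangle|A_1\rangle
    \]
    \[
    \rho_{\mathrm{system}} = \mathrm{Tr}_{\mathrm{apparatus}}\bigl(|\Psi\rangle\langle\Psi|\bigr)
    \;\approx\; |\alpha|^{2}\,|0\rangle\langle 0| \;+\; |\beta|^{2}\,|1\rangle\langle 1|,
    \qquad \langle A_0|A_1\rangle \approx 0 .
    \]

Once the unconscious apparatus has become correlated with the system, the interference terms are negligible and the outcome statistics |\alpha|^2 and |\beta|^2 are already fixed; nothing in the expression refers to a conscious observer.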

This psychic quantum mechanics did not originate with Kurzweil. It has been around for decades, apparently as a way to deal with Schrödinger’s cat. Thus, Kurzweil may be able to point to physicists who hold this view. Similarly, I could point to biologists who believe in the biblical story of creation rather than evolution. The existence of experts who believe a doctrine, however, is no argument for the truth of the doctrine.

Colloquial Chaos

Kurzweil's suggestion that, in a process, the time interval between salient events expands or contracts along with the amount of chaos (the "law of time and chaos") is quite interesting. Yet the definitions of "salient events" and "chaos" are quite subjective, making the "law" difficult to support. Technically, it would probably be more appropriate to use the word "entropy" in place of "chaos," but for consistency I will also use "chaos" in this discussion.

Most striking is the apparently inconsistent use of chaos. He states that in an evolutionary process order increases, and he says: “Evolution draws upon the chaos in the larger system in which it takes place for its options for diversity.” Yet he states that in the development of an individual organism chaos increases, and he says: “The development of an organism from conception as a single cell through maturation is a process moving toward greater diversity and thus greater disorder.” Kurzweil suggests that in evolution, diversity implies order, while in development, diversity implies disorder.

Through evolution, the diversity of species on Earth has increased, and through development, the diversity of cell types increases. I would characterize both as processes that generate order. Why does Kurzweil think that development generates chaos? His apparent reason is to make his law of time and chaos consistent with our perception of time: Our subjective unit of time grows with our age.

I believe that the scientific community would generally agree that the developmental process up to the period of reproduction is a process of increasing order. In humans, who live well beyond their reproductive years, the condition of the body begins to deteriorate after the reproductive years, and this senescence would generally be considered a process of increasing chaos.

In an effort to fit development seamlessly into his law of time and chaos, Kurzweil presents the whole life cycle, from conception to death, as unidirectional, toward increasing chaos. This position is indefensible. The developmental process directly contradicts the law of time and chaos. Development is a process in which the time between salient events increases along with order.

He attempts to be clear and concrete in his use of the term chaos: “If we’re dealing with the process of evolution of life-forms, then chaos represents the unpredictable events encountered by organisms, and the random mutations that are introduced in the genetic code.” He explains: “Evolution draws upon the great chaos in its midst—the ever increasing entropy governed by the flip side of the Law of Time and Chaos—for its options for innovation.” This implies that unpredictable events and mutations are becoming more frequent, a position that would be difficult to defend. His argument is that increasing rates of mutations and unpredictable events are, in part, driving the increasing frequency of “salient events” in evolution. He does not provide any support for this highly questionable argument.

Despite his attempt to be precise, his use of “chaos” is vernacular: “When the entire Universe was just a ‘naked’ singularity . . . there was no chaos.” “As the Universe grew in size, chaos increased exponentially.” “Now with billions of galaxies sprawled out over trillions of light-years of space, the Universe contains vast reaches of chaos . . .” “We start out as a single fertilized cell, so there’s only rather limited chaos there. Ending up with trillions of cells, chaos greatly expands.” It seems that he associates chaos with size, a very unconventional use of the term.

His completely false interpretation of quantum mechanics, his vague and inconsistent use of terms such as “chaos” and “salient events,” and his failure to understand the thermodynamics of development represent errors in the basic science from which he constructs his view of the world. These misunderstandings of basic science seriously undermine the credibility of his arguments.

I am not comfortable with the equation of technological development and evolution. I think that most evolutionary biologists would consider these to be quite separate processes; yet their equation represents a point of view consistent with Kurzweil's arguments, and also consistent with the concept of the "meme" developed by the evolutionary biologist Richard Dawkins.

Intelligence in the Digital Medium

The primary criticism that I wish to make of Kurzweil's book, however, is that he proposes to create intelligent machines by copying human brains into computers. We might call this the Turing Fallacy. The Turing Test suggests that we can know that machines have become intelligent when we cannot distinguish them from humans in free conversation over a teletype. The Turing Test is one of the biggest red herrings in science.

It reminds me of early cinema, when we set a camera in front of a stage and filmed a play. Because the cinema medium was new, we really didn't understand what it was and what we could do with it. At that point we completely misunderstood the nature of the medium of cinema. We are in almost the same position today with respect to the digital medium.

Over and over again, in a variety of ways, we are shaping cyberspace in the form of the 3D material space that we inhabit. But cyberspace is not a material space and it is not inherently 3D. The idea of downloading the human mind into a computer is yet another example of failing to understand and work with the properties of the medium. Let me give some other examples and then come back to this.

I have heard it said that cyberspace is a place for the mind, yet we feel compelled to take our bodies with us. 3D virtual worlds and avatars are manifestations of this. I have seen virtual worlds where you walk down streets lined by buildings. In one I saw a Tower Records store, whose front looked like the real thing. You approached the door, opened it, entered, and saw rows of CDs on racks and an escalator to take you to the next floor. Just Like The Real Thing!

I saw a demo of Alpha World, built by hundreds of thousands of mostly teenagers. It was the day after Princess Diana died, and there were many memorials to her, bouquets of flowers by fountains, photos of Diana with messages. It looked Just Like The Real memorials to Diana.

I wondered, why do these worlds look and function as much as possible like the real thing? This is cyberspace, where we can do anything. We can move from point A to point B instantly without passing through the space in between. So why are we forcing ourselves to walk down streets and halls and to open doors?

Cyberspace is not a 3D Euclidean space. It is not a material world. We are not constrained by the same laws of physics, unless we impose them upon ourselves. We need to liberate our minds from what we are familiar with before we can use the full potential of cyberspace. Why should we compute collision avoidance for avatars in virtual worlds when we have the alternative to find out how many avatars can dance on the head of a pin?

The WWW is a good counter-example, because it recognizes that in cyberspace it doesn't matter where something is physically located. Amazon.com is a good alternative to the mindlessly familiar 3D Tower Records store.

Let me come back to Kurzweil’s ideas on AI. Kurzweil states that it is “ultimately feasible” to:

. . . scan someone’s brain to map the locations, interconnections, and contents of the somas, axons, dendrites, presynaptic vesicles, and other neural components. Its entire organization could then be re-created on a neural computer of sufficient capacity, including the contents of its memory . . . we need only to literally copy it, connection by connection, synapse by synapse, neurotransmitter by neurotransmitter.

This passage most clearly illustrates Kurzweil's version of the Turing Fallacy. It is not only infeasible to "copy" a complex organic organ into silicon without losing its function, but it is the least imaginative approach to creating an AI. How do we copy a serotonin molecule or a presynaptic vesicle into silicon? This passage of the book does not explicitly state whether he is proposing a software simulation, from the molecular level up, of a copy of the brain, or the construction of actual silicon neurons, vesicles, and neurotransmitters, wired together into an exact copy of a particular brain. Yet in the context of the preceding discussion, it appears that he is proposing the latter.

Such a proposal is doomed to failure. It would be a fantastic task to map the entire physical, chemical, and dynamic structure of a brain. Even if this could be accomplished, there would be no method for building a copy. There is no known technology for building complexly differentiated microscopic structures on such a large scale. If a re-construction method existed, we might expect that a copy made of the same materials, carbon chemistry, if somehow jump-started into the proper dynamic activity, would have the same function (though such a copied brain would require a body to support it). But a copy made of metallic materials could not possibly have the same function. It would be a fantastically complex and intricate dynamic sculpture, whose function would bear no relation to a human brain. And what of the body and its essential sensory integration with the brain?

In order for the metallic “copy” to have the same function, we would have to abstract the functional properties out of the organic neural elements, and find structures and processes in the new metallic medium that provide identical functions. This abstraction and functional-structural translation from the organic into the metallic medium would require a deep understanding of the natural neural processes, combined with the invention of many computing devices and processes which do not yet exist.

However, Kurzweil has stated that one advantage of the brain-copy approach is that “we don’t need to understand all of it; we need only to literally copy it.” Yet he is ambivalent on this critical point, adding: “To do this right, we do need to understand what the salient information-processing mechanisms are. Much of a neuron’s elaborate structure exists to support its own structural integrity and life processes and does not directly contribute to its handling of information.”

The structure and function of the brain or its components cannot be separated. The circulatory system provides life support for the brain, but it also delivers hormones that are an integral part of the chemical information processing function of the brain. The membrane of a neuron is a structural feature defining the limits and integrity of a neuron, but it is also the surface along which depolarization propagates signals. The structural and life-support functions cannot be separated from the handling of information.

The brain is a chemical organ, with a broad spectrum of chemical communication mechanisms, ranging from microscopic packets of neurotransmitters precisely delivered at target synapses, to nitric oxide gas and hormones spread through the circulatory system or diffusing through the intercellular medium of the brain. There also exists a wide range of chemical communication systems with intermediate degrees of specificity of delivery. The brain has evolved its exquisitely subtle and complex functionality based on the properties of these chemical systems. A metallic computation system operates on fundamentally different dynamic properties and could never precisely and exactly "copy" the function of a brain.

The materials of which computers are constructed have fundamentally different physical, chemical, and electrical properties than the materials from which the brain is constructed. It is impossible to create a “copy” of an organic brain out of the materials of computation. This applies not only to the proposition of copying an individual human brain with such accuracy as to replicate a human mind along with its memories, but also to the somewhat less extreme proposition of creating an artificial intelligence by reverse engineering the human brain.

Structures and processes suitable for information processing in the organic medium are fundamentally different from those of the metallic computational medium. Intelligent information processing in the computational medium must be based on fundamentally different structures and processes, and thus cannot be copied from organic brains.

I see three separate processes which are sometimes confounded. Machines having:

1) computing power equal to the level of human intelligence

2) computing performance equal to the level of human intelligence

3) computing like human intelligence

A large portion of Kurzweil’s book establishes the first process by extrapolating Moore’s Law into the future until individual machines can perform the same number of computations per second as is estimated for the human brain (~2020 A.D.).
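For readers who want to see the shape of that extrapolation, here is a back-of-the-envelope sketch in Python. Every constant in it is an illustrative assumption of mine (a circa-2000 machine at roughly 10^9 operations per second, an eighteen-month doubling time, and about 2 x 10^16 operations per second for the brain), not a figure taken from Kurzweil's book, and shifting any of them moves the crossover date by years:

    import math

    machine_ops_per_sec = 1e9    # assumed throughput of a circa-2000 machine
    brain_ops_per_sec   = 2e16   # assumed estimate for the human brain
    doubling_time_years = 1.5    # assumed Moore's-Law doubling period

    # Doublings needed for machine throughput to reach the assumed brain estimate
    doublings = math.log2(brain_ops_per_sec / machine_ops_per_sec)
    crossover_year = 2000 + doublings * doubling_time_years

    print(f"doublings needed: {doublings:.1f}")
    print(f"approximate crossover year: {crossover_year:.0f}")

The arithmetic is trivial and highly sensitive to the assumed constants, which is rather the point: the raw-power side of the argument is the easy part.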

I accept that this level of computing power is likely to be reached, someday. But no amount of raw computer power will be intelligent in the relevant sense unless it is properly organized. This is a software problem, not a hardware problem. The organizational complexity of software does not march forward according to Moore’s Law.

While I can accept that computing power will inevitably reach human levels, I am not confident that computing performance will certainly follow. The exponential increase of computing power is driven by higher densities and greater numbers of components on chips, not by exponentially more complex chip designs.

The most complex of artifacts designed and built by humans are much less complex than living organisms. Yet the most complex of our creations are showing alarming failure rates. Orbiting satellites and telescopes, space shuttles, interplanetary probes, the Pentium chip, computer operating systems, all seem to be pushing the limits of what we can effectively design and build through conventional approaches.

It is not certain that our most complex artifacts will be able to increase in complexity by an additional one, two or more orders of magnitude, in pace with computing power. Our most complex software (operating systems and telecommunications control systems) already contains tens of millions of lines of code. At present it seems unlikely that we can produce and manage software with hundreds of millions or billions of lines of code. In fact there is no evidence that we will ever be able to design and build intelligent software.

This leads to the next distinction, which is central to my argument, and requires some explanation:

2) computing performance equal to the level of human intelligence

3) computing like human intelligence

A machine might exhibit an intelligence identical to and indistinguishable from humans, a Turing AI, or a machine might exhibit a fundamentally different kind of intelligence, like some science fiction alien intelligence. I expect that intelligences which emerge from the digital and organic media will be as different as their respective media, even if they have comparable computing performance.

Everything we know about life is based on one example of life, namely, life on earth. Everything we know about intelligence is based on one example of intelligence, namely, human intelligence. This limited experience burdens us with preconceptions and limits our imaginations.

Consider this thought experiment:

We are all robots. Our bodies are made of metal and our brains of silicon chips. We have no experience or knowledge of carbon-based life, not even in our science fiction. Now one of us robots comes to an AI discussion with a flask of methane, ammonia, hydrogen, water, and some dissolved minerals. The robot asks: “Do you suppose we could build a computer from this stuff?”

The engineers among us might propose nano-molecular devices with fullerene switches, or even DNA-like computers. But I am sure they would never think of neurons. Neurons are astronomically large structures compared to the molecules we are starting with.

Faced with the raw medium of carbon chemistry, and no knowledge of organic life, we would never think of brains built of neurons, supported by circulatory and digestive systems, in bodies with limbs for mobility, bodies which can only exist in the context of the ecological community that feeds them.

We are in a similar position today as we face the raw medium of digital computation and communications. The preconceptions and limited imagination deriving from our organic-only experience of life and intelligence make it difficult for us to understand the nature of this new medium, and the forms of life and intelligence that might inhabit it.

How can we go beyond our conceptual limits, find the natural form of intelligent processes in the digital medium, and work with the medium to bring it to its full capacity, rather than just imposing the world we know upon it by forcing it to run a simulation of our physics, chemistry, and biology?

In the carbon medium it was evolution that explored the possibilities inherent in the medium, and created the human mind. Evolution listens to the technology that it is embedded in. It has the advantage of being mindless, and therefore devoid of preconceptions, and not limited by imagination.

I propose the creation of a digital nature. A system of wildlife reserves in cyberspace, in the interstices between human colonizations, feeding off of unused CPU-cycles (and permitted a share of our bandwidth). This would be a place where evolution can spontaneously generate complex information processes, free of the demands of human engineers and market analysts telling it what the target applications are.

Digital naturalists can then explore this cyber-nature in search of applications for the products of digital evolution in the same way that our ancestors found applications among the products of organic nature such as: rice, wheat, corn, chickens, cows, pharmaceuticals, silk, mahogany. But, of course, the applications that we might find in the living digital world would not be material; they would be information processes.

It is possible that out of this digital nature there might emerge a digital intelligence, truly rooted in the nature of the medium, rather than brutishly copied and downloaded from organic nature. It would be a fundamentally alien intelligence, but one which would complement rather than duplicate our talents and abilities.

I think it would be fair to say that the main point of Kurzweil’s book is that artificial entities with intelligence equal to and greater than humans will inevitably arise, in the near future. While his detailed explanation of how this might happen focuses on what I consider to be the Turing Fallacy, that is, that it will initially take a human form, Kurzweil would probably be content with any route to these higher intelligences, Turing or non-Turing.

While I feel that AIs must certainly be non-Turing—unlike human intelligences—I feel ambivalent about whether they will emerge at all. It is not the certainty that Kurzweil paints, like the inexorable march of Moore’s Law. Raw computing power is not intelligence. Our ability ever to create information processes of a complexity comparable to the human mind is completely unproven and absolutely uncertain.

I have suggested evolution as an alternate approach to producing intelligent information processes. These evolved AIs would certainly be non-Turing AIs. Yet evolution in the digital medium remains a process with a very limited record of accomplishments. We have been able to establish active evolutionary processes, by both natural and artificial selection in the digital medium. But the evolving entities have always contained at most several thousand bits of genetic information.
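To make "several thousand bits of genetic information" concrete, here is a deliberately minimal artificial-selection loop in Python. It is an editorial sketch, far simpler than Ray's own Tierra system: the genome is a bare bit string a couple of thousand bits long, and the "fitness" is a toy objective chosen by the experimenter rather than anything the medium discovers for itself.

    import random

    GENOME_BITS   = 2000               # a few thousand bits, the scale mentioned above
    POP_SIZE      = 100
    MUTATION_RATE = 1.0 / GENOME_BITS  # expect about one flipped bit per copy

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_BITS)]

    def fitness(genome):
        # Toy, hand-written objective (artificial selection): count the 1-bits.
        return sum(genome)

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors  = population[:POP_SIZE // 2]        # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    print("best fitness after 200 generations:",
          max(fitness(g) for g in population))

Everything interesting about the question lies in whatever would replace that hand-written fitness function; the sketch only shows how small the evolving genomes in such experiments have been.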

We do not yet have a measure on the potential of evolution in this medium. If we were to realize a potential within several orders of magnitude of that of organic evolution, it would be a spectacular success. But if the potential of digital evolution falls ten orders of magnitude below organic evolution, then digital evolution will lose its luster. There is as yet no evidence to suggest which outcome is more likely.

The hope for evolution as a route to AI is not only that it would produce an intelligence rooted in and natural to the medium, but that evolution in the digital medium is capable of generating levels of complexity comparable to what it has produced in the organic medium. Evolution is the only process that is proven to be able to generate such levels of complexity. That proof, however, is in the organic rather than the digital medium. Like an artist who can express his creativity in oil paint but not stone sculpture, evolution may be capable of magnificent creations in the organic medium but not the digital.

Yet the vision of the digital evolution of vast complexity is still out there, waiting for realization or disproof. It should encourage us, although we are at the most rudimentary level of our experience with evolution in the digital medium. Nevertheless, the possibilities are great enough to merit a serious and sustained effort.

Copyright © 2002 by the Discovery Institute. Used with permission.

Mind·X Discussion About This Article:

The Cloud of Influence
posted on 12/05/2002 9:55 PM by T. Upton LeVeen


The present is a deeply personal moment. And for Ray Kurzweil, so deep and understood it is that only people who venture in proximity to his event horizon of consciousness - the border of his self-sustained solipsist integrity - will ever be able to moderately understand it. I think he's applied his understanding of quantum mechanics appropriately to his discussion of consciousness and predictability concerning the human mind.
Consciousness has its limitations. Take for instance your (the reader's) personal "now state." I'd be willing to bet that you were not considering the cardiac pulsations in your left index finger. Yet, given the time, you can focus on it, implying that it is now an object of your consciousness. But considering that your consciousness has a limitation to its scope, what thoughts, feelings, etc. did it leave behind in order to make "space" for the new ponderings about your index finger? Essentially our consciousness has a focus, which isn't new news to people, but something that I felt needed a quick mention before relating to both Kurzweil's discussion and the following illustration of my own.

To what extent does the underlying carpet of quantum fluctuations influence our beings?

Engineers are often fond of chopping off a string of numbers that might go on for 43 decimal places before stopping. Aside from being a practice of scientific notation, they reason, "What use are these 43 decimals but to cause confusion in my calculation, especially if I can round them to three decimal places and get the same apparent results as if I had included them: a standing bridge with steel of tensile strength 125,000 psi?"

Similarly, what if we are to consider our focused consciousness to be like probabilistic clouds generated upon the sure-firing cumulative thresholds of excitations responsible for that directed attention? In other words, if our nerves have a threshold of depolarization (or hyper-polarization as within the retina), then once we direct our consciousness towards a certain subject matter, the primary bell curve of physical thresholds takes precedence over that of the peripheral ones and our attention is sustained until otherwise directed. Such a Cloud of Influence (COI) is commanded by the observable relativistic center core of energy, the firing thresholds. Within the fringes of the Gaussian curve denoting this cloud is the inevitable overlap into yet a second realm, the Cloud of Subconscious Variability (CSV). This CSV operates differently because it does not have the dominating influence of the relativistic excited thresholds within the COI. The CSV therefore lies closer to the carpet of quantum fluctuations and therefore the opportunity for it to be influenced by such quantum fluctuations is increased, but not inevitable and perhaps highly improbable, given the mathematical dynamics of this scale of physics, of which I am not a master. (A spaceship 1,000 miles above Earth may have a better probability to be influenced by Jupiter's gravity than if it was upon an Earth-based launch-pad, but is still more likely to fall back to Earth than zoom away to Jupiter, if the physical conditions of its motion limit it from doing so.)

In any case, to what extent, if any, can we neglect the small overlapping of Gaussian clouds and focus solely on the core of relativistic threshold firings at the center of our consciousness WITHOUT affecting the PATH of this core to our beings? Is there a place on the infinite-outcomes (until observed) timeline where we can say: hey, if when copying the mind of a human we chop off that string of probabilities up to this nth decimal point, it won't really matter, because he will remain the same person and those lesser probabilities were frivolous anyhow? I think it's likely that there's a "T Point" above the Planck unit where a form of determinism arises and human behavior can emerge out of the realm of probability and into the marble universe that Einstein was eager to construct.
And my interpretation of this geometry of consciousness is similar to that of Chinese handcuffs (looks like I'm on the Asian theme today). There's a more modern version of these reed-woven finger traps that uses nylon as its material. These nylon "boingy springs" (as I refer to them) have a tube-within-a-tube geometry such that you can push the inner tube through and eventually it will circle around the outside of the structure. (I really wish I had a picture to illustrate this thought.) Now consider this to be similar to our geometrical consciousness, where its core rests at the very center of this innermost tube, only in OUR case there are multiple tubes having multiple orientations (probabilities) (about 20 million billion, in fact), but each capable of achieving that same outward folding motion, all at the same time, which sounds physically impossible at first. However, instead of these things being tubes, consider the inner tube of all of these omni-oriented springs to cone in and "pinch" from both directions onto the central point of consciousness. The reason why our consciousnesses would not be frozen in time is because, in actuality, this three-dimensional representation of the geometry of consciousness is actually my attempt to illustrate my feelings of three-dimensional motion within a fourth-dimensional realm.

Consciousness and time merge at this point. The varying probabilities of chaotic fractal representations are what comprise reality, and in so doing, the creation of an adult human from a fetus is actually farther down the line of chaotic evolution than the original fetal mass, because it has had to venture down the periscope of fractal examination.

If we are then to consider the structures of the 3-dimensional motion-based Universe, black holes, big bangs, red shifts, etc. I think it would be in our grasp to evaluate how they are connected to our present-day state of consciousness, that being within the "knee-joint" of the exponential.

(((((((tangent(((((I also think a separate interpretation of the photo-electric effect explaining why energy can be represented as both a particle and a wave is that the three dimensional geometry around a particle can actually create movement around the particle depending on viewing the interpretation of the dimensions around the particle... sort of to say that there is a dual interpretation of the geometry and in so being, the geometry oscillates around the stationary particle, the particle emerging from the fourth dimension as an energy "packet." In essence the "wave part" of the photoelectric effect could actually be a bridged representation of the geometry of space.)))))see my diagram)))I guess I can't attach it))ask and I will send)))))))))

On Technology and Intelligence

I think that Thomas Ray has made some valid points concerning the dilemmas of replicating the abilities of a human mind with that of a computer. However, there is one thing that I would like to point out. I believe that consciousness is separate from intelligence BUT connected by the common thread of individual reality... much like the way a set of ninja nunchucks has two distinct batons connected by a common chain... throw one baton up into the air and the other one hasn't a choice but to follow, yet each baton, if you hold them in separate hands, can be moved in different directions, insomuch that they do not break the bonds of reality. Intelligence and consciousness... like a bird's two dexterous wings connected by a single body.

To make this crucial separation between intelligence and consciousness is to shed light on what will happen once scientists use binary computers to begin the process of mapping a human's inner entity onto a computer. A binary-type computer will have to faaaarrrrr exceed the computational capacity of a human mind to even come close to a consciousness of similar proportions to a human's, because the human mind obviously runs on something more powerful and flexible than a binary code.
For instance (although consciousness is much more than a visual experience), consider two TV screens of equal area, one with nine times the pixel density of the other. Let's suppose that the one with greater pixel density is analogous to a human, while the one of lesser density is analogous to a computer. Let's also suppose that a signal is sent to the televisions and each has the computational capacity (intelligence) to decipher the information and send it to their monitors. Let's say this signal is analogous to reality and pixel density is analogous to consciousness. There is one more catch to this scenario: these TVs also have feedback loops to their memory faculties. The purpose of these memory faculties is to record which pixel was illuminated. If a signal is sent to pixel area A4 for both machines, A4 will definitely illuminate for the computer, while a sequence of 1/9 subunits is to illuminate for the human, depending upon its initial states. Therefore the human will have a more selective feedback experience. In such a way, because of the intrinsic nature of the outputs (binary vs. ninary, if you will), their intelligences are equal yet their consciousnesses are not.

I don't know if I've made complete sense here, but what I initially set out to do was to say that, similar to when I'm sitting here in my chair witnessing first hand my own reality, even if I knew that a computer reflection was here sharing my consciousness in tandem with me, because it is based on a binary mechanism of data retrieval whereas my mind is based on a more advanced (but not infinite) method of data retrieval, the computer might be able to match my level of intelligence by shadowing itself within the core of relativistic threshold firings, but not equal my level of consciousness, and therefore would be analogous to a TV screen that was destined to only give one interpretation of the data that it received.
The next logical question would then become: well, if the equally intelligent computer has a lesser degree of consciousness than its copied human counterpart, what level of computational intelligence is needed for its consciousness to be on par with that of a human (how far must it exceed that of the human brain)? The answer lies within my Cloud of Influence (the directional vector of consciousness) model, which is analogous to what I think Kurzweil is claiming. If we are able to match the algorithms sufficiently such that the Cloud of Influence is preserved and matched in its outcomes, then the computers will have reached a state where their consciousness has matched that of humans. In so doing, it would take a hyper-intelligent binary computer to match the consciousness of a super-intelligent multiple-state human brain. We will have shed the burdens of nutritional sustenance and gotten down to the crux of our goal: organized electron vectoring.

Will Moore's Law extend the computational power of a binary system to the point where these hyper-computers can sufficiently account for human consciousness? Or maybe it is more encouraging to work from the other end of the spectrum, with quantum computers as the focus of investigation?
I favor the latter for its malleability, though I wouldn't rule out the binary method in case of emergencies. For instance, if while concentrating on my finger certain outcomes of predictability are occurring, I could take a snapshot of my mind here and consider the highest probability of that particular mindset recorded. Then if I chose to turn my attention to a different aspect of who I am, perhaps my patented way of rambling on about things, and take a snapshot of it, I would then have two gems of highest probabilities. Continue to do this again and again and again, and I would start to map my entire conscious domain... perhaps I would even choose to remove that memory of my 3 year old birthday AFTER mapping into binary, such that it would not interfere with the continual process of mapping new regions long since forgotten.

This process is what I've referred to with my circle of friends as the "Evolution of Quicksand Tom." I would begin my replication at the tip of my toe (bringing new meaning to the word "upload") and eventually finish with the mapping (or replacement) of my brain. Indeed the last breath of air before I would surrender my lungs for replication would be a nervous one, but when death is the alternative, and the development of quantum computation is still in its early stages, it's important to have a good backup in reserve. (But such concern might be needless if the exponential is in lift-off mode.)

Cheers,
Thomas

Re: The Cloud of Influence
posted on 12/05/2002 11:45 PM by tony_b


Thomas (Upton LeVeen),

> "The CSV therefore lies closer to the carpet of quantum fluctuations and therefore the opportunity for it to be influenced by such quantum fluctuations is increased, but not inevitable and perhaps highly improbable, given the mathematical dynamics of this scale of physics, of which I am not a master. (A spaceship 1,000 miles above Earth may have a better probability to be influenced by Jupiter's gravity than if it was upon an Earth-based launch-pad, but is still more likely to fall back to Earth than zoom away to Jupiter, if the physical conditions of its motion limit it from doing so.)"

A spaceship, on the ground or in orbit, is not much of an "amplifier", so the analogy is not the best.

The brain, and life in general, have a characteristic ability to "amplify the small". One might argue that the weather does as well (butterfly effect) but the difference is that the weather, and many other relatively homogeneous and unstructured systems amplify small changes chaotically, whereas living systems seem able to produce coherence out of chaotic fluctuations.

Think of how a ping-pong ball approximates a random walk, when allowed to fall between two planes where small wooden pegs form a lattice between the planes. Drop the ball at the center of one edge, and by the time it emerges from the bottom edge, it "may" still be near the center, but might also have bounced itself farther away.

The presence of "Jupiter", tiny as it might be, is akin to a tiny adjustment in the placement of the pegs in the lattice. Each adjustment has hardly any great individual effect upon the path of the ping-pong ball, but even the smallest effect early on is amplified as the ball continues its journey.

> "Indeed the last breath of air before I would surrender my lungs for replication would be a nervous one, but when death is the alternative ..."

Here I am confused. Suppose the technology of upload was so "perfected" that you could be replicated right now, and this procedure was performed "non-destructively". Now, your "upload self" feels just fine and dandy, and assures you that "it is truly you". It goes off to live in Florida, while you move to Montana. Are you willing to "give up your lungs/life", just because you have been perfectly uploaded?

Why should "doing it faster" (upload and death of original) make any substantive difference?

Some argue for "doing it gradually". Even this, however (given the preceding comments), suggests that "you" actually die every moment, and never realize it because it happens so fast, and the "new person" who inherits your memories and state each moment (thus) sees everything as "familiar".

Is this upload/immortality thing just a means to avoid "the experience of death"? The "anticipation of death"?

While living in Montana, miles away from your perfect upload, would it be ok to kill you peacefully in your sleep, so you never knew you died?

Hmmmm.

Cheers! ____tony b____