Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0374.html

A Wager on the Turing Test: Why I Think I Will Win
by Ray Kurzweil

Will Ray Kurzweil's predictions come true? He's putting his money where his mouth is. Here's why he thinks he will win a bet on the future of artificial intelligence. The wager: an AI that passes the Turing Test by 2029.


Published April 9, 2002 on KurzweilAI.net. Click here to read an explanation of the bet and its background, with rules and definitions. Click here to read Mitch Kapor's response. Also see Ray Kurzweil's final word on why he will win.

The Significance of the Turing Test. The implicit, and in my view brilliant, insight in Turing's eponymous test is the ability of written human language to represent human-level thinking. The basis of the Turing test is that if the human Turing test judge is competent, then an entity requires human-level intelligence in order to pass the test. The human judge is free to probe each candidate with regard to their understanding of basic human knowledge, current events, aspects of the candidate's personal history and experiences, as well as their subjective experiences, all expressed through written language. As humans jump from one concept and one domain to the next, it is possible to quickly touch upon all human knowledge, on all aspects of human, well, humanness.

To the extent that the "AI" chooses to reveal its "history" during the interview with the Turing Test judge (note that none of the contestants are required to reveal their histories), the AI will need to use a fictional human history because "it" will not be in a position to be honest about its origins as a machine intelligence and pass the test. (By the way, I put the word "it" in quotes because it is my view that once an AI does indeed pass the Turing Test, we may very well consider "it" to be a "he" or a "she.") This makes the task of the machines somewhat more difficult than that of the human foils because the humans can use their own history. As fiction writers will attest, presenting a totally convincing human history that is credible and tracks coherently is a challenging task that most humans are unable to accomplish successfully. However, some humans are capable of doing this, and it will be a necessary task for a machine to pass the Turing test.

There are many contemporary examples of computers passing "narrow" forms of the Turing test, that is, demonstrating human-level intelligence in specific domains. For example, Garry Kasparov, clearly a qualified judge of human chess intelligence, declared that he found Deep Blue's playing skill to be indistinguishable from that of a human chess master during the famous match in which he was defeated by Deep Blue. Computers are now displaying human-level intelligence in a growing array of domains, including medical diagnosis, financial investment decisions, the design of products such as jet engines, and a myriad of other tasks that previously required humans to accomplish. We can say that such "narrow AI" is the threshold that the field of AI has currently achieved. However, the subtle and supple skills required to pass the broad Turing test as originally described by Turing are far more difficult to achieve than any narrow Turing test. In my view, there is no set of tricks or simpler algorithms (i.e., methods simpler than those underlying human-level intelligence) that would enable a machine to pass a properly designed Turing test without actually possessing intelligence at a fully human level.

There has been a great deal of philosophical discussion and speculation concerning the issue of consciousness, and whether or not we should consider a machine that passed the Turing test to be conscious. Clearly, the Turing test is not an explicit test for consciousness. Rather, it is a test of human-level performance. My own view is that inherently there is no objective test for subjective experience (i.e., consciousness) that does not have philosophical assumptions built into it. The reason for this has to do with the difference between the concepts of objective and subjective experience. However, it is also my view that once nonbiological intelligence does achieve a fully human level of intelligence, such that it can pass the Turing test, humans will treat such entities as if they were conscious. After all, they (the machines) will get mad at us if we don't. However, this is a political prediction rather than a philosophical position.

It is also important to note that once a computer does achieve a human level of intelligence, it will necessarily soar past it. Electronic circuits are already at least 10 million times faster than the electrochemical information processing in our interneuronal connections. Machines can share knowledge instantly, whereas we biological humans do not have quick downloading ports on our neurotransmitter concentration levels, interneuronal connection patterns, nor any other biological bases of our memory and skill. Language-capable machines will be able to access vast and accurate knowledge bases, including reading and mastering all the literature and sources of information available to our human-machine civilization. Thus "Turing Test level" machines will be able to combine human level intelligence with the powerful ways in which machines already excel. In addition, machines will continue to grow exponentially in their capacity and knowledge. It will be a formidable combination.

Why I Think I Will Win. In considering the question of when machine (i.e., nonbiological) intelligence will match the subtle and supple powers of human biological intelligence, we need to consider two interrelated but distinct questions: when will machines have the hardware capacity to match human information processing, and when will our technology have mastered the methods, i.e., the software of human intelligence. Without the latter, we would end up with extremely fast calculators, and would not achieve the endearing qualities that characterize human discernment (nor the deep knowledge and command of language necessary to pass a full Turing test!).

Both the hardware and software sides of this question are deeply influenced by the exponential nature of information-based technologies. The exponential growth that we see manifest in "Moore's Law" is far more pervasive than commonly understood. Our first observation is that the shrinking of transistors on an integrated circuit, which is the principle of Moore's Law, was not the first but the fifth paradigm to provide exponential growth to computing (after electromechanical calculators, relay-based computers, vacuum tube-based computing, and discrete transistors). Each time one approach begins to run out of steam, research efforts intensify to find the next source of renewed exponential growth (e.g., vacuum tubes were made smaller until it was no longer feasible to maintain a vacuum, which led to transistors). Thus the power and price-performance of technologies, particularly information-based technologies, grow as a cascade of S-curves: exponential growth leading to an asymptote, leading to paradigm shift (i.e., innovation), and another S-curve. Moreover, the underlying theory of the exponential growth of information-based technologies, which I call the law of accelerating returns, as well as a detailed examination of the underlying data, show that there is a second level of exponential growth, i.e., the rate of exponential growth is itself growing exponentially.[i]
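The cascade-of-S-curves picture can be sketched numerically. The paradigm midpoints and ceilings below are invented purely to illustrate the shape of the curve; none of these numbers come from Kurzweil's data:

```python
import math

def paradigm(t, t0, ceiling, rate=1.0):
    """One S-curve (logistic): slow start, rapid growth, saturation at 'ceiling'."""
    return ceiling / (1.0 + math.exp(-rate * (t - t0)))

def price_performance(t):
    """A cascade of five paradigms; each new S-curve has a ~100x higher ceiling.

    Midpoints and ceilings are hypothetical, chosen only to show the shape.
    """
    paradigms = [(10, 1e2), (20, 1e4), (30, 1e6), (40, 1e8), (50, 1e10)]
    return sum(paradigm(t, t0, c) for t0, c in paradigms)

# On a log scale the envelope of the cascade is roughly a straight line,
# i.e. overall exponential growth, even though each paradigm saturates.
for t in range(0, 60, 10):
    print(t, f"{math.log10(price_performance(t)):.1f}")
```

Each individual logistic curve flattens out, but the sum keeps climbing because a new curve takes over as the previous one saturates: the "paradigm shift" in numerical form.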

Second, this phenomenon of ongoing exponential growth through a cascade of S-curves is far broader than computation. We see the same double exponential growth in a wide range of technologies, including communication technologies (wired and wireless), biological technologies (e.g., DNA base-pair sequencing), miniaturization, and of particular importance to the software of intelligence, brain reverse engineering (e.g., brain scanning, neuronal and brain region modeling).

Within the next approximately fifteen years, the current computational paradigm of Moore's Law will come to an end because by that time the key transistor features will only be a few atoms in width. However, there are already at least two dozen projects devoted to the next (i.e., the sixth) paradigm, which is to compute in three dimensions. Integrated circuits are dense but flat. We live in a three-dimensional world, our brains are organized in three dimensions, and we will soon be computing in three dimensions. The feasibility of three-dimensional computing has already been demonstrated in several landmark projects, including the particularly powerful approach of nanotube-based electronics. However, for those who are (irrationally) skeptical of the potential for three-dimensional computing, it should be pointed out that even a conservatively high estimate of the information processing capacity of the human brain (i.e., one hundred billion neurons times a thousand connections per neuron times 200 digitally controlled analog "transactions" per second, or about 20 million billion operations per second) will be achieved by conventional silicon circuits prior to 2020.
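The capacity estimate quoted here is straightforward to reproduce; the three factors are exactly the ones given in the text:

```python
# Kurzweil's back-of-envelope estimate of human brain processing capacity,
# using the figures quoted in the paragraph above.
neurons = 100e9                 # one hundred billion neurons
connections_per_neuron = 1e3    # a thousand connections per neuron
transactions_per_second = 200   # digitally controlled analog "transactions"

ops_per_second = neurons * connections_per_neuron * transactions_per_second
print(f"{ops_per_second:.0e} operations per second")  # 2e+16 = 20 million billion
```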

It is correct to point out that achieving the "software" of human intelligence is the more salient, and more difficult, challenge. On multiple levels, we are being guided in this effort by a grand project to reverse engineer (i.e., understand the principles of operation of) the human brain itself. Just as the human genome project accelerated (with the bulk of the genome being sequenced in the last year of the project), the effort to reverse engineer the human brain is also growing exponentially, and is further along than most people realize. We already have highly detailed mathematical models of several dozen of the several hundred types of neurons found in the brain. The resolution, bandwidth, and price-performance of human brain scanning are also growing exponentially. By combining the neuron modeling and interconnection data obtained from scanning, scientists have already reverse engineered two dozen of the several hundred regions of the brain. Implementations of these reverse engineered models using contemporary computation match the performance of the biological regions that were recreated in significant detail. Already, we are in an early stage of being able to replace small regions of the brain that have been damaged by disease or disability using neural implants (e.g., ventral posterior nucleus, subthalamic nucleus, and ventral lateral thalamus neural implants to counteract Parkinson's Disease and tremors from other neurological disorders, cochlear implants, emerging retinal implants, and others).

If we combine the exponential trends in computation, communications, and miniaturization, it is a conservative expectation that within 20 to 25 years we will be able to send tiny scanners the size of blood cells into the brain through the capillaries to observe interneuronal connection data and even neurotransmitter levels from up close. Even without such capillary-based scanning, the contemporary experience of the brain reverse engineering scientists (e.g., Lloyd Watts, who has modeled over a dozen regions of the human auditory system) is that the connections in a particular region follow distinct patterns, and that it is not necessary to see every connection in order to understand the massively parallel, digitally controlled analog algorithms that characterize information processing in each region. The work of Watts and others has demonstrated another important insight: once the methods in a brain region are understood and implemented using contemporary technology, the machine implementation requires on the order of a thousand times less computation than the theoretical potential of the biological neurons being simulated.

A careful analysis of the requisite trends shows that we will understand the principles of operation of the human brain and be in a position to recreate its powers in synthetic substrates well within thirty years. The brain is self-organizing, which means that it is created with relatively little innate knowledge. Most of its complexity comes from its own interaction with a complex world. Thus it will be necessary to provide an artificial intelligence with an education just as we do with a natural intelligence. But here the powers of machine intelligence can be brought to bear. Once we are able to master a process in a machine, it can perform its operations at a much faster speed than biological systems. As I mentioned, contemporary electronics is already more than ten million times faster than the human nervous system's electrochemical information processing. Once an AI masters basic human language skills, it will be in a position to expand its language skills and general knowledge by rapidly reading all human literature and by absorbing the knowledge contained on millions of web sites. Also of great significance will be the ability of machines to share their knowledge instantly.

One challenge to our ability to master the apparent complexity of human intelligence in a machine is whether we are capable of building a system of this complexity without the brittleness that often characterizes very complex engineering systems. This is a valid concern, but the answer lies in emulating the ways of nature. The initial design of the human brain is of a complexity that we can already manage. The human brain is characterized by a genome with only 23 million bytes of useful information (that's what is left of the 800-million-byte genome when you eliminate all of the redundancies, e.g., the sequence called "Alu," which is repeated hundreds of thousands of times). 23 million bytes is smaller than Microsoft Word. How is it, then, that the human brain with its 100 trillion connections can result from a genome that is so small? The interconnection data alone is a million times greater than the information in the genome.
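The disparity between the genome and the wiring it gives rise to can be put in numbers. The one-byte-per-connection floor below is a deliberately conservative assumption of mine, not a figure from the text:

```python
genome_bytes = 800e6       # raw genome, ~800 million bytes (per the text)
compressed_bytes = 23e6    # useful information after removing redundancy
connections = 100e12       # ~100 trillion interconnections

# Even at a bare minimum of one byte to describe each connection, the
# wiring data exceeds the compressed genome by a factor of several million,
# i.e. the "million times greater" order of magnitude in the text.
ratio = connections / compressed_bytes
print(f"connectome / genome ratio = {ratio:.1e}")
```

The gap is what forces the conclusion of the next paragraph: the genome cannot encode the wiring directly; it can only encode processes that generate it.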

The answer is that the genome specifies a set of processes, each of which utilizes chaotic methods (i.e., initial randomness, then self-organization) to increase the amount of information represented. It is known, for example, that the wiring of the interconnections follows a plan that includes a great deal of randomness. As the individual person encounters her environment, the connections and the neurotransmitter level pattern self-organize to better represent the world, but the initial design is specified by a program that is not extreme in its complexity.

Thus we will not program human intelligence link by link as in some massive expert system. Nor is it the case that we will simply set up a single genetic (i.e., evolutionary) algorithm and have intelligence at human levels automatically evolve itself. Rather we will set up an intricate hierarchy of self-organizing systems, based largely on the reverse engineering of the human brain, and then provide for its education. However, this learning process can proceed hundreds if not thousands of times faster than the comparable process for humans.

Another challenge is the contention that the human brain must incorporate some other kind of "stuff" that is inherently impossible to recreate in a machine. Penrose imagines that the intricate tubules in human neurons are capable of quantum-based processes, although there is no evidence for this. I would point out that even if the tubules do exhibit quantum effects, there is nothing barring us from applying these same quantum effects in our machines. After all, we routinely use quantum methods in our machines today. The transistor, for example, is based on quantum tunneling. The human brain is made of the same small list of proteins of which all biological systems are composed. We are rapidly recreating the powers of biological substances and systems, including neurological systems, so there is little basis to expect that the brain relies on some nonengineerable essence for its capabilities. In some theories, this special "stuff" is associated with the issue of consciousness, e.g., the idea of a human soul associated with each person. Although one may take this philosophical position, the effect is to separate consciousness from the performance of the human brain. Thus the absence of such a soul may in theory have a bearing on the issue of consciousness, but would not prevent a nonbiological entity from attaining the performance abilities necessary to pass the Turing test.

Another challenge is the contention that an AI must have a human or human-like body in order to display human-like responses. I agree that a body is important to provide a situated means to interact with the world. The requisite technologies to provide simulated or virtual bodies are also rapidly advancing. Indeed, we already have emerging replacements or augmentations for virtually every system in our body. Moreover, humans will be spending a great deal of time in full immersion virtual reality environments incorporating all of the senses by 2029, so a virtual body will do just as well. Fundamentally, emulating our bodies in real or virtual reality is a less complex task than emulating our brains.

Finally, we have the challenge of emotion, the idea that although machines may very well be able to master the more analytical cognitive abilities of humans, they inherently will never be able to master the decidedly illogical and much harder to characterize attributes of human emotion. A slightly broader way of characterizing this challenge is to pose it in terms of "qualia," which refers essentially to the full range of subjective experiences. Keep in mind that the Turing test is assessing convincing reactions to emotions and to qualia. The apparent difficulty of responding appropriately to emotion and other qualia appears to be at least a significant part of Mitchell Kapor's hesitation to accept the idea of a Turing-capable machine. It is my view that understanding and responding appropriately to human emotion is indeed the most complex thing that we do (with other types of qualia being if anything simpler to respond to). It is the cutting edge of human intelligence, and is precisely the heart of the Turing challenge. Although human emotional intelligence is complex, it nonetheless remains a capability of the human brain, with our endocrine system adding only a small measure of additional complexity (and operating at a relatively low bandwidth). All of my observations above pertain to the issue of emotion, because that is the heart of what we are reverse engineering. Thus, we can say that a side benefit of creating Turing-capable machines will be new levels of insight into ourselves.

[i] All of the points addressed in this statement of "Why I Think I Will Win" (the Long Now Turing Test Wager) are examined in more detail in my essay "The Law of Accelerating Returns," available at http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html.

Mind·X Discussion About This Article:

Ray is right, but for overly narrowly stated reasons...
posted on 01/08/2002 1:28 PM by ben@goertzel.org


I think that Ray will win the bet, but perhaps not for the precise reasons that he states.

He seems to believe that "real AI" (i.e., human-level general intelligence) will follow from brain scanning and the ensuing detailed understanding of human intelligence.

It would seem that he is not so bullish on non-closely-brain-based approaches to AI.

I agree that both computer hardware and brain scanning will be kick-ass before 2029, but will this be how AI comes about? This is one possible path to "real AI", but it is far from the only one.

I wish that Ray, given his position as a publicist of Singularitarian and futurist thinking, would give a little more attention to non-closely-brain-based approaches to "real AI", which do exist.

Indeed, this is a personal axe that I am grinding here. But I think it is a very important axe ;> See www.goertzel.org/realaibook/ for some information on a forthcoming book summarizing radical current approaches to "real AI", none of which are based on the detailed results of brain scanning, although brain science is certainly one source of inspiration for this work.

-- Ben Goertzel



Brain Chemistry needs to be able to change
posted on 04/17/2002 1:15 AM by lottomagic@net2000.com.au


Also, for Real AI to come even close to the functions of the real brain, it would need the ability to simulate changes in brain chemistry. I can't see this happening for some time.


Re: Ray is right, but for overly narrowly stated reasons...
posted on 09/03/2002 11:39 PM by laserjoy@earthlink.net


It seems that understanding the brain is essential one way or another. Your distinction between closely-brain-based approaches and brain research is not clear to me. Don't you think maybe we could learn more yet with brain scans?

I read your site for Real AI New Approaches to Artificial General Intelligence on www.goertzel.org/realaibook/. I am very eager to buy and read the book.

Perhaps Ray would be willing to include other approaches in his arguments if he had another exponential growth graph or two. ;-)

-Richard




Re: Ray is right, but for overly narrowly stated reasons...
posted on 09/03/2002 11:44 PM by ben@goertzel.org



I have no doubt that we can learn a lot from brain scans, particularly once we have scans with much higher spatial & temporal resolution than is available now. I have worked with EEG, PET and MEG data, and I know that the current state of the hardware is not enough to teach us more than very general lessons about how the mind works.

The difference between closely brain-inspired AI and non-closely-brain-inspired AI is pretty clear to me. If you read the article on the Novamente AI Engine at www.realai.net you will see what I mean. The question is whether one introduces data structures & algorithms that one KNOWS can't be the way the brain "does it", but that seem to carry out similar functions to parts of the brain, in an abstract sense. I do introduce such data structures and algorithms, hence I don't consider my work closely brain-based... though it's certainly loosely brain-inspired.

-- Ben Goertzel

Re: Ray is right, but for overly narrowly stated reasons...
posted on 09/04/2002 1:10 AM by azb0@earthlink.net


Ben,

> "The question is whether one introduces data structures & algorithms that one KNOWS can't be the way the brain "does it", but that seem to carry out similar functions to parts of the brain, in an abstract sense."

Another way to phrase the question is, "At what level of functionality do we care to make distinctions, and why?"

If an artificial system is able to beat a 9-dan at Go, while debating the finer issues of current political intrigues and painting a beautiful and original watercolor, I will be quite convinced that its intelligence is "on par" with my own (I cannot yet break into the dan ranks!). Why will I care _how_ it is accomplished, if "intelligence" is all that is of interest?

If, however, we want to hold this creature to be a "sentience" (morally wrong to abuse or "kill", say) then we have a deeper problem. Many will not care how closely you "mimic" the "actual processing" of a human brain, if the underlying physics is "too different".

To address those that complain (perhaps rightly) that any "algorithmic" simulacrum of intelligence will be limited by the incompleteness of "formal systems", I have the following thought exercise, really a fantasy.

I imagine that we have created "algorithmically constrained" robots, whose demonstrated intelligence is at least remarkable, and we divide these into two camps. To one camp, we add an additional "organ" I will call a "gel-pack". Perhaps this gel-pack is an electrolytic gel packed with billions of carbon nanotubes of varied lengths, and for which a million needles penetrate the gel to various depths and locations. The gel-pack is treated like any other "sense organ", in that its "needles" are treated as I/O points for internal processing such as the eyes or ears, but is isolated entirely within the robot. It merely (perhaps) acts as a "strange correlator"; as the rest of the "AI-system" goes about its business, activity potentials find their way into this gel-pack through certain needles, and "strangely" affect the potentials available at other needles.

The "AI" simply treats this gel-pack as if it were another "sensory/motor system", and as the robot-AI continues to learn through associations with the world, we would become increasingly unsure to what degree this "gel-pack" was contributing to the overall state of the system. And if these "gel-pack augmented" robots began to outperform their gel-deficient brothers ...

The point of this exercise is simply to elide the argument that "it's all algorithmic, all syntactic manipulation." To argue that "it cannot possibly be sentient or conscious, despite its observed behaviors" becomes far more problematic. It bears down on at least one "crutch" employed by vitalists or theologians that "it cannot be 'mind' because it is fully explicable".

Thoughts?

Cheers! ____tony b____

Re: Ray is right, but for overly narrowly stated reasons...
posted on 09/04/2002 1:17 AM by ben@goertzel.org



I think that the debate over whether programs can be sentient or not will disappear fast after highly intelligent autonomous programs become well-known.

Really, how does any of us know that any other humans besides ourselves are conscious? We don't (read Breakfast of Champions by Vonnegut ;). We don't bother to agonize over this much either -- we're pragmatists about other humans' consciousness, and once real artificial general intelligences are around, we'll be pragmatists about their consciousness too.

-- Ben Goertzel

Re: Ray is right, but for overly narrowly stated reasons...
posted on 09/04/2002 7:06 PM by azb@llnl.gov


Ben,

I strongly agree. The rise of (at least) convincingly sentient artificials is inevitable, and even if some scientists would argue "not really sentient", you would have the problem of "sending the wrong message" to youth regarding behaviors, if you allowed obvious cruelty toward the "seemingly sentient".

"Oh child, do not worry, it does not really 'feel' pain or fright. It has simply evolved its programming to scream and writhe like that as a 'survival ploy' when we undertake to disassemble it."

That would not cut it, and there we would be.

Cheers! ____tony b____

Turing Test flaw?
posted on 01/08/2002 5:55 PM by morbid_curiosity@gmx.net


One flaw I can think of with this process would be the fact that a computer could have a limitless amount of information about the world, where a human would not. It could very well give the computer away if a judge were to probe the subject's knowledge of everything very deeply, with obscure things most people would not know.
I suppose one way to get around this would be to not provide the computer with as much knowledge as it could possibly hold, but rather to fashion it after the amount of knowledge a regular human would have. But then the question comes: is the point of this contest to build a computer that acts like a human, or to build a computer that is as advanced as a human?

Re: Turing Test flaw?
posted on 01/08/2002 11:57 PM by iph1954@msn.com


Any artificial intelligence sufficiently advanced to have a chance at passing a Turing Test will certainly have access to far more information than a human could possibly possess. BUT -- that AI will also be smart enough to know how to disguise this superior knowledge and feign human capacity. That's the whole point of the test.

Re: Turing Test flaw?
posted on 04/12/2002 4:13 AM by lottomagic@net2000.com.au



Regardless of the point of the test, which in Turing's case seems to be that if it fools a human it can be considered artificially intelligent, the fundamental reasoning behind the point seems severely flawed and from my perspective largely a joke.

I can only re-iterate that just because some computer thing fools a human, does not necessarily make that thing "intelligent". Being made up of complex conditional data structures and having access to huge amounts of data or "knowledge" does not necessarily amount to real or true intelligence.

This is the point I make about the Turing test. This test is not suited to measuring true AI, if we wish to elevate our view of such an AI to the levels of real intelligence in the real world.

Without explicitly defining all the test parameters and agreeing upon them, it is difficult to conclude that such a test could offer any conclusive and convincing proof one way or the other, of the presence of real AI.

From this point of view, and in consideration of the broad range of possible parameters that may pertain to such a test, one could easily use the test to claim both for and against the presence of AI. Thus the test basically cannot be relied upon in any reasonable way.


Re: Turing Test flaw?
posted on 04/09/2002 10:35 PM by lottomagic@net2000.com.au



Surely it would not be hard to pass the Turing Test. Frankly I think it is a grossly outdated concept and a huge joke with severely flawed assumptions.

One fundamentally flawed assumption is the level of intellect and awareness of the person the AI simulation is trying to fool. For example, a child may be more easily fooled by an AI simulation than an adult. Thus the AI may pass the test with some people and fail it with others.

The other thing is that simply being able to answer a set of questions says very little about real intelligence. Anything can be rote-learned by computers and people. Computer AIs generally rote learn anyway. This test is extremely subjective and unreliable.

Also a snail could have more "real intelligence" than many computer AIs, yet it would most likely fail the Turing test, as would a dog, a monkey and possibly even a dolphin.

Sorry but Turing seems to be outdated and should not be seriously entertained as a reliable measure of Artificial Intelligence.

I mean really, wake up and smell the coffee!

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 02/06/2002 6:44 PM by tbrownell@yahoo.com


I wrote an article entitled "Why the Loebner Prize Will Never Be Won" you can read about it at http://www.LFReD.com/loebnerarticle.htm

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 02/06/2002 7:16 PM by pfunk45@yahoo.com


You mention it being important that the AI needs to have a human history. I think this is critical! An AI without life experience is essentially a newborn child. It will need sensory inputs to learn the "right" things.

What about letting it loose on the Internet and letting it learn what it wants to? This could be extremely dangerous. Would you let a child loose in the slums of Detroit to learn about the world? It would, but the hard lessons it may learn are very different than a child nurtured in a caring home.

I think we'll have to be very careful to feed this AI the right "diet" of knowledge if we want it to grow in what humans consider a morally positive direction.

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 02/06/2002 10:05 PM by Tubadecuba@aol.com

I've seen it mentioned somewhere around here that with sufficient non-invasive brain scanning technology, it would be possible to extract memories from any living (or dead) brain. I think the best way for us to create the most humanlike AI would be to use a human memory for our Turing Test candidate. That would solve our problem of not making our AI "too smart" to pass this test. However, I have doubts about how we would be able to inject a lifetime of memory into an already functioning brain. I am sure that the growth of the human brain depends on memory as it comes in, so if we reverse engineer the human brain and program our own, we couldn't just inject an adult's memory into it (that would cause a massive headache), seeing as we respond to information as it streams in. And if we altered our artificial brains so that you could input a knowledge base in one sitting, your AI might stray from human, and if that were to happen, it would fail the test.

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 04/13/2002 3:28 PM by doron@rogers.com

Just a couple of quick observations:

The capacity to do a brain scan that will allow reverse engineering of a specific brain will not be available for at least a few decades, if not centuries. Let's say it is somehow enough to just extract the physical organization of the neurons in a live brain (which is obviously insufficient - we need to know the chemistry at each synapse as well); we will need to obtain a scan at a resolution of 0.1 micron or better. If we scan a cube of 100x100x100 mm, this translates into 10 to the 18th power (giga-giga) data points! The scan needs to be obtained in a very short time (say 1 microsec) to guarantee that nothing moves more than 0.1 micron during the scan.
Even if the electronics and computational means to collect and process data at the required resolution and time period were to become available in the next 30 years, the physics for doing such a thing is currently completely unknown. CT and MRI scanners can do no better than 0.5 mm resolution (about 10 to the 8th power points) over tens of milliseconds, and this has improved only very gradually over the last 30 years. No other approach to high-resolution 3D scans of a live brain is even on the horizon. I just don't see how improvements of 10 orders of magnitude in resolution and 6 orders of magnitude in time will be achieved in this time frame.
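The voxel arithmetic above is easy to check with a quick back-of-envelope script (a sketch of the post's stated assumptions - cube size, resolution, and scan time - not a claim about real scanner capabilities):

```python
# Back-of-envelope check of the scan-volume figures claimed above.
# Assumptions (from the post): a 100 x 100 x 100 mm cube scanned at
# 0.1 micron resolution, acquired within 1 microsecond.

side_um = 100 * 1000           # cube edge: 100 mm expressed in microns
resolution_um = 0.1            # desired voxel size in microns

voxels_per_edge = side_um / resolution_um       # 1e6 voxels per edge
total_voxels = voxels_per_edge ** 3             # ~1e18 data points

scan_time_s = 1e-6                              # 1 microsecond
acquisition_rate = total_voxels / scan_time_s   # ~1e24 points per second

print(f"total data points: {total_voxels:.1e}")
print(f"required acquisition rate: {acquisition_rate:.1e} points/s")
```

Setting those ~10^18 points against the roughly 10^8 points the post says current scanners deliver is where the ten-orders-of-magnitude gap comes from.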

So - forget about brain scans and reverse engineering by 2029.

Another observation is that whatever artificial intelligence system is developed by that date, it will have to undergo a substantial period of learning before it can succeed in passing the test. If the learning is self-learning (ie, experiential), it will take quite a few years for the system to acquire the full scale of an adult human's world and common-sense knowledge. Alternatively, if the knowledge is "downloaded" to it, human experiential knowledge will have to be manually translated into a binary code for the download. This will also take a long period, especially since it will need to be extensively tested and debugged to ensure entry errors are caught and corrected. For example, to answer the question "What would you do if I dropped an X on your foot?" for all possible nouns X (eg, muffin, lead pipe, knife, nephew), the system will have to be correctly aware of the combination of properties of typical samples of thousands of nouns. Somebody will have to manually put that information in the system.
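The manual knowledge-entry burden described above can be made concrete with a toy sketch (every noun, property, and response here is an illustrative invention, not a real knowledge base):

```python
# Toy illustration of hand-entered common-sense knowledge: answering
# "What would you do if I dropped an X on your foot?" requires someone
# to have manually recorded the relevant properties of each noun X.
NOUN_PROPERTIES = {
    "muffin":    {"heavy": False, "sharp": False, "animate": False},
    "lead pipe": {"heavy": True,  "sharp": False, "animate": False},
    "knife":     {"heavy": False, "sharp": True,  "animate": False},
    "nephew":    {"heavy": True,  "sharp": False, "animate": True},
}

def reaction(noun: str) -> str:
    """Pick a plausible response from the stored properties."""
    p = NOUN_PROPERTIES[noun]
    if p["animate"]:
        return "worry about them, not my foot"
    if p["sharp"]:
        return "check for a cut"
    if p["heavy"]:
        return "yelp in pain"
    return "shrug it off"

print(reaction("muffin"))     # shrug it off
print(reaction("lead pipe"))  # yelp in pain
```

Scaling such a table to thousands of nouns, each with many more properties, is exactly the entry-and-debugging effort the post argues will take years.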

This brings me to my last observation: even if such an incredibly complex and knowledgeable piece of software were incrementally developed by 2029, it will take many years to find and correct enough bugs to allow it to run for two or more hours of a completely unconstrained conversational test without screwing up royally at least once, eg:
Q: "What would you feel if I threw you in the sewer?"
A: "I'm unable to answer this question due to insufficient information. Please provide more details."

With all due respect to the exponential progress of some technologies, my conclusion is that 30 years are just not enough for sufficient progress to be made.

Doron Dekel

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 04/13/2002 4:06 PM by tomaz@techemail.com

> we need to know the chemistry at each synapse as well

Not each. They are not that different.

> we will need to obtain a scan at a resolution of 0.1 micron or better.

We already have that kind of precision. Too invasive for now.

> this translates into 10 to the 18th power (giga-giga) data points!

That is not true. We need a lot less. It's not random data inside brains, but neurons with positions and connections. I think it could be stored in 10^15 bytes or less. Some computers will soon have that amount of storage.

> The scan needs to be obtained in a very short time (say 1 microsec) to guarantee that nothing moves more than 0.1 micron during the scan.

What if it moves? A LOT of neurons die every second - but the whole thinking process is not affected much.

> No other approach to high resolution 3D scans of a live brain is even on the horizon.

And will not be for decades?

> it will have to undergo a substantial period of learning before it can succeed in passing the test

'Substantial period' for an upload could be 14 days for us - and 10 thousand years for him. This is the whole point of the Singularity.

> If the learning is self-learning (ie, experiential), it will take quite a few years for the system to acquire the full scale

Let him have enough time - a thousand years! We could go on a short vacation (a week or two) meanwhile.

> Somebody will have to manually put that information in the system.

Why? I don't understand why.

> my conclusion is that 30 years are just not enough for sufficient progress to be made.

Wanna bet? :)

- Thomas

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 04/13/2002 7:14 PM by Citizen Blue

Also, the perception of what we think of as computers may change as well. What differentiates a computer from a genome may in fact be more complex than any of us could possibly fathom in the future. Are we in fact referring to the classic conception of a computer, or to addenda to it?

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 04/14/2002 3:25 AM by tomaz@techemail.com

> Also, the perception of what we think of as computers may change as well

It has already changed - we now see that we in fact think in some computable way.

In the past, we were seen as some transcendental, non-Turing non-machines ... forever distant from the primitive electronic machines.

The hope that the Earth is flat, once again, is a very thin one.

- Thomas

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 04/13/2002 7:55 PM by doron@rogers.com

>> we need to know the chemistry at each synapse as well

> Not each. They are not that different.

What I meant was: you need to know the "weight" of each synapse in contributing to the firing of the neuron to which it is connected. This varies a lot from synapse to synapse, and it is critical to know it to be able to reverse engineer a particular brain.

>> this translates into 10 to the 18th power (giga-giga) data points!

>That is not true. We need a lot less. It's not random data inside brains, but neurons with positions and connections. I think it could be stored in 10^15 bytes or less. Some computers will soon have that amount of storage.

You cannot know in advance what points to acquire, since you don't know where the dendrites and synapses are - that's what you are trying to find.

Storage of the data is not the issue, it is the need to extract an immense amount of 3D data all at once that's the problem.

I should add also that even if such a non-invasive sampling device existed, we would still need to develop sophisticated algorithms for reliably and automatically tracing the dendrites in the data and building the connectivity map. This will take a few more years as well.

>> The scan needs to be obtained in a very short time (say 1 microsec) to guarantee that nothing moves more than 0.1 micron during the scan.

>What if it moves? A LOT of neurons die every second - but the whole thinking process is not affected much.

If it moves, the dendrites will become blurry in the 3D data and you will not be able to tell which two neurons they connect.

>> No other approach to high resolution 3D scans of a live brain is even on the horizon.

>And will not be for decades?

No one can say that, but it takes a decade or two, even today, for a fundamental discovery to be converted into practical, reliable devices. And, as I said, no such fundamental discovery is currently on the horizon.

>> it will have to undergo a substantial period of learning before it can succeed in passing the test

> 'Substantial period' for an upload could be 14 days for us - and 10 thousand years for him. This is the whole point of the Singularity.

Experiential learning requires that the system actually undergo real-life experiences (eg, participate in social interactions with people, see what they do in the washroom). This cannot be compressed in time, although I can imagine, perhaps, many systems acquiring experiences in parallel and then sharing them somehow. Even then, fast sharing of experiences between huge neurological simulations will be a tough technological challenge that will take many years to figure out.

>> Somebody will have to manually put that information in the system.

> Why? I don't understand why.

Before the system can make sense of what books say about human experience, it will have to know a lot of things about everyday life. Somebody will have to enter those things into the system (or, perhaps, Cyc will help - see www.cyc.com - but I doubt that it will be possible to easily translate Cyc into another storage format).

> Wanna bet? :)

Sure - how much?

- Doron


Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 04/14/2002 3:45 AM by tomaz@techemail.com

I can't win the bet. If I am right, a million-dollar bet is nothing.

I can only lose.

But then again - we uploaders are most likely correct in our optimism.

Take another approach!

A human inside an NMRI is scanned. The data stream from his head to the computer has a certain value in bauds.

The ongoing changes inside the brain are another data stream.

If the first stream is bigger, then after some (long) time we have a picture not completely different from the real one.

This approach is currently exploited in the quest for extrasolar planets.

Some star is observed for a long time. Then the best possible explanatory model is created, showing a planet or two.

The data for an upload may already be scattered around.

At least - we have more data every day.

I bet my world view. If that is not possible by 2029, I will abandon my views.


- Thomas

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 04/15/2002 2:51 AM by Citizen Blue

That's right; let us be optimists!!! It does not mean we are not realists.

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 10/28/2002 5:57 AM by Tik777

I believe the best possible test for AI would be simply to require this thing to survive on its own. Interact on its own. Get a job, take care of itself. This is how we measure adulthood in our children; why not use the same thing to measure human similarities in a machine?

I believe there will have to be an amount of chaos in its calculations to give it the ability to be loose and not always correct. And yet, for all this conversation about randomness, I never do anything at random; people are very sensually stimulated and usually have personal-gain motives involved in our decision-making. Self-preservation is a key element in human behavior; if you give a logical system this formula, it is liable to do away with anything that threatens it. Sort of like son-of-man evolutionism.

As far as the Turing test goes, I feel it is possible to fool a human on paper, with a phony story and a precision memory, unless the person issuing judgement is aware that this is highly likely a strong suit of a computer. Maybe the question should have to do with an infinite number and take away the ability to round up; remember the old department-store buffer-overflow trick. I believe if you have the ability to give a machine five senses and base algorithms around how it analyzes and interacts with these "senses", that is a more humanlike endeavor.

I also would like to reply on another subject that somebody raised: it was said that the computer with AI would be smart enough to trick the average human and would have to use fictional statements to do so. This means the machine, for its own advancement, has been taught to trick human intellect in order to be praised or accepted. A machine that is cunning enough to trick a human on purpose for self-gain is a scary idea; I don't even like human beings that use deception, let alone a machine that may see us as pollutant biological diseases that plague the earth.

PS: Humans are the only piece that really doesn't fit into the equation quite right.

Thank you,
TIK777

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 11/18/2002 4:21 AM by AC Serrano

I would strongly urge you to refrain from talking about subjects you know nothing about, as your post makes it quite clear that, like many who are interested in AI, you are woefully under-educated in the workings of the human brain.

>> we need to know the chemistry at each synapse as well

>Not each. They are not that different.

In fact, it is imperative to understand the exact chemistry of each synapse. Part of the brain's immense complexity derives from the interactions of the various neurotransmitters with the numerous receptors that correspond to each one.


>> this translates into 10 to the 18th power (giga-giga) data points!

>That is not true. We need a lot less. It's not random data inside brains, but neurons with positions and connections. I think it could be stored in 10^15 bytes or less. Some computers will soon have that amount of storage.

Just as far as the simple math is concerned, the brain contains 100 billion (10 to the 11th) neurons, and each neuron can synapse on over 1,000 other neurons, giving over 100 trillion (10 to the 14th) interconnections. When you then consider that it also matters where on each neuron the synapse is located, the storage required becomes obscene.
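The counting argument above is simple to spell out; the bytes-per-synapse figure in the second half is a hypothetical encoding, included only to connect this count to the 10^15-byte storage estimate mentioned upthread:

```python
# Connection-count arithmetic as stated in the post above.
neurons = 10**11              # ~100 billion neurons
synapses_per_neuron = 10**3   # each synapsing on over 1,000 others
total_synapses = neurons * synapses_per_neuron   # 10^14 connections

# Hypothetical storage cost: at ~10 bytes per synapse (a weight plus a
# coarse location code), the total is about a petabyte (10^15 bytes).
# Recording exact synapse position and chemistry would multiply this.
bytes_per_synapse = 10
storage_bytes = total_synapses * bytes_per_synapse

print(f"total synapses: 10^{len(str(total_synapses)) - 1}")
print(f"storage at 10 bytes/synapse: 10^{len(str(storage_bytes)) - 1} bytes")
```

Whether 10 bytes per synapse is enough is precisely the point in dispute: the more per-synapse detail matters, the more "obscene" the storage requirement becomes.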

>> The scan needs to be obtained at a very short time (say 1 microsec) to guarantee that nothing moved more than 0.1 micron during the scan.

>What if it moves? A LOT of neurons die every second - but the whole thinking process is not affected much.

That is complete and utter BS. After childhood, neuron death almost never occurs without some kind of trauma to the nervous system, be it injury, toxicity, or lack of oxygen. While it is true that it is possible to destroy sometimes very large quantities of neurons without hindering function (example: destroying all of the sensory cortex on one side of the brain usually leads to no long-term deficits), it is also possible to kill small quantities of neurons and create a huge change in thought processes (example: destroying the amygdala in both brain hemispheres will produce major emotional changes).

>> it will have to undergo a substantial period of learning before it can succeed in passing the test

>'Substantial period' for an upload could be 14 days for us - and 10 thousand years for him. This is the whole point of the Singularity.

Let's not let concepts from quantum physics seep in where they don't belong. No matter what anyone may try to say, there is no way for an "intelligence" to obtain 10,000 years of experience in any less than 10,000 years (neglecting the effects of Einsteinian time dilation).

AC Serrano

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 11/18/2002 7:33 AM by tony_b

AC,

I agree with the "gist" of your comments, which I take to be that "fully mapping a human brain" is quite out of reach for the near term.

I am not entirely in agreement that such detail is required to "understand" how the brain operates (to whatever degree that understanding is useful to the development of AI, or even new substrates for "sentience".)

I agree that even small differences in the (momentary) chemical concentrations at synaptic junctions (for instance) contribute to what makes one person an individual in thought.

But take any two people at random: they have no two "neurons" in precisely the same locations, nor precisely the same interneural "wiring", nor do they possess precisely the same chemical concentrations, and yet both people tend to stop at red lights, wait for the "walk" sign to cross a street, understand a common language, etc.

This suggests that a very great deal can be "abstracted" away from the uniqueness of individual brains, while retaining the structural features that serve to provide for intelligent interaction with the world.

Cheers! ____tony b____

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 07/18/2004 11:07 AM by nomade

Consider that every 'result' is derived from a process and contains that history. Both causality and evolution insist on the necessity of process to achieve a result. The human is the result of a process from minerals/molecules through living cells and on through stages of evolution. AI proposes that a mind equivalent to that of a human can be constructed from the mineral stage alone, bypassing the entire process of cellular evolution - the 'history' of life itself. This seems improbable; I suspect that the 'mind' arrived at by such a process would always remain primitive by our standards.

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 11/04/2002 11:41 AM by Steve

Ray mentions all sorts of technology - brain scans, nanotechnology, etc. - but fails to mention the one technological development that is going to make AI possible: quantum computing!

Re: A Wager on the Turing Test: Why I Think I Will Win
posted on 11/17/2002 9:13 PM by Dave - dwh

TIK777, I agree with much of what you had to say. I mean, I could not see anything that differed from my own "evolving" views.

cheers
David

lottomagic@net2000.com.au
www.lottomagic.com

Turing test has the sense ONLY in the narrow meaning
posted on 02/06/2003 7:10 AM by Tom Tobula

The "broad" meaning of the Turing test is nonsense. The aspiration of the broad meaning is that if a computer is "completely" interchangeable in communication, then it could also be constructed to behave the same way as a human, and could have the same (or perhaps even more intelligent) impact on (or at least judgement of) its neighbourhood as humans do. This is the BROAD sense in my point of view, as anything less is still "narrow", limited to some sort of test restrictions. It could not. I can never love a robot, no matter how intelligent he seems to be.

In fact, the Turing test is just an abstract intellectual scheme which fits some pre-arranged environment, and human life cannot be reduced to ANY fixed environment. Take a human into a new environment and he shall find new solutions to new problems. Take a learned computer into a new environment and at best it shall derive something from its past knowledge base. In many cases the derivatives shall be completely different from the human solution, and one shall easily discover that the opposite party IS A MACHINE.

Machines may only imitate human intelligence; they need to be fed with the current results of human understanding of reality and of the environment. Machines whose behaviour (or just judgement, to stay with Turing) is indistinguishable (in the short term) from humans' are human-dependent in the long term, and this dependency really allows one to distinguish them from humans.

(In some sense even people are "interdependent" in exchanging information; one human culture may find different solutions to similar problems if the groups are separated. Thus one easily recognizes that the other party is from another culture, as the customs of his behaviour are different. But "machines on their own" would, I believe, be easily recognizable as outside all human cultures.)

There are many mistakes in the assumptions of the broad Turing test. As mentioned, the first is time; the second is the concentration on an individual body. But human life is not only the life (and the intelligence) of the individual, but of a community and of generations. Human descendants tend to fix the mistakes of their predecessors. Not because they are more intelligent beings, but because they discover that the teachings of their predecessors do not fit the new environment, the new findings, the new organization of society, etc.

Human history is full of efforts to find the finite set of rules, the finite knowledge, the recipe for a successful and happy life. It did not work. We have found just some guidelines, which must be re-interpreted by living people again and again.

Another mistake in the broad Turing test is that "telecommunication" is limited only to a rational, symbolic mode of communication. Humans communicate a lot by non-verbal means, and even words often have another, hidden meaning according to some quite complex context (the context may be irrational too). Thus even a robot passing the Turing test is still far from communicating the same way a human does.

If I have to limit myself in communicating with the computer (e.g., I cannot even express my feelings in a full way and see at least the "rational part" of the reaction), this means I am significantly restricted in my human abilities. It is simply stupid to try to compare a computer with a man under such limited conditions in a test whose purpose is to prove that the computer is equal to the man.

Creating some other sort of machine intelligence (e.g., adaptive replication) which could manufacture itself and exist independently of people, while being clearly distinguishable from them, for good or for bad, is a different story than the Turing test.

Thus there is nothing to win. One cannot win in a discipline which is poorly defined. You may win only by narrowing the conditions.