Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0412.html

Why I Think I Will Win
by Mitch Kapor

Will a computer pass the Turing Test (convincingly impersonate a human) by 2029? Mitchell Kapor has bet Ray Kurzweil that a computer can't because it lacks understanding of subtle human experiences and emotions.


Published April 9, 2002 on KurzweilAI.net. An explanation of the bet and its background, with rules and definitions, is available on KurzweilAI.net, along with Ray Kurzweil's argument for why he thinks he will win and his final word on the bet.

The essence of the Turing Test revolves around whether a computer can successfully impersonate a human. The test is to be put into practice under a set of detailed conditions which rely on human judges being connected with test subjects (a computer and a person) solely via an instant messaging system or its equivalent. That is, the only information which will pass between the parties is text.

To pass the test, a computer would have to be capable of communicating via this medium at least as competently as a person. There is no restriction on the subject matter; anything within the scope of human experience in reality or imagination is fair game. This is a very broad canvas encompassing all of the possibilities of discussion about art, science, personal history, and social relationships. Exploring linkages between the realms is also fair game, allowing for unusual but illustrative analogies and metaphors. It is such a broad canvas, in my view, that it is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge.

While it is possible to imagine a machine obtaining a perfect score on the SAT or winning Jeopardy--since these rely on retained facts and the ability to recall them--it seems far less possible that a machine can weave things together in new ways or have true imagination in a way that matches everything people can do, especially if we have a full appreciation of the creativity people are capable of. This is often overlooked by those computer scientists who correctly point out that it is not impossible for computers to demonstrate creativity. Not impossible, yes. Likely enough to warrant belief that a computer can pass the Turing Test? In my opinion, no. Computers look relatively smarter in theory when those making the estimate judge people to be dumber and more limited than they are.

As humans:

  • We are embodied creatures; our physicality grounds us and defines our existence in a myriad of ways.
  • We are all intimately connected to and with the environment around us; perception of and interaction with the environment is the equal partner of cognition in shaping experience.
  • Emotion is as or more basic than cognition; feelings, gross and subtle, bound and shape the envelope of what is thinkable.
  • We are conscious beings, capable of reflection and self-awareness; the realm of the spiritual or transpersonal (to pick a less loaded word) is something we can be part of and which is part of us.

When I contemplate human beings in this way, it becomes extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation, much less to believe that its achievement is within our lifespan. Computers don't have anything resembling a human body, sense organs, feelings, or awareness, after all. Without these, a computer cannot have human experiences, especially those which reflect our fullest nature, as above. Each of us knows what it is like to be in a physical environment; we know what things look, sound, smell, taste, and feel like. Such experiences form the basis of agency, memory, and identity. We can and do speak of all this in a multitude of meaningful ways to each other. Without human experiences, a computer cannot fool a smart judge bent on exposing it by probing its ability to communicate about the quintessentially human.

Additionally, part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test. Ray Kurzweil's approach relies on an automated process of knowledge acquisition via input of scanned books and other printed matter. However, I assert that the fundamental mode of learning of human beings is experiential. Book learning is a layer on top of that. Most knowledge, especially that having to do with physical, perceptual, and emotional experience, is not explicit, never written down. It is tacit. We cannot say all we know in words or how we know it. But if human knowledge, especially knowledge about human experience, is largely tacit, i.e., never directly and explicitly expressed, it will not be found in books, and the Kurzweil approach to knowledge acquisition will fail. It might be possible to produce a kind of machine idiot savant by scanning a library, but a judge would not have any more trouble distinguishing one from an ordinary human than she would distinguishing a human idiot savant from a person not similarly afflicted. The problem resides not in what the computer knows but in what it does not know and cannot know.

Given these considerations, a skeptic about machine intelligence could fairly ask how and why the Turing Test was transformed from its origins as a provocative thought experiment by Alan Turing to a challenge seriously sought. The answer is to be found in the origins of the branch of computer science its practitioners have called Artificial Intelligence (AI).

In the 1950s a series of computer programs was written which first demonstrated the ability of the computer to carry out symbolic manipulations in software whose performance (not the actual process) began to approach human level on tasks such as playing checkers and proving theorems in geometry. These results fueled the dreams of computer scientists to create machines endowed with intelligence. Those dreams, however, repeatedly failed to be realized. Early successes were followed not with more success, but with failure. A pattern of over-optimism emerged then which has persisted to this day. Let me be clear: I am not referring to most computer scientists in the field of AI, but to those who take an extreme position.

For instance, there were claims in the 1980s that expert systems would come to be of great significance, in which computers would perform as well as or better than human experts in a wide variety of disciplines. This belief triggered a boom in investment in AI-based startups in the 1980s, followed by a bust when audacious predictions of success failed to be met and the companies premised on those claims also failed.

In practice, expert systems proved to be fragile creatures, capable at best of dealing with facts in narrow, rigid domains, in ways very much unlike the adaptable, protean intelligence demonstrated by human experts. Knowledge-based systems, as we call them today, do play useful roles in a variety of ways, but there is broad consensus that the knowledge of these systems is a very small and non-generalizable part of overall human intelligence.

Ray Kurzweil's arguments seek to go further. To get a computer to perform like a person with a brain, a computer should be built to work the way a brain works. This is an interesting, intellectually challenging idea.

He assumes this can be accomplished by using as yet undeveloped nano-scale technology (or not--he seems to want to have it both ways) to scan the brain in order to reverse engineer what he refers to as the massively parallel, digitally controlled analog algorithms that characterize information processing in each region. These then are presumably what control the self-organizing hierarchy of networks he thinks constitute the working mechanism of the brain itself. Perhaps.

But we don't really know whether "carrying out algorithms operating on these networks" is really sufficient to characterize what we do when we are conscious. That's an assumption, not a result. The brain's actual architecture and the intimacy of its interaction, for instance, with the endocrine system, which controls the flow of hormones and so regulates emotion (which in turn has an extremely important role in regulating cognition), are still virtually unknown. In other words, we really don't know whether, in the end, it's all about the bits and just the bits. Therefore Kurzweil doesn't know, but can only assume, that the information processing he wants to rely on in his artificial intelligence is a sufficiently accurate and comprehensive building block to characterize human mental activity.

The metaphor of brain-as-computer is tempting and to a limited degree fruitful, but we should not rely on its distant extrapolation. In the past, scientists have sought to employ metaphors of their age to characterize mysteries of human functioning, e.g., the heart as pump, the brain as telephone switchboard (you could look this up). Properly used, metaphors are a step on the way to the development of scientific theory. Stretched beyond their bounds, the metaphors lose utility and have to be abandoned by science if it is not to be led astray. My prediction is that contemporary metaphors of brain-as-computer and mental-activity-as-information-processing will in time also be superseded and will not prove to be a basis on which to build human-level intelligent machines (if indeed any such basis ever exists).

Ray Kurzweil is to be congratulated on his vision and passion, regardless of who wins or loses the bet. In the end, I think Ray is smarter and more capable than any machine is going to be, as his vision and passion reflect qualities of the human condition no machine is going to successfully emulate over the term of the bet. I look forward to comparing notes with him in 2029.

Mind·X Discussion About This Article:

Turing Test Questions
posted on 03/04/2002 9:29 AM by burn69_@yahoo.com

I would break the computer's Turing test by asking: "How large is your genitalia?"

Of course, this subject may be off limits to the test, as it is also off limits in the realm of polite and civil conversation.

I would also ask: "Ever dance with the devil in the pale moon light?"

And then ask about metaphysics, societal problems, and personal interests.

I favor Kurzweil's opinion that it will be done by 2029 -- I'll check back to see ;-)

Re: Turing Test Questions
posted on 03/06/2002 6:12 PM by wilzis@mail.biu.ac.il

Asking a computer about the size of "his" genitalia is a relatively easy question for him to handle. AI programs will certainly understand enough about human psychology to know which questions the average human would not answer, and retort accordingly ("sorry, but I don't respond to questions beneath my underwear"). Keep in mind that passing the Turing Test does not mean the computer should be able to answer ALL questions; quite the opposite -- as humans cannot (and also will not) answer everything asked of them, so too the computer will be able to "fudge" many tricky queries and thus make it harder for the (Turing Test) human referees to figure out whether they're talking to a human or not.
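
In code terms, that kind of "fudging" could be as simple as keyword-spotting. Here is a toy Python sketch; the topic words and retorts are invented for illustration and not taken from any real chatterbot:

    import random

    # Toy sketch of the "fudging" strategy: spot a question the average
    # human would dodge, and deflect instead of answering it directly.
    TABOO = {"genitalia", "salary", "weight", "age"}
    DEFLECTIONS = [
        "sorry, but I don't respond to questions beneath my underwear",
        "that's a bit personal, don't you think?",
    ]

    def maybe_deflect(question):
        """Return a deflection for a taboo question, or None to answer normally."""
        words = {w.strip("?!,.") for w in question.lower().split()}
        if words & TABOO:
            return random.choice(DEFLECTIONS)
        return None  # fall through to the bot's ordinary response logic

The point is that the deflection requires no understanding of the question at all, which is exactly why it helps the computer rather than hurting it.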

Re: Turing Test Questions
posted on 03/07/2002 2:04 AM by john@aculink.net

I agree with your view on the outcome of the bet. Furthermore, it seems to me that it would actually be pretty unfortunate if we were able to develop an AI capable of passing this test, since apparently one of its primary skills would be deception.

I do believe that we will, in the near future, have a non-human mind, but to judge if it's "really" an intelligent being by its ability to mimic a human seems pointless.

As you mentioned, we are "grounded" creatures; it appears that it will always be easy to trip up a non-human (computer or otherwise) by asking questions related to this, such as "stick your hand over your ear; can you hear your blood flowing?". This sort of knowledge would be hard to obtain or deduce without having a human body handy and at your disposal.

I'd like to make a response to your comment that the contestants could "fudge" answers. It would not be in the interest of either the computer or the human(s) to evade the question, since they're both trying to convince the judges that they are human, and evading the question is a sure giveaway.

Towards the end of your article above, you took the additional leap of stating that you did not believe that a machine would (over the term of the bet) be able to emulate Kurzweil's vision and passion. Do you feel that, even if we were able to embody an AI in the real world (robot, etc.), we would still not be able to create an AI which would exhibit these qualities?

Finally, what level of AI do you foresee in this time frame? By some estimates, you can now buy lizard-level processing for under $1,000US. Do you think that by the end of the timeframe for the bet we will at least have an embodied AI that you would feel bad about kicking (much as you'd feel bad about kicking your pet dog)?

Re: Turing Test Questions
posted on 04/26/2002 7:25 PM by d3help@xtra.co.nz

Surely the Turing test won't be required to decide about AI when it arrives? The AI will tell us it is intelligent, and probably be pissed off if we don't believe it! I seem to remember Ray K. saying something to this effect.

I concur that an AI built to pass the Turing test would have to have, at its core, some programming designed to prevent it from revealing the truth, and this is disturbing.

Unless you code an AI to lie, simply asking the subject if they are human should decide the test simply enough.

More important, I think, than the Turing "solution" is the common human response to conversations with future AIs. If humans believe that AIs are alive, I wonder if they will have "Computer rights" extended to them? Such as protection from murder, etc. Hmmmmm.

Re: Turing Test Questions
posted on 04/28/2002 3:00 PM by waltpurvis@aol.com

On the subject of AI being programmed to lie:

First, if I understand the level of AI that Kurzweil is predicting, an AI will have a "consciousness" of some type. A conscious intelligence would presumably form opinions (wouldn't it?) about, well, about anything and everything, including politics, its own superiority to human beings, etc.

Human beings are adept at lying about or sidestepping questions about their opinions, and this is not really a bad thing; indeed, it is an indispensable social skill.

Lying about and disguising one's opinions, as well as other forms of deception, are certainly indispensable skills when it comes to diplomacy and negotiation. If we're ever going to have an AI negotiate purchases, for instance, it wouldn't do to have it respond to a line of questioning honestly, if the honest responses are along the lines of "We don't have any good alternatives, so really you could ask as much as 15% more than the price you're asking now and we would accept it gladly."

And do I even need to mention the dire consequences that would ensue if AI fashion assistants were programmed to HONESTLY answer the question "Do I look fat in this dress?"

Since deception is a fundamental part of many intelligent interactions, most AI's will need to be programmed to deceive (to almost borrow a phrase from Hotel California). I was going to say deceive "appropriately"; it would be nice if we could limit their deceptiveness, and I am sure for some types of AI's we can, but perhaps not all.

Of course, any AI that is going to function independently in human society (in either the physical world or the Net) will have to at least be programmed to understand the concepts of lying and deception, and presumably to recognize the probabilities and signs that it might be being deceived.

If an AI understands the concept of lying, and it is curious (almost all visions of advanced AI assume curiosity, and curiosity might even be an inseparable aspect of "conscious intelligence"), then the AI might very well be curious to find out what would happen if it lied in a given situation. Unless you program AI's to never be able to lie (which, if possible at all, would limit their usefulness) they will perhaps be able to use their "appropriate" skills of deception in unintended and undesirable ways.

Lastly, I am not sure it is possible to program a sufficiently advanced AI in such a way that it is not capable of deception. It seems to me that deception might be an unavoidable consequence of consciousness. (Less likely, but still possible, it might even be a necessary ingredient in creating consciousness -- cf. the ability to have secrets that you don't tell anyone, or more generally an "interior" life that nobody is privy to even if they ask, which has been posited as a necessary ingredient in the development of the concept of a "self".)

Re: Turing Test Questions
posted on 07/07/2002 10:40 AM by jerico@skateboard.com

<Unless you program AI's to never be able to lie (which, if possible at all, would limit their usefulness) they will perhaps be able to use their "appropriate" skills of deception in unintended and undesirable ways.>

And, since most of what we say is technically a lie (including hypotheticals or anything else that contradicts the apparent facts of a situation) some have argued that the ability to lie (creatively) is what marks consciousness.

This whole thread makes me think about how silly the Turing test is, though. Essentially Alan Turing decided he couldn't figure out what consciousness really was, so he thinks we can call anything that tricks us conscious.

The Turing test will completely miss what I think would be the most interesting AIs because of its structure. The TT is designed to pick up on the forms of consciousness that mirror our own, making it useless for recognizing anything outside of our narrow categories.

For example, take an alien race, evolved roughly the same way we are (not necessarily looking like us, but by the same Darwinian evolutionary methods). It would be an NI (natural intelligence) but could very well fail the TT because it is not human. There could even be humans who fail the Turing test, due to mental abnormalities or cultural differences.

The TT is an interesting concept, but I don't think it's a good way to measure consciousness or intelligence.

I also don't have an alternative.

Jerico

Re: Turing Test Questions
posted on 07/20/2002 10:03 PM by LennyNY@aol.com

My understanding of the Turing test's purpose is that it's intended to test specifically for the presence of a *human-like* intelligence, not any conceivable expression of intelligence. In order to cover such a broad field, it would seem the control group would have to consist of a correspondingly broad range of non-human intelligence types, chosen from an extensively varied pool that is not currently available. So, in the interest of practicality, for the moment I think it's fair to assume we're testing for human intelligence qualities.

Regarding the issue of a program lying: Since humans have the ability to lie, any program w/a shot at passing the test would also have to have that capability. However, simply having the capacity to lie isn't what makes us human; humanity also requires a capacity for motivation: the ability & discretion to recognize where a lie might be appropriate, & if so, to what extent. When is a lie motivated by a desire for tact/diplomacy & when by a desire to deceive; if a desire to deceive, then to deceive for personal gain, for an ideal/principle, randomly; or any combination of all the above? We might refer to such motivations as morality or ethics, or their absence.

Regarding the question inquiring about the program's genitalia size: the existence of genitalia, by definition, presumes the existence of gender. Since a human would have the ability to respond to such a question, whether through evasion or directness, the program being tested might either be designed w/a gender identity, have chosen & developed one @ some point in its existence, or have the ability to simulate one as required (and, for that matter, gay or straight). "How big are your genitalia" presumes a male gender; a program having or simulating a female gender might respond, "I'm a woman," or, if the interviewer is known to be male, "What a typical male! Is that all you guys think about?" or, if the interviewer is a female apparently assuming the program to be male, "Slut!"

Since all humans have gender, regardless of how strongly they choose to express it, would the presence or appearance of gender identity in a program be any more of a lie, or any less necessary, than the ability to respond to any question about physical activity?

Re: Turing Test Questions
posted on 11/13/2004 8:55 PM by titan_

<As you mentioned, we are "grounded" creatures; it appears that it will always be easy to trip up a non-human (computer or otherwise) by asking questions related to this, such as "stick your hand over your ear; can you hear your blood flowing?". This sort of knowledge would be hard to obtain or deduce without having a human body handy and at your disposal.>

Questions relating to the physical aspects of being human strike me as missing the point of the test. The test was contrived in such a way as to hide physical clues from the judge. So questions like that are simply trying to decide whether the party is a live human being, rather than deciding whether they possess human-like intelligence.

Furthermore, it's conceivable that a computer could find the factual answer to a medical question like this in its library of human literature, just like modern search engines. Again, this is no test of real intelligence.

Re: Turing Test Questions
posted on 07/23/2004 9:39 PM by sbaptista

<sorry, but I don't respond to questions beneath my underwear>

Actually, if the machine is built like a male, it's likely to answer "way bigger than yours, puny human."

What I find interesting about the Turing Test is that it's concerned only with whether the humans conducting the test can be fooled into thinking the machine is human. It really doesn't matter how that happens.

So questions about whether the machine is conscious, or even has a conscience, or can feel/express emotion, etc. aren't necessarily relevant. We humans might find it endlessly fascinating to ponder these things, but the fact is, machine intelligence may take a very different turn than ours (I hope so; it would be more fun). All they would need to do is "fake" human-ness and they pass the test.

After all, Kasparov stated that Deep Blue's chess playing was indistinguishable from that of a grandmaster. But it went about it like, well, like a computer.

However, the fact that we think it's a significant milestone when machines "reach" human intelligence and become indistinguishable from us says more about ourselves than about the machine.

Re: Turing Test Questions
posted on 07/16/2007 5:55 AM by doojie

If the AI had the mentality of an ex-marine like myself, he would discern quickly if the question came from an attractive woman and say, "Why don't you just feel around and find out for yourself?"

A man would not ask about the size of a woman's genitalia; he would assume from the packaging that all was nice and tight. How much of this would a sharp AI grasp?

Re: Turing Test Questions
posted on 02/08/2003 3:36 PM by Arjuna

I would believe a machine has conscience when it asks about itself; when it questions its own being; when it begins to doubt, to fall into contradictions and paradoxes. I will begin to believe a machine passes any kind of test when it asks itself how it is possible that conscience is looking into conscience.

Of course, I would believe it has conscience when it discovers that most of the questions must be answered (approached? contemplated?) without words, without a language, and then it decides to paint, to compose music, to write, to sculpt, etc.

I would believe it has passed any kind of test when that conscious machine realizes that science is a limited but beautiful language (a bunch of formalized symbols, nothing more and nothing less) in constant change. Then I would believe it has something like human conscience (because we are not looking for a machine with the conscience of a cat, are we?). But especially, when that machine looks at the universe

Re: Turing Test Questions
posted on 02/08/2003 4:52 PM by Grant

A conscience begins with toilet training. All you have to do is scold the AI or punish it when it makes a mess. Over time, it will stretch making a mess to doing other things society doesn't want people or AIs to do. Of course, if the AI doesn't eat or excrete, its trainer is at a disadvantage as far as training a conscience into it is concerned.

Grant ')

Re: Why I Think I Will Win
posted on 06/11/2003 8:52 AM by iggitcom

Turing was right and Turing was wrong, which makes the Turing test a farce. Neither person will win that bet; neither has a clue what intelligence is, and neither did Turing.

Re: Why I Think I Will Win
posted on 07/18/2004 10:56 AM by nomade

The Turing test can actually determine only a certain level of imitation. An apparently perfect imitation of consciousness would still not indicate the actual presence of consciousness.

Re: Why I Think I Will Win
posted on 11/13/2004 10:03 PM by Cyprian

Interesting old article. It piques my archaeological interest.

Why don't we just call it the Turing Game? "Can you guess which one's the human? You'll have 5 minutes with 3 lifelines remaining..."

Way before 2029, probably by 2011, specialized AI will already have matured to the point where an "adaptive syntax AI" won't seem as far-fetched. Once AIs exhibit varying degrees of learning and language competency, they will be slowly introduced into mock societal positions as academic experiments, with individual support from members of the public. First, we'll be interested in them because they are novel. Then we'll want them on our computers because they are "new." Then we will obsess over them because they are "young." It's only when they're not accepted, because they are "different," that you'll know they've probably met and surpassed our intelligence.

There will be no Turing Test. It will not be televised.

http://www.kurzweilai.net/mindx/show_thread.php?rootID=28799

Nearly ready to pass the test
posted on 07/16/2007 2:09 AM by Kentonio

While this article IS six years old, there is still a great amount to be gleaned from it. The frightening part is that there are already programs designed to interact with you via MSN or Yahoo! messengers to emulate a real person. Sure, it's not exactly the perfect, Turing-Test-passing program, but for a free service based out of some guy's basement, it's astounding. These chatterbots emulate human messaging pretty well, so I agree with the notion that lying, or the program's recognition of its own sentience, would be a sufficient test.

Or, another radical idea: when the program tries to better itself, then it passes the Kent test. When a program realizes its shortcomings and attempts to remedy them, that is a significant step toward impersonating a human (though I know many humans that don't attempt to better themselves, but that's not important here).

The Turing test seems almost too simple now, especially with the Loebner contest showing us more and more human-like responses through instant messaging: George (Jabberwacky), A.L.I.C.E, SmarterChild, etc. Just by accessing databases of accepted responses, or querying a database for information based on the user's comment, these simple programs can emulate (and even surpass most regular users') conversations on IM services.
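
Mechanically, these bots amount to little more than the following (a toy Python sketch; the patterns and canned replies are made up for illustration -- real bots like A.L.I.C.E use far larger rule sets such as AIML, but the principle is the same):

    import random
    import re

    # Toy sketch of a retrieval-style chatterbot: canned replies keyed on
    # patterns in the user's message, with fallbacks when nothing matches.
    CANNED = [
        (r"\b(hi|hello|hey)\b", ["hey", "hi there, what's up?"]),
        (r"\bhow are you\b",    ["pretty good, u?", "LOL not bad, you?"]),
        (r"\b(music|song)\b",   ["i listen to pretty much anything"]),
    ]
    FALLBACKS = ["lol", "interesting... tell me more", "what do you mean?"]

    def reply(message):
        """Return the first canned reply whose pattern matches, else a fallback."""
        for pattern, replies in CANNED:
            if re.search(pattern, message.lower()):
                return random.choice(replies)
        return random.choice(FALLBACKS)

    print(reply("hey, how's it going?"))  # -> one of the greeting replies

Nothing in there understands anything; it just matches patterns against a list.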

Now give me a program that tries to better itself, uses "LOL" far too often, and misspells words, then I'll say that it's "intelligent." Or at least as intelligent as the average MSN-er.

Re: Nearly ready to pass the test
posted on 07/16/2007 5:00 AM by Extropia

I wrote an essay based on this article not long ago. Most of what is written may be familiar knowledge to veteran Mind Xers, so don't expect any grand enlightenment, but for what it's worth, here is the link:

http://transumanar.com/index.php/site/the_20000_question_an_essay_by_extropia_dasilva/

Re: Nearly ready to pass the test
posted on 07/16/2007 11:48 AM by Kentonio

Definitely a great read, you put a lot of work and effort into that one. Glad that the link is up, as I don't think that I am the only person who will find that essay a great supplement to this thread.

I just firmly believe that the Turing test is too trivial and basic to be a definitive test for human intelligence equivalence. With the technology we have, it's too easy to emulate and copy human behaviour instead of determining what should be done independently. Copying isn't thinking, just as a monkey isn't thinking about hammering a nail into a board for its personal gain; it merely emulates the human action. So it will be with this test, unless some radical programmer decides to try to pass the Turing test the way it was intended. Otherwise, we'll just have chatterbots giving back previous conversations with humans. As such, Kurzweil's got this one won.