Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0319.html

    The Age of Intelligent Machines: ELIZA Passes the Turing Test
by Seymour Papert

From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.


The following anecdote was told by Prof. S. Papert of MIT. Since the story has been transmitted several times indirectly, I'm not sure all the details are accurate; however, it's a good enough story that it probably deserves repetition even if there is not a word of truth in it!

Seymour Papert is Professor of Media Technology and Director of the Learning and Epistemology Group in the Media Laboratory at the Massachusetts Institute of Technology. He is codirector, with Marvin Minsky, of MIT's Artificial Intelligence Laboratory. His seminal work in the field of computers and education led to the development of the programming language LOGO. He is the author of Mindstorms: Children, Computers, and Powerful Ideas (1980) and Perceptrons, coauthored with Marvin Minsky (1969). This article is reprinted from the June 1968 Association for Computing Machinery SIGART (Special Interest Group on Artificial Intelligence) Newsletter.

Late one night several weeks ago an unnamed MIT student went into the A.I. Project Office to use a teletype terminal on the Project MAC time-shared computer system. When he arrived, no one was in the room and the teletype appeared to be available; however, a note over the terminal read, "Computer time reserved by Dan Bobrow." The student did not want to start working if he was going to be interrupted when Bobrow, a lecturer at MIT, arrived. Yet he also knew that Bobrow had a teletype in his home and sometimes worked from there, forgetting to cancel his office reservation.

In an attempt to clarify the situation, the student sat down at the teletype and attempted to link to Bobrow's home terminal to ask him if the office terminal was going to be free. Unfortunately, he dialed the wrong number and unknowingly became linked to a user who had left Weizenbaum's ELIZA program (sometimes known as the "psychiatrist" or "doctor" program) running. A conversation something like the following ensued.

Student: May I use your computer time from the office?

Teletype response: Do you want to use my computer time?

Student: Yes.

Teletype response: I understand.

Student: Are you planning to come into the office tonight?

Teletype response: Why do you want to know whether I am planning to come into the office tonight?

Student: Because I want to use your computer time!

Teletype response: I see.

Student: Well?

Teletype response: What do you mean, "Well"?

Student: Well, what's your answer to my questions?

Teletype response: Let's explore further why you want to use my computer time.

After about ten minutes of increasing frustration and anger, the student rushed to a telephone and dialed Bobrow's home, rousing him from a deep sleep.

Student: What the heck do you think you're doing?

Bobrow: What do you mean, "What the heck do I think I'm doing"?

I leave it to the reader to imagine the conversation that followed.




 
 

   
Mind·X Discussion About This Article:

The real story.
posted on 03/14/2002 5:36 PM by newspost@vmail.steampacket.net


From http://web.mit.edu/sts001/www/Team6/eliza.html

Re: The Age of Intelligent Machines: ELIZA Passes the Turing Test
posted on 07/01/2005 2:39 AM by ScottyDM


When I read "ELIZA Passes the Turing Test" I thought, no way! However, the responses here are very much ELIZA-like.

ELIZA can fool you only if you let it guide the conversation. Once you try to push it, it "breaks" in rather obvious ways. I would suspect that anyone administering a Turing test would know they were giving the test. To go on in this vein for ten minutes without realizing he was conversing with an ELIZA machine, the student must have been either half asleep or stupid.
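The "obvious breaks" are easy to see once you know how ELIZA works: a ranked list of keyword patterns, a pronoun-reflection table, and a content-free fallback for anything unmatched. Here is a minimal sketch of that mechanism; the rules and reflections below are invented for illustration and are not Weizenbaum's actual DOCTOR script.

```python
import re

# Illustrative ELIZA-style machinery: each rule is a keyword pattern plus a
# response template that echoes the speaker's own words back. These patterns
# are made up for this example, not taken from Weizenbaum's script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
    (re.compile(r"i want (.*)", re.I), "Why do you want {0}?"),
    (re.compile(r"are you (.*)", re.I), "Why do you ask whether I am {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads as a reply."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line: str) -> str:
    line = line.strip().rstrip("?!.")
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No keyword matched: fall back to a content-free prompt. This fallback
    # is exactly where the program "breaks" once the user pushes back.
    return "Please go on."
```

Feed it the student's lines and you get the anecdote's replies almost verbatim; feed it anything off-script ("Well?") and you get only the evasive fallback, which is why pushing the conversation exposes the trick.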

Scotty

Re: The Age of Intelligent Machines: ELIZA Passes the Turing Test
posted on 07/01/2005 8:58 AM by eldras


You find chatting with a bot better than chatting with some people.

Re: Intelligent Avatars
posted on 07/01/2005 9:45 AM by jrichard


The focus on whether a computer program such as ELIZA can pass the Turing test misses the actual trend: the development of intelligent virtual humans, or avatars.

Practical applications of this technology today necessarily have to limit the scope of conversational topics. By restricting the language to be interpreted to a controllable range, the avatar can be very useful for a specific purpose.

Suppose you know that there are avatars available that do a very good job at ten different things. By having your personal avatar recognize your wish to interact with one of those ten avatars, you can say something like: "I want to make a plane reservation."

Your personal avatar immediately puts you in contact with a 'plane reservation avatar'. In this way, over time, the interaction with more and more intelligent avatars can spread from ten to a hundred and then to a thousand.
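The hand-off described above is, at bottom, a dispatch table keyed on recognized intents. A minimal sketch, assuming a hypothetical registry of specialists (both the trigger phrases and the agent names here are invented for illustration):

```python
# Hypothetical registry mapping trigger phrases to specialized agents.
# Both the phrases and the agent names are invented for this example.
SPECIALISTS = {
    "plane reservation": "plane-reservation-avatar",
    "hotel": "hotel-booking-avatar",
    "weather": "weather-avatar",
}

def route(utterance: str) -> str:
    """Return the specialist to hand off to, or keep the personal avatar."""
    text = utterance.lower()
    for phrase, specialist in SPECIALISTS.items():
        if phrase in text:
            return specialist
    # Nothing recognized: the personal avatar keeps the conversation.
    return "personal-avatar"
```

Growing from ten specialists to a thousand then amounts to growing the registry, while each specialist keeps its own narrow, controllable language range.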

Re: Intelligent Avatars
posted on 07/01/2005 10:54 PM by ScottyDM


ELIZA-like language parsing has its applications, and can be a commercial success. But I feel language parsing and rule sets will not lead to intelligence.



Now for something different: JRichard, I object to your sloppy use of the term "avatar".

Originally, "avatar" meant the incarnation of a Hindu deity as either human or animal. That is, the form or appearance the deity takes when it walks among humanity.

We technologists have appropriated the term to mean our representation when we are somewhere else (cyberspace, VR, telecommuting, or wherever). A video feed is not an avatar, because it is a live image of the real person; neither is someone's voice heard over the phone. A terrific example of an avatar is when Ray Kurzweil gives a talk and Ramona is projected on a screen behind him, mimicking his movements and translating his voice to hers. Ramona is his avatar. Likewise, an "AI" (however you want to define that) can have an avatar too.

There is a very clear dividing line between the avatar and the "intelligence engine" (human or AI) driving the avatar. The avatar is part of the communication channel, although in typed communications few people would think of the text as the avatar. However, for a phone-based reservation system you could think of the synthesized voice as an avatar of sorts, although it is also the communications medium (speech). For example, if you go to http://www.pandorabots.com/botmaster/en/mostactive you will find that ALICE has an avatar (an animated face) and Evil does not.

In a rich communication channel such as VR, an avatar might be a human, a hummingbird, or a whale. This will radically affect how the "intelligence engine" can interact with the virtual environment, such as where it can go. So an avatar also embodies part of the interface into the environment (in this case VR): body size, mass, speed, flying ability, etc. Any "intelligence engine" (human or AI) should be able to use any avatar. That is, we can have ELIZA-class bots in VR and we can have humans in VR, and they can appear as anything depending on which avatars they have access to.

Where you've used "avatar", I'd use either "AI", as in "...intelligent virtual humans or AIs," or "personal agent," as in "By having your personal agent recognize your wish..." Unfortunately, "AI" is such a generic term. I suppose in the context of an "ELIZA-class bot" one could say "scripted natural language processing system," but that's a bunch of typing.

It's just semantics, but I feel "avatar" has a reasonably tight definition, and your usage did not fit.

Scotty