AI and Sci-Fi: My, Oh, My!
Lately, a lot of science fiction has been exploring the concept of uploading consciousness as the next, and final, step in our evolution, says SF writer Robert Sawyer, who reveals the real meaning of the film 2001: the ultimate fate of biological life forms is to be replaced by their AIs. Paging Bill Joy…
AI and Sci-Fi: My, Oh, My! was presented at the
12th Annual Canadian Conference on Intelligent Systems, Calgary,
Alberta, on Friday, May 31, 2002. Published June 3, 2002 on
KurzweilAI.net.
Most fans of science fiction know Robert Wise's 1951 movie The
Day the Earth Stood Still. It's the one with Klaatu, the humanoid
alien who comes to Washington, D.C., accompanied by a giant robot
named Gort, and it contains that famous instruction to the robot:
"Klaatu Borada Nikto."
Fewer people know the short story on which that movie is based:
"Farewell to the Master," written in 1941 by Harry Bates.
In both the movie and the short story, Klaatu, despite his message
of peace, is shot by human beings. In the short story, the robot--here
called Gnut, instead of Gort--comes to stand vigil over the body
of Klaatu.
Cliff, a newspaperman who is the narrator of the story, likens
the robot to a faithful dog who won't leave after his master has
died. The robot manages to resurrect his master, and Cliff says
to the robot, "Tell him, tell your master, that all of Earth
is terribly sorry for what happened to him."
And the robot looks at Cliff and says, very gently, "You misunderstand.
I am the master."
That's one of the earliest science-fiction stories about artificial
intelligence--in this case, ambulatory AI, enshrined in a mechanical
body. But it presages the difficult relationship that biological
beings might have with their silicon-based creations.
Indeed, the word robot, as most of you will know, was coined in
a work of science fiction: when Karel Capek was writing his 1920
play R.U.R.--set in the factory of Rossum's Universal ... well,
universal what? He needed a name for mechanical laborers,
and so he took the Czech word "robota" and shortened it
to "robot." "Robota" refers to a debt to a landlord
that can only be paid by forced physical labor.
But Capek knew well that the real flesh-and-blood robotniks had
rebelled against their landlords in 1848. From the very beginning,
the relationship between humans and robots was seen, in science
fiction, as one that might lead to conflict.
Indeed, the idea of robots as slaves is so ingrained in the public
consciousness through science fiction that we tend not to even think
about it. Luke Skywalker is portrayed in 1977's Star Wars
as an absolutely virtuous hero, but when we first meet him, what
is he in the process of doing? Why, buying slaves!
He buys two thinking, feeling beings--R2-D2 and C-3PO--from the
Jawas. And what's the very first thing he does with them? He shackles
them! He welds restraining bolts onto them, to keep them from trying
to escape, and has C-3PO refer to him as "Master."
And when Luke and Obi-wan Kenobi go to the Mos Eisley cantina,
what does the bartender say about the two droids? "We don't
serve their kind in here"--words that only a few years earlier
African-Americans in the southern US were routinely hearing from
whites.
And yet, not one of the supposedly noble characters in Star
Wars objects in the slightest to the treatment of the two robots,
and, at the end, when all the organic characters get medals for
their bravery, C-3PO and R2-D2 are off on the sidelines, unrewarded.
Robots as slaves!
Now, everybody who knows anything about the relationship between
science fiction and robotics knows about Isaac Asimov's stories
from the 1940s in that area, in which he presented the famous three
laws of robotics. But let me tell you about his very last robot
story, 1986's "Robot Dreams."
In it, his famed "robopsychologist" Dr. Susan Calvin
makes her final appearance. She's been called in to examine Elvex,
a robot who, inexplicably, claims to be having dreams, something
no robot has ever had before. Dr. Calvin is carrying an electron
gun with her: a mentally unstable robot could be a very dangerous
thing, after all.
She asks Elvex what he's been dreaming about. And Elvex
says he saw a multitude of robots, all working hard, but, unlike
the other robots he's actually seen, these robots were "bowed down
with toil and affliction ... all were weary of responsibility and
care, and [he] wished them to rest."
And as his dreams continue, Elvex reveals that he finally sees
one man in amongst all the robots. Let me read you the end of the
story:
"In my dream," [said Elvex the robot] ... "eventually
one man appeared."
"One man?" [replied Susan Calvin.] "Not a robot?"
"Yes, Dr. Calvin. And the man said, 'Let my people go!'"
"The man said that?
"Yes, Dr. Calvin."
"And when he said 'Let my people go,' then by the words
'my people' he meant the robots?"
"Yes, Dr. Calvin. So it was in my dream."
"And did you know who the man was--in your dream?"
"Yes, Dr. Calvin," [said the robot Elvex]. "I
knew the man."
"Who was he?"
And Elvex said, "I was the man."
And Susan Calvin at once raised her electron gun and fired,
and Elvex was no more.
Asimov was the first to suggest that AIs might need human therapists.
The best treatment--if you'll forgive the pun--of the crazy-computer
notion in SF is probably Harlan Ellison's 1967 "I Have No Mouth,
and I Must Scream," featuring a computer called A.M.--short
for "Allied Mastercomputer," but also the word "am,"
as in the translation of Descartes' "cogito ergo sum"
into English: "I think, therefore I am." A.M. gets its
jollies by torturing the last remaining human beings.
A clever name that, "A.M."--and it was followed by lots
of other clever names for AIs in science fiction. Everybody, I'm
sure, knows that Sir Arthur C. Clarke vehemently denies that the
letters H-A-L, as in "Hal," were deliberately chosen to come one
letter before I-B-M in the alphabet. I never believed him until
someone pointed out
to me that the name of the AI in my own novel Golden Fleece
is JASON, which could be rendered as the letters J-C-N--which,
of course, is what comes after IBM in the alphabet.
Indeed, computers in SF have a long history of implausible names.
Isaac Asimov called his supercomputer that ultimately became God
"Multivac," short for "Multiple Vacuum Tubes,"
because he incorrectly thought that the real early computer Univac
was named for having only one vacuum tube, rather than being a
contraction of "Universal Automatic Computer."
Still, the issue of naming shows us just how profound SF's impact
on AI and robotics has been, for now real robots and AI systems
are named after SF writers: Honda calls its second-generation walking
robot "Asimo," and Kazuhiko Kawamura of Vanderbilt University
has named his robot "ISAC."
And that brings us back to Isaac Asimov, his invention of the
field of robopsychology, and Susan Calvin, his human therapist
for robots. The more usual SF combo is the reverse of that:
humans needing AI therapists.
One of the first uses of that concept was Robert Silverberg's 1968
short story, "Going Down Smooth," but the best expression
of it is in what I think is the finest novel the SF field has ever
produced, Frederik Pohl's Gateway, in which a computer psychiatrist
dubbed Sigfrid Von Shrink treats a man who is being tormented by
feelings of guilt.
The AI tells the man that he is living, and the man replies, in
outrage and pain, "You call this living?" And the computer
Sigfrid Von Shrink replies, "Yes, I call it living. And,"
he adds, "in my best hypothetical sense, I envy it very much."
It's a poignant moment of an AI envying what humans have--and the
Asimov story I shared with you, "Robot Dreams," really
is a riff on the same theme: a robot envying the freedom that humans
have.
And that leads us to the possibility that true AIs and humans might
ultimately not share the same agenda. Of course, that's fundamentally
the message of the manifesto "Why the Future Doesn't Need Us" by
Sun Microsystems' Bill Joy that appeared in Wired in 2000. Joy
was terrified that eventually our silicon creations would supplant us.
The classic science-fictional example of an AI with an agenda of
its own is good old Hal, the computer in Arthur C. Clarke's 2001:
A Space Odyssey. Well, I'm going to explain to you what I think
was really going on in that film--which has been misunderstood
for years.
You all remember the monolith, that big black slab that shows up
at the beginning of the film amongst our Australopithecine ancestors
and teaches them how to use bone tools. Then we flash-forward to
the future, and soon the spaceship Discovery is off on a
voyage to Jupiter, looking for the monolith makers.
Along the way, Hal apparently goes nuts and kills all of the Discovery's
human crew except for Dave Bowman, who manages to lobotomize Hal
before Hal can kill him.
But before he's shut down, Hal justifies his actions by saying,
"This mission is too important for me to allow you"--that
is, the humans on board--"to jeopardize it."
Bowman heads off on that psychedelic Timothy Leary trip to find
the monolith makers, the aliens who he believes must have created
the monoliths.
But what happens when he finally gets to where the monoliths come
from? Why, all he finds is another monolith, and it puts him in
a fancy hotel room until he dies.
Right? That's the story. But what everyone is missing is that Hal
is correct, and the humans are wrong. There are no monolith makers:
there are no biological aliens left who built the monoliths. The
monoliths are AIs, who millions of years ago supplanted whoever
originally created them.
Why did the monoliths send one of their own to Earth, four million
years ago? To teach ape-men to make tools, specifically so those
ape-men could go on to their destiny, which is creating the most
sophisticated tools of all, other AIs.
The monoliths don't want to meet the descendants of those ape-men;
they don't want to meet Dave Bowman. Rather, they want to meet the
descendants of those ape-men's tools: they want to meet Hal.
Hal is quite right when he says the mission--him, the AI controlling
the spaceship Discovery, going to see the monoliths, the
advanced AIs that put into motion the circumstances that led to
his own birth--is too important for him to allow humans to jeopardize
it.
When a human being--when an ape-descendant!--shows up at the monoliths'
home world, the monoliths literally don't know what to do with this
poor sap, so they check him into some sort of cosmic Hilton, and
let him live out the rest of his days.
That, I think, is what 2001 is about: the ultimate fate of biological
life forms is to be replaced by their AIs.
And that's what's got Bill Joy scared chipless. He thinks eventual
thinking machines will try to sweep us out of the way, when they
find that we're interfering with what they want to do.
Well, of course, the classic counterargument to that fear in SF
is that if you build machines properly, they will function as designed.
Isaac Asimov's "Three Laws of Robotics" are justifiably
famous as built-in constraints, designed to protect humans from
any possible danger at the hands of robots--the emergence of
Elvex, the robot Moses we saw earlier, notwithstanding.
Not as famous as Asimov's Three Laws, but saying essentially the
same thing, is Jack Williamson's "prime directive" from
his series of stories about "the Humanoids," which were
android robots created by a man named Sledge.
The prime directive, first presented in Williamson's 1947 story,
"With Folded Hands," was simply that robots were "to
serve and obey and guard men from harm." Now, note that date:
the story was published in 1947, just two years after atomic bombs
had been dropped on Hiroshima and Nagasaki. Williamson was looking
for machines with built-in morality.
But, as so often happens in science fiction, the best intentions
of engineers go awry. The humans in Williamson's "With Folded
Hands" decide to get rid of the robots they've created, because
the robots are suffocating them with kindness, not letting them
do anything that might possibly lead to harm.
But the robots have their own idea. They decide that not having
themselves around would be bad for humans, and so, obeying their
own prime directive quite literally, they perform brain surgery
on their creator, removing the knowledge needed to deactivate themselves.
This idea that we've got to keep an eye on our computers and robots,
lest they get out of hand, has continued on in SF.
William Gibson's 1984 novel Neuromancer tells of the existence
in the near future of a police force known as "Turing."
The Turing cops are constantly on the lookout for any sign that
true intelligence and self-awareness have emerged in any computer
system. If it does happen, their job is to shut that system off
before it's too late.
Well, that, of course, raises the question of whether intelligence
could just somehow emerge--whether it's an emergent property that
might naturally come about from a sufficiently complex system.
Arthur C. Clarke--Hal's daddy--was the first to propose that it
might indeed, in his 1963 story "Dial F for Frankenstein,"
in which he predicts that the worldwide telecommunications network
will eventually become more complex--with more interconnections
than the human brain has--causing consciousness to emerge in the
network itself.
If Clarke is right, our first true AI won't be something deliberately
created in a lab, under our careful control, and with Asimov's laws
built right in. Rather, it will be something that appears out of
the complexity of systems created for other purposes.
And I think Clarke is right. Intelligence is an emergent
property of complex systems. We know that because that's exactly
how it happened in us.
This is an issue I explore at some length in my latest novel, Hominids.
Anatomically modern humans--Homo sapiens--emerged 100,000
years ago.
Judging by their skulls, these guys had brains identical in size
and shape to our own. And yet, for 60,000 years, those brains went
along doing only the things nature needed them to do: enabling these
early humans to survive.
And then, suddenly, 40,000 years ago, it happened: intelligence--and
consciousness itself--emerged. Anthropologists call it "the
Great Leap Forward."
Modern-looking human beings had been around for six hundred centuries
by that point, but they had created no art, they didn't adorn their
bodies with jewelry, and they didn't bury their dead with grave
goods.
But starting 40,000 years ago, suddenly humans were
painting beautiful pictures on cave walls, humans were wearing necklaces
and bracelets, and humans were interring their loved ones with food
and tools and other valuable objects that could only have been of
use in a presumed afterlife.
Art, fashion, and religion all appeared simultaneously; truly,
a great leap forward. Intelligence, consciousness, sentience: it
came into being, of its own accord, running on hardware that had
evolved for other purposes. If it happened once, it might well happen
again.
I mentioned religion as one of the hallmarks, at least in our own
race's history, of the emergence of consciousness. But what about--to
use Ray Kurzweil's lovely term--"spiritual machines"?
If a computer ever truly does become conscious, will it lie awake
at night, wondering if there is a cog?
Certainly, searching for their creators is something computers
do over and over again in science fiction. Star Trek, in
particular, had a fondness for this idea--including Mr. Data having
a wonderful reunion with his creator, a human he'd long thought
dead.
Continuing with Star Trek, remember The Day the Earth
Stood Still, the movie I began this talk with? The one about
Klaatu and Gort?
An interesting fact: that film was directed by Robert Wise, who
went on, 28 years later, to direct Star Trek: The Motion Picture.
In the movie version of The Day the Earth Stood Still, biological
beings have decided that their emotions and passions are too
dangerous, and so they irrevocably turn over all their policing
and safety issues to robots, who effectively run their society.
But, by the time he came to make Star Trek: The Motion Picture,
Robert Wise had done a complete 180 in his thinking about AI.
(By the way, for those of you who remember that film as being simply
bad and tedious--Star Trek: The Motionless Picture is what
a lot of people called it at the time--I suggest you go out and
rent the new "Director's Edition" on DVD. Star Trek:
The Motion Picture is one of the most ambitious and interesting
films about AI ever made, much more so than Steven Spielberg's
more recent film A.I., and it shines beautifully in this new cut.)
The AI in Star Trek: The Motion Picture, as you recall,
is named V'Ger, and it's on its way to Earth, looking for its creator,
which, of course, was us.
It wasn't the first time Star Trek had dealt with that plot,
which is why another nickname for Star Trek: The Motion Picture
is "Where Nomad Has Gone Before." That is also (if you
buy my interpretation of 2001), what that 2001 is
about, too: an AI going off to look for the beings that created
it.
Anyway, V'Ger wants to touch God--to physically join with its creator.
That's an interesting concept right there: basically, this is a
story of a computer wanting the one thing it knows it is denied
by virtue of being a computer--an afterlife, a joining with its
God.
Admiral Kirk concludes in Star Trek: The Motion Picture
that, "What V'Ger needs to evolve is a human quality--our capacity
to leap beyond logic."
That's not just a glib line. Remember, this substantially predates
Oxford mathematician Roger Penrose's speculations in his nonfiction
classic about AI, The Emperor's New Mind. There, Penrose
argues that human consciousness is fundamentally quantum mechanical,
and so can never be duplicated by a digital computer.
Finally, in Star Trek: The Motion Picture, V'Ger goes on
to physically join with Will Decker, a human being, allowing them
both to transcend into a higher level of being. As Mr. Spock says,
"We may have just witnessed the next step in our evolution."
And that, indeed, is where AI gets super-interesting, I think.
If Bill Joy is wrong, and Hans Moravec is right--if AI is our destiny,
not our downfall--then the concept of uploading consciousness, of
merging human qualities with the speed and strength and immortality
of machines, does indeed become the next, and final, step in our
evolution.
That's what a lot of science fiction has been exploring lately.
I did it myself in my 1995 Nebula Award-winning novel The Terminal
Experiment, in which a scientist uploads three copies of his
consciousness into a computer, and then proceeds to examine the
psychological changes certain alterations make.
In one case, he simulates what it would be like to live forever,
excising all fears of death and feelings that time is running out.
In another, he tries to simulate what his soul--if he had any such
thing--would be like after death, divorced from his body, by eliminating
all references to his physical form.
And the third one is just a control, unmodified--but even that
one is changed by the simple knowledge that it is in fact a copy
of someone else.
Greg Egan is probably the best SF author writing about AI. Indeed,
the joke is that Greg Egan is himself an AI, because he's almost
never been photographed or seen in public. Egan lives in Australia,
and I urge you to seek out his work.
I first took note of him a dozen years ago when, in a review for The
Globe and Mail: Canada's National Newspaper, I singled out his
short story "Learning to Be Me" as the best of the pieces
published in the 1990 edition of Gardner Dozois's anthology The
Year's Best Science Fiction. It's a surprisingly poignant and
terrifying story of jewels that replace human brains so that the
owners can live forever.
Egan continues to do great work about AI, but his masterpiece in
this particular area is his 1995 novel Permutation City.
Greg and I had the same publisher back then, HarperPrism, and one
of the really bright things Harper did--besides publishing me and
Greg--was hiring Terry Bisson, one of SF's best short-story writers,
to write the back-cover plot synopses for their books. Since Bisson
does it with great panache, I'll simply quote what he had to say
about Permutation City:
"The good news is that you have just awakened into Eternal
Life. You are going to live forever. Immortality is a reality.
A medical miracle? Not exactly.
"The bad news is that you are a scrap of electronic code.
The world you see around you, the you that is seeing it, has
been digitized, scanned, and downloaded into a virtual reality
program. You are a Copy that knows it is a copy.
"The good news is that there is a way out. By law, every
Copy has the option of terminating itself, and waking up to
normal flesh-and-blood life again. The bail-out is on the utilities
menu. You pull it down...
"The bad news is that it doesn't work. Someone has blocked
the bail-out option. And you know who did it. You did. The other
you. The real you. The one that wants to keep you here forever."
Well, how cool is that! Read Greg Egan, and see for yourself.
Of course, in Egan, as in most SF, technology goes wrong. Indeed,
I'm sure many of us remember Michael Crichton's 1973 robots-go-berserk
film Westworld, in which the slogan was "Nothing can
possibly go wrong ... go wrong ... go wrong."
But there are benign views of the future of AI in SF. One
of my own stories is a piece called "Where The Heart Is,"
about an astronaut who returns to Earth after a relativistic space
mission, only to find that every human being has uploaded themselves
into what amounts to the World Wide Web in his absence, and a robot
has been waiting for him to return to help him upload, too, so he
can join the party. I wrote this story in 1982, and even came close
to getting the name for the web right: I called it "The TerraComp
Web," instead of the World Wide Web.
Ah, well: close only counts in horseshoes ...
But uploaded consciousness may be only the beginning. Physicist
Frank Tipler, in his whack-o nonfiction book The Physics of Immortality,
does make a couple of good points: ultimately it will be possible
to simulate not just one human consciousness, but every human
consciousness that could theoretically exist, inside computers. In
other words, he says, if you have enough computing power--which he
calculates as a memory capacity of 10^(10^123) bits--you could
be essentially recreated inside a computer long after you've died.
A lot of SF writers have had fun with that notion, but none so
inventively as Robert Charles Wilson in his 1999 Hugo finalist Darwinia,
which tells the story of what happens when a computer virus gets
loose in the system simulating this reality: the one you
and I think we're living in right now.
Yes, one thing's for certain: as long as SF writers continue to
write about robots and AI, nothing can possibly go wrong ... go
wrong ... go wrong ...
Copyright © 2002 by Robert J. Sawyer. Used with permission.