Kinds of Minds
In Beyond AI, published today, J. Storrs Hall offers "a must-read for anyone interested in the future of the human-machine civilization," says Ray Kurzweil. In this first of three book excerpts, Hall suggests a classification of the different stages an AI might go through, from "hypohuman" (most existing AIs) to "hyperhuman" (similar to "superintelligence").
Originally published in Beyond
AI: Creating the Conscience of the Machine, Ch. 15. Reprinted
with permission on KurzweilAI.net May 30, 2007.
Perhaps our questions about artificial intelligence are a
bit like inquiring after the temperament and gait of a horseless
carriage.
—K. Eric Drexler
Now we will classify the different stages AI might go through using
Greek prepositions. These have been adopted into English
as prefixes, particularly in scientific usage. In some cases the
concepts have been applied to advancing AI before and in other cases
not. The reason for introducing these new terms is that they provide
a framework that puts any given level of expected AI capability
in perspective vis-à-vis the other levels, and in comparison
to human intelligence.
Hypohuman AI
Hypo means below or under (think hypodermic,
under the skin; hypothermia or hypoglycemia, below
normal temperature or blood sugar), including, in the original Greek,
under the moral or legal subjection of. Isaac Asimov's robots are
(mostly) hypohuman, in both senses of hypo: they are not quite as
smart as humans, and they are subject to our rule. Most existing
AI is arguably hypohuman, as well (Deep Blue to the contrary notwithstanding).
As long as it stays that way, the only thing we have to worry about
is that there will be human idiots putting their AI idiots in charge
of things they both don't understand. All the discussion of formalist
float applies, especially the part about feedback.
Diahuman AI
Dia means through or across in Greek (diameter,
diagonal), and the Latin trans means the same
thing, but the commonly heard transhuman doesn't apply
here. Transhuman refers to humans as opposed to AIs, humans who
have been enhanced (by whatever means) and are in a transitional
state between human and fully posthuman, whatever that may be. Neither
concept is very useful here.
By diahuman, I mean AIs in the stage where AI capabilities are
crossing the range of human intelligence. It's tempting to call
this human-equivalent, but the idea of equivalence is misleading.
It's already apparent that some AI abilities (e.g., chess playing)
are beyond the human scale, while others (e.g., reading and writing)
haven't reached it yet.
Thus diahuman refers to a phase of AI development (and only by
extension to an individual AI in that phase), and this is fuzzy
because the limits of human (and AI) capability are fuzzy. It's
hard to say which capabilities are important in the comparison.
I would claim that AI is entering the early stages of the diahuman
phase right now; there are humans who, like today's AIs, don't learn
well and who function competently only at simple jobs for which
they must be trained.
The core of the diahuman phase, however, will be the development
of autogenous learning. In the latter stages, AIs, like the brightest
humans, will be completely autonomous, not only learning what they
need to know but also deciding what they need to learn.
Diahuman AIs will be valuable and will undoubtedly attract significant
attention and resources to the AI enterprise. They are likely to
cause something of a stir in philosophy and perhaps religion, as
well. However, they will not have a significant impact on the human
condition. (The one exception might be an economic one, in the case
that diahuman AI lingers so long that Moore's law makes human-equivalent
robots very cheap compared to human labor. But I'm assuming that
we will probably have advanced past the diahuman stage by then.)
Parahuman AI
Para means alongside (paralegal, paramedic).
The concept of designing a system that a human is going to be part
of dates back to cybernetics (although all technology throughout
history had to be designed so that humans could operate it, in some
sense).
Parahuman AI will be built around more and more sophisticated theories
of how humans work. The PC of the future ought to be a parahuman
AI. MIT roboticist Cynthia Breazeal's sociable robots are the likely
forerunners of a wide variety of robots that will interact with
humans in many kinds of situations.
The upside of parahuman AI is that it will enhance the interface
between our native senses and abilities, adapted as they are for
a hunting and gathering bipedal ape, and the increasingly formalized
and mechanized world we are building. The parahuman AI should act
like a lawyer, a doctor, an accountant, and a secretary, all with
deep knowledge and endless patience. Once AI and cognitive science
have acquired a solid understanding of how we learn, parahuman AI
teachers could be built which would model in detail how each individual
student was absorbing the material, ultimately finding the optimal
presentation for understanding and motivation.
The downside is simply the same effect, put to work with slimier
motives: the parahuman advertising AI, working for corporations
or politicians, could know just how to tweak your emotions and gain
your trust without actually being trustworthy. It would be the equivalent
of an individualized artificial con man. Note by the way that of
the two human elements that were part of the original cybernetic
anti-aircraft control theory, one of them, the pilot of the plane
being shot at, didn't want to be part of the system but was, willy-nilly.
Parahuman is a characterization that does not specify a level of
intellectual capability compared to humans; it can be properly applied
to AIs at any level. Humans are fairly strongly parahuman intelligences
as well; many of our innate skills involve interacting with other
humans. Parahuman can be largely contrasted with the following term,
allohuman.
Allohuman AI
Allo means other or different (allomorph, allonym,
allotrope). Although I have argued that human intelligence
is universal, there remains a vast portion of our minds that is
distinctively human. This includes the genetically programmed representation
modules, the form of our motivations, and the sensory modalities,
of which several are fairly specific to running a human body.
It will certainly be possible to create intelligences that, while
being universal, nevertheless have different lower-level hardwired
modalities for sense and representation, and different higher-level
motivational structure. One simple possibility is that universal
mechanism may stand in for a much greater portion of the cognitive
mechanism so that, for example, the AI would use learned physics
instead of instinctive concepts and learned psychology instead of
our folk models.
Such differences could reasonably make the AI better at certain
tasks; consider the ability to do voluminous calculations in your
head. However, if you have ever watched an experienced accountant
manipulate a calculator, you can see that the numbers almost flow
through his fingers. Built-in modalities may provide some increment
of effectiveness compared to learned ones, but not as much as you
might think. Consider reading—it's a learned activity, and
unlike talking, we don't just "pick it up." But with practice, we
read much faster than we can talk or understand spoken language.
Motivations and the style and volume of communication could also
differ markedly from the human model. The allohuman AI might resemble
Mr. Spock, or it might resemble an intelligent ant. These differences,
rather than the varying modalities, will likely form the bulk of the
difference between allohuman AIs and humans.
Like parahuman, allohuman does not imply a given level of intellectual
competence. In the fullness of time, however, the parahuman/allohuman
distinction will make less and less difference. More advanced AIs,
whether they need to interact with humans or to do something weirdly
different, will simply obtain or deduce whatever knowledge is necessary
and synthesize the skills on the fly.
Epihuman AI
Epi means upon or after (epidermis, epigram,
epitaph, epilogue). I'm using it here in a combination
of senses to mean AI that is just above the range of individual
human capabilities but that still forms a continuous range with
them, and also in the sense of what comes just after diahuman AI.
That gives us a useful distinction from further-out possibilities.
(See hyper below.)
Science fiction writer Charles Stross introduced the phrase "weakly
godlike AI." Weakly presumably refers to the fact that such AIs
would still be bound by the laws of physics—they couldn't
perform miracles, for example. As a writer, I'm filled with admiration
for the phrase: weakly and godlike have such contrasting meanings
that it forces you to think when you read it for the first time.
Moreover, weakly is often used in a similar way, with various
technical meanings, in scientific discourse, which gives the phrase
a vague sense of rigor (!).
The word posthuman is often used to describe what humans may be
like after various technological enhancements. Like transhuman,
posthuman is generally used for modified humans instead of synthetic
AIs.
My model for what an epihuman AI would be like is to take the ten
smartest people you know, remove their egos, and duplicate them
a hundred times, so that you have a thousand really bright people
willing to apply themselves all to the same project. Alternatively,
simply imagine a very bright person given a thousand times as long
to do any given task. We can straightforwardly predict, from Moore's
law, that ten years after the advent of a learning but not radically
self-improving human-level AI, the same software running on machinery
of the same cost would do the same human-level tasks a thousand
times as fast as we do. It could, for example:
- read an average book in one second with full comprehension;
- take a college course and do all the homework and research in
ten minutes;
- write a book, again with ample research, in two or three hours;
- produce the equivalent of a human's lifetime intellectual output,
complete with all the learning, growth, and experience involved,
in a couple of weeks.
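The thousandfold figure is simple compounding. As a minimal sketch in Python, assuming Moore's law here means compute per dollar doubles roughly once a year (my assumption; the excerpt states no doubling period), ten doublings give 2^10 ≈ 1024. The human task times below are likewise my own rough reconstructions, not the author's figures:

```python
# Back-of-the-envelope check of the thousandfold speedup above.
# Assumed (not stated in the excerpt): compute per dollar doubles
# about once a year, so ten years gives 2**10 ~ 1024x.

doubling_time_years = 1.0   # assumed Moore's-law doubling period
years_elapsed = 10          # "ten years after the advent of ... AI"

speedup = 2 ** (years_elapsed / doubling_time_years)
print(f"speedup: ~{speedup:.0f}x")  # ~1024x

# Rough human task times (my guesses), scaled by the speedup:
tasks_human_hours = {
    "read an average book": 0.3,        # a fast reader, ~20 minutes
    "take a college course": 170.0,     # lectures plus homework
    "write a researched book": 2000.0,  # roughly a working year
}
for task, hours in tasks_human_hours.items():
    seconds = hours * 3600 / speedup
    print(f"{task}: ~{seconds:.0f} seconds")
```

On those assumed task times, the first three items in the list fall out at roughly one second, ten minutes, and two hours respectively, consistent with the figures above.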
A thousand really bright people are enough to do some substantial
and useful work. An epihuman AI could probably command an income
of $100 million or more in today's economy by means of consulting
and entrepreneurship, and it would have a net present value in excess
of $1 billion. Even so, it couldn't take over the world or even
an established industry. It could probably innovate well enough
to become a standout in a nascent field, though, as in Google’s
case.
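The valuation arithmetic is that of a perpetuity: discounting a steady income stream gives a present value of income divided by the discount rate. A minimal sketch, assuming a 10% discount rate (my assumption; the excerpt gives only the income and the NPV):

```python
# Present value of a perpetual income stream: NPV = income / rate.
# The 10% discount rate is an assumption; the excerpt states only
# the $100M annual income and the >$1B net present value.

annual_income = 100e6   # "$100 million or more in today's economy"
discount_rate = 0.10    # assumed

npv = annual_income / discount_rate
print(f"net present value: ${npv / 1e9:.1f} billion")  # $1.0 billion
```

At that rate, the stated $100 million income supports exactly the $1 billion figure; any income "or more" pushes the NPV in excess of it.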
A thousand top people is a reasonable estimate for what the current
field of AI research is applying to the core questions and techniques—basic,
in contrast to applied, research. Thus an epihuman AI could probably
improve itself about as fast as current AI is improving. Of course,
if it did that, it wouldn't be able to spend its time making all
that money; the opportunity cost is pretty high. It would need to
make exactly the same kind of decision that any business faces with
respect to capital reinvestment.
Whichever it may choose to do, the epihuman level characterizes
an AI that is able to stand in for a given fairly sizeable company
or for a field of academic inquiry. As more and more epihuman AIs
appear, they will enhance economic and scientific growth so that
by the later stages of the phase the total stock of wealth and knowledge
will be significantly higher than it would have been without the
AIs. AIs will be a significant sector, but no single AI would be
able to rock the boat to a great degree.
Hyperhuman AI
Hyper means over or above. In common use as an English
prefix, hyper tends to denote a greater excess than super,
which means the same thing but comes from Latin instead of Greek.
(Contrast, e.g., supersonic, more than Mach 1, and hypersonic, more
than Mach 5.)
In the original Singularity paper, “The Coming Technological
Singularity,” Vernor Vinge used the phrase superhuman
intelligence. Nick Bostrom has used the term superintelligence.
Like some of the terms above, however, superhuman has a
wide range of meanings (think about Kryptonite), and most of them
are not applicable to the subject at hand. We will stay with our
Greek prefixes and finish the list with hyperhuman.
Imagine an AI that is a thousand epihuman AIs, all tightly integrated
together. Such an intellect would be capable of substantially outstripping
the human scientific community at any given task and of comprehending
the entirety of scientific knowledge as a unified whole. A hyperhuman
AI would soon begin to improve itself significantly faster than
humans could. It could spot the gaps in science and engineering
where there was low-hanging fruit and instigate rapid increases
in technological capability across the board.
It is as yet poorly understood even in the scientific community
just how much headroom remains for improvement with respect to the
capabilities of current physical technology. A mature nanotechnology,
for example, could replace the entire capital stock—all the
factories, buildings, roads, cars, trucks, airplanes, and other
machines—of the United States in a week. And that's just using
currently understood science, with a dollop of engineering development
thrown in.
Any sufficiently advanced technology, Arthur C. Clarke wrote, is indistinguishable
from magic. Although, I believe, any specific thing the hyperhuman
AIs might do could be understood by humans, the total volume of
work and the rate of advance would become harder and harder
to follow. Please note that any individual human is already in a
similar relationship with the whole scientific community; our understanding
of what is going on is getting more and more abstract. The average
person understands cell phones at a level of knowing that batteries
have limited lives and coverage has gaps, but not at the level of
field-effect transistor gain figures and conductive trace electromigration
phenomena. Ten years ago the average scientist, much less the average
user, could not have predicted that most cell phones would
contain cameras and color screens today. But we can follow, if not
predict, by understanding things at a very high level of abstraction,
as if they were magic.
Any individual hyperhuman AI would be productive, intellectually
or industrially, on the scale of the human race as a whole. As the
number of hyperhuman AIs increased, our efforts would shrink to
more and more modest proportions of the total.
Where does an eight-hundred-pound gorilla sit? According to the
old joke, anywhere he wants to. Much the same thing will be true
of a hyperhuman AI, except in instances where it has to interact
with other AIs. The really interesting question then will be, what
will it want?
©2007 J. Storrs Hall
Mind·X Discussion About This Article:
Re: Kinds of Minds
(1) At first glance an epihuman corresponds roughly to an "Orion's Arm" S1, and a hyperhuman corresponds to an S2, in "sophont levels".
(2) I'd like to hear from people (and this is nothing but a quiz) when these types of AIs will be developed:
My estimate for "undeniably real" epihuman AI is between 2020 and 2040, with the highest likelihood of these emerging around 2025. The emergence of epihumans will signal the beginning of the end of the human era; I will seek to acquire several *very loyal* epihumans around me in a small cabal protecting my interests, as soon as they become commercially available. Plus I will see to becoming an epihuman as soon as possible. Being human has been no success so far.
My estimate for "undeniably real" hyperhuman AI is between 2025 and 2075, with the highest spike of plausibility (in my view) occurring somewhere around 2035, i.e., soon after epihumans.
The emergence of hyperhuman, or across-the-board-posthuman, AI will signal the end of the human species in its current form.
Some scattered "amish" humans will obviously remain, "at the mercy of", etc. My best hope (less than 10% chance by my estimate) of surviving in some form when all this happens is to become a subroutine IN a hyperhuman.
I anticipate the first years of emergence of any of the above to be riddled with bugs; most first-generation epi/hyperhuman intelligences (AIs, upgrades, augments, or evolutes) will be "idiot savants" compared to later models.
Dare we hope for the best?
Re: Kinds of Minds
What if hyper AI proves that god exists but he/she/it is not the god of any religion? That would be hilarious.
I've been saying that all along. If there is a god, why would that god be subject to the rules, dogma, and imaginations of humans?
Assuming that god is working with human beings, why would he/she/it select from among the rules and frameworks developed by humans and make that his/her/its work of truth?
If humans can define and create a religious truth of god, they do not need god. They can simply produce AI that contains the necessary knowledge of god within itself to answer our questions.
If AI could prove that god exists outside of AI, then it would automatically demonstrate, by extension, that god is not the god of any religion.
Why? Because AI itself is the creation of rules and algorithms developed from human thought and, even if extended to a power beyond human thought, there would still be the recognition that AI is originally the creation of human minds.
If AI proved the existence of god as proven by some religion, then god and AI would be synonymous, both extending from the thought processes of humans. In that case, AI and god are processes of the same extensions of the human mind.
Further, we ARE AI, and we ARE god, based on that reasoning.
Re: Kinds of Minds
Happy Wanderer makes my point. All such reasoning ends in tautology, which is another way of stating Romans 8:7: the natural mind is enmity against God and cannot be subject to God's laws.
To assume that God is consistent with human reasoning is to conclude that humans can create God in the form of SAI, embodying all God's qualities. The Church-Turing thesis deals somewhat with this, equating brains with computers in the sense that both are subject to physical laws; and if the brain is subject to physical laws, then it can at some point be mathematically modelled to ever greater precision.
Consequently, if the knowledge of God can be captured by human reasoning and transferred in the form of knowledge, that knowledge can be translated into language, which can be translated into algorithms, which can be programmed into SAI and robots possessing SAI capabilities.
However, as Happy Wanderer points out, this is tautological reasoning. We can't do it.
Whether there is a God or not is irrelevant in this perspective. There is no evidence, biblical or otherwise, that we can develop any form of "brain" capable of achieving godhood, except as we define godhood, which results in infinite descriptions.
This alters the Christian definition of free-will choice as choosing "Christ" for "salvation".
If it is possible to make such choices, those choices cannot be modeled in any logical way to represent truth, since that very process would enable transmission by language, algorithms, and programming into mechanical form. Since that is cancelled, so is the process of organizing into churches that represent the Christian God.
For Christian religions, the only process of "choosing Christ" would be to make a choice devoid of knowledge or awareness of truth, which would have no value in any sense. Highly entropic and destructive, as history has shown.
The same logic that cancels the legitimacy of creating a robotic "son of God" cancels the legitimacy of true churches of God, since both arise from the same process of human reason, both being subject to the same physical laws, leading us back to the truth of Romans 8:7.
Re: Kinds of Minds
I live in Montana where I'm part of a fast-growing, survivalist, Luddite community, the Montana Sanctuary. When humanity exterminates itself through (pick your favorite) robotics, nanotechnology, bio-weapons, high-energy-particle collisions, global environmental destruction - we'll be in our massive communal hot tub, enjoying a dip. Yes, we humans are a backward lot who have learned little from our 500 million year sojourn on earth. We'll happily make robots and gray goo to wipe ourselves out while thinking we are on the path to godhood. Unfortunately, delusional thinking is quite common amongst our most intellectually gifted members. (Yes, the very same ones pondering dia and para humans - very funny!) Frankly, even with our most gifted members we are not quite up to maintaining a viable world community. It doesn't take much to cause this program to crash. Just look at the hyper-velocity trading programs Wall Street is using - or the orchestrated short selling of essential commodities. Yes, short term profits - but injury to the system as a whole. However, long before humanity is exterminated by hyperhuman robots; we'll probably be hunted by roving bands of Mexican drug gangs. At least, we'll keep our demise in the human family.