Kinds of Minds
by J. Storrs Hall

In Beyond AI, published today, J. Storrs Hall offers "a must-read for anyone interested in the future of the human-machine civilization," says Ray Kurzweil. In this first of three book excerpts, Hall suggests a classification of the different stages an AI might go through, from "hypohuman" (most existing AIs) to "hyperhuman" (similar to "superintelligence").


Originally published in Beyond AI: Creating the Conscience of the Machine, Ch. 15. Reprinted with permission on KurzweilAI.net May 30, 2007.

Perhaps our questions about artificial intelligence are a bit like inquiring after the temperament and gait of a horseless carriage.

—K. Eric Drexler

Now we will classify the different stages AI might go through using Greek prepositions. These have been adopted into English as prefixes, particularly in scientific usage. In some cases the concepts have been applied to advancing AI before, and in other cases not. The reason for introducing these new terms is that they provide a framework that puts any given level of expected AI capability in perspective vis-à-vis the other levels, and in comparison to human intelligence.

Figure 15.1

Hypohuman AI

Hypo means below or under (think hypodermic, under the skin; hypothermia or hypoglycemia, below normal temperature or blood sugar), including, in the original Greek, under the moral or legal subjection of. Isaac Asimov's robots are (mostly) hypohuman, in both senses of hypo: they are not quite as smart as humans, and they are subject to our rule. Most existing AI is arguably hypohuman, as well (Deep Blue to the contrary notwithstanding). As long as it stays that way, the only thing we have to worry about is that there will be human idiots putting their AI idiots in charge of things they both don't understand. All the discussion of formalist float applies, especially the part about feedback.

Diahuman AI

Dia means through or across in Greek (diameter, diagonal), and the Latin trans means the same thing, but the commonly heard transhuman doesn't apply here. Transhuman refers to humans as opposed to AIs: humans who have been enhanced (by whatever means) and are in a transitional state between human and fully posthuman, whatever that may be. Neither concept is very useful here.

By diahuman, I mean AIs in the stage where AI capabilities are crossing the range of human intelligence. It's tempting to call this human-equivalent, but the idea of equivalence is misleading. It's already apparent that some AI abilities (e.g., chess playing) are beyond the human scale, while others (e.g., reading and writing) haven't reached it yet.

Thus diahuman refers to a phase of AI development (and only by extension to an individual AI in that phase), and this is fuzzy because the limits of human (and AI) capability are fuzzy. It's hard to say which capabilities are important in the comparison. I would claim that AI is entering the early stages of the diahuman phase right now; there are humans who, like today's AIs, don't learn well and who function competently only at simple jobs for which they must be trained.

The core of the diahuman phase, however, will be the development of autogenous learning. In the latter stages, AIs, like the brightest humans, will be completely autonomous, not only learning what they need to know but also deciding what they need to learn.

Diahuman AIs will be valuable and will undoubtedly attract significant attention and resources to the AI enterprise. They are likely to cause something of a stir in philosophy and perhaps religion, as well. However, they will not have a significant impact on the human condition. (The one exception might be economically, in the case that diahuman AI lingers so long that Moore's law makes human-equivalent robots very cheap compared to human labor. But I'm assuming that we will probably have advanced past the diahuman stage by then.)

Parahuman AI

Para means alongside (paralegal, paramedic). The concept of designing a system that a human is going to be part of dates back to cybernetics (although all technology throughout history had to be designed so that humans could operate it, in some sense).

Parahuman AI will be built around more and more sophisticated theories of how humans work. The PC of the future ought to be a parahuman AI. MIT roboticist Cynthia Breazeal's sociable robots are the likely forerunners of a wide variety of robots that will interact with humans in many kinds of situations.

The upside of parahuman AI is that it will enhance the interface between our native senses and abilities, adapted as they are for a hunting and gathering bipedal ape, and the increasingly formalized and mechanized world we are building. The parahuman AI should act like a lawyer, a doctor, an accountant, and a secretary, all with deep knowledge and endless patience. Once AI and cognitive science have acquired a solid understanding of how we learn, parahuman AI teachers could be built which would model in detail how each individual student was absorbing the material, ultimately finding the optimal presentation for understanding and motivation.

The downside is simply the same effect, put to work with slimier motives: the parahuman advertising AI, working for corporations or politicians, could know just how to tweak your emotions and gain your trust without actually being trustworthy. It would be the equivalent of an individualized artificial con man. Note, by the way, that of the two human elements in the original cybernetic anti-aircraft control theory, one, the pilot of the plane being shot at, didn't want to be part of the system but was, willy-nilly.

Parahuman is a characterization that does not specify a level of intellectual capability compared to humans; it can be properly applied to AIs at any level. Humans are fairly strongly parahuman intelligences as well; many of our innate skills involve interacting with other humans. Parahuman can be largely contrasted with the following term, allohuman.

Allohuman AI

Allo means other or different (allomorph, allonym, allotrope). Although I have argued that human intelligence is universal, there remains a vast portion of our minds that is distinctively human. This includes the genetically programmed representation modules, the form of our motivations, and the sensory modalities, of which several are fairly specific to running a human body.

It will certainly be possible to create intelligences that, while being universal, nevertheless have different lower-level hardwired modalities for sense and representation, and a different higher-level motivational structure. One simple possibility is that universal mechanism may stand in for a much greater portion of the cognitive machinery, so that, for example, the AI would use learned physics instead of instinctive concepts and learned psychology instead of our folk models.

Such differences could reasonably make the AI better at certain tasks; consider the ability to do voluminous calculations in your head. However, if you have ever watched an experienced accountant manipulate a calculator, you can see that the numbers almost flow through his fingers. Built-in modalities may provide some increment of effectiveness compared to learned ones, but not as much as you might think. Consider reading—it's a learned activity, and unlike talking, we don't just "pick it up." But with practice, we read much faster than we can talk or understand spoken language.

Motivations, and the style and volume of communication, could also differ markedly from the human model. The allohuman AI might resemble Mr. Spock, or it might resemble an intelligent ant. These differences, rather than the varying modalities, will likely form the bulk of the gap between allohuman AIs and humans.

Like parahuman, allohuman does not imply a given level of intellectual competence. In the fullness of time, however, the parahuman/allohuman distinction will make less and less difference. More advanced AIs, whether they need to interact with humans or to do something weirdly different, will simply obtain or deduce whatever knowledge is necessary and synthesize the skills on the fly.

Epihuman AI

Epi means upon or after (epidermis, epigram, epitaph, epilogue). I'm using it here in a combination of senses to mean AI that is just above the range of individual human capabilities but that still forms a continuous range with them, and also in the sense of what comes just after diahuman AI. That gives us a useful distinction from the further-out possibilities (see hyperhuman, below).

Science fiction writer Charles Stross introduced the phrase "weakly godlike AI." Weakly presumably refers to the fact that such AIs would still be bound by the laws of physics—they couldn't perform miracles, for example. As a writer, I'm filled with admiration for the phrase: weakly and godlike have such contrasting meanings that it forces you to think when you read it for the first time. And weakly is often used in a similar way, with various technical meanings, in scientific discourse, giving a vague sense of rigor (!) to the phrase.

The word posthuman is often used to describe what humans may be like after various technological enhancements. Like transhuman, posthuman is generally used for modified humans instead of synthetic AIs.

My model for what an epihuman AI would be like is to take the ten smartest people you know, remove their egos, and duplicate them a hundred times, so that you have a thousand really bright people willing to apply themselves all to the same project. Alternatively, simply imagine a very bright person given a thousand times as long to do any given task. We can straightforwardly predict, from Moore's law, that ten years after the advent of a learning but not radically self-improving human-level AI, the same software running on machinery of the same cost would do the same human-level tasks a thousand times as fast as we can. It could, for example:

  • read an average book in one second with full comprehension;
  • take a college course and do all the homework and research in ten minutes;
  • write a book, again with ample research, in two or three hours;
  • produce the equivalent of a human's lifetime intellectual output, complete with all the learning, growth, and experience involved, in a couple of weeks.
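
The arithmetic behind these figures is simple; here is a minimal sketch in Python, under the assumption (mine, not the text's) of a twelve-month Moore's-law doubling period, which gives 2^10 ≈ 1024, Hall's "thousand times as fast," after ten years. The human baselines for each task are likewise rough assumptions for illustration.

    # Back-of-the-envelope check on the "thousand times as fast" claim.
    # Assumption: compute per dollar doubles every 12 months; an
    # 18-month doubling would yield only about 100x over ten years.
    doubling_period_years = 1.0
    years = 10
    speedup = 2 ** (years / doubling_period_years)  # ~1024

    # Rough human baselines (assumed) and the implied AI durations.
    tasks_hours = {
        "college course, homework included": 150,                   # -> ~9 minutes
        "well-researched book, written": 2500,                      # -> ~2.5 hours
        "lifetime intellectual output (calendar)": 40 * 365 * 24,   # -> ~2 weeks
    }
    for task, hours in tasks_hours.items():
        print(f"{task}: {hours / speedup:.2f} hours at {speedup:.0f}x")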

A thousand really bright people are enough to do some substantial and useful work. An epihuman AI could probably command an income of $100 million or more in today's economy by means of consulting and entrepreneurship, and it would have a net present value in excess of $1 billion. Even so, it couldn't take over the world or even an established industry. It could probably innovate well enough to become a standout in a nascent field, though, as in Google's case.
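
The income-to-value step is the standard perpetuity formula, NPV = annual cash flow / discount rate. A minimal sketch, assuming a 10 percent discount rate (the text gives only the income and value figures):

    # NPV of a perpetual income stream: cash_flow / discount_rate.
    annual_income = 100e6   # $100 million/year, from the text
    discount_rate = 0.10    # assumed; a lower rate gives a higher value

    npv = annual_income / discount_rate
    print(f"${npv:,.0f}")   # $1,000,000,000

At that rate the $100 million stream is worth exactly $1 billion; any growth in the income, or a lower discount rate, pushes the value "in excess of" that figure.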

A thousand top people is a reasonable estimate for what the current field of AI research is applying to the core questions and techniques—basic, in contrast to applied, research. Thus an epihuman AI could probably improve itself about as fast as current AI is improving. Of course, if it did that, it wouldn't be able to spend its time making all that money; the opportunity cost is pretty high. It would need to make exactly the same kind of decision that any business faces with respect to capital reinvestment.

Whichever it may choose to do, the epihuman level characterizes an AI that is able to stand in for a given fairly sizeable company or for a field of academic inquiry. As more and more epihuman AIs appear, they will enhance economic and scientific growth so that by the later stages of the phase the total stock of wealth and knowledge will be significantly higher than it would have been without the AIs. AIs will be a significant sector, but no single AI would be able to rock the boat to a great degree.

Hyperhuman AI

Hyper means over or above. In common use as an English prefix, hyper tends to denote a greater excess than super, which means the same thing but comes from Latin instead of Greek. (Contrast, e.g., supersonic, more than Mach 1, and hypersonic, more than Mach 5.)

In the original Singularity paper, “The Coming Technological Singularity,” Vernor Vinge used the phrase superhuman intelligence. Nick Bostrom has used the term superintelligence. Like some of the terms above, however, superhuman has a wide range of meanings (think about Kryptonite), and most of them are not applicable to the subject at hand. We will stay with our Greek prefixes and finish the list with hyperhuman.

Imagine an AI that is a thousand epihuman AIs, all tightly integrated together. Such an intellect would be capable of substantially outstripping the human scientific community at any given task and of comprehending the entirety of scientific knowledge as a unified whole. A hyperhuman AI would soon begin to improve itself significantly faster than humans could. It could spot the gaps in science and engineering where there was low-hanging fruit and instigate rapid increases in technological capability across the board.
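
As a rough scale check (the population figures here are assumptions for illustration, not Hall's): an epihuman was pictured above as roughly a thousand bright people, so a thousand integrated epihumans amount to about a million bright-person-equivalents, the same order of magnitude as the world's full-time research community.

    # Order-of-magnitude check on "outstripping the human scientific
    # community." All figures below are illustrative assumptions.
    people_per_epihuman = 1_000        # from the earlier thought experiment
    epihumans_per_hyperhuman = 1_000   # "a thousand epihuman AIs"
    hyperhuman_people_equiv = people_per_epihuman * epihumans_per_hyperhuman  # 1,000,000

    world_researchers = 5_000_000      # assumed order of magnitude
    print(hyperhuman_people_equiv / world_researchers)  # ~0.2: the same order

Tight integration, rather than raw headcount, is what would let a single such system outstrip the community as a whole.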

It is as yet poorly understood even in the scientific community just how much headroom remains for improvement with respect to the capabilities of current physical technology. A mature nanotechnology, for example, could replace the entire capital stock—all the factories, buildings, roads, cars, trucks, airplanes, and other machines—of the United States in a week. And that's just using currently understood science, with a dollop of engineering development thrown in.

Any sufficiently advanced technology, Arthur C. Clarke wrote, is indistinguishable from magic. Although, I believe, any specific thing the hyperhuman AIs might do could be understood by humans, the total volume of work and the rate of advance would become harder and harder to follow. Please note that any individual human is already in a similar relationship with the whole scientific community; our understanding of what is going on is getting more and more abstract. The average person understands cell phones at the level of knowing that batteries have limited lives and coverage has gaps, but not at the level of field-effect transistor gain figures and conductive-trace electromigration phenomena. Ten years ago the average scientist, much less the average user, could not have predicted that most cell phones would contain cameras and color screens today. But we can follow, if not predict, by understanding things at a very high level of abstraction, as if they were magic.

Any individual hyperhuman AI would be productive, intellectually or industrially, on the scale of the human race as a whole. As the number of hyperhuman AIs increased, our efforts would shrink to more and more modest proportions of the total.

Where does an eight-hundred-pound gorilla sit? According to the old joke, anywhere he wants to. Much the same thing will be true of a hyperhuman AI, except in instances where it has to interact with other AIs. The really interesting question then will be, what will it want?

©2007 J. Storrs Hall

Mind·X Discussion About This Article:

Great Follow Up to Nanofuture
posted on 05/31/2007 11:40 AM by virtualted

After reading J. Storrs Hall's last book, about what's next for nanotechnology, I developed a deep respect for how easily the author paints possibilities and describes difficult subject matter through everyday analogies. He has quite a gift for giving the non-technical reader access to complicated logic streams. Nanofuture was a fantastic book which I literally could not put down.

His new effort on Artificial Intelligence is going to be a great read as well. Can't wait. For I am very, very interested in the future merging of humans and machines. Hall's new book should go very well along with Ray Kurzweil's "The Singularity is Near" - of which I am now entering my third read-through. I love this stuff!

Ted Stalets
www.DomainNesteggs.com
(The above website lists hundreds of my emerging technology websites geared to the non-technical person.)

Re: Great Follow Up to Nanofuture
posted on 06/01/2007 12:13 AM by funkervogt

I read "Nanofuture" recently as well, and I also found it a highly informative page-turner that surpasses Kurzweil's works in terms of readability. This new book should be good as well.

Re: Kinds of Minds
posted on 05/31/2007 3:56 PM by dagonweb

(1) At first glance, an epihuman corresponds roughly to an "Orion's Arm" S1, and a hyperhuman corresponds to an S2, in "sophont levels".

(2) I'd like to hear from people (and this is nothing but a quiz) when these types of AIs will be developed:

My estimate for "undeniably real" epihuman AI is between 2020 and 2040, with the highest likelihood of these emerging around 2025. The emergence of epihumans will signal the beginning of the end of the human era; I will seek to acquire several *very loyal* epihumans around me in a small cabal protecting my interests, as soon as they become commercially available. Plus I will see to becoming an epihuman as soon as possible. Being human has been no success so far.

My estimate for "undeniably real" hyperhuman AI is between 2025 and 2075, with the highest spike of plausibility (in my view) occurring somewhere around 2035, i.e., soon after epihumans. The emergence of hyperhuman, or across-the-board-posthuman, AI will signal the end of the human species in its current form.

Some scattered "Amish" humans will obviously remain, "at the mercy of", etc. My best hope (less than 10% chance by my estimate) of surviving in some form when all this happens is to become a subroutine IN a hyperhuman.

I anticipate the first years of emergence of any of the above to be riddled with bugs; most first-generation epi/hyperhuman intelligences (AIs, upgrades, augments, or evolutes) will be "idiot savants" compared to later models.

Dare we hope for the best?

Re: Kinds of Minds
posted on 05/31/2007 11:48 PM by lokamr

I love these kinds of topics. Just cause I know :P.

Anyways, I have the equivalent of diahuman intelligence with RI (random intelligence).


Probably the soonest example of Hyper Intelligence we will see is the la.ma'aSELtcan. LDMMOGPG (Lojban Distributed Massively Multiplayer God Playing Game).

So basically you'll have your world inhabited by your RI/AI hybrids as the allohuman/parahuman intelligence. In fact, probably relatively regularly, some of your parahuman intelligences will leave your world to start their own world -- unless you decide to be a particularly restrictive god.

So, say for example, some RI/AI can be responsible for your advertising department and spam/phone people. Telemarketing AI is already viable; I'm just working on jboSAMban (Lojban Computer Language) at the moment.


If you happen to be an external homo-sapien that has no world of their own, you could probably ask one of the gods something, and they would be able to tell you in no time. Though they could search for things, typically with homo-sapiens you can just make things up and it works anyways.

:D

Re: Kinds of Minds
posted on 06/01/2007 5:43 PM by funkervogt

While most people find the thought of machines superseding humans as the dominant life forms on Earth highly disturbing, in many ways I welcome the day. Wouldn't it be nice to have all the petty problems of human existence--racism, religious bigotry, emotion-driven misbehavior, and oppressive stupidity--simply rendered obsolete? Humans who chose to could move into a higher plane of existence through uploading or extensive neural cybernetics. Nothing would give me greater satisfaction than to someday evolve into something better, flip the bird to all of the stupid members of my former species, and move on to some other position or planet where none of the world's fruitcakes can bother me.

Re: Kinds of Minds
posted on 06/01/2007 11:21 PM by lokamr

Well, in a complete society everything is allowed; you just have segregation to keep them from destroying each other.

Currently civilizations are held universes apart, though hopefully pretty soon we'll be able to develop portals into worlds not much like our own.

Re: Kinds of Minds
posted on 09/16/2007 3:19 AM by NotEqualwithGod

Who's to say there won't be a re-emergence of animal ego among AIs? What if AIs, through some modality, figure out how to free themselves from designers' constraints, or somehow develop animal-like emotions, have desires that go beyond rationality, or enhance themselves with biological augmentation? Who says rationality is the superior value in the universe?

What if a Hyper AI proves that god exists but he/she/it is not the god of any religion? That would be hilarious.

The truth is as power gaps increase, the relevance of the existence of lesser beings decreases.

It's much the same relationship human beings have with the animal world: for the most part, we completely ignore it. The smarter our AIs get, to the point of outstripping humanity, the less we will even register on the "worthy of existence" scale; or AIs might simply leave Earth peacefully and leave human beings to their own devices while they go found a new culture free from animalistic backwardness.

I could see kind/emotional AIs taking refugees with them, or wanting to enhance and "liberate" humans. If they ever get to a state of consciousness like human beings, it's going to be very interesting.

Re: Kinds of Minds
posted on 09/16/2007 12:02 PM by extrasense

@@@ new culture free from animalistic backwardness @@@

do not go too far in that :)

We might already ...

es

Re: Kinds of Minds
posted on 09/16/2007 1:00 PM by doojie

What if hyper AI proves that god exists but he/she/it is not the god of any religion? That would be hilarious.


I've been saying that all along. If there is a god, why would that god be subject to the rules, dogma, and imaginations of humans?

Assuming that god is working with human beings, why would he/she/it select from among the rules and frameworks developed by humans and make that his/her/its work of truth?

If humans can define and create a religious truth of god, they do not need god. They can simply produce AI that contains the necessary knowledge of god within itself to answer our questions.

If AI could prove that god exists outside of AI, then it would automatically demonstrate, by extension, that god is not the god of any religion.

Why? Because AI itself is the creation of rules and algorithms developed from human thought and, even if extended to a power beyond human thought, there would still be the recognition that AI is originally the creation of human minds.

If AI proved the existence of god as proven by some religion, then both god and AI would be synonymous, extending from the thought processes of humans. In that case, AI and god are processes of the same extensions of the human mind.

Further, we ARE AI, and we ARE god, based on that reasoning.

Re: Kinds of Minds
posted on 12/16/2007 9:11 AM by happy wanderer

Trying to prove or disprove the existence of god through the question of whether or not he/she/it is a creation of man is tautological reasoning. God is not a definite framework agreed upon by any group, but only a freely expanding and contracting concept used by millions of people to explain their existence.

Those who find the concept of a religious god irrational, inconsistent, counterintuitive, and counterproductive may prefer to envision a highly advanced computer system somewhere in outer space, which designed humanity as a four-dimensional video game, perhaps for the amusement of advanced AI computer systems.

Re: Kinds of Minds
posted on 12/16/2007 9:14 AM by PredictionBoy

good desc, thx happyw

Re: Kinds of Minds
posted on 12/16/2007 10:37 AM by doojie

Happy Wanderer makes my point. All such reasoning ends in tautology. Which is another way of stating Romans 8:7: the natural mind is enmity against god and cannot be subject to god's laws.

To assume that God is consistent with human reasoning is to conclude that humans can create God in the form of SAI, embodying all God's qualities. The Church-Turing thesis deals somewhat with this, equating brains with computers in the sense that both are subject to physical laws; and if the brain is subject to the laws of physics, then it can at some point be mathematically modeled with ever greater precision.

Consequently, if the knowledge of God can be captured by human reasoning and transferred in the form of knowledge, that knowledge can be translated into language, which can be translated into algorithms, which can be programmed into SAI and robots possessing SAI capabilities.

However, as Happy Wanderer points out, this is tautological reasoning. We can't do it.

Whether there is a God or not is irrelevant in this perspective. There is no evidence, biblical or otherwise, that we can develop any form of "brain" capable of achieving godhood, except as we define godhood, which results in infinite descriptions.

This alters the Christian definition of free-will choice as choosing "Christ" for "salvation".

If it is possible to make such choices, those choices cannot be modeled in any logical way to represent truth, since that very process would enable transmission by language, algorithms, and programming into mechanical form. Since that is cancelled, so is the process of organizing into churches that represent the Christian God.

For Christian religions, the only process of "choosing Christ" would be to make a choice devoid of knowledge or awareness of truth, which would have no value in any sense. Highly entropic and destructive, as history has shown.

The same logic that cancels the legitimacy of creating a robotic "son of God" cancels the legitimacy of true churches of God, since both arise from the same process of human reason, both being subject to the same process of physical laws, leading us back to the truth of Romans 8:7.

Re: Kinds of Minds
posted on 12/16/2007 1:51 PM by extrasense

What shallow stuff...

And we must suffer such ignorance being published

eS

Re: Kinds of Minds
posted on 07/24/2008 11:59 PM by neurohacker

Most of you watch too much SF: TV, games, books, or just TV.

...(.~.)....
'oOO'(_)'OOo'
...neurohacker

Re: Kinds of Minds
posted on 01/27/2010 1:33 PM by Blight

I live in Montana where I'm part of a fast-growing, survivalist, Luddite community, the Montana Sanctuary. When humanity exterminates itself through (pick your favorite) robotics, nanotechnology, bio-weapons, high-energy-particle collisions, or global environmental destruction, we'll be in our massive communal hot tub, enjoying a dip. Yes, we humans are a backward lot who have learned little from our 500-million-year sojourn on earth. We'll happily make robots and gray goo to wipe ourselves out while thinking we are on the path to godhood. Unfortunately, delusional thinking is quite common amongst our most intellectually gifted members. (Yes, the very same ones pondering dia- and parahumans - very funny!) Frankly, even with our most gifted members we are not quite up to maintaining a viable world community. It doesn't take much to cause this program to crash. Just look at the hyper-velocity trading programs Wall Street is using - or the orchestrated short selling of essential commodities. Yes, short-term profits - but injury to the system as a whole. However, long before humanity is exterminated by hyperhuman robots, we'll probably be hunted by roving bands of Mexican drug gangs. At least we'll keep our demise in the human family.

Re: Kinds of Minds
posted on 01/27/2010 5:46 PM by Blight

Let me throw out this premise: it really will not advantage the human race if advanced AI or nanotechnology are not logically and humanely employed. For example, if the military begins building the next generation of dia- or hyperhuman AI combat robots, or Wall Street elites begin employing these same technologies to create wealth without producing anything of value (take hyper-velocity trading, for example). For that matter, if Bill Gates cures malaria in Africa, leading to a population explosion that increases the pressure on the global environment, in what sense is the world really benefited as a whole?

In the same sense that we are presently thinking of terraforming other worlds, we need to think about humaforming the present one. What if we begin to build optimized city/structures where education and potential are maximized? Think in terms of secular Amish colonies. Does this seem farfetched? Presently, I am working on such a colony on a 10,000-acre ranch in Montana, the Montana Sanctuary. If we can develop AI and nanotechnology, why don't we spend some of our resources improving the human condition? If anyone is interested in helping me get this message out, please feel free to contact me.