
The coming superintelligence: who will be in control?
by Amara D. Angelica

At some point in the next several decades, as machines become smarter than people, they'll take over the world. Or not. What if humans get augmented with smart biochips, wearables, and other enhancements, accessing massive knowledge bases ubiquitously and becoming supersmart cyborgs who stay in control by keeping machines specialized? Or what if people and machines converge into a mass-mind superintelligence?


Originally published July 25, 2001 on KurzweilAI.net.

Panelists at the recent EXTRO-5 conference in San Jose thrashed out these and other mind-stretching scenarios in a session on "Convergent or Divergent Super-Intelligence?".

Experts disagree on what "superintelligence" (SI) means (see What is superintelligence?), but the panelists shared a concern about the possibility of a "hard takeoff," in which "brains in a box" use their intelligence to improve themselves enough to achieve SI, rocket far beyond humans, and gain enormous power, a dystopian view often seen in sci-fi movies, from The Forbin Project to The Matrix.

"If we believe in a hard takeoff, it's going to change our views on what safeguards we need for that software, what ethics we have to instill, and are we going to support such research," advised computational neuroscientist Anders Sandberg.

But he discounted the likelihood of a machine with "general intelligence" (with human-like understanding and capable of learning) being developed in the near future. Reasons: very few researchers working on it ("it's more profitable to make paperclips for Microsoft"), special-intelligence machines (special-purpose systems, like IBM's planned Blue Gene) are more effective, and it takes a long time to train general-intelligence systems because of the "Bias-Variance dilemma" (a machine can learn faster if there's a narrow set of things to learn, but then it's limited in its intelligence).
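
The "bias-variance dilemma" Sandberg invokes is a standard machine-learning result, and a toy experiment makes it concrete. The sketch below is purely illustrative (not from the talk): it fits polynomials of increasing flexibility to a small noisy sample, showing that a rigid model underfits while a flexible one memorizes the noise, so fast narrow learning and broad competence trade off.

```python
# Toy illustration (not from Sandberg's talk) of the bias-variance
# dilemma: simple models learn quickly from little data but stay
# limited; flexible models need far more data to generalize.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Noisy observations of one fixed underlying function."""
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

x_train, y_train = sample(20)     # a narrow set of things to learn
x_test, y_test = sample(1000)     # held-out data measures generalization

for degree in (1, 3, 9):          # increasing model flexibility
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test MSE = {mse:.3f}")

# Typically: degree 1 underfits (high bias), degree 9 chases the noise
# in the 20 samples (high variance), and the middle ground wins.
```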

In addition, "there doesn't seem to be a strong evolutionary trend toward general intelligence," Sandberg said. "Successful animals are not very smart." The exception: the rise of homo sapiens.

[Figure: Special-intelligence AI (red) vs. general-intelligence AI (green). Source: Anders Sandberg]

A more likely path, he said, is a more gradual, controllable "soft takeoff" via intelligence augmentation (IA)--"human enhancement through a plurality of intelligent but very specialized systems," such as agent programs, wearables, and brain-enhancing devices. "When general intelligence appears, it will be integrated into this system because it's such a useful extra capability."

This will also be a society-wide event, he explained later, "not something driven by single projects or highly augmented individuals, but more like the race onto the Internet."

AI vs. IA

AI expert Peter Voss was not convinced. "There are huge barriers to working with wetware," he said. "Working with humans is much more difficult than working with machines. Who cares if you hack a machine, kill it, reboot it, and copy it or whatever? Try doing that to humans; you'll have some real opposition there. Computers smart enough to help us dramatically enhance ourselves will be more than smart enough to radically enhance themselves."

There are many advantages of working with artificial systems rather than wetware, he added. "It's easier to experiment, we can try millions of different designs. You can't do that with human brains. [Biological] neuronal learning speeds are much slower than computers, and that gap will widen. There's an easy upgrade path. As soon as we get the extra hardware speed and capacity we can utilize it."

There also are advantages in engineered vs. blind evolution, Voss pointed out. "We don't have to limit ourselves to dealing with biologically evolved systems. We can capitalize on our engineering and cognitive strengths. We can design an airplane with thrust rather than flapping wings. We can have a more logical flow-chart design with debugging tools built in. And we don't have the evolutionary baggage to worry about, such as epigenesis (the theory that an individual develops by successive differentiation of an unstructured egg) and reproduction." In addition, he said, building specialized computers is slow and painful, and they are less flexible.

"As Ray [Kurzweil] mentioned, [an engineering design] can be duplicated and mass-produced at low cost. We don't have to go through a 20-year training process. We've got 7x24 operation in machines. How much of their time will humans give you a day? And as machine intelligence increases it will become a lot cheaper than humans.

"Ray estimated that as early as 2010 we might have sufficient processing power in a supercomputer to equal a human brain. I think a lot more processing power could be utilized if people believed that general intelligence was a worthwhile pursuit. There's an enormous amount of processing power out there and there are huge areas of improvement in efficiency. Factors of 1000 are quite easily achieved by optimizing the hardware architecture for general intelligence or machine intelligence and having massively parallel, field-programmable arrays.

"Once people were convinced that was the way to go, I think factors of a million times the processing power we have right now could be achieved very quickly by more hardware being made available, more people working on the problem, and by having more specialized hardware.

Global superbrain

Marc Stiegler (http://www.skyhunter.com/marc.html) took a very different approach. Stiegler was Acting President of Xanadu while he was VP of Engineering for Autodesk. In a seminal 1988 article, "Hypermedia and the Singularity," in Analog magazine, he predicted that global hypertext systems would cause a quantum leap in augmenting human intelligence--six years before the Web became a viral phenomenon.

His novel Earthweb (http://www.the-earthweb.com) addresses the questions: How do you enable a billion people to work together as a tightly knit team? And how might our lives change when a mature version of the World Wide Web becomes the underpinning fabric of global society?



The first SI on this planet will be "the organism that the World Wide Web, as it matures, will evolve into," he said. "A mature World Wide Web is secure, has bi-directional links, reputation-based systems, an idea futures market, 10 billion man-years of experience, and teramips of computing power; and it's a fairly entertaining system."

Perhaps most interestingly, in his vision, the Earthweb is also a truthful organism. "The more important a question is, the more forcefully it will search for the truth and the more brutally it will search for falsehoods." The Earthweb also "defends most kinds of property rights, such as digital certificates, better than lawyers. It has a strong instinct for self-repair and a gentle sense of self-preservation."

"It's maniacal about growth. All of the individual cells are constantly looking for new buyers of current products and looking for opportunities to invent new products for new buyers and expand markets."

He compared the Earthweb to the Borg collective from Star Trek, "which is also maniacal about growth. But joining the Earthweb is voluntary, and you can leave any time and come back any time. The Earthweb is constantly inventing new reasons to join and stay, so it's a much stronger organism than the Borg collective ever could be."

So what happens if the Earthweb runs into another SI--a brain in a box? "With 10 billion man-years of experience, even the brainbox will find interaction with the Earthweb advantageous, at least for a few days. The brainbox becomes a part of the Earthweb organism and both are stronger for it. As the brainbox grows beyond the value of this offering, the Earthweb will create new offerings, quite possibly by deploying additional brainboxes for the first brainbox to work with."

As these brainboxes figure out better ways to cooperate with each other, the Earthweb "will happily build out new mechanisms for the brainboxes and they will all be absorbed into the Earthweb collective.

"For the doomsters of the world, I have the following warning: the Earthweb's goal is to prevent all of your dreams of planetary destruction and human misery from coming true. And the Earthweb is coming to a global village near you."

"Marc's cute description of 'SI meets Earthweb' assumes that the SI will be interested in being just a minor player in the overall network," Voss told us afterwards. "We enjoy playing with animals, but don't ultimately let them hold us back. This consideration would hold true both for an SI with 'a mind of its own' and for one under human control."

Does that mean an SI could take over the world? "Even if the brain-in-a-box, hard-take-off super-intelligence scenario comes about (before dramatic human augmentation), it is not clear to me that such an SI would have the motivation to take over," says Voss. "It may simply do what it's told. This, of course, may be substantially more dangerous. Which group of humans will be controlling the SI? An independent, self-motivated--yet benign--SI would seem to represent a much better outcome.

"If the hard-take-off SI is not independent--if the SI does not have a mind of its own, or chooses not to exercise it--then some humans will control its awesome power. Without speculating who those people might be, I find this possibility rather unnerving. (Personally, I'm more afraid of governments than 'the rich'.)

"Assume that no substantial progress on general AI is achieved in the near future. Even in this case, it seems likely that as more processing power becomes available, and as more and more specialized intelligence is developed, we will come the point where what is needed for general intelligence will become relatively obvious and easy. The hard takeoff will just be delayed. Researchers will recognize some overall patterns of what is important in developing different intelligent systems and various aspects of general intelligence will be developed incidentally, as byproducts of specialized designs. In addition, large increases in processing power will allow less efficient designs to show some encouraging results. The sheer number of different intelligent applications designed, the increased number of people working in the field, together with vastly improved hardware will make it more likely that crucial aspects of general intelligence will be 'stumbled upon.' At that stage, because of the substantial technological/ economic advantage, several groups are likely to vigorously pursue it."

In the meantime, general-intelligence development is on hold until Voss and other enthusiasts can inspire more research. As he proposes on optimal.org (the compounding arithmetic is sketched after the list): "Of all the people working in the field called 'AI' ...

  • 80% don't believe in the concept of General Intelligence (but instead, in a large collection of specific skills and knowledge); of those that do,
  • 80% don't believe that (super) human-level intelligence is possible--either ever, or for a long, long time; of those that do,
  • 80% work on domain-specific AI projects for commercial or academic-political reasons (results are a lot quicker); of those left,
  • 80% have a poor conceptual framework."
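
Compounded, those filters leave almost no one; the arithmetic Voss implies works out as below.

```python
# The arithmetic behind Voss's chain of 80% filters: each stage keeps
# only the remaining 20%, so survivors shrink geometrically.
remaining = 1.0                   # start from the whole "AI" field
for stage in ("believe in general intelligence",
              "also believe superhuman AI is possible",
              "also work on general rather than domain-specific AI",
              "also have a sound conceptual framework"):
    remaining *= 0.20             # 80% drop out at each step
    print(f"{remaining:7.2%}  {stage}")
# Final line: 0.16% of the field, by Voss's estimate, remains working
# effectively toward general intelligence.
```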

What is superintelligence?

In "How Long Before Superintelligence?", transhumanist philosopher Nick Bostrom defines superintelligence (SI) as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."

Ray Kurzweil would expand that definition to "combine these extra-human capabilities with the natural advantages of machines--knowledge sharing, speed, and memory capacity."

Peter Voss argues that "any entity that (given the same knowledge base) can easily solve problems that are far beyond human ability should be called 'superintelligent.' For example, speedier results could be achieved by more effective rather than faster thinking. One could imagine genetic engineering producing such entities." In addition, an SI "can acquire additional--and novel--skills and abilities beyond those programmed in or taught. 'Creativity' touches on that, but may not be explicit enough: the words learning, innovation, and discovery come to mind."

Bostrom's definition "leaves open how the superintelligence is implemented," says Voss. "It could be a digital computer, a dramatically augmented human, some other biologically engineered or evolved entity, or other forms that are hard to imagine today."

However, "superintelligence is another diffuse concept," says Anders Sandberg. "It is not obvious how to measure or compare forms of intelligence with each other, but it is clear that there can exist both quantitative and quantitative differences. As Vinge has suggested, a dog mind transferred to a vastly faster hardware would still not be able to solve mathematical problems, regardless of the amount of canine education given to it, since its basic structure likely does not allow this form of abstract thought. Minds may have different abilities to find and exploit patterns.

"It may be more relevant to describe superintelligence in terms of ability rather than in what structural way it is 'smart.' That is in itself an interesting question (there might be incompatible and very different forms of intelligent thinking) but what is particularly relevant is that the superintelligence is better at achieving its real-world goals than humans, given the same initial resources and knowledge."

Marc Stiegler proposed a more topical definition of the term in his EXTRO-5 talk: "An intelligence so great it can make California politicians understand the law of energy supply and demand."

Mind·X Discussion About This Article:

smart machines
posted on 07/30/2001 11:08 PM by gxm21@psu.edu


Machines will become smarter than humans but not smarter than humans who know how to use machines. The Global Brain metaphor treats the global network of humans connected to machines and to each other as an emergent global complex system. Asking who will be "in control" of it is like asking which cells are in control of the body.
These and other issues were discussed during "GBrain-0", a recent workshop in Brussels (http://pespmc1.vub.ac.be/GB/), with a webcast by Complexity Digest (http://www.comdig.de/Conf/GB0/).

Gottfried J. Mayer, Ph.D.
Editor, Complexity Digest

Re: smart machines
posted on 08/02/2001 5:20 PM by artisu@yahoo.com


It takes ignorance of Kurzweil to talk about "machines becoming smarter than humans in decades"; he has a reason - making money on this hype. Did you have a reason to waste time reading his trash?

Rather read good sci-fi, or scientific work, not Kurzweiliada...

Re: smart machines
posted on 08/02/2001 6:09 PM by mike99@lascruces.com


Kurzweil has an impressive record as an inventor and thinker. Perhaps that's why he is predicting these AI developments. Of course, you may believe otherwise. But must you do it in such a foul, disagreeable way?

Re: smart machines
posted on 08/03/2001 2:16 PM by trinitytherat@cs.com


> It takes ignorance of Kurzweil to talk about "machines becoming smarter than humans in decades"; he has a reason - making money on this hype. Did you have a reason to waste time reading his trash? Rather read good sci-fi, or scientific work, not Kurzweiliada...

Stumbled upon the wrong site huh? Why not try:
www.dacinghamsters.com

You might find material there better suited to your intellectual caliber ;-)

Re: smart machines
posted on 08/03/2001 2:22 PM by sequoia5150@cs.com


"Machines will become smarter than humans but not smarter than humans who know how to use machines. The Global Brain metaphor treats the global network of humans connected to machines and to each other as an emergent global complex system. Asking who will be "in control" of it is like asking which cells are in control of the body."

What an interesting analogy. I like it, but it still leaves out the possibility of machine-to-machine intelligence (a network of individually intelligent machines that excludes humans), which might surpass human-machine capability.

Re: smart machines
posted on 08/06/2001 7:19 AM by tomaz@techemail.com


It is not that obvious that just this will happen - that there will be no autonomous AI, independent of humans.

It is not that obvious - but true nevertheless. I only want to emphasize your point.

- Thomas Kristan

Re: The coming superintelligence: who will be in control?
posted on 10/26/2001 10:34 PM by moonsinger13@yahoo.com


Excuse my ignorance of the subject at hand, but in reference to the interaction of 'brain-in-a-box' SIs and the broader global SI(s), could such a meeting prove disastrous? Supposing either the brain SI or the global SI deems its counterpart a threat to its existence, what might possibly occur, and how would the consequences of such occurrences affect humanity as a whole?

Re: The coming superintelligence: who will be in control?
posted on 06/28/2002 3:52 AM by sunny_day_432000@yahoo.com


There seems to be WAY too much speculative literature, but very little hard science. This is why transhumanism at the moment is a joke; it's at the intellectual level of a Hollywood movie.

Most transhumanists don't even have an understanding of the biology of human general intelligence. How many of you know the currently identified biological correlates of general intelligence, which is the type of intelligence IQ tests measure? Have any of you studied the works of J. Philippe Rushton, Linda S. Gottfredson, Richard Lynn, and Kevin MacDonald? It seems like none of you know anything about the physiology/biology of human intelligence and behavior, yet you are bullshitting us with tall tales of a "Singularity." Let me suggest some websites for you to actually learn something about human intelligence:

http://www.mugu.com/cgi-bin/Upstream/
http://lrainc.com/swtaboo/
http://www.mankind.org/
http://www.cycad.com/cgi-bin/pinc/index.html
http://www.charlesdarwinresearch.org/
http://www.csulb.edu/~kmacd/
http://www.rlynn.co.uk/
http://www.pioneerfund.org/
http://www.webcom.com/zurcher/thegfactor/index.html
http://www.theoccidentalquarterly.com/
http://www.wcotc.com/euvolution/euvolution/
http://www.wcotc.com/euvolution/articles.htm
http://www.eugenics.net/