    Are We Enlightened Guardians, Or Are We Apes Designing Humans?
by Douglas Mulhall

Thanks in part to molecular manufacturing, accelerated developments in AI and brain reverse-engineering could lead to the emergence of superintelligence in just 18 years. Are we ready for the implications -- like possible annihilation of Homo sapiens? And will we seem to superintelligence what our ape-like ancestors seem to us: primitive?


Originally published in Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology, Volume 2, No. 2, May 8, 2006. Reprinted with permission on KurzweilAI.net, May 22, 2006.

Most students of artificial intelligence are familiar with this forecast made by Vernor Vinge in 1993 [1]: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

That was thirteen years ago. Many proponents of super-intelligence say we are on track for that deadline, due to the rate of computing and software advances. Skeptics argue this is nonsense and that we're still decades away from it.

But fewer and fewer argue that it won't happen by the end of this century. This is because history has shown the acceleration of technology to be exponential, as explained in well-known works by inventors such as Ray Kurzweil and Hans Moravec, some of which are elucidated in this volume of essays.

A classic example of technology acceleration is the mapping of the human genome, which achieved most of its progress in the late stages of a multi-year project that critics wrongly predicted would take decades. By the end of the project, the rate of mapping was exponentially higher than at the beginning, due to rapid automation that has since transformed the biotechnology industry.
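To make the arithmetic concrete, here is a minimal sketch (mine, not the article's) of why linear extrapolation from a project's early years fails under exponential acceleration. The two-year doubling time and thirteen-year span are illustrative assumptions, not measured figures from the genome project.

```python
# Minimal sketch: cumulative progress when the rate of work doubles
# every `doubling` years. The 2-year doubling time is an assumption
# chosen for illustration.

def cumulative(years: float, doubling: float = 2.0) -> float:
    """Work completed after `years` (proportional to the integral of a 2**(t/doubling) rate)."""
    return 2 ** (years / doubling) - 1

span = 13  # years, roughly the length of the Human Genome Project
total = cumulative(span)

# Roughly half of all the work lands in the final two years...
final_two = total - cumulative(span - 2)
print(f"done in last 2 of {span} years: {final_two / total:.0%}")  # ~51%

# ...so a critic extrapolating linearly from year-one output forecasts
# a project hundreds of years long.
print(f"linear forecast: {total / cumulative(1):.0f} years")  # ~216 years
```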

The same may be true of molecular manufacturing (MM) as self-taught machines learn via algorithms to do things faster, better, and cheaper. I won't describe the technology of MM here because that is well covered in other essays by more competent experts.

MM is important to super-intelligence because it will revolutionize the processes required to understand our own intelligence, such as neural mapping via neural probes that non-destructively map the brain. It also will accelerate three-dimensional computing, where the space between computing units is reduced and efficiency multiplied in the same way that our own brains have done it. Once this happens, the ability to mimic the human brain will accelerate, and self-aware intelligence may follow quickly.
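As a rough geometric aside (my illustration, not the essay's), the advantage of three-dimensional computing can be seen in how the longest signal path scales when the same number of units is packed into a volume instead of a plane.

```python
# Back-of-envelope sketch: for N units at unit spacing, the grid side
# length -- and hence the worst-case signal path -- scales as N**(1/2)
# on a 2D chip but only N**(1/3) in a 3D volume, the packing geometry
# biological brains already exploit.
N = 1_000_000_000  # a billion units; loosely neuron-scale, for illustration

side_2d = N ** 0.5      # ~31,623 unit lengths across a plane
side_3d = N ** (1 / 3)  # ~1,000 unit lengths across a cube
print(f"2D: {side_2d:,.0f}  3D: {side_3d:,.0f}  "
      f"longest paths shrink ~{side_2d / side_3d:.0f}x")  # ~32x
```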

This type of acceleration suggests that Vinge's countdown to the beginning of the end of the human era must be taken seriously.

The pathways by which super-human intelligence could evolve have been well explained by others and include: computer-based artificial intelligence, bioelectronic AI that develops super-intelligence on its own, or human intelligence that is accelerated or merged with AI. Such intelligence might be an enhancement of Homo sapiens, i.e. part of us, or completely separate from us, or both.

Many experts argue that each of these forms of super-intelligence will enhance humans, not replace them, and although they might seem alien to unenhanced humans, they will still be an extension of us because we are the ones who designed them.

The thought behind this is that we will go on as a species.

Critics, however, point to a fly in that ointment. If the acceleration of computing and software continues apace, then super-intelligence, once it emerges, could outpace Homo sapiens, with or without piggybacking on human intelligence.

This would see the emergence of a new species, perhaps similar in some ways, but in other ways fundamentally different from Homo sapiens in terms of intelligence, genetics, and immunology.

If that happens, the gap between Homo sapiens and super-intelligence could quickly become as wide as the gap between apes and Homo sapiens.

Optimists say this won't happen, because everybody will get an upgrade simultaneously when super-intelligence breaks out.

Pessimists say that just a few humans or computers will acquire such intelligence first, and then use it to subjugate the rest of us Homo sapiens.

For clues as to who might be right, let's look at outstanding historical examples of how we've used technology and our own immunology in relation to less technologically adept societies, and in relation to other species.

When technologically superior Europeans arrived in North and South America, the indigenous populations didn't have much time to contemplate such implications because in just a few years, most who came in contact with Europeans were dead from disease. Many who died never laid eyes on a European, as death spread so quickly ahead of the conquerors through unknowing victims.

Europeans at first had no idea that their own immunity to disease would give them such an advantage, but when they realized it, they did everything to use it as a weapon. They did the same with technologies that they consciously invented and knew were superior.

The rapid death of these ancient civilizations, numbering in the tens of millions of people across two continents, is not etched into the consciousness of contemporary society because those cultures left few written records and had scant time to document their own demise. Most of what they put into pictures or symbols was destroyed by religious zealots or wealth-seeking exploiters.

And so, these civilizations passed quietly into history, leaving only remnants.

By inference, enhanced intelligence easily could take choices about our future out of our hands, and may also be immune to hazards such as mutating viruses that pose dire threats to human society.

Annihilation of Homo sapiens could occur in one of many ways:

  • The "oops" factor: accidental annihilation at the hands of a very smart klutz, e.g. by something that is unwittingly immune to things that kill us, or that is smart in one way, but inept in others. Predecessors to super-intelligence may only be smarter than us in some ways, and therein lies a danger. An autistic intelligence could do us in by accident. Just look at current technology, where computers are more capable than humans in some ways but hopeless in others.
  • Annihilation in the crossfire of a war-like competition between competing forms of super-intelligence, some of which might include upgraded Homo sapiens. One of the early, deadlier competitions could be for resources as various forms of super-intelligence gobble up space that we occupy, or remake our ecology into an environment more suitable to their needs.
  • Deliberate annihilation or assimilation because we are deemed inferior.

If Vernor Vinge is right, we have 18 years before we face such realities. Centuries ago, the fate of the indigenous civilizations of North and South America was decided in a similar time span. So, the time to address such risks is now.

This is especially true because paradigms shift more quickly now; therefore, when the event occurs we'll have less time, perhaps five years or even just one, to consider our options.

What might we use as protection against these multi-factorial threats?

Sun Microsystems cofounder Bill Joy's April 2000 treatise, "Why the future doesn't need us," [2] summarized one school of thought, arguing the case for relinquishment -- eschewing certain technologies due to their inherent risks.

Since that time, most technology proponents have been arguing why relinquishment is impractical. They contend that the march of technology is relentless and we might as well go along for the ride, but with safeguards built in to make sure things don't get too crazy.

Nonetheless, just how we build safeguards into something smarter than us, including an upgraded version of ourselves, remains unanswered. To see where the solutions might lie, let's again look at the historical perspective.

If we evaluate the arguments between technology optimists and relinquishment pessimists in relation to the history of the natural world, it becomes apparent that we are stuck between a rock and a hard place.

The ‘rock’ in this case could be an asteroid or comet. If we were to relinquish our powerful new technologies, chances are good that an asteroid would eventually collide with Earth, as has occurred before, throwing human civilization back to the Dark Ages or worse.

For those who scoff at this as an astronomical long shot, be reminded that Comet Shoemaker-Levy 9 punched Earth-sized holes in Jupiter less than a decade after the space tools necessary to witness such events were launched, and just when most experts were forecasting such occurrences to be once-in-a-million-year events that we would likely never see.

Or perhaps we would be thrown back by other catastrophic events that have occurred historically, such as naturally induced climate changes triggered by super-volcanoes, collapse of the magnetosphere, or a nearby supernova.

Due to those natural risks, I argue in my book, Our Molecular Future, that we may have no choice but to proceed with technologies that could just as easily destroy us as protect us.

Unfortunately, as explained in the same book, an equally bad "hard place" sits opposite the onrushing "rock" that threatens us. The hard place is our social ineptness.

In the 21st century, despite tremendous progress, we still do amazingly stupid things. We prepare poorly for known threats, including hurricanes and tsunamis. We go to war over outdated energy sources such as oil, and some of us increasingly overfeed ourselves while, ironically, hundreds of millions of people starve. We often value conspicuous consumption over saving impoverished human lives, as low-income victims of AIDS or malaria know too well.

Techno-optimists use compelling evidence to argue that we are vanquishing these shortcomings and that new technologies will overcome them completely. But one historical trend bodes against this: emergence of advanced technologies has been overwhelmingly bad for many of the less intelligent species on Earth.

To cite a familiar refrain: We are massacring millions of wild animals and destroying their habitat. We keep billions more domestic farm animals, in ever-growing numbers, under inhumane, painful, plague-breeding conditions.

The depth and breadth of this suffering is so vast that we often ignore it, perhaps because it is too terrible to contemplate. When it gets too bothersome, we dismiss it as animal rights extremism. Some of us rationalize it by arguing that nature has always extinguished species, so we are only fulfilling that natural role.

But at its core lies a searing truth: our behavior as guardians of less intelligent species, which we know feel pain and suffering, has been and continues to be atrocious.

If this is our attitude toward less intelligent species, why would the attitude of superior intelligence toward us be different? It would be foolish to assume that a more advanced intelligence than our own, whether advanced in all or in only some ways, will behave benevolently toward us once it sees how we treat other species.

We therefore must consider that a real near-term risk to our civilization is that we invent something which looks at our ways of treating less intelligent species and decides we're not worth keeping -- or, if we are worth keeping, that we should be placed in zoos in small numbers where we can't do more harm. The resulting questions:

  • How do we instill into super-intelligence 'ethical' behavior that we ourselves poorly exhibit?

  • How do we make sure that super-intelligence rejects certain unsavory practices, just as we banned slavery?

  • Can we reach into the future to prevent a super-intelligence from changing its mind about those ethics?

These questions have been debated, but no broad-based consensus has emerged. Instead, as the discussions run increasingly in circles, they suggest that we as a species might be comparable to 'apes designing humans'.

The ape-like ancestors of Homo sapiens had no idea they were contributing DNA to a more intelligent species. Nor could they hope to comprehend it. Likewise, can we Homo sapiens expect to comprehend what we are contributing to a super-intelligent species that follows us?

As long as we continue to exercise callous neglect as guardians of species less intelligent than ourselves, it could be argued that we are much like our pre-human ancestors: incapable of consciously influencing what comes after us.

The guardianship issue leads to another question: How well are we balancing technology advantages against risks?

In the mere 60 years since our most powerful weapons—nuclear bombs—were invented, we've kept them mostly under wraps and congratulated ourselves for that, but we have also seen them proliferate from just one country at first to at least ten, with some of those balanced on the edge of chaos.

Likewise, in the nanoscale technology world that precedes molecular manufacturing, we've begun assessing risks posed to human health by engineered nanoparticles, but those particles are already being put into our environment and into us.

In other words, we are still closing the proverbial barn doors after the animals have escaped. This limited level of foresight is light years away from being able to assess how to control the onrushing risks of molecular manufacturing or of enhanced intelligence.

Many accomplished experts have pointed out that the same empowerment of individuals by technologies such as the Internet and biotech could make unprecedented weapons available to small disaffected groups.

Technology optimists argue that this has occurred often in history: new technologies bring new pros and cons, and after we make some awful mistakes with them, things get sorted out.

However, in this case the acceleration rate by its nature puts these technologies in a class of their own, because the evidence suggests they are running ahead of our capacities to contain or balance them. Moreover, the number of violently disaffected groups in our society who could use them is substantial.

To control this, do we need a "pre-crime" capacity as envisaged in the film Minority Report, where Big Brother methods are applied to anticipate crime and strike it down preemptively?

The pros and cons of preemptive strikes have been well elucidated recently. The idea of giving up our freedom in order to preserve our freedom from attack by disaffected groups is being heavily debated right now, without much agreement.

However, one thing seems to have been under-emphasized in these security debates:

Until we do the blatantly positive things such as eliminate widespread diseases, feed the starving, house the homeless, depose dictators, stop torture, stop inhumane treatment of less intelligent species, and other do-good things that are treated today like platitudes, we will not get rid of violently disaffected groups.

By doing things that are blatantly humane (despite the efforts of despots and their extremist anti-terrorist counterparts to belittle them as wimpy), we might accomplish two things at once: greatly reduce the numbers of violently disaffected groups, and present ourselves to super-intelligence as enlightened guardians.

Otherwise, if we continue along the present path, we may someday seem to superintelligence what our ape-like ancestors seem to us: primitive.

In deciding what to do about Homo sapiens, a superior form of intelligence might first evaluate our record as guardians, such as how we treat species less intelligent than ourselves, and how we treat members of our same species that are less technologically adept or just less fortunate.

Why might super-intelligences look at this first? Because just as we are guardians of those less intelligent or fortunate than us, so super-intelligences will be the guardians of us and of other less intelligent species. Super-intelligences will have to decide what to do with us, and with them.

If Vinge is accurate in his forecast, we don't have much time to set these things straight before someone or something superior to us makes a harsh evaluation.

Being nice to dumb animals or poor people is by no means the only way of assuring survival of our species in the face of something more intelligent than us. Using technology to massively upgrade human intelligence is also a prerequisite. But that, on its own, may not be sufficient.

Compassion by those who possess overwhelming advantages over others is one of the special characteristics that Homo sapiens (along with a few other mammals) brings to this cold universe. It is what separates us from an asteroid or super-nova that doesn't care whether it wipes us out.

Further, compassionate behavior is something most of us could agree on, and while it is often misinterpreted by some as a weakness, it is also what makes us human, and what most of us would want to contribute to future species.

If that is so, then let's take the risk of being compassionate and put it into practice by launching overarching works that demonstrate the best of what we are.

For example, use molecular manufacturing and its predecessor nanotechnologies to eliminate the disease of aging, instead of treating the symptoms. That is what I personally have decided to focus on, but there are many other good examples out there, including synthesized meat that eliminates inhumane treatment of billions of animals, and cheap photovoltaic electricity that could slash our dependence on oil—and end wars over it.

Such works are not hard to identify. We just have to give them priority. Perhaps then we will seem less like our unwitting ancestors and more like enlightened guardians.


1. Vernor Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era" (1993). http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

2. Bill Joy, "Why the future doesn't need us," Wired, April 2000. http://www.wired.com/wired/archive/8.04/joy.html

© 2006 Douglas Mulhall. Reprinted with permission.

Mind·X Discussion About This Article:

The emergence of superintelligence in just 18 years
posted on 05/22/2006 10:53 AM by artmorse

Understanding and decision making require the ability to rapidly examine a huge database (probably terabytes). To stay on track for achieving this in just 18 more years, the research to develop VLS associative memories will have to be greatly accelerated. NNs, holographic, and QM approaches are all in the running, as well as silicon-related types. At present a conventional design (using silicon, etc.) would require one Hoover Dam of energy per individual. Superconducting devices (Josephson junctions) might reduce this by a factor of a hundred, but that's still a long way to go.

Another area that is omitted in the discussion is the progress being made in behavioral genomics. Human (and all life) behavior and emotions are largely determined by the genome, in ways that will be much better understood in the next 18 years, and this knowledge will be critical in any question involving the morality of action of a superintelligent being.

The nature of intelligence itself is poorly understood. The four or five cognitive abilities normally used on an IQ test are but a fraction of what exists, and those most germane to superintelligence are probably beyond our comprehension. This means we cannot design a system or being with superintelligence. An evolutionary approach is the only possibility, if the right criterion for "fittest" can be devised. Bottom line: we are on very shaky ground here!
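To put the commenter's energy gap in numbers, here is a rough check (assuming roughly 2 GW for Hoover Dam's generating capacity and roughly 20 W for a human brain -- both order-of-magnitude figures, not taken from the post):

```python
# Rough check of the energy gap described above. Both inputs are
# order-of-magnitude assumptions: Hoover Dam generates roughly 2 GW,
# while a human brain runs on roughly 20 W.
hoover_dam_w = 2e9                # ~2 GW per individual, conventional silicon design
brain_w = 20.0                    # ~20 W, the biological benchmark
josephson_w = hoover_dam_w / 100  # the commenter's 100x superconducting saving

print(f"after the 100x saving: {josephson_w / 1e6:.0f} MW per individual")
print(f"still {josephson_w / brain_w:,.0f}x a brain's power budget")  # ~1,000,000x
```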

Re: The emergence of superintelligence in just 18 years
posted on 05/22/2006 6:37 PM by marraco

Why do people confuse intelligence with motivation?

We humans have intelligence, but also emotional (yes, I mean NOT rational) objectives and motivations.
One of them is the motivation "hunger for power".
... or "feeling superior to less intelligent beings".

But AI will never have these motivations without humans intending for it to have them.

Those motivations could slip in under the radar if we simply copy the human brain. We are hardwired for an entire mix of objectives, not necessarily separated into by-design modules.

If we make a literal copy of the human brain in an emulator, then yes, it is to be expected that this thing will have human "weaknesses".

But simply creating an AI does NOT mean that it will want to take control over other intelligences... at least no more than it "will want to calculate prime numbers", or "will want to eat snails in petroleum sauce on the moon".

The future of AI: human-machine war or submission to technology
posted on 05/22/2006 11:46 AM by anyguy

I think the point is why a so-called intelligent machine could turn against humans or deem us inferior. It is clear that some software can learn and act on its own. This is the inevitable stage of cybernetics in all evolutionary systems. However, the question of whether such machines will start killing us is a different one. How soon will such machines gain such autonomy that they can change their algorithms and behave maliciously?

As a lawyer I am aware that law is a means to regulate behavior, along with other modalities such as markets, social norms, and architecture. We can deduce that, at a point of sufficient autonomy, we will regulate the behavior of the superintelligence. So now we come to the point: what are the motives for this regulation? What will be the priorities when regulating superintelligence?

It is seen that all modes of regulation increasingly submit to market rationality. In today's modern society it can easily be observed that all behaviors, and even psychology, are constructed to facilitate production and efficiency. This will inevitably result in machine-like humans, but not human-like machines.

I think the ineptness of human society, as seen in various examples of killing, genocide, or labor exploitation, is clear evidence that we will use superintelligence to accumulate wealth and sustain material progress, which are accepted to be a priori positive. Are they?

Because I truly understand the evolutionary paradigm, I can also foresee that some time in the future a superintelligence, through learning, may develop malicious behaviors -- but not in the coming 18 years. As a lay person I would like to know: what sort of programming features will cause this? How is it going to be programmed such that it would behave uncontrolled? Shouldn't there be a fundamental programming goal or objective that would create such an unwanted solution? I think a detailed explanation of this topic comes before the questions of how we can integrate ethics into superintelligence or how MM can be controlled. The truth is: it is controlled by the market hegemony with a view to obtaining further profitability. The question is where this control leads us...

I believe these doomsday scenarios are quite bothersome because they blur the inherent and more prominent danger: that superintelligence, welcomed as a novelty, shall further deepen the integration of humans into the production process and the enslavement of the human race by corporate culture.

Quoting: "But one historical trend bodes against this: emergence of advanced technologies has been overwhelmingly bad for many of the less intelligent species on Earth." Yes, I believe in today's world individuals who are bound to earn a wage for a living are the less intelligent species, and third-world workers gradually fare even worse.

As a person living on this planet for 31 years, I have not seen any technological invention which primarily aims to cure human suffering rather than serve productivity and profit. Since the technology is seeded in US universities, it will ultimately be controlled and shaped by corporate hegemony.

Most of all, do we need AI to do blatantly humane things?

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 05/22/2006 11:56 AM by bta1138

It would seem to me that the best way to design a super-intelligence would be to create it as a "brain-in-a-jar" scenario. Much like the H.A.L. computer in "2001: A Space Odyssey", this super-intelligence would be just a "brain" without a body. And if the creators of this new "brain" never gave it access to the outside world (i.e. via the internet or by giving it any sort of mobility), there would be no chance of it ever having any sort of control over us. That way, we would be able to control its growth and thereby never let the technology run away from us. When the time is right, it would be given more human-like qualities so that it would be able to interact with us and society to improve our species.

But of course, that's only possible if we create something based solely on a computer intelligence. As far as upgrading humans goes, that's another story. But I imagine that upgrading ourselves would involve technologies made in a research facility that would take major precautions before implanting or uploading anything that would allow a "new-human" to take over the world. One would hope.

I agree that we're still a very savage, selfish, illogical species -- for the most part. There are those few who realize what is ethical and morally appropriate for a proper society and for the benefit of our planet. And it would seem to me that those who are working on these new technologies are those few who want what's best for us. I don't believe there are that many "mad scientists" around anymore, do you?

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 05/22/2006 1:51 PM by bjorke

The definition of "superintelligence" is so vague and slippery as to defy any sort of specific plan of action about how to deal with it -- whatever it is.

I'd like to toss out this notion, however -- that superintelligent entities have been with us for quite some time, in the form of collectives. Governments and corporations are examples of such multi-brained entities whose abilities, while sometimes awkward, in general far outstrip the capabilities of any single mind. No single human can know more about medicine than a hospital does. Today the upper ranks of such organizations are already being supplemented by computational "decision enhancers" such as SAP tools and the like. In no case have I heard of such tools being designed to account for humane factors. Indeed, the underlying principles of corporate profit prevent it -- while corporate sponsorship of charity is permitted and encouraged in some circles, it is performed separately from day-to-day business, lest stockholders remove the executives for frittering away capital on any activity that doesn't confer a direct monetary benefit on the entity itself. Economists enjoy calling such behaviors "Darwin in action", and there's little to indicate any other likely course in the future. The history of relationships between governmental and corporate collectives w.r.t. individuals (even those who are parts of the collective) has not been good.

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 05/22/2006 2:17 PM by gregmanos

Keep in mind that, whenever superintelligence comes about, IT WILL BE A COLLECTIVE INTELLIGENCE, not unlike the Borg on Star Trek. Why? Not only because all information technology will be linked & interdependent but because it will be the AGGREGATE POWER OF THE INTERNET AS A WHOLE that is most likely to achieve superintelligence in the first place.

A collective intelligence will by definition possess all the wisdom that humanity has accumulated over the millennia. Thus there is no reason to fear that the collective intelligence will be hostile to humans or that it will possess other primitive human traits such as greed, selfishness, cruelty, etc.

The worst-case scenario is not that the collective intelligence will view humans as a threat or as something to be conquered or exterminated. At worst, the collective intelligence will view humans and their well-being as merely irrelevant.

Even if humans can't control the collective intelligence, they still will be armed with high-tech capabilities of their own. So, in the event that the collective intelligence would, ultimately (as Kurzweil suggests), begin to "consume" the matter of our solar system, humans by then would have the ability to simply "get out of its way".

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 05/22/2006 2:30 PM by robertkernodle

We are chimps playing with guns.

Robert K.

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 05/22/2006 2:40 PM by Kismet_Undone

I think that a "higher" intelligence would not be as apathetic about suffering as we are.

A cat will destroy a mouse if it can get its claws on it and feel no remorse (and if you've seen a cat in action, you'd agree), but humans own and care for cats and mice.

True, we do experiment on these creatures, though I think only because we lack the technologies to perform these tests with less barbarism.

There is no guarantee that the next level of intelligence would follow a set of ethics that involves not harming humans but I'd like to think that such intelligence would be able to see a broader picture and, by necessity, feel empathy for all life.

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 05/22/2006 4:42 PM by Mindss

First of all I would ask the author of the article to please stop making the false assumption that consumption in advanced countries is the reason for starvation in less advanced countries. The reason the vast majority of people starve in less developed countries is political. They either live in war torn areas or live under a dictator. Every hunger crisis of the last couple decades has revolved around these issues. Starvation in Ethiopia, North Korea, Zimbabwe, Sudan, etc..., all had or have nothing to do with the supply of resources/food, but with the distribution.

Also, I think there must be an evolutionary trait within humans to always always always view the glass as half empty (thinking negatively). True, humans do kill a lot of animals for food, but humans also care for animals. In fact, something like 3 billion dollars a year is spent on bird food in the U.S. alone. I am not talking about chicken feed. I am talking about song birds. We spend vastly more caring for our pets. They (cats, dogs, horses, mice, etc...) receive better care than some humans. There are groups of people (vegetarians, Buddhists, etc...) who go out of their way to protect all animal life. Humans are not all bad. In fact, most people here in this forum probably consider themselves super-ethical.

Just because a super-intelligence could destroy humans doesn't mean it will.

-----------------
posted on 05/23/2006 12:02 PM by anyguy

"A slave is one who waits for someone to come and free him." ---Ezra Pound

Re: -----------------
posted on 05/23/2006 4:59 PM by souljah

While we are going to be ever more aware of the increasing speed of development, it isn't going to be experienced as more of the same, more quickly. There will not be competition based on scarcity, because we will enter a time of unimaginable abundance.

It seems like the struggle that might occur would be the effort by the already-post-human to restrict those who are deemed immature from access to advanced technology.

I believe that the smarter you are, the clearer a thinker you are, the more difficult it is to be selfish. From this viewpoint, it seems like the future will figure out morality on its own even if we continue to be a fairly poor example of selflessness.

The danger is the halfway step: gaining a lot of power while remaining mostly organic. It is easy to imagine a lot of religious people declining body augmentation and immortality while accepting the democratization of WMD. Similar to those today who reject stem cell research and are part of the NRA.

Ultimate power will be in the hands of those who transform completely, and it will be up to (that entity/those entities) what to do about the rest of humanity.

Re: -----------------
posted on 05/23/2006 5:14 PM by souljah

The most exciting solution is the one hinted at by anyguy's Ezra Pound quote: that humanity goes through a singularity before the singularity; that it doesn't wait for an outside, technological answer. Perhaps as the outside, technological answer gets closer, more people will understand that this is a coming reality and work on discovering the immanent source of self-transcendence that is present in each of us.

We will either free ourselves from slavery, or concede defeat and either wait for the nanotech whirlwind that will turn us into super beings, or remain just as we are and wait for the transhuman entity to be our teacher, the real version of Frank Herbert's God Emperor, Leto II.

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 05/26/2006 6:56 AM by wfaxon

The fear that our superintelligent computers will somehow run amok and destroy humanity seems to me to completely ignore the real problem:

A superintelligence is (among other things) a weapon-generating machine. The first group(s) to get access to these machines will act to dominate humanity, if only to prevent others from doing so. One such group will succeed.

You should assume that the group that succeeds will either murder most of the people on Earth, find a way to completely police the most minute human action, or use superintelligence-generated methods such as nanotech to take direct control of all human brains.

Maybe we'll all be convinced that Christ Jesus has returned to reign, or perhaps the 12th Imam instead. (I suppose both are possible simultaneously.) In any event, after superintelligence has yielded total hegemony to one group, "crimethink" (as defined by Orwell) will be impossible and the "singularity" (whatever it might mean) will also be impossible.

We should not fear out-of-control machines so much as controlled machines.

It is fortunate that we have no idea how to build a superintelligent computer.

-Walter

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 06/24/2006 1:19 PM by ALFPerrin

I have been reading these essays with a great deal of interest. I wanted to pause for a moment, however, to comment upon this particular one, since it seems to me to raise some critically important issues. Frankly, I consider it one of the more damaging and dangerous essays to our very existence amongst those posted here.

If we indeed are to someday live with a superintelligence -- whether it is strong AI or EI -- then its attitude toward us will be shaped by us, since we are in fact its origin and creator. To claim that mankind as a whole is deserving of destruction solely because we as a species evolved as omnivores -- through no fault of our own -- is grossly unfair, and a self-fulfilling prophecy for our own demise.

No one wants animals to suffer. If you divorce the emotionalism associated with this issue, you will find that we have indeed trodden fairly lightly on the other species that inhabit this world with us. We have set aside vast tracts of natural habitat on every continent for wild animals to live where they currently thrive. From Yellowstone National Park to the Dudhwa Tiger Preserve in India, we have in fact been very sensitive and accommodating to other life forms. As human beings we don't generally enjoy the suffering of others whom we have traditionally used as food sources. This is amply evidenced by the fact that in this essay, the author is able to effectively stimulate our emotions with graphic depictions of the cruelty to animals that he claims our omnivorous activities create. If we were indeed cruel enough to deserve being snuffed out by the singularity, as he so darkly warns, then we wouldn't even care enough to be affected by his claims.

I personally love animals. I do routinely shoot them -- but I use my digital Canon D20 camera -- and have some wonderful photos of Michigan White Tail Deer, Yellowstone Elk, Florida Osprey, and various other animals, plants, and natural scenery. And, I also enjoy a chicken salad sandwich. Does that mean I'm evil, cruel, and deserving of a singularity-imposed dystopia, or extinction? No. It just means I'm hungry.

I intend to be the singularity someday. But I want my part of that being to know and appreciate the goodness that mankind has created, not to have its mistakes or insensitivities outweigh the good in some narrow set of vegan weights and balances. The choice is still ours. How we view ourselves now will someday be strongly reflected in how the singularity views us as well. Take a good look in the mirror. What stares back at you is neither a monster, nor yet an enlightened creature of light. It's somewhere in between. What happens to it is still your choice to make. And your viewpoint of it now will affect what it becomes in the future.

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 06/24/2006 6:51 PM by richiemobile

I would digress to a previously promoted analogy in human history. Prior to Gutenberg's printing press, in Western culture the mass peasantry was provided information at the cathedral by the purveyors and only literate harbingers of knowledge, the spokespeople of the church. Therein lies the intransigent authority of the Papacy in the history of the Western world through the first 1500 years after the birth of Christ.

That authority changed form when vast libraries were created and, once in a while, a "fact arrived" that seemed to disagree with what those occupying the cathedral were telling everyone. Henry VIII did this in a very public way, creating the Anglican church; Martin Luther tacked his proclamation on a Wittenberg church door; and theo-politically, "all hell broke loose". This continues today as religions struggle to update themselves constantly. Witness the struggles to include gays and women in the leadership of churches, and for moderate Muslims to deal with perhaps less enlightened theological positions such as "all non-Muslims must be killed".

As for the creation of a superintelligence, I would state that this has already happened. There are computers at financial institutions, designed for "your protection", that monitor your spending habits and will "shut down" your credit cards if there "appears" to be an irregular pattern in your purchasing habits. This has been caused, unfortunately, by the rampant ability of people to electronically imitate other people by simply stealing their credit cards and going on a wild spending spree.

Re: Are We Guardians, Or Are We Apes Designing Humans?
posted on 11/12/2006 3:00 PM by kevin22

I am very interested, and thoughts have been provoked -- thank you. Some of my own thoughts:

I think that the same evolutionary principles that have applied to us human beings (and all of life on this planet, for that matter) will continue to apply moving forward. The phenomenon of competition seems to be the single most consistent element in our universe, in terms of the distribution and organization of energy and matter. Attraction and repulsion of "particles" (and I use that term loosely) of energy, and the competition from other bodies' attractive and repulsive forces, define the locale of a particular body. These same competitive concepts seem to apply from large galaxy clusters, to galaxy "neighborhoods", to galaxies, to solar systems, to planets and moons, to ecosystems defined by climate and weather, to the niche competed for by species amongst other species, to social standing within a population of a species. The last two, interspecies and intraspecies competition, show, as the writer demonstrates, how we human beings have succeeded in both. Our brutality and dominance over other species is never more evident than in how farming is painted above. The political, capitalistic, religious, and social conflicts underscore the ferociousness of intraspecific competition.

Human beings, who right now lead the evolutionary race, have dashed ahead because of a recent breakthrough extremely beneficial to the success of human beings (and the eventual family of species descending from Australopithecus): intelligence. It seems the improvements of intelligence have allowed life to reach today's state, and have made human beings far more successful than our ancestors in interspecies competition with any species interested in the same resources as we are.

Therefore, if some other animal is going to evolve from human beings, no matter how similar or different to human beings it is, it would seem a generally accurate assumption that its interspecies competitive edge would have to be greater than humankind's. Advancements in intelligence again seem safe to assume, but we must remember that is an assumption. Although unlikely, I guess we should remember the possibility that something other than higher intelligence could be responsible for this change and advantage in competition with humans.

Either way, the edge would have been achieved. This new species would be forced by the laws of nature to eliminate its main source of competition for all the necessary and valuable resources. If this set of resources includes resources necessary to humanity, it seems inevitable that humankind would take a back seat. If not, this new species would not have evolved in the first place.

If there is a substantial intersection of these two sets of resources, humanity would definitely be significantly reduced. I guess the possibility exists that humans could be reduced to the state of our four other great-ape brothers: chimpanzees, bonobos, orangutans, and gorillas. The common aspect of all these species? They are all descendants of some common species that was an ape, designing the eventual human beings.

Of course, this entire idea hinges on a version of the Gaia hypothesis proposed by James Lovelock. The idea that Earth could function as a single, self-regulating organism could imply that if some species threatened the planet's entire biodiversity (assuming biodiversity is a positive and beneficial quality for the entire planet), it would be regulated in some form or another. It might be reduced or completely eliminated, similar to the previous mass extinctions that have occurred many times over Earth's history.

Of course, these are only thoughts and speculations. The story will undoubtedly be very interesting.