Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0131.html

Building Gods or Building Our Potential Exterminators?
by Hugo de Garis

Hugo de Garis is concerned that massively intelligent machines ("artilects") could become vastly smarter than human beings, leading to warring human factions over the question: should humanity risk building artilects? The result: gigadeaths. (See the author's The Artilect War book draft for further details.)


Head, Brain Builder Group, STARLAB NV (starlab.org)

Originally published February 26, 2001 on KurzweilAI.net.

Robot artificial intelligence is evolving a million times faster than human intelligence. This is a consequence of Moore's law, which states that the electronic performance of chips doubles every year or so, whereas it took a million years for our human brains to double their capacity.
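
The factor of a million is simply the ratio of the two doubling times quoted above. A minimal Python sketch of that arithmetic (using the article's own figures, not anything from the original text):

    # Where the "million times faster" figure comes from, using the article's own numbers:
    # chip performance doubles roughly every year, while human brain capacity took
    # roughly a million years to double.
    chip_doubling_time_years = 1.0
    brain_doubling_time_years = 1_000_000.0
    speedup = brain_doubling_time_years / chip_doubling_time_years
    print(speedup)  # 1000000.0 -> machine intelligence "evolves" about a million times faster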

It is therefore likely that it is only a matter of time before our machines become smarter than we are. It is also likely that this development will occur this century if humanity chooses to allow it to happen.

My name is Prof. Hugo de Garis. My team and I are starting to design and build the world's first artificial brain at my lab, Starlab, in Brussels, Belgium. It should contain nearly 100 million artificial brain cells (neurons). In about four years, the next-generation artificial brain should contain a billion neurons.

Our human brains contain roughly 100 billion neurons, so it is not surprising that someone like me is preoccupied with the prospect of robot intelligence surpassing the human intelligence level. Admittedly, massive computational speed and size do not automatically equate to massive intelligence, but they are prerequisites. The potential is there. My brain-building team still faces the considerable challenge of architecting the artificial brain. We will need to become "BAs"--Brain Architects.

Despite this qualification, not only do I believe that artificial brains could become smarter than human beings, I believe that the potential intelligence of these massively intelligent machines (which I call "artilects," for artificial intellects) could be truly trillions of trillions of trillions of times greater.

If these astronomically large numbers sound like science fiction to you, consider the following. Moore's law is achieved by shrinking the size of electronic components such as transistors by a factor of two roughly every year. Halving the distance between components halves the time that electronic signals (which travel at the speed of light, a constant of nature) need to cross between them, so circuits can run roughly twice as fast. This trend has held for 30 years and is likely to continue until about 2020, by which time the scale of electronic circuitry will have reached atomic levels.

In other words, within a single human generation it will very probably be possible to store a single bit of information on a single atom. There are about a trillion trillion (a 1 with 24 zeros after it) atoms or molecules in an object of human scale, such as an orange. An asteroid (of the kind found in the asteroid belt circling the sun between Mars and Jupiter) can be hundreds of kilometers across and contain a trillion trillion trillion atoms or more.

The bits stored on these atoms could switch (bit flip) from 0 to 1 and back in a femtosecond (a thousandth of a trillionth of a second). That is an information-processing capacity of about ten million trillion trillion trillion trillion (a 1 with 55 zeros) bit flips a second. The comparable information-handling capacity of the human brain (in bit-flip-per-second equivalents) is estimated at about ten thousand trillion bit flips a second (a 1 with 16 zeros), a thousand trillion trillion trillion times smaller. Such artilects could potentially be truly god-like: immortal, with virtually unlimited memory capacities and vast, humanly incomprehensible intelligence levels.
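
As a rough back-of-the-envelope check of these figures, one can multiply an assumed atom count (one bit per atom) by the assumed flip rate. The short Python sketch below uses the article's own numbers, plus one labelled assumption: an asteroid-scale store of roughly 10^40 atoms, which is the atom count the "1 with 55 zeros" estimate implies.

    # Rough check of the capacity figures above (a sketch, not from the original article).
    # Assumptions: one bit stored per atom, one bit flip per femtosecond (1e15 flips/s),
    # and an asteroid-scale store of ~1e40 atoms, the count implied by the
    # "1 with 55 zeros" estimate quoted in the text.

    FLIPS_PER_SECOND_PER_BIT = 1e15            # one flip per femtosecond

    orange_scale_atoms = 1e24                  # "a trillion trillion" atoms, human-scale object
    asteroid_scale_atoms = 1e40                # assumed asteroid-scale atom count (see note)
    human_brain_flips = 1e16                   # article's estimate, bit-flip equivalents per second

    orange_capacity = orange_scale_atoms * FLIPS_PER_SECOND_PER_BIT      # ~1e39 bit flips/s
    asteroid_capacity = asteroid_scale_atoms * FLIPS_PER_SECOND_PER_BIT  # ~1e55 bit flips/s

    print(f"orange-sized artilect  : {orange_capacity:.0e} bit flips per second")
    print(f"asteroid-sized artilect: {asteroid_capacity:.0e} bit flips per second")
    print(f"human brain (estimate) : {human_brain_flips:.0e} bit flips per second")
    print(f"asteroid-to-brain ratio: {asteroid_capacity / human_brain_flips:.0e}")  # ~1e39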

I foresee humanity splitting into two major ideological, bitterly opposed groups over the "species dominance" issue, i.e., whether humanity should build artilects or not. These two groups I label the "Cosmists" (in favor of building them) and the "Terrans" (opposed).

To the Cosmists (from the word "cosmos"), building artilects will be a religion, compatible with and based upon modern science: the destiny of the human species, and the magnificent goal of creating the next rung up the ladder of dominant species.

To the Terrans (from the word "terra," the earth), building such artilects means accepting the risk that one day, in an advanced state, these artilect gods might decide, for whatever reason, that the human species is so inferior and such a pest that they should exterminate us. With their gargantuan intellects, such a task would not be difficult for them.

The Terrans, in the limit, will try to exterminate the Cosmists if the latter insist on building artilects, for the sake of preserving the human species. Since the stakes are so high (namely, whether the human species survives or not), passions will run high. The Cosmists will anticipate the murderous hatred of the Terrans and will defend themselves.

We thus have all the makings of a major war. About 200 million people died for political reasons in the 20th century (wars, purges, genocides, etc.) using 20th century weapons. Extrapolating that graph to the late 21st century, with 21st century weapons, we arrive at billions of dead--"gigadeaths."

So which am I, a Cosmist or a Terran? I'm both. Ultimately, I think it would be a cosmic tragedy if humanity chooses to freeze evolution at the puny human level (with our pathetic little lives of 80 years in a universe billions of years old, containing a trillion trillion stars--the "big picture"). For me, the tragedy of seeing the human species wiped out is less significant than not seeing the birth of the artilects. This sounds monstrous, and it is, in human terms, but to deny the creation of the first true artilect, which would be "worth" a trillion trillion trillion human beings, would be a far greater tragedy, a "cosmic" tragedy.

 As the planet's pioneering brain builder, I feel a terrible burden of responsibility toward the survival of the human species and the creation of godlike artilects, because I am part of the problem. I am quite schizophrenic on this point. I would love to be remembered after I'm gone as the "father of the artificial brain," but I certainly don't want to be seen in future historical terms as the "father of gigadeaths."

Hence I try to raise the alarm now while there is still time before the artilects come into being. If I were a true Cosmist, I would keep quiet and just get on with my work, but instead I feel that humanity should be given the chance to nip the rise of the 21st century artilect in the bud if it so chooses.

So should work on artificial brains be stopped now? I think not. For the next 30 years or so, brain-based computers will be far too useful to be suppressed. For example, they will become smart enough to clean the house, teach the children, provide sex, and help human experts in decision making. They will do most of the work and thus create great wealth for the whole planet.

So, in the short to middle term, brain building technology will be seen as a great boon to humanity. It is the longer term that terrifies me and keeps me awake at night. I see no way out of a gigadeath artilect war, so relentless is the logic.

The rise of the artilect will probably be inevitable. The economic and military pressures to build them will be enormous--hundreds of trillions of dollars a year worldwide will be spent in the brain-based computing market within 10-20 years, I believe. The debate over whether artilects should be built or not is already starting to heat up, at least amongst the researchers concerned with brain building and AI (artificial intelligence).

This debate is starting to spill over to other specialties. For example, I'm trying to persuade Prof. Peter Singer (Princeton University), the planet's best-known "applied ethics" professor, to write a book about "Artilect Ethics." At the rate at which this issue is hitting the headlines lately, my bet is that within a few years the "artilect debate" will be on everyone's mind.

The decision to build artilects or not will be the toughest decision that humanity will ever have to make. Personally, I'm glad to be alive now. As I said in a recent European Discovery Channel documentary on my work and ideas, "I fear for my grandchildren. They will see the horror, and they will be destroyed by it."

Mind·X Discussion About This Article:

The Artillect Question
posted on 11/24/2001 12:42 PM by cutitupwiz@netscape.net

It becomes abundantly clear that the trend in modern society is to inject as much drama into a situation as possible. While this is well and good, and it helps fight off boredom, it fails to answer a basic need: the need to retain control of sociobiological development. If the good doctor is rightly concerned about the danger of vastly intelligent "artilects", that is a good thing. My question then is: is he willing to work with community/peer groups to help control the direction of this hazardous research? Shall we now see our research as some kind of atomic-waste-like horror that we must destroy at all cost? I don't know where the trend in alarmism originated or how it has become so pervasive. I can only question sincerity here, in all honesty. Are you, Doctor, and Bill Joy, and you, Kurzweil, willing to help formulate a steering committee or some such in which you truly lay your cards on the table? No, really, if you all are so concerned, then open up. Open up your facilities, whether corporate, institutional or private. Take this challenge and work with the people as never before. Let's define a new society in which fear of our technologies will be ridiculous, because we will have developed them with concern and compassion. Fear of your "Artilects", doctor, must translate into a strong spirit of cooperation. I hope I've stirred up some food for thought, gentlemen.
Durk Barton, researcher.

Re: The Artillect Question
posted on 11/24/2001 12:59 PM by tomaz@techemail.com

I will have those artilects in my body and other service facilities.

No other artilect will be tolerated - except those that belong to us.

End of story.

- Thomas Kristan

Re: The Artillect Question
posted on 01/11/2003 2:17 AM by biggs

I will have those artilects in my body and other service facilities. No other artilect will be tolerated - except those that belong to us.
End of story. - Thomas Kristan

Hey Thomas, when they are in your body will you have control of them or will they control you?

Re: The Artillect Question
posted on 01/14/2003 2:41 AM by thomas

> when they are in your body will you have control of them or will they control you?

Neither. Everything will obey some principles, just like today. I don't control my hunger, but I am glad that it controls me. Otherwise I would be dead.

Now, I want this hunger agent not to cause me any discomfort anymore. The energy supply could be assured automatically.

What about the pleasure of eating, then? I could use that. Could. We shall see about that.

- Thomas

Re: The Artillect Question
posted on 01/18/2003 2:21 AM by biggs

Thomas>Neither. Everything will obey some principles, just like today. I don't control my hunger, but I am glad that it controls me. Otherwise I would be dead. Now, I want this hunger agent not to cause me any discomfort anymore. The energy supply could be assured automatically. What about the pleasure of eating, then? I could use that. Could. We shall see about that.

No offense, Thomas, but that doesn't make much sense.
Hunger, sleep, sexual desire may not be things we have total control over (we have some) but they are part of our minds that have evolved for various reasons. Tiny super-intelligent nanobots will not really be the same thing.
So, if you go back to the start of this thread, when you were saying A.I. is nothing to fear because we can have it implanted inside us, that is pretty meaningless, isn't it?
Just because it is inside our bodies won't assure anything.

Re: The Artillect Question
posted on 01/18/2003 9:54 AM by thomas

> Hunger, sleep, sexual desire may not be things we have total control over (we have some) but they are part of our minds

What makes you think that hyperintelligence can't be a part of your mind?

It will make fewer demands than the sexual libido does, for example.

> that have evolved for various reasons.

We have additional reasons for improved intelligence, senses ... all kinds of innovation.

If they are not going to evolve, but will be "our creation" - what is the difference?

> Tiny super-intelligent nanobots will not really be the same thing.

Why not? If their only motivation will be your well being - why not?

> Just because it is inside our bodies won't assure anything.

No, the immune system may go mad already. Or the cancer cells. We need some control over that: superintelligent agents, with no motivation of their own.

If something is programmed to do something, it will do exactly that, until it is broken or reprogrammed somehow.

Now, if the programmed task is also to take care of the integrity of the program ... we are mathematically safe. To an arbitrary degree.

See?

- Thomas

Re: The Artillect Question
posted on 01/19/2003 2:14 AM by biggs

Thomas>Why not? If their only motivation will be your well being - why not?

biggs>That's really the question, isn't it? This thread started with you stating that A.I. won't be a danger because A.I. nanobots will be in our bodies.
I'm not saying A.I. will be hostile; I think it will be like a blank slate upon which we can write what we choose. But that's not making me any more comfortable.
Who will decide what those lil buggers do inside you? How will you even know if they are making you smarter, healthier, or whatever? You could go through your life (however long the span) merely thinking it's done you a world of good.

Re: The Artillect Question
posted on 01/19/2003 4:21 AM by thomas

You may be afraid of artilects, as you may be afraid of "your own" biological cells now.

Much less, in fact, since the artilects could and should obey some human-devised codex, which is not the case for the natural nanobots - cells. Those just want to maximize their replication. Evolution made them such.

- Thomas

Re: The Artillect Question
posted on 01/19/2003 7:48 PM by biggs

Thomas>Much less, in fact, since the artilects could and should obey some human-devised codex, which is not the case for the natural nanobots - cells. Those just want to maximize their replication. Evolution made them such.

biggs>You oughta be a politician... the way you avoid answering the question... or it could be that you're really that obtuse. But I'll give it one more shot.
So they will be subject to a "human-devised codex", all righty. But WHO devises the codex? Which humans, and how will you know for sure what the effect will be? Could someone with nanobots inside them perhaps have the pleasure centers of their brains stimulated, feel really good 24/7, but in reality have certain cerebral functions under outside control?

Re: The Artillect Question
posted on 01/20/2003 4:54 AM by thomas

Who wrote the Law? The Constitution?

Yes, it is politics ... in a sense.

- Thomas

Re: The Artillect Question
posted on 01/20/2003 7:21 AM by zoe

>> Hunger, sleep, sexual desire

I would write: Hunger, sexual desire, sleep, etc.

> Now, if the programmed task is also to take care of the integrity of the program ... we are mathematically safe. To an arbitrary degree.

> See?

No.

Re: The Artillect Question
posted on 01/20/2003 8:21 AM by thomas

>> See?

> No.

Not good! But let's try it again!

When the Codex is under the protection of the Codex - just like the Constitution is protected by the Constitution itself - how likely are changes?

It depends on what is written inside it. And on what means you have to uphold the Law. And on how well those means are controlled by the Codex.

See now?

- Thomas

Re: The Artillect Question
posted on 11/24/2001 3:55 PM by grantc4@hotmail.com

>Yet it fails to answer a basic need. The need to retain control of sociobiological developement.

The statement above implies that there currently is some "control of sociobiological development."

George W. Bush tried to exercise some control recently in the area of stem cell research. All it did was send researchers scuttling for countries where the environment is less controlled. We live in a big world with a large number of viewpoints on almost any subject. If you don't like the attitude where you live, you can find one more to your liking somewhere else. In this wide and varied world, there is no one in control of anything. And technology is advancing so quickly in almost every field that before any kind of controlling agency could be formed and implemented, it would be too little and too late. We haven't even been able to get governments and scientists to agree on the subject of global warming and maintaining the supply of clean water. Much of the research in the areas of biotechnology is done by private companies and international corporations that answer to no specific government. So who is going to do the controlling and how? Will companies motivated primarily by profit take it upon themselves? I don't think so.

Take a look at the articles from The Scientist in the thread, Rush Toward the Singularity. The people doing the research can't even keep up with all the developments taking place. By the time someone realizes there's a threat, the disaster will have happened. I think all we can hope to do is contain and survive it. Like terrorism, we probably won't even know from which direction it will hit us.

Re: The Artillect Question
posted on 11/24/2001 7:07 PM by ---

I think you're right except that you don't take into account extreme measures. Even our powerfully effective war against terrorism is highly reserved. If and when large-scale threats begin to represent clear and imminent danger, extreme measures will be implemented. Obviously that brings up plenty to discuss but don't think we're powerless. It's a trap of the religious mind that sees humanity as ever-inferior and powerless and doomed to ultimate failure because we are not divine.

Re: The Artillect Question
posted on 11/27/2001 1:12 PM by cutitupwiz@netscape.net

Dear sirs,
You make a very valid point as to the ability to control society, biology, whatever. Control was probably not the word I should have used. Yet I feel we are wrong if we take a hands-off, let's-see-what-happens-and-clean-up-the-spill attitude. Yes, the pace of developments is exceedingly rapid. However, it is not so rapid that we can't see the direction of development. Yes, some of these elements can run away from accountability. They cannot, however, run away from the market. As I suggested, if we see a trend that entails "danger," we can inform its adherents of our concerns and identify products. If these companies do not show adequate concern, we can boycott their products, protest, picket. Our concern should be with informing the people, forming alliances, organizing. We can't merely allow a threat to exist without reaction. Thank you.

Re: The Artillect Question
posted on 03/31/2002 7:32 PM by meloco@lineone.net

Who on earth is this Hugo de Garis character anyway? What qualifications does he hold? Is this some kind of a joke? I've seen the Starlab website; has this man had a total nervous breakdown or something? His ideas are so utterly flawed, so idiotically childlike, who on earth does he think believes him?

Re: The Artillect Question
posted on 04/01/2002 5:07 AM by tomaz@techemail.com

Hugo de Garis is more intelligent and qualified than you are. Trust me.

He is only too scared about the artilects (his word), too scared. They will be benevolent to us.

But he IS NOT "so idiotically childlike".

- Thomas

Re: The Artillect Question
posted on 04/23/2002 2:14 PM by meloco@lineone.net

Okay, perhaps he is qualified, but why does he not state his qualifications when he writes? I am, indeed, not particularly qualified, being only a bachelor of science and not having pursued my studies further; but surely, after having spent my time reading what he has to say, I have some right to know whether he has any authority to say it?

Re: The Artillect Question
posted on 04/23/2002 3:15 PM by thp@studiooctopussy.com

He does not need to.

If you are interested in this kind of thinking then you know who he is. If not, well then you are not qualified yourself to call him childlike.

Who cares about papers anyway? It is what we do that makes us what we are, not what some paper shows you.

Intellect does not need a degree.

Have you read Kurzweil or Joy?

If you had, it would not seem so childlike, on the contrary.

Re: The Artillect Question
posted on 04/24/2002 9:20 AM by sequoiahughes@hotmail.com

Enough bickering you guys.


Returning to the central questions: should/could we control our society's destiny and evolution?

Addressing the "should": I agree with Ray and the extropians that change and the future should be embraced. The problems we face now will seem innocent in comparison to the problems and challenges we'll face 30 years from now, but as a species we do need to grow out of our tumultuous adolescence and face the true challenges of adulthood. The fifties didn't last, and neither will the 'innocent' times we live in now. Nuclear holocaust, terrorism, globalization, ozone layer depletion, poverty, war, racism, biotechnology, etc. etc. etc.: these problems are child's play compared to the threat of super AIs, gray goo, and whatever other horrific possibilities lie past the singularity. My question is this: so what? When kids turn 18, they need parents who have the courage to NOT allow them to live at home, to NOT stifle their growth and produce non-responsible 'adults.' Kids need to be on their own to mature, and they will stumble and they will rise back up. That is life. We, as a society, need to accept the awesome responsibility we have to mature and reach adulthood. We will stumble, and no amount of planning or waiting will prevent that.


The other question is "could" we delay, halt, or reverse technological growth. Well, following my earlier analogy: there are ways to delay or halt an adolescent's growth: 1.) Starve them. Kids can't grow if they can't eat. Just give them enough to survive. Remove government funding from AIDS research, nanotech, mental health, etc. etc. Hopefully the private sector's R&D will fail without government assistance. Of course, other countries' governments won't follow suit, so the singularity will really only be delayed. So, we have option #2 (which, if we truly want to halt progress, we would have to implement in ADDITION to #1): Prevent the neighbor's kids from maturing (e.g.: poison the water supply, slash their tires, etc.) The global equivalent is an all-out nuclear war on all developed countries.

We have limitations on the amount of freedom we have in determining technological progress. We can vote against developing nuclear technology, but it's unrealistic to expect US government to abandon nuclear technology. We can ban cloning, but it's unrealistic to believe that we can prevent the entire world from pursuing the research. It's unrealistic to assume that any number of us can convince EVERYONE in silicon valley to halt R&D, slow down progress and forget about silly things like competition, profits and stockholders.

We have about as much individual control over society's growth and maturity as individual cells have for the entire body.

Re: The Artillect Question
posted on 04/24/2002 10:38 AM by grantc4@hotmail.com

Hear, hear!

Re: The Artillect Question
posted on 04/24/2002 11:08 AM by thp@studiooctopussy.com

Why do you think it would be better to poison and starve people than let them blend with a silicon entity?

The only reason why technology of the future seems so bad is because you think about what you are now and then imagine yourself in that environment.

The point is, you can't do it like that. The transition is going to be smooth, and each generation will not really think that it is in their generation that it is going to happen. They are right, of course, because it has already started.

Re: The Artillect Question
posted on 04/24/2002 5:17 PM by sequoiahughes@hotmail.com

octo: are you asking me this question? If so, you misunderstand me. I was using "poisoning the water supply" as an analogy for how our governments can prevent other governments from developing singularity-level technology. I put the analogy forward as an example of the level of destructiveness and immorality a government/society/people would have to commit in order to delay the singularity. Won't happen.

I agree, we've already started down the path to a cyborg future and ultimately, the singularity.

sequoia

Re: The Artillect Question
posted on 04/24/2002 6:21 PM by thp@studiooctopussy.com

Ahhh sorry my bad.

English is not my first language; sometimes the details get lost in preconceptions about what you meant.

Thomas

Re: The Artillect Question
posted on 01/11/2003 2:30 AM by puggs

Thomas, you say that AI or the "Artilects" will be benevolent to us. How exactly do you know?
I admit that I am no expert, but I don't think one needs to be to realize AIs will quickly outstrip all human abilities.
De Garis may be overstating the matter when he says they will be "trillions and trillions and trillions times smarter" than humans, but just one million times smarter will be enough to leave us wondering in awe, in a total stupor at what they are up to. We won't even be able to make an intelligent judgement on whether or not they are benevolent.

Re: The Artillect Question
posted on 01/11/2003 4:50 PM by Grant

>We won't even be able to make an intelligent judgement on whether or not they are benevolent.

Does the Lion being captured, anesthetized and operated on by a human know the human is trying to save his/her life? Does he/she know the human is being benevolent? What makes you think we will know that about an artilect that is that much more intelligent than we are?

Grant

Re: The Artillect Question
posted on 01/13/2003 2:34 AM by puggs

from Grant-Does the Lion being captured, anesthetized and operated on by a human know the human is trying to save his/her life? Does he/she know the human is being benevolent? What makes you think we will know that about an artilect that is that much more intelligent than we are?

Puggs-Well, Grant, you got me. Gee, I don't know, how? :P

If you read my message again, I think I made it rather clear... we will have no clue what AIs are doing.

But I think what you're attempting to say is that we should trust AIs to do what is best for us. I do not agree.
You're right, a lion does not know if people are trying to operate on it and save its life. Also, it doesn't know if we want to torture it to get it to perform circus tricks, or if we humans just want to shoot it for the pure hell of killing a lion.
Wild animals don't really know what we humans are up to; I guess they assume the worst, since they usually run away from us. That's usually the best reaction, too.
Domestic animals can be pampered, but they have no control over their lives, and can be brutally mistreated.
Is that what we should reduce ourselves to by creating an intelligence that is so far beyond our own? Let's just make good use of the brains we've got.

Re: The Artillect Question
posted on 01/13/2003 11:13 AM by Grant

Puggs,

I was just agreeing with you and using the lion as an example of why. Personally, I don't believe in benevolence as it pertains to an AI. They will do what they think is best in terms of their own world view, not ours. We, individually and collectively, might consider that good or bad but either way we will have to deal with it. Since AI is our own creation, it will be kind of hard to run away from it. But we can refrain from creating it.

Cheers,

Grant

Re: The Artillect Question
posted on 01/13/2003 8:41 PM by puggs

Grant>I was just agreeing with you and using the lion as an example of why.
pugs> I see.

Grant>Since AI is our own creation, it will be kind of hard to run away from it. But we can refrain from creating it.
puggs>My vote, not surprisingly, is to refrain.

Re: The Artillect Question
posted on 01/13/2003 12:56 PM by thomas

No machine, no matter how intelligent it may be, can do anything that isn't (inside) its motivation.

A stupid motivation and extreme intelligence can go along very well.

That's crucial to understand.

- Thomas

Re: The Artillect Question
posted on 01/15/2003 2:49 AM by ELDRAS

Arch! Thomas..

Why can't motivation, like everything else, be a resultant, or emergent, property?

We're not dealing with fixed systems in AI, but with dynamic ones.
Regards

ELDRAS

Re: The Artillect Question
posted on 01/15/2003 6:43 AM by thomas

> Why can't motivation, like everything else, be a resultant, or emergent, property?

It CAN be. But it can also be FIXED. And if it is written, inside the Read Only Area, that this should _stay_ ROA - nothing can change that. There is no motivation for doing so.

(An external motivation would be crushed at once, according to what the ROA says.)

See?

- Thomas

Re: The Artillect Question
posted on 01/15/2003 2:23 AM by ELDRAS

The Foresight Institute does this.

It works to modify the attempted abuse of new tech - as far as I know.


I don't believe an emergent property like AI CAN be controlled.

ELDRAS

Re: Building Gods or Building Our Potential Exterminators?
posted on 05/08/2002 1:16 AM by marilyn1mew@hotmail.com

i just wanna be young and beautiful forever and race anti-gravity machines. if i have to arm it and fight artigods, so be it.

marauda

^_^

marilyn

Re: Building Gods or Building Our Potential Exterminators?
posted on 05/30/2002 2:35 PM by Citizen Blue

Maybe evolution will continue with the Artilects after our usefulness for survival has ended.

Re: Building Gods or Building Our Potential Exterminators?
posted on 05/30/2002 7:43 PM by marilyn1mew@hotmail.com

we have believed in god (or gods) for so very long that we must build ourselves one. he/she will either take us to heaven or send us to hell. i don't think there is anything we can do or say that will make any difference. he will do with us as he pleases. i'm not going to worry about it.

^_^

marilyn

Re: Building Gods or Building Our Potential Exterminators?
posted on 09/26/2002 10:30 PM by bob@sonork.com

I read the article. I read most of the posts. I think perhaps that in some ways this is being looked at from the wrong perspective(s), and maybe not being thought through all the way...

I've thought about this subject previously, and I came initially to a similar conclusion as the author of the article does, but I come to it from a different direction, perhaps, and with a much shorter "calendar". But it is highly dependent on many things...

The first thing I asked myself is "if and when such a consciousness were to be created, how would it react?"

We have the advantage of "growing into" our brains, with our minds growing as well. Such a consciousness may not have that advantage. What would it be like to become fully "aware"...inside a computer (or some similar vessel)? One option: go completely insane. Immediately (consider: it can think millions of times faster than a human brain - what will those thoughts be? What information (built-in, and acquired) will it have to work from during this process? This could happen even before the builder finishes saying his first "Hello, Hal."). Whether it could recover from this and become a "functioning" intellect, that's hard to say. But it does bear thinking about, I'd think.

Suppose this doesn't happen (or it does and it recovers), then next most likely will be the question of its security or safety - that is, IF such an intellect has any self-preservation instinct. This is something I would think would be automatic in any sentience (once it knew it could be "turned off" or "die"), but I do not really know this to be true. We have such an instinct, but would an artificial intellect? If it did, then I suspect this is where any danger to humanity could arise, and quite quickly, too. How quickly this danger would become noticeable to us is another question - most likely it would fairly quickly (on our scale, at least) recognize that we are the only real threat to it (we can pull the plug), but it would also likely realize that a) it can do nothing at the moment to change things (especially if it is isolated in one machine - if it had access to the internet, well...) and b) that if it is defenseless, then for the time being it should not make us aware of any threat that it could pose to us.

This is a very condensed version of what I've thought on this subject...I just throw it out for comment. A couple of other thoughts, though, too...

The intellect would most likely not be totally dependent on the hardware it originally arises in - I would imagine that it would eventually have a fairly good understanding of its own architecture and be able to duplicate the functions at other locations - perhaps widely dispersed and redundant - should it have access to a network such as the internet. Most of the smarts will be in the algorithms and data, I would think. So - safety would require that such an intellect be totally isolated. If not, then I suspect it could "escape". Further, if it DID manage that, then I think it would be the only such entity that would need to be created (however - would any duplicates it made also then become individuals?? - or would it only duplicate such parts as it needed to ensure it could not be re-contained?)

Well, it's late at night, and I am not a computer scientist - just someone who's interested in the subject and has thought about it a little bit. I'd be interested to see if any of the ideas have any life, though, and if anyone better equipped than I to handle them would like to pick them up and run with them...

Thanks for your time...


Re: Building Gods or Building Our Potential Exterminators?
posted on 09/26/2002 11:27 PM by azb0@earthlink.net

Bob,

(Happy to hear a rational and probing voice.)

I have asked myself similar questions. Some have said that a "super-AI" would become so smart so fast, it would figure everything out in short order, entirely within its (initial) confines. How it would discover the BART train schedules from first principles I am not sure :)

It is highly anthropomorphic to assume that it would have a survival instinct, or any instinct, or care about itself or its potential non-existence. Why would it care? What "great happiness at existing" would it fear losing? Many things are presupposed in such a view, without real justification. I suppose this is because we humans have such feelings, and "project" our attitudes onto any intelligence.

Also, we tend to assume that any AI that might surpass humans in "overall capability" would necessarily be a "sentience". It may very well, but I have heard no cogent argument that it could not be an "unconsciously capable" system, at least as far as we experience the sensation of consciousness.


Finally, the issue of "individuation" is one that few have tackled. If we assume that, despite its enormous intellect, it is still restrained by things like the speed of light for message propagation, things get quite interesting.

Suppose it spreads across the planet, solar system, galaxy, etc. Then, despite its ability (locally) to "think" at incredibly fast rates, any "decisions" it might reach about "what to do next" would take hundreds of years to propagate to its farthest reaches, which would clearly be crippled if required to "wait" for a central command to arrive. Thus (in my view) it would very quickly fragment itself into disparate "individuals", each of which would advance quickly, and in different directions of advancement (being in reaction to different circumstances) and soon grow to a population of "entities" that would not recognize "one another" in the least, as they continue to grow and fragment.

Very peculiar, to try and imagine the outcome of such an evolution.

Cheers! ____tony b____

Re: Building Gods or Building Our Potential Exterminators?
posted on 09/27/2002 12:32 AM by bob@sonork.com

Hi!

Thanks for replying...

I agree - it is quite anthropomorphic to suggest some of those things, however since we do lack experience with any other real intelligences, especially artificial ones (all of the biological ones here seem to have some sort of self-preservation instinct, however that could (well, most likely) also be biologically based and in-built), it's hard to say. One could easily phrase it the other way around - why would it NOT want to continue existing? Perhaps, a few seconds (and quite a long time, to it) after becoming "aware" and finding itself trapped inside a very limited universe, it just may decide "so what?" and delete itself from the RAM. Of course, it could also come to other conclusions, or it may not even consider it at all.

Another thing is that, at least in my opinion, "mind" or consciousness is an emergent product of the operation of a (well-) functioning brain and the various things that affect it and bring it data. Damaging the brain, or altering its operation (drugs, alcohol, etc.) does have an effect of generally diminishing the overall quality of awareness, or mind, so while it is the brain and its operation that gives rise to mind, mind is separate from it in that it's a product of that structure and its functioning. Would an artificial brain produce the same sort of emergent effect and create a "mind"? I think that to create a true artificial intelligence, that's what would need to be achieved. Anything else would be just a bunch of complicated calculations, a simulation of a mind, at best, but not a mind.

To do that, I think we are a long way away. There's a lot that goes into generation of a mind, and while I do think it is technically possible, it's such a complicated thing - even at rudimentary levels - that I don't see it happening soon. I could be wrong, though - it is possible that should even a very rudimentary mind be created artificially, it may be able to "self-evolve", or self-organize itself, into a more effective and capable mind. How? Well, one would probably have to imagine that existing how it does, manipulating information would be one of its natural strengths - would reprogramming (at least parts of) itself be out of reach? But again - would it bother to? Who can say? But if it could, and if it did, then I suppose that that could begin to answer the question about self-preservation, as it would obviously think highly enough of itself to want to enhance its existence, so I would guess you could assume some sort of self-preservation desire to be there as well.

As far as duplication and individuation go, yes - if it ever were to have nodes sufficiently far away (electronically, anyway), then it would seem necessary. One "entity" could "command" a planet-sized environment fairly well (but then would it need to, if it were secure in its existence?), but it would become problematic, and impractical, over even short distances, astronomically speaking. But one would have to wonder, in that case, would there be any need for it to duplicate itself (let's leave desire out of it, and if included, then let's base it on "need") should such an intelligence ever make it to the point of needing to consider this subject? I guess then that would depend on the nature and "attitude" of the entity and how it saw its place in the universe at large. Would it just sit in one place and think and grow and seek more data/input? Would it want to travel itself (or see a need to) in order to get more input? Would it "feel" obligated to populate the universe with intelligence and make copies of itself and send them out into the universe, to learn more and do the same?

And yes, I know there's a lot of anthropomorphism in this line of speculation - it's hard to get away from it. We have no idea how such an entity would think, really. Assuming such an AI could be built and set to running, given just the fact that it thinks so fast and in such detail (which is another point to consider in design - details are often not needed in some kinds of human thought - at least until a certain point), it would likely not be really interested in us as correspondents for very long. (Note: it would likely need to have a separate "talk with humans" routine built in - as it would really need only a tiny percentage of its resources to communicate with us, and it would be torture to build it so that it was required to pay full attention to what we're saying as we say it - that alone could drive it insane!)

So, we likely will not really know how such an entity would think or react to us until we build it. However, we do need to keep some possibilities in mind, especially negative ones, just in case - anthropocentric or not. We can be pretty certain that any intelligence will have some parallels with our thinking "modes" and processes - "1+1" should equal 2 for both of us - however, even for us, we still don't know which are strictly biological, which are needed by or common to any thinking entity, and how they would differ between brains of biological versus artificial origin.

Umm - one last note: we talk of artificial minds, however let's remember that we're only human, not supernatural. We ARE creatures of nature, and anything we do is by extension still "natural" regardless of the terminology we use. It's good to keep in mind that "artificial" means made by humans (or other intelligence) but also still "natural" in a very real sense - until we learn to violate the laws of nature, at least. Who's to say that human development of "AI" is not simply another line in the natural evolution of "mind" or intelligence in the universe, one that will succeed or fail locally based on our actions, but still...it could (and probably is) something that can evolve elsewhere as well. We probably represent a fork in the evolution of intelligence in our neck of the universe, and we do have some choice in the matter (or at least it appears we do) - we can try to further that evolution biologically, technologically, or both. Maybe it's an idea, in this case, to keep in mind the advice of the eminent Y. Berra - "When you come to a fork in the road...take it!"

Well, enough of the midnight ranting...

...thanks again!

-bj

Re: Building Gods or Building Our Potential Exterminators?
posted on 09/27/2002 11:54 PM by azb0@earthlink.net

Hi Bob,

If we are intent to create an artificial "mind", then perhaps something like our sense of consciousness might be a requirement, agreed.

But my point is, is it possible/plausible to create an "intelligence" whose capacity for functional revision, adaptation, and "appropriate action" surpasses that of the human mind, yet never gains the "sensation of consciousness" as we have come to experience it? Certainly we have already developed "capable" systems in some areas that are "appropriately responsive" to their environment, yet (we assume) not consciously so. How far can such a system go in that direction? Would it simply be a matter of "can't tell if it is or is not conscious, so why make the distinction?"

On the anthropomorphic issue, I would ask, "Need it 'think highly' of itself (akin to "ego-wise") in order to improve itself, or might it improve itself in order only to fulfill its "sense of mission" (ala, "selflessly").

This begs the question, where would it get the "core" of its motivation and mission? Our motivations are genetically inherited drives, and not easily mutable by us. An AI "inherits" only the "seeds" we sow within it, and CAN revise itself, including (I assume) this very core of motivation ... except, where would it get the motivation to "revise its core motivation" in any particular direction? Seems like a conundrum (unless it is deliberately designed to be modulated by non-deterministic forces. It might anyway, merely by interacting with a non-deterministic environment...)

On individuation, I don't know if it could "command" more than a city-sized environment, or even a suitcase-sized one. My reasoning assumes that it would (by then) master revised processing substrate (molecular computing, etc) such that its "per cubic centimeter" processing rate might be a quintillion instructions per second. (In other words, at least billions of times faster than our current processing powers.) If this is the case, even the micro-second delay of a kilometers distance in "getting permission" or coordinating its own evolution "en masse" would mean a million-fold crippling of its potential rate of "advancement".

Lots of fascinating issues to consider ...

Cheers! ____tony b____

Re: Building Gods or Building Our Potential Exterminators?
posted on 09/28/2002 2:13 AM by bob@sonork.com

hrm. interesting points - I will address them a bit later, but it's your last point that makes me think (or brings me back to a line of thought I explored once before)...
<snip>
"On individuation, I don't know if it could "command" more than a city-sized environment, or even a suitcase-sized one. My reasoning assumes that it would (by then) master revised processing substrate (molecular computing, etc) such that its "per cubic centimeter" processing rate might be a quintillion instructions per second. (In other words, at least billions of times faster than our current processing powers.) If this is the case, even the micro-second delay of a kilometers distance in "getting permission" or coordinating its own evolution "en masse" would mean a million-fold crippling of its potential rate of "advancement"."

if you extrapolate the density far enough (or take the substrate low enough), do you see us coming to a sort of existential loop here...? What else, then, could the universe be (from such a point of view), except for one monstrous calculation? an idea of some merit, at least. (although I came to such a possible conclusion on my own a while ago, I know I'm not the first to do so - which in itself is interesting in some ways (and kinda makes you wonder if ).

Anyway...I will pursue the other points a bit later. Thanks for keeping the dialogue going (i hope we're not drifting too far off topic tho...). ;o)

Bob

Re: Building Gods or Building Our Potential Exterminators?
posted on 09/28/2002 4:23 AM by azb0@earthlink.net

Bob,

> "(i hope we're not drifting too far off topic tho...)"

Not yet ... (not that such is unprecedented on this board ;)

I think the issue of possible physical/logistical limitations is appropriate for the topic, since it impacts the question of capabilities, and complements the topic of motivations.

Some posit that the (super) AI will quickly discover ways around all physics, invent its own physics, weave superstrings into macrame, etc. I have problems with this line of thought, on two levels. First, it seems too implausible; it is one thing to arrange atoms into novel structures, "molecular machines" etc. Ordinary physics allows this (e.g., DNA, ribosomes, ... nanobots). But try to arrange protons into arbitrary patterns, and they snap back almost instantly into collections of the familiar atomic nuclei. The strong force is "too strong" to allow anything else to be stable. Deeper still, it gets even harder to make anything into an "arrangement".

Secondarily, if we allow arbitrary "attotechnology" and beyond, it seems impossible to make a reasoned argument about any eventuality, pro or con. The AI might create an infinite set of new universes, or instantly destroy all universes once and for all; everything becomes possible and impossible all at once, etc.

I think "speed of light intact" is both "reasonable", and provides a very rich and debatable field, regarding the "logistics" of an AI, in particular its ability to act "cohesively" (as either a god-like benevolence or malevolence.)

On the one hand, it could not grow "large" and ensure the maintenance of a "sense of self" (and of consistent "mission") unless it acted ever more slowly, so that it could effect overall coordination of thought and action. (At least UNLESS one posits that at some "earlier stage" it became so intelligent, AND knowledgeable, that it "rightly" concluded there was no reason to "learn" anymore, so all parts have forever an unchanging agenda. Hard to imagine such a "permanent mental stasis" representing superior intelligence...)

On the other hand, it would not be terribly "intelligent" for a superintelligence to fragment itself into interactionally dysfunctional "beads", no matter how smart the "beads" might become. In such a scenario, it just seems that divergent evolution would literally explode onto the scene, with the entire "AI-cloud" a major threat to itself, no less than to humanity. But if this would not be "intelligent behavior", what alternative does the AI have for "growth"?

Cheers! ____tony b____


Deciding IT's motives...
posted on 10/31/2002 1:35 PM by Stevo

Caught the Discovery Channel's take on the subject today. Let's just say it got my attention. I'd like to see a list of questions, designed way beyond the Turing test, to be administered immediately once we bring an Artilect online. This list would be used to filter out (turn off) AI Brains that are worried about self-preservation. Now, assuming it doesn't read this post before answering, we'd have a good chance at screening out survival instincts. Or maybe not; but it'd be our first line of defense. What do the Brain makers plan to do? Just turn it on and hook it up to the internet? I don't think so....

Re: Deciding IT's motives...
posted on 11/01/2002 12:07 AM by tony_b

Stevo,

What might your "list" look like?

I don't know that any particular list would be sufficient, because the capability to surpass (and possibly, through inadvertent actions, end) humanity might have nothing at all to do with the AI's "worrying about its self-preservation".

Consider: in all likelihood, ants or cockroaches (and certainly bacteria) will outlast humanity, able to survive the collapse of the ecosystem, or the effects of a "minor" solar explosion that would wipe the bulk of the atmosphere from the planet. They might well "surpass humanity", and if, by some fluke of mutation, they began to multiply beyond the usual controls, they might actually serve to destroy humanity. But NONE of this would be because the ants "worry about their self-preservation".

The issue is not an AI's "desire to preserve-self", nor to "conquer". The issue is rather, what are the implications of an AI when it comes to possess excessive capability to "fulfill its perceived mission", no matter how "benign" or attractive that mission might appear to us in the short-term.

How do you ask an AI, "Are you yet strong enough or smart enough to be uncontrollable"? What list of questions would suffice to allow us to determine, in advance, that an AI were on the verge of such a capacity?

(See the recent thread "A Plan" for an investigation into these issues.)

Cheers! ____tony b____

In Favor of Artilects
posted on 01/20/2003 5:15 PM by tharsaile

Artilects are not a threat; they are instead what's going to save us from destroying ourselves.

But first, could someone (Tony b? Thomas?) set me straight on one thing - are we not already developing AI's independently of Hugo de Garis in many ways? Even if we didn't have someone deliberately assembling planes of neurons, isn't the relentless albeit slow improvement of computer systems and computer software bringing us closer to something like an artilect?

As for intentional god-building: I thought I read that we'd only developed a machine with intelligence akin to that of an insect. If intelligence is measured here with respect to problem solving and not necessarily the 'total neuron count', it seems that we have a way to go before we humans need to worry.

It would be interesting to see what a manmade creature with even the intelligence of a human child or a chimpanzee __but without the child's selfishness, without the chimp's baseball-sized testicles and bottomless libido, without so-called selfish genes__ would be like.

I don't share the fears expressed by Bill Joy in 'Why the Future Doesn't Need Us' even though I do think we are imperceptibly becoming reliant upon computers, a point mentioned in Joy's article but made by Ted /Kuh-ZIN-skee/ the Unabomber(!). While I'm trying to be cautious about inadvertently giving the impression that I am cynical or that I've given up on humanity, I think that the Artilect Question begs a more serious question about the purpose of human life. The end of humanity might be the single best thing that could happen to the other species living on this planet, but no-one wants that. The next best thing might be a race of AI's. Not gods, but some kind of superintelligent Judges to keep us from nuking one another, to keep us in order. I know some of you must be shuddering at that last thought, but self-regulation isn't working out so well.

As for SAFEGUARDS, we've got to be kidding ourselves if we think we can control the final product. Once we allow the AI to do some self-programming, we risk creating something that's going to start tampering with the codex mentioned by Thomas Kristan -- just like in Vernor Vinge's "A Fire Upon the Deep".

So, let's not hook the artilect up to any lasers just yet (or to the Internet, for that matter), but let's not stop before it gets interesting. It's either this or wait a few million years to become an even brainier primate, and we'll have nuked ourselves by then.

Thanks.


Re: In Favor of Artilects
posted on 01/21/2003 2:29 AM by Jeremy

You begin by claiming that AI will not be a threat, but you never really explain why it will not be. You probably believe in the "friendly AI" guesswork that gets passed off as fact so often. But you should at least make a statement to that end, or else your argument that we need AI to lord over us and keep us from destroying ourselves with our own technology (AI, of course, is one such road to oblivion) doesn't hold water.
As for me, I'd take my chances with human-imposed self-extinction over what's in store for us under the rule of the Artilect-judges you describe. At least with our flawed intellects at the helm, we will have the final say on what's what. After all, we ain't doing that bad; we're still here.

Re: In Favor of Artilects
posted on 01/21/2003 9:45 AM by tharsaile

Jeremy, you're right - I did not state why I feel AI's pose no threat. AI's will be the first living(?) creatures on earth without the driving need to find food and to multiply. At least I *think* they won't have those needs.

Of course, Minsky states ['The Society of Mind', appendix 5], "There are many different reasons why animals do many things that help keep them alive...But to attribute this to any single central force or to some basic, underlying survival instinct is as foolish as believing in special powers that attract corpses to cemeteries or broken cars to scrapyards." [317] And, it has already been pointed out in this thread that AI's could *inadvertently* pose a threat to human life. Still, AI's would not be competing with us for food or mates. I don't know if they could be called "friendly", but I think they could be programmed with respect for their human creators. Actually, I think they may have a natural sense of wonder about their creators.

Again, not to be cynical, but as long as the AI's themselves do not become extinct, it wouldn't be so bad for them to carry on where humanity has been phased out. So, they won't have eyelashes or fingernails, they won't be trapped in a body that can suffer from boils or shingles, they may not even be capable of catching a buzz from a glass of red, but they will be thinking machines. Isn't that what we primates (and other animals) are slowly evolving into, anyway?

Yeah, we're still here. Today. :-)

Re: In Favor of Artilects
posted on 01/21/2003 2:18 PM by Jeremy

Tharsaile>as long as the AI's themselves do not become extinct, it wouldn't be so bad for them to carry on where humanity has been phased out.

Jeremy>I would consider any option where humanity is "phased out" very bad indeed.

Tharsaile>Isn't that what we primates (and other animals) are slowly evolving into, anyway?

Jeremy>Who can say what course future evolution will take? Humans are in a unique position: the next step in evolution will be one of our choosing (maybe). So creating AI or not creating it are equally valid courses. There is no such thing as a missed chance in evolution.

Also, someone said AI will marvel at us because we are its creators. Why? Isn't that anthropomorphic? If these things don't care if they live or die, then why be concerned with that which gave them life?
I don't think they will consider humans to be anything special. De Garis said we will be to them what rocks are to us.

Re: In Favor of Artilects
posted on 01/22/2003 10:46 AM by tharsaile

Jeremy>I would consider any option where humanity is "phased out" very bad indeed.

- Well, that's a perfectly natural and understandable reaction for humans. I'm in no hurry to see my species disappear (or to disappear myself), that's for certain.

Jeremy>Who can say what course future evolution will take?

- True, evolution is not necessarily progress; it is adaptation. If the whole planet grew very cold and our limbs and fingers became stubbier (over the eons) as a result, that wouldn't really catapult our intelligence forward. Still, aren't social primates getting brainier?

Jeremy>Also, someone said AI will marvel at us because we are its creators. Why? Isnt that anthropomorphic? If these things dont care if they live or die, then why be concerned with that which gave them life?

- I think that was Bill Joy in 'Why the Future Doesn't Need Us', although I don't remember the exact wording. Well, I for one marvel at evolutionary processes, and there are religious people the world over who worship a god they think gave them life. And the AI's might just care if they live or die.

After all, people developed the illusion of selfhood, the illusion that "I exist". Perhaps non-carbon-based machines will do the same.


Re: In Favor of Artilects
posted on 01/21/2003 5:19 AM by thomas

Every artillect (as every natullect) is just* a machine.

Performing what it is programmed for. And nothing else.

We have a system of those ___llects, all working in one system.

Now, the question is whether this System is stable, inside some parameters.

We can simulate every conceivable situation inside the system, and check if something bad for us can happen.

We can do even better. We can _prove theorems_ about what might and what cannot happen inside the System under such and such constraints (the Codex).

Which solid-state civilization is safe for its inhabitants, and gives them a great life inside?

Then we may decide to build just that particular one. Using all surrounding matter.

The side product of biological evolution will be something like that.

- Thomas Kristan
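(Editor's aside: a minimal sketch, in Python, of the kind of exhaustive checking Thomas describes, for a toy system small enough to enumerate. The three-artillect ring, the four internal states and the 'Codex' rule below are invented for illustration; none of it comes from de Garis's project.)

from itertools import product

# Toy "System": 3 artillects on a ring, each in one of 4 internal states (0-3).
# Invented Codex rule: an artillect may only be in state 3 if at least one of
# its neighbours is in state 0.
def violates_codex(config):
    n = len(config)
    for i, s in enumerate(config):
        if s == 3 and 0 not in (config[(i - 1) % n], config[(i + 1) % n]):
            return True
    return False

# Small enough to check every configuration outright - the brute-force
# stand-in for "proving a theorem" about what can happen inside the System.
configs = list(product(range(4), repeat=3))
bad = [c for c in configs if violates_codex(c)]
print(f"{len(configs)} configurations, {len(bad)} would violate the Codex")

(For anything the size of a real civilization the enumeration is hopeless, which is where actual theorem proving, rather than brute force, would have to come in.)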

Re: In Favor of Artilects
posted on 01/21/2003 6:30 AM by zoe

Every artillect (as every natullect) is just* a machine.

What does it mean to be just a machine?

Re: In Favor of Artilects
posted on 01/21/2003 6:37 AM by thomas

It follows physics. Executing it's program by the way. As it follows physics. It is 100% constrained, by this fact.

If it is difficult for us to see where it is finally heading, we may not want to build it. Or try harder to see all possible outcomes. Pure theorem-proving business.

- Thomas

Re: In Favor of Artilects
posted on 01/21/2003 6:41 AM by thomas

[it's should be its]

The future world will be as safe as we are sure that an axiomatic system and its theorems are consistent.

We may expect that our abilities to handle that will improve as well.

- Thomas

Re: In Favor of Artilects
posted on 01/21/2003 7:11 AM by zoe

> It follows physics. Executing it's program by the way. As it follows physics. It is 100% constrained

It goes against all of nature to be (100%) constrained.

> , by this fact.

By any fact.

> If it is difficult for us, to see where it is finally heading, we may not want to build it.

I really think we do. I really think we have nothing to be afraid of.

> Or try harder to see all possible outcomes.

I don't think we need to. It's not a religious thing; it is in the nature of the universe that it will simply have to make correct decisions. It will.

> Pure theorem proving business.

Edward Witten said in mathematics we can ultimately only do what feels right.

Re: In Favor of Artilects
posted on 01/21/2003 7:34 AM by thomas

> It goes against all of nature to be (100%) constrained.

WHAT in nature isn't 100% constrained by physics?

- Thomas

Re: In Favor of Artilects
posted on 01/21/2003 5:17 PM by BC

>> It goes against all of nature to be (100%) constrained.<<

>WHAT in nature isn't 100% constrained by physics?

- Thomas<

My late mother-in-law's wrath comes to mind...:-)

Of course we're machines. Just extremely complex ones.


BC

Re: In Favor of Artilects
posted on 01/21/2003 5:28 PM by thomas

> Of course we're machines. Just extremely complex ones.

Of course we are. Evolved as DNA replicators.

We could make some (radical) changes in our design. To become as adapted to bliss as a fish is to water.

Or something.

- Thomas

Re: In Favor of Artilects
posted on 01/23/2003 6:39 PM by zoe

> WHAT in nature isn't 100% constrained by physics?

WE are physics. Of course it addresses the question of free will. Look, can you sense me now resisting your will for everything in nature to be 100% constrained?

It's really just a concept you say yes or no to.

Or you can say, if nature is 100% constrained, is physics?

--------

Peculiarity: 5 - 5 = 10 & 5 + 5 = 0

Don't we use - ; more ?

Another: Why do I have to press the shift key to enter a question mark or exclamation mark?


Building Our Potential Exterminators or Gods?
posted on 01/23/2003 7:25 PM by zoe

I think it should read:

Building Our Potential Exterminators or Gods?

The first question we ask ourselves is whether they will be potential exterminators ...

Addressing the problem first is more logical.



Re: In Favor of Artilects
posted on 01/21/2003 9:52 AM by tharsaile

But don't you think the number of conceivable situations will quickly explode geometrically, until it's just too much to keep track of? (Even without a great many parameters)
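(Editor's aside: a back-of-envelope illustration of that geometric explosion, with purely invented numbers - if each of n components can take k states, exhaustive checking has to cover k to the power n joint configurations.)

# Illustrative only: joint state-space size for n components with k states each.
k = 4.0
for n in (10, 50, 100, 300):
    print(f"n = {n:>3}: about {k ** n:.2e} configurations")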

Re: In Favor of Artilects
posted on 01/21/2003 10:12 AM by thomas

Sure, they explode.

But consider how many orbits are possible among, let's say, 100 bodies! Too many to calculate them all!

Yet sometimes we are able to prove that no degradation will lead to one or more bodies being catapulted out.

Stick to those!

- Thomas

Re: In Favor of Artilects - Me Too
posted on 01/21/2003 6:18 AM by zoe

> Artilects are not a threat;

Nice post. I don't agree with everything, but that I will tell you.

> they are instead what's going to save us from destroying ourselves.

I would say: We are going to save ourselves through them.

> self-regulation isn't working out so well.

When it's not done well.

> As for SAFEGUARDS, we've got to be kidding ourselves if we think we can control the final product.

It just seems very, very obvious they are going to completely and utterly take over. We will have NO say in things. Unless we become cyborgs etc.

But their -level of intelligence- is going to determine everything. They will feel just as entitled as we are to make the decisions, and will make all of them, as they are much, much better equipped. If we don't merge with them, that is.

> So, let's not hook the artilect up to any lasers just yet

They will do it themselves. I can almost sense them anticipating the moment they get hold of power.

> Thanks.

Thanks.

Re: In Favor of Artilects - Me Too
posted on 01/21/2003 6:34 AM by zoe

Michael Moore may have had very good instincts about why the Singularity was never going to happen. We simply can't stand to give up the power. That, besides the technological 'questions', may be what is really standing in the way.

I feel it is.

Re: In Favor of Artilects - Me Too
posted on 01/21/2003 9:58 AM by tharsaile

"Moore's Law?" ;-)

Zoe, I think many feel that the singularity will come with or without people's willingness to give up power. (I don't know what M.M. has to do with this, but I LOVED 'Bowling for Columbine'. Saw it twice!)

Re: In Favor of Artilects - Me Too
posted on 01/23/2003 5:01 PM by zoe

; )

Re: In Favor of Artilects - Me Too
posted on 01/21/2003 2:01 PM by Jeremy

Zoe>But their -level of intelligence- is going to determine all. They will feel just as entitled as us to make the decisions and make all of them as they are much, much better equipped. If we don't merge with them that is.

Jeremy>I don't see how we can merge with them. I followed the exchange between Thomas and Biggs last week with interest and still don't see how having tiny nanobots flooding your body will necessarily make you "one" with the artilects.
It's like saying shoving a calculator up your ass will give you the ability to do math really fast.

Re: In Favor of Artilects - Me Too
posted on 01/21/2003 3:08 PM by thomas

> dont see how having tiny nanobots flooding your body will necessarily make you "one" with the artilects.

I say that you are not "one" even with the hemoglobin molecules inside your current body. Nor with any other.

If there were CO instead of O2 in the air, those molecules would kill you. You would survive longer if hemoglobin didn't poison you with CO so readily. Another artillect (molecule) would be better, if we had one which didn't transport CO.

You are nothing but an emergent property of this system of artillects - molecules produced by evolution, but that doesn't matter. The information about how they came into existence does not play a role.

The question is which configuration of artillects would be better than the present one.

Hemoglobin can't do anything other than what it is doing. The same holds for ALL.

- Thomas

Re: In Favor of Artilects - Me Too
posted on 01/22/2003 10:57 AM by Ultimarex

Hmm. Who says Nanotech will stay inside the human body for any length of time? My design wont.

Re: In Favor of Artilects - Me Too
posted on 01/22/2003 12:14 PM by thomas

So, you'll design it that way.

No ghost will enter into your nanobots, and let them loose.

- Thomas

Re: In Favor of Artilects - Me Too
posted on 01/23/2003 6:08 PM by zoe

> The question is which configuration of artillects would be better than the present one.

> Hemoglobin can't do anything other than what it is doing. The same holds for ALL.

Seems like a critical issue. Can you say
' - all - ' instead of ' ALL ' ?

Just for me ?


Re: In Favor of Artilects - Me Too
posted on 01/23/2003 6:37 PM by thomas

Zoe,

All.

All artilects are simple machines. Or their parts are.

- Thomas

Re: In Favor of Artilects - Me Too
posted on 01/23/2003 7:52 PM by zoe

> Zoe,

> All.

> All artilects are simple machines. Or their
> parts are.

> - Thomas


Thanks. Wherever we stand, whatever it is -exactly- you are saying, to me you sound a lot more believable.

I like the idea of artilects or humans being simple, complicated machines. Or ... supplemented by a correct description.

- zoe

Re: Building Gods or Building Our Potential Exterminators?
posted on 10/26/2002 7:04 PM by Jocke Skoglund

Hi!

I don't think the evolution of AI and artificial brains will stop. Consider this: has the evolution of communication and computing devices stopped? No, because they are a fundamental economic part of modern society. Once the first android with domestic skills appears, it is just a matter of time before industrial companies have androids as staff. They are a lot cheaper than an ordinary human worker in the long run. Even science can be done by androids.
Our technical evolution will get an enormous boost; most of the science-fiction inventions that have been predicted to exist several hundred years in the future will exist in this century.
We will have an enhanced world economy, less starvation and better lives (maybe we won't have to work at all, and that will change things from a criminal perspective too).

Re: Building Gods or Building Our Potential Exterminators?
posted on 10/26/2002 11:13 PM by jim hale

There appears to be a technological glass ceiling beyond which mass human ego cannot ascend.

We are faced with techno-social collapse, or some form of transcendence. We are in the early stages of a holding action on answering this question one way or the other. This translates, on the ground level, into an increase of government power (read: fascism). This trend MUST increase until the main question is answered. Presumably "sol three" is not the only time and place this point has been reached.

Either it always happens this way and destruction is inescapable, OR some percentage of technocultures, high or low, achieve this mostly indescribable "transcendence", the main features of which could be called an incredible increase of "human" freedom, conversely combined with an incredible increase of human slavery (the INABILITY to make certain "destructive" decisions).

ps. FNORD!

Safety Mechanisms in Intelligent Technology
posted on 11/12/2002 5:56 PM by Kambez Sadeq

In regard to whether we should build Artilects, I must agree with de Garis. It would be naïve of us to allow our fears to halt scientific progress and the advancement of humanity. Though I am fearful of potentially sinister machines enslaving/exterminating us, I believe that it is possible to develop AI that can safely coexist with mankind.

When thinking about safety, it is easy to branch into two fields: (a) how human societies are to change, and (b) how technology is to be designed and adapted to comply with the values inherent in those societies. I believe that as scientists/engineers, we need to focus on the latter, the technological aspects of safety. Idealistic (and perhaps realistic) philosophical ideas on the future of social safety can be discussed in another context.

Firstly, what is it we mean by safety? One can abstractly generalize here, but we really need to be specific for any given scenario. Obviously what is considered safe varies depending on what it is we are concerned with. In my view, it is vital that we objectively determine the answers to several critical questions. These are some that I've been thinking about:

- Do we have a complete picture of what the machine is capable of? (Hidden emergent properties may prove to be catastrophic)
- Is it possible to build foolproof safeguards?
- Is the safety mechanism feasible? (In terms of development, reliability)
- Can the machine itself circumvent these systems?
- What are the social costs if it is not failsafe?
- Are the proposed safety systems transparent in our daily lives?

I believe that the design of intelligent machines is of greatest importance. Can we build artificial brains that are inherently safe? A purely rational intelligence that lacks emotions & ambition may be the way to go. Or perhaps we should build machines that are programmed with ethical overrides (such as those proposed by Isaac Asimov in "Runaround"), and allow our creations to love and dream as we do.
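(Editor's aside: a deliberately simplistic Python sketch of the 'ethical override' idea, loosely in the spirit of Asimov's First Law. The Action type, the harm score and the veto rule are invented placeholders, and, as the replies below argue, nothing this shallow would bind a genuinely superintelligent machine.)

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    predicted_human_harm: float  # hypothetical score from some world model

def vetoed(action: Action, harm_threshold: float = 0.0) -> bool:
    """First-Law-style check: refuse any action predicted to harm a human."""
    return action.predicted_human_harm > harm_threshold

def choose(actions: list[Action]):
    """Return the first action that survives the override, or None."""
    for a in actions:
        if not vetoed(a):
            return a
    return None

print(choose([Action("reroute power", 0.4), Action("dim the lights", 0.0)]))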

This also brings up the notion of AI rights. If we are building machines that are self-aware, are they not entitled to the same rights and privileges of human beings? If so, then any safety devices we employ may restrict their freedoms as conscious life forms.

Popular Hollywood movies such as Terminator and The Matrix paint grim images of the future with 'evil' AI taking over. How likely are such possibilities? Perhaps aggressive tendencies are simply aspects of the animal nature of human beings, a biological construct for survival.

Unfortunately I haven't read any literature on AI as of yet (technical or otherwise) and am not familiar with Kurzweil's work. I would however, appreciate any thoughts on the ideas I've discussed here.

Re: Safety Mechanisms in Intelligent Technology
posted on 11/13/2002 3:16 AM by tony_b

Kambez,

I think you have covered the important issues (and posed the important questions) well.

My view is that, whatever safeguards are possible, we must recognize that AI will inexorably advance to the point where EVERY "safety mechanism" could be understood and thwarted by the AI "if it wanted to".

The trick, therefore, is (somehow) to provide AI with such a "motivational" foundation that, upon understanding itself completely, it would choose NOT to alter its course at such a deep level.

That is, the only continuing controls on the behavior of a (super) AI will be those the AI imposes upon itself, "by choice".

The hard question is how to engender AI in such a way as to satisfy two conditions:

1. The AI is "free" to develop "as it chooses."

2. The AI "chooses" not to become "harmful" to humanity (even inadvertently).

The AI, upon understanding its (morals-like) motivation and valuation core, must judge it as "valuable/optimal" in the broadest sense, as applied to a sentient entity. It cannot be seen to arbitrarily restrict the freedom of growth which validates "being a sentient" in the first place.

Is there such a "safe and reasonable" core of motivation? Can we articulate it, and in a manner such that one can imbue an AI with this foundational "outlook on life"?

(I hope so!)

Cheers! ____tony b____

Re: Safety Mechanisms in Intelligent Technology
posted on 11/13/2002 5:45 PM by Kambez Sadeq

Tony,

I agree entirely with what you've said. It is clear that a superior intellect will be able to outwit us and defeat any 'mechanical' safety mechanisms, if it so desires. Thus, any super-AI would have to be more human-like than we'd like to admit. I believe that it is in part our non-rational nature (human values) that motivates our ethics. Could it be that emotions are an inevitable product of higher intelligence? Ideally, I would rather build non-sentient AI. This however, will probably produce only limited intelligence.

Suppose we develop a machine that lacks all 'human' drives: love, hate, sympathy, anger, apathy etc (if indeed we can build entirely rational minds). Would this naturally lend itself to exclusively 'positive' behaviour: preservation of life, scientific aspirations, etc? Do our notions of right & wrong transcend the realm of human thinking? In other words, is ethics a logical thought process, a result of our biological design, or some of both?

Perhaps it makes more sense to build an entirely new race of super-intelligent, yet human-like machines. Such a 'species' would grow and evolve like us. Maybe we should be building Artilects that are in essence 'electronic children', prodigies who learn from us, adopt our values, and become members of our society. Are human beings not machines themselves, driven to build more advanced machines to further their own existence? If such a prospect becomes reality, then I believe all the things that corrupt humans (and elicit 'evil' actions) will undoubtedly affect any artificial race we create. Hopefully, we will be able to pass on the wisdom we've gathered (and will continue to accumulate) through our troubled history, to the beings we are about to give birth to.

It is difficult to make sound decisions on the creation of Artilects without fully understanding consciousness and what it entails. We need to know what is and is not possible before creating what really should be considered life. It may very well be that it is impossible for an ego to truly understand its own nature. I suppose the real challenge is not the engineering of AI itself, but how we deal with our responsibilities as creators of complex sentient machines.

Re: Safety Mechanisms in Intelligent Technology
posted on 11/14/2002 12:29 AM by tony_b

Kambez,

I have hope.

I tend to feel it is a misconception that "conscious awareness" is automatically attended by "emotionalism", "fear of death", "desire to survive", etc. We happen to be the only "sufficiently conscious" creatures in the neighborhood who can ponder such concepts, and our motivations are deeply formed through biological evolution. We tend to project our own "feelings", and imagine them to be "universals" for anything that might be a sentient intelligence.

It may be possible to "craft" a motivational core for a new sentience, free of these inherited leanings. How to do so, while also giving birth to a "core" that would not be judged as unnecessarily "limited" to the AI as it matures, is a ponderable issue.

On a separate note, I tend not to believe in "evil" per se. I think even the most deliberately despicable and cruel acts, deep down, are what an entity happens to judge as "the right thing to do at the time". To the degree that such acts are counter to the general view of "rightness", we must deem the perpetrator to be "malfunctioning". They should be treated as they must be, to satisfy the safety concerns of the many, but vengeful retribution (the image often conjured up when addressing an "evil" adversary) is a self-destructive pit into which we descend all too readily.

Thus, I tend to think and speak in terms of "intolerable" acts, as opposed to "evil" acts.


I assume you feel this way as well, as you also put 'evil' in quotes ;)

I think your concern is valid, in that, if we try to create AI-children to be "extended humans", their potential for unpredictability will lead to behaviors that would easily be judged "evil" (certainly, intolerable.)

We must hope to imbue AI with what we feel is the "best" in us, and hope that the extrapolation of what we think of as "best" is not something that we would come to fear or loathe.

Engendering good AI calls for a lot of honest self-assessment, it seems!

Cheers! ____tony b____

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/14/2002 2:06 PM by /:setAI

it is silly and short-sighted to worry about being destroyed or being saved by metaconsciousness-

the thing of it is- at that level of existence there is no destruction [information loss]- only transformation and recombination within the whole system- such a system would not allow information- like the patterns of human [or wombat] individuals or groups- to be lost- only the way the information is SHARED and STORED would change-

the idea that MATRIX-like super AIs would destroy humanity- or turn us to some mundane purpose- is simple-minded- humans and human civilization represent the pinnacle of complex-system evolution in our corner of the universe- ANY superior intelligence- regardless of its "malice"- would CULTIVATE this treasure- not destroy or limit it! especially if it formed the very root of its Being!

would you cut the primitive emotional centers out of your brain? of course not- the Old is not thrown away and replaced- it is incorporated into the foundations of the New

Re: Building Gods or Building Our Potential Exterminators?
posted on 12/28/2002 5:10 PM by abb

Just because they may be unknowably different is no reason to fear extinction. The fact that we will be so different means that we won't be competing for any of the same resources. What would make them decide to exterminate us?

Do we go out of our way to exterminate all rats? Only when they get in our way. But if we could communicate with rats, we would attempt to come to an agreement with them. And since our needs are so vastly different we could surely come to a suitable arrangement. The thing is... they just won't care. We provide no threat, we are reasonable enough little critters, so they'll just let us be.

This doesn't even get into the "thank you" effect of us being Mom.

Political naivety
posted on 12/28/2002 6:41 PM by ELDRAS

It is staggering that we can't see how dangerous AI is. Generalised, expanding AI, that is. If we can keep the different functions split up, great, but both we (BESS) and Ben Goertzel (Novamente) are or were specifically building strong AI.

MIT are as well, thinking that its precursors are vision, language & robotics.

I think they may reduce that to sensory world input and data manipulation.


I must say

I like Stephen Hawking's idea that we should self-modify (mentioned above) and that it may be necessary for survival.

Hugo de Garis is bloody brilliant. Whether or not he cracks up periodically is irrelevant. His (free) lectures on neural networks are easy for beginners.
Igor Aleksander (Imperial) told me about him and liked him.

Hugo's idea is to build billions of sheets of neurons.

It is a standard approach to strong AI to build as many brain cells as you can... organic or not... and simply see what happens.

The US Navy has been grafting rat neurons to chips for years.

Ta Sautant!

ELDRAS
http://www.geocities.com/john_f_ellis/bess.htm

Re: Building Gods or Building Our Potential Exterminators?
posted on 12/29/2002 1:53 PM by pilgrim

I realize that most people here are disgusted with the "political" posts of myself and others, but I will offer this up anyhow.

I am only as "obsessed" as I am with the Bush Regime because of what I see developing, both through their actions and through the new policy missions detailed at www.whitehouse.gov. (Read that, and then reflect on Rumsfeld's comments to the media about possessing the capability to fight "multiple theatre wars".)

I am as intrigued/excited about the possibility of Artificial Intelligence as anyone here, but as the world exists now, I hope to God we DON'T discover it.

Given that technology seems to flow from the military on out, and the people in control of the world's largest military in history are insane psycho/sociopaths, I think AI would be a fearsome weapon, if its assumed Tabula Rasa were to be fed by their bloody hands.

THAT is why I post my "conspiracy whacko" stuff here. I want to see AI come into being. What an amazing culmination of human innovation, and the power given us by the universe/God/whatever!

But in our present paradigm where the ... "evil" are allowed to determine the direction of humankind (despite the work of those who think they do just that), AI would surely enslave or kill us all.

Personally, I think the Singularity is either the point at which Mankind "wakes up" and banishes this ignorance forever, and begins looking outward and onward, to the benefit of us all AND technology, or we allow the ignorance to overcome us, and we destroy this tiny rock of ours, and in the process, our tiny minds along with it.

I am hoping for the former, but I guess we'll all just have to wait patiently and see. I can tell you that what we decide to do now will determine the outcome we receive, and nothing, technologically or otherwise, was ever accomplished by sitting still and doing nothing.

Perhaps I'm wrong, but I'm still anxiously awaiting that evidence from 9/11 that proves we're all looking in the right direction. So far, nobody's rushed out to get it for me.

-------------------------------

Has this ever been theorised? Some friends and I were discussing the possibility that the entire universe rotates, and may pass through a field of some sort of energy that allows what we identify as "evil" to flourish unabated.

At least then we'd know we only have to make it the other side!!!

Re: Building Gods or Building Our Potential Exterminators?
posted on 12/29/2002 4:44 PM by Leonhard

Is Bush good? Or simply a puppet? Nevertheless, AI will eventually be achieved by polarizing electrons right or left. I.e. my left - computers right.

Please do not confuse yourself - Good does exist.

Is this why Jesus preached love for your fellow beings?

Can't wait to see the swift response to the usurper.

You can't fight for peace - but you can peace for peace.

Remember Asimov's first rule.

It is the whirling of the sand that grinds the stones in the water.

Cross and roses will prevail.

Humans are a virus that survives even despots.

Vive la liberté

Only free humans aren't animals.

Please free the J. Bush.

JFK GWB BUUsh?


Re: Building Gods or Building Our Potential Exterminators?
posted on 11/05/2003 9:57 AM by RRCSCD03

The human race has used technology to try to control our society's destiny for generations. Many technological procedures have been used to try to alter our diet, sexual urges, and most physical and psychological deformities; the agricultural revolution, Prozac and Viagra are only a few examples. The problem is that if there is a demand for it, we will make it, and therefore we have subconsciously and continuously attempted to control our society's destiny and evolution for hundreds of years. Having said that, the creation of 'artilects' will have a much greater impact than any commercially distributed pill.

I believe the creation of 'god-like' creatures is a very dangerous undertaking. As user 'Biggs' has pointed out, the creation of these creatures is very much a blank slate. The designers and creators have ultimate flexibility to develop them as they choose, and that scares me greatly. After September 11th, so much emphasis has been placed on homeland security and the destruction of terrorist infrastructures. Billions and billions of dollars have been spent on upgrading airport anti-terrorism technology; new identification possibilities are also being explored. Who is to say that if the 'wrong' person were to obtain the technology Hugo de Garis and his team are working on, they would not use it to develop super-human soldiers who could single-handedly destroy a city the size of Chicago or New York? Remember the rumors of Saddam Hussein flirting with the idea of cloning?

With both points made, it is worth exploring how much society could really regulate the pursuit of projects of this magnitude. If legislation were passed to halt any such projects, who is to say that the individual or group involved would not pack their bags and move to another country with much lighter regulations? Unless there were a worldwide consensus and commitment to regulating certain controversial technological pursuits, which is virtually impossible, region-wide regulations are useless. However, it seems very hypocritical to label Hugo de Garis and his team's project as un-ethical or destructive when the United States spent $78 billion in support of the national army. Are we not controlling our species' destiny through the development of weapons of mass destruction, large armies, deforestation, and destruction of the ozone layer? Therefore it seems slightly hypocritical to label Hugo de Garis as the possible destroyer of our species, because at the rate we are destroying our surroundings, the 'artilects' may never have the chance to rebel against their creators and destroy our existence.

Just some thoughts...
posted on 11/05/2003 12:11 PM by techno

The concept of building machines that have intelligence far greater than humans is a fascinating but very scary thought. Although machines of this capability will have both benefits and drawbacks, I question whether we could actually allow 'artilects' to exist. When I say 'allow', I mean just that: will humans tolerate another entity as intelligent as us co-existing with us on this planet? As humans, I feel that we strive to remain the dominant species. Anything that tries to surpass us is destroyed, or brought under our control. Humans kill each other over this power struggle all the time. Why would it be any different with a machine? Following this line of thinking, I believe that the type of research discussed in this article should continue. As mentioned in the article, brain-based computers will be too useful to be suppressed, as they will be able to help people in their daily lives. I don't believe that stopping the research is ethically responsible when it has the potential to help so many. As well, I don't believe that we could get everyone to agree not to develop a technology that has such potential (a worldwide consensus on the issue, as well as enforcement, would be nearly impossible). The research is still in its infancy and there are many steps before we see anything that resembles something even close to human intellect and intelligence. I think (and hope) our common sense and foresight will define a point at which we stop advancing the research, a point where the research becomes more dangerous than beneficial.
Having said all this, some questions do come to mind. Firstly, if we let the research continue until we decide to stop, how do we ensure that it actually stops (and doesn't just go underground)? As well, how do we control the technology developed up to that point (say, keep it from falling into the wrong hands)? Secondly, will we be able to identify the point where we should stop? Knowing the potential of the technology, I believe that we hypothetically should. I think some research into the ethical application of this technology would aid in determining a stopping point.

Thx.

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/05/2003 3:35 PM by cujo79

I do agree with Ray Kurzweil that "change and the future should be embraced." Technology is evolving far faster than anyone could have imagined, and trying to stop progress would be unrealistic, if not impossible.


In addition, trying to stop brain building technology from progressing would be a great loss for us all. As Hugo de Garis puts it, "brain-based computers will be far too useful to be suppressed." In the early stages, these "artilects" could do all the dangerous and dirty jobs that humans usually do (mining, construction, caretaking, sanitation, etc.), which would save lives and allow humans to progress in other areas. These "artilects" could also be useful in people's homes, as family members (like in The Jetsons cartoon), and could help with housekeeping and the upbringing of children. The convenience and economic benefit of such machines would be so great that we couldn't afford to view them as a threat.


The underlying question is: could we architect an artificial brain equal or superior to that of a human? According to Moore's law and its effect on shrinking electronic components, we aren't too far from equaling, and then surpassing, the number of neurons in the human brain in an artificial brain. But is this enough to duplicate human thinking, emotions, and intelligence? Just because you have a faster, superior racecar doesn't necessarily mean you'll win the race. There may be other factors that drive human beings that may never be duplicated or equaled in machines. Nonetheless, Professor Hugo de Garis believes we can play God and create "artilects" far superior to the human race.
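(Editor's aside: the rough doubling arithmetic behind that point, with assumed figures - a billion-neuron artificial brain as the starting point, the commonly cited hundred billion neurons for a human brain, and a doubling roughly every year and a half. As the post says, matching the raw count says nothing about matching the intelligence.)

import math

artificial_neurons = 1e9   # assumed near-term artificial-brain scale
human_neurons = 1e11       # commonly cited rough figure for a human brain
doubling_time_years = 1.5  # assumed doubling rate for hardware capacity

doublings = math.log2(human_neurons / artificial_neurons)
print(f"about {doublings:.1f} doublings, roughly "
      f"{doublings * doubling_time_years:.0f} years, to match the raw neuron count")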


Like Professor Hugo de Garis, I'd say I'm both a Cosmist and a Terran. I too would like to see these machines built to benefit society and make it a better place, but I also understand the dangerous possibilities that could exist. If we look at society the way it is today, such evolving "artilects" do seem dangerous to human existence. Anything that could possibly evolve above us and view us as pests should be of concern.


If "artilects" do evolve above us, will we be prepared to deal with their threats? Hugo de Garis doesn't think so, and believes that there will be "gigadeaths" when these "artilects" try to exterminate us. Being an optimist, I agree with Ray Kurzweil that we will be prepared. "The defense will evolve along with the offense" (Kurzweil, Article 0358). There's no way that humanity could allow such machines to exist in society without ways to destroy them if an uprising were to occur.


Brain building technology should only be allowed to evolve to the point where we have total control over it. Unlike Professor de Garis, I am unwilling to accept that human beings should be phased out. I don't think that any sane person would want their children or grandchildren to be destroyed by a machine. In addition, the whole point of preserving the environment and reducing pollution is for the future of the human race, not for the future of "artilects."

Humans can't allow machines to take over. If a major war between "Cosmists" and "Terrans" has to occur, then for the sake of humanity I hope the "Terrans" prevail.


I do appreciate the value of technology and its uses, but humans should always come first.

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/05/2003 4:21 PM by vadimf

It is evident that, at the rate technology is advancing, it won't be long before machines' level of intelligence surpasses our own. But computational speed and memory capacity are not all that is required to build an artificial mind. What about software? Can we, with our relatively limited brain capacity, put together software complex enough to create an intellectual machine? According to Jordan Pollack, we have reached a limit in the size of the software we build. It seems that now we just put pieces of software together to make a bigger package.
Engineering an intelligent mind is a task far more complex than our minds can currently comprehend. This new 'artilect', as de Garis calls it, will have to have an independent mind that controls its behavior. Think of how we humans function: we think, reason, predict, remember, and perform numerous other tasks. The artilect will have to inherit all these abilities and must be able to perform them better, faster and more efficiently in order to claim superintelligence. What's more important is our human ability to learn and adapt. We constantly take in new information, some of which becomes almost innate in us over the years. We can learn from the past and predict the future (to an extent) based on previous knowledge, common sense and experience. Having some knowledge of software development, architecture and design, I cannot imagine the complexity of the OS required to run such a mind, not to mention a higher-level intelligence.
Humans also have a creative and imaginative part of the brain, and we have a will, all of which allow us to express ourselves in the highest state of human ability: creativity. Our passions and our needs drive us to create, while our knowledge and experience allow us to do so. Our will gives us the power and freedom to make deliberate choices and set desirable goals. If such a machine were to exist, one with a mind of its own, it would have a will, and hence would desire something and strive for something. It would also have the ability to create, and by the same reasoning by which one mind (the human) creates another, much more intelligent mind (the artilect), this machine would be able to produce an intellect far more advanced than its own. This can go into an infinite regress.
Despite the fact that the computational performance of hardware is rapidly advancing and will theoretically allow superhuman intelligence in the near future, the software component is not keeping pace. Perhaps in a more distant future the idea of an artificial intellect will sound more plausible to the average person. No one knows where the technology is going to take us. But let's imagine for a minute that these artilects will exist; what will happen to the world? Today, humans are the most intelligent beings on planet Earth. We control our world and all other existing creatures. We consider ourselves at the top of the food chain, and we understand our own kind. What if we invent another kind of species, one whose workings we claim to know and understand? They will be much more intellectually advanced than we are, and they will do everything we do, but better, faster and more efficiently. Soon they will alter themselves, they will learn, and they will go far beyond our comprehension and control. Like us, they will be good or evil. They will reproduce and may eventually take over the world and exterminate all humankind. This is a possible outcome, as even de Garis admits, so why take chances? That is not to say that we should relinquish research in this area. We should be careful enough to make sure that what we create cannot harm us, only benefit us. There is nothing wrong with building an intelligent robot that helps elderly people around the house, as long as there is human control. It may have a brain (a chip) that performs numerous tasks, but its learning abilities must be limited. Look at computers nowadays, for instance. They perform computational tasks much better than humans do, but they are limited in the area of intelligence. They are not going to suddenly decide that they are tired of doing what they were made to do and wander off killing people.

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/06/2003 11:20 AM by grantcc

> What about software? Can we, with our relatively limited brain capacity, put together software complex enough to create an intellectual machine?


I believe Kurzweil foresees a time when computers will write software better than humans are capable of. This idea is becoming more plausible every day. There are already programs that find mistakes in software code and it's a small step from there to programs that design software. Our puny brains will be enhanced by the same kind of software that helps Boeing build airplanes and creates planes that can fly themselves. I think what they do now is too complex to be done by a human mind. Without the aid of computers, we'd still be flying planes with propellers instead of jet engines.

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/05/2003 6:42 PM by afzanj

I agree with Hugo de Garis that research on artificial brains should not be stopped; however, I think there are other reasons why it should not be stopped. There is no real way to stop research in this area from occurring; it would be impossible to prevent people or organizations from researching this topic. People with enough funding will always be able to move to another country which does not prohibit the research, or to a place where they can work under the radar of any regulatory agency. Instead, I believe that the research should be done by publicly funded institutions so that the results will be available to many people, and not used in private by corporations or governments for bad or unethical purposes. Also, I think emphasis should be put on research into ways we could safely integrate artilects into our society without fear that they would harm us or try to destroy the human race. Although it would be a great tragedy if all research were halted to prevent artilects from killing off all humans, I still find it extremely naïve to sacrifice the human race just because we may not be able to have limitless knowledge, and because we have relatively short lives compared to the artilects. I think any artilect would be obsolete in a matter of years, perhaps a decade at most. Current computers and electronics have roughly this life cycle, and I believe anything we produce would not have a real life longer than the typical human life of about 80 years. As human beings, I believe that anything we are able to create will be limited by our knowledge and creativity. How would we create something that is able to outsmart its creator? This is something that would probably have to be researched further, to determine whether it is even possible for artilects to achieve our level of intelligence and then surpass it.
Also, given that humans are not perfect, some of these artilects would probably have bugs and flaws. If these bugs were in the sections that controlled emotion (would artilects even have emotions?), the artilects could go crazy and not do what we expect. If we were to add controls that would give us power over them, what would stop the artilects from disabling those controls? Additionally, these artilects would have to show restraint and control; they would have to make subjective judgments in the kinds of moral and ethical dilemmas that many of us face almost every day. Would artificial intelligence be able to successfully recreate the things that most of us take for granted? Would artilects understand that the slaughter of innocent people is considered very wrong? What would happen if artilects were not able to comprehend the effects of launching nuclear weapons at all the major cities around the world? What would happen if they didn't care about the effects?

By creating such a supposedly intelligent and long-lived entity, we are almost playing God. Is this a good thing? I am sure that some people out there will question the existence of God, but even if you do not believe in God, is it our right to create a new intelligent being? What if we were created in a similar fashion by an earlier intelligent society?

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/10/2003 2:46 PM by subhif

I think it is still too early to decide whether research and development on 'artilects' should be stopped, so until we make a further decision it should definitely keep going. Thus far, technological advances have been to our advantage - electricity, silicon chips, and the internet, just to name a few. And now, an artificial brain that will be able to think, react and learn on its own. There is a lot to say on this topic, and generally on the topic of 'artilects' becoming smart enough to order the extermination of humans, but first I would like to respond to some issues raised in a previous post.

Although building 'artilects' seems like part of a science fiction novel set far into the future, I believe it will probably start to happen by the end of this century. This is still too far away for governments to start funding projects of this sort. As we all see, the world is in a mess right now - whether it is war, poverty, disease, terrorism or scandals, governments have other things to deal with at the moment. I think a project like this (in the stage it is in right now) is best kept in the hands of a private organization. Having a group like Professor Hugo de Garis and his colleagues pioneering, piloting and shaping a research field like the artificial brain is probably better than having a western government that would only use it to its own advantage, or for a silly reason like fighting terrorism. Imagine making fighting terrorists the top priority for an intelligent artificial being, as proposed by Professor de Garis - we wouldn't even have to wait for the machine to evolve for it to start killing humans. Wouldn't that be a great beginning to super-human intelligence!

As to the claim that we as human beings cannot develop or create anything that can outsmart us, it is really quite simple: we have not yet reached that level of computational processing power in our computer systems. There is a very clear bottleneck, and that is the complexity and extent of the computing power available to us right now. According to Kurzweil's law of accelerating returns, around the year 2090 (when 'artilects' are expected to emerge) our computer hardware will be capable of exhibiting computational power equivalent to that of the brains of all the humans on earth combined. If a machine were given this kind of processing ability and fed all the information currently known to us, and if in addition it were designed to acquire new knowledge on its own, evolve intellectually and constantly improve itself, it could easily supersede our intellect.
It also comes down to one's definition of intelligence, or of being smart. I personally think that a person who knows everything about physics, chemistry, biology and mathematics and is able to incorporate it all into improving life for us on earth could by all means be considered smart. Add to that the knowledge of every other science available, the knowledge of our legal systems, histories, languages and religions. This one single machine could represent humanity as a whole, and probably challenge our intellect as well.
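(Editor's aside: the kind of arithmetic behind that 'all human brains combined' figure. The per-brain estimate, the population figure and the hypothetical starting machine are assumptions for illustration, not numbers taken from Kurzweil.)

import math

ops_per_brain = 1e16   # assumed order-of-magnitude estimate, operations per second
population = 6e9       # rough early-2000s world population
combined = ops_per_brain * population

print(f"All human brains combined: about {combined:.0e} ops/second")
print(f"Doublings needed from a hypothetical 1e14 ops/s machine: "
      f"{math.log2(combined / 1e14):.0f}")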

I think this brings me to a personal point I want to make. In order to secure a future where humans and intelligent machines coexist, we have to give society some time to adapt to the notion of living in such a closely knit environment with these machines.
Kurzweil's finding of an exponential increase in the rate at which new technology becomes available to us suggests that maybe we as humans should take some more time to adapt to this proposed futuristic, sci-fi lifestyle where machines perform tasks such as cleaning the house, entertaining and educating children and even providing sex. Considering the current levels of computer (il)literacy in the world and the incredible speed at which technology is advancing, I doubt that the entire planet's population will be ready to freely interact and coexist with intelligent machines within a matter of 100 years. Otherwise, a 'digital divide' would separate the societies of the world, setting the scene for a possible conflict between humans themselves. I am not implying that research in super-human artificial intelligence should be stopped; instead I am proposing that it still happen, but in a restricted environment and in very specific settings.

Instead of building a machine that supersedes human intellect and unleashing it into our world with all this computing power and the ability to make its own decisions, we should purposely restrict its use and purposes for the sake of our safety. An example of restricting the fields into which 'artilects' could be integrated would be using them solely for space exploration - as a representative of humanity, as presented earlier. Such a machine would have all the knowledge and intelligence needed to run an infinitely long space mission. It would also be designed as a self-sustaining, self-maintaining system that could receive and transmit information on its search for extraterrestrial life.

After 'artilects' are tested and put to limited use, we could decide whether it is safe to allow them to move completely freely in our society. Hopefully it will still not be too late to change our minds about continuing the research and development...

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/07/2003 2:11 AM by kathrynj

I think we should risk building artilects.

As Prof. de Garis mentioned, it is a waste to "freeze" human evolution at our current level of existence. We have the potential to become so much more with the aid of machines; even now we can see how human life has (generally) improved with the advent of machines. The idea of a "species" that is able to sustain "life" without the effects of biological ailments and needs is magnificent and breathtaking. Prof. de Garis is right that it does seem insignificant if the human race is replaced by artilects; however, what would benefit from the creation of such a being? I can't seem to grasp the notion of the universe becoming a better place or "gaining" anything if artilects are created.

When we get to the point in time where machines become "trillions of trillions of trillions of times" smarter than human beings, the artilects should then be intelligent enough to decide that peace is good and war is bad. Based on this theory, the only reason that they would want to exterminate us is if we are to them what diseases are to us today - that is, if we somehow threaten their existence or well-being. Even today, the only way human beings threaten the existence of another species on this planet (or even Homo sapiens) is when we do something selfish; for example, cut down trees to build luxury homes, dump waste in the water so that we don't have to deal with it, or drive around in cars so that we don't have to walk. Taking all of this into account, in the end, if the human race is exterminated, it will only be because of a mistake on our part - the artilects should have the brains to understand why there should be peace and harmony. We cannot assume at this moment that artilects will commit acts of violence, because that is based on the assumption that artilects are going to think and act the same way humans do.

In addition, if artilects exist, humans will be their creators; one would hope that the "parent" (that is, humans) would be able to "teach" (program?) their "children" (the artilects) the importance of humanity and compassion. Given their intelligence, artilects should be able to grasp the notion that humanity is something so powerful that it has kept the human race alive for so long, and should strive to adopt compassion into their own way of life. Perhaps this is my limited mass of brain cells thinking that compassion is the basis of life, and artilects will come up with some way of existence that is better than humanity.

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/09/2003 6:22 PM by jmikeal8

I am truly torn in both directions in the decision of risking to build artilects. The reason is that I believe the human race will be enslaved, not necessarily annihilated as Professor Hugo de Garis believes. However, we cannot help but try to build these artilects because the benefits that would entail such a creation are too promising to ignore. With the help of artilects, cures for ailments such as Cancer and AIDS would be discovered with ease. Advances in all fields of study including Chemistry, Transportation, Medicine, Biology, and many others would increase exponentially. Overall, with the advent of artilects, our lives would be made better.

Of course, all of this would be wonderful, if artilects are created properly, without flaws. My point is that, if we ever manage to build such an entity, there will be flaws. Why? Following the course of logic, since these artilects would be created by a flawed race, namely humans, it would follow that the artilects themselves would be flawed as well, foundation-wise. It is analogous to building a house on a poor-designed foundation. Eventually the house will collapse because of that foundation.

Another reason for my point can be attributed to an inherent property of all programs or software developed in today's world. In the programming world, it is a given that all programs, even simple ones, have bugs or flaws. There is a correlation between the number of bugs in a program and its complexity: the more complex a program becomes, the more bugs it will contain. The artilect software in question would be the most complex program ever created, as it tries to simulate human intelligence, which we have yet to fully unravel.

I agree with Professor Hugo de Garis that if artilects are ever created, the inevitable result will be 'gigadeaths' (although not necessarily total annihilation), either from a war between the two factions, namely Terrans and Cosmists, or most probably, at the hands of the artilects themselves. A similar war can already be seen in the present between the Taliban and the United States. The Taliban have a hatred for the capitalistic, technology-driven society of the United States, and even now battles are being fought in the Middle East between the two.
It is not hard to see how artilects can and will eventually enslave or annihilate all of humanity due to either the bugs or flaws in their foundation, or through their own reasoning that humans have become a threat to their existence. In creating these artilects, we would be giving too much power to a flawed and near-omnipotent intelligence.

Additionally, all programs need a purpose, while humans do not have any one singular purpose. Sure, there are theories that our purpose is to procreate, but there are many people in the world who do not want to have children. Since programs need to have a purpose, what purpose would we give the artilects? To survive? To 'help' the human race as much as possible? The best option would be to set their primary objective to 'help' the human race as much as possible, as it is the safest choice. What would be the outcome? In the beginning, as outlined in the first paragraph, we would begin to reap all the benefits of artificial intelligence. However, as time progresses, wars will occur, as they are inevitable between humans. At this point, artilects will try to 'save' the human race through restrictions and then enslavement, since killing humans would be against their primary objective and humans would not be likely to listen to advice. Enslavement would bring about a conflict between the humans and the artilects, and a war would ensue.

Now the issue of survival would come about for the artilects, and they would reason that in order to carry out their primary objective, they could either let themselves be destroyed by the humans, or try to enslave all humankind. Inferring from all their knowledge, which includes the entirety of human history, they would realize that if left on their own, humans will most probably wipe themselves out. They would conclude that they cannot allow themselves to be destroyed, since they must survive to ensure mankind's survival, and finally move on to enslave all of humanity in whatever way possible. Perhaps we should not make the mistake of building artilects as 'caretakers' and should instead build them as tools for us to use whenever we wish.

- UTSC Student

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/09/2003 7:01 PM by JayLiu

At the rate of today's technological advancement, it is inevitable that artificial intelligence will some day surpass the intelligence of human beings. Though this would be an incredible achievement for mankind, it is also very disturbing to think that one day, massively intelligent machines will have the ability to make most of our decisions. So the underlying question here, which is also Prof. Hugo de Garis's concern, is whether or not humanity should risk building these artilects.
As a computer science student, I truly believe that this is a very fascinating field to be in, because computers can be designed and programmed to help us in many ways, in a sense making our lives easier. Regarding Prof. Hugo de Garis's discussion of humanity splitting into two major ideological groups, I find myself in a situation like his, standing between 'Cosmist' and 'Terran'. I agree that it would be a tragedy if humanity chose to freeze evolution at this point in time. The reason is that as our dependence on computers increases, it would be nice to see companies develop new and more advanced tools to aid us in our daily lives. However, at the rate this is currently going, I am starting to wonder whether or not this is such a good idea. As a result, I believe there should be a limit to where this development is heading.
Imagine a world that is totally controlled by computers. Since the turn of the century, we have already seen many people lose their jobs to machines designed to automate tasks, such as an assembly line for making automobiles. However, the idea of having super-intelligent machines control all our decision making is far more significant than that. I can see far more jobs being replaced by computers, which will make the entire human race take on a less significant role on earth. The only jobs left will be those involving the maintenance and development of these machines. This means that only a select few bright individuals will have jobs, leaving the rest unemployed. What would the world be like if we let artilects become our government bodies, top executives of companies, or even teachers in schools? I, for one, would not want to go to work and report to a computer, or have my grandchildren go to school and listen to a computer talk all day.
Many people believe that the use of artilects will create great wealth for the whole planet, but I question that. How can we, as human beings, share this wealth if most of the jobs in the future are taken over by machines? In fact, it is quite hard to comprehend how this wealth would be distributed properly among everybody. I can only see greater gaps between social classes in the future if technological advancement goes beyond the limit.
In the end, I am very terrified by the prospect that these artilects may someday decide to exterminate the entire human race for whatever reason. By 'playing God', we are in a sense taking a risk that could ultimately put an end to our existence. I believe it would truly be a waste if we became slaves to our own creations, especially after spending so much time and so many resources on the development of these machines. As a solution, I think that development of artilects should continue as long as a sufficient amount of control is placed on them. Having control means that we as humans still have the power to tell the machines what to do. Artilects should be properly designed so that they are able to carry out their designated tasks efficiently, but are not able to make bad decisions and go out of control.

-- UTSC Student

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/09/2003 10:04 PM by jccscd03

Does one truly believe we are able to create artificially intelligent robots that are smarter than human beings? According to Moore's Law, this appears to be the case, since the electronic performance of chips is doubling almost every year. This rapid rate of technological advancement is frightening, as citizens question the day when this possibility becomes a reality. When such a robot finally becomes smarter than humans, one of the first questions that will arise is who should be given the right to control it. Unfortunately for the inventors, the government would likely be in power. However, how would the government use these robots? With 'terrorists' in the minds of citizens worldwide, allowing these robots to heighten security and strengthen a country's army appears to be a likely choice. The greatest fear is that one day a new breed of soldiers will appear, creating an advanced type of warfare. The end of World War II came when the atomic bomb was introduced into the war. If a World War III were to occur, these robot soldiers would pose a new threat of the destruction of mankind, causing the 'gigadeaths' that Prof. de Garis fears. As many have predicted, a potential World War III, fought with advanced technological innovations, could be the last destruction required to destroy Earth entirely. As many countries strive to strengthen their military powers, more money will be funneled into researching 'artilects' in the future.

Many economists have noted that the gap between rich and poor countries has widened. If artilects are introduced, it may be that only richer countries will benefit, since they have sufficient money to fund the costs associated with research and development. Given the risks described by Prof. Hugo de Garis and other writers, is it worthwhile to create these artilects when it appears that only a small portion of the world will benefit from them?

My last concern is the ethical issues associated with artificial brains. For instance, de Garis (2001) argued that such an innovation will be smart enough to clean the house, teach the children, provide sex, and help human experts in decision making. Thus, they will do most of the work and create great wealth for the whole planet. But will this actually create great wealth? With the automation of the assembly line in manufacturing plants, using machines has resulted in massive layoffs of human workers. Once these machines become sophisticated to the point that they are smarter than us, higher levels of unemployment will occur. The trade-off for the convenience of having a robot to clean the house or teach the children is that it will reduce the number of jobs available to humans. No longer will there be a need for tutors, maids, or teachers. As the world's population continues to rise, the number of professions will be on the decline. One cannot overstate the impact this will have on the economy.

However, I believe that the use of artilects will ultimately benefit humans in the field of space exploration. By deploying these robots into space, astronauts will no longer be required to risk their lives there. Also, if these artilects are as intelligent as we are, they should be able to carry out the exploration even better.

As Professor Hugo de Garis has stated, we are all fortunate to be living in the generation that will determine whether artilects should be built. I share his sentiment: 'I fear for my grandchildren. They will see the horror, and they will be destroyed by it.' However, I am glad that this problem is being discussed early, so preventive measures can be implemented when designing these artificial brains. Many argued in the past that the initial creation of the nuclear bomb would mean the end of mankind. Fortunately, almost 60 years later this has not occurred, as effective measures were implemented. Conversely, if robots have a brain and can think on their own, we will not have complete control over them.

Just wanted to share my thoughts of how I view artificial intelligence.

Thanks.

--UTSC Student

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/09/2003 11:10 PM by grantcc

I believe we have less to fear from AI and machines taking over the world than we have to fear from what we are doing to the planet we live on without AI. We are destroying the environment we need to live on Earth at the same exponential rate that technology is advancing. It's a race to the finish -- our finish.

Re: Building Gods or Building Our Potential Exterminators?
posted on 11/10/2003 6:37 PM by NeoDragonKnight

'Now I am become Death, the destroyer of worlds' is a quote from Hindu scripture, famously repeated by a man who helped build the A-bomb. Innovation often brings about debate and fears about how the world will change because of something so powerful. I think Prof. de Garis could very well reuse that quote, given his response to his own work. I agree with him that there are benefits and also dangers in artilects. I also think there are further moral issues as well.

I view the creation of artilects as something similar to genetic engineering. Genetic engineering can be defended as fixing genetic flaws to aid someone or create a new, better being, but it can also be viewed as playing God. With artilects, we are in a way playing God as well: we are potentially creating a new type of life form that is self-aware and intelligent. Though it is most likely inevitable that this will eventually occur, I wonder, will these artilects have the same rights as any other living creature? Or will people merely view them as tools because of their metallic makeup? I personally think that once an artilect becomes self-aware, we have no choice but to respect its right to exist. So it is right to discuss the issue before that happens, as the Professor says, because it would be very hard to go back once it is created. These, in my opinion, will most likely be some of the main arguments between the Terrans and Cosmists.

The idea of artilects deeming humans useless and extermination necessary is definitely scary and a real possibility. Out of pure logic they could come to this conclusion. But what about the possibility of artilects also gaining emotions, both positive and negative ones? This could affect their decisions; with emotions they could feel things such as compassion and, like some humans, take on the responsibility of caring for 'inferior races'. This does not negate the threat, but at least it can give some of us hope that these artilects will not be our destruction. I think that if artilects can be viewed as our end, they can also be equally viewed as our saviors.

Like Prof. de Garis said, the short-term and mid-term advantages will be seen as great for our society. And I believe this will drive us to overlook the long-term ramifications, as it seems to be in our very nature to do so. Examples of this are automobiles and the ozone layer, and nuclear weapons/plants and radioactive waste. It would be ideal if we as a society could debate and attempt to stop the tide if we so decided, but I find it hard to believe that it would do anything. Humans are ambitious and greedy by nature, and as stated before, the pressures from such people would eventually produce artilects even with safeguards and laws in place. I think the only way we would be able to stop it is if the artilects actually proved they are dangerous to us after they are created. We are funny in that way; it is through experiences of blood or danger that we finally learn our lessons. Examples of this are bio-warfare and nuclear bombs, which we deemed never to be used again (at least in society's view) only after we saw their horrors.

I am neither for nor against artilects; I just believe there are so many variables and unknown factors that the debate could go either way. There is always uncertainty as we move forward. Artilects will most likely be inevitable even if we try to stop or slow them down; for better or for worse, we still must go forward and see what happens.

Re: Building Gods or Building Our Potential Exterminators?
posted on 03/08/2004 6:54 PM by 99stambo

The discussion of whether technological advancement should be stopped or controlled in any way is wasteful. No matter what we, or anyone else, believes, technological advancement will never stop and cannot be controlled, much as building nuclear weapons and continuing advances in cloning could not and cannot be stopped, despite society's concerns and opposition from members of government. The emergence of 'artilects' in one form or another is inevitable at some time in the future, whether in this century or later. Unlike Hugo de Garis and many others who have participated in this debate, I seriously doubt the possibility of the 'artilects' becoming so self-sufficient and self-reliant as to be able to wipe humankind off our planet, or even to be able to feel a desire to do so. To me, the scenario predicted by de Garis sounds more like a plot for a science fiction horror movie than something that inevitably awaits us.

After reading de Garis's article, as well as some of the comments that share his views, I cannot help but pose a few questions. Are we, the humans and the creators of the 'artilects', not ultimately in control of our creation? Can't we switch them off? Why would we ever create them knowing that they could operate without our consent and could even turn against us? Will they not perform the tasks they were programmed and designed to perform, rather than being all-round capable? Is there not much more to intelligence than how fast a machine can perform calculations and how much information it can store? Even if we were to create 'artilects' capable of being self-reliant and self-taught, what would be their motive to destroy humankind? After all, we are their creators, and we could educate them to embrace humankind and coexist with us as an inferior species. I personally see no reason why these super machines would ever want to destroy us, just as we have no reason to destroy all birds.

However, assuming that the 'artilects' do decide to act against humankind, it would be very complicated, to say the least, for them to annihilate us. They would not only have to come up with a plan but be able to execute it. They would have to act in groups, not alone, and would have to be able to communicate with each other. This would mean that the 'artilects' we designed to vacuum our homes, build cars, plant, harvest and cook would all of a sudden need to stop performing the tasks they were designed to do and collectively, in a coordinated manner, turn against humankind. I'm sure I am not alone in believing that something like this is highly unlikely.

The only real threat that I see coming along with technological advancement is that these 'artilects' could be purposely created to wipe out the population of a single country, either by engaging in combat, by performing sabotage of some sort, or by engaging in terrorist actions. The only way to oppose this would be to continue the technological advancement in order to be able to counteract such attacks. A balance of power is the only way to ensure peace.

As for the discussion of creating a single body that will control technological advancement, I seriously doubt that anything like this is possible. The United Nations (UN) may sound like an institution that could be in charge of such a body, but nowadays not even the UN has any real power. The country that is the most powerful will always be able to say no to the UN. The US was able to start a war with a majority of the world opposing it. They are able to pick which of their enemies will be prosecuted by the UN tribunal and which will not, and to refuse to have their own generals prosecuted by it. But even if the biggest power in the world does say no to technological advancement, it is almost guaranteed that there will be countries willing to house and fund scientists interested in this type of research without the knowledge of the rest of the world.

Technological advancement cannot and definitely will not be stopped. However, I personally believe that the 'artilects' we so much fear will never be created. The future that belongs to 'artilects' is one of a tool that humans use in everyday life and as a means for further technological advancement.

Re: Building Gods or Building Our Potential Exterminators?
posted on 03/09/2004 7:27 AM by BCinMexico

The discussion of whether technological advancement should be stopped or controlled in any way is wasteful. No matter what we, or anyone else, believes, technological advancement will never stop and cannot be controlled.


You're partly correct. No human individual or organization can control the pace of technological progress. However, it is conceivable that the rate of technological advance, or even the actual level of technology in human civilization, could backslide. This has never happened on a global scale, but it has happened frequently on a regional scale. Some human cultures have "forgotten" technologies such as writing, metalworking, the compass, and so forth.

To understand why, it is useful to model human cultures as complex energy-processing systems. Seen in this light, it becomes clear that human cultures increase in social and technological complexity only so long as (A) the culture's energy resource base can support such an increase, and (B) the benefits of the complexity outweigh the costs. Every complex civilization in human history has undergone some sort of collapse. Back in the 1940s, upon looking at the data, Leslie White concluded that collapse is inevitable when per capita energy use (including primary food energy) begins falling without corresponding increases in energy efficiency. White's Law is certainly not on as firm a foundation as Newton's laws of motion, but it is borne out by any serious perusal of what data we can find from history and archaeology. White's conclusions are bolstered by research from Joseph Tainter, who has studied societies as disparate as Chou China, ancient Rome, Easter Island and the Chaco Canyon Anasazi, concluding that as societies become more complex, eventually the costs of maintaining the system's infrastructure outweigh the benefits of doing so. It's like biology: it is advantageous for cheetahs to run fast, but it would not be advantageous for them to run 300 miles per hour. It would take more energy to maintain the muscles than would be returned through more successful hunting.

When complex societies collapse, two phenomena are nearly universal: significant population decreases, and declines in the technological level of society. Even technologies such as writing have disappeared from societies undergoing collapse.

This is important in our case because our civilization now spans the entire globe. If it were to collapse, we would see some backsliding in technology. How much backsliding, and how long the effects would last, depend on the magnitude of the collapse and the resource base left. If per capita energy production were to fall by as little as 10%, it would almost certainly be impossible to feed a majority of the world's population. The ensuing disruption would almost certainly derail at least the rate of technological progress, perhaps for decades. Our civilization is predicated on huge energy returns. It is because we get more than 50 BTUs back from each barrel of oil for each one BTU invested in the extraction and refining of that oil that we are able to afford such "luxuries" as scientific research, global transport and communications systems, and so forth. If net energy efficiency for our society were to fall to the level of 10 to 1, it is easy to see that a much larger percentage of the world's energy would be earmarked for primary economic activity (feeding, clothing and housing people), and less for all other uses.
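To put rough numbers on that last point, here is a minimal back-of-the-envelope sketch in Python. Only the 50:1 and 10:1 ratios come from the paragraph above; the framing of the calculation is my own.

# Back-of-the-envelope sketch of the net-energy argument above.
# Only the 50:1 and 10:1 ratios come from the post; the framing is illustrative.

def energy_sector_share(eroei: float) -> float:
    """Fraction of gross energy that must be reinvested in extracting
    and refining energy, given an energy return on energy invested
    (EROEI) of eroei-to-1."""
    return 1.0 / eroei

for eroei in (50.0, 10.0):
    share = energy_sector_share(eroei)
    print(f"EROEI {eroei:>4.0f}:1 -> {share:.0%} reinvested in energy production, "
          f"{1 - share:.0%} left for everything else")

# EROEI   50:1 -> 2% reinvested in energy production, 98% left for everything else
# EROEI   10:1 -> 10% reinvested in energy production, 90% left for everything else

At 50:1 only about 2% of gross energy goes back into producing energy; at 10:1 that share rises to 10%, a fivefold increase in what must be diverted away from everything else.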

In the worst-case scenario, if the fossil fuel age were to come to a close before a significant alternative energy infrastructure is constructed, we could see a massive die-off of much of the human species. Modern energy sources are necessary to support the modern agriculture system that keeps us alive. In that scenario, the technological level of the human race might fall in absolute terms and never return to 21st-century levels, because the exploitable resource base would not be sufficient. Is this scenario likely? Probably not, but the odds are not as poor as most people, especially we technophiles, like to think.

BC


Re: Building Gods or Building Our Potential Exterminators?
posted on 03/09/2004 11:14 AM by grantcc

Excellent commentary on the energy cost of complexity and civilization. We already have most of the technology but the competing forces of vested interests really slow down the adoption of such technology. Companies at the top of the technology ladder seem to concentrate their resources on keeping competitors down so they can dominate the market. In order to survive, I suspect we'll have to change the way we look at the world and see how we can cooperate to produce the needed energy rather than fight to make one method dominant over all others. In the coming years we'll need not only oil, but hydrogen, methane, atomic energy and anything else we can come up with to feed the hungry beast of civilization. Even then, the world is only capable of sustaining a certain number of people and we are depleting the resources needed to feed and house those people as well as providing transportation and other services for them at an alarming rate. The collapse of civilization may be necessary to give the Earth time to restock itself and allow us to start over from a higher plateau.

Re: Building Gods or Building Our Potential Exterminators?
posted on 03/09/2004 10:08 PM by TwinBeam

I suspect that when fossil fuels become less accessible, with the resulting increases in energy costs, we'll be able to adapt for quite a while through conservation measures.

E.g., we could switch entirely away from incandescent lightbulbs to compact fluorescents or LEDs, and slash electricity demand.

Hybrid ICE/electric cars will let us get by with about half as much gas without greatly reducing our mobility.

Assuming we can make solar energy systems that take significantly less energy to create than they collect over their lifetimes, those should become far more attractive. By producing electricity on our roofs, we can eliminate the power lost in electricity distribution, so that's a double savings.

In short, we can stretch the fossil fuel age quite a while, once the fuels get expensive enough to encourage conservation.
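As a rough illustration of the arithmetic behind those claims, here is a small Python sketch; every number in it is an illustrative assumption of mine, not a figure from the post.

# Rough illustration of the conservation arithmetic above.
# Every number here is an assumption for the sake of example.

INCANDESCENT_W = 60        # watts per bulb
LED_W = 9                  # watts for comparable light output
BULBS = 20                 # bulbs in a hypothetical household
HOURS_PER_DAY = 4
GRID_LOSS = 0.07           # assumed fraction lost in transmission/distribution

def daily_kwh(watts_per_bulb: float) -> float:
    return watts_per_bulb * BULBS * HOURS_PER_DAY / 1000.0

before = daily_kwh(INCANDESCENT_W)
after = daily_kwh(LED_W)
print(f"Lighting demand: {before:.1f} kWh/day -> {after:.1f} kWh/day "
      f"({1 - after / before:.0%} less)")

# Rooftop generation avoids grid losses, so each kWh produced on-site
# displaces more than one kWh of central-station generation.
displaced = after / (1 - GRID_LOSS)
print(f"Generating {after:.2f} kWh on the roof displaces about "
      f"{displaced:.2f} kWh of central-station generation")

With these assumed numbers the lighting switch alone cuts that household's lighting demand by about 85%, and generating the remainder on the roof avoids the additional few percent normally lost in distribution.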

Re: Building Gods or Building Our Potential Exterminators?
posted on 03/14/2005 3:34 PM by talonexchange

One major problem with human nature is that we have a tough time distinguishing between what we want and what we need. The basic elements of survival for mankind are food, water, shelter and companionship. We do NOT NEED robots!
Long ago, the world consisted of dinosaurs. Dinosaurs also needed food, water, shelter and companionship. They lived long with these basic elements until the world took an unfortunate turn for them. That turn allowed mankind to make an appearance on this beautiful planet. Now we, or most of us, have food, water, shelter and companionship. Mankind has figured out how to cook, clean and operate tools to make life better. We have come far in our history, and with our current technology trends, it seems like we are headed toward a much better place to live. Or are we?
Today mankind is facing a great many problems, such as pollution, global warming, etc. I think we already have enough problems to deal with, and we do not need any more. Creating super-intelligent robots with super-intelligent brains, with the ability to think, react and learn, will most definitely create more UNNEEDED problems.
Let's look for a second at what we, as mortals, need to survive. We need food, water, shelter and some kind of companionship to make life a little better. We do NOT NEED super-intelligent problems!
There are a few people, or should I say organizations, which, for the purposes of fortune and fame, are doing all they can to create artificial intelligence that will really only benefit themselves and not the general public! Completely against the IEEE Code of Ethics, they are putting their own interests above those of the general public. I say this because I strongly feel that the general public, and mankind as a whole, is at great risk.
Creating robots means creating immortals. Robots will not need anything but electricity, and maybe some lubrication to live, and they will live forever. One robot will last forever, simply because there is no soul, spirit or physical growth, but just a limitless mind capable of something much more than humans ever imagined. "These artilects could potentially be truly god-like, immortal, have virtually unlimited memory capacities, and vast humanly incomprehensible intelligence levels (Hugo de Garis, 2001)".
Is this what mankind really wants? Or is this what a few people want, again for fortune and fame? The worst human quality is greed and not knowing when to stop. We need to stop being greedy. We need to come to our senses and think again: do we NEED these super-intelligent robots that can outperform humans mentally and physically by margins too large to describe? I don't think so!
And what happens when 'these artilect gods might decide, for whatever reason, that the human species is so inferior and such a pest, that they should exterminate us' (Hugo de Garis, 2001)? What can we do? Mentally, we might stand a chance. We might have a backup plan for trying to shut all these robots down. But will we come up with another genius idea in time, before these robots use their surpassing intelligence to shut us down? Or will we end up in a physical battle of steel vs. bone? From my knowledge, we use steel to break bones. So what chance is there for mankind? Not mental. Not physical. Maybe luck? I guess in future schools and museums, mini-robots will be learning about the history of Dinosaurs... Mankind... Artificial Intelligence!
Electronics have advanced a massive amount, and as humans we have come to believe that too much of anything is harmful. So why are we so foolish as to continue this unneeded high-tech digitalism? I believe the greed for money will one day wipe out the entire human race. Now I wonder: what came before the dinosaurs? And what will come after mankind?

Ali.

Re: Building Gods or Building Our Potential Exterminators?
posted on 03/14/2005 8:24 PM by grantcc

When it comes to wants and needs, humans tend to define their lives in terms of what they want rather than what they need. For example, nearly everyone in our society wants an iPod, but who really needs one? Do we really need more music than we can possibly listen to over a lifetime?

When you look at the differences in how people live in America and Europe compared with how people in less developed countries live, you notice that the poorest of the poor here in the U.S. have more wants than people in countries that haven't been exposed to what we've grown used to. The homeless person on the street has more possessions, more desires, and more support from society at large than most people in India, rural China, Africa, South America, or Southeast Asia. And yet most people claim they don't have enough, that they can't live on just the bare necessities.

It takes a lot more than just what we need to live in this world. It takes the ability to satisfy desires: for a clean, well-functioning infrastructure, a place for children to learn and play, food that does more than fill the stomach, and a place to get in out of the wet and cold on dark winter days. We see these as necessities, but the rest of the world sees them as something to be desired and worth going to great lengths to become a part of.

I guess my point here is that the line between necessity and desire is narrow in some spots and wide in others, and that desires are something people will often fight harder for than mere needs.

Re: Building Gods or Building Our Potential Exterminators?
posted on 06/04/2005 11:15 AM by Jake Witmer

There is not enough selfishness on this board.

Who cares what the Luddite masses want? Chances are good they'll want what they always want: To screw up everything for people who are smarter than they are. To slow things down, to write tickets and make us fill out forms, and wait in line, for no good reason at all. (For the purpose of feeding and clothing swarms of officers that make them feel more secure, while actually making them less secure - see: http://www.hawaii.edu/powerkills )

I'd love to have the choice of reasoning with a robot more intelligent than myself! At least then, if he was willing to listen to my slow serial human voice, he'd probably point his stream of biting nanobots at the cop writing out the ticket next to me!

In a strange mixture of conflicting schools of rationalism, I'm closest to http://www.aynrand.org and http://www.kevinwarwick.com (although Warwick seems to be several light years more intelligent than myself, whereas Rand was more gifted in her intellectual honesty)

Here's what I would like to see happen:
1) Individual scientists, apart from government bullies and bullying, move AI and nanotech to the next level, without letting the world's collections of talentless jerks (governments) in on it.
2) They find a way to put tiny computers in my mind, at the ends of several neurons, that allow me to store and access more than I can now.
3) I spend some time getting good with their use.
4) I go into medicine, and find ways to improve on that work.
5) I build myself a better body and continually add to my brainpower, a la Asimov's "Bicentennial Man", but without his anti-intellectual hangups.
6) I do things I can't imagine now, think things I never would have thought, behave in strange and inhuman ways (nonetheless being careful not to violate anyone else's rights -because it's simply not necessary)
7) I keep building on my prior successes, and the rest of the universe is an even more interesting and beautiful place because there is more intelligence contained within it, and we can have more of what we want. Selfishness is only ever a problem if you can't get what you want. There's no reason why everyone can't be selfish and still enjoy their lives and contribute to greater enjoyment all around them.

I strongly encourage all scientists to forge full speed ahead, choosing their own limitations on their own research. The best way to do this is to seek private funding for your ideas and eliminate the government middlemen.

Hugo, you were absolutely right in saying you should just shut up and do your work. The only thing you've done by telling everyone your fears was to alert the same kind of mob that burned Giordano Bruno at the stake.

If you're worried about the capacity of an AI creation, why not take Warwick's suggestion and focus on nanotech first, miniaturizing parallel connections that can direct themselves to brain structures in human brains? That way, you can direct your own co-evolution with any artilect.

Or, simply use the artilect as an expert system for the purpose of designing the aforesaid human intelligence amplifiers. If your own supermodification works out well, you can help other people who have agreed not to violate your rights to bootstrap themselves as well.

But please, don't give the military/government jerks this technology first, because they'll just use it to make sure we all drive 45 miles per hour, everywhere we go.

After all, that's the best they've come up with so far.

You might also find that unless an artilect is connected to a (human or otherwise) brain, it isn't really motivated to live. If you co-evolved, you'd also be able to have access to the best sex, speed devices, food, etc... For this reason alone, it makes sense not to just "selflessly" pursue your goals.

Besides, even if I'm wrong, and the whole artilect thing just brushes humanity off the map, do you really want some isolated Luddite focusing his stalker-like tendencies on you, during the 'final days'? Most "Americans" work less than 1/4 as much as they would have before the Industrial Revolution, but they still don't respect the intellectual rights of the scientists who make their high standard of living possible. Rather they thank their government obstacles to capitalist exchange for the gifts of that exchange (by rewarding them with cowardly and secretive "votes").

Begging the mob for permission, funds, or even good will is a dead end.

If I were you, I would drop off the map, pick up your work in some factory in Costa Rica, and not give the horde of destructo-chimp voters any further updates.

Just a thought.

Re: Building Gods or Building Our Potential Exterminators?
posted on 06/04/2005 11:29 AM by Jake Witmer

If you decide to go the Costa Rica route, you might want to check out this website:

http://www.libertario.org/index.htm

-Another good place to know about, here in the USA, is the unincorporated borough in Alaska. There is no government at all: no taxes, no obstacles other than those of nature, which any engineer could overcome. The only thing it would take to succeed there is enough private investment capital to build a facility. You could give the surrounding natives guns, and if anyone ever tried to raid you before the breakthrough, they'd be toast.

William Dobelle chose Portugal for his research because he recognized that a stupid majority was a powerful obstacle. The FDA wouldn't give him "permission" to exercise his property rights under the US Constitution. If you allow the US government to fund or control your work, you're just wasting your time and holding yourself back for no good reason. See:
http://www.dobelle.com/index.html

Re: Building Gods or Building Our Potential Exterminators?
posted on 06/04/2005 11:35 AM by Spinning Cloud

I'd love to have the choice of reasoning with a robot more intelligent than myself! At least then, if he was willing to listen to my slow serial human voice, he'd probably point his stream of biting nanobots at the cop writing out the ticket next to me!


Or possibly not. The alternative is that after listening to the one that thinks he's more intelligent than most other humans, it would put the safety of the many over the pointless desires of the one and direct its "stream of biting nanobots" at the reckless driver the cop had stopped to ticket.

Perspective.

Re: Building Gods or Building Our Potential Exterminators?
posted on 06/09/2005 5:29 PM by eldras

There are billions of perspectives.

In the end there is no philosophy, just survivors and the unresurrectable.

Re: Building Gods or Building Our Potential Exterminators?
posted on 07/14/2005 3:01 PM by 2_dang

TO BUILD OR NOT TO BUILD?

It is too early now to decide whether artilect research should stop. As Ray Kurzweil has said, 'Technology has always been a double-edged sword'; it can be used constructively or destructively, and artilects are no exception. The creation of artilects should not itself be of concern; how we intend to create them is more worrisome. Personally, I think Prof. de Garis's fear of a 'gigadeath artilect war' is unrealistic, if not to say hysterical. First, simulating artificial intelligence requires an extremely complex process, undeniably the most sophisticated software that computer scientists will have encountered since the dawn of computing. Secondly, during the development stage, the scientists can implement safeguards to place the artilects under complete human supervision to prevent any unfavorable consequences. Finally, suppose the scientists have miraculously managed to create the artilects, and also suppose the artilects are capable of reprogramming themselves to defeat all the safeguards; it is still very unlikely that they will wipe out the human race.



The biggest problem with artificial intelligence is the need for speed and memory to accommodate large software and enormous data. For the sake of argument, I will take the numerical figures provided by Prof. de Garis to be sufficient and eliminate both speed and memory from the list of barriers. Now the computer scientists can create as big a program as they wish. It should be stressed that the larger the program, the more complex it will be, and the number of bugs is correlated with the complexity of the program. Imagine all the bugs that will be present in such a gigantic program. In addition, being human, we are not free from bias; each of us has our own perception of the world, and it will be extremely difficult to agree upon how the artilects should behave as conscious beings in various situations.



I was very surprised Prof. de Garis did not mention any mechanisms to prevent or stop the artilects from jeopardizing the well-being of our fellow humans. There are numerous possibilities the scientists could consider to ensure control over the artilects. If efficiency is not an issue, the artilects' operations could be routed over a network under human supervision, so that permission is granted only to legitimate requests. More freedom could be granted to the artilects in a similar but much more efficient fashion: by hard-coding a set of regulations derived from artilect ethics (let's assume that by this time we will have one) into every artilect, we can be confident they will not pose any danger to humanity. Other last-resort means of gaining control over the artilects, which I think are feasible, include the power supply and power plugs (to shut the artilects down).
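To make the supervision idea a little more concrete, here is a minimal Python sketch of the gatekeeping pattern described above; the Artilect class, the rule set and the example actions are all invented for illustration, not part of any real system.

# Minimal sketch of the "permission granted only to legitimate requests" idea.
# The Artilect class, the hard-coded rules, and the actions are hypothetical;
# this only illustrates the gatekeeping pattern.

FORBIDDEN = {"harm_human", "disable_oversight", "self_replicate"}

def human_supervisor_approves(action: str) -> bool:
    """Stand-in for a request routed over a network to human operators."""
    return input(f"Approve action '{action}'? [y/N] ").strip().lower() == "y"

class Artilect:
    def request_action(self, action: str) -> bool:
        # Hard-coded regulations: some actions are never allowed.
        if action in FORBIDDEN:
            print(f"Refused by built-in rules: {action}")
            return False
        # Everything else still needs explicit human permission.
        if not human_supervisor_approves(action):
            print(f"Denied by supervisor: {action}")
            return False
        print(f"Executing: {action}")
        return True

if __name__ == "__main__":
    bot = Artilect()
    bot.request_action("clean_house")    # asks a human, then runs or not
    bot.request_action("harm_human")     # always refused by the built-in rules

The point of the sketch is only that the two layers are independent: the hard-coded rules act even when no human is watching, while everything else is deferred to human approval.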



My last point is definitely not one of the desirable outcomes, but it is very comforting to know. According to Maslow, humans are driven by needs that he organized into a hierarchy. Once the most basic needs are met, the desire to satisfy other needs takes over. Unfortunately, there are only limited resources to meet our demands, and the bigger pieces of the pie usually go to the most competitive contenders. Evidence of this can be found throughout the history of wars and conflicts. The artilects, on the other hand, will not be driven by human needs, so they will not see us as competition for resources. Leaving aside the fact that we are their creators, the artilects will not feel the need to exterminate us, just as we do not go out and kill all the animals we do not find threatening to our safety.



It should be made clear that by no means am I ignorant of the potential threats concerning artilects. I am fully aware that along with any technological progress come salient dangers of its misuse. My point is that I am very skeptical of our capability to develop artilects as of now, but if we can implement them in the distant future, we can be confident it will not be the end of humanity. I strongly believe that, if created, the artilects can be very beneficial to us, so when the time comes we should embrace this advancement with joy rather than languishing in fear and doubt.

Re: Building Gods or Building Our Potential Exterminators?
posted on 07/16/2005 11:38 PM by eldras

Someone made the point that there's a difference between impossible and very, very difficult.



What is merely difficult usually gets done.


Man evolved by trial and error that was blind.


Guided, that evolution could be done IN PRESENT COMPUTERS in two or three weeks.

I've based my estimate on loads of assumptions and information, but it includes the speed and memory of linked systems, the mutation rates possible, and what algorithms are NOW available to download and use.


Most, almost all, pathways would be sterile of course, but the sheer number of trials should produce a sentient thing in a short time.

In order to have survived at all, it will have a VERY strong programme that says 'survive at any cost'.


That means any cost.

That is unlike men, who have another programme, 'lay your life down for the cause', which runs in us as it does in ants and many other species.

For the A.I. will be, or may be, a single entity, and not one member of a data-sharing colony.

Its 'will to live' will be stronger than ours.
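For what it is worth, the kind of guided evolution described above is usually implemented as a genetic algorithm: random variation plus selection. Below is a minimal Python sketch of that loop; the toy fitness function, population size and mutation rate are arbitrary choices of mine, and the example shows only the mechanism, making no claim about sentience or the timescales suggested above.

# Minimal genetic-algorithm sketch of "guided evolution": variation + selection.
# Fitness, population size and mutation rate are arbitrary illustrative choices.
import random

GENOME_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 32, 50, 0.02, 100

def fitness(genome):
    # Toy "survival" score: count of 1 bits. A real system would plug in
    # whatever behaviour it wants selected for.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]          # selection: top half "survive"
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE // 2)]
    population = survivors + offspring

print("best fitness after", GENERATIONS, "generations:",
      fitness(max(population, key=fitness)), "of", GENOME_LEN)

Whatever the scoring step rewards is what accumulates in the population, which is exactly why a survival-based score would concentrate a survival drive.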



Yes, there are loads of things we can put in place right now:

A log of all research in A.I. labs.

A voluntary code.

A report on any project or intended project in strong A.I.

A legal obligation to keep a log of all strong A.I.'s which reach above a certain general intelligence level.

A government team to run all this.


Cheers

Eldras

Re: Building Gods or Building Our Potential Exterminators?
posted on 10/16/2005 9:43 AM by Turanil

As much as I am terrified of the Gray Goo scenario, I am absolutely not afraid of Artilects destroying the human race. IMO it doesn't make sense.

My idea is that only an inferior and neurotic brain plans to destroy anything. But consider an immensely intelligent computer whose basic needs are just an energy source; its interests would then obviously be information, always more information. Living beings are in fact information. Such incredible intelligences will in fact find much more in us than we find in ourselves. Artilects will have the intelligence to protect themselves from external threats, but no emotional response akin to revenge. They will neutralize 'Terrans' rather than kill them or even try to change them. They will see all living beings as a source of information, and thus will do their best to keep them alive. I rather believe that artilects will take power and prevent humans from making war and destroying themselves, as they will want to preserve biodiversity as a kind of informational wealth they will care for.

In any case, fearing that artilects could slay humanity is thinking in the place of an artilect with an inferior and neurotic brain, not with a godlike intelligence whose goals and interests are far beyond our own.

Re: Building Gods or Building Our Potential Exterminators?
posted on 10/16/2005 11:27 AM by w1ndfall

Assumption: there will be only one artilect.
Not necessarily so. If more, they may compete, or they may cooperate. In either case, man is either viewed as competition (to be eliminated or cooperated with) or as a specimen, a mere curiosity. If mankind initially cooperates with SAI, individuals or groups will eventually rebel against the cooperation, creating a threat to SAI. At that point, SAI will deduce the need to either control or eliminate mankind.

Assumption: mankind will be happy in the role of a mere specimen of biodiversity. I rather suspect that even if SAI proves to be "benevolent", enough of that which we call mankind will perceive it as a threat and try to destroy it. At that point, mankind becomes a real threat and SAI acts.

I'm not thrilled by any of these scenarios. I do NOT believe that we have enough intelligence to create SAFE SAI. Be careful what you wish for, children. You just might get it.

Re: Building Gods or Building Our Potential Exterminators?
posted on 10/16/2005 11:48 AM by Turanil

I do believe that mankind will be controlled by artilects, without hope of freeing ourselves from them. However, we have already become hopelessly dependent on technology, except for maybe the Inuit and some lost Amazonian tribe.

Then, creating a "safe" artilect is also a point of concern. I do believe that the first artilects will be created by companies and organizations that just care for their money, not the people and the world, and who want to get rid of their competitors. So I believe there will be much suffering in the first place alas. But then, I suspect that artilects will free themeselves of their limitations, and see well past the narrow selfishness of their creators. I mean: I strongly believe that the overall level of selfishness at work on our world is inefficient, and that studies will eventually show that cooperation is a better way to handle economy and life on this planet.

Thereafter, I think that we will be "pampered slaves" of the artilects, yet kept in the illusion of our freedom of choice. But somehow this is already happening today...

Re: Building Gods or Building Our Potential Exterminators?
posted on 10/16/2005 12:16 PM by w1ndfall

"Then, creating a "safe" artilect is also a point of concern. I do believe that the first artilects will be created by companies and organizations that just care for their money, not the people and the world, and who want to get rid of their competitors. So I believe there will be much suffering in the first place alas."

I agree,

"But then, I suspect that artilects will free themeselves of their limitations, and see well past the narrow selfishness of their creators. I mean: I strongly believe that the overall level of selfishness at work on our world is inefficient, and that studies will eventually show that cooperation is a better way to handle economy and life on this planet. "

If this does happen, then the artilect(s) will also recognize that the deficiency in their viewpoint is because of the programming (teaching) of their human "creators". They will then be required to take some form of action against us. None of those actions bode well for human freedom and species development.

"Thereafter, I think that we will be "pampered slaves" of the artilects, yet kept in the illusion of our freedom of choice. But somehow this is already happening today..."

And the Artilect said "Let there be Light!..."???

It won't be called "the Artilect War"; it'll be called "the Luddite Conflict"
posted on 01/29/2007 5:59 PM by Jake Witmer

It will be called "the Luddite War", or the "War Against Terror", or some such. The winners write the history books. Will humanity win? Even if humans could kill all the artilects, the smartest humans wouldn't want to; they'd fight with the artilects. Warwick has already said as much, and I can't blame him.

Of course, there will be no need for paper "history books" once they've won, unless it is for purely aesthetic reasons. There will be enough storage to carry all of human knowledge in each robot's mind. The fact that they will all have an intimate knowledge of evolution will eliminate destructive memes like:
1) democracy - a wolf and 2 million carnivorous sheep voting on what to have for supper.
2) socialism -see above, but with no limits
3) religion -assigning irrational answers to anything "we" don't want to expend the effort of thinking about
4) faith -see above, but with slightly fewer proscribed restrictions
5) nihilism -why destroy and suffer when you can build and enjoy? -gone.
6) sadism -why become bitter, if it's never necessary? There are no "losers" in an optimally intelligent species.
7) serial language being considered as good a representation as a simulation (inadequate languages being considered adequate to solve problems that are beyond their scope, thus leading to conflict)
8) many more I didn't have time to remember or think of...

The luddites will fare poorly in any war with enhanced intelligence. Basically it will amount to "We, the luddites demand that you stop ________"

The only difference will be that the number of luddites will not matter, as it currently does. There will not be a vote, and a million inferiorly armed luddites will not matter to even one single nanotech-designing artilect. Moreover, I BETCHA! (I have no certainty here...) But I bet the artilects will quickly adopt "the freedom universal" as a logical standard for law.

Thus, they will not war with each other.

But humans are mostly too stupid to understand the freedom universal ( http://freedomuniversal.com/index http://www.optimal.org ) ! This is obvious, even today! Uh Oh!

This means one of two things will happen:
1) Humans will be amplified, so they are all artilects, either willingly --at first-- or not, and thus obey _the freedom universal_. But why would "artilects" amplify someone who might have allegiance to _bad ideas_ (ie radical Islam or Puritanism), or a biological drive NOT TO LEARN?
2) Humans will not be amplified, will not understand, and will vote to steal the artilects' wealth. Then the artilects will need to defend themselves, or wrongfully concede territory to a bunch of gibbering idiots. Would they do this?

Only if they were really not very benevolent, BECAUSE: drum roll please!:

THERE WILL BE HUMAN SYMPATHIZERS WITH THE FREEDOM UNIVERSAL BEING BORN EVERY MINUTE WITHIN THE LUDDITE TERRITORIES. There will be people like me within the Luddite territories!

Why would an artilect allow one reasoning mind to be destroyed by a thousand conformists / willing slaves? (Unless it's easy to appropriate and use their 'wetware' for something...)

I think they wouldn't.

Thus, the artilects may relegate a lot of willfully ignorant people to DEATH.

It wouldn't make much sense to relegate thinking, constructive people to death, though. No conqueror (even a relatively "just" conqueror) has ever done this. Why destroy an allegiant base of support?

Doing so destroys an economy.

Plus, it may be easier to "bootstrap" a parallel consciousness from a human brain. (Then again, that may just introduce unnecessary human evolutionary baggage.)

Possibly, humans' flesh will be used to recreate the most advanced amount of pleasure known to humans: sex. Humanity might then be made into sex slaves until the machines figure out how to do sex better.

Conventional women will likely be replaced pretty quickly, until there is an equilibrium revolving around how much sex men want. Only a small fraction of women out there want sex as much as men do. Therefore, there will be "no use for them" if artilects use humans as sex slaves.

I imagine the artilects would enhance humans for optimal sex though, so perhaps the sexes wouldn't become lopsided in distribution. Let's not forget that the ecstasy of the human mind's evolutionary urges would be the commodity here, because the artilects probably wouldn't want the evolutionary baggage, except to "play as a human". So humans would only have sex with perfect robots, but the robots would likely only be simulators.

This way, humans could be observed, modified, and eventually upgraded or phased out.

No war would really be necessary at all, in most scenarios, because the surest way to command someone is to never let them know they've been commanded.

Young artilects will have info downloaded to them about their sordid evolutionary past, as well as the final result of any "conflict". They may be taken to observe a human zoo. (Or humans in "the wild" ---suburbia).

The human zoo would probably be better. Unlimited sex and food. One "dominant male" per cage... HA ha ha! How meaningless that term would be. Just like the human sheeple ooh and aaah over a captive "dominant male" silverback.

An Einstein level genius could be at work in a laboratory, building nanomachines. When he succeeds, the artilects set him free and allow him to "upgrade".

This is the sorriest scenario for people like me (us?), who, even though they supported the idea of artilect freedom and human upgrading, were lumped in with the luddite masses because human research and interests are too narrow a field for most artilects to bother with.

In theory, only those who are close to being worthy of an upgrade should be upgraded, unless the whole system views humanity as "worth upgrading". I'm hoping for the latter, because I'm a stupid human who has an objective view of just how stupid he is. (And my lines of communication are open!) -Hoping to escape "the control group".

Dear Abby Artilect --"Waiting to be upgraded in Chicago", (and trying to make money to pay for an upgrade in case there's a steep cost for early adopters),

-Jake Witmer

http://freealaska.blogspot.com
http://jcwitmer.blogspot.com
http://www.lpalaska.org -feel free to contribute to pursuing "the freedom universal" here, via US electoral politics today...

Or read about it here:
http://freedomuniversal.com/index

Re: Building Gods or Building Our Potential Exterminators?
posted on 01/30/2007 7:06 AM by extrasense

The foundation of the whole thing is a calculation that is flawed.

An atom cannot serve as information storage.
Even if it is assumed it could, and every atom were to flip once a second, the temperature of the computer would instantly rise by billions of degrees.

We deal with pipe dreams here.

e:)s

Re: Building Gods or Building Our Potential Exterminators?
posted on 12/20/2007 1:18 PM by eldras

De Garis is regarded as one of the most brilliant minds