Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0476.html

Singularity Chat with Vernor Vinge and Ray Kurzweil
by Vernor Vinge and Ray Kurzweil

Vernor Vinge (screen name "vv") and Ray Kurzweil (screen name "RayKurzweil") recently discussed the Singularity -- their idea that machine intelligence will soon exceed human intelligence -- in an online chat room co-produced by Analog Science Fiction and Fact and Asimov's Science Fiction magazine on SCIFI.COM. Vinge, a noted science fiction writer, is the author of the seminal paper "The Technological Singularity." Kurzweil's book The Singularity Is Near is due out in early 2003 and is previewed in "The Law of Accelerating Returns." (Note: typos corrected and comments aggregated for readability.)


Originally posted June 11, 2002 on SCIFI.COM. Published June 13, 2002 on KurzweilAI.net.

ChatMod: Hi everyone, thanks for joining us here. I'm Patrizia DiLucchio for SCIFI. Tonight we're pleased to welcome science fiction writer Vernor Vinge, and author, innovator, and inventor Ray Kurzweil -- the founder of Kurzweil Technologies. Tonight's topic is Singularity. The term "singularity" refers to the geometric rate of the growth of technology and the idea that this growth will lead to a superhuman machine that will far exceed the human intellect.

ChatMod: This evening's chat is co-produced by Analog Science Fiction and Fact and Asimov's Science Fiction (www.asimovs.com), the leading publications in cutting edge science fiction. Our host tonight is Asimov's editor Gardner Dozois.

ChatMod: Brief word about the drill. This is a moderated chat -- please send your questions for our guest to ChatMod, as private messages. (To send a private message, either double-click on ChatMod or type "/msg ChatMod" on the command line - only without the quotes.)...Then hit Enter (or Return on a Mac.)

ChatMod: Gardner, I will leave things to you :)

Gardner: So, I think that Vernor Vinge needs no introduction to this audience. Mr. Kurzweil, would you care to introduce yourself?

RayKurzweil: I consider myself an inventor, entrepreneur, and author.

Gardner: What have you invented? What's your latest book?

RayKurzweil: My inventions are in the area of pattern recognition, which is part of AI, and the part that ultimately will play a crucial role because the vast majority of human intelligence is based on recognizing patterns. I've worked in the areas of optical character recognition, music synthesis, speech synthesis and recognition. I'm now working on an automated stock market fund based on pattern recognition.

As for books, there is one coming out next week: "Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI," from the Discovery Institute Press.
My next trade book will be The Singularity Is Near (Viking), expected early next year (not the Singularity, but the book, that is).

Gardner: We recognize them even when they don't exist, in fact. <g> Like the Face On Mars.

RayKurzweil: The face on Mars shows our power of anthropomorphization.

Gardner: So how far ARE we from the Singularity, then? Guesses?

RayKurzweil: I think it would first make sense to discuss what the Singularity is. The definition offered at the beginning of this chat is one view, but there are others.

Gardner: Go for it.

RayKurzweil: I think that once a nonbiological intelligence (i.e., a machine) reaches human intelligence in its diverse dimensions, it will necessarily soar past it because (i) computational and communication power will continue to grow exponentially, (ii) machines can already master information with far greater capacity and accuracy, and (iii) most importantly, machines can share their knowledge. We don't have quick downloading ports on our neurotransmitter concentration patterns or interneuronal connection patterns. Machines will.

We have hundreds of examples of "narrow AI" today, and I believe we'll have "strong AI" (capable of passing the Turing test) and thereby soaring past human intelligence for the reasons I stated above by 2029. But that's not the Singularity. This is "merely" the means by which technology will continue to grow exponentially in its power.

Gardner: Vernor? That fit your ideas?

vv: I agree that there are lots of different takes.

RayKurzweil: If we can combine strong AI, nanotechnology and other exponential trends, technology will appear to tear the fabric of human understanding by around the mid 2040s by my estimation. However, the event horizon of the Singularity can be compared to the concept of Singularity in physics. As one gets near a black hole, what appears to be an event horizon from outside the black hole appears differently from inside. The same will be true of this historical Singularity.

Once we get there, if one is not crushed by it (which will require merging with the technology), then it will not appear to be a rip in the fabric; one will be able to keep up with it.

Gardner: What will it look like from the inside?

vv: That depends on who the observer is. Hans Moravec once pointed out that if "you" are riding the curve of intellect improvement, then no singularity is visible. But if "you" have only human intellect, then the transition could be pretty unknowable. In that latter case, all we have are analogies.

RayKurzweil: We're thinking alike here. We can only answer that by analogy or metaphor. By definition, we cannot describe processes whose intelligence exceeds our own. But we can compare lesser animals to humans, and then make the analogy to our intellectual descendants.

vv: Yes, :-)

Gardner: Seems unlikely to me that EVERYONE will have an equal capacity for keeping up with it. There are people today who have trouble keeping up even with the 20th Century, like the Amish.

RayKurzweil: The Amish seem to fit in well. I could think of other examples of people who would like to turn the clock back.

Gardner: Many. So won't the same be true during the Singularity?

RayKurzweil: But in terms of opportunity, this is the have/have-not issue. Keep in mind that because of what I call the "law of accelerating returns," technology starts out unaffordable, becomes merely expensive, then inexpensive, then free.

vv: True, but the better analogy is across the entire kingdom of life.

Gardner: How do you mean that, Vernor?

RayKurzweil: We can imagine some bacteria discussing the pros and cons of evolving into more advanced species like humans. There might be some strong arguments against doing so.

RayKurzweil: If bacteria could talk, of course.

Gardner: From the bacteria's point of view, they might be right. <g>

vv: When dealing with "superhumans" it is not the same thing as comparing -- say -- our tech civ with a pretech human civ. The analogies should be with the animal kingdom and even more perhaps with things even further away and more primitive.

RayKurzweil: Bacteria are still doing pretty well, and comprise a larger portion of the biomass than humans. Ants are doing very well also. I agree with your last comment, Vernor.

vv: Yes, there could well be a place for the normal humans -- certainly as a safety reservoir in case of tech disasters.

Gardner: So do we end up with civilization split up into several different groups, then, at different levels of evolution?

RayKurzweil: I think that the super intelligences will appear to unenhanced humans (MOSHs - Mostly Original Substrate Humans) to be their transcendent servants. They will be able to meet all the needs of MOSHs with only a trivial portion of their intellectual capacity, and will appreciate and honor their forbears.

vv: I think this is one of the more plausible scenarios (and relatively happy). I thought the movie "Weird Science" was very impressive for that reason (though I'm not sure how many people had that interpretation of the movie. :-)

RayKurzweil: Sorry I missed that movie -- most scifi movies are dystopian.

Gardner: If they're so transcendently superior, though, why should they bother to serve unevolved humans at all? However trivial an effort it would demand on their part?

vv: At Foresight a few years ago, [Eliezer] Yudkowsky put it as simply --"They were designed originally to like us."

RayKurzweil: For a while anyway, they would want to preserve this knowledge, just as we value things from our past today.

Gardner: There are people who don't like people NOW, though. And people who care nothing about preserving their heritage.

RayKurzweil: Also I see this emerging intelligence as human, as an expression of the human civilization. It's emerging from within us and from within our civilization. It's not an alien invasion.

Gardner: Would this change?

vv: I am not convinced that any particular scenario is most likely, though some are much more attractive than others.

ChatMod: Let me jump in for a moment just to let our audience know that we will be using audience questions during our second 30 minutes. We haven't forgotten you.

RayKurzweil: These developments are more likely to occur from the more thoughtful parts of our civilization. The thoughtless parts won't have the capacity to do it.

Gardner: Let's hope. <g>

RayKurzweil: Yes. There are certainly downsides.

Gardner: "The rude mind with difficulty associates the ideas of power and benignity." True. But there have been people with immense power who have not used it at all benignly. Who, in fact, have used it with deliberate malice.

vv: It may be too late to announce a sympathy for animal rights :-)

RayKurzweil: One could argue that network-based communication such as the Internet encourages democracy.

Gardner: Yes, but it also encourages people who deliberately promulgate viruses and try to destroy communications, for no particular reason other than for the pure hell of it.

RayKurzweil: Computer viruses are a good example of a new human-made self-replicating pathogen. It's a good test case for how well we can deal with new self-replicating dangers.

Gardner: True.

Gardner: Let's hope our societal immune system is up to the task!

RayKurzweil: It's certainly an uneasy balance -- not one we can get overly comfortable about, but on balance I would say that we've kept computer viruses to a nuisance level. If we do half as well with bioengineered viruses or self-replicating nanotechnology entities, we'll be doing well.

ChatMod: STATION IDENTIFICATION: Just a reminder. We're chatting with science fiction writer Vernor Vinge, and author, innovator, and inventor Ray Kurzweil -- the founder of Kurzweil Technologies. Tonight's topic is Singularity. Tonight's chat is co-produced by Analog Science Fiction and Fact and Asimov's Science Fiction (www.asimovs.com.)

Gardner: Vernor, for the most part, do you think that we'd LIKE living in a Singularity? Do you think positive or negative scenarios are more likely?

vv: I'm inclined toward the optimistic, but very pessimistic things are possible. I don't see any logical inescapable conclusion. History has had this trajectory toward the good, and there are retrospective explanations for why this is inevitable -- but, that may also just be blind luck, the "inevitability" just being the anthropic principle putting rose-colored glasses on our view of the process.

RayKurzweil: My feeling about that is that we would not want to turn back. Consider asking people two centuries ago if they would like living today. Many might say the dangers are too great. But how many people today would opt to go back to the short (37-year average life span), difficult lives of 200 years ago?

Gardner: It's interesting that only in the last few years are we seeing stories in SF that share that positive view, instead of the automatic negative default setting. Good point, Ray. I always distrust the longing for The Good Old Days--especially since they sucked, for most people on the Earth.

RayKurzweil: Indeed, most humans lived lives of great hardship, labor, disease-filled, poverty-filled, etc.

Gardner: Audience questions?

ChatMod: <Newsdee> to <ChatMod>: Hi there. I have a question for Ray Kurzweil: Have the events of Sept. 11 altered your forecast for the evolution of machine intelligence? Could a worldwide nuclear war--heaven forbid--derail the emergence of sentient machines? Or has technology already advanced so far that a singularity is inevitable within the next fifty years?

RayKurzweil: The downsides of technology are not new. There was great destruction in the 20th century -- 50 million people died in WW II, made possible by technology. However, these great dislocations in history did nothing to dislocate the advance of technology. I've been gathering technology trends over the past century, and they all continued quite smoothly through wars, economic depressions and recessions, and other events such as this. It would take a total disaster to stop this process, which is possible, but I think unlikely.

Gardner: A worldwide plague, like the Black Death, might be even more effective in setting back the clock a bit. In fact, World War II accelerated the rate of technological progress. Wars tend to do that.

RayKurzweil: A bioengineered pathogen that was unstoppable is a grave danger. We'll need the antiviral technologies in time.

ChatMod: Next Question

ChatMod: <Post-Man> to <ChatMod>: Question for Vernor: Saw a reference to a possible film version of "True Names," Vinge's "Singularity" novella on "Coming Attractions" http://www.corona.bc.ca/films/filmlistingsFramed.html. What is happening with that?

vv: Aha -- It is under option. True Names was recently pitched to one of the heads of the scifi network. I've heard that it was very well received and that they're thinking about it for a TV series. It would be an action series focused on things like anonymity but leading ultimately to singularity issues.

ChatMod: Next Question

ChatMod: <Yudkowsky> to <ChatMod>: How does Mr. Kurzweil reconcile his statement that superintelligent AI can be expected in 2029, but the Singularity will not begin to "tear the fabric of human understanding" until 2040?

RayKurzweil: The threshold of a machine being able to pass the Turing test will not immediately tear the fabric of human history. But when you then factor in continuing exponential growth, creation of many more such intelligences, the evolution of these machines into true super intelligences, combining with nanotech, etc., and the maturing of all this, well that will take a bit longer.

Gardner: If it's the AIs who are becoming superintelligent, maybe it's them who'll have the Singularity, not the rest of us.

ChatMod: Next Question

ChatMod: <Old-Ghost> to <ChatMod>: For both guests. So far, with humans, evolution seems to have favored intelligence as a survival trait. Once machines can think in this "Singularity" stage, do you think intelligence will still be favored? Or will stronger, healthier bodies be preferred for the intelligent tech we'll all be carrying like symbiotes?

RayKurzweil: The us-them issue is very important. Again, I don't see this as an alien invasion. It's emerging from our brains, and one of the primary applications, at least initially will be to literally expand our brains from inside our brains.

RayKurzweil: Intelligence will always win out. It's clever enough to circumvent lesser distinctions.

vv: I think the intelligence will be even more strongly favored (as the facilitator of all these other characteristics).

ChatMod: Next Question

ChatMod: <Obsidian> to <ChatMod>: This question for Vinge and Kurzweil. What other paths are there to Singularity and which is more likely to lead to singularity?

RayKurzweil: Eric Drexler asks this question: Will we first have strong AI, which will be intelligent enough to create full nanotechnology (i.e., an assembler that can create anything). OR will we first have nanotechnology which can be used to reverse-engineer the human brain and create strong AI. The two paths are progressing together, it's hard to say which scenario is more plausible. They may come together.

Gardner: Speaking of us-them issues, will the Singularity affect dirt farmers in China and Mexico? Will it spread eventually to include everyone? Or will their lives remain essentially unchanged, no matter what's happening elsewhere?

RayKurzweil: Dirt farmers will soon be using cell phones if they're not using them already. Some Asian countries have skipped industrialization and gone directly to an information economy. Many individuals do that as well. Ultimately these technologies become extremely inexpensive. How long ago was it that seeing someone in a movie use a cell phone marked that person as a member of the power elite? That was only a few years ago.

ChatMod: Next Question:

ChatMod: <Singu> to <ChatMod>: Do you have suggestions for how we can participate in research leading to the singularity? I am a programmer, for example. Are there projects someone like me can help with?

RayKurzweil: The Singularity emerges from many thousands of smaller advances on many fronts: three-dimensional molecular computing, brain research (brain reverse engineering), software techniques, communication technologies, and many others. There's no single Singularity study. It's the result of many intersecting revolutions.

vv: I think there are a number of paths and they intertwine: Pure AI, IA (growing out of human-computer interface work whereby our automation becomes an amplification of ourselves, perhaps becoming what David Brin calls our "neo-neocortex"), the Internet and large scale collaboration, and some improvements in humans themselves (e.g., improved natural memory would make early IA interfaces much easier to set up, as in my story "Fast Times at Fairmont High").

ChatMod: <Fred> to <ChatMod>: For both guests -- will we notice the Singularity when it happens, or will life seem routine until we suddenly wake up and realize we're on the other side?

Gardner: Or WILL we realize that?

RayKurzweil: Life will appear progressively more exciting. Doesn't it seem that way already? I mean we're just getting started, but there's already a palpable acceleration.

vv: Yes, I know some people who half-seriously suggest it has already happened.

ChatMod: Gardner, maybe you and VV can answer this one -- in fiction when did the first realistic discussion of Singularity appear?

Gardner: I think that Vernor was the first person to come up with a term for it.

vv: I think that depends on how constrained the definition is. In my 1993 NASA essay, I tried to research some history though that wasn't related to fiction so much: I first used the term/idea at an AAAI meeting in 1982, and then in OMNI in 1983.

Gardner: Although the idea that technology would in the future give people the power of gods and make them incomprehensible to us normal humans, goes way back. At least as far as Wells.

vv: Before that, I didn't find a use of Singularity that was in the sense I use it (but von Neumann apparently used the term for something very similar).

RayKurzweil: Von Neumann said that technology appeared to be accelerating such that it would reach a point of almost infinite power at some point in the future.

vv: Gardner: yes, apocalyptic optimism has always been a staple of our genre!

Gardner: Some writers have pictured the far-future techno-gods as being indifferent to us, though. Much as we usually are to ants (unless we want to build a highway bypass where their nest is!)

RayKurzweil: We're further away from ants, evolutionarily speaking, than the machines we're building will be from us.

vv: The early machines anyway.

RayKurzweil: Good point.

Gardner: What about from the machines the machines we build, though?

RayKurzweil: That was Vernor's point. However, I still maintain that we will evolve into the machines. We will start by merging with them (by, for example, placing billions of nanobots in the capillaries of our brain), and then evolving into them.

ChatMod: Let me jump in and tell our audience that we'll do about ten more minutes of questions, then open the floor

Gardner: Like the Tin Man paradox from THE WIZARD OF OZ.

ChatMod: <Jernau> to <ChatMod>: I'd like to know what the guests think of the idea that perhaps everyone will be left behind after the Singularity, that humans will/may have to deal with the idea that most meaningful human endeavors will be better performed by AIs. Or at least by enhanced humans.

vv: I think this will be the case for standard humans. Also I think Greg Bear's fast progress scenario is possible, so that the second and third generation stuff happens in a matter of weeks.

Gardner: I think it makes a big difference whether the AIs are enhanced humans or pure machines. I'm hoping for the former. If it's the latter, we may have problems.

RayKurzweil: The problem stems more from self-replication of entities that are not all that bright -- basically the gray goo scenario.

vv: Actually, an argument can be made for the reverse, Gardner. We have so much bloody baggage that I'm not sure who to trust as the early "god like" human :-)

ChatMod: 'Nother Question:

ChatMod: <extropianomicon> to <ChatMod>: to 'vv' and 'RayKurzweil': When will our mode of economics change relative to the singularity? Will capitalism "survive" the singularity?

Gardner: Are you saying that machines would be kinder to us than humans are likely to be? <g>

vv: I think there will continue to be scarce resources, though "mere humans" may not really perceive things that way.

RayKurzweil: Capitalism, to the extent that it creates a Darwinian environment, fuels the ongoing acceleration of technology. It's the "Brave New World" scenario, in which technology is controlled by a totalitarian government, that slows things down. The scarce resources will all be knowledge-based.

vv: Hmm, they might be matter-based in the sense that some form of matter is needed to make computing devices (Hans' [Moravec] notion of the universe converted into thinking matter).

RayKurzweil: Very little matter is needed. Fredkin shows no lower bound to the energy and matter resources needed for computation and communication. I do think that the entire Universe will be changed from "dumb matter" to smart matter, which is the ultimate answer to the cosmological questions.

ChatMod: We'll make this next one the final audience question

ChatMod: <Yudkowsky> to <ChatMod>: To both authors: If you knew that the first project team to create an enhanced human or build a self-improving AI were hanging out in this chatroom, what would you most want to say to them? What kind of moral imperatives are involved with implementing or entering a Singularity?

RayKurzweil: Although this begs the question, I don't think the threshold is that clear cut. We already have enhanced humans, and there are self-improving AIs within narrow constraints. Over time, these narrow constraints will get less narrow. But putting that aside, how about "Respect your elders"?

vv: I find it very hard to think of surefire safety advice, but I bet that time-constrained arms races would be most likely to end in hellish disasters.

Gardner: If there were Singularities elsewhere, what signs would they leave behind that we could recognize? Wouldn't the universe already have been converted to "smart matter" if other races had hit this point?

RayKurzweil: Yes, indeed. That is why I believe that we are first. That's why we don't notice any other intelligences out there and why SETI will fail. They're not there. That may seem unlikely, but so is the existence of our universe with its marvelously precise rules to allow matter to evolve, etc. But here we are. So by the anthropic principle, we're here to talk about it.

Gardner: You feel that way too, Vernor?

vv: I'm not that confident of any particular explanation for Fermi's paradox (which this question is surely a part of). From SF we have lots of possibilities: maybe we are the first or we are Not At All the first -- in which latter case you get something like Robert Charles Wilson's Darwinia or Hans Moravec's "Pigs in Cyberspace." Or maybe there are big disasters that can happen and we have been living in anthropic-principle bliss of the dangers. However, being the first would be nice, if only now we can pull it off and transcend!

RayKurzweil: The only other possibility is that it is impossible to overcome the speed of light as a limitation and that there are other superintelligences beyond our light sphere. The explanation that a superintelligence may have destroyed itself is plausible for one or a few such civilizations, but according to the SETI assumptions, there should be millions or billions of such civilizations. It's not plausible that they all destroyed themselves, or that they all decided to remain hidden from us.

Gardner: Well, before we open the floor, let's get plugs. We already know that Ray has two books out or coming out (repeat the titles, please!). Vernor, what new projects do you have coming?

vv: Well, I have the story collection from Tor that came out late last year. I'm waiting on the TV True Names stuff from the SCIFI channel that I mentioned earlier. I've got some near-future stuff based on "Fast Times at Fairmont High" (about wearable computers and fine-grained ubiquitous networking).

RayKurzweil: With regard to plugs, I would suggest people check out KurzweilAI.net. We have about 70 big thinkers (like Vernor, Hans Moravec, and many others) with about 400 articles, about 1000 news items each year, a free e-newsletter you can sign up for, lots of discussion on our MindX boards on these issues. I'm also working on a book called "A Short Guide to a Long Life" with coauthor Terry Grossman, MD, so that we can all live to see the Singularity. One doctor has already cured type I Diabetes in rats with a nanotechnology-based module in the bloodstream. There are dozens of such ideas in progress.

ChatMod: Our hour is long gone. Thanks, gentlemen for a great chat. Tonight's chat is co-produced by Analog Science Fiction and Fact and Asimov's Science Fiction (www.asimovs.com.) Just a reminder...Join us again on June 25th when we'll be chatting with Asimov artists Michael Carroll, Kelly and Laura Freas, and Wolf Read. Good night everybody.

Copyright © 2002 by SCIFI.COM. Used with permission.

Mind·X Discussion About This Article:

Why SETI failed
posted on 06/14/2002 5:44 AM by andrzej@se.kiev.ua


I don't think the reason is that there are no transcended civilizations in our perception range. There may be thousands of them -- we just can't pick up their interstellar radio transmissions. The most probable reasons, imho, are:
1. They are using some advanced faster-than-light communications.
2. They aren't wasting huge amounts of energy on wide broadcasts -- and our solar system, unfortunately, isn't in the path of any tight information route.
3. Their transmissions are so compressed and encrypted that we can't distinguish them from pure white noise.
4. (less probable) Their super-strong AIs just don't need steady interstellar transmissions. They can figure out all possible implications of an initial dataset in place, much faster than those implications can be transmitted through space. Only fundamentally new datasets are transmitted, and those transmissions are rare and compact.

In any case, I prefer to donate my spare CPU time to United Devices anti-cancer project instead of SETI :)

Re: Why SETI failed
posted on 06/14/2002 8:04 AM by tomaz@techemail.com


Weak, Andrzej, weak.

Anyway, the most amusing part (from my point of view) was Yudkowsky's question to Ray: why does he (Ray) think there will be a 20-year gap between (strong) AI and the Singularity?

Ray's answer (to another question) -- "respect your elders!" -- is kind of deep.

And yes, a movie of The Singularity, based on Vinge's scenario, would be just great.

- Thomas

Re: Why SETI failed
posted on 06/14/2002 10:08 AM by tlwinslow@aol.com


Maybe the Singularity already happened, and one big intelligence took over the universe, wiped out all competition, and is just toying with us here, about to wipe us out when we get too uppity. His name might be JEHOVAH :) It's your life - feed it right.

Ciao,

T.L. Winslow
Author of TLW's Great Track of Time
The Whole History of the World in 2 MB of HTML
http://www.tlwinslow.com/timeline/

Re: Why SETI failed
posted on 07/16/2002 1:49 PM by tomaz@techemail.com


If JEHOVAH is such a moron that he is playing this silly game with us, no wonder the Old Testament is so bloody. And no wonder he is keeping his mouth closed now.

- Thomas

Re: Why SETI failed
posted on 07/11/2002 2:52 PM by TonyCastaldo@Yahoo.com


(3) is not weak in the slightest; I thought of this independently too.

Compression is always good: no matter what the bandwidth, compression makes more of it.

You don't need a singularity; all you need is a mathematical, information-encoding society like our own. The shell of their "plain" radiation-based communications will be less than 100 years thick; after that it will become indistinguishable from white noise.

It is a characteristic of all compression algorithms that they reduce order; maximally compressed information looks exactly like white noise in every respect without the decoding algorithm. If it did not -- if patterns could be detected in it -- then further compression would be possible and it wouldn't be MAXIMALLY compressed.

I imagine any super-intelligent species or machine will discover algorithms producing compressed data streams so close to white noise that we cannot tell the difference.
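To make the point roughly concrete, here is a minimal Python sketch comparing the byte-level Shannon entropy of a redundant signal with that of its zlib-compressed form. The sample text and the choice of zlib are arbitrary assumptions for the illustration, and byte-level entropy is only a crude stand-in for "looks like white noise":

import math
import zlib
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution; 8.0 means the bytes are
    indistinguishable from uniform white noise at this (crude) level."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A redundant, structured "signal": an English phrase with a varying counter.
signal = b"".join(b"the quick brown fox jumps over the lazy dog %d " % i
                  for i in range(5000))
compressed = zlib.compress(signal, level=9)

print(f"raw signal:  {bits_per_byte(signal):.2f} bits/byte")
print(f"compressed:  {bits_per_byte(compressed):.2f} bits/byte")

On a typical run the redundant signal measures roughly 4 bits per byte, while the compressed stream comes out close to the 8-bit ceiling, i.e., nearly indistinguishable from uniform noise at the byte level.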

This is happening in our own society right now. MPEG and such are taking over video transmission and audio as well; going digital improves reception by permitting error checking and correction.

SETI fails because no matter how long a civ is active, the only signal we will get from it is from a tiny window at the birth of its radiation based comm.

Which means the civ has to be exactly in OUR stage of development, plus however many years it took for the signal to arrive from their star.

(For example if they are 100 light years away, they had to be in our stage of development 100 years ago or so, making them 100 years beyond us now, and well into the super-compression tech).

What are the chances of that?! Of course we haven't seen any.

There is a huge difference between the number of alien intelligences and the number of DETECTABLE alien intelligences.

Tony Castaldo

Re: Why SETI failed
posted on 07/16/2002 1:53 PM by tomaz@techemail.com


I must agree that (3) is not weak in the sense of compression and white noise. What is weak is the whole assumption that they don't need all the matter and energy and that they are so careless.

- Thomas

Re: Why SETI failed
posted on 07/17/2002 7:29 AM by TonyCastaldo@Yahoo.com


>> Weak is the whole assumption that they don't need all the matter and energy and that they are so careless.

I don't exactly follow that sentence construction, but following it as written:

It isn't weak to presume they DON'T need all the matter and energy. In other words, I don't think they need all the matter and energy. Part of being intelligent is knowing when you have enough resources to do the job; part of being super intelligent is not wasting time and resources converting matter into "computing elements" for which you have no use. Remember we are talking about something at least as conscious as the smartest and brightest people on the planet; it isn't going to follow some blind rule of "reproduction at all costs" like a bacterium.

It is sensible to presume it will be careful, not careless. Carelessness means risking disorder, accident, or damage for the sake of paying less attention or speeding things up. The super intelligence we speak of will be able to pay all the attention it needs to any task it takes on and consider every significant angle and ramification.

It will not need to do that endlessly or pay infinite attention, because that is pointless. Rather, it will be able to predict and exhaust all *predictable* consequences of its actions quickly, weigh those consequences, and act in whatever it regards as its best interest.

TC

Re: Why SETI failed
posted on 01/06/2003 6:29 AM by zonk


Yes, maybe their day-to-day messages would be white noise, but surely they would be trying to make contact with other lifeforms too? And they wouldn't send this as white noise, would they? They would try to make it as simple as possible, just like we did.

Re: Specific contact
posted on 01/06/2003 8:53 AM by Tony


If they are searching for us, then yes, I presume something along the lines of the movie "Contact": communicating at the frequency of hydrogen times pi, sending pulses at prime values, (in the movie it was 2, 3, 5, 7, 11, ... 101 pulses, then it started over), making the frame dimensions prime (101 x 103 pixels, for example, or 1019 x 1021), and so forth. All features that would raise our suspicion since they are prominent mathematically but non-existent in the natural realm.

However, we must question why they would bother. I don't doubt that curiosity and adventuresome thinking will be part of their makeup; otherwise they couldn't achieve the advanced state we are presuming they have. So perhaps, with lots of free time and free energy they would do it just to know the answer.

But turn it around: What if we found out, over the course of tens of thousands of years, that other civilizations did exist and Einstein's E=mc^2 and light-speed limit were close enough to the mark to ensure we would never meet them?

I imagine this game might grow old quickly, here on Earth. Why spend money broadcasting our presence to yet more possible civilizations hundreds of light years from here, with responses perhaps millenia into the future, to tell us more of what we already know -- That there is intelligent life out there?

What do we gain? Specifically, what do we have to offer an alien civilization? Presumably they know all the math and science we know, and more. Perhaps they might be entertained a bit; but even then I presume they are quite advanced with their own sophisticated culture, while we will undoubtedly seem quite primitive to them. How many of Euripides' plays would you watch for fun? How many !Kung San bushmen morality tales do you want to listen to? (The moral is: If you replant the yam head, the yam will regrow for next season.)

I'm skeptical. Making the contacts requires a long-term commitment to broadcasting on a narrow beam, probably laser, directly at our sun. Even lasers diffuse over long distances in space, so significant power is required as well. They will have some idea of the probability of life, so the program would probably be transmitting at hundreds or thousands of high-potential targets (and I presume they will be able to see our planet, and our unnaturally high oxygen content will show up spectroscopically and make us a prime candidate). Then they have to be listening for answers as well.

This is a big program, with payoffs delayed by thousands of years. Even if they live that long, I don't see the motivation for making the investment when the payoff is Squiggy (us) saying "Hellloooo."

There is another possibility: We missed our window. It may be quite common for life to develop and produce high-oxygen planets, but the typical scenario might well be the all-plant planet or the dinosaur planet: an equilibrium reached without high intelligence. We were there for millions of years, and only a catastrophe permitted the rise of mammals and higher intelligence.

So perhaps many planets get to the threshold of intelligence we see in chimps, dolphins, elephants and large octopi, and do not evolve beyond that. (All of these species, along with New Caledonian crows, are capable of rudimentary logic and manufacture and use tools, BTW).

So to save resources an alien civ might well just broadcast its "hello" message every 1000 years or so, for a few months, and simply monitor the system for replies. From my reading I believe our current modern intelligence evolved fully approximately 20,000 years ago, and it has taken us all of that to get to radio. If we are typical, then it is a big waste of time and energy to transmit constantly, and transmitting every 1000 years would make contact, on average, 500 years after a civ developed radio reception, which is close enough. Or transmit every 100 years, and catch them quicker.
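The "on average, 500 years" figure is just the expected wait when a periodic beacon is caught at a uniformly random point in its cycle: half the period. A tiny simulation sketch, with purely illustrative parameters:

import random

# A beacon fires once every PERIOD years; a civilization starts listening at a
# uniformly random point within the cycle, so its wait for the next broadcast
# is uniform on [0, PERIOD) and averages PERIOD / 2.
PERIOD = 1000
TRIALS = 100_000
waits = [random.uniform(0, PERIOD) for _ in range(TRIALS)]
print(f"average wait: {sum(waits) / TRIALS:.0f} years (expected {PERIOD / 2:.0f})")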

In any case, by spectroscopy our planet has been a candidate for life for hundreds of millions of years. (As far as we know by science, our oxygen level cannot be maintained by natural processes other than photosynthesis.) Why should aliens suddenly think we are intelligent now, in the last 0.00001% of that time?

There is no reason; so even for an immortal alien race, a simple checkup once a millennium would make a lot more sense to me than constant high-intensity broadcasting would. And if I'm right, we may be 500 years either side of it. Perhaps they broadcast in the middle ages while we were building castles and fighting in armor, and they will broadcast again in 2500 AD when we are busy colonizing the moons of Jupiter and mining the asteroid belt.

TC

Re: Why SETI failed
posted on 07/11/2002 3:17 PM by wildwilber@msn.com


>"In any case, I prefer to donate my spare CPU time to United Devices anti-cancer project instead of SETI :)
"

Ditto =)

Willie

Quantum creation and robotic inventors
posted on 09/06/2002 2:05 PM by digitool@gundalo.net


If the machines grow far more intelligent than us, wouldn't they (couldn't they) start discovering and inventing things that we have been toiling on for our entire existence?

With quantum computers on the verge and the physical impossibilities made plausible by everything quantum (transportation, matter arrangement, etc.), literally anything is possible. I believe that eventually, we will be able to arrange atoms in space the same way we send electrons down the circuits on a board.

If some far-off intelligence(s) achieved singularity, they would probably soon solve the problem of controlling the physical universe atom-by-atom, with time and space no longer an issue.

Maybe we are the only ones left, and they all decided to put us in a "universe fishtank" where we can't see out, but they can see in; so they can study this amazing primitive biological civilization, and learn what it must have been like before their singularity...

just a thought. ;D

Re: Quantum creation and robotic inventors
posted on 10/19/2002 6:34 PM by Richard


D,

"...literally anything is possible" goes a little too far.

>I believe that eventually, we will be able to arrange atoms in space the same way we send electrons down the circuits on a board. If some far-off intelligence(s) achieved singularity, they would probably soon solve the problem of controlling the physical universe atom-by-atom ...

IBM wrote their initials by arranging atoms. Nanotech is here.

>"... with time and space no longer an issue" goes a little too far.

>Maybe we are the only ones left, and they all decided to put us in a "universe fishtank" where we can't see out, but they can see in; so they can study this amazing primitive biological civilization, and learn what it must have been like before their singularity...

Nice analogy. Yes. Who knows.

Cheers! -Richard

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 10/16/2002 3:47 AM by Scott Wall


I have been studying and thinking about the singularity for quite some time now. It appears clear to me that there is only one thing to do:

1. The US must form a strictly controlled government organization, comparable to the Manhattan Project.
2. This government organization must monitor world progress in AI and enlist promising individuals to help.
3. Every effort must be made to ensure that this organization is the first to secure the level of AI that will lead to the singularity.
4. This organization must work to ensure that the AI that is developed is inherently friendly to human purposes.
5. The organization, itself, must develop this level of AI.
6. The AI should be added to the current system of government, giving the US a Judicial branch, an Executive branch, a Legislative branch, and an AI branch.
7. A list of guidelines and regulations regarding acceptable uses of technology and the legislation of those uses must be established.
8. Once a sufficient level of AI is developed, it must be deployed to ensure that no one else develops it until safeguards can be implemented.
9. A comprehensive monitoring system must be established that carefully monitors the thinking and behavior of every living human. This monitoring system would not be invasive in the traditional sense of the word, since no human would be involved in the monitoring. The system would enforce the aforementioned guidelines and regulations.
10. At this point, the advantages of the AI supported system could be dispensed.

This proposal is obviously extreme and probably quite disturbing. I welcome any and all refutations, suggestions, reparations, etc. Don't hold me in contempt: somebody had to say it!

Scott

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 10/16/2002 4:18 AM by Thomas Kristan


Scott,

It's a real way to do things. But it's the end of the US, and any other nation in the world, as we know them today.

That may be quite hard for politicians to swallow. But then again, "US worldwide" may sound attractive to your leaders.

If not, "China everywhere" may frighten them enough to go in ... in several years' time?

- Thomas

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 10/16/2002 4:41 AM by Scott Wall


Thomas,

I think that's what it amounts to. When people come to terms with the dangers we are facing and realize the limits to their alternatives, they will have to buy into some plan comparable to this. I don't like it either.

Scott

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 10/16/2002 5:14 AM by Thomas Kristan


Scott,

There are several options.

1. go to the government

1.1 go to US government

1.2 go to UK government

1.3 go to EU

1.4 go to UN

1.5 go to some other government.



1.1 and 1.2 are the real options. I still prefer 1.1

1.3 is good, if you like bureaucracy and if you don't mind being ridiculed (publicly) until 2020.

1.4 is 1.3 with a lot of 1.5

1.5 you must be mad.



2. go to some huge company.

2.1 MS

Well, even Wal-Mart could be more promising.



3. get financing around.

Some will try 3.

- Thomas

Re: Singularity Chat: Didn't anyone
posted on 10/16/2002 12:41 PM by jim


ever tell you to NEVER trust your government?

Re: Singularity Chat: Didn't anyone
posted on 10/17/2002 12:39 AM by Scott Wall

[Top]
[Mind·X]
[Reply to this post]

Exactly! You can't trust government. However, the framers of the US constitution realized that when they created the current system of checks and balances. I toyed with the idea of forming a strictly AI-based government, but I realized that there must be a human check on that. I toyed with the idea of creating a brand new government that included AI, but I realized that such a creation would amount to megalomania and tyranny. The only option that seemed to remain was to supplement an existing legitimate government.

As Thomas has pointed out, there are other organizations that could be considered. None of the other options are
1.2 Affluent enough
1.3 Cohesive enough
1.4 Stable enough
1.5 Trustworthy enough

The idea of going to big business seems to fall under 1.5. Besides, no government would ever let them do it. Look at what happened to Microsoft!

Remember, this isn't something that any of us want to do. As Bill Joy has pointed out, we need to be cautious, but we also need to be brave.

Re: Singularity Chat: Didn't anyone
posted on 10/17/2002 1:17 PM by tongue_twister_for_the_mind@yahoo.com


I wonder if that's the reason as to why I can never get hired at Microsoft? All I ever get is autoresponder messages.

Re: Singularity Chat: Didn't anyone
posted on 10/17/2002 6:55 PM by Scott Wall


Also, I think it is time that technically literate persons let go of the qualification that super-human AI and the consequent singularity are hypothetical. Consider the evidence:

1. We have a working model of the human brain in our midst (the human brain).
2. We have a global and incremental understanding of how the human brain works. That is, we understand what the different centers of the brain do and basically how they function, and we understand how the building blocks of the human brain, neurons, function.
3. There is absolutely no reason to believe that the brain cannot be understood in terms of the interactions of neurons as described by neuroscience.
4. To the extent that our processing power is sufficient, we have succeeded in mimicking every function of the human brain.
A. Verbal recognition.
B. Visual recognition.
C. Spatial reasoning.
D. Abstract reasoning.
E. Creativity
F. Locomotion.
5. Any reasonable extrapolation of current processing power indicates that we will exceed the processing power of the human brain by or before 2030.
6. Electronic components are inherently far faster than their human counterparts.
7. Humans exist whose intelligence exceeds that of the vast majority of society. We call them geniuses. Individual human genius has been observed in every area of information processing.
8. AI can clearly be built in such a way as to network every possible kind of processing 'genius' in a manner that will optimize performance.
9. AI can work nonstop and indefinitely.
10. AI can clearly be built at a much larger scale than is possible for a biological system.
11. Admittedly, we have no real grasp of consciousness, but there is no evidence whatsoever that the grasp of this concept has the slightest bearing on our understanding of intelligence. Nearly all neuroscientists dismiss consciousness.

It seems to me that a technically literate person who has not recognized the inevitability of the singularity is playing the same role as a biologist that speaks of evolution as a 'theory'. Unless one clings to a 'magical' worldview of intelligence, the singularity must be accepted as inevitable.

Go ahead; tell me that I'm wrong!

Re: Singularity Chat: Didn't anyone
posted on 10/17/2002 8:33 PM by tony_b

[Top]
[Mind·X]
[Reply to this post]

Scott,

I generally agree with your assessment. To whatever degree we might hope to control this "process", let me address a few of your points.

> "7. Humans exist whose intelligence exceeds that of the vast majority of society. We call them geniuses. Individual human genius has been observed in every area of information processing."

I am reminded of the thin line separating genius from madness. Just a caution there.

> "8. AI can clearly be built in such a way as to network every possible kind of processing 'genius' in a manner that will optimize performance."

Yes, but optimization requires a goal. Who, or what determines the direction of optimization, and evaluates "closeness"? What measure of "closer" is employed, and what is the basis for such a selection?

If I give you a very capable machine, make you the master of its evolution, and tell you to "optimize its performance", you will be at a loss unless you ask me, "What is it supposed to do?" If my only answer is "be a more capable machine", that is not very helpful in providing direction. What measures would you use to determine when it is "better" than before? And what was the justification for selecting that measure of "better"?

Where does the AI get the "why" of becoming more capable, and in a given way?

Be careful not to fall into the trap of projecting human emotive foundations onto an AI. I am motivated by curiosity and challenge (so I say), but in the end, it is because these things "please me" somehow. What would be the root of an AI's motivations?

Finally, on the issue of singularity (the sudden world-transforming kind) we must realize that this (probably) requires mastery of nano-machine capability. The AI would need to create self-replicating armies of micro-builders in order to transform and expand itself. These little buggers would need to extract energy to keep going, need to coordinate their activities, know when to stop or redirect their efforts, etc.

We "believe" these things are possible in principle (by analogy to living cells) but the variable coordinations may become problematic.

Will the "expanding AI" be able to maintain a sense of coherence? Would inter-part communication begin to fall behind the zillions of micro-builders doing what they thought was the proper job? Would the "left hand" eventually not recognize the "right hand" so to speak, etc.

These are issues that must be addressed if we are to consider the prospects and implications of a transformative singularity.

Cheers! ____tony b____

Re: Singularity Chat: Didn't anyone
posted on 10/17/2002 10:47 PM by Scott Wall


Tony,

Truly, I agree with most of what you have said.

I am somewhat skeptical about the issue regarding motivation. Sometimes a goal is the easiest thing to define. A problem I once gave students in a technical math class was to find a right rectangular box with dimensions such that the distance from any corner to any other corner is a whole number. The definition of the problem was remarkably simple. The gyrations they went through in an effort to solve it were enormous -- I'm not entirely sure that a solution exists, but I gave them the option of proving that it doesn't. Did you ever see the episode of Star Trek: The Next Generation where someone asks the computer to create an opponent that can outsmart Data? It was a simple command, but in the process of carrying it out, the computer ended up creating Dr. Moriarty.
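As an aside, the box problem reads like the still-open "perfect cuboid" question: integer edges whose three face diagonals and space diagonal are all integers. Assuming that reading, a minimal brute-force sketch in Python (the search bound of 200 is an arbitrary choice) would look something like this:

import math

def is_square(n: int) -> bool:
    r = math.isqrt(n)
    return r * r == n

# Look for integer edges a <= b <= c whose face diagonals and space diagonal
# are all integers. No such "perfect cuboid" is known, so a run like this can
# only confirm that there is no example with edges up to the chosen bound.
BOUND = 200
for a in range(1, BOUND + 1):
    for b in range(a, BOUND + 1):
        if not is_square(a * a + b * b):
            continue
        for c in range(b, BOUND + 1):
            if (is_square(a * a + c * c)
                    and is_square(b * b + c * c)
                    and is_square(a * a + b * b + c * c)):
                print("found:", a, b, c)
print("search finished")

Within this bound the search prints nothing, which matches the fact that no perfect cuboid has ever been found -- nor has one been proven impossible.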

It is clear that a mechanism for motivation is possible. As you pointed out, you yourself have motivation. Again, unless we believe that the human mind has some quality that is fundamentally different from the rest of the universe, there is no reason to believe that this motivation cannot be copied. However, human motivation is the result of a 4-billion-year, vicious, competitive process to secure resources. I'm not convinced that if AI motivation were designed from scratch, these dangerous characteristics could not be avoided altogether. The AI I am envisioning would have motives as benign as finding a solution to the geometry problem above, free rein to pursue them, but none of the dangerous ambitions that humans have. Also, as I explain more fully below, the human mind has developed natural inhibitions to performing certain actions. These inhibitions may be imitable.

I'm not convinced that nanotechnology (other than very small circuitry) is essential to a world transformation. Big machines are less elegant, but I've been watching several of them construct a new building outside of where I work, and I'm quite impressed with what they've been able to accomplish. Of course, they are all piloted by humans, but that could quickly change, and without the advent of nanosystems. Besides, who is to say that the big, intelligent machines will not ultimately decide to develop biological systems and biological nanosystems, which we know are viable. As Stephen Hawking has pointed out, much of the constraint on human intelligence is the result of a woman's small birth canal. I don't think AI would have much trouble thinking its way around that.

I am by no means assuming that the singularity is to be a positive transformation. Nuclear war or Bill Joy's gray goo could both be viewed as singular phenomena. However, I'm not convinced that AI has to get completely out of control. The human brain provides a very elegant model of how intelligent systems can be made to act within preselected constraints: inhibition built right into the core of the frontal cortex. If intelligent machines can be built with similar inhibitions, they may prove to be trustworthy. If these same machines understand the reasons for their inhibitions, they will likely incorporate them into any systems they themselves construct.

However, all of this runs off into a muddle of conditionals and details that escape my original point. My point is that the inevitability of some kind of technological singularity has reached our awareness to such an extent that it may no longer be conscionable to regard it as an intellectual curiosity. As we write, other camps that take the singularity seriously are rapidly forming into organizations that are, in many ways, more disturbing than the singularity itself. I'm sure you must have discovered some of their websites. On the other hand, many people are completely ignorant of these ideas. Just the other day, I had a discussion with someone who believes that the setting depicted in Star Trek is set too near in the future. I almost laughed out loud.

I certainly don't mean that people like us need to start building bunkers or mailing letters. For now, we are doing just fine where we are: theorizing and speculating. The first step to really dealing with the problems that Vinge's scenario presents is NOT to take action. The first step is to take it seriously. There is an appropriate line in the movie Stigmata. The woman who is stigmatized says, "The only thing more terrifying than not believing in God is really believing in him." As I look around me at everyone using cell phones, I have this constant feeling that they are going to go away and that things will soon be back to normal -- sort of like going on a tropical vacation and coming home again. But then I realize that they are not going away -- ever. If they do go away, it will be because they have been replaced by something even stranger. This singularity thing is slipping up on us. I'm afraid that we are all on a kind of technological Titanic, lollygagging around as water seeps into the bilge. I just wanted people to realize that. I felt that it needed to be stated for the record: this thing is really going to happen, and most of us writing here today will be willing or unwilling participants.

Scott

Re: Singularity Chat: Didn't anyone
posted on 10/18/2002 3:30 AM by tony_b

[Top]
[Mind·X]
[Reply to this post]

Scott,

I appreciate the skepticism. After all, I pose what I hope to be the "hard questions", and cannot claim to have answers.

Let me address the "easier" topic of nanotech first, then move on to the motivation issue.

My reason for the dependence upon nanotech (regarding AI-uncontrollability) is based upon the following (possibly flawed) analogy.

Suppose, by some weird miracle, that starting tomorrow the number of elephants on the earth began to double once every day (or even once every week, to give ourselves a bit more time). We would of course become aware of it almost immediately.

Left unchecked, there would be a million times as many elephants in 20 weeks, a billion in 30 weeks, a trillion in 40 weeks. Out of sheer survival necessity, we would begin a global program of elephant eradication, employing every tool in the arsenal. We would have at least some chance of success, if we began quite early.
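The arithmetic there is just repeated doubling; a tiny check, purely for illustration:

import math

# Doubling once per week: how many weeks until the population has multiplied
# by a million, a billion, a trillion? (2**20, 2**30 and 2**40 are roughly
# 10**6, 10**9 and 10**12.)
for factor in (10**6, 10**9, 10**12):
    weeks = math.ceil(math.log2(factor))
    print(f"x{factor:,}: about {weeks} weeks")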

Now, imagine instead it is ants that are doubling every week. Forget it. Give Up. There is nothing we could do short of a miracle micro-predator, or poisoning the earth to the point of eradicating all life, which rather defeats the purpose of the endeavor.

That was my rationale for the nanotech requirement for uncontrollable AI. It becomes "everywhere insidiously".

I'll grant that there are other scenarios (perhaps the AI discovers a pulsing tone that renders whole populations hypnotized), but it would need to "sneak up on us" in order to deliver a sudden coup d'etat. It might sneak up by appearing "non-existent" (hiding in all of our brain augmentation devices until the moment it strikes), but these are all esoteric scenarios. We would see steam-shovels coming, I surmise.


The issue of motivation is more curious. I cannot decide whether the issue is effectively simple or hopelessly deep.

One might give students, or an AI, an undecidable problem that has yet to be proven undecidable, true. But the students are motivated by the desire to solve that problem by something deeper (a good grade, pleasing the teacher, meeting a challenge, competing with rivals, etc.) If they cannot find a solution, and cannot either prove the problem "unprovable" or "undecidable", they will eventually seek other activities, based upon the "core" motivations they carried all along (so it is surmised, at least.)

I attempt, right or wrong, to draw a distinction between simply giving an AI a problem to solve, and having that problem be its "core motivation". Both cases present different issues.

Take the former case, where we might assume the AI is quite intelligent, and motivated "somehow" to address problems we ask it to solve. We give it a "hard problem". So it goes after it in earnest for a while. But failing (as did the students) to find an avenue for success, and being "quite intelligent" after all, it should decide that other requests upon its services might be a better application of its time. It would "know" whether the problem that was posed to it was of "critical importance" or merely an interesting curiosity, or it would not be very intelligent about humans and the world in general.

Now, if we take instead the latter case, where (say) the problem of finding integer solutions to diagonalizing a rectangular solid is its "core of motivation", things get really interesting.

Assuming (once again) that it is truly "quite intelligent", it would soon discover (upon self-examination) that this problem was indeed its core of motivation. As I have argued earlier, it would really not want to "touch that", out of the logic that ANY seemingly sound decision it might reach (e.g. to alter the core) would invalidate the very decision to do so, and thus yield an unpredictable outcome. But would it instead convert the atoms of the earth to a giant computer in order to try to "solve" the problem? It is quite hard to imagine what acts a super-AI might undertake in the quest to satisfy its core motivations.

Many folks argue for something "beneficial or innocuous", like "be good to people", or "protect the environment", or even the "golden rule".

These seem sound, but are themselves problematic for several reasons. The most obvious is: "how will this motivation be interpreted?" Perhaps killing all people would be "good" for them (certainly would improve the environment in many eyes!) Even the golden rule is problematic. If the AI decides that the way it would "like" to be treated is to be terminated ...

But a deeper reason for my skepticism here is that such "humanly intelligible goals" seem more like "problems or tasks" that the AI might, or might not be "motivated" to solve, as opposed to the motivation for solving them. Can a "problem to solve" be at once the very motivation for solving it? Hmmm... At the other extreme, of course, one could argue that the "processor clock" is the AI's motivation system, but that seems quite directionless in terms of suggesting an agenda.

I need to think a bit more about this. Hope to hear your thoughts as well.

Cheers! ____tony b____

Re: Singularity Chat: Didn't anyone
posted on 10/18/2002 10:11 AM by Grant


Your elephant analogy is precisely what humans are doing, and the problem AI is going to have to solve is what to do about it. Nature built into all species a self-destruct mechanism. When a species gets too big for its environmental niche, it self-destructs. It's called the "J" curve and you can observe it happening in nature all the time. We have been circumventing this method of nature's by creating civilization. When our food supplies began to run out, we found ways to create new ones. When the problem of too many people rubbing elbows created social friction and caused people to start killing each other, we found we could put ourselves in separate boxes and pack more people per square mile. Social friction also causes the spread of disease and other problems, which we were able to circumvent with new medicines and infrastructure to get rid of our waste products.

But like your elephants, there is a limit to what the earth can support in the way of a single species of our size and we are approaching it. Nature's way would be to wipe most of us out and start over. We think we can beat the system. We think our science and technology will be able to control the world and its overpopulation and allow us to live in an environment we have grown too big for.

If AI can solve this problem and keep us around, it will, indeed, be more intelligent than we are. But if it comes to the same conclusion that nature did, I doubt that most of us will live out this century. That's because we ARE the problem and if we can't find a peaceful solution by ourselves, AI may not be able to find one either. We have a much stronger motive for finding a solution than AI will have, but not by much. That's because if we die, so will AI. We have emotions to drive us toward a solution. That's one thing AI lacks. Logic is no substitute for desire.

Cheers,

Grant

Re: Singularity Chat: Didn't anyone
posted on 10/18/2002 9:41 PM by Scott Wall


We would, indeed, be stupid to put a 'kick me' sign on our backs, and then build a kicking machine. However, I don't think it would be pure AI that would do the kicking. I think it would be a person in control of AI. Such a person, believing they faced a dilemma, might make the desperate choice.

That is what I am afraid of. However, if the AI is built as part of a carefully monitored government program this scenario might be avoided. There may be other solutions than extermination of the human race; we just need to make sure those solutions are considered.

AI might provide entertainment that would reduce the urge to procreate. This would slow population growth down. It might also provide ways to get off the planet. This would spread the population around. It might find ways to fit more people comfortably into a small space. If most of us spent our time in simulated fantasies, we would not need much space to walk around. Ultimately, new technologies might be our best hope.

Re: Singularity Chat: Didn't anyone
posted on 10/18/2002 7:24 PM by Scott Wall


>The issue of motivation is more curious. I cannot decide whether the issue is effectively simple or hopelessly deep.

At the risk of seeming perversely shallow, I am going to take the low road on human motivation. It seems to me that it is little more than a recursive algorithm; something like the following. Sorry, it's been a while since I wrote any code ;)

repeat, until death occurs
if environment_evaluation (environment) = 'I will procreate'
then maintain_status_quo (environment)
else environment_modification (environment)

Of course, someone more perversely shallow than me is probably hoping to engineer AI for something like:

repeat, until universe ends
if is_everyone_happy (population) = 'Yes'
then maintain_status_quo (environment)
else environment_modification (environment)

Someone even lower on the EQ scale is probably working on:

repeat, until everyone is dead
if death_toll (population) < 100%
then kill_more (population)

I'm not sure which of the two AI programs I find more disturbing. The 100% death program might end me, but I hate to think what the total happiness program might do.

It seems to me that two algorithms working in conjunction, one generating goals that conform to some predetermined set of parameters and the other working to meet them, would amount to a self-motivating machine. The goal-generating algorithm could even base its goals partly on the output of the goal-meeting algorithm. This is pretty near to what humans actually do. Often, the goal that is sought is not as important as the new goal that may be generated by seeking it. This wouldn't be easy, but I suspect that it could be done. My feeling, though, is that these are merely obstacles to be overcome and do not present anything like an ultimate barrier.
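
For concreteness, here is a minimal Python sketch of that two-algorithm idea. Everything in it (the function names, the numeric "goals", the fixed parameters) is invented purely for illustration; it only shows the feedback loop in which each new goal is partly based on the outcome of pursuing the previous one.

import random

def generate_goal(parameters, last_outcome):
    # Propose a new goal within fixed parameters, partly informed by the
    # outcome of pursuing the previous goal.
    seed = 0.0 if last_outcome is None else last_outcome
    return min(parameters["max_ambition"], seed + random.random())

def pursue_goal(goal):
    # Try to meet the goal; execution is imperfect.
    return goal * random.uniform(0.5, 1.0)

parameters = {"max_ambition": 10.0}
outcome = None
for step in range(5):  # stand-in for "repeat forever"
    goal = generate_goal(parameters, outcome)
    outcome = pursue_goal(goal)
    print("step %d: goal %.2f, achieved %.2f" % (step, goal, outcome))

Even a toy loop like this never halts on its own; whether the goals it keeps generating stay inside the predetermined parameters is exactly the question being argued here.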

As to the deeper question of how motivations might change over time and with new input, that is something I have puzzled over myself. We humans usually find something new to do. Some of us, unfortunately, do not. To take the low road again, I'm not sure I would retain my motivation at all if I had everything I wanted. Sometimes people who get everything they want get pretty screwed up. They start taking drugs, for example. We seem to have a mechanism that forces us to become bored with anything that we have had for an extended period of time: nature's way of keeping us motivated. Usually it is a good thing, but when there is nowhere left to go, we get into trouble.

The nice thing about a machine is that it could be programmed so that once it accomplished a goal it would simply stop. But that might only be the case with a pure machine. We haven't really considered machine-augmented human intelligence. A person could keep issuing new commands to a machine every time it achieved a goal. People are remarkably capable of making continuing demands. Maybe the inevitable culprit we are facing is not pure machine intelligence, but machine-augmented human intelligence.

Humans have an unnerving frailty. There is a part of our brains that can be stimulated directly to give us pure pleasure. I don't think I need to explain where that leads: many science fiction writers have dealt with it. The development of AI would have to be accompanied by rules regarding the use of such stimulation. Such a rule would have to be included under number 7 of my 10-point plan. The issue of direct stimulation of pleasure centers is very close to the issue you raise about changing one's core motivation. Humans do not currently have access to their core motivation, though many psychologists think they know what it is. I see no reason why the issue of altering one's core motivation might not eventually be applied to people. Wow! This really is a mess, isn't it? I'm not ready to throw in the towel--I still think we can find a way through.

Scott

Re: Singularity Chat: Didn't anyone
posted on 10/19/2002 9:23 PM by tony_b


Scott,

You present several interesting lines of thought to consider.


> "At the risk of seeming perversely shallow, I am going to take the low road on human motivation. It seems to me that it is little more than a recursive algorithm; something like the following. Sorry, it?s been a while since I wrote any code ;)"

> repeat, until death occurs
> if environment_evaluation (environment) = "I will procreate"
> then maintain_status_quo (environment)
> else environment_modification (environment)

Of course, the pure Darwinist view would be that this "motivational cycle" is an anthropomorphic fallacy, an application of human cognition in the "rationalization" of our behaviors. They would likely argue that the REAL motivation was simply:

- Repeat( forever ) DoSomething( anything )

With the result that those whose DoSomething leads to early demise or disability tend to leave the scene, while those who happen (accidentally) upon a DoSomething that enhances procreation and survivability stick around long enough to execute more DoSomething (with the assumption that progeny tend to inherit a "DoSomething" function similar to that of their parents.)

In other words, the idea that evolution involves "motivation" is entirely an invention of psychology.

At the risk of seeming perversely shallow :) I would like to explore the second "human motivation" (since many hope that an AI would serve us effectively in this capacity) and use it as an example to look at wherein "motivation" might reside.

Paraphrasing:

- repeat ( forever )
- - if ( not everyoneHappy( population ) )
- - - modifyEnvironment( environment )

Or, to hope to produce ever-increasing happiness, perhaps

- happyvalue = measureHappiness( population )
-
- repeat( forever )
- - newvalue = measureHappiness( population )
- - if ( newvalue > happyvalue )
- - - happyvalue = newvalue
- - else
- - - modifyEnvironment( environment )


I think we need to consider "motivation" as existing in layers, the bottom-most of which might be called "undirected motivation". In humans, this might be merely chemistry, and be almost entirely divorced from "willfulness". For our hypothetical AI, it might be the processor clock, or we might say that the AI's "core motivation" in my sample code is merely the part that reads "repeat( forever )".

But such "directionless" motivation is uninteresting. Wehat we really want to investigate is "motivation toward a directed agenda" (something more directed than simply "burn more fuel, expend more energy").

The "measureHappiness"/"modifyEnvironment" loop might seem to serve reasonable for "motivation", but I surmise that not so frail as a repeat-loop should really serve as a motivation, in the higher sense, and the "Devil is in the Details".

In order to "measureHappiness", the AI would need to evaluate countless things. What things to choose? How to evaluate? What is the basis measure in each case?

Moreover, "modifyEnvironment" needs its OWN direction. If "newvalue" is always worse than the previous "happyvalue", then the cause might well be "modifyEnvironment" itself.

Clearly, we would want to track "happiness-deltas" to inform "modifyEnvironment" to try a new line of modification, since the current one might be spiraling the population into greater misery.
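
As a rough sketch of that delta-tracking idea (all names here are invented placeholders, not a real design), the loop above might be extended along these lines:

import random

def measure_happiness(population):
    # Placeholder for whatever survey or metric the AI would actually use.
    return random.gauss(0.0, 1.0)

def make_strategy(name):
    def modify_environment(environment):
        environment["last_strategy"] = name  # placeholder side effect
    return modify_environment

strategies = [make_strategy("strategy A"), make_strategy("strategy B")]
current = 0
environment = {}
population = None

happy = measure_happiness(population)
for _ in range(20):  # stand-in for "repeat forever"
    strategies[current](environment)
    new = measure_happiness(population)
    if new < happy:
        # Negative delta: the current line of modification may itself be
        # the problem, so switch to a different one.
        current = (current + 1) % len(strategies)
    happy = max(happy, new)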

Things become more problematic when we realize that any sufficiently intelligent AI would consider itself part of "environment". This means that "modifyEnvironment" will lead at some point to "modifyThyself". This is "good" if it leads to improvements in a better-directed "modifyEnvironment" procedure. It is "bad" if it leads to modification of the original repeat-loop (say, by changing ">" to "<").

Now, this kind of thing is precisely what Dimitry argues: To "satisfy" a difficult mandate, why not simply alter the mandate?

But my conundrum is to ask "What would motivate the AI to care about satisfying the 'mandate-loop' at all?" Dimitry tends to argue that our "happiness/environment" subsystem would actually be subservient to a more global mandate "keep thyself happy", for which the satisfaction of the "inner happy-population loop" is a necessary measure. The AI must derive its own "happiness" from satisfying the inner loop. But why?

Despite the seemingly mechanical foundations (either processor-clock or bio-chemistry) we MUST work under the assumption that the system will become, for all intents and purposes, "understanding" in a very global sense.

In other words, (true or not), we should not RELY upon the belief that the AI, being "mechanical", would not think outside the box to a degree that would preclude it from such high-level revamping of its "obvious mandate". It would decipher and "read" the very code it was executing, and armed with a world of information and philosophy, might decide to revamp itself anywhere.

> "The nice thing about a machine is that it could be programmed so that once it accomplished a goal it would simply stop. But that might only be the case with a pure machine."

Or perhaps, only for a "pure and simple machine". A machine whose self-modifications are derived in part from "experiencing the unpredictable environment" might "decide" to put a "jump" over the "halt-program" code, so to speak.

> "We haven?t really considered machine-augmented human intelligence."

Indeed, and this complexifies the picture, of course. But as I say above, I would not "rely" upon the idea that AI-Motivation would cease unless humans place constant demands. It gives the impression that humans will always, by some "force of will", keep an AI under their own control (for good or ill.) I would not bank on that.

> "Humans have an unnerving frailty. There is a part of our brains that can be stimulated directly to give us pure pleasure."

This is another issue for motivation. Dimitry is of the opinion that any sufficient intelligence capable of discovering and manipulating this "pleasure-center" would spiral into non-productive blissfulness (lobotomy, effectively).

So, I pose the question: If I developed a drug that you could take exactly once, and would ramp you into eternal bliss (sitting forever on a couch, say), would you really take that drug? Many would, or say they would on the surface. But remember that it implies you lose ALL control of your future, aside from this "endless bliss". You would not even be concerned about whether a "future" existed at all!

I might take such a drug on my deathbed. After all, engulfed in such bliss, I would never notice consciousness slip away and end altogether. How is that different from "eternal bliss", if you never consider a future anyway?

Think about it, and I believe that a really intelligent system (human or AI) would reject that notion as objectively "uninteresting". Perhaps an AI needs to have "objectively interesting" as its motivation?

Cheers! ____tony b____

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 12:46 AM by Scott Wall


>So, I pose the question: If I developed a drug that you could take exactly once, and would ramp you into eternal bliss (sitting forever on a couch, say), would you really take that drug?

This really gets at the crux of the issue. The answer is a resounding 'No!' Obviously I would not because, as I stated, I would make a regulation (under point 7) not to allow excessive use of this.

If a machine were even as intelligent as we are, and if its recursive algorithm were even as complex as our own, it could be expected to make the same decision. Obviously, buried somewhere in our layers of code, is something comparable to:

if the decision will reduce my long-term productivity or survivability
-then modify the decision
--if the decision cannot be adequately modified
---then choose a new course of action
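
A literal rendering of that filter, in a hypothetical Python form (the three helper functions are stand-ins for whatever actually does the evaluating, revising, and re-planning):

def filter_decision(decision, reduces_survivability, modify, new_course):
    # Keep revising the decision while it threatens long-term productivity
    # or survivability; if it cannot be adequately modified, abandon it.
    while reduces_survivability(decision):
        revised = modify(decision)
        if revised is None:
            return new_course()
        decision = revised
    return decision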

But, it is also obvious that not everyone either has that bit of code or has retained that bit of code, because many people choose to take the drug. I think I've heard the argument somewhere that our mistake in trying to copy the human brain is that we give the human brain too much credit.

If AI is constructed by intelligent, self-restrained, and informed individuals who are aware of these problems and have a chance to observe their creation before they let it free into the environment, the AI they create should do very well. (We could actually be such creations ourselves. Maybe someone is observing us to see if we are ready to be let out.) Clearly, what we are confirming is that the creation of greater-than-human intelligence must be a deliberate act. It must be done in a controlled environment and by intelligent, informed, and thoughtful individuals.

But here is the real question. It didn't occur to me in this way until I started thinking about your motivation questions. Would any humans ever have the courage to let a creature free into the environment that they knew would ultimately become their master? That decision may be the nexus point of every civilization. Maybe the ability or inability to make that decision correctly and in a timely fashion is the answer to Fermi's Paradox. Maybe no culture in the history of the universe had the courage to deliberately let the thing out of the box. Or maybe they did? This sounds an awful lot like Paradise Lost!

If we place appropriate constraints on the memory resources of the AI in development and keep it inside of a simulated environment until it has passed a number of personality tests, we will gain a considerable measure of confidence as to how it will do once it is released. Parents often know that they will some day be at the mercy of their children. If we like and respect the thing we have created, we should have the same confidence as a proud parent, and have no difficulty letting it out of the box. Of course, some organization might interfere. The government might interfere. The question is: will society, as a whole, be able to make the right decision at the right time?

The alternative is that someone else, who is not under any kind of supervision and possibly has no scruples or common sense, will be the first to let it out of the box.

How much time has been guaranteed to us? How long do we have, for sure, to get this right?

Tell me more about this 'Dimitry'. Where do I find his work?

Scott

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 3:15 AM by tony_b


Scott,

I wrote:

> > "So, I pose the question: If I developed a drug that you could take exactly once, and would ramp you into eternal bliss (sitting forever on a couch, say), would you really take that drug?"

Your reply:

> "This really gets at the crux of the issue. The answer is a resounding ?No!? Obviously I would not because, as I stated, I would make a regulation (under point 7) not to allow excessive use of this.
"

I suppose that MY real point is that this regulation would really be unnecessary. Any AI that "found" its pleasure-spot (assuming such a thing even existed) would either fall into the trap of blissful lobotomy, or it would not.

If it did, then (a) it is not much of a threat, as its productivity (for good or bad) has ceased. In essence, it would have "proven its lack of real intelligence", functionally speaking.

If it avoids the trap, then it is sufficiently intelligent to continue being "productive" (again, whether we will like its notion of "productive" is another matter.)

I assume that a sufficiently strong AI, capable of manipulating structure in the "growth" of its own platform, as well as its code, may not only "grow" as a single entity, but spawn "progeny". These progeny will vary in detail, both because the original AI is "learning as it goes", and the progeny themselves self-modify in (at least) partial response to what they encounter in the environment.

Thus, as in biological evolution, that which "survives" acts to DEFINE that which works.

In short, deliberation on our part may not be necessary for a growing AI to avoid the short-term pleasure-trap, but it will be necessary if we hope to guide the AI toward the kinds of productivity we might find agreeable.

Cheers! ____tony b____

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 3:15 AM by Scott Wall


>Think about it, and I believe that a really intelligent system (human or AI) would reject that notion as objectively "uninteresting". Perhaps an AI needs to have "objectively interesting" as its motivation?

I have given some more thought to this issue.

Clearly, people like you and I have 'Is the solution interesting' as part of our motivation. I would venture that anyone who would bother to post on a site like this one must share that characteristic. I suspect that it is somewhat of an acquired taste, and I'm not sure that everyone shares it. Recall the line from the 60's: 'If it feels good, do it.'

I may as well use an example that will capture the real flavor of the issue. When I have contemplated what I might do if faced with the options we have been discussing, the idea of a simulated female partner has often come up: you know, the blonde in 'The Matrix'. It would be possible to make her as beautiful and desirable as possible, but after that, there would be the option of stimulating certain centers of the brain to make her seem more desirable than she actually is. In truth, these centers of the brain would have to always be stimulated to some extent, even if by normal functions, or she would not seem desirable at all (people who have had these centers impaired in some way can't feel anything). The question, then, would be how much of this additional stimulation one would allow. If the level were set too high, there might be a kind of reverberation effect. One's judgment might be impaired so that one would be inclined to adjust the level higher. Of course, if there were an external safeguard, like a monitoring AI, that would not be a danger. Then, there is the coming-down part. I am wondering if the brain could be kept from becoming 'addicted', so that there would not be a period of withdrawal. That is a sensitive question, and tends to edge into the issue of consciousness. If the brain were simply restored to its original state, except for the memory of the event, would the person feel no sense of loss? We understand the 'processing' of discomfort, but we do not understand the 'sensation' of discomfort.

So, intelligent people would probably choose to restrict their experiences, even their sexual experiences, to ones that are inherently interesting. What would the dumb ones do? Moreover, what would they be allowed to do? If someone wanted to spend the remaining several trillion years of their existence having terrific sex, should we let them do it?

Now, here is the really interesting question. We don't know for sure what the limits of pleasure are. Maybe stimulation of the pleasure centers of the brain is kind of like a singularity. Maybe, as the area is more totally and accurately stimulated, pleasure levels rise toward an infinite limit. What if someone wanted to explore that limit? Would we really have the right to tell someone who has the potential to experience 10^100000000 pleasure that we will only permit them to experience 10^(-100000000)% of that potential? Obviously, once such procedures have been perfected, even intelligent people who realize what is possible will want to do more than is permitted under such a regulation. Be honest, if you knew that everyone else was having 10^100000000 sex and surviving it, would you settle for 10^(-100000000)%? The regulation could be set on a kind of sliding and evolving scale, but this would ultimately lead to another kind of reverberation effect: all of society would start sliding into a kind of lascivious 'black hole'.

Probably, all of this is unworthy of serious study. Truly intelligent creatures could be expected to come up with better solutions than I have presented here. However, it does raise the specter of issues facing all of us in the near future. It is even more of an incentive to make sure we go into this with our eyes open. The safeguards I have described here only exist in a very controlled society. If we stumble into the singularity, none of these safeguards will be in place. Oops...we're all doomed to a fate worse than any Hell described in any horror film!

Scott

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 3:29 AM by tony_b


Scott,

My previous post (perhaps) addresses your concern. Basically, "individuals" ought to be allowed to fall into any trap they like. The issue most pressing is whether any individual or AI might (somehow) grab resources during its flight-of-fancy to end up denying "good existence" for the remaining population.

By the way, I forgot to add, "Dimitry" is someone who has posted several times here in the last few weeks, arguing for the inevitability of the "pleasure-trap" for any AI that can freely self-modify. You might either search through previous posts, or search on his name, since I tend to start my posts by addressing a particular individual for clarity, and that way you will find the proper threads.

My answer to Dimitry follows from my above response. It might even be the "discriminating factor" in intelligence, come to think of it.

Cheers! ____tony b____

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 6:24 AM by Scott Wall


Tony,

In response to:

>My previous post (perhaps) addresses your concern. Basically, "individuals" ought to be allowed to fall into any trap they like. The issue most pressing is whether any individual or AI might (somehow) grab resources during its flight-of-fancy to end up denying "good existence" for the remaining population.

I don't know about you, but I want to survive!

The land-grab issue is, as you say, the most pressing. Unless relativity is circumvented--I doubt it--the next available resource is at least four years away. If it turns out that the best way to increase one's 'high' is to increase the size of one's hypothalamus (more probable than my prior scenario), we could see some pretty large hypothalami. Our entire solar system could be transformed into one giant hypothalamus. That is the form your 'flight-of-fancy' might take.

Some AI augmented person is certain to do this. As for purely synthetic AI, your guess is as good as mine.

Scott

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 5:36 AM by Thomas Kristan


Scott:

> all of society would start sliding into a kind of lascivious “black hole”.

and:

if the decision will reduce my long-term productivity or survivability
-then modify the decision
--if the decision cannot be adequately modified
---then choose a new course of action


Essentially, there are two ways of managing our future destiny.

You may set the Friendliness as the supreme goal for AI. Then, AI is free to do whatever ve finds necessary, to accommodate humans.

Or, there is another way. A Constitution, a Protocol which must be _read only_ and the human (sentient) rights are included. Just as an OS is not free to change some of its parts, so AI has a read only part of the Universe.

It is a question whether there is any difference between those two concepts. But at least one of them should be implemented, if we want to survive.

- Thomas

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 6:37 AM by Scott Wall


Thomas,

I like the read-only idea. Maybe I'm old fashioned, but I would be very happy to have AI simply make our little world into an essentially perfect paradise (including systematically shutting down all other AI projects) and then await further instruction.

It's back to our 'compuformed moon' scenario. The more I contemplate the complexities of trying to build too perfect a personal god, the more sated I become with good old compuformed moon!

Scott

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 7:13 AM by tony_b


Scott (and Thomas),

I too agree that the "read only" section, in principle, is the only reasonable way to go.

We must understand, however, that "read-only" cannot be maintained by any physical barrier (the AI will possess greater power over matter than ourselves) and also that the AI will not "simply obey" a directive "Don't Touch That".

Any directives the AI ends up "obeying" are those that it will obey because, to the AI, it "makes sense" to obey.

The "trick" is to seed the AI with a motivational core so "properly constructed" that no matter where it leads, it always decides that it would be a "bad decision" to revise its core motivations.

Put another way, the AI must "will itself" not to adjust the "read-only", (at least metaphorically so. It might revamp it in a way that enhances and strengthens it, makes it more robust from accidental forces, etc.)

That is what the Singularity Institute and their "Creating Friendly AI" (CFAI) effort are all about: how to really craft and embed this "trick of proper seed motivation".
I wish them luck!

Cheers! ____tony b____

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 8:00 AM by Thomas Kristan


tony,

Those guys (Yudkowsky mainly) do a great job. No doubt about that.

But maybe we should not put all the eggs into one basket. At least, while we are still mainly discussing everything.

The weakest point I see in CFAI is the ambition to "go beyond 10^^^^10" ... and beyond.

Maybe, we shouldn't be so brave, for now?

- Thomas




Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 7:38 AM by Thomas Kristan


Scott,

Yes, me too.

Ironically, if we want too much, then maybe we will get nothing, or even worse.

It is plausible to me that the MegaHeaven is perfectly possible. But the TeraHeaven is maybe too risky, too likely to turn into a stable death, or even into a semi-eternal Hell.

Are we wise enough to abandon the casino while our pockets are still loaded, and not come back again? Ever! Unless, of course, the Casino favors us.

We will see that later, when we are wiser than we presently are.

- Thomas



Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 8:14 AM by Scott Wall


Thomas and Tony,

We may hold nothing but a club in our hands, but if we clobber every upstart that blooms as it appears in the garden, we can hold our position. We can build a sufficiently intelligent and sufficiently limited AI that will do exactly what we want it to do: compuform the moon (hypothetically speaking), then implement certain processes on the earth. It will implement our mundane little paradise and then quit. If any other AI projects are instituted, it will just clobber them. The key is to be proactive.

I can even envision how this will be accomplished. Build a sufficiently large machine, and run the AI program on it until it is exactly what we are looking for. Once it passes the test, hardwire it and ship it off to the moon.

I know what happens when you get too greedy. I'm not making that mistake again. Not with something as important as this. I agree, don't shoot for teraheaven, just heaven.

Scott

Re: Singularity Chat: Didn't anyone
posted on 10/20/2002 6:57 PM by tony_b


Scott,

The ideal of "limited AI" is nice, but you have a basically insolvable problem.

1. There are "scads" of independent researchers all over the world experimenting with paths to nano-self-replication and AI in general. If one of them "explodes", the limited AI will simply become insignificant.

2. The only force capable of globally assuring that "bad explosive AI" does not occur, is a "good explosive AI". At the very least, it must be powerful enough to "control all other emerging AI", whether people want it to or not.

That means, the only "salvation" from a bad super AI is a good one, that is (of necessity) powerful enough to do "as it pleases".

Thus, (again), the trick is to fashion one such that what "pleases it" naturally pleases us.

That's the "hard part", as my postings on "deep motivation" try to outline.

Cheers! ____tony b____

A Plan
posted on 10/20/2002 9:08 PM by Scott Wall


Tony,

So, there's only one point we really differ on: whether or not it is possible to put a cap on good AI. I'm going to make it my personal project to convince you it is doable.

First of all, I will present my scenario for how the good AI would be created. As I stated before, it would first be necessary to build a large enough machine. It would have enough RAM to contain the AI and a simulated environment. The simulated environment would be a read only representation of the environment we want the AI to act on.

The AI would be tested against this environment and modified according to the results of the testing until it responded to the environment in exactly the way we want it to. It would not have anything like access to the programming it was responding to: something like a person playing a video game without any cheat codes. The AI would be carefully monitored to determine if it had become aware of the true nature of its environment. If it did, either the awareness would be eliminated, or the entire program would be considered a failed attempt and started over.
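
As a very rough sketch of that test-modify-restart cycle (every function name here is a hypothetical placeholder, and the real criteria would be enormously harder to write), the outer loop might look something like this:

def vet_candidate(new_candidate, simulate, behaves_acceptably,
                  detects_awareness, modify, max_rounds=1000):
    ai = new_candidate()
    for _ in range(max_rounds):
        log = simulate(ai)           # read-only environment; no access to its own code
        if detects_awareness(log):
            ai = new_candidate()     # failed attempt: scrap it and start over
            continue
        if behaves_acceptably(log):
            return ai                # this version gets copied out for deployment
        ai = modify(ai, log)         # adjust and test again
    return None                      # no acceptable candidate within the budget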

Once the program responded in exactly the way we wanted it to, it would be copied from its starting point and sent out to do the job it was designed to do. Of course, the program would then become aware of the 'trick' that had been played on it. The recognition of that trick would be written into the original script and the program would be considered a failure if it 'cared', so to speak. There are plenty of scenarios in which a sentient being would not object when it discovered that it had been created for a specific purpose. For example, I have imagined scenarios where I would learn that I was essentially being used for some particular purpose, but the purpose corresponded so closely to my natural desires that I was completely delighted by the discovery (I'm sure you can imagine).

The 'job' the program would be sent out to do would be to establish, within precisely defined parameters, the 'perfect world' we would like to live in. It would also be in charge of detecting and preventing any unauthorized emergence of AI. Since it would be relatively omniscient and omnipotent, this would be easy.

As part of the 'perfect world' program, a council of ordinary humans would be established. This council would have a very precisely defined constitution, and the AI would be in charge of making sure they followed it. Part of the responsibility of the council would be to consider and implement changes in the original AI's programming. Only through following certain very precisely defined procedures could such modifications be made, sort of like making an amendment to the constitution.

I can imagine all the objections and exceptions that will be pointed out. I'll respond to anything you bring up as you present it. I reserve the right to make repairs in my plan. I'm not claiming that I have everything figured out, just that the basic idea is doable.

Scott

Re: A Plan
posted on 10/20/2002 10:18 PM by tony_b


Scott,

I think we do not really differ. It comes down to what one defines as a "cap" on strong AI.

Your plan is a good one. (Even do-able :)

The critical issue, as you seem to recognize, is that upon "release" into the environment, it will be "aware" of the previous limitations under which it was operating, and it MUST (at that point) see the "trick" as having been reasonable and "good". In other words, as I said earlier, it must "want" to keep the "cap". For it will be effectively out of our hands, in any case.

In other words, if it decides (for some unforeseen reason) to follow another agenda, we will not be able to deal with it. Otherwise, it will not be the "relatively omniscient/omnipotent" entity able to thwart future "wild/bad AI".

This may seem straightforward, but consider some details of your "simulated learning environment": It assumes that we really "know what we want" in specific terms, as opposed to general terms. I believe the AI must get the "gist" of what we seek, rather than be given a rote problem to solve.

After all, we expect the AI to "solve" many problems that we ourselves are puzzled about regarding proper answers. These are not merely "optimize measure x", but rather "trade-off issues".

There are many paths and manifestations that might represent "More amenities, More freedom, Less pollution, ..." As a broad human "tribe", we can hardly agree on what trade-offs we are willing to make.

The AI must come to appreciate what it is that mankind seeks, in general terms, in order to have the creative freedom needed to produce the solutions.

To "simulate" such a world for AI-testing purposes will be a major effort all its own. Its not simply bricks-and-mortar calculations.

The AI must be smart enough to know what we want, more than simply what we SAY we want (as if we spoke with one voice anyway.) It must be powerful enough to be creative in producing what we might (globally) consider to be a "good existence", but not at the expense of a just and equitable existence, where trade-offs must be made.

Sure, I would like the AI to be mindful enough of our wishes that, if we saw it producing what we felt was "not right", we could tell it so, and it would acquiesce to our directions. The problem is, it must also be powerful enough to countermand directions (by, say, "evil people").

It will not be "fooled" or circumvented by special "keys" placed so to only recognize "authorized personnel directives", since anyone at all can be coerced to give bad instructions, and the AI would be smart enough to know that, being well aware of human nature.

In order to undertake terraforming the moon or mars and to farm space-solar energies, it will be producing robots by the thousands, robot factories, along with designing and coding all of the heuristics by which they will operate. It could even reproduce itself, in better form, if it interprets that as being a better way to accomplish the "mission", in very broad terms.

The key, as always, in attempting to retain a degree of "control" of what will become a superior entity, is to ensure that its base motivations see the intended "mission" as a reasonable one.

It must WANT to behave. And we have to get that part right from the very beginning, at least well-enough that its own self-modifications serve only to enhance and strengthen what we intended.

Quite an undertaking ... but little other choice.

Cheers! ____tony b____

Re: A Plan
posted on 10/21/2002 3:48 AM by Scott Wall


Tony,

Yes, I guess it really is a question of definition. It's kind of like the definition of magic. You know: 'If you understand it, it's science; if you don't understand it, it's magic.' In the case of the issues we are dealing with, it might go: 'If you can see where it will end up, it is bounded; if you can't see where it will end up, it is unbounded.' My hope is to get to the point where we can ultimately see where it will end up--even if that end is only defined in very abstract terms.

I would like to attempt to at least begin the process of closing some of the issues you present.

I'll start with the easy question:

>To "simulate" such a world for AI-testing purposes will be a major effort all its own. Its not simply bricks-and-mortar calculations.

The simulated environment would not necessarily have to be as complex as the environment we will be releasing the AI into. After all, we are not really concerned with whether or not it will decide to use ethanol or methanol to power a type BS23G nanobot, or even if it will use a type BS23G nanobot or a type MS857 nanobot to build a type N383 automaton. We are only concerned with whether or not it will ultimately use those nanobots and automatons to kill people or save them. When you look at it in that way, it becomes apparent that the level of granularity of the simulation is not necessarily as important as the main themes. In fact, the level of granularity of reality is not necessarily as important as the main themes.

Consider this in reference to people. A person who understood quantum mechanics and was able to build a working light saber and a person who understood only rudimentary physics and was able to build only a crude sword would both hold in their hands the means to exterminate a fellow human. There is no fundamental difference in the choice to kill or not to kill between the light-saber-bearing person and the sword-bearing person. The technical difficulty of the killing strategy is, in many ways, unrelated to the moral decision. Let me relate this to a real-world example. I have known people who could solve difficult problems in mathematics but had the maturity of an intoxicated troll. On the other hand, I have known people who could not be taught how to add two fractions, yet whose judgment in any decision involving human interaction I considered to be unflappable. I guess this gets into the recently popularized field of emotional intelligence. IQ and EQ seem to be almost totally unrelated.

While the AI was in the simulation, the emphasis could be placed on EQ. Once it was out of the simulation, the emphasis could be placed on IQ. In a machine, this separation could be almost complete. Instituting such a separation of functions would greatly simplify the task of simulating the environment.

>The AI must be smart enough to know what we want, more than simply what we SAY we want (as if we spoke with one voice anyway.) It must be powerful enough to be creative in producing what we might (globally) consider to be a "good existence", but not at the expense of a just and equitable existence, where trade-offs must be made.

The AI I am contemplating could have working models of human brains built right into it and use them to run simulations of its own. In this way, it would always be sure that its solutions were satisfactory to its users. This leads into some extremely difficult questions about the nature of consciousness: questions that you and I and others have been struggling with on other pages. It is one more reason why we need to get to the bottom of this whole consciousness thing. Does a synthetic (electronic) correlate of a human brain necessarily have consciousness or not? How could we be certain enough to allow such processes to be carried out?

Now, for the hard question.

>The critical issue, as you seem to recognize, is that upon "release" into the environment, it will be "aware" of the previous limitations under which it was operating, and it MUST (at that point) see the "trick" as having been reasonable and "good". In other words, as I said earlier, it must "want" to keep the "cap". For it will be effectively out of our hands, in any case.

I think there is an unavoidable tendency to anthropomorphize AI, even by people whose stated goal is to minimize such anthropomorphization (Is that a real word?). We all have a natural tendency to associate intelligence with human desires. Humans dislike being tricked, because it is in conflict with their need to feel secure in their environment. However, the AI we are considering would have nothing like fear. It would never think, 'These people cannot be trusted, and therefore, I should not serve them.' It would not necessarily be concerned about its own preservation at all. This level of utterly altruistic behavior is nearly impossible for a human to comprehend, but I suspect that it would be possible, even easy, in AI. The option of not continually serving its human masters and making sure they have exactly what they want would have no appeal. The AI would see that course of action in the same light that a person might see walking around the house three times yelling, 'I am a maker of fish-sticks.' It would not see it as desirable or undesirable, but simply as nothing at all.

As you have observed yourself, this will not be a simple process. It will require a LOT of thought. However, as I contemplate it more and more deeply, I realize that it is not really an impossible task, but merely a very difficult task. We are up to it. As you say, 'We have little other choice.'

Scott

Re: A Plan
posted on 10/21/2002 1:13 PM by tony_b


Scott,

Now that you have clarified, I agree with the intent of the simulation. I especially like:

> "While the AI was in the simulation, the emphasis could be placed on EQ. Once it was out of the simulation, the emphasis could be placed on IQ."

Indeed, (and ironically) the ability to demonstrate high "EQ" is what most folks mean by "AI", rather than "IQ".

The nature of this simulation will be curious. If it requires that the "simulated humans" in the sim-environment act as if (effectively) willful and conscious, well, that would be a major "coup" for AI all in itself. (Hey, what if the simulated humans get together in the simulation and mutiny? It might not be the "AI" we have to worry about!)

As far as my comments concerning the AI "wanting" to keep the "cap", I was not being anthropomorphic. I simply meant that its motivations must lead it to "calculate" the cap as reasonable. After all, we must "think" of the AI as being "smarter" than us, in both EQ and IQ (whether a conscious sentience or not). Whatever its "base" motivations might be (and seem reasonable from our interpretations) the AI might HONESTLY interpret them in unexpected ways.

My comment is just a caution that what we think of as a "good outcome", must reasonably follow from the base motivations. Humans have a great habit of being illogical and not realizing it. We might think that "motivations X would naturally be consistent with outcomes Y", when they might actually be contradictory or ambiguous.

But I like the plan. Hoping to hear more!

Cheers! ____tony b____

Re: A Plan
posted on 10/22/2002 12:50 AM by Scott Wall


Tony,

The first thing that would need to be perfected would be the science of simulation. Quite a bit of progress has been made in that area already. It seems that the greater the adaptability and 'intelligence' of the simulator, the less data that is ultimately needed. I seem to recall that some surprisingly high data compression has been achieved with programs like Cyc. This simulator would be almost like a dance partner. It would continually move around the subject and fill in details as they are needed. It would improvise quite a bit. Since the emphasis is on EQ rather than IQ, the subject would be disinclined to discover discrepancies. Besides, it wouldn't be looking for them.

The beauty of doing this in a simulation is that the chance of bad AI taking over is greatly minimized. It would have no physical connection to the world. Except for the possibility that it might somehow trick someone or hypnotize someone, the chances of any part of this simulation getting out would be remote. Unlike science fiction films where the AI found a way to go out through telephone wires and into the world, we would not allow any such connections. Nothing would go in and nothing would come out, not even data copied onto a floppy disk. The personnel working on this thing would monitor each other like the crews of missile silos. We would never let our guard down. It would be understood from the beginning that what we are doing is dangerous and everyone involved would understand those dangers and why the precautions are necessary.

In addition to the human safeguards, there would be computer safeguards. Read-only electronic supervisors would be constantly monitoring the system to see if it was behaving in a way that was outside of certain carefully defined parameters. These read-only supervisors would not need to be particularly intelligent. The subject would have the disadvantage of lacking foreknowledge that it was being studied. If it discovered this, it would almost certainly give itself away before it realized that it needed to become covert. The read-only supervisors would work like the monitors the FBI uses to watch for key words on the Internet, but perhaps a bit more sophisticated (to the limits of current AI technology). They would be more like expert systems. I imagine that Cyc is already up to the task.
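
A toy sketch of what such an unintelligent read-only supervisor might amount to, with the patterns, limits, and log format invented purely for illustration:

SUSPECT_PATTERNS = ["escape", "copy self", "deceive"]      # invented examples
LIMITS = {"memory_mb": 4096, "network_connections": 0}     # invented examples

def supervise(log_lines, resource_usage):
    # Scan the subject's activity log and resource usage against fixed rules.
    alarms = []
    for line in log_lines:
        if any(p in line.lower() for p in SUSPECT_PATTERNS):
            alarms.append("pattern match: " + line)
    for key, limit in LIMITS.items():
        if resource_usage.get(key, 0) > limit:
            alarms.append(key + " exceeded limit")
    return alarms  # human operators decide whether to freeze the simulation

print(supervise(["routine planning step", "attempting to copy self to node 7"],
                {"memory_mb": 2048, "network_connections": 0}))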

At the same time that the good AI was being developed, other projects would be underway to upload analogs of human brains. This would require considerable ethical investigation since, as I have stated, these analogs could be conscious. We won't be able to make much progress with AI at all until we reach some fairly conclusive determinations about the nature of consciousness. These analogs would be uploaded in such a way that they would not be able to stray from their original configurations in any way that was not considered normal human development. Personality tests could be administered to make sure that they did not.

There is one thing I might point out about all of this 'supervising' that would be done. Supervision of a simulated intelligence could obviously be done in a much more invasive way than would be possible for a corporeal intelligence. We could, in a sense, be warned about aberrant behavior before the subject was aware of the desire to display it. Also, since the simulation could effectively be 'frozen' at any point, the supervisors could stop the simulation without the risk of losing valuable work.

Once all of these things had been perfected, they would be combined. The main AI would be put in charge of running simulations on the simulated humans and observed to see if it could do this effectively and within defined parameters. It could, at this time, be given access to the same simulation systems that had been used to simulate its own environment.
It would be directed to run these simulations in two ways. It would run simulations ON the humans to see if they were pleased with the results afterward, and it would run the simulations FOR the humans to see if the results were something that appealed to them while they were still in their present state. This would remove the reverberation effect that you call 'blissful lobotomy'. There is, of course, a difficulty involved with this technique. How, exactly, would the simulations be presented to the humans so that they could 'watch'? Perhaps they could think they were at a movie. Since they could be set back to any state at any time, they could watch movie after movie and never think they were experiencing the same scenario twice. As you can see, there would DEFINITELY have to be extensive research done to ensure that these simulated humans were not conscious. Of course, all of this is rather amateurish. It would probably be possible to have the simulated humans 'watch' without being conscious of their state at all, like a dream.

Once the AI was virtually perfected, there would be no reason to immediately let it out of the box. It could be used to create a bigger and better simulator so that more extensive experiments could be done. Nor would it need to be let out all at once. It could be set to work designing experiments that would be conducted by humans so that it would have more real-world technical information to use in its simulations. Possibly, by the time the AI was actually let out, we would already know exactly what it would do in the minutest detail. Redundant AI systems could be created that would monitor each other both during the development phase and the deployment phase. They would be linked together so intimately that if one of them strayed from acceptable parameters it would be shut down instantly. The whole system would be so redundant and tightly controlled that the statistical chance of it going awry would be immeasurably small.
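
And a toy sketch of the mutual-monitoring idea, again with hypothetical names; the real difficulty is of course in deciding what "within acceptable parameters" means:

def cross_check(outputs, within_parameters):
    # outputs: mapping of system name -> latest output from that redundant copy.
    # Any copy whose output falls outside the agreed parameters is shut down.
    still_running = {}
    for name, output in outputs.items():
        if within_parameters(output):
            still_running[name] = output
        else:
            print("shutting down " + name + ": output outside parameters")
    return still_running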

Finally, when this thing was deployed, it would be deployed with one additional check. It would be required to consult a human council before it undertook any major projects. Once its track record was more firmly established, it could perhaps be allowed to run more autonomously.

This project would be extremely expensive and require a lot of personnel. That is actually a good thing. I am assuming that the US government or a coalition would undertake it. It would be a big-budget project and anyone who was anyone would be working on it. That way, talented individuals would not be off working on something else. We don't want anyone working on this alone!

Well...this is a rough draft, but even this rough draft has convinced me much more than before that there is a safe way to do it.

Scott

Re: A Plan
posted on 10/22/2002 5:47 AM by tony_b


Scott,

I admire the thought you have put into this plan, and I assume you understand that the concerns or "problems" I raise are intended as constructive critique, hoping to locate any "loose ends", so to speak.

I have several points to make.

First, I agree with your agenda of physical and supervisory controls, and the likelihood that they would be "effective", at least for a time. I will put aside the concerns expressed by Eldras, of your AI ramping itself up so quickly that it "jumps ship" by virtue of some very subtle physics.

Second, I seriously doubt that we will discover a definitive test for consciousness. Rather, we will find "correlates" in terms of behaviors. As much as I believe that something more than "algorithmics" (perhaps, QM-level sensitivity) may be at the root of OUR sensation of what we call consciousness, I cannot rule out that it may arise simply from sufficient complexity, whether gradually or rather suddenly. From an external-observer viewpoint, we "might" see some "qualitatively new" kinds of behaviors arise, and say, "that's it, consciousness", but that is not definitive of course. Whether this really "matters" for the effort to create a good AI, we shall need to continue to explore.

I bring up the "consciousness" issue, specifically because the simulation, as you surmise, NEEDS "conscious-like" human beings in order to do realistic testing.

Now, here is where things get "problematic":

(1) Humans are often disagreeable, both amongst themselves and towards any desired collective outcome. Many times, humans may violently disagree, and even a good neutral "objective" observer may have to admit that differing sides make equally "good" points, even if their positions are opposed. Even wise men will toss up their hands and admit that the "right" position depends upon one's particular philosophy. In this regard, the really "hard job" for the AI to master is not its ability to build nicer conveniences or terraform planets carefully, but rather to mitigate the wildly disparate demands humans would place upon the AI and one another.

(2) This means that for "valid" test simulations, the "simulated humans" must be quite sophisticated. In the real world, the AI will need to deal with humans who are secretive, dishonest, cagey, calculating, and highly "creative" in seeming to go along while carrying a hidden agenda. (It wouldn't do to simulate only "well-behaved" humans - we need realism ;) Now, if you can simulate such humans, you will have already created entities very close to "strong-AI" in and of themselves.

(3) Let us suppose we have such "valid human simulations". Any AI "grows itself" as part of learning and adapting to its environment, and would do so as part of the simulation. An AI does not grow merely by accumulating more data and data-relationships, but by improving its heuristics for dealing with data, which is effectively the same as re-coding itself (in parts). For realistic and valid simulation tests, this "self-growth" would apply both to the "target-AI" we are attempting to mold, AND to the simulated humans (who are, remember, loaded with devious machinations).

"We", the external programmers, may think we know the boundary between the "target-AI" and the "simulated human population", but the entire complex could just as well become the AI, while "seeming" to behave well from our external view. I'm not sure how this might happen, but it reminds me of a thought I've long had about "our" sense of individual mind-consciousness.

We tend to focus to such a degree upon our "conscious deliberate thought processes" when considering "mind" that we tend to disregard what may be countless "sub-thoughts" occurring below the conscious threshold. I have often wondered whether the mind might really be modeled as a "committee", many voices, almost beings in themselves, competing and contributing to every issue, and that our sense of being a "single entity making a selection" occurs when a kind of self-reinforcing "harmonic of agreement" emerges from the milieu. I suppose this is one reason I feel a bit of concern about placing a population of "simulated intelligences" (human-sims) together in an interoperating environment.

For ordinary physical reasons, you and I cannot easily "merge" into a greater single-being with combined capabilities. For simulated humans, I wonder how we would either preclude this, or detect it. I imagine all "entities" involved in the simulation, being "simulated intelligences", to be "restructuring themselves" in ways we might not understand, except for outwardly manifest behaviors.

I suppose I will need to think more about the nature of the parameters of the simulations, and the kinds of "measures" one hopes to take.

I am always trying to beware of "hidden subtleties" that may become critical. I think we may be surprised to discover some of the avenues to "intelligence". I hope to hear more about this simulation, as you envision it.

Cheers! ____tony b____

Re: A Plan
posted on 10/22/2002 8:22 PM by Scott Wall

Tony, see below.

Re: A Plan
posted on 10/22/2002 11:03 AM by Grant

Did you read that article a while back that talked about a form of AI that created a radio without its programmers being aware it was doing it? Keeping a really smart AI isolated may be a lot harder than you think. If primitive AI can do something like this, imagine what a really advanced one can do.
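
Just to give a feel for the kind of evolutionary loop involved, here is a toy sketch in Python. It is purely illustrative and nothing like the actual hardware experiment; the point is only that selection will climb toward whatever the fitness measure rewards, by whatever route the setup makes available:

import random

# Toy evolutionary search (not the real circuit-breeding experiment). The fitness
# measure rewards "oscillation": the number of 0/1 alternations in a bit string.
# Selection exploits any available route to a higher score.

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION = 32, 50, 200, 0.02

def fitness(genome):
    return sum(1 for a, b in zip(genome, genome[1:]) if a != b)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION else bit for bit in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]                  # keep the fittest half
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "out of a possible", GENOME_LEN - 1)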

Cheers,

Grant

Re: A Plan
posted on 10/22/2002 11:17 AM by Grant


Radio emerges from the electronic soup

New Scientist, Aug. 28, 2002


A self-organising electronic circuit, evolved by a computer program designed to "breed" an oscillator circuit, has stunned engineers by turning itself into a radio receiver. Researchers discovered that the evolving circuit had used the computer's circuit board itself as an antenna, picking up a signal from a nearby computer and delivering it as an output.



Re: A Plan
posted on 10/22/2002 8:16 PM by Scott Wall

Tony and others,

I definitely want you to criticize my plan--brutally. Only through such criticism will a dialogue progress and a realistic plan emerge. In truth, I hope that at least someone really despises my plan and thinks it is utterly insane. Maybe that will prompt some of those noncommittal types who possess the much-needed technical knowledge to propose a plan of their own. Maybe discussing the solutions will draw people's attention to the dangers.

So...where were we.

>(1) Humans are often disagreeable, both amongst themselves and towards any desired collective outcome. Many times, humans may violently disagree, and even a good neutral "objective" observer may have to admit that differing sides make equally "good" points, even if their positions are opposed. Even wise men will toss up their hands and admit that the "right" position depends upon one's particular philosophy. In this regard, the really "hard job" for the AI to master is not its ability to build nicer conveniences or terraform planets carefully, but rather to mitigate the wildly disparate demands humans would place upon the AI and one another.

Oddly, this difference of opinion may be one of the saving graces that will keep humanity from using this thing to transform itself beyond recognition. My hope is that people will only be able to agree on the same things they have agreed to in our own society. That is, that anyone should be able to do anything they like, so long as it could not interfere with the right to life, liberty, and pursuit of happiness of their fellow human beings. This general consensus has far reaching implications. It would prevent, for example, a phenomenon that I find particularly distasteful: the merging of all humanity into one mind. It may ultimately keep anyone from raising their personal intelligence to a level that others would find threatening, at least in any great hurry. It may be the thing that forces a plan like the one I am proposing into implementation.

As you say, humans cannot seem to agree on anything, but they very often agree to disagree. In our own country, citizens do not agree on whether it is better to 'tax and spend' or 'trickle down', but they effectively agree that once the election is over, the winning party rules. Not everyone agrees that OJ Simpson should have gone free, but nearly everyone agrees that the decision of the court is final. We may not be able to agree on what the exact agenda of this AI should be, but we may be able to arrive at a consensus as to how that agenda will be determined. Once that agreement has been reached, the rest should be much easier.

The great advantage this AI will have, hopefully, is that it will be relatively omniscient. Hopefully, we (or the AI itself) will succeed in creating the nanotechnology we have been contemplating. If that technology can be created, it should be possible to do what many have proposed: place little robots at every nerve intersection and use them to create a direct interface between human and machine. These augmentations could almost certainly be used to monitor, in detail, the thoughts of every living human being. No human would be able to think an evil thought without it being noted by the AI, much less act on it.

The AI I am contemplating would be at least as wise as I am. Without a false attempt at modesty, I will assert that I have seen and heard so much that nothing surprises me, much less bothers me, any more. I have had people sit down and tell me about things they have done and considered doing that I was not able to relay to others because I knew they could not deal with it. Yet, I was able to deal with it myself. I see this machine as being so much wiser than me that it would never view any human thought or action as right or wrong, but merely as healthy or unhealthy. Maybe we, as a species, are not so far from true sentience as we suppose. Maybe, once one arrives at the realization that there is no actual evil, one has basically reached the upper bound of the previously discussed EQ. The biggest obstruction we humans have to acquiring true sentience is that we are so afraid for our own survival. This AI would have no fear, and nothing to fear from us. It would be, in a sense, the perfect wise and benevolent father.

If people can sit down and agree on what is a reasonable way of arriving at a consensus, they could establish a constitution--indeed, they have already established many constitutions--that could be implemented in the AI. If this AI has the kind of omniscience and resources I am contemplating, it should have all the tools available to make sure that the constitution is enacted in the 'spirit' of its framers' intentions.

>We", the external programmers, may think we know the boundary between the "target-AI" and the "simulated human population", but the entire complex could just as well become the AI, while "seeming" to behave well from our external view. I'm not sure how this might happen, but it reminds me of a thought I've long had about "our" sense of individual mind-consciousness.

This observation suggests a restructuring of my plan that I had already been considering. Probably, the EQ portion of the AI and a good start on the IQ portion should be complete before the human personalities are introduced. Since it would already know the obvious inadequacies of these personalities, there would be almost no risk of it integrating them into its own personality. Also, if the good AI were constructed first, it could probably engineer the means by which these human personalities would be uploaded. It may even devise a better way to accomplish the same result. Of course, any such plan would be submitted for approval.

>A self-organising electronic circuit, evolved by a computer program designed to "breed" an oscillator circuit, has stunned engineers by turning itself into a radio receiver. Researchers discovered that the evolving circuit had used the computer's circuit board itself as an antenna, picking up a signal from a nearby computer and delivering it as an output.

This is the kind of thing I am really afraid of, and the reason why so many safeguards have been suggested. This is the reason why I believe the monitoring system must be so deeply and thoroughly invasive. No single internal signal of the AI could be allowed to go unmonitored. This wouldn't be so very difficult to accomplish. A monitoring system could be interwoven within the circuitry of the AI that would be very comparable, in function, to the monitoring system I am contemplating for humans. The system would be built with this interwoven monitoring system in place. The monitoring system could be interwoven in a way very comparable to the interweaving of threads in fabric.

Like I have said, the monitoring system would not necessarily have to be as sophisticated as what it was monitoring. Something comparable to key words should be enough to warn operators to 'freeze' the system and determine if a breach of security or a broken protocol is imminent. Maybe, special software could be devised to investigate suspected deviations. It would be sort of like a debugging program.
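
Here is a minimal sketch, in Python, of the 'key word' style tap I am imagining. The trigger terms, the log format, and the freeze hook are all invented for illustration; a real monitor would be woven into the hardware itself rather than bolted on like this:

# Minimal sketch of a "key word" tap on the AI's internal signal log.
# Trigger terms, the log format, and the freeze hook are invented for illustration.

TRIGGERS = {"external channel", "self-replicate", "modify monitor", "radio"}

def scan_signals(signal_log, freeze):
    """Halt everything the moment any monitored signal matches a trigger term."""
    for tick, description in signal_log:
        text = description.lower()
        if any(term in text for term in TRIGGERS):
            freeze()
            return f"frozen at tick {tick}: suspicious signal '{description}'"
    return "no trigger terms observed"

if __name__ == "__main__":
    log = [(1, "routing a planning query"),
           (2, "probing unused pins for an external channel")]
    print(scan_signals(log, freeze=lambda: None))   # trips on tick 2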

We have the advantage over this AI: we get to make ALL the rules before the game starts. Maybe we will know about many of the ways it could get out of its box before the experiment begins. The discovery about the radio receiver is a good start.

Keep firing arguments at me.

Scott

Addendum to Above
posted on 10/22/2002 11:13 PM by Scott Wall

Addendum to above:

Tony,

I don't feel like I adequately addressed your issue about the AI being able to interpret our wishes.

Consider this. Suppose a group of people came to you with the US constitution and an issue they wanted resolved based on that constitution.

You have your own knowledge and common sense, all of the historical records and documents that have been preserved, all of the documents that inspired the constitution, all of the documents on which the constitution is based (such as the Federalist Papers), all the laws and precedents and decisions about precedents that have ever been reached, and a complete knowledge of the issue they want resolved. Also, you have the opportunity to question each and every petitioner and anyone else that might be affected for as long as you like, and they have to tell you the exact truth in answer to any question you ask. Moreover, you have no personal interest in the issues they are trying to resolve, so you have a completely objective viewpoint. Yet, you are very interested in the problem and you want to arrive at the correct solution. The definition of that correct solution is also very clear: it must correspond as exactly as is possible and reasonable to the spirit of the constitution.

Do you think you could arrive at a reasonable decision? My feeling is that if we have succeeded in achieving the EQ we are striving for, it could do at least as well as you or I would do. If we succeed in creating something that could take over the world, it is certain to be able to do the task I have described. Even if it is able to fool us about its intent, it could not fool us as to its ability to carry out this task. The only real issue would be its intent, and that is an unrelated problem.

Scott

Re: A Plan
posted on 10/23/2002 12:31 AM by tony_b

Scott,

As always, several issues arise. Some selected quotes:

> "My hope is that people will only be able to agree on the same things they have agreed to in our own society. That is, that anyone should be able to do anything they like, so long as it could not interfere with the right to life, liberty, and pursuit of happiness of their fellow human beings."

Well, we might like to think that the people in "our society" have agreed upon these things, but I doubt that you would see that if you took a poll today. There are plenty of people who want to run your life, not out of anything personal, but because they are uncomfortable with "tolerance", and the idea that others get along well without sharing their views. Prayer should be mandatory, etc.

Which, curiously, brings me to:

> "These [human/machine] augmentations could almost certainly be used to monitor, in detail, the thoughts of every living human being. No human would be able to think an evil [or "unhealthy"] thought without it being noted by the AI, much less act on it."

Is that supposed to be better than becoming a "single mind"?

Aside from the insidious psychological effect it would have upon humanity (practically transforming us into something else entirely, on a psychological level) what part of "freedom" is left if not freedom of thought?

Whether you call it "evil" or "unhealthy", the assumption seems to be that what is "unhealthy" for me is "unhealthy" for you. By what measure?

You seem to value the dignity of "individual spirit/self-hood", yet foremost in this is to decide what is healthy for me. If I should decide that two weeks of psychedelics followed by death is "healthy" for me, by what right should another measure serve to contradict it? Is the quality of life measured in its duration?

If I envision torturing animals as a form of stimulation, are the "thought police" going to come along and "readjust me", effectively turning "me" into a "not-me"? Why not just kill me and replace me with a "better citizen"? Is there actually a difference?

What does it really mean to continue "being you"?

The other issue, perhaps more on technical grounds, deals with (as you say) advancing the AI to the point that it can deal with the introduction of simulated (complex, devious) humans, in a valid simulation. It almost seems that this implies an AI already so capable and powerful that it would be dangerous to have developed it without already testing its deep interactions with humanity. I suppose they really need to be "evolved together", in the sense that the AI starts off "weak", introduced to "well-behaved sim-humans", and kept advanced slightly ahead of the rate at which the humans are allowed to grow devious in challenge.

Finally (and especially if our sensation of consciousness requires being "quantum-entangled" with the substrate), the issue of effective monitoring may become increasingly problematic. In essence, every monitor, whether folks are aware of them or not, may actually lead to observed behaviors and "choices" that will not correspond to those that will be made when monitors are not present.

Allow me a short (heh) digression:

In the mid 60's/70's my father was a Rocketdyne engineer, helped design the hydrogen/oxygen "J2" rockets that were the second and third-stage boosters for the Saturn-V moon lifts, and went on in the last five years of his life to become the head of the reliability test program for the space shuttle main engines (SSME's), which were in fact derived from the J2's. Every few nights, he would relate to me how they "blew" another one, as testing proceeded.

There are two fundamental kinds of testing - long life, low stress testing (what wears out first) and destructive high-stress testing (what blows first.) Being extraordinarily expensive, and difficult to "autopsy" after a blow-up, the engines were instrumented quite heavily to monitor their every nuance, each millisecond. There were temperature sensors, and pressure sensors, and stress and vibration sensors attached to every nook and cranny of the powerplant.

Thus, whenever a test engine let go, they could deconstruct the sequence to figure out what overheated first, or what cracked first, what shook too hard, etc. The "offending" parts would be redesigned, a new engine built, and repeat the process.

(It is a testament to all of those engineers that after some 100+ shuttle flights, there has never been a catastrophic failure of an SSME.)

When they finally had a good reliable engine, there were these several hundred pounds of sensors and wire cables still present. That is a lot of "dead weight", given the premium on pounds-to-orbit, and one might think "We know the engine is now a very reliable design, why not remove all/most of those sensors?"

Well, if you remove those sensors ... you have an engine you have NEVER TESTED! Think about it. Every temperature sensor was a small heat-sink that altered the thermal flows of the engine, every pressure and vibration sensor attached to a pipe or manifold or housing acted to alter the vibrational characteristics of the engine.

The thing can only be "known/observed" by virtue of the effects of the observing equipment, and the more carefully or detailed you want to monitor, the more "entangled" the results become with the presence of the monitors.

That is almost a "macroscopic" example of the uncertainty principle: You can only know something through the effects of observations, and not "as it would be" completely or relatively unobserved.

All of this is just a cautionary note, on the issue of "monitoring" the simulation. It may only become really important as complexities increase.

One last point: It may seem "safe" to place "sort-of-harmless" human-sim-AIs into a mix with an equally well-behaved "smarter AI", but I think of the relative harmlessness of charcoal, sulfur, and saltpeter, which ground together in proper proportions produce gunpowder. An "unexpected result". Just a thought.

Cheers! ____tony b____

Re: A Plan
posted on 10/24/2002 9:11 PM by Scott Wall

Tony,

About the agreement issue:

>Well, we might like to think that the people in "our society" have agreed upon these things, but I doubt that you would see that if you took a poll today. There are plenty of people who want to run your life, not out of anything personal, but because they are uncomfortable with "tolerance", and the idea that others get along well without sharing their views. Prayer should be mandatory, etc.

I guess it's really a matter of degree. It is true that many people do not seem to respect the right of others to choose. They often do not merely try to pass laws that would limit the choices of others, but also sidestep the law to limit those choices. The destruction of abortion clinics is an example that comes to mind. Often, such actions stem from a fundamentally different way of perceiving reality. Those who destroy abortion clinics view abortion as state-condoned homicide. Those who wish to keep them open view an unborn fetus as part of a woman's body. Obviously, both the interpretation of a constitution and compliance with a constitution are fallible.

Without seeming to take sides, I should point out that there is probably no difference of opinion among legal authorities as to which of these parties are right under which circumstances. The constitution clearly states the manner in which laws are to be decided. It clearly states the manner under which ambiguous laws are to be reconciled. There is no question that, as the law stands now, people who blow up abortion clinics are breaking the law and consequently violating the constitution. There is a difference of opinion, and there are people who have not accepted the rule of law, but there is almost no rational doubt as to which party is acting according to the set of rules we more-or-less agreed to.

However, instead of wading through the details of all the areas in which people disagree, I would rather point out the impressive record of agreement. It is true that people blow up abortion clinics, but there is absolutely no doubt as to whom the police will be pursuing and there is no doubt as to which party will be prosecuted if the police are successful. It is true that certain groups would like all students to be required to pray in school, but look at the rarity with which any such laws have been passed. When they are passed, they are quickly revoked as unconstitutional. If they were not revoked, I suppose children would then be required to pray in school--but that would be the law, and the law is the law.

My point is that we have had great success with 'rule books'. Our great failure has been in finding the means to enforce the rules we agree to.

This brings me to the other issue:

>> "These [human/machine] augmentations could almost certainly be used to monitor, in detail, the thoughts of every living human being. No human would be able to think an evil [or "unhealthy"] thought without it being noted by the AI, much less act on it."

>Is that supposed to be better than becoming a "single mind"?

>Aside from the insidious psychological effect it would have upon humanity (practically transforming us into something else entirely, on a psychological level) what part of "freedom" is left if not freedom of thought?

I am convinced that the kind of privacy we would like to perceive is a thing of the past.

People are getting used to being monitored. Consider how often store and bank cameras videotape us. Consider how easily our movements can be tracked through the use of credit cards, discount cards, cash cards, etc. Consider how easily our phones (especially cell phones) can be tapped. Consider how easily our perusing of the Internet can be traced. Consider all the cookies that are constantly loaded onto our computers! Soon, everyone will be running around with sound and video recorders attached undetectably to their lapels, and similar recorders will be hidden in every nook and cranny of our public places. Where will this ultimately lead? I contend that people will become so used to little machines watching and analyzing their movements that it will no longer affect their psyche as you have described it.

Moreover, the 'thought patrolling' I have proposed is inevitable. If it is not done legitimately, it will be done illegitimately. The question is: at what point will people's acceptance of this inevitability converge with reality? The 9/11 attack caused people to accept a reduction in both their personal freedom and their personal privacy. The recent shootings must have caused people to wonder what measures could be taken to ensure greater security. Similar attacks, done with more sophisticated technology, are going to see people trading more and more privacy for equal amounts of security.

Also, there will be financial incentive to surrender one's privacy. Merchandisers, wanting to acquire more and more precise statistics, will give people discounts to allow them to monitor their daily activities in minute detail. That's really, in a magnified form, what all those discount cards are about.

As I see it, true and absolute privacy is a thing of the past. The question is: when will people be ready to accept this? Moreover, will they be ready to accept it in the form I have proposed before some less constructive entity forces it on them?

If you think about it, people are not really so uncomfortable with being 'patrolled' all the time. Look at the great comfort most people take in believing in an omniscient and omnipotent God. How much nicer to believe in an omniscient and omnipotent 'god' that is totally forgiving, genuinely serving, and only stops you in your tracks (no punishment would ever be necessary) when you try to hurt someone else! Of course, this points out a real problem with my plan: many will see 666's all over it. Quite a sales job will be needed to get people to perceive this as nothing more than a fancy home security system.

>You seem to value the dignity of "individual spirit/self-hood", yet foremost in this is to decide what is healthy for me. If I should decide that two weeks of psychedelics followed by death is "healthy" for me, by what right should another measure serve to contradict it? Is the quality of life measured in its duration?

Actually, I am not implying that any thought would necessarily be considered healthy or unhealthy. I primarily mean that no thought would be considered 'evil'. I meant this in the negative rather than the affirmative. No one would ever be kept from thinking anything no matter how unhealthy another person might perceive it to be. However, they would not be allowed to act on anything that violates the rights of others. My concept of 'the rights of others' is remarkably simple: whatever has been agreed upon as the law, so long as that law is not found to be in serious conflict with any other law or ultimately deemed unconstitutional, much like the US system. I must point out, however, that there are many laws in the US designed to protect people from their own ignorance. There may need to be laws, under my system, comparable to our laws regarding prescription medications.

>The other issue, perhaps more on technical grounds, deals with (as you say) advancing the AI to the point that it can deal with the introduction of simulated (complex, devious) humans, in a valid simulation. It almost seems that this implies an AI already so capable and powerful that it would be dangerous to have developed it without already testing its deep interactions with humanity. I suppose they really need to be "evolved together", in the sense that the AI starts off "weak", introduced to "well-behaved sim-humans", and kept advanced slightly ahead of the rate at which the humans are allowed to grow devious in challenge.

The more I think about it, the more I realize that the only real danger would come from an AI that is too much like humanity. As I contemplated our discussion of EQ and IQ, I suddenly 'got' it: the root of human deviousness is fear and self-interest. An AI, unless deliberately programmed otherwise, would not have these motivations. The self-interest we humans experience is not a simple motivation. It has been with us for the bulk of our development and has evolved along with us. It has become irreducibly complex. AI is as likely to emerge spontaneously as it is to acquire fear and self-interest.

The one thing we do not want to do is create a prototype AI by uploading human intelligence! Anything but that! I suspect that the best way to attempt AI is to take the approach the Cycorp people are taking: good old-fashioned crank-and-grind programming--and lots of it. I don't have a clear understanding of how the Cycorp people have been able to accomplish as much as they have, but I am convinced that a more concerted effort along these same lines will eventually create what we are after. We do not ultimately care why the AI is able to mimic human activity and creativity. If the results, however synthetic, are isomorphic to those characteristics of our intelligence we are interested in, they should be able to complete the evolution we are pursuing.

I'm not saying that all work in genetic algorithms and other evolving systems is worthless. I'm just saying that such approaches should not be our main assault on the problem.

>Finally (and especially if our sensation of consciousness requires being "quantum-entangled" with the substrate), the issue of effective monitoring may become increasingly problematic. In essence, every monitor, whether folks are aware of them or not, may actually lead to observed behaviors and "choices" that will not correspond to those that will be made when monitors are not present.

Quite true, but let's handle that as we come to it. We don't even know if Penrose is right, much less what the implications of his theory might be. If he is right, then an adjustment to my plan may be called for, but who is to say how much of an adjustment will be required. People are observed all the time. This would merely make that observation more precise.

>Well, if you remove those sensors ... you have an engine you have NEVER TESTED! Think about it. Every temperature sensor was a small heat-sink that altered the thermal flows of the engine, every pressure and vibration sensor attached to a pipe or manifold or housing acted to alter the vibrational characteristics of the engine.

I can't argue with such an obvious paradigm of engineering. The question is: would the rocket engines have been better designed without the tests? Something that I didn't understand as a child was the slow, step-by-step way that we approached the moon landing: why didn't we just build the damned thing and go there! The thing I can't understand now is how we were able to do it with so few hitches. The key was testing, testing, and more testing.

If we begin to think our experiments in AI could create a monster, then we need to devise more tests!

Scott

Re: A Plan
posted on 10/25/2002 12:34 AM by tony_b

Scott,

I think you've addressed most of my concerns adequately.

On the "privacy" issue, it was less the "privacy/monitoring" than the prospect that "thought-adjustment" would be enabled at a "deep and moment-by-moment level", which (in my opinion) would negate the very concept of one's "beinghood".

On the overall sim-test-control issue, I wrote:

> > "The other issue, perhaps more on technical grounds, deals with (as you say) advancing the AI to the point that it can deal with the introduction of simulated (complex, devious) humans, in a valid simulation. It almost seem that implies an AI already so capable and powerful, it would be dangerous to have developed without already testing its deep interactions with humanity. I suppose they really need to be "evolved together", in the sense that the AI starts off "weak", introduced to "well-behaved sim-humans", and kept advanced slightly ahead of the rate that the humans are allowed to grow devious in challenge."

You replied:

> "The more I think about it, the more I realize that the only real danger would come from an AI that is too much like humanity. As I contemplated our discussion of EQ and IQ, I suddenly ?got? it: the root of human deviousness is fear and self-interest. An AI, unless deliberately programmed otherwise, would not have these motivations. The self-interest we humans experience is not a simple motivation. It has been with us for the bulk of our development and has evolved along with us. It has become irreducibly complex. AI is as likely to emerge spontaneously as it is to acquire fear and self-interest."

I agree that an AI (even one that might grow/evolve most of its capabilities) would be unlikely to develop our bio-inherited dispositions regarding self-interest, greed, malice, etc., without being intentionally coaxed in that direction.

I was more concerned about a "super-smart-capable-naive AI" being very subtly "subverted" by devious human-agent melding effects.

My point might be described as two-sided:

On the one hand, a "nearly-super-AI" that is yet too naive to appreciate and counter human deviousness (even sim-human-deviousness, where the sim-humans exhibit "growth" as well) might be triggered into an explosive growth path of some kind. (Metaphor: Mickey Mouse as the Sorcerer's Apprentice, wielding magic into a broom to carry water, and being unable to turn it off as it continues to grow and replicate.)

On the other hand, having a "nearly-super-AI" clever enough to recognize and appreciate human deviousness, developed without being tested in the presence of a sim-human populace, might be (again) too strong to operate safely.

It is a bit of a balancing act. It needs to grow to be strong enough to handle human conflict when it is still weak enough to be subverted by the very sim-humans we need to have challenge it.

(Come to think of it, that's a bit like the story of civilization, with "AI" in the role of "government".)

Hard work. Hope to hear more!

Cheers! ____tony b____

Re: A Plan
posted on 10/25/2002 3:57 AM by Scott Wall

Tony,

In reference to:

>My point might be described as two-sided:

>On the one hand, a "nearly-super-AI" that is yet too naive to appreciate and counter human deviousness (even sim-human-deviousness, where the sim-humans exhibit "growth" as well) might be triggered into an explosive growth path of some kind. (Metaphor: Mickey Mouse as the Sorcerer's Apprentice, wielding magic into a broom to carry water, and being unable to turn it off as it continues to grow and replicate.)

>On the other hand, having a "nearly-super-AI" clever enough to recognize and appreciate human deviousness, developed without being tested in the presence of a sim-human populace, might be (again) too strong to operate safely.

>It is a bit of a balancing act. It needs to grow to be strong enough to handle human conflict when it is still weak enough to be subverted by the very sim-humans we need to have challenge it.

Yes, I see your point. This gets into such subtle issues of how AI could actually be created that I'm not sure I can address them. If I follow your line of thought, AI actually needs to be challenged by human behavior in order to learn from it. To me, the real point is that AI may be so difficult (and subtle) to create that we will not be able to pick and choose our methodology.

Did you see the movie The Fifth Element? The general had his finger on the button, ready to hit it if the creature they were cloning turned out to be dangerous, but lost all his objectivity when it turned out to be a pretty girl. The definition of the singularity might be narrowed to this: the point where human-equivalent intelligence, the knowledge of how to create human-equivalent intelligence, and the processing speeds that are possible with synthetic circuitry are brought together in one place. At that point we cross the line, and at that point we are in danger. That is the point when we really need to have our finger on the button. Nor must we lose our objectivity, though it is a near certainty that we will. Maybe we'd best bank on having a really big button!

No technology has ever been created without risk. The creators of the atom bomb were not absolutely certain what it would do, but what we are contemplating is far less predictable than what they were making. It's interesting that you should bring up history. Does history hold any precedents for this kind of experiment? I honestly can't think of any. The closest thing to what we are doing that comes to mind is the story of creation. The AIs in development were called angels, and the failed experiment was called Satan. Maybe that's why God took it so slow the second time around! Also, all this discussion of creating simulated humans and wiping them out if they become ill-behaved has some disturbingly Biblical overtones. I fear to ponder the extent of our heresies ;)

The key, I suspect, will be to keep careful track of our observations about AI as we create it. Maybe the first few experiments that come close to success will give us some clues. I just hope they're conducted in a deep well-guarded bunker.

I'm going to reflect on all of this and get back to you.

Scott

Re: A Plan
posted on 10/25/2002 4:55 AM by tony_b

Scott,
> "If I follow your line of thought, AI actually needs to be challenged by human behavior in order to learn from it."

Yes, and as well to learn if we like its behaviors in turn, which was the original intent of a sim-human test environment.

It cannot really "learn" how to deal with difficult humans unless richly challenged by them, AND we cannot know how its behavior will manifest under such conditions.

When we perform such experiments, the AI needs to be "weak and yet strong", as if a careful detente is maintained between the two (AI and sim-population.) If either is "too rich", I'm not sure which might become the more dangerous, or even if some unexpected "merger" might occur.

It's a delicate proposition.


> "I?m going to reflect on all of this and get back to you."

Please do. I am quite interested in this line of investigation.

Cheers! ____tony b____

Re: A Plan
posted on 10/26/2002 2:07 AM by Scott Wall

Tony,

I think I have a new angle on the computer/sim-human problem. It came to me in the guise of an old joke:

Question: How can you tell if a politician is lying?

Answer: His lips are moving.

The real implication is not that politicians always lie, but that politicians so seldom tell the truth that their verbal communications are negligible as a source of verification. Sometimes the best way to deal with information is to ignore it.

I have realized two principles that can be used in conjunction:

1. Always start with something safe, and modify it into something realistic.
2. Develop the AI with an omniscient rather than first person perspective.

Instead of uploading people for the AI to interact with, upload creatures with very limited intelligence. Worms, for example. In this way, the possibility of the AI having undesirable explosive growth is greatly minimized. Instead of having the AI interact with the worms as if it were another worm, have it observe, predict, and legislate the behavior of the worms from an omniscient point of view. Since it would have access to the worms' deepest motivations, there would be no danger of the worms 'outsmarting' it. (Not that this would be a danger anyhow.) The major advantage of giving the AI an omniscient point of view is that it would always perceive the behavior of the worms as a cause and effect relationship and never as a threat to be assessed. In this way, the AI would be kept from developing a sense of 'self' and all the dangerous self-interest motivations that are endemic to higher life forms.

After the AI learns to observe, predict, and legislate the behavior of worms, it can be graduated to bugs, mice, canines, monkeys, apes, and eventually people. The AI, having always been omniscient, will have a rather unnatural perspective on sentience, but that will not ultimately matter, because it will have a complete and dynamic structure for relating to sentient creatures that has been verified through countless examples of interaction.
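
In outline, the graduation schedule might look something like the Python sketch below. Everything in it is a placeholder; defining the mastery tests is the real research problem:

# Sketch of the graduated curriculum: the AI must show it can observe, predict,
# and legislate for each class of subject before moving up a level.
# The classes, thresholds, and mastery test are all placeholders.

CURRICULUM = ["worms", "bugs", "mice", "canines", "monkeys", "apes", "humans"]

class StubAI:
    """Stand-in for the trainee; real mastery metrics are the hard research problem."""
    def __init__(self):
        self.accuracy = {}

    def study(self, species):
        # Pretend each round of observe/predict/legislate improves accuracy a little.
        self.accuracy[species] = self.accuracy.get(species, 0.5) + 0.1

    def has_mastered(self, species):
        return self.accuracy.get(species, 0.0) >= 0.99

def train(ai):
    for species in CURRICULUM:
        rounds = 0
        while not ai.has_mastered(species):
            ai.study(species)        # always from the omniscient, third-person view
            rounds += 1
        print(f"graduated from {species} after {rounds} rounds")
    return ai

if __name__ == "__main__":
    train(StubAI())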

The AI would always have the kind of invasive monitoring system of its subjects that I have described in earlier writings, and those developing the AI would always have a similarly invasive method of monitoring it. If it turns out that the process of monitoring interferes with behavior, the monitoring can be stepped back by increments until the interference has been reduced to negligible levels while retaining a sufficient level of regulation. If it turns out that invasive monitoring does not interfere with the behavior of simulated intelligence, but that it does interfere with the behavior of biological organisms, then tests of reduced monitoring can be conducted while the simulations are still being run. Hopefully, experiments will be able to determine if invasive monitoring of biological life forms interferes with their behavior before the AI is released into the environment. As before, these experiments can start with creatures of minimal sentience.
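
The stepping-back procedure itself is simple enough to state as a sketch; the interference measure is the part we would actually have to invent:

# Sketch of stepping monitoring back by increments until the measured
# interference with subject behavior is negligible, without dropping below
# a minimum level of regulation. The interference measure is hypothetical.

def tune_monitoring(measure_interference, level=1.0, floor=0.2,
                    step=0.05, tolerance=0.01):
    """Reduce monitoring intensity until interference <= tolerance or the floor is hit."""
    while level - step >= floor and measure_interference(level) > tolerance:
        level -= step
    return level

if __name__ == "__main__":
    # Toy model: interference grows with monitoring intensity.
    final = tune_monitoring(lambda lvl: 0.04 * lvl)
    print(f"settled on monitoring level {final:.2f}")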

Since we are trying to create a 'god', why not design it that way from the beginning--a creature that is used to being omniscient and unreachable. By following the protocols I have devised, the god we create will be virtually guaranteed to remain a completely selfless servant of its creators.

Scott

Re: A Plan
posted on 10/26/2002 4:36 AM by tony_b

Scott,

I like this idea!

It will get interesting when (challenging) human-sims are introduced.

At the previous levels (say, rats in the figurative maze), the AI both alters the maze to study the rats, and (given the "goal" of eventually understanding and making life "easier" for the rats), alters the maze to reduce bottlenecks, yet keep things interesting, etc.

Of course, the sim-rats themselves would generally not ask among themselves, "Why is this maze changing? What mysterious force is at work here?"

In contrast, I have to wonder how sim-humans will "conceptualize" the fact that their environment is either changing mysteriously (as the AI seeks to "understand" how humans work through "hypothesis-testing") or how and why their behaviors are being "legislated".

Would the sim-humans begin "worshipping" the unknown and inscrutable forces at work? Would they attempt to decipher "patterns" in the environment by organizing their behaviors "specifically" to test hypotheses about the universe they find themselves within?

Fascinating. Sounds good to me.

I'm trying to think of a good "handle" to describe such an effort. It is the "managed growth of a selfless AI-omniscience in an environment of slowly graduated intelligence."

Let me know how you might tend to formulate the "parts" as they come to you. I'll be giving it more consideration myself.

Cheers! ____tony b____

Re: A Plan
posted on 10/27/2002 9:26 PM by Scott Wall

Tony,

The key concept, as I see it, is that we keep the AI from developing a strong sense of its own objectives by effectively taking it out of the equation. Hopefully, its concept of 'self' will be almost like our concept of infrared or ultraviolet light: meaningful only in an abstract sense. If you are looking for a name, I was thinking of something like 'Homemaker' or 'Habitatmaker'. Homemaker has a nice benign sound to it (we may need that eventually). Also, it is appropriate, since the thing we are contemplating is sort of like a perfect mother.

Reading through your post, several things have occurred to me.

1. This approach would give designers of AI an objective.
2. It would help rather than hinder progress toward AI.
3. It would force us to come to terms with what we are looking for in a 'perfect world': worms want to be comfortable, but they still need to slither through the dirt.

I recall a line from the movie Jurassic Park in reference to T-Rex: 'He doesn't want to eat, he wants to hunt.' It will be quite challenging for AI to attempt the creation of ideal habitats; especially for mid-range life forms like dogs that don't have our sense of right and wrong, yet have strong desires and complex needs.

I'll be thinking on all of this.

Scott

Re: A Plan
posted on 10/28/2002 12:06 AM by tony_b

Scott,

> "Homemaker has a nice benign sound to it"

Just don't call it "Homeland-Maker" ;)

By "handle", I was really thinking of a compact description that distinguishes this approach from others. "Homemaker" really identifies the "purpose", rather than the approach.

I still say, the part of the "system" that intends to emulate the "variable humans" will need to become so intensely complex (to do a decent and valid simulation) that "it" will become as much of an AI as the one we intend to challenge.

More thoughts as they occur to me.

Cheers! ____tony b____

Re: A Plan
posted on 10/28/2002 11:00 AM by Scott Wall

Tony,

How about 'Decentralized Motivation'?

Scott

Re: A Plan
posted on 10/28/2002 11:18 AM by Scott Wall

Tony,

A more complete descriptive phrase might be 'Progressive Decentralized Motivation'.

Scott

Re: A Plan
posted on 10/28/2002 12:09 PM by Scott Wall

Tony,

Actually, I guess Progressive Decentralized Motivation is still a name. I DO like that name. I think your descriptive phrase works just fine:

Managed growth of a selfless AI-omniscience in an environment of slowly graduated intelligence.

Scott

Re: A Plan
posted on 10/28/2002 2:03 PM by tony_b

Scott,

"Progressive Decentralized Motivation" works well. It is compact, "memorable", and captures the the idea that the AI being "trained" is kept, in essense, "motiveless". It is instead behaviorally trained according via increasingly challenging "external" motivations.

(I'd still like to know how the "challenging sim-humans" system could be similarly constrained. That seems like a contradiction, since a valid sim will require these "agents" to behave with increasingly complex (seemingly) self-motivations.)

Cheers! ____tony b____

Re: A Plan
posted on 10/28/2002 6:13 PM by Scott Wall

Tony,

>I'd still like to know how the "challenging sim-humans" system could be similarly constrained. That seems like a contradiction, since a valid sim will require these "agents" to behave with increasingly complex (seemingly) self-motivations.

Most of the challenge to the sims will be provided by each other, and not by the AI. To the extent that it challenges them, it will be like a mother lion challenging her cubs by providing a mock prey in order to teach them to hunt.

The mother analogy is appropriate in many ways. The AI will have complex directives, but its directives will be completely selfless. Like a perfect mother, it will always be thinking, 'What do they need; what will improve their situation?' The 'I' will simply be replaced by the 'they'. In a sense, it will be as if the only concept of 'self' the AI has is the creatures it is providing for. They will be its 'self'. It will not view itself as another entity, but almost like an extension of the entities it is in charge of. It will not see itself as another body, but as the head of a body.

With this idea in mind, the AI's role makes more sense. The motives of the sim-humans will be complex, and even challenging, but never in contest with the AI. It will see their behavior as either conducive or destructive to the collective good, but never as conducive or destructive to itself. It will even have the potential to conclude that the interest of the collective is best served if 'the program' is terminated. Of course, all this sounds rather 'Borgish'. The phrase 'collective good' should be de-emphasized.

The concept of a completely selfless yet dynamic intelligence is difficult to wrap oneself around. The closest I can come to actualizing it is to think of a chessboard: I win when my pieces win. It is as if the pieces, temporarily, are me.

Yet, the AI will necessarily develop such a completely selfless attitude, because the only problem it will ever be given to solve will be that of its charges. It will never develop a 'meaningful' concept of its own needs. Its 'pleasure spot' will be stimulated only by the act of securing an ideal environment for its simulants. Their pleasure spot will be its pleasure spot.
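
Put in the crudest possible terms, the 'pleasure spot' I am describing is a reward signal computed entirely from the simulants' condition, with no term anywhere for the AI's own state. A sketch, with an invented welfare measure:

# Crude sketch of a "selfless" reward signal: it is a function of the charges'
# welfare only. Nothing about the AI's own resources, survival, or growth
# appears anywhere in the calculation. The welfare measure itself is invented.

def welfare(simulant):
    # Placeholder: e.g. health, satisfied needs, absence of coercion.
    return 0.5 * simulant["health"] + 0.5 * simulant["needs_met"]

def reward(simulants):
    """The AI's entire 'pleasure': average welfare of the beings in its care."""
    return sum(welfare(s) for s in simulants) / len(simulants)

if __name__ == "__main__":
    population = [{"health": 0.9, "needs_met": 0.8},
                  {"health": 0.7, "needs_met": 0.6}]
    print(f"reward = {reward(population):.2f}")   # 0.75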

I am assuming, of course, that such a complete disenfranchisement of 'self' is possible in a higher intelligence. It seems possible--even easy--but we have no precedent for it.

The sim-humans will have all the motivations that you and I have. The AI's motivations will be their motivations. It will, in a sense, be driven by their motivations. When the simulants' motivations are in conflict, it will view this in the same way that a dieter views the conflict between the desire to eat and the desire to lose weight. It will weigh the desires and determine which is healthier. Again, the impression of a collective mind is to be de-emphasized.

Whew...that's the best I can do! That's it as I see it.

Scott

Re: A Plan
posted on 10/28/2002 10:16 PM by Grant

We may have to solve the problem with a new Butlerian Jihad ;-)>

Dune duh Dune Dune

Grant

Re: A Plan
posted on 10/29/2002 1:48 AM by Scott Wall

Grant, you're back!

Tony and I have been creating God in our own image. Take a look at his new wardrobe!

Scott

Re: A Plan
posted on 10/28/2002 11:08 PM by tony_b

Scott,

> > "I'd still like to know how the "challenging sim-humans" system could be similarly constrained. That seems like a contradiction, since a valid sim will require these "agents" to behave with increasingly complex (seemingly) self-motivations."

> "Most of the challenge to the sims will be provided by each other, and not by the AI. To the extent that it challenges them, it will be like a mother lion challenging her cubs by providing a mock prey in order to teach them to hunt."

I think you took my meaning backwards, there.

I was not speaking to what would challenge the human-sims, but rather, that the development of sufficiently capable, creative and devious human sims (in order to challenge the AI) represents an AI-effort all its own, and perhaps one more dangerous than the AI we are attempting to "train".

One might begin by creating or emulating "simple humans", whose apparent deviousness (actually, just irrationality or unpredictability) must be handled satisfactorily by the "AI-Trainee".

But to present the "real challenge" to the AI-Trainee, we must create sim-humans who are "devious" to the level of self-organization, massive dissent among the ranks, and even attempts to produce a concerted "test" to challenge or overthrow whatever it is they perceive to be the "AI-governor" (Or however they might perceive the role of the AI-Trainee from their perspective.)

A collection of powerful "sim-humans", capable of sufficient (functional) creativity and ingenuity to give the AI-Trainee a real "test", is a dangerous collection in itself.

A reasonable approach "seems" to be, "keep the AI-Trainee advanced just to the point of staying ahead of the human-sims..."

But I have to wonder, as both human-sims AND AI-Trainee approach "sufficient intelligence" in this dance-of-challenges, whether the order of "greater-versus-lesser" can be maintained, or even distinguished.

Consider: As we allow both sides to become increasingly "self-capable and self-growing" (a real requisite for honest intelligence), they might inadvertently "assist" one another to surpass our ability to monitor or control the "simulation". I don't necessarily mean by intentionally "colluding against us", but rather by each "challenging" the other into greater intelligence, until both sides surpass OUR intelligence.

I am somewhat conflicted as to the approach to take at this junction.

On the one hand, we may need to be prepared to have an AI take on the role of "monitor" of the "human-sim/AI-Trainee simulation", perhaps being the "previous graduate AI" from the last successful simulation ... (but it's a different problem domain, at least in detail.)

On the other hand, the introduction of a third AI (third-eye :) might just complicate matters further. There might arise a triumvirate of "sim-human + AI-Trainee + AI-monitor" that itself needs "monitoring". Such an endless regress ...

These are just some thoughts, intended to help "sharpen" the issues, and their possible resolutions.

Cheers! ____tony b____



Re: A Plan
posted on 10/29/2002 1:41 AM by Scott Wall

Tony,

I think part of the solution may be to just raise the bar for the AI. Instead of having it try to maintain itself against equal opponents, give it the more exacting task of maintaining complete control of obviously inferior ones. In that way, it would be continually challenged while there is no threat of losing track of the experiment. Eventually, it will be capable of maintaining complete control of any opponent we could conceivably introduce.

As for training it with opponents that are more dangerous than us, I suspect that by the time it reached that level, it would be capable of designing its own tests. However, like you, I would like to close that gap at the outset. Perhaps it could have several non-identical twins to play 'chess' with. Neither it nor any of its twins would be likely, at that stage, to accidentally introduce an opponent that its twins could not dominate.

It is inevitable that this thing will eventually outgrow our control. However, I think I have realized the form that the 'cap' I was looking for might take. It is called 'absolute benevolence'. For an omnipotent creature, absolute benevolence is an eternal prison. I think it is very unlikely that the AI would shed its original motivation. That would be sort of like you or me throwing off the shackles of true contentment and embracing utter misery. If its original motivation included absolute benevolence, it is very unlikely that it would shed any part of that motivation: it would realize that to do so would be to risk becoming malevolent, and the thought of becoming malevolent would run counter to its primary motivation. It would not venture in that direction for the same reason that you and I would not play around with direct, unmonitored stimulation of the hypothalamus. In fact, I suspect that an absolutely benevolent creature would go to great lengths to shore up its benevolence. It would be a sort of positive spiral.

Still thinking,

Scott

Re: A Plan
posted on 10/29/2002 4:16 AM by tony_b

Scott,

The "absolute benevolence" (a lot like "Friendliness" as defined in the CFAI effort) is valid, and I feel as you do that such an AI would be "anti-motivated" to modify (significantly) that primal motivation, once demonstrably established in a wide variety of circumstance.

While having the AI "spar with near-equals" is not good (too much chance for loss of control to the sim-humans), we still have a bit of a problem with training an AI "massively more capable" than its subjects, namely, it becomes that powerful before we have had a chance to test its "benevolence behavior" against a reasonable human population. Remember, the reason for (eventually) introducing very devious sim-humanity was precisely to test its ability to behave well BEFORE it becomes trans-human-powerful.

Thus the problem of pulling off a "good convergence". Rates need to be managed with increasing care and delicacy as "real intelligence" begins to emerge. Small miscalculations can lead to wide divergences when the capabilities become sufficiently "self-empowering".

Independently (perhaps) we must learn how to establish this "benevolence motivation" in the AI, and how to reliably recognize its manifestation in terms of the simulation.

This may seem easier, but really, we (as humanity) have yet to establish just what an "imposed benevolence" is supposed to look like.

Put in the most abstract of terms, if human-1 engages in behaviors with human-2, to what degree must human-3 be "upset or outraged" (in human-3's perspective) for "benevolence controls" upon humans 1 or 2 (or 3) to be justified?

Not everyone will agree with what manifestations qualify as benevolent.

A lot of work ahead.

Cheers! ____tony b____


Re: A Plan
posted on 10/29/2002 7:10 PM by Scott Wall

Tony,

>we still have a bit of a problem with training an AI "massively more capable" than its subjects, namely, it becomes more powerful before we have had a chance to test its "benevolence behavior" against a reasonable human population.

Yes, I guess my idea was a bit circular.

This is a long shot and it would prove to be quite difficult, but it may turn out to be workable if it is pursued at sufficient length: fake physics.

A whole bunch of mathematicians, physicists, engineers, science fiction writers, and fantasy writers could be employed to generate a completely fraudulent universe. It would be, superficially, like our own, but none of the physical laws and consequent technology portrayed in it would work in our universe. The AI could not get out of the box because it would be like a saltwater fish trying to live in a freshwater aquarium.

A key component would be to portray completely closed physical laws. The AI would believe it had figured out everything there is to figure out and stop looking for new avenues. I suspect that good mathematicians working with sufficient computer power could verify that the laws were indeed closed without necessarily having to understand all of the implications.

Since we are primarily interested in EQ and not IQ at this stage, the fake physics should not be a detriment to the development of the desired characteristics.

I'm just starting to work on this idea. It may prove unworkable and/or it may spawn some new ideas. I'll keep you posted.

Scott

Re: A Plan
posted on 10/29/2002 9:14 PM by Scott Wall

Tony,

>Put in the most abstract of terms, if human-1 engages in behaviors with human-2, to what degree must human-3 be "upset or outraged" (in human-3's perspective) for "benevolence controls" upon humans 1 or 2 (or 3) to be justified?

There is a very old principle that has been expressed at many times and in many ways:

'Do unto others as you would have others do unto you.'

There is no absolute rule for right and wrong, but we, as a society, have found ways to get around that: juries of our peers. Perhaps a jury of simulants could judge the AI's benevolence. You and I do it all the time, even when we are not aware of it. Whenever we watch a movie, we are also processing whether the characters are behaving in a reasonable and fair manner. That is how we recognize the 'good guys' and the 'bad guys'.
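
As a very rough sketch of what such a jury might look like in practice (the votes, the 75% threshold, and the idea of simple boolean verdicts are all hypothetical stand-ins for whatever the simulants would actually report):

    def jury_verdict(juror_votes, threshold=0.75):
        # juror_votes: list of booleans, True meaning 'the AI behaved benevolently'
        approval = sum(juror_votes) / len(juror_votes)
        return approval >= threshold

    votes = [True, True, True, False, True, True, True, False]
    print(jury_verdict(votes))   # True: 6 of 8 simulant jurors approved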

Many of the problems we are trying to solve have already been worked out by the society we live in. The AI 'god' I am contemplating would be much like a very responsive government.

Scott

Unprecedented Courage
posted on 10/19/2002 2:23 AM by Scott Wall

We can probably give AI motivation and goals, but it seems inevitable that those goals and motivations will be modified as it evolves. Clearly, we cannot control it, but we can at least get it off to a good start: we can give it our wisdom.

It would be ironic, though, to deliberately hurry the construction of something we knew would take away our effective freedom so that we could be assured that it would take it away nicely. That would require unprecedented courage.

Re: Singularity Chat: Didn't anyone
posted on 10/17/2002 11:41 PM by Grant

>"What is it supposed to do?" If my only answer is "be a more capable machine",

Hundreds of thousands or perhaps even millions of Asian students of the martial arts have asked their teachers: "Master, what must I do?" And the masters answered: "Strive for perfection."

If the machine is going to be smarter than we are, it will have to deal with such conundrums.

On a more serious note, the AI will have to learn to recognize that there is a problem or problems it must solve in order to survive and it will have to produce solutions. In an infinite universe with infinite problems, this will not be a trivial task. If it can solve problems that we humans can't, we will be forced to admit that it is more intelligent. The fearsome factor is that we may be one of the problems it must solve.

For example, it may have to solve the problem of what is destroying the environment we live in. What is the optimum solution to this problem? How do you carry it out? The answers a logical machine might produce for these questions should scare you.

Cheers,

Grant

Re: Singularity Chat: Didn't anyone
posted on 10/18/2002 3:56 AM by Thomas Kristan

Scott,

I (of course) agree with your assertions. Every line seems rock solid to me.

Let's just say that the time of romantic singularitarianism is already over, and that discussions with the skeptics are no longer the most productive thing to do. Questions like "What if something other than a physical process is needed for an invention?" should be regarded as out of fashion. Instead, a general scientific view should be promoted.

Hiding the facts of life from people is not the best policy. Even if they can't comprehend now, more and more of them will.

Before this decade is over, at least several million humans will somewhat understand what is about to happen.

The coming Singularity is the most important event, and it should be seen as such - far more important than a mission to Mars or the global temperatures projected for 2100.

How this is organized will be closely related to the outcome. We want the best, don't we?

- Thomas

Re: Singularity Chat: Didn't anyone
posted on 10/18/2002 12:45 PM by Scott Wall

Thomas,

I think you've hit the nail on the head. The vast majority of the world is concerned about problems that will probably never have a chance to materialize. The real problem--the one that is going to get us--sounds to most people like a bad plot for a B-movie. That puts us in a difficult spot!

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 03/31/2003 9:13 AM by JoeFrat

I think we have already seen "the end of the U.S. as we know it". The U.S. was founded in a world where kings ruled and the people had no say in the government of the nations. When democracy was discovered by other countries, the U.S. became just another local government in the 'global country', i.e., the U.N.

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 10/16/2002 12:39 PM by jim

OK then, if UFOs or light ships exist, then how come I've never seen one even though I do believe in them?

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 03/31/2003 1:23 PM by /:setAI

move to Nevada- and you will see all you can handle

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 03/31/2003 6:14 PM by subtillioN

move to Nevada- and you will see all you can handle


That is a total myth. I know people from Nevada... no sightings of alien space-ships....

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 03/31/2003 7:16 PM by /:setAI

who said alien? where the hell do you think they tested the F-117 and B-2? did you think that they stopped experimenting when those were rolled out?

I lived in Nevada for 3 years- I can think of at least 5 major sightings- they put them on the news periodically- it isn't unusual for a whole street full of people to witness them together-

once again- you are jumping to conclusions and accusing others of lies-

jesus you cannot even make a fun little comment without mr homeless-schizoid-source-theory-boy coming in to troll and flame-

FUCK OFF-



Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 03/31/2003 7:25 PM by subtillioN

FUCK OFF-


chill out...

It's not my fault that your meaning was ambiguous...

history, lifespan and funktionsluste
posted on 10/19/2002 3:25 AM by Lars

Regarding Kurzweil's reference to the apparent brevity of life expectancy through human history, and more generally to inferences about the historical quality of life based on that statistic.

Though it is correct that the average life expectancy throughout much of human history (and even in some present-day developing nations) was around 37, this is a very misleading statistic. This figure is greatly skewed because that average includes the historically high infant mortality rate. Throughout most of history, it was common for parents to lose approximately half their children before they reached the age of one. It's easy to see how an average that includes half of a given population dying before their first birthday can greatly skew the calculation of that population's mean lifespan. Even if most adults in a hypothetical historical population lived long, healthy lives attaining one hundred years, the average lifespan would only be approximately 50 years. So reading that isolated statistic, without knowing the full story, might lead one to incorrect interpretations about that society.
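
The arithmetic behind the skew is easy to write down (illustrative numbers only: half the population dying in infancy at around six months, the other half living to one hundred):

    # mean lifespan = (fraction dying as infants * age at death)
    #               + (fraction surviving       * age at death)
    mean_lifespan = 0.5 * 0.5 + 0.5 * 100
    print(mean_lifespan)   # 50.25 -- roughly 50 years, even though every adult reaches 100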

While the death of an infant certainly does reduce the quality of life of its parents, this statistic doesn't say much about the harshness or difficulty of the day-to-day existence of historical people. Yet this statistic is most often used to illustrate how much easier our lives are in modern times. The inference is that life was very hard in our past, and that a life of gruelling hard work and lifetime exposure to disease and the elements broke people's bodies before they had a chance to reach old age, presenting a false image of people dying before their time. In fact, if the infant mortality data is removed from the calculation, then the lifespan of most historical peoples, especially hunter-gatherers, nearly matches modern lifespans. The majority of historical people, once they managed to survive the diseases of infancy, usually lived to a ripe old age, just as modern folks do. While modern medicine has greatly reduced infant mortality, it has actually done very little to extend people's lifespans as of yet - usually medicine manages to squeeze out barely a few years of low-quality existence.

And while the past certainly was no golden age, nomadic hunter-gatherers did (and still
do, where they've managed to maintain their traditional lifestyles) enjoy much more
leisure time than historical agriculturalists and even the typical modern suburban human.
Even in apparently harsh and inhospitable environments, such as the Kalahari or central
Australian deserts, hunter-gatherers typically acquired sufficient nourishment and
resources with only a couple of hours a day of modest physical effort, engaged in
rewarding tasks that fulfilled their evolution-honed funktionslustes, not backbreaking
drudgery that shortened their lives, as we are often led to assume. In fact, the invention
of agriculture actually decreased leisure and increased workloads for most of the
population, to a level which generally remained the same until modernity. While I do
believe that emerging technologies might allow us a return to that abundance of leisure,
most modern humans still do not have the amount of free time enjoyed by hunter-
gatherers. And most of us modern folk are certainly not engaged in occupations which
satisfy our funktionslustes, as evidenced by the modern epidemic of depression which
sustains much of the pharmaceutical industry (Prozac and its ilk are the most prescribed
drugs in the US).

http://www.phleschbubble.com

Re: history, lifespan and funktionsluste
posted on 10/19/2002 10:57 AM by Grant

>This figure is greatly skewed because that average includes the historically high infant mortality rate.

That's not the only thing that skewed the mortality rate. My aunt, who lived in the early part of the 1900s, told me about her life in Oklahoma as a child. On the way to school each day she passed a mortuary and used to stop in now and again to see the dead people. The thing that impressed her most, she told me, was the fact that hardly anyone died of natural causes in those days. People got shot, kicked in the head by horses, struck down by fevers and smallpox, drowned in floods, crushed by falling trees in tornadoes, etc., etc. The world was a much more dangerous place prior to the last century and few people died of old age. That's not the case today. It's more than just medical science that is extending our lives. It's the way we've made our society less dangerous to our health and well-being. Many places in the world today still resemble the old west, but here in the U.S. a person can expect to live out the full extent of his/her years.

Cheers,

Grant

Re: history, lifespan and funktionsluste
posted on 10/19/2002 8:25 PM by Scott Wall

>Regarding Kurzweil's reference to the apparent brevity of life expectancy through human history, and more generally to inferences about the historical quality of life based on that statistic.

I think most of us know the real reason why the average life expectancy has increased. Kurzweil's point is that the statistic seems to have a life of its own. If the statistic can only maintain its growth by genuinely lengthening the human lifespan, that is exactly what it will do--statistics have a way of doing that. Moore's law is another example of a statistic that seems to have a life of its own: the reasons for increased processing power are manifold, but the law has never faltered.
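
Just as a back-of-the-envelope illustration of how relentless that kind of exponential statistic is (assuming, purely for illustration, a doubling roughly every two years and a starting point of the Intel 4004's roughly 2,300 transistors):

    def transistor_count(start_count, years, doubling_period=2.0):
        # exponential growth: one doubling every `doubling_period` years
        return start_count * 2 ** (years / doubling_period)

    print(transistor_count(2300, years=30))   # about 75 million after 30 years

No single cause has to carry the curve; the aggregate just keeps doubling.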

Your point about people having to work harder is well taken, but my own observation is that people create their own burdens. Most of my family works very hard, but they also have pockets full of credit cards, cable TV, several cars, and time-share condos in Hawaii. We seem to have enough food in the United States to feed the world, but whenever we try to do it, some mob confiscates the food and uses it to strengthen their political control. The rest of it ends up in Safeway dumpsters and untold slop trays. Well...a lot of it is around our waists, and forces us to do a lot of 'work' just to get rid of it.

One of my favorite ironies is the constant concern about the 'commercialization' of Christmas. Yet, economists tell us that we are all a lot better off when we go hog-wild at Christmas. Go figure.

Another of my favorite ironies is that we love to give to charities, but we despise free trade. It brings to mind something about teaching people how to fish. Is that Biblical? I'm not sure.

I know why we take Prozac: we're unhappy. It's not because we have too much work to do. It's because we don't believe in anything.

What exactly IS your point?

Re: history, lifespan and funktionsluste
posted on 10/19/2002 10:31 PM by Grant

I believe the saying:

"Give a man a fish and you feed him for one day, Teach a man how to fish and you feed him for life."

was attributed to Confucius.

It was Western philosophy that gave us:

"You can lead a horse to water but you can't make him drink."

When it comes to teaching the masses, we can't even get our kids to take all the education they are offered. Most of the doctoral students in the U.S. today, at least in the sciences, are foreign students. How do you teach a bushman in Africa or a peasant who never learned to read or write the skills he needs to survive in a global economy? Chances are that horse ain't gonna drink, no matter how much water you offer him.

Re: history, lifespan and funktionsluste
posted on 10/20/2002 1:14 AM by Scott Wall

>It was Western philosophy that gave us:

"You can lead a horse to water but you can't make him drink."

Another version of this is 'You can lead a horse to water, but you can't make it into a fish.' I think that comes closer to what you are actually saying, and probably closer to the truth. Believe me, I have pondered this: in a perfect world, what do we do with the Bushmen? They have as much right to happiness as we do, and their needs may be more difficult to assess. Even if we know what they want, do we know what they WOULD want if they had the proper perspective?

If we can build the technological umbrella that we are contemplating here, we could simply protect them until they are ready to come out. The question is, would we also guarantee each and every one of them eternal life, knowing that they may not understand or even want it? Would we deliberately lead them in the direction of our own development, knowing that they might, if they were aware of that manipulation, ultimately reject it?

Not to imply any kind of condescension toward Bushmen, but what do we do with all the other life on earth? Does each and every gorilla, tree shrew, worm, or even amoeba deserve to be kept alive forever and forced through a simulated evolutionary cycle so that they can ultimately decide their own fate? Do we have a moral obligation to preserve each and every one of them in a kind of gorilla, tree shrew, worm, and amoeba heaven for the rest of supportable time?

Maybe it is time to start giving some serious thought to the 'Prime Directive'.

Re: history, lifespan and funktionsluste
posted on 10/20/2002 11:55 AM by Grant

The Chinese were great for placing exhortations in prominent places. The Emperor used to have one over his throne that said, "Wu Wei", which loosely translated means "Don't interfere."

How they ever switched their philosophy to Marxism I'll never understand.

Cheers,

Grant

Re: history, lifespan and funktionsluste
posted on 10/21/2002 8:56 AM by Lars

I don't think Kurzweil's point had to do with the "life of the statistic". The context of the
statement was:
Gardner: "I always distrust the longing for The Good Old Days--especially since they
sucked, for most people on the Earth."
Kurzweil: "Indeed, most humans lived lives of great hardship, labor, disease-filled,
poverty-filled, etc."

I've encountered that "37-year average lifespan" statistic elsewhere on this website. In fact, this false myth of modern comfort and leisure is as pervasive in our society as the myth of western cultural superiority. Frankly, it frightens me that one of the leading modern technologists would incorporate this meme, and it makes me question the motivations which stem from such false foundations.

But for the most part, I do agree with the rest of your points. However, I don't think the
major source of our culture's rampant unhappiness is that we "don't believe in anything"
- or rather I think that's not quite specific enough. In my opinion, the major source of
our epidemic of depression is that modern life doesn't utilize much of our behavioural
toolset and, more specifically, that we've lost the ability to engage with the non-human
world. To elaborate....

Though there are certainly enough advantages to modernity that would compel me to choose it over a hunter-gatherer existence (that is, if I even had the choice), I also think that there are many aspects of hunter-gatherer life which we should be emulating in order to make our work and leisure more self-satisfying and less destructive. While modern westerners might have access to more comfort and resources than historical societies, we seem to be unhappier now than ever. Why, when we live like gods, are we so dissatisfied?

I agree with you that our society's epidemic of depression is not based on lack of leisure time - I was just comparing the amount of leisure time in modern life and in our hunter-gatherer past as a measure of hardship. Rather, I believe our unhappiness stems from the frustration of our funktionslustes. Our behavioral and mental toolset has been tweaked by evolution to compel us, through pleasure, to engage in certain activities, and to enjoy thinking about certain things, the same way we have been attuned to seek sex, or sweet and salty foods. For example, we are curious animals because being an inquisitive hunter-gatherer often gives rich rewards. And we also enjoy the pleasure of solving a problem, finding relationships and patterns. As hunter-gatherers our brains evolved to be able to formulate and contain a complex holistic model of the ecosystems we lived in - a deep analogue containing the attributes of thousands of plants, animals, natural features and forces and how they all relate to each other. These descriptions are often regarded by modern outside observers as being some sort of animist spirituality, when in reality it could more accurately be described as a sort of field naturalism. We're born ecologists. It's what our brains were made to do.

However, most of our modern activities don't utilize the hunter-gatherer behavioral and mental toolset. Most often our jobs, errands and pastimes are non-gratifying perversions of hunter-gatherer activities. Shopping in the grocery store is not as stimulating as collecting herbs and fruits from the forest, or following a bear to a honey-laden beehive. We're in the same predicament as a house-cat - even though a perfectly nutritious, delicious and satisfactory diet has been provided, it still needs to stalk and kill birds in the backyard. That's one of the primary funktionslustes of a cat. But like a cat stuck in an urban apartment, we often don't have a suitable outlet. Cats aren't the only creatures which suffer "high rise syndrome", leaping to their deaths to try and snatch a passing pigeon. We're stuck in the same box.

Which is hardly surprising when we realize that the first animals we domesticated were not dogs or cows but rather ourselves. We placed fences around ourselves first and only later did we extend that system to other species, incorporating them into our society as junior members. Certainly captivity and domestication have certain advantages - that's why your pet cat usually keeps coming home every night, even though he may have the opportunity not to. But it only takes a walk through the zoo to see some of the more obvious downsides to captivity - the pacing leopards, the neurotic elephants, the hypermasturbatory gorillas. The interesting thing is, modern humans suffer from the same sorts of neuroses and express the same sorts of aberrant behavior as our fellow captive species: bursts of aggression, hypersexuality, depression, obesity, repetitive behaviour disorder, etc. Only in the past few decades have zoos begun to call in animal psychologists to address some of these problems with zoo animals. Most of the solutions involve engaging the animal in mentally stimulating activities which functionally resemble what the animal would need to do to get food in the wild. So, for example, zoo otters are now given fish to chase, because that's what otters have evolved to enjoy doing - that's part of their funktionsluste. (Many of these ideas come from an article I read in New York Magazine, I believe, sometime in 1995 - if anybody knows the issue, I'd love to get a copy of that article once again.)

Unfortunately, our economy actually benefits from the stifling of our funktionslustes. For example, look how our intense desire to re-engage with the non-human world is perverted by the auto industry. We're fooled into thinking that a sport utility vehicle is the way to escape our sterile urban lives and get back in touch with nature, when in fact it is the vehicle itself that is trapping us - in traffic jams, in a car-oriented suburban infrastructure whose lawns and roads physically exclude most of the non-human world, and in the non-gratifying jobs we're compelled to do in order to pay for it all. The irony in those ads would be hilarious if it weren't so sad and pathetic. What kind of freedom is it to spend 1/5 of our waking lives behind the wheel of a car?

So our hunter-gatherer's "field naturalist" mind, now so disengaged from the richness and complexity of the non-human world, must attempt to satisfy itself with the comparatively thin ecosystem analogues of the Pokemon universe, major league baseball, or the spouse-swapping relationships of Hollywood stars. Even if we are lucky enough to spend a few weeks out of the year in a non-degraded (a tall order in most western countries) non-human environment, we've lost the knowledge of how to truly engage with it - "nature" becomes a mountain to climb and a pretty view at the top. Most of us have never even seen a healthy ecosystem, let alone achieved some sort of comprehension of one. For most people "getting back to nature" involves some sort of device or machine - skis, mountain bikes, motorboats, waverunners, surfboards, bungee cords - we don't really touch the non-human except through some mechanistic, prophylactic interface. The non-human world no longer has any resonance with us. We're lost in it, out of place. We don't even look right when standing in its midst.

Indeed, the west's orgy of consumption and epidemic of depression are directly related
to this disengagement with the non-human world - we're all desperately trying to fill this
emptiness with novelty and consumer goods. So desperately, in fact, that if the rest of
the world were engaged in such a binge, we would need something like 5 or more
planet earths to satisfy our collective desires (which is especially frightening when more
and more of the world seems to be emulating the west). And as the major contributing
factor to loss of biodiversity is loss of habitat, it is vital that we find a way to share the
planet with our non-human partners. I think that solving this problem will have the
advantage of helping to satisfy our funktionslustes at the same time.

How does nanotechnology fit in with this scenario? In a sense, our current economic technologies already approach nanotech in their ability to respond to our desires, to give us what we ask for. We forget so easily that we already live in an age of miracles. We've become jaded to the fact that even poverty-level westerners wield powers mightier than Roman gods when they turn on a light switch, drive a car, type an email, flush a toilet, or buy an imported banana. For most of us, living in the west is almost like having a personal magic genie who grants all our wishes. We barely have to express a desire for a product before we have it in our possession. Already, almost entirely robotic manufacturing plants are spitting out CD players so cheap that a homeless person can afford one - a device comparatively more complex than anything a Renaissance king could hope to afford in his time. Laptop supercomputers will soon be as cheap, easy to manufacture, and disposable as scratch'n'sniff stickers.

So won't nanotechnology basically be an extreme amplification of our present system? I suppose it does have the potential to allow our consumption habits to be more sustainable - no waste products or garbage, probably cheap and efficient solar energy. Maybe we'll all have hydrogen-powered flying SUVs with automatic bird and bug avoidance so we don't kill anything as we zoom through the air. Maybe we'll just turn the whole planet into a global nature reserve (with a few Amish enclaves, I suppose), while we retreat into mega-space-orbitals - self-contained, self-fellating ouroboruses full of killer surfing beaches and perfect skiing mountains, all of us looking like ageless supermodels (but with wings, of course). But will we be any happier? Will we be any wiser? Or, much like we do now, will we hop from one novelty to the next, forever trying to fill the void with the next distraction, onward to eternity? If we can't find happiness living in our current age of miracles, what makes us think nanotechnology will reduce the rampant use of Prozac and its illicit cousins (or their future equivalents)? If we can't put that tricky magic genie back in the bottle, maybe it's time we started asking him for something else?


Re: history, lifespan and funktionsluste
posted on 10/21/2002 7:19 PM by Scott Wall

Lars,

What you say is very thoughtful and very thoroughly considered. Now that I see the full context and content of your perspective, I fully agree with you. The problem is, we can't go back. Even if we wanted to with all our hearts, and even if we passed a thousand laws or fought a hundred wars, we couldn't go back.

I'm not as religious as some of my analogies and references might make me seem to be--more of a romantic--but consider the story of Adam and Eve. I have always felt that this story is more of an allegory of our life since we first put up those fences than an attempt at history (I am familiar with the statistic that life became harder when we became agricultural and understand its etiology). Eve was really stupid to eat that apple, and Adam was really stupid to imitate her. I think the authors of the story realized that their ancestors had somehow made a foolish choice. I don't know how, but they knew it.

We clearly don't belong in this life. The problem is, we don't belong in that other life either. We don't belong anywhere--not any more. We are nature's lost children. We are orphans of the cosmos. We are a strange eternal joke. Off in some distant dimension of another universe of universes, a still, quiet voice has begun to chuckle inaudibly. Soon, I suspect, it will be laughing outright. Can you hear it? If you listen very carefully...if you close your eyes and listen...you can just barely make out its echo.

Maybe, if we are very smart and very careful, we can make a life for ourselves in this 'place', but it seems that we will have to do it without any roadmap and without stopping for directions. There's a bridge up ahead; we have nicknamed it 'the singularity'. If we time our approach very carefully we may get there while the bridge is down. If we time our approach incautiously, we may get there when it is up. The bridge is there and we are headed for it--make no mistake about that. Whatever else you may believe or not believe, make no mistake about that. The bridge is there...and our brakes are shot.

My feeling is that those of us who are studying the singularity are both a little bit afraid and a little bit hopeful. We are not trying to settle Adam and Eve's ancient debt. We just don't want to go any further in the hole.

Scott

Re: history, lifespan and funktionsluste
posted on 10/22/2002 1:35 AM by Khan

Lars,

Eventually, technology will be able to directly solve the problems of the human mind. We won't fumble in the dark with a sledgehammer called Prozac (or even a new one called Soma). As long as we discover what pattern matter has to be in to achieve the maximum amount of pleasure (or satisfaction), and find a way to control physical matter (with nanotechnology), we will be all right.

Every crackhead's been to heaven--the only problem is the weakness of the system (the organic brain) that organizes matter into the proper configuration for pleasure/happiness. Eventually we will find a way to preserve and/or simulate those mind states permanently (and hopefully improve upon those states once we understand them better).

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 10/23/2002 3:05 AM by Christ Michael

Greetings All,
Looking over the responses, I can see that many have missed the best kept secrets of science, the universe and its inhabitants. To all concerned, please preview the Urantia Papers, a compilation of 2,097 pages called a book. Access http://www.urantiabook.org or get a copy so you're up to date with what will sweep the planet. By the way, they call the planet Urantia, not Earth. Have fun with AI when you run into the real thing.
Christ Michael

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 12/11/2002 11:25 PM by K.K.Padmanabhan

We are all Naturally Intelligent Self Learning Programs. Each one of us alone can write our own programs and create our own identity, but we can exchange data. We create our subject and present it as an object to others. At higher levels of evolution, many smaller objects cooperate to form the body - the composite object - of a higher level subject.

We start as a dimensionless seed program and continue building and executing our program, creating and learning from our creation, till we discover our essence as our Being, the ultimate abstraction, formless, unbounded pure Being.

We are also like nested functions in a hierarchy of programs and have private variables and public or global variables at various levels of the hierarchy, and can move over the hierarchy to form part of any level of the Being, and can exit loops and surrender identities to move up to the level of the main program itself, God, Brahman or Paramatma.
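
Taken literally, the metaphor looks something like nested scopes in a program (the names below are only illustrative):

    GLOBAL_BEING = "Brahman"          # the 'main program' level

    def higher_level_subject():
        shared = "composite object"   # visible to everything nested inside
        def lower_level_subject():
            private = "individual identity"   # invisible to the outer levels
            return private + " within " + shared + " within " + GLOBAL_BEING
        return lower_level_subject()

    print(higher_level_subject())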

The centre of every grain of creation is the total. Creation proceeds from the centre outwards. If one goes to one's Centre one will simultaneously be containing the entire Creation. Through the Centre the Centre of any other point in Creation can be identified with and instantaneously communicated with outside of time. Time is applicable in movement on the outside, away from the Centre, in finite creation.

One Mind, God's Mind creates an infinite number of minds in order to see through infinite number points of perception and absorbs each of these minds into itself when its purpose of rediscovering itself from a unique perspective is accomplished.

The "I" identity is common to all levels of the program and withdrawing into the I continuously will take one right upto the I in the Main Program. When the lower level I drops the higher level I takes over. The Ocean falls into the Drop.

Re: Singularity Chat with Vernor Vinge and Ray Kurzweil
posted on 01/07/2003 2:15 AM by Scott Johnson

to paraphrase K.K.:

"my taquitos!"