Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0638.html

Runaway Artificial Intelligence?
by J. Storrs Hall

Synthetic computer-based artificial intelligence will become available well before nanotechnology makes neuron-level brain scans possible in the 2020s -- it's already a short step to computer systems that make better decisions than corporate managers do, says J. Storrs Hall.


Originally published in The Futurist March-April 2006. Reprinted on KurzweilAI.net February 3, 2006.

This article is a response to Ray Kurzweil's feature in The Futurist, Reinventing Humanity. You can also read other responses to Kurzweil's article by Terry Grossman, John Smart, Damien Broderick, and Richard Eckersley. Ray Kurzweil's response to Eckersley's comments can be found here.


Some years ago, I reviewed Kurzweil's earlier book, The Age of Spiritual Machines, for the Foresight Nanotech Institute's newsletter. Shortly thereafter I met him in person at a Foresight event, and he had read the review. He told me, "Of all the people who reviewed my book, you were the only one who said I was too conservative!"

The Singularity is Near is very well researched, and I think that in general, Kurzweil's predictions are about as good as it's possible to get for things that far in advance. I still think he's too conservative in one specific area: Synthetic computer-based artificial intelligence will become available well before nanotechnology makes neuron-level brain scans possible in the 2020s.

What's happening is that existing technologies like functional MRI are beginning to give us a high-level functional block diagram of the brain’s processes. At the same time, the hardware capable of running a strong, artificially intelligent computer, by most estimates, is here now, though it's still pricey.

Existing AI software techniques can build programs that are experts in any well-defined field. The breakthroughs necessary for such programs to learn for themselves could happen easily in the next decade—one or two decades before Kurzweil predicts.

Kurzweil finesses the issue of runaway AI by proposing a pathway where machine intelligences are patterned after human brains, so that they would have our morals and values built in. Indeed, this would clearly be the wise and prudent course. Unfortunately, it seems all too likely that a shortcut exists without that kind of safeguard. Corporations already use huge computer systems for data mining and decision support that employ sophisticated algorithms no human manager understands. It's a very short step to having such a system make better decisions than the managers do, as far as the corporation's bottom line is concerned.
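
As a purely illustrative sketch of that kind of decision-support system (the data, the feature names, and the gradient-boosted model below are hypothetical, not anything described in the article), a few lines of Python are enough to learn a pricing rule that no manager could state explicitly:

```python
# Toy sketch: learn a "raise price / hold price" rule from past outcomes.
# All features, data, and the model choice are made up for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0, 1, n),   # inventory level
    rng.uniform(0, 1, n),   # competitor price index
    rng.uniform(0, 1, n),   # recent demand
])
# Hidden "bottom-line" rule that no one ever writes down explicitly:
y = ((0.4 * X[:, 1] + 0.6 * X[:, 2] - 0.3 * X[:, 0]) > 0.25).astype(int)

model = GradientBoostingClassifier().fit(X, y)
# The fitted ensemble of trees is the opaque algorithm; the corporation
# simply acts on its recommendations.
print(model.predict([[0.2, 0.7, 0.9]]))   # e.g. [1] -> raise price
```

Once such a model's recommendations reliably beat the managers' judgment on the bottom line, that short step has already been taken.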

The Singularity may mean different things to different people. To me, it is that point where intelligences significantly greater than our own control so many of the essential processes that figure in our lives that mere humans can't predict what happens next. This future may be even nearer than Ray Kurzweil has predicted.

© 2006 J. Storrs Hall. Reprinted with permission.

   
 

Mind·X Discussion About This Article:

The Overlooked Runaway Wildcards
posted on 02/04/2006 10:57 PM by concrescent

Ray's predictions for the near- to mid-term technological future are based upon the best empirical data and logical deductive reasoning available to our civilization today. Let's face it, Ray has more sources of technical-trend and leading-edge research information, and knows more of the brightest people, than anyone else on the planet. My problem with his Singularity scenario has more to do with its desirability than its likelihood. I believe it is not only probable but perhaps unavoidable as the outcome of titanic forces that are already beyond the scope of mankind's ability to manage.

Having said that, the problem of timing is paramount. TSiN postulates that human-scale AI must first arise from the emulation of human neuroanatomy, and that it is fundamentally a problem of a sufficiency of artificial synapses, appropriately networked and programmed. There is ample reason to believe that this is not a firm prerequisite to human-scale - and Strong - AI, and that the first AI to arise may have nothing in common with our own reasoning processes.

TSiN neglects the role of serendipity and discontinuous change in technological progress, and the fact that these are perfectly capable of rendering second- or even third-order exponential progress in one or more existing disciplines passé and irrelevant. There are artifacts of technology already in play which could cut decades off the Singularity's arrival.
Such "Wild Cards" could land us at the Event Horizon by 2012, when multicore processors will be widely disseminated on desktops around the world.

1) Dr. Stephen Thaler's "Imagination Engines" represent a radically new approach to artificial intelligence that appears far less reliant on brute-force processing horsepower than traditional strategies. His systems can already paint pictures, write symphonies, design new molecular chemistries, and perform amazing feats of intuitive analysis, using less computational power than is found in a modern elementary school today. http://www.imagination-engines.com/

2) Tao Systems' Virtual Processing Operating System (now known as "Elate"), currently being embedded in cellphones, PDAs, and other appliances, is a revolutionary bit of code that has been nibbling at the edges of the world for the past decade. It can run on any processor (and is backward compatible to the i286). It is infinitely - and automatically - self-scalable. It treats distributed and parallel processing interchangeably, self-correcting for latency across all processors, linked by any means, transiently forming them all into a single coherent computational unit. It consists of an infinite variety of very brief, compact and efficient assembly-language function calls that together can construct any application. It runs off a nanokernel that can shrink to as little as 13 kilobytes. As a result, it is hundreds of times faster than monolithic applications running in any flavor of Unix, and thousands of times faster than anything run under Windows - and these discrepancies only become more pronounced as the hardware power scales upward. http://tao-group.com/

3) RK's assumption is that "human-scale" intelligence is dependent upon access to a full and complete three pounds of well-networked grey matter, or its simulacrum in silicon. But there are numerous well-documented cases in the medical literature of apparently normal, fully functional adults of average or greater intelligence with profound hydrocephaly - 'water on the brain' - that went undiscovered well into adulthood. When eventually subjected to head X-rays or other scans for unrelated medical conditions, some have been found to be missing 80% to 95%+ of their normal brain tissue, with pockets of cerebrospinal fluid occupying the major volume of their cranial cavities. It is apparent that under the right circumstances - particularly from birth - the brain can wire itself and function without cognitive, emotional, memory, or functional deficits, with great elegance, on only a tiny fraction of the tissue mass nominally required. The recent demonstration of 25,000 rat-brain cells being trained to successfully fly an F-22 Raptor flight simulator is perfect testimony to this.

4) Dr. Jack Sarfatti's "Q-Chip", derived from neural cellular microtubule architecture, increasingly validated by Dr. Stuart Hameroff et al. as the real computing mechanism in the brain. While Hameroff maintains that the great proliferation of microtubule configurations pushes the number of connections that an AI must emulate out by several orders of magnitude beyond Ray's numbers, the Q-Chip could put an entirely new spin on the problem, more than overcoming that differential. Q-Chips would leave traditional processing architectures in the dust, and would still benefit from all the advancements in feature size, thermal rejection characteristics, etc. that advance conventional silicon.

I am sure there are a dozen other pieces of 'orphan technology' going on around the world that each also have the potential to leapfrog Ray's plodding exponential scenario.

Terence McKenna and the Mayans (Indians, Chinese, Egyptians, Sumerians, Hopi, Maori, etc.) probably got it right with their forecasts that the Singularity will occur in 2012; on the solstice, a few days before Christmas, actually. About 11:30am, as I recall...

Wildcards
posted on 02/06/2006 3:11 PM by eldras

Cripes, what a well-written article; a joy to read.

the hardware capable of running a strong, artificially intelligent computer, by most estimates, is here now, though it's still pricey. (J. Storrs Hall)


Turing thought it needed only 10^12 computing connections in the 1950s:

http://www.abelard.org/turpap2/tp2-ie.asp#automatic-machines


Intelligence can surely be built in many ways, and unless you specifically want a machine limited to five-sense data input plus kinesthetic experiments, there are probably faster and better ways to arrange a hardware set.


One I favour is an internal data generator that gets rid of external sensors. Conscious systems generate predictive dynamic models anyway, so generate data streams at speed and extract for general smartness as a hierarchical measurement (London A.I. Club).



One of the important parts of it is a review mechanism. Tacking on a subroutine that generates moving models of yourself in the world, and of the world in general, gives a system consciousness.

Consciousness here is defined just by a system's ability to model itself in its world.




The models are prioritised and selected for a subroutine of action-as-experiments in the environment... pick something up and shake it to see what it is, etc., in the human brain.

This is based on actuators (e.g. hands, arms).
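
Read as an algorithm, the loop being described might look something like the toy Python below; every function, number and priority here is a hypothetical stand-in, not a description of any real system:

```python
# Minimal sketch of the "model yourself in the world, then act-as-experiment" loop
# described above. All components are hypothetical placeholders.
import random

def generate_world_models(state, n=5):
    """Internal data generator: propose candidate predictive models of self-in-world."""
    return [{"id": i, "prediction": state + random.gauss(0, 1)} for i in range(n)]

def priority(model, goals):
    """Review mechanism: score a model by its relevance to the system's goals."""
    return -abs(model["prediction"] - goals["target"])

def act_as_experiment(model):
    """Actuators: probe the environment ("pick it up and shake it") and observe."""
    return model["prediction"] + random.gauss(0, 0.1)

state, goals = 0.0, {"target": 3.0}
for step in range(10):
    models = generate_world_models(state)          # candidate self/world models
    best = max(models, key=lambda m: priority(m, goals))
    observation = act_as_experiment(best)          # act on the best model as an experiment
    state = 0.5 * state + 0.5 * observation        # fold the result back into the model
print("final internal state:", state)
```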

The senses and actuators of a human being, and how they evolved, have MUCH to do with how human intelligence evolved in us and how it operates.

Our brains are not tacked on to our senses and actuators but are integral to them.

A system that doesn't have such parts is not going to be built anywhere near the same as a man.

For years I toyed with the idea that you could just make a calculator, and that how the data got in was irrelevant, since it could be transcribed into any symbolism. This may be true, but the human brain sure doesn't work like that: it takes in and stores images directly in the visual cortex... stores them as geographically related patterns. A table will be stored AS A PATTERN next to a chair, etc.



SAFETY

It may also be safer to keep the SAI a calculation-only device, as one attempt to limit any transgressions against Man, which I see as a real problem.


profound hydrocephaly (proves that the human being can be intelligent and functional with 80% less grey matter than the whole brain). (concrescent, above)




This is astonishing, and blindingly obvious once you mention it.

The brain has many many back-up systems.

If you lose your keys you will return again and again to the same place to look, as DIFFERENT systems in your brain tell you to look there.

Any program that runs right is obsolete.


Re: The Overlooked Runaway Wildcards
posted on 02/09/2006 6:17 AM by dagonweb

Christmas 2012? I must hurry to get popcorn. Maybe someone can start working on The Official Soundtrack of the singularity unfolding?

This is not cynicism.... I prefer a fast and hard singularity! I even prefer a bad singularity occurring over none.

Re: The Overlooked Runaway Wildcards
posted on 11/12/2007 6:01 PM by Igor86

Re: Tao Systems' Virtual Processing Operating System
This is simply a virtual machine designed for portability; like Java or .NET, it just lets you write once and run on anything with an interpreter or a compiler for it. It does not offer any performance advantage over native programming, especially not the hundredfold and thousandfold speedups you're mentioning.
New languages and programming concepts may boost AI research significantly. .NET 3.5 is bringing functional programming, structured queries, automatic parallelism, lambda expressions, and other very high-level features to common languages and architectures. While having all this in a language does not automatically produce smarter AI, neither does it help us understand how intelligence should work; these are just tools to help people speed up development.

Re: Brain-in-a-dish pilot
The rat flew a generic airplane simulator, not a fully featured military training simulator. It was most likely just connected to the plane's very basic air controls and a 'good/bad' nerve to signal it that crashing is 'bad'. I very much doubt it would be able to land or take off even, just avoid hitting things. You can make a neural network in an excel spreadsheet to do collision avoidance.

In my opinion, the neural-net approach to AI makes sense, and given enough power it should work. The power doesn't seem to be there yet; a single neuron takes a lot of instructions to reproduce.
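
For what it's worth, the "neural network in a spreadsheet" point is easy to make concrete. A single logistic neuron, trained on invented data, is already enough for a crude steer-left/steer-right collision-avoidance decision (everything below is a toy assumption, not a description of the rat-neuron experiment):

```python
# A neural network small enough to live in a spreadsheet: one logistic neuron
# trained to steer away from the nearer obstacle. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))          # [distance to obstacle on left, on right]
y = (X[:, 0] < X[:, 1]).astype(float)    # 1 = left obstacle nearer, so steer right

w, b = np.zeros(2), 0.0
for _ in range(5000):                     # plain gradient descent on cross-entropy loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(X)
    b -= 0.5 * grad.mean()

test = np.array([0.2, 0.9])               # obstacle much closer on the left
print(1.0 / (1.0 + np.exp(-(test @ w + b))))   # close to 1: steer right
```

Reproducing the electrochemistry of a biological neuron is another matter entirely, which is where the "lot of instructions per neuron" problem comes in.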

Re: Runaway Artificial Intelligence?---or The Supersession of Mankind
posted on 02/06/2006 12:23 PM by zukunft

The bottom line for the Future is, as it has been in our biological past: Diversity.

That (Diversity) is the protection of our future. It is for those of us who buy into the process that Kurzweil and others outline to talk to as many of our friends and contemporaries as possible and spread the notion of the Singularity; that this process is unfolding at light speed; and that for the future to be diverse and ultimately secure, it requires all of our input.

That this is our only chance to avoid the "Supersession of Mankind".*




* This was the term I used about 40 years ago, as a college sophomore, to discuss the idea of machine technology evolving to eventually succeed us. But it was only a philosophical idea based on the logic of our need to seek out and create these machines... I always said, "not to worry though, this process is in the very far future"... I did not have the mathematical skills, or quite the imagination, even, to see the "future" that is now present.


Re: Runaway Artificial Intelligence?
posted on 02/16/2006 11:14 AM by imaljevi

This is probably an irrelevant detail, but just out of curiosity: how does a geometric sum of 14, 7, 3.5, ... (in years) add up to 100 to get the 20*100 = 20,000?

The sum never exceeds 28, unless the assumptions I'm making are wrong. (Is that where the singularity happens at this pace?)

Re: Runaway Artificial Intelligence?
posted on 02/16/2006 1:35 PM by imaljevi

Sorry, 20 * 1000 = 20,000, but the point is that with the assumption that '20' is equivalent to 14, then to 7, etc., you can get any number (20,000 or much more) achieved within the 28-year limit. That is, it takes smaller and smaller time intervals for another '20-year' increment.

It looks like Zeno's paradox :)
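
For what it's worth, the arithmetic behind that 28-year ceiling is just a convergent geometric series built from the figures quoted above:

\[
14 + 7 + 3.5 + \cdots \;=\; \sum_{k=0}^{\infty} 14\left(\tfrac{1}{2}\right)^{k} \;=\; \frac{14}{1-\tfrac{1}{2}} \;=\; 28 \text{ years},
\]

so the calendar time never exceeds 28 years, while the number of '20-year-equivalent' increments packed into that interval grows without bound, which is exactly the Zeno-style picture.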

Re: Runaway Artificial Intelligence?
posted on 02/16/2006 3:49 PM by eldras

The idea that ancient civilisations could predict AI's emergence centuries later is preposterous.

Runaway AI means AI that is self-willed and acts to expand without reference to Man.

If AI can be built, which most scientists agree it can, then it is going to self-mutate and increase its problem-solving ability at accelerating speeds.

This means there IS going to be runaway AI unless Man can control it somehow.


What, specifically, is that 'somehow'?

I don't see any possibility other than merging with AI as it comes, to mitigate its impact.



AI seems predicated on:


data manipulation

making accurate, predictive models of the world.



but AI may also have 'goals' as men do

and pattern recognition...a major part of human intelligence....


Pattern logging is no more complex in theory than any memory storage.

Patterns can be classified quite easily by different forms of relevancy to the goals of the system.



In humans, there are important goals like shelter, food, sex, danger avoidance, and defending or securing territory,


and patterns which help achieve these, like face characteristics, geography, and female/male difference recognition, and these are stored in the visual cortex near each other by relevancy.

They are stored by reinforcing the underlying neural networks (through use).

Studies of deep dyslexia have shown that damage to parts of the brain makes it impossible for some people to recognise faces.
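
As a toy illustration of "patterns classified by relevancy to the goals of the system" (the goals, patterns, and scoring rule below are all invented for the sake of the sketch):

```python
# Toy sketch: file each pattern under the goal it is most relevant to,
# so that related patterns end up stored "near each other".
from collections import defaultdict

goals = ["food", "shelter", "danger_avoidance"]

def relevance(pattern, goal):
    """Hypothetical scoring: count goal-related tags attached to the pattern."""
    return sum(1 for tag in pattern["tags"] if tag == goal)

patterns = [
    {"name": "mushroom_shape", "tags": ["food", "danger_avoidance"]},
    {"name": "cave_entrance",  "tags": ["shelter"]},
    {"name": "snake_motion",   "tags": ["danger_avoidance", "danger_avoidance"]},
]

store = defaultdict(list)
for p in patterns:
    best_goal = max(goals, key=lambda g: relevance(p, g))
    store[best_goal].append(p["name"])

print(dict(store))
# {'food': ['mushroom_shape'], 'shelter': ['cave_entrance'],
#  'danger_avoidance': ['snake_motion']}
```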


The important issue to debate, though, is the goals of any AI:

When a superintelligence has goals not complementary to human goals, what will happen?


History has demonstrated what happens in nature when this occurs.



Re: Runaway Artificial Intelligence?
posted on 03/27/2006 5:24 PM by The Sisters Of Mercy

What seems to be rather amazing is the total lack of conversation concerning the political/moral/religious and therefore philosophical considerations that need to be addressed.

At first glance it's difficult not to foresee, in relation to Mr. Kurzweil's forecasts, a growing disparity between rich and poor, not just between Americans/Europeans and others in the world, but obviously also among Americans themselves; equally obvious to me is that this "singularity" will take place here in the US first.

How, for instance, will a US service member be able to refuse the Department of Defense if it decides that nanobot injections to boost a soldier's productivity are in order?

What's the average layman's guarantee that corporate advertisements won't be bombarding them continuously while immersed in virtual-la-la-land?

What about the legality of all of this, which I know Ray himself has written about, but where are the conversations about these issues?

How about privacy? Will there be any such thing?

Are we to become the "Borg" from Star Trek: The Next Generation?

What will the response be to all of this technology from the Christian/Muslim/Jewish extremists? What about the conservatives in our own government?

Will the president be a cyborg in the near future?

This discussion needs to include topics such as:
What will be the purpose of human beings at all if AI can do everything we do, including Art?

And will the machines ask that same question and come to a horrendous conclusion? Go Figure.

Concerns from the camp of Nietzsche, Krishnamurti and other future Neo-Luddites...

But we'll pick a much more catchy name for ourselves, I'm sure.

Re: Runaway Artificial Intelligence?
posted on 03/27/2006 5:51 PM by /:setAI

Are we to become the "Borg" from Star Trek: The Next Generation?


more like Q (^__-)

Re: Runaway Artificial Intelligence?
posted on 03/28/2006 5:36 AM by Extropia

I think the problem with devising an ethical system that can help us steer a safe path through the technological singularity is precisely this: all of our current understandings of what it means to be human, what is nature, what is death... everything from which we could derive a system of ethics, will be rendered obsolete by the Singularity.

Beyond the Singularity, we must seek new definitions of life, nature, wisdom (you name it) that are, by the very definition of the term 'singularity', unknowable to us precursors of the post-human civilization. If our very morality is based on concepts that will radically change in an unknowable way, then surely post-humanity will be taxed by ethical dilemmas of a nature totally mysterious to us?

Having said that, it is my belief that we will see a merging of things currently considered separate as we plunge toward this epochal moment, and perhaps we might glimpse some of the ethical dilemmas facing a post-singularity civilization if we consider the implications of blurring the boundaries between aspects of existence that, to us, seem sharply divided...

Consider the distinction between life and death. I think it quite possible that molecular nanotechnology will be able to revive people who, by today's standards, are dead. Could it be that once we achieve Robert Freitas's vision of molecular surgery using nanorobotics, we will look back on the custom of cremating our loved ones, or leaving their bodies to decompose in the ground, as murder? Will they grieve for the loss of information from brains that could have been preserved for better medical techniques via cryonics?

Consider the boundary between conception and consciousness that is the basis for the heated arguments in abortion and stem cell therapy debates. Biotechnology promises to make this slippery slope more slippery. We are seeking ways to turn any cell in the body into any other cell. So, a skin cell could be turned into an egg cell. Does that mean you are killing trillions of potential humans each time you brush your arms?

We can easily imagine future biotechnologies that can turn a male into a fully-fledged female, or vice versa. Here is yet another blurring of what seems sharply divided to us: The difference between the sexes. What ethical and moral implications are there here? For instance, what if a man decided to freeze his sperm, have a full sex change to a fully-functional female, and inseminated 'her' eggs with 'his' sperm?

The sharp division between what is natural and what is mechanical is already coming under question. How augmented with technology can a person be, and still remain 'human'? When does a robot's claim to be conscious cease to be mere programming and become the genuine cry of a spiritual being? Is the term 'human rights' loaded with racist connotations against anticipated sentient beings like intelligent robots, cyborgs, uploads and chimeras?

Is it acceptable to create a sex doll in the form of a child? How realistic can videogame simulations of murder become before we really are killing real artificial life? What does murder mean in terms of life-forms that can be resurrected at a touch of the reset switch?

Meanwhile, in the here-and-now, does anyone else here feel like the rest of society is not really geared up to the changes we face? I mean, people still expect that their babies will go to University, just like them, to learn the same things they learned, do the same kind of jobs that exist today, retire on a pension, grow old in a matter of 7 decades and die.

That all sounds like fond (or not so fond) remembrances of a bygone era to me...

Somebody talked about diversity. It has been noted that, today, we face an extinction period comparable to the great K-T catastrophe. Today's mass extinction is largely derived from a loss of diversity, brought about by humanity's systems of transport shuttling life-forms all around the globe. The geographical isolation required by Darwin's theory to turn one species into two has pretty much disappeared. Looking at the way Rhododendrons would take over the English landscape, choking all other plant life were it not for our (inefficient) efforts to control them, provides a glimpse into the way in which shuttling species around, mixing them in a way that was impossible before global transport, is triggering a loss of diversity.

Rhododendrons everywhere...

But whereas today's technologies are causing a loss of biodiversity, perhaps future technologies like biotech, nanotech and robotics will reverse the situation? Didn't the roboticist Hans Moravec comment that the rise of robots promises to make the Cambrian Explosion seem limited, in comparison to the variety of synthetic life-forms that could arise once technology evolves itself?









Re: Runaway Artificial Intelligence?
posted on 03/28/2006 5:46 AM by Extropia

'the singularity will take place in America first'...

Hmm..really?

You know, Renaissance Italy was THE place where the great discoveries in cosmology were made... until Catholic repression saw the revolution shift to Newton's England.

The technological singularity calls into question many of the assumptions that fundamentalist Christians consider beyond doubt. It gives us truly 'Godlike' powers of creativity.

I think it more than likely that the 'moral majority' of fundamentalist Christians in the USA will result in a repression of evolutionary theories, or place impossible restrictions on research into cognitive studies, biotech and nanotech, just as Catholic Italy repressed the radical cosmologies (well... maybe Bush won't burn you at the stake)... and we will see the Singularity emerge in a country less blinded by assumptions about what is acceptable manipulation of life.

Re: Runaway Artificial Intelligence?
posted on 03/28/2006 6:10 AM by Extropia

To me, the question of what use human beings will be AFTER the singularity is totally moot.

The very purpose of any technology-creating species is to be the catalyst for a technological singularity. Anything that happens after that is NOT our concern, and it matters not one jot if we should go extinct AFTER the singularity explodes into life. The ONLY tragedy that I can see is if we somehow manage to prevent the singularity from happening (don't ask me how). The result? The inevitable extinction of ALL intelligence, definitely on this planet and (since we have no proof of intelligence elsewhere) perhaps the eradication of ALL intelligence in the ENTIRE UNIVERSE.

Here's an analogy. I think a similar 'singularity'-type event occurred on this planet some 4.6 billion years ago. Before this, all structures in the universe were put together by chance. Then, out of billions of random occurrences, some random rearrangement of matter triggered cumulative selection, and biology eventually arose as a result, leading to the Darwinian evolution that has turned our planet into such a precious jewel...

Whatever the catalyst for this change was, it can no longer happen in nature. Why? because the simplest life-forms (simple, yet orders of magnitude more complex than anything pure chance can throw together) use the building blocks of biology to replicate themselves.

What if these random patterns of matter had been blessed with limited foresight, as we are? What if they had seen their extinction and thought, 'bugger it, we're not gonna trigger this bio-singularity evolution thing'?

Just imagine how much beauty our solar system would have lost, had this earlier singularity not happened.

Well, I don't find it too much of an effort to imagine this loss would be NOTHING compared to the loss of beauty for the WHOLE UNIVERSE should we selfish humans decide our petty existence somehow takes priority over the continuing march of evolution.

At least the Singularity has various scenarios for the human race. We might merge with our technology by incorporating advanced nanotechnologies. We might be uploaded into a virtual universe with wonders beyond the imagination. Our bodies might be strip-mined for useful elements by uncaring self-replicating robots intent on using the resources of the solar system to build a Matrioshka Brain...

So take your choice...Prevent the Singularity and face an INEVITABLE loss of the human race..or trigger this epochal event and see which of the paradise/hell scenarios plays out.

I think a probable extinction scenario is far preferable to an inevitable one. But that's just me :)

Re: intelligence versus entropy
posted on 03/28/2006 6:52 AM by maryfran^

extropia,

some vague considerations,

intelligence is an independent force that is not ruled out by entropy

intelligence versus entropy
evolution-organization versus entropy

there is a real conflict. we are experiencing these 2 opposites at the same time: progressive-evolutionary-systems versus entropy = order and chaos

one hypothetical conclusion is that this universe is slowly going towards a final state of degeneration (often called the heat death) in which the stars are all burned out, and the heat and light are spread chaotically through deep space. the universe tends towards a final heat death: all available energy is consumed.

all the vast diversity and variety of physical forms and systems we can see today (the galaxies, stars, clouds, the people, etc.) did not exist at the beginning. they emerged slowly in a long and complicated sequence of self-organising and self-complexifying processes = evolution = positive progress.

the main point: looking backwards over the past history of the universe, we do not see a record of deterioration from some essential complexity; rather we see an ongoing sequence of progressive advances from simple organisms to a wide diversity of complex living systems.
so the emergence of complex order and biological evolution seems to be in conflict/contradiction with the increasing entropy of the universe (indeed i am not sure if entropy is increasing??).

the fact is that any complexity and advance occurring in the universe has a price to be paid: the entropy price = the available energy is being slowly consumed. every time a new, more organised physical form/system emerges, the total entropy of the universe increases a bit.
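
in standard thermodynamic bookkeeping (this is the textbook statement of the second law, not something specific to this discussion), that 'entropy price' reads

\[
\Delta S_{\text{total}} \;=\; \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\ge\; 0,
\]

so a locally more organised system (\(\Delta S_{\text{system}} < 0\)) is only possible when the surroundings gain at least as much entropy, which is exactly the "available energy being slowly consumed".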

it seems that conscious intelligence = science = technology may play a key role in the fight against entropy, for example in applications of nano-medicine/genetics able to overcome certain illnesses and provide ongoing life extension in humans and other beings at accelerating exponential rates.


at the end, who wins? the entropy of the universe or the enhanced conscious intelligence?

if both are winners, this could mean 2 things: entropy causes the final collapse of the universe, reducing it to the Planck scale, and the conscious intelligence manages to get out of this collapsed context, also reduced to the Planck scale. this could induce us to suspect that the big context is just an experimental quantum medium to produce deeper layers of evolutive intelligences in an infinite sequence of ever-changing contexts.

Re: intelligence versus entropy
posted on 03/28/2006 10:17 AM by Extropia

Interesting comments, MaryFran, and I would like to spend some time constructing a more thoughtful response. So, in the meantime, consider these crudely constructed, on-the-fly opinions.

I note that you talk about a collapse of the Universe. Such a scenario sounds like it is born of the assumption that the Universe is expanding and may contract in a Big Crunch in the future. Every attempt to predict what post-singularities may do with the Universe takes it as a given that the inflationary scenario is correct. Thus, we have Alan Guth hypothesising that we might trigger the birth of a new universe by initiating a super-cooled Higgs ocean. We have Freeman Dyson imagining thought processes running slower and slower as the Universe expands and cools, or in contrast we have Frank Tipler insisting Mind will accelerate to infinity in the final collapse. We have all kinds of uses for black holes, from gateways to alternate timelines, the seeds of new universes, or even post-singularity super-quantum computers.

All begin from the assumption that redshift=expansion.

But perhaps we should give a tentative nod to the fringe movement that insists modern cosmology is as hopelessly wrong, as infuriatingly addicted to prolonging the life of a failed theory through epicycle-style retrodictions, as our Ptolemaic ancestors? Why not at least hedge our bets and postulate a scenario where the plasma/electric universe becomes saturated with runaway intelligence?

At which point, I should say 'Enter Subtillion' who I expect could give a far more lucid account than I. No matter, I'll give it a go..

From my limited knowledge, I understand that there is a tendency to view the plasma universe as something akin to a fractal pattern. If you have ever mucked about with a Mandelbrot set, you might be able to imagine what a fractal universe is like: no matter how far down you probe, even unto the level of sub-sub-sub-atomic physics, you have still only scratched the surface, since there is still an infinite descent of patterns waiting to be uncovered. And at the other end of the scale? Why, even if your cosmology speaks of clusters of galaxies spanning tens of billions of light years, that is STILL but a speck within the greater patterns that lie beyond your ability to perceive.
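
(Since the Mandelbrot set is the stock picture of a fractal, the whole "probe as deep as you like" recipe fits in a few lines of Python; the window and iteration cap below are arbitrary choices, and the universe analogy is of course only an analogy:)

```python
# Escape-time test for the Mandelbrot set: iterate z -> z*z + c and call c a member
# if |z| stays bounded. Zoom in anywhere on the boundary and similar structure recurs.
def escape_time(c, max_iter=200):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:          # once |z| > 2 the orbit provably escapes
            return n
    return max_iter               # still bounded: treat c as inside the set

# Crude ASCII rendering of the familiar view; shrink the window to "zoom in".
for row in range(20):
    line = ""
    for col in range(60):
        c = complex(-2.0 + 2.6 * col / 60, -1.2 + 2.4 * row / 20)
        line += "#" if escape_time(c) == 200 else "."
    print(line)
```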

Ok, now I find it kinda hard to describe precisely what I mean by 'pattern'. I certainly think that the term 'pattern', when used in the context of the fractal universe, should be thought of in terms of something far more rich and varied than the pretty two-dimensional art that the Mandelbrot set conjures up.

Anyway, something for others to ponder... Now, let's hypothesise that life is one such pattern in this infinite variety of patterns that is the Universe. This, maybe, gets us out of the dilemma of explaining the miracle of the origin of life: given an infinite variety of 'patterns', at least one must have the conditions necessary to trigger cumulative selection. So, anyway, biological, evolving life is one such pattern.

Now, we humans, blessed with senses honed by 4.6 billion years of evolutionary fine-tuning, plus the ability to magnify these senses with technologies improved via millennia of cumulative wisdom, can see OUT, so that our species is part of a web of life known as the biosphere. And our planet is but part of a greater cycle involving the pattern that is the 'solar system', itself but a 'cell' in the grander and longer-lived galaxy... we can see all the way out to the pattern of clusters of galaxies, but beyond that our vision is blurry and we can only make educated guesses as to what lies beyond the light horizon.

We have also used our technology-enhanced knowledge to probe innerspace, and again we find our vision blurring as we probe down to the realm of quantum physics.

Now, I think it self-evident that any being of a certain level of intelligence MUST become curious: have an insatiable desire to answer questions. And what we have in the fractal universe is the perpetual question: 'There seems to be a pattern beyond my ability to see with any clarity. I think it might look like a string vibrating, or some kind of indescribable fluid... oh I KNOW it's there, I just can't SEE it! Wonder what it looks like?'

'Wonder what it looks like?' 'I wonder how the other patterns that I know of might collaborate to form this greater pattern beyond my horizon? As quarks form particles, that form atoms, that are the building blocks of chemistry from which life is derived... what pattern is derived from those galaxy clusters that span billions of years in terms of time/space?'

I think this would be a question asked by any intelligent being, no matter how advanced they may become. Sure, THEY with their amazing technology could see beyond the pattern of galaxy clusters, so far, perhaps that the clusters seem like short-lived subatomic particles in comparison..but they too have a limit to their vision...and therefore an insatiable desire to develop the technology that will answer that niggling question.

Just think what we had to do, in order to see as far as we have. We actually had to turn the entire planet into something that could almost become one giant mega-brain! That's what our type-zero civilization, with its vast array of particle accelerators, space-borne telescopes, robotic probes, and gravity-wave detectors sending information all around the globe via a vast web of interconnected computers, has achieved thus far. And this is before Google 'wakes up'!

What might our post-singularity inquisitive beings do, in order to see beyond? Use their von Neumann probes to strip-mine the solar system and turn it into a Matrioshka Brain? And will THEIR descendants invent an Internet, comprising a whole galaxy of MBs linked somehow into an incomprehensible web of Thought?

You know what this sounds like to me? It sounds like the pattern of curiosity must spread. Perhaps it is the destiny of any intelligence to re-work a greater pattern into a higher intelligence. We humans might be striving to turn our planet into a vast brain, whose neurons might be individual computers, each with more processing power and storage capability than the entire Internet as it exists today. Perhaps this prodigious entity will devote its resources to reworking the larger patterns of the fractal universe into thinking structures?

Perhaps it too may be faced with the same dilemma that I see facing us pre-singularity humans: 'Will this greater intelligence I believe I may be able to give life to be friendly, or will it perceive me as too stupid to live and trigger my extinction?'

Re: intelligence versus entropy
posted on 05/31/2008 1:49 PM by spoondini

Do we humans eliminate less intelligent life forms because they are "stupid"?

Maybe the AI will keep us as pets like I do with 2 "stupid" dogs (whom I love very much). The relationship between humans and AI will be largely dictated by the respect displayed by humans to this new "higher" intelligence.

Just imagine the possibility that mankind becomes a "kept" race for the amusement of AI. I've always said how nice it would be to trade places with my dog.

The interesting caveat is that we might not "know" this is the situation. We might view ourselves no differently than we do today, however our home PC's view us as their "property" (or some concept we don't have). My dogs have absolutely no clue that I legally own them. They just know I provide food and play with them so they hang around.

Re: intelligence versus entropy
posted on 05/31/2008 1:52 PM by PredictionBoy

2 "stupid" dogs [/quote

stupid, a judgment call

2 different dogs, diff in their intelligence from us

we are 'kept' by ai, brilliant

Re: intelligence versus entropy
posted on 03/28/2006 10:25 AM by suddenz

Hi Maryfran,

"It is not the strongest of the species that survives, nor the most intelligent; it is the one that is most adaptable to change."

-Charles Darwin

My take on intelligent life's chances at the heat death of the universe is that it WILL adapt. By that point in time, if any civilizations have managed to survive continuously for even a minor portion of the age of the universe prior to that point (1 to 100 million years, for example), it seems reasonable to assert that the tools of their technology would enable them to adapt to the environment.

After all, universal heat death is not as if everything just winks out of existence. It's just that the nuclear fires have gone out, and the galaxies are Very much further away from each other than they are now. As long as matter as we know it exists, then there is still a huge energy source.

The fact that we already know that matter-antimatter collisions release virtually 100% of their combined mass as energy shows that there exists enough potential energy in a single galaxy for a sufficiently technological intelligence there to power itself for a very long time After heat death. That should buy enough time to evolve further solutions.
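
The underlying arithmetic (standard physics, not anything specific to this thread) is just E = mc^2 applied to both the matter and the antimatter:

\[
E \;=\; (m + \bar{m})\,c^{2} \;\approx\; (1\,\mathrm{kg} + 1\,\mathrm{kg}) \times \left(3 \times 10^{8}\,\mathrm{m/s}\right)^{2} \;\approx\; 1.8 \times 10^{17}\,\mathrm{J},
\]

roughly six years of output from a 1-gigawatt power plant, from annihilating just two kilograms.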

There may be another energy source which does not depend on matter at all. This is only theory at present, yet the non-zero energy of the vacuum itself might be harnessed to be the ultimate power source for space travel, as well as for planetary power stations.

Bottom line for me is that once intelligent life evolves technology for a meaningful length of time, then it will overcome problems which it can clearly anticipate. True, the heat death of the universe is a very big problem, but we have a very long time to solve it.



Hey Extropia !

Meanwhile, in the here-and-now, does anyone else here feel like the rest of society is not really geared up to the changes we face? I mean, people still expect that their babies will go to University, just like them, to learn the same things they learned, do the same kind of jobs that exist today, retire on a pension, grow old in a matter of 7 decades and die.


Woot!

That issue seems to be being dodged like a sack full of skunk sh*t !

As usual, our governments take the path of least political resistance, and hope to just "muddle through", as Eldras puts it.

The very scale of post-singularity changes makes them tough to imagine, yet here's an analogy.
Put an encyclopedia through a shredder, then into a blender set on puree, and blend for days while attached to a paint mixer set on max.

The "before" & "after" states of the encyclopedia are fair evaluations of the states of pre- and post-singularity civilization, in my opinion.

Change on that scale & speed is going to re-organize Everything! It's the type of event that we can do little to "gear up" for, other than to become aware of its potential.

Having become aware that it's coming, we're all nervously waiting for its arrival & all we can do is hope for the best. Awareness of it is actually a huge advantage. We will avoid the shock that those ignorant of the event will suffer when it occurs and catches them unaware...

Truly a mind-boggling event! Who was it that said: "If God did not exist, man would create Him."?

Cough cough, err... I don't know about you, but I find just thinking about all this so exhilarating that I've just GOTTA stay alive long enough to see what happens when it arrives.

Re: intelligence versus entropy
posted on 03/28/2006 4:44 PM by maryfran^

hi,

"it is not the strongest of the species that survives, nor the most intelligent; it is the one that is most adaptable to change."

yes, true, but it is not sufficient for higher evolutionary systems, where adaptation is only one part of survival. the several manifestations of intelligence have to be triggered to guarantee the species' continuation. for example, the development of emotional and organizational intelligence, etc., helps to anticipate and envision certain risks or destructive environmental impacts that may put a collective in danger; then precautionary measures can be implemented. this is a general example.

the universe is divided into 3 portions of understanding: physics, semantics, psyche. the 3 regions of study should be synchronized to understand the place we are hosted in.

only a humble comment




Re: intelligence versus entropy
posted on 11/12/2007 6:33 PM by eldras

I also see it that as we get smarter (that is, our technology gets smarter and we are symbiotes) the problems we try to tackle, like climate, get bigger, and because we can't control them, the race really is on to do that before natural selection by fitness wipes us out in favour of bacteria.

It is probable that human-like species have been wiped out loads of times. Maybe even on Mars, although there's no evidence of that, but certainly in the galaxy, based on meteorite evidence of life precursors.

Re: intelligence versus entropy
posted on 01/21/2010 8:33 AM by francofiori2004

Evolution is a wrong theory. Species don't change. All life on Earth has been invented by alien biotechnologists.

Re: intelligence versus entropy
posted on 01/21/2010 10:01 AM by pdco68

Evolution is a wrong theory. Species don't change. All life on Earth has been invented by alien biotechnologists.


Franco
You were wrong about aliens being announced by the US government at Christmas, so now I'm starting to wonder if you might be wrong about the alien biotechnologists.

Re: intelligence versus entropy
posted on 01/21/2010 10:07 AM by EyeOrderChaos

But of course, we all believe you about the immortality rings, so you can take comfort in that. I mean, that's what it's all about anyway, right? The comfort.

Re: Runaway Artificial Intelligence?
posted on 01/19/2010 8:57 PM by rastronauts

I'm curious as I'm fairly new to this forum:

How old, what profession, and at what level professionally are people in this forum?

I've just been on fanboy forums before, littered with high school nerds (as I was one once) with big ideas that don't have much of a real connection to the world outside.

I've been impressed, though, with the thoughts of a lot of people on this forum, in that they seem to have an understanding of various aspects of the real world that isn't present even in some nerdy adults. It indicates world-class education and/or tough real-world experience. If you don't feel comfortable giving exact credentials (as I wouldn't, though I know there are probably people on this forum who are much more computer-savvy and can probably mine mine), please at least give an abstract (but understandable) view of your credentials. And please don't inflate (or deflate) your credentials either.

I'm just really curious who all is thinking about this matter and to what level. I guess Kurzweil is the Dalai Lama when it comes to this stuff, so it wouldn't surprise me if fairly eminent people partake in these discussions.

Re: Runaway Artificial Intelligence?
posted on 01/19/2010 9:18 PM by eldras

hi rastronauts,

I'm self-taught but hung out at a few places and studied widely.

My agenda is to watch A.I. emerging and warn or guide (ha!).

I quit mainstream edu on my tutor's advice (he said I'd end up a clone and never make a real breakthrough).

Still, I did 19 degrees to 1st year competence and wrote 100 vols of philosophy examining the world and arriving at the conclusion that A.I. was the only thing that mattered (until it was built).

I loosely modeled myself on Erasmus & Roger Bacon.

Futurism is less hit-and-miss than it used to be, but is still the undoubted successor of astrology.

there are loads of breaking news articles and sometimes great debates

I've tried to read my way through The BRAIN (above page header)

and look at ORIGIN daily.

Re: Runaway Artificial Intelligence?
posted on 01/19/2010 9:29 PM by exapted

I'm a computer-science post-grad dropout. :)
My undergrad degree is in computer science. My coursework focused on distributed systems and machine learning. Now I'm a software engineer. I've worked in several software companies including some in China (ATM I'm in the US, which is where I'm from). Recently I became unemployed but am working on mobile phone and social web apps on my own and with some former classmates who want to start some companies focused on the mainland China market. I've started on a master of software engineering, in order to improve my efficiency and proficiency in software engineering - so that I can quickly implement stuff and organize projects better. However I'm thinking of either quitting that and focusing on social web apps, and/or doing post-graduate work in artificial intelligence. I'm also self-taught in some topics in cognitive science.

Re: Runaway Artificial Intelligence?
posted on 01/21/2010 11:49 AM by francofiori2004

I am immortal.

Re: Runaway Artificial Intelligence?
posted on 06/06/2010 8:10 PM by Tim Quin

The topic seems to have drifted...

It's 2010; where are we at with a large-scale AI project?

I've seen lots of little AI pieces: the ability to drive a car over rough terrain, the ability to recognize objects in pictures, the ability of various robots to climb rocks.

Great stuff, but is anyone aware of a project to integrate these types of discoveries into a strong AI?

I'm inclined to think that a modular approach would work well. Who's for sticking some bits together and seeing if it screams?