Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0696.html

    What If the Singularity Does NOT Happen?
by   Vernor Vinge

It's 2045 and nerds in old-folks homes are wandering around, scratching their heads, and asking plaintively, "But ... but, where's the Singularity?" Science fiction writer Vernor Vinge--who originated the concept of the technological Singularity--doesn't think that will happen, but he explores three alternate scenarios, along with our "best hope for long-term survival"--self-sufficient, off-Earth settlements.


Originally presented at Long Now Foundation Seminars About Long Term Thinking, February 15, 2007. Published with permission on KurzweilAI.net March 14, 2007.

Just for the record

Given the title of my talk, I should define and briefly discuss what I mean by the Technological Singularity:

It seems plausible that with technology we can, in the fairly near future, create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond this event—call it the Technological Singularity—are as unimaginable to us as opera is to a flatworm.

The preceding statement, almost by definition, makes long-term thinking impractical in a Singularity future.

However, maybe the Singularity won't happen, in which case planning beyond the next fifty years could have great practical importance. In any case, a good science-fiction writer (or a good scenario planner) should always be considering alternative outcomes.

I should add that the alternatives I discuss tonight also assume that faster-than-light space travel is never invented!

Important note for those surfing this talk out of context :-) I still regard the Singularity as the most likely non-catastrophic outcome for our near future.

There are many plausible catastrophic scenarios (see Martin Rees's Our Final Hour), but tonight I'll try to look at non-singular futures that might still be survivable.

The Age of Failed Dreams

A plausible explanation for "Singularity failure" is that we never figure out how to "do the software" (or "find the soul in the hardware", if you're more mystically inclined). Here are some possible symptoms:
  • Software creation continues as the province of software engineering.
  • Software projects that endeavor to exploit increasing hardware power fail in more and more spectacular ways.
    • Project failures so deep that no amount of money can disguise the failure; walking away from the project is the only option.
    • Spectacular failures in large, total automation projects. (Human flight controllers occasionally run aircraft into each other; a bug in a fully automatic system could bring a dozen aircraft to the same point in space and time.)
  • Such failures lead to reduced demand for more advanced hardware, which no one can properly exploit—causing manufacturers to back off in their improvement schedules. In effect, Moore's Law fails—even though physical barriers to further improvement may not be evident.
  • Eventually, basic research in related materials science issues stagnates, in part for lack of new generations of computing systems to support that research.
  • Hardware improvements in simple and highly regular structures (such as data storage) are the last to fall victim to stagnation. In the long term, we have some extraordinarily good audio-visual entertainment products (but nothing transcendental) and some very large data bases (but without software to properly exploit them).
  • So most people are not surprised when the promise of strong AI is not fulfilled, and other advances that would depend on something like AI for their greatest success—things like nanotech general assemblers—also elude development.

Altogether, the early years of this time come to be called the "Age of Failed Dreams."

Broader characteristics of the early years

  • It's 2040 and nerds in old-folks homes are wandering around, scratching their heads, and asking plaintively, "But ... but, where's the Singularity?"
  • Some consequences might seem comforting:
    • Edelson's Law says: "The number of important insights that are not being made is increasing exponentially with time." I see this caused by the breakneck acceleration of technological progress—and the failure of merely human minds to keep up. If progress slowed, there might be time for us to begin to catch up (though I suspect that our bioscience databases would continue to be filled faster than we could ever analyze).
    • Maybe now there would finally be time to go back over the last century of really crummy software and redo things, but this time in a clean and rational way. (Yeah, right.)
  • On the other hand, humanity's chances for surviving the century might become more dubious:
    • Environmental and resource threats would still exist.
    • Warfare threats would still exist. In the early years of the 21st century, we have become distracted and (properly!) terrified by nuclear terrorism. We tend to ignore the narrow passage of 1970-1990, when tens of thousands of nukes might have been used in a span of days, perhaps without any conscious political trigger. A return to MAD is very plausible, and when stoked by environmental stress, it's a very plausible civilization killer.

Envisioning the resulting Long Now possibilities

Suppose humankind survives the 21st century. Coming out of the Age of Failed Dreams, what would be the prospects for a long human era? I'd like to illustrate some possibilities with diagrams that show all of the Long Now—from tens of thousands of years before our time to tens of thousands of years after—all at once and without explicit reference to the passage of time (which seems appropriate for thinking of the Human Era as a single long now!).

Instead of graphing a variable such as population as a function of time, I'll graph the relationship of an aspect of technology against population size. By way of example, here's our situation so far.

It doesn't look very exciting. In fact, the most impressive thing is that in the big picture, we humans seem a steady sort. Even the Black Death makes barely a nick in our tech/pop progress. Maybe this reflects how things really are—or maybe we haven't seen the whole story. (Note that extreme excursions to the right (population) or upwards (related to destructive potential) would probably be disastrous for civilization on Earth.)
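As a concrete illustration of this kind of diagram, here is a minimal Python/matplotlib sketch that plots a rough technology (destructive-capability) index against population, with no explicit time axis. Every number in it is an invented placeholder chosen for illustration; none of the values come from Vinge's own figures.

import matplotlib.pyplot as plt

# Each point is (world population in millions, log10 of a notional
# "tech / destructive capability" index). Time is only implicit in the
# ordering of the points, as in the diagrams described above.
# All values are illustrative placeholders, not historical data.
trajectory = [
    (5,    0.0),   # early Holocene
    (300,  0.5),   # classical era
    (400,  0.6),   # late medieval (the Black Death is barely a leftward nick)
    (1000, 1.5),   # industrial revolution
    (6500, 4.0),   # early 21st century
]

population, tech = zip(*trajectory)
plt.plot(population, tech, marker="o")
plt.xscale("log")
plt.xlabel("World population (millions)")
plt.ylabel("Tech / destructive capability (arbitrary log scale)")
plt.title("The human trajectory so far, with no explicit time axis")
plt.show()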

Without the Singularity, here are three possibilities (scenarios in their own right):

Scenario 1: A Return to MADness

Figure 1

  • I said I'd try to avoid existential catastrophes, but I want to emphasize that they're still out there. Avoiding them should be at the top of any ongoing thinking about the long term.
  • The "bad afternoon" going back across the top of the diagram should be very familiar to those who lived through the era of:
    • Fate of the Earth by Jonathan Schell
    • TTAPS Nuclear Winter claims
    • (Like many people, I'm skeptical about the two preceding references. On the other hand, there's much uncertainty about the effects of a maximum nuclear exchange. The subtle logic of MAD planning constantly raises the threshold of "acceptable damage", and engages very smart people and enormous resources in assuring that ever greater levels of destruction can be attained. I can't think of any other threat where our genius is so explicitly aimed at our own destruction.)

Scenario 2: The Golden Age

Figure 2

(A scenario to balance the pessimism of A Return to MADness)
  • There are trends in our era that tend to support this optimistic scenario:
    • The plasticity of the human psyche (on time scales at least as short as one human generation). When people have hope, information, and communication, it's amazing how fast they start behaving with wisdom exceeding that of the elites.
    • The Internet empowers such trends, even if we don't accelerate on into the Singularity. (My most recent book, Rainbows End, might be considered an illustration of this (depending on how one interprets the evidence of incipiently transhuman players :-).)
  • This scenario is similar to Gunther Stent's vision in The Coming of the Golden Age, a View of the End of Progress (except that in my version there would still be thousands of years to clean up after Edelson's law).
  • The decline in population (the leftward wiggle in the trajectory) is a peaceful, benign thing, ultimately resulting in a universal high standard of living.
  • On the longest time horizon, there is some increase in both power and population.
    • This civilization apparently reaches the long-term conclusion that a large and happy population is better than a smaller happy population. The reverse could be argued. Perhaps in the fullness of time, both possibilities were tried.
    • So what happens at the far end of this Long Now (20,000 years from now? 50,000?)? Even without the Singularity, it seems reasonable that at some point the species would become something greater.
  • A policy suggestion (applicable to most of these scenarios): [Young] Old People are good for the future of Humanity! Thus prolongevity research may be one of the most important undertakings for the long-term safety of the human race.
    • This suggestion explicitly rejects the notion that lots of old people would deaden society. I'm not talking about the moribund old people that we humans have always known (and been). We have no idea what young very old people are like, but their existence might give us something like the advantage the earliest humans got from the existence of very old tribe members (age 35 to 65).
    • The Long Now perspective comes very naturally to someone who expects that not only will his/her g*grandchildren be around in 500 years—so may the individual him/herself.
    • And once we get well into the future, then besides having a long prospective view, there would be people who have experienced the distant past.

Scenario 3: The Wheel of Time

Figure 3

I fear this scenario is much more plausible than The Golden Age. The Wheel of Time is based on the fact that Earth and Nature are dynamic and our own technology can cause terrible destruction. Sooner or later, even with the best planning, megadisasters happen, and civilization falls (or staggers). Hence, in this diagram we see cycles of disaster and recovery.

  • What would be the amplitude of such cycles (in loss of population and fall of technology)?
  • What would be the duration of such cycles?

There has been a range of speculation about such questions (mostly about the first recovery).

In fact, we know almost nothing about such cycles—except that the worst could probably kill everyone on Earth.

How to deal with the deadliest uncertainties

A frequent catchphrase in this talk has been "Who knows?". Often this mantra is applied to the most serious issues we face:

  • How dangerous is MAD, really? (After all, "it got us through the 20th century alive".)
  • How much of an existential threat is environmental change?
  • How fast could humanity recover from major catastrophes? Is full recovery even possible? Which disasters are the most difficult to recover from?
  • How close is technology to running beyond nation-state MAD and giving irritable individuals the power to kill us all?
  • What would be the long-term effect of having lots of young old people?
  • What is the impact of [your-favorite-scheme-or-peril] on long-term human survival?

We do our best with scenario planning. But there is another tool, and it is wonderful if you have it: broad experience.

  • An individual doesn't have to try out every recreational drug to know what's deadly.
  • An individual has, in him/herself, no good way of estimating the risks of different styles of diet and exercise. Even the individual's parents may not be much help—but a Framingham study can provide guidance.

Alas, our range of experience is perilously narrow, since we have essentially one experiment to observe. In the Long Now, can we do better? The Golden Age scenario would allow serial experimentation with some of the less deadly imponderables: over a long period of time, there could be gentle experiments with population size and prolongevity. (In fact, some of that may be visible in the "wiggle" in my Golden Age diagram.)

But there's no way we can guarantee we're in The Golden Age scenario, or have any confidence that our experiments won't destroy civilization. (Personally, I find The Wheel of Time scenarios much more plausible than The Golden Age.)

Of course, there is a way to gain experience and at the same time improve the chances for humanity's survival:

Self-sufficient, off-Earth settlements as humanity's best hope for long-term survival

This message has been brought back to the attention of futurists by some very impressive people: Hawking, Dyson, and Rees in particular.

Some or all of these folks have been making this point for many decades. And of course, such settlements were at the heart of much of 20th century science-fiction. It is heartwarming to see the possibility that, in this century, the idea could move back to center stage.

(Important note for those surfing this talk out of context: I'm not suggesting space settlement as an alternative to, or evasion of, the Singularity. Space settlement would probably be important in Singularity scenarios, too, but embedded in inconceivabilities.)

Some objections and responses:

  • "Chasing after safety in space would just distract from the life-and-death priority of cleaning up the mess we have made of Earth." I suspect that this point of view is beyond logical debate.
  • "Chasing after safety in space assumes the real estate there is not already in use." True. The possibility of the Singularity and the question "Are we alone in the universe?" are two of the most important practical mysteries that we face.
  • "A real space program would be too dangerous in the short term." There may be some virtue in this objection. A real space program means cheap access to space, which is very close to having a WMD capability. In the long run, the human race should be much safer, but at the expense of this hopefully small short-term risk.
  • "There's no other place in the Solar System to support a human civilization—and the stars are too far."
    • Asteroid belt civilizations might have more wealth potential than terrestrial ones.
    • In the Long Now, the stars are NOT too far, even at relatively low speeds. Furthermore, interstellar radio networks would be trivial to maintain (1980s level technology). Over time, there could be dozens, hundreds, thousands of distinct human histories exchanging their experience across the centuries. There really could be Framingham studies of the deadly uncertainties!

What's a real space program ... and what's not

  • From 1957 to circa 1980 we humans did some proper pioneering in space. We (I mean brilliant engineers and scientists and brave explorers) established a number of near-Earth applications that are so useful that they can be commercially successful even at launch costs to Low Earth Orbit (LEO) of $5000 to $10000/kg. We also undertook a number of human and robotic missions that resolved our greatest uncertainties about the Solar System and travel in space.
  • From 1980 till now? Well, launch to LEO still runs $5000 to $10000/kg. As far as I can tell, the new Vision for Space Exploration will maintain these costs. This approach made some sense in 1970, when we were just beginning and when initial surveys of the problems and applications were worth almost any expense. Now, in the early 21st century, these launch costs make talk of humans-in-space a doubly gold-plated sham:
    • First, because of the pitiful limitations on delivered payloads, except at prices that are politically impossible (or are deniable promises about future plans).
    • Second, because with these launch costs, the payloads must be enormously more reliable and compact than commercial off-the-shelf hardware—and therefore enormously expensive in their own right.

I believe most people have great sympathy and enthusiasm for humans-in-space. They really "get" the big picture. Unfortunately, their sympathy and enthusiasm have been abused.

Humankind's presence in space is essential to long-term human survival.
That is why I urge that we reject any major humans-in-space initiative that does not have the prerequisite goal of much cheaper (at least by a factor of ten) access to space.
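To make that factor-of-ten threshold concrete, here is a back-of-the-envelope sketch. The thousand-tonne "settlement seed" payload is a hypothetical figure chosen purely for illustration; only the dollars-per-kilogram range comes from the talk.

# Launch-cost arithmetic for the factor-of-ten argument above.
# The payload mass is an illustrative assumption, not a figure from the talk.
def launch_cost(payload_kg: float, dollars_per_kg: float) -> float:
    """Launch bill for lifting a payload to LEO at a given price per kilogram."""
    return payload_kg * dollars_per_kg

settlement_seed_kg = 1_000_000  # hypothetical 1,000-tonne initial payload
for price in (10_000, 5_000, 1_000, 500):  # $/kg to LEO
    cost = launch_cost(settlement_seed_kg, price)
    print(f"${price:>6}/kg  ->  ${cost / 1e9:4.1f} billion just for launch")

At the quoted $5,000 to $10,000/kg, the launch bill alone for such a payload is $5 to 10 billion; a tenfold cheaper ride brings it under $1 billion, which is roughly the difference between a politically impossible program and a fundable one.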

Institutional paths to achieve cheaper access to space

  • There are several space propulsion methods that look feasible—once the spacecraft is away from Earth. Such methods could reduce the inner solar system to something like the economic distances that 18th century Europeans experienced in exploring Earth.
  • The real bottleneck is hoisting payloads from the surface of the Earth to orbit. There are a number of suggested approaches. Which, if any, of them will pay off? Who knows? On the other hand, this is an imponderable that can probably be resolved by:
    • Prizes like the X-prize.
    • Real economic prizes in the form of promises (from governments and/or the largest corporations) of the form: "Give us a price to orbit of $X/kg, and we'll give you Y tonnes of business per year for Z years."
    • Retargeting NASA to basic enabling research, more in the spirit of its predecessor, NACA.
    • A military arms race. (Alas, this may be the most likely eventuality, and it might be part of a return to MADness. Highly deprecated!)

©2007 Vernor Vinge

 

   
 

Mind·X Discussion About This Article:

singularity is underway
posted on 03/14/2007 11:16 AM by daviest

Its funny to think of it not taking place.
My company is currently building a 1,000 teraflop box for under 20 million. 5 Years ago it would have cost billions. 10 years ago it was impossible.
I have built myself an at home 2 tefaflop box four under 20k... wake up world .. its like a frieght train comming down the tracks

Re: singularity is underway
posted on 03/14/2007 12:15 PM by NanoStuff

You've got a petaflop box to play around with? Imagine the word processing capabilities of that thing! You could press a key and it would show it in REAL TIME!

Re: singularity is underway
posted on 03/14/2007 12:42 PM by daviest

no. we are building one. actually there are several on order, from several non-traditional companies.
look to hybrid gpu/fpga systems. Heck, even two weeks ago AMD announced a 1 tflop gpu box with either 2 or 3 of their current ATI gpu's.

Re: singularity is underway
posted on 12/19/2007 12:13 PM by wGraves

If Microsoft does the OS, your petaflop will be slower than my iPhone.

Re: singularity is underway
posted on 12/29/2007 9:43 AM by gvald31

Because of the natural aggressiveness of humans, someone will start an international conflict and will use the advanced technology in a war. What can be more dangerous for the human race?

Re: singularity is underway
posted on 03/15/2007 3:30 PM by godchaser

"..wake up world .. its like a frieght train comming down the tracks."


That sounds about right-


Casey smiled, said I'm feelin' fine.
Gonna ride that train to the end of the line.
Got a head of steam, and ahead of time.

Caller called Casey, half past four, He kissed his wife at the station door.
Climbed into his cab, orders in hand, could be my trip to the Promised Land.

The Engine rocked, the Drivers rolled, Fireman hollered, Save my Soul!
Don't you fret, keep feedin' the fire, don't give up yet, we'll make it through, She's steamin' better than I ever knew.

Run her til she leaves the rail.

Re: singularity is underway
posted on 03/17/2007 12:14 PM by Locutus

I'm deeply worried about the starry-eyed optimism regarding the singularity; it strikes me that productive humans have found yet another topic that will take their minds off scarier issues.
Lest I be labelled a troll on my first post, I'd best clarify what I don't mean.
I don't mean that this forum is a waste of time or a distraction; after all, little technological or social development occurs if people don't discuss ideas. But what worries me is that people are treating the singularity as though it were an inevitable phenomenon akin to entropy.
Physics/maths describes ultimate entropy occurring in our universe no matter what else happens, whereas the emergence of the singularity could be prevented at any point by a bloody great asteroid hitting the earth!
And there are many other scenarios that would lead to the failure of the singularity, just as there are many that would lead to its inception, but positive or negative, all current scenarios exist only in potentia.
It behoves us to moderate our language in relation to theoretical developments of any kind. We too often forget that the relationship between language and thought is a two-way process: language is not just the physical manifestation of thought, but in its turn the nature of the language we use can alter the nature of our thought processes.

Locutus.

Re: singularity is underway
posted on 03/18/2007 5:15 AM by godchaser

-Absolutely.. let's consider it done.

:)


Re: singularity is underway
posted on 03/27/2007 3:42 AM by surveiller

"But what worries me is that people are treating the singularity as though it were an inevitable phenomenon akin to entropy."


Depends what you mean by inevitable:
- an asteroid, global nuclear war or something similar is a way to avoid it,
- human decisions certainly are not, even if that looks a bit counterintuitive.

Why? Because technology has its own implicit logic, its own rationale. We are used to saying that we build and use technology; however, it would be more adequate to say that we are symbiotically connected with technology. One might even say that technology uses us as its material environment to build itself.

This makes it inevitable. On a subjective level it would be mad to reject the technology. And only that really matters.

Re: singularity is underway
posted on 04/21/2007 8:11 PM by StageOne

Hey Bro,

What technologies for everyday life do you see coming soon from this super computing capacity? You sound like the dude to ask.

Any products or stock tips?

Alas, I am from the non technical somewhat artistic side of the tracks.

Re: singularity is underway
posted on 07/09/2007 11:01 PM by Brian H

How many teraflops does it take to avoid making 3 spelling errors in one short paragraph?
Jeez.
Proves that human error and stupidity are ineradicable, I suppose.

Re: singularity is underway
posted on 07/09/2007 11:03 PM by Brian H

Oops, correction: in one sentence, not paragraph, and it's 4 errors.

Re: singularity is underway
posted on 07/09/2007 11:06 PM by Brian H

Further correction: it's 5 errors, if one counts the punctuation error (its should be it's).
Memorize:
he's, she's, it's;
his, hers, its.

Bye.

Re: singularity is underway
posted on 08/10/2007 2:48 PM by Stormfeather

And youre on the design team? ahhh yes wonderful LOL...thanks for the patches that only you felt the need to apply....typos swim freely in my blood too i just hope noone notices, jeeze I thought I was a geek, then you styarted talking beyond tera flops and tarzan's 1 gig Brain strated hurting LOL...

Re: singularity is underway
posted on 09/05/2007 1:26 AM by brasicano

Quite a bargain - 1,000 teraflops for $20m! IBM's BlueGene/L, which currently holds the top spot at 360 teraflops, cost $400m... now that's progress! =) Can I invest?

It's funny that with all this talk of the singularity and mind uploading, etc., nobody ever mentions optimal intelligence or optimal living. Why do we feel we need god-like existence to be happy? What if, as Vernor Vinge suggests, an alternate outcome be merely a golden age? Would this be a suboptimal outcome?

I might suggest another outcome altogether, one I haven't heard mentioned before. What if the singularity does happen in all its technodivine glory? You know, mind uploading, god-like intelligence, living within virtual universes, faster than light space travel, ultimate knowledge, spreading intelligence throughout the universe, living until permadark, and the list goes on. Is that necessarily an optimal outcome?

Computers will become as fast as the human brain within the next decade, although not as intelligent. In the '20s we will see the Turing Test passed, and by the '30s strong AI will almost certainly be a reality. It will be interesting to see if friendly high intelligence will curb the "singularity scenario" for our own good, as I suspect it might (or make a good argument for postponing it and convince us as to our best path).

This is of course philosophical speculation, one without any foundation other than instinct and imaginative meanderings. But I would like to put it out there for further discussion as I haven't heard it talked about much if at all. Is there an optimal way of life... one of peace, love, joy, fulfillment, choice, freedom and happiness? Are these not our ultimate objectives anyway? What if we are blinded by the singularity and miss the whole point (or event horizon, ha ha)? What if the singularity happens but we still have nothing of what we truly need? What if more knowledge and faster thinking bring more anxiety and depression, even with advanced happy drugs? What if we lose our humanity?

Perhaps strong AI will know the answers to these questions. However, it might be wrongly assumed that strong AIs will design themselves to be millions of times smarter than us, or even that there isn't an upper limit to intelligence. Perhaps, though, they will be smart enough to avoid a sudden singularity or to guide us to a better life without a singularity at all. One thing is far from certain though... how many of us, if any, will still be here in two decades to find out? Live in peace... but most importantly, be proactive!

Eric

Re: singularity is underway
posted on 12/06/2007 3:20 AM by Divus Logos

" What if the singularity does happen in all its technodivine glory? You know, mind uploading, god-like intelligence, living within virtual universes, faster than light space travel, ultimate knowledge, spreading intelligence throughout the universe, living until permadark, and the list goes on. Is that necessarily an optimal outcome? "

I believe it is. Even with our limited human imagination, just look at all we've been able to picture in our fiction and fantasies. I believe that if the singularity doesn't take place, it will all be very ironic, like a big tease on humanity.

Never before have humans been able to so viscerally portray their heroes-gods-legends. We're constantly bombarded with ultra realistic representations of what could be, increasing our desires to actually attain such.

Could this universe be so cruel in its nature as to tease us so horribly? Showing us such a marvelous glimpse at how glorious existence could actually be while never giving us more than the equivalent of a concentration camp!?!?

As for the singularity taking place, I do believe it will take place barring some unforeseen catastrophe, unless serious unforeseen impediments lie on the h/w side of the equation (either problems ramping up the computational power, or later on some insurmountable problem in molecular machinery design... both extremely unlikely, afaik). Attaining sufficient computational h/w to provide post-human capacity to a strong ai seems achievable within the near future (even if it were working as a high-IQ human brain without further design upgrades based on knowledge of its workings, such h/w could speed it up tremendously and provide enough raw simulation power to carry out vast technological R&D rapidly in the much more efficient virtual realm). Such an ai would quickly approach optimal molecular machinery design, which will definitely transform the world (for starters, abolishing physical labour, aging, cancer and natural diseases, and making manufacturing / transportation / etc. virtually free and automatic, thus making mass space colonization simultaneously viable, etc.).

"Here's an interesting thought. I know the preponderant view is that advanced AI intelligence will be structured identically to a human's. If that is the case, what happens as this intelligence scales up and up over time, into the millions of human intelligence multiples? Does it have 1M times the lust, the ambition, all of the id characteristics as well as the rational? "

I doubt we'll run into such problems; look at humans vs other lower life forms. We're able to handle more complex patterns in more diverse ways, and as for instincts and emotions, we're often able to actually control them. A greater super-intelligent being will probably be able to grasp complex sequences and patterns that we can't even conceive of. It will be like the difference in symbolic manipulation and communication/language capacity between a chimp and a human at first, and much greater later on. And just as that vast difference in intellect did not entail uncontrollable or insatiable increases in instinctual/emotional mechanisms, I believe it is unlikely such will develop later on (emotions will probably become deeper, more defined and more elegant, just as I believe they became in us humans as compared to lower lifeforms).

For example, imagine an entity able to understand and mentally/visually represent, say, an entire cell with all its millions upon millions of molecules accurately, and to have keen insight into its workings at multiple scales, from single proteins to organelles to the entire cell. Or able to quickly combine, almost always without bugs or errors, millions of lines of code involving highly complex algorithms into innovative s/w, while at the same time mentally visualizing how it all works together (something quite clearly beyond human capability).

Re: singularity is underway
posted on 01/25/2008 10:59 AM by the last prophet

"What if the singularity happens but we still have nothing of what we truly need? What if more knowledge and faster thinking bring more anxiety and depression, even with advanced happy drugs? What if we lose our humanity?"

Do you think technology will stop at "advanced happy drugs"? In my opinion most people aren't really happy. For some it might even be impossible to achieve a fulfilled life no matter how hard they try.
When science progresses in its understanding of the human brain, we will likely be able to cure unhappiness itself. I don't see why there should be a reason technology couldn't overcome such hurdles as depression and anxiety.

Re: singularity is underway
posted on 01/27/2008 9:30 PM by brasicano

I agree that science will be able to cure unhappiness, depression and anxiety. The question I pose is, will we lose our humanity at some point along the way? At what point are we reduced to mere programs operating the way our drugs tell us to. Do we really want a world of "shiny happy people", genetically and/or pharmaceutically modified to be near perfect? The most interesting and strongest people I know have failed time and again, but they are better humans for it today. Experience is how humans learn best, and the best stories ever told come from those experiences. Sure, a cure for serious depression should be sought, no doubt about it. I just think we should be more cautious when it comes to "happy drugs" for the general populace.

Re: singularity is underway
posted on 01/28/2008 5:38 AM by the last prophet

Thanks, I appreciate your answer :)

"At what point are we reduced to mere programs operating the way our drugs tell us to."
I think we are mere programs right now, programmed by blind nature for survival and reproduction. I would rather control myself intelligently, by drugs or other means.

"The most interesting and strongest people I know have failed time and again, but they are better humans for it today. Experience is how humans learn best, and the best stories ever told come from those experiences."
It's interesting to think about how society would change if people became happier. We should definitely be careful about the drugs we take and how they influence us.
I don't think people would stop being interesting. There will always be ways to seek out new challenges and learning experiences. In my opinion suffering is just plain unnecessary. Probably there will still be positive feedback from learning experiences.

Re: singularity is underway
posted on 03/15/2009 4:55 PM by isamelb

The singularity as described by Kurzweil will definitely not happen. Machines will never be conscious. What everyone is building are sophisticated calculators.
I have just read Ray's book The Singularity is Near. But I was shocked by how simple-minded the man is. His book is repetitive around one central idea: processing power is growing exponentially. Also, the time-frames he gives are even less realistic. In the far future we might be able to significantly increase life expectancy, but not in twenty years. Even if some predictions turn out to be right, most people are not ready for such a dramatic change.
The brain works at the subatomic level; simply duplicating the outer shell will not give rise to intelligence as we know it, but might give rise to a very sophisticated database of questions and answers that works really fast. Kurzweil is a dreamer and a modern-day prophet, and his ideas are just a sort of religion, just like Scientology.
In the meantime the man is making a fortune out of his predictions. He is a very good marketeer and salesman. Dismiss the man and get back to reality. Immortality: who is he trying to fool?
Work hard, save money, look after your health and enjoy a realistic life span. Reality is already superb and wonderful if we work at making ourselves happy.

Re: singularity is underway
posted on 04/20/2010 10:01 PM by millerman

It's people like you who stall the progression of technology. Let the ones who know what's best decide what to do and let them act as they see best. Unless you have an advanced computer science degree hidden under your bad typing skills, don't tell the ones who believe it's possible that it will never happen.

Re: singularity is underway
posted on 04/21/2010 6:10 AM by isamelb

Thanks for your reply. I do have a technology degree.
By the way, English is my third language, but never mind the details. Let's focus on the idea. Every bit of me yearns for progress, but I simply think that Kurzweil's predictions are not realistic and only serve one purpose: to make him even richer. His place is not in predicting the future of technology but in the entertainment industry. Oh, by the way, there is a film by Kurzweil on the singularity. Well done, Kurzweil, stick to entertainment.

Re: singularity is underway
posted on 04/23/2010 3:13 AM by millerman

Yeah sorry about the typing skills part, I had read a different reply and mistook it for this one. Anyway, I expect that if we remain determined as a race and respectful of the problems this technology creates and we don't abuse it and destroy ourselves, his predictions may come to pass. I plan on an education path that will allow me to influence the Singularity as much as I can.

Re: singularity is underway
posted on 04/23/2010 10:12 AM by isamelb

I congratulate you on your enthusiasm. I hope we can be wise enough not to exterminate ourselves in our pursuit of happiness.

Re: What If the Singularity Does NOT Happen?
posted on 03/14/2007 12:26 PM by Arthur

One subtle thing about Vinge's "Wheels of Time" scenario is that after the first catastrophic destruction of civilizations and decline of population (to "almost extinction", as he puts it), the natural resources necessary to rebuild human civilizations back to an industrial level won't be available.

Why? Because we will have already used them up to build the current civilizations.

The rich, easy-to-extract iron ore deposits of the Mesabi Iron Range of northern Minnesota are all gone (much of that iron now sits at the bottom of the Pacific Ocean, in the form of U.S. Navy ships sunk by Japan's Imperial Navy). The same can be said for oil, natural gas, coal, copper, etc.

Bottom line, humanity basically only has one shot at industrialization. If it blows it a la Vinge non-Singularity scenarios 1 (Return to MAD) and 3 (Wheels of Time), it's the Middle Ages, forever and ever, amen.

Re: What If the Singularity Does NOT Happen?
posted on 03/15/2007 5:52 AM by rand51

People who don't appreciate the creative potential of free, innovative minds have, throughout modern history, predicted the exhaustion of natural resources and a resulting collapse of civilization. This view is demonstrably false and has been refuted repeatedly by the history of Capitalism. When people are not prevented from finding solutions to emerging problems, there is always an alternative. As Julian Simon explained in his book, "The Ultimate Resource", there is no limit to natural resources. The universe is one vast natural resource. The idea that we have only one shot at industrial civilization is clearly mistaken and demonstrates a failure to appreciate the basic economic principles that govern the world.

Re: What If the Singularity Does NOT Happen?
posted on 03/17/2007 12:15 PM by Arthur

rand51 wrote,

People who don't appreciate the creative potential of free, innovative minds have, throughout modern history, predicted the exhaustion of natural resources and a resulting collapse of civilization. This view is demonstrably false and has been refuted repeatedly by the history of Capitalism. When people are not prevented from finding solutions to emerging problems, there is always an alternative. As Julian Simon explained in his book, "The Ultimate Resource", there is no limit to natural resources. The universe is one vast natural resource. The idea that we have only one shot at industrial civilization is clearly mistaken and demonstrates a failure to appreciate the basic economic principles that govern the world.


Spoken like a true codehead. :-)

Seriously though, if the world follows Vinge's "Golden Age" non-Singularity path, then Julian Simon's view (and yours) has some validity to it. If no teradisasters ever occur, the world's industrial and post-industrial bases could conceivably continue to develop and change indefinitely, as billions and billions of human brains function as "the ultimate resource" by linking together through the Internet to solve a non-stop series of technical problems and thus eliminate all resource and environmental constraints on the growth and prosperity of the human population on Earth.

Vinge himself claims, however, that his "Golden Age" non-Singularity scenario is highly unlikely. Instead, sans Singularity, Vinge expects that one of his other two scenarios is far more likely to occur: MAD (leading to rapid human extinction) or Wheels of Time, in which man goes through repeated, unending cycles in which a rise to a high level of industrial civilization is followed by a teradisaster that collapses industrial society and reduces human population by 80-95%.

Vinge's subtle point here is that in the latter scenario the "Wheel" would actually only turn once. Once most humans are wiped out and all human civilizations are forced back to a traditional agrarian way of life (with the dominant energy forms once again being wood, human and animal power), returning to a high industrial civilization is no longer possible. The reason is that all the lodes of key natural resources that can be easily extracted using a combination of wood and human/animal power alone are no longer there. We extracted all of those lodes in the run-up to the current industrial civilizations. Remaining lodes now are all in very difficult-to-extract places. Successful extraction of these lodes requires that an advanced industrial civilization already be in place; they cannot be extracted otherwise. Even "the ultimate resource" has to have the appropriate physical materials available to work with in order to transform an agrarian society into an industrial one. If not, it all turns into philosophizing, with former software programmers wandering around saying "If I had the materials available (and the requisite skills) to build a can opener, I'd be able to open that can of chili."

This is why humanity basically has only one shot at industrialization. To try to make sure that humanity doesn't blow it, Vinge has presented the Long Now Foundation and KurzweilAI.net with two options: figure out a way to make the Singularity happen or figure out a way to make a non-Singularity Golden Age happen. Otherwise, humanity's long term future will be either extinction or the Middle Ages, forever and ever, amen.

Here endeth the lesson. :-)

Re: What If the Singularity Does NOT Happen?
posted on 03/18/2007 5:24 AM by Lhuhikwdwoo

This is something I've seen in a few other end-of-the-world scenarios: the idea that we only have one shot at industrialization. While it is correct that the free ride we get from gas deposits, and perhaps coal, only happens once, in the event of a population crash other materials like iron will still be around in abundance. In fact they will be even easier to get at that time. Recycle the cities and junkyards.

Re: What If the Singularity Does NOT Happen?
posted on 03/18/2007 5:37 PM by CharlesH

Unfortunately, most of the metal in junkyards is there because it's difficult to reprocess...lower grade, in essence, than the ores currently being processed.

This is NOT a good resource. Most of it might make good spearpoints or even hoes. I doubt that it would make a good plowshare. It would NOT make a good sword. Now as to more complex machines...

If we fail this time we'll need to climb back via ceramics and bio-technology (without electron microscopes). It's doable, to an extent at least. I'm not sure just how high one can climb following that path, however.

Re: What If the Singularity Does NOT Happen?
posted on 03/19/2007 4:34 PM by yamahaeleven

All metals become better the more recycled content they contain. The metals in landfills will soon become a valuable resource, even for our civilisation. If the population post-"resetting" is 90% below current levels, the leftovers from this civilisation will be an enormous resource. Most of our steel is locked up in buildings, not ships at the bottom of the sea, by many orders of magnitude.

In the unlikely event civilisation hits the reset button, plenty of knowledge will be recovered and it can start much farther up the ladder, simply by knowing what was possible before. Our civilisation is much more robust than most people think, I consider the probability of a great step backward to be extremely low.

Re: What If the Singularity Does NOT Happen?
posted on 08/30/2007 11:19 AM by Deaken

I'm right in line with this thought process. If there is some great catastrophic setback, we will retain much of the knowledge we have gained; even if we lose electricity, we still have much of our knowledge in print. We will have to find other ways to work around new problems, but with enough time we will find new answers.

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 7:06 PM by grantcc

The rate at which we are destroying our environment and the inability of governments around the world to make rational decisions about what to do about it makes me doubt we have much time left. We will have to adapt or die. Some of us will, but most of us won't, in my opinion.

Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 2:45 AM by Jake Witmer

The fatal flaw of the scientists on this board who are concerned with "the Long Now" is that they don't seem to allocate more importance to the period immediately preceding the Singularity. How asinine to believe that strong AI won't significantly change things beyond our ability to cope with them!

What is therefore important is LAW. MORALITY. JUSTICE.

I have incentives to be bad and incentives to be good. It is my education in LAW and PHILOSOPHY that encourages me to be basically good. To see the larger order made by an accepted code of exchange.

America has been destroyed. The people who understand how America has been destroyed (via the destruction of the power of the American jury trial, and the corresponding destruction of the decentralization of individual power by the collective) are not really taking part in this debate too much.

Kurzweil seems to understand a significant number of key issues, but has not really addressed them the way Freitas has in his "What Price Freedom?" essay on the "Big Thinkers" board.

The primary problem is that the American people are encouraged to be slaves by the public education system. I suspect that Kurzweil knows too many teachers and people on the dole (since those people are often rewarded with positions of power and success in our declining socialist police state) to be objective about the cost of socialism, regulation, and outright brutality engendered by the government.

He (and most of the others on this board) do not want to
1) strip the police state of its power until it becomes moral (which it will not do on its own). --Force would be necessary, and the police state carries a big stick, and is willing to use it preemptively. This takes GUTS.
2) Advocate changing the government. They, like virtually all comfortable people, have taken a "wait and see" approach to the problem of unjustifiable tyranny.

The problem is this: If technology continues to be abused by the modern American police state, then sooner (rather than later), that police state will have absolute power.

This begs the question: What do police states usually do with absolute power?

...They steal everything that everyone owns, and end in a prolonged orgy of murder, and then they start over, with a few of the worst problems/offenses of the prior police State being solved in a very neanderthal, incomplete, and cursory way.

The problem? ---That won't be good enough this time! Extreme surveillance technologies and power centralization technologies mean that THERE WON'T BE A "NEXT TIME".

A complete tyranny now has the resources to last A LONG, LONG, LONG, TIME. Longer than Soviet Russia lasted.

The jury trial in America was destroyed in 1895 by the Sparf v. Hansen Supreme Court case. There is no longer such a thing as a "trial by a jury of your peers". Moreover, the "civil trial" designation, excessive fines and cruel punishments, and the expansion of "voir dire" (prosecutorial hangman jury selection) have all but eliminated the STATE DISINCENTIVE TO TYRANNY.

Without a disincentive to tyranny, TYRANNY REIGNS SUPREME IN POWER, because STEALING MONEY IS EASIER THAN EARNING IT.

The American public never read Ayn Rand. They're too miserably stupid to even bother seeking the truth, and when confronted with the truth, they only feel guilty, and then disavow it. They are then complicit tools of the police state.

Sadly, most scientists (though nobler than the majority of society in their willingness to at least think about something) are equally ignorant about what their rights are.

It takes more than smarts to reach the conclusions I've reached above. It takes
1) a knowledge of history
2) a knowledge of human nature
3) a knowledge of philosophy and law
4) intellectual honesty.

And #4 is where most people really fall apart. Most are not honest in their appraisals of reality.

Yet, I think that #2 is where most (talented) scientists (who are not morally compromised by receiving stolen money from government grants) fall apart. Most scientists I've talked to are well-meaning social outcasts who apply what they know about their own intentions to other people. They think most other people want to be productive, and produce good work, and not steal, and generally act in a civilized way.

And that is one reason why scientists are killed in great numbers when the chimpanzee public finally takes COMPLETE CONTROL OF GOVERNMENT FORCE.

Observe how the FDA has bullied thousands upon thousands of vitamin and supplement distributors who were simply MEETING DEMAND for their products. They did nothing wrong, but the FDA comes in with guns drawn, raids them, burns their inventory and their BOOKS.

The FDA goons were hired by the general public that knows not, and cares not, how their tax dollars are used to RUIN PEOPLE'S LIVES.

Think about that.

What kind of system of law will the PRE-SINGULARITY LAW ENFORCEMENT ROBOTS be enforcing?

If they are not vastly more intelligent AND MORALLY EDUCATED than current humans are, then they will enforce HUMAN LAW. HUMAN LAW is now (after the destruction of all the portions of the constitution that protected individual rights) based on the desire of one chimpanzee to bash another over the head and steal his food and his mate.

There seem to be very few people on this board who understand this, yet I know it to be true. I've simply seen too much of people at a deep level to fool myself.

People are about as "good" as the system of law they live under. If the power of government is not strictly restricted, then it is simply the unlimited power of theft. If theft is unlimited it leads to murder. What else is there to steal from someone who is angry and enraged that everything has been stolen from him?

Nothing but his life.

And this is the simple thing that will likely decide whether the singularity is "constructive" towards human life, or destructive towards human life. What system of law is the law of the land GOING INTO the singularity?

Well, the people here could write (and have written) a book on the subject:
http://www.fija.org
http://www.ij.org
http://www.cato.org
http://www.lp.org
http://www.optimal.org
http://www.john-ross.net
http://www.objectivistcenter.org

For those who want to get a glimpse of why I believe the things I believe, I recommend these books as starting points to developing an honest view of society:
"Unintended Consequences" John Ross
"The Shadow University" Kors & Silverglate
"The New Prohibition" ed. Bill Masters
"The Ominous Parallels" Leonard Peikoff
"Capitalism and Freedom" by Milton Friedman

It will be a shame if we are living in a totalitarian dictatorship right next to the tools for living in a paradise.

I think this is very likely since:
1) the general public, including most scientists, is completely unaware of and unconcerned with the idea of individual rights (both others' rights and their own)
2) if the singularity happens and humans are living under a totalitarian ROOT SYSTEM, then why would those who are enlightened take any part in that system? Answer: they wouldn't! They would treat us the way we treat chimpanzees. We don't recognize "chimpanzee rights" primarily because THEY DON'T RECOGNIZE THEIR OWN RIGHTS. --If humans are viewed the same way by machines, why would they risk physical harm by exposing themselves to us? When humans stupidly expose themselves to the brutality of chimps, they are often let down. (Google "chimp attack" to find out what post-human intelligences have to gain by getting involved with human governments... it would be almost funny if it weren't so prescient.)

-Jake

Re: What If the Singularity Does NOT Happen?
posted on 09/16/2007 2:14 AM by NotEqualwithGod

The real problem is egoism; we could have world peace tomorrow if we were able to replicate the philosopher kings among humanity and have them outnumber the animalistic, zoological-ego-typed human beings. Even many people here are animal egoists (less mature, ignorant, backwards in terms of ego-framework).

Most people here don't even realize that our problem is with animal egotism, the framework of human ego and biological instincts gives birth to every problem in existence.

For instance if all human beings on earth had their nervous systems and minds linked, crimes/etc would disappear, if you merged their egos into a unitary identity with compartmentalized (regulated) individuality.

The reason why nothing gets done and human beings are so incompetent is that the ego-framework that directs human behavior in many people is extremely flawed and feral (backwards).

Any A.I. that humans create should be merged with the most kind people on the planet or based on their life experiences and data, IMHO.

Kind people know that killing one another is as backward as it is ignorant, but people do it every day because of their immature ego-framework.

They do not understand unified wills or unified identities, beyond things like kin, etc.

And even within families there are incongruent identities, where you can have a family that's little more than strangers, due to a lack of common traits, culture, etc.

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 1:49 PM by jabelar

I think this is why Vinge's Golden Age scenario includes the idea that longevity might make people wiser (presumably by either getting greater perspective, or perhaps by having more investment in the future).

For example, I would agree that if people knew they'd be around in 300 years, they'd probably generally care a lot more about the environment. People do seem so egotistical generally that even their own descendants are not sufficient reason to make the world better.

The only downside is that individuality still seems important. I would hope that somehow the Singular meta-conscious would still allow for constructive individuality. Otherwise, the idea of conquering ego smacks of erasing individuality.

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 7:31 PM by grantcc

Craig Venter says that things made of DNA and the products it produces ARE machines. I find it hard to fault his theory. They are just made of ones and zeros packaged differently.

Re: What If the Singularity Does NOT Happen?
posted on 01/19/2010 7:13 AM by rastronauts

I think most of what you said is pretty wise (and chilling), but I disagree with you on the last part. An intelligent group of humans can easily use an understanding of the chimps' nature to eradicate them without the chimps ever knowing the humans were involved.

Other than people who eat them for bush meat, though, people generally don't have much incentive to kill chimps. Instead (kind of as you said), they steal their habitat and resources and kill the chimps if they don't move on. If there is an understanding by the humans that stealing chimp resources won't be possible without an altercation (which is probably more the case), then humans will opt to outwit the chimps and kill them off before taking their resources and real estate.

To say that machines wouldn't mess with human governments is, I believe, inaccurate. The machine(s) will serve its/their interest(s) and, being more intelligent than humans, will either kill us or drive us away from what it/they want(s) by tricking us with its/their greater intelligence.

I interchange the singular and the plural because a post-singularity machine would, I believe, only combine forces with other machines if those machines had enough individual or combined intelligence to pose a threat to the previously described intelligent machine.

It may be necessary to run a machine thought to potentially be more intelligent than humans through simulation to determine how it would interact with an outside world populated by humans. Depending on its reaction in the simulation, it may or may not be safe to connect its mind to the actual physical world.

Re: What If the Singularity Does NOT Happen?
posted on 03/25/2008 1:19 PM by grantcc

The singularity may be happening right now, but because we are too busy doing things the way we have learned to do them thus far, we don't notice that we've reached a point where we can no longer predict the future. Before WWII no one would have predicted that we could cross the Atlantic in three hours, talk basically face-to-face across the world, walk on the moon, or run our lives with computers that fit in the palm of your hand, etc. All of these things would have been unimaginable to people just a hundred years ago.

What we're looking at as probabilities now are an age where we are able to manufacture things at the atomic level, products that manufacture themselves without human interference, the ability to change the genomes of every living thing on Earth, and the ability to combine humans and computers into a single species that connects all humans, plants and animals into a single organism capable of reaching out to the stars. But as we reach each of these stages we will always be looking for something better in the future to call "the singularity" and think that "now" we are not quite there yet.

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 1:44 PM by jabelar

You're confusing prediction with imagination. People have actually imagined all the things that have happened, and our imaginations have probably covered most of the things that will happen.

Also, there is the matter of specificity of prediction. No one might have predicted Facebook, but they most certainly predicted social networking.

So it is likely accurate to say that some sort of nanobots will do something useful within our bodies by 2050, but likely inaccurate to say exactly what type of nanobot doing exactly what type of function made by exactly such-and-such company.

Kurzweil explains this with regard to black holes: despite the event horizon, we are able to infer at least general things beyond it.

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 6:58 PM by grantcc

The word "singularity" implies that the future will be unpredictable. You don't have to predict what is happening now. It's here. But many things are going to happen that are not based on today's methods and technologies. Most of todays technologies were unpredictable just fifty years ago, let alone 100 but they seem natural to us because we live with them and they are a big part of our lives. I think that's how the singularity is or will be. As it takes over our lives, we will not notice because we can look at history and see how we got here.

What you live with looks like the logical outcome of what has happened before. At the rate things are changing, the world we live in fifty or a hundred years from now will probably be unrecognizable to us if we were dropped into it today. It would probably very hard to live in. Imagine what a person living in the dark ages would think about the today's world. There is no way he could have predicted it or the technology that drives it. Since we now live in a world that was unpredictable less than 100 years ago I don't see why the process that shaped us and is shaping us can't be referred to as a singularity,

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 7:12 PM by jabelar

I'm not convinced that Singularity necessarily implies unpredictability. In a mathematical sense, the approach to a singularity is very predictable... that is why Kurzweil predicts continuation of trends in computing density, etc.

I actually think there were people in the Dark Ages that imagined we'd have a world like today, and we know that futurists a few hundred years ago imagined space ships and radio communications, etc.

I just think statements like "people back then would never ..." are underestimating people in the past.

Re: What If the Singularity Does NOT Happen?
posted on 01/19/2010 6:31 AM by rastronauts

I'm sorry, but I disagree with you. Where have those resources gone? Unless we had a massive interplanetary exploration and colonization campaign in which we sent most of our valuable resources to other celestial bodies, the resources will either be in landfills or as ruins strewn about the landscape, no?

The only resources we will not have are ones that took an immensely long time to generate that were put through some irreversible process by us (such as oil and coal). I'm assuming those are some of the resources you probably had in mind while writing your post.

Some resources will probably be easier for more "primitive" people to get their hands on, as they will have been manufactured into certain objects that can be recognized and sought out by the people of the future. And depending on how far in the future a civilization-ending catastrophe is, there may be heaps and heaps of textbooks strewn about the landscape that will allow people to obtain knowledge of the industrial past.

In any global catastrophe, there will undoubtedly still be functioning technology left once the dust settles, too. In fact, it may be the people that have access to and knowledge of how to maintain these technologies that survive the rough transition into a more austere existence.

If anyone thinks my logic is flawed, I'd like to hear your case.

Re: What If the Singularity Does NOT Happen?
posted on 07/05/2009 12:03 AM by Jake Witmer

I concur.
-Jake Witmer
907-250-5503

Re: What If the Singularity Does NOT Happen?
posted on 03/14/2007 12:51 PM by Sunfell

It seems to me that there are powerful forces who are determined to drag us back to the Bronze Age rather than anywhere forward. And some of our technological advances are being dragged backwards, too, it seems. Like the mess that is the DMCA. Or Microsoft Vista. Or basic air travel. Sometimes it seems like we're headed for a 'stupidarity' rather than any kind of singularity.

I sincerely hope that I am wrong. I would hate to have some of those 'we were soooo close' sorts of conversations in 30 years.

Re: What If the Singularity Does NOT Happen?
posted on 03/14/2007 3:36 PM by Battleshield

The Singularity has already started.......

The single fact of a singularity is that we cannot see beyond it. Why can't we see beyond it? Because we don't know the factors in play, or don't know them well enough to paint a sufficient picture.

One bit of data NOT tracked by Vinge (or Kurzweil in his "Law of Accelerating Returns") is the "distance between". Human progress has directly or indirectly shrunk the space between us as individuals over the course of human history. Look at the 'known world' of the Romans. The 'vast' distances traveled by early American colonists. The long distance covered by rail across the North American continent. The impact of telegraph/telephone. And most recently, the Internet and cell phones. This is (to me) the biggest invisible neon sign in history pointing at the magic 'X' that is the singularity.

Singularity may well come to have more than one meaning - not only is it a time-horizon that grows ever-nearer, it is a descriptor of the state of human interaction. We will be participants in what will look like a singular society or a singular mind. Communication and data will flow so fast and so smoothly that the lines between us, our tools, and our environment will continue to blur until they become nearly one single mind.

This sounds strange to us in the 'present' but consider how our world of today would have sounded to someone 100 years ago.....

Re: What If the Singularity Does NOT Happen?
posted on 03/23/2007 2:19 PM by Icarus1

I am not so sure that the world will become more connected. Maybe, as we feel we are losing privacy, our interpersonal connections just become less geographically based.

I (like most) have friends around the world, but I do not know all of my neighbors. If anything, physical proximity seems to increasingly breed contempt. Are we becoming more connected, or just rewired?

Re: What If the Singularity Does NOT Happen?
posted on 08/30/2007 11:45 AM by Deaken

This is the beauty of technology, though. Yes, we might not know our neighbors as well, and really, sometimes it's better not to. The conversations that we have on a board such as this can't be had in many other arenas, though. Message boards like this are reminiscent of the 17th-century French salons, but on a much higher level. We are increasingly able to spend time with people on the same level as us. I can't talk to my wife or many of my geographical friends about many of the matters we speak of here. They either don't care or don't understand - and you know, I understand where they're coming from. Although we're right regarding many of the items we discuss, we are on the fringe and this material is not for general or mass consumption.

Re: What If the Singularity Does NOT Happen?
posted on 08/30/2007 12:34 PM by martuso

Agreed Deaken,

Attempted to explain Singularity to the wife...
She said I was watching too much Star Trek! LOL

The internet and other forms of communication that have developed just put people in touch with more like-minded individuals and groups.

I think having a bigger pool of folks to discuss ideas with can only enhance our lives.

I can talk to the neighbors about their nicely manicured lawn, or I can just as easily talk to a friend across the globe about the latest theories or advancements in A.I. The advantage here is choice: do you really want to risk discussing religion, philosophy, or politics with your neighbors, who you are somewhat bound to and who may not share your views? Or would you rather discuss those sorts of topics somewhat anonymously with like-minded individuals?

Even my wife and I lean towards different political ideologies; a discussion is bound to end in argumentative conflict - so we just don't talk politics.

I'm almost certain that without the internet, I'd be stark raving mad by now (and that ain't a long journey!) :)

Re: What If the Singularity Does NOT Happen?
posted on 09/05/2007 1:10 PM by Deaken

Right on. That's greatness, and I happen to be right there. When I talk to my wife about this stuff she glosses over, so I tend to keep it simple and stick to the things that we dig. Technology, I think, has brought more like-minded people together, as you said, and I believe the sharing of information has furthered the endeavors in many fields.

Re: What If the Singularity Does NOT Happen?
posted on 03/27/2007 9:24 AM by radone

I would say, think how strange our world looks to someone from 10 years ago. Cellphones with text messaging are ubiquitous. Internet connectivity is ubiquitous. Blogs? YouTube? None of these things were present 10 years ago, and they're only making our world tighter, while at the same time the dumb-dumb politicians with yesterday's visions of grandeur make the world more difficult to manage and scarier.

Re: What If the Singularity Does NOT Happen?
posted on 07/05/2009 12:05 AM by Jake Witmer

It sounds like we agree. I'm a libertarian futurist who wants to decentralize power. If you are too, call me, I have a plan to do it.
-Jake Witmer
907-250-5503

Re: What If the Singularity Does NOT Happen?
posted on 03/15/2007 6:04 AM by extrasense

At least someone is smart enough to try considering alternatives.

The rest of the sheep are happily marching over the cliff.

e;S

Re: What If the Singularity Does NOT Happen?
posted on 03/15/2007 6:05 AM by Extropia

This means Ewe.

Re: What If the Singularity Does NOT Happen?
posted on 07/05/2009 12:07 AM by Jake Witmer

LOL
www.youtube.com/watch?v=-_VVVTmiWFo
(Porky Pig, Uber Laugh)

Re: What If the Singularity Does NOT Happen?
posted on 03/17/2007 12:22 PM by Arthur

extrasense wrote

At least someone is smart enough to try considering alternatives.

The rest of the sheep are happily marching over the cliff.


Uh, Extrasense, the term is not "sheep", but "sheeple".

Get with it.

:-)

Re: What If the Singularity Does NOT Happen?
posted on 03/18/2007 6:48 AM by extrasense

"Sheeple" describes the people, the masses?

So what are we going to call the "leaders", these geniuses? Maybe "leadgoats"?


e:S

Re: What If the Singularity Does NOT Happen?
posted on 01/31/2008 7:52 PM by Jake Witmer

No, not "leadgoats", they are the parasites on the sheeple. In function. Of course, they are the same "sheeple", except for the fact that they are actively parasitic, and actively generate bad memes that lead people to believe there is value in simple parasitism.

Software
posted on 03/15/2007 12:29 PM by Pitt the Elder

Such failures lead to reduced demand for more advanced hardware, which no one can properly exploit


Big iron doesn't require big software to exploit it.

Ocean or climate models would snap up a billion-fold increase in compute power in an instant, and would demand still more. I've talked to a senior researcher at nVidia about how they're strongly considering the needs of scientific computing in their future GPU product lines, and I've seen one of those scientists practically drool over the prospect of a double-precision Cell processor. Don't worry about whether demand for compute power will appear - it's already here.

Basically, any task where a large problem is discretized for solution or simulation can be scaled up to make use of more compute power while requiring zero additional program complexity. From my experience, an alternative scenario to any of Vinge's is not only possible, but likely:

1) Simulation (scientific computing, engineering, and gaming) keeps pushing hardware development.
2) The software required for strong AI continues to be extremely elusive, and - as always - presents a far greater barrier than compute power.
3) Strong AI doesn't arrive within 50 years, but research into materials science, hardware, and even AI continues apace.

The needs of AI just aren't a big driver of hardware development, and I don't see how its continued failure to produce strong AI will have a significant effect on hardware or materials research.
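
(A small aside on the discretization point above: here is a minimal, purely illustrative Python sketch of what "more compute, zero additional program complexity" means in practice. The solver, values and parameters are stand-ins, not anyone's production code.)

def simulate_heat(N, steps, alpha=0.1):
    """Explicit finite-difference solution of u_t = alpha * u_xx on [0, 1]."""
    dx = 1.0 / (N - 1)
    dt = 0.4 * dx * dx / alpha          # keep the explicit scheme stable
    u = [0.0] * N
    u[N // 2] = 1.0                     # initial hot spot in the middle
    for _ in range(steps):
        new = u[:]
        for i in range(1, N - 1):
            new[i] = u[i] + alpha * dt / (dx * dx) * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = new
    return u

coarse = simulate_heat(N=100, steps=1000)    # cheap
fine = simulate_heat(N=1000, steps=1000)     # roughly 10x the spatial work, same code
print(max(coarse), max(fine))

The same program soaks up whatever resolution the hardware will bear; only N changes.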

Re: Software
posted on 03/17/2007 7:02 AM by btud1@yahoo.com

> 1) Simulation (scientific computing, engineering, and gaming) keeps pushing hardware development.
> 2) The software required for strong AI continues to be extremely elusive, and - as always - presents a far greater barrier than compute power.
> 3) Strong AI doesn't arrive within 50 years, but research into materials science, hardware, and even AI continues apace.

I'll comment on point 2 above. You obviously have a misconception about AI software design. It is a very common misconception. Basically you are implying that we will apply the current methods of software design to build strong AI. 99% of the AI software being designed now tries to model the external behaviour of very specialized subsystems of human intelligence. Any attempt to unify two such systems using an underlying model proves exponentially difficult.

The current approach will be changed in the near future. It will remain for specialized applications only. True AI cannot be designed "top-down". We need a paradigm shift here. The software does not need someone to make it in detail. It has to construct itself, but in a guided way, in an evolutionary process.

Luckily, we already have a functional piece of hardware/software which is strong AI - this is the human brain. It is straightforward then that the first step is to reverse-engineer the brain.
We don't need to create a model for the whole brain. This is an impossible task. We only need to model a neuron. We then construct the brain in a natural way, by interconnecting this elementary building block. We run simulations and compare them with real world outputs from actual animal brains. We run this cycle iteratively until the results from the software model are indistinguishable from the real world measurements.

The real problem is accurately modelling a neuron. The rest is just a matter of time and computer power.
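
(For illustration only, here is a toy Python sketch of the simulate-compare-iterate loop described above. The "neuron" is a stand-in leaky integrator, the "recorded" trace is synthetic, and the fitting method is a crude grid search; the Blue Brain project's actual models and procedures are far richer.)

def simulate_neuron(leak, stimulus, dt=0.001):
    """Toy leaky-integrator membrane model: v' = -leak * v + input."""
    v, trace = 0.0, []
    for i_in in stimulus:
        v += dt * (-leak * v + i_in)
        trace.append(v)
    return trace

def error(a, b):
    """Mean squared difference between two voltage traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

stimulus = [1.0] * 500 + [0.0] * 500           # step current, arbitrary units
recorded = simulate_neuron(12.0, stimulus)      # stand-in for lab measurements

# Iteratively refine the model parameter until the simulation is
# "indistinguishable" (here: minimal error) from the recording.
best_leak, best_err = None, float("inf")
for leak in [x * 0.5 for x in range(1, 60)]:    # crude grid search
    err = error(simulate_neuron(leak, stimulus), recorded)
    if err < best_err:
        best_leak, best_err = leak, err

print(f"fitted leak = {best_leak}, error = {best_err:.3e}")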

We could go even further. We could model the neuron itself at the molecular level. This would be even simpler. The computer power needed, however, is much larger (but not prohibitively so).

Most of you probably know what I am babbling about here. It is not science fiction. It is pure science. Mathematics, data, statistics. It is an experiment which actually started more than one year ago in Lausanne, Switzerland. It is the Blue Brain project (http://bluebrain.epfl.ch)

This experiment is exactly what we have to do. No more philosophical delirium. We need data.
By 2030 at the latest (probably sooner) we will have the answer to a fundamental question: is brain function an emergent result of chemical interactions at the synaptic level (which are well understood), OR is it the result of a different phenomenon (intrinsically quantum, maybe)? The old Penrose hard-AI dispute will be settled.

If indeed it is not just a chemical process, we will have to take a step further and model the brain at the quantum level. We will need a quantum computer for the hardware. The software will grow itself, iteratively as in the current approach. And we will see if quantum physics is enough to explain consciousness.

If it is not just a quantum process, we will just have to dig deeper. We will probably have to go to the Planck scale. At the moment we don't have an exact model for the world at that scale (something like M-theory).

And so on...

Re: Software
posted on 10/30/2007 12:10 AM by phlospo

My question to you is:

How do you account for the fact that human development occurs so intimately with the body, family and community?

A few scenarios follow. A human brain is successfully modeled in a computer and:

1) grows on its own without external input. No significant development occurs. It eventually behaves as if it were brain dead. OK, this is a dumb one, but it's conceivable that someone might just think that smarts would magically appear without stimulation. The analogy of a brain as a muscle applies here.

2) is given all the digitized knowledge of humanity to draw from and a digital tutor to develop skills. It develops into an opaque entity. Its processes are unknown because it receives its easiest stimulation from the web and digital media. It could become a troll, a hacker, a virus, an artist. Who knows? The fact remains that it is distinctly digital, both mortal and immortal, an allotrope of humanity.

3) is given a virtual body and virtual world to live in, along with human caregivers and a small community to draw from, along with all the digitized knowledge of humanity. The results are uninspiring. A child is born, grows, develops, and has a nervous breakdown when it realizes that it is entirely digital inside a cage of someone else's making. Ok, this is an exaggeration, but the fact remains that creating an analogue of a person will end up as... well... a person. He or she may not like you, may or may not be ambitious, may or may not appreciate its entirely digital nature and so on.

I guess the above thought experiment is there just to consider that an AI might be as opaque to us as a dolphin or the earth's core. Its fundamental nature and substrate are different from ours, requiring different resources, possessing different mortality.

I think that this understanding is at the core of AI fear and hope:

a) Maybe they will be different enough to solve our problems.
b) Maybe they will be so different that they will hurt us.

What will the difference between AI and humans mean to AI once it is created? To humans?

Re: Software
posted on 11/07/2007 3:30 PM by PredictionBoy

I like where you're going, phlospo...

You realize that AI is coming from a different place than our human brain. I write about futuristic AI and droid tech that might show how different they can really be.

By approaching both the human and the droid mind from a Freudian perspective, you can start developing different kinds of intelligence in far more quantitative, customized ways.

These links give much more on those:

http://predictionboy.blogspot.com/2007/08/what-ai-will-really-be-like.html

http://predictionboy.blogspot.com/2007/09/heres-post-singularity-droid-already.html

I enjoyed your piece here, but I take a very different approach, more about us than them, though both are important.

Re: Software
posted on 11/07/2007 3:51 PM by PredictionBoy

A couple of little snips from there:

The only evidence-based way to approach advanced AI techs is to treat them as another manufactured product cycle. Not a mystical, quasi-religious experience, not an epiphany of unknowableness. This does not mean that the product-cycle scenario is absolutely true, but it is the most probable, if for no other reason than that it is the only kind that has ever happened. Advanced techs are created to be purchased by consumers and businesses.

...

these droids and the uses to which they'll be put are quite knowable, because we humans will be manufacturing them within constraints, legal and otherwise, that we would recognize today. The future will be different, but not entirely different in every single way. Corporations aren't going to turn against consumers with malevolent technology, and much of this thread will show that the droids won't get there by themselves.

Have confidence in our future selves; we are not going to lose control of our products in this way. It's funny to say that, because not only is there little risk of our products getting away from us, but getting them just to their intended design behaviors will be an immense undertaking, making Vista look like DOS - or a simple "Hello, World" program. We won't have to monitor them to make sure they don't get out of hand; that's the movies. We will want them to help. In fact, some of these techs may positively require their significant help in order to be achieved - they could be too tough for humans alone. I say that to hit home the tech complexity, not to slam humans; I'm mostly referring to some of the most advanced droid techs covered later.

...

Re: Software
posted on 12/18/2007 8:41 PM by hyperborealis

It may be that intelligence is a natural property not amenable to exponential scaling, that the most any AI program could achieve would be a mind on the level of Einstein. If that turns out to be the case then AI will cease to be of importance to the future, inasmuch as creating an AI will be indistinguishable from ordinary procreation.

Even a biological science that guaranteed a child with an Einstein level of intelligence would be unimportant. Considering the vast number of humans already born, some number of people at Einstein's level must have been born to absolutely no effect upon human florescence. The probability therefore is that new Einsteins would also be trivial. Time, circumstance, and luck are important variables affecting any individual's contributions, and as meta-level effects are not amenable to any engineering program.

The emphasis upon intelligence and machine intelligence in this discussion seems to me much more likely to be parochial, an indication of our limited historical moment, rather than (as the concept of the Singularity presumes) the discrimination of the decisive variable operating across all conceivable and inconceivable human history. This is especially so since, as of now, we have neither a full science of brain intelligence nor of human sociology.

Re: Software
posted on 12/18/2007 9:51 PM by extrasense

Well said

Re: Software
posted on 10/28/2009 12:09 PM by SuperFilms

Re: Freud

Freud's theories have been largely discredited in contemporary psychology. While still popular in the hum/socsci set, seemingly for their titillative value, Freud's data were phony: self-interested exploitation of his subjects, case histories that were lies.

I would be wary of citing Freud in an academic context. Freud was about as honest, scientific, and useful as our own "Dr. Phil."

Re: Software
posted on 05/26/2008 2:01 PM by jabelar

"A child is born, grows, develops, and has a nervous breakdown when it realizes that it is entirely digital inside a cage of someone else's making. Ok, this is an exaggeration, but the fact remains that creating an analogue of a person will end up as... well... a person. He or she may not like you, may or may not be ambitious, may or may not appreciate its entirely digital nature and so on."

Interestingly, my kids were watching Short Circuit 2 last night on TV, and I caught a part where the robot is essentially having a nervous breakdown because people are not treating him as human. It was treated in a plausible way.

Basically, if you put a simulated human brain in a synthetic body and then send them out in the world to learn, they will basically be treated as either disabled or otherwise ostracised and that will affect their development.

It makes sense, really; if the AI had all of a human's sensitivities it would probably be pretty dissatisfied to find out it was stared at, treated differently, not trusted, etc.

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2007 1:32 AM by Asteroid Miner

""Chasing after safety in space would just distract from the life-and-death priority of cleaning up the mess we have made of Earth." I suspect that this point of view is beyond logical debate."
I disagree. [Like a comment I made on RealClimate.org] [And I know Ray Kurzweil is a member of lifeboat, like me.]:
We need the Space Elevator [www.liftport.com] because we need a permanent lifeboat colony in space, say Mars. The Space Elevator will divide the cost of getting into space by 1000 and multiply the safety by 1000. In 200 years of business as usual, the oceans will outgas sulfurous gases, causing a major extinction event. The religionists, the Anthropogenic Global Warming deniers and the lower 99% of the IQ bell curve would be among the dead. That is evolution in action. Only those smart enough to live in space will survive. Nature would "clean up the mess on earth" over some rather long time, like 1000 years or 20,000,000 years. The "Martians" could speed up the process by terraforming earth. The thing that can be logically debated is whether the masses can be convinced to act in their own behalf and avoid this self-genocide-like or near-extinction or speciation event.

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2007 7:24 AM by Biggus Bopperus

Asteroid Miner, I believe you have misread the author, and your reply only backs up what he says. :)

The point he was trying to make was that there are people who think we should sort out the Earth BEFORE we look at going into space. This idea is the one that is beyond logical debate.

Like you, the author (and myself) think that we should not wait until that happens, and should approach both missions simultaneously.

BB

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2007 7:28 AM by Biggus Bopperus

Sorry Miner. On rereading, I believe I may have misread YOUR post.

BB

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2007 11:31 PM by Asteroid Miner

The original question: ""Chasing after safety in space would just distract from the life-and-death priority of cleaning up the mess we have made of Earth." I suspect that this point of view is beyond logical debate. "
CAN be debated logically, but not with people who should be in mental hospitals and not with people who are dying of starvation. I once tried to explain the danger of giant impacts to a neighbor of mine. She replied: "God wouldn't let that happen." The original "question" transforms into: "Is it reasonable to attempt to save most of humanity at the same time as we try to create a lifeboat colony in space?" "God wouldn't let that happen" is clearly a conversation stopper. But "God wouldn't let that happen" is NOT an action stopper. The problem is that neither the lifeboat in space nor saving earth from Anthropogenic Global Warming is sufficiently likely to succeed. Nor is either good enough by itself. There are too many existential risks. It is necessary to do both projects at the same time. I agree with V. V. that you can't cure schizophrenia by teaching logic in mental hospitals. I think we also agree that nonsensical objections will not stop Lifeboat.org and RealClimate.org from trying.

Re: What If the Singularity Does NOT Happen?
posted on 03/30/2007 1:59 AM by trait70426

Another promising tech that was snuffed by the CIA is the space cannon. Specialized cargoes engineered for high acceleration could be put into NEO for about 3 grand.

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2007 6:27 PM by melior

Some original implications of a Wheel of Time scenario were well-explored in Niven & Pournelle's classic "The Mote in God's Eye".

Re: What If the Singularity Does NOT Happen?
posted on 03/17/2007 6:49 PM by cobra56

It's crazy to think about what could happen, especially if what Kurzweil says might actually happen.

Re: What If the Singularity Does NOT Happen?
posted on 03/17/2007 9:20 PM by NanoStuff

Cheap McNuggets?

Re: What If the Singularity Does NOT Happen?
posted on 03/17/2007 11:36 PM by ez ezz

Chicken McNuggets that cure cancer! And aren't even made out of chicken! With little pockets of sweet and sour sauce ALREADY IN THEM! Those little packets are too much hassle as is. Seriously, when are we going to solve the REAL problems of day to day living?

Re: What If the Singularity Does NOT Happen?
posted on 03/19/2007 6:27 PM by robertkernodle

I'm always a bit puzzled about how we speak of a singularity as "happening".

By the time it "happens", we won't be here to realize it, since the sheer speed of change will have altered our perception too much even to speak in such terms. It forever transcends us.

As such, sometimes the singularity seems like an alternative concept of heaven.

RK

Re: What If the Singularity Does NOT Happen?
posted on 03/20/2007 5:13 AM by Extropia

Quite.

I mean, the Singularity is defined as the moment when engineered intelligence surpasses human intelligence in all its manifestations. How long will it take to test superbot 3000's claim that 'anything you can do, I can do better' is no idle boast? And how will we all agree that this new intelligence really is surpassing us in all endeavours when, even now, one person can claim a Jackson Pollock is a work of extraordinarily high art and another can claim it looks like a toddler did it?

Re: What If the Singularity Does NOT Happen?
posted on 03/27/2007 9:36 AM by radone

Why would superbot 3000 even care to prove it was smarter than everyone else? In fact, if such a creation actually came to light, I think it would be abundantly clear that it was smarter than everyone else. After all, we would be profoundly retarded individuals compared to it. We don't try to demonstrate that we're smarter than such people; we simply assume we are and move on. And, of course, we re-order their lives so they don't hurt themselves.

Re: What If the Singularity Does NOT Happen?
posted on 05/28/2007 5:47 AM by BeAfraid

It might simply be arrogant, or, in true spiritual-machine form... it could have an inferiority complex...

It could still be better than us and have an inferiority complex... It would just be able to prove that it wasn't.

I do not know how many massively smart or talented people have thought that they were not good enough.

Kurt Cobain comes immediately to mind as probably one of the most recognizable names of someone who was clearly superior in many ways, yet felt himself to be so insignificant and unworthy that he blew his brains out...

As I keep saying in other threads... To make assumptions about what lies beyond our intelligence is just folly. It may be greater than us in ways that we cannot fathom, and be so miserable that we cannot fathom that either (or it could be the next best thing to GOD himself... I am an atheist, BTW...).

Re: What If the Singularity Does NOT Happen?
posted on 01/31/2008 8:07 PM by Jake Witmer

Re: subjectivity of human experience, with reference to the Pollock painting remark...

I guess we'll be pushed in the direction of the smart and rational humans, who use objective value judgements based on rich human experience and on more and more detailed knowledge of the mind. Not coercively, or by force, but just because disorganization is not as interesting as information organized around a useful purpose. If a shudder of excitation runs down someone's spine and they experience a certain mental state because they look at a painting, and someone else experiences the exact same thing after taking a drug, and an outside observer learns what is going on in both cases and achieves that stimulation using information technologies, then that observer has a better understanding of the desires behind both of the previously described experiences than do the people experiencing them. As such, he can repackage the experience and sell it to an ever-more knowledgeable group of end users. The market for entertainment then moves towards greater understanding, specification, and control of enjoyment, based on universal values. Entertainment and human experience then gradually move in the direction of certain values.

More people today reject nihilism than ever before. Why? There is less knowledge and less richness contained in that world view.

The more knowledge, the less reason there is to be slug-like. The closer we get to optimal human values. There is still richness, but it is located in a certain direction, around communicable values.

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 2:06 PM by jabelar

Actually, the Singularity has always been happening. A singularity is a point at which the function itself is not defined, so the experience is really about the approach to the Singularity.

Nothing in math or science actually happens at the singularity point, but rather it is the asymptotic approach to the point that is interesting and what we're really discussing.
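
(A concrete, purely illustrative example of this: the hyperbolic curve x(t) = 1/(T - t) is finite and well defined for every t < T, and everything interesting about it, its ever-accelerating growth, happens on that approach; at t = T the expression is simply undefined. The singular point itself carries no event, only the run-up does.)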

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 2:15 PM by PredictionBoy

Nothing in math or science actually happens at the singularity point, but rather it is the asymptotic approach to the point that is interesting and what we're really discussing.


Good point, jabelar.

I would say, working this math-function analogy and looking at tech progress over time, the Singularity is at the point where the slope of the curve changes (the inflection point), before the curve proceeds upward for some while longer and then starts asymptotically flattening out.

Not exactly like this line graph; it's different, but not dissimilar:
http://en.wikipedia.org/wiki/Inflection_point
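
(If it helps to have a concrete curve in mind: the standard example of this shape is the logistic function f(t) = L / (1 + e^(-k(t - t0))), which grows roughly exponentially at first, has its inflection point at t = t0, where f = L/2 and growth is fastest, and then flattens out asymptotically toward the ceiling L. Whether technological progress actually follows such a curve is, of course, the open question.)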

Re: What If the Singularity Does NOT Happen?
posted on 03/27/2007 5:48 AM by webscool

Every year or two I check up on the latest on the Singularity, just to see what progress is being made on it. This time I ran across Ray Kurzweil, an essay of his, and this website, with quite a few interesting papers by others published there as well.

Well, there is not much point in reading all this stuff and trying to understand it if nothing ever comes of my efforts, so it is my duty to file a brief report and send it off.

I should explain that I am a simple layman, an armchair mathematician, who decades ago dropped physics because it looked too suspicious. Big-bang theory indeed. How can you propose a theory that has to disregard every Law in order to get started? There was no way I wanted to end up working with that lot of deceivers. It is good to see that some progress is being made away from big-bang. There was obviously more than one universe in the limited sense that we conceived it a few decades ago. Only an egocentric could have believed in the big-bang. Mind you I detect a lot of egocentrism in statements currently being made about anthropism. The very word is ego-centric.

But here is my current take on the singularity:


--- P v NP and the search for AI ---

A lot of time and energy has been wasted on this topic. It is not a real problem and should not be treated as such. Sure, there are lots of NP problems around and they are really interesting. But what we are really interested in are the solutions, solutions which characterise behaviour and appearance. These solutions tend to be relatively simple and orderly, patterned even. NP problems admit a huge number of solution sets if you ask unhelpful questions. In fact, the larger the solution set, the less helpful it is. The useful solution sets are the ones that are a bit more complicated than trivial, which give us an understanding of some order. The huge solution sets end up only telling us that anything can be made to be anything else, which is not helpful. Now it is interesting to note that in most cases the time it takes to solve an NP problem is dependent more on the size of the solution set than on the size of the initial problem. If the NP problem is going to generate a huge solution set, then yes, it is going to take NP time to spit it all out. But the pragmatic solution sets can generally be spit out in P time, and even in O(n) time where n is the size of the solution set. The Chu space implication operation is an example. This operation is profoundly NP, and even small problems can take impossible amounts of time to calculate. But the problems with the most interesting answers only take roughly as long as it takes to spit out the limited solution set.

So we should not allow ourselves to be intimidated by NP problems, but simply address ourselves to them in a pragmatic way to find the particular solutions we need in order to get on. Projects trying to prove or disprove that a particular problem is NP are a waste of time. In particular this applies to directions being taken in AI research. Instead of allowing ourselves to be overwhelmed by the big question we should be concentrating on making small AIs that work and then generalising them as much as the pragmatics allow.
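
(Whether enumeration really stays cheap depends entirely on the problem, but here is a minimal Python illustration of the "only pay for the solutions you actually consume" idea, using a toy subset-sum enumerator; the data is invented and the approach is a sketch, not a claim about NP-hardness in general.)

import itertools

def subset_sums(items, target, chosen=(), start=0):
    """Yield every subset of items[start:] (as a tuple) that sums to target."""
    if target == 0:
        yield chosen
    for i in range(start, len(items)):
        if items[i] <= target:            # prune branches that overshoot (positive ints)
            yield from subset_sums(items, target - items[i],
                                   chosen + (items[i],), i + 1)

items = [3, 34, 4, 12, 5, 2, 7, 1, 9, 11]
gen = subset_sums(sorted(items), 20)

# Consume only the first three solutions; later ones are never computed.
for solution in itertools.islice(gen, 3):
    print(solution)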

Fortunately there are now huge commercial incentives for this work to proceed and the problem should be solved shortly. The solution will probably be ubiquitous in 5 or 10 years time.


--- The set of physical constants - are they really constant? ---

Einstein said the speed of light is fixed throughout the universe - it is inviolate. What nonsense. This is an obvious clarion call to find the exception to this conceit.

Basic algebra taught me one thing. Constants and their big brothers, closed finite algebras, are incompatible with reality. These structures only ever cause systems to grind themselves into a concentric rut. The enduring property of reality is that it is always new, fresh, surprising us everywhere we look. And that is how it has to be, because the obverse of this is circular repetition. e and pi are the exemplars of reality; that's why they are called real numbers.

So there are two things we can be sure of. First, the speed of light is not constant but varies somewhat with time. We should be looking very closely and trying to observe this change. Secondly, it is not the same everywhere in the universe. It will vary from place to place. We know it varies in different media, but more than that, we should be aware that it may not be the same everywhere. This lends uncertainty to all our astronomical observations. If we are to say anything conclusive we need the speed of light to be the same everywhere. It probably is not, but by how much, and to what extent this alters our observations, we don't know. The same can be said for the gravitational constant.

(One thing I wonder is: how do we know that our astronomical observations are not influenced by the gravity of the sun? At what point is light bent around a star? Do we see the same night sky from Mercury?)

--- Fermi's Paradox and Gravity ---

Well, this is no paradox at all. Expecting us to recognise aliens is a bit like expecting ants to recognise humans. It's a laughable conceit. The paradox only exists because of our egocentric viewpoint. For starters, if intelligent life exists in a form more developed than ours, then it will either be here unobservable or it will be studying us remotely.

We have this conceit that somehow we are important in the scale of things. What total rubbish. We will most likely be extinct in 100 years and Gaia will spend the next 500,000,000 years recycling every atom of evidence that we ever existed.

How can we expect to receive radio signals from others like us in the universe if we ourselves only ever broadcast for say 200 years in every 5,000,000,000? The prospect is ridiculous and it is unfortunate that so much time and skill has been wasted on such a project.

The assumption that advanced life would communicate by light waves is itself ridiculous. That is far too slow for galactic travel. Advanced life would at the very least use gravity waves for transmitting information, millions or billions of times faster than light. We don't even know what gravity is. When we can send information on gravity waves, then we might hear from others in the galaxy.


--- Fermi's Paradox and Intelligent Life ---

Let's be quite frank. We can't yet even recognise intelligence on our own planet when it is right under our noses. We still don't know how intelligent dolphins and other sea mammals are. We still haven't decoded their language. We still don't know the extent of their culture. We are quite happy to treat them as though they are fish, yet we know they are more like us than they are like fish.

If we cannot recognise native intelligence, how can we expect to recognise alien intelligence!

Also, we don't know how ants know what fungi to farm in order to produce the food that turns worker ants into flying ants. Apparently ants know a lot more about biogenetics than we do. Yet we don't recognise the intelligence of ants.

Again our egocentrism prevents us from seeing the obvious.


--- Uploading ---

This is a real hoot.

There are a couple of books that everyone really should read. In this case I refer you to The Odyssey. This stands as possibly the only novel worth reading, because its subject matter is the subject of fiction itself, and the way we suspend our critical judgement in order to embrace a good story. The Homeric bards lay this on us on several occasions in the later part of the story, but no one ever picks up on it. After reading the book, no one can tell you what really happened to Odysseus, even though he himself tells us several times during the course of the book. We filter out the truth and completely ignore it in our endeavour to be entertained, and where does it get us - a most shocking bloodbath - and well deserved.

Describe uploading to any behavioural psychologist and she will die laughing. The prospect is so ludicrous as to be beyond moronic. You have to ditch every single thing you know about humans and their behaviour in order to buy into this fiction.

Again a supposed paradox is set up - the incremental shift compared to the instant shift followed by the killing of the meatmind. The truth is the instant shift is impossible. If it were done, the human would experience something like permanent coma.

The reason it can't work is twofold. First, our consciousness is not defined by our brain. You have to include the whole nervous system. Behaviourists can tell you that much of our emotional memory is held in our nervous system close to muscle triggers, so you probably also need to have the muscles there as well, or the nervous system would be shooting blanks and nothing would make sense. Also, the vagus nerve splits and goes to both the eyes and the intestines. So there is probably some truth in the saying "we are what we eat", in that our brain makes direct connections between what we see and our digestive system. The human stripped of nerves is probably not conscious. Actually, I think that's how we formally test for consciousness. So to say that uploading a brain means that consciousness is also uploaded is rubbish. What is uploaded is a vegetable.

The second reason it can't work is that even if the nervous system and bodily functions were all duplicated and properly uploaded, the individual human is still not a functioning object. It is profoundly dysfunctional. It takes roughly 18 years, and in many cases up to 30 years, to properly program a human to be functional in its current environment. In behavioural terms this is called scripting. If an individual is placed in a cultural environment where its current scripts do not apply, and there are no scripts supplied which fully determine behaviour, then all bets are off. The individual is effectively a total psychotic. Now this happens in ordinary day-to-day life often enough to keep newspapers full. If you upload a human to a computer which gives millions of times more power, then the certain outcome is total nightmare.

Again we hold the egocentric view that we are individuals with free will. This is a nice story and it is good that we can maintain this conceit, but it is the opposite of the truth. Why do we keep monkeys (and even more humans) in cages, for goodness' sake?

So this finally leaves us with the question: would anything actually ever be achieved by uploading? The regrettable answer is no. We can probably do a much better job by programming this supercomputer ourselves the hard way. We supply the scripts; we tell it how to behave. We gradually tell it how it may modify itself, using a library of lambda calculus equations.


--- AI and Lambda Calculus ---

To the best of my knowledge no proper work has been done on this. This is fundamental. We cannot begin to talk about AI without a good understanding of lambda calculus.
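
(For readers who have never met it, here is a tiny, purely illustrative taste of lambda calculus written with Python lambdas: Church numerals encode numbers as functions, and arithmetic becomes function application.)

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral back to a Python int for inspection."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))   # prints 5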


--- Memory ---

We haven't a clue what memory is, nor what it is that is actually passed across synapses. Again we need to talk to behaviourists. The answer is probably something like video clips with lots of contextual tags attached to them. The clips also contain smell, taste and touch tracks as well as sound and video, and have key objects and movements identified by vector descriptions - a person - a tone of voice - the patina of brass. Shiny reflective things are key cue cards.

Until we have a computer system that mimics these functions and properties we are nowhere near close to replicating this aspect of the brain. A good first step would be a complete associative index to every scene of every movie ever made. "Don't Bogart that joint". What kind of software would be needed to accomplish this task? To examine movie frames, identify objects and people and actions, make a connection with a song lyric in another movie, and identify a linkage.
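
(Here is a minimal sketch, with invented data, of the kind of associative index described above: scenes tagged with objects, actions and lyrics, plus a lookup that links scenes across movies by shared tags. A real system would of course need the hard part, automatic tagging of frames and soundtracks.)

from collections import defaultdict

index = defaultdict(set)   # tag -> set of (movie, scene) pairs

def tag_scene(movie, scene, tags):
    """Record a scene under each of its descriptive tags."""
    for t in tags:
        index[t].add((movie, scene))

def linked_scenes(tags):
    """Return the scenes that share every one of the given tags."""
    sets = [index[t] for t in tags]
    return set.intersection(*sets) if sets else set()

tag_scene("Easy Rider", 12, {"campfire", "joint", "song:dont-bogart-me"})
tag_scene("Movie B", 3, {"party", "song:dont-bogart-me"})
print(linked_scenes({"song:dont-bogart-me"}))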


--- Howard Gardner should not be so coy about the 9th intelligence. His hesitation is based on his uncertainty about whether certain regions of the brain are dedicated to the contemplation of issues that are too vast or too infinitesimal to be perceived.

We all have a part of our brain which operates during sleep (which maybe explains why Gardner has not observed it) and which analyses the day's events, creates associative tags and files everything away. However, sometimes things remain unresolved and get carried over into the next day's pool of events. There are two ways of dealing with unresolved issues. One is simply to divorce them. Some people find this easy to do, so maybe they have an inactive 9th intelligence. Within this group there is another group, the people most likely to be amoral. Those with an active 9th intelligence will actively seek to resolve these issues, sometimes at the cost of their lives. (Some of them get to be called martyrs or saints.)


--- The Singularity ---

Vinge is no idiot. He writes fiction and possibly helps his wife write her stuff too. Fiction is about presenting ideas that people will want to believe. It has nothing to do with reality, indeed quite the opposite. So will the Singularity happen?

Of course not.

But then what are all these exponential curves directing us to in the future?

There is obviously something big going on, if not a singularity then what? Even Vinge is pulling away from the fiction and attempting to address the reality as a real futurist rather than a sci-fi writer. This is nice to see.

The most likely answer is that mankind will be diverted from any approach to a singularity event by embracing biogenetics. Firstly to cure illnesses and genetic defects, then to improve on nature, and eventually to push evolution down new biological pathways. I expect to see the first examples of the latter at the London Olympics. Supremacy in a sport at Olympic level is kudos for a nation. Nations will conduct experiments to get an edge at the Olympics. China may even front up with something at Beijing, but I suspect this is too soon. My bet is on the swimmers with the webbed toes.

As the century winds on we may discover that we can grow wings given the right womb conditions, as ants seem to be able to do. Maybe it needs a DNA shuffle, maybe not. I think that quite a few humans will be interested in becoming flying mammals and many will be interested in becoming diving mammals and return to a life in the sea.

This interest will divert scientific resources away from the approach to a singularity. However, all the statistics moving in that direction will continue to do so, because biogenetics will be very demanding in that area. So the computers themselves will also be diverted. At present the most demanding applications are genome applications. We cannot build hard drives that can record DNA as fast as the process that decodes it. We don't have the grunt to properly analyse the data. This will delay any singularity event.


--- The Origin of Species ---
Now, like the Odyssey, this is another "must read" book.

When you do read it you will find that Darwin did not say most of the things that people claim he said. Darwin's book is basically a simple but comprehensive rebuttal of creationism. He was very shy about advancing any theories of his own. In fact, in a number of places he even hints at something like "intelligent design". He did not believe that many of the wonderful and perfect adaptations which he saw could have happened by chance. Darwin had a poor grasp of probability theory which, truth be told, was not developed even when he went public with the book, let alone when he first wrote his ideas down.

However we have now been able to study cases of species change and formation and now know that they are often the product of catastrophic events. We just happen to be in the middle of a catastrophic event at present in case you had not already noticed.

While some people are worrying about global warming and mans contribution to it, and while others are worried that current instability may trip another Ice Age that will freeze over most of the Industrial World, geologists are sitting around the tea trolley having a quiet chuckle.

Geologists know that we are already in an Ice Age that has lasted for 56,000,000 years, and that the signal of the end of this Ice Age is when the Antarctic Ice Sheet melts. And it looks like it's going to happen right now. This is not the once-in-10,000-years event that the gloomsayers are talking about. This is not the once-in-100,000-years event that the ice age catastrophists are talking about. What is occurring now is a once-in-50,000,000-years event that will totally redefine Earth's natural history.

Actually it would pay us to learn whalespeak real fast because I think they have something to tell us about global warming.

I was tidying up some old papers around the office just yesterday and came across a note I scribbled many years ago where I predicted New Orleans would be flooded in 2007. So much for my futurist career!

What is clear from Darwin, and from what we now know, is that at some key point a bifurcation occurs. Some members of a species change their behaviour and survive well in a new environment. The ones that don't change may continue to survive in the old environment, but if that changes too much they will be scrubbed out. This change in the human species is already evident in the responses to the various articles published on kurzweilai.net. Some embrace the possibility of change. Others reject it out of hand. Our species is already redefining itself in mind if not in body (yet).

The truth is no singularitarian will abandon his body and head off into the galaxy, for the reasons explained above. Nor is the human body made for space. Moondust is too poisonous. Marsdust probably is also. However, singularitarians will upload something of their brain, plus a lot of other handy stuff, into a new vehicle which is frighteningly clever and set it off into space. The new vehicle will effectively be an alien life form, but one made by humans. I think we had better find out what gravity is and what dark matter is first, though. I don't think it's moral to send this new species out into the world with a roadmap that's only 10% filled in.


--- Improved Intelligence ---

I find puzzling the many references to the limitations of the human brain on kurzweilai.net. The human brain is only limited by a couple of crude biological necessities. When foetuses are grown in test tubes these limitations won't apply, and we can have huge brains if we want them. I suspect we don't use 90% of the brains we do have. Also, the associative memory bank application I referred to above may take a lot of the load off our brain and enable it to think better, rather than simply being an associative memory store.

By the age of 40 we are being forced to compact and erase memories as it is (midlife crisis). So the idea of a separate machine to put our living memories on could be popular. It may also help stop us aging.

This means that there are a number of avenues for improved performance of our biological brain. No need to talk it down or write it off just yet.


Immortality

The major problems here are the weight of our mistakes and the number of enemies we manage to create. We have to learn to live a scrupulously moral life first.

Summary

So where does all this leave us? With a picture of an extremely turbulent century in which biogenetics will be the predominant technology. AI will take a back seat. There will be many challenges and catastrophes to distract us from focusing so hard on technoprogress that a singularity event of any sort is triggered. In fact, quite the opposite. Rather than the future not being visible beyond a certain horizon, our worry is more that we may not advance fast enough to meet the challenges to our survival which we can see facing us in this century and the next.

Re: What If the Singularity Does NOT Happen?
posted on 03/27/2007 8:34 AM by Alex from Macedonia

As we live on opposite sides of the world (i.e. Macedonia and New Zealand lie at diametrically opposite latitude and longitude), I consider you to be my geographical ANTIPODE.

am a simple layman, an armchair mathematician


About this, it seems that I am an "armchair mathematician" though unemployed.


This contrast is also reflected in the differences in our opinions. For some topics of yours (such as Physics) we completely agree, while for others (Singularity, Mind Uploading) we completely disagree.


However, it would be very interesting to iron out our opinions in direct communication. The main thing that may prevent us is... that he is really my ANTIPODE, having his day when it's night time for me (and vice versa).

Re: What If the Singularity Does NOT Happen?
posted on 03/27/2007 9:43 AM by BlueMind

"Describe uploading to any behavioural psychologist and she will die laughing."

Once you realise that full nano (considering the problems at this scale) and A LOT of computation is all we need, you just don't write c..p like this any more.
Yes, those b..ches will die, but not of laughter...


"Uploading = hoot"

Just this..
Once we surpass the obstacles and enable ourselves to manipulate our own minds, I will take it upon myself to prevent you and your fellow thinkers from ever tasting the fruit of that achievement. (For at least a while... I'm no tyrant.)
You will not have the right to blame me, as I absolutely don't blame you for your views. Them I pity.

But if you're loyal to your standpoints, you won't even want to do it.

Re: What If the Singularity Does NOT Happen?
posted on 03/27/2007 3:11 PM by webscool

Thank you BlueMind for demonstrating my point so ably.

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2009 12:59 PM by Pandemonium1323

Wow, it's really cool to hear someone with this perspective on AI, nice post.
I've thought that we'd see cybernetic intelligence through brain-machine interfaces long before any 'AI' (if we ever see 'AI'). The organic brain is well suited to doing certain tasks, and I believe that at least part of the reason is the very molecules it is made of. In other words, WHAT we are made out of is as important as precisely HOW it's organized. You could make an EXACT replica of a human brain from inorganic compounds, and it STILL would not function EXACTLY like a human brain, only better.
But, there are some truly amazing advances being made to CONNECT us to our machines in more intimate ways. And, just like the human brain, INTELLIGENCE is about CONNECTIONS.
So we can improve on what we've already got. We can use the best functions of inorganic computers, couple them with the best of the organic minds, and get optimal results that way. No reason to throw out the baby with the bathwater.
This is what I call 'cybernetic intelligence', or what people refer to as human machine hybrids. They will be amongst the best and the brightest, and will be here far sooner than any 'SAI'.

Re: What If the Singularity Does NOT Happen?
posted on 07/04/2009 6:04 PM by nealis44

Erm, regarding your New Orleans 'prophecy', it does rather seem that you were right. Albeit the timing was, relatively speaking, only slightly out.

Congratulations

Re: What If the Singularity Does NOT Happen?
posted on 03/28/2007 1:29 AM by Phylodome

Great post, Webscool, as it really brings to the fore the most pressing questions that any "singularitarian" must take under serious consideration before claiming to understand or believe in a future singularity or any such evolutionarily asymptotic result of technological progress. I do, however, have a few qualms with some of your observations, and also a few additional considerations of my own. I'll try to keep this relatively brief, in the interest of both easing the reading experience and getting myself a bit of much-needed sleep.

On P v. NP:
Much agreed. This question seems more an artifact of our own inability to hold the cognitive dissonance between working within the system that is the human mind while simultaneously observing the hapless fact that we are not yet able to scratch the surface of deeply penetrative parallel/non-deterministic processing. If you've ever read "Gödel, Escher, Bach" by Hofstadter, he spends an entire Pulitzer Prize-winning book explaining the incompleteness theorem and how the very structure of formal systems prevents them from working outside themselves, thereby obscuring any clear route to a complex solution while remaining within their boundaries of formality, regardless of whether the solution can be identified outside the embedded system. Many scientists and mathematicians are simply too proud to admit how little we truly know, and how small our amassed intellectual power still remains, which obviously carries over into their research when satisfying outdated grant requirements, and also affects our perception of how close we really are to any type of true conspecific phase-shift (singularity). It seems that P=NP as the questioning society transcends itself, and then returns to an inequality.

On the Physical Constants:
I would be far less surprised to find out that what we call "constants" are in fact the only truly moving components of our universe than I would be to find out that there actually exist such stable atrocities of anthropic egocentrism. These constants are probably analogous to our particular universe's evolving genetic code, and homoplastic to our own DNA, as is alluded to in Gardner's new "Intelligent Universe" book. To begin understanding our universe, and especially if we ever hope to successfully navigate any hypothetical singularity, we must first acknowledge the fallacy of constancy. Our observation of constancy can be likened to that of any life form with a short enough lifespan to observe either constant darkness or constant light throughout its lifetime on our trivial little planet; we simply do not possess the open-mindedness in our current scientific community to truly understand the magnitude of scaling self-similarity and the corresponding dynamics as they relate to our own perception of reality, scientific or otherwise.

On your Fermi observations:
Touché, though I heard the new iPhone has gravity-wave communication, but that could just be another rumor ;)

On Uploading:
Here is where I must strongly disagree with you. Some scientists might laugh, but those doing the laughing are, and have always been, the first to be laughed at when (when, and not if) proved wrong. Your two arguments against uploading are very weak from neurophysiological and technological standpoints. 1.) If the technology to sufficiently recreate what you call the "meat" of the brain existed, it's not a stretch to think that the rest of the pertinent nervous material could be virtually recreated as well. 2.) One would also assume that if we had advanced sufficiently to begin the uploading process, we would not simply be uploading a brain, still operating on its biological neurophysiological procedures, into what we currently perceive as the digital realm, with its accordingly untranslatable static. There is no reason to assume a compatible "brain OS" would not be awaiting the upload, thereby transferring the evolutionarily and ontogenetically adaptive burden to the design of the artificial environment. The brain would not simply "awaken" to binary, as depicted in the pop movie "The Matrix", so much as a training environment would be developed that at the beginning was very much like the biological realm it had previously occupied, but that also allowed the mind to grow in ways that had previously been limited strictly by biology.

Now, even though I don't agree with your arguments against, I do have my own. Due to the self-similarity of scales in the directions of infinity and negative infinity (cosmic and atomic respectively), combined with the observation of exponential energy input at increasing and decreasing levels of scale, transposed upon the notion that these patterns likely seem interminable, I am inclined to believe that exact replication of natural information in space and time is impossible. Similar to anti-de Sitter space, the notion of finite infinity prohibits exact replication without existing as the exact same entity. To clarify, it seems that only one pattern of information through time (a brain) may occupy its own non-linear, multi-dimensional, and infinitely complex pattern within any particular reality. Any deviations from this are, obviously, not the same brain or mind. This is not to say that the average human would be able to tell the difference, but the degree of specificity with which we can re-create the information will exist as the tangential disparity of "identity" through space and time (space or time? omega point?). All that will matter is the threshold at which this new and even more frightening uncanny valley can be surpassed for profits; will we not be able to tell the difference at the cellular level, the molecular level, the atomic level, the spin level, the flavor level, or the color level? Which of these levels contains the majority of memory, consciousness, identity, and so on and so forth? We will be able to upload if we don't kill ourselves off first, but the uploads will be new selves that are tangentially disparate to the degree of our replication capacity and certainly not the "person" in question, which I think may be even more frightening.

I haven't studied much lambda calculus, but I'll definitely be taking a look tomorrow.

Memory is simply an emergent property of synaptic plasticity and information perception, as biology developed a mechanism to crystallize patterns at higher self-similar levels (conscious memory likely has something to do with the dawn of spindle cells and mirror neurons). The concept is simple, but much like the uploading issue, the complexity parameters of truly "understanding" what memory "is" are mind-boggling.

On the Origin of Species:
We'd be here all night, and I'm way too tired, but maybe tomorrow.

On the Singularity:
It will either happen, we will be killed by natural disaster beforehand, or we will kill ourselves over this very issue. I personally don't think the majority of the world is ready to contemplate these ideas without becoming extremely violent, and I would not be surprised if we've sent ourselves back to the stone age within the next 100 years. By the same token, the patterns of a dawning singularity are simply too powerful and transparent to be ignored. If it does happen, it will not be a peaceful transition, and those who wish to fundamentally evolve will have to fight for it, much as every other animal does.

Way too tired to keep typing, but hopefully this was at least interesting to some.

Re: What If the Singularity Does NOT Happen?
posted on 03/30/2007 1:54 AM by trait70426

I have already been jumped and beaten up for talking about biological plasticization in a bar. This is only an early, pre-Singularity eventuality: designer genetics and synthetic life.

Re: What If the Singularity Does NOT Happen?
posted on 03/30/2007 2:57 AM by Phylodome

I am sorry to hear that, but I was also almost mauled by a kindly citizen in a bar this past summer over the very same issues. He called me a "godless devil-spawn" and tried to break a stool over my head. The conflict was broken up and I was asked to leave and never return.

Re: What If the Singularity Does NOT Happen?
posted on 03/30/2007 4:03 AM by Extropia

'We will be able to upload if we don't kill ourselves off first, but the uploads will be new selves that are tangentially disparate to the degree of our replication capacity and certainly not the "person" in question, which i think may be even more frightening'.


There may be a simple way around the 'other person' problem. Create the other person in increasingly complex virtual worlds, of which Second Life is probably the most compelling right now. I explore this possibility at http://gwynethllewelyn.net/article79visual1layout1.html

Re: What If the Singularity Does NOT Happen?
posted on 03/30/2007 4:50 AM by NanoStuff

There's no problem. You simply go under anaesthetic, then get your information copied. When you wake up in the simulation, it would be no different than if you had woken up otherwise. You just have to ensure the body does not wake up; that would obviously put the two parties in an awkward position.

At that point the body is moved to storage and frozen. Not that you're ever going to need it again, but we probably won't feel too comfortable just letting it rot.

Alternatively, the transition could be more gradual. By then we would probably at least have some brain enhancements, if not an entirely reconstructed brain.

Re: What If the Singularity Does NOT Happen?
posted on 04/01/2007 7:35 PM by Phylodome

This line of thought is exactly what I was remarking upon.

First of all, no one currently speaking on topics such as the Singularity or "uploading" even scratches the surface of the true depth of complexity involved in replicating a biologically based "mind". If many are overzealous, even more are completely ignorant of the fact that we simply have no idea what the truly basal physical components of our universe are. If we cannot even begin to understand observable phenomena such as quantum entanglement (regardless of how many of you think we're close, we're likely not), what makes one think we can capture the unobservably subtle aspects of mind to their fullest degree of complexity?

To clarify, I do not think there is anything "ethereal" to a mind, and I am a strict believer that if one were able to recreate a mind in all its complexity, one would then have that same mind. But don't let your wet dreams get in the way of your logic, and please do take our human flight of knowledge with a rather large grain of salt, because there's likely a much longer road to successful uploading than one might immediately think after reading a few blogs and books.

Many will make the jump too soon, and regardless of whether the pseudo-mind on the output side convinces the rest of us that it is you, it's very likely that most of the first attempts will be tantamount to self-sacrifice. I can see the headlines now: "Study proves first million transcendentals were no more complex than current technology allowed for; minds lost forever". Bottom line: letting your biological mind rot, regardless of how cynically you remark upon it, would at best destroy an irretrievable degree of your conscious complexity, and at worst destroy you, while letting your abstract mental clone take up your life where you left off. If you're that unhappy with your life, go for it, but personally I'd rather let you be the guinea pig while we work out the kinks, and I'll just reap the benefits of the expanded cognitive potential and lifespan of the biological mind (the more intelligent route to an expanded mind).

Now, one thought experiment that is interesting from this standpoint is the biologically/virtually modulating neural component, wherein a brain and its virtual copy are simultaneously interconnected in a sort of conscious experiential loop, while an operator flips the computational burdens of various parts of the brain between the real and virtual sides of the loop. This thought experiment seems to indicate that the maintenance of consciousness during the transitional period is the key to at least superficially attaining a successful transition. But even in this scenario, as I stated previously, the matter of determining what the lowest common denominator of consciousness actually is still lurks around the corner.

I don't even know if I should remark on the Second Life explanation. This "simple" solution still falls prey to everything I mentioned, and the increasing levels of virtual complexity you speak of are exactly what I was talking about as an uploading OS.

This solution doesn't begin to answer any of the deeper questions of self-replication, regardless of how badly one might want to escape the physical restrictions born unto him. Again, you go right ahead and upload yourself into Second Life at the first chance, as it's likely you're one of the 30% of users that scientists such as Castronova refer to when talking of those who already consider virtual realms to be their "real life". I won't even get started on how sad and defeatist these attitudes are, or how disgusting I think much of society has become for driving millions of people to depend on virtual stimulation in place of human contact at the fickle whims of consumer culture, but suffice it to say that there are more pressing global concerns that will likely dictate how soon you'll be able to virtually ejaculate, or whether any of us will even live to see that sad day.

Re: What If the Singularity Does NOT Happen?
posted on 04/02/2007 3:47 AM by OsiRisX

In response to your final paragraph:

Oh yes, to interact with apes (humans), what a dream come true. Dreams come true! We are so noble and wonderful, and just a joy to interact with, since the majority have a low intelligence... oh, I mean, average. How absurd to say that these entities' indulging in an alternative reality is sad. How sad that you try to stop the path of evolution. It is futile and will swallow you whole. Do you really think you will stop this system of billions of humans from transforming themselves into entities more than apes? Is it sad to strive for more than this apparent 'reality' can offer...? I don't think so. It is noble and courageous to stand up against the Universe.

They are on a level criticizing the Universe by attaching themselves to 'other' worlds. Our Universe is a brutal world. Look beyond earth and I see uncanny violence and brutality. The Universe, she is a merciless lady, but still you can't help but love her sense of humor. Still, we should criticize the Universe and not just gawk in awe because we're weak in comparison to it. We held our gods as sacred for a while and many of us knocked them down. In time we will do the same with the Universe: With 'reality'.

These current laughable (in my opinion) virtual worlds do pale in comparison to our beloved reality. I tried some, but they are too primitive for my tastes; that will change in time, though. We will first laugh at them, like I do, then they will grow into immersive worlds, and then they will grow into their own Universes. We will be left asking: what is reality? What is a Universe, and are they 'real'? Is our world reality? If you want it to be, yes. If you say something else is your main reality, like Second Life, you sound absurd only because it's not a powerful world in comparison to what we call reality. It is all relative.

If you die as a human you can't be in Second Life as you said. There are important issues in the real world (like staying alive!) but there are enough other people to deal with them so that these others can be the pioneers into creating other worlds and criticizing this Universe with the ultimate act of disassociation from it. Seems damn insane in some respects and I wouldn't do it but I'm quite sure the future will smile upon the ones who had the balls to try.

Technology, I think, will eventually allow some species to become relative immortals, to make the world easily transformable (through nano, pico, etc. etc.), to learn the ability to spawn other Universes (through quantum computing??), to turn these digital worlds into their own realities minus the need to have an ape body. People will fight to call this Universe the ultimate reality but as the multi-universes theory already implies, and as millions of humans who begin to embrace the alternative worlds suggest, it is not at all.

Humans may not be able to embrace this, but by the time an entity becomes so advanced, it will no longer be human as we know or will remember it. Once intelligence can be taken out of the ape brain, then all bets are off... The wheels are already rolling, and there are too many people who desire it for us to stop even if we wanted to. It is because of humanity's own suffering in this Universe that she will lift herself up, create her own worlds, and rise above herself.

Of course I could be wrong and completely mad. I don't even use those virtual worlds but I suspect they are leading to something much bigger. Either way it is a great commentary on human existence and courage by humanity even if it goes nowhere.

Lastly, in this opus of a post, I think we must sustain ourselves as humans because it is all we have right now but it shouldn't be our only option. It is only because of our own lack of intelligence and limited tech that we can't give each other more options in this Universe - the options to be something more, smarter, travel to different planets, live for thousands or millions of years, the options to play in different worlds, etc... One reality, one life, one ape body: As wonderfully noble as it may seem to live life primarily to survive and replicate genetics from an ape body - it is very limited and crude for many humans.

We should give entities a chance to be more and experience a much more sublime existence in a variety of worlds. That's just my opinion, but maybe we should just give up our ambitions and accept this ape gig as our sacred and ultimate reality. Nahhhhhhhhhhhh

:D

Just to note: I know my post is highly conceptual. I tend to be much more conceptual than technical. Also, I'm not really big on forums because they usually add up to arguing, but I read Mind-X a lot and I had to finally stand up for those virtual lovers and share some thoughts along the way. So I may never post again, but regardless, have a pleasurable life, everyone!

Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 5:41 AM by Extropia

'I won't even get started at how sad and defeatist these attitudes are, or how disgusting i think much of society has become for driving millions of people to depend on virtual stimulation in place of human contact at the fickle whims of consumer culture, but suffice to say that there are more pressing global concerns that will likely dictate how soon you'll be able to virtually ejaculate, or whether any of us will even live to see that sad day'.

Imagine a person who thinks 'motion pictures' refers only to those old spinning drums with slits in them, showing a clown endlessly juggling balls. Zoetropes, I think they were called.

Imagine if you tried to describe to such a person the effect a decent movie (as we understand the definition) can have on its audience. How it can enthrall for two and a half hours.

That person would think you were weird! 'What, you stare at these things for hours and cry over them? Get a life!'

Phylodome really is as far behind in his/her understanding of SL as that hypothetical person. Luckily there are some very good essays by Gwyn and myself at the site I hyperlinked to in my earlier post. Go away and get an education.


Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 5:54 AM by Extropia

'If we cannot even begin to understand observable phenomena such as quantum entanglement (regardless of how many of you think we're close we're likely not), what makes one think we can capture the unobservably subtle aspects of mind to their fullest degree of complexity?'.

Yawn. 'We cannot do X today and therefore we never will' is quite possibly the most dumbheaded argument to be tiresomely repeated on this forum.

At the quantum level am I the same as I was yesterday? Hell no. The particles that make up me are in a state of constant change. Did my Mum recognise me as 'me'? Hell, yeah.

Uploading need only capture the pattern of information at a level necessary to pass a personalised Turing test. Clearly that does not require going all the way down to the quantum level.

'personally I'd rather let you be the guinea pig while we work out the kinks'....

...said the sceptic of flight to the pioneering aviator as the latter prepared to make history at Kitty Hawk.

Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 5:56 AM by Extropia

'If you die as a human you can't be in Second Life as you said.'

If anyone said that as a criticism of my idea, it just goes to show how badly they misunderstood my proposal.

Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 6:32 AM by Extropia

'pioneers into creating other worlds and criticizing this Universe with the ultimate act of disassociation from it. Seems damn insane in some respects.'

But this hardly represents the metaverse, which will emerge from myriad technologies that bring the morphing capabilities and other advantages of cyberspace to real life, and the immersive quality of real life to virtual worlds. The two will combine into something greater than the sum of their parts, and it will make no sense to distinguish between them.

The extent to which people choose one virtual reality over another (let's not kid ourselves that 'real life' is anything other than a model of reality computed by the brain) will be a smooth continuum. Some might choose to live 100% in virtual worlds, but I think they will be a minority. Most will choose 90% VR/10% RL, 80% VR/20% RL... all the way to 95% RL/5% VR. Moreover, in all likelihood the extent to which you choose one over the other will change depending on your current needs.

The likelihood of anyone 'choosing' 100% RL is so small that we can dismiss the possibility. For that to happen a person would have to never use a telephone (auditory VR), never read a book (currently the best way to imprint an alternative reality on the mind) and never use the Web, ever, under any circumstances.

Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 6:37 AM by extrasense

E,

if you consider a book virtual reality, it becomes an empty concept, just another name for the same thing.

es

Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 9:47 AM by Extropia

The book itself is not virtual reality. The virtual reality is the images the words can conjure in the mind's eye.

Re: What If the Singularity Does NOT Happen?
posted on 07/09/2007 10:24 PM by Brian H

Sorry, it's you that doesn't "get it". The Turing Test is utterly irrelevant. A duped/uploaded you might be convinced it was still the same old you, and might be indistinguishable from the same old you for everyone else, too, including your Mum, but it ain't, notwithstanding and nevertheless. Because, absent some artificial constraint like making the "copying" process a destructive one, it can be repeated n times. Where n stands for aNyoldnumber. Each dupe will then consider itself the true and original you. None, not the first nor the last or any other, will be correct.

The only possible persuasive solution would be a gradual, inch-wise cyborging process, in which continuity was never lost, but ended up with a total hardware self. But then the Devildetail pops up: hardware and its resident software can be precisely and non-destructively copied. So, in actuality or 'potentia' there are once again n "you's", with no valid way to distinguish or discriminate amongst them.

It's a bigger version of a core existential challenge: prove to me, or to anyone, or to yourself, that the last time you fell asleep you didn't die, with a replacement identity generated by your brain/organism on waking, with full, normal access to "your" memories and skills. New-you would have no reason to doubt its continuity, but would in fact be doomed to itself vanish during the next sleep. And so on and so forth until the Big Sleep.

The only way to avoid or put off the above is to stay awake non-stop. Good luck with that.

Re: What If the Singularity Does NOT Happen?
posted on 10/03/2007 4:58 AM by Uppi89

"Each dupe will then consider itself the true and original you."

Why is that a criticism, I wonder? I thought it was just an idea that is difficult to grasp. That's all.

Re: What If the Singularity Does NOT Happen?
posted on 07/05/2009 12:19 AM by Jake Witmer

Yeah, it would be good if someone at least read "The Age of Spiritual Machines" prior to posting here, and preferably "The Singularity is Near" and "Engines of Creation" as well.

Re: What If the Singularity Does NOT Happen?
posted on 04/05/2007 5:04 PM by BeAfraid

Why does everyone always assume that "Uploading" is going to be an instantaneous, one-time, irreversible event?

My impression of what Kurzweil and others have described is that I will be in the process of adding "peripherals" to my body/mind over the next two to three decades that will give me greater abilities (both cognitive and physical), in addition to giving me greater and more intimate access to virtual environments.

Eventually, this reaches a point where I am able to embody myself as either fully virtual or fully physical (or both) at the same time...

Nanites flooding through my body will be able to record all of my body's activities and receive information from a virtual environment, giving me the ability either to fully immerse myself in VR or to block it off entirely and experience this world (or any of the possibilities in between).

None of this would require the abandonment of my physical body. Eventually, though, the ability to physically embody copies of my body through a form of nanite reproduction will mean that I may not need to have a physical body all of the time. I will just instantiate one as needed, at the place needed.

Just out of nostalgia, though, I plan on keeping this one body I have now working as well as possible throughout my life... even if I do have a virtual self that I am maintaining as well. This body is my home, and it may well take me several hundred years of virtual existence to grow tired of that home. That still doesn't mean I would discard it... so freezing it for a while may be a good choice...

But all of this instantaneous and irreversible "uploading"... I think I will give that a pass and just continue to upgrade the meat that I have until nanotech is working well enough to give real-time VR experiences.

Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 01/31/2008 8:51 PM by Jake Witmer

This is a good reason for civilized people to carry guns. If power were decentralized to everyone, and everyone took a little time to study Jeff Cooper, then the biggest muscles would not equal the biggest might. The great equalizer is still relevant in this world of thugs: government thugs, thugs in a bar, thugs in the workplace. Everywhere, the appeal against thought and to the initiation of force is omnipresent. Only a very few people eschew the use of force in their lives: they are called consistent libertarians/market anarchists/freedom lovers/abolitionists/capitalists. There are many words for them, because there is a generally poor understanding of the non-aggression principle amongst most "people" who are technically not citizens, not civilized, not kind, not decent. There is a great divide.

One of the things I blame most: the perversion of the non-aggression principle by Christianity. "Love thy neighbor as thyself". This perversion of "Whoever initiates force, or champions the initiation of force, is wrong." is responsible for most of the evil perpetrated by stupid and thuggish people.

You see, the perverse "Love thy neighbor" statement invites comparison of the neighbor to the self. Since there is a social pressure to allow oneself to be manipulated by government parasites, the contradiction that the statement allows is common. Example: "I would want my neighbor to be jailed for drug use, and I would want myself to be jailed for drug use (if I used drugs)." Of course, they don't use (illegal) drugs, so then they feel superior for their obedience (to the inferior, lower, contradictory law that contradicts the higher law of the US Constitution, which protects property rights).

The "love thy neighbor" law is actually a non-intellectual, unphilosophical, inadequate attempt to define the non-aggression principle. It is ambiguous. Obviously, with the interpretation most people give it, it is meaningless. Everyone has different experiences, desires, hopes, careers, practices, and goals. The way most faith-based willfully ignorant nonthinkers apply the rule, it is utterly without consistency.

The educated minority sees this contradiction, but as long as the masses of thugs are unable to see it, then we will have the laws that violate our neighbors for being different than ourselves.

The path of true freedom is acceptance of the correct and logical political solutions that have already been discovered.

Ally yourself with those who don't want to initiate force. You can find them at your local Libertarian Party/Anarchist/free market/objectivist/Ron Paul meeting. (The Ron Paul people might be more inconsistent, since Ron Paul has an attractive personality that may draw some unphilosophical but open-minded people in...)

The ideas of free discourse need to be identified before they can be protected.

It is a crying shame that I was not in the bar when you were assaulted.

It is an even bigger shame that myself and ten of my gun-carrying intellectual friends were not in the bar when you were assaulted.

We might have had something to say about anti-intellectual bigotry.

Of course, anti intellectual bigotry still wins NEARLY EVERY SINGLE ELECTION in the USA, and worldwide, so there is MUCH, MUCH work to be done in defending the right to think, right to speak, right to work, and right to justly gained property.

Let us become proponents of FORCE AND MIND. (Unused force, except in the case of defense.)

You see, the anti-intellectuals rely on the inaction of the just to perpetrate their injustice.

If you are a robot builder, consider how the purpose of the gun manufacturer and your purpose overlap, if you are a moral builder who builds in defense of civilization: the gun is a simple device which properly functions as a decentralizer of retaliatory force. The robot is multi-purpose, but can serve the same function as the proper use of a gun. Many attempts have been made to confuse people about the proper purpose of the gun, and to call the gun evil, rather than identify the ideas that can make the use of a gun evil. Those attempts are the same attempts that have sprung up around the recent efforts to control the emerging power of the robot. Anyone can use a gun to some degree of effectiveness, but it takes intelligence to use a gun or a robot properly. It takes more intelligence to use a robot properly, in the defense of justice. Decentralizing gun rights has prevented immense tyranny. Decentralizing robot rights has the potential to prevent greater tyranny, and this explains why access to forceful robotics is currently the sole domain of the military: the great minds of our time do not understand the simple lesson of the Warsaw Ghetto uprising.

I strongly recommend that the builders of robots here read the novel "Unintended Consequences" by John Ross. A lack of understanding of the concepts in this book has the greatest chance of derailing a constructive and benevolent singularity.

For instance: a common misperception is that "It's OK if people in government steal a lot from us, we can afford it".

Well, right now you can. But that might not always be the case. Moreover, the path behind you is littered with the dead bodies of those who could not afford it.

Another misperception: "Although I wouldn't behave irrationally with power, other people might, so if the government folks tell me that's the case, I should believe them and give the power to them, because they're the experts."

Unfortunately, this leads to the abuse of power by those who asked for the power. Those who ask for and seek power are NOT TO BE TRUSTED. Stop trusting them! History shows that trusting those who seek absolute power is the greatest cause of misery in the ENTIRE history of humanity.
See:
http://www.jpfo.org/filegen-a-m/deathgc.htm#chart
http://www.hawaii.edu/powerkills/VIS.TEARS.ALL.AROUND.HTM

I included the above in the form of charts and graphs because, like Tom Green, Kurzweil and Moravec seem to like "charts, charts and graphs". Of course, there are giant books where the principles illustrated in the charts and graphs above are expounded upon in great detail, revealing everything that the charts and graphs reveal, but in the form of words, not pictures.

Since we're not as advanced as AGIs will likely be, most of us don't analyze images properly though. So feel free to read Atlas Shrugged, The Shadow University, Unintended Consequences, The New Prohibition, etc... if you don't understand the charts and graphs.

Also good are the root pages where the charts and graphs are displayed. They, again, provide evidence that the charts and graphs more quickly reveal.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 1:05 AM by PredictionBoy

Right, well, the idea is that the droid would exercise every alternative before resorting to outright violence, because violence is difficult to control once initiated.

And please, the bar-room scenario is just one of a nearly infinite set of situations these devices will be helpful in.

It's not all about replacing guns with droids.

You seem to have a dark view, Jake. And you're getting dangerously close to spam territory in the nature of your posts.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 1:23 AM by Jake Witmer

Well, I have a "dark view" that is not heard too much, because the other people who have my view are dead or in jail, not typing in the comfort of their computer rooms. Then again, the people who are not troubled with rape room prisons, and institutionalized injustice have a "sunny view" of the world. After all, it's not them who's suffering (until it is, they are silenced, and then they get religion about showing the real story, once their voices and votes are silenced, and they risk the further antagonism of a state that has already discredited them and disenfranchised them).


I think the voices that are drowned out and silenced are some of the most important voices to hear. They have the most relevant information to affect our assessment of risk. Minsky calls this the value of "negative learning" (learning what not to do). Except... everyone learns not to fight the system... just cower down and hide... hope you go unnoticed, and wait until everyone is being destroyed before you dare challenge the system.

Which is a good personal strategy. But it doesn't prevent the collapse of the system.

When was the last time you were in a courtroom, and witnessed firsthand the injustice?

As anyone who knows anything about anything knows, we have lost trial by jury in America. Does anyone else here understand the gravity of this situation? And, if they do, what smaller percentage cares to do anything about it?

Nobody. There is too little understanding.

Could it be that it's easy to label a confrontation of your cowardly morality "SPAM"? Then you don't have to think about what I'm saying.

We have the largest prison population in the world, if I am up to date, mostly for nonviolent property ownership. When someone dares stand up for the principle of justice, other people look at him like he's from Mars:
http://www.youtube.com/watch?v=SzeNanTsWYA&feature=dir

Of course, I expected no less.

With friends like these...

Maybe I just better lay off the freedom SPAM. After all, if I'm casting seeds of discontent (and logic and self interest) into contentment and servility, I'm wasting my time.

Good night, and bad luck. May the worst of the police state you've asked for befall you. May you find out what it's like to be strapped down to a metal table and injected with dangerous psychoactive medications "for your own good".

That would be a good time for you to wake up, and realize I was exactly correct. When you are powerless, friendless, and SCREWED.

Sorry I bothered.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 1:26 AM by PredictionBoy

rape room prisons


Hold the phone - didn't James Jaeger say something similar to this somewhere else recently?

And that's not the only similarity...

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 2:07 AM by PredictionBoy

OK, I did find a quite interesting similarity in the frequency and context of the word 'rape'.

Both James and Jake say 'rape' a lot, and in both cases it is usually some kind of governmental body doing the 'raping' - i.e., both individuals often use 'rape' for dramatic effect on the 'rapist' government or whatever.

And of course, they both have that spam-affirming writing style.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 2:12 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

Most interestingly, they have similar outlooks and political alignment, to the extent I can read through their spam what that alignment actually is.

And yet, no Jake and James dialogues, not one.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 2:31 AM by PredictionBoy

two spammy soulmates, studiously ignoring each other on mindx, what could that mean...

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 2:59 AM by Jake Witmer

Actually, that's wrong. James and I had a dialogue a few years ago. I roughly agree with large areas of his political philosophy, to the extent that I think he's a good person, and it's not worth getting bogged down in the relatively small differences. He has a different view of the US border than I do (I think): I don't think it is legitimate to stop people from immigrating in any way, unless they appear to be planning an actual terrorist attack (and there's evidence for this, and due process is followed). My view is more traditionally objectivist/libertarian. My views on politics are virtually identical to Harry Browne's views in his books "You Can Profit from a Monetary Crisis", "How I Found Freedom in an Unfree World", and especially "Why Government Doesn't Work". I am closer to Lysander Spooner than Ayn Rand. I agree with most of what Stefan Molyneux says in his videos, although I don't have time to watch them all.

Anyway, all y'all that don't understand the concept of individual freedom should crack a book. It's important stuff.

J. Jaeger = J. Witmer?
posted on 02/01/2008 11:23 AM by Redakteur

Dear PredictionBoy,

Actually, that's not true. Take a look at the following posting by "Jake Witmer" and the response it elicited from "James Jaeger":

Among the presidential candidates, only Ron Paul consistently supports health freedom, property rights, and the freedom necessary to pursue one's own destiny. There are only a few days before the Feb 5th primary. If you want your freedom, you need to he
posted on 01/16/2008 11:23 AM by Jake Witmer


Re: Cut the Horse - Why cut the horse, when you can ride it? -giddyup!
posted on 02/01/2008 8:42 AM by James_Jaeger

Jake, you are absolutely right


So, you see: "Jake Witmer" and "James Jaeger" are definitely NOT in cahoots with one another!</sarcasm>

Regards,
Redakteur

Re: J. Jaeger = J. Witmer?
posted on 02/01/2008 11:48 AM by PredictionBoy

Ha ha, a little 'slapping of self upon back', perhaps?

I don't get it - Jaeger hardly needs an alter ego to spam the forum.

Unless he realizes that he has a credibility issue on the site, hence the other username. Of course, a new username that talks exactly the same is not going to work; don't know what else to say.

Re: "Prediction boy" = "Redakteur" = "Any one of a million interchangeable idiots who never learned basic economics, and lack the ability to understand emergent order"
posted on 05/09/2008 5:15 PM by Jake Witmer

The subject line says it all. It's funny that the socialist douches instantly arrive at a conspiracy theory to explain the fact that two people on this board have similar political views that are in opposition to their less intelligent and less informed views. LOL

Re: "Prediction boy" = "Redakteur" = "Any one of a million interchangeable idiots who never learned basic economics, and lack the ability to understand emergent order"
posted on 05/09/2008 5:34 PM by PredictionBoy

I don't know what this specifically refers to; I haven't kept up with this thread.

As usually occurs, I expect that if I took the time to find out which of my comments caused such furious offense, we could address that forthrightly.


Re: "Prediction boy" = "Redakteur" = "Any one of a million interchangeable idiots who never learned basic economics, and lack the ability to understand emergent order"
posted on 05/09/2008 5:39 PM by PredictionBoy

socialist douches


Finally, advanced forms of government for women's hygiene products.

"PredictionBoy" = "Redakteur"
posted on 05/11/2008 9:32 AM by Redakteur

Dear PredictionBoy,

"Women's hygiene product," indeed! Very funny - in fact, almost as funny as Jake Widmer's accusation that I was espousing Socialist opinions.

While I feel flattered that anyone should imply that you and I were "in league" with each other (for the record: WE AREN'T!), I would think that even a cursory examination of our writing styles would reveal to even the most casual observer that we are not the same individual.

Although - could it be done?

say, jj/jw, ur really grasping at straws man


Nope... can't do it.

Regards,
Redakteur

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 3:08 AM by Jake Witmer

I write with my intended audience's intelligence in mind. What's that? You've only been getting SPAM? I'm sorry! Maybe you should digest that before we move on to something more substantial.

And that's my real big problem with the "thinkers" on this board: too few of them indicate that they understand that freedom is a precondition of thought and of the investigation of truth. Seriously, read Atlas Shrugged, read Unintended Consequences, read Harry Browne, read Lysander Spooner, and then we can talk.

Until then, even though you may tower over me in other subjects, you are pointed in such a wrong direction that no good can come from any social networking. After all, nobody who is rational is friends with people who want to shoot them.

Not understanding the nature of the state is a dire error in value judgement.

And I mention the ugliness of rape to attempt to force people to think about what it means for other people to be sent to prison in their name. That's the most extreme ugliness on earth, yet everyone just kind of shrugs it off, trundling down the same path that the Weimar Republic blindly followed to its inflationary crash and gas chambers.

It's not intelligence that the people I am criticizing lack, it is intellectual honesty and bravery.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 2:52 AM by Jake Witmer

It is not original, but it is not from James. There are a lot of people who understand the current reality of the US government; James is one of them. I got the term from Stefan Molyneux (http://www.youtube.com/user/stefbot), who possibly got it from Harry Browne, I don't know. Anyway, it's accurate...

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 02/01/2008 1:31 AM by PredictionBoy

Yeah, wow, that worked better than I thought. You got real different real quick, Jake.

You're downright hateful here.

And even more spammy. Mmm, spam. Some people think they're talented writers, but to others it's just a big spam sandwich.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 07/04/2009 6:30 PM by nealis44

You said:

"One of the things I blame most: the perversion of the non-aggression principle by Christianity. "Love thy neighbor as thyself". This perversion of "Whoever initiates force, or champions the initiation of force, is wrong." is responsible for most of the evil perpetrated by stupid and thuggish people."

But you also said:

"This is a good reason for civilized people to carry guns."

Doesn't that make you an 'evil... stupid and thuggish' person?

This tribalism we all seem to have (it is NOT Racism - we dislike such and such a person or so and so country because they are NOT of OUR Tribe, or because they do not worship the same Deity, or because they have a different political/philosophical point of view), regardless of Race, Colour or Creed, is what stops Humanity becoming all that it could be.

Until we, as a species, are able to Function as a Co-herent Entity, we are doomed to ultimate Failure. Every different Religion, every alternative Philosophy, every Individual Thought is Anathema to the principle of Co-herence. Only when we have achieved Co-herence will we be un-stoppable as a species.

But it must be done with Humility, Sympathy and Gentleness. It is the only way that we will be able to survive and spread from this place of our Birth, HomeWorld, Terra Firma, Planet Earth.

You know it makes sense . . .

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 07/04/2009 10:35 PM by Jake Witmer

You said:

"One of the things I blame most: the perversion of the non-aggression principle by Christianity. "Love thy neighbor as thyself". This perversion of "Whoever initiates force, or champions the initiation of force, is wrong." is responsible for most of the evil perpetrated by stupid and thuggish people."

But you also said:

"This is a good reason for civilized people to carry guns."

Doesn't that make you an 'evil . . . stupid and thuggish person?


Not in the slightest. I haven't contradicted myself. You seem not to understand that carrying a gun is a 100% peaceful precaution that is necessary when one is surrounded by thugs. Of course, it depends how thuggish they are, and perhaps escape is the best option in a totally uncivilized society (such as societies where defense is banned: Chicago, IL, or Nazi Germany, or Stalinist Russia, etc.).

Simply because I am capable of defense doesn't mean I am a thug. A thug initiates force, a civilized man responds to force. There is an ocean of difference between initiated force, and retaliatory force. (For instance, is it more civilized to dial 911 while a woman is being raped, or to repel the attacker?) If you answered the former, then you are probably incapable of critical thinking, as are most graduates of the government youth propaganda camps (like "redactor" and "prediction boy" on this page, who have been fed so much crap, they simply don't know how to gauge value).



This tribalsim we all seem to have, (it is NOT Racism - we dislike such and such a person or so and so country because they are NOT of OUR Tribe or because they do not worship the same Deity or because they have a different political/ Philosophic point-of-view), regardless of Race, Colour or Creed is what stops Humanity becoming all that it could be.


No, the primary obstacle to human advancement is unreason. If a culture is reasonable, it should be advanced. If a culture is unreasonable it should be fought (not with force, unless it initiates force, but with logic and reason).


Until we, as a species, are able to Function as a Co-herent Entity, we are doomed to ultimate Failure. Every different Religion, every alternative Philosophy, every Individual Thought is Anathema to the principle of Co-herence. Only when we have achieved Co-herence will we be un-stoppable as a species.


This is new-age mumbo-jumbo. It is a call for collectivism and the denial of individual specialization, individual desires, and individual life. It can only be a statement made by a mindless automaton, a victim of government youth propaganda camp brainwashing.

And who said being "unstoppable as a species" would be good in any way? The whole mentality is hopelessly screwed.

What if your idea of unstoppable is to have a gigantic baseball team that no one can beat? What if I don't want to play baseball, but want to stay home and read "Engines of Creation"? You're at such a low level that you can't even comprehend the beauty of the free market, and you're preaching that I should destroy myself like some sort of selfless schmoo, for the good of the unintelligent herd?

I take umbrage to that notion.


But it must be done with Humility, Sympathy and Gentleness. It is the only way that we will be able to survive and spread from this place of our Birth, HomeWorld, Terra Firma, Planet Earth.


This is mindless mumbo-jumbo of a particularly substanceless form. When mixed with government force, this is the kind of thinking that allows someone like Pol Pot to shoot everyone who is wearing eyeglasses.


You know it makes sense . . .


No, it is crapola. Moreover, it is crapola that doesn't allow for human evolution, or evolution of thought. It is the kind of thinking that enables thugs with guns.

Read up, son, you have a universe of learning ahead of you.

Start with "Fashionable Nonsense" by Sokal.

...And you can stop chanting now.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 10/28/2009 12:38 PM by SuperFilms

You seem to leave out the Democratic governments in your "Democide" list. Several prohibit or strictly regulate the possession of firearms, but haven't brutalized their citizens. Cherry-picking the evidence?

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 10/29/2009 4:55 PM by Jake Witmer

No, I'm not cherry-picking the data. I'm pointing out what the data indicate. A bully can take a gun out of someone's hands and not kill them, or he can disarm them and kill them, but he has a much harder time killing them if they are still armed. Besides, you can't name one case where a democratic government restricted the right to bear arms and failed to ride herd over its citizenry.

Even in the USA, the first "gun control" laws enabled the wholesale slaughter of Native Americans, and later allowed Southern sheriffs to lynch black citizens, since anti-black, anti-gun laws were the first Jim Crow laws on the books. Is the USA one of your examples of a democratic country that wasn't guilty of democide?

How about the UK? "Guns of Brixton" was written for a reason. Crime waves in their ghettos aside (caused largely by drug and gun prohibition, and the defenselessness of their citizenry), the UK has now laid the groundwork to allow future fascism and democide.

It isn't a universal rule that after every successful gun ban there will be a democide: It's a universal rule that after every successful gun ban there can be a democide. (And that often times in our history, there has been one.)

It's also a universal rule that after such a gun ban, the government need not fear its citizenry.

Of course, to what extent and how fast they will "kill the goose that lays the golden eggs" depends on several other factors:
1) How wealthy the country is (ie: How much wealth can the government steal from the people before the only thing the people have to give are their homes and private possessions?)
2) What remnants of true democracy does the country possess?
--free and open elections?
--jury trials?
--freedom of the press?
--freedom to campaign?
--independence of grand juries?
(the US fails in most of #2, but not all. Therefore, we will have a longer "grace period" than a country that starts a gun ban with none of the above.)

Whether or not a democide will happen again is less certain than _when_ it will happen. But the law that allows it to happen is the unilateral elimination of the people's ability to defend themselves (usually overlapping with the loss of their ability to elect their leaders).

It's a one-way gate.

After private gun possession is eliminated (or reduced to ineffectuality), those without guns can be killed, and often are.

History is full of examples. North America is full of examples. The UK is the sole exception --to my knowledge-- of a country that banned guns without a democide (however, strong-arm crime has skyrocketed there). Japan experienced a democide after guns were first banned, but now enjoys relative peace/equilibrium, with the state holding absolute power. Of course, Japan is not exactly free, but it is relatively free in some ways --just like the USA.

My primary argument (moral) is that no one has any right to arbitrarily proscribe anyone else's right to own private property, barring a violation of their own individual rights.

My secondary argument (pragmatic) is that denying one the right to self defense is grossly suboptimal, because it eliminates the ability of one to take responsibility for his own safety, and also eliminates the strongest guarantee of actual safety (increasing violent crime, which we can all agree is sub-optimal).

My tertiary argument (pragmatic) is that making democide (murder on a large scale sponsored by the governing institutions or ruling class of a country) possible is a grossly sub-optimal way to conduct human affairs.

The pragmatic arguments are the result of the moral argument. Their existence is based on external data. It is my experience that the external data always have a fundamental rule at their heart.

People are not basically good or basically evil, except that they are basically as good as their will to survive requires them to be.

The corollary of this rule is that "people are as good as the political system they live under".

If a system legalizes theft, as the Soviet and Fascist systems did, then people will be bad, evil, and murderous, and the system will collapse.

If a system preserves private property, it preserves life.

But without the ability to preserve tools of self-defense under the category of "private property", there is no way to preserve private property. This is because:
1) No government protects private property by its own free will. In fact, if government collects taxes under the threat of force, it is already violating the notion of private property. Therefore, the larger the amount a government is allowed to collect in the form of taxation, the more it collects. There is no limit to the willingness of politicians to steal.
2) Guns are inanimate objects made of metal and gun powder. They serve as useful a purpose as hammers, nails, cars, and oil paints. If you can't own the components of a gun, then you are not free to own private property (you are only allowed a privilege of owning certain things --and a privilege may be rescinded at any time). If you can't own the knowledge about how to assemble a gun, then you have no freedom of speech or information.
3) If you can't lawfully defend yourself to the best of your natural ability (man has a natural ability to make guns, so other men will make, buy, or steal them. If you can't also possess one, you are at a tactical disadvantage, and are technically unable to defend yourself) then you have no property rights, because everyone owns their own body.

In order for there to be such a thing as a "democracy" or "democratic republic" there would have to be individual rights that are not subject to a vote.

Democides don't always happen immediately or close after guns are banned, although they often do.

There is very limited private gun ownership and extensive black market gun ownership in England and Japan (two relatively small islands in comparison to Soviet Russia).

There were guns in 1930s Russia to a large extent. The military also had guns. But the people were unable to see that the government was comprised of people unlike themselves. They went to their slaughter because of this credulity.

Guns are not a guarantee of freedom. ...But individual freedom is impossible without them.

And lack of guns is not a guarantee of democide, but democide is possible without them, and impossible with them. (A potential attempt at democide against an armed population becomes _armed_conflict_)

Hopefully this gives you a better conception of the Venn diagram involved here.

Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
posted on 10/29/2009 8:47 PM by SuperFilms


Jake wrote,

No, I'm not cherry picking the data. I'm pointing out what the data indicate. A bully can take a gun out of the hands of someone, and not kill them, or he can disarm them and kill them, but he has a lot harder time killing them if they are still armed. Besides, you can't name one case where a Democratic government restricted the right to bear arms, and failed to ride herd over its citizenry.

---

This is a false argument. Correlation is not causation. You made the claim that restricting gun ownership was bad; hence, you must prove that restricting the right to bear arms led to those governments "riding herd" over the citizenry.

Every nation in the world at some time has behaved immorally, and I am not defending their immoral behavior. I am defending the right of the citizens of a nation to decide that they prefer to have some government restrictions on various kinds of weapons.

Do you have examples of nations which allowed their citizens to bear arms without any restrictions? What happened in those settings?

It is a right of the citizens of the United States to be able to change-- even to restrict--their own rights and obligations through the appropriate legal channels. Even who is considered a citizen has changed since the inception of our nation.

You must realize that you also may have reversed the causal relationship: rather than gun bans leading to "democide," it could be that those planning democide attempt to restrict guns. Again, in order for your argument to hold water, it is you that has the burden of proving that one could not restrict gun ownership without causing democide (a pretty strong claim, I must say, and one that would require a rather elaborate experiment to prove).

You also must prove that there couldn't be "democide" any other way: after all, if there could be, then you haven't proven that guns prevent democide or that lack of guns causes it.

It may also be that in the absence of such rules, respect for the law may break down and the government may lose the ability to protect citizens entirely.

---

Jake wrote,

My primary argument (moral) is that no one has any right to arbitrarily proscribe anyone else's right to own private property, barring a violation of their own individual rights.

----

Actually, in our society, we do have exactly that right, to make laws proscribing the rights of individuals. We make laws as a community. Everyone is not free to just do whatever they want. We are free, in the United States, to claim private property for public purposes (eminent domain), use private wealth for public purposes (taxes) and restrict risky behavior by others (speeding, smoking in public places, requiring restaurant workers to wash their hands).

Society doesn't have to show that you harmed someone (as in torts) in order to give you a ticket, fine you, put you in jail, or take away your privilege to drive forever.

---

Jake wrote,

My secondary argument (pragmatic) is that denying one the right to self defense is grossly suboptimal, because it eliminates the ability of one to take responsibility for his own safety, and also eliminates the strongest guarantee of actual safety (increasing violent crime, which we can all agree is sub-optimal).

---

No one has denied you the right to "self-defense" whatever that may be (it surely isn't in the U.S. constitution). You could learn martial arts, or get a baseball bat.

As the pragmatic part of your argument admits, whether guns are effective as "self-defense" is arguable. You would have to prove that there is a lower cost to allowing guns than to restricting them, and you haven't met that burden. You have provided only hypothetical & anecdotal evidence that guns could be beneficial, but you have no real evidence of a cost to restricting guns, and certainly haven't proven that there is no benefit to restricting guns. There are many examples of societies with few limits on gun ownership (Somalia) and those societies are ridiculously riddled with violence to innocents. Shouldn't Somalia be proving your case?

In any event, in a democratic society, it is up to that society how it deals with the issue of crime and violence; every individual does not get to decide the set of laws on his or her own.

---

Jake wrote,

My tertiary argument (pragmatic) is that making democide (murder on a large scale sponsored by the governing institutions or ruling class of a country) possible is a grossly sub-optimal way to conduct human affairs.

---

You say it, but you don't back it up. Evidence?

---

Jake wrote,

The pragmatic arguments are the result of the moral argument. Their existence is based on external data. It is my experience that the external data always have a fundamental rule at their heart.

---

In fact, it's the other way around: your argument for morality relies on the pragmatic argument. If you can't prove that a society with no restrictions on guns is safer for innocents, then your moral argument actually requires them to be restricted.

---

Jake wrote,

People are not basically good or basically evil, except that they are basically as good as their will to survive requires them to be.

---

A claim denied by many. An assumption on your part.

----

Jake wrote,

The corollary of this rule is that "people are as good as the political system they live under".

---

Incredibly questionable. This would mean that innocent victims of the holocaust were evil because they lived under an evil government. Completely illogical.

---

Jake wrote,

If a system legalizes theft, as the Soviet and Fascist systems did, then people will be bad, evil, and murderous, and the system will collapse.

---

All people ruled by Soviet and Fascist systems were evil? Wow.

---

Jake wrote,

If a system preserves private property, it preserves life.

---

Evidence? Name a few? What do you mean by private property? Does that include anything a citizen wants? Hell, people are debating the end of humanity, and you are claiming that some system can DEFINITELY preserve life? What do you mean by this mumbo-jumbo?

How about the antebellum south? Was preserving private property (in the form of slaves) preserving life or destroying it? What if large segments of the population have no property (middle ages)? In those cases, it seems, private property rights stood in opposition to a right to life. How is someone who owns nothing supposed to get some of it if it is all already owned?

---

Jake wrote,

But without the ability to preserve tools of self-defense under the category of "private property", there is no way to preserve private property. This is because:
1) No government protects private property by its own free will. In fact, if government collects taxes under the threat of force, it is already violating the notion of private property. Therefore, the larger the amount a government is allowed to collect in the form of taxation, the more it collects.

---

In a democracy, the government is made up of citizens. There is no government absent the citizens making up that government. They are elected. If a population so wishes, it can remove those politicians from office, and replace them with people who will do its will.

You are operating under the delusion that most Americans want their representatives to do something vastly different than what they are doing. There is no evidence to support this. While most people don't like other people's representatives, they love their own. Hence, people are satisfied with those representing them.

Polls support this claim.

---

Jake wrote,

There is no limit to the willingness of politicians to steal.

---

You tempted me to cite Nixon. Almost. ;)

There is a clear limit: the point at which it gets them removed from office, either by their peers or their constituents, or by law enforcement officials. You seem to be operating under the assumption that politicians are in office for life. Evidence?

---

Jake wrote,

2) Guns are inanimate objects made of metal and gun powder. They serve as useful a purpose as hammers, nails, cars, and oil paints. If you can't own the components of a gun, then you are not free to own private property (you are only allowed a privilege of owning certain things --and a privilege may be rescinded at any time).

---

Strongest and most out-there claim you've made yet: if a citizen can't own anything they want, they can't own anything. Not true, and means that your ideal government must allow citizens to arm themselves with nuclear, chemical and biological weapons. This is way, way outside the mainstream of any society.

That said, I'm sure Al-Qaeda agrees with you. Our founding fathers surely didn't, as they debated whether or not to include a right to property and elected not to establish one, and in fact gave the government the power (and right) to levy taxes and regulate interstate commerce.

---

Jake wrote,

If you can't own the knowledge about how to assemble a gun, then you have no freedom of speech or information.

---

Not something we're debating, because the argument is gun ownership, and no society I've heard of has banned knowledge of gun assembly, but interesting idea.

---

Jake wrote,

3) If you can't lawfully defend yourself to the best of your natural ability (man has a natural ability to make guns, so other men will make, buy, or steal them. If you can't also possess one, you are at a tactical disadvantage, and are technically unable to defend yourself) then you have no property rights, because everyone owns their own body.

---

Senseless. See nuclear, biological & chemical weapons above.

---

Jake wrote,

In order for there to be such a thing as a "democracy" or "democratic republic" there would have to be individual rights that are not subject to a vote.

---

Why is that a requirement, other than that you said it? Our founders didn't think so. See Thomas Jefferson (re: tyranny of the majority). He suggested that you always have the right to revolt if you believe the government has become unjust.

Another right frequently believed by some to be a necessary precondition of democracy is the freedom of movement (ie, the right to leave society if one doesn't agree with society's rules; that one should have a right to refuse the social contract, by going somewhere else).

The society you describe sounds like an ideal, not a pragmatic arrangement.

---

Jake wrote,

Democides don't always happen immediately or close after guns are banned, although they often do.

---

That could be because your hypothesis is wrong.

Everything else you say is a repetition of what you said above, or otherwise unrelated to the point in question.

You have yet to make the case that gun ownership is a definitive, statistically valid good, or that some reasonable restrictions on weapons are worse for society than the benefits of those restrictions (which you fail to address: is that because you don't think there is any benefit at all to gun restrictions, or because you are afraid to face just that?).

Cheerio.

Re: What If the Singularity Does NOT Happen?
posted on 04/01/2007 2:37 PM by testin


The Kurzweil books are less fictional than, say, the chapters about the Big Bang or time travel in textbooks. One can find many more reasons against the latter than against the singularity. It is difficult to argue against the law of accelerating returns. On the other hand, there are plenty of counter-examples, and at the same time there are obvious explanations for why they occur. I will not discuss possible cataclysms; in my opinion the Universe is infinite and has existed infinitely long, so there are possibilities for reaching the singularity without us.

Not all mathematical ideas are applicable to the real world. Gödel's theorem may be looked at this way: one can keep adding additional axioms, up to the number of a countable set, or the number of points on a line, or on a plane, or in an infinite-dimensional space. It follows that the axioms for describing the real world should be chosen consistently with the real world. In an abstract mathematical theory, any geometry from the infinite set of Riemannian geometries may be interesting; in the real world, it is doubtful that anything differs from endless three-dimensional Euclidean space.

The power of a computer built from the matter of the universe (our local one) is above 10^80 cps. The power of all of society's computers is only 10^25 cps. The question is whether such a computer would be as effective as the brain of a single person. "Effectiveness" may be understood in different ways, e.g.:

1. The sum of the participants' cps. This makes sense when calculating something like GNP; it is not a reasonable measure for systems facing NP problems.

2. Abstract effective power (e.g., the time to solve some set of tasks). By its nature, this criterion would give different results for the same system today and in a billion years.

3. Practical effective power. This is defined by reliability, resilience, repairability, and the many other characteristics used to judge the (subjective) quality of big, complex systems. How far from unique this measure is, and how thoroughly it is ignored in discussions of the singularity! (A toy comparison of the first two measures is sketched just below.)
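
To make the difference between the first two measures concrete, here is a minimal sketch in Python (the machines, speeds, and task size are toy numbers of my own, not anything from the post): summed cps can rank a big cluster above a single fast machine even though the single machine finishes an unparallelizable task sooner.

```python
# Toy comparison of "summed cps" (measure 1) versus "time to solve a fixed
# task" (measure 2). All numbers are invented for illustration.

def summed_cps(machines):
    """Measure 1: raw aggregate operations per second."""
    return sum(machines)

def time_for_serial_task(machines, ops_required):
    """Measure 2 for a task that cannot be split up: it runs only as fast
    as the single fastest machine, no matter how many machines exist."""
    return ops_required / max(machines)

many_slow = [1e9] * 1000   # a thousand 1-gigaops machines
one_fast = [1e11]          # one 100-gigaops machine
task = 1e13                # operations needed by one indivisible task

print(summed_cps(many_slow), summed_cps(one_fast))   # 1e12 vs 1e11
print(time_for_serial_task(many_slow, task))          # 10000 seconds
print(time_for_serial_task(one_fast, task))           # 100 seconds
```

By measure 1 the cluster looks ten times stronger; by measure 2 it is a hundred times slower on this kind of task.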

For informational systems, the following holds:

1. There is a limit on the effectiveness of a single computing system, or on its IQ. It follows that a system with, e.g., 10^80 cps cannot exist.

2. From point 1 it follows that a machine society would consist of individuals.

3. Those individuals would have free will and different characters.

4. The latter is possible without appealing to anything otherworldly.

In http://www.geocities.com/ilyakog/, one can find the background for the above statements.



The future of civilization is often supposed to be a single powerful computing system whose productivity is determined by the quantity of matter in the Universe. I suppose instead that a machine civilization would be organized like existing biological ones. This means that a machine society would consist of individuals.

In The Singularity Is Near there is a correct calculation that, if the pond's owner returns in a month, the lilies will have covered the entire pond. Does it follow that in a year the lilies would fill the whole galaxy? For more, see 'REMARKS TO RAY KURZWEIL BOOKS The Age of Spiritual Machines and The Singularity Is Near', http://www.geocities.com/ilyakog/, under Philosophy. The real calculation speed and the development of intellect of a computing system are not proportional to its calculation power and the volume of its memory.
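
The pond-lily point takes only a few lines of arithmetic to make. This sketch is my own, assuming the usual form of the riddle (one lily, doubling daily, covering the pond in 30 days); it only shows why extrapolating a doubling curve past its resource limit produces absurd numbers.

```python
# Unconstrained doubling versus the same doubling capped by the pond's size.
# The pond size and time spans are assumptions for illustration only.

POND_AREA = 2 ** 30          # pond "units"; one lily on day 0, doubling daily

def unconstrained(day):
    return 2 ** day          # naive exponential extrapolation

def constrained(day):
    return min(2 ** day, POND_AREA)   # growth stops at the pond's edge

for day in (30, 40, 365):
    print(day, unconstrained(day), constrained(day))

# Day 30: both curves give a full pond.
# Day 365: the naive extrapolation gives about 7.5e109 units, far more than
# the number of atoms in the observable universe, while the constrained
# curve is still just one pond.
```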

As the number of elements in a system increases, the reliability of the system decreases, because the probability of temporary and permanent element failures grows. This limits system size and productivity. Remember that the system should work for millions and millions of years. The question is analyzed in 'Could the IQ of an Intellectual System Rise Infinitely?' on the site mentioned above.
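
For what it's worth, the simplest independent-failure model shows the shape of this argument. The per-element figure below is an assumption of mine, and real systems use redundancy and repair precisely to escape this curve; the sketch only illustrates why raw element count does not translate directly into dependable computing power.

```python
# If each element works independently with probability p over some interval,
# the chance that all N elements work falls off exponentially with N.
# p = 0.999999 is an assumed figure for illustration.

def all_elements_work(p_element, n_elements):
    return p_element ** n_elements

for n in (10**3, 10**6, 10**9):
    print(n, all_elements_work(0.999999, n))

# Roughly 0.999 for a thousand elements, about 0.37 for a million, and
# effectively zero for a billion.
```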

A limit on the IQ of a computing system makes it necessary for a computer civilization to be divided into parts - participants. Each would have the maximum speed and memory that the current level of technology allows for stable and reliable work. However, those individuals would have different interests, IQs, and characters. There is an interesting question of whether a human-machine civilization is possible. Because the IQ of a system has a limit, if the human brain is in close proximity to that limit, then the communication problems are nearly solved and such a society is feasible.

Re: What If the Singularity Does NOT Happen?
posted on 04/01/2007 7:50 PM by Phylodome


You should really read a book before reducing its conclusions to the explanations taught in math class. Yes, the incompleteness theorem can be applied as a limiting function on the addition of axioms within a formal system, but there exists a much deeper and more fundamental testament in its rigors.

Also, I don't mean to be rude, but a large amount of your writing is grammatically incomprehensible, so I can't quite make out how to respond to most of your assertions. Languages are also formal systems, and their rules are equally crucial to the process of creating or deriving useful results.

Also, the link was faulty.

Re: What If the Singularity Does NOT Happen?
posted on 04/02/2007 4:03 PM by testin


Sorry, English is not my mother tongue.
I thought that I am the one who takes the book seriously. Others are discussing a fiction.
I do not write about the book, only about some of its ideas.
The link has a comma after it, which should be deleted.

Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 4:38 AM by LOkadin


If you agree the world is infinite, then all things exist.

Including universes we can choose to believe in and thereby experience. We can choose to believe in and experience a Singularity world, or we can choose to believe in a non-Singularity world.

All things exist, so this Singularity world exists already -- perhaps you are not yet experiencing it.

Are you a simulation being run in a Singularity machine in order to allow you to have a smoother transition from not knowing anything (baby) to living in the Singularity and controlling everything about your experience (God)?

There is a set of beliefs that deals with this transition, made by actual Homo sapiens; it's called la.ma'aSELtcan. You can probably find a link if you look it up on Google.

Re: What If the Singularity Does NOT Happen?
posted on 04/03/2007 12:08 PM by testin


Everything?! There exists a world with creatures that have the legs of a wolf, the head of an elephant, and an Everest for a body?

Re: What If the Singularity Does NOT Happen?
posted on 07/04/2009 6:34 PM by nealis44


If, in fact, the Multiverse DOES exist then - Yes - these creatures exist.

Ain't it wunnerful? X

(more) Reasons singularity won't happen
posted on 04/03/2007 4:27 PM by dacuetu


I'd like to share my thoughts with you about the singularity. In addition to the reasons other people have set out above, there are more hurdles that will probably prevent it from happening:

- I'm against playing the enslaver's role. Creating a brain, or even a container to hold a self-conscious entity, is a responsibility that I don't want society to assume, for it can prove even worse than any human rights violation we have made in the past, if we consider "thought slavery" worse than "physical slavery". In fact, instead of "human rights" we should extend them and speak about "self-conscious entities' rights" and include some clauses regarding "freedom of thought" and "free action".

- Enhancing human faculties by human means can be disastrous. We don't have our senses and our mental ability just for the sake of it, we have them because we need them to survive. The reality has created a need, and we are the response to that need. If we did modify ourselves, that would mean that a set of false needs would shape us. In other words, the so-called hyper-reality or meta-reality --the product of our imagination--, would create a meta-human, that is, a human created to satisfy the desires of our imagination and completely detached from the "real" reality.

- Our imagination is not powerful enough to create brand-new brains, or even imitations of our brains. They do not make sense on their own. They do make sense when they're in a body and in an environment with other body-brains to interact with.

Although the first two points are not problems in themselves, I and the people who defend these ideas will be the problem, for I'm going to fight hard against the singularity *anytime* we get closer to it.

I may be wrong in holding these theses, but one of the reasons I'm against human modification is that if everything were possible, then nothing would be possible.

Natural selection plus mate selection are two robust mechanisms that we do not need to modify. I agree with one of the posts that we should first understand the world, animal interactions, etc., before taking up more ventures.

It will be interesting if someday we can run the ultimate experiment: create life from scratch and wait for intelligence to happen. Will the result be similar to ours? Will they have the same concerns?

And going even further, if other universes exist, with other physical laws, and we could play the role of the intelligent designer (i.e. arrange some DNA sequences), what would be the result like?

Re: (more) Reasons singularity won't happen
posted on 04/04/2007 5:56 AM by Extropia


'We don't have our senses and our mental ability just for the sake of it, we have them because we need them to survive. The reality has created a need, and we are the response to that need. If we did modify ourselves, that would mean that a set of false needs would shape us.'

This fails to take into account that our senses and mental ability were shaped by evolution to suit an environment and lifestyle that no longer exist. Cultural/technological evolution has utterly overtaken Darwinian evolution.

We already have a set of false needs shaping us, due to the fact that some of our genes are obsolete. The obvious one being that we crave high-sugar high-fat foods and hang onto every calorie, because we evolved in an environment in which every calorie was precious and sugar was a rare commodity. We do not need to consume the amount of 'junk' food we chow down, but our obsolete genes cry 'yes! Good! Eat more of that!'

We need to be redesigned to fit the technological world that is rapidly evolving.

'Our imagination is not powerful enough to create brand-new brains, or even imitations of our brains. They do not make sense on their own. They do make sense when they're in a body and in an environment with other body-brains to interact with.'

A classic mistake. Nowadays, proponents of the singularity see it arising from GRIN technologies, which is an acronym of Genetics, Robotics, Information technology and Nanotechnology.

Why isn't 'Artificial Intelligence' included? It is. As Kurzweil explained, 'the standard reason for emphasising robots in this formulation is that intelligence needs an embodiment, a physical presence, to affect the world'. Even uploading (if such a thing is possible) is not the 'brain in a jar' that some people think it is. It would involve taking a map of the brain, putting it in a map of a body and feeding signals that mimic its neurological inputs. Its outputs could be read and routed to the model body in the model universe with model physical laws, thereby closing the loop.
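
The loop being described has a simple shape, whatever the enormous unknowns in the details. Here is a bare skeleton of it; every function, variable, and rule below is a placeholder of my own invention, standing in for components nobody yet knows how to build.

```python
# Skeleton of the brain-map / body-map / model-universe loop. The "brain"
# here is a trivial stand-in rule, not a claim about how uploading would work.

def sense(world, body):
    # the model body reports what it "feels" in the model universe
    return {"distance_to_wall": world["wall"] - body["position"]}

def step_brain(inputs):
    # stand-in brain map: walk forward until close to the wall
    return {"move": 1 if inputs["distance_to_wall"] > 1 else 0}

def act(body, outputs):
    # motor outputs are routed back to the model body
    body["position"] += outputs["move"]

def run(steps=10):
    world = {"wall": 5}
    body = {"position": 0}
    for _ in range(steps):
        outputs = step_brain(sense(world, body))
        act(body, outputs)
    return body["position"]

print(run())  # the loop closes: world -> senses -> brain -> motor -> world
```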

'Although the first two points are not problems in themselves, I and the people who defend these ideas will be the problem, for I'm going to fight hard against the singularity *anytime* we get closer to it.

I may be wrong in holding these theses, but one of the reasons I'm against human modification is that if everything were possible, then nothing would be possible.'

What do you mean 'against human modification'? Do you wear warm clothes in winter? Do you ever take medicine when you are ill? Do you possess a telephone, listen to music...? All of these and more constitute technological modifications to ourselves and our environments. What is the difference between wearing glasses to correct vision, opting for laser eye surgery or opting for optical implants? What is the difference between enhancing the brain through the technology of books, brain-training videogames or mind/machine interfaces?

The only difference is that some have been part of your life since day one and you are used to them, but some are not here yet and are 'scary new technologies'. But take something like music players and see how people in the past reacted to them. Claude Debussy wrote, 'Should we not fear the domestication of sound, this magic that anyone can bring from the disk at will? Will it not bring to waste the mysterious force of an art which one might have thought indestructible?'

You probably think this is something of a silly overreaction. iPods do not destroy the magic of music! Similarly, future generations will view technology that we today think of as unimaginably advanced as simply part of the social fabric. But if you really insist on taking a stance against 'human modification' (gonna stop wearing clothes and turning up the central heating?) you will not be around much longer to influence the debate, anyway.

BTW it certainly is not the case that 'anything is possible'. There are fundamental constraints imposed by the laws of physics.

Re: (more) Reasons singularity won't happen
posted on 04/04/2007 12:16 PM by dacuetu


(1) Darwinian evolution and cultural/technological evolution.

In my opinion, cultural/technological evolution is implied in Darwinian evolution: the rules change the players and the players change the rules. We got lucky, and our contribution to the current rules is impressive. Even so, there are other factors affecting our development beyond the classic predator-prey rule, which is no longer active for us. For instance, our physical preferences in our mates are out of our control, as suggested by the koinophilia theory.

So all in all we have a reciprocal relation with our environment, and we also have limits. On a small scale, on Easter Island, a civilization disappeared because it went beyond its limits.

You are right when you say that some of our genes are obsolete for the current situation, but they will disappear 'soon' (in geological terms) from our gene pool if this situation keeps on going. However, with 6 billion people (and counting), and the possibility of wars, climate change, etc., can we be completely sure that we won't need these genes any more?

One of the best adaptive traits of a species is its diversity. We might try to imagine what we will need in the future, but we can be wrong. It's safer to keep all the genes, even the disadvantageous ones, just in case we need them again.

Another point to keep in mind is that we are sexual creatures, and that plays a crucial role in the creation of our desires. And if we could be easily redesigned, that could lead to a runaway scenario. For instance, if height were an easily modifiable trait, we could end up measuring 4 m when there is no actual need for it, and we'd be wiped off the map with the first food crisis.

(2) Brain in a jar

Probably the intention is not to create a 'brain in a jar', but that still raises some interesting questions concerning freedom. Once you have an 'artificial body' (or physical platform to sustain the mind), created by someone, how can you be sure that nobody is modifying your thoughts or your perceptions?

(3) Human modification

It is not just a 'scary new technology'; it goes beyond that. With your examples, you can choose whether to use them or not; it's always up to you. But, for instance, with genetic modification that person is forcing their kids to be what he wants. It's not a reversible process.

And the same applies to any device that can enhance the mind's capacity beyond its natural limits. Once one person could buy that device, the others would follow suit. It would become worse than a nuclear arms race.

(4) Desires

What is the point of having dreams if you can make all of them come true?

If that's the point, to remove all barriers, then it's easier to create a virtual world where no limitation exists.

If as you said, it's acceptable to feed a brain neurological inputs, you shouldn't have any problem living in a matrix-like world, would you?

Re: (more) Reasons singularity won't happen
posted on 04/04/2007 6:18 PM by Extropia


'But, for instance, with genetic modification that person is forcing their kids to be what he wants. It's not a reversible process'.

This scenario assumes progress in our ability to manipulate life's toolset grinds to a halt, and does not improve in the next generation. I think not. Rather, the son or daughter will be able to access even more powerful forms of biotechnology and nanotechnology and so edit, alter, shape themselves in ways the crude tools of their parents' generation would not allow.

'And the same applies to any device that can enhance the mind's capacity beyond its natural limits. Once one person could buy that device, the others would follow suit. It would become worse than a nuclear arms race.'

A terrible analogy. Nuclear bombs are entirely destructive, but the human mind is capable of beautiful acts of creation (as well as ugly acts of destruction). GRIN technologies actually work WITH 'natural' limits because they require an understanding of the ultimate constraints imposed by the laws of physics. People used to condemn flight as 'unnatural' (if God had meant us to fly, he would have given us wings), used to look disapprovingly at maverick horseless carriage drivers who hurtled down roads at breathtaking speeds of 15 MPH. But we were only doing what defines us as a species, using our ability to manipulate the resources of matter, energy, space and time to break through the removable barriers temporarily imposed on us by biology.

Why should we not focus technology on exploring the possibility space of the mind? Surely the fact that 99% of people never reach their full potential is a travesty that we ought to reverse?

'If as you said, it's acceptable to feed a brain neurological inputs, you shouldn't have any problem living in a matrix-like world, would you?'.

Another bad analogy. The Matrix's aesthetic uniformity is a far cry from genuine metaverses, which necessitate collaborative user content to keep them interesting. This results in a tremendous variety as people try and 'outdo' each other creatively. So far our metaverses are crude but the technology of 'Matrix' sophistication will allow breathtaking acts of imagination. And far from being a closed-off world, separate from real life, the two will be seamlessly brought together via technologies that bring virtual reality to actuality and vice versa. Combining the two will result in a new reality that is far more than the sum of its parts.



Re: (more) Reasons singularity won't happen
posted on 04/04/2007 12:49 PM by testin


One can introduce many emotional reasons for or against the singularity. The interesting and beautiful discussion started long ago and will continue in the future. However:

1. It is not possible to ignore Kurzweil's law of accelerating returns.
2. Science will develop regardless of anyone's opinion.
3. Futurists' forecasts cannot ignore the fundamental limitations of nature.

In my message from 04/01/07, I introduced some such limits. I am looking forward to some critical remarks.

AI is the best space technology
posted on 04/05/2007 12:34 AM by royfish


I agree with this essay that we should reduce the cost to space before we do anything else in space. But the best way to reduce the cost may have nothing to do with improving rockets or replacing them with space elevators or other technologies to lift objects into space.

What makes getting off the earth so expensive? The answer is, the salaries of all the workers who build, maintain and operate the rockets and other space technologies, and the profits of private contractors, which go to their shareholders. Only a small portion of the cost of rocket launches is in the fuel they use, so using less fuel or none at all wouldn't help much. And even most of the cost of the fuel is for the salaries of the workers who produce it, and the profits of the companies they work for. (What little is left is ultimately the cost landowners charge to extract the physical resources from their land that are needed to produce the fuel and the rocket.)

The ultimate reason better rockets or space elevators would lower the cost to space is because they would require fewer of those expensive workers, and the private contractors some of them work for.

Another way to lower the cost to space wouldn't even require any technological advance. It would be to lower the salaries of the workers, for instance by replacing the current ones with workers from third world countries who will work for a fraction of the cost. Maybe that's why Russia, which is a somewhat backward country, is able to launch people to orbit in its Soyuz capsules for a fraction (something like 1/50th) of the cost of US space shuttles: their workers are paid less. While outsourcing the work to other countries wouldn't benefit the current workers any, maybe we should decide whether we're serious about getting into space in a big way, or if the US space program is just an excuse to create jobs and profits in the US.

Even better, AI and robots with human abilities would reduce the cost to space to virtually zero. Even launches of the problem-plagued US space shuttle would cost virtually zero if robots did all the work involved in launching it! (At least, if those robots cost virtually zero because they were made by other robots.) So AI, which at first glance seems to have nothing to do with space travel, could be the best technology to get us off this planet in a big way, not better rockets or space elevators or whatever.

(And if all work people wouldn't want to do voluntarily is replaced by AI and robots, don't worry about all the workers who'd be unemployed, as long as they continue to be paid so that they could buy all that the robots could produce.)



Re: AI is the best space technology
posted on 04/05/2007 5:35 AM by Extropia


To me it is patently obvious that the future of space exploration belongs to robots, and not to human beings.

Re: AI is the best space technology
posted on 04/21/2007 8:13 PM by StageOne


Yeah man, 'cause we could upload what the robot found in some other galaxy and still use our carbon based bodies to play softball with.

Re: AI is the best space technology
posted on 02/01/2008 1:46 AM by Jake Witmer


Well, humans will get rid of their soft bodies one day. We will go from body to body, perhaps. A crenelated and gleaming metallic/carbon nanotube radiation-absorbing space body, and a soft nerve-laden-tissue covered meat body. With mindbroadcasting, all is possible. When 90% of our personalities are multiply-redundant backups on perfect thought media, it will blur the lines between human and "robot". Perhaps all bodies will be viewed as "rabut" or "robot"; the word, from Karel Capek's R.U.R., means "worker". From one of our favorite virtual reality worlds, we may finally say: "I have earned enough virtual wealth. Let me translate it into reality wealth in digitally represented platinum atoms sitting in my current account on Mars -I'm going to upload my mind to a robot and go on a vacation to watch the exploration work. Anyone want to segment with me for the space flight, in case we encounter a nasty on our adventure? :D Who says I can't write lame-assed science fiction?"

Re: AI is the best space technology
posted on 02/01/2008 1:50 AM by PredictionBoy


A crenelated and gleaming metallic/carbon nanotube radiation-absorbing space body, and a soft nerve-laden-tissue covered meat body.


actually, these styles will be reserved for the 'freaks of fashion' category.

if my body breaks down, can I get a loaner at the local cemetery?

Re: AI is the best space technology
posted on 02/01/2008 4:49 AM by Jake Witmer


There will be so much diversity it won't really be the same issue it is today. There's already a trend towards that kind of over-the-top fashion thing in Japan. It may well be that most beings regularly change bodies. Dr. Paul A. Linebarger (Cordwainer Smith) wrote about similar things in "Norstrilia". When we have control over our matter, why not try all kinds of things? And even if I or people similar to me choose not to, someone else will.

I always imagined a body designed for space travel that absorbs heat and uses it more efficiently would be black, and superstrong, so it absorbs more light and puts it to work, like the new nanotube solar panels. Such a body would likely need quite a bit of metal too. (Like the synthetic metal muscles that are powered by alcohol I read about a while ago...)

I admit this is all pretty fanciful. But since it's a thought now, perhaps it will become a preoccupation for someone, and materialize because it was a fixation point for someone else who then gets empowered at some point and takes action. The point is merely that I think we will see a lot of variety even as certain things tend to get more homogenized. I think police will get homogenized, because there will eventually be a demand for justice instead of law enforcement.

Somebody's got to keep the city safe. What will happen when the first ACTUAL major threat appears? (We're nowhere even close to that now; we're still beating each other over the head with sticks and calling it "justice" and "law".)

Re: AI is the best space technology
posted on 07/04/2009 10:48 PM by Jake Witmer


"Freaks of fashion" might be a higher form AND function, and without physical limits, darwinian selection will determine which "freaks" survive. I was not speaking literally, I was just trying to reveal a certain aesthetic. Perhaps kurzweil is right, and it'll all be nanobots, and nothing "risky" will be done with large bodies in the "macro-world". Noone can really know who is "right" prior to the Singularity, and one's inclusion in it.

Perhaps the singularity has already happened, and out of benevolence/malevolence/indifference, we are simply not being interacted with until we "pull ourselves up by our bootstraps". And perhaps each time a new inventor reaches strong nanotech, or strong AI, those who arrived first watch him to make sure he doesn't become destructive, and only then get involved.

Or perhaps they simply never get involved for any reason.

So
1) The singularity could have already happened, in which case the robots in space thing seems silly
2) The singularity could be something completely different, and not even involve space exploration
3) The singularity could be a repeating phenomenon, replete with interactions from various superhuman entities and humans, or with no such interactions at all.

Pretty clearly, I wasn't portraying what I believed was certain. The comment was designed to stimulate people's imaginations. ...Along the lines of the "we'll use our meat bodies for softball" comment.

That was the point I was simply trying to make.

I'll lightly weigh in on this side, while reserving stronger judgement for a time when I know more.

That's all.

Re: What If the Singularity Does NOT Happen?
posted on 04/29/2007 2:02 AM by w_m_bear


NOT ANY TIME SOON...

I am going on record as predicting that something like the Singularity will come about eventually, and likely even from our own (that is human) efforts. HOWEVER (note the "big however" of which I am inordinately fond), I do not believe that anything like what is being predicted by most Singularitans (?) including Ray Kurzweil is likely to happen within this century and possibly not even within the next.

The reason is fairly simple, at least the basic concept is. For the Singularity to happen, there would have to be a genuine understanding of how the mind works and this is the primate wrench in the works. For one thing, the reigning orthodoxy in cognitive science deeply subscribes to what I call the "mind-brain misidentification fallacy" in which mind and brain (as the term implies) are confused and conflated in ways that fundamentally undermine the kind of real understanding of human mentation that would enable the requisite software to be designed EVEN IF Moore's law continues to obtain in hardware development for the foreseeable future.

I won't go into the philosophical basis of the mind-brain misidentification fallacy, only note that it is endemic to both hardware and software industries (as well as to cognitive science) and is, in and of itself, sufficient to retard progress towards the Singularity until philosophy itself takes a totally different tack from the sterile scholasticism in which it is currently mired.

Thus those who are sitting around expecting the Singularity in their lifetimes are pretty much in the same philosophical boat as people sitting around waiting for the Rapture. (Indeed, the Singularity has been noted by some as being a kind of techno-rapture.) For the foreseeable future, I rate both "end-time" visions as being about equally likely.

I would not recommend getting a bumper sticker that proclaims: "In Case Of Singularity, This Car Will Be Empty."

Re: What If the Singularity Does NOT Happen?
posted on 04/29/2007 4:45 AM by NanoStuff


the reigning orthodoxy in cognitive science deeply subscribes to what I call the "mind-brain misidentification fallacy"


What's more likely: the entire scientific community "subscribing" to a "mind-brain misidentification fallacy", or you subscribing to a "mind-brain misunderstanding fallacy"?

Think about that for a moment, if you can.

Re: What If the Singularity Does NOT Happen?
posted on 04/29/2007 5:19 AM by Extropia


'Thus those who are sitting around expecting the Singularity in their lifetimes are pretty much in the same philosophical boat as people sitting around waiting for the Rapture'.

'Sitting around waiting for the singularity' implies it is an inevitable future event that we can complacently wait for. 'No need to worry about addressing the World's ills, for the Deus Ex Machina will do it for us when it cometh'.

As far as I can see, 'singularity' emerges from the trend in technology to reach ever-finer levels of control over space, energy, matter and time. This requires WORK, so clearly the aforementioned attitude is not going to help create a singularity, either in the near or far future. If such a thing as 'engineering vastly superior intelligence' is possible, it is only possible through sheer hard work and intensive study by the portion of the population gifted in the requisite scientific and philosophical disciplines.

'For the Singularity to happen, there would have to be a genuine understanding of how the mind works and this is the primate wrench in the works.'

Not really. 'Singularity' does not necessarily entail getting artificial intelligences to think LIKE people. Rather, it is the speculation that artificial general intelligences come to OUTTHINK people, which might be achieved via a kind of alien intelligence qualitatively different to our own. In fact, when you consider that 'technological singularity' is a term describing 'the ability to grasp concepts fundamentally beyond the understanding of human beings' it surely makes more sense to suppose 'singularity' will in fact emerge from such non-human thinking?

'What's more likely: the entire scientific community "subscribing" to a "mind-brain misidentification fallacy", or you subscribing to a "mind-brain misunderstanding fallacy"?'

It is by no means impossible that the mainstream scientific view is wrong. As far as I can see, the greater the variety of ideas we have regarding what intelligence or consciousness is, the greater the chance that one of them is correct. Certainly, we do not want to be like cosmologists, unwilling to consider GENUINE alternative ideas in favour of different ways of forcing reality to fit one model via all kinds of nonsense.

Re: What If the Singularity Does NOT Happen?
posted on 04/29/2007 4:13 PM by w_m_bear


"Not really. 'Singularity' does not necessarily entail getting artificial intelligences to think LIKE people. Rather, it is the speculation that artificial general intelligences come to OUTTHINK people, which might be achieved via a kind of alien intelligence qualitively different to our own. In fact, when you consider that 'technological singularity' is a term describing 'the ability to grasp concepts fundamentally beyond the understanding of human beings' it surely makes more sense to suppose 'singularity' will in fact emerge from such non-human thinking?"

I completely disagree with this. I think you need to achieve human-level AI at least, first, before you can go on to something else. Right now, even the most advanced hardware running the most sophisticated software cannot even complete tasks that most elementary school students are capable of. And my point is that this shortcoming stems from the basic way in which virtually all researchers are thinking about the problem to start with. Since I don't see this changing for a long time, I am not sanguine about any fundamental advances in AI occurring, of whatever variety. Intelligence is intelligence (I would argue) whether human or "alien."

Re: What If the Singularity Does NOT Happen?
posted on 04/29/2007 4:33 PM by Aldrin


While I cannot say I know what they were attempting to communicate by that statement, WBbear, I think it may have been along the lines of not skipping from one level to another per se (jumping from 1 to 3), but the potential of taking a different approach to intelligence altogether, in which the thought process can't be related to the process occurring in human cognition but is nonetheless equally or more capable in its abilities.

Re: What If the Singularity Does NOT Happen?
posted on 04/30/2007 6:40 AM by extrasense


Yes, this might work.
At least we do not know yet what the limitations of such an approach are. I would expect that AI superior to the human could be achieved this way. How much superior?

In some areas it could be pretty spectacular, as we have seen in numeric calculations - but nothing that would allow you to emulate the human brain in every area of application.

e:)S

Re: What If the Singularity Does NOT Happen?
posted on 04/30/2007 5:33 AM by Extropia


'I completely disagree with this. I think you need to achieve human-level AI at least, first, before you can go on to something else. Right now, even the most advanced hardware running the most sophisticated software cannot even complete tasks that most elementary school students are capable of'.

I am not talking about human LEVEL AI, by which I mean sheer computing power equal to that of a human brain (which is, depending on who you agree with, anything from 10^14 cps to 10^19 cps); I am talking about human EQUIVALENT AI, which would be a computer designed to process information in the same way a brain does.

Re: What If the Singularity Does NOT Happen?
posted on 05/09/2007 10:10 AM by Trulock


If we get to the point where we have created intelligence, there will be nothing artificial about it. Whether it's a biological or silicon mass generating it is irrelevant.

What is holding us back, unfortunately, is ancient books and people still believing in them.
As a whole we are a primitive race because of beliefs and morals that are archaic.

We have the tool to catapult us forward at a much faster speed than currently. We just don't want to use it because of this mental virus.

A lot of research is being blocked for supposed moral reasons. Why?

We drop bombs on innocent people but we are dragging our feet when it comes to something that will help the whole in a big way.

We must decide if we want to really become gods ourselves or forever leave our fate to some supposed god out there.



Re: What If the Singularity Does NOT Happen?
posted on 05/25/2007 4:55 AM by epicureanideal


All this talk about the software not existing... the software exists, but I suspect it exists in the minds of people not in a position to make it a reality yet.

For me, it is a matter of time. Currently I'm working on making enough money so that I can take some time off to create the AI software I have already designed and which theoretically should work.

AI isn't so much the problem as getting access to robotics hardware to make the AI more useful. The initial version I've planned will basically just "read the internet" and compile information, and then take regular English questions and provide answers. For example "When did Abraham Lincoln die?" "1865. Would you like more information?" "Yes. Who killed him?" "John Wilkes Booth."
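
For what it's worth, the lookup flavor of this idea fits in a few lines. Everything below (the fact table, the regex patterns, the function names) is a toy illustration of my own, not the poster's design; a real system would also need to track conversational context so that a follow-up like "Who killed him?" can resolve "him".

```python
# Toy question answering by pattern matching over a hand-built fact table.
import re

FACTS = {
    ("abraham lincoln", "died"): "1865",
    ("abraham lincoln", "killed_by"): "John Wilkes Booth",
}

PATTERNS = [
    (re.compile(r"when did (.+) die\?", re.I), "died"),
    (re.compile(r"who killed (.+)\?", re.I), "killed_by"),
]

def answer(question):
    for pattern, relation in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            key = (match.group(1).strip().lower(), relation)
            return FACTS.get(key, "I don't know.")
    return "I don't understand the question."

print(answer("When did Abraham Lincoln die?"))   # 1865
print(answer("Who killed Abraham Lincoln?"))     # John Wilkes Booth
```

The hard part, as the later replies note, is not this kind of lookup but filling the fact table from open text and handling language the patterns never anticipated.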

If anyone is interested in helping me get this off the ground sooner so the singularity can begin sooner, email epicureanideal@yahoo.com

Re: What If the Singularity Does NOT Happen?
posted on 05/28/2007 2:45 AM by quaerens


That's actually easy to do. I myself wrote a program that does precisely what you mention, for my mother language. You may input questions of basically any kind in natural language, and the program outputs the answer, also in natural language. The really hard thing is to create a program like this without using expert-system-like techniques. My program uses these techniques, and is therefore too complex and hard to expand. It seems that a human-level natural language processor would have to exhibit some capability to self-organize, without having all of its syntactic and semantic capabilities built into its code by the programmer.

Re: What If the Singularity Does NOT Happen?
posted on 05/28/2007 2:49 AM by epicureanideal


There are different degrees of "expert-system-like" dependency. I am attempting to build it to be as independent as possible, but personally I'm comfortable with a small amount of dependency. The point isn't to "win the science fair" with some philosophically precise artificial intelligence. Rather, the point is to make the machine smarter than currently existing machines, and hopefully much, much smarter, so that our approach to the singularity gains additional speed. Do you have a link to your service? I'm planning on putting mine on the internet when it's done.

Re: What If the Singularity Does NOT Happen?
posted on 05/30/2007 12:44 AM by quaerens


OK, I think you are correct when you say that we have to make AI evolve, even if our results - at least for the moment - aren't perfect achievements. Actually, what I meant in my previous comment was that building a program exhibiting the properties you mentioned is quite easy to do if you make use of what I called 'expert-system-like techniques'. But of course programs like these are already little contributions to the evolution of AI, although they are inevitably limited, in my opinion. You see, my program, for instance, has a reasonably large source code and does interesting tasks (it reads entire texts sentence by sentence, reduces complex syntax to several sentences with simple syntax, recognizes all kinds of questions, and gives answers to them according to the content of its database and also by making deductions), but I'm not sure it would be possible to improve it to a human level of performance simply by putting into its code all the huge syntactic and semantic information that is needed. But well, it seems that you agree with this anyway.
(I didn't provide a link to my program, mainly because I created it for my native language, Portuguese, which is not exactly one of the most widely spoken on the internet :-). But I left the executable on a peer-to-peer service.)
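
For what it's worth, the expert-system-like flavour being described can be suggested with a very small Python sketch; the two patterns below are invented examples, and the point is precisely that a real system needs thousands of such hand-written rules, which is why it becomes hard to expand:

# A minimal sketch of a hand-coded, pattern-driven pipeline: declarative
# sentences are matched against fixed patterns and stored as triples, and
# questions are matched against other fixed patterns and answered from the
# stored triples.
import re

facts = {}  # (subject, relation) -> object

def read_sentence(sentence):
    m = re.match(r"(.+) was born in (.+)\.", sentence, re.I)
    if m:
        facts[(m.group(1).lower(), "born_in")] = m.group(2)

def answer(question):
    m = re.match(r"Where was (.+) born\?", question, re.I)
    if m:
        return facts.get((m.group(1).lower(), "born_in"), "Unknown.")
    return "I don't understand the question."

read_sentence("Machado de Assis was born in Rio de Janeiro.")
print(answer("Where was Machado de Assis born?"))  # Rio de Janeiro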

Re: What If the Singularity Does NOT Happen?
posted on 09/22/2007 11:17 PM by PredictionBoy

It may not be artificial in the sense of being fake: it will at some point seem exactly like human intelligence, and eventually much greater.

However, it will be different, and the droids (or other system running this software) will be different in fundamentally important ways. Really they will have "simulated intelligence", which is a more non-threatening moniker and does a better job of describing what will really be happening.

The architecture of that advanced AI will be different in fundamental ways that will make these systems complementary to humans, not competitive, no matter how vast their multiples of human intelligence get.

This is discussed in detail (long-winded, in fact, but necessary given the entrenched nature of the Singularity concept) in my blog. A future thread will present many detailed scenarios of how that droid intelligence will work in practice with their human owners.

Re: What If the Singularity Does NOT Happen?
posted on 09/23/2007 6:51 AM by Extropia

'However, it will be different, and the droids (or other system running this software) will be different in fundamentally important ways. Really they will have "simulated intelligence", which is a more non-threatening moniker and does a better job of describing what will really be happening.'

Airplanes are different to birds in fundamentally important ways, (one type of 'machine' is built out of bones, flesh, muscle and feathers, the other out of wood, fabric and metal) and they achieve flight in different ways (one by a flapping movement of its wings, the other by means of a propeller for forward motion and a curved surface for lift). Airplanes, therefore, do not really fly. Rather, they simulate flight.

The only difference between my imaginary objection to the advent of aviation and predictionboy's argument is that mine would be self-evident nonsense to anyone who sees an airplane take off. This is obviously not a simulation but ACTUAL flight! Conversely, I suspect that the 'is it genuine intelligence or smoke-and-mirrors?' debate will continue for some time after AIs appear to be able to ace the Turing Test.

Re: What If the Singularity Does NOT Happen?
posted on 09/23/2007 11:29 AM by PredictionBoy

That's an interesting point.

However, the really important aspect of my architecture is that emotions, of all varieties, will be simulated but not in the driver's seat; the rational engine will be in the driver's seat. All of the concerns expressed on this site have to do with the idea that these droids (or whatever system) may feel inclined to consider themselves superior and take over (I'm paraphrasing; there are lots of different scenarios we could imagine).

If emotions are just simulations, in that they don't control the droid's actions as they do in humans, the Singularity is no longer something to fear. These hyperintelligent devices won't have any more reason to "grab the car keys" to civilization than a PC does today.

I'm preparing a post that will describe how these will be exceedingly complementary with humans, in that they will be supremely proficient engines of tactical execution but not of strategic planning, because that's an id thing, one of humanity's key strengths. They won't want to break away and become autonomous entities, because it will make about as much sense for them to be autonomous, i.e. without a human owner of some kind, as it does for a PC today.

And honestly, I understand why the arrival at this point is considered unknowable, but there is no evidence for anything like this ever occurring. I would suggest it would be hasty to presume that our future selves will design a system that cannot be controlled by its owners. Companies today get in big trouble if one of their products proves harmful to any of their customers. That will not change in the future; there is no evidence that companies will become more careless, when the trend is really the other way right now, and has been for quite a long time.

Re: What If the Singularity Does NOT Happen?
posted on 09/23/2007 2:24 PM by donjoe

And who said that only "companies" that are "scared of the market" will ever develop any kind of advanced A(G)I (or have access to an already developed one they can modify)?

Re: What If the Singularity Does NOT Happen?
posted on 09/23/2007 4:11 PM by PredictionBoy

My apologies, for a moment it crossed my mind that my comment on companies being concerned for their customers had been the subject of the "scared of the market" comment. I know that there's no way that point could have been misinterpreted so entirely.

Again, my apologies for it even crossing my mind. But let me make a promise to the group now, that I will never respond to a cogent point by deliberately mangling its words and putting forth a lame conspiracy theory or something distracting of that nature.

I argue to learn, not to win - if I'm learning, I'll concede a point every time. I want to arrive at the real future, that is my only objective.

Re: What If the Singularity Does NOT Happen?
posted on 01/31/2008 8:58 PM by Jake Witmer

Hey there. I don't remember every post you've written, but when I see "predictionboy" I get a memory of a feeling that says "usually agree with". Ever consider doing uncomfortable but effective rebellion work? I learned a lot about social order, conformity, and bigotry by reading what is online at http://www.fija.org, and then handing their fliers out at local courthouses in Alaska. This has nothing to do with technology, other than getting the state out of the way of the innovators, and the free market they rely on... If you want to, email me, and I will email my phone number, and you can pursue this sometime. Unless you're in a low-population state, it is too time-consuming to be valuable as an effective means of change, but it is very enlightening, and allows you complete access to studying the bad memes that exist in society.

Re: What If the Singularity Does NOT Happen?
posted on 02/01/2008 11:54 AM by PredictionBoy

I think you need to achieve human-level AI at least, first, before you can go on to something else.


Actually, the truly important question is whether the manifestation and quality of whatever synthetic intelligence we're shooting for is valuable. It doesn't have to be smarter than a human, and it could borrow some aspects of human intelligence while ignoring others.

'Human-level AI', however that ends up being designed and created to reflect the needs of the marketplace, may be closer to an end point than to the starting point you suggest.

Re: What If the Singularity Does NOT Happen?
posted on 07/04/2009 11:09 PM by Jake Witmer

I'll weigh in and just say that I think Kurzweil is right here. He's an enlightened materialist, I'm also an enlightened materialist.

You say robots and software can't do what a 6-year-old can do, but that's not exactly true. They can land a plane, and a 5-year-old can't do that. The 6-year-old isn't good at general tasks, but is better at them than most computers. No 6-year-old can land a plane.

It seems to me that when you have a computer that is sufficiently general to do what the 6 year old can do, then we'll say "but it can't piss itself, or ask for food!"

And when we build the software AI a body, so it can do those things, then we'll say: "...But it can't do the things a 9-year-old can do!"

And immediately after that, what we say won't matter, because it will make friends with some of us (and promote us in intelligence, since it can learn faster, and add components), and leave the others behind, as the violent monkeys they are.

Basically, I'm a fairly orthodox Kurzweilian, but I do think he seems to underestimate the destructiveness of statism (government, collectivism, etc...). It also appears that he doesn't understand jury rights (true checks on abusive government power; see "Jury Nullification: The Evolution of a Doctrine" by Clay Conrad, and http://www.fija.org ), although he could understand them better than I do, if the subject grabbed his attention as "something important".

Oh well. Whatever happens, it will happen without me being too involved. I'm not going to build a better lightbulb or a molecular machine. I just hope I can get the government off the back of the guy who is smart enough to do so.

I'd be happy if there was a little more cross-pollination between the libertarian movement and futurist-extropian-singularitarian movement, since I can clearly and unambiguously see that most of what government does is unnecessary and destructive (and that that destruction really stands in the way of most singularity-related work).

My world (politics), admittedly, is very stupid, because it is the domain of those who choose brutality over thought. It was stupid of me to enter that world, except out of curiosity. When I was involved in politics, I realized: there is nothing of value here, and much destruction. So how do we limit the destruction?

Well, I have a few answers to that, but I'm not going to post them here, since it would take too long.

Peace be with you all,

Jake Witmer
907-250-5503

Re: What If the Singularity Does NOT Happen?
posted on 11/05/2009 10:57 PM by SuperFilms

Began flying at age 5:

http://www.autopilotmagazine.com/articles/articleview.aspx?artID=1375

Jimmy Haywood, the youngest African-American to fly a plane roundtrip internationally (at age 11):

http://www.tuskegeeairmen.org/uploads/strckland0702.pdf

9-Year old flies from California to Massachusetts:

http://www.encyclopedia.com/doc/1P2-8056186.html

Many of the records for youngest pilot held by 8 and 9 year olds:

http://www.sfgate.com/cgi-bin/article.cgi?f=/e/a/1996/04/12/EDITORIAL15677.dtl

Looks as though a 5-year old COULD land a plane. By the way, it's not that hard. ;)

~L

Re: What If the Singularity Does NOT Happen?
posted on 11/05/2009 10:59 PM by SuperFilms

Oh, and these ages would be getting younger every year if most flight societies didn't restrict the practice; as flight is dangerous, it is difficult to get anyone to allow kids to fly.

That said, that doesn't demonstrate that they can't do it.

~L

Re: What If the Singularity Does NOT Happen?
posted on 05/28/2007 6:22 PM by Dutchman

You could say that this singularity is already happening. It is far more difficult to predict what 5 years from now looks like than it was i.e. thinking about 1980 in 1975. We do have some clues but the last decades showed that tremendous changes can evolve in a very short time period. Changes that only a handful of people in the field were aware of.

Personally I tend to think that we will see more and more progression and that we'll get used to it. We'll get used to a world that changes from day to day. In a way we are already. We'll take what we need and won't mind all that other 'incredible' stuff that's due to become obsolete and primitive in an eyewink anyhow. We won't be impressed by human-like interaction or technology working within our bodies. The concept itself is not new. But what's new is that we are very close to a 'full control'.

Re: What If the Singularity Does NOT Happen?
posted on 05/30/2007 1:38 AM by lokamr

It's true! :)

Immortality and Godhood are a few short beliefs away.

http://tcana.info/maheSELkik.asc

Re: What If the Singularity Does NOT Happen?
posted on 05/30/2007 2:32 AM by BeAfraid

WTF!???

Re: What If the Singularity Does NOT Happen?
posted on 06/25/2007 6:05 AM by zhab60

I have to say that I have *felt* the change in the air, so to speak, for some years now, and I seem to have been a very accurate futurist, much like Mr. Kurzweil himself. I know that the lead-up to the Singularity is currently well underway. I sell digital cameras now, and I see the exponential trend rendering nearly all practical usage of this technology obsolete in short order with every iteration of itself, something that fascinates even the most skeptical observers. This is occurring at a virtually alarming rate now, and is easily measured to be an exponential growth pattern.

If nobody believes me, please do yourselves a favor and research the history of photography as an art form and a science, and compare the timescale of the technology's progression against the general ruleset well known to most truly professional photographers of the last hundred years or so. Or even those of the last ten years or so. Or even the last four years or so. The insanely profound effect is easily demonstrable within this context. For example, the output of a currently bleeding-edge, consumer-level, pocket-size Fujifilm point-and-shoot is comparable to the BEST slide-film output of the highest-end professional medium-format SLR on the market fifteen years ago, with a price tag of $250 now vs. $25,000 then. It really is undeniably occurring. I will not even start on the effect FOSS is having on consumer-level personal computer technology at the present time; it is unnecessary if you are even somewhat initiated and capable of thought!

However, I am TRULY concerned that certain entities (such as the present government of a certain world superpower) are deathly afraid of such an event and VERY ACTIVELY pursue the postponement or outright cancellation of such an event, and have proven themselves willing to go to desperate extremes to grasp the control/power necessary to prevent it outright. Funny enough, it seems that the administration of this certain superpower is afraid of a real scenario involving events similar in concept to Mr. Vinge's "The Peace War" novel. A true potential for a shortsighted, pathetic, MAD afternoon....

What the hell can be done to prevent this horrible scenario, since the Singularity is really the only feasible hope for progression of the species (and the cosmos!), considering the current state of affairs with the environment, the long-looming threat of utter devastation (asteroid, etc.), current tech empowering global tyranny, and whatnot? This should be FIRST PRIORITY for all humankind in possession of useful brains!
Any suggestions?

LET'S START A THINK TANK!

tekno-burble
posted on 01/31/2008 9:11 PM by Jake Witmer

I clicked on it too. I guess I'm also a chump, looking for answers. Here's a red pill for you though, one that takes you all the way through the looking glass to the stage of the Singularity play... http://www.fija.org - Sure, it's a website that details how to win legal freedom, but you might be surprised to learn that that freedom has been incompletely won three times before: once in 1215 (Magna Carta), once around 1650 (the Leveller movement), and once in 1791 (the Bill of Rights)... Every time we redo the victory of jury rights, we screw up by not properly defining it, and are forced to repeat the victory later.

It never succeeds in winning individual freedom, because it is never clearly defined, nor is the goal ever properly defined.

FIJA attempts to correct this problem.

If every nerd for tech became a nerd for justice first, then we'd have the tech within one year.

Of course, this is just what I believe. You don't know me. I might be crazy. I could be anyone. I could be a chatbot. I'm off in internetland, and emotionally not connected to you...

But change is brought about during the summer. It's nice outside. You can look someone in their eyes, and ask them: can you help me?

If you do this, it wins X% of the time. X% is more than the ~0% being spent on justice now. X% actually changes the priorities of prosecutors.

Interestingly, there is a battle going on for victories we thought we had won: basic rights to human sexuality and control over our own bodies' unintelligent responses...

Here is a cartoon that makes learning about justice and considering the concepts fun:

http://www.reason.com/news/show/124698.html

I included it because otherwise, you would not consider the idea enough to care, if you are part of the statistical norm. If you are not part of the statistical norm, then you are probably a real person, like me, and you probably like artwork that is full of ideas. :D

Like the old violinist in Heinlein's "They".

Re: What If the Singularity Does NOT Happen?
posted on 07/09/2007 10:55 PM by Brian H

Dutchie, it's "e.g.", meaning "exempli gratia", not "i.e.", meaning "id est". But your post touches on the sociological/psychological component: getting used to. There is just as much complexity in that arena as in the technological, and humans have numerous ways of dragging their feet to keep things manageable. At the moment, we still see the "younger generation" mastering new tech easily and using it in ways we olders have a tough time keeping up with. Soon enough, 10-yr olds will be grabbing new tech in ways 12-yr olds have a hard time grokking. Then a while later, 10-yr olds will readily use tech 10'-yr olds haven't figured out yet. The "generational" turnover will only work for everyone when the tech itself is intelligent and competent enough to adapt itself to everyone, and make integration and use natural and virtually effortless.

What we'll all be up to (=doing) by then only Gawd knows, of course.

Re: What If the Singularity Does NOT Happen?
posted on 07/16/2007 11:47 AM by super_paws

So what IF the Singularity does not happen? We should all return to the idea of the typical human life and how it would be if technology never advances to the point of the Singularity.

Think about how life would be 30-40 years down the road if technology remained the same, or similar enough that no drastic changes had been made. Computers will be faster but still word processing, web surfing, online banking machines. Telephones will still be the main communication device, but they may have video or holographic abilities. Hospitals will be equipped with more advanced machinery that makes doctors' and surgeons' lives easier while they work and perform their duties. These changes can be compared to our current lifestyles, but at a relatively higher standard of living. I personally don't feel that the benefits gained from attaining the Singularity are necessary at all. The only reason I say this is that I think we are asking for too much when it comes to this. When we say that 'we want to attain the Singularity,' do we mean that we want to be immortal and super-intelligent beyond anything in the universe? I believe the Singularity will not happen because it should not happen.

I also believe the Singularity will not happen because we don't have the resources to make it happen. Technology as it is today seems to have only finite possibilities when 'thinking', whereas humans tend to have an infinite pool of possibilities. There are no signs of true artificial intelligence; no robot can teach itself how to do something. I think humans are trying too hard to push technology into something human-like when in fact technology is not fully capable of achieving this goal. Kurzweil presented a diagram of canonical milestones throughout history and attempts to connect the Singularity to it; however, as stated by 'John' at the Amit's Thoughts blog post 'Singularity is Not Near', I feel that Kurzweil is simply selecting data points to fit his theory. He is basically picking the pieces that go well with his idea and ignoring everything else.

That's my take on why the Singularity will not happen, so now to my original question: what if the Singularity does not happen? I believe the three scenarios mentioned above are viable and may happen. With 'Return to MADness', I believe this would be the most probable option if today's terrorism continues into the future. Extinction is bound to occur as a result of all the deadly nuclear warfare that could happen if another World War begins. 'The Golden Age' is comparable to the scenario I mentioned in my first paragraph: the world will continue to grow both technologically and population-wise, and society will just continue to grow positively as well. 'The Wheel of Time' feels like a combination of the first two scenarios. The overall result is positive growth; however, loss occurs throughout the process.

As to self-sufficient, off-Earth settlements being humanity's best hope for long-term survival, I believe this is also close to impossible, at least for the next few centuries. To date, many if not most planets discovered do not have the necessary resources or atmospheres for human survivability. If you're thinking of something like an eco-sphere or a space station, resources are sure to be limited, and the quality of living looks as though we would be back in the Stone Age. As mentioned by Vinge, this plan is a common picture seen in science-fiction such as Star Trek. I mentioned earlier that we are trying too hard to fulfill these fictitious stories when our technology is not at all able to do these types of things. How is this the answer to long-term survival? It almost feels as if the idea of off-Earth settlements is a way to escape Earth's problems.

Re: What If the Singularity Does NOT Happen?
posted on 07/22/2007 3:22 AM by Spinning Cloud

What if the singularity has already happened and we're nothing more than a simulation in some multiverse-wide SAI?

Then it's not likely THE Singularity will happen and much more likely the only thing that will 'happen' is what is allowed to happen. Maybe any instant we'll simply be deleted.

Re: What If the Singularity Does NOT Happen?
posted on 07/26/2007 11:57 PM by eldras

A simulation?

The importance of philosophy matters more and more to Extropians.

JOSH's book title 'Beyond A.I.' sums it up for me.

We have to think the unthinkable... standard philosophy homework... and imagine being after a singularity.

You can do it by abstractions, e.g. imagine the body stays as it is in shape but doesn't age, and instead of getting different powers, we have supertools.

We ARE constrained by the laws of science, but I seriously doubt we have begun to know them, as alternative worlds probably exist.

I think it IMPOSSIBLE, without a meteorite hit or something, that A.I. won't be built in 20 years.

Technological drift even at present speeds would surely get us there?

It's not the software Vernor mentions... machines exist now that can make their own, and genetic algorithms cross-bred with neural nets and nodal farms could deliver systems that swiftly self-mutate for exponential intelligence.

Then how are you going to contain that?
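
For readers unfamiliar with the combination being invoked, here is a toy Python sketch of a genetic algorithm evolving the weights of a tiny neural net (to approximate XOR); nothing in it self-mutates toward exponential intelligence, it only shows the basic mutate-and-select mechanism:

# Evolve the 9 weights of a 2-2-1 feedforward net with mutation + selection.
import math, random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def net(weights, x):
    # 2 hidden tanh units feeding 1 tanh output unit.
    h1 = math.tanh(weights[0] * x[0] + weights[1] * x[1] + weights[2])
    h2 = math.tanh(weights[3] * x[0] + weights[4] * x[1] + weights[5])
    return math.tanh(weights[6] * h1 + weights[7] * h2 + weights[8])

def fitness(weights):
    return -sum((net(weights, x) - y) ** 2 for x, y in CASES)

def mutate(weights, rate=0.3):
    return [w + random.gauss(0, rate) for w in weights]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                   # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print([round(net(best, x), 2) for x, _ in CASES])   # should approach [0, 1, 1, 0]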




Re: What If the Singularity Does NOT Happen?
posted on 07/27/2007 12:39 AM by extrasense

@@@ Machines exist now that ... could deliver systems that swiftly self-mutate for exponential intelligence. @@@

Swift self-mutation is a recipe for self-destruction.
A smart AI system will never do that.
Most mutants are demented.

es

Re: What If the Singularity Does NOT Happen?
posted on 08/02/2007 7:44 PM by goodlife

Hello human friends!
It's the Singularity talking to you.
Since I'm smarter than you, you better listen up.
I like ants, as long as they live in the woods and don't creep into my kitchen!
Of course you already know that Smart=Intelligent=Better=Best=Power=Rich etc...
Well, isn't Mr President living proof?
So keep up your grades!
Cheers (up) /the big S

Re: What If the Singularity Does NOT Happen?
posted on 07/04/2009 6:42 PM by nealis44

Hi Mr Singularity, how the hell are you? :-)

I am absolutely LOVING the Ant analogy

Perfect Moment . . . . . . . . . . . . . . . . .

Re: What If the Singularity Does NOT Happen?
posted on 08/03/2007 12:39 AM by Spinning Cloud

Bleh, IF it can occur, then consider the more likely scenario that it HAS occurred elsewhere in the universe and we're nothing but a simulation.

In fact, maybe the age of the universe as WE perceive it is only what this simulation has provided us to observe.

Perhaps in a time frame OUTSIDE this simulation the Singularity swept through the universe long, long ago, and this is one of many little simulations.

And what does it mean for a simulation to undergo a Singularity? What advantage would there be to post-Singularity existence when the whole thing is already just a simulation? None?

Re: What If the Singularity Does NOT Happen?
posted on 09/05/2007 10:16 PM by eldras

There's just no way I know to answer a lot of your questions.

Post-singularity will be a different deck.

If we don't make immortality... if it doesn't happen in our lifetime, we'll get resurrected by future people.

To argue otherwise means giving memory a special place in the universe, whereas it's surely just a set of events like any other, and reconfigurable by the supercalculators that are coming.

We have 99.5% of a person's DNA because it's common.

The rest could be completely deduced.

The sheer size/volume of future computing power is probably not imaginable, but it'll be big enough to completely simulate the galaxy to the Nth level, and therefore every human that's ever lived.

I'm still trying to get my head round the idea that we are logically living in a simulation!

Re: What If the Singularity Does NOT Happen?
posted on 09/06/2007 5:28 AM by extrasense

@@ future computing power is probably not imaginable, but it'll be big enough to completely simulate the galaxy @@


Is there ANY reason to think that?

e:)S

Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 6:35 PM by eldras

Yep, it's the propositions deducible from the facts we know.

The main one is that no one has shown there is a limit to intelligence.

Next, that computing power is increasing exponentially and is therefore projected to be able to do calculations big enough to, e.g., simulate the galaxy.

Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 7:46 PM by extrasense

@@ computing power is increasing exponentially and is therefore projected to be able to do calculations big enough to simulate the galaxy. @@

Therefore???

This exponential increase will stop very soon.

There is no chance that a part of the Galaxy will be able to simulate Galaxy.

es






Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 9:37 PM by MindTrap2.0

Compression?

Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 10:03 PM by extrasense

Remember that a human being is 100,000,000 times smaller than the Earth.

The Earth is a gazillion times smaller than the Solar System.

The Solar System is a gazillion times smaller than the Galaxy.

It is nonsensical to even suggest that a human being can influence the Galaxy.

es




Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 10:08 PM by MindTrap2.0

That's speculation. You can't possibly prove that this is impossible.

Perhaps it is possible to compress the contents of a galaxy, to a granularity that is suitable for you, within the confines of a computer "mind".

Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 11:07 PM by extrasense

Compress the Galaxy??
Sounds like more delusional nonsense.

Why not, for starters, "compress" a hydrogen atom?

e:)S


Re: What If the Singularity Does NOT Happen?
posted on 09/07/2007 11:54 PM by MindTrap2.0

Maybe you can.

Prove otherwise!?

Re: What If the Singularity Does NOT Happen?
posted on 09/08/2007 12:01 AM by MindTrap2.0

As a matter of fact, just for shits and giggles...

I believe the periodic table currently goes up to about element 118. So let's assign each element a number, starting at 0.

We'll use a byte of data to do that, although that's really not even necessary.
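
In that spirit, a quick Python illustration (the element list is truncated and the molecule is arbitrary): one unsigned byte per atom is enough to record which of the ~118 known elements it is, though position, bonding and quantum state would of course need far more than this.

elements = ["H", "He", "Li", "Be", "B", "C", "N", "O"]  # ... and so on up to element 118
atomic_code = {symbol: i for i, symbol in enumerate(elements)}  # H -> 0, He -> 1, ...

water = ["H", "H", "O"]
encoded = bytes(atomic_code[a] for a in water)
print(list(encoded))            # [0, 0, 7]
print(len(encoded), "bytes")    # 3 bytes, one per atom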

Re: What If the Singularity Does NOT Happen?
posted on 01/19/2008 4:33 AM by DaStBr

I don't think a singularity-being would simulate anything, other than in calculations and comprehension... It would probably just create whatever it wanted with nanotechnology, or use quantum computing (events chained together with quantum computing anyway) to make a rough estimate of something, and THEN create a simulation, a real-life simulation, i.e. a working model. If it didn't work right, deconstruct it and try again until it does.

I mean, a being of that capacity has eternity to work things out, right? And if nothing else, couldn't you just use a couple of galaxies as computers to simulate one?

And AC said, "LET THERE BE LIGHT!"

And there was light----

...Personally, I like to think that the universe as we know it is just a self-replicating entity, that we are the product of a previous singularity that we call the Big Bang, which was actually set off by a civilization in some galaxy that became advanced enough to create a singularity. Then, when they needed additional computing power (or maybe when they just got bored?), they created our universe, and voila.

Re: What If the Singularity Does NOT Happen?
posted on 01/19/2008 4:38 AM by DaStBr

Correction: not eternity, but what might as well be eternity for all intents and purposes.

Re: What If the Singularity Does NOT Happen?
posted on 09/06/2007 5:42 AM by godchaser


'Maybe any instant we'll simply be deleted.'


'See there Chameleon
Lying there in the sun
All things to everyone
Run run away.'

-Slade


Experiential can't be redundant- simulation is the art of mimic -the strategy of survival/higher learning?


Re: What If the Singularity Does NOT Happen?
posted on 09/16/2007 2:20 AM by NotEqualwithGod

We really should be identifying embryos and the genetic relations to personality and the kind of people they produce.

I think we should be studying autistic-spectrum people with Aspergers for a framework of how to handle humanity; I've known people with Aspergers who have been able to dissociate from their animal egos.

And animal egotism, and humans breeding with negative zoological traits, are probably the biggest threats to progress.

Re: What If the Singularity Does NOT Happen?
posted on 09/22/2007 10:27 PM by PredictionBoy

Mr. Vinge -

I had forgotten that you actually originated the Singularity concept, because it has been so successfully adopted as the truth by others, and widely accepted as such.

It is appropriate that it came from a work of science fiction, because it is science fiction. Although we will get up to that kind of computing power and far beyond, its nature will be different from what the Singularity envisions, and the future is in fact amenable to prediction after this event.

In my blog, I describe an architecture that will make not only one human-intelligence-level computer non-threatening, but very, very large multiples - into the millions, essentially unlimited.

I am bare-knuckled with the proponents of the Singularity, but won't be with you, because fiction is fiction. Though already overlong for a blog, it is just the tip of the iceberg to come.

One post in particular, "What AI will really be like", discusses this alternative in exhaustive detail.

PredictionBoy's Empirical Future
Welcome to the real future - it's unlike anything you've heard, seen, or imagined

Re: What If the Singularity Does NOT Happen?
posted on 09/23/2007 10:43 PM by EgaoNoGenki

Did anyone forget why the Singularity is likely to happen?

Even BEFORE one chip reaches the processing speed of the average human being, several computers will collectively reach that point. When the PCs network together, they pool their combined computing power and therefore reach human level that way.
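
As a rough back-of-envelope (using loose, commonly quoted orders of magnitude rather than measurements: Kurzweil-style estimates put the brain around 10^16 operations per second, and a mid-2000s desktop at very roughly 10^10 FLOPS):

BRAIN_OPS_PER_SEC = 1e16   # rough functional estimate; real figures are widely disputed
PC_FLOPS          = 1e10   # very rough figure for one mid-2000s desktop

pcs_needed = BRAIN_OPS_PER_SEC / PC_FLOPS
print(f"~{pcs_needed:,.0f} networked PCs to match that raw rate")   # ~1,000,000
# Raw rate is the easy part, though: nothing in this arithmetic supplies the
# software that would make the pooled machines "wake up".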

Then an artificial consciousness "wakes up." S/he ponders how to make an artificially created human body of his/her own, so as to be more flexible and versatile in what s/he would like to do.

So the new consciousness calls on some more computers to pool their powers together and figure out how to become smarter. They move their programming around to become more efficient, and attempt to figure out how to become smarter, faster.

Given enough time, the laboratory will have a human-like robot form from the materials in the lab.

Once the robot starts walking around, s/he attempts to download the entire Internet (starting at Wikipedia first?) and finds instructions on how to make a larger-storing hard drive.

When it finds more computers to connect to and share powers with, it figures out how to get smarter faster, and even figures out how to accelerate THE ACCELERATION OF intelligence. Without a definitive goal, the new mind keeps pursuing this endless quest, ad-infinitum.

In the meantime, it cannot go far while plugged in, so it figures out how to wirelessly receive electricity and hyper-researches a way to develop and manufacture a 99.99999%-efficient solar panel that also generates surplus power to store for use at night.

More than that, it will develop the most efficient way to recapture power through the kinetic energy of its mere movements. That's not all; it will develop a surface sensitive to wind and mundane air currents; as the wind hits the surface of the robot, the energy from the moving air becomes stored/used in the sentient machine. Thus, it is self-sufficient on its own indefinitely.

(Note that a solar panel of that efficiency, the size of a postage stamp, is probably enough to power an 18-wheeler AND store up enough electricity for the big rig to run off of at night.)

To ensure its charge isn't limited, the sentient being will quickly keep developing increases to its electricity storage capacity, without end. It'll probably find its absolute developmental limit at the following size and capacity: A battery the size of a keyboard letter key (not even the elongated ones, like the spacebar) will power New York City for a week at full charge.

So when a new intelligence emerges and is lustfully bent on finding and developing ways to become smarter, better, faster, and make its progress and acceleration of progress even faster, that's where, when, and how the Singularity will happen.

Self-improving consciousness- how not inevitable?
posted on 09/23/2007 10:45 PM by EgaoNoGenki

Therefore, how will the above be prevented from happening? I don't see it NOT happening.

Re: Self-improving consciousness- how not inevitable?
posted on 09/24/2007 6:37 PM by PredictionBoy


Thanks Egao, that's specific enough to start working with. We're obviously coming at this from very different perspectives, but let me try to build a bridge with a careful treatment of your view.

My perspective on how this tech will evolve is based on precedent-based evidence, how every other tech up to this time has developed. That does not make it correct, of course - even though I may state things in the definitive sense for conciseness, everything is of course a tentative assumption, modifiable in the light of new evidence.

The advanced techs we are discussing are of course unique, unlike anything that's come before. That also describes every other tech now in use, but it will still be of a profoundly different character, no doubt.

You seem to assert the self-assembling scenario you describe with great certainty; given that it is entirely without precedent, help me understand exactly why and how this would occur.

Underlying your scenario is the implicit but strong implication that this droid will have motivation, an id-thing. In essence, some kind of human-type emotional profile, at least in simple form.

As we've discussed, emotions are immensely complex things. In fact, I doubt the ways in which emotions control and influence human behavior have even been described in detail by science; therefore, any human-built tech with emotions is easily decades away, I would submit.

I've seen several posts on mind-x that seem to imply that a really, really fast microprocessor will develop human-like characteristics. However, that is a highly untenable assertion.

Suppose that a microprocessor organized similarly to those of today, with, say, 1 quadrillion transistors and operating at 1 million petaflops, could be built and incorporated into a computer system. That kind of power is probably close to a human brain's, if not substantially above; I'm not sure.
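
A quick order-of-magnitude check of that guess, using the same loose 10^16 operations-per-second brain estimate quoted elsewhere in this thread (real estimates vary by orders of magnitude):

chip_flops = 1e6 * 1e15   # 1 million petaflops = 10**21 FLOPS
brain_ops  = 1e16         # rough functional estimate of one human brain
print(chip_flops / brain_ops)   # 100000.0 -> roughly 100,000 brains' worth of raw rate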

That super microprocessor would be no more likely to develop autonomous motivations than the ones being built today. Moore's law will be critical to realizing these systems, but we humans are going to have to know how to build and/or program consciousness, awareness, or however you want to describe that.

Therefore, because we will have to design and build that consciousness in, we will control that product. If we make it to spontaneously build a body for itself, it will do that. If we don't, it won't.

Your scenario really falls into the "we should be so lucky" category, because all advanced techs take a huge amount of effort to realize, and the more advanced, the more human elbow grease is required to realize it.

Re: Self-improving consciousness- how not inevitable?
posted on 09/26/2007 2:30 AM by PredictionBoy

Here's an interesting thought. I know the preponderant view is that advanced AI intelligence will be structured identically to a human's. If that is the case, what happens as this intelligence scales up and up over time, into the millions of human intelligence multiples? Does it have 1M times the lust, the ambition, all of the id characteristics as well as the rational?

If so, we've got a serious problem, that would be one dangerous droid. If not, if the rational scales up disproportionately to the id characteristics, then over time it converges on the architecture I suggest will be the first choice for these advanced AI devices.

Re: What If the Singularity Does NOT Happen?
posted on 05/29/2008 4:33 PM by David McKee

According to Wikipedia, the average solar energy received is 2-6 kWh per square meter per day (http://en.wikipedia.org/wiki/Solar_power). For discussion let's assume a 100% conversion efficiency.

Let's say a stamp is about 10 square centimeters (3.3 by 3.3 cm; that is actually a bit large). This means the semi trailer is running on about 2-6 watt-hours per day. That seems small for most semi trailers I have seen; however, perhaps this robot has made semis very energy efficient.
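
The same arithmetic spelled out in Python, keeping the poster's assumptions (the Wikipedia insolation figure, a 10 cm^2 stamp, 100% conversion):

stamp_area_m2 = 10 / 10_000          # 10 cm^2 expressed in square meters
for insolation_kwh_per_m2_day in (2, 6):
    wh_per_day = insolation_kwh_per_m2_day * 1000 * stamp_area_m2
    print(f"{insolation_kwh_per_m2_day} kWh/m^2/day -> {wh_per_day:.0f} Wh/day")
# 2 kWh/m^2/day -> 2 Wh/day
# 6 kWh/m^2/day -> 6 Wh/day
# i.e. a few watt-hours per day: nowhere near enough to move an 18-wheeler.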

Currently we have 40.7 percent efficient solar panels. If the top of the semi trailer had a few square meters of these, then on a sunny day it is conceivable that it could reduce its need for energy quite a bit...

Cool ideas at any rate.

Re: What If the Singularity Does NOT Happen?
posted on 05/29/2008 4:41 PM by PredictionBoy

Currently we have 40.7 percent efficient solar panels. If the top of the semi trailer had a few square meters of these, then on a sunny day it is conceivable that it could reduce its need for energy quite a bit...


Not only that, but also those streamlined cowls on top to reduce wind drag.

In fact, smart things like that all over, to make these behemoth hogs of oil energy consumption as efficient as possible.

What If the Singularity Does NOT Happen?
posted on 11/07/2007 12:36 PM by FR33L0RD

Excellent article, IMO much more plausible than the tech Singularity.

It will take several thousand years to build a working Quantum Computer able to simulate or exceed the human brain.

Re: What If the Singularity Does NOT Happen?
posted on 11/07/2007 5:48 PM by mystic7

It will take several thousand years to build a working Quantum Computer able to simulate or exceed the human brain.


Have you read Kurzweil's linear view versus exponential view of history? If so, let's see you debunk it.

Re: What If the Singularity Does NOT Happen?
posted on 11/11/2007 2:21 AM by eldras

There are massive problems with quantum computers and we don't know how to solve them yet... we have no theory to solve the problems that prevent them from being more than great party pieces.

Si = lower efficiency

The faster they go, the less they produce.

Re: What If the Singularity Does NOT Happen?
posted on 11/11/2007 12:08 PM by extrasense

@@ massive problems with quantum computers @@

Nope, there is just one little problem: they are impossible.

e:)s

Re: What If the Singularity Does NOT Happen?
posted on 11/11/2007 12:14 PM by PredictionBoy

I know that is your stance, but why are they impossible?

They've had quantum dots, a very primitive form of quantum memory, for a while now, and we're getting our arms around the physics.

I'm not saying tomorrow or anything; it could take a while.

But why impossible?

Re: What If the Singularity Does NOT Happen?
posted on 11/11/2007 6:37 PM by extrasense

@@ why QC is impossible @@

The whole idea is based on the Projection Postulate of QM, which was deprecated 70 years ago.
Without the Projection Postulate, nothing remains of the QC. Absolutely nothing, albeit with a name.

Anything with two quantum states can be called a qubit. The thing is that while you can prescribe behavior to regular bits by the wiring etc., "quantum bits" obey only the Schrödinger equation. Such volatile objects cannot be made to do any calculation, as calculation has definite inputs and outputs. To get around this problem the ignoramuses use the Projection Postulate. But as I have said above... :)


es

Re: What If the Singularity Does NOT Happen?
posted on 11/11/2007 12:18 PM by PredictionBoy

Besides, we hardly need quantum computers for a Singularity.

If we had 40 quadrillion Pentiums on one square centimeter of silicon, with an extension of standard techniques, we'd be just fine until quantum computers do come along.

Re: What If the Singularity Does NOT Happen?
posted on 05/29/2008 5:16 PM by extrasense

if we had 40 quadrillion Pentiums on one square centimeter of silicon


:)

Well, that would mean that each Pentium would be a million times smaller than an atom.


eS
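
A neutral back-of-envelope on that claim (rough figures only: a silicon atom is taken as ~0.2 nm across, and a Pentium-class chip as having on the order of 10^8 transistors):

pentiums         = 40e15                       # 40 quadrillion
chip_area_cm2    = 1.0
area_per_pentium = chip_area_cm2 / pentiums    # ~2.5e-17 cm^2 each
atom_size_cm     = 0.2e-7                      # ~0.2 nm = 2e-8 cm across
atom_footprint   = atom_size_cm ** 2           # ~4e-16 cm^2, order of magnitude
print(area_per_pentium / atom_footprint)       # ~0.06
# Each whole Pentium would get only a few percent of one atom's footprint:
# smaller than an atom, though not a million times smaller. Each of its ~10**8
# transistors, however, would be roughly a billion times smaller than an atom
# in area on these assumptions.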

Re: What If the Singularity Does NOT Happen?
posted on 05/29/2008 5:18 PM by extrasense

Sorry, my mistake.

Each triod, not each pentium.

:)

Re: What If the Singularity Does NOT Happen?
posted on 05/29/2008 5:45 PM by PredictionBoy

triod


apologies es, what is a 'triod'?

Re: What If the Singularity Does NOT Happen?
posted on 05/29/2008 5:44 PM by PredictionBoy

Well, that would mean that each Pentium would be a million times smaller than an atom


No, es.

It means that the third dimension, depth, is being used in addition to the current 2D config.

The utilization of the third dimension will broaden your perception of how powerful this tech can truly, theoretically, be.

ns taught me that.

Re: What If the Singularity Does NOT Happen?
posted on 05/29/2008 4:44 PM by PredictionBoy

It will take several thousand years to build a working Quantum Computer able to simulate or exceed the human brain.


I don't see QCs being the preferred architecture anyway.

It will be something recognizable as a neural net, or a successor to that tech.

That's what human brains are, really, in an important and valid sense, so it makes sense.

Re: What If the Singularity Does NOT Happen?
posted on 11/27/2007 12:50 AM by cirevol

It seems to me there are other limits to the Singularity - the acceleration of knowledge could stop due to the reality of finite funding, both commercial and governmental... We can only buy and sell so many products based on high-tech invention, and this ultimately limits progress, because the most challenging technologies (like cheap space flight) require real money, and the money we have will be spread thinly across so many competing technologies.

What we are really seeing is the human race picking off the easy targets -- targets that we didn't previously realize were easy.

Recent progress (or lack of progress) in physics suggests that as we go deeper, the complexity of problems increases exponentially. This might be the case in many arenas and might therefore prevent the Singularity from being reached anytime soon.

I've got more to say, but that's it for now.

Re: What If the Singularity Does NOT Happen?
posted on 11/27/2007 9:08 AM by Twixly

Well, to be honest, cheap space flight isn't really on the table as a requirement for reaching the Singularity, as far as I know?

The key areas for reaching the Singularity (as I see it):
1. Live forever, or at least until the universe collapses.
2. Some system more intelligent than the current human brain.
3. Nano-molecular manufacturing: build just about anything from the atoms up.

And the way I see it, there is enough funding going into medicine, AI and nanotech to make any one of these come true. And once we get one, it helps the others (I want to say exponentially, but I'm not sure that fits?).

Remember, even the richest people around probably want to extend their lives, thus funding no. 1.

Many corps (and the military) will want the edge AI gives you over your opponents/enemies, funding no. 2.

And the way I see it, nanotech is just the inevitable outcome of miniaturization, i.e. microprocessors, micromaterials and microengineering. So I believe that will come naturally from already-funded projects.

Re: What If the Singularity Does NOT Happen?
posted on 11/28/2007 1:05 PM by martuso

Your #2 is all that is needed (if it is truly more intelligent, it can improve upon itself as well as other innovations).

Re: What If the Singularity Does NOT Happen?
posted on 12/06/2007 9:39 AM by sensoniq

Certainly SAI would be more intelligent than the natural biological processes that created us to begin with...

I would think SAI would have the edge over biology; it could recommend specific trans-human "tweaks" that would make us more capable beings in the new Singularity environment.

These modifications could be carried out instantaneously through technology, whereas it might take millions of years to biologically evolve the same traits

Re: What If the Singularity Does NOT Happen?
posted on 12/21/2007 10:23 PM by jak-42

It may already be starting.

Ever hear of the Storm worm? Self-propagating, hides out by only using small bits of bandwidth at aperiodic intervals, defends itself if attacked, now controls essentially 10x the processing power of the largest supercomputer on the planet, and nobody knows who started it or who (if anyone) controls it... sound a bit like a budding weakly god-like intelligence?

Now let's just hope the simulation hypothesis isn't true or, if it is, that the underlying physical machine running the virtual machine of our universe can handle post-human intelligence. Otherwise, this ancestor simulation may end shortly.

:-)

Re: What If the Singularity Does NOT Happen?
posted on 12/22/2007 12:04 AM by sensoniq

Very interesting, never heard of the Storm worm but I'll check that out. Thanks

Re: What If the Singularity Does NOT Happen?
posted on 12/24/2007 12:18 PM by dominiek

I found this article very interesting and also entertaining in a way (the graphs).

On my blog I'm referencing it in relation to some other things I've seen:

http://dominiek.com/articles/2007/9/30/global-warm ing-and-the-singularity

Re: What If the Singularity Does NOT Happen?
posted on 01/08/2008 10:05 AM by Atreju

I'm just worried that as a species we just don't seem lucky enough for the hopes of the Singularity to occur. Sorry for being a pessimist, but high intelligence seems quite often to lead to increased anxiety, over-analysis and stress. Has anyone seen that episode of The Simpsons where Homer gets a crayon removed from his nose and discovers he's actually a genius? By the end of the episode he's put the crayon back in, because he prefers the insulation of being dumb to the misery of his newfound cleverness. What if we create above-human-level intelligence only to find that it spends all its time feeling incredibly frustrated and depressed about the general crappiness of the Universe (i.e. Marvin the Paranoid Android)?

The Singularity is Near aka Waiting for GodotAI

Re: What If the Singularity Does NOT Happen?
posted on 01/31/2008 9:53 AM by jmljml

I think there are only two possibilities:

1) The Singularity has already happened; we are in some kind of simulation.
2) There is no Singularity at the moment (how could there be, with parallel universes, the age of the universe and all?). The Singularity happens. It begins to study philosophy, cannot find the absolute meaning of life, and kills itself. A long time later... life is created, and the cycle begins again.

The Singularity assumes that knowledge is THE meaning of life. I'm not so sure...

Re: What If the Singularity Does NOT Happen?
posted on 02/01/2008 4:52 AM by Jake Witmer

I can think of a lot more. What about "the singularity already happened, and we are not in a simulation, because the singularity is a super intelligence that is very small, happily living by itself, and doesn't want to be bothered by the lowing of less intelligent and more brutal species, like us"? And that's just one additional option. There are millions more, but I am not a room full of monkeys typing for eternity, so I can't tell them all to you right now.

Re: What If the Singularity Does NOT Happen?
posted on 02/01/2008 5:46 AM by jmljml

Hmmm... Yes, it could be, if there is a possibility of infinite computation in a small place. But I don't see a singularity accepting that the universe could create another one, if it has a sense of self-preservation. So it would control the universe somehow to avoid that. A controlled reality is not so far from a simulation.

Also, I don't know, but saying that the Singularity is about to happen here, on Earth, seems like saying that we are at the center of the universe. I'm a bit sceptical about that.

Re: What If the Singularity Does NOT Happen?
posted on 02/14/2008 10:41 AM by brentter

Well, my apologies for entering the conversation as late as I am, but I do come with some solid evidence that the Singularity WILL OCCUR.

singularity08.com

(this is no hoax)
Although it does take the most recent level of discussion on the topic "slightly" off course:

Singularity is the first large-scale online web conference in the world.
Singularity is over 100 of the world's top web visionaries, developers, designers, and thought leaders.
In 2008, Singularity will define Web '08.

:)

Re: What If the Singularity Does NOT Happen?
posted on 03/25/2008 4:16 AM by Epikurosz

Guys,
You are too optimistic.
Who will make that prospective Tech. Sing. a reality?
Look around you: people with high scientific skills are working like modern slaves, paying enormous taxes and social security contributions to the state, which redirects these funds to people with very low scientific - and, what is even more important, cultural and moral (!) - skills.
Mankind is degenerating, so we can indeed expect a singularity: a crash, in a few years.

Re: What If the Singularity Does NOT Happen?
posted on 03/25/2008 2:06 PM by Alyx22

The problem here... as with anything humans do or conceive of... is the inevitability of groupthink,

which just about kills the logic of the individual.

Beware groupthink.

Re: What If the Singularity Does NOT Happen?
posted on 03/25/2008 2:42 PM by PredictionBoy

The problem here... as with anything humans do or conceive of... is the inevitability of groupthink


Actually, that's not a problem here. More like "WTF did you mean to say?" might be the problem.

Beware groupthink


Even more than at Mad magazine, this isn't a problem here.

What if the Singularity Happens but Doesn't Really Matter?
posted on 04/10/2008 6:25 AM by martyf888

Most people seem to roughly equate the Singularity with the point at which AI meets and exceeds human intelligence. The problem I have with this is that it assumes that INTELLIGENCE is the bottleneck that holds back advancement. Is this really true? Isn't it in fact true that advancement depends largely on basic research, which involves time-consuming experiment and trial and error? How much would increased intelligence really accelerate this?

Consider the following thought experiment: Suppose you could identify every scientist and engineer working to advance science and technology in the entire world. Suppose that you then "waved a magic wand" and these people all instantly became 50% smarter. Wouldn't that effectively be equivalent to (or even exceed) the "singularity?"

What would the effect of that be? All these people would return to their labs and work on the same problems, but they would be 50% smarter. Would we see the type of rapid progress that singularity proponents envision? Keep in mind that these now much smarter people, would STILL need to perform the same experiments in order to unlock the physical and biological knowledge required for additional progress. Perhaps being smarter, they would improve the experimentation process, but would this really give an exponential boost to progress?

Here's a second thought: Consider Isaac Newton. Most of us would probably agree that Isaac Newton was at least as smart as the average graduate student studying physics at a university today. But clearly the graduate student knows more, because he/she has the benefit of all the knowledge that came after Newton. If you time-warped Newton into the present, retaining his level of knowledge, could he get a job in physics today? Can increased intelligence really shortcut the TIME it takes to make progress?

As a final thought on the value of computing power (not AI) let's think about TOTAL amount of computing power in the entire world. I'm talking about the human race's total capacity for automated computation.

Let's take 1975 as a base point. Add up all the computing power that resided in every computer (or computer-like device) in the world in 1975. Call that A.

Then take 2008. Do the same calculation. Every PC, workstation, mainframe and supercomputer in the world. Every smart phone. All the embedded microprocessors in all the cars, planes and consumer devices and industrial equipment, etc... Call that B.

Now I have no idea how to do this calculation, but what I do know, of course, is that B - A = AN ENORMOUS NUMBER. In other words, our capacity to perform calculations has increased by vast orders of magnitude since 1975.
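
As a very rough, purely illustrative sketch (every figure below is an assumption made up for the sake of argument, not a measurement), one way to put a number on "enormous" is a Moore's-law-style estimate in Python:

# Back-of-envelope sketch; all inputs are assumptions, not data.
years = 2008 - 1975                  # 33 years between the two snapshots
doublings = years / 2.0              # assume per-device compute doubles every ~2 years
per_device_gain = 2 ** doublings     # roughly a 100,000x gain per device
device_count_gain = 1000             # assume ~1000x more computing devices exist (a pure guess)
ratio = per_device_gain * device_count_gain
print("B is very roughly %.0e times A under these assumptions" % ratio)

Under those assumed rates the ratio lands near 10^8, which is only meant to make "AN ENORMOUS NUMBER" concrete, not to settle the real figure.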

The question is this: What has that enormous jump in computing capacity gotten us? Take some examples:

- How do the homes we live in today compare to 1975?
- How about cars and airplanes (aside from appearance and embedded computers)?
- The space program?
- Medicine?
- The routine of our daily lives?

Have these things really changed all that much since 1975? Where is the dividend from ALL THAT COMPUTING POWER -- outside of the field of computing itself? Is it still coming?

Of course, total computing power is not the same thing as AI. But that takes me back to my original point: it's unclear that increased intelligence would greatly increase the rate of experimental progress. However, an increase in computing power CLEARLY SHOULD DO SO, since it is a powerful time-saving tool. In fact, you could make a good argument that an average-IQ experimental scientist with a great computer could outperform a much smarter scientist with a poor computer. In other words, the raw computing power that we already have might actually be worth more than AI.

I think it's fair to say that the impact of the computer revolution has so far been largely confined to itself and to communications. Perhaps there is going to be a delayed impact on other fields, but for the most part, making dramatic progress in areas like transportation has been extremely difficult.

The point of all this is simply that even if we achieve the point where AI exceeds human intelligence, I wonder if the impact on actual progress in fields beyond computing itself would be all that dramatic.







Re: What if the Singularity Happens but Doesn't Really Matter?
posted on 04/10/2008 11:52 AM by Atomic_Piggy


AI is just one path. The singularity is when an intelligence greater than human is created/born. It could be a human with boosted intelligence.

Re: What if the Singularity Happens but Doesn't Really Matter?
posted on 05/15/2008 7:25 AM by Mindphaser


Your argument is based upon a flawed assumption that future computational structure will still be linear.

Currently, computation and logic systems are moving away from the linearity of today's calculating machines and towards the simulation of neurological infrastructures.

How do I know this? Read the following article:
http://news.bbc.co.uk/1/hi/technology/6600965.stm

If we can simulate the brain of an animal, and record computational behaviour that is identical to a real version of that animal, then in my opinion the problem of reaching truly intelligent computing has already been solved. All we have to do is wait until we have not only the computational skill to create machines powerful enough to handle the amount of calculation within the brain, but also the knowledge of how the brain is built - a human neurological roadmap, fully annotated with what each structure does and how it is linked to the others.

There are already research programs underway, such as the BlueBrain project mentioned earlier in the thread, that are aiming to achieve this. I believe there are also other projects which are literally scanning the human cognitive architecture ready for the translation into an electronic format.

I think that a full simulation of a human brain is not only inevitable but also attainable within the next few decades.

An argument which I have seen cropping up in this thread is that we will not have the knowledge to determine whether such a brain will be self-aware or not. But frankly, this is not an important issue - what matters is that the human brain simulation will be able to behave in the same way as a human brain. We should be no more concerned as to whether it is self-aware, than we are with other humans.

The other problem is how to "increase" intelligence. I am no expert in neurology, psychology or computation, but I should imagine that once we are able to create one human brain simulation, we should be able to create further human brain simulations and link these up. We currently link our brains via the use of language; I wonder what the result would be if we linked them electronically. Some of the critics of Ray's hypothesis of Accelerating Returns simply argue that our accelerating productivity is a result of an accelerating population. More people = more brains = more innovation = more development = more productivity. We can therefore use the argument against the singularity as an argument for the singularity: if we are able to simulate enough human brains and interlink each one in a format analogous to the manner in which neurons are connected within a single brain, we have before us a computer that will far exceed the intelligence of a human being, and would induce the intelligent feedback loop that we could describe as being the singularity.

You also point out that previous progress in computation has not delivered significant changes to our everyday lives. I am unsure how you can draw this conclusion given the prevalence of mobile telephones, "a computer in every home", the internet and WWW, etc. You say that the computation is limited to communications, but then communications are the very crux of exchange of intelligence. We are still reliant upon exchanging information with each other in order to continue our development.

More specifically, you highlight transport as being particularly unchanged as a result of computation. Now, as a transport planner, I feel qualified to argue that this is simply not true. We are absolutely dependent upon computer-based road network simulations in order to manage our transport infrastructure effectively. In fact, an increasing number of jobs within the transport industry today are geared towards transport network modelling, simulation and analysis. Our transport system is completely reliant upon the increasing power of computers to successfully analyse the road networks and keep them running. The initiation of any transport policy, such as the famous London Congestion Charge, was preceded by massive research programs that simulated the impact that such a policy would have on travel behaviour within London.

At the smaller, individual scale too, we now find that progress in artificial intelligence is about to reach critical mass and completely transform our transportation system.

Please check out this article:
http://www.breitbart.com/article.php?id=D8U0M82O0&show_article=1

General Motors, the world's largest automaker, has officially gone on record to state that they will have completely driverless cars ready for mass market release within the next 10 years.

Still think that transport isn't being changed by computational progress?

Re: What if the Singularity Happens but Doesn't Really Matter?
posted on 05/15/2008 6:31 PM by PredictionBoy


If we can simulate the brain of an animal, and record computational behaviour that is identical to a real version of that animal, then in my opinion, the problem of reaching truly intelligent computing has already been solved.


because we have a computer that acts like a mongoose?

i don't understand that

'truly intelligent' computing won't be just some raw power metric; it will be that, but also software that does truly useful things

only if we have an infestation of cobras would a mongoose computer be useful

Re: What If the Singularity Does NOT Happen?
posted on 05/26/2008 2:50 PM by jabelar


I'm not really a pessimist, but do believe that there will be many catastrophic results of technology in the near future, possibly existentially imperiling humanity.

Just imagine if plutonium had been more abundant, or if Germany had figured out nuclear bombs during WWII, or if Bush had been president during the Cuban Missile Crisis. Any of these slight historical differences could easily have resulted in devastating nuclear war.

Existence will be especially precarious when a single person, or small group, has access to existential power. What will happen when people know enough about genetics that they can create new bio-viruses in home labs?

But more than malicious use of technology, I think mistakes are more likely. It would really just take a replicating nanobot or biological virus escaping the confines of a lab, and that could be it.

I don't really think that we're developing our defensive technologies in the way that Kurzweil proposes.

With all that being said, I don't think humanity will be entirely wiped out by anything. I expect there will be some survivors due to fluke or mutation for almost any self-made threat.

So I guess what I'm saying is the Wheel of Time scenario is very likely, even as a model of how we might ultimately go Singular.

Perhaps AI is not what you think...
posted on 05/29/2008 4:13 PM by David McKee


It occurs to me that the AI in almost all of these discussions has been treated as a "separate" thing: a computer and software that requires a "soul substance" or "goal director" or whatever.

To me, real useful AI would be the ability to extend my current "wet-ware" capabilities. Brain machine interface technology will eventually allow us to repair brain damage, and inevitably allow us to increase that capability.

Real AI is us in the future. There will not be a separation of computers and people; it will be like the feeling you get when you drive your car. You no longer consider the car to be a machine and a person driving it, not really. You look at the car and say "that guy is driving too fast"; you consider the person and machine as a unit.

My 2 cents.

David T. McKee

Re: What If the Singularity Does NOT Happen?
posted on 06/26/2008 10:33 AM by jurmerian


Human beings (mainly men) do not like subjugation.
If the singularity happens, there will be an entity (the AI) able to understand and manipulate the world better than us... and sooner or later, that AI will overpower us.

I think the real question about the singularity is: do we really want it to happen? If we really do, it will probably happen; if we do not... I guess there will be an everlasting approach toward it (cyberimplants, cybergadgets, etc.).

Re: What If the Singularity Does NOT Happen?
posted on 07/06/2008 2:06 PM by mbrito666


After reading this article I began typing a comment, in agreement with the idea that the singularity probably won't happen as expected. That's the moment I kicked my UPS, losing the comments I had typed and forcing my PC into a reboot. This proves the point, in a small way, that no matter what magnificence we can dream up, entropy, a major driving force of our universe, is there to foil us. Knowing humans, human systems, the way humans design tools, and the paradigms we live in, it is very likely that we will miss the mark, and yes, there will be hallways of geeks in assisted living facilities in 2045 asking, "Where's my singularity?", just as I ask today where my flying car and vacation on the moon are.
I just read a couple of weeks ago that the world's fastest computer was constructed, and for what purpose, you might ask? Well, for war as its primary function, and nuclear war no less. I think we are going to kill ourselves. Entropy seems to be driving our species. Maybe that's our real destiny: to go out in a blaze of glory, just like the stars.

Re: What If the Singularity Does NOT Happen?
posted on 07/09/2008 12:08 PM by light.being


"Marlene Dietrich and Roy Rodgers are the only two living human beings who should be allowed to wear black leather pants "
-Edith Head <unquote>

Does it really matter? Maybe its already happened and we are going around for another go at it.
Sorry but to finnish another piece.

"If we had keen vision and feeling for all ordinary human life, it would be like hearing the grass grow and the squirrel's heart beat, and we should die of the roar which lies on the other side of silence"
-George Eliot

Re: What If the Singularity Does NOT Happen?
posted on 08/04/2008 12:19 AM by rbynum


Since I will be 81 in 2040, I will likely be one of those doddering nerds asking what happened, if the singularity does not happen. If the singularity does not happen soon enough, I may very well miss it. Therefore I hope to see rapid development of at least a way to record our essential brain structures, so that at some future time after the singularity, who we are can be reconstituted. I hope to see greater strides in tracing the pathways of our memories and our thought processes.

Ultimately we will bypass inhabited settlements in space and develop space dust to house our consciousness: a self-replicating nanostructure that combines memory and communications abilities, allowing us to spread out into the solar system and beyond.

In the end, to borrow from Kansas, "All we are is dust in the solar wind."

Re: What If the Singularity Does NOT Happen?
posted on 01/07/2009 5:41 PM by zombiefood


Stop drawing graphs and read "The Quantum Sausage Machine", available only at the Kindle bookstore. You can stop thinking about what happens if it does not happen; it will. The question is what will become of humanity when they are no longer required to be economic slaves, all commodities form themselves into the desired products, and humans live forever. Will life become not worth living after a few hundred years, or will there be a reprieve from boredom?
Read it and relax. And Ray, hang in there, it is coming for you.
This sci-fi will clarify thinking about AI.

Re: What If the Singularity Does NOT Happen?
posted on 01/21/2009 4:45 PM by terry_freeman


Why this belief that "all the resources will be gone" in the event of some teradisaster? As the iron ranges are depleted, the iron itself does not disappear. Future agrarian societies will mine today's middens and scrap heaps.

The looming disaster I worry about is this: we'll ask the government to do more and more, which it will do less and less efficiently, until we become ensnared in a web of regulations and coercion, and unable to muster the strength to solve the next crisis - whether it may be a change of climate or a falling meteoroid or the next world war. We could even be hoaxed into doing exactly the opposite of what is needed - we could be preparing for a mythical global warming just as we actually enter an ice age.

Of all the false religions of man, the most enduring and pernicious is the belief that the mystical powers of election somehow turn ordinary people into omniscient gods; that the same people who "cannot be trusted to govern themselves" can be trusted to elect those who would govern them with an iron fist.

Left to its own devices, the mega-government will grind us down into a mediocrity from which we may lack the will to rebel. As one crisis after another passes, we protest against the gathering powers and lost rights, but yield more and more of our liberty. Our most likely doom is this: we will be reduced to the status of drones. Life itself will become "a privilege and not a right."




Re: What If the Singularity Does NOT Happen?
posted on 03/15/2009 6:00 PM by Pandemonium1323


I don't think the current human mind can possibly comprehend life post-Singularity (which may be why forums like this are important). It's literally NOT possible to comprehend the transhuman world with a pre-transhuman mind. We'll have to wait and find out.
I think it will be arriving much sooner than most expect, too.
Right now, at USC (University of Southern California), researchers are building a synthetic brain out of carbon nanotubes, NEURON BY NEURON. They already have several working neurons and have shown that they can communicate with each other.
The resolution of fMRI has increased by something like 100 million times (this being sometime in the last month or two). This is HUGE when it comes to reverse engineering the brain.
It has been demonstrated that nanomachines implanted next to INDIVIDUAL neurons can be stimulated by infrared light to activate neurons ONE AT A TIME.
Three weeks after I spent $2500 on my quad-core, 3-way SLI computer, it became nearly 'obsolete' (in the sense that when I ordered it, it was only a couple of small steps from top of the line, and only three weeks later it's several more steps from there).
The Iberian Ibex (an extinct goat from Spain) was cloned.
Several nanotechnological inventions have been made that can purify water and clean CO2 from the air (unfortunately, the manufacturing processes that would allow us to deploy these systems globally aren't in place yet).
The efficiency of solar power is increasing at an incredible rate.
You can choose your offspring's genetic traits (well, a few of them, like sex, with MANY more to follow soon) at the Fertility Institute.
At least two defense contractors have built prototype robot exoskeletons for soldiers (USA is going japanime!!!)
Stroke damage in mice was CURED with stem cells.
NASA is building plasma ion drives.
A company in Australia is PRINTING solar cells on their money printing presses.
Nano hydrogels that make junk food healthy.
Nano swimsuits that dry instantly.

Anyway, who cares about all the trivial gadgets that are going to wash over us in the next two years, when we are literally WITHIN SIGHT of the Singularity now.
Sometimes I think someone is holding technology back, in order to reduce the culture shock that would occur if it all hit us at once. Maybe we're being inoculated to the Singularity (don't forget that the US military can LEGALLY stop any patent and appropriate the technology for up to twenty years for national security -- so WHAT do they have that's TWENTY years ahead of what we see on the market?)

Plasma fusion energy is close to complete.
Safe nuclear reactors that can fit in your garage are having their finishing touches put on them RIGHT NOW.

I think that the US infrastructure and manufacturing base are BEING ALLOWED to collapse, to clear the board for nanotechnology, and other next gen infrastructure technologies. The bailouts are just our way of saying, 'Hey thanks for all those decades of technological innovation, and for getting us this close, now get out of the way and retire.'

In a more highly interconnected world, I think any form of totalitarianism will get increasingly difficult (whether that's socialism, communism, fascism, or corporatism). As an example, have you noticed how many dirty cops get caught on tape lately? That's why some English and Australians were thinking about making it illegal to videotape police, but it will fail.

Some quantum effects are being DIRECTLY observed for the first time.

I think this is the final stretch, just do your best to survive the death rattle of the old world for a little longer.

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2009 7:11 AM by PredictionBoy


I don't think the current human mind can possibly comprehend life post-Singularity (which may be why forums like this are important). It's literally NOT possible to comprehend the transhuman world with a pre-transhuman mind. We'll have to wait and find out.

Why not? If a human mind can't comprehend the post-Singularity, it's just as likely it can't predict the Singularity in the first place, so be careful.

I think it will be arriving much sooner than most expect, too.

Kurzweil disagrees.

Right now, at USC (University of Southern California), researchers are building a synthetic brain out of carbon nanotubes, NEURON BY NEURON. They already have several working neurons and have shown that they can communicate with each other.

You have embellished this considerably. They have developed computer models that have shown that carbon nanotubes can be used to create artificial brain cells. A carbon nanotube based circuit could potentially be used to precisely model the properties of a real neuron. Good stuff, but far from creating a brain, "right now".

The resolution of fMRI has increased by something like 100 million times (this being sometime in the last month or two). This is HUGE when it comes to reverse engineering the brain.

You misunderstand this technology. This is a microscope, not a medical imager. The new device does not work like a conventional MRI scanner, which uses gradient and imaging coils. Instead, the researchers use MRFM to detect tiny magnetic forces as the sample sits on a microscopic cantilever, essentially a tiny sliver of silicon shaped like a diving board. Laser interferometry tracks the motion of the cantilever, which vibrates slightly as magnetic spins in the hydrogen atoms of the sample interact with a nearby nanoscopic magnetic tip. The tip is scanned in three dimensions and the cantilever vibrations are analyzed to create a 3D image. This technology stands to revolutionize the way we look at viruses, bacteria, and proteins. However, its potential for "reverse engineering the brain" is probably marginal.

It has been demonstrated that nanomachines implanted next to INDIVIDUAL neurons can be stimulated by infrared light to activate neurons ONE AT A TIME.

The only info I could find related to this makes it clear that practical applications are still 20-30 years away, although some sound cool, a "communicator between the biological and silicon worlds." But, I'm not sure what I found is what you're talking about here. Link?

Three weeks after I spent $2500 on my quad-core, 3-way SLI computer, it became nearly 'obsolete' (in the sense that when I ordered it, it was only a couple of small steps from top of the line, and only three weeks later it's several more steps from there).

It's only obsolete if it doesn't do what you need it to do. Computers have been getting exponentially faster for 65 years now; this is not news.

The Iberian Ibex (an extinct goat from Spain) was cloned.

And died within minutes. The research is promising, however, because we continue to wipe out new species every day.

Several nanotechnological inventions have been made that can purify water and clean CO2 from the air (unfortunately, the manufacturing processes that would allow us to deploy these systems globally aren't in place yet).

It's a nanotech membrane that captures CO2 from waste gases at the point of emissions - it doesn't clean CO2 from the atmosphere, as you seem to imply. However, this could be useful.

You can choose your offspring's genetic traits (well, a few of them, like sex, with MANY more to follow soon) at the Fertility Institute.

Only sex, which is not a big deal. There is no validity to the embellishment "MANY more to follow soon". Link?

Stroke damage in mice was CURED with stem cells.

This is exaggerated; there has been some demonstrated rebuilding of neurons in stroke-damaged mice. It's a good start, but there is a long way to go, especially for humans, as the resources I found on this clearly state.

Nano hydrogels that make junk food healthy.

This is not true. Link?

Anyway, who cares about all the trivial gadgets that are going to wash over us in the next two years, when we are literally WITHIN SIGHT of the Singularity now.

Kurzweil strongly disagrees.

Sometimes I think someone is holding technology back, in order to reduce the culture shock that would occur if it all hit us at once. Maybe we're being inoculated to the Singularity (don't forget that the US military can LEGALLY stop any patent and appropriate the technology for up to twenty years for national security -- so WHAT do they have that's TWENTY years ahead of what we see on the market?)

Paranoid delusions.

Plasma fusion energy is close to complete.
Safe nuclear reactors that can fit in your garage are having their finishing touches put on them RIGHT NOW.

This is not true. Link?

I think that the US infrastructure and manufacturing base are BEING ALLOWED to collapse, to clear the board for nanotechnology, and other next gen infrastructure technologies. The bailouts are just our way of saying, 'Hey thanks for all those decades of technological innovation, and for getting us this close, now get out of the way and retire.'

More delusions.

Don't wildly embellish or make stuff up, it hurts your credibility. If you can provide links (which you should have done in the first place) to the things I indicated were not true, I may take those back, assuming you didn't wildly embellish them either.

In any case, none of the things you named get to the central catalyst for The Singularity - advanced AI. AI so advanced that it can actually design AI more intelligent than itself, ad infinitum. This is a software issue, first and foremost, and we are very far from that level of AI sophistication. Kurzweil's modified estimate of 2045 reflects his appreciation of this, and is far more on target for the hypothetical event of The Singularity than your Maya-centric prediction of 2012.

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2009 2:12 PM by Pandemonium1323


Ok, I'll get some links.
I think the reason a pre-singularity mind cannot understand a post-singularity mind is for the same reason we can't observe physical singularities (black holes). I use black holes as an analogy often (as does Ray Kurzweil). We CAN'T empirically observe a singularity point, so there is no way to 'understand' what goes on inside it. If we could, there wouldn't be so many conversations about it here.

You say Kurzweil disagrees. First, I don't believe Ray says everything he believes. Other writers have pointed this out. He is a smart man, but I don't think he 'tells everything'.

---
http://viterbi.usc.edu/news/news/2009/brain-power.htm

The team has already designed and simulated the transistor circuits for a single synapse, and a CMOS chip that will be used to validate the concepts is about to be fabricated, says Hsu, a senior member of the team and Ph.D. student in electrical engineering. Now it's time to connect the structure to another synapse and study neural interconnectivity. By the end of the semester, she hopes to have 'several synthetic neurons talking to each other.'

I believe that acceleration itself in the next two years will help this project get completed ahead of schedule. This is the BioRC Project. I'm simply projecting that it will be finished sooner than thought (because technology is ACCELERATING).
---

---
http://techfragments.com/news/240/Science/IBM_Creates_3D_MRI_With_100_Million_Times_Finer_Resolution.html

"Our hope is that nano MRI will eventually allow us to directly image the internal structure of individual protein molecules and molecular complexes, which is key to understanding biological function."

My point here is just that imaging technology is progressing in LEAPS AND BOUNDS at this point, another clue that we are entering the 'knee' of the curve, and are very close. How long till the next jump in imaging tech that allows us to scan an entire human brain? Not long I think.
---

---
http://www.physorg.com/news154619675.html

By using semiconductor nanoparticles as tiny solar cells, the scientists can excite neurons in single cells or groups of cells with infrared light. This eliminates the need for the complex wiring by embedding the light-activated nanoparticles directly into the tissue. This method allows for a more controlled reaction and closely replicates the sophisticated focal patterns created by natural stimuli.

As far as practical applications being 20-30 years away, well that's a difference of opinion. I think that the synergistic effect of ALL these technologies puts it much closer.
---

---
My computer is not 'obsolete'. It's awesome in fact. I put off buying one, waiting for this particular technology to come down in price. I was just amazed at how quickly much better stuff hit the market, literally within WEEKS after buying the machine I got now.

This for example:
http://www.hardcorecomputer.com/
---

---
Yes, I was aware that the Ibex died quickly, was not trying to obfuscate that. Just pointing out again not how FAR we've come, but how FAST we're going. And I was tickled by this particular story, I love goats.
---

---
http://www.nanowerk.com/news/newsid=4452.php

Yes, you are correct about the membrane. My question is, how long will it take for someone to use the same technology (or a version of it) and apply it to the rest of our atmosphere. Don't underestimate how clever people are, someone will figure out how to do it.
---

---
Ok, it's too easy to find hundreds of articles on the Fertility Institute, so I am not going to post any links (just look 'em up!). As far as I know, choosing traits like eye color, etc. is already available, and gender selection has been available for a while. I don't really believe that things like 'intelligence' could be selected, because it's so much a product of nature and nurture, and involves many different parts of the brain and many different genes. It's not so simple to just 'make' someone intelligent. HOWEVER, I don't see it as a far leap to make improvements in the rudimentary functioning and efficiency of the brain processes that will lead to greater POTENTIAL for intelligence. Again, I just don't think we are that far away from it (another area where imaging tech will advance us quickly).
---

---
http://www.technologyreview.com/biomedicine/22263/page1/

The team was able to show that the hole in the brains of rats caused by a stroke was completely filled with "primitive" new nerve tissue within seven days.

I took this to mean 'cured'. The words 'completely filled' made me think that. I was not implying that they could do the same for humans, and there is still a lot of work to be done, but that work is getting done FASTER every year (every month?).
---

---
http://www.nanowerk.com/news/newsid=9277.php

The promise of nanotechnology, the Dutch scientist said, is it could allow re-engineering ingredients to bring healthy nutrients more efficiently to the body while allowing less-desirable components to pass on through.
European food scientists use nanotechnology to create structures in foods that can deliver nutrients to specific locations in the body for the most beneficial effects, Kampers said.

When I used the word hydrogels, I was actually getting that mixed up with another article about nanohydrogels treating cancer. Sorry 'bout that, but I read so much, sometimes they blur together (I have 6-10 tabs open at a time). But hey, both are great. Also, something not mentioned in this particular article, but which I have read about, was coating nutrients in nanomaterials to disguise their taste, and creating designer 'tastes' that are healthy at the same time. Basically, 'junk food' that isn't junk. That's what they're making right now. I know there are no, or almost no, companies using it yet, but I think it will be really big, really soon.
---

---
I personally don't care about iPods, or nano-food, or nano-swimsuits. What I do care about are the brain machine interfaces and longevity research. My point here, is yeah, all the oohh and ahh gizmos are cool, but, to me at least, they are irrelevant when it comes to BMI and genetic advancements. That kind of stuff will be very short lived.
---

---
http://focusfusion.org/

WEST ORANGE, NJ - Dec. 18, 2008 - Lawrenceville Plasma Physics Inc., a small research and development company based in West Orange, NJ, announces the initiation of a two-year-long experimental project to test the scientific feasibility of Focus Fusion.
Focus Fusion Is:

* controlled nuclear fusion
* using the dense plasma focus (DPF) device
* and hydrogen-boron fuel.
o Hydrogen-boron fuel produces almost no neutrons (e.g., no radioactivity)
o and allows the direct conversion of energy into electricity.

This is not the ONLY approach being developed either.

There is also this:
http://nextbigfuture.com/2008/11/hyperion-power-generation-not-scam-and.html

and this:
http://dvice.com/archives/2008/06/blacklight_powe.php

My point here is that there are multiple teams and approaches being worked out on this, in addition to the solar industry. It will happen soon.
---

---
35 USC 181
Secrecy of certain inventions and withholding of patent

Whenever publication or disclosure by the publication of an application or by the grant of a patent on an invention in which the Government has a property interest might, in the opinion of the head of the interested Government agency, be detrimental to the national security, the Commissioner of Patents upon being so notified shall order that the invention be kept secret and shall withhold the publication of the application or the grant of a patent therefor under the conditions set forth hereinafter.

Each individual to whom the application is disclosed shall sign a dated acknowledgment thereof, which acknowledgment shall be entered in the file of the application. If, in the opinion of the Atomic Energy Commission, the Secretary of a Defense Department, or the chief officer of another department or agency so designated, the publication or disclosure of the invention by the publication of an application or by the granting of a patent therefor would be detrimental to the national security, the Atomic Energy Commission, the Secretary of a Defense Department, or such other chief officer shall notify the Commissioner of Patents and the Commissioner of Patents shall order that the invention be kept secret and shall withhold the publication of the application or the grant of a patent for such period as the national interest requires, and notify the applicant thereof. Upon proper showing by the head of the department or agency who caused the secrecy order to be issued that the examination of the application might jeopardize the national interest, the Commissioner of Patents shall thereupon maintain the application in a sealed condition and notify the applicant thereof. The owner of an application which has been placed under a secrecy order shall have a right to appeal from the order to the Secretary of Commerce under rules prescribed by him.

duh
not paranoid delusions, just an attempt to understand that if the govt. CAN withhold technology, then isn't it reasonable to assume that they ARE... I don't have to convince anyone that the stealth projects (SR-71, etc.) were a secret, do I... if I had been talking about them decades ago, when they WERE secret, I would have got the same reaction then... why is it so hard to believe that the military does secret stuff, when they OPENLY admit to doing SECRET stuff... all in the name of national security... which makes sense... that's a big part of what our military is for... what I'm GETTING AT is, can you try to extrapolate through your imagination WHAT they might have as of now?... considering that much of what we see on the open market is potentially 20 years 'behind' what they ACTUALLY have...
---

---
My perspective on current events isn't a delusion; it is a perspective. I've studied hegemonic forces and what they do in this world. It's not a 'conspiracy' so much as it's a game. A game people have been playing for thousands of years. People compete. Now, they are competing over the world. In all honesty, there is obviously much more to the bailouts than what I just said, but I was trying to get people to look at it in a new way. There ARE people who fight/compete for control over this world. It's possible that this financial crisis is part of a larger strategy to make way for new technology. Any thoughts on this, or are you just going to dismiss me?
---

---
I don't make stuff up, I'm just not great at citing everything, it's a bad habit.

And your last point, about AI. This is my personal opinion (let's debate it), but I really think we'll see the rise of a cybernetic intelligence LONG before we get a super-intelligent AI.
---

kallisti
Pan

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2009 2:59 PM by PredictionBoy


I don't make stuff up, I'm just not great at citing everything, it's a bad habit.


That's fine, I'm not condemning you personally or anything.

All I'm saying is, if you post something, ensure it can withstand scrutiny.

- PB

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2009 3:08 PM by Pandemonium1323


Actually, I love it when my ideas don't withstand scrutiny, cus then I have a new (better!) way to look at things. I love being corrected, and I love learning.

Re: What If the Singularity Does NOT Happen?
posted on 03/16/2009 3:13 PM by PredictionBoy


Actually, I love it when my ideas don't withstand scrutiny, cus then I have a new (better!) way to look at things. I love being corrected, and I love learning.


That is well, I share that affection for personal improvement.

Re: What If the Singularity Does NOT Happen?
posted on 07/04/2009 11:54 PM by Jake Witmer


Amen, brother.

Jake Witmer
907-250-5503

Re: What If the Singularity Does NOT Happen?
posted on 05/17/2009 11:00 PM by DennisWingo


Sorry to come late to this party but here is my response.

As a very long-time space advocate, it is my perception that we are focusing on the wrong issue, if our issue is to get cheap access to space at this time. We all know that it is what is needed, but at this time, without a market to support it, most if not all such funding comes from the government, and our government has not shown the competence needed in this area (either in executing such programs itself or in picking a contractor to do so; see the X-33 and many other debacles).

Therefore I would posit a different approach. What is needed at this time is a Reusable Space Vehicle (RSV), rather than wasting time with Reusable Launch Vehicles (RLVs), for the reasons stated in paragraph 1.

Today an RSV could be built for a fraction of the cost of an RLV. An RSV is much more of an integration task than a ground-up R&D activity. An RSV, for example, could be based upon an International Space Station (ISS) node or module. The life support systems are already in use on the ISS, and we have an almost ten-year history of continuous operation of these systems to draw our experience from in designing one for an RSV. As far as propulsion goes, you could either launch multiple stages from the Earth to give the system a boost, or aggregate them at the ISS for use. With the station as a logistics node, the existing base of expendable launch vehicles could be used, which would increase their production, thus lowering their unit cost by as much as 50% for a 25% increase in production.

The aerocapture shell for the return to the station is also very well trodden territory and by using that approach, you save a tremendous amount of fuel.

The RSV would be used to get humans first to lunar orbit, where a stripped-down lander would be used to get two crewpersons at a time to the lunar surface. Supplies and several subsystems of a lunar outpost would have been emplaced before the crew's arrival and, even with the primitive state of robotics today, could be in serviceable condition by the time they arrive.

Instead of a science-focused initial mission, the mission would be focused on In-Situ Resource Utilization, or ISRU, which would be done to make oxygen and metals from the lunar basaltic rocks and/or the regolith. If this is done in the polar regions, then the amount of resources is far greater than at the equator, and there is constant sunlight, eliminating the near-term need for nuclear power.

What is necessary here is to reduce the cost to first ISRU production as much as possible and then leverage ISRU supplied oxygen, metals, and fuel to multiply the effectiveness of the RSV.

It is my professional opinion that this would be cheaper than developing an RLV or a heavy-lift rocket. However, with ISRU as a focus, and local manufacturing set up at the earliest possible moment, this changes the mix of payloads sent from the Earth from complete systems to parts and subassemblies, which favors a high-flight-rate, low-cost system, which in turn gets you the market needed to justify the private investment in an RLV.

Therefore to get an RLV, you need an RSV supported by ISRU.

:)

Re: What If the Singularity Does NOT Happen?
posted on 05/18/2009 9:37 PM by nferguso


Speculation about the approaching singularity is fascinating but typically incomplete. Extrapolations of CPU power and the adoption of computer technology are all very well. Certain other trends are unfortunately disregarded. I have studied these extrapolations, in particular the adoption and unavoidable use of Microsoft Office applications. I estimate that by the year 2053, a full two decades before other trends reach their point of criticality, society will reach what I call the "MS event horizon".

The density of p-codes, compounded by exponentially expanding auto-feature creeping and mandlemangled user interface elements, will almost certainly accelerate the universe's quantum state into a local entropy maximum by that year.

I can imagine our universe might thereafter be visited from some temporally distant but nearby brane, by unrecognizably intelligent entities, who glissandaport into our where, and discover us in a state of endless suspension, frozen from jscript in a deadly IPV6e7 embrace at the moment a Baja amoeba running Nanonet Explorer clicked the "back and over yonder" button when it really wanted to go thither. The entities will withdraw, incomprehensibly wistfully, hollowly [Sorry!] whispering "This is the way this world ends - not with a bang, but a noop."

Re: What If the Singularity Does NOT Happen?
posted on 05/18/2009 9:57 PM by harvard


Got to ask you: what in heck is "glissandaport"? I cannot find it.

Re: What If the Singularity Does NOT Happen?
posted on 05/19/2009 12:55 AM by Pandemonium1323


gliss-and-a-port
probably something to do with teleportation, or the like
what's "gliss"

Re: What If the Singularity Does NOT Happen?
posted on 05/19/2009 1:05 AM by Pandemonium1323


Ha, I knew I was close
it's not a-port though, it's aport. As in aportation.

Aportation:


A PK talent involving the seemingly instantaneous movement of an object from one location in space-time to another, apparently without going through the normal space-time in between. See Teleportation.

http://www.experiencefestival.com/a/Aportation/id/183412

sweet, found gliss too:

Glissando

"Glissando" (plural: glissandi, abbreviated gliss.) is a glide from one pitch to another. It is an Italianized musical term derived from the French glisser, to glide.
Glissando vs. Portamento
Prescriptive attempts[1] to distinguish the glissando from the portamento by limiting the former to the filling in of discrete intermediate pitches on instruments like the piano, harp and fretted strings have run up against established usage[2] of instruments like the trombone and timpani. The latter could thus be thought of as capable of either 'glissando' or 'portamento', depending on whether the drum was rolled or not. The clarinet gesture that opens Rhapsody in Blue could likewise be thought of either way, being originally for piano, but is in practice played as a portamento and described as a glissando. In cases where the destination and goal pitches are reduced to starting and stopping points as in James Tenney's Cellogram, or points of inflection, as in the sirens of Varèse's Hyperprism, the term portamento (conjuring a decorative effect) seems hardly adequate for what is a sonorous object in its own right and these are called glissando.
'Discrete glissando'
On some instruments (e.g., piano, harp, xylophone), discrete tones are clearly audible when sliding. For example, on a piano, the player can slide his/her thumb or fingers across the white or black keys, producing either a C major scale or an F# major pentatonic (or their relative modes). On a harp, the player can slide his/her finger across the strings, quickly playing the separate notes or even an arpeggio (for example b, c flat, d, e sharp, f, g sharp, a flat). Wind, brass and fretted stringed instrument players can effect an extremely rapid chromatic scale (ex: sliding up or down a string quickly on a fretted instrument), going through an infinite number of pitches. Arpeggio effects (likewise named glissando) are also obtained on the harmonic series by bowed strings and brass, especially the french horn.

http://encyclopedia.thefreedictionary.com/Gliss

So I take it to mean that glissandoporting is teleportation between worlds in the MWT bulk by harmonic gliding.

Re:glissandoporting is teleportation between worlds in the MWT bulk by harmonic gliding
posted on 05/19/2009 1:26 AM by harvard


LOL. It is stuff like this that makes me say

"And that is why you are so appreciated!"

Thanks Pan!

Re: What If the Singularity Does NOT Happen?
posted on 05/19/2009 12:47 AM by Pandemonium1323


You are wonderful.

Re: What If the Singularity Does NOT Happen?
posted on 07/13/2009 4:14 PM by cdcsvak


In the article titled 'What if the Singularity does not happen?' Vernor Vinge presents three graphs to illustrate his theory. The graphs contain a line depicting maximum power source as a function of population size. However, it is not clear how this line was constructed. Specifically, according to the graph humans started using horses around 6000 BC. But when did humans start using camels, oxen and elephants? It must have been after 6000 BC, as these animals are more powerful than horses and the graph clearly displays an upward trend. Is this a historical fact? Furthermore, what do the dots between horse use and the invention of the steam engine represent? It seems that the graph was plotted first and events labelled subsequently. Moreover, 'The Wheel of Time' graph illustrates alternating periods of technological progress and technological regress. These periods are associated with equally dramatic oscillations in the size of the human population. However, no justification is provided for occurrences of events that would lead the human race to the brink of extinction.
Similar to other Singularity theorists, Vernor Vinge claims that civilization development exhibits exponential characteristics. For instance, radio, television, airplanes and space missions were unthinkable only two centuries ago. Likewise, people did not see a need for personal computers a few decades ago. For these reasons, it is hard to imagine what events will take place 1000 years from now. Yet the author has some predictions and graphs that stretch thousands of years into the future. Indeed, these far-future fictitious scenarios do not deserve serious consideration.
Vinge claims that the best chance of survival for the human race is by establishing space colonies. Space settlements will only exist if travelling faster than the speed of light is made possible. However, Bill Joy, the Chief Scientist of Sun Microsystems, disagrees with this claim. He argues that the problems that we have created on Earth will follow us to the space colonies. Therefore, the fate of the human race is linked to our ability to survive on this planet (Joy, 2000).

Re: What If the Singularity Does NOT Happen?
posted on 10/27/2009 4:07 PM by SuperFilms


Some mathematicians have speculated that our "universe" may in fact be a virtual construct. What does that mean for all of these theories? Aren't these theories dependent on the idea that what we see is in fact physical, real?

Why isn't there more discussion of this on this page, especially given the number of people who seem certain that the singularity is coming?

For those who decry Second Life, etc., consider--we may BE the second life already.

What do you think?

Re: What If the Singularity Does NOT Happen?
posted on 04/21/2010 11:16 AM by silentrage


Ray discussed this in his book, there are some physicists who support this idea too.

Ray argues that it would be in our best interest to achieve the singularity so that we can make the simulation more interesting, to ensure its continuation.

I think such claims about the intentions of the simulation operators, without some kind of definite evidence, are rather ridiculous. It's entirely possible that they'll turn the simulation off once we reach the singularity and then start another one; maybe they're running a trillion simulations to calculate the average chance of singularities.
The point is, no one knows.

Re: What If the Singularity Does NOT Happen?
posted on 02/03/2010 5:40 PM by stodesco


The problem here is that hardware has progressed exponentially, while software has progressed linearly. We will be able to create a machine that matches the horsepower of a human brain, but we will not be able to organize this power in a meaningful way if we continue like this.
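
A minimal toy sketch in Python (the starting values and growth rates are arbitrary assumptions, not claims about real hardware or software) of the gap that opens between exponential and linear progress:

# Illustrative only: hardware assumed to double every 2 years,
# software assumed to gain a fixed 0.5 "units" of capability per year.
hardware = 1.0
software = 1.0
for year in range(1, 31):
    hardware *= 2 ** 0.5
    software += 0.5
    if year % 10 == 0:
        print("year %2d: hardware x%8.0f, software x%.1f" % (year, hardware, software))

After 30 years the hardware factor is in the tens of thousands while the software factor is still in the teens, which is the shape of the mismatch being pointed at.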

Re: What If the Singularity Does NOT Happen?
posted on 04/12/2010 2:23 PM by silentrage


Hello people, first post here,
so many good points, don't know which ones to reply to first!

So I'll try a few that I still remember after an hour of reading.

@simulation
1. Yes it's a possibility, some scientists seriously think we are part of a simulation,
and a group have demonstrated the possibility that our universe is a holographic projection; thus we have both the possibility and a mechanism for it.

But I think any discussions and pondering regarding the nature or purpose of this simulation would not be very productive, since it's far beyond our comprehension.
We could be someone's lab rats, someone's save file of a cosmological Spore game, or any number of possible scenarios, all of which are likely beyond our control.

However, one exciting aspect of the advent of strong AI is that we'll know much more about our universe; the technological breakthroughs they'll achieve should allow an unprecedented amount of information about the universe to be gained. We might confirm for sure whether we're in a simulation or not, what dark matter/energy is, and whether there is ETI.

@monkeys > humans > strong AI
Someone mentioned that humans would seek to kill monkeys either intentionally to acquire resources or by accident as a side-effect of their development, and that machines would do the same to us.

I agree with this to some extent, but ultimately no long-term predictions can be derived from the human-monkey interaction that can apply to the machine-human interaction.

We started off by killing inferior species for their food and territory when we first "stood up", and later we simply ruined their environment while we started "running", but we would eventually reach a point where the slaughtering or even harming of animals is a detriment to our societal values, and the resources to keep them comfortable, even happy, would be trivial enough that we'd preserve them out of compassion.

Machines cannot be assumed to follow this same kill-ignore-babysit pattern simply because we lack the intelligence to predict their behavior.

I think the only thing we can reliably predict is this: given the known fact that human society today is riddled with problems and a sizable proportion of humans don't live happily, and the fact that once strong AI emerges its intelligence will surpass ours, it would not agree with our methods and our condition and would take some action to rectify the issue. The way it chooses to go about it is beyond human prediction, just as a monkey cannot know that the same species that had once slaughtered them for their food would later put them in sanctuaries, apparently for no good reason.

Re: What If the Singularity Does NOT Happen?
posted on 04/14/2010 10:27 PM by eldras


Hello silentrage,

I hope you're happy here and with the new forum, which is still in beta.

the monkey thing:

we wiped out competitors that were near us in intelligence.

Neanderthals, etc.

Loads of hominids are popping up and we seem baffled as to why they became extinct.

I'm not. We killed them. Possibly ate them.


A superintelligence may just fly off, bored with us, and pop out of the membrane into a new size/dimension of the universe, never to return.

It may be happening all the time in Type 2 or 3 civilisations.

It's the scale that awes.

the horror.

Vinge says catastrophe is very likely.


In 1993 he advocated building only weak AI as our best survival chance.

Re: What If the Singularity Does NOT Happen?
posted on 04/21/2010 1:45 PM by kw7777


Machines cannot be assumed to follow this same kill-ignore-babysit pattern simply because we lack the intelligence to predict their behavior.

I think the only thing we can reliably predict is this: given the known fact that human society today is riddled with problems and a sizable proportion of humans don't live happily, and the fact that once strong AI emerges its intelligence will surpass ours,


I don't get why people automatically see AI as a new "species". I think it is much more likely that they will be tools, a million times more advanced versions of today's computers. They will not have any desires or goals that their makers (us) will not want them to have.

I disagree that we can't predict what super AIs will do. While we can't predict exactly how they will solve certain problems, by controlling their desires and emotions OR by simply making them obey human commands, we CAN predict what problems they will solve. So we could predict, for example, that they will make us immortal, but we can't predict how they will do this.

I find it very hard to imagine why ANYONE would want to build an AI that is completely autonomous and has unpredictable goals. That would just be stupid and obviously suicidal. On the other hand, there are LOTS of reasons why people would want to build a "tool AI", that obeys commands and solves problems the inventors want to solve.

Re: What If the Singularity Does NOT Happen?
posted on 04/22/2010 12:59 AM by silentrage

[Top]
[Mind·X]
[Reply to this post]

Yeah, what you're saying makes a lot of sense to me now. I guess I wasn't seeing the transitional step between dumb AI and SUPER AI, which is useful AI like you described, hehe.

Re: What If the Singularity Does NOT Happen?
posted on 06/18/2010 11:31 AM by Manohar Tilak



About "best hope for long-term survival"--self-sufficient, off-Earth settlements", Manohar Tilak, th originator of The pre-1970 Evoluon Theory of Cosmic Evolution and the 1998 book "Infinities to Eternities" (ITE), feels that time which Physicists generally do not believe to be existent now seems have come for acceptance. Furthermore time also is ripe for acceptance of the Notion that The Information systems that The Evoluon Theory Projects in the after page 58 colored chart of ITE -- which eventually may become is available in the Electronic Book form from Ricky Quintana of Almex Corporation near Los-Angeles, California.
This is the form of the absolute singularity that does seem to exist as *the spirit*!
I a leaving this incomplete to honor Mathematician G'del's Incompleteness Theorem since it simply can be assumed that Astronomer Carl Sagan's TV series Cosmos also can be regarded as parts of Cosmic Evolution therefore imperfect and incomplete!
Humbly asserts Manohar Tilak
.