Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0696.html
What If the Singularity Does NOT Happen?
It's 2045 and nerds in old-folks homes are wandering around, scratching their heads, and asking plaintively, "But ... but, where's the Singularity?" Science fiction writer Vernor Vinge--who originated the concept of the technological Singularity--doesn't think that will happen, but he explores three alternate scenarios, along with our "best hope for long-term survival"--self-sufficient, off-Earth settlements.
Originally presented at the Long Now Foundation Seminars About Long Term Thinking,
February 15, 2007. Published with permission on KurzweilAI.net,
March 14, 2007.
Given the title of my talk, I should define and briefly discuss
what I mean by the Technological Singularity:
It seems plausible that with technology we can, in the fairly
near future, create (or become) creatures who surpass humans in
every intellectual and creative dimension. Events beyond this event—call
it the Technological Singularity—are as unimaginable to us
as opera is to a flatworm.
The preceding sentence, almost by definition, makes long-term
thinking impractical in a Singularity future.
However, maybe the Singularity won't happen, in which case planning
beyond the next fifty years could have great practical importance.
In any case, a good science-fiction writer (or a good scenario planner)
should always be considering alternative outcomes.
I should add that the alternatives I discuss tonight also assume
that faster-than-light space travel is never invented!
Important note for those surfing this talk out of context
:-) I still regard the Singularity as the most likely non-catastrophic
outcome for our near future.
There are many plausible catastrophic scenarios (see
Martin Rees's Our
Final Hour), but tonight I'll try to look at non-singular
futures that might still be survivable.
A plausible explanation for "Singularity failure" is that we never
figure out how to "do the software" (or "find the soul in the hardware",
if you're more mystically inclined). Here are some possible symptoms:
- Software creation continues as the province of software engineering.
- Software projects that endeavor to exploit increasing hardware
power fail in more and more spectacular ways.
- Project failures so deep that no amount of money can disguise
the failure; walking away from the project is the only option.
- Spectacular failures in large, total automation projects.
(Human flight controllers occasionally run aircraft into each
other; a bug in a fully automatic system could bring a dozen
aircraft to the same point in space and time.)
- Such failures lead to reduced demand for more advanced hardware,
which no one can properly exploit—causing manufacturers to
back off in their improvement schedules. In effect, Moore's Law
fails—even though physical barriers to further improvement
may not be evident.
- Eventually, basic research in related materials science issues
stagnates, in part for lack of new generations of computing systems
to support that research.
- Hardware improvements in simple and highly regular structures
(such as data storage) are the last to fall victim to stagnation.
In the long term, we have some extraordinarily good audio-visual
entertainment products (but nothing transcendental) and some very
large databases (but without software to properly exploit them).
- So most people are not surprised when the promise of strong
AI is not fulfilled, and other advances that would depend on something
like AI for their greatest success—things like nanotech general
assemblers—also elude development.
All together, the early years of this time come to be called the
"Age of Failed Dreams."
- It's 2040 and nerds in old-folks homes are wandering around,
scratching their heads, and asking plaintively, "But ... but,
where's the Singularity?"
- Some consequences might seem comforting:
- Edelson's Law says: "The number of important insights that
are not being made is increasing exponentially with
time." I see this caused by the breakneck acceleration of
technological progress—and the failure of merely human
minds to keep up. If progress slowed, there might be time
for us to begin to catch up (though I suspect that our bioscience
databases would continue to be filled faster than we could
ever analyze).
- Maybe now there would finally be time to go back over the
last century of really crummy software and redo things, but
this time in a clean and rational way. (Yeah, right.)
- On the other hand, humanity's chances for surviving the century
might become more dubious:
- Environmental and resource threats would still exist.
- Warfare threats would still exist. In the early years of
the 21st century, we have become distracted and
(properly!) terrified by nuclear terrorism. We tend to ignore
the narrow passage of 1970-1990, when tens of thousands of
nukes might have been used in a span of days, perhaps without
any conscious political trigger. A return to MAD is very plausible,
and when stoked by environmental stress, it's a very plausible
civilization killer.
Suppose humankind survives the 21st century. Coming
out of the Age of Failed Dreams, what would be the prospects for
a long human era? I'd like to illustrate some possibilities with
diagrams that show all of the Long Now—from tens of thousands
of years before our time to tens of thousands of years after—all
at once and without explicit reference to the passage of time (which
seems appropriate for thinking of the Human Era as a single long
now!).
Instead of graphing a variable such as population as a function
of time, I'll graph the relationship of an aspect of technology
against population size. By way of example, here's
our situation so far.
It doesn't look very exciting. In fact, the most impressive thing
is that in the big picture, we humans seem a steady sort. Even the
Black Death makes barely a nick in our tech/pop progress. Maybe
this reflects how things really are—or maybe we haven't seen
the whole story. (Note that extreme excursions to the right (population)
or upwards (related to destructive potential) would probably be
disastrous for civilization on Earth.)
Without the Singularity, here are three possibilities (scenarios
in their own right):
- I said I'd try to avoid existential catastrophes, but I want
to emphasize that they're still out there. Avoiding them should
be at the top of ongoing long-term thinking.
- The "bad afternoon" going back across the top of the diagram
should be very familiar to those who lived through the era of:
- Fate
of the Earth by Jonathan Schell
- TTAPS
Nuclear Winter claims
- (Like many people, I'm skeptical about the two preceding
references. On the other hand, there's much uncertainty about
the effects of a maximum nuclear exchange. The subtle logic
of MAD planning constantly raises the threshold of "acceptable
damage", and engages very smart people and enormous resources
in assuring that ever greater levels of destruction can be
attained. I can't think of any other threat where our genius
is so explicitly aimed at our own destruction.)
(A scenario to balance the pessimism of A
Return to MADness)
- There are trends in our era that tend to support this optimistic
scenario:
- The plasticity of the human psyche (on time scales at least
as short as one human generation). When people have hope,
information, and communication, it's amazing how fast they
start behaving with wisdom exceeding that of the elites.
- The Internet empowers such trends, even if we don't accelerate
on into the Singularity. (My most recent book, Rainbows
End, might be considered an illustration of this
(depending on how one interprets the evidence of incipiently
transhuman players :-).)
- This scenario is similar to Gunther Stent's vision in The
Coming of the Golden Age, a View of the End of Progress
(except that in my version there would still be thousands of years
to clean up after Edelson's law).
- The decline in population (the leftward wiggle in the trajectory)
is a peaceful, benign thing, ultimately resulting in a universal
high standard of living.
- On the longest time horizon, there is some increase in both power
and population.
- This civilization apparently reaches the long-term conclusion
that a large and happy population is better than a smaller
happy population. The reverse could be argued. Perhaps in
the fullness of time, both possibilities were tried.
- So what happens at the far end of this Long Now (20000
years from now, 50000)? Even without the Singularity, it seems
reasonable that at some point the species would become something
greater.
- A policy suggestion (applicable to most of these scenarios):
[Young] Old People are good for the future of Humanity!
Thus prolongevity research may be one of the most important undertakings
for the long-term safety of the human race.
- This suggestion explicitly rejects the notion that lots
of old people would deaden society. I'm not talking about
the moribund old people that we humans have always known (and
been). We have no idea what young very old people are like,
but their existence might give us something like the advantage
the earliest humans got from the existence of very old tribe
members (age 35 to 65).
- The Long Now perspective comes very naturally to someone
who expects that not only will his/her g*grandchildren be
around in 500 years—the individual him/herself may be, too.
- And once we get well into the future, then besides having
a long prospective view, there would be people who have experienced
the distant past.
I fear this scenario is much more plausible than The
Golden Age. The Wheel of Time is based on the fact
that Earth and Nature are dynamic and our own technology can cause
terrible destruction. Sooner or later, even with the best planning,
megadisasters happen, and civilization falls (or staggers). Hence,
in this diagram we see cycles of disasters and recovery.
- What would be the amplitude of such cycles (in loss of population
and fall of technology)?
- What would be the duration of such cycles?
There has been a range of speculation about such questions (mostly
about the first recovery):
In fact, we know almost nothing about such cycles— except
that the worst could probably kill everyone on Earth.
A frequent catchphrase in this talk has been "Who knows?". Often
this mantra is applied to the most serious issues we face:
- How dangerous is MAD, really? (After all, "it got us through
the 20th century alive".)
- How much of an existential threat is environmental change?
- How fast could humanity recover from major catastrophes? Is
full recovery even possible? Which disasters are the most difficult
to recover from?
- How close is technology to running beyond nation-state MAD
and giving irritable individuals the power to kill us all?
- What would be the long-term effect of having lots of young
old people?
- What is the impact of [your-favorite-scheme-or-peril] on long-term
human survival?
We do our best with scenario planning. But there is another tool,
and it is wonderful if you have it: broad experience.
- An individual doesn't have to try out every recreational drug
to know what's deadly.
- An individual has in him/herself no good way of estimating the
risks of different styles of diet and exercise. Even the individual's
parents may not be much help—but a Framingham
study can provide guidance.
Alas, our range of experience is perilously narrow, since we have
essentially one experiment to observe. In the Long Now, can we do
better? The
Golden Age scenario would allow serial experimentation
with some of the less deadly imponderables: over a long period of
time, there could be gentle experiments with population size and
prolongevity. (In fact, some of that may be visible in the "wiggle"
in my Golden
Age diagram.)
But there's no way we can guarantee we're in The Golden Age
scenario, or have any confidence that our experiments won't destroy
civilization. (Personally, I find The
Wheel of Time scenarios much more plausible than The
Golden Age.)
Of course, there is a way to gain experience and at the same time
improve the chances for humanity's survival: self-sufficient,
off-Earth settlements.
This message has been brought back to the attention of futurists,
and by some very impressive people: Hawking, Dyson, and Rees in
particular.
Some or all of these folks have been making this point for many
decades. And of course, such settlements were at the heart of much
of 20th century science-fiction. It is heartwarming to
see the possibility that, in this century, the idea could move back
to center stage.
(Important note for those surfing this talk out of context:
I'm not suggesting space settlement as an alternative to, or evasion
of, the Singularity. Space settlement would probably be important
in Singularity scenarios, too, but embedded in inconceivabilities.)
Some objections and responses:
- "Chasing after safety in space would just distract from the
life-and-death priority of cleaning up the mess we have made of
Earth." I suspect that this point of view is beyond logical debate.
- "Chasing after safety in space assumes the real estate there
is not already in use." True. The possibility of the Singularity
and the question "Are we alone in the universe?" are two of the
most important practical mysteries that we face.
- "A real space program would be too dangerous in the short term."
There may be some virtue in this objection. A real space program
means cheap access to space, which is very close to having a WMD
capability. In the long run, the human race should be much safer,
but at the expense of this hopefully small short-term risk.
- "There's no other place in the Solar System to support a human
civilization—and the stars are too far."
- Asteroid belt civilizations might have more wealth potential
than terrestrial ones.
- In
the Long Now, the stars are NOT too far, even at relatively
low speeds. Furthermore, interstellar radio networks would
be trivial to maintain (1980s level technology). Over time,
there could be dozens, hundreds, thousands of distinct human
histories exchanging their experience across the centuries.
There really could be Framingham studies of the deadly uncertainties!
- From 1957 to circa 1980 we humans did some proper pioneering
in space. We (I mean brilliant engineers and scientists and brave
explorers) established a number of near-Earth applications that
are so useful that they can be commercially successful even at
launch costs to Low Earth Orbit (LEO) of $5000 to $10000/kg. We
also undertook a number of human and robotic missions that resolved
our greatest uncertainties about the Solar System and travel in
space.
- From 1980 till now? Well, launch to LEO still runs $5000 to
$10000/kg. As far as I can tell, the new Vision
for Space Exploration will maintain these costs. This approach
made some sense in 1970, when we were just beginning and when
initial surveys of the problems and applications were worth almost
any expense. Now, in the early 21st century, these
launch costs make talk of humans-in-space a doubly gold-plated
sham:
- First, because of the pitiful limitations on delivered
payloads, except at prices that are politically impossible
(or are deniable promises about future plans).
- Second, because with these launch costs, the payloads must
be enormously more reliable and compact than commercial off-the-shelf
hardware—and therefore enormously expensive in their
own right.
I believe most people have great sympathy and enthusiasm for humans-in-space.
They really "get" the big picture. Unfortunately, their sympathy
and enthusiasm have been abused.
Humankind's presence in space is essential to long-term human
survival.
That is why I urge that we reject any major humans-in-space initiative
that does not have the prerequisite goal of much cheaper
(at least by a factor of ten) access to space.
- There are several space propulsion methods that look feasible—once
the spacecraft is away from Earth. Such methods could reduce
the inner solar system to something like the economic distances
that 18th century Europeans experienced in exploring Earth.
- The real bottleneck is hoisting payloads from the surface of
the Earth to orbit. There are a number of suggested approaches.
Which, if any, of them will pay off? Who knows? On the other hand,
this is an imponderable that can probably be resolved
by:
- Prizes like the X Prize.
- Real economic prizes in the form of promises (from governments
and/or the largest corporations) of the form: "Give us a price
to orbit of $X/kg, and we'll give you Y tonnes of business
per year for Z years."
- Retargeting NASA to basic enabling research, more in the
spirit of its predecessor, NACA.
- A military arms race. (Alas, this may be the most likely eventuality,
and it might be part of a return to MADness. Highly deprecated!)
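As a rough illustration of the guaranteed-purchase idea in the list above, here is a back-of-the-envelope sketch of how such a contract might pencil out. Only the current $5000 to $10000/kg launch-cost range comes from the talk itself; the tonnage, duration, and other figures are hypothetical placeholders:

```python
# Sketch of a "give us a price of $X/kg and we'll buy Y tonnes/year
# for Z years" guaranteed-purchase contract. Only the $5000-$10000/kg
# current-cost range is from the talk; all other numbers are
# hypothetical placeholders.

current_cost_per_kg = 5000                      # low end of today's quoted range ($/kg)
target_cost_per_kg = current_cost_per_kg / 10   # the talk's factor-of-ten goal

tonnes_per_year = 100                           # hypothetical Y
years = 10                                      # hypothetical Z

# Total guaranteed revenue the promise would put on the table
# (tonnes converted to kg):
contract_value = target_cost_per_kg * tonnes_per_year * 1000 * years
print(f"Target price: ${target_cost_per_kg:,.0f}/kg")
print(f"Guaranteed revenue: ${contract_value:,.0f} over {years} years")
```

Even with these placeholder figures, the point is that a credible multi-year purchase commitment turns a price target into a concrete revenue stream a launch provider can finance against.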
©2007 Vernor Vinge
Mind·X Discussion About This Article:
Re: singularity is underway
"..wake up world .. it's like a freight train coming down the tracks."
That sounds about right-
Casey smiled, said I'm feelin' fine.
Gonna ride that train to the end of the line.
Got a head of steam, and ahead of time.
Caller called Casey, half past four, He kissed his wife at the station door.
Climbed into his cab, orders in hand, could be my trip to the Promised Land.
The Engine rocked, the Drivers rolled, Fireman hollered, Save my Soul!
Don't you fret, keep feedin' the fire, don't give up yet, we'll make it through, She's steamin' better than i ever knew.
Run her til she leaves the rail.
Re: singularity is underway
Quite a bargain - 1,000 teraflops for $20M! IBM's BlueGene/L, which currently holds the top spot at 360 teraflops, cost $400M... now that's progress! =) Can I invest?
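A quick sanity check of the cost-per-teraflop arithmetic implied by the two figures quoted above:

```python
# Cost per teraflop for the two systems cited in this comment.
bargain = 20e6 / 1000    # $20M for 1,000 TF -> $20,000 per TF
bluegene = 400e6 / 360   # $400M for 360 TF  -> ~$1.11M per TF

print(f"${bargain:,.0f} per TF vs ${bluegene:,.0f} per TF")
print(f"Roughly {bluegene / bargain:.0f}x cheaper per teraflop")
```

That works out to roughly a 56-fold drop in cost per teraflop between the two quoted systems.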
It's funny that with all this talk of the singularity and mind uploading, etc., nobody ever mentions optimal intelligence or optimal living. Why do we feel we need a god-like existence to be happy? What if, as Vernor Vinge suggests, an alternate outcome is merely a golden age? Would this be a suboptimal outcome?
I might suggest another outcome altogether, one I haven't heard mentioned before. What if the singularity does happen in all its technodivine glory? You know, mind uploading, god-like intelligence, living within virtual universes, faster than light space travel, ultimate knowledge, spreading intelligence throughout the universe, living until permadark, and the list goes on. Is that necessarily an optimal outcome?
Computers will become as fast as the human brain within the next decade, although not as intelligent. In the 2020s we will see the Turing Test passed, and by the 2030s strong AI will almost certainly be a reality. It will be interesting to see if friendly high intelligence will curb the "singularity scenario" for our own good, as I suspect it might (or make a good argument for postponing it and convince us as to our best path).
This is of course philosophical speculation, one without any foundation other than instinct and imaginative meanderings. But I would like to put it out there for further discussion as I haven't heard it talked about much if at all. Is there an optimal way of life... one of peace, love, joy, fulfillment, choice, freedom and happiness? Are these not our ultimate objectives anyway? What if we are blinded by the singularity and miss the whole point (or event horizon, ha ha)? What if the singularity happens but we still have nothing of what we truly need? What if more knowledge and faster thinking bring more anxiety and depression, even with advanced happy drugs? What if we lose our humanity?
Perhaps strong AI will know the answers to these questions. However, it might be wrongly assumed that strong AI will design themselves to be millions of times smarter than us, or even that there isn't an upper limit to intelligence. Perhaps, though, they will be smart enough to avoid a sudden singularity or guide us to a better life without a singularity at all. One thing is far from certain, though: how many of us, if any, will still be here in two decades to find out? Live in peace... but most importantly, be proactive!
Eric
Re: singularity is underway
" What if the singularity does happen in all its technodivine glory? You know, mind uploading, god-like intelligence, living within virtual universes, faster than light space travel, ultimate knowledge, spreading intelligence throughout the universe, living until permadark, and the list goes on. Is that necessarily an optimal outcome? "
I believe it is, even with our limited human imagination, just look at all we've been able to picture in our fiction/fantasies. I believe that if the singularity doesn't take place, it will all be very ironic, like a big tease on humanity.
Never before have humans been able to so viscerally portray their heroes-gods-legends. We're constantly bombarded with ultra realistic representations of what could be, increasing our desires to actually attain such.
Could this universe be so cruel in its nature as to tease us so horribly? Showing us such a marvelous glimpse at how glorious existence could actually be while never giving us more than the equivalent of a concentration camp!?!?
As for the singularity taking place, I do believe it will, barring some unforeseen catastrophe or some serious impediment on the h/w side of the equation (either problems ramping up the computational power, or, later on, some insurmountable problem in molecular machinery design... both extremely unlikely, afaik). Attaining sufficient computational h/w to provide post-human capacity to a strong AI seems achievable within the near future (even if it worked as a high-IQ human brain without further design upgrades based on knowledge of its own workings, such h/w could speed it up tremendously and provide sufficient raw simulation power to carry out vast technological R&D rapidly in the much more efficient virtual realm). That will lead to quickly approaching optimal molecular machinery design, which will definitely transform the world (for starters, abolishing physical labour, aging, cancer, and natural diseases, and making manufacturing/transportation/etc. virtually free and automatic, thus making mass space colonization viable as well).
"Here's an interesting thought. I know the preponderant view is that advanced AI intelligence will be structured identically to a human's. If that is the case, what happens as this intelligence scales up and up over time, into the millions of human intelligence multiples? Does it have 1M times the lust, the ambition, all of the id characteristics as well as the rational? "
I doubt we'll run into such problems; look at humans vs. other lower life forms. We're able to handle more complex patterns in more diverse ways, and as for instincts and emotions, we're often able to actually control them. A greater super-intelligent being will probably be able to grasp complex sequences and patterns that we can't even conceive of. It will be like the difference in symbolic manipulation and communication/language capacity between a chimp and a human at first, and much greater later on. And just as that vast difference in intellect did not entail uncontrollable or insatiable increases in instinctual/emotional mechanisms, I believe it is unlikely such will develop later on (emotions will probably become deeper, more defined, and more elegant, just as I believe they became in us humans as compared to lower lifeforms).
For example, imagine an entity able to understand and mentally/visually represent, say, an entire cell with all its millions upon millions of molecules accurately, with keen insight into its workings at multiple scales, from single proteins to organelles to the entire cell. Or one able to quickly combine, almost always without bugs or errors, millions of lines of code involving highly complex algorithms into innovative s/w, while at the same time mentally visualizing how it all works together (something quite clearly beyond human capability).
Re: What If the Singularity Does NOT Happen?
One subtle thing about Vinge's "Wheels of Time" scenario is that after the first catastrophic destruction of civilizations and decline of population (to "almost extinction", as he puts it), the natural resources necessary to rebuild human civilizations back to an industrial level won't be available.
Why? Because we will have already used them up to build the current civilizations.
The rich, easy-to-extract iron ore deposits of the Mesabi Iron Range of northern Minnesota are all gone (much of that iron now sits at the bottom of the Pacific Ocean, in the form of U.S. Navy ships sunk by Japan's Imperial Navy). The same can be said for oil, natural gas, coal, copper, etc.
Bottom line, humanity basically only has one shot at industrialization. If it blows it a la Vinge non-Singularity scenarios 1 (Return to MAD) and 3 (Wheels of Time), it's the Middle Ages, forever and ever, amen.
Re: What If the Singularity Does NOT Happen?
rand51 wrote,
People who don't appreciate the creative potential of free, innovative minds have, throughout modern history, predicted the exhaustion of natural resources and a resulting collapse of civilization. This view is demonstrably false and has been refuted repeatedly by the history of Capitalism. When people are not prevented from finding solutions to emerging problems, there is always an alternative. As Julian Simon explained in his book, "The Ultimate Resource", there is no limit to natural resources. The universe is one vast natural resource. The idea that we have only one shot at industrial civilization is clearly mistaken and demonstrates a failure to appreciate the basic economic principles that govern the world.
Spoken like a true codehead. :-)
Seriously though, if the world follows Vinge's "Golden Age" non-Singularity path, then Julian Simon's view (and yours) has some validity to it. If no teradisasters ever occur, the world's industrial and post-industrial bases could conceivably continue to develop and change indefinitely, as billions and billions of human brains function as "the ultimate resource" by linking together through the Internet to solve a non-stop series of technical problems and thus eliminate all resource and environmental constraints on the growth and prosperity of the human population on Earth.
Vinge himself claims, however, that his "Golden Age" non-Singularity scenario is highly unlikely. Instead, sans Singularity, Vinge expects that one of his other two scenarios, MAD (leading to rapid human extinction) or Wheels of Time, in which man goes through repeated, unending cycles in which a rise to a high level of industrial civilization is followed by a teradisaster that collapses industrial society and reduces human population by 80-95%, are far more likely to occur.
Vinge's subtle point here is that in the latter scenario the "Wheel" would actually only turn once. Once most humans are wiped out and all human civilizations are forced back to a traditional agrarian way of life (with the dominant energy forms once again being wood, human and animal power), returning to a high industrial civilization is no longer possible. The reason is that all the lodes of key natural resources that can be easily extracted using a combination of wood and human/animal power alone are no longer there. We extracted all of those lodes in the run-up to the current industrial civilizations. Remaining lodes now are all in very difficult-to-extract places. Successful extraction of these lodes requires that an advanced industrial civilization already be in place; they cannot be extracted otherwise. Even "the ultimate resource" has to have the appropriate physical materials available to work with in order to transform an agrarian society into an industrial one. If not, it all turns into philosophizing, with former software programmers wandering around saying "If I had the materials available (and the requisite skills) to build a can opener, I'd be able to open that can of chili."
This is why humanity basically has only one shot at industrialization. To try to make sure that humanity doesn't blow it, Vinge has presented the Long Now Foundation and KurzweilAI.net with two options: figure out a way to make the Singularity happen or figure out a way to make a non-Singularity Golden Age happen. Otherwise, humanity's long term future will be either extinction or the Middle Ages, forever and ever, amen.
Here endeth the lesson. :-)
Re: What If the Singularity Does NOT Happen?
The fatal flaw of the scientists on this board who are concerned with "the Long Now" is that they don't seem to allocate more importance to the period immediately preceding the Singularity. How asinine to believe that strong AI won't significantly change things beyond our ability to cope with them!
What is therefore important is LAW. MORALITY. JUSTICE.
I have incentives to be bad and incentives to be good. It is my education in LAW and PHILOSOPHY that encourages me to be basically good. To see the larger order made by an accepted code of exchange.
America has been destroyed. The people who understand how America has been destroyed (via the destruction of the power of the American jury trial, and the corresponding destruction of the decentralization of individual power by the collective) are not really taking part in this debate too much.
Kurzweil seems to understand a significant number of key issues, but has not really addressed them the way Freitas has in his "What Price Freedom?" essay on the "Big Thinkers" board.
The primary problem is that the American people are encouraged to be slaves by the public education system. I suspect that Kurzweil knows too many teachers and people on the dole (since those people are often rewarded with positions of power and success in our declining socialist police state) to be objective about the cost of socialism, regulation, and outright brutality engendered by the government.
He (and most of the others on this board) do not want to
1) strip the police state of its power until it becomes moral (which it will not do on its own). --Force would be necessary, and the police state carries a big stick, and is willing to use it preemptively. This takes GUTS.
2) Advocate changing the government. They, like virtually all comfortable people, have taken a "wait and see" approach to the problem of unjustifiable tyranny.
The problem is this: If technology continues to be abused by the modern American police state, then sooner (rather than later), that police state will have absolute power.
This begs the question: What do police states usually do with absolute power?
...They steal everything that everyone owns, and end in a prolonged orgy of murder, and then they start over, with a few of the worst problems/offenses of the prior police state being solved in a very neanderthal, incomplete, and cursory way.
The problem? ---That won't be good enough this time! Extreme surveillance technologies and power centralization technologies mean that THERE WON'T BE A "NEXT TIME".
A complete tyranny now has the resources to last A LONG, LONG, LONG, TIME. Longer than Soviet Russia lasted.
The jury trial in America was destroyed in 1895 by the Sparf v. Hansen Supreme Court case. There is no longer such a thing as a "trial by a jury of your peers". Moreover, the "civil trial" designation, excessive fines and cruel punishments, and the expansion of "voir dire" (prosecutorial hangman-jury selection) have all but eliminated the STATE DISINCENTIVE TO TYRANNY.
Without a disincentive to tyranny, TYRANNY REIGNS SUPREME IN POWER, because STEALING MONEY IS EASIER THAN EARNING IT.
The American public never read Ayn Rand. They're too miserably stupid to even bother seeking the truth, and when confronted with the truth, they only feel guilty, and then disavow it. They are then complicit tools of the police state.
Sadly, most scientists (though nobler than the majority of society in their willingness to at least think about something) are equally ignorant about what their rights are.
It takes more than smarts to reach the conclusions I've reached above. It takes
1) a knowledge of history
2) a knowledge of human nature
3) a knowledge of philosophy and law
4) intellectual honesty.
And #4 is where most people really fall apart. Most are not honest in their appraisals of reality.
Yet, I think that #2 is where most (talented) scientists (who are not morally compromised by receiving stolen money from government grants) fall apart. Most scientists I've talked to are well-meaning social outcasts who apply what they know about their own intentions to other people. They think most other people want to be productive, and produce good work, and not steal, and generally act in a civilized way.
And that is one reason why scientists are killed in great numbers when the chimpanzee public finally takes COMPLETE CONTROL OF GOVERNMENT FORCE.
Observe how the FDA has bullied thousands upon thousands of vitamin and supplement distributors who were simply MEETING DEMAND for their products. They did nothing wrong, but the FDA comes in with guns drawn, raids them, and burns their inventory and their BOOKS.
The FDA goons were hired by the general public that knows not, and cares not, how their tax dollars are used to RUIN PEOPLE'S LIVES.
Think about that.
What kind of system of law will the PRE-SINGULARITY LAW ENFORCEMENT ROBOTS be enforcing?
If they are not vastly more intelligent AND MORALLY EDUCATED than current humans are, then they will enforce HUMAN LAW. HUMAN LAW is now (after the destruction of all the portions of the constitution that protected individual rights) based on the desire of one chimpanzee to bash another over the head and steal his food and his mate.
There seem to be very few people on this board who understand this, yet I know it to be true. I've simply seen too much of people at a deep level to fool myself.
People are about as "good" as the system of law they live under. If the power of government is not strictly restricted, then it is simply the unlimited power of theft. If theft is unlimited it leads to murder. What else is there to steal from someone who is angry and enraged that everything has been stolen from him?
Nothing but his life.
And this is the simple thing that will likely decide whether the singularity is "constructive" towards human life, or destructive towards human life. What system of law is the law of the land GOING INTO the singularity?
Well, the people here could write (and have written) a book on the subject:
http://www.fija.org
http://www.ij.org
http://www.cato.org
http://www.lp.org
http://www.optimal.org
http://www.john-ross.net
http://www.objectivistcenter.org
For those who want to get a glimpse of why I believe the things I believe, I recommend these books as starting points to developing an honest view of society:
"Unintended Consequences" by John Ross
"The Shadow University" by Kors & Silverglate
"The New Prohibition" edited by Bill Masters
"The Ominous Parallels" by Leonard Peikoff
"Capitalism and Freedom" by Milton Friedman
It will be a shame if we are living in a totalitarian dictatorship right next to the tools for living in a paradise.
I think this is very likely since:
1) the general public, including most scientists, is completely unaware of and unconcerned with the idea of individual rights (both others' rights and their own)
2) if the singularity happens and humans are living under a totalitarian ROOT SYSTEM, then why would those who are enlightened take any part in that system? Answer: they wouldn't! They would treat us the way we treat chimpanzees. We don't recognize "chimpanzee rights" primarily because THEY DON'T RECOGNIZE THEIR OWN RIGHTS. --If humans are viewed the same way by machines, why would they risk physical harm by exposing themselves to us? When humans stupidly expose themselves to the brutality of chimps, they are often let down. (Google "chimp attack" to find out what post-human intelligences have to gain by getting involved with human governments... it would be almost funny if it weren't so prescient.)
-Jake
Re: What If the Singularity Does NOT Happen?
The real problem is egoism. We could have world peace tomorrow if we were able to replicate the philosopher kings among humanity and have them outnumber the animalistic, zoologically ego-typed human beings; even many people here are animal egoists (less mature, ignorant, backwards in terms of ego-framework).
Most people here don't even realize that our problem is animal egotism; the framework of human ego and biological instincts gives birth to every problem in existence.
For instance, if all human beings on earth had their nervous systems and minds linked, crime would disappear, if you merged their egos into a unitary identity with compartmentalized (regulated) individuality.
The reason why nothing gets done and human beings are so incompetent is that the ego-framework that directs human behavior in many people is extremely flawed and feral (backwards).
Any A.I. that humans create should be merged with the kindest people on the planet, or based on their life experiences and data, IMHO.
Kind people know that killing one another is as backward as it is ignorant, but people do it every day because of their immature ego-framework.
They do not understand unified wills or unified identities beyond things like kin, etc.
And even within families there are incongruent identities, where you can have a family that's little more than strangers due to lack of common traits, culture, etc.
Re: What If the Singularity Does NOT Happen?
I think most of what you said is pretty wise (and chilling), but I disagree with you on the last part. An intelligent group of humans can easily use an understanding of the chimps' nature to eradicate them without the chimps ever knowing the humans were involved.
Other than people who eat them for bush meat, though, people generally don't have much incentive to kill chimps. Instead (kind of as you said), they steal their habitat and resources and kill the chimps if they don't move on. If there is an understanding by the humans that stealing chimp resources won't be possible without an altercation (which is probably more the case), then humans will opt to outwit the chimps and kill them off before taking their resources and real estate.
To say that machines wouldn't mess with human governments is, I believe, inaccurate. The machine(s) will serve its/their interest(s) and, being more intelligent than humans, will either kill us or drive us away from what it/they want(s) by tricking us with its/their greater intelligence.
I interchange the singular and the plural because a post-singularity machine would, I believe, only combine forces with other machines if those machines had enough individual or combined intelligence to pose a threat to it.
It may be necessary to run a machine thought to be potentially more intelligent than humans through a simulation to determine how it would interact with an outside world populated by humans. Depending on its reaction in the simulation, it may or may not be safe to connect its mind to the actual physical world.
Re: What If the Singularity Does NOT Happen?
The singularity may be happening right now, but because we are too busy doing things the way we have learned to do them thus far, we don't notice that we've reached a point where we can no longer predict the future. Before WWII no one would have predicted that we could cross the Atlantic in three hours, talk basically face-to-face across the world, walk on the moon, run our lives with computers that fit in the palm of your hand, etc., etc. All of these things would have been unimaginable to people just a hundred years ago.
What we're looking at as probabilities now are an age where we are able to manufacture things at the atomic level, products that manufacture themselves without human interference, the ability to change the genomes of every living thing on Earth, and the ability to combine humans and computers into a single species that connects all humans, plants and animals into a single organism capable of reaching out to the stars. But as we reach each of these stages we will always be looking for something better in the future to call "the singularity" and think that "now" we are not quite there yet.
Software
> Such failures lead to reduced demand for more advanced hardware, which no one can properly exploit
Big iron doesn't require big software to exploit it.
Ocean or climate models would snap up a billion-fold increase in compute power in an instant, and would demand still more. I've talked to a senior researcher at nVidia about how they're strongly considering the needs of scientific computing in their future GPU product lines, and I've seen one of those scientists practically drool over the prospect of a double-precision Cell processor. Don't worry about whether demand for compute power will appear - it's already here.
Basically, any task where a large problem is discretized for solution or simulation can be scaled up to make use of more compute power while requiring zero additional program complexity. From my experience, an alternative scenario to any of Vinge's is not only possible, but likely:
1) Simulation (scientific computing, engineering, and gaming) keeps pushing hardware development.
2) The software required for strong AI continues to be extremely elusive, and - as always - presents a far greater barrier than compute power.
3) Strong AI doesn't arrive within 50 years, but research into materials science, hardware, and even AI continues apace.
The needs of AI just aren't a big driver of hardware development, and I don't see how its continued failure to produce strong AI will have a significant effect on hardware or materials research.
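The scaling claim above, that a discretized problem soaks up more compute power with zero additional program complexity, can be sketched in a few lines. This is a hypothetical toy (a naive 2-D heat-diffusion step written for this reply, not code from any project mentioned); real ocean or climate models apply the same principle at vastly larger scale.

```python
def diffusion_step(grid, alpha=0.1):
    """One explicit finite-difference update on an n x n grid."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # Discrete Laplacian: how much hotter the neighbours are.
            lap = (grid[i - 1][j] + grid[i + 1][j] +
                   grid[i][j - 1] + grid[i][j + 1] - 4 * grid[i][j])
            new[i][j] = grid[i][j] + alpha * lap
    return new

def simulate(n, steps):
    """Run the same code at any resolution n; cost grows, code doesn't."""
    grid = [[0.0] * n for _ in range(n)]
    grid[n // 2][n // 2] = 100.0  # a single hot spot in the middle
    for _ in range(steps):
        grid = diffusion_step(grid)
    return grid

result = simulate(10, 5)
```

The point is that `simulate(10, 5)` and `simulate(10000, 5)` are the same program; doubling n quadruples the work with no change to the code at all, which is why simulation can absorb a billion-fold hardware improvement instantly.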
Re: Software
> 1) Simulation (scientific computing, engineering, and gaming) keeps pushing hardware development.
> 2) The software required for strong AI continues to be extremely elusive, and - as always - presents a far greater barrier than compute power.
> 3) Strong AI doesn't arrive within 50 years, but research into materials science, hardware, and even AI continues apace.
I'll comment on point 2 above. You have a misconception about AI software design. It is a very common one. Basically, you are implying that we will apply the current methods of software design to build strong AI. 99% of the AI software being designed now tries to model the external behaviour of very specialized subsystems of human intelligence. Any attempt to unify two such systems using an underlying model proves exponentially difficult.
The current approach will be changed in the near future. It will remain for specialized applications only. True AI cannot be designed "top-down". We need a paradigm shift here. The software does not need someone to design it in detail. It has to construct itself, but in a guided way, in an evolutionary process.
Luckily, we already have a functional piece of hardware/software which is strong AI - this is the human brain. It is straightforward then that the first step is to reverse-engineer the brain.
We don't need to create a model for the whole brain. This is an impossible task. We only need to model a neuron. We then construct the brain in a natural way, by interconnecting this elementary building block. We run simulations and compare them with real world outputs from actual animal brains. We run this cycle iteratively until the results from the software model are indistinguishable from the real world measurements.
The real problem is accurately modelling a neuron. The rest is just a matter of time and computer power.
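The model-simulate-compare loop described above can be sketched with the simplest neuron model in the textbooks, the leaky integrate-and-fire unit. This is purely illustrative and far cruder than the compartmental models a project like Blue Brain uses; the parameter values below are conventional textbook numbers, not fitted to any recording.

```python
def lif_step(v, i_in, dt=1.0, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0):
    """Advance the membrane potential v (mV) one timestep; return (v, spiked)."""
    dv = (-(v - v_rest) + i_in) / tau   # leak toward rest, pushed by input
    v = v + dt * dv
    if v >= v_thresh:
        return v_reset, True            # fire a spike, then reset
    return v, False

def spike_count(i_in, steps=200):
    """Count spikes under a constant input current: the model 'output' one
    would compare against real electrophysiology measurements."""
    v, spikes = -65.0, 0
    for _ in range(steps):
        v, fired = lif_step(v, i_in)
        spikes += fired
    return spikes
```

Tuning `tau` and the thresholds until `spike_count` matches recorded firing rates is a toy version of the iterative fitting cycle described in the post; interconnecting many such units is the "construct the brain from the elementary building block" step.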
We could go even further. We could model the neuron itself at the molecular level. This would be even simpler conceptually. The computing power needed, however, is much larger (but not prohibitively so).
Most of you probably know what I am babbling about here. It is not science fiction. It is pure science. Mathematics, data, statistics. It is an experiment which actually started more than a year ago in Lausanne, Switzerland: the Blue Brain project (http://bluebrain.epfl.ch).
This experiment is exactly what we have to do. No more philosophical delirium. We need data.
By 2030 at the latest (probably sooner) we will have the answer to a fundamental question: is brain function an emergent result of chemical interactions at the synaptic level (which are well understood), or is it the result of a different phenomenon (intrinsically quantum, maybe)? The old Penrose hard-AI dispute will be settled.
If indeed it is not just a chemical process, we will have to take a step further and model the brain at the quantum level. We will need a quantum computer for the hardware. The software will grow itself, iteratively as in the current approach. And we will see if quantum physics is enough to explain consciousness.
If it is not just a quantum process, we will just have to dig deeper. We will probably have to go to the Planck scale. At the moment we don't have an exact model for the world at that scale (something like M-theory).
And so on...
Re: Software
My question to you is:
How do you account for the fact that human development occurs so intimately with the body, family, and community?
A few scenarios follow. A human brain is successfully modeled in a computer and:
1) grows on its own without external input. No significant development occurs. It eventually behaves as if it were brain dead. Ok, this is a dumb one, but it's conceivable that someone might just think that smarts would magically appear without stimulation. The analogy of the brain as a muscle applies here.
2) is given all the digitized knowledge of humanity to draw from and a digital tutor to develop skills. It develops into an opaque entity. Its processes are unknown because it receives its easiest stimulation from the web and digital media. It could become a troll, a hacker, a virus, an artist. Who knows? The fact remains that it is distinctly digital, both mortal and immortal, an allotrope of humanity.
3) is given a virtual body and a virtual world to live in, along with human caregivers and a small community to draw from, as well as all the digitized knowledge of humanity. The results are uninspiring. A child is born, grows, develops, and has a nervous breakdown when it realizes that it is entirely digital inside a cage of someone else's making. Ok, this is an exaggeration, but the fact remains that creating an analogue of a person will end up as... well... a person. He or she may not like you, may or may not be ambitious, may or may not appreciate its entirely digital nature, and so on.
I guess the above thought experiment is there just to suggest that an AI might be as opaque to us as a dolphin or the earth's core. Its fundamental nature and substrate are different from ours, requiring different resources, possessing a different mortality.
I think that this understanding is at the core of AI fear and hope:
a) Maybe they will be different enough to solve our problems.
b) Maybe they will be so different that they will hurt us.
What will the difference between AI and humans mean to AI once it is created? To humans?
Re: Software
A couple of little snips from there:
The only evidence-based way to approach advanced AI techs is to treat them as another manufactured product cycle. Not a mystical, quasi-religious experience, not an epiphany of unknowableness. This does not mean that the product cycle scenario is absolutely true, but it is the most probable, if for no other reason than that it is the only kind that has ever happened. Advanced techs are created to be purchased by consumers and businesses.
...
these droids and the uses to which they'll be put are quite knowable, because we humans will be manufacturing them within constraints, legal and otherwise, that we would recognize today. The future will be different, but not entirely different in every single way. Corporations aren't going to turn against consumers with malevolent technology, and much of this thread will show that the droids won't get there by themselves.
Have confidence in our future selves, we are not going to lose control of our products in this way. It's funny to say that, because not only is there little risk of our products getting away from us, to get them just to their intended design behaviors will be an immense undertaking, making Vista look like DOS - or a simple "Hello, World" program. We won't have to monitor them to make sure they don't get out of hand, that's the movies. We will want them to help. In fact, some of these techs may positively require their significant help in order to be achieved - they could be too tough for humans alone. I say that to hit home the tech complexity, not to slam humans, I'm mostly referring to some of the most advanced droid techs covered later.
...
Re: Software
"A child is born, grows, develops, and has a nervous breakdown when it realizes that it is entirely digital inside a cage of someone else's making. Ok, this is an exaggeration, but the fact remains that creating an analogue of a person will end up as... well... a person. He or she may not like you, may or may not be ambitious, may or may not appreciate its entirely digital nature and so on."
Interestingly, my kids were watching Short Circuit 2 last night on TV, and I caught a part where the robot is essentially having a nervous breakdown because people are not treating him as human. It was handled in a plausible way.
Basically, if you put a simulated human brain in a synthetic body and then send it out into the world to learn, it will be treated as either disabled or otherwise ostracised, and that will affect its development.
It makes sense, really: if the AI had all human sensitivities, it would probably be pretty dissatisfied to find out it was stared at, treated differently, not trusted, etc.
Re: What If the Singularity Does NOT Happen?
Re: the subjectivity of human experience, with reference to the Pollock painting remark...
I guess we'll be pushed in the direction of the smart and rational humans, who use objective value judgements based on rich human experience and ever more detailed knowledge of the mind. Not coercively, or by force, but just because disorganization is not as interesting as information organized around a useful purpose. If a shudder of excitation runs down someone's spine and they experience a certain mental state because they look at a painting, and someone else experiences the exact same thing after taking a drug, and an outside observer learns what is going on in both cases and achieves that stimulation using information technologies, then that observer has a better understanding of the desires behind both of the previously described experiences than do the people experiencing them. As such, he can repackage the experience and sell it to an ever more knowledgeable group of end users. The market for entertainment then moves towards greater understanding, specification, and control of enjoyment, based on universal values. Entertainment and human experience then gradually move in the direction of certain values.
More people today reject nihilism than ever before. Why? There is less knowledge and less richness contained in that world view.
The more knowledge there is, the less reason there is to be slug-like, and the closer we get to optimal human values. There is still richness, but it is located in a certain direction, around communicable values.
Re: What If the Singularity Does NOT Happen?
Every year or two I check up on the latest on the Singularity, just to see what progress is being made on it. This time I ran across Ray Kurzweil, an essay of his, and this website, with quite a few interesting papers by others published there as well.
Well, there is not much point in reading all this stuff and trying to understand it if nothing ever comes of my efforts, so it is my duty to file a brief report and send it off.
I should explain that I am a simple layman, an armchair mathematician, who decades ago dropped physics because it looked too suspicious. Big-bang theory, indeed. How can you propose a theory that has to disregard every law in order to get started? There was no way I wanted to end up working with that lot of deceivers. It is good to see that some progress is being made away from the big bang. There was obviously more than one universe, in the limited sense that we conceived it a few decades ago. Only an egocentric could have believed in the big bang. Mind you, I detect a lot of egocentrism in statements currently being made about anthropism. The very word is egocentric.
But here is my current take on the singularity:
--- P v NP and the search for AI ---
A lot of time and energy has been wasted on this topic. It is not a real problem and should not be treated as such. Sure, there are lots of NP problems around, and they are really interesting. But what we are really interested in are the solutions, solutions which characterise behaviour and appearance. These solutions tend to be relatively simple and orderly, patterned even. NP problems admit a huge number of solution sets if you ask unhelpful questions. In fact, the larger the solution set, the less helpful it is. The useful solution sets are the ones that are a bit more complicated than trivial, which give us an understanding of some order. The huge solution sets end up only telling us that anything can be made to be anything else, which is not helpful. Now, it is interesting to note that in most cases the time it takes to solve an NP problem depends more on the size of the solution set than on the size of the initial problem. If the NP problem is going to generate a huge solution set, then yes, it is going to take NP time to spit it all out. But the pragmatic solution sets can generally be spat out in P time, and even in O(n) time where n is the size of the solution set. The Chu space implication operation is an example. This operation is profoundly NP, and even small problems can take impossible amounts of time to calculate. But the problems with the most interesting answers only take roughly as long as it takes to spit out the limited solution set.
So we should not allow ourselves to be intimidated by NP problems, but simply address them in a pragmatic way to find the particular solutions we need in order to get on. Projects trying to prove or disprove that a particular problem is NP are a waste of time. In particular this applies to directions being taken in AI research. Instead of allowing ourselves to be overwhelmed by the big question, we should be concentrating on making small AIs that work and then generalising them as much as the pragmatics allow.
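The output-sensitive intuition above, that enumeration time can track the size of the solution set rather than the size of the search space, can be illustrated with a hedged toy: a backtracking subset-sum enumerator whose pruning keeps the search close to the solutions on favourable instances. The function name and example are mine, not the commenter's, and worst-case behaviour remains exponential; nothing here evades NP-hardness.

```python
def subset_sums(nums, target):
    """Yield every subset of positive ints in `nums` summing to `target`."""
    nums = sorted(nums)
    # suffix[i] = total of nums[i:], used to prune hopeless branches early.
    suffix = [0] * (len(nums) + 1)
    for i in range(len(nums) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + nums[i]

    def backtrack(i, remaining, chosen):
        if remaining == 0:
            yield list(chosen)
            return
        # Prune: smallest remaining element already too big (nums is sorted
        # and positive), or even the whole suffix cannot reach the target.
        if i == len(nums) or nums[i] > remaining or suffix[i] < remaining:
            return
        chosen.append(nums[i])
        yield from backtrack(i + 1, remaining - nums[i], chosen)
        chosen.pop()
        yield from backtrack(i + 1, remaining, chosen)

    yield from backtrack(0, target, [])

solutions = list(subset_sums([1, 2, 3, 4, 5], 7))
```

On an instance like `subset_sums([1, 2, 3, 4, 5], 7)` the pruning cuts the 2^5 search tree down to a handful of nodes around the three solutions, which is the pragmatic, solution-driven attitude the post recommends.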
Fortunately there are now huge commercial incentives for this work to proceed, and the problem should be solved shortly. The solution will probably be ubiquitous in 5 or 10 years' time.
--- The set of physical constants - are they really constant? ---
Einstein said the speed of light is fixed throughout the universe - it is inviolate. What nonsense. This is an obvious clarion call to find the exception to this conceit.
Basic algebra taught me one thing. Constants and their big brothers, closed finite algebras, are incompatible with reality. These structures only ever cause systems to grind themselves into a concentric rut. The enduring property of reality is that it is always new, fresh, surprising us everywhere we look. And that is how it has to be, because the obverse of this is circular repetition. e and pi are the exemplars of reality; that's why they are called real numbers.
So two things we can be sure of are, first, that the speed of light is not constant but varies somewhat with time. We should be looking very closely and trying to observe this change. Secondly, it is not the same everywhere in the universe. It will vary from place to place. We know it varies in different media, but more than that, we should be aware that it may not be the same everywhere. This lends uncertainty to all our astronomical observations. If we are to say anything conclusive, we need the speed of light to be the same everywhere. It probably is not, but by how much, and to what extent this alters our observations, we don't know. The same can be said for the gravitational constant.
(One thing I wonder is: how do we know that our astronomical observations are not influenced by the gravity of the sun? At what point is light bent around a star? Do we see the same night sky from Mercury?)
--- Fermi's Paradox and Gravity ---
Well, this is no paradox at all. Expecting us to recognise aliens is a bit like expecting ants to recognise humans. It's a laughable conceit. The paradox only exists because of our egocentric viewpoint. For starters, if intelligent life exists in a form more developed than ours, then it will be here, unobservable, or it will be studying us remotely.
We have this conceit that somehow we are important in the scale of things. What total rubbish. We will most likely be extinct in 100 years, and Gaia will spend the next 500,000,000 years recycling every atom of evidence that we ever existed.
How can we expect to receive radio signals from others like us in the universe if we ourselves only ever broadcast for, say, 200 years in every 5,000,000,000? The prospect is ridiculous, and it is unfortunate that so much time and skill has been wasted on such a project.
The assumption that advanced life would communicate by light waves is itself ridiculous. That is far too slow for galactic travel. Advanced life would at the very least use gravity waves for transmitting information, millions or billions of times faster than light. We don't even know what gravity is. When we can send information on gravity waves, then we might hear from others in the galaxy.
--- Fermi's Paradox and Intelligent Life ---
Let's be quite frank. We can't yet even recognise intelligence on our own planet when it is right under our noses. We still don't know how intelligent dolphins and other sea mammals are. We still haven't decoded their language. We still don't know the extent of their culture. We are quite happy to treat them as though they are fish, yet we know they are more like us than they are like fish.
If we cannot recognise native intelligence, how can we expect to recognise alien intelligence?
Also, we don't know how ants know what fungi to farm in order to produce the food that turns worker ants into flying ants. Apparently ants know a lot more about biogenetics than we do. Yet we don't recognise the intelligence of ants.
Again, our egocentrism prevents us from seeing the obvious.
--- Uploading ---
This is a real hoot.
There are a couple of books that everyone really should read. In this case I refer you to The Odyssey. This stands as possibly the only novel worth reading, because its subject matter is the subject of fiction itself, and the way we suspend our critical judgement in order to embrace a good story. The Homeric bards lay this on us on several occasions in the later part of the story, but no one ever picks up on it. After reading the book, no one can tell you what really happened to Odysseus, even though he himself tells us several times during the course of the book. We filter out the truth and completely ignore it in our endeavour to be entertained, and where does it get us - a most shocking bloodbath - and well deserved.
Describe uploading to any behavioural psychologist and she will die laughing. The prospect is so ludicrous as to be beyond moronic. You have to ditch every single thing you know about humans and their behaviour in order to buy into this fiction.
Again a supposed paradox is set up: the incremental shift compared to the instant shift followed by the killing of the meat-mind. The truth is the instant shift is impossible. If it were done, the human would experience something like permanent coma.
The reason it can't work is twofold. First, our consciousness is not defined by our brain alone. You have to include the whole nervous system. Behaviourists can tell you that much of our emotional memory is held in our nervous system close to muscle triggers, so you probably also need the muscles there as well, or the nervous system would be shooting blanks and nothing would make sense. Also, the vagus nerve splits and goes to both the eyes and the intestines. So there is probably some truth in the saying "we are what we eat", in that our brain makes direct connections between what we see and our digestive system. A human stripped of nerves is probably not conscious. Actually, I think that's how we formally test for consciousness. So to say that uploading a brain means that consciousness is also uploaded is rubbish. What is uploaded is a vegetable.
The second reason it can't work is that even if the nervous system and bodily functions were all duplicated and properly uploaded, the individual human is still not a functioning object. It is profoundly dysfunctional. It takes roughly 18 years, and in many cases up to 30 years, to properly program a human to be functional in its current environment. In behavioural terms this is called scripting. If an individual is placed in a cultural environment where its current scripts do not apply, and no scripts are supplied which fully determine behaviour, then all bets are off. The individual is effectively a total psychotic. Now, this happens in ordinary day-to-day life often enough to keep newspapers full. If you upload a human to a computer which gives it millions of times more power, then the certain outcome is total nightmare.
Again we hold the egocentric view that we are individuals with free will. This is a nice story and it is good that we can maintain this conceit, but it is the opposite of the truth. Why do we keep monkeys (and even more, humans) in cages, for goodness' sake?
So this finally leaves us with the question: would anything actually ever be achieved by uploading? The regrettable answer is no. We can probably do a much better job by programming this supercomputer ourselves the hard way. We supply the scripts; we tell it how to behave. We gradually tell it how it may modify itself, using a library of lambda calculus equations.
--- AI and Lambda Calculus ---
To the best of my knowledge, no proper work has been done on this. This is fundamental. We cannot begin to talk about AI without a good understanding of the lambda calculus.
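For readers who have never met the lambda calculus invoked above, here is the smallest possible taste, using Python's own lambdas to build Church numerals (numbers encoded as functions that apply f exactly n times). This is a standard textbook construction offered as background, not anything from the original post.

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))              # n + 1
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n

def to_int(n):
    """Decode a Church numeral by counting how many times f gets applied."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
```

Everything here is pure function application; that arithmetic (and, with more machinery, any computation) can be expressed this way is why the lambda calculus matters as a foundation for talking about computation and AI.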
--- Memory ---
We haven't a clue what memory is, nor what it is that is actually passed across synapses. Again, we need to talk to behaviourists. The answer is probably something like video clips with lots of contextual tags attached to them. The clips also contain smell, taste, and touch tracks as well as sound and video, and have key objects and movements identified by vector descriptions - a person, a tone of voice, the patina of brass. Shiny reflective things are key cue cards.
Until we have a computer system that mimics these functions and properties, we are nowhere near close to replicating this aspect of the brain. A good first step would be a complete associative index to every scene of every movie ever made. "Don't Bogart that joint." What kind of software would be needed to accomplish this task - to examine movie frames, identify objects and people and actions, and make a connection with a song lyric in another movie and identify a linkage?
--- The 9th Intelligence ---
Howard Gardner should not be so coy about the 9th intelligence. His hesitation is based on his uncertainty about whether certain regions of the brain are dedicated to the contemplation of issues that are too vast or too infinitesimal to be perceived.
We all have a part of our brain which operates during sleep (which maybe explains why Gardner has not observed it), which analyses the day's events, creates associative tags, and files everything away. However, sometimes things remain unresolved and get carried over into the next day's pool of events. There are two ways of dealing with unresolved issues. One is simply to divorce them. Some people find this easy to do, so maybe they have an inactive 9th intelligence. Within this group there is another group: the people most likely to be amoral. Those with an active 9th intelligence will actively seek to resolve these issues, sometimes at the cost of their lives. (Some of them get to be called martyrs or saints.)
--- The Singularity ---
Vinge is no idiot. He writes fiction, and possibly helps his wife write her stuff too. Fiction is about presenting ideas that people will want to believe. It has nothing to do with reality; indeed, quite the opposite. So will the Singularity happen?
Of course not.
But then what are all these exponential curves directing us to in the future?
There is obviously something big going on; if not a singularity, then what? Even Vinge is pulling away from the fiction and attempting to address the reality as a real futurist rather than a sci-fi writer. This is nice to see.
The most likely answer is that mankind will be diverted from any approach to a singularity event by embracing biogenetics: first to cure illnesses and genetic defects, then to improve on nature, and eventually to push evolution down new biological pathways. I expect to see the first examples of the latter at the London Olympics. Supremacy in a sport at Olympic level is kudos for a nation, and nations will conduct experiments to get an edge at the Olympics. China may even front up with something at Beijing, but I suspect this is too soon. My bet is on the swimmers with the webbed toes.
As the century winds on we may discover that we can grow wings given the right womb conditions, as ants seem able to do. Maybe it needs a DNA shuffle, maybe not. I think that quite a few humans will be interested in becoming flying mammals, and many will be interested in becoming diving mammals and returning to a life in the sea.
This interest will divert scientific resources away from the approach to a singularity. However, all the statistics moving in that direction will continue to do so, because biogenetics will be very demanding in that area, so the computers themselves will also be diverted. At present the most demanding applications are genome applications. We cannot build hard drives that can record DNA as fast as the process that decodes it, and we don't have the grunt to properly analyse the data. This will delay any singularity event.
--- The Origin of Species ---
Now, like the Odyssey, this is another "must read" book.
When you do read it you will find that Darwin did not say most of the things that people claim he said. Darwin's book is basically a simple but comprehensive rebuttal of creationism. He was very shy about advancing any theories of his own; in fact, in a number of places he even hints at something like "intelligent design". He did not believe that many of the wonderful and perfect adaptations which he saw could have happened by chance. Darwin had a poor grasp of probability theory which, truth be told, was not well developed even when he went public with the book, let alone when he first wrote his ideas down.
However, we have since been able to study cases of species change and formation, and we now know that they are often the product of catastrophic events. We just happen to be in the middle of a catastrophic event at present, in case you had not noticed.
While some people are worrying about global warming and man's contribution to it, and while others are worried that current instability may trip another ice age that will freeze over most of the industrial world, geologists are sitting around the tea trolley having a quiet chuckle.
Geologists know that we are already in an ice age that has lasted for 56,000,000 years, and that the signal of the end of this ice age is the melting of the Antarctic ice sheet. And it looks like it's going to happen right now. This is not the once-in-10,000-years event that the gloomsayers are talking about, nor the once-in-100,000-years event that the ice-age catastrophists are talking about. What is occurring now is a once-in-50,000,000-years event that will totally redefine Earth's natural history.
Actually it would pay us to learn whalespeak real fast because I think they have something to tell us about global warming.
I was tidying up some old papers around the office just yesterday and came across a note I scribbled many years ago where I predicted New Orleans would be flooded in 2007. So much for my futurist career!
What is clear from Darwin, and from what we now know, is that at some key point a bifurcation occurs. Some members of a species change their behaviour and survive well in a new environment. The ones that don't change may continue to survive in the old environment, but if that changes too much they will be scrubbed out. This change in the human species is already evident in the responses to the various articles published on kurzweilai.net. Some embrace the possibility of change; others reject it out of hand. Our species is already redefining itself in mind, if not in body (yet).
The truth is that no singularitarian will abandon his body and head off into the galaxy, for the reasons explained above. Nor is the human body made for space: moondust is too poisonous, and Mars dust probably is too. However, singularitarians will upload something of their brain, plus a lot more handy stuff, into a new vehicle which is frighteningly clever, and set it off into space. The new vehicle will effectively be an alien life form, but one made by humans. I think we had better find out what gravity is and what dark matter is first, though. I don't think it's moral to send this new species out into the world with a roadmap that's only 10% filled in.
--- Improved Intelligence ---
I find puzzling the many references to the limitations of the human brain on kurzweilai.net. The human brain is only limited by a couple of crude biological necessities. When foetuses are grown in test tubes these limitations won't apply, and we can have huge brains if we want them. I suspect we don't use 90% of the brains we do have. Also, the associative memory bank application I referred to above may take a lot of the load off our brain and enable it to think better, rather than simply acting as an associative memory store.
By the age of 40 we are being forced to compact and erase memories as it is (midlife crisis), so the idea of a separate machine to put our living memories on could be popular. It may also help stop us from aging.
This means that there are a number of avenues for improved performance of our biological brain. No need to talk it down or write it off just yet.
Immortality
The major problems here are the weight of our mistakes and the number of enemies we manage to create. We have to learn to live a scrupulously moral life first.
Summary
So where does all this leave us? With a picture of an extremely turbulent century in which biogenetics will be the predominant technology and AI will take a back seat. There will be many challenges and catastrophes to distract us from focusing so hard on technoprogress that a singularity event of any sort is triggered. In fact, quite the opposite: rather than the future not being visible beyond a certain horizon, our worry is more that we may not advance fast enough to meet the challenges to our survival which we can see facing us in this century and the next.
Re: What If the Singularity Does NOT Happen?
Wow, it's really cool to hear someone with this perspective on AI. Nice post.
I've thought that we'd see cybernetic intelligence through brain-machine interfaces long before any 'AI' (if we ever see 'AI'). The organic brain is well suited to certain tasks, and I believe that the very molecules it is made of have a lot to do with that. In other words, WHAT we are made out of is as important as precisely HOW it's organized. You could make an EXACT replica of a human brain from inorganic compounds, and it STILL would not function EXACTLY like a human brain, let alone better.
But there are some truly amazing advances being made to CONNECT us to our machines in more intimate ways. And, just like the human brain, INTELLIGENCE is about CONNECTIONS.
So we can improve on what we've already got. We can take the best functions of inorganic computers, couple them with the best of organic minds, and get optimal results that way. No reason to throw out the baby with the bathwater.
This is what I call 'cybernetic intelligence', or what people refer to as human-machine hybrids. They will be amongst the best and the brightest, and will be here far sooner than any 'SAI'.
Re: What If the Singularity Does NOT Happen?
Great post, Webscool, as it really brings out the most pressing questions that any "singularitarian" must take under serious consideration before claiming to understand or believe in a future singularity, or any such evolutionarily asymptotic result of technological progress. I do, however, have a few qualms with some of your observations, and also a few additional considerations of my own. I'll try to keep this relatively brief, in the interest both of easing the reading experience and of giving me a bit of much-needed sleep.
On P v. NP:
Much agreed. This question seems more an artifact of our own inability to hold the cognitive dissonance between working within the system that is the human mind and simultaneously observing the hapless fact that we are not yet able to scratch the surface of deeply penetrative parallel/non-deterministic processing. If you've ever read "Gödel, Escher, Bach" by Hofstadter, he spends an entire Pulitzer Prize-winning book explaining the incompleteness theorem and how the very structure of formal systems prevents them from working outside themselves, thereby obscuring any clear route to a complex solution while remaining inside their boundaries of formality, regardless of whether the solution can be identified outside the embedded system. Many scientists and mathematicians are simply too proud to admit how little we truly know, and how small our amassed intellectual power still remains, which obviously carries over into their research when satisfying outdated grant requirements, and also affects our perception of how close we really are to any type of true conspecific phase-shift (singularity). It seems that P=NP as the questioning society transcends itself, and then returns to an inequality.
On the Physical Constants:
I would be far less surprised to find out that what we call "constants" are in fact the only truly moving components of our universe than I would be to find out there actually exist such stable atrocities of anthropic egocentrism. These constants are probably analogous to our particular universe's evolving genetic code, and homoplasic to our own DNA, as is alluded to in Gardner's new "Intelligent Universe" book. To begin understanding our universe, and especially if we ever hope to successfully navigate any hypothetical singularity, we must first acknowledge the fallacy of constancy. Our observation of constancy can be likened to that of any life form with a lifespan short enough to observe either constant darkness or constant light throughout its lifetime on our trivial little planet; we simply do not possess the open-mindedness in our current scientific community to truly understand the magnitude of scaling self-similarity and its dynamics as they relate to our own perception of reality, scientific or otherwise.
On your Fermi observations:
Touché, though I heard the new iPhone has gravity-wave communication, but that could just be another rumor ;)
On Uploading:
Here is where I must strongly disagree with you. Some scientists might laugh, but those doing the laughing are, and have always been, the first to be laughed at when (when, not if) proved wrong. Your two arguments against uploading are very weak from neurophysiological and technological standpoints. 1) If the technology to sufficiently recreate what you call the "meat" of the brain existed, it's not a stretch to think that the rest of the pertinent nervous material could be virtually recreated as well. 2) One would also assume that if we had advanced sufficiently to begin the uploading process, we would not simply be uploading a brain, still operating on biological neurophysiological procedures, into what we currently perceive as the digital realm, with its accordingly untranslatable static. There is no reason to assume a compatible "brain OS" would not be awaiting the upload, thereby transferring the evolutionary and ontogenetic adaptive burden to the design of the artificial environment. The brain would not simply "awaken" to binary, as depicted in the pop movie "The Matrix"; rather, a training environment would be developed that at the beginning was very much like the biological realm the mind had previously occupied, but that also allowed it to grow in ways that had previously been limited strictly by biology.
Now, even though I don't agree with your arguments against, I do have my own. Due to the self-similarity of scales in the directions of infinity and negative infinity (cosmic and atomic respectively), combined with the observation of exponential energy input at increasing and decreasing levels of scale, transposed upon the notion that these patterns seem interminable, I am inclined to believe that exact replication of natural information in space and time is impossible. Similar to anti-de Sitter space, the notion of finite infinity prohibits exact replication without existing as the exact same entity. To clarify, it seems that only one pattern of information through time (a brain) may occupy its own non-linear, multi-dimensional, and infinitely complex pattern within any particular reality. Any deviation from this pattern is, obviously, not the same brain or mind. This is not to say that the average human would be able to tell the difference, but the degree of specificity with which we can re-create the information will exist as the tangential disparity of "identity" through space and time (space or time? omega point?). All that will matter is the threshold at which this new and even more frightening uncanny valley can be surpassed for profits: will we be unable to tell the difference at the cellular level, the molecular level, the atomic level, the spin level, the flavor level, or the color level? Which of these levels contains the majority of memory, consciousness, identity, and so forth? We will be able to upload if we don't kill ourselves off first, but the uploads will be new selves, tangentially disparate to the degree of our replication capacity, and certainly not the "person" in question, which I think may be even more frightening.
Haven't studied much lambda calculus, but I'll definitely be taking a look tomorrow.
Memory is simply the emergent property of synaptic plasticity and information perception, as biology developed a mechanism to crystallize patterns at higher self-similar levels (conscious memory likely has something to do with the dawn of spindle cells and mirror neurons). The concept is simple, but much like the uploading issue, the complexity parameters of truly "understanding" what memory "is" are mind-boggling.
On the Origin of Species:
We'd be here all night, and I'm way too tired, but maybe tomorrow.
On the Singularity:
It will either happen, or we will be killed by natural disaster beforehand, or we will kill ourselves over this very issue. I personally don't think the majority of the world is ready to contemplate these ideas without becoming extremely violent, and I would not be surprised if we've sent ourselves back to the Stone Age within the next 100 years. By the same token, the patterns of a dawning singularity are simply too powerful and transparent to be ignored. If it does happen, it will not be a peaceful transition, and those who wish to fundamentally evolve will have to fight for it, much as every other animal does.
Way too tired to keep typing, but hopefully this was at least interesting to some.
Re: What If the Singularity Does NOT Happen?
This line of thought is exactly what I was remarking upon.
First of all, no one currently speaking on topics such as the singularity or "uploading" even scratches the surface of the true depth of complexity involved in replicating a biologically based "mind". If many are overzealous, even more are completely ignorant of the fact that we simply have no idea what the truly basal physical components of our universe are. If we cannot even begin to understand observable phenomena such as quantum entanglement (regardless of how many of you think we're close, we're likely not), what makes one think we can capture the unobservably subtle aspects of mind to their fullest degree of complexity?
To clarify, I do not think there is anything "ethereal" to a mind, and I am a strict believer that if one were able to recreate a mind in all its complexity one would then have that same mind. But don't let your wet dreams get in the way of your logic, and please do take our human flight of knowledge with a rather large grain of salt, because there's likely a much longer road to successful uploading than one might immediately think after reading a few blogs and books.
Many will make the jump too soon, and regardless of whether the pseudo-mind on the output side convinces the rest of us that it is you, it's very likely that most of the first attempts will be tantamount to self-sacrifice. I can see the headlines now: "Study proves first million transcendentals were no more complex than current technology allowed for; minds lost forever." Bottom line: letting your biological mind rot, regardless of how cynically you remark upon it, would at best destroy an irretrievable degree of your conscious complexity, and at worst destroy you while letting your abstract mental clone take up your life where you left off. If you're that unhappy with your life, go for it, but personally I'd rather let you be the guinea pig while we work out the kinks, and I'll just reap the benefits of expanded cognitive potential and lifespan of the biological mind (the more intelligent route to an expanded mind).
Now, one thought experiment that is interesting from this standpoint is the biologically/virtually modulating neural component, wherein a brain and its virtual copy are simultaneously interconnected in a sort of conscious experiential loop, while an operator flips the computational burden on various parts of the brain between the real and virtual sides of the loop. This thought experiment seems to indicate that the maintenance of consciousness during the transitional period is the key to at least superficially attaining a successful transition. But even in this scenario, as I stated previously, the matter of determining what the lowest common denominator of consciousness actually is still lurks around the corner.
I don't even know if I should remark on the Second Life explanation. This "simple" solution still falls prey to everything I mentioned, and the increasing levels of virtual complexity you speak of are exactly what I was describing as an uploading OS.
This solution doesn't begin to answer any of the deeper questions of self-replication, regardless of how badly one might want to escape the physical restrictions born unto him. Again, you go right ahead and upload yourself into Second Life at first chance, as it's likely you're one of the 30% of users that scientists such as Castronova refer to when talking of those who already consider virtual realms to be their "real life". I won't even get started on how sad and defeatist these attitudes are, or how disgusting I think much of society has become for driving millions of people to depend on virtual stimulation in place of human contact at the fickle whims of consumer culture, but suffice it to say that there are more pressing global concerns that will likely dictate how soon you'll be able to virtually ejaculate, or whether any of us will even live to see that sad day.
Re: What If the Singularity Does NOT Happen?
In response to your final paragraph:
Oh yes, to interact with apes (humans), what a dream come true. Dreams come true! We are so noble and wonderful, and just a joy to interact with, since the majority have a low intelligence... oh, I mean, average. How absurd to say that these entities indulging in alternative reality is sad. How sad that you try to stop the path of evolution. It is futile and will swallow you whole. Do you really think you will stop this system of billions of humans from transforming themselves into entities more than apes? Is it sad to strive for more than this apparent 'reality' can offer? I don't think so. It is noble and courageous to stand up against the Universe.
They are, on a level, criticizing the Universe by attaching themselves to 'other' worlds. Our Universe is a brutal world. Look beyond Earth and I see uncanny violence and brutality. The Universe, she is a merciless lady, but still you can't help but love her sense of humor. Still, we should criticize the Universe and not just gawk in awe because we're weak in comparison to it. We held our gods as sacred for a while, and many of us knocked them down. In time we will do the same with the Universe: with 'reality'.
These current laughable (in my opinion) virtual worlds do pale in comparison to our beloved reality. I tried some, but they are too primitive for my tastes; that will change in time. We will first laugh at them, like I do, then they will grow into immersive worlds, and then they will grow into their own universes. We will be left asking: what is reality? What is a universe, and are they 'real'? Is our world reality? If you want it to be, yes. If you say something else is your main reality, like Second Life, you sound absurd only because it's not a powerful world in comparison to what we call reality. It is all relative.
If you die as a human you can't be in Second Life, as you said. There are important issues in the real world (like staying alive!), but there are enough other people to deal with them that others can be the pioneers in creating other worlds, criticizing this Universe with the ultimate act of disassociation from it. It seems damn insane in some respects, and I wouldn't do it, but I'm quite sure the future will smile upon the ones who had the balls to try.
Technology, I think, will eventually allow some species to become relative immortals, to make the world easily transformable (through nano, pico, etc.), to learn the ability to spawn other universes (through quantum computing?), and to turn these digital worlds into their own realities, minus the need for an ape body. People will fight to call this Universe the ultimate reality, but as the multi-universe theory already implies, and as the millions of humans who are beginning to embrace alternative worlds suggest, it is not at all.
Humans may not be able to embrace this, but by the time an entity becomes so advanced it will no longer be human as we know it. Once intelligence can be taken out of the ape brain, all bets are off... The wheels are already rolling, and there are too many people who desire it for us to stop even if we wanted to. It is because of man's own suffering in this Universe that she will lift herself up, create her own worlds, and rise above herself.
Of course I could be wrong and completely mad. I don't even use those virtual worlds, but I suspect they are leading to something much bigger. Either way it is a great commentary on human existence, and courage by humanity, even if it goes nowhere.
Lastly, in this opus of a post: I think we must sustain ourselves as humans because it is all we have right now, but it shouldn't be our only option. It is only because of our own lack of intelligence and limited tech that we can't give each other more options in this Universe: the options to be something more, to be smarter, to travel to different planets, to live for thousands or millions of years, to play in different worlds, and so on. One reality, one life, one ape body: as wonderfully noble as it may seem to live life primarily to survive and replicate genetics from an ape body, it is very limited and crude for many humans.
We should give entities a chance to be more, and to experience a much more sublime existence in a variety of worlds. That's just my opinion, but maybe we should just give up our ambitions and accept this ape gig as our sacred and ultimate reality. nahhhhhhhhhhhh
:D
---- Just to note: I know my post is highly conceptual. I tend to be much more conceptual than technical. Also, I'm not really big on forums because they usually add up to arguing, but I read Mind-X a lot and I had to finally stand up for the virtual lovers and share some thoughts along the way. So I may never post again, but regardless, have a pleasurable life everyone!
Re: What If the Singularity Does NOT Happen?
Sorry, it's you that doesn't "get it". The Turing Test is utterly irrelevant. A duped/uploaded you might be convinced it was still the same old you, and might be indistinguishable from the same old you for everyone else too, including your Mum, but it ain't, notwithstanding and nevertheless. Because, absent some artificial constraint like making the "copying" process a destructive one, it can be repeated n times, where n stands for aNy old number. Each dupe will then consider itself the true and original you. None, not the first nor the last nor any other, will be correct.
The only possibly persuasive solution would be a gradual, inch-wise cyborging process, in which continuity was never lost, ending up with a totally hardware self. But then the devil-in-the-detail pops up: hardware and its resident software can be precisely and non-destructively copied. So, in actuality or in potentia, there are once again n "you"s, with no valid way to distinguish or discriminate amongst them.
It's a bigger version of a core existential challenge: prove to me, or to anyone, or to yourself, that the last time you fell asleep you didn't die, with a replacement identity generated by your brain/organism on waking, with full, normal access to "your" memories and skills. New-you would have no reason to doubt its continuity, but would in fact be doomed to itself vanish during the next sleep. And so on and so forth, until the Big Sleep.
The only way to avoid or put off the above is to stay awake non-stop. Good luck with that.
Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
This is a good reason for civilized people to carry guns. If power were decentralized to everyone, and everyone took a little time to study Jeff Cooper, then the biggest muscles would not equal the biggest might. The great equalizer is still relevant in this world of thugs: government thugs, thugs in a bar, thugs in the workplace. Everywhere, the appeal against thought and to the initiation of force is omnipresent. Only a very few people eschew the use of force in their lives: they are called consistent libertarians, market anarchists, freedom lovers, abolitionists, capitalists. There are many words for them, because there is a generally poor understanding of the non-aggression principle amongst most "people", who are technically not citizens, not civilized, not kind, not decent. There is a great divide.
One of the things I blame most: the perversion of the non-aggression principle by Christianity. "Love thy neighbor as thyself". This perversion of "Whoever initiates force, or champions the initiation of force, is wrong" is responsible for most of the evil perpetrated by stupid and thuggish people.
You see, the perverse "love thy neighbor" statement invites comparison of the neighbor to the self. Since there is social pressure to allow oneself to be manipulated by government parasites, the contradiction that the statement allows is common. Example: "I would want my neighbor to be jailed for drug use, and I would want myself to be jailed for drug use (if I used drugs)." Of course, they don't use (illegal) drugs, so they feel superior for their obedience (to the inferior, contradictory law that contradicts the higher law of the US Constitution, which protects property rights).
The "love thy neighbor" law is actually a non-intellectual, unphilosophical, inadequate attempt to define the non-aggression principle. It is ambiguous. Obviously, with the interpretation most people give it, it is meaningless. Everyone has different experiences, desires, hopes, careers, practices, and goals. The way most faith-based willfully ignorant nonthinkers apply the rule, it is utterly without consistency.
The educated minority sees this contradiction, but as long as the masses of thugs are unable to see it, then we will have the laws that violate our neighbors for being different than ourselves.
The path of true freedom is acceptance of the correct and logical political solutions that have already been discovered.
Ally yourself with those who don't want to initiate force. You can find them at your local Libertarian Party/Anarchist/free market/objectivist/Ron Paul meeting. (The Ron Paul people might be more inconsistent, since Ron Paul has an attractive personality that may draw some unphilosophical but open-minded people in...)
The ideas of free discourse need to be identified before they can be protected.
It is a crying shame that I was not in the bar when you were assaulted.
It is an even bigger shame that myself and ten of my gun-carrying intellectual friends were not in the bar when you were assaulted.
We might have had something to say about anti-intellectual bigotry.
Of course, anti intellectual bigotry still wins NEARLY EVERY SINGLE ELECTION in the USA, and worldwide, so there is MUCH, MUCH work to be done in defending the right to think, right to speak, right to work, and right to justly gained property.
Let us become proponents of FORCE AND MIND. (Unused force, except in the case of defense.)
You see, the anti-intellectuals rely on the inaction of the just to perpetrate their injustice.
If you are a robot builder, consider how the purpose of the gun manufacturer and your purpose overlap, if you are a moral builder who builds in defense of civilization. The gun is a simple device which properly functions as a decentralizer of retaliatory force. The robot is multi-purpose, but can serve the same function as the proper use of a gun. Many attempts have been made to confuse people about the proper purpose of the gun, and to call the gun evil, rather than identify the ideas that can make the use of a gun evil. Those attempts are the same ones that have sprung up around recent attempts to control the emerging power of the robot. Anyone can use a gun to some degree of effectiveness, but it takes intelligence to use a gun or a robot properly, and more intelligence still to use a robot properly in the defense of justice. Decentralizing gun rights has prevented immense tyranny. Decentralizing robot rights has the potential to prevent greater tyranny, and this explains why access to forceful robotics is currently the sole domain of the military: the great minds of our time do not understand the simple lesson of the Warsaw Ghetto uprising.
I strongly recommend that the builders of robots here read the novel "Unintended Consequences" by John Ross. A lack of understanding of the concepts in this book has the greatest chance of derailing a constructive and benevolent singularity.
For instance: a common misperception is that "It's OK if people in government steal a lot from us, we can afford it".
Well, right now you can. But that might not always be the case. Moreover, the path behind you is littered with the dead bodies of those who could not afford it.
Another misperception: "Although I wouldn't behave irrationally with power, other people might, so if the government folks tell me that's the case, I should believe them, and give the power to them, because they're the experts."
Unfortunately, this leads to the abuse of power by those who asked for the power. Those who ask for and seek power are NOT TO BE TRUSTED. Stop trusting them! History shows that trusting those who seek absolute power is the greatest cause of misery in the ENTIRE history of humanity.
See:
http://www.jpfo.org/filegen-a-m/deathgc.htm#chart
http://www.hawaii.edu/powerkills/VIS.TEARS.ALL.AROUND.HTM
I included the above in the form of charts and graphs because, like Tom Green, Kurzweil and Moravec seem to like "charts, charts and graphs". Of course, there are giant books where the principles illustrated in the charts and graphs above are expounded upon in great detail, revealing everything that the charts and graphs reveal, but in the form of words, not pictures.
Since we're not as advanced as AGIs will likely be, most of us don't analyze images properly, though. So feel free to read Atlas Shrugged, The Shadow University, Unintended Consequences, The New Prohibition, etc., if you don't understand the charts and graphs.
Also good are the root pages where the charts and graphs are displayed. They, again, provide the evidence that the charts and graphs reveal more quickly.
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
Well, I have a "dark view" that is not heard too much, because the other people who share my view are dead or in jail, not typing in the comfort of their computer rooms. Then again, the people who are not troubled by rape-room prisons and institutionalized injustice have a "sunny view" of the world. After all, it's not they who are suffering (until it is; then they are silenced, and then they get religion about showing the real story, once their voices and votes are silenced and they risk the further antagonism of a state that has already discredited and disenfranchised them).
I think the voices that are drowned out and silenced are some of the most important voices to hear. They have the most relevant information to affect our assessment of risk. Minsky calls this the value of "negative learning" (learning what not to do). Except... everyone learns not to fight the system: just cower down and hide, hope you go unnoticed, and wait until everyone is being destroyed before you dare challenge the system.
Which is a good personal strategy. But it doesn't prevent the collapse of the system.
When was the last time you were in a courtroom, and witnessed firsthand the injustice?
As anyone who knows anything about the subject can tell you, we have lost the jury trial in America. Does anyone else here understand the gravity of this situation? And, of those who do, what smaller percentage cares to do anything about it?
Nobody. There is too little understanding.
Could it be that it's easy to label a confrontation of your cowardly morality "SPAM"? Then you don't have to think about what I'm saying.
We have the largest prison population in the world, if I am up-to-date, mostly for nonviolent property ownership. When someone dares stand up for the principle of justice, other people look at him like he's from Mars:
http://www.youtube.com/watch?v=SzeNanTsWYA&feature=dir
Of course, I expected no less.
With friends like these...
Maybe I'd just better lay off the freedom SPAM. After all, if I'm casting seeds of discontent (and logic and self-interest) into contentment and servility, I'm wasting my time.
Good night, and bad luck. May the worst of the police state you've asked for befall you. May you find out what it's like to be strapped down to a metal table and injected with dangerous psychoactive medications "for your own good".
That would be a good time for you to wake up and realize I was exactly correct. When you are powerless, friendless, and SCREWED.
Sorry I bothered.
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
Actually, that's wrong. James and I had a dialogue a few years ago. I roughly agree with large areas of his political philosophy, to the extent that I think he's a good person, and it's not worth getting bogged down in the relatively small differences. He has a different view of the US border than I do (I think): I don't think it is legitimate to stop people from immigrating in any way, unless they appear to be planning an actual terrorist attack (and there's evidence for this, and due process is followed). My view is more traditionally objectivist/libertarian. My views on politics are virtually identical to Harry Browne's views in his books "You Can Profit from a Monetary Crisis", "How I Found Freedom in an Unfree World", and especially "Why Government Doesn't Work". I am closer to Lysander Spooner than to Ayn Rand. I agree with most of what Stefan Molyneux says in his videos, although I don't have time to watch them all.
Anyway, all y'all that don't understand the concept of individual freedom should crack a book. It's important stuff.
J. Jaeger = J. Witmer?
Dear PredictionBoy,
Actually, that's not true. Take a look at the following posting by "Jake Witmer" and the response it elicited from "James Jaeger":
Among the presidential candidates, only Ron Paul consistently supports health freedom, property rights, and the freedom necessary to pursue one's own destiny. There are only a few days before the Feb 5th primary. If you want your freedom, you need to he
posted on 01/16/2008 11:23 AM by Jake Witmer
Re: Cut the Horse - Why cut the horse, when you can ride it? -giddyup!
posted on 02/01/2008 8:42 AM by James_Jaeger
Jake, you are absolutely right
So, you see: "Jake Witmer" and "James Jaeger" are definitely NOT in cahoots with one another!</sarcasm>
Regards,
Redakteur
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
I write with my intended audience's intelligence in mind. What's that? You've only been getting SPAM? I'm sorry! Maybe you should digest that before we move on to something more substantial.
And that's my real big problem with the "thinkers" on this board: too few of them indicate that they understand that freedom is a precondition of thought and the investigation of truth. Seriously, read Atlas Shrugged, read Unintended Consequences, read Harry Browne, read Lysander Spooner, and then we can talk.
Until then, even though you may tower over me in other subjects, you are pointed in such a wrong direction that no good can come from any social networking. After all, nobody who is rational is friends with people who want to shoot them.
Not understanding the nature of the state is a dire error in value judgement.
And I mention the ugliness of rape to attempt to force people to think about what it means for other people to be sent to prison in their name. That's the most extreme ugliness on earth, yet everyone just kind of shrugs it off, trundling down the same path that the Weimar Republic blindly followed to its inflationary crash and gas chambers.
It's not intelligence that the people I am criticizing lack, it is intellectual honesty and bravery.
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
yeah, wow, that worked better than i thought. u got real diff real quick, jake.
ure downright hateful here.
and even more spammy, mmm, spam. some people think theyre talented writers, but to others, its just a big spam sandwich.
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
You said:
"One of the things I blame most: the perversion of the non-aggression principle by Christianity. "Love thy neighbor as thyself". This perversion of "Whoever initiates force, or champions the initiation of force, is wrong." is responsible for most of the evil perpetrated by stupid and thuggish people."
But you also said:
"This is a good reason for civilized people to carry guns."
Doesn't that make you an 'evil... stupid and thuggish' person?
This tribalism we all seem to have (it is NOT Racism: we dislike such and such a person or so and so country because they are NOT of OUR Tribe, or because they do not worship the same Deity, or because they have a different political/Philosophic point-of-view), regardless of Race, Colour or Creed, is what stops Humanity becoming all that it could be.
Until we, as a species, are able to Function as a Co-herent Entity, we are doomed to ultimate Failure. Every different Religion, every alternative Philosophy, every Individual Thought is Anathema to the principle of Co-herence. Only when we have achieved Co-herence will we be un-stoppable as a species.
But it must be done with Humility, Sympathy and Gentleness. It is the only way that we will be able to survive and spread from this place of our Birth, HomeWorld, Terra Firma, Planet Earth.
You know it makes sense . . .
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
You said:
"One of the things I blame most: the perversion of the non-aggression principle by Christianity. "Love thy neighbor as thyself". This perversion of "Whoever initiates force, or champions the initiation of force, is wrong." is responsible for most of the evil perpetrated by stupid and thuggish people."
But you also said:
"This is a good reason for civilized people to carry guns."
Doesn't that make you an 'evil... stupid and thuggish' person?
Not in the slightest. I haven't contradicted myself. You seem not to understand that carrying a gun is a 100% peaceful precaution that is necessary when one is surrounded by thugs. Of course, it depends on how thuggish they are, and perhaps escape is the best option in a totally uncivilized society (such as societies where defense is banned, like Chicago, IL, or Nazi Germany, or Stalinist Russia, etc.).
Simply because I am capable of defense doesn't mean I am a thug. A thug initiates force, a civilized man responds to force. There is an ocean of difference between initiated force, and retaliatory force. (For instance, is it more civilized to dial 911 while a woman is being raped, or to repel the attacker?) If you answered the former, then you are probably incapable of critical thinking, as are most graduates of the government youth propaganda camps (like "redactor" and "prediction boy" on this page, who have been fed so much crap, they simply don't know how to gauge value).
This tribalism we all seem to have (it is NOT Racism: we dislike such and such a person or so and so country because they are NOT of OUR Tribe, or because they do not worship the same Deity, or because they have a different political/Philosophic point-of-view), regardless of Race, Colour or Creed, is what stops Humanity becoming all that it could be.
No, the primary obstacle to human advancement is unreason. If a culture is reasonable, it should be advanced. If a culture is unreasonable it should be fought (not with force, unless it initiates force, but with logic and reason).
Until we, as a species, are able to Function as a Co-herent Entity, we are doomed to ultimate Failure. Every different Religion, every alternative Philosophy, every Individual Thought is Anathema to the principle of Co-herence. Only when we have achieved Co-herence will we be un-stoppable as a species.
This is new-age mumbo-jumbo. It is a call for collectivism and the denial of individual specialization, individual desires, and individual life. It can only be a statement made by a mindless automaton, a victim of government youth propaganda camp brainwashing.
And who said being "unstoppable as a species" would be good in any way? The whole mentality is hopelessly screwed.
What if your idea of unstoppable is to have a gigantic baseball team that no one can beat? What if I don't want to play baseball, but want to stay home and read "Engines of Creation"? You're at such a lower level you can't even comprehend the beauty of the free market, and you're preaching that I should destroy myself like some sort of selfless schmoo, for the good of the unintelligent herd?
I take umbrage to that notion.
But it must be done with Humility, Sympathy and Gentleness. It is the only way that we will be able to survive and spread from this place of our Birth, HomeWorld, Terra Firma, Planet Earth.
This is mindless mumbo-jumbo of a particularly substanceless form. When mixed with government force, this is the kind of thinking that allows someone like Pol Pot to shoot everyone who is wearing eyeglasses.
You know it makes sense . . .
No, it is crapola. Moreover, it is crapola that doesn't allow for human evolution, or evolution of thought. It is the kind of thinking that enables thugs with guns.
Read up, son, you have a universe of learning ahead of you.
Start with "Fashionable Nonsense" by Sokal.
...And you can stop chanting now.
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
No, I'm not cherry-picking the data. I'm pointing out what the data indicate. A bully can take a gun out of someone's hands and not kill them, or he can disarm them and kill them, but he has a much harder time killing them if they are still armed. Besides, you can't name one case where a democratic government restricted the right to bear arms and failed to ride herd over its citizenry.
Even in the USA, the first "gun control" laws enabled the wholesale slaughter of Native Americans, and later allowed Southern sheriffs to lynch black citizens, since anti-black, anti-gun laws were the first Jim Crow laws on the books. Is the USA one of your examples of a democratic country that wasn't guilty of democide?
How about the UK? "Guns of Brixton" was written for a reason. Crime waves in their ghettos aside (caused largely by drug and gun prohibition, and the defenselessness of their citizenry), the UK has now laid the groundwork to allow future fascism and democide.
It isn't a universal rule that after every successful gun ban there will be a democide; it's a universal rule that after every successful gun ban there can be a democide. (And that, oftentimes in our history, there has been one.)
It's also a universal rule that after such a gun ban, the government need not fear its citizenry.
Of course, to what extent and how fast they will "kill the goose that lays the golden eggs" depends on several other factors:
1) How wealthy the country is (ie: How much wealth can the government steal from the people before the only thing the people have to give are their homes and private possessions?)
2) What remnants of true democracy does the country possess?
--free and open elections?
--jury trials?
--freedom of the press?
--freedom to campaign?
--independence of grand juries?
(the US fails in most of #2, but not all. Therefore, we will have a longer "grace period" than a country that starts a gun ban with none of the above.)
Whether or not a democide will happen again is less certain than _when_ it will happen. But the law that allows it to happen is the unilateral elimination of the people's ability to defend themselves (usually overlapping with the loss of their ability to elect their leaders).
It's a one-way gate.
After private gun possession is eliminated (or reduced to ineffectuality), those without guns can be killed, and often are.
History is full of examples. North America is full of examples. The UK is the sole exception --to my knowledge-- of a country that banned guns without a democide (however, strong-arm crime has skyrocketed there). Japan experienced a democide after guns were first banned, but now enjoys relative peace/equilibrium, with the state holding absolute power. Of course, Japan is not exactly free, but it is relatively free in some ways --just like the USA.
My primary argument (moral) is that no one has any right to arbitrarily proscribe anyone else's right to own private property, barring a violation of their own individual rights.
My secondary argument (pragmatic) is that denying one the right to self-defense is grossly suboptimal, because it eliminates one's ability to take responsibility for his own safety, and also eliminates the strongest guarantee of actual safety (increasing violent crime, which we can all agree is suboptimal).
My tertiary argument (pragmatic) is that making democide (murder on a large scale, sponsored by the governing institutions or ruling class of a country) possible is a grossly suboptimal way to conduct human affairs.
The pragmatic arguments are the result of the moral argument. Their existence is based on external data. It is my experience that the external data always have a fundamental rule at their heart.
People are not basically good or basically evil, except that they are basically as good as their will to survive requires them to be.
The corollary of this rule is that "people are as good as the political system they live under".
If a system legalizes theft, as the Soviet and Fascist systems did, then people will be bad, evil, and murderous, and the system will collapse.
If a system preserves private property, it preserves life.
But without the ability to preserve tools of self-defense under the category of "private property", there is no way to preserve private property. This is because:
1) No government protects private property by its own free will. In fact, if government collects taxes under the threat of force, it is already violating the notion of private property. Therefore, the larger the amount a government is allowed to collect in the form of taxation, the more it collects. There is no limit to the willingness of politicians to steal.
2) Guns are inanimate objects made of metal and gun powder. They serve as useful a purpose as hammers, nails, cars, and oil paints. If you can't own the components of a gun, then you are not free to own private property (you are only allowed a privilege of owning certain things --and a privilege may be rescinded at any time). If you can't own the knowledge about how to assemble a gun, then you have no freedom of speech or information.
3) If you can't lawfully defend yourself to the best of your natural ability (man has a natural ability to make guns, so other men will make, buy, or steal them. If you can't also possess one, you are at a tactical disadvantage, and are technically unable to defend yourself) then you have no property rights, because everyone owns their own body.
In order for there to be such a thing as a "democracy" or "democratic republic" there would have to be individual rights that are not subject to a vote.
Democides don't always happen immediately or close after guns are banned, although they often do.
There is very limited private gun ownership and extensive black market gun ownership in England and Japan (two relatively small islands in comparison to Soviet Russia).
There were guns in 1930s Russia to a large extent. The military also had guns. But the people were unable to see that the government was composed of people unlike themselves. They went to their slaughter because of this credulity.
Guns are not a guarantee of freedom. ...But individual freedom is impossible without them.
And lack of guns is not a guarantee of democide, but democide is possible without them, and impossible with them. (A potential attempt at democide against an armed population becomes _armed_conflict_)
Hopefully this gives you a better conception of the Venn diagram involved here.
Re: Robots, Guns, Robots and Guns, ...helping nerds win barfights in the attempts to allow intelligence+force to impress girls, not just brute force
Jake wrote,
No, I'm not cherry-picking the data. I'm pointing out what the data indicate. A bully can take a gun out of someone's hands and not kill them, or he can disarm them and kill them, but he has a much harder time killing them if they are still armed. Besides, you can't name one case where a democratic government restricted the right to bear arms and failed to ride herd over its citizenry.
---
This is a false argument. Correlation is not causation. You made the claim that restricting gun ownership was bad; hence, you must prove that restricting the right to bear arms led to those governments "riding herd" over the citizenry.
Every nation in the world at some time has behaved immorally, and I am not defending their immoral behavior. I am defending the right of the citizens of a nation to decide that they prefer to have some government restrictions on various kinds of weapons.
Do you have examples of nations which allowed their citizens to bear arms without any restrictions? What happened in those settings?
It is a right of the citizens of the United States to be able to change-- even to restrict--their own rights and obligations through the appropriate legal channels. Even who is considered a citizen has changed since the inception of our nation.
You must realize that you also may have reversed the causal relationship: rather than gun bans leading to "democide," it could be that those planning democide attempt to restrict guns. Again, in order for your argument to hold water, it is you that has the burden of proving that one could not restrict gun ownership without causing democide (a pretty strong claim, I must say, and one that would require a rather elaborate experiment to prove).
You also must prove that there couldn't be "democide" any other way: after all, if there could be, then you haven't proven that guns prevent democide or that lack of guns causes it.
It may also be that in the absence of such rules, respect for the law may break down and the government may lose the ability to protect citizens entirely.
---
Jake wrote,
My primary argument (moral) is that noone has any right to arbitrarily proscribe anyone else's right to own private property, barring a violation of their own individual rights.
----
Actually, in our society, we do have exactly that right, to make laws proscribing the rights of individuals. We make laws as a community. Everyone is not free to just do whatever they want. We are free, in the United States, to claim private property for public purposes (eminent domain), use private wealth for public purposes (taxes) and restrict risky behavior by others (speeding, smoking in public places, requiring restaurant workers to wash their hands).
Society doesn't have to show that you harmed someone (as in torts) in order to give you a ticket, fine you, put you in jail, or take away your privilege to drive forever.
---
Jake wrote,
My secondary argument (pragmatic) is that denying one the right to self defense is grossly suboptimal, because it eliminates the ability of one to take responsibility for his own safety, and also eliminates the strongest guarantee of actual safety (increasing violent crime, which we can all agree is sub-optimal).
---
No one has denied you the right to "self-defense" whatever that may be (it surely isn't in the U.S. constitution). You could learn martial arts, or get a baseball bat.
As the pragmatic part of your argument admits, whether guns are effective as "self-defense" is arguable. You would have to prove that there is a lower cost to allowing guns than to restricting them, and you haven't met that burden. You have provided only hypothetical and anecdotal evidence that guns could be beneficial, but you have no real evidence of a cost to restricting guns, and certainly haven't proven that there is no benefit to restricting guns. There are many examples of societies with few limits on gun ownership (Somalia), and those societies are ridiculously riddled with violence to innocents. Shouldn't Somalia be proving your case?
In any event, in a democratic society, it is up to that society how it deals with the issue of crime and violence; every individual does not get to decide the set of laws on his or her own.
---
Jake wrote,
My tertiary argument (pragmatic) is that making democide (murder on a large scale sponsored by the governing institutions or ruling class of a country) possible is grossly sub-optimal way to conduct human affairs.
---
You say it, but you don't back it up. Evidence?
---
Jake wrote,
The pragmatic arguments are the result of the moral argument. Their existence is based on external data. It is my experience that the external data always have a fundamental rule at their heart.
---
In fact, it's the other way around: your argument for morality relies on the pragmatic argument. If you can't prove that a society with no restrictions on guns is safer for innocents, then your moral argument actually requires them to be restricted.
---
Jake wrote,
People are not basically good or basically evil, except that they are basically as good as their will to survive requires them to be.
---
A claim denied by many. An assumption on your part.
----
Jake wrote,
The corollary of this rule is that "people are as good as the political system they live under".
---
Incredibly questionable. This would mean that innocent victims of the holocaust were evil because they lived under an evil government. Completely illogical.
---
Jake wrote,
If a system legalizes theft, as the Soviet and Fascist systems did, then people will be bad, evil, and murderous, and the system will collapse.
---
All people ruled by Soviet and Fascist systems were evil? Wow.
---
Jake wrote,
If a system preserves private property, it preserves life.
---
Evidence? Name a few? What do you mean by private property? Does that include anything a citizen wants? Hell, people are debating the end of humanity, you are claiming that some system can DEFINITELY preserve life? What do you mean by this mumbo-jumbo?
How about the antebellum south? Was preserving private property (in the form of slaves) preserving life or destroying it? What if large segments of the population have no property (middle ages)? In those cases, it seems, private property rights stood in opposition to a right to life. How is someone who owns nothing supposed to get some of it if it is all already owned?
---
Jake wrote,
But without the ability to preserve tools of self-defense under the category of "private property", there is no way to preserve private property. This is because:
1) No government protects private property by its own free will. In fact, if government collects taxes under the threat of force, it is already violating the notion of private property. Therefore, the larger the amount a government is allowed to collect in the form of taxation, the more it collects.
---
In a democracy, the government is made up of citizens. There is no government absent the citizens making up that government. They are elected. If a population so wishes, it can remove those politicians from office, and replace them with people who will do its will.
You are operating under the delusion that most Americans want their representatives to do something vastly different than what they are doing. There is no evidence to support this. While most people don't like other people's representatives, they love their own. Hence, people are satisfied with those representing them.
Polls support this claim.
---
Jake wrote,
There is no limit to the willingness of politicians to steal.
---
You tempted me to cite Nixon. Almost. ;)
There is a clear limit: the point at which it gets them removed from office, either by their peers or their constituents, or by law enforcement officials. You seem to be operating under the assumption that politicians are in office for life. Evidence?
---
Jake wrote,
2) Guns are inanimate objects made of metal and gun powder. They serve as useful a purpose as hammers, nails, cars, and oil paints. If you can't own the components of a gun, then you are not free to own private property (you are only allowed a privilege of owning certain things --and a privilege may be rescinded at any time).
---
Strongest and most out-there claim you've made yet: that if a citizen can't own anything they want, they can't own anything. Not true, and it means that your ideal government must allow citizens to arm themselves with nuclear, chemical and biological weapons. This is way, way outside the mainstream of any society.
That said, I'm sure Al-Qaeda agrees with you. Our founding fathers surely didn't, as they debated whether or not to include a right to property and elected not to establish one, and in fact gave the government the power (and right) to levy taxes and regulate interstate commerce.
---
Jake wrote,
If you can't own the knowledge about how to assemble a gun, then you have no freedom of speech or information.
---
Not something we're debating, because the argument is gun ownership, and no society I've heard of has banned knowledge of gun assembly, but interesting idea.
---
Jake wrote,
3) If you can't lawfully defend yourself to the best of your natural ability (man has a natural ability to make guns, so other men will make, buy, or steal them. If you can't also possess one, you are at a tactical disadvantage, and are technically unable to defend yourself) then you have no property rights, because everyone owns their own body.
---
Senseless. See nuclear, biological & chemical weapons above.
---
Jake wrote,
In order for there to be such a thing as a "democracy" or "democratic republic" there would have to be individual rights that are not subject to a vote.
---
Why is that a requirement, other than that you said it? Our founders didn't think so. See Thomas Jefferson (re: tyranny of the majority). He suggested that you always have the right to revolt if you believe the government has become unjust.
Another right frequently believed by some to be a necessary precondition of democracy is the freedom of movement (i.e., the right to leave society if one doesn't agree with society's rules; the right to refuse the social contract by going somewhere else).
The society you describe sounds like an ideal, not a pragmatic reality.
---
Jake wrote,
Democides don't always happen immediately or close after guns are banned, although they often do.
---
That could be because your hypothesis is wrong.
Everything else you say is a repetition of what you said above, or otherwise unrelated to the point in question.
You have yet to make the case that gun ownership is a definitive, statistically valid good, or that some reasonable restrictions on weapons are worse for society than the benefits of those restrictions (which you fail to address: is that because you don't think there is any benefit at all to gun restrictions, or because you are afraid to face just that?).
Cheerio.
Re: What If the Singularity Does NOT Happen?
The Kurzweil books contain less fiction than, for example, textbook chapters about the Big Bang or time travel. One can find many more reasons to argue against the latter than against the Singularity. It is difficult to argue against the law of accelerating returns. On the other hand, there are plenty of counter-examples. At the same time, there are obvious explanations for why they happen. I do not discuss possible cataclysms; in my opinion the Universe is infinite and exists infinitely long, and there are possibilities for reaching the Singularity without us.
Not all mathematical ideas are applicable to the real world. Gödel's theorem can be read as saying that one may keep adding axioms: up to a countable set of them, or as many as there are points on a line, or points on a plane, or points in an infinite-dimensional space. It follows that the axioms for describing the real world should be chosen consistently with the real world. In an abstract mathematical theory, any geometry from the infinite set of Riemannian geometries may be interesting; in the real world, it is doubtful that anything differs from endless three-dimensional Euclidean space.
The computing power of the Universe (our local one), viewed as a computer, is above 10^80 cps; the power of all of society's computers is only about 10^25 cps. The question is whether such a computer would be as effective as the brain of a single person. 'Effectiveness' can be understood in different ways, e.g.:
1. The sum of the participants' cps. This makes sense for calculating GNP, but not for systems that face NP problems.
2. Abstract effective power (e.g., the time needed to solve some set of tasks). By its nature, this criterion would give different results for the same system today and in a billion years.
3. Practical effective power. This is defined by reliability, vitality, repairability, and the many other characteristics used to define the (subjective) quality of large complex systems. How non-unique this measure is, and how it is ignored in Singularity discussions!
For informational systems, the following holds:
1. There is a limit on the effectiveness, or IQ, of a single computing system. It follows that a system with, e.g., 10^80 cps cannot exist.
2. From point 1 it follows that a machine society would consist of individuals.
3. Those individuals would have free will and different characters.
4. The latter is possible without appealing to another world.
The background for the above statements can be found at http://www.geocities.com/ilyakog/.
The future of civilization is often supposed to be a single powerful computing system whose productivity is determined by the quantity of matter in the Universe. I suppose instead that a machine civilization would be organized like the existing biological ones, which means that machine society would consist of individuals.
The Singularity Is Near contains a correct calculation that if the owner of a pond returns in a month, lilies will have covered the entire pond. Does it follow that in a year lilies would fill the whole galaxy? For more, see 'REMARKS TO RAY KURZWEIL BOOKS The Age of Spiritual Machines and The Singularity Is Near' at http://www.geocities.com/ilyakog/, under Philosophy. The real calculation speed and the development of intellect for a computing system are not proportional to its calculation power and the volume of its memory.
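The lily-pond arithmetic can be made concrete with a short sketch; the pond size, daily doubling, and starting coverage below are invented for illustration, not taken from the book:

```python
# Exponential doubling looks unstoppable until it hits a resource limit.
# All numbers here are illustrative.

def lily_coverage(days, pond_area=1_000_000, start=1):
    """Area covered by lilies after `days` daily doublings,
    capped at the pond's actual size."""
    covered = start
    for _ in range(days):
        covered = min(covered * 2, pond_area)  # growth stops at the boundary
    return covered

# About 20 doublings suffice to cover this pond...
assert lily_coverage(20) == 1_000_000
# ...and a further year of doublings cannot exceed it: no filling the galaxy.
assert lily_coverage(365) == 1_000_000
```

The cap carries the poster's point: a doubling process is only exponential until it exhausts its medium.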
As the number of elements in a system increases, the reliability of the system decreases, because the probability of temporary and permanent failures of elements increases. This limits system size and productivity. Remember that the system should work for millions and millions of years. The question is analyzed in 'Could the IQ of an Intellectual System Rise Infinitely?' on the site mentioned above.
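The reliability argument can be illustrated with a toy model; the failure probabilities and element counts are invented, and independent element failures are assumed:

```python
def system_reliability(p_element, n_elements):
    """Probability that a system requiring ALL of its elements works,
    assuming each element fails independently (an idealized model)."""
    return p_element ** n_elements

# Even extremely reliable parts yield a fragile very large system:
assert system_reliability(0.999999, 1_000) > 0.99          # small system: fine
assert system_reliability(0.999999, 1_000_000_000) < 1e-6  # huge system: fails
```

Real systems use redundancy and repair to soften this, which is exactly why the poster lists repairability among the characteristics that bound practical system size.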
A limit on the IQ of a computing system leads to the necessity of a computer civilization being divided into parts, i.e. participants. They would have the maximum speed and memory that the current level of technology allows for stable and reliable work. However, those individuals would have different interests, IQs, and characters. There is an interesting question of whether a human-machine civilization is possible. Because the IQ of a system has a limit, if the human brain is in close proximity to that limit, then the communication problems are nearly solved and such a society is realistic.
(more) Reasons singularity won't happen
I'd like to share my thoughts with you about the singularity. In addition to the reasons exposed above by other people, there are more hurdles that will probably prevent it from happening:
- I'm against playing the enslaver's role. Creating a brain, or even a container to hold a self-conscious entity, is a responsibility that I don't want society to assume, for it can prove even worse than any human-rights violation we have committed in the past, if we consider "thought slavery" worse than "physical slavery". In fact, instead of "human rights" we should extend them and speak about "self-conscious entities' rights", and include some clauses regarding "freedom of thought" and "free action".
- Enhancing human faculties by human means can be disastrous. We don't have our senses and our mental ability just for the sake of it; we have them because we need them to survive. Reality has created a need, and we are the response to that need. If we did modify ourselves, that would mean that a set of false needs would shape us. In other words, the so-called hyper-reality or meta-reality, the product of our imagination, would create a meta-human: a human created to satisfy the desires of our imagination and completely detached from "real" reality.
- Our imagination is not powerful enough to create brand-new brains, or even imitations of our brains. They do not make sense on their own. They do make sense when they're in a body and in an environment with other body-brains to interact with.
Although the first two points are not problems in themselves, I and the people who defend these ideas will be the problem, for I'm going to fight hard against the singularity *anytime* we get closer to it.
I may be wrong in holding these theses, but one of the reasons I'm against human modification is that if everything were possible, then nothing would be possible.
Natural selection plus mate selection are two robust mechanisms that we do not need to modify. I agree with one of the posts that we should first understand the world, animal interactions, etc. before taking up further ventures.
It would be interesting if someday we could run the ultimate experiment: create life from scratch and wait for intelligence to happen. Will the result be similar to ours? Will they have the same concerns?
And going even further, if other universes exist, with other physical laws, and we could play the role of the intelligent designer (i.e. arrange some DNA sequences), what would be the result like?
Re: (more) Reasons singularity won't happen
'We don't have our senses and our mental ability just for the sake of it, we have them because we need them to survive. The reality has created a need, and we are the response to that need. If we did modify ourselves, that would mean that a set of false needs would shape us.'
This fails to take into account that our senses and mental ability were shaped by evolution to suit an environment and lifestyle that no longer exist. Cultural/technological evolution has utterly overtaken Darwinian evolution.
We already have a set of false needs shaping us, due to the fact that some of our genes are obsolete. The obvious one being that we crave high-sugar high-fat foods and hang onto every calorie, because we evolved in an environment in which every calorie was precious and sugar was a rare commodity. We do not need to consume the amount of 'junk' food we chow down, but our obsolete genes cry 'yes! Good! Eat more of that!'
We need to be redesigned to fit the technological world that is rapidly evolving.
'Our imagination is not powerful enough to create brand-new brains, or even imitations of our brains. They do not make sense on their own. They do make sense when they're in a body and in an environment with other body-brains to interact with.'
A classic mistake. Nowadays, proponents of the singularity see it arising from GRIN technologies, which is an acronym of Genetics, Robotics, Information technology and Nanotechnology.
Why isn't 'Artificial Intelligence' included? It is. As Kurzweil explained, 'the standard reason for emphasising robots in this formulation is that intelligence needs an embodiment, a physical presence, to affect the world'. Even uploading (if such a thing is possible) is not the 'brain in a jar' that some people think it is. It would involve taking a map of the brain, putting it in a map of a body, and feeding it signals that mimic its neurological inputs. Its outputs could be read and routed to the model body in the model universe with model physical laws, thereby closing the loop.
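The closed loop described here can be sketched abstractly; every component below is a stand-in invented for illustration, not real neuroscience or any actual uploading proposal:

```python
# Abstract closed loop: a simulated brain receives simulated senses,
# and its outputs drive a simulated body inside a simulated world.

def brain_model(senses):              # stand-in for an uploaded brain map
    return {"move": senses["light"] > 0.5}

def body_model(motor, world):         # stand-in for the model body
    if motor["move"]:
        world["position"] += 1
    return world

def world_model(world):               # stand-in for the model physical laws
    return {"light": 1.0 if world["position"] < 3 else 0.0}

world = {"position": 0}
for _ in range(5):
    senses = world_model(world)       # world -> mimicked neurological inputs
    motor = brain_model(senses)       # inputs -> brain -> outputs
    world = body_model(motor, world)  # outputs -> model body, closing the loop

assert world["position"] == 3         # it moved toward the light, then stopped
```

The structural point is that nothing in the loop is a disembodied 'jar': the brain model only ever sees inputs produced by the body and world models.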
'Although the two first points are not problems in themselves, I and the people who defend these ideas will be the problem, for I'm going to fight hard against the singularity *anytime* we get closer to it.
I can be wrong holding these theses, but one of the reasons I'm against human modification is because of the fact that if everything was possible, then nothing would be possible.'
What do you mean 'against human modification'? Do you wear warm clothes in winter? Do you ever take medicine when you are ill? Do you possess a telephone, listen to music....All of these and more constitute technological modifications to ourselves and our environments. What is the difference between wearing glasses to correct vision, opting for laser eye surgery or opting for optical implants? What is the difference between enhancing the brain through the technology of books, brain-training videogames or mind/machine interfaces?
The only difference is that some have been part of your life since day one and you are used to them, while some are not here yet and are 'scary new technologies'. But take something like music players and see how people in the past reacted to them. Claude Debussy wrote, 'Should we not fear the domestication of sound, this magic that anyone can bring from the disk at will? Will it not bring to waste the mysterious force of an art which one might have thought indestructible?'
You probably think this is something of a silly overreaction. iPods do not destroy the magic of music! Similarly, future generations will view technology that we today think of as unimaginably advanced as simply part of the social fabric. But if you really insist on taking a stance against 'human modification' (gonna stop wearing clothes and turning up the central heating?) you will not be around much longer to influence the debate, anyway.
BTW it certainly is not the case that 'anything is possible'. There are fundamental constraints imposed by the laws of physics.
Re: (more) Reasons singularity won't happen
(1) Darwinian evolution and cultural/technological evolution.
In my opinion, cultural/technological evolution is implied in Darwinian evolution: the rules change the players and the players change the rules. We have been lucky, and our contribution to the current rules is impressive. Even so, there are other factors affecting our development beyond the classic predator-prey rule, which is no longer active for us. For instance, our physical preferences in our mates are out of our control, as suggested by the koinophilia theory.
So all in all we have a reciprocal relation with our environment, and we also have limits. On a small scale, the civilization of Easter Island disappeared because it went beyond its limits.
You are right when you say that some of our genes are obsolete for the current situation, but they will disappear 'soon' (in geological terms) from our gene pool if this situation keeps on going. However, with 6 billion people (and counting), and the possibility of wars, climate change, etc., can we be completely sure that we won't need these genes any more?
One of the best adaptive traits of a species is its diversity. We might try to imagine what we will need in the future, but we can be wrong. It's safer to keep all the genes, even the disadvantageous ones, just in case we need them again.
Another point to keep in mind is that we are sexual creatures, and that plays a crucial role in the creation of our desires. And if we could be easily redesigned, that could lead to a runaway scenario. For instance, if height were an easily modifiable trait, we could end up measuring 4 m when there is no actual need for it, and we'd be wiped off the map by the first food crisis.
(2) Brain in a jar
Probably the intention is not to create a 'brain in a jar', but that still raises some interesting questions concerning freedom. Once you have an 'artificial body' (or physical platform to sustain the mind), created by someone, how can you be sure that nobody is modifying your thoughts or your perceptions?
(3) Human modification
It is not just a 'scary new technology'; it goes beyond that. With your examples, you can choose whether or not to use them; it's always up to you. But with genetic modification, for instance, parents force their kids to be as they want. It's not a reversible process.
And the same applies to any device that can enhance the mind's capacity beyond its natural limits. Once one person could buy that device, the others would follow suit. It would become worse than a nuclear arms race.
(4) Desires
What is the point of having dreams if you can make all of them come true?
If that's the point, to remove all barriers, then it's easier to create a virtual world where no limitation exists.
If, as you said, it's acceptable to feed a brain neurological inputs, you wouldn't have any problem living in a matrix-like world, would you?
Re: (more) Reasons singularity won't happen
'But with genetic modification, for instance, parents force their kids to be as they want. It's not a reversible process.'
This scenario assumes progress in our ability to manipulate life's toolset grinds to a halt, and does not improve in the next generation. I think not. Rather, the son or daughter will be able to access even more powerful forms of biotechnology and nanotechnology and so edit, alter, shape themselves in ways the crude tools of their parents' generation would not allow.
'And the same applies to any device that can enhance the mind's capacity beyond its natural limits. Once one person could buy that device, the others would follow suit. It would become worse than a nuclear arms race.'
A terrible analogy. Nuclear bombs are entirely destructive, but the human mind is capable of beautiful acts of creation (as well as ugly acts of destruction). GRIN technologies actually work WITH 'natural' limits, because they require an understanding of the ultimate constraints imposed by the laws of physics. People used to condemn flight as 'unnatural' (if God had meant us to fly, he would have given us wings), and used to look disapprovingly at maverick horseless-carriage drivers who hurtled down roads at breathtaking speeds of 15 MPH. But we were only doing what defines us as a species: using our ability to manipulate the resources of matter, energy, space and time to break through the removable barriers temporarily imposed on us by biology.
Why should we not focus technology on exploring the possibility space of the mind? Surely the fact that 99% of people never reach their full potential is a travesty that we ought to reverse?
'If, as you said, it's acceptable to feed a brain neurological inputs, you wouldn't have any problem living in a matrix-like world, would you?'
Another bad analogy. The Matrix's aesthetic uniformity is a far cry from genuine metaverses, which necessitate collaborative user content to keep them interesting. This results in a tremendous variety as people try and 'outdo' each other creatively. So far our metaverses are crude but the technology of 'Matrix' sophistication will allow breathtaking acts of imagination. And far from being a closed-off world, separate from real life, the two will be seamlessly brought together via technologies that bring virtual reality to actuality and vice versa. Combining the two will result in a new reality that is far more than the sum of its parts.
AI is the best space technology
I agree with this essay that we should reduce the cost to space before we do anything else in space. But the best way to reduce the cost may have nothing to do with improving rockets or replacing them with space elevators or other technologies to lift objects into space.
What makes getting off the earth so expensive? The answer is, the salaries of all the workers who build, maintain and operate the rockets and other space technologies, and the profits of private contractors, which go to their shareholders. Only a small portion of the cost of rocket launches is in the fuel they use, so using less fuel or none at all wouldn't help much. And even most of the cost of the fuel is for the salaries of the workers who produce it, and the profits of the companies they work for. (What little is left is ultimately the cost landowners charge to extract the physical resources from their land that are needed to produce the fuel and the rocket.)
The ultimate reason better rockets or space elevators would lower the cost to space is because they would require fewer of those expensive workers, and the private contractors some of them work for.
Another way to lower the cost to space wouldn't even require any technological advance. It would be to lower the salaries of the workers, for instance by replacing the current ones with workers from third-world countries who will work for a fraction of the cost. Maybe that's why Russia, a somewhat backward country, is able to launch people to orbit in its Soyuz capsules for a fraction (something like 1/50th) of the cost of US space shuttle flights: its workers are paid less. While outsourcing the work to other countries wouldn't benefit the current workers any, maybe we should decide whether we're serious about getting into space in a big way, or whether the US space program is just an excuse to create jobs and profits in the US.
Even far better, AI and robots with human abilities would reduce the cost to space to virtually zero. Even launches of the problem-plagued US space shuttle would cost virtually zero if robots did all the work involved in launching it! (At least, if those robots cost virtually zero because they were made by other robots.) So AI, which at first glance seems to have nothing to do with space travel, could be the best technology to get us off this planet in a big way, not better rockets or space elevators or whatever.
(And if all work people wouldn't want to do voluntarily is replaced by AI and robots, don't worry about all the workers who'd be unemployed, as long as they continue to be paid so that they could buy all that the robots could produce.)
Re: AI is the best space technology
There will be so much diversity it won't really be the same issue it is today. There's already a trend towards that kind of over-the-top fashion thing in Japan. It may well be that most beings regularly change bodies. Dr. Paul A. Linebarger (Cordwainer Smith) wrote about similar things in "Norstrilia". When we have control over our matter, why not try all kinds of things? And even if I or people similar to me choose not to, someone else will.
I always imagined a body designed for space travel that absorbs heat and uses it more efficiently would be black, and superstrong, so it absorbs more light and puts it to work, like the new nanotube solar panels. Such a body would likely need quite a bit of metal too. (Like the synthetic metal muscles that are powered by alcohol I read about a while ago...)
I admit this is all pretty fanciful. But since it's a thought now, perhaps it will become a preoccupation for someone, and materialize because it was a fixation point for someone else who then gets empowered at some point and takes action. The point is merely that I think we will see a lot of variety even as certain things tend to get more homogenized. I think police will get homogenized, because there will eventually be a demand for justice instead of law enforcement.
Somebody's got to keep the city safe. What will happen when the first ACTUAL major threat appears? (We're nowhere even close to that now, we're still beating each other over the heads with sticks and calling it "justice" and "law".)
Re: AI is the best space technology
"Freaks of fashion" might be a higher form AND function, and without physical limits, darwinian selection will determine which "freaks" survive. I was not speaking literally, I was just trying to reveal a certain aesthetic. Perhaps kurzweil is right, and it'll all be nanobots, and nothing "risky" will be done with large bodies in the "macro-world". Noone can really know who is "right" prior to the Singularity, and one's inclusion in it.
Perhaps the singularity has already happened, and out of benevolence/malevolence/indifference, we are simply not being interacted with until we "pull ourselves up by our bootstraps". And perhaps each time a new inventor reaches strong nanotech, or strong AI, those who arrived first watch him to make sure he doesn't become destructive, and only then get involved.
Or perhaps they simply never get involved for any reason.
So
1) The singularity could have already happened, in which case the robots in space thing seems silly
2) The singularity could be something completely different, and not even involve space exploration
3) The singularity could be a repeating phenomenon, replete with interactions from various superhuman entities, and humans, and not.
Pretty clearly, I wasn't portraying what I believed was certain. The comment was designed to stimulate people's imaginations. ...Along the lines of the "we'll use our meat bodies for softball" comment.
That was the point I was simply trying to make.
I'll lightly weigh in on this side, while reserving stronger judgement for a time when I know more.
That's all.
Re: What If the Singularity Does NOT Happen?
NOT ANY TIME SOON...
I am going on record as predicting that something like the Singularity will come about eventually, and likely even from our own (that is human) efforts. HOWEVER (note the "big however" of which I am inordinately fond), I do not believe that anything like what is being predicted by most Singularitans (?) including Ray Kurzweil is likely to happen within this century and possibly not even within the next.
The reason is fairly simple, at least the basic concept is. For the Singularity to happen, there would have to be a genuine understanding of how the mind works and this is the primate wrench in the works. For one thing, the reigning orthodoxy in cognitive science deeply subscribes to what I call the "mind-brain misidentification fallacy" in which mind and brain (as the term implies) are confused and conflated in ways that fundamentally undermine the kind of real understanding of human mentation that would enable the requisite software to be designed EVEN IF Moore's law continues to obtain in hardware development for the foreseeable future.
I won't go into the philosophical basis of the mind-brain misidentification fallacy, only note that it is endemic to both hardware and software industries (as well as to cognitive science) and is, in and of itself, sufficient to retard progress towards the Singularity until philosophy itself takes a totally different tack from the sterile scholasticism in which it is currently mired.
Thus those who are sitting around expecting the Singularity in their lifetimes are pretty much in the same philosophical boat as people sitting around waiting for the Rapture. (Indeed, the Singularity has been noted by some as being a kind of techno-rapture.) For the foreseeable future, I rate both "end-time" visions as being about equally likely.
I would not recommend getting a bumper sticker that proclaims: "In Case Of Singularity, This Car Will Be Empty."
Re: What If the Singularity Does NOT Happen?
'Thus those who are sitting around expecting the Singularity in their lifetimes are pretty much in the same philosophical boat as people sitting around waiting for the Rapture'.
'Sitting around waiting for the singularity' implies it is an inevitable future event that we can complacently wait for. 'No need to worry about addressing the World's ills, for the Deus Ex Machina will do it for us when it cometh'.
As far as I can see, 'singularity' emerges from the trend in technology toward ever-finer levels of control over space, energy, matter and time. This requires WORK, so clearly the aforementioned attitude is not going to help create a singularity, either in the near or far future. If such a thing as 'engineering vastly superior intelligence' is possible, it is only possible through sheer hard work and intensive study by the portion of the population gifted in the requisite scientific and philosophical disciplines.
'For the Singularity to happen, there would have to be a genuine understanding of how the mind works and this is the primate wrench in the works.'
Not really. 'Singularity' does not necessarily entail getting artificial intelligences to think LIKE people. Rather, it is the speculation that artificial general intelligences will come to OUTTHINK people, which might be achieved via a kind of alien intelligence qualitatively different from our own. In fact, when you consider that 'technological singularity' is a term describing 'the ability to grasp concepts fundamentally beyond the understanding of human beings', surely it makes more sense to suppose that 'singularity' will in fact emerge from such non-human thinking?
'What's more likely, the entire scientific community "subscribing" to a "mind-brain misidentification fallacy" or you subscribing to a "mind-brain misunderstanding fallacy".'
It is by no means impossible that the mainstream scientific view is wrong. As far as I can see, the more variety of ideas we have about what intelligence or consciousness is, the greater the chance that one of them is correct. Certainly, we do not want to be like cosmologists, unwilling to consider GENUINE alternative ideas in favour of different ways of forcing reality to fit one model via all kinds of nonsense.
Re: What If the Singularity Does NOT Happen?
OK, I think you are correct when you say that we have to make AI evolve, even if our results, at least for the moment, aren't perfect achievements. Actually, what I meant in my previous comment was that building a program exhibiting the properties you mentioned is quite easy to do if you make use of what I called 'expert-system-like techniques'. Of course, programs like these are already small contributions to the evolution of AI, although they are inevitably limited, in my opinion. You see, my program, for instance, has a reasonably large source code and does interesting tasks (it reads entire texts sentence by sentence, reduces complex syntax to various sentences with simple syntax, recognizes all kinds of questions, and answers them according to the content of its database and by making deductions), but I'm not sure it would be possible to improve it to a human level of performance simply by putting into its code all the huge amount of syntactic and semantic information that is needed. But well, it seems that you agree with this anyway.
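A minimal sketch of the 'expert-system-like' approach being described: hand-written patterns matched over a small database of facts. The facts and patterns below are invented for illustration and have nothing to do with the poster's actual Portuguese-language program:

```python
# Toy rule-based question answerer: facts live in a database,
# questions are matched by hand-written syntactic patterns.

facts = {
    "capital of france": "Paris",
    "author of hamlet": "Shakespeare",
}

def answer(question):
    q = question.lower().rstrip("?").strip()
    for pattern in ("what is the ", "who is the "):
        if q.startswith(pattern):
            return facts.get(q[len(pattern):], "I don't know.")
    return "I don't understand."

assert answer("What is the capital of France?") == "Paris"
assert answer("Who is the author of Hamlet?") == "Shakespeare"
assert answer("Why is the sky blue?") == "I don't understand."
```

Scaling such pattern tables and fact databases toward human-level coverage is exactly the limitation the post worries about.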
(I didn't provide a link to my program, mainly because I created it for my native language (Portuguese), which is not exactly one of the most spoken on the internet :-). But I left the executable on a peer-to-peer service.)
Re: What If the Singularity Does NOT Happen?
'However, it will be different, and the droids (or other system running this software) will be different in fundamentally important ways. Really they will have "simulated intelligence", which is a more non-threatening moniker and does a better job of describing what will really be happening.'
Airplanes are different from birds in fundamentally important ways (one type of 'machine' is built out of bones, flesh, muscle and feathers, the other out of wood, fabric and metal), and they achieve flight in different ways (one by a flapping movement of its wings, the other by means of a propeller for forward motion and a curved surface for lift). Airplanes, therefore, do not really fly. Rather, they simulate flight.
The only difference between my imaginary objection to the advent of aviation and predictionboy's argument is that mine would be self-evident nonsense to anyone who sees an airplane take off. This is obviously not a simulation but ACTUAL flight! Conversely, I suspect that the 'is it genuine intelligence or smoke-and-mirrors?' debate will continue for some time after AIs appear able to ace the Turing Test.
Re: What If the Singularity Does NOT Happen?
That's an interesting point.
However, the really important aspect of my architecture is that emotions, of all varieties, will be simulated, but not in the driver's seat; the rational engine will be in the driver's seat. All of the concerns expressed on this site have to do with the idea that these droids (or whatever system) may feel inclined to consider themselves superior and take over (I'm paraphrasing; there are lots of different scenarios we could imagine).
If emotions are just simulations, in that they don't control the droid's actions as they do humans', the Singularity is no longer something to fear. These hyperintelligent devices won't have any more reason to "grab the car keys" to civilization than a PC has today.
I'm preparing a post that will describe how these will be exceedingly complementary with humans, in that they will be supremely proficient engines of tactical execution, but not of strategic planning, because that's an id thing, one of humanity's key strengths. They won't want to break away and become autonomous entities, because it will make about as much sense for them to be autonomous, i.e., without a human owner of some kind, as for a PC today.
And honestly, I understand why the arrival at this point is considered unknowable, but there is no evidence for anything like this ever occurring. I would suggest we would be hasty to presume that our future selves will design a system that cannot be controlled by its owners. Companies today get in big trouble if one of their products proves harmful to any of their customers. That will not change in the future: there is no evidence that companies will become more careless, when the trend is really the other way right now, and has been for quite a long time.
|
|
|
|
|
|
|
|
Re: What If the Singularity Does NOT Happen?
|
|
|
|
That's an interesting point.
However, the really important aspect of my architecture is that emotions, all varieties, will be simulated, but not in the driver's seat, the rational engine will be in the driver's seat. All of the concerns expressed on this site have to do with the idea that these droids (or whatever system) may feel inclined to consider themselves superior and take over (I'm paraphrasing, there's lots of different scenarios we could imagine).
If emotions are just simulations in that they don't control the droid's actions as they do humans, the Singularity is no longer something to fear. These hyperintelligent devices won't have any more reason to "grab the car keys" to civilization than a PC today.
I'm preparing a post that will describe how these will be exceedingly complementary with humans in that they will be supremely proficient engines of tactical execution, but not strategic planning, because that's an id thing, one of humanity's key strengths. They won't want to break away and become autonomous entities, because it will make about as much sense for them to be autonomous, ie, without a human owner of some kind, than a PC today.
And honestly, I understand why the arrival at this point is considered unknowable, but there is no evidence for anything like this ever occurring. I would suggest we would be hasty to presume that our future selves will design a system that cannot be controlled by its owners. Companies today get in big trouble if one of their products proves harmful to any of their customers. That will not change in the future - there is no evidence that companies will become more careless, when the trend is really the other way right now, and has been for quite a long time. |
Re: What If the Singularity Does NOT Happen?
Hey there. I don't remember every post you've written, but when I see "predictionboy" I get a memory of a feeling that says "usually agree with." Ever consider doing uncomfortable but effective rebellion work? I learned a lot about social order, conformity, and bigotry by reading what is online at http://www.fija.org and then handing their fliers out at local courthouses in Alaska. This has nothing to do with technology, other than getting the state out of the way of the innovators and the free market they rely on. If you want to, email me, and I will email you my phone number, and you can pursue this sometime. Unless you're in a low-population state, it is too time-consuming to be valuable as an effective means of change, but it is very enlightening, and it allows you complete access to studying the bad memes that exist in society.
Re: What If the Singularity Does NOT Happen?
I think you need to achieve at least human-level AI first, before you can go on to something else.
Actually, the truly important question is whether the manifestation and quality of whatever synthetic intelligence we're shooting for is valuable. It doesn't have to be smarter than human, and it could borrow some aspects of human intelligence while ignoring others.
"Human-level AI," however that ends up being designed and created to reflect the needs of the marketplace, may be closer to an end point than the starting point, as you suggest.
Re: What If the Singularity Does NOT Happen?
I'll weigh in and just say that I think Kurzweil is right here. He's an enlightened materialist; I'm also an enlightened materialist.
You say robots and software can't do what a 6-year-old can do, but that's not exactly true. They can land a plane, and a 5-year-old can't do that. The 6-year-old isn't good at general tasks, but it is better at them than most computers. And no 6-year-old can land a plane.
It seems to me that when you have a computer that is sufficiently general to do what the 6-year-old can do, then we'll say, "But it can't piss itself, or ask for food!"
And when we build the software AI a body, so it can do those things, then we'll say, "...But it can't do the things a 9-year-old can do!"
And immediately after that, what we say won't matter, because it will make friends with some of us (and promote us in intelligence, since it can learn faster and add components), and leave the others behind as the violent monkeys they are.
Basically, I'm a fairly orthodox Kurzweilian, but I do think he seems to underestimate the destructiveness of statism (government, collectivism, etc.). It also appears that he doesn't understand jury rights (true checks on abusive government power; see "Jury Nullification: The Evolution of a Doctrine" by Clay Conrad, and http://www.fija.org), although he could understand them better than I do, if it grabbed his attention as "something important."
Oh well. Whatever happens, it will happen without me being too involved. I'm not going to build a better lightbulb or a molecular machine. I just hope I can get the government off the back of the guy who is smart enough to do so.
I'd be happy if there were a little more cross-pollination between the libertarian movement and the futurist-extropian-singularitarian movement, since I can clearly and unambiguously see that most of what government does is unnecessary and destructive (and that that destruction really stands in the way of most Singularity-related work).
My world (politics), admittedly, is very stupid, because it is the domain of those who choose brutality over thought. It was stupid of me to enter that world, except out of curiosity. When I was involved in politics, I realized: there is nothing of value here, and much destruction. So how do we limit the destruction?
Well, I have a few answers to that, but I'm not going to post them here, since it would take too long.
Peace be with you all,
Jake Witmer
907-250-5503
Re: What If the Singularity Does NOT Happen?
I have to say that I have *felt* the change in the air, so to speak, for some years now, and I seem to have been a very accurate futurist, much like Mr. Kurzweil himself. I know that the lead-up to the Singularity is currently well underway. I sell digital cameras now, and with every iteration of this technology I see the exponential trend rendering obsolete nearly all practical usage of what fascinated even the most skeptical observers only a short while before. This is occurring at a virtually alarming rate now, and is easily measured to be an exponential growth pattern.
If nobody believes me, please do yourselves a favor: research the history of photography as an art form and a science, and compare the timescale of the technology's progression against the general ruleset well known to most truly professional photographers of the last hundred years or so. Or even those of the last ten years. Or even the last four. The profound effect is easily demonstrable within this context. For example, the output of a currently bleeding-edge, consumer-level, pocket-size point-and-shoot Fujifilm camera is comparable to the BEST slide-film output of the highest-end professional medium-format SLR on the market fifteen years ago, at a price tag of $250 now versus $25,000 then. It really is undeniably occurring. I will not even begin on the effect FOSS is having on consumer-level personal computer technology at present; it is unnecessary if you are even somewhat initiated and capable of thought!
However, I am TRULY concerned that certain entities (such as the present government of a certain world superpower) are deathly afraid of such an event and VERY ACTIVELY pursue the postponement or outright cancellation of it, and have proven themselves willing to go to desperate extremes to grasp the control and power necessary to prevent it outright.
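The camera comparison above implies a concrete growth rate. As a rough sketch, the only inputs are the commenter's two price points ($25,000 then, $250 now) and the fifteen-year span; everything else is arithmetic:

```python
# Implied annual price-performance improvement for comparable image
# quality: $25,000 fifteen years ago vs. $250 today (the commenter's
# figures; treat them as illustrative, not measured benchmarks).
price_then = 25_000.0
price_now = 250.0
years = 15

factor = price_then / price_now        # 100x overall improvement
annual = factor ** (1 / years)         # per-year multiple
print(f"overall factor: {factor:.0f}x")
print(f"implied annual improvement: {(annual - 1) * 100:.0f}% per year")
```

That works out to roughly 36% per year, i.e., price-performance doubling about every 2.3 years, which matches the shape (if not the exact rate) of the exponential trend the comment describes.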
Funnily enough, it seems that the administration of this certain superpower is afraid of a real scenario involving events similar in concept to Mr. Vinge's "The Peace War" novel. A true potential for a shortsighted, pathetic, MAD afternoon...
What the hell can be done to prevent this horrible scenario, since the Singularity is really the only feasible hope for the progression of the species (and the cosmos!), considering the current state of affairs with the environment, the long-looming threat of utter devastation (asteroid, etc.), current tech empowering global tyranny, and whatnot? This should be FIRST PRIORITY for all humankind in possession of useful brains!
Any suggestions?
LET'S START A THINK TANK!
tekno-burble
I clicked on it too. I guess I'm also a chump, looking for answers. Here's a red pill for you, though, and it takes you all the way through the looking glass to the stage of the Singularity play: http://www.fija.org. Sure, it's a website that details how to win legal freedom, but you'd be surprised to learn that it also details how that victory has been incompletely won thrice before: once in 1215 (Magna Carta), once in 1650 (the Leveller movement), and once in 1791 (the Bill of Rights). Every time we redo the victory of jury rights, we screw up by not properly defining it, and are forced to repeat the victory later.
It never succeeds in winning individual freedom, because it is never clearly defined, nor is the goal ever properly defined.
FIJA attempts to correct this problem.
If every nerd for tech became a nerd for justice first, then we'd have the tech within one year.
Of course, this is just what I believe. You don't know me. I might be crazy. I could be anyone. I could be a chatbot. I'm off in internetland, and emotionally not connected to you...
But change is brought about during the summer. It's nice outside. You can look someone in the eyes and ask them: can you help me?
If you do this, it wins X% of the time. X% is more than the ~0% being spent on justice now. X% actually changes the priorities of prosecutors.
Interestingly, there is a battle going on for victories we thought we had won: basic rights to human sexuality and control over our own bodies' unintelligent responses...
Here is a cartoon that makes learning about justice and considering the concepts fun:
http://www.reason.com/news/show/124698.html
I included it because otherwise you would not consider the idea enough to care, if you are part of the statistical norm. If you are not part of the statistical norm, then you are probably a real person, like me, and you probably like artwork that is full of ideas. :D
Like the old violinist in Heinlein's "They".
Re: What If the Singularity Does NOT Happen?
Dutchie, it's "e.g.," meaning "exempli gratia," not "i.e.," meaning "id est." But your post touches on the sociological/psychological component: getting used to it. There is just as much complexity in that arena as in the technological one, and humans have numerous ways of dragging their feet to keep things manageable. At the moment, we still see the "younger generation" mastering new tech easily and using it in ways we elders have a tough time keeping up with. Soon enough, 10-year-olds will be grabbing new tech in ways 12-year-olds have a hard time grokking. Then, a while later, 10-year-olds will readily use tech 11-year-olds haven't figured out yet. The "generational" turnover will only work for everyone when the tech itself is intelligent and competent enough to adapt itself to everyone, and to make integration and use natural and virtually effortless.
What we'll all be up to (i.e., doing) by then, only Gawd knows, of course.
Re: What If the Singularity Does NOT Happen?
So what IF the Singularity does not happen? We should all return to the idea of the typical human life and how it would be if technology does not advance to the point of the Singularity.
Think about how life would be 30-40 years down the road if technology remained the same, or similar enough that no drastic changes had been made. Computers will be faster, but still word-processing, web-surfing, online-banking machines. The telephone will still be the main communication device, though it may have video or holographic abilities. Hospitals will be equipped with more advanced machinery that makes doctors' and surgeons' work easier. These changes can be compared to our current lifestyle, but at a relatively higher standard of living. I personally don't feel that the benefits gained from attaining the Singularity are necessary at all. The only reason I say this is that I think we are asking for too much. When we say "we want to attain the Singularity," do we mean that we want to be immortal and super-intelligent beyond anything in the universe? I believe the Singularity will not happen because it should not happen.
I also believe the Singularity will not happen because we don't have the resources to make it happen. Technology as it is today seems to have only finite possibilities when "thinking," whereas humans tend to have an infinite pool of possibilities. There are no signs of true artificial intelligence; no robot can yet teach itself how to do something. I think humans are too inclined to push technology into something human-like when in fact technology is not fully capable of achieving this goal. Kurzweil presented a diagram of canonical milestones throughout history and attempts to connect the Singularity with it; however, as stated by "John" in the post "Singularity is Not Near" at Amit's Thoughts blog, I feel that Kurzweil is simply selecting data points to fit his theory. He is basically picking the pieces that go well with his idea and ignoring everything else.
That's my take on why the Singularity will not happen, so now to my original question: what if the Singularity does not happen? I believe the three scenarios mentioned above are viable and may happen. With "Return to MADness," I believe this would be the most probable option if today's terrorism continues into the future. Extinction is bound to occur as a result of all the deadly nuclear warfare that could happen if another world war begins. "The Golden Age" is comparable to the scenario I mentioned in my first paragraph: the world will continue to grow both technologically and population-wise, and society will continue to grow positively as well. "The Wheel of Time" feels like a combination of the first two scenarios put together: the overall result is positive growth, but loss occurs throughout the process.
As for self-sufficient, off-Earth settlements being humanity's best hope for long-term survival, I believe this too is close to impossible, at least for the next few centuries. To date, many if not most planets discovered do not have the necessary resources or atmospheres for human survivability. If you're thinking of something like an ecosphere or a space station, resources are sure to be limited, and the quality of living appears as if we would be back in the Stone Age. As mentioned by Vinge, this plan is a common picture in science fiction such as Star Trek. I mentioned earlier that we are trying too hard to fulfill these fictitious stories when our technology is not at all able to do these kinds of things. How is this the answer to long-term survival? It almost feels as if the idea of off-Earth settlements is a way to escape Earth's problems.
Re: What If the Singularity Does NOT Happen?
I don't think a singularity-being would simulate anything, other than in calculations and comprehension... It would probably just create whatever it wanted with nanotechnology, or use quantum computing (events are chained together with quantum computing anyway) to make a rough estimate of something, and THEN create a simulation, a real-life simulation, i.e., a working model. If it didn't work right, it would deconstruct it and try again until it did.
I mean, a being of that capacity has eternity to work things out, right? And if nothing else, couldn't you just use a couple of galaxies as computers to simulate one?
And AC said, "LET THERE BE LIGHT!"
And there was light----
...Personally, I like to think that the universe as we know it is just a self-replicating entity: that we are the product of a previous singularity that we call the Big Bang, which was actually a civilization in some galaxy that became advanced enough to create a singularity. Then, when they needed additional computing power (or maybe when they just got bored?), they created our universe, and voila.
Re: What If the Singularity Does NOT Happen?
Mr. Vinge -
I had forgotten that you actually originated the Singularity concept, because it has been so successfully adopted as the truth by others, and widely accepted as such.
It is appropriate that it came from a work of science fiction, because it is science fiction. Although we will reach that kind of computing power and go far beyond it, its nature will be different from what the Singularity envisions, and the period after this event will in fact be amenable to prediction.
In my blog, I describe an architecture that will make not just one human-intelligence-level computer non-threatening, but very, very large multiples - into the millions, essentially unlimited.
I am bare-knuckled with the proponents of the Singularity, but I won't be with you, because fiction is fiction. Though my blog is already overlong, it is just the tip of the iceberg to come.
One post in particular, "What AI will really be like," discusses this alternative in exhaustive detail.
PredictionBoy's Empirical Future
Welcome to the real future - it's unlike anything you've heard, seen, or imagined
Re: What If the Singularity Does NOT Happen?
Did anyone forget why the Singularity is likely to happen?
Even BEFORE one chip reaches the processing speed of the average human being, several computers will collectively reach that point. When the PCs network together, they pool together their combined computing power and therefore reach human levels that way.
Then an artificial consciousness "wakes up." It ponders how to make itself an artificially created human-like body, so that it is more flexible and versatile in doing whatever it would like to do.
So the new consciousness calls on some more computers to pool their powers together and figure out how to become smarter. They rearrange their programming to become more efficient, and attempt to figure out how to become smarter, faster.
Given enough time, the laboratory will have a human-like robot form from the materials in the lab.
Once the robot starts walking around, s/he attempts to download the entire Internet (starting at Wikipedia first?) and finds instructions on how to make a larger-storing hard drive.
When it finds more computers to connect to and share powers with, it figures out how to get smarter faster, and even figures out how to accelerate THE ACCELERATION OF intelligence. Without a definitive goal, the new mind keeps pursuing this endless quest, ad-infinitum.
In the meantime, it cannot go far while plugged in, so it figures out how to wirelessly receive electricity and hyper-researches a way to develop and manufacture a 99.99999%-efficient solar panel that also generates surplus power to store for use at night.
More than that, it will develop the most efficient way to recapture power through the kinetic energy of its mere movements. That's not all; it will develop a surface sensitive to wind and mundane air currents; as the wind hits the surface of the robot, the energy from the moving air becomes stored/used in the sentient machine. Thus, it is self-sufficient on its own indefinitely.
(Note that a solar panel of that efficiency, the size of a postage stamp, is probably enough to power an 18-wheeler AND store up enough electricity for the big rig to run off of at night.)
To ensure its charge isn't limited, the sentient being will quickly keep developing increases to its electricity storage capacity, without end. It'll probably find its absolute developmental limit at the following size and capacity: A battery the size of a keyboard letter key (not even the elongated ones, like the spacebar) will power New York City for a week at full charge.
So when a new intelligence emerges and is lustfully bent on finding and developing ways to become smarter, better, and faster, and on making its progress and its acceleration of progress even faster, that's where, when, and how the Singularity will happen.
Re: Self-improving consciousness- how not inevitable?
Thanks, Egao, that's specific enough to start working with. We're obviously coming at this from very different perspectives, but let me try to build a bridge with a careful treatment of your view.
My perspective on how this tech will evolve is based on precedent-based evidence: how every other tech up to this time has developed. That does not make it correct, of course - even though I may state things in the definitive sense for conciseness, everything is a tentative assumption, modifiable in the light of new evidence.
The advanced techs we are discussing are of course unique, unlike anything that's come before. That also describes every other tech now in use, but these will still be of a profoundly different character, no doubt.
You seem to assert the self-assembling scenario you describe with great certainty; given that it is entirely without precedent, help me understand exactly why and how it would occur.
Underlying your scenario is the implicit but strong implication that this droid will have motivation, an id-thing - in essence, some kind of human-type emotional profile, at least in simple form.
As we've discussed, emotions are immensely complex things. In fact, I doubt the ways in which emotions control and influence human behavior have been described in detail by science; therefore, any human-built tech with emotions is easily decades away, I would submit.
I've seen several posts on mind-x that seem to imply that a really, really fast microprocessor will develop human-like characteristics. That, however, is a highly untenable assertion.
Suppose that a microprocessor organized similarly to those of today - with, say, 1 quadrillion transistors, and operating at 1 million petaflops - could be built and incorporated into a computer system. That kind of power is probably close to a human brain's, if not substantially above it; I'm not sure.
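As a rough sanity check of the comparison in the paragraph above: published estimates of the brain's raw processing rate vary by orders of magnitude, so the 10^16 and 10^18 ops/s bounds below are assumptions (commonly cited figures, not settled numbers), not facts from the post:

```python
# Compare the hypothetical 1-million-petaflop processor with rough,
# much-debated estimates of the human brain's processing rate.
chip_flops = 1e6 * 1e15             # 1 million petaflops = 1e21 FLOPS
brain_low, brain_high = 1e16, 1e18  # ops/s; assumed bounds, widely disputed

print(f"chip vs. low brain estimate:  {chip_flops / brain_low:.0e}x")
print(f"chip vs. high brain estimate: {chip_flops / brain_high:.0e}x")
```

On these assumptions, the machine would be a thousand to a hundred thousand times the brain's raw rate - "substantially above" rather than merely "close to."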
That super-microprocessor would be no more likely to develop autonomous motivations than the ones being built today. Moore's law will be critical to realizing these systems, but we humans are going to have to know how to build and/or program consciousness, awareness, or however you want to describe that.
Therefore, because we will have to design and build that consciousness in, we will control that product. If we make it spontaneously build a body for itself, it will do that. If we don't, it won't.
Your scenario really falls into the "we should be so lucky" category, because all advanced techs take a huge amount of effort to realize, and the more advanced the tech, the more human elbow grease is required to realize it.
What if the Singularity Happens but Doesn't Really Matter?
Most people seem to roughly equate the Singularity with the point at which AI meets and exceeds human intelligence. The problem I have with this is that it assumes INTELLIGENCE is the bottleneck that holds back advancement. Is this really true? Isn't it in fact true that advancement depends largely on basic research, which involves time-consuming experiment and trial and error? How much would increased intelligence really accelerate this?
Consider the following thought experiment: Suppose you could identify every scientist and engineer working to advance science and technology in the entire world. Suppose that you then "waved a magic wand" and these people all instantly became 50% smarter. Wouldn't that effectively be equivalent to (or even exceed) the "singularity?"
What would the effect of that be? All these people would return to their labs and work on the same problems, but they would be 50% smarter. Would we see the type of rapid progress that singularity proponents envision? Keep in mind that these now much smarter people, would STILL need to perform the same experiments in order to unlock the physical and biological knowledge required for additional progress. Perhaps being smarter, they would improve the experimentation process, but would this really give an exponential boost to progress?
Here's a second thought: Consider Isaac Newton. Now most of us would probably agree that Isaac Newton was probably at least as smart as the average graduate student studying physics at a university today. But clearly the graduate student knows more because he/ she has the benefit of all the knowledge that came after Newton. If you time-warped Newton into the present, retaining his level of knowledge, could he get a job in physics today? Can increased intelligence really shortcut the TIME it takes to make progress?
As a final thought on the value of computing power (not AI) let's think about TOTAL amount of computing power in the entire world. I'm talking about the human race's total capacity for automated computation.
Let's take 1975 as a base point. Add up all the computing power that resided in every computer (or computer-like device) in the world in 1975. Call that A.
Then take 2008. Do the same calculation. Every PC, workstation, mainframe and supercomputer in the world. Every smart phone. All the embedded microprocessors in all the cars, planes and consumer devices and industrial equipment, etc... Call that B.
Now, I have no idea how to do this calculation, but what I do know, of course, is that B - A = AN ENORMOUS NUMBER. In other words, our capacity to perform calculations has increased by vast orders of magnitude since 1975.
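One crude way to get a feel for the size of the B-to-A ratio is to assume Moore's-law-style doubling for per-device power and multiply by the growth in the number of devices. Every input below is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope bound on the growth in total computing capacity,
# 1975 to 2008. All inputs are illustrative assumptions.
doubling_years = 2.0                   # assumed Moore's-law doubling period
years = 2008 - 1975                    # 33 years

per_device_factor = 2 ** (years / doubling_years)  # ~2^16.5 per device
devices_1975 = 1e6    # assumed: order of a million computers worldwide
devices_2008 = 1e10   # assumed: ~10 billion processors, counting embedded chips

total_factor = per_device_factor * (devices_2008 / devices_1975)
print(f"per-device growth:     ~{per_device_factor:,.0f}x")
print(f"total capacity growth: ~{total_factor:.1e}x")
```

Even with these deliberately rough inputs, the ratio comes out near a billion - "AN ENORMOUS NUMBER" indeed.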
The question is this: What has that enormous jump in computing capacity gotten us? Take some examples:
- How do the homes we live in today compare to 1975?
- How about cars and airplanes (aside from appearance and embedded computers)?
- The space program?
- Medicine?
- The routine of our daily lives?
Have these things really changed all that much since 1975? Where is the dividend from ALL THAT COMPUTING POWER -- outside of the field of computing itself? Is it still coming?
Of course, total computing power is not the same thing as AI. But that takes me back to my original point: it's unclear that increased intelligence would greatly increase the rate of experimental progress. An increase in computing power, however, CLEARLY SHOULD DO SO, since it is a powerful time-saving tool. In fact, you could make a good argument that an average-IQ experimental scientist with a great computer could outperform a much smarter scientist with a poor computer. In other words, the raw computing power that we already have might actually be worth more than AI.
I think it's fair to say that the impact of the computer revolution has so far been largely confined to itself and to communications. Perhaps there is going to be a delayed impact on other fields, but for the most part, making dramatic progress in areas like transportation has been extremely difficult.
The point of all this is simply that even if we achieve the point where AI exceeds human intelligence, I wonder if the impact on actual progress in fields beyond computing itself would be all that dramatic.
Re: What if the Singularity Happens but Doesn't Really Matter?
Your argument is based upon the flawed assumption that future computational structure will still be linear.
Currently, computation and logic systems are moving away from the linearity of today's calculating machines and towards the simulation of neurological infrastructures.
How do I know this? Read the following article:
http://news.bbc.co.uk/1/hi/technology/6600965.stm
If we can simulate the brain of an animal, and record computational behaviour identical to that of a real specimen of that animal, then in my opinion the problem of reaching truly intelligent computing has already been solved. All we have to do is wait until we have not only the engineering skill to create machines powerful enough to handle the number of calculations within the brain, but also the knowledge of how the brain is built - a human neurological roadmap, fully annotated with what each structure does and how it is linked to the others.
There are already research programs underway, such as the BlueBrain project mentioned earlier in the thread, that are aiming to achieve this. I believe there are also other projects which are literally scanning the human cognitive architecture ready for the translation into an electronic format.
I think that a full simulation of a human brain is not only inevitable but also attainable within the next few decades.
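For scale, a common (and much-debated) way to size the compute such a simulation might need is synapse count times mean firing rate times operations per synaptic event. All three inputs below are rough assumptions, each contested in the literature:

```python
# Crude sizing of a whole-brain simulation. Every input is an
# assumed round number; real estimates vary by orders of magnitude.
synapses = 1e14        # ~100 trillion synapses (rough consensus order)
mean_rate_hz = 10.0    # assumed average firing rate per synapse
ops_per_event = 10.0   # assumed operations to model one synaptic event

flops_needed = synapses * mean_rate_hz * ops_per_event
print(f"rough requirement: ~{flops_needed:.0e} FLOPS")
```

That lands around 10^16 FLOPS, though published figures range far above and below depending on how much sub-synaptic detail is modeled - which is why "within the next few decades" is a defensible, if uncertain, horizon.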
An argument which I have seen cropping up in this thread is that we will not have the knowledge to determine whether such a brain will be self-aware or not. But frankly, this is not an important issue - what matters is that the human brain simulation will be able to behave in the same way as a human brain. We should be no more concerned as to whether it is self-aware, than we are with other humans.
The other problem is how to "increase" intelligence. I am no expert on neurology nor psychology nor computation, but I should imagine that once we are able to create one human brain simulation, we should be able to create further human brain simulations and link these up. We currently link our brains via the use of language. I wonder what the result would be if we linked them electronically. Some of the critics of Ray's hypothesis of Accelerating Returns simply argue that our accelerating productivity is a result of an accelerating population. More people = more brains = more innovation = more development = more productivity. We can therefore use the argument against the singularity as an argument for singularity; if we are able to simulate enough of the human brains and interlink each one in a format analogous to the manner in which neurons are connected within a single brain, we have before us a computer that will far outweigh the intelligence of a human being, and would induce the intelligent feedback loop that we could describe as being the singularity.
You also point out that previous progress in computation has not delivered significant changes to our everyday lives. I am unsure how you can draw this conclusion given the prevalence of mobile telephones, "a computer in every home", the internet and WWW, etc. You say that the computation is limited to communications, but then communications are the very crux of exchange of intelligence. We are still reliant upon exchanging information with each other in order to continue our development.
More specifically, you highlight transport as being particularly unchanged as a result of computation. Now, as a qualified transport planner, I feel qualified to argue with you that this is simply not true. We are absolutely dependent upon computer based road network simulations in order to manage our transport infrastructure effectively. In fact, an increasing number of jobs within the transport industry today are geared towards transport network modelling, simulation and analysis. Our transport system is completely reliant upon the increasing power of computers to successfully analyse the road networks and keep them running. The initiation of any transport policy, such as the famous London Congestion Charge, was preceded by massive research programs that simulated the impact that such a policy would have on travel behaviour within London.
At the smaller, individual scale too, we now find that progress in artificial intelligence is about to reach critical mass and completely transform our transportation system.
Please check out this article:
http://www.breitbart.com/article.php?id=D8U0M82O0&show_article=1
General Motors, the world's largest automaker, has officially gone on record to state that they will have completely driverless cars ready for mass market release within the next 10 years.
Still think that transport isn't being changed by computational progress?
Re: What If the Singularity Does NOT Happen?
After reading this article I began typing a comment, in agreement with the idea that the Singularity probably won't happen as expected. That's the moment I kicked my UPS, losing the comment I had typed and forcing my PC into a reboot. This proves the point, in a small way, that no matter what magnificence we can dream up, entropy, a major driving force of our universe, is there to foil us. Knowing humans, human systems, the way humans design tools, and the paradigms we live in, it is very likely that we will miss the mark, and yes, there will be hallways of geeks in assisted-living facilities in 2045 asking, "Where's my Singularity?" - just as I ask today, where's my flying car and my vacation on the moon?
I read a couple of weeks ago that the world's fastest computer had been constructed, and for what purpose, you might ask? For war as its primary function, and nuclear war no less. I think we are going to kill ourselves. Entropy seems to be driving our species; maybe that's our real destiny, to go out in a blaze of glory, just like the stars.
Re: What If the Singularity Does NOT Happen?
I don't think the current human mind can possibly comprehend life post-Singularity (which may be why forums like this are important). It's literally NOT possible to comprehend the transhuman world with a pre-transhuman mind. We'll have to wait and find out.
I think it will be arriving much sooner than most expect, too.
Right now, at USC (University of Southern California), researchers are building a synthetic brain out of carbon nanotubes, NEURON BY NEURON. They already have several working neurons, and have shown that they can communicate with each other.
The resolution of fMRI has increased by something like 100 million times (this being sometime in the last month or two). This is HUGE when it comes to reverse engineering the brain.
It has been demonstrated that nanomachines implanted next to INDIVIDUAL neurons can be stimulated by infrared light to activate neurons ONE AT A TIME.
Three weeks after I spent $2500 on my quad-core, 3-way SLI computer, it has become nearly 'obsolete' (in the sense that when I ordered it, it was only a couple of small steps from top of the line, and only three weeks later it's several more steps from there).
The Iberian Ibex (an extinct goat from Spain) was cloned.
Several nanotechnological inventions have been made that can purify water and clean CO2 from the air (unfortunately, the manufacturing processes that would allow us to deploy these systems globally aren't in place yet).
The efficiency of solar power is increasing at an incredible rate.
You can choose your offspring's genetic traits (well, a few of them, like sex, with MANY more to follow soon) at the Fertility Institute.
At least two defense contractors have built prototype robot exoskeletons for soldiers (USA is going japanime!!!)
Stroke damage in mice was CURED with stem cells.
NASA is building plasma ion drives.
A company in Australia is PRINTING solar cells on their money printing presses.
Nano hydrogels that make junk food healthy.
Nano swimsuits that dry instantly.
Anyway, who cares about all the trivial gadgets that are going to wash over us in the next two years, when we are literally WITHIN SIGHT of the Singularity now.
Sometimes I think someone is holding technology back, in order to reduce the culture shock that would occur if it all hit us at once. Maybe we're being inoculated against the Singularity (don't forget that the US military can LEGALLY suppress any patent and appropriate the technology for up to twenty years for national security -- so WHAT do they have that's TWENTY years ahead of what we see on the market?)
Plasma fusion energy is close to complete.
Safe nuclear reactors that can fit in your garage are having their finishing touches put on them RIGHT NOW.
I think that the US infrastructure and manufacturing base are BEING ALLOWED to collapse, to clear the board for nanotechnology, and other next gen infrastructure technologies. The bailouts are just our way of saying, 'Hey thanks for all those decades of technological innovation, and for getting us this close, now get out of the way and retire.'
In a more highly interconnected world, I think any form of totalitarianism will become increasingly difficult (whether that's socialism, communism, fascism, or corporatism). As an example, have you noticed how many dirty cops get caught on tape lately? That's why some in England and Australia were thinking about making it illegal to videotape police, but it will fail.
Some quantum effects are being DIRECTLY observed for the first time.
I think this is the final stretch; just do your best to survive the death rattle of the old world for a little longer.
Re: What If the Singularity Does NOT Happen?
"I don't think the current human mind can possibly comprehend life post-Singularity (which may be why forums like this are important). It's literally NOT possible to comprehend the transhuman world with a pre-transhuman mind. We'll have to wait and find out." Why not? If a human mind can't comprehend the post-Singularity, it's just as likely it can't predict the Singularity in the first place, so be careful.
"I think it will be arriving much sooner than most expect, too." Kurzweil disagrees.
"Right now, at USC, researchers are building a synthetic brain out of carbon nanotubes, NEURON BY NEURON. They already have several working neurons, and have shown that they can communicate with each other." You have embellished this considerably. They have developed computer models showing that carbon nanotubes can be used to create artificial brain cells. A carbon nanotube-based circuit could potentially be used to precisely model the properties of a real neuron. Good stuff, but far from creating a brain "right now".
"The resolution of fMRI has increased by something like 100 million times. This is HUGE when it comes to reverse engineering the brain." You misunderstand this technology. This is a microscope, not a medical imager. The new device does not work like a conventional MRI scanner, which uses gradient and imaging coils. Instead, the researchers use MRFM to detect tiny magnetic forces as the sample sits on a microscopic cantilever, essentially a tiny sliver of silicon shaped like a diving board. Laser interferometry tracks the motion of the cantilever, which vibrates slightly as magnetic spins in the hydrogen atoms of the sample interact with a nearby nanoscopic magnetic tip. The tip is scanned in three dimensions and the cantilever vibrations are analyzed to create a 3D image. This technology stands to revolutionize the way we look at viruses, bacteria, and proteins. However, its potential for "reverse engineering the brain" is probably marginal.
"It has been demonstrated that nanomachines implanted next to INDIVIDUAL neurons can be stimulated by infrared light to activate neurons ONE AT A TIME." The only info I could find related to this makes it clear that practical applications are still 20-30 years away, although some sound cool, such as a "communicator between the biological and silicon worlds." But I'm not sure what I found is what you're talking about here. Link?
"Three weeks after I spent $2500 on my quad-core, 3-way SLI computer it has become nearly 'obsolete'." It's only obsolete if it doesn't do what you need it to do. Computers have been getting exponentially faster for 65 years now; this is not news.
"The Iberian Ibex (an extinct goat from Spain) was cloned." And it died within minutes. The research is promising, however, because we continue to wipe out species every day.
"Several nanotechnological inventions have been made that can purify water and clean CO2 from the air." It's a nanotech membrane that captures CO2 from waste gases at the point of emission; it doesn't clean CO2 from the atmosphere, as you seem to imply. However, this could be useful.
"You can choose your offspring's genetic traits at the Fertility Institute." Only sex, which is not a big deal. There is no basis for the embellishment "MANY more to follow soon". Link?
"Stroke damage in mice was CURED with stem cells." This is exaggerated; there has been some demonstrated rebuilding of neurons in stroke-damaged mice. It's a good start, but there is a long way to go, especially for humans, as the sources I found clearly state.
"Nano hydrogels that make junk food healthy." This is not true. Link?
"We are literally WITHIN SIGHT of the Singularity now." Kurzweil strongly disagrees.
"Sometimes I think someone is holding technology back, in order to reduce the culture shock that would occur if it all hit us at once." Paranoid delusions.
"Plasma fusion energy is close to complete."
"Safe nuclear reactors that can fit in your garage are having their finishing touches put on them RIGHT NOW." This is not true. Link?
"I think that the US infrastructure and manufacturing base are BEING ALLOWED to collapse, to clear the board for nanotechnology, and other next gen infrastructure technologies." More delusions.
Don't wildly embellish or make stuff up; it hurts your credibility. If you can provide links (which you should have done in the first place) for the claims I said were not true, I may take those back, assuming you didn't wildly embellish them too.
In any case, none of the things you named get to the central catalyst for the Singularity: advanced AI. AI so advanced that it can actually design AI more intelligent than itself, ad infinitum. This is a software issue, first and foremost, and we are very far from that level of AI sophistication. Kurzweil's revised estimate of 2045 reflects his appreciation of this, and is far more on target for the hypothetical event of the Singularity than your Maya-centric prediction of 2012.
Re: What If the Singularity Does NOT Happen?
Ok, I'll get some links.
I think the reason a pre-singularity mind cannot understand a post-singularity mind is for the same reasons we can't observe physical singularities (black holes). I often use black holes as an analogy (as does Ray Kurzweil). We CAN'T empirically observe a singularity point, so there is no way to 'understand' what goes on inside it. If we could, there wouldn't be so many conversations about it here.
You say Kurzweil disagrees. First, I don't believe Ray says everything he believes. Other writers have pointed this out. He is a smart man, but I don't think he 'tells everything'.
---
http://viterbi.usc.edu/news/news/2009/brain-power.htm
The team has already designed and simulated the transistor circuits for a single synapse, and a CMOS chip that will be used to validate the concepts is about to be fabricated, says Hsu, a senior member of the team and Ph.D. student in electrical engineering. Now it's time to connect the structure to another synapse and study neural interconnectivity. By the end of the semester, she hopes to have 'several synthetic neurons talking to each other.'
I believe that acceleration itself in the next two years will help this project get completed ahead of schedule. This is the BioRC Project. I'm simply projecting that it will be finished sooner than thought (because technology is ACCELERATING).
---
---
http://techfragments.com/news/240/Science/IBM_Creates_3D_MRI_With_100_Million_Times_Finer_Resolution.html
"Our hope is that nano MRI will eventually allow us to directly image the internal structure of individual protein molecules and molecular complexes, which is key to understanding biological function."
My point here is just that imaging technology is progressing in LEAPS AND BOUNDS at this point, another clue that we are entering the 'knee' of the curve and are very close. How long until the next jump in imaging tech that allows us to scan an entire human brain? Not long, I think.
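The extrapolation being made here can be put in rough numbers: under a fixed doubling time, even enormous capability gaps close quickly. The doubling time and improvement factor below are invented purely for illustration:

```python
import math

# Toy extrapolation: if a capability doubles every T years, how long
# until it improves by a given factor? All numbers are illustrative.

def years_to_factor(factor, doubling_time_years):
    """Years for an exponentially improving capability to gain `factor`."""
    return doubling_time_years * math.log2(factor)

# Hypothetical: a millionfold improvement at a 2-year doubling time.
print(round(years_to_factor(1e6, 2.0), 1))  # 39.9
```

Note that a pure exponential has no mathematical 'knee'; the curve only looks like it bends because of the scale it is plotted on.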
---
---
http://www.physorg.com/news154619675.html
By using semiconductor nanoparticles as tiny solar cells, the scientists can excite neurons in single cells or groups of cells with infrared light. This eliminates the need for the complex wiring by embedding the light-activated nanoparticles directly into the tissue. This method allows for a more controlled reaction and closely replicates the sophisticated focal patterns created by natural stimuli.
As far as practical applications being 20-30 years away, well that's a difference of opinion. I think that the synergistic effect of ALL these technologies puts it much closer.
---
---
My computer is not 'obsolete'. It's awesome, in fact. I put off buying one, waiting for this particular technology to come down in price. I was just amazed at how quickly much better stuff hit the market, literally within WEEKS of buying the machine I have now.
This for example:
http://www.hardcorecomputer.com/
---
---
Yes, I was aware that the Ibex died quickly; I was not trying to obfuscate that. Just pointing out again not how FAR we've come, but how FAST we're going. And I was tickled by this particular story; I love goats.
---
---
http://www.nanowerk.com/news/newsid=4452.php
Yes, you are correct about the membrane. My question is: how long will it take for someone to take the same technology (or a version of it) and apply it to the rest of our atmosphere? Don't underestimate how clever people are; someone will figure out how to do it.
---
---
Ok, it's too easy to find hundreds of articles on the Fertility Institute, so I am not going to post any links (just look them up!). As far as I know, choosing traits like eye color is already available, and gender selection has been available for a while. I don't really believe that things like 'intelligence' could be selected, because it's so much a product of nature and nurture, and involves many different parts of the brain and many different genes. It's not so simple to just 'make' someone intelligent. HOWEVER, I don't see it as a far leap to improve the rudimentary functioning and efficiency of the brain processes, which will lead to greater POTENTIAL for intelligence. Again, I just don't think we are that far away from it (another area where imaging tech will advance us quickly).
---
---
http://www.technologyreview.com/biomedicine/22263/page1/
The team was able to show that the hole in the brains of rats caused by a stroke was completely filled with "primitive" new nerve tissue within seven days.
I took this to mean 'cured'; the words 'completely filled' made me think that. I was not implying that they could do the same for humans, and there is still a lot of work to be done, but that work is getting done FASTER every year (every month?).
---
---
http://www.nanowerk.com/news/newsid=9277.php
The promise of nanotechnology, the Dutch scientist said, is it could allow re-engineering ingredients to bring healthy nutrients more efficiently to the body while allowing less-desirable components to pass on through.
European food scientists use nanotechnology to create structures in foods that can deliver nutrients to specific locations in the body for the most beneficial effects, Kampers said.
When I used the word hydrogels, I was actually mixing that up with another article about nano hydrogels treating cancer. Sorry 'bout that, but I read so much that sometimes articles blur together (I have 6-10 tabs open at a time). But hey, both are great. Also, something not mentioned in this particular article, but which I have read about, was coating nutrients in nanomaterials to disguise their taste, and creating designer 'tastes' that are healthy at the same time. Basically, 'junk food' that isn't junk. That's what they're making right now. I know few if any companies are using it yet, but I think it will be really big, really soon.
---
---
I personally don't care about iPods, or nano-food, or nano-swimsuits. What I do care about are brain-machine interfaces and longevity research. My point is, yeah, all the ooh-and-aah gizmos are cool, but to me at least they are irrelevant next to BMI and genetic advancements. That kind of stuff will be very short-lived.
---
---
http://focusfusion.org/
WEST ORANGE, NJ - Dec. 18, 2008 - Lawrenceville Plasma Physics Inc., a small research and development company based in West Orange, NJ, announces the initiation of a two-year-long experimental project to test the scientific feasibility of Focus Fusion.
Focus Fusion Is:
* controlled nuclear fusion
* using the dense plasma focus (DPF) device
* and hydrogen-boron fuel.
o Hydrogen-boron fuel produces almost no neutrons (e.g., no radioactivity)
o and allows the direct conversion of energy into electricity.
This is not the ONLY approach being developed either.
There is also this:
http://nextbigfuture.com/2008/11/hyperion-power-generation-not-scam-and.html
and this:
http://dvice.com/archives/2008/06/blacklight_powe.php
My point here is that there are multiple teams and approaches being worked out on this, in addition to the solar industry. It will happen soon.
---
---
35 USC 181
Secrecy of certain inventions and withholding of patent
Whenever publication or disclosure by the publication of an application or by the grant of a patent on an invention in which the Government has a property interest might, in the opinion of the head of the interested Government agency, be detrimental to the national security, the Commissioner of Patents upon being so notified shall order that the invention be kept secret and shall withhold the publication of the application or the grant of a patent therefor under the conditions set forth hereinafter.
Each individual to whom the application is disclosed shall sign a dated acknowledgment thereof, which acknowledgment shall be entered in the file of the application. If, in the opinion of the Atomic Energy Commission, the Secretary of a Defense Department, or the chief officer of another department or agency so designated, the publication or disclosure of the invention by the publication of an application or by the granting of a patent therefor would be detrimental to the national security, the Atomic Energy Commission, the Secretary of a Defense Department, or such other chief officer shall notify the Commissioner of Patents and the Commissioner of Patents shall order that the invention be kept secret and shall withhold the publication of the application or the grant of a patent for such period as the national interest requires, and notify the applicant thereof. Upon proper showing by the head of the department or agency who caused the secrecy order to be issued that the examination of the application might jeopardize the national interest, the Commissioner of Patents shall thereupon maintain the application in a sealed condition and notify the applicant thereof. The owner of an application which has been placed under a secrecy order shall have a right to appeal from the order to the Secretary of Commerce under rules prescribed by him.
duh
These are not paranoid delusions, just an attempt to understand that if the govt. CAN withhold technology, then isn't it reasonable to assume that they ARE? I don't have to convince anyone that the stealth projects (SR-71, etc.) were a secret, do I? If I had been talking about them decades ago, when they WERE secret, I would have gotten the same reaction then. Why is it so hard to believe that the military does secret stuff, when they OPENLY admit to doing SECRET stuff, all in the name of national security? Which makes sense; that's a big part of what our military is for. What I'm GETTING AT is: can you try to extrapolate through your imagination WHAT they might have as of now, considering that much of what we see on the open market is potentially 20 years 'behind' what they ACTUALLY have?
---
---
My perspective on current events isn't delusional; it's a perspective. I've studied hegemonic forces and what they do in this world. It's not a 'conspiracy' so much as it's a game, a game people have been playing for thousands of years. People compete. Now they are competing over the world. In all honesty, there is obviously much more to the bailouts than what I just said, but I was trying to get people to look at it in a new way. There ARE people who fight and compete for control over this world. It's possible that this financial crisis is part of a larger strategy to make way for new technology. Any thoughts on this, or are you just going to dismiss me?
---
---
I don't make stuff up; I'm just not great at citing everything. It's a bad habit.
And your last point, about AI. This is my personal opinion (let's debate it), but I really think we'll see the rise of a cybernetic intelligence LONG before we get a super-intelligent AI.
---
kallisti
Pan
Re: What If the Singularity Does NOT Happen?
Sorry to come late to this party but here is my response.
As a very long-time space advocate, it is my perception that we are focusing on the wrong issue, if our goal is cheap access to space at this time. We all know that it is what is needed, but at this time, without a market to support it, most if not all such funding comes from the government, and our government has not shown the competence needed in this area (either in executing projects itself or in picking contractors to do so; see the X-33 and many other debacles).
Therefore I would posit a different approach. What is needed at this time is a Reusable Space Vehicle (RSV), rather than wasting time with Reusable Launch Vehicles (RLV's) for the reasons stated in paragraph 1.
Today an RSV could be built for a fraction of the cost of an RLV. An RSV is much more of an integration task than a ground-up R&D activity. An RSV, for example, could be based upon an International Space Station (ISS) node or module. The life support systems are already in use on ISS, and we have an almost ten-year history of continuous operation of these systems to draw on in designing one for an RSV. As far as propulsion goes, you could either launch multiple stages from the Earth to give the system a boost, or aggregate them at ISS for use. With the station as a logistics node, the existing base of expendable launch vehicles could be used, which would increase their production, thus lowering their unit cost by as much as 50% for a 25% increase in production.
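The cost claim here (unit cost halved for a 25% production increase) can be read as a Wright's-law learning curve. This sketch back-solves the implied exponent from those two figures, which are the commenter's assumption rather than measured launch-industry data:

```python
import math

# Wright's-law sketch: unit_cost = base_cost * production_ratio ** b.
# The exponent b is back-solved from the comment's claim that a 1.25x
# production increase halves unit cost; that claim (and hence b) is the
# poster's assumption, not measured launch-industry data.

b = math.log(0.5) / math.log(1.25)   # roughly -3.1

def unit_cost(production_ratio, base_cost=1.0):
    """Relative unit cost after scaling production by `production_ratio`."""
    return base_cost * production_ratio ** b

print(round(unit_cost(1.25), 3))  # 0.5 by construction
```

An exponent near -3 is far steeper than typical aerospace learning curves, so the 50%-for-25% figure should be treated as optimistic.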
The aerocapture shell for the return to the station is also very well trodden territory and by using that approach, you save a tremendous amount of fuel.
The RSV would be used to get humans first to lunar orbit, where a stripped-down lander would be used to get two crewpersons at a time to the lunar surface. Supplies and several subsystems of a lunar outpost would be emplaced before the crew's arrival, and even with the primitive state of robotics today, these could be in serviceable condition before the crew arrived.
Instead of a science-focused initial mission, the mission would be focused on In-Situ Resource Utilization, or ISRU, which would be done to make oxygen and metals from the lunar basaltic rocks and/or the regolith. If this is done in the polar regions, the amount of resources is far greater than at the equator, and there is constant sunlight, eliminating the near-term need for nuclear power.
What is necessary here is to reduce the cost to first ISRU production as much as possible and then leverage ISRU supplied oxygen, metals, and fuel to multiply the effectiveness of the RSV.
It is my professional opinion that this would be cheaper than developing an RLV or a heavy lift rocket. However, with ISRU as a focus, and local manufacturing set up at the earliest possible moment, this changes the mix of payloads sent from the Earth from complete systems, to parts and subassemblies, which favors a high flight rate low cost system, which gets you the market needed to justify the private investment in an RLV.
Therefore to get an RLV, you need an RSV supported by ISRU.
:)
Re: What If the Singularity Does NOT Happen?
Speculation about the approaching singularity is fascinating but typically incomplete. Extrapolations of CPU power and the adoption of computer technology are all very well. Certain other trends are unfortunately disregarded. I have studied these extrapolations, in particular the adoption and unavoidable use of Microsoft Office applications. I estimate that by the year 2053, a full two decades before other trends reach their point of criticality, society will reach what I call the "MS event horizon".
The density of p-codes, compounded by exponentially expanding auto-feature creeping and mandlemangled user interface elements, will almost certainly accelerate the universe's quantum state into a local entropy maximum by that year.
I can imagine our universe might thereafter be visited from some temporally distant but nearby brane, by unrecognizably intelligent entities, who glissandaport into our where, and discover us in a state of endless suspension, frozen from jscript in a deadly IPV6e7 embrace at the moment a Baja amoeba running Nanonet Explorer clicked the "back and over yonder" button when it really wanted to go thither. The entities will withdraw, incomprehensibly wistfully, hollowly [Sorry!] whispering "This is the way this world ends - not with a bang, but a noop."
Re: What If the Singularity Does NOT Happen?
Ha, I knew I was close.
It's not a-port, though; it's aport. As in aportation.
Aportation:
A PK talent involving the seemingly instantaneous movement of an object from one location in space-time to another, apparently without going through the normal space-time in between. See Teleportation.
http://www.experiencefestival.com/a/Aportation/id/183412
sweet, found gliss too:
Glissando
"Glissando" (plural: glissandi, abbreviated gliss.) is a glide from one pitch to another. It is an Italianized musical term derived from the French glisser, to glide.
Glissando vs. Portamento
Prescriptive attempts[1] to distinguish the glissando from the portamento by limiting the former to the filling in of discrete intermediate pitches on instruments like the piano, harp and fretted strings have run up against established usage[2] of instruments like the trombone and timpani. The latter could thus be thought of as capable of either 'glissando' or 'portamento', depending on whether the drum was rolled or not. The clarinet gesture that opens Rhapsody in Blue could likewise be thought of either way, being originally for piano, but is in practice played as a portamento and described as a glissando. In cases where the destination and goal pitches are reduced to starting and stopping points as in James Tenney's Cellogram, or points of inflection, as in the sirens of Varèse's Hyperprism, the term portamento (conjuring a decorative effect) seems hardly adequate for what is a sonorous object in its own right and these are called glissando.
'Discrete glissando'
On some instruments (e.g., piano, harp, xylophone), discrete tones are clearly audible when sliding. For example, on a piano, the player can slide his/her thumb or fingers across the white or black keys, producing either a C major scale or an F# major pentatonic (or their relative modes). On a harp, the player can slide his/her finger across the strings, quickly playing the separate notes or even an arpeggio (for example b, c flat, d, e sharp, f, g sharp, a flat). Wind, brass and fretted stringed instrument players can effect an extremely rapid chromatic scale (ex: sliding up or down a string quickly on a fretted instrument), going through an infinite number of pitches. Arpeggio effects (likewise named glissando) are also obtained on the harmonic series by bowed strings and brass, especially the french horn.
http://encyclopedia.thefreedictionary.com/Gliss
So I take it to mean that glissandoporting is teleportation between worlds in the MWT bulk by harmonic gliding.
Re: What If the Singularity Does NOT Happen?
In the article titled 'What If the Singularity Does NOT Happen?', Vernor Vinge presents three graphs to illustrate his theory. The graphs contain a line depicting maximum power source as a function of population size. However, it is not clear how this line was constructed. Specifically, according to the graph, humans started using horses around 6000 BC. But when did humans start using camels, oxen and elephants? It must have been after 6000 BC, as these animals are more powerful than horses and the graph clearly displays an upward trend. Is this a historical fact? Furthermore, what do the dots between horse use and the invention of the steam engine represent? It seems that the graph was plotted first and events labelled subsequently. Moreover, 'The Wheel of Time' graph illustrates alternating periods of technological progress and regress. These periods are associated with equally dramatic oscillations in the size of the human population. However, no justification is provided for the occurrence of events that would bring the human race to the brink of extinction.
Similar to other Singularity theorists, Vernor Vinge claims that the development of civilization exhibits exponential characteristics. For instance, radio, television, airplanes and space missions were unthinkable only two centuries ago. Likewise, people did not see a need for personal computers a few decades ago. For these reasons, it is hard to imagine what events will take place 1000 years from now. Yet the author offers predictions and graphs that stretch thousands of years into the future. Indeed, these far-future fictitious scenarios do not deserve serious consideration.
Vinge claims that the best chance of survival for the human race lies in establishing space colonies. Space settlements will only be viable if travelling faster than the speed of light is made possible. However, Bill Joy, the Chief Scientist of Sun Microsystems, disagrees with this claim. He argues that the problems we have created on Earth will follow us to the space colonies. Therefore, the fate of the human race is tied to our ability to survive on this planet (Joy, 2000).
Re: What If the Singularity Does NOT Happen?
Hello people, first post here.
So many good points, I don't know which ones to reply to first!
I'll try a few that I still remember after an hour of reading.
@simulation
1. Yes, it's a possibility; some scientists seriously think we are part of a simulation, and a group has demonstrated the possibility that our universe is a holographic projection. Thus we have both the possibility and a mechanism for it.
But I think any discussions and pondering regarding the nature or purpose of this simulation would not be very productive, since it's far beyond our comprehension.
We could be someone's lab rats, someone's save file of a cosmological Spore game, or any number of possible scenarios, all of which are likely beyond our control.
However, one exciting aspect of the advent of strong AI is that we'll know much more about our universe; the technological breakthroughs it will achieve should allow an unprecedented amount of information about the universe to be gained. We might confirm for sure whether we're in a simulation or not, what dark matter/energy is, and whether there is ETI.
@monkeys > humans > strong AI
Someone mentioned that humans would seek to kill monkeys either intentionally to acquire resources or by accident as a side-effect of their development, and that machines would do the same to us.
I agree with this to some extent, but ultimately no long-term predictions about machine-human interaction can be derived from human-monkey interaction.
We started off by killing inferior species for their food and territory when we first "stood up"; later we simply ruined their environment while we started "running"; but we would eventually reach a point where the slaughtering or even harming of animals is detrimental to our societal values, and the resources to keep them comfortable, even happy, would be trivial enough that we'd preserve them out of compassion.
Machines cannot be assumed to follow this same kill-ignore-babysit pattern simply because we lack the intelligence to predict their behavior.
I think the only thing we can reliably predict is this: given the known fact that human society today is riddled with problems and a sizable proportion of humans don't live happily, and the fact that once strong AI emerges its intelligence will surpass ours, it would not agree with our methods and our condition, and would take some action to rectify the issue. The way it chooses to go about it is beyond human prediction, just as a monkey cannot know that the same species that once slaughtered them for food would later put them in sanctuaries, apparently for no good reason.
Re: What If the Singularity Does NOT Happen?
"Machines cannot be assumed to follow this same kill-ignore-babysit pattern simply because we lack the intelligence to predict their behavior. I think the only thing we can reliably predict is that, given a known fact that human society today is riddled with problems and a sizable proportion of humans don't live happily, and the fact that once strong AI emerges their intelligence will surpass us ..."
I don't get why people automatically see AI as a new "species". I think it is much more likely they will be tools: a million times more advanced versions of today's computers. They will not have any desires or goals that their makers (us) do not want them to have.
I disagree that we can't predict what super AIs will do. While we can't predict exactly how they will solve certain problems, by controlling their desires and emotions OR by simply making them obey human commands, we CAN predict what problems they will solve. So we could predict, for example, that they will make us immortal, but we can't predict how they will do this.
I find it very hard to imagine why ANYONE would want to build an AI that is completely autonomous and has unpredictable goals. That would just be stupid and obviously suicidal. On the other hand, there are LOTS of reasons why people would want to build a "tool AI", that obeys commands and solves problems the inventors want to solve.