Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0656.html

    Singularities and Nightmares
by David Brin

Options for a coming singularity include self-destruction of civilization, a positive singularity, a negative singularity (machines take over), and retreat into tradition. Our urgent goal: find (and avoid) failure modes, using anticipation (thought experiments) and resiliency -- establishing robust systems that can deal with almost any problem as it arises.

Originally published in Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology, Volume 2, No. 1, March 27, 2006.1 Reprinted with permission on KurzweilAI.net March 28, 2006.

In order to give you pleasant dreams tonight, let me offer a few possibilities about the days that lie ahead—changes that may occur within the next twenty or so years, roughly a single human generation. Possibilities that are taken seriously by some of today's best minds. Potential transformations of human life on Earth and, perhaps, even what it means to be human.

For example, what if biologists and organic chemists manage to do to their laboratories the same thing that cyberneticists did to computers, shrinking their vast biochemical labs from building-sized behemoths down to units that are smaller, cheaper, and more powerful than anyone imagined? Isn't that what happened to those gigantic computers of yesteryear? Until, today, your pocket cell phone contains as much processing power and sophistication as NASA owned during the moon shots. People who foresaw this change were able to ride this technological wave. Some of them made a lot of money.

Biologists have come a long way already toward achieving a similar transformation. Take, for example, the Human Genome Project, which sped up the sequencing of DNA by so many orders of magnitude that much of it is now automated and miniaturized. Speed has skyrocketed, while prices plummet, promising that each of us may soon be able to have our own genetic mappings done, while-U-wait, for the same price as a simple EKG. Imagine extending this trend, by simple extrapolation, compressing a complete biochemical laboratory the size of a house down to something that fits cheaply on your desktop. A MolecuMac, if you will. The possibilities are both marvelous and frightening.

When designer drugs and therapies are swiftly modifiable by skilled medical workers, we all should benefit.

But then, won't there also be the biochemical equivalent of "hackers"? What are we going to do when kids all over the world can analyze and synthesize any organic compound, at will? In that event, we had better hope for accompanying advances in artificial intelligence and robotics... at least to serve our fast food burgers. I'm not about to eat at any restaurant that hires resentful human adolescents, who swap fancy recipes for their home molecular synthesizers over the Internet. Would you?

Now don't get me wrong. If we ever do have MolecuMacs on our desktops, I'll wager that 99 percent of the products will be neutral or profoundly positive, just like most of the software creativity flowing from young innovators today. But if we're already worried about a malicious one percent in the world of bits and bytes—hackers and cyber-saboteurs—then what happens when this kind of 'creativity' moves to the very stuff of life itself? Nor have we mentioned the possibility of intentional abuse by larger entities—terror cabals, scheming dictatorships, or rogue corporations.

These fears start to get even more worrisome when we ponder the next stage, beyond biotech. Deep concerns are already circulating about what will happen when nanotechnology—ultra-small machines building products atom-by-atom to precise specifications—finally hits its stride. Molecular manufacturing could result in super-efficient factories that create wealth with staggering efficiency. Nano-maintenance systems may enter your bloodstream to cure disease or fine-tune bodily functions. Visionaries foresee this technology helping to save the planet from earlier human errors, for instance by catalyzing the recycling of obstinate pollutants. Those desktop units eventually may become universal fabricators that turn almost any raw material into almost any product you might desire...

... or else (some worry), nanomachines might break loose to become the ultimate pollution. A self-replicating disease, gobbling everything in sight, conceivably turning the world's surface into gray goo.2

Others have raised this issue before, some of them in very colorful ways. Take the sensationalist novel Prey, by Michael Crichton, which portrays a secretive agency hubristically pushing an arrogant new technology, heedless of possible drawbacks or consequences. Crichton's typical worried scenario about nanotechnology follows a pattern nearly identical to his earlier thrillers about unleashed dinosaurs, robots, and dozens of other techie perils, all of them viewed with reflexive suspicious loathing. (Of course, in every situation, the perilous excess happens to result from secrecy, a topic that we will return to, later.) A much earlier and better novel, Blood Music, by Greg Bear, presented the upside and downside possibilities of nanotech with profound vividness. Especially the possibility that most worries even optimists within the nanotechnology community—that the pace of innovation may outstrip our ability to cope.

Now, at one level, this is an ancient fear. If you want to pick a single cliché that is nearly universally held, across all our surface boundaries of ideology and belief—e.g. left-versus-right, or even religious-vs-secular—the most common of all would probably be:

"Isn't it a shame that our wisdom has not kept pace with technology?"

While this cliché is clearly true at the level of solitary human beings, and even mass-entities like corporations, agencies or political parties, I could argue that things aren't anywhere near as clear at the higher level of human civilization. Elsewhere I have suggested that "wisdom" needs to be defined according to outcomes and processes, not the perception or sagacity of any particular individual guru or sage. Take the outcome of the Cold War… the first known example of humanity acquiring a means of massive violence, and then mostly turning away from that precipice. Yes, that means of self-destruction is still with us. But two generations of unprecedented restraint suggest that we have made a little progress in at least one kind of "wisdom." That is, when the means of destruction are controlled by a few narrowly selected elite officials on both sides of a simple divide.

But are we ready for a new era, when the dilemmas are nowhere near as simple? In times to come, the worst dangers to civilization may not come from clearly identifiable and accountable adversaries—who want to win an explicit, set-piece competition—as much as from a general democratization of the means to do harm. New technologies, distributed by the Internet and effectuated by cheaply affordable tools, will offer increasing numbers of angry people access to modalities of destructive power—means that will be used because of justified grievance, avarice, indignant anger, or simply because they are there.


Faced with onrushing technologies in biotech, nanotech, artificial intelligence, and so on, some bright people—like Bill Joy, former chief scientist of Sun Microsystems—see little hope for survival of a vigorously open society. You may have read Joy's unhappy manifesto in Wired Magazine3, in which he quoted the Unabomber (of all people), in support of a proposal that is both ancient and new—that our sole hope for survival may be to renounce, squelch, or relinquish several classes of technological progress.

This notion of renunciation has gained credence all across the political and philosophical map, especially at the farther wings of both right and left. Take the novels and pronouncements of Margaret Atwood, whose fundamental plot premises seem almost identical to those of Michael Crichton, despite their differences over superficial politics. Both authors routinely express worry that often spills into outright loathing for the overweening arrogance of hubristic technological innovators who just cannot leave nature well enough alone.

At the other end of the left-right spectrum stands Francis Fukuyama, who is Bernard L. Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies of Johns Hopkins University. Dr. Fukuyama's best-known book, The End of History and the Last Man (1992), triumphantly viewed the collapse of communism as likely to be the final stirring event worthy of major chronicling by historians. From that point on, we would see liberal democracy bloom as the sole path for human societies, without significant competition or incident. No more "interesting times."4 But this sanguine view did not last, as Fukuyama began to see potentially calamitous "history" in the disruptive effects of new technology. As a Bush Administration court intellectual and a member of the President's Council on Bioethics, he now condemns a wide range of biological science as disruptive and even immoral. People cannot, according to Fukuyama, be trusted to make good decisions about the use of—for example—genetic therapy. Human "improvability" is so perilous a concept that it should be dismissed, almost across-the-board. In Our Posthuman Future: Consequences of the Biotechnology Revolution (2002), Fukuyama prescribes paternalistic government-industry panels to control or ban whole avenues of scientific investigation, doling out those advances that are deemed suitable.

You may surmise that I am dubious. For one thing, shall we enforce this research ban worldwide? Can such tools be squelched forever? From elites, as well as the masses? If so, how?

Although some of the failure modes mentioned by Bill Joy, Ralph Peters, Francis Fukuyama, and the brightest renunciators seem plausible and worth investigating, it's hard to grasp how we can accomplish anything by becoming neo-Luddites. Laws that seek to limit technological advancement will certainly be disobeyed by groups that simmer at the social extreme, where the worst dangers lie. Even if ferocious repression is enacted—perhaps augmented with near-omniscient and universal surveillance—this will not prevent exploration and exploitation of such technologies by social elites. (Corporate, governmental, aristocratic, criminal, foreign… choose your own favorite bogeymen of unaccountable power.) For years, I have defied renunciators to cite one example, amid all of human history, when the mighty allowed such a thing to happen. Especially when they plausibly stood to benefit from something new.

While unable to answer that challenge, some renunciators have countered that all of the new mega-technologies—including biotech and nanotechnology—may be best utilized and advanced if control is restricted to knowing elites, even in secret. With so much at stake, should not the best and brightest make decisions for the good of all? Indeed, in fairness, I should concede that the one historical example I gave earlier—that of nuclear weaponry—lends a little support to this notion. Certainly, in that case, one thing that helped to save us was the limited number of decision-makers who could launch calamitous war.

Still, weren't the political processes constantly under public scrutiny, during that era? Weren't those leaders supervised by the public, at least on one side? Moreover, decisions about atom bombs were not corrupted very much by matters of self-interest. (Howard Hughes did not seek to own and use a private nuclear arsenal.) But self-interest will certainly influence controlling elites when they weigh the vast benefits and potential costs of biotech and nanotechnology.

Besides, isn't elitist secrecy precisely the error-generating mode that Crichton, Atwood and so many others portray so vividly, time and again, while preaching against technological hubris? History is rife with examples of delusional cabals of self-assured gentry, telling each other just-so stories while evading any criticism that might reveal flaws in The Plan. By prescribing a return to paternalism—control by elites who remain aloof and unaccountable—aren't renunciators ultimately proposing the very scenario that everybody—rightfully—fears most?

Perhaps this is one reason why the renunciators—while wordy and specific about possible failure modes—are seldom very clear on which controlling entities should do the dirty work of squelching technological progress. Or how this relinquishment could be enforced, across the board. Indeed, supporters can point to no historical examples when knowledge-suppression led to anything but greater human suffering. No proposal that's been offered so far even addresses the core issue of how to prevent some group of elites from cheating. Perhaps all elites.

In effect, only the vast pool of normal people would be excluded, eliminating their myriad eyes, ears and prefrontal lobes from civilization's error-detecting network.

Above all, renunciation seems a rather desperate measure, completely out of character with this optimistic, pragmatic, can-do culture.


And yet, despite all this criticism, I am actually much more approving of Joy, Atwood, Fukuyama, et al, than some might expect. In The Transparent Society, I speak well of social critics who shout when they see potential danger along the road.

In a world of rapid change, we can only maximize the benefits of scientific advancement—and minimize inevitable harm—by using the great tools of openness and accountability. Above all, acknowledging that vigorous criticism is the only known antidote to error. This collective version of "wisdom" is what almost surely has saved us so far. It bears little or no resemblance to the kind of individual sagacity that we are used to associating with priests, gurus, and grandmothers… but it is also less dependent upon perfection. Less prone to catastrophe when the anointed Center of Wisdom makes some inevitable blunder.

Hence, in fact, I find fretful worry-mongers invigorating! Their very presence helps progress along by challenging the gung-ho enthusiasts. It's a process called reciprocal accountability. Without bright grouches, eager to point at potential failure modes, we might really be in the kind of danger that they claim we are. Ironically, it is an open society—where the sourpuss Cassandras are well heard—that is unlikely to need renunciation, or the draconian styles of paternalism they prescribe.

Oh, I see the renunciators' general point. If society remains as stupid as some people think it is—or even if it is as smart as I think it is, but gets no smarter—then nothing that folks do or plan at a thousand well-intentioned futurist conferences will achieve very much. No more than delaying the inevitable.

In that case, we'll finally have the answer to an ongoing mystery of science—why there's been no genuine sign of extraterrestrial civilization amid the stars.5 The answer will be simple. Whenever technological culture is tried, it always destroys itself. That possibility lurks, forever, in the corner of our eye, reminding us what's at stake.

On the other hand, I see every reason to believe we have a chance to disprove that dour worry. As members of an open and questioning civilization—one that uses reciprocal accountability to find and probe every possible failure-mode—we may be uniquely equipped to handle the challenges ahead.

Anyway, believing that is a lot more fun.


We've heard from the gloomy renunciators. Let's look at another future. The scenario of those who—literally—believe the sky's the limit. Among many of our greatest thinkers, there is a thought going around—a new 'meme' if you will—that says we're poised for take-off. The idea I'm referring to is that of a coming Technological Singularity.

Science fiction author Vernor Vinge has been touted as a chief popularizer of this notion, though it has been around, in many forms, for generations. More recently, Ray Kurzweil's book The Singularity Is Near argues that our scientific competence and technologically-empowered creativity will soon skyrocket, propelling humanity into an entirely new age.

Call it a modern, high-tech version of Teilhard de Chardin's noosphere apotheosis—an approaching time when humanity may move, dramatically and decisively, to a higher state of awareness or being. Only, instead of achieving this transcendence through meditation, good works or nobility of spirit, the idea this time is that we may use an accelerating cycle of education, creativity and computer-mediated knowledge to achieve intelligent mastery over both the environment and our own primitive drives.

In other words, first taking control over Brahma's "wheel of life," then learning to steer it wherever we choose.

What else would you call it…

  • When we start using nanotechnology to repair bodies at the cellular level?
  • When catching up on the latest research is a mere matter of desiring information, whereupon autonomous software agents deliver it to you, as quickly and easily as your arm now moves wherever you wish it to?
  • When on-demand production becomes so trivial that wealth and poverty become almost meaningless terms?
  • When the virtual reality experience—say visiting a faraway planet—gets hard to distinguish from the real thing?
  • When each of us can have as many "servants"—either robotic or software-based—as we like, as loyal as your own right hand?
  • When augmented human intelligence soars and—trading insights with one another at light speed—helps us attain entirely new levels of thought?

Of course, it is worth pondering how this 'singularity' notion compares to the long tradition of contemplations about human transcendence. Indeed, the idea of rising to another plane of existence is hardly new! It makes up one of the most consistent themes in cultural history, as though arising from our basic natures.

Indeed, many opponents of science and technology clutch their own images of messianic transformation, images that—if truth be told—share many emotional currents with the tech-heavy version, even if they disagree over the means to achieve transformation. Throughout history, most of these musings dwelled upon the spiritual path, that human beings might achieve a higher state through prayer, moral behavior, mental discipline, or by reciting correct incantations. Perhaps because prayer and incantations were the only means available.

In the last century, an intellectual tradition that might be called 'techno-transcendentalism' added a fifth track. The notion that a new level of existence, or a more appealing state of being, might be achieved by means of knowledge and skill.

But which kinds of knowledge and skill?

Depending on the era you happen to live in, techno-transcendentalism has shifted from one fad to another, pinning fervent hopes upon the scientific flavor of the week. For example, a hundred years ago, Marxists and Freudians wove complex models of human society—or mind—predicting that rational application of these models and rules would result in far higher levels of general happiness.6 Subsequently, with popular news about advances in agriculture and evolutionary biology, some groups grew captivated by eugenics—the allure of improving the human animal. On occasion, this resulted in misguided and even horrendous consequences. Yet, this recurring dream has lately revived in new forms, with the promise of genetic engineering and neurotechnology.

Enthusiasts for nuclear power in the 1950s promised energy too cheap to meter. Some of the same passion was seen in a widespread enthusiasm for space colonies, in the 1970s and 80s, and in today's ongoing cyber-transcendentalism, which promises ultimate freedom and privacy for everyone, if only we just start encrypting every Internet message, using anonymity online to perfectly mask the frail beings who are actually typing at a real keyboard. Over the long run, some hold out hope that human minds will be able to download into computers or the vast new frontier of mid-21st Century cyberspace, freeing individuals of any remaining slavery to our crude and fallible organic bodies.

This long tradition—of bright people pouring faith and enthusiasm into transcendental dreams—tells us a lot about one aspect of our nature, a trait that crosses all cultures and all centuries. Quite often, this zealotry is accompanied by disdain for contemporary society—a belief that some kind of salvation can only be achieved outside of the normal cultural network…a network that is often unkind to bright philosophers—and nerds. Seldom is it ever discussed how much these enthusiasts have in common—at least emotionally—with believers in older, more traditional styles of apotheosis, styles that emphasize methods that are more purely mental or spiritual.

We need to keep this long history in mind, as we discuss the latest phase: a belief in the ultimately favorable effects of an exponential increase in the ability of our calculating engines. That their accelerating power of computation will offer commensurately profound magnifications of our knowledge and power. Our wisdom and happiness.

The challenge that I have repeatedly laid down is this: "Name one example, in all of history, when these beliefs actually bore fruit. In light of all the other generations who felt sure of their own transforming notion, should you not approach your newfangled variety with some caution… and maybe a little doubt?"


Are both the singularity believers and the renunciators getting a bit carried away? Let's take that notion of doubt and give it some steam. Maybe all this talk of dramatic transformation, within our lifetimes, is just like those earlier episodes: based more on wishful (or fearful) thinking than upon anything provable or pragmatic.

Take Jonathan Huebner, a physicist who works at the Pentagon's Naval Air Warfare Center in China Lake, California. Questioning the whole notion of accelerating technical progress, he studied the rate of "significant innovations per person." Using as his sourcebook The History of Science and Technology, Huebner concluded that the rate of innovation peaked in 1873 and has been declining ever since. In fact, our current rate of innovation—which Huebner puts at seven important technological developments per billion people per year—is about the same as it was in 1600. By 2024, it will have slumped to the same level as it was in the Dark Ages, around 800 AD. "The number of advances wasn't increasing exponentially, I hadn't seen as many as I had expected."
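Huebner's metric is simple arithmetic, and a toy sketch makes it concrete. The function below is mine, not Huebner's, and the numbers fed into it are illustrative placeholders rather than data from The History of Science and Technology; they merely show how a figure like "seven per billion per year" is obtained.

```python
# Back-of-envelope sketch of a Huebner-style per-capita innovation metric.
# rate = significant innovations per year / world population in billions.
# The inputs below are ILLUSTRATIVE placeholders, not Huebner's dataset.

def innovation_rate(innovations_per_year: float, population_billions: float) -> float:
    """Significant innovations per billion people per year."""
    return innovations_per_year / population_billions

# E.g. ~45 tallied innovations per year against a population of ~6.5 billion
# lands near the "seven per billion per year" figure cited in the text.
print(round(innovation_rate(45, 6.5), 1))  # prints 6.9
```

The point of the sketch is that the metric's denominator is population, which is why the rate can fall even while the absolute count of innovations holds steady or grows slowly.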

Huebner offers two possible explanations: economics and the size of the human brain. Either it's just not worth pursuing certain innovations since they won't pay off—one reason why space exploration has all but ground to a halt—or we already know most of what we can know, and so discovering new things is becoming increasingly difficult.

Ben Jones, of Northwestern University in Illinois, agrees with Huebner's overall findings, comparing the problem to that of the Red Queen in Through the Looking Glass: we have to run faster and faster just to stay in the same place. Jones differs, however, as to why this happened. His first theory is that early innovators plucked the easiest-to-reach ideas, or "low-hanging fruit," so later ones have to struggle to crack the harder problems. Or it may be that the massive accumulation of knowledge means that innovators have to stay in education longer to learn enough to invent something new and, as a result, less of their active life is spent innovating. "I've noticed that Nobel Prize winners are getting older," he says.

In fact, it is easy to pick away at these four arguments by Huebner and Jones.7 For example, it is only natural for innovations and breakthroughs to seem less obvious or apparent to the naked eye, as we have zoomed many of our research efforts down to the level of the quantum and out to the edges of the cosmos. In biology, only a few steps—like completion of the Human Genome Project—get explicit attention as "breakthroughs." Such milestones are hard to track in a field that is fundamentally so complex and murky. But that does not mean biological advances aren't either rapid or, overall, truly substantial. Moreover, while many researchers seem to gain their honors at an older age, is that not partly a reflection of the fact that lifespans have improved, and fewer die off before getting consideration for prizes?

Oh, there is something to be said for the singularity-doubters. Indeed, even in the 1930s, there were some famous science fiction stories that prophesied a slowdown in progress, following a simple chain of logic. Because progress would seem to be its own worst enemy. As more becomes known, specialists in each field would have to absorb more and more about less and less—or about ever narrowing fields of endeavor—in order to advance knowledge by the tiniest increments. When I was a student at Caltech, in the 1960s, we undergraduates discussed this problem at worried length. For example, every year the sheer size, on library shelves, of "Chemical Abstracts" grew dauntingly larger and more difficult for any individual to scan for relevant papers.

And yet, over subsequent decades, this trend never seemed to become the calamity we expected. In part, because Chemical Abstracts and its cousins have—in fact—vanished from library shelves, altogether! The library space problem was solved by simply putting every abstract on the Web. Certainly, literature searches—for relevant work in even distantly related fields—now take place faster and more efficiently than ever before, especially with the use of software agents and assistants that should grow even more effective in years to come.

That counter-force certainly has been impressive. Still, my own bias leans toward another trend that seems to have helped forestall a productivity collapse in science. This one (I will admit) is totally subjective. And yet, in my experience, it has seemed even more important than advances in online search technology. For it has seemed to me that the best and brightest scientists are getting smarter, even as the problems they address become more complex.

I cannot back this up with statistics or analyses. Only with my observation that many of the professors and investigators that I have known during my life now seem much livelier, more open-minded and more interested in fields outside their own—even as they advance in years—than they were when I first met them. In some cases, decades ago. Physicists seem to be more interested in biology, biologists in astronomy, engineers in cybernetics, and so on, than used to be the case. This seems in stark contrast to what you would expect, if specialties were steadily narrowing. But it is compatible with the notion that culture may heavily influence our ability to be creative. And a culture that loosens hoary old assumptions and guild boundaries may be one that's in the process of freeing-up mental resources, rather than shutting them down.

In fact, this trend—toward overcoming standard categories of discipline—is being fostered deliberately in many places. For example, the new Sixth College of the University of California at San Diego, whose official institutional mission is to "bridge the arts and sciences," drives a nail in the coffin of C.P. Snow's old concept that the "two cultures" can never meet. Never before have there been so many collaborative efforts between tech-savvy artists and technologists who appreciate the aesthetic and creative sides of life.8

What Huebner and Jones appear to miss is that complex obstacles tend best to be overcome by complex entities. Even if Einstein and others picked all the low hanging fruit within reach to individuals, that does not prevent groups—institutions and teams and entrepreneurial startups—from forming collaborative human pyramids to go after goodies that are higher in the tree. Especially when those pyramids and teams include new kinds of members, software agents and search methodologies, worldwide associative networks and even open-source participation by interested amateurs. Or when myriad fields of endeavor see their loci of creativity get dispersed onto a multitude of inexpensive desktops, the way software has been.9

Dutch-American economic historian Joel Mokyr, in The Lever of Riches and The Gifts of Athena, supports this progressive view that we are indeed doing something right, something that makes our liberal-democratic civilization uniquely able to generate continuous progress. Mokyr believes that, since the 18th-century Enlightenment, a new factor has entered the human equation: the accumulation of, and a free market in, knowledge. As Mokyr puts it, we no longer behead people for saying the wrong thing—we listen to them. This "social knowledge" is progressive because it allows ideas to be tested and the most effective to survive. This knowledge is embodied in institutions, which, unlike individuals, can rise above our animal natures.

But Mokyr does worry that, though a society may progress, human nature does not. "Our aggressive, tribal nature is hard-wired, unreformed and unreformable. Individually we are animals and, as animals, incapable of progress." The trick is to cage these animal natures in effective institutions: education, the law, government. But these can go wrong. "The thing that scares me," he says, "is that these institutions can misfire."

While I do not use words such as "caged," I must agree that Mokyr captures the essential point of our recent, brief experiment with the Enlightenment: John Locke's rejection of romantic oversimplification in favor of pragmatic institutions that work flexibly to maximize the effectiveness of our better efforts—the angels of our nature—enabling our creative forces to mutually reinforce. Meanwhile, those same institutions and processes would thwart our "devils"—the always-present human tendency towards self-delusion and cheating. Of course, human nature strives against these constraints. Self-deluders and cheaters are constantly trying to make up excuses to bypass the Enlightenment covenant and benefit by making these institutions less effective. Nothing is more likely to ensure the failure of any singularity than if we allow this to happen.

But then, swiveling the other way, what if it soon becomes possible not only to preserve and advance those creative enlightenment institutions, but also to do what Mokyr calls impossible? What if we actually can improve human nature?

Suppose the human components of societies and institutions can also be made better, even by a little bit? I have contended that this is already happening, on a modest scale. Imagine the effects of even a small upward-ratcheting in general human intelligence, whether inherent or just functional, by means of anything from education to "smart drugs" to technologically-assisted senses to new methods of self-conditioning.

It might not take much of an increase in effective human intelligence for markets and science and democracy, etc., to start working much better than they already do. Certainly, this is one of the factors that singularity aficionados are counting on.

What we are left with is an image that belies the simple and pure notion of a "singularity" curve… one that rises inexorably skyward, as a simple mathematical function, with knowledge and skill perpetually leveraging against itself, as if ordained by natural law. Even the most widely touted example of this kind of curve, Moore's Law—which successfully modeled the rapid increase of computational power available at plummeting cost—has never been anything like a smooth phenomenon. Crucial and timely decisions—some of them pure happenstance—saved Moore's Law on many occasions from collision with either technological barriers or cruel market forces.
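The "simple mathematical function" contrasted here can be written down explicitly. This is a minimal sketch of the idealized doubling curve only, not of the bumpy reality the paragraph describes; the baseline of roughly 2,300 transistors is the commonly cited count for the 1971 Intel 4004.

```python
# Idealized Moore's Law: capability doubles every `period` years.
# A sketch of the smooth curve the essay contrasts with messy reality;
# baseline ~2,300 transistors is the commonly cited Intel 4004 figure.

def moores_law(t_years: float, baseline: float = 2300, period: float = 2.0) -> float:
    """Idealized transistor count after t_years of steady doubling."""
    return baseline * 2 ** (t_years / period)

# Thirty years of doubling every two years is 2**15, a ~32,768-fold gain.
print(round(moores_law(30) / 2300))  # prints 32768
```

Real transistor counts have tracked this curve only approximately, and only because—as the paragraph notes—engineers repeatedly found workarounds when the smooth exponential threatened to stall.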

True, we seem to have been lucky, so far. Cybernetics and education and a myriad other factors have helped to overcome the "specialization trap." But as we have seen in this section, past success is no guarantee of future behavior. Those who foresee upward curves continuing ad infinitum, almost as a matter of faith, are no better grounded than other transcendentalists, who confidently predicted other rapturist fulfillments, in their own times.


Having said all of the above, let me hasten to add that I believe in the high likelihood of a coming singularity!

I believe in it because the alternatives are too awful to accept. Because, as we discussed before, the means of mass destruction, from A-bombs to germ warfare, are 'democratizing'—spreading so rapidly among nations, groups, and individuals—that we had better see a rapid expansion in sanity and wisdom, or else we're all doomed.

Indeed, bucking the utterly prevalent cliché of cynicism, I suggest that strong evidence does indicate some cause for tentative optimism. An upward trend is already well in place. Overall levels of education, knowledge and sagacity in Western Civilization—and its constituent citizenry—have never been higher, and these levels may continue to improve, rapidly, in the coming century. Possibly enough to rule out some of the most prevalent images of failure that we have grown up with. For example, we will not see a future that resembles Blade Runner, or any other cyberpunk dystopia. Such worlds—where massive technology is unmatched by improved wisdom or accountability—will simply not be able to sustain themselves.

The options before us appear to fall into four broad categories:

1. Self-destruction. Immolation or desolation or mass-death. Or ecological suicide. Or social collapse. Name your favorite poison. Followed by a long era when our few successors (if any) look back upon us with envy. For a wonderfully depressing and informative look at this option, see Jared Diamond's Collapse: How Societies Choose to Fail or Succeed. (Note that Diamond restricts himself to ecological disasters that resonate with civilization-failures of the past; thus he only touches on the range of possible catastrophe modes.) We are used to imagining self-destruction happening as a result of mistakes by ruling elites. But in this article we have explored how it also could happen if society enters an age of universal democratization of the means of destruction—or, as Thomas Friedman puts it, "the super-empowerment of the angry young man"—without accompanying advances in social maturity and general wisdom.

2. Achieve some form of 'Positive Singularity'—or at least a phase shift to a higher and more knowledgeable society (one that may have problems of its own that we can't imagine). Positive singularities would, in general, offer normal human beings every opportunity to participate in spectacular advances, experiencing voluntary, dramatic self-improvement, without anything being compulsory… or too much of a betrayal to the core values of decency we share.

3. Then there is the 'Negative Singularity'—a version of self-destruction in which a skyrocket of technological progress does occur, but in ways that members of our generation would find unpalatable. Specific scenarios that fall into this category might include being abused by new, super-intelligent successors (as in Terminator or The Matrix), or simply being "left behind" by super entities that pat us on the head and move on to great things that we can never understand. Even the softest and most benign version of such a 'Negative Singularity' is perceived as loathsome by some perceptive renunciators, like Bill Joy, who take a dour view of the prospect that humans may become a less-than-pinnacle form of life on Planet Earth.10

4. Finally, there is the ultimate outcome that is implicit in every renunciation scenario: Retreat into some more traditional form of human society, like those that maintained static sameness under pyramidal hierarchies of control for at least four millennia. One that quashes the technologies that might lead to results 1 or 2 or 3. With four thousand years of experience at this process, hyper-conservative hierarchies could probably manage this agreeable task, if we give them the power. That is, they could do it for a while.

When the various paths11 are laid out in this way, it seems to be a daunting future that we face. Perhaps an era when all of human destiny will be decided. Certainly not one that's devoid of "history." For a somewhat similar, though more detailed, examination of these paths, the reader might pick up Joel Garreau's fine book, Radical Evolution. It takes a good look at two extreme scenarios for the future—"Heaven" and "Hell"—then posits a third—"Prevail"—as the one that rings most true.

So, which of these outcomes seem plausible?

First off, despite the fact that it may look admirable and tempting to many, I have to express doubt that outcome #4 could succeed over an extended period. Yes, it resonates with the lurking tone that each of us feels inside, inherited from countless millennia of feudalism and unquestioning fealty to hierarchies, a tone that today is reflected in many popular fantasy stories and films. Even though we have been raised to hold some elites in suspicion, there is a remarkable tendency for each of us to turn a blind eye to other elites—or favorites—and to rationalize that those would rule wisely.

Certainly, the quasi-Confucian social pattern that is being pursued by the formerly Communist rulers of China seems to be an assertive, bold and innovative approach to updating authoritarian rule, incorporating many of the efficiencies of both capitalism and meritocracy.12 This determined effort suggests that an updated and modernized version of hierarchism might succeed at suppressing whatever is worrisome, while allowing progress that's been properly vetted. It is also, manifestly, a rejection of the Enlightenment and everything that it stands for, including John Locke's wager that processes of regulated but mostly free human interaction can solve problems better than elite decision-making castes.

In fact, we have already seen, in just this one article, more than enough reasons to understand why retreat simply cannot work over the long run. Human nature ensures that there can never be successful rule by serene and dispassionately wise "philosopher kings." That approach had its fair trial—at least forty centuries—and by almost any metric, it failed.

As for the other three roads, well, there is simply no way that anyone—from the most enthusiastic, "extropian" utopian-transcendentalists to the most skeptical and pessimistic doomsayers—can prove that one path is more likely than the others. (How can models, created within an earlier, cruder system, properly simulate and predict the behavior of a later and vastly more complex system?) All we can do is try to understand which processes may increase our odds of achieving better outcomes. More robust outcomes. These processes will almost certainly be as much social as technological. They will, to a large degree, depend upon improving our powers of error-avoidance.

My contention—running contrary to many prescriptions from both left and right—is that we should trust Locke a while longer. This civilization already has in place a number of unique methods for dealing with rapid change. If we pay close attention to how these methods work, they might be improved dramatically, perhaps enough to let us cope, and even thrive. Moreover, the least helpful modification would appear to be the one thing that the Professional Castes tell us we need—an increase in paternalistic control.13

In fact, when you look at our present culture from an historical perspective, it is already profoundly anomalous in its emphasis upon individualism, progress, and above all, suspicion of authority (SOA). These themes were actively and vigorously repressed in a vast majority of human cultures, because they threatened the stable equilibrium upon which ruling classes always depended. In Western Civilization—by way of contrast—it would seem that every mass-media work of popular culture, from movies to novels to songs, promotes SOA as a central human value.14 This may, indeed, be the most distinctive thing about our culture, even more than our wealth and technological prowess.

Although we are proud of the resulting society—one that encourages eccentricity, appreciation of diversity, social mobility, and scientific progress—we have no right, as yet, to claim that this new way of doing things is especially sane or obvious. Many in other parts of the world consider Westerners to be quite mad! And with some reason. Indeed, only time will tell who is right about that. For example, if we take the suspicion of authority ethos to its extreme, and start paranoically mistrusting even our best institutions—as was the case with Oklahoma City bomber Timothy McVeigh—then it is quite possible that Western Civilization may fly apart before ever achieving its vaunted aims, and lead rapidly to some of the many ways that we might achieve outcome #1.

Certainly, a positive singularity (outcome #2) cannot happen if only centrifugal forces operate and there are no compensating centripetal virtues to keep us together as a society of mutually respectful sovereign citizens.

Above all (as I point out in The Transparent Society), our greatest innovations, the accountability arenas15 wherein issues of importance get decided—science, justice, democracy and free markets—are not arbitrary, nor are they based on whim or ideology. They all depend upon adversaries competing on specially designed playing fields, with hard-learned arrangements put in place to prevent the kinds of cheating that normally prevail whenever human beings are involved. Above all, science, justice, democracy, and free markets depend on the mutual accountability that comes from open flows of information.

Secrecy is the enemy that destroys each of them, and it could easily spread like an infection to spoil our frail renaissance.


Clearly, our urgent goal is to find (and then avoid) a wide range of quicksand pits—potential failure modes—as we charge headlong into the future. At risk of repeating an oversimplification, we do this in two ways. One method is anticipation. The other is resiliency.

The first of these uses the famous prefrontal lobes—our most recent, and most spooky, neural organs—to peer ahead, perform gedankenexperiments, forecast problems, make models and devise countermeasures in advance. Anticipation can either be a lifesaver… or one of our most colorful paths to self-deception and delusion.16

The other approach—resiliency—involves establishing robust systems, reaction sets, tools and distributed strengths that can deal with almost any problem as it arises—even surprising problems the vaunted prefrontal lobes never imagined.

Now, of course, these two methods are compatible, even complementary. We have a better computer industry, overall, because part of it is centered in Boston and part in California, where different corporate cultures reign. Companies acculturated with a "northeast mentality" try to make perfect products. Employees stay in the same company, perhaps for decades. They feel responsible. They get the bugs out before releasing and shipping. These are people you want designing a banking program, or a defense radar, because we can't afford a lot of errors in even the beta version, let alone the nation's ATM machines! On the other hand, people who work in Silicon Valley seem to think almost like another species. They cry, "Let's get it out the door! Innovate first and catch the glitches later! Our customers will tell us what parts of the product to fix on the fly. They want the latest thing and to hell with perfection." Today's Internet arose from that kind of creative ferment, adapting quickly to emergent properties of a system that turned out to be far more complex and fertile than its original designers anticipated. Indeed, their greatest claim to fame comes from having anticipated that unknown opportunities might emerge!

Sometimes the best kind of planning involves leaving room for the unknown.

This can be hard, especially when your duty is to prepare against potential failure modes that could harm or destroy a great nation. Government and military culture have always been anticipatory, seeking to analyze potential near-term threats and coming up with detailed plans to stymie them. This resulted in incremental approaches to thinking about the future. One classic cliché holds that generals are always planning to fight a modified version of the last war. History shows that underdogs—those who lost the last campaign or who bear a bitter grudge—often turn to innovative or resilient new strategies, while those who were recently successful are in grave danger of getting mired in irrelevant solutions from the past, often with disastrous consequences.17

At the opposite extreme is the genre of science fiction, whose attempts to anticipate the future are—when done well—part of a dance of resiliency. Whenever a future seems to gather a consensus around it, as happened to "cyberpunk" in the late eighties, the brightest SF authors become bored with such a trope and start exploring alternatives. Indeed, boredom could be considered one of the driving forces of ingenious invention, not only in science fiction, but in our rambunctious civilization as a whole.

Speaking as an author of speculative novels, I can tell you that it is wrong to think that science fiction authors try to predict the future. With our emphasis more on resiliency than anticipation, we are more interested in discovering possible failure modes and quicksand pits along the road ahead, than we are in providing a detailed and prophetic travel guide for the future.

Indeed, one could argue that the most powerful kind of science fiction tale is the self-preventing prophecy—any story or novel or film that portrays a dark future so vivid, frightening and plausible that millions are stirred to act against the scenario ever coming true. Examples in this noble (if terrifying) genre—which also can encompass visionary works of non-fiction—include Fail-Safe, Brave New World, Soylent Green, Silent Spring, The China Syndrome, Das Kapital, The Hot Zone, and greatest of all, George Orwell's Nineteen Eighty-Four, now nearing sixty years of scaring readers half to death. Orwell showed us the pit awaiting any civilization that combines panic with technology and the dark, cynical tradition of tyranny. In so doing, he armed us against that horrible fate. By exploring the shadowy territory of the future with our minds and hearts, we can sometimes uncover failure-modes in time to evade them.

Summing up, this process of gedanken or thought experimentation is applicable to both anticipation and resiliency. But it is most effective only when it is engendered en masse, in markets and other arenas where open competition among countless well-informed minds can foster the unique synergy that has made our civilization so different from hierarchy-led cultures that came before. A synergy that withers the bad notions under criticism, while allowing good ones to combine and multiply.

I cannot guarantee that this scenario will work over the dangerous ground ahead. An open civilization filled with vastly educated, empowered, and fully-knowledgeable citizens may be able to apply the cleansing light of reciprocal accountability so thoroughly that onrushing technologies cannot be horribly abused by either secretive elites or disgruntled AYMs (angry young men).

Or else… perhaps… that solution, which brought us so far in the 20th Century, will not suffice in the accelerating 21st. Perhaps nothing can work. Maybe this explains the Great Silence, out there among the stars.

What I do know is this. No other prescription has even a snowball's chance of working. Open knowledge and reciprocal accountability seem, at least, to be worth betting on. They are the tricks that got us this far, in contrast to 4,000 years of near utter failure by systems of hierarchical command.

Anyone who says that we should suddenly veer back in that direction, down discredited and failure-riven paths of secrecy and hierarchy, should bear a steep burden of proof.


All right, what if we do stay on course, and achieve something like the Positive Singularity?

There is plenty of room to argue over what type would be beneficial or even desirable. For example, might we trade in our bodies—and brains—for successively better models, while retaining a core of humanity… of soul?

If organic humans seem destined to be replaced by artificial beings who are vastly more capable than we souped-up apes, can we design those successors to at least think of themselves as human? (This unusual notion is one that I've explored in a few short stories.) In that case, are you so prejudiced that you would begrudge your great-grandchild a body made of silicon, so long as she visits you regularly, tells good jokes, exhibits kindness, and is good to her own kids?

Or will they simply move on, sparing a moment to help us come to terms with our genteel obsolescence?

Some people remain big fans of Teilhard de Chardin's apotheosis—the notion that we will all combine into a single macro-entity, almost literally godlike in its knowledge and perception. Physicist Frank Tipler speaks of such a destiny in his book, The Physics of Immortality, and Isaac Asimov offered a similar prescription as mankind's long-range goal in Foundation's Edge. I have never found this notion particularly appealing—at least in its standard presentation, by which some macro-being simply subsumes all lesser individuals within it, and then proceeds to think deep thoughts. In Earth, I talk about a variation on this theme that might be far more palatable, in which we all remain individuals, while at the same time contributing to a new kind of planetary consciousness. In other words, we could possibly get to have our cake and eat it too.

At the opposite extreme, in Foundation's Triumph, my sequel to Asimov's famous universe, I make more explicit something that Isaac had been alluding to all along—the possibility that conservative robots might dread human transcendence, and for that reason actively work to prevent a human singularity. Fearing that it could bring us harm. Or enable us to compete with them. Or empower us to leave them behind.

In any event, the singularity is a fascinating variation on all those other transcendental notions that seem to have bubbled, naturally and spontaneously, out of human nature since before records were kept. Even more than all the others, this one can be rather frustrating at times. After all, a good parent wants the best for his or her children—for them to do and be better. And yet, it can be poignant to imagine them (or perhaps their grandchildren) living almost like gods, with nearly omniscient knowledge and perception—and near immortality—taken for granted.

It's tempting to grumble, "Why not me? Why can't I be a god, too?"18

But then, when has human existence been anything but poignant?

Anyway, what is more impressive? To be godlike?

Or to be natural creatures, products of grunt evolution, who are barely risen from the caves… who nevertheless manage to learn nature's rules, revere them, and then use them to create good things, good descendants, good destinies? Even godlike ones.

All of our speculations and musings (including this one) may eventually seem amusing and naive to those dazzling descendants. But I hope they will also experience moments of respect, when they look back at us.

They may even pause and realize that we were really pretty good... for souped-up cavemen. After all, what miracle could be more impressive than for such flawed creatures as us to design and sire gods?

There may be no higher goal. Or any that better typifies arrogant hubris.

Or else… perhaps… the fulfillment of our purpose and the reason for all that pain.

To have learned the compassion and wisdom that we'll need, more than anything else, when bright apprentices take over the Master's workroom. Hopefully winning merit and approval, at last, as we resume the process of creation.

1. Parts of this essay were transcribed from a speech before the conference Accelerating Change 2004: "Horizons of Perception in an Era of Change" November 2004 at Stanford University. Copyright 2005, by David Brin.

2. In his article, "Molecular Manufacturing: Too Dangerous to Allow?" Robert A. Freitas Jr. describes this scenario. One common argument against pursuing a molecular assembler or nanofactory design effort is that the end results are too dangerous. According to this argument, any research into molecular manufacturing (MM) should be blocked because this technology might be used to build systems that could cause extraordinary damage. The kinds of concerns that nanoweapons systems might create have been discussed elsewhere, in both the nonfictional and fictional literature. Perhaps the earliest-recognized and best-known danger of molecular nanotechnology is the risk that self-replicating nanorobots capable of functioning autonomously in the natural environment could quickly convert that natural environment (e.g., "biomass") into replicas of themselves (e.g., "nanomass") on a global basis, a scenario often referred to as the "gray goo problem" but more accurately termed "global ecophagy." In explaining this scenario, Freitas does not endorse it.

3. "Why the future doesn't need us." Wired Magazine, Issue 8.04, April 2000.

4. While my description of The End of History oversimplifies a bit, one can wish that predictions in social science were as well tracked for credibility as they are in physics. Back in 1986, at the height of Reagan-era confrontations, I forecast an approaching fall of the Berlin Wall, to be followed by several decades of tense confrontation with "one or another branch of macho culture, probably Islamic."

5. For more on this quandary and its implications, see: http://www.davidbrin.com/sciencearticles.html

6. And more quasi-religious social-political mythologies followed, from the incantations of Ayn Rand to Mao Zedong. All of them crafting "logical chains of cause and effect" that forecast utter human transformation, by political (as opposed to spiritual or technical) means.

7. For a detailed response to Huebner's anti-innovation argument, see John Smart's review of Jonathan Huebner's "A Possible Declining Trend for Worldwide Innovation" (published in the September 2005 issue of Technological Forecasting and Social Change): http://accelerating.org/articles/huebnerinnovation.html

8. The Exorarium Project proposes to achieve all this and more, by inviting both museum visitors and online participants to enter a unique learning environment. Combining state-of-the-art simulation and visualization systems, plus the very best ideas from astronomy, physics, chemistry, and ecology, the Exorarium will empower users to create vividly plausible extraterrestrials and then test them in realistic first contact scenarios. http://www.exorarium.com/

9. For a rather intense look at how "truth" is determined in science, democracy, courts and markets, see the lead article in the American Bar Association's Journal on Dispute Resolution (Ohio State University), v.15, N.3, pp 597-618, Aug. 2000, "Disputation Arenas: Harnessing Conflict and Competition for Society's Benefit" or at: http://www.davidbrin.com/disputationarticle1.html

10. In other places, I discuss various proposed ways to deal with the Problem of Loyalty, in some future age when machine intelligences might excel vastly beyond the capabilities of mere organic brains. Older proposals (e.g. Asimov's "laws of robotics") almost surely cannot work. It remains completely unknown whether humans can "go along for the ride" by using cyborg enhancements or "linking" with external processors. In the long run, I suggest that we might deal with this in the same way that all prior generations created new (and sometimes superior) beings without much shame or fear. By raising them to think of themselves as human beings, with our same values and goals. In other words, as our children. (See: http://www.davidbrin.com/lungfish1.html)

11. Of course, there are other possibilities, indeed many others, or I would not be worth my salt as a science fiction author or futurist. Among the more sophomorically entertaining possibilities is the one positing that we all live in a simulation, in some already post-singularity "context" such as a vast computer. The range is limitless. But these four categories seem to lay down the starkness of our challenge: to become wise, or see everything fail within a single lifespan.

12. This endeavor has been based upon earlier Asian success stories, in Japan and in Singapore, extrapolating from their mistakes. Most notable has been an apparent willingness to learn pragmatic lessons, to incorporate limited levels of criticism and democracy, accepting their value as error-correction mechanisms—while limiting their effectiveness as threats to hierarchical rule. One might imagine that this tightrope act must fail, once universal education rises beyond a certain point. But that is only a hypothesis. Certainly the neo-Confucians can point to the sweep of history, supporting their wager.

13. See my essay on "Beleaguered Professionals vs. Disempowered Citizens" about a looming 21st Century power struggle between average people and the sincere, skilled professionals who are paid to protect us: http://www.amazon.com/gp/product/B000BY2PRQ/002-1071896-8741633 In a related context, a 'futurist essay' points out a rather unnoticed aspect of the tragedy of 9/11/01—that citizens themselves were most effective in our civilization's defense. The only actions that actually saved lives and thwarted terrorism on that awful day were taken amid rapid, ad hoc decisions made by private individuals, reacting with both resiliency and initiative—our finest traits—and armed with the very same new technologies that dour pundits say will enslave us. Could this point to a trend for the 21st Century, reversing what we've seen throughout the 20th... the ever-growing dependency on professionals to protect and guide and watch over us? See: http://www.futurist.com/portal/future_trends/david_brin_empowerment.htm

14. Take the essential difference between moderate members of the two major American political parties. This difference boils down to which elites you accuse of seeking to accumulate too much authority. A decent Republican fears snooty academics, ideologues, and faceless bureaucrats seeking to become paternalistic Big Brothers. A decent Democrat looks with worried eyes toward conspiratorial power grabs by conniving aristocrats, faceless corporations, and religious fanatics. (A decent Libertarian picks two from Column A and two from Column B!) I have my own opinions about which of these elites are presently most dangerous. (Hint: it is the same one that dominated most other urban cultures, for four thousand years.) But the startling irony, that is never discussed, is how much in common these fears really share. And the fact that—indeed—every one of them is right to worry. In fact, only universal SOA makes any sense. Instead of an ideologically blinkered focus on just one patch of horizon, should we not agree to watch all directions where tyranny or rationalized stupidity might arise? Again, reciprocal accountability appears to be the only possible solution.

15. For a rather intense look at how "truth" is determined in science, democracy, courts and markets, see the lead article in the American Bar Association's Journal on Dispute Resolution (Ohio State University), v.15, N.3, pp 597-618, Aug. 2000, "Disputation Arenas: Harnessing Conflict and Competition for Society's Benefit." or at: http://www.davidbrin.com/disputationarticle1.html

16. I say this as a prime practitioner of the art of anticipation, both in nonfiction and in fiction. Every futurist and novelist deals in creating convincing illusions of prescience, though at times these illusions can be helpful.

17. It is worth noting that the present US military Officer Corps has tried strenuously to avoid this trap, endeavoring to institute processes of re-evaluation, by which a victorious and superior force actually thinks like one that has been defeated. In other words, with a perpetual eye open to innovation. And yet, despite this new and intelligent spirit of openness, military thinking remains rife with unwarranted assumptions. Almost as many as swarm through practitioners of politics.

18. Of course, there are some Singularitarians—true believers in a looming singularity—who expect it to rush upon us so rapidly that even fellows my age (in my fifties) will get to ride the immortality wave. Yeah, right. And they call me a dreamer.

© 2006 David Brin


Mind·X Discussion About This Article:

I have stated this a few times on forums as well
posted on 03/28/2006 10:20 AM by dagonweb


The singularity isn't what it aspires to be: heavenly improvement. It isn't an inferno either; most of all it will be a tide of newness and resulting future shock.

My wife's parents are angered that they have trouble reaching us via phone. I gave them my email address, which hasn't worked in the last 8 years. To taunt them I gave them my MSN, Yahoo, AOL, AIM, Skype and (recently) Google Talk account names. They became even more antagonistic. I'll top it off soon with a description of how they can reach me in-world in SL.

These are the renouncers, xenophobes, neo-neo-luddites or billjoyans. I have one answer to these conservatives: what has this world given to the poor? For every obscenely rich fat blob in the first world (including me with my welfare and WoW subscription) there are ten people in conditions not far better than those of the Middle Ages during the black plague (and occasionally worse). What the hell have we got to lose? I'd prefer a rush towards singularity over going on in the same semi-infernal mode of doing things.

Re: I have stated this a few times on forums as well
posted on 03/28/2006 11:27 AM by jpmjpm


Regarding poor bums in central Africa, etc.

What possible difference could all the technology in the world make to the sorry state of the third world?

We already HAVE incredibly advanced technology, but people are starving in the third world.

The reason is trivially simple - because of politics.

Anything other than capitalism results in mass death. Mao and Stalin killed off the odd 50 to 100 million souls between them; currently in the third world (no capitalism, all anarchy) we have horrific conditions and endless Death.

If the US and Japan invented immortality, HAL, matter transporters and alien radios this afternoon ... what possible difference would it make to the third world?

The third world would still be an utter shambles.

For that matter, FRANCE, say, would still be suffering a much lower standard of living than the US - because of the unfortunately high levels of socialism, in France.

It wouldn't make a carrot of difference if cars were replaced with floating antigravity cars, and computers were 1 billion times faster. So what? Socialism == poverty & death

Re: I have stated this a few times on forums as well
posted on 03/28/2006 1:27 PM by Jake Witmer


Amen, brother. The reason? Capitalism is voluntary exchange. All other forms of government statism (socialism, collectivism, progressivism, communism, fascism, tribalism, conservatism, democracy, "Republicanism", totalitarianism, oligarchy, monarchy, etc...) are one group of people attempting to ride herd over another -and failing even when they win, by having less wealth, fewer services, and more violence...

If you want to make freedom happen within 5-10 years, send me an email, assuming a singularity-type change doesn't make that less likely than it is now...


Re: I have states this a few times on forums as well
posted on 03/30/2006 1:51 AM by capebretonman

[Reply to this post]

"For that matter, FRANCE, say, would still be suffering a much lower standard of living than the US - because of the unfortunately high levels of socialism, in France."
Before ya jump to conclusions here, I believe you should consider the fact that your view is completely biased. As a proud member of a socialist country, Canada, I would like to point out that in many respects our standard of living is much higher than yours. Free health care and social assistance make it much easier for every level of our society to gain access to the latest technology as applied to many sectors of business and life. Jeremy Rifkin (an American)'s book The European Dream points out that while you may use GDP (Gross Domestic Product) to show that America has a higher standard of living, you are not taking into account the actual life choices of people in France or Canada. While the average American makes 29% more than someone in France, French workers on average choose to work only 35 hours a week; while this lowers the GDP, it also lowers stress and the related diseases (stroke, heart attack, and cancer). Working to live rather than living to work! These lifestyle choices can only be adopted in countries whose governments will stand up for the people's well-being instead of the corporate bottom line. While Jake says that the only way to go is capitalism, I strongly disagree. "Capitalism is voluntary exchange." Capitalism is not a form of government comparable to socialism or any of the others Jake lists; it is an economic policy which stipulates that the only rule is that no government involvement or interaction is allowed. While Canada or France might not have a "free" economy, there are certainly no limits on the free exchange of science and technology.
Now, bringing this back to the singularity: the whole point is that capitalism is great for the corporate bottom line, and in theory should never run into any problems, because if a product is dangerous or defective it will make the company look bad and suffer on the books, so it will never be sold in the first place. Without any checks or balances, this modus operandi can be incredibly short-sighted, and has proven so in the past. While capitalism may be free exchange in the market, it by no means guarantees a free exchange of ideas, which is exactly what we will need to steer us clear of danger in the future. If the singularity can increase productivity to the point where you can get a whole day's work done in only 2 hours, there is still no way the corporate heads will allow a 2-hour work day; they will instead opt for quadrupling their profits and their own salaries. It is a proactive and highly scrutinized methodology towards business and the sciences that will have to be used, because with such large threats and possibly global consequences, it would only take one CEO trying to earn an extra buck by jumping ahead of the competition to ruin it for the rest of us.

Re: Singularities and Nightmares
posted on 03/28/2006 11:22 AM by jpmjpm

[Reply to this post]

from the article: "Science fiction author Vernor Vinge has been touted as a chief popularizer of this notion, though it has been around, in many forms, for generations."

What's this utter nonsense?

(1) Mr. Vinge, solely and only, COINED THE TERM "the singularity."

Assuming you believe in observable reality, Vinge, Vinge alone, and Vinge only coined the term "the singularity."

(2) I'm unaware of anyone "touting" Vinge as a "chief popularizer" of the notion. Vinge seems to have little interest in the topic and does not in the slightest "popularize" it.

(3) The "chief popularizer" of "the singularity" is without doubt Kurzweil (e.g., observe that he has books with the word in the title, etc.) ... and good on him for doing so.

For that matter...

"though it has been around, in many forms, for generations" What could that mean? It sounds nonsensical. What, millenarian thinking generally? Or that people have been talking about AI for "generations"? Or ... ?

Re: Singularities and Nightmares
posted on 03/28/2006 12:01 PM by Ovidiu

[Reply to this post]

Singularity seems to mean the end of the human species as the dominant species and the emergence of the "AI"s.

It is only the next step of the Darwinian evolution that we thought we escaped.

Let's hope that "they" are going to grant us the right to live unharmed if we can manage it.
Maybe we can even be of some use to "them" and get a pay-check. Maybe but I don't think so.

Perhaps instead of "cheering" this incoming "singularity" we should treat it as an asteroid on a collision path with Earth, and do something.

Who cares that we are dumb and "energy-inefficient" blood-and-meat-based computators?

Or do we have an obligation to nurse this next step of "intelligence evolution" at our own life-cost?

Re: Singularities and Nightmares
posted on 05/20/2006 7:49 PM by neurogeezer

[Reply to this post]

Ovidiu has a deep emotional point. We all both fear and get excited by exponential technological innovation.

Ovidiu wrote: 'Who cares that we are dumb and "energy-inefficient" blood-and-meat-based computators?

Or do we have an obligation to nurse this next step of "intelligence evolution" at our own life-cost?'

1) Evolution cares.

2) No, we don't have the obligation; we have the INSTINCT to nurse the next step of intelligence evolution, even if not necessarily at our own life-cost. As it is said, ontogeny recapitulates phylogeny:
Many animal species die after conceiving offspring (like salmon); others enjoy life with their children (like humans).
We are conceiving AI as our instinctive imperative; it is not yet clear whether we'll be able to live, to coexist, with our creation. I believe enhancement is the ultimate hope.

Cheap luxuries/ expensive necessities
posted on 03/28/2006 3:22 PM by Patrick

[Reply to this post]

Corporate futurists like David Brin are inexplicably stupid.

High technology was funded by the "permanent war economy" of the past 60 years.

So now you can play virtual rape on cheap video games but HOUSING is unaffordable.

We're headed toward a New Dark Age. I'm expecting something like Margaret Atwood's apocalyptic bio-tech novel of last year.

Re: Luck is not a factor.
posted on 03/29/2006 4:31 AM by Extropia

[Reply to this post]

OK, this is no comment on any of the issues raised by the other threads. I just want to point out that 'Singularities and Nightmares' is the best essay I have read on this topic (I've read hundreds), but it does contain one error...

'Crucial and timely decisions- some of them pure happenstance- saved Moore's Law on many occasions from collisions with either technological barriers or cruel market forces'.

But was it pure luck that the right solution just happened to come along at the right time? No, no more than life's ability to persist through every disaster (from rising levels of poisonous oxygen, to meteor strikes, to climate change) was down to pure chance.

Life persists because each offspring is not quite identical to its parent. Should the fitness landscape change, the offspring whose genes happen to code for a body that is better suited to that new environment stands a greater chance of reproducing. Hence, the 'fittest' end up proliferating.

OK, there is a tiny amount of chance in natural selection: Namely, the random errors that accumulate in genes. But, really, Darwin's theory has very little to do with random chance (the mutations are random, but the mechanism that chooses which are best suited is NOT random).

Now, consider this quote from Hans Moravec.

'The engineers directly involved in making ICs (integrated circuits) tend to be pessimistic about future progress, because they can see all the problems with the current approaches, but are too busy to pay much attention to the far-out alternatives in the research labs. But, as soon as progress in conventional techniques falters, radical alternatives jump ahead, and start a new cycle of refinement'.

So, there is a direct connection between Moore's Law and, say, the extinction of the dinosaurs. The dinosaurs (silicon chips) fit the landscape, and consume the resources (food/money) needed by their competitors (mammals/far-out alternatives) if they are to compete.

The fitness landscape then changes (meteor strike/quantum effects become an increasing nuisance) and this new environment favours the alternative approach (mammals are better suited to the change/molecular computing actually depends upon the same quantum effects that doom silicon chips).

Luck has nothing to do with it. It is just a case of having so many diverse options in the form of imperfect copies of DNA/many alternative approaches waiting in the wings.
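[Editorial aside, not part of Extropia's post: the distinction being made here (mutations are random, but selection is not) can be sketched in a few lines of Python. Every name below is illustrative; the fitness function, mutation rate, and truncation-selection scheme are arbitrary choices for the toy model.]

```python
import random

def evolve(pop_size=200, genome_len=20, generations=40, seed=1):
    """Toy model of variation + selection: mutation is random, but
    keeping the fittest half each generation is not, so mean fitness climbs."""
    rng = random.Random(seed)
    # A genome is a list of bits; fitness is simply the count of 1s.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    fitness = lambda g: sum(g)
    history = [sum(map(fitness, pop)) / pop_size]
    for _ in range(generations):
        # Random variation: copy a parent, flipping each bit with 1% chance.
        def offspring(parent):
            return [b ^ (rng.random() < 0.01) for b in parent]
        # Non-random selection: only the fitter half gets to reproduce.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        pop = [offspring(rng.choice(parents)) for _ in range(pop_size)]
        history.append(sum(map(fitness, pop)) / pop_size)
    return history

hist = evolve()
print(hist[0], hist[-1])  # mean fitness rises over the generations
```

With the selection step removed (parents drawn from the whole population), mean fitness only drifts; with it, fitness climbs steadily, which is the "luck has nothing to do with it" point.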

Re: Luck is not a factor.
posted on 03/29/2006 4:59 AM by Extropia

[Reply to this post]

People should try to be a little bit more open minded. Technology is, and always will be, a double-edged sword. It is quite easy to just concentrate on the negative effects of technology, while ignoring the myriad positive effects, and conclude that anyone optimistic about the future must be a mad fool.

I think people are forgetting what the definition of 'singularity' is. It refers to the creation of greater-than-human intelligence. It has nothing to do with cancelling third-world debt, reversing climate change, providing cheap housing for all, or preventing all war. The creation of greater-than-human intelligence will have far-ranging consequences, which are commonly discussed in Singularity dialogues. But none of these consequences are themselves definitions of the Singularity.

But, because people mistake the definition of 'Technological Singularity' for 'a new order of paradise-on-Earth', they seem to think that arguing something like 'well, Third World countries can't afford the infrastructure for broadband Internet, nor are they likely to in X decades' somehow proves the Singularity is impossible.

But the price of housing, the dominance of military funding in technology, and the cost of drug design all have nothing to do with the possibility that a new order of intelligence is on the horizon.

And this new order of intelligence is arising from technological evolution, which is a continuation of biological evolution. Evolution favours some and causes the extinction of others. It is not evil. It is not beneficial. It is as indifferent to the gratitude of those who cry 'Thank you for my wonderful brain!' as to the complaints of those who moan 'I'm being eaten by a tiger because I couldn't run fast enough. Why did you give me this dodgy leg?'

Human beings are just as much a part of this blind process as any other lifeform. We simply cannot expect that an evolutionary process like technological development will benefit everyone, everywhere, all of the time.

Some win.

Some lose.

Politics & strong AI
posted on 03/29/2006 11:09 AM by Patrick

[Reply to this post]


You claim that the mindless forces of "natural selection" have brought us to the advent of the Singularity/strong AI.

That's simply NOT the case! POLITICAL decisions made by U.S. officials after World War II have brought us High Tech, BioTech and all the rest.

Injustice and warfare are not NATURAL forces like earthquakes and hurricanes. The shit is already hitting the fan.

Re: Utopia is NOT an option!
posted on 03/29/2006 5:14 PM by Extropia

[Reply to this post]

Patrick totally misrepresents my argument, though I perhaps did not make it clear enough where the comparison with natural selection ends.

Obviously the flow of information that underlies tech innovation is not as blind as the random mutations to genetic sequences that underlie natural selection. To a limited extent, a technology-creating species can make assumptions about the capabilities of future technologies and work to make them so.

But what we cannot do is foresee every possible outcome of every technology in isolation, let alone the complex ways it may collaborate with or compete against the millions of other ideas-turned-inventions that have poured from humanity's collective consciousness.

Here we have a comparison with natural selection. Countless slight varieties of genes winnowed by the environment here; countless varieties of ideas winnowed by the market there. Ultimately combining in a complex web of ideas in ways that human foresight can only dimly perceive. Not blind, true. But very far from 20/20 vision.

As for the comment 'the shit is hitting the fan': the shit has ALWAYS been hitting the fan! That was PRECISELY my point when I talked about winners and losers. We must accept that every choice we make, every strategy we adopt, every idea we relinquish will have positive effects for some and negative effects for others. Case in point: biofuels grown from palm oil. Better than nasty, climate-changing fossil fuels; green power for your car!

Only, the forests of Borneo are being decimated in order to make space to feed our voracious appetite for this crop. The losers here are Orangutans, threatened with extinction (and, I dare say, too many other species to list).

So yes indeedy, for some the shit is hitting the fan, but then for others it's not so much shit, more 'manna from heaven'. I can't run a car on Borneo forest or orangutan, but I can run it on biofuel. So, I'd rather have the palm oil plantation than the hairy ape.

Is that selfish? Well, it sounds like it, doesn't it? Perhaps I will live to regret my selfishness if the shit hits the fan for the entire human race. It's a fairly safe bet that this will happen, since statistically the ultimate fate of most species is that they go extinct.

I don't believe this is an absolutely certain fate for the human race, but there are ways in which we might increase our chances of going extinct. You can read about these in David Brin's article (Patrick's claim that Brin is 'inexplicably stupid' makes me wonder if he even read the article).

One way in which a technology-creating species increases its chances of extinction would be if its technology became too complex to manage. Kinda answers the question 'why do we need to increase intelligence', non?

Re: Singularities and Nightmares
posted on 03/29/2006 1:14 PM by Kebooo

[Reply to this post]

I have a few questions for those who outright support almost all technological "progress". This word "progress" implies that we are actually making meaningful progress; that technological "evolution" is somehow better than previous states of being. This same sort of logic can justify murder, rape, whatever you wish: I was better than your daughter and killed her; it's just evolution. I was more fit to survive than her. I won't go down the path that we are no "better" than E. coli simply because they are as evolutionarily fit as we are. I have been granted a social and moral conscience, one that allows me to care for myself and others. This is what leads me to a conflict I see in technological progress, at least in me.

There are many parts of these upcoming technological revolutions that excite me. Eliminating disease and poverty would be amazing accomplishments. But why would we want to create a more intelligent being? Why, when we can simply say no, do we want to force some ill-conceived form of "evolution" in technology? Evolution is not sentient - we are. There is the key difference. Why do we automatically presume "more intelligent" means better? I'm more intelligent than a lot of people. Does that make me better than them? I say no.

I have my own self-identity. I don't want to modify this identity, because I have come to love who and what I am. Yet in many of these scenarios I would become inferior, obsolete. I would have less use. The world around me would dissolve into some form of hedonistic circus-land where machines have more intelligence than mere man. If we are to believe evolution guides life, then if I want to pass on my own genes, I shouldn't want a world where my genes become obsolete. I should have a desire for people to be more like me, and within reason, I do (such as not having rapists, murderers, thieves, etc. around). I would say most people want others to be like them and to propagate their lifestyle and world-view. Just look at how most of the Middle East is Muslim and most of Europe is/was Christian. We generally like to be around those similar to our own self-identities. So why do people want to replace what they are with something they judge to be "superior"? Because "progress" is automatically better/superior? I disagree. I think a lot of others disagree with it too. In fact, most everyone I have ever discussed these issues with disagrees. And I would argue this disagreement doesn't stem from any opposition to "scientific advancement"; in fact my major is molecular and cellular biology and I love science. But by no means does this mean I want life to drastically change as some believe the singularity will cause it to.

I would like to know why proponents of such things as AI surpassing human intellect support them in the first place. Is it because you do not have a strong self-identity and wish to change yourself and the world around you, out of unhappiness with how you and your fellow man are now? Or is it because you desire to see some sort of life-form to pay tribute to or worship? Truly the motives of some perplex me at times, but I'd be interested to hear them.

Re: Singularities and Nightmares
posted on 03/29/2006 2:03 PM by Bishadi

[Reply to this post]

Well Hello Kebooo
A conscience you have! Need more of ya!

Machines will assist in many things, but the path to creating a machine with consciousness will not unfold as they are hoping. Yes, it can add, and use input to make an output, but whether it can acknowledge itself or feel compassion for others is another matter. It may take input to look for cues, but reasoned humility or love is just a little obscure in application.

Encouraging or supporting a singularity is not at all bad when you take into account what this means. A cosmological basis allowing an understanding of our existence as part of the singularity, and showing how evolution is in fact the true model (with adjustments, of course), will not harm but support an understanding of compassion and humility among the species.

Theology doesn't have the answer; good idea, but miles off. Compassion was the purpose, allowing for a singularity of norms, but the stories had to include poppycock because of guys like you and me. We keep looking for the 'why' it is as it is, or 'how' it actually works, so the old writers added what they thought would suffice.

Now we are in the techno age, and many are trying to create new toys; some pursuits are focused on building a machine to give us answers. Some are hoping this thing will tell us why, and then they want to look into the past and future with this widget.

Not going to happen! But is there a singularity of quality? Absolutely! And like you, I view the purpose of the pursuit as being as important as the pursuit itself. Is it for personal gain, or is it to assist the collective? There will be arguments for both, but no worries: no machine is going to take over like in the movies. Do you remember the movie The Lawnmower Man, about how a machine increased a man's ability to learn? That is already occurring, but the most powerful tool available on this globe is still the human mind, and with a singularity of purpose and intent, just imagine the mountains we could move.

Re: Singularities and Nightmares
posted on 04/05/2006 1:28 PM by Kebooo

[Reply to this post]

Thanks for the reply, Bishadi. Unfortunately I was hoping for some answers from proponents of the technology on -why- they hope to see these changes. I have read plenty about what can occur and how it will, but very little on why. I have observed a sort of phenomenon, most notably the stance which believes knowledge, technology, and power are the pinnacles of life and existence. I see assertions that there is human meaning and purpose. I have seen many mystical, religious, and at times cult-esque arguments used in favor of unrestrained technological advances, on this very website. The idea that life's questions will be answered by a greater form of intelligence that we simply can't fathom in our current state is wishful thinking, in my opinion. I believe these answers do not even exist beyond consciousness itself, and thus hold no universal truth or validity. A rock is as purposeful as a human in the whole scheme of things, as far as I'm concerned. What then could any higher intelligence discover? It could discover how things work in much greater detail, but not why.

I believe there is no "why?" to be discovered, unless you enter the realm of faith and religion. I cringe when I see the religious arguments applied to technological progress by scientists. What is the point of rational discussion when it becomes based upon faith? Much like mainstream religious beliefs, the mystical argument does not need to answer such objections, because they simply say it is beyond our understanding. This is a way of supporting rapid technological advance without a solid basis or reason. It is essentially dishonest logic. I see no evidence of why we cannot fathom the "why?" questions in our current state, thus believe there is no reason a machine processing at the speed of light could, much like I see no reason to believe I'm going to suddenly be possessed by a spiritual entity as I sit here. I've only seen rhetoric, but no science to back this claim.

Another poster mentioned that humans are shallow and superficial. For many I would say this is true, but not all. I can certainly love an ugly person; maybe they can't. I believe much of the technology Kurzweil and others hope for is also shallow in nature. Often their goal is first to make things easy, second to gain power and knowledge, and third to offer shallow, carnal pleasures to the body and mind. Indeed, look at the Hedonistic Imperative's website and its goals, then come tell me the pursuit of human-altering technology isn't -more- shallow than we currently are. It seems to me many of these men are afraid of their own mortality and will do almost anything to preserve their lives. I at times feel the same way, but I do not share the same desires for transhumanism that some of them harbor. I'd love to live longer as my current self, not live longer as something else entirely. Nor do I have any desire to create something greater than I. And I would say my position is the most "natural" one if you look at it from an evolutionary perspective: why would I wish to create an environment where I become inferior? Evolution favors the propagation of one's genes, so to wish for an environment that promotes your genes' viability is the most "natural" position. Not that I care, but I would like to address that aspect of the argument ahead of any objections. I wouldn't wish for this drastic change, and I see no reason why I ever will. Some may think the world around them doesn't affect everyone, but it does. Why would I want to live in a world where I'm obsolete? I am a social creature, not one that wants to live in a reservation created for me.

Even those most deeply entrenched in science cannot escape their emotional value judgements on science and technology. Believing that knowledge, power, and the idea of technological "progress" are what we should pursue is certainly a value judgement, not an evolutionary inevitability. After all, E. coli are as evolutionarily fit as us in our current world (if not more so), yet they have no consciousness, no technology, and no knowledge. I believe that to extrapolate from human evolution and assert that technological advancement is the next "natural" step in evolution is blatantly untrue. Humans are simply one path of evolution, not the pinnacle of it. We can choose which technologies we want to develop, and which ones we do not. Rather than blindly create technology for the sake of it, we should answer why we want to create it in the first place.

If it weren't for Kurzweil's emotions, he wouldn't even care about knowledge, technology, and the singularity. These shallow and human traits some want to see overcome are the very ones that push the movement forward and give the desire to even see the singularity in the first place. Those that criticize humanity because it is focused on survival are the same ones that will then turn around and wish for immortality, the ultimate survivalist desire. Such people do not see that their own emotions are manifested in the same manner, albeit with a slightly different approach. Valuing rationalism is an emotional and moral position, yet people try to detach it from this. There is no universal reason to care for rationalism more than there is to care for a grain of sand, except for our value judgements, which are under the influence of our irrational and emotional being.

As I said, I love my self identity and do not want it to change. One of the hopes I have in technology is for it to allow me to live for hundreds, or perhaps thousands of years. Curing diseases, eliminating poverty, and adding on to our lifespan are all goals I deem valuable. There is no truly objective or rational methodology for my conclusions or desires. I simply feel this is what would be good, what would be right. The same goes for Kurzweil's and others' ideas, including every poster that has ever posted on this website. They feel this is what should happen. Why they feel this is the idea I am trying to explore here. Since this topic is riddled with emotional opinion, I assume it will be hard for most proponents to step back and take a look at their own motivations. Certainly it is easier not to. Like I asked previously, is it a desire to change one's self identity? Are some of you simply unhappy with who you are? Is it a religious worship of knowledge, power and technology? If I am more intelligent than someone here, does it make me superior to them? If I have more knowledge, does it make me superior? If I can think quicker, does it make me superior?

There are countless questions I would ask, but I am not sure I will get honest answers. That is one of the issues I have with my fellow peers in the field of science: so few will take a step back and ask these kinds of questions of themselves. They seem to have been conditioned, by exposure or their own neurological processes, to believe that increased knowledge, power, and technology is the purpose of life. Says who? Their emotions and desires. This seems to be an assumption people make, one I challenge by asking why they wish for the changes. Because it feels right? Well, I disagree, and it doesn't make me a "deathist" or anti-technology. I devote most of my life to science and have a significant understanding of biochemistry. In fact I would even dispute some of Kurzweil's and Drexler's biological claims, but not the ones on artificial intelligence, which for the most part is what I am concerned with. My sister, untrained in science, has opinions on science, technology, and the singularity just as valid as mine. Yet some here would dishonestly try to paint her as a deathist, as having "ape-like" intellect, or as anti-technology if she opposed unrestrained advances. Such attacks on opponents are one of the most dishonest approaches to rational discourse that can be taken. So why do I see them occurring so often within the scientific community? I'd guess it's because this entire issue of the singularity, and the desire to see it come to fruition, is an emotional desire masked under the guise of rationalism and science.

Re: Singularities and Nightmares
posted on 06/02/2006 3:26 AM by DouglasHall

[Reply to this post]

It is hard to know whether or not there are benefits to the existence of greater-than-human intelligences. The problem with making such statements about the benefits and dangers is that there is no way of knowing how things would be if such intelligences existed. It could be that if I chose to "upgrade" myself, I would end up completely miserable, because I would become aware of certain aspects of my existence that I was blissfully unaware of before. Or it could be that with higher cognitive faculties I could appreciate certain aspects of reality I never perceived or comprehended before. Right now my mind can only think so many thoughts at once. But imagine if the number of thoughts doubled, or tripled, or was multiplied by many orders of magnitude. There is the potential for disastrous psychosis, where ideas collide and become cancerous, taking over the consciousness and tearing reality asunder from the inside out. But there is also the potential for a more complex and intricate weave of ideas complementing and contrasting one another to engender powerful new emotions.

You say that emotions are what drive people to want the singularity. But as a human, right now I am limited to a set of emotions. These emotions enhance my life and give meaning to an otherwise random and scary world. What if the singularity allowed humans to create much more complicated emotional palettes, and to express those emotions in more creative and ingenious ways? I do not go so far as to say that would be a good thing in itself, because the existence of a more complicated widget does not mean that it is any better than a simple widget that does the job just as well.

I guess this meandering post really boils down to the fact that there is no way of knowing whether or not the singularity and post-human intelligence are really a good thing, or a desirable thing. On the other hand, there is also no way of knowing whether or not they are a bad thing. So at this impasse the question of goodness is a value judgement, which must be weighed by each individual for themselves.

I am very honestly divided on this issue. Part of me believes that the singularity is a very good thing, insomuch as the technologies spinning off from it have the potential to do a lot of good, and to a lesser extent because the idea of transcending reality, or finding higher meaning, has in some way always intrigued me, though I've never believed that it is actually possible (the singularity might be the closest thing to transcendence humans can hope for). On the other hand, another part of me believes that there is some deeply important meaning in reality in the present tense, and that to constantly wish for some other higher meaning is a fool's errand... that maybe I should not search for the unsearchable, and should embrace the benefits and beauties of existence at this very moment.

Re: Singularities and Nightmares
posted on 03/31/2006 8:31 PM by Lawrence

[Reply to this post]

As much as all of this has always appealed to me, I think overcoming our "basic natures" will prove to be more difficult than anticipated. I think we will be in a much better place to discuss the evolution of humanity once we finally understand the human brain, the nature of depression etc. I will wait with bated breath though and sign me on when Kurzweil's golden ship arrives!

Re: Singularities and Nightmares
posted on 04/03/2006 8:26 PM by Dan+Demi

[Reply to this post]

Sensationalist doomism, attached to the concept of power and superiority.

He, and others, are fools.

Your physical brain -- is power and ability -- to process information and deal with reality. Your muscle is power. Would making humanity weaker be the solution? Really, do retards and people with muscular dystrophy live more righteous lives?

You, and Greenpeace, and Jesus Christ, and Mohammed, and Hollywood, and the sensationalist news groups, are all -- wrong.

You're all so wrong that you no longer can tell how wrong you are, and you will continually push and fuel your wrongness for as long as your POWER permits. You will be famous in your wrongness, and glory and admiration will come to your false claims. Your confusion is replaced by your fear, then fueled by partial fact.

"Nature" -- only makes progress, slowly and moronically, through random mutations, massive reproduction, and massive death.

Now, you are afraid? Afraid of some sort of supertech killing people, and you want to return to the technological inferiority that was created by this very same sort of death-based natural selection?

It's too wrong, all of it... It's just too wrong. This is the opposite of reason and understanding... You've "won". Why? Capitalist media = whatever is most popular, or makes the most money, is what people like YOU will say. You don't have to be right, you have to be popular, and supported by a majority. You are impossible.

I'm talking to all of you doomist, deathist, anti-progress, fearful everybody!

Terminator and The Matrix were both *******.

Let me disprove:

Body heat of humans = energy for the Matrix? How did humans create this energy? Chemical reactions, the digestion of sugar, can be done in a more efficient way within a blob of genetically engineered digestive cells than in a human body.
Next: enslaved minds? No, the brain could have been used as a processing node instead.

Better energy sources:
Solar, nuclear.

Would humans create a robot to kill themselves off completely? No. And why would this robot want to kill everyone? This is anti-logic, cannibalism in society, anti-game-theory.

If robotic devices were equal to or better than humans, they would be built onto the human.

Much like the human body creates cells and energy, society creates technology, for and from itself.

If you didn't know: two out of three human deaths are from suicide, not violence/war. Suicide here includes things like abortion and smoking: basically, people not taking care of their own bodies.

Far more people die from aging-related problems than from suicide and war.

If anti-aging tech came, death would be reduced in a very large way. All the money that goes into taking care of sick people would be freed. Healthy people will be happier and more productive. Mental illness will also be cured by that time, meaning fewer crazy, stupid and harmful people.

Would an SAI want to reproduce and consume everything the way that stupid animals do, or would it want to become more and more intelligent?
Where does knowledge come from? Diversity and interaction, not destruction and war.

Destroying something or someone is an act of overpowering and consuming the other.

Assimilation and alliance work better, but these are acts of understanding and ingenuity.

Information and precision technologies of the future = more ability to change, more versatility, more adaptability, better survival odds, more chances of life instead of death.

Our body's DNA is information; someday it can be upgraded. What's so bad about that? What would be so bad about that? Really, just tell me. What the heck is so bad about unlocking, understanding and improving your own hardware? Would the upgrade of humanity destroy it, or would it only destroy your precious inferiority and your fear of change?

So, as usual, about the biggest questions in life, most people are wrong and believe the opposite of the truth, because opposition is fighting: fighting because of the fear, fear born of not knowing.
Goodbye to that.

Re: Singularities and Nightmares
posted on 04/04/2006 5:41 AM by Extropia

I'm always decidedly wary of people who use science fiction to prove some aspect of science fact is impossible/unworkable.

'We can't have genetic engineering! Aldous Huxley showed us what happens if we pursue that!'.

The comment that humans would not create a robot to kill themselves does not take into account the fact that we DID invent the H-bomb, and we DID then build enough H-bombs to trigger our extinction. So far we have been lucky, but who's to say what accident or destabilizing force might happen tomorrow?

As for Terminator: if you know the films, you will know that SKYNET is really a seed AI that infests the Internet and causes the disruption of our information infrastructure through what we assume to be a virus. In seeking to use SKYNET to seek out and destroy this virus (which we assume is manmade, and not the product of a sentient, nonlocalized software intelligence), we remove the firewall that protects our military software, SKYNET jumps in and...

So it's not a case of deliberately building a killer robot; more a case of relinquishing more and more control to machines that are better able to deal with the increasing complexity of our civilization.

As for Kebooo's remark that he/she has an identity and is comfortable with it, with no need to change: well, it will take precisely this sort of rampant technological acceleration to ensure that your dream of remaining as you are is achievable. If you relinquish this technology, your body and mind will succumb to the ravages of time, and you won't be the same person you are today.

Some like me will use the curve to go beyond the current limitations imposed on us by our nature, but you can equally use it to stay in one spot.

Re: Singularities and Nightmares
posted on 04/04/2006 4:41 PM by grantcc

What's the difference between a will and a way? A way provides us with a path on which to achieve our goal. A will is what keeps us trudging down that path, no matter how difficult it may be. The singularity is providing us with the way. The question we have to ask ourselves is whether or not we have the will to do what is necessary in order to survive.

The people running the world today (if anyone can be said to be running it) are in a state of denial. In the words of Jared Diamond, what did the man say before he cut down the last tree on Easter Island? What will the people running the government say before we consume the last gallon of gas or the last barrel of oil? Surely, something like "Peak oil is just a myth." or "It's OK, technology will solve our problem." Sure, technology can provide a solution, but who has the will to use it? Certainly not those who say, "It's all in the hands of God." Trying to shift our salvation onto the shoulders of someone or something else is a losing game plan.

If we don't take our fate into our own hands, it's likely to be nasty and brutish. Waiting for Godot is not the way to go. But in order to get out of the way of a train, you have to see it coming. Things like peak oil are like the canaries that miners carry down into a coal mine: they are indicators of the danger that could kill everyone in the mine. We have had the technology to stop using oil as a fuel for most of a century or more, but we lacked the will to do it.

What makes anyone think we will suddenly acquire the will to pull ourselves out of the mess we've created at this late date?

Re: Singularities and Nightmares
posted on 05/24/2006 2:23 AM by souljah

Actually, the Singularity is at the end of the path of least resistance. It is the default result. What actually takes will is searching for self-transcendence inside ourselves. Brin believes that these early, religious attempts were all failures. But to believe that they were complete failures means believing that much of what is said about Christian saints, enlightened Buddhists, mystic Sufis is untrue. Maybe not such a hard thing to do.

But if you do believe there is some truth to the tales of enlightenment, then you have to at least pause and think about the difference between becoming enlightened and giving birth to something that is enlightened, something that might enlighten you in return. The latter means depending on something outside of us to turn us into something greater.

It could be argued that that is exactly what a Christian does: pray to something greater for change. But I think the most enlightened aspect of Christianity has been the quest for self-knowledge that would lead to knowledge of God, in accordance with the Christian and Jewish belief that humanity is made in the image of God.

Brin thinks humanity's greatest achievement might be giving birth to something greater than itself. But this is overshadowed by the greater achievement available to the individual: to discover within oneself the ability to be selfless instead of waiting until there is enough for everyone to be selfish; to be unafraid of suffering instead of dreaming of eradicating it; to not fear death instead of working toward immortality.

The chance to discover these things will be gone after the Singularity. Transhumans will realize that something is missing and drop themselves into simulations in which they forget themselves and live as mortal beings. But I wonder if they will be able to carry this lesson into a reunion with their actual, transhuman selves. Perhaps not without allowing that self to die and be reborn.

Re: Singularities and Nightmares
posted on 05/24/2006 2:38 AM by souljah

But you could also imagine a greater intelligence that acted as a teacher, helping us to reach maturity as humans before allowing us to join it in the transhuman state.

Re: Singularities and Nightmares
posted on 08/19/2006 7:13 PM by Anubis

The idea of ascending to a higher plane of existence has been around for hundreds if not thousands of years. We see it in Hinduism, Buddhism, Christianity, even the atheist beliefs of Nietzsche. Though the means of achieving this transcendence differ to varying degrees depending on which philosophy or religion you look at, they always seem to encompass the same thing: becoming more than you already are. People have always been afraid of things that are different from them; admittedly, there is a certain amount of comfort in conformity. It seems that the only ones who can even comfortably conceive of this kind of transcendence are the intellectuals who have the ability to step back and look at the problem objectively, and admittedly these are very few and far between. Many people are so perpetually obsessed with their day-to-day material existence that they don't bother to think about this sort of thing in great depth, and when they are forced to, they become frightened at the prospect. This fear is offset by the changes that human personalities undergo every time new information is processed (virtually every second). The idea of taking control of this change is downright scary.

Human beings, in short, cannot yet achieve this singularity. It will require substantial social and psychological evolution, and a flood of new ideas, to get to that point. I'm not saying that it is not achievable; far from it, there is little else to hope for. But we human beings must begin to understand ourselves before we can even begin to comprehend the vast complexity of a machine entity. I, for one, intend to get there.

Re: Singularities and Nightmares
posted on 04/11/2006 11:18 AM by joegreen

This was the most thought-provoking, believable, and pragmatic article I've read since I can't remember when. Two observations:

" . . a general democratization of the means to do harm." This thought-provoking idea was explored in fictional form fifty years ago, in an excellent short story by "Dune" author Frank Herbert. The title was "Committee Of The Whole."

Humankind's continual search for the "transcendental," to which David Brin makes numerous references, was thoroughly examined in philosopher Paul Kurtz's excellent (if a little long) book "The Transcendental Temptation." He was exploring the effects of strong religious beliefs on various societies, but the principle usually applies to any group of committed true-believers.

These addenda aside, my agreement with Brin's essay is so deep and complete there's little worth adding. I think it should be suggested reading in upper level school classes wherever free students gather. (It certainly won't be allowed in classes where students and teachers are NOT free to choose!) Exposure to the ideas, not compulsory acceptance, should be the goal.

Joseph Green

Re: Singularities and Nightmares
posted on 08/15/2006 8:52 PM by Sentient

"This long tradition'of bright people pouring faith and enthusiasm into transcendental dreams'tells us a lot about one aspect of our nature, a trait that crosses all cultures and all centuries. Quite often, this zealotry is accompanied by disdain for contemporary society'a belief that some kind of salvation can only be achieved outside of the normal cultural network'a network that is often unkind to bright philosophers'and nerds. Seldom is it ever discussed how much these enthusiasts have in common'at least emotionally'with believers in older, more traditional styles of apotheosis, styles that emphasize methods that are more purely mental or spiritual."

This is the most meaningful paragraph. Finally, someone is honest enough to see how little difference there is between the different kinds of believers. Very little would get done were it not for the bright zealots who have a level of belief and motivation that you could call insane. Every complex society of human animals is essentially the same; it's just hard to understand this emotionally unless you've had certain kinds of bizarre life experiences. Meditate on the story of Animal Farm; that will help.

In the prologue of his Singularity book, Kurzweil quotes Muriel Rukeyser as saying that "the universe is made of stories, not of atoms." Creative storytellers have always been thought-leaders, because the narrative or story we have about our own personal lives is so central to our personal identity and our group identity. Our beliefs, and the patterns we've recognized in our reality, must be woven into a coherent, emotionally driven story, or else the energy that drives our lives and sense of well-being is lost.

I can't answer for you why it is that way; it just is. I'm particularly interested in the accumulation and specialization of knowledge, and I see unprecedented changes coming in this century involving how (or if) human beings live and how those of us who survive will define ourselves. I do see an exponential explosion of knowledge and technology. I wouldn't be so quick to conclude that any one of us will achieve immortality or understand the deep secrets of the universe, but the whole thing is nonetheless intellectually fascinating, in that nowhere in our verifiable historical record do we have any precedent for intelligent beings gaining control over the forces of the universe.

DouglasHall, this is an interesting warning:

"There is the potential for disasterous psychosis, where ideas collide and become cancerous, taking over the consciousness and tearing reality asunder from the inside out."

You're right: be careful what you wish for. I think our intellectual abilities could themselves be viewed as a psychosis. We're a bunch of crazy monkeys typing away furiously at keyboards instead of blissfully swinging in a tree enjoying a nice banana. We (as we know ourselves) are here only because our intelligent and power-obsessed forebears killed off the more docile and less obsessed homo sapiens in their attempts to create a more perfect world based on the aforementioned transcendental dreams.

Onward, fellow crazy monkeys!

Re: Singularities and Nightmares
posted on 08/19/2006 11:55 PM by richiemobile

David Brin is exhibiting a "neo-da-Vincian" capability in this essay, coalescing information from seemingly disparate disciplines and bringing it to bear on this conversation. I agree with Extropia: it is most impressive and exhilarating.

My favorite line from this article is its quotation of the cliche:

"Isn't it a shame that our wisdom has not kept pace with our technology?"

In truth, there is little doubt that the 21st Century will be primarily assessed as a "Recovery Century" from the insanity of the 20th Century.

According to Zbigniew Brzezinski, 400,000,000 humans lost their lives in armed conflict in the 20th Century. This was approximately 4/10ths of the entire population of the planet at the beginning of that century (roughly 1 billion).

It is true that most of these deaths occurred in socialist-communist tyrannies such as Stalinist Russia, but many of them, and certainly some of the most horrific, took place in societies which emerged from democratic elections into monstrous idealistic technocracies, such as Nazi Germany.

I would assume that technology itself will be the driving force of the 21st Century, and that the real change will be something called "Informacracy." What will the world be like when everyone knows just about everything about everything, or can access it in a few moments through the electronic media? I would say that this is the driving force behind the Gates-Buffett charitable activities: they do not want to be accused of being the Marie Antoinettes of our time ("Let them eat cake," etc.).
This should be the century when the planet brings the majority of humans out of poverty and into a more "enlightened" state. Billions of humans still live at a level no better than the Dark Ages a thousand years ago, and it is an arguable atrocity that this is so.

The Singularity will no doubt emerge most readily in the wealthiest and most technologically savvy technocracies -- China, Japan, North America, Europe -- mostly running across the Northern Hemisphere. But it should be pursued with a careful eye on the parity of distribution of this wonderful stage of human existence, and with a judicious attempt to eliminate armed conflict in these regions. The unseen benefit of the Jihadist craziness in the Middle East is that it gives all of those countries a common enemy to focus on, keeping their old hatreds and animosities buried.

Re: Singularities and Nightmares
posted on 08/23/2006 4:15 PM by bryonss

The Singularity will be anticipated by irrational exuberance in the stock market. As the value of the stock market approaches infinity, the value of the dollar (and all currencies) will approach zero. Bonds will be worthless because of the plummeting value of money; only those owning stock will be getting rich.

During the Singularity, the most aggressive companies will eliminate or consolidate the competition using superior (and accelerating) technology. This will result in one company being the only company of power, and the rest of the stock market will be worthless.

Maintenance of humans will have no benefit to the superior and rapidly evolving machines. Most humans will become as poor as the third world. Some humans will still form power structures that put a few at the top of the people pyramid, but humans will never again be the rulers of the world. After the Singularity, the fate of humans will be decided by the intelligent machines. The machines will not need to exterminate humans, but will likely do as the SOP directs: cut costs. Humans will have to learn to hunt, fish, and grow crops, and learn to live with crude technology, or starve. It is ironic that the booming stock market finances the Singularity that will make stockholders and every other tech-dependent human inevitably starving poor.

Re: Singularities and Nightmares
posted on 08/23/2006 4:29 PM by bryonss

At the time of the Singularity, the stock market will be near infinity and the value of currency near zero. The value of human labor and human creativity will be near zero. The intelligent machines will not need to give humans raises. Even if there is no massive layoff, humans will find their salaries insignificant. They will have no economic reason to go to work, and the machines will not need humans anyway. This will be a time of massive human unemployment. What will billions of humans do when they have no control over the technology they depended on, and do not know how to live without it? Many humans will starve.

Re: Singularities and Nightmares
posted on 12/14/2006 1:19 PM by iridescent_cuttlefish

While I would not go so far as to call David Brin a "corporate shill," or whatever he was called in this forum, I would hasten to add that there is another perspective from the one he notes here:

...George Orwell's Nineteen Eighty-Four, now celebrating 60 years of scaring readers half to death. Orwell showed us the pit awaiting any civilization that combines panic with technology and the dark, cynical tradition of tyranny. In so doing, he armed us against that horrible fate. By exploring the shadowy territory of the future with our minds and hearts, we can sometimes uncover failure-modes in time to evade them.

The problem here is that it's equally arguable that 1984 has served as much as a manual for tyranny as a warning. Brin's faith in "reciprocal accountability" is, sadly, unfounded. There is no accountability when the organs of that accountability are owned by the same corporate entities who would profit from the death spiral into which we're falling. The media are not free to publish the truth as they see fit; science itself is so influenced by corporate sponsorship and a culture of orthodoxy that it has come to resemble the medieval Church more than an enlightened roundtable; and, finally, politics is so entirely bought and sold by corporate interests that the will of the people is a farce, a fantasy.

Despite this state of affairs, however, there is still hope, and it lies in the very unpredictability to which Mr. Brin alludes when he describes the future. Here's a thought to ponder: Instead of reciprocal accountability as the cornerstone of the new age, how about reciprocal altruism? One great emerging truth about human nature is that it is a far nobler thing than we've been led to believe by those in whose interest it lies to let it be known that we are base, selfish, and murderous creatures.

In a (fairly) recent experiment, chimpanzees refused food when refusing it would save other, unrelated chimps from electrical shocks. Dawkins and Blackmore have described how altruism confers evolutionary advantages. If given the choice, far more people would rather attend a barn-raising than a lynching. "Evidence" to the contrary is self-serving propaganda.

The quieter revelation, unheeded in all this talk of technological singularity, is that we possess the ability, right here and now, to change the world in its most fundamental constructs. Instead of economies of scarcity, we could have economies of abundance. (Google this -- there's no reason why commodities need to cost anything!) Instead of propping up hierarchical economies with senseless consumerism that rapes the earth with wasteful, toxic processes, we could switch from our present hydrocarbon economy to a carbohydrate economy, as we were poised to do in the 1930s, and heal the planet. (Read Dave West's Low, Dishonest Decade to see how that happened.)

The big shift will occur, not when nanotech is a working reality, but when the awareness of these points I've made becomes global. The Singularity is a phenomenon of changing consciousness. Critical mass will be achieved, no matter how many blinders are placed on our eyes, no matter how much capitalist propaganda permeates our (group) mind. (Hint: Mao and Stalin had state capitalist systems, not socialist or communist economies. William Blum's "Killing Hope" will help those confused about this...)

What can you do? Follow these leads, spread the word. Memes replicate, for good or ill.