Tearing Toward the Spike
We will live forever; or we will all perish most horribly; our minds will emigrate to cyberspace, and start the most ferocious overpopulation race ever seen on the planet; or our machines will transcend and take us with them, or leave us in some peaceful backwater where the meek shall inherit the Earth. Or something else, something far weirder and... unimaginable.
Originally presented as a paper at the three-day symposium Australia at the Crossroads? Scenarios and Strategies for the Future, April 30 - May 2, 2000, at the John Curtin International Institute, Curtin University of Technology, Perth, WA, Australia. Published on KurzweilAI.net May 7, 2001.
I wish I could show you the real future, in detail, just the way it's going to unfold. In fact, I wish I knew its shape myself. But the unreliability of trends is due precisely to relentless, unpredictable change, which makes the future interesting but also renders it opaque.
This important notion has been described metaphorically--both in science fiction and in serious essays--as a technological Singularity. That term is due to Professor Vernor Vinge, a mathematician in the Department of Mathematical Sciences, San Diego State University (although a few others had anticipated the insight).[1] `The term "singularity" tied to the notion of radical change is very evocative,' Vinge says, adding: `I used the term "singularity" in the sense of a place where a model of physical reality fails. (I was also attracted to the term by one of the characteristics of many singularities in General Relativity--namely the unknowability of things close to or in the singularity.)'[2] In mathematics, singularities arise when quantities go infinite; in cosmology, a black hole is the physical, literal expression of that relativistic effect.
For Vinge, accelerating trends in computer sciences converge somewhere between 2030 and 2100 to form a wall of technological novelties blocking the future from us. However hard we try, we cannot plausibly imagine what lies beyond that wall. `My "technological singularity" is really quite limited,' Vinge stated. `I say that it seems plausible that in the near historical future, we will cause superhuman intelligences to exist. Prediction beyond that point is qualitatively different from futurisms of the past. I don't necessarily see any vertical asymptotes.' Some proponents of this perspective (including me) take the idea much farther than Vinge, because we do anticipate the arrival of an asymptote in the rate of change. That exponential curve will be composed of a series of lesser sigmoid curves, each mapping a key technological process, rising fast and then saturating its possibilities before being gazumped by its successor, as vacuum tubes were replaced by transistors at the dawn of electronic computing. Humanity itself--or rather, ourselves--will become first `transhuman', it is argued, and then `posthuman'.
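To make that picture of stacked sigmoids concrete, here is a minimal numerical sketch (every constant in it is arbitrary and purely illustrative, not drawn from this essay): each technology generation is modelled as a logistic curve that saturates, yet the upper envelope of the whole family keeps climbing at a roughly steady exponential pace until the final curve flattens out.

```python
# Illustrative sketch only: successive technologies modeled as logistic (sigmoid)
# curves, each saturating at a higher ceiling than its predecessor. Their upper
# envelope approximates a single exponential trend. All constants are arbitrary.
import numpy as np

years = np.arange(1960, 2041)

def logistic(t, midpoint, ceiling, steepness=0.4):
    """Capability of one technology generation over time."""
    return ceiling / (1.0 + np.exp(-steepness * (t - midpoint)))

# Each generation arrives ~15 years after the last and tops out ~30x higher.
generations = [logistic(years, 1970 + 15 * k, 30.0 ** k) for k in range(5)]
envelope = np.maximum.reduce(generations)

growth = np.diff(np.log2(envelope))  # doublings per year of the envelope
print(f"average doublings per year, {years[0]}-{years[-1]}: {growth.mean():.2f}")
```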
While Vinge first advanced his insight in works of imaginative fiction, he has featured it more rigorously in such formal papers as his address to the VISION-21 Symposium, sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993. He opened that paper with the following characteristic statement, which can serve as a fair summary of my own starting point:
`The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.'
The impact of that distressing but apparently free-floating prediction is much greater than you might imagine. In 1970, Alvin Toffler had already grasped the notion of accelerating change. In Future Shock he noted: `New discoveries, new technologies, new social arrangements in the external world erupt into our lives in the form of increased turn-over rates--shorter and shorter relational durations. They force a faster and faster pace of daily life.'[3] This is the very definition of `future shock'.
Thirtysomething years on, we see that this increased pace of change is going to disrupt the nature of humanity as well, due to the emergence of a new kind of mind: AIs (artificial intelligences). With self-bootstrapping minds abruptly arrived in the world, able to enhance and rewrite their own cognitive and affective coding in seconds, science will no longer be restricted to the slow, limited apertures granted by human senses (however augmented by wonderful instruments) and sluggish brains (however glorious by the standards of other animals). We'll find ourselves, Vinge suggests, in a world where nothing much can be predicted reliably.
Is that strictly true? There are some negative constraints we can feel fairly confident about. The sheer reliability and practical effectiveness of quantum theory, and the robust way relativity holds up under strenuous challenge, argue that they will remain at the core of future science--in some form, which is rather baffling, since at the deepest levels they disagree with each other about what kind of cosmos we inhabit.[4] In other words, we already know a great deal, a tremendous amount; that corroborated knowledge will not go away.
Meanwhile, what I call the Spike in my book of that title--Vernor Vinge's technological Singularity--apparently looms ahead of us: a horizon of ever-swifter change we can't yet see past. The Spike is a kind of black hole in the future, created by runaway change and accelerating computer power. We can only try to imagine the unimaginable up to a point. That is what scientists and artists (and visionaries and explorers) have always attempted as part of their job description. Arthur C. Clarke did it rather wonderfully in his 1962 futurist book Profiles of the Future. I was greatly encouraged to read something he said about The Spike in his revised millennium edition: `Damien's book will serve as a more imaginative sequel to the one you are reading now.' If anyone else had said that, I might be worried, but I'm pretty sure that, for Sir Arthur, `imaginative' is not a term of abuse. So let's see if we can sketch a number of possible pathways into and beyond the coming technological singularity.
First, though, one must ask if the postulate is even remotely plausible. In mid-March 2000, the chief scientist and co-founder of Sun Microsystems, Bill Joy, published a now much-discussed warning that took such prospects very seriously indeed. He declared with trepidation: `The vision of near immortality that [software expert Ray] Kurzweil sees in his robot dreams drives us forward; genetic engineering may soon provide treatments, if not outright cures, for most diseases; and nanotechnology and nanomedicine can address yet more ills. Together they could significantly extend our average life span and improve the quality of our lives. Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger' (Wired Magazine, April 2000).[5] He is right to be concerned, but I believe the risks are worth taking. Let's consider the way this deck of novelties might play out.
We need to simplify in order to do that, take just one card at a time and give it priority, treat it as if it were the only big change, modulating everything else that falls under its shadow. It's a risky gambit, since it has never been true in the past and will not strictly be true in the future. The only exception is the dire (and possibly false) prediction that something we do, or something from beyond our control, brings down the curtain, blows the whistle to end the game. So let's call that option:
[A i] No Spike, because the sky is falling
In the second half of the 20th century, people feared that nuclear war (especially nuclear winter) might snuff us all out. Later, with the arrival of subtle sensors and global meteorological studies, we worried about ozone holes and industrial pollution and an anthropogenic Greenhouse effect combining to blight the biosphere. Later still, the public became aware of the small but assured probability that our world will sooner or later be struck by a `dinosaur-killer' asteroid, which could arrive at any moment. For the longer term, we started to grasp the cosmic reality of the sun's mortality, and hence our planet's: solar dynamics will brighten the sun in the next half billion years, roasting the surface of our fair world and killing everything that still lives upon it. Beyond that, the universe as a whole will surely perish one way or another.
Take a more optimistic view of things. Suppose we survive as a species, and maybe as individuals, at least for the medium term (forget the asteroids and Independence Day). That still doesn't mean there must be a Spike, at least in the next century or two. Perhaps artificial intelligence will be far more intractable than Hans Moravec and Ray Kurzweil and other enthusiasts proclaim. Perhaps molecular nanotechnology stalls at the level of micro-electromechanical systems (MEMS) that have a major impact but never approach the fecund cornucopia of a true molecular assembler (a `mint', or Anything Box). Perhaps matter compilers or replicators will get developed, but the security states of the world agree to suppress them, imprison or kill their inventors, prohibit their use at the cost of extreme penalties. Then we have option:
[A ii] No Spike, steady as she goes
This obviously forks into a variety of alternative future histories, the two most apparent being:
[A ii a] Nothing much ever changes ever again
This is the day-to-day working assumption I suspect most of us default to, unless we force ourselves to think hard. It's that illusion of unaltered identity that preserves us sanely from year to year, decade to decade, allows us to retain our equilibrium in a lifetime of such smashing disruption that some people alive now went through the whole mind-wrenching transition from agricultural to industrial to knowledge/electronic societies. It's an illusion, and perhaps a comforting one, but I think we can be pretty sure the future is not going to stop happening just as we arrive in the 21st century.
The clearest alternative to that impossibility is:
[A ii b] Things change slowly (haven't they always?)
Well, no, they haven't. This option pretends to acknowledge a century of vast change, but insists that, even so, human nature itself has not changed. True, racism and homophobia are increasingly despised rather than mandatory. True, warfare is now widely deplored (at least in rich, complacent places) rather than extolled as honorable and glorious. Granted, people who drive fairly safe cars while chatting on the mobile phone live rather... strange... lives, by the standards of the horse-and-buggy era only a century behind us. Still, once everyone in the world is drawn into the global market, once peasants in India and villagers in Africa also have mobile phones and learn to use the Internet and buy from IKEA, things will... settle down. Nations overburdened by gasping population pressures will pass through the demographic transition, easily or cruelly, and we'll top out at around 10 billion humans living a modest but comfortable, ecologically sustainable existence for the rest of time (or until that big rock arrives).
A bolder variant of this model is:
[A iii] Increasing computer power will lead to human-scale AI, and then stall.
But why should technology abruptly run out of puff in this fashion? Perhaps there is some technical barrier to improved miniaturisation, or connectivity, or dense, elegant coding (but experts argue that there will be ways around such road-blocks, and advanced research points to some possibilities: quantum computing, nanoscale processors). Still, natural selection has not managed to leap to a superintelligent variant of humankind in the last 100,000 years, so maybe there is some structural reason why brains top out at the Murasaki, Einstein or van Gogh level.
So AI research might reach the low-hanging fruit, all the way to human equivalence, and then find it impossible (even with machine aid) to discern a path through murky search space to a higher level of mental complexity. Still, using the machines we already have will not leave our world unchanged--far from it. And even if this story has some likelihood, a grislier variant seems even more plausible:
[A iv] Things go to hell, and if we don't die we'll wish we had
This isn't the nuclear winter scenario, or any other kind of doom by weapons of mass destruction--let alone gray nano goo, which by hypothesis never gets invented in this denuded future. Technology's benefits demand a toll from the planet's resource base and from our polluted environment. The rich nations, numerically in a minority, notoriously use more energy and materials than the rest, pour more crap into air and sea. That can change--must change, or we are all in bad trouble--but in the short term one can envisage a nightmare decade or two during which the Third World `catches up' with the wealthy consumers, burning cheap, hideously polluting soft coal, running the exhaust of a billion and more extra cars into the biosphere...
Some Green activists mock `technical fixes' for these problems, but those seem to me our best last hope.[6] We are moving toward manufacturing and control systems very different from the wasteful, heavy-industrial, pollutive kind that helped drive up the world's surface temperature by 0.4 to 0.8 degrees Celsius in the 20th century.[7]
Pollsters have noted incredulously that people overwhelmingly state that their individual lives are quite contented and their prospects good, while agreeing that the nation or the world generally is heading for hell in a hand-basket. It's as if we've forgotten that the vice and brutality of television entertainments do not reflect the true state of the world, that it's almost the reverse: we revel in such violent cartoons because, for almost all of us, our lives are comparatively placid, safe and measured. If you doubt this, go and live for a while in medieval Paris, or palaeolithic Egypt (you're not allowed to be a noble).
Roads from here and now to the Spike
I assert that all of these No Spike options are of low probability, unless they are brought forcibly into reality by the hand of some Luddite demagogue using our confusions and fears against our own best hopes for local and global prosperity. If I'm right, we are then pretty much on course for an inevitable Spike. We might still ask: what, exactly, is the motor that will propel technological culture up its exponential curve?
Here are seven obvious distinct candidates for paths to the Spike (separate lines of development that in reality will interact, generally hastening but sometimes slowing each other):
[B i] Increasing computer power will lead to human-scale AI, and then will swiftly self-bootstrap to incomprehensible superintelligence.
This is the `classic' model of the singularity, the path to the ultraintelligent machine and beyond. But it seems unlikely that there will be an abrupt leap from today's moderately fast machines to a fully-functioning artificial mind equal to our own, let alone its self-redesigned kin--although this proviso, too, can be countered, as we'll see. If we can trust Moore's Law--computer power currently doubling every year--as a guide (and strictly we can't, since it's only a record of the past rather than an oracle), we get the kinds of timelines presented by Ray Kurzweil, Hans Moravec, Michio Kaku, Peter Cochrane and others, explored at length in The Spike. Let's briefly sample those predictions:
Peter Cochrane: several years ago, the British Telecom futures team, led by their guru Cochrane, saw human-level machines as early as 2016. Their remit did not encompass a sufficiently deep range to sight a Singularity.
Ray Kurzweil:[8] around 2019, a standard cheap computer has the capacity of a human brain, and some machines are claimed to have met the Turing test (that is, to have passed as conscious, fully responsive minds). By 2029, such machines are a thousand times more powerful. Machines not only ace the Turing test, they claim to be conscious, and are accepted as such. His sketch of 2099 is effectively a Spike: fusion between human and machine, uploads more numerous than the embodied, immortality. It's not clear why this takes an extra 70 years to achieve.
Ralph Merkle:[9] while Dr Merkle's special field is nanotechnology, this plainly has a possible bearing on AI. His is the standard case, although the timeline is still `fuzzy', he told me in January: various computing parameters go about as small as we can imagine between 2010 and 2020, if Moore's Law holds up. To get there will require `a manufacturing technology that can arrange individual atoms in the precise structures required for molecular logic elements, connect those logic elements in the complex patterns required for a computer, and do so inexpensively for billions of billions of gates.' So the imperatives of the computer hardware industry will create nanoassemblers by 2020 at latest. Choose your own timetable for the resulting Spike once both nano and AI are in hand.
Hans Moravec:[10] multipurpose `universal' robots by 2010, with `humanlike competence' in cheap computers by around 2039--a more conservative estimate than Ray Kurzweil's, but astonishing none the less. Even so, Dr Moravec considers a Vingean singularity as likely within 50 years.
Michio Kaku: superstring physicist Kaku surveyed some 150 scientists and devised a profile of the next century and farther. He concludes broadly that from `2020 to 2050, the world of computers may well be dominated by invisible, networked computers which have the power of artificial intelligence: reason, speech recognition, even common sense'.[11] In the next century or two, he expects humanity to achieve a Type I Kardashev civilization, with planetary governance and technology able to control weather but essentially restricted to Earth. Only later, between 800 and 2500 years farther on, will humanity pass to Type II, with command of the entire solar system. This projection seems to me excessively conservative.
Vernor Vinge: his part-playful, part-serious proposal was that a singularity was due around 2020, marking the end of the human era. Maybe as soon as 2014.
Eliezer Yudkowsky: once we have a human-level AI able to understand and redesign its own architecture, there will be a swift escalation into a Spike. Could be as soon as 2010, with 2005 and 2020 as the outer limits, if his proposed Singularity Institute has anything to do with it (this will be option [C]). Yudkowsky, I should warn you, is a 20-year-old autodidact genius, perhaps his generation's equivalent of Norbert Wiener or Murray Gell-Mann. Or maybe he's talking through his hat. Take a look at his site and decide for yourselves.
[B ii] Increasing computer power and advances in neuroscience will lead to augmented human intelligence (IA), via direct neural-computer interfaces.
Why build an artificial brain when we each have one already? Well, it is regarded as impolite to delve intrusively into a living brain purely for experimental purposes, whether by drugs or surgery (sometimes dubbed `neurohacking'), except if no other course of treatment for an illness is available. Increasingly subtle scanning machines are now available, allowing us to watch as the human brain does its stuff, and a few brave pioneers are coupling chips to parts of themselves, but few expect us to wire ourselves to machines in the immediate future. That might be mistaken, however. Professor Kevin Warwick, of Reading University, successfully implanted a sensor-trackable chip into his arm in 1998. A year later, he allowed an implanted chip to monitor his neural and muscular patterns, then had a computer use this information to copy the signals back to his body and cause his limbs to move; he was thus a kind of puppet, driven by the computer signals. He plans experiments where the computer, via similar chips, takes control of his emotions as well as his actions.[12]
As we gradually learn to read the language of the brain's neural nets more closely, and finally to write directly back to them, we will find ways to expand our senses, directly experience distant sensors and robot bodies (perhaps giving us access to horribly inhospitable environments like the depths of the oceans or the blazing surface of Venus). Instead of hammering keyboards or calculators, we might access chips or the global net directly via implanted interfaces. Perhaps sensitive monitors will track brainwaves, myoelectricity (muscles) and other indices, and even impose patterns on our brains using powerful, directed magnetic fields. Augmentations of this kind, albeit rudimentary, are already seen at the lab level. Perhaps by 2020 we'll see boosted humans able to share their thoughts directly with computers. If so, it is a fair bet that neuroscience and computer science will combine to map the processes and algorithms of the naturally evolved brain, and try to emulate it in machines. Unless there actually is a mysterious non-replicable spiritual component, a soul, we'd then expect to see a rapid transition to self-augmenting machines--and we'd be back to path [B i].
[B iii] Increasing computer power and advances in neuroscience will lead to rapid uploading of human minds.
On the other hand, if [B ii] turns out to be easier than [B i], we would open the door to rapid uploading technologies. Once the brain/mind can be put into a parallel circuit with a machine as complex as a human cortex (available, as we've seen, somewhere between 2020 and 2040), we might expect a complete, real-time emulation of the scanned brain to be run inside the machine that's copied it. Again, unless the `soul' fails to port over along with the information and topological structure, you'd then find your perfect twin (although grievously short on, ahem, a body) dwelling inside the device.
Your uploaded double would need to be provided with adequate sensors (possibly enhanced, compared with our limited eyes and ears and tastebuds), plus means of acting with ordinary intuitive grace on the world (via physical effectors of some kind--robotic limbs, say, or a robotic telepresence). Or perhaps your upload twin would inhabit a cyberspace reality, less detailed than ours but more conducive to being rewritten closer to heart's desire. Such VR protocols should lend themselves readily to life as an uploaded personality.
Once personality uploading is shown to be possible and tolerable or, better still, enjoyable, we can expect at least some people to copy themselves into cyberspace. How rapidly this new world is colonised will depend on how expensive it is to port somebody there, and to sustain them. Computer storage and run-time should be far cheaper by then, of course, but still not entirely free. As economist Robin Hanson has argued, the problem is amenable to traditional economic analysis. `I see very little chance that cheap fast upload copying technology would not be used to cheaply create so many copies that the typical copy would have an income near `subsistence' level.'[13] On the other hand, `If you so choose to limit your copying, you might turn an initial nest egg into fabulous wealth, making your few descendants very rich and able to afford lots of memory.'
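To see the shape of Hanson's argument in miniature, here is a toy sketch (my simplification, with made-up numbers, not his model): as long as a copy earns more than it costs to spawn and run, another copy enters the labour market, so the typical copy's income is squeezed down toward its running cost.

```python
# Toy illustration of the copying-to-subsistence argument. All figures are invented:
# a fixed total wage bill is shared among copies, and copying continues while the
# surplus over running cost still exceeds the (small) cost of making another copy.

def equilibrium_copies(total_wage_bill=1_000.0, run_cost_per_copy=1.0, copy_cost=0.1):
    copies = 1
    while True:
        wage = total_wage_bill / copies            # fixed demand spread over all copies
        if wage - run_cost_per_copy <= copy_cost:  # no longer worth spawning another
            return copies, wage
        copies += 1

copies, wage = equilibrium_copies()
print(f"{copies} copies, each earning about {wage:.2f} against a running cost of 1.00")
```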
If an explosion of uploads is due to occur quite quickly after the technology emerges, early adopters would gobble up most of the available computing resources. But this assumes that uploaded personalities would retain the same apparent continuity we fleshly humans prize. Being binary code, after all (however complicated), such people might find it easier to alter themselves--to rewrite their source code, so to speak, and to link themselves directly to other uploaded people, and AIs if there are any around. This looks like a recipe for a Spike to me. How soon? It depends. If true AI-level machines are needed, and perhaps medical nanotechnology to perform neuron-by-neuron, synapse-by-synapse brain scanning, we'll wait until both technologies are out of beta-testing and fairly stable. That would be 2040 or 2050, I'd guesstimate.
[B iv] Increasing connectivity of the Internet will allow individuals or small groups to amplify the effectiveness of their conjoined intelligence.
Routine disseminated software advances will create (or evolve) ever smarter and more useful support systems for thinking, gathering data, writing new programs--and the outcome will be an `in-one-bound-Jack-was-free' surge into AI. That is the garage band model of a singularity, and while it has a certain cheesy appeal, I very much doubt that's how it will happen.
But the Internet is growing and complexifying at a tremendous rate. It is barely possible that one day, as Arthur C. Clarke suggested decades ago of the telephone system, it will just... wake up. After all, that's what happened to a smart African ape, and unlike computers it and its close genetic cousins weren't already designed to handle language and mathematics.
[B v] Research and development of microelectromechanical systems (MEMS) and fullerene-based devices will lead to industrial nanoassembly, and thence to `anything boxes'.
Here we have the `classic' molecular nanotechnology pathway, as predicted by Drexler's Foresight Institute and NASA,[14] but also by the mainstream of conservative chemists and adjacent scientists working in MEMS, and funded nanotechnology labs around the world. In a 1995 Wired article, Eric Drexler predicted nanotechnology within 20 years. Is 2015 too soon? Not, surely, for the early stage devices under development by Zyvex Corporation in Texas, who hope to have at least preliminary results by 2010, if not sooner.[15] For many years AI was granted huge amounts of research funding, without much result (until recently, with a shift in direction and the wind of Moore's Law at its back). Nano is now starting to catch the research dollars, with substantial investment from governments (half a billion dollars promised by Clinton, and funding in Japan, even Australia) and mega-companies such as IBM. The prospect of successful nanotech is exciting, but should also make you afraid, very afraid. If nano remains (or rather, becomes) a closely guarded national secret, contained by munitions laws, a new balance of terror might take us back to something like the Cold War in international relations--but this would be a polyvalent, fragmented, perhaps tribalised balance.
Or building and using nanotech might be like the manufacture of dangerous drugs or nuclear materials: centrally produced by big corporations' mints, under stringent protocols (you hope, fearful visions of Homer Simpson's nuclear plant dancing in the back of your brain), except for those in Colombia and the local bikers' fortress...
Or it might be a Ma & Pa business: a local plant equal, perhaps, to a used car yard, with a fair-sized raw materials pool, mass transport to shift raw or partly processed feed stocks in, and finished product out. This level of implementation might resemble a small internet server, with some hundreds or thousands of customers. One might expect the technology to grow more sophisticated quite quickly, as minting allows the emergence of cheap and amazingly powerful computers. Ultimately, we might find ourselves with the fabled anything box in every household, protected against malign uses by an internal AI system as smart as a human, but without human consciousness and distractibility. We should be so lucky. But it could happen that way.
A quite different outcome is foreshadowed in a prescient 1959 novel by Damon Knight, A for Anything, in which a `matter duplicator' leads not to utopian prosperity for all but to cruel feudalism, a regression to brutal personal power held by those clever thugs who manage to monopolise the device. A slightly less dystopian future is portrayed in Neal Stephenson's satirical but seriously intended The Diamond Age, where tribes and nations and new optional tetherings of people under flags of affinity or convenience tussle for advantage in a world where the basic needs of the many poor are provided free, but with galling drab uniformity, at street corner matter compilers owned by authorities. That is one way to prevent global ruination at the hands of crackers, lunatics and criminals, but it's not one that especially appeals--if an alternative can be found.
Meanwhile, will nanoassembly allow the rich to get richer--to hug this magic cornucopia to their selfish breasts--while the poor get poorer? Why should it be so? In a world of 10 billion flesh-and-blood humans (ignoring the uploads for now), there is plenty of space for everyone to own decent housing, transport, clothing, arts, music, sporting opportunities... once we grant the ready availability of nano mints. Why would the rich permit the poor to own the machineries of freedom from want? Some optimists adduce benevolence, others prudence. Above all, perhaps, is the basic law of an information/knowledge economy: the more people you have thinking and solving and inventing and finding the bugs and figuring out the patches, the better a nano minting world is for everyone (just as it is for an open source computing world). Besides, how could they stop us?[16] (Well, by brute force, or in the name of all that's decent, or for our own moral good. None of these methods will long prevail in a world of free-flowing information and cheap material assembly. Even China has trouble keeping dissidents and mystics silenced.)
The big necessary step is the prior development of early nano assemblers, and this will be funded by university and corporate (and military) money for researchers, as well as by increasing numbers of private investors who see the marginal pay-offs in owning a piece of each consecutive improvement in micro- and nano-scale devices. So yes, the rich will get richer--but the poor will get richer too, as by and large they do now, in the developed world at least. Not as rich, of course, nor as fast. By the time the nano and AI revolutions have attained maturity, these classifications will have shifted ground. Economists insist that rich and poor will still be with us, but the metric will have changed so drastically, so strangely, that we here-and-now can make little sense of it.
[B vi] Advances in genomics, proteomics and the other biological sciences will open a different route to the nano and AI revolutions.
This is a rather different approach, and increasingly I see experts arguing that it is the short-cut to mastery of the worlds of the very small and the very complex. `Biology, not computing!' is the slogan. After all, bacteria, ribosomes, viruses, cells for that matter, already operate beautifully at the micro- and even the nano-scales.
Still, even if technology takes a major turn away from mechanosynthesis and `hard' minting, this approach will require a vast armory of traditional and innovative computers and appropriately ingenious software. The IBM petaflop project Blue Gene (doing a quadrillion operations a second) will be a huge system of parallel processors designed to explore protein folding, crucial once the genome projects have compiled their immense catalog of genes. Knowing a gene's recipe is of little value unless you know, as well, how the protein it encodes twists and curls in three-dimensional space. That is the promise of the first couple of decades of the 21st century, and it will surely unlock many secrets and open new pathways.
Exploring those paths will require all the help molecular biologists can get from advanced computers, virtual reality displays, and AI adjuncts. Once again, we can reasonably expect those paths to track right into the foothills of the Spike. Put a date on it? Nobody knows--but recall that the structure of DNA was first decoded in 1953, and by around half a century later the whole genome will be in the bag. How long until the next transcendent step--complete understanding of all our genes, how they express themselves in tissues and organs and abilities and behavioural bents, how they can be tweaked to improve them dramatically? Cautiously, the same interval: around 2050. More likely (if Moore's law keeps chugging along), half that time: 2025 or 2030.
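Most of the convergent dates quoted above are, at bottom, compound-doubling arithmetic. A back-of-envelope sketch, using assumed round figures that are not taken from this essay (very roughly 10^9 operations per second for a cheap year-2000 machine, and a Moravec-style, much-disputed 10^14 operations per second for human-brain equivalence), shows where timelines in the 2017-2035 band come from:

```python
# Back-of-envelope only: given an assumed starting capacity, an assumed target, and a
# constant doubling time, the arrival year follows directly. Both capacity figures are
# rough, contested estimates used purely for illustration.
import math

start_year = 2000
desktop_ops = 1e9   # assumed ops/sec for a cheap year-2000 desktop machine
brain_ops = 1e14    # Moravec-style estimate of human-brain equivalence (disputed)

for doubling_years in (1.0, 1.5, 2.0):
    years_needed = doubling_years * math.log2(brain_ops / desktop_ops)
    arrival = start_year + years_needed
    print(f"doubling every {doubling_years} years -> brain-scale hardware around {arrival:.0f}")
```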
The usual timetable for the Spike, in other words.
[C] The Singularity happens when we go out and make it happen.
That's Eliezer Yudkowsky's sprightly, in-your-face declaration of intent, which dismisses as uncomprehending all the querulous cautions about the transition to superintelligence and the Singularity on its far side.[17]
Just getting to human-level AI, this analysis claims, is enough for the final push to a Spike. How so? Don't we need unique competencies to do that? Isn't the emergence of ultra-intelligence, either augmented-human or artificial, the very definition of a Vingean singularity?
Yes, but this is most likely to happen when a system with the innate ability to view and reorganise its own cognitive structure gains the conscious power of a human brain. A machine might have that facility, since its programming is listable: you could literally print it out--in many, many volumes--and check each line. Not so an equivalent human, with our protein spaghetti brains, compiled by gene recipes and chemical gradients rather than exact algorithms; we clearly just can't do that.
So we get intelligent design turned back upon itself: a cascading multiplier that has no obvious bounds. The primary challenge becomes software, not hardware. The raw petaflop end of the project is chugging along nicely now, mapped by Moore's Law, but even if it tops out, it doesn't matter. A self-improving seed AI could run glacially slowly on a limited machine substrate. The point is, so long as it has the capacity to improve itself, at some point it will do so convulsively, bursting through any architectural bottlenecks to design its own improved hardware, maybe even build it (if it's allowed control of tools in a fabrication plant). So what determines the arrival of the Singularity is just the amount of effort invested in getting the original seed software written and debugged.
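The flavour of that convulsive takeoff can be caught in a toy model (my illustration under stated assumptions, not Yudkowsky's analysis): suppose each redesign cycle multiplies the system's capability by a fixed factor, and a more capable system finishes its next redesign proportionally faster. The cycle times then form a shrinking series, so the total wall-clock time to reach any capability level is bounded, which is why the curve looks like a wall rather than a slope.

```python
# Toy takeoff model (illustrative assumptions only): each cycle improves capability by a
# constant factor, and cycle duration shrinks in proportion to current capability. The
# cycle times then sum to a finite limit, so capability runs away within bounded time.

def takeoff(initial_capability=1.0, gain_per_cycle=1.5, first_cycle_years=2.0, cycles=30):
    capability, elapsed = initial_capability, 0.0
    for n in range(1, cycles + 1):
        cycle_time = first_cycle_years / capability  # smarter systems redesign themselves faster
        elapsed += cycle_time
        capability *= gain_per_cycle                 # each redesign multiplies capability
        print(f"cycle {n:2d}: year {elapsed:6.3f}, capability {capability:12.1f}")
    return elapsed

takeoff()  # elapsed time converges toward first_cycle_years / (1 - 1/gain_per_cycle), here 6 years
```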
This particular argument is detailed in Yudkowsky's ambitious web documents `Coding a Transhuman AI', `Singularity Analysis' and `The Plan to Singularity'. It doesn't matter much, though, whether these specific plans hold up under detailed expert scrutiny; they serve as an accessible model for the process we're discussing.
Here we see conventional open-source machine intelligence, starting with industrial AI, leading to a self-rewriting seed AI which runs right into takeoff to a singularity. You'd have a machine that combines the brains of a human (maybe literally, in coded format, although that is not part of Yudkowsky's scheme) with the speed and memory of a shockingly fast computer. It won't be like anything we've ever seen on earth. It should be able to optimise its abilities, compress its source code, turn its architecture from a swamp of mud huts into a gleaming, compact, ergonomic office (with a spa and a bar in the penthouse, lest we think this is all grim earnest).[18] Here is quite a compelling portrait of what it might be like, `human high-level consciousness and AI rapid algorithmic performance combined synergetically,' to be such a machine:
Combining Deep Blue with Kasparov... yields a Kasparov who can wonder `How can I put a queen here?' and blink out for a fraction of a second while a million moves are automatically examined. At a higher level of integration, Kasparov's conscious perceptions of each consciously examined chess position may incorporate data culled from a million possibilities, and Kasparov's dozen examined positions may not be consciously simulated moves, but `skips' to the dozen most plausible futures five moves ahead.[19]
Such a machine, we see, is not really human-equivalent after all. If it isn't already transhuman or superhuman, it will be as soon as it has hacked through its own code and revised it (bit by bit, module by module, making mistakes and rebooting and trying again until the whole package comes out right). If that account has any validity, we also see why the decades-long pauses in the time-tables cited earlier are dubious, if not preposterous. Given a human-level AI by 2039, it is not going to wait around biding its time until 2099 before creating a discontinuity in cognitive and technological history. That will happen quite fast, since a self-optimising machine (or upload, perhaps) will start to function so much faster than its human colleagues that it will simply leave them behind, along with Moore's plodding Law. A key distinguishing feature, if Yudkowsky's analysis is sound, is that we never will see HAL, the autonomous AI in the movie 2001. All we will see is AI specialized to develop software.
Since I don't know the true shape of the future any more than you do, I certainly don't know whether an AI or nano-minted Singularity will be brought about (assuming it does actually occur) by careful, effortful design in an Institute with a Spike engraved on its door, by a congeries of industrial and scientific research vectors, or by military ambitions pouring zillions of dollars into a new arena that promises endless power through mayhem, or mayhem threatened.
It does strike me as excessively unlikely that we will skid to a stop anytime soon, or even that a conventional utopia minus any runaway singularity sequel (Star Trek's complacent future, say) will roll off the mechanosynthesising assembly line.[20]
Are there boringly obvious technical obstacles to a Spike? Granted, particular techniques will surely saturate and pass through inflexion points, tapering off their headlong thrust. If the past is any guide, new improved techniques will arrive (or be forced into reality by the lure of profit and sheer curiosity) in time to carry the curves upward at the same acceleration. If not? Well, then, it will take longer to reach the Spike, but it is hard to see why progress in the necessary technologies would simply stop.
Well, perhaps some of these options will become technically feasible but remain simply unattractive, and hence bypassed. Dr Russell Blackford, a lawyer, former industrial advocate and literary theorist who has written interestingly about social resistance to major innovation, notes that manned exploration of Mars has been a technical possibility for the past three decades, yet that challenge has not been taken up. Video-conferencing is available but few use it (unlike the instant adoption of mobile phones). While a concerted program involving enough money and with widespread public support could bring us conscious AI by 2050, he argues, it won't happen. Conflicting social priorities will emerge, the task will be difficult and horrendously expensive. Are these objections valid? AI and nano need not be impossibly hard and costly, since they will flow from current work powered by Moore's Law improvements. Missions to Mars, by contrast, have no obvious social or consumer or even scientific benefits beyond their simple feel-good achievement. Profound science can be done by remote vehicles. By contrast, minting and AI or IA will bring immediate and copious benefits to those developing them--and will become less and less expensive, just as desktop computers have.
What of social forces taking up arms against this future? We've seen the start of a new round of protests and civil disruptions aimed at genetically engineered foods and work in cloning and genomics, but not yet targeted at longevity or computing research. It will come, inevitably. We shall see strange bedfellows arrayed against the machineries of major change. The only question is how effective their opposition will be.
In 1999, for example, emeritus professor Alan Kerr, winner of the lucrative inaugural Australia Prize for his work in plant pathology, radio-broadcast a heartfelt denunciation of the Greens' adamant opposition to new genetically engineered crops that allow use of insecticide to be cut by half. Some aspects of science, though, did concern Dr Kerr. He admitted that he'd been `scared witless' by a book whose `thesis is that within a generation or two, science will have conquered death and that humans will become immortal. Have you ever thought of the consequences to society and the environment of such an achievement? If you're anything like me, there might be a few sleepless nights ahead of you. Why don't the greenies get stuck into this potentially horrifying area of science, instead of attacking genetic engineering with all its promise for agriculture and the environment?'[21] This, I suspect, is a short-sighted and ineffective diversionary tactic. It will arouse confused opposition to life extension and other beneficial on-going research programs, but will lash back as well against any ill-understood technology.
Cultural objections to AI might emerge, as venomous as yesterday's and today's attacks on contraception and abortion rights, or anti-racist struggles. If opposition to the Spike, or any of its contributing factors, gets attached to one or more influential religions, that might set back or divert the current. Alternatively, careful study of the risks of general assemblers and autonomous artificial intelligence might lead to just the kinds of moratoriums that Greens now urge upon genetically engineered crops and herds. Given the time lag we can expect before a singularity occurs--at least a decade, and far more probably two or three--there's room for plenty of informed specialist and public debate. Just as the basic technologies of the Spike will depend on design-ahead projects, so too we'll need a kind of think-ahead program to prepare us for changes that might, indeed, scare us witless. And of course, the practical impact of new technologies conditions the sorts of social values that emerge; recall the subtle interplay between the oral contraceptive pill and sexual mores, and the swift, easy acceptance of in vitro conception.
Despite these possible impediments to the arrival of the Spike, I suggest that while it might be delayed, almost certainly it's not going to be halted. If anything, the surging advances I see every day coming from labs around the world convince me that we already are racing up the lower slopes of its curve into the incomprehensible.
In short, it makes little sense to try to pin down the future. Too many strange changes are occurring already, with more lurking just out of sight, ready to leap from the equations and surprise us. True AI, when it occurs, might rush within days or months to SI (superintelligence), and from there into a realm of Powers whose motives and plans we can't even start to second-guess. Nano minting could go feral or worse, used by crackpots or statesmen to squelch their foes and rapidly smear us all into paste. Or sublime AI Powers might use it to the same end, recycling our atoms into better living through femtotechnology.
The single thing I feel confident of is that one of these trajectories will start its visible run up the right-hand side of the graph within 10 or 20 years, and by 2030 (or 2050 at latest) will have put everything we hold self-evident into question. We will live forever; or we will all perish most horribly; our minds will emigrate to cyberspace, and start the most ferocious overpopulation race ever seen on the planet; or our machines will transcend and take us with them, or leave us in some peaceful backwater where the meek shall inherit the Earth. Or something else, something far weirder and... unimaginable. Don't blame me. That's what I promised you.
NOTES
1. See Vernor Vinge, True Names... and Other Dangers, New York: Baen Books, 1987; Threats... and Other Promises, New York: Baen Books, 1988; and especially Marooned in Realtime, London: Pan Books, 1987. An important source is his Address to NASA VISION-21 Symposium, March 30-31, 1993, downloadable from http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html or http://www.frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html
For a general survey of this topic in far greater detail than I can provide in this essay, see my The Spike: Accelerating into the Unimaginable Future (Melbourne, Australia: Reed Books/New Holland, 1997; the revised, expanded and updated edition is: The Spike: How our Lives are Being Changed by Rapidly Advancing Technologies New York: Tor/Forge, February 2001).
2. Private communication, August, 1996. Vinge's own most recent picture of a plausible 2020, cautiously sans Singularity, emphasises the role of embedded computer networks so ubiquitous that finally they link into a kind of cyberspace Gaia, even merge with the original Gaia, that geological and biological macro-ecosystem of the globe (Vernor Vinge, `The Digital Gaia,' Wired, January 2000, pp. 74-8). However, on the last day of 1999, Vinge told me in an email: `The basic argument hasn't changed.'
3. Alvin Toffler, Future Shock, [1970] London: Pan Books, 1972, p. 170.
4. See, for a simplified discussion, Nobelist Steven Weinberg's summary article `A Unified Physics by 2050?', Scientific American, December 1999, pp. 36-43.
5. http://www.wired.com/wired/archive/8.04/joy.html
6. See the late economist Julian Simon's readable and optimistic book The Ultimate Resource at http://www.inform.umd.edu/EdRes/Colleges/BMGT/.Faculty/JSimon/Ultimate_Resource/
7. http://www4.nationalacademies.org/news.nsf/isbn/0309068916?OpenDocument
8. Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Sydney: Allen & Unwin, 1999.
9. http://merkle.com/merkle
10. http://www.frc.ri.cmu.edu/~hpm/book98/
11. Michio Kaku, Visions: How Science Will Revolutionize the 21st Century and Beyond, Oxford University Press, 1998, p. 28.
12. http://news.bbc.co.uk/hi/english/sci/tech/newsid_503000/503552.stm
13. Personal communication, 8 December, 1999.
14. http://www.nas.nasa.gov/Groups/SciTech/nano/index.html
15. http://www.zyvex.com
16. Some thoughts on the difficulty of containing nanotech (with some comparisons to software piracy `warez'), and the likely evaporation of our current economy, can be found in: http://www.cabell.org/Quincy/Documents/Nanotechnology/hello_nanotechnology.html
17. http://pobox.com/~sentience/singularity.html
18. Sorry, that's me again; Yudkowsky didn't say it.
19. http://www.pobox.com/~sentience/AI_design.temp.html#view_advantage
20. To be fair, the Star Trek franchise has always made room for alien civilisations that have passed through a singularity and become as gods. It's just that television's notion of post-Spike entities stops short at mimicry of Greek and Roman mythology (Xena the Warrior Princess goes to the future), spiritualised transformations of humans into a sort of angel (familiar also from Babylon-5), down-market cyberpunk collectivity (the Borg), or sardonic whimsy (the entertaining character Q, from the Q dimension, where almost anything can happen and usually does). It's hard not to wonder why immortality is not assured by the transporter or the replicator, which can obviously encode a whole person as easily as a piping hot cup of Earl Grey tea, or why people age and die despite the future's superb medicine. The reasons, obviously, have nothing to do with plausible extrapolation and everything to do with telling an entertaining tale, using a range of contemporary human actors, that appeals to the largest demographic and ruffles as few feathers as possible while still delivering some faint frisson of difference and future shock.
21. http://abc.net.au/rn/science/ockham/stories/s54399.htm
The book that frightened Dr Kerr was my The Last Mortal Generation (Sydney, Australia: New Holland, 1999). The Spike, by contrast, would surely shock him rigid. Arthur C. Clarke, by the way, took a different view of Last Mortal: in Profiles of the Future (London: Gollancz, 1999), he generously called it `this truly mind-stretching book' (p. 189). I much prefer to stretch minds than scare them witless.