Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0408.html

Max More and Ray Kurzweil on the Singularity
by Max More and Raymond Kurzweil

As technology accelerates over the next few decades and machines achieve superintelligence, we will encounter a dramatic phase transition: the "Singularity." Will it be a "wall" (a barrier as conceptually impenetrable as the event horizon of a black hole in space), an "AI-Singularity" ruled by super-intelligent AIs, or a gentler "surge" into a posthuman era of agelessness and super-intelligence? And will this meme be hijacked by religious "passive singularitarians" obsessed with a future rapture? Ray Kurzweil and Extropy Institute president Max More debate.


Published February 26, 2002.

Ray: You were one of the earliest pioneers in articulating and exploring issues of the acceleration of technology and transhumanism. What led you to this examination?

Max: This short question is actually an enormous question. A well-rounded answer would take longer than I dare impose on any reader! One short answer is this:

Before my interest in examining accelerating technological progress and issues of transhumanism, I first had a realization. I saw very clearly how limited human beings are in their wisdom, in their intellectual and emotional development, and in their sensory and physical capabilities. I have always felt dissatisfied with those limitations and faults. After an early-teens interest in what I'll loosely call (with mild embarrassment) "psychic stuff," as I came to learn more science and critical thinking, I ceased to give any credence to psychic phenomena, as well as to any traditional religious views. With those paths to any form of transcendence closed, I realized that transhumanity (as I began to think of it) would only be achieved through science and technology steered by human values.

So, the realization was in two parts: a recognition of the undesirable limitations of human nature, and an understanding that science and technology were essential keys to overcoming human nature's confines. In my readings in science, evolutionary theory was especially potent in encouraging me to think in terms of the development of intelligence rather than static Platonic forms. When I taught basic evolutionary theory to college students, I invariably found that about 95% of them had never studied it in school. Most quickly developed some understanding of it in the class, but some found it hard to adjust to a different perspective with so many implications. To me evolutionary thinking seemed natural. It only made it clearer that humanity need not be the pinnacle of evolution.

My drive to understand the issues in a transhumanist manner resulted from a melding of technological progress and philosophical perspective. Even before studying philosophy in the strict sense, I had the same essential worldview that included perpetual progress, practical optimism, and self-transformation. Watching the Apollo 11 moon landing at the age of 5, then all the Apollo launches to follow, excited me tremendously. At that time, space was the frontier to explore. Soon, even before I had finished growing, I realized that the major barrier to crack first was that of human aging and mortality. In addition to tracking progress, from my early teens I started taking whatever reasonable measures I could to extend my life expectancy.

Philosophically, I formed an extropian/transhumanist perspective by incorporating numerous ideas and influences into what many have found to be a coherent framework. Despite disagreeing with much (and not having read all) of Nietzsche's work, I do have a fondness for certain of his views and the way he expressed them. Most centrally, as a transhumanist, I resonate to Nietzsche's declaration that "Man is a rope, fastened between animal and overman--a rope over an abyss... What is great in man is that he is a bridge and not a goal."

A bridge, not a goal. That nicely summarizes a transhumanist perspective. We are not perfect. Neither are we to be despised or pitied or to debase ourselves before imaginary perfect beings. We are to see ourselves as a work in progress. Through ambition, intelligence, and a dash of good sense, we will progress from human to something better (according to my values).

Many others have influenced my interest in these ideas. Though Timothy Leary was not the deepest or clearest thinker, his SMI²LE (Space Migration, Intelligence Increase, Life Extension) formula still appeals to me. However, today I find issues such as achieving superlongevity, superintelligence, and self-sculpting abilities to be more urgent. After my earlier years of interest, I developed my thinking particularly by reading philosophers Paul Churchland and Daniel Dennett, biologist Richard Dawkins, Hans Moravec, Roy Walford, Marvin Minsky, Vernor Vinge, and most recently Ray Kurzweil, who I think has brought a delightful clarity to many transhumanist issues.

Ray: How do you define the Singularity?

Max: I believe the term "Singularity," as we are using it these days, was popularized by Vernor Vinge in his 1986 novel Marooned in Realtime. (It appears that the term was first used in something like this sense, but not implying superhuman intelligence, by John von Neumann in the 1950s.) Vinge's own usage seems to leave an exact definition open to varying interpretations. Certainly it involves an accelerating increase in machine intelligence culminating in a sudden shift to super intelligence, either through the awakening of networked intelligence or the development of individual AIs. From the human point of view, according to Vinge, this change "will be a throwing away of all the previous rules, perhaps in the blink of an eye." Since the term means different things to different people, I will give three definitions.

Singularity #1: This Singularity includes the notion of a "wall" or "prediction horizon"--a time horizon beyond which we can no longer say anything useful about the future. The pace of change is so rapid and deep that our human minds cannot sensibly conceive of life post-Singularity. Many regard this as a specific point in time in the future, sometimes estimated at around 2035 when AI and nanotechnology are projected to be in full force. However, the prediction-horizon definition does not require such an assumption. The more that progress accelerates, the shorter the distance measured in years that we may see ahead. But as we progress, the prediction horizon, while probably shortening in time, will also move further out. So this definition could be broken into two, one of which insists on a particular date for a prediction horizon, while the other acknowledges a moving horizon. One argument for assigning a point in time is based on the view that the emergence of super-intelligence will be a singular advance, an instantaneous break with all the rules of the past.

Singularity #2: We might call this the AI-Singularity, or Moravec's Singularity since it most closely resembles the detailed vision of roboticist Hans Moravec. In this Singularity humans have no guaranteed place. The Singularity is driven by super-intelligent AI, which immediately follows from human-level AI. Without the legacy hardware of humans, these AIs leave humans behind in a runaway acceleration. In some happier versions of this type of Singularity, the super-intelligent AIs benevolently "uplift" humans to their level by means of brain uploading.

Singularity #3: Singularity seen as a surge into a transhuman and posthuman era. This view, though different in its emphasis, is compatible with the shifting time-horizon version of Singularity #1. In Singularity as Surge the rate of change need not remotely approach infinity (as a mathematical singularity). In this view, technological progress will continue to accelerate, though perhaps not quite as fast as some projections suggest, rapidly but not discontinuously transforming the human condition.

This could be termed a Singularity for two reasons: First, it would be a historically brief phase transition from the human condition to a posthuman condition of agelessness, super-intelligence, and physical, intellectual, and emotional self-sculpting. This dramatic phase transition, while not mathematically instantaneous, will mean an unprecedented break from the past. Second, since the posthuman condition (itself continually evolving) will be so radically different from human life, it will likely be largely if not completely incomprehensible to humans as we are today. Unlike some versions of the Singularity, the Surge/phase transition view allows that people may be at different stages along the path to posthuman at the same time, and that we may become posthuman in stages rather than all at once. For instance, I think it fairly likely that we achieve superlongevity before super-intelligence.

Ray: Do you see a Singularity in the future of human civilization?

Max: I do see a Singularity of the third kind in our future. A historically, if not subjectively, extremely fast phase change from human to transhuman to posthuman appears as a highly likely scenario. I do not see it as inevitable. It will take vast amounts of hard work, intelligence, determination, and some wisdom and luck to achieve. It's possible that some humans will destroy the race through means such as biological warfare. Or our culture may rebel against change, seduced by religious and cultural urgings for "stability," "peace" and against "hubris" and "the unknown."

Although a Singularity as Surge could be stopped or slowed in these and other ways (massive meteorite strike?), I see the most likely scenario as being a posthuman Singularity. This is strongly implied by current accelerating progress in numerous fields, including computation, materials science, bioinformatics, and the convergence of infotech, neuroscience, biotech, microtech, and nanotech.

Although I do not see super-intelligence alone as the only aspect of a Singularity, I do see it as a central aspect and driver. I grant that it is entirely possible that super-intelligence will arrive in the form of a deus ex machina, a runaway single-AI super-intelligence. However, my tentative assessment suggests that the Singularity is more likely to arise from one of two other means suggested by Vinge in his 1993 essay1. It could result from large networks of computers and their users--some future version of a semantic Web--"waking up" in the form of a distributed super-intelligent entity or community of minds. It could also (not exclusive of the previous scenario) result from increasingly intimate human-computer interfaces (by "computer," I loosely include all manner of sensors, processors, and networks). At least in the early stages, and partly in combination with human-computer interfaces, I expect biological human intelligence to be augmented through the biological sciences.

To summarize: I do not expect an instantaneous Singularity, nor one in which humans play no part after the creation of a self-improving human-level AI. I do anticipate a Singularity in the form of a growing surge in the pace of change, leading to a transhuman transition. This phase change will be a historically rapid and deep change in the evolutionary process. This short period will put an end to evolution in thrall to our genes. Biology will become an increasingly vestigial component of our nature. Biological evolution will become ever more suffused with and replaced by technological evolution, until we pass into the posthuman era.

As a postscript to this answer, I want to sound a note of caution. As the near-universal prevalence of religious belief testifies, humans tend to attach themselves, without rational thought, to belief systems that promise some form of salvation, heaven, paradise, or nirvana. In the Western world, especially in millenarian Christianity, millions are attracted to the notion of sudden salvation and of a "rapture" in which the saved are taken away to a better place.

While I do anticipate a Singularity as Surge, I am concerned that the Singularity concept is especially prone to being hijacked by this memeset. This danger especially arises if the Singularity is thought of as occurring at a specific point in time, and even more if it is seen as an inevitable result of the work of others. I fear that many otherwise rational people will be tempted to see the Singularity as a form of salvation, making personal responsibility for the future unnecessary. Already, I see a distressing number of superlongevity advocates who apparently do not exercise or eat healthily, instead firmly hoping that medical technology will cure aging before they die. Clearly this abdication of personal responsibility is not inherent in the Singularity concept.

But I do see the concept as an attractor that will draw in those who treat it in this way. The only way I could see this as a good thing is if the Passive Singularitarians (as I will call them) substitute the Singularity for preexisting and much more unreasonable beliefs. I think those of us who speak of the Singularity should be wary of this risk if we value critical thought and personal responsibility. As much as I like Vernor and his thinking, I get concerned reading descriptions of the Singularity such as "a throwing away of all the previous rules, perhaps in the blink of an eye." This comes dangerously close to encouraging a belief in a Future Rapture.

Ray: When will the Singularity take place?

Max: I cannot answer this question with any precision. I feel more confident predicting general trends than specific dates. Some trends look very clear and stable, such as the growth in computer power and storage density. But I see enough uncertainties (especially in the detailed understanding of human intelligence) in the breakthroughs needed to pass through a posthuman Singularity to make it impossible to give one date. Many Singularity exponents see several trends in computer power and atomic control over matter reaching critical thresholds around 2030. It does look like we will have computers with hardware as powerful as the human brain by then, but I remain to be convinced that this will immediately lead to superhuman intelligence. I also see a tendency in many projections to take a purely technical approach and to ignore possible economic, political, cultural, and psychological factors that could dampen the advances and their impact.

I will make one brief point to illustrate what I mean: Electronic computers have been around for over half a century, and used in business for decades. Yet their effect on productivity and economic growth only became evident in the mid-1990s, as corporate organization and business processes finally reformed to make better use of the new technology. Practice tends to lag technology, yet projections rarely allow for this. (This factor also led to many dot-com busts where business models required consumers to change their behavior in major ways.)

Cautions aside, I would be surprised (and, of course, disappointed) if we did not move well into a posthuman Singularity by the end of this century. I think that we are already at the very near edge of the transhuman transition. This will gather speed, and could lead to a Singularity as phase transition by the middle of the century. So, I will only be pinned down to the extent of placing the posthuman Singularity as not earlier than 2020 and probably not later than 2100, with a best guess somewhere around the middle of the century. Since that puts me in my 80s or 90s, I hope I am unduly pessimistic!

Ray: Thanks, Max, for these thoughtful and insightful replies. I appreciate your description of transhumanity as a transcendence to be "achieved through science and technology steered by human values." In this context, Nietzsche's "Man is a rope, fastened between animal and overman--a rope over an abyss" is quite pertinent, if we interpret Nietzsche's "overman" as a reference to transhumanity.

Your point that the concept of the Singularity could be hijacked by the "future rapture" memeset is discerning, but I would point out that humankind's innate inclination for salvation is not necessarily irrational. Perhaps we have this inclination precisely to anticipate the Singularity. Maybe it is the Singularity that has been hijacked by irrational belief systems, rather than the other way around. However, I share your antipathy toward passive singularitarianism. If technology is a double-edged sword, then there is the possibility of technology going awry as it surges toward the Singularity, with profoundly disturbing consequences. We do need to keep our eye on the ethical ball.

I don't agree that a cultural rebellion "seduced by religious and cultural urgings for 'stability,' 'peace,' and against 'hubris' and 'the unknown'" is likely to derail technological acceleration. Epochal events such as two world wars, the Cold War, and numerous economic, cultural, and social upheavals have failed to make the slightest dent in the pace of the fundamental trends. As I discuss below, recessions, including the Great Depression, register as only slight deviations from the far more profound effect of the underlying exponential growth of the economy, fueled by the exponential growth of information-based technologies.

The primary reason that technology accelerates is that each new stage provides more powerful tools to create the next stage. The same was true of the biological evolution that created the technology-creating species in the first place. Indeed, each stage of evolution adds another level of indirection to its methods, and we can see technology itself as a level of indirection for the biological evolution that resulted in technological evolution.

To put this another way, the ethos of scientific and technological progress is so deeply ingrained in our civilization that halting this process is essentially impossible. Occasional ethical and legal impediments to fairly narrow developments are rather like stones in the river of advancement; progress just flows around them.

You wrote that it appears that we will have sufficient computer power to emulate the human brain by 2030, but that this development will not necessarily "immediately lead to superhuman intelligence." The other very important development is the accelerating process of reverse engineering (i.e., understanding the principles of operation of) the human brain. Without repeating my entire thesis here, this area of knowledge is also growing exponentially, and includes increasingly detailed mathematical models of specific neuron types, exponentially growing price-performance of human brain scanning, and increasingly accurate models of entire brain regions. Already, at least two dozen of the several hundred regions in the brain have been satisfactorily reverse-engineered and implemented in synthetic substrates. I believe that it is a conservative scenario to expect that the human brain will be fully reverse-engineered in a sufficiently detailed way to recreate its mental powers by 2029.

As you and I have discussed on various occasions, I've done a lot of thinking over the past few decades about the laws of technological evolution. As I mentioned above, technological evolution is a continuation of biological evolution. So the laws of technological evolution are compatible with the laws of evolution in general.

These laws imply at least a "surge" form of Singularity, as you describe it, during the first half of this century. This can be seen from the predictions for a wide variety of technological phenomena and indicators, including the power of computation, communication bandwidths, technology miniaturization, brain reverse engineering, the size of the economy, the rate of paradigm shift itself, and many others. These models apply to measures of both hardware and software.

Of course, if a mathematical line of inquiry yields counterintuitive results, then it makes sense to check the sensibility of these conclusions in another manner. However, in thinking through how the transformations of the Singularity will actually take place, through distinct periods of transhuman and posthuman development, the predictions of these formulae do make sense to me, in that one can describe each stage and how it is the likely effect of the stage preceding it.

To me, the concept of the Singularity as a "wall" implies a period of infinite change, that is, a mathematical Singularity. If there is a point in time at which change is infinite, then there is an inherent barrier in looking beyond this point in time. It becomes as impenetrable as the event horizon of a black hole in space, in which the density of matter and energy is infinite. The concept of the Singularity as a "surge," on the other hand, is compatible with the idea of exponential growth. It is the nature of an exponential function that it starts out slowly, then grows quite explosively as one passes what I call the "knee of the curve." From the surge perspective, the growth rate never becomes literally infinite, but it may appear that way from the limited perspective of someone who cannot follow such enormously rapid change.
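The distinction can be made precise with two toy curves. An exponential, V(t) = V0 * exp(k*t), is finite at every finite time, however steep it becomes; a hyperbolic curve, V(t) = C/(t* - t), diverges at the definite moment t*. Only the second is a mathematical singularity; the first merely outruns any fixed observer's ability to follow it.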

This perspective can be consistent with the idea that you mention of the prediction horizon "moving further out," because one of the implications of the Singularity as a surge phenomenon is that humans will enhance themselves through intimate connection with technology, thereby increasing our capacity to understand change. So changes that we could not follow today may very well be comprehensible when they occur. I think that it is fair to say that at any point in time, the changes that occur will be comprehensible to someone, albeit that the someone may be a superintelligence on the cutting edge of the Singularity.

With the above as an introduction, I thought you would find of interest a dialog that Hans Moravec and I had on the issue you address of whether the Singularity is a "wall" (i.e., "prediction horizon") or a "surge." Some of these ideas are best modeled in the language of mathematics, but I will try to put the math in a box, so to speak, as it is not necessary to track through the formulas in order to understand the basic ideas.

First, let me describe the formulas that I showed to Hans.

The following analysis describes the basis of understanding evolutionary change as an exponentially growing phenomenon (a double exponential, to be exact). I will describe here the growth of computational power, although the formulas are similar for other exponentially growing, information-based aspects of evolution, including our knowledge of human intelligence, which is a primary source of the software of intelligence.

We are concerned with three variables:

V: Velocity (i.e., power) of computation (measured in Calculations per Second per unit cost)

W: World Knowledge as it pertains to designing and building computational devices

t: Time

As a first-order analysis, we observe that computer power is a linear function of the knowledge of how to build computational devices. We also note that knowledge is cumulative, and that the instantaneous increment to knowledge is proportional to computational power. These observations result in the conclusion that computational power grows exponentially over time.

See Analysis One

The data that I've gathered shows that there is exponential growth in the rate of exponential growth (we doubled computer power every three years early in the 20th century, every two years in the middle of the century, and are doubling it every one year now).

The exponentially growing power of technology results in exponential growth of the economy. This can be observed going back at least a century. Interestingly, recessions, including the Great Depression, can be modeled as a fairly weak cycle on top of the underlying exponential growth. In each case, the economy "snaps back" to where it would have been had the recession/depression never existed in the first place. We can see even more rapid exponential growth in specific industries tied to the exponentially growing technologies, such as the computer industry.

If we factor in the exponentially growing resources for computation, we can see the source for the second level of exponential growth.

See Analysis Two

Now, let's consider some real-world data. My estimate of brain capacity is 100 billion neurons times an average 1,000 connections per neuron (with the calculations taking place primarily in the connections) times 200 calculations per second--a total of 20 million billion (2 x 10^16) calculations per second. Although these estimates are conservatively high, one can find higher and lower estimates. However, even much higher (or lower) estimates by orders of magnitude only shift the prediction by a relatively small number of years.

See Analysis Three

Human Brain = 100 Billion (10^11) neurons * 1,000 (10^3) Connections/Neuron * 200 (2 x 10^2) Calculations Per Second Per Connection = 2 x 10^16 Calculations Per Second

Human Race = 10 Billion (10^10) Human Brains = 2 x 10^26 Calculations Per Second

We achieve one Human Brain capability (2 x 10^16 cps) for $1,000 around the year 2023.

We achieve one Human Brain capability (2 x 10^16 cps) for one cent around the year 2037.

We achieve one Human Race capability (2 x 10^26 cps) for $1,000 around the year 2049.

We achieve one Human Race capability (2 x 10^26 cps) for one cent around the year 2059.

If we factor in the exponentially growing economy, particularly with regard to the resources available for computation (already about a trillion dollars per year), we can see that nonbiological intelligence will be many trillions of times more powerful than biological intelligence by approximately mid-century.

Although the above analysis pertains to computational power, a comparable analysis can be made of brain reverse-engineering, i.e., knowledge about the principles of operation of human intelligence. There are many different ways to measure this, including mathematical models of human neurons; the resolution, speed, and bandwidth of human brain scanning; and knowledge about the digitally controlled analog, massively parallel algorithms utilized in the human brain.

As I mentioned above, we have already succeeded in developing highly detailed models of several dozen of the several hundred regions of the brain and implementing these models in software, with very successful results. I won't describe all of this in our dialog here, but I will be reporting on brain reverse-engineering in some detail in my next book, The Singularity is Near.

We can view this effort as analogous to the genome project. The effort to understand the information processes in our biological heritage has largely completed the stage of collecting the raw genomic data, is now rapidly gathering the proteomic data, and has made a good start at understanding the methods underlying this information. With regard to the even more ambitious project to understand our neural organization, we are now approximately where the genome project was about ten years ago, but are further along than most people realize.

Keep in mind that the brain is the result of chaotic processes (which themselves use a controlled form of evolutionary pruning) described by a genome with very little data (only about 23 million bytes compressed). The analyses I will present in The Singularity is Near demonstrate that it is quite conservative to expect that we will have a complete understanding of the human brain and its methods, and thereby the software of human intelligence, prior to 2030.

The above is my own analysis, at least in mathematical terms, and backed up by extensive real-world data, of the Singularity as a "surge" phenomenon. This, then, is the conservative view of the Singularity.

Hans Moravec points out that my assumption that computer power grows proportionally with knowledge (i.e., V = C1 * W) is overly pessimistic because independent innovations (each of which is a linear increment to knowledge) increase the power of the technology in a multiplicative way, rather than an additive way.

In an email to me on February 15, 1999, Hans wrote:

"For instance, one (independent innovation) might be an algorithmic discovery (like log N sorting) that lets you get the same result with half the computation. Another might be a computer organization (like RISC) that lets you get twice the computation with the same number of gates. Another might be a circuit advance (like CMOS) that lets you get twice the gates in a given space. Others might be independent speed-increasing advances, like size-reducing copper interconnects and capacitance-reducing silicon-on-insulator channels. Each of those increments of knowledge more or less multiplies the effect of all of the others, and computation would grow exponentially in their number."

So if we substitute, as Hans suggests, V = exp(W) rather than V = C1 * W, then the result is that both W and V become infinite in finite time.

See Analysis Four

Hans and I then engaged in a dialog as to whether or not it is more accurate to say that computer power grows exponentially with knowledge (which is suggested by an analysis of independent innovations) (i.e., V = exp (W)), or grows linearly with knowledge (i.e., V = C1 * W) as I had originally suggested. We ended up agreeing that Hans' original statement that V = exp (W) is too "optimistic," and that my original statement that V = C1 * W is too "pessimistic."

We then looked at what is the "weakest" assumption that one could make that nonetheless results in a mathematical singularity. Hans wrote:

"But, of course, a lot of new knowledge steps on the toes of other knowledge, by making it obsolete, or diluting its effect, so the simple independent model doesn't work in general. Also, simply searching through an increasing amount of knowledge may take increasing amounts of computation. I played with the V = exp(W) assumption to weaken it, and observed that the singularity remains if you assume processing increases more slowly, for instance V = exp(sqrt(W)) or exp(W^(1/4)). Only when V = exp(log(W)) (i.e., = W) does the progress curve subside to an exponential.

Actually, the singularity appears somewhere in the I-would-have-expected tame region between V = W and V = W^2 (!)

Unfortunately the transitional territory between the merely exponential V = W and the singularity-causing V = W^2 is analytically hard to deal with. I assume just before a singularity appears, you get non-computably rapid growth!"
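One way to make Hans's observation precise: with dW/dt = C2 * V(W), knowledge escapes to infinity in finite time exactly when the integral of dW/V(W) out to infinity converges (a standard blowup criterion for equations of this form). For V = W the integral diverges, giving merely exponential growth; for V = W^(1+e) with any e > 0 it converges, which is why the singularity appears in the seemingly tame territory between V = W and V = W^2.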

Interestingly, if we take my original assumption that computer power grows linearly with knowledge, but add that the resources for computation also grow in the same way, then the total amount of computational power grows as the square of knowledge, and again we have a mathematical singularity.

See Analysis Five

The conclusions that I draw from these analyses are as follows. Even with the "conservative" assumptions, we find that nonbiological intelligence crosses the threshold of matching and then very quickly exceeds biological intelligence (both hardware and software) prior to 2030. Nonbiological intelligence will then be able to combine the powers of biological intelligence with the ways in which nonbiological intelligence already excels, in terms of accuracy, speed, and the ability to instantly share knowledge.

Subsequent to the achievement of strong AI, human civilization will go through a period of increasing intimacy between biological and nonbiological intelligence, but this transhumanist period will be relatively brief before it yields to a posthumanist period in which nonbiological intelligence vastly exceeds the powers of unenhanced human intelligence. This, at least, is the conservative view.

The more "optimistic" view is difficult for me to imagine, so I assume that the formulas stop just short of the point at which the result becomes noncomputably large (i.e., infinite). What is interesting in the dialog that I had with Hans above is how easily the formulas can produce a mathematical singularity. Thus the difference between the Singularity as "wall" and the Singularity as "surge" results from a rather subtle difference in our assumptions.

Max: Ray, I know I'm in good company when the "conservative" view means that we achieve superhuman intelligence and a posthuman transition by 2030! I suppose that puts me in the unaccustomed position of being the ultra-conservative. From the regular person's point of view, the differences between our expectations will seem trivial. Yet I think these critical comparisons are valuable in deciding whether the remaining human future is 25 years or 50 years or more. Differences in these estimations can have profound effects on outlook and which plans for the future are rational. One example would be the sense of saving heavily to build compound returns versus spending almost everything until the double exponential really turns strongly toward a vertical ascent.

I find your trend analysis compelling and certainly the most comprehensive and persuasive ever developed. Yet I am not quite willing to yield fully to the mathematical inevitability of your argument. History since the Enlightenment makes me wary of all arguments to inevitability, at least when they point to a specific time. Clearly your arguments are vastly more detailed and well-grounded than those of the 18th century proponents of inevitable progress. But I suspect that a range of non-computational factors could dampen the growth curve. The double exponential curve may describe very well the development of new technologies (at least those driven primarily by computation), but not necessarily their full implementation and effects.

Numerous world-changing technologies from steel mills to electricity to the telephone to the Internet have taken decades to move from introduction to widespread effects. We could point to the Web to argue that this lag between invention and full adoption is shrinking. I would agree for the most part, yet different examples may tell a different story. Fuel cells were invented decades ago but only now do they seem poised to make a major contribution to our energy supply.

Psychological and cultural factors act as future-shock absorbers. I am not sure that models based on models of evolution in information technology necessarily take these factors fully into account. In working with businesses to help them deal with change, I see over and over the struggle involved in altering organizational culture and business processes to take advantage of powerful software solutions from supply chain management to customer relationship management (CRM). CRM projects have a notoriously high failure rate, not because the software is faulty but because of poor planning and a failure to re-engineer business processes and employee incentives to fit.

I expect we will eventually reach a point where cognitive processes and emotions can be fully understood and modulated and where we have a deep understanding of social processes. These will then cease to act as significant brakes to progress. But major advances in those areas seem likely to come close to the Singularity and so will act as drags until very close. It could be that your math models may overstate early progress toward the Singularity due to these factors. They may also understate the last stages of progress as practice catches up to technology with the liberation of the brain from its historical limitations.

Apart from these human factors, I am concerned that other trends may not present such an optimistic picture of accelerating progress. Computer programming languages and tools have improved over the last few decades, but it seems they improve slowly. Yes, at some point computers will take over most programming and perhaps greatly accelerate the development of programming tools. Or humans will receive hippocampus augmentations to expand working memory. My point is not that we will not reach the Singularity but that different aspects of technology and humanity will advance at different rates, with the slower holding back the faster.

I have no way of formally modeling these potential braking factors, which is why I refrain from offering specific forecasts for a Singularity. Perhaps they will delay the transhuman transition by only a couple of years, or perhaps by 20. I would agree that as information technology suffuses ever more of the economy and society, its powerful engines of change will accelerate everything faster and faster as time goes by. Therefore, although I am not sure that your equations will always hold, I do expect actual events to converge on your models the closer we get to the Singularity.

I would like briefly to make two other comments on your reply. First, you suggest that "humankind's innate inclination for salvation is not necessarily irrational. Perhaps we have this inclination precisely to anticipate the Singularity." I am not sure how to take this suggestion. A natural reading suggests a teleological interpretation: humans have been given (genetically or culturally) this inclination. If so, who gave us this? Since I do not believe the evidence supports the idea that we are designed beings, I don't think such a teleological view of our inclination for salvation is plausible.

I would also say that I don't regard this inclination as inherently irrational. The inclination may be a side effect of the apparently universal human desire to understand and to solve problems. Those who feel helpless to solve certain kinds of problems often want to believe there is a higher power (tax accountant, car mechanic, government, or god) that can solve the problem. I would say such an inclination only becomes irrational when it takes the form of unfounded stories that are taken as literal, explanatory facts rather than symbolic expressions of deep yearnings. That aside, I am curious how you think that we come to have this inclination in order to anticipate the Singularity.

Second, you say that you don't agree that a cultural rebellion "seduced by religious and cultural urgings for 'stability,' 'peace,' and against 'hubris' and 'the unknown'" is "likely to derail technological acceleration." We really don't disagree here. If you look again at what I wrote, you can see that I do not think this derailing is likely. More exactly, while I think such rebellions are highly likely locally (look at the Middle East, for example), they would have a hard time in today's world universally stopping or appreciably slowing technological progress. My concern was to challenge the idea that progress is inevitable rather than simply highly likely.

This point may seem unimportant if we adopt a position based on overall trends. But it will certainly matter to those left behind, temporarily or permanently, in various parts of the world: the Muslim woman dying in childbirth as her culture refuses her medical attention; a dissident executed for speaking out against the state; or a patient who dies of nerve degeneration or who loses their personality to dementia because religious conservatives have halted progress in stem cell research. The derailing of progress is likely to be temporary and local, but no less real and potentially deadly for many. A more widespread and enduring throwback, perhaps due to a massively infectious and deadly terrorist attack, surely cannot be ruled out. Recent events have reminded us that the future needs security as well as research.

Normally I do the job of arguing that technological change will be faster than expected. Taking the other side in this dialog has been a stimulating change of pace. While I expect those major events we call the Singularity to come just a little later than you calculate, I strongly hope that I am mistaken and that you are correct. The sooner we master these technologies, the sooner we will conquer aging and death and all the evils that humankind has been heir to.

Ray: It's tempting indeed to continue this dialog indefinitely, or at least until the Singularity comes around. I am sure that we will do exactly that in a variety of forums. A few comments for the moment, however:

You cite the difference in our future perspective (regarding the time left until the Singularity) as being about 20 years. Of course, from the perspective of human history, let alone evolutionary history, that's not a very big difference. It's not clear that we differ by even that much. I've projected the date 2029 for a nonbiological intelligence to pass the Turing test. (As an aside, I just engaged in a "long term wager" with Mitchell Kapor to be administered by the "Long Now Foundation" on just this point.)

However, the threshold of a machine passing a valid Turing test, although unquestionably a singular milestone, does not represent the Singularity. This event will not immediately alter human identity in such a profound way as to represent the tear in the fabric of history that the term Singularity implies. It will take a while longer for all of these intertwined trends--biotechnology, nanotechnology, computing, communications, miniaturization, brain reverse engineering, virtual reality, and others--to fully mature. I estimate the Singularity at around 2045. You estimated the "posthuman Singularity as [occurring] . . . with a best guess somewhere around the middle of the century." So, perhaps, our expectations are close to being about five years apart. Five years will in fact be rather significant in 2045, but even with our mutual high levels of impatience, I believe we will be able to wait that much longer.

I do want to comment on your term "the remaining human future" (being "25 years or 50 years or more"). I would rather consider the post-Singularity period to be one that is "post-biological" rather than "posthuman." In my view, the other side of the Singularity may properly be considered still human and still infused with (our better) human values. At least that is what we need to strive for. The intelligence we are creating will be derived from human intelligence, i.e., derived from human designs, and from the reverse engineering of human intelligence.

As the beautiful images and descriptions that (your wife) Natasha Vita-More and her collaborators put together ("Radical Body Design 'Primo 3M+'") demonstrate, we will gladly move beyond the limitations, not to mention the pain and discomfort, of our biological bodies and brains. As William Butler Yeats wrote, an aging man's biological body is "but a paltry thing, a tattered coat upon a stick." Interestingly, Yeats concludes, "Once out of nature I shall never take, My bodily form from any natural thing, But such a form as Grecian goldsmiths make, Of hammered gold and gold enamelling." I suppose that Yeats never read Feynman's treatise on nanotechnology or he would have mentioned carbon nanotubes.

I am concerned that if we refer to a "remaining human future," this terminology may encourage the perspective that something profound is being lost. I believe that a lot of the opposition to these emerging technologies stems from this uninformed view. I think we are in agreement that nothing of true value needs to be lost.

You write that "Numerous world-changing technologies. . . . have taken decades to move from introduction to widespread effects." But keep in mind that, in accordance with the law of exponential returns, technology adoption rates are accelerating along with everything else. The following chart shows the adoption time of various technologies, measured from invention to adoption by a quarter of the U.S. population:

[Chart: adoption times of successive technologies, from invention to use by a quarter of the U.S. population, showing progressively shorter lags.]
With regard to technologies that are not information-based, the exponent of exponential growth is definitely smaller than for computation and communications, but nonetheless positive, and as you point out, fuel cell technologies are poised for rapid growth. As one example of many, I'm involved with one company that has applied MEMS technology to fuel cells. Ultimately we will also see revolutionary changes in transportation from nanotechnology combined with new energy technologies (think microwings).

A common challenge to the feasibility of strong AI, and therefore to the Singularity, is to distinguish between quantitative and qualitative trends. This challenge says, in essence, that perhaps certain brute force capabilities such as memory capacity, processor speed, and communications bandwidths are expanding exponentially, but the qualitative aspects are not.

This is the hardware versus software challenge, and it is an important one. With regard to the price-performance of software, the comparisons in virtually every area are dramatic. Consider speech recognition software as one example of many. In 1985, $5,000 bought you a speech recognition software package that provided a 1,000 word vocabulary, did not provide continuous speech capability, required three hours of training, and had relatively poor accuracy. Today, for only $50, you can purchase a speech-recognition software package with a 100,000 word vocabulary, that does provide continuous speech capability, requires only five minutes of training, has dramatically improved accuracy, provides natural language understanding ability (for editing commands and other purposes), and many other features.

What about software development itself? I've been developing software myself for forty years, so I have some perspective on this. It's clear that the growth in productivity of software development has a lower exponent, but it is nonetheless growing exponentially. The development tools, class libraries, and support systems available today are dramatically more effective than those of decades ago. I have today small teams of just three or four people who achieve objectives in a few months that are comparable to what a team of a dozen or more people could accomplish in a year or more 25 years ago. I estimate the doubling time of software productivity to be approximately six years, which is slower than the doubling time for processor price-performance, which today is approximately one year. However, software productivity is nonetheless growing exponentially.
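As a rough consistency check on that figure: a six-year doubling time compounds to about 2^(25/6), or roughly 18-fold, over 25 years, which is in the same range as the team comparison above (roughly 12 people for 12 months then, versus 3 or 4 people for a few months now, a ratio on the order of 15-fold in person-months).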

The most important point to be made here is that we have a specific game plan (i.e., brain reverse engineering) for achieving the software of human-level intelligence in a machine. It's actually not my view that brain reverse engineering is the only way to achieve strong AI, but this scenario does provide an effective existence-proof of a viable path to get there.

If you speak to some of the (thousands of) neurobiologists who are diligently creating detailed mathematical models of the hundreds of types of neurons found in the brain, or who are modeling the patterns of connections found in different regions, you will often encounter the common engineer's/scientist's myopia that results from being immersed in the specifics of one aspect of a large challenge. I'll discuss this challenge in some detail in the book I'm now working on, but I believe it is a conservative projection to expect that we will have detailed models of the several hundred regions of the brain within about 25 years (we already have impressively detailed models and simulations for a couple dozen such regions). As I alluded to above, only about half of the genome's 23 million bytes of useful information (i.e., what's left of the 800 million byte genome after compression) specifies the brain's initial conditions.

The other "non-computational factors [that] could dampen the growth curve" that you cite are "psychological and cultural factors [acting] as future-shock absorbers." You describe organizations you've worked with in which the "culture and business processes" resist change for a variety of reasons. It is clearly the case that many organizations are unable to master change, but ultimately such organizations will not be the ones to thrive.

You write that "different aspects of technology and humanity will advance at different rates, with the slower holding back the faster." I agree with the first part, but not the second. There's no question but that different parts of society evolve at different rates. We still have people pushing plows with oxen, but the continued existence of preindustrial societies has not appreciably slowed down Intel and other companies from advancing microprocessor design.

In my view, the rigid cultural and religious factors that you eloquently describe end up being like stones in a stream. The water just flows around them. A good case in point is the current stem-cell controversy. Although I believe that banning therapeutic cloning represents an ignorant and destructive position, it has had the effect of accelerating workable approaches to converting one type of cell into another. Every cell has the complete genetic code, and we are beginning to understand the protein-signaling factors that control differentiation. The holy grail of tissue engineering will be to directly convert one cell into another by manipulating these signaling factors and thereby bypassing fetal tissue and egg cells altogether. We're not that far from being able to do this, and the current controversy has actually spurred these efforts. Ultimately, these will be superior approaches anyway because egg cells are hard to come by.

I think our differences here are rather subtle, and I agree strongly with your insight that "the derailing of progress is likely to be temporary and local, but no less real and potentially deadly for many."

On a different note, you ask, "who gave us this . . . innate inclination for salvation?" I agree that we're evolved rather than explicitly designed beings, so we can view this inclination to express "deep yearnings" as representative of our position as the cutting edge of evolution. We are that part of evolution that will lead the Universe to convert its endless masses of dumb matter into sublimely intelligent patterns of mass and energy. So we can view this inclination for transcendence in its evolutionary perspective as a useful survival tool in our ecological niche, a special niche for a species that is capable of modeling and extending its own capabilities.

Finally, I have to strongly endorse your conclusion that "the sooner we master these technologies, the sooner we will conquer aging and death and all the evils that humankind has been heir to."

Max: I would like to respond to one point. You wrote:

"I do want to comment on your term 'the remaining human future' (being '25 years or 50 years or more'). I would rather consider the post-Singularity period to be one that is 'post-biological' rather than 'posthuman.' In my view, the other side of the Singularity may properly be considered still human and still infused with (our better) human values. At least that is what we need to strive for."

I am sympathetic to what you are saying about the term "posthuman." It could carry connotations for some readers that it means disposing of all human values. Certainly no one could get that impression from what I have written on values, but the term itself may imply that. "Post-biological" is better in that sense, and I sometimes use that. Conceptually, I do think it is possible to be literally posthuman without yet being fully post-biological. For example, a thoroughly genetically engineered person, or a biological body supplemented by technological components, might be so divergent from the human genome that the person is no longer of the same species.

I wrote a paper on the human-species concept in light of where we are heading. I didn't get it into a form that quite satisfies me, but it did lead me to study biologists' definitions of species concepts. (They cannot agree, it seems!) I think it is reasonable by those definitions to talk of "posthuman." However, since the connotation may be undesirable, it may best be avoided. As you say, I would expect our post-Singularity selves to retain some of our human values (the better ones I hope). If we were to completely detach the human-species concept from its biological roots, we might still talk of post-Singularity humans, though I find that awkward. I did once try out the term "ultrahuman." That has the advantage of implying the retention of the best of humanity. However, I decided it sounded a bit too much like a superhero.

So, while I agree that "post-biological" works well for the most part, I'm still not quite settled on a preferred term. Any "post-" term is unsatisfying, but of course it's hard to create a positive term before we really know what forms we might take. I suspect that species concepts for ourselves may come to be fairly useless, post-Singularity.

Ray: I basically agree with what you're saying. Terminology is important, of course. For example, calling the multicell bundles that can be used for creating stem cells "human embryos" has a lot of unfortunate implications. And recall that "nuclear magnetic resonance" was changed to "magnetic resonance imaging." When we figure out that strong magnetic fields have negative consequences, they'll probably have to change it again. We'll both have to work further on the terminology for the next stage in the evolution of our civilization.

Max: Ray, I want to thank you for inviting me into this engaging dialog. Such a vigorous yet well-mannered debate has, I think, helped both of us to detail our views further. Our shared premises and modestly differing conclusions have allowed us to tease out some implicit assumptions. As you concluded, our views amount to a rather small difference considered from the point of view of most people. The size of our divergence in estimations of time until that set of events we call the Singularity will seem large only once we are close to it. However, before then I expect our views to continue converging. What I find especially interesting is how we have reached very similar conclusions from quite different backgrounds. That someone with a formal background originally in economics and philosophy converges in thought with someone with a strong background in science and technology encourages me to favor E. O. Wilson's view of consilience. I look forward to continuing this dialog with you. It has been a pleasure and an honor.

Ray: Thanks, Max, for sharing your compelling thoughts. The feelings of satisfaction are mutual, and I look forward to continued convergence and consilience.

1. Vernor Vinge, "The Coming Technological Singularity," 1993.

Analysis One
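A sketch of the derivation described in the text, with C1 and C2 as constants of proportionality:

\[
V = C_1 W, \qquad \frac{dW}{dt} = C_2 V
\]

Substituting the first equation into the second gives dW/dt = C1 C2 W, whose solution is

\[
W(t) = W_0\, e^{C_1 C_2 t}, \qquad V(t) = C_1 W_0\, e^{C_1 C_2 t}
\]

so both knowledge and computational power grow exponentially over time.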



Analysis Two
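A sketch, adding the assumption stated in the text that the resources N deployed for computation grow exponentially:

\[
N = C_3\, e^{C_4 t}, \qquad \frac{dW}{dt} = C_2 N V = C_1 C_2 C_3\, e^{C_4 t}\, W
\]

Integrating gives

\[
W(t) = W_0 \exp\!\left[\frac{C_1 C_2 C_3}{C_4}\left(e^{C_4 t} - 1\right)\right]
\]

a double exponential: exponential growth in the rate of exponential growth, matching the shrinking doubling times noted above.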



Analysis Three
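A sketch of the projection arithmetic; the baseline of roughly 10^9 cps per $1,000 for 2002 is an illustrative assumption, not a figure from the text. The gap between such a machine and one human-brain equivalent is

\[
\log_2 \frac{2 \times 10^{16}}{10^{9}} \approx 24 \ \text{doublings}
\]

At a constant one-year doubling time, that puts the crossover in the mid-2020s; with the doubling time itself still shrinking, it arrives somewhat earlier, consistent with the 2023 figure above. The later dates follow the same way: one cent is a factor of 10^5 less money, and the human race is a factor of 10^10 more computation than one brain.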



Analysis Four
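A sketch with Hans's substitution V = exp(W) in place of V = C1 W:

\[
\frac{dW}{dt} = C_2\, e^{W} \;\Longrightarrow\; e^{-W}\, dW = C_2\, dt \;\Longrightarrow\; W(t) = -\ln\!\left(e^{-W_0} - C_2 t\right)
\]

Both W and V therefore diverge at the finite time t* = exp(-W0)/C2: a true mathematical singularity.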



Analysis Five
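A sketch: keep V = C1 W, but let the resources applied to computation also grow linearly with knowledge, N = C3 W. The total computational power is then NV = C1 C3 W^2, so

\[
\frac{dW}{dt} = C_2 N V = C_1 C_2 C_3\, W^{2} \;\Longrightarrow\; W(t) = \frac{W_0}{1 - C_1 C_2 C_3 W_0\, t}
\]

which again diverges at the finite time t* = 1/(C1 C2 C3 W0).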



Mind·X Discussion About This Article:

Singularitarian Dialog
posted on 02/27/2002 12:33 PM by iph1954@msn.com


First, I want to thank both Ray and Max for taking the time to engage in this stimulating consilience and also for your willingness to share it with others.

I would like to add my thoughts concerning the exchange about "humankind's innate inclination for salvation." Ray says it "is not necessarily irrational. Perhaps we have this inclination precisely to anticipate the Singularity. Maybe it is the Singularity that has been hijacked by irrational belief systems, rather than the other way around."

Clearly we are flirting around the edges of the mystical here, but I suspect there is a logical and rational basis for proposing that humans have evolved to suit the objectives of the universe. We don't need to relegate such speculations only to religious traditions. It may indeed be true that our universe exists for the sole purpose of explaining itself.

The propensity of life to seek, to grow, to expand, to push the boundaries outward and upward is well established and understood. Life ceaselessly strives for propagation, for reproduction. In the process, through random mutation and natural selection, life continues to experiment with greater orders of design, complexity, and intelligence. This increase in order, sometimes referred to as Extropy, can be seen to have begun even before the advent of earthly life. From out of the primordial energy of the Big Bang, first came matter, then stars and galaxies, and eventually planets possessing a suitable mix of chemistry and environment that would lead to the formation of life.

On our planet, and perhaps on others, life forms have achieved consciousness and have developed enough intelligence to be able to understand themselves. Per Nietzsche, we are 'a rope'--a bridge, and not a goal. The bridge we are building today will almost certainly lead to entities of greater design, complexity, and intelligence. They, in turn, will build bridges of their own, which many (including myself) expect will lead to a technological singularity and a dramatic transcending of human limitations.

But that will not be the end. The Singularity, in whatever form it occurs, is not a goal, but rather another bridge. The super-intelligent, immortal being(s) that are produced will then give rise to entities of even greater design, complexity, and intelligence, which will do the same, ad infinitum.

In this way, the universe can be envisioned as a giant parallel processing computer, massively complex, self-constituting, self-programming, self-enhancing, and self-developing. Recognize that description? It is one of the popular definitions of Posthuman. Our universe is the original and the prototypical Posthuman.

This fits neatly in with the 'universe as simulation' hypothesis. It also explains 'humankind's innate inclination for salvation' that is now motivating us to bring about the Singularity, and ultimately to achieve the aims of the universe itself.

- Mike Treder, Incipient Posthuman

Re: A Dialog Between Max More and Ray Kurzweil
posted on 02/27/2002 3:22 PM by Gottfried J. Mayer

In the physics literature on phase transitions, the problem of singularities or critical points has been analyzed in considerable detail. In our 1995 paper (1) we discussed the transition to a Global Brain as the equivalent of a non-equilibrium phase transition on the Internet. The essential point missed in the current singularity discussion is that in second-order phase transitions you can get a true singularity (i.e., an infinite value) in the growth rate without any unphysical singularities in the order parameters (here 'velocity/power of computation' or 'World Knowledge') themselves.

The rate at which the growth rate approaches infinity as one gets close to the singularity/critical point is determined by (universal) critical exponents. (They play a role similar to that of the second exponent in Kurzweil's double exponential law.) I plotted an example of such a curve with critical exponent gamma = 0.1; larger values of gamma keep the infinite rate but can soften the singularity (see http://www.comdig.de/ComDig02-09/phasetransition.htm ). Note that fluctuations will also have the effect of smearing out the singularity and thus keeping the growth rates finite.
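
For anyone who wants to see the shape of such a curve, here is a minimal sketch in Python (the critical year, amplitude, and exponent are purely illustrative choices, not fitted values):

T_C, GAMMA, A = 2045.0, 0.1, 1.0   # illustrative critical time, exponent, amplitude

def growth_rate(t):
    # Diverging rate near a second-order critical point:
    # r(t) = A / (T_C - t)**GAMMA, which goes to infinity as t -> T_C.
    return A / (T_C - t) ** GAMMA

def order_parameter(t, t0=2000.0):
    # Time-integral of the rate from t0 to t. For GAMMA < 1 this converges
    # even at t = T_C: the quantity stays finite while its rate diverges.
    return A / (1 - GAMMA) * ((T_C - t0) ** (1 - GAMMA) - (T_C - t) ** (1 - GAMMA))

for t in (2040.0, 2044.0, 2044.999):
    print(t, growth_rate(t), order_parameter(t))

The point of the sketch is only that an infinite growth rate is perfectly compatible with finite order parameters, which is what distinguishes this picture from a literal 'wall'.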

I think such a second-order phase transition is a better mathematical description of More's intuitive notion of a 'surge'. I am not going to attempt to write down detailed differential equations involving 'velocity/power of computation' or 'World Knowledge', but rather point out that these phase transitions/bifurcations are universal properties of complex adaptive systems, independent of the microscopic details or context of the model.
An illustrative example may be population growth in New York City: it was once predicted to level off at about 7 million, because by then the whole city would be covered with manure from the horses that made up the urban transportation system (see http://156.145.78.54/htm/framesets/living_city/fs_dev.htm ).

That brings me to my last point: I didn't see any estimation of the power consumption and heat dissipation requirements of these new 'post-human' computers. If you talk about hyper-exponential growth in performance, I assume you would soon need one of Seth Lloyd's antimatter-burning, black-hole-size computers that run at supernova temperatures (see http://www.comdig.org/ComDig00/ComDig00-34/#3 ).

Gottfried J. Mayer, Editor, Complexity Digest, www.comdig.org

(1) G. Mayer-Kress, C. Barczys, "The Global Brain as an Emergent Structure from the Worldwide Computing Network, and its Implications for Modeling," Tech. Rep. CCSR-94-22; The Information Society, Vol. 11, No. 1, pp. 1-28 (1995).


Re: A Dialog Between Max More and Ray Kurzweil
posted on 02/28/2002 1:38 PM by tomaz@techemail.com

Exercising those smooth (exponential) functions has only moderate meaning.

I agree that we are heading toward a phase transition with accelerating speed.

Yet to disregard the possibility of a 'sudden up-wave' that would bring the Singularity is a crucial mistake.

Those '2030+' guys are wrong precisely because of this (unjustified) 'smoothness assumption'.

A 'quantum leap' might come much earlier. Around 2020, I would predict.

p.s.

Talking about actual infinity is nonsense.

- Thomas Kristan

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/01/2002 1:01 PM by mcomess@ucla.edu

I think that this upsurge will happen much sooner than most people realize. The fact that nanocomputers will be realizable by the end of the decade (or sooner), as shown by the work that HP and others are doing, seems to be lost on a lot of people. Once we achieve the capacity to build nanocomputers, we're not going to continue to follow Moore's Law; it's not even a law, for crying out loud. These nanocircuits as currently described have a density 10^5 times higher than today's tech. Don't forget we also have the ability to stack layers of circuitry vertically. It seems to me that when these become available we are looking at an increase in computing power of 10^6 or more over today's levels, which puts us comfortably within the range of computing power that is estimated as being essential for true AI.

However, as we will see over the next few years, the development of limited AI systems such as Cyc and others will also speed the development of technology by greatly enhancing human productivity. Some of these systems are even in use today. I would argue that these systems will produce a "ramping up" or what might appear to be a "soft take-off" scenario, but at some point these systems will produce effective >H capability, and a hard take-off will soon follow, even if the individual AIs or systems are nowhere near the complexity of a human brain; I don't think they have to be. Humans need this huge neural net because we really have no good way of storing information digitally in our minds. Even limited AIs are already beginning to show promise because, unlike us, they have access to huge amounts of data in digital form and ways of quickly processing it, somewhat the way an autistic savant I know can do many kinds of computations in seconds, such as taking the square root of any number up to about 10^10 to as many decimal places as you want.
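
A quick back-of-the-envelope on these numbers, in Python (the density and stacking gains are the claims above; the brain-equivalence range is a commonly cited order-of-magnitude estimate, not an established figure):

# All figures are order-of-magnitude only.
density_gain = 1e5        # claimed nanocircuit density vs. current chips
stacking_gain = 10        # a modest extra factor from stacking layers vertically
total_gain = density_gain * stacking_gain      # ~1e6, as claimed above

pc_ops_today = 1e9        # ~1 GHz desktop, operations per second
nano_pc_ops = pc_ops_today * total_gain        # ~1e15 ops/sec

brain_low, brain_high = 1e14, 1e16             # commonly cited brain-equivalent range
print(nano_pc_ops, brain_low <= nano_pc_ops <= brain_high)   # 1e15 True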

A Party on the edge of the singularity!
posted on 02/28/2002 4:16 PM by erik@axiomresources.com

Complex rates are complex! Without going into the formulae much: if people start to comprehend that this singularity means the end of much that we know, they may form technology non-proliferation groups. For ~50 years, most technological progress has been cheerleaded by the USA. Europe until recently has been more worried about technology, since technology allowed the World Wars to be fought on its soil. Imagine a world not cheerleaded by the technology gods of the USA; progress might have been much slower.

As a technophile myself, I must admit to being concerned about this exponential growth to a point, an endpoint I believe, because a googol trillion trillion quantum computers could answer every formula. There would be another big bang and, zoom, we start over, or at least my life is no longer recognizable.

Maybe we could use nanotech, closely controlled, to dance on the edge of the singularity for an eternity. That would really be engineering our fate in a deterministic way, not the fatalistic way of accepting the singularity as an inevitable consequence like death. After all, much of the rationale I hear for tech progress is to fight aging & death. So why accept the singularity?

Let us dance & party on the edge of the singularity without going in the front door.

Re: A Party on the edge of the singularity!
posted on 03/01/2002 2:47 AM by tomaz@techemail.com

I have a different terminology, but the same agenda.

What you are calling 'the edge of the Singularity', I call 'the Singularity'.

What you are calling 'the Singularity', I call 'the Goo'.

- Thomas

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/02/2002 11:32 AM by natasha@natasha.cc

Thank you, Ray and Max, for creating a provoking combined piece of writing. It is refreshing to read such an in-depth (and full-blooded) deliberation about the Singularity.

I appreciate the three examples Max gives of the Singularity for better understanding of possible narrative outlines. I prefer approach #3, although #1 could be possible. #2 (Moravecian) is a bit off-putting, in that the idea of ending up as a bush is not terribly exciting to me -:)

It seems plausible that the Singularity and advanced AI takeoff will go hand in hand and that humans will not be left in the dust, but continue on to post-biological humanity (Kurzweil) or posthuman (More) environments. I'm not sure what my thoughts are on the post-biological vs. posthuman question, because I'm not altogether sure that a posthuman will not be biological, or that a post-biological entity will not contain aspects of DNA, however synthetic. Herein, Max's and Ray's erudite explanations of nonbiological intelligence and biological intelligence merging make it seem totally plausible that one day in the future we will be the AI.

It takes sturdy minds to see the potential of mass technologies and also to have an ear to the breeze of human whim.

Natasha Vita-More

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/04/2002 7:44 PM by earthmoon23@hotmail.com

By way of introduction to his thinking on these matters, Max More writes:

Before my interest in examining accelerating
technological progress and issues of
transhumanism, I first had a realization. I
saw very clearly how limited are human beings
in their wisdom, in their intellectual and
emotional development, and in their sensory
and physical capabilities. I have always felt
dissatisfied with those limitations and
faults.

From this "realization" of and dissatisfaction with human limitations springs More's (and many other people's) desire to transcend these limits... in More's case, via the beliefs, terminology, and technologies of "transhumanism."

All of human technology, all of human knowledge and understanding, all of our thoughts and feelings and ideas and plans and hopes and dreams...all of this exists within us, ourselves, beings of such frustratingly limited "intellectual...emotional...sensory...and physical capabilities."

Is it not possible that one of our "limits" is this very belief in our own imperfection?

In other words...perhaps our limited perceptual and cognitive abilities blind us to the reality that we are already perfect; that the world is already perfect; that the universe is already perfect.

I realize that I'm sounding like a Taoist... what can I say, it's a hobby of mine.

But if we're limited in all our important respects, how can we bootstrap ourselves out of these limits?

How do we know what we don't know?

How can accelerating technological development ever possibly hope to "improve" its maker?

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/07/2002 3:15 AM by tomaz@techemail.com

> perhaps our limited perceptual and cognitive abilities blind us to the reality that we are already perfect; that the world is already perfect; that the universe is already perfect

It's possible. It's possible tuna fish has milk - we just haven't noticed this fact yet. But I wouldn't count much on that. :)

- Thomas

Re: A Dialog Between Max More and Ray Kurzweil
posted on 12/28/2002 11:51 AM by Susan Calvin

"How can accelerating technological development ever possibly hope to "improve" its maker?"

If you're not a creationist, the fact we're here is proof that more complex organization can come from a less complex one.
We can now consciously direct and preserve incremental improvements, which accelerates the evolutionary process.
We will go beyond our current limitations the same way our ancestors did - one small step at a time.
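
A toy illustration of preserving incremental improvements, in Python (the target string and mutation scheme are arbitrary, loosely after Dawkins' weasel program):

import random

def hill_climb(target="TRANSHUMAN", alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", seed=0):
    # Mutate one random letter at a time and revert any change that
    # worsens the match. Preserving increments reaches the target in
    # roughly a thousand steps; blind re-randomization of the whole
    # string would need on the order of 26**10 tries.
    rng = random.Random(seed)

    def score(s):
        return sum(a == b for a, b in zip(s, target))

    current = [rng.choice(alphabet) for _ in target]
    steps = 0
    while score(current) < len(target):
        before = score(current)
        i = rng.randrange(len(target))
        old, current[i] = current[i], rng.choice(alphabet)
        if score(current) < before:
            current[i] = old  # discard the harmful change, keep prior gains
        steps += 1
    return steps

print(hill_climb())  # typically several hundred steps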

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/07/2002 1:25 AM by kcisobderf@yahoo.com

I object to the disparaging of so-called "Passive Singularitarians," because when the machines hit the AI takeoff point, we are *all* passive! The implication of the doubly exponential curve is that much of the work to achieve "transcendence" occurs in the small fraction of time before the knee. Already, no one person has a solid grasp of any scientific discipline. None of us can direct the process; only by keeping our minds open and attending to different facets of the phenomenon can we participate. I work in the computer industry and do my little bit to help things along. A surfer does not make the wave that pushes him/her. He is aware of his surroundings and rides the wave to the shore. Maybe I am in the millennialist crowd, because I'm waiting for the day when I can devote a progressively smaller part of my life to employment. And if I'm lucky, any type of "play" I may engage in will be deemed worthy of remuneration. Or it just won't matter, which is even better!

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/07/2002 12:28 PM by kino@daxtron.com

On the evolution of 'Singularity Awareness':
If you assume that memes follow principles similar to genes, then singularity awareness should have some survival value. It has to survive, persist, and expand its range over time. I would suggest that it would do so because those who have it create or live in environments that support technological advance, which gives them all the advantages such advance would imply. One outcome of having such a meme would be either to build new things (structures/systems/social organizations) or at least to be passive and allow them to be built. Those who lack it would tend to try to maximize immediate gain, and thus break down such structures. Assuming there is a 'great reward' AND that 'you have to be good' by whatever evaluation function to get it should be enough to create conditions for the meme's survival. If you assume that you can do something about disease, and do research to conquer it rather than assume it's just fate (so you might as well get what you can while you can), then it seems obvious to me that the first group is going to outlive and possibly outbreed the second group. At least they will have a higher standard of living and quality of life.

Evolved systems are not necessarily logical, but statistical. It matters little whether it makes sense, but rather whether it works, and that is determined by survival. An example might be a superstitious belief that demons inhabit a cave, when in reality it is the home of a deadly virus. Either way, the outcome is the same: stay away from the cave. People at the intermediate tech level who say 'there is no demon' walk in without viral protection, and die just as dead (and provide evidence to demon believers that there really is a demon!). In a similar way, the religious belief in a transcendence event and the technical description of the Singularity may be siblings in effects, if not in detail. Those who hold either meme should tend to live longer ('live a clean, healthy life'), invest in progress, and have a purpose and motivation for staying alive and improving their environment ('help others/build great works'). So 'singularity awareness' could evolve naturally, be enshrined in religion, and be subject to the same forces as any other meme. Whether or not it adapts to the details of the new conditions in time remains to be seen.
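
A toy model of that survival argument, in Python (the 2% fitness edge and 10% starting share are arbitrary illustrations; real meme dynamics are far messier):

def meme_share(generations=200, x0=0.10, edge=0.02):
    # Discrete replicator dynamics: meme carriers reproduce at relative
    # fitness (1 + edge) versus non-carriers; returns carrier frequency.
    x = x0
    for _ in range(generations):
        x = x * (1 + edge) / (x * (1 + edge) + (1 - x))
    return x

print(round(meme_share(), 3))  # ~0.854: a small, steady edge compounds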

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/11/2002 12:31 PM by reed.riner@nau.edu

Barnett's idea is that an innovation is two things brought together in a new way. Therefore, it follows that the more things are out there, the more innovations there will be. "Singularity" means the point where the exponential curve of innovations goes straight up! Coming soon! This is a rather longish article; you might want to save it for later reading.

Re: A Dialog Between Max More and Ray Kurzweil
posted on 03/13/2002 3:15 PM by rmiryala@mail.utexas.edu

on the issue of 'singularity awareness' (or the inclination for salvation):

Human consciousness is naturally autonomous, contained within the brain, which is part of the body, and so forth. Human beings throughout our ancient history have needed socializing elements to enable us not only to work for ourselves but to work with others. Could it be that the 'idea' of (religious) salvation was just the right glue to make primitive societies work together? (Probably why there weren't too many societies of autonomous thinkers, aside from ancient Greece.) I really don't think that the vast majority of people in the world (who also believe in religion) will be able to fathom the Singularity and all that it imports; I attribute the inclination for salvation to being either part of a demented meme (religion) that has survived on ignorance and strong-willed masses, or simply a desire of consciousness for something greater than itself as a means of evolutionary survival, or perhaps simply a means of assuaging loneliness.

waiting patiently for us to create god,
rhm

Fundamental physical limitations to growth rates
posted on 03/13/2002 12:03 AM by mpf@cise.ufl.edu

I find Ray's work very interesting and provocative and basically agree with most of it, but I would like to suggest an amendment to the doubly-exponential growth discussion above.

Current understanding of the fundamental physical limits to computation suggests that exponential growth of performance cannot continue indefinitely, but will eventually hit various hard limits from thermodynamics, quantum mechanics, and relativity. Unless our present-day understanding of physics turns out to be seriously wrong in a way that happens to be favorable to exponential growth, such growth will hit a wall eventually.

As an example of one of the extreme limiting physical scenarios that is still sub-exponential, consider a sphere of influence of our civilization that is expanding outwards through the universe at near the speed of light (a replicator "shock wave"), gathering up all matter and hurling it inwards to a concentrated, near-black-hole-density central core which harnesses all of the incoming mass-energy for a computation proceeding at the maximum rate set by quantum mechanics (described in papers by Margolus & Levitin and Lloyd). At this limit, the total energy accumulated goes as time cubed (the volume of the sphere of influence), and the quantum limit on computation rate is proportional to total energy. So raw total performance, though very large, is increasing at most in proportion to elapsed time cubed - definitely sub-exponential.
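
To make the scaling concrete, here is a minimal sketch in Python (the mean-density value is a rough round number, and the model idealizes away everything except the two limits just named):

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
RHO = 1e-26              # rough mean matter density of the universe, kg/m^3

def margolus_levitin_ops(energy_joules):
    # Quantum limit: a system with energy E performs at most
    # 2E / (pi * hbar) elementary operations per second.
    return 2 * energy_joules / (math.pi * HBAR)

def swept_energy(t_seconds):
    # Mass-energy inside a sphere expanding at c for time t:
    # E = rho * (4/3) * pi * (c*t)**3 * c**2, so E grows as t**3.
    return RHO * (4.0 / 3.0) * math.pi * (C * t_seconds) ** 3 * C ** 2

for years in (1, 10, 100, 1000):
    t = years * 3.156e7  # seconds in a year, roughly
    print(years, "{:.3e}".format(margolus_levitin_ops(swept_energy(t))))
# Each factor of 10 in elapsed time buys a factor of 10**3 in ops/sec:
# enormous, but polynomial rather than exponential.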

It is even worse than this if the astronomers are right and the expansion of the universe is accelerating, in which case we will only have time to accumulate a constant amount of energy before the rest of the galaxies escape beyond the cosmic redshift horizon in a mere ~150 billion years or so.

And a paper by astrophysicists Krauss and Starkman suggests that, due to various decay processes, the total number of computations that can *ever* be done (in our accessible slice of the cosmos) is then finite; i.e., the future rate of computation will grind to a halt. (Although I am not yet sure they have truly taken *all* the engineering possibilities into account.)

Of course, these limits are so vast and far away that they do not preclude something from happening that appears to be exponential, doubly-exponential, or even faster growth in the near term, and certainly, enormously superhuman intelligences will be easily possible long before the fundamental limits are reached.

But my point is that this trend is probably not permanent; that in a fairly short number of centuries after this singularity or surge (or whatever you want to call it) happens, "we" (or rather, our transhuman progeny) will be over the knee of the exponential trend, and it will really have to level off to at best a polynomial trend, unless we are lucky and future physics discoveries give us ways to get around the limits of the above-mentioned theories. But whether this will happen ultimately depends not on Kurzweil's laws, but on whether the true bottom-most laws of physics of our universe allow it. So I would say his laws are not absolute but are at best contingent on the fact that our universe has not (yet) undergone a phase transition to the point where all of the resources of physics are already being used for computing and no more are available.

Can exponential growth continue forever? We do not know the answer to this yet! I don't think Ray will claim to know the answer either, unless he has found the correct Grand Unified Theory of particle physics, and has been hiding it from everyone. :)

Still, aside from neglecting this issue of post-singularity growth rates, I found the above discussion to be very reasonable & insightful.

See my course webpages http://www.cise.ufl.edu/~mpf/physlim for some related citations & links to relevant readings.

Cheers & best regards,
-Mike

Re: Fundamental physical limitations to growth rates
posted on 03/13/2002 1:51 PM by tomaz@techemail.com

It's a finite world! We will all (most probably) die eventually.

But we shall have a big party first. A googol years or so. Blooming time.

- Thomas

Re: Fundamental physical limitations to growth rates
posted on 03/15/2002 10:41 AM by grantc4@hotmail.com

It already takes all the running I can do just to stay in one place.

Re: A Dialog Between Max More and Ray Kurzweil
posted on 04/28/2002 1:44 PM by darrel@realkauai.com

Thank you Ray, Max, Natasha, and everybody else for one of the most stimulating discussions I have read in years. While much of the world still would have trouble understanding the concept of the Singularity, and would have even greater difficulty believing in its possibility, it is refreshing to see a forum wherein the possibility, even the likelihood, of the Singularity is a premise, and the discussion revolves mainly around when, not if, the Singularity will occur.

Early in the discussion Max expresses dissatisfaction with certain limitations and faults of human beings in regard to their wisdom, their intellectual and emotional development, and their sensory and physical capabilities. I share that dissatisfaction, yet I would note that we human beings and our descendants, whether they be AIs, cyborgs, or pure biologicals, will always have limitations in those areas until we, or they, reach perfection, or, to put it in religious terminology, we will always have those limitations until we become identical with an all-powerful, all-knowing, omnipresent God. Leaving aside any questions as to whether any such God does or can exist, one still may question whether or not we, or any other being, could ever achieve such perfection as to overcome all limits in those areas. Given the arduousness, and perhaps impossibility, of overcoming such limitations, one ought not to be too dissatisfied with them. As Omar Khayyam said, "If you would this spangle of existence spend, about the secret, quick about it, friend; for the Bird of Time hath but a little way to fly, and lo, the Bird is on the Wing." Or as Bobby McFerrin said, "Don't worry, be happy."

Not that Max sounded worried. On the contrary, he seemed quite hopeful. Nevertheless, my intellectual philosophical side says the belief that human limitation is undesirable is merely a perspective that is not shared by all. Just as an adult might envy the innocence of a child, a superintelligent AI might envy the limitations of a mere human. Just the same, I too want to overcome my human limitations, as Max does. Maybe I am not as attached to the concept as some might be. Many people have no particular desire to overcome their human limitations, and I think that is just as well. Otherwise the human race might just die out. Much as I might want to transcend my humanity, it would be a shame if humans as a species ceased to exist. It may be that someday, someday soon, some sort of AI or cyborgic intelligence will rule the human race, but will that be a fundamentally different situation than that in which humanity has found itself for eons? After all, hasn't the vast majority of humanity been ruled by some sort of king, government, religion, God (real or imagined), or other master throughout history? What would be the difference for the majority of humans after the Singularity? I for one see little potential difference after the Singularity, unless whatever beings rule afterward decide to rid the Earth of humanity entirely, for the sake of the other flora and fauna on this planet.

I admire Max for starting in his early teens to take practical steps toward increasing his longevity. I was not so practical, and began in my early teens to search for some sort of mystical magical secret to immortality in the writings of philosophers such as Plato and Aristotle, and in various religious texts. Needless to say, I did not find any mystical backdoor to immortality in the ancient philosophers or in ancient religions, at least not the kind of personal immortality sought by transhumanists.

Max noted that one definition of the Singularity "includes the notion of a 'wall' or 'prediction horizon'--a time horizon beyond which we can no longer say anything useful about the future." In my view the past has a similar type of horizon, one past which we cannot see, or about which we cannot say anything useful. But modern physicists say they can reliably reconstruct the past back to the first few nanoseconds of the "big bang." A simple philosopher such as myself is humbled by the magnificent complexity of the physicists' calculations, and I respectfully stand in awe of their collective achievements. Not being a formally trained scientist, I naively think that I too can think back to the beginning of time, and in my simple ignorant musing I find what appear to me to be contradictions in the physicists' worldview. I know of course that the physicists are right and I am wrong (after all, they are experts and I am not), but nevertheless my vanity leads me to believe my observations may have some relevance to discussions of what might happen after the Singularity.

One of the observations I have made is that in recent times physicists have variously estimated the age of the Universe at 10 billion years, then revised those estimates to 13 billion years or so, then 15 billion years or so, all according to their "calculations." Funny thing, though: telescopes now can see 15 billion years or so into the past, to the edge of the Universe, they say. That is, we can see galaxies that are so far away that the light from those galaxies took 15 billion years to get from where it came from to here, where we are now. And that is true no matter in which direction we look. So 15 billion years ago those galaxies were 15 billion light years from where we are now. And since there are such faraway galaxies in all directions, there are galaxies opposite each other from our perspective that, 15 billion years ago, were 30 billion light years apart.

And so, if there are galaxies 15 billion years distant from us "at the edge of the Universe" in all directions, then according to their beliefs about the age of the Universe our galaxy must be at the center of the Universe, and the Universe was 30 billion light years across 15 billion years ago. But if the Universe began with a "big bang," then those faraway galaxies and our galaxy were once together, and therefore, to get where they were 15 billion years ago, they had to traverse 15 billion light years of space. Assuming that the maximum speed they traveled was the speed of light (and in fact they must have traveled much slower than that for most of the journey), they started out on that journey at least 30 billion years ago. In other words, in order for the Universe to have been 30 billion light years across 15 billion years ago, the "big bang" would have had to have happened at least 15 billion years before that, i.e., 30 billion years ago. This is a lower limit on the age of the Universe. Conservative estimates would put the age of the Universe much higher, since these low estimates assume two very unlikely premises: (1) that the galaxies are, and have been for the last 15 billion years, flying apart at the speed of light, and (2) that we are at the center of the Universe.
More reasonable assumptions, such as that we are halfway from the center to the edge of the Universe, and that the galaxies are flying apart at one tenth the speed of light, would give an age of the Universe of 600 million years. But the calculations of modern physicists are perfect, even if their theories are not, and modern physicists are infallible, so I am told, even if their predecessors were not, and so conventional wisdom says they must be right that the Universe is only 15 billion years old, even if that estimate appears to fly in the face of common sense.

So anyway, the Universe must be at least 15 billion years old, if not more, and our human bodies and the local planets are composed of second-generation star stuff. Our local star is middle aged and of a common type. There are literally, as Carl used to say, billions and billions of similar stars out there. No doubt many of those stars have planets upon which life has evolved. Certainly some of those planets have intelligent life forms. Certainly some of those intelligent life forms must have reached our present level of civilization millions of years ago, if not billions of years ago. Some extra-terrestrial civilizations must have survived their own self-destructive tendencies, and avoided annihilation by planet killer asteroids, and reached and surpassed a Singularity such as we are approaching. What then must have happened to their civilization? Given our present state of technology it is reasonable to assume that some E.T. civilizations have evolved through a stage wherein their entire home planet was covered by an interconnected web of communication networks that coalesced into a unified planetary artificial consciousness into which some of the original biological sentient beings merged.

Such a unified artificial planetary consciousness could be similar in many ways to an individual human consciousness: it could have a sense of self. That is, it could identify itself as a single entity, a single ego that senses things, feels things, and has hopes and aspirations. The capacity for rational thought of such a unified planetary artificial consciousness would far exceed our puny human powers. If it were purely rational it would have no motivation and no desire, since a purely rational being has no motivation, no reason for being, except to serve the non-rational emotions, hopes, dreams, and desires of creatures that have such emotions. So, if it is something more than a mere mindless machine, then it has its own emotions, hopes, dreams, and desires. It may desire self-preservation, to propagate itself, to expand and grow, to discover and explore new worlds, and to have the companionship of equals.

Such a huge conscious being may require tremendous amounts of energy to cogitate upon the vast stores of knowledge and sense data it has acquired over eons and that it is receiving every second, and so it might even harness the energy of its local star to power such massive amounts of information processing. It might spread out and inhabit the other planets of its solar system, or even the local star itself. But if we assume its original planet was approximately the same size as Earth, then, at the speed of light, 186 thousand miles per second, it would take approximately one tenth of a second to send an information signal to the other side of its globe. It would experience a time lag in communicating with the various parts of its planetary body.

We humans experience ourselves as unified individual egos in part because our neurons communicate with one another very rapidly. If it took a minimum of a full tenth of a second for a message to get from one side of our brain to another, we might begin to have a breakdown of our essential unity and ability to function as one individual self. The unity we feel as an individual organism is dependent upon the ability of our neurons to communicate huge amounts of information back and forth very quickly. If we could communicate that much that quickly to other intelligent human beings, we could begin to merge our consciousness with those other human beings. That inability to communicate so much so quickly is a major factor that keeps us separate from other individual human beings. Let us assume that a planetary consciousness could experience itself as a unified individual consciousness even if it had a fraction of a second's time delay in communicating with opposite parts of its globe. What then would happen if such a consciousness tried to expand to other planets, even nearby planets as close as Venus is to us? Two planetary consciousnesses that far apart would have a communication time delay of some four to twenty minutes, depending on the positions of their orbits, assuming speed-of-light communication. Obviously we would not consider such consciousnesses a single individual, and they would experience each other as separate entities, not as a single self, if the fastest they could communicate were the speed of light. Two such planetary consciousnesses would then always be separate in some way due to the time lag in communication.

In a sense this puts a physical limit of sorts on the results of the Singularity. Though we may evolve a planetary consciousness, and meld with that consciousness, and even evolve multiple planetary consciousnesses, or even evolve into entities that inhabit stars, we will still have some degree of separation from others. Of course we may eventually find some way of exploiting some sort of quantum communication that is instantaneous, and in that way end the communication time lag. Such a means of instantaneous communication would allow us to evolve into, or merge with, an almighty God. But such a means of communication is speculative. Communication via electromagnetic radiation at the speed of light is a reality, and the technology to evolve superintelligent planetary consciousnesses is near at hand and virtually inevitable, barring worldwide catastrophe. The time frame is uncertain. Almost certainly within the next one hundred years, perhaps even within our lifetime. This then would be the likely result of the Singularity.
We would evolve a planetary consciousness that would eventually seek to propagate progeny on other planets and other stars. These progeny would seek to explore the vast Universe, and communicate with other planetary and stellar creatures. Several such entities might inhabit a single planetary body. A multitude of different planetary and stellar species may evolve or may have already evolved. They may be communicating with each other via electromagnetic radiation on a time scale and bit rate that are nearly impossible for us to imagine. They may be sculpting whole planets, or star systems, or even groups of stars. But we, little gnats that we are, live our lives so quickly that we do not even begin to sense their activities.
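
The light-lag figures are easy to check in rough Python (distances are approximate; surface routing and switching overhead would add to the raw numbers):

C_KM_PER_S = 299_792.458          # speed of light, km/s

def one_way_lag_s(distance_km):
    # One-way signal delay at light speed.
    return distance_km / C_KM_PER_S

print(one_way_lag_s(12_742))      # Earth diameter: ~0.043 s; call it a tenth
                                  # of a second once routing overhead is added
print(one_way_lag_s(38e6) / 60)   # Venus at closest approach: ~2.1 minutes
print(one_way_lag_s(261e6) / 60)  # Venus at farthest: ~14.5 minutes

Doubling those Venus numbers for a round trip gives roughly four to twenty-nine minutes, in the same ballpark as the four-to-twenty-minute range above.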

I believe we are headed toward building that kind of planetary consciousness. I think it will be fun and exhilarating. That's one of the reasons why I think Max and others are wise to eat right and exercise and do what they can to extend their lives. I do what I can with my limited willpower to avoid too much artery-clogging chocolate ice cream. I think we can merge and become like gods, but I'm not sure I want to if gods can't have a little chocolate ice cream once in a while. It's going to be a fun ride on the road toward Nietzsche's overman. I don't want to miss a minute of it.

On the other hand, I would not feel too bad if I were doomed to be mortal. I have faith that, in some sense, all consciousness is one. Therefore in a certain sense it does not matter if I myself live forever or not, as long as someone, or something, does. If we can become gods now, then it may be we were once gods before. We may have been gods that chose to incarnate in mortal beings to experience life once again with the wonder of a child. I know I still feel that wonder sometimes. When I contemplate our mysterious exciting future I feel that priceless feeling. I do not want to die any more than I wanted to go to sleep as a child. I am not tired yet and I do not want to miss anything. But if I die it is not the end of the world. At worst death is like an eternal dreamless sleep. Sleeping is not so bad; in fact it is very peaceful and melts away a lot of cares and stress. I imagine death relieves a lot of stress.

We can have it both ways. We may be able to live and die a normal human life and death, and yet still merge with an artificial consciousness that lives on eternally. I imagine brain uploading would be a little like that. I imagine my consciousness would experience uploading in two ways. In one way, I would experience being a human one moment; then, after the upload, I would see the world through the eyes of the AI body. But part of me might experience unity with the AI body only during the upload; then, after separation, that version of me would see the world through my biological body. I would of course want the biological me to converse with the uploaded me, so that I could convince my biological self that my consciousness was indeed successfully uploaded. I think then my biological self could rest in the knowledge that I would live on in the artificial body. I think perhaps slowly replacing various body parts, and evolving from biot to cyborg to pure robotic AI, might be a little more satisfactory than uploading would be. Nevertheless, I will take immortality in any form I can get it, even if it is only spiritual immortality.

One last comment on human wisdom. I think that even though our technology has advanced greatly in the past millennium, it may be that our wisdom has not progressed much, if at all. I believe certain ancient mystics/philosophers/thinkers realized long ago, in general form, the path that human evolution would take toward the godlike status of superintelligence. Further, they realized that life forms have gone through such evolutions many times before, and will many times again. A thousand years ago, if someone said humans are made of stardust, they would have been thought of as a mystic or a crackpot. Now we accept the idea that we are made of stardust as a simple scientific fact. Stars live for billions of years, but they die too. Planetary beings are probably similar. Their lifespan is probably measured in the billions of years, plenty of time to have a good conversation with neighboring planets and star systems. But they, like us, must deal with aging and dying and all the stuff that goes on in between. I would like to live a billion years, but a hundred will do in a pinch.

My apologies if I have rambled on too much, but, like I said, this is an interesting discussion and I felt compelled to put in my two cents.

- Darrel Jarmusch

Re: A Dialog Between Max More and Ray Kurzweil
posted on 04/28/2002 2:01 PM by darrel@realkauai.com

Oops! I accidentally wrote in the above post that a realistic estimate of the age of the Universe is 600 million years. I meant to say that a realistic estimate of the age of the Universe is 600 billion years.

Re: A Dialog Between Max More and Ray Kurzweil
posted on 04/28/2002 3:09 PM by tomaz@techemail.com

> I have faith that, in some sense, all consciousness is one

Another one! I am a little less alone now. ;)

But then again ... it's normal for a solipsist like me to be alone ...

Of course, despite the joke, I am serious, as always.

- Thomas

Re: A Dialog Between Max More and Ray Kurzweil
posted on 05/20/2002 7:11 PM by gareth@renowden.co.nz

Fascinating stuff!

OK: two thoughts - the first relates to the "continuity of self" alluded to above. When biological Me (bMe) has been uploaded and becomes digital Me (dMe), there will be a divergence in experience. bMe will continue to be the me that has experienced all of my life (however technologically enhanced), while dMe will be out exploring the digiverse and will become some other self. A close friend, I hope. bMe may get functional immortality as life sciences improve, but the outer reaches of what dMe's up to will be forever out of reach.

This, I believe, provides the compelling reason why any move to uploading and transhuman existence will have to be tied back into the wetware. I want dMe and bMe to be tied together - dMe will be engineered to be nostalgic for its biological roots, for the full body real experience, while bMe will benefit from Ray's synaptic nanobots and their ability to create dMe's experience inside bMe's brain.

True AIs, lacking these roots, may have their own ideas about the relevance of biological life to where they are; some may even actively seek cooperative wetware, or breed their own. Fancy being a dolphin, anyone?

Some interesting comments on the "limits to growth" as well, but by focussing on post-Singularity effects we miss the potential problems leading into this technological phase change. Religious and cultural objections may indeed be stones in a stream, but out here in the real world there are a host of problems that will militate against a smooth transition for all of humanity.

Perhaps the most obvious point is that technological advances do not propagate smoothly across all human societies. It may be that a few wealthy people in Southern California lead the world into the singularity. What then becomes of the rest of the human race? Within the singularity, physical geography may be irrelevant: outside it, boring things like climate change, famine and war will remain reality for most humans.

I can only hope that altruism is a key characteristic of entities in the digiverse. Perhaps engineering in the links back to the wetware will ensure that.

On the other hand, I may just be predicting the depth of horseshit in New York....

Re: A Dialog Between Max More and Ray Kurzweil
posted on 12/27/2002 6:25 PM by Tim Ventura

Interesting comments on the singularity. I realize that Moore's Law has only a topical relationship to the discussion at hand; however, something I have noticed about Moore's Law may also apply to the Singularity.

Every now and then, when processor speeds increase to a certain level, the machines tend to "break out of the box" of current constraints and begin again at a new level, starting once again at the bottom rung of required performance.

For instance, the 486 was an enormously fast processor for its time, and for the applications that it was designed for (i.e., its environment), it was more than adequate. Perhaps in the same way, the pre-singularity human race is "king of the jungle," but ONLY for the needs and requirements of our "pre-singularity world."

The singularity might be seen to increase the expectations and demands on the human mind and body at a ratio similar to the increase in their capacity -- the development of super-intelligent AI devices might serve as an aid to this -- but might also serve to create demands on humans that have not yet been foreseen.

For instance, the amount of work that I perform at my job is commensurate with the amount of brainpower at my disposal -- when I started taking nootropics a few months ago, the amount of work that was then placed upon me was increased to match the higher capacity.

Back to the 486 PC comparison: when the Pentium I came out (90 MHz starting, if I recall correctly), id Software released "Quake," which used everything that the Pentium had and more just to make the game work correctly. Therefore, since load had increased to match potential, the net result was similar overall performance.

In the world of the post-singularity "Earth", or whatever you might call this place in space-time, there is an implicit assumption that not only will the capacity of humans, machines, and our "systems" increase, but so will the demands of those systems.

In the post-singularity world, if you can build a super-intelligent machine to do 98% of the things that a person can do, then the future of human labor will naturally be involved with doing the 2% of remaining tasks.

If you have a superintelligent machine #1 that can repair other superintelligent machines #2+, then our super-human descendants will probably spend the majority of their time repairing that first machine, in the same way that we currently build the robots by hand that create cars in factories run by automated assembly.

In other words, computers aren't much faster overall nowadays than they were 10 years ago, because even though they can do more, the environment demands exponentially more of them.

By comparison, if humanity is "king of the jungle" in our current setting, then having us expand our horizons even further will put us BACK ON THE BOTTOM of the barrel as we begin to explore an exponentially larger environment -- outer space.

As always, the REAL payoff is a comparative one: put somebody in a brand-new Land Rover in a jungle tribe, and the tribespeople will be amazed at the imperviousness of the vehicle. In the same way, having somebody from the post-singularity future compete with non-augmented people in physical or mental competition is probably the ONLY arena where they will really shine.

Humans' greatest gift and limitation is their ability to challenge themselves, which is how we grow and evolve. The practical outcome of this might just be that instead of using our million-digit post-singularity computing power to easily master current real-world tasks, we will instead be using it to struggle through plotting course corrections as we mine the asteroid belt.

Space isn't the only place where the post-singularity human will probably be challenged; as usual, the majority of the challenges that these people will face will come from other humans. A post-singularity United States might not ever have problems with terrorism, due to our awesome ability to overcome any obstacle, but at the same time we will face the same challenge as ever from other post-singularity cultures, most likely very intensely in the economic arena.

Thanks;

Tim Ventura
http://www.americanantigravity.com

Re: A Dialog Between Max More and Ray Kurzweil
posted on 04/02/2003 5:10 PM by prothe113

I believe that predictions about when we will achieve human, or super-human, machine intelligence tend to leave out one fundamental fact.

Information content in the brain is not expressed simply in the number of connections and the rate of electrical signal transfer; it is also dependent upon positional and systemic factors. For instance, neurons in the anterior cingulate serve a completely different function than neurons in the extrapyramidal tracts -- you cannot simply compare the number of neurons in these two regions and thus compare their relative "intelligence." They serve qualitatively different purposes, and any method of estimating the "computational power" of the brain that doesn't take positional information into account probably vastly underestimates the brain's power.

Additionally, the brain reacts locally to systemic events. An external event -- a wild predator attacking -- may trigger a massive outrush of adrenaline, which causes various organs to intensify their efforts, and causes varying levels of various neurotransmitters to be dumped (or retained) at different locations in the brain. This is more of a variation on the position-dependent theme, but the point is that there are additional dimensions of information that must be considered when trying to calculate the processing power of the human brain.
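
For contrast, the standard back-of-the-envelope being pushed back on here looks like this in Python (all three inputs are order-of-magnitude textbook figures, nothing more):

neurons = 1e11               # ~100 billion neurons
synapses_per_neuron = 1e3    # order-of-magnitude connectivity
signals_per_second = 1e2     # typical firing-rate scale, Hz
naive_ops = neurons * synapses_per_neuron * signals_per_second
print(naive_ops)             # 1e16 "operations" per second

# The objection above: this multiplies connections by rate and nothing
# else. Region-specific function and systemic neuromodulator state are
# extra dimensions the product ignores, so 1e16 is better read as a floor.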

Don't get me wrong -- the processing power of the brain _IS_ limited, and at some point in time our electronic creations _will_ supersede our own capabilities, but I think that an estimate of 20-30 years may be a trifle optimistic.

Thanks for giving me a soapbox.

The Mystical Narratives of Kurzweil
posted on 11/14/2004 5:05 PM by jpmaus

All this 'scientific' discourse is really amusing on a number of levels: firstly, because it basically ignores all the great thinkers of consciousness from Heidegger to Merleau-Ponty, and secondly, because it fails to address any of the critiques Lyotard, Foucault, and others have made of its supposed legitimacy.

It is rather ridiculous that Kurzweil has been accused of having an 'impoverished view of spirituality' when his thinking is so shamelessly theistic and dialectical. The blatant resurrection of the One (one consciousness, one intelligence saturating all space and time, etc.) is so awfully theistic that it has no right to claim itself as anything other than some kind of mystical fiction. Reading Kurzweil and the other thinkers of Strong AI, one is confronted with extremely Hegelian (another thinker not mentioned anywhere) notions of 'rising' towards Absolute Spirit; notions so backwards and silly that it is small wonder the topic hasn't been taken more seriously by the philosophical community at large (Badiou, Zizek, etc.).

In the Kurzweilian narrative, humanity is exponentially progressing towards a 'Singularity' (a point beyond which nothing can be reliably conceived). Isn't this in fact nothing more than a tautology parading as scientific discourse that serves to legitimate such theological ideas as the One? Scientific, mathematical, and/or technological events, like all events, are unpredictable; they don't follow the simple-minded teleology put forth by Kurzweil, so in a sense this 'singularity' could take place tomorrow.

All of these articles and all of this chatter can essentially be boiled down to the claim that humanity is 'progressing' towards One Super Intelligence ... Yet none of it makes even a slight advance towards solving the fundamental problems confronting Strong A.I. (any proof of consciousness whatsoever, a definition of the subject, a software or mathematics that would make self-reflexivity possible, etc.). Furthermore, these backwards theistic dialectical ideas of progress (to move forward or develop to a higher, 'better', more advanced stage), which have been completely destabilized by philosophical postmodernism, are so preposterously assumptive they warrant no well-reasoned criticism. (Better??!! Positing a faster, more complex intelligence, with more information at its disposal, does nothing in the way of addressing the consciousness of consciousness as such, nor does it explain this whole mystical assumption of better-ness.)

I am a believer that the technologies being celebrated here have the potential to answer some of the most profound questions imaginable, and am certainly open to the idea that this is the last generation of humans that will be biologically identical to those from 10,000 years ago, but all this backwards religiosity parading as science does nothing to help this goal.

One MUST contend not only with the political dimension implicit in the development of technology, but with its narrative dimension and what that narrative legitimates and privileges, and for what reason it does this... None of this is addressed here. On top of that, as said earlier, all of the REAL attempts at defining subjectivity philosophically (outside of Hofstadter ... like Deleuze, for example) are completely ignored here.

Re: The Mystical Narratives of Kurzweil
posted on 03/16/2006 12:30 AM by Micah Glasser

Where do I begin....!? You know nothing of philosophy and are but a mere sophist speaking gibberish. Heidegger would vomit if he heard you use his name in association with such nonsense. Go back and read Aristotle for about four more years; then maybe you can graduate on to studying Hegel, Whitehead, and Heidegger (the real Heidegger, not some postmodernist version of him). As Heidegger explains in "The Question Concerning Technology," the essence of technology is nothing technological. The essence of technology is that it is a kind of enframing of things (in a narrative, as you PMs say) which both destines man (in accordance with his nature) and is a revelation to man. As such, technology is a part of the unconcealing of physis. However, the true nature of technology and the true nature of man both remain mysteries, because both remain tied up with the relationship of beings to Being.

What Heidegger attempts in his writing is to clear a way for the possibility of thinking the being of Beings. Questioning concerning the essence of technology is one such path. What Heidegger criticizes is the idea that the essence of technology can be understood through technology. What is called for, Heidegger argues, is a return to a contemplative thinking about the essence of technology in order to understand how man is related to Being and the nature of his technological destiny.

Re: The Mystical Narratives of Kurzweil
posted on 03/16/2006 5:28 AM by jpmaus

Thank you for the VERY uninformative summary of 'The Question Concerning Technology,' Micah. Though you would have done better to mention how the 'Ge-stell' of modern technology is always the gathering of presencing as 'standing reserve,' i.e., the 'thesis' of this essay.

I am not quite sure what any of this has to do with my post from two years ago. I would, of course, grant that it was a bit heavy on the Lyotard-jargon, but you didn't actually refute any of the points it made very clearly, e.g.

1. Kurzweil et al. could benefit from a better understanding of Heidegger in their materialist accounts of consciousness.

2. Kurzweil et al. could benefit from a better understanding of the Frankfurt School's and/or the post-structuralist critiques of science (let alone dialectics).

3. Kurzweil et al. could benefit from a better understanding of what they mean by (vaguely Hegelian) concepts like 'progress,' especially 'progress towards some kind of absolute.'

It seems the gist of my post was that Kurzweil's prose is ultimately some kind of mysticism that tries to legitimate itself with pseudo-scientific discourse; if you take issue with this, I would be happy to hear why.

Also, I am not sure why Heidegger would 'vomit' if he were to hear his name associated with questions like these. I am sure he would very much want that the 'Ereignis' be distinguished from meaningless pop-phrases designed to sell comic books like 'the singularity.'

Re: The Mystical Narratives of Kurzweil
posted on 04/19/2006 11:42 AM by Micah Glasser

After re-reading this exchange it appears that I went off on an irrational tirade. Sorry. I do that sometimes.

Re: The Mystical Narratives of Kurzweil
posted on 11/23/2009 3:23 AM by eldras

I quite enjoyed it.

Nietzsche and Heidegger were defeated on the battlefield under their own rules.
The trouble is a literalist's interpretation of Darwin, which Marx also fell into.

Epigenetics was dismissed as Lamarckian gibberish, and a major science was thrown out.

We are surely going to rise in some amazing ways??

Re: The Mystical Narratives of Kurzweil
posted on 11/24/2009 3:57 PM by Micah Glasser

I am interested in continuing this discussion. Would you care to elaborate further on your thinking?