Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html
The Law of Accelerating Returns
An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century -- it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity -- technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.
Published on KurzweilAI.net March 7, 2001.
You will get $40 trillion just by reading this essay and understanding what it says. For complete details, see below. (It's true that authors will do just about anything to keep your attention, but I'm serious about this statement. Until I return to a further explanation, however, do read the first sentence of this paragraph carefully.)
Now back to the future: it's widely misunderstood. Our forebears expected the future to be pretty much like their present, which had been pretty much like their past. Although exponential trends did exist a thousand years ago, they were at that very early stage where an exponential trend is so flat that it looks like no trend at all. So their lack of expectations was largely fulfilled. Today, in accordance with the common wisdom, everyone expects continuous technological progress and the social repercussions that follow. But the future will be far more surprising than most observers realize: few have truly internalized the implications of the fact that the rate of change itself is accelerating. Most long range forecasts of technical feasibility in future time periods dramatically underestimate the power of future technology because they are based on what I call the "intuitive linear" view of technological progress rather than the "historical exponential view." To express this another way, it is not the case that we will experience a hundred years of progress in the twenty-first century; rather we will witness on the order of twenty thousand years of progress (at today's rate of progress, that is).
This disparity in outlook comes up frequently in a variety of contexts, for example, the discussion of the ethical issues that Bill Joy raised in his controversial WIRED cover story, Why The Future Doesn't Need Us. Bill and I have been frequently paired in a variety of venues as pessimist and optimist respectively. Although I'm expected to criticize Bill's position, and indeed I do take issue with his prescription of relinquishment, I nonetheless usually end up defending Joy on the key issue of feasibility. Recently a Nobel Prize-winning panelist dismissed Bill's concerns, exclaiming that, "we're not going to see self-replicating nanoengineered entities for a hundred years." I pointed out that 100 years was indeed a reasonable estimate of the amount of technical progress required to achieve this particular milestone at today's rate of progress. But because we're doubling the rate of progress every decade, we'll see a century of progress--at today's rate--in only 25 calendar years.
When people think of a future period, they intuitively assume that the current rate of progress will continue for future periods. However, careful consideration of the pace of technology shows that the rate of progress is not constant, but it is human nature to adapt to the changing pace, so the intuitive view is that the pace will continue at the current rate. Even for those of us who have been around long enough to experience how the pace increases over time, our unexamined intuition nonetheless provides the impression that progress changes at the rate that we have experienced recently. From the mathematician's perspective, a primary reason for this is that an exponential curve approximates a straight line when viewed for a brief duration. So even though the rate of progress in the very recent past (e.g., this past year) is far greater than it was ten years ago (let alone a hundred or a thousand years ago), our memories are nonetheless dominated by our very recent experience. It is typical, therefore, that even sophisticated commentators, when considering the future, extrapolate the current pace of change over the next 10 years or 100 years to determine their expectations. This is why I call this way of looking at the future the "intuitive linear" view.
But a serious assessment of the history of technology shows that technological change is exponential. In exponential growth, we find that a key measurement such as computational power is multiplied by a constant factor for each unit of time (e.g., doubling every year) rather than just being added to incrementally. Exponential growth is a feature of any evolutionary process, of which technology is a primary example. One can examine the data in different ways, on different time scales, and for a wide variety of technologies ranging from electronic to biological, and the acceleration of progress and growth applies. Indeed, we find not just simple exponential growth, but "double" exponential growth, meaning that the rate of exponential growth is itself growing exponentially. These observations do not rely merely on an assumption of the continuation of Moore's law (i.e., the exponential shrinking of transistor sizes on an integrated circuit), but are based on a rich model of diverse technological processes. What it clearly shows is that technology, particularly the pace of technological change, advances (at least) exponentially, not linearly, and has been doing so since the advent of technology, indeed since the advent of evolution on Earth.
I emphasize this point because it is the most important failure that would-be prognosticators make in considering future trends. Most technology forecasts ignore altogether this "historical exponential view" of technological progress. That is why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details), but underestimate what can be achieved in the long term (because the exponential growth is ignored). We can organize these observations into what I call the law of accelerating returns as follows:
- Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. As a result, the rate of progress of an evolutionary process increases exponentially over time. Over time, the "order" of the information embedded in the evolutionary process (i.e., the measure of how well the information fits a purpose, which in evolution is survival) increases.
- A correlate of the above observation is that the "returns" of an evolutionary process (e.g., the speed, cost-effectiveness, or overall "power" of a process) increase exponentially over time.
- In another positive feedback loop, as a particular evolutionary process (e.g., computation) becomes more effective (e.g., cost effective), greater resources are deployed toward the further progress of that process. This results in a second level of exponential growth (i.e., the rate of exponential growth itself grows exponentially).
- Biological evolution is one such evolutionary process.
- Technological evolution is another such evolutionary process. Indeed, the emergence of the first technology creating species resulted in the new evolutionary process of technology. Therefore, technological evolution is an outgrowth of--and a continuation of--biological evolution.
- A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential. When this happens, a paradigm shift (i.e., a fundamental change in the approach) occurs, which enables exponential growth to continue.
If we apply these principles at the highest level of evolution on Earth, the first step, the creation of cells, introduced the paradigm of biology. The subsequent emergence of DNA provided a digital method to record the results of evolutionary experiments. Then, the evolution of a species who combined rational thought with an opposable appendage (i.e., the thumb) caused a fundamental paradigm shift from biology to technology. The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking. This hybrid will include "biologically inspired" processes resulting from the reverse engineering of biological brains.
If we examine the timing of these steps, we see that the process has continuously accelerated. The evolution of life forms required billions of years for the first steps (e.g., primitive cells); later on progress accelerated. During the Cambrian explosion, major paradigm shifts took only tens of millions of years. Later on, humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years.
With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis and moved on to human-created technology. Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation. In this way, human technology is distinguished from the tool making of other species. There is a record of each stage of technology, and each new stage of technology builds on the order of the previous stage.
The first technological steps--sharp edges, fire, the wheel--took tens of thousands of years. For people living in this era, there was little noticeable technological change in even a thousand years. By 1000 A.D., progress was much faster and a paradigm shift required only a century or two. In the nineteenth century, we saw more technological change than in the nine centuries preceding it. Then in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century. Now, paradigm shifts occur in only a few years' time. The World Wide Web did not exist in anything like its present form just a few years ago; it didn't exist at all a decade ago.
The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially). So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today's rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.
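The arithmetic behind these figures can be made concrete. Here is a minimal sketch under one simple discretization I'm assuming for illustration: each decade runs at double the previous decade's rate, and a decade's contribution is scored at that doubled rate (relative to today's rate of progress).

```python
def progress_in_today_years(n_decades):
    """Cumulative progress, measured in 'today-years', after n_decades,
    assuming each decade runs at double the previous decade's rate."""
    # Decade d (d = 0, 1, ...) lasts 10 calendar years at 2**(d+1) times
    # today's rate, contributing 10 * 2**(d+1) today-years of progress.
    return sum(10 * 2 ** (d + 1) for d in range(n_decades))

print(progress_in_today_years(10))  # 20460 -- roughly 20,000 years per century

# The "century of progress in 25 calendar years" claim falls out of the
# same model: two decades give 20 + 40 = 60 today-years, and the third
# decade runs at 8x today's rate, so the remaining 40 take 5 more years.
```

Other discretizations (e.g., continuous doubling) give somewhat smaller totals, but the same order of magnitude.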
To appreciate the nature and significance of the coming "singularity," it is important to ponder the nature of exponential growth. Toward this end, I am fond of telling the tale of the inventor of chess and his patron, the emperor of China. In response to the emperor's offer of a reward for his beloved new game, the inventor asked for a single grain of rice on the first square, two on the second square, four on the third, and so on. The emperor quickly granted this seemingly benign and humble request. One version of the story has the emperor going bankrupt as the 63 doublings ultimately totaled 18 million trillion grains of rice. At ten grains of rice per square inch, this requires rice fields covering twice the surface area of the Earth, oceans included. Another version of the story has the inventor losing his head.
It should be pointed out that as the emperor and the inventor went through the first half of the chess board, things were fairly uneventful. The inventor was given spoonfuls of rice, then bowls of rice, then barrels. By the end of the first half of the chess board, the inventor had accumulated one large field's worth (4 billion grains), and the emperor did start to take notice. It was as they progressed through the second half of the chessboard that the situation quickly deteriorated. Incidentally, with regard to the doublings of computation, that's about where we stand now--there have been slightly more than 32 doublings of performance since the first programmable computers were invented during World War II.
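The chessboard arithmetic is easy to verify directly. A quick check of the figures in the story (the Earth-surface comparison uses ~510 million square kilometers for the Earth's surface, a figure I'm supplying):

```python
# The chessboard arithmetic: square n (1-indexed) holds 2**(n-1) grains.
total = sum(2 ** square for square in range(64))       # all 64 squares
first_half = sum(2 ** square for square in range(32))  # squares 1 through 32

print(f"{total:,}")       # 18,446,744,073,709,551,615 -- ~18 million trillion
print(f"{first_half:,}")  # 4,294,967,295 -- ~4 billion grains

# Sanity check on the "twice the surface of the Earth" claim, at ten
# grains of rice per square inch:
earth_sq_inches = 510e6 * (1000 * 39.37) ** 2   # ~510 million km^2, in sq in
print(round(total / 10 / earth_sq_inches, 1))   # ~2.3 Earths
```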
This is the nature of exponential growth. Although technology grows in the exponential domain, we humans live in a linear world. So technological trends are not noticed as small levels of technological power are doubled. Then seemingly out of nowhere, a technology explodes into view. For example, when the Internet went from 20,000 to 80,000 nodes over a two year period during the 1980s, this progress remained hidden from the general public. A decade later, when it went from 20 million to 80 million nodes in the same amount of time, the impact was rather conspicuous.
As exponential growth continues to accelerate into the first half of the twenty-first century, it will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans. The progress will ultimately become so fast that it will rupture our ability to follow it. It will literally get out of our control. The illusion that we have our hand "on the plug," will be dispelled.
Can the pace of technological progress continue to speed up indefinitely? Is there not a point where humans are unable to think fast enough to keep up with it? With regard to unenhanced humans, clearly so. But what would a thousand scientists, each a thousand times more intelligent than human scientists today, and each operating a thousand times faster than contemporary humans (because the information processing in their primarily nonbiological brains is faster) accomplish? One year would be like a millennium. What would they come up with?
Well, for one thing, they would come up with technology to become even more intelligent (because their intelligence is no longer of fixed capacity). They would change their own thought processes to think even faster. When the scientists evolve to be a million times more intelligent and operate a million times faster, then an hour would result in a century of progress (in today's terms).
This, then, is the Singularity. The Singularity is technological change so rapid and so profound that it represents a rupture in the fabric of human history. Some would say that we cannot comprehend the Singularity, at least with our current level of understanding, and that it is impossible, therefore, to look past its "event horizon" and make sense of what lies beyond.
My view is that despite our profound limitations of thought, constrained as we are today to a mere hundred trillion interneuronal connections in our biological brains, we nonetheless have sufficient powers of abstraction to make meaningful statements about the nature of life after the Singularity. Most importantly, it is my view that the intelligence that will emerge will continue to represent the human civilization, which is already a human-machine civilization. This will be the next step in evolution, the next high level paradigm shift.
To put the concept of Singularity into perspective, let's explore the history of the word itself. Singularity is a familiar word meaning a unique event with profound implications. In mathematics, the term implies infinity, the explosion of value that occurs when dividing a constant by a number that gets closer and closer to zero. In physics, similarly, a singularity denotes an event or location of infinite power. At the center of a black hole, matter is so dense that its gravity is infinite. As nearby matter and energy are drawn into the black hole, an event horizon separates the region from the rest of the Universe. It constitutes a rupture in the fabric of space and time. The Universe itself is said to have begun with just such a Singularity.
In the 1950s, John von Neumann was quoted as saying that "the ever accelerating progress of technology...gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." In the 1960s, I. J. Good wrote of an "intelligence explosion," resulting from intelligent machines designing their next generation without human intervention. In 1986, Vernor Vinge, a mathematician and computer scientist at San Diego State University, wrote about a rapidly approaching technological "singularity" in his science fiction novel, Marooned in Realtime. Then in 1993, Vinge presented a paper to a NASA-organized symposium which described the Singularity as an impending event resulting primarily from the advent of "entities with greater than human intelligence," which Vinge saw as the harbinger of a runaway phenomenon.
From my perspective, the Singularity has many faces. It represents the nearly vertical phase of exponential growth where the rate of growth is so extreme that technology appears to be growing at infinite speed. Of course, from a mathematical perspective, there is no discontinuity, no rupture, and the growth rates remain finite, albeit extraordinarily large. But from our currently limited perspective, this imminent event appears to be an acute and abrupt break in the continuity of progress. However, I emphasize the word "currently," because one of the salient implications of the Singularity will be a change in the nature of our ability to understand. In other words, we will become vastly smarter as we merge with our technology.
When I wrote my first book, The Age of Intelligent Machines, in the 1980s, I ended the book with the specter of the emergence of machine intelligence greater than human intelligence, but found it difficult to look beyond this event horizon. Now having thought about its implications for the past 20 years, I feel that we are indeed capable of understanding the many facets of this threshold, one that will transform all spheres of human life.
Consider a few examples of the implications. The bulk of our experiences will shift from real reality to virtual reality. Most of the intelligence of our civilization will ultimately be nonbiological, which by the end of this century will be trillions of trillions of times more powerful than human intelligence. However, to address often expressed concerns, this does not imply the end of biological intelligence, even if thrown from its perch of evolutionary superiority. Moreover, it is important to note that the nonbiological forms will be derivative of biological design. In other words, our civilization will remain human, indeed in many ways more exemplary of what we regard as human than it is today, although our understanding of the term will move beyond its strictly biological origins.
Many observers have nonetheless expressed alarm at the emergence of forms of nonbiological intelligence superior to human intelligence. The potential to augment our own intelligence through intimate connection with other thinking mediums does not necessarily alleviate the concern, as some people have expressed the wish to remain "unenhanced" while at the same time keeping their place at the top of the intellectual food chain. My view is that the likely outcome is that on the one hand, from the perspective of biological humanity, these superhuman intelligences will appear to be their transcendent servants, satisfying their needs and desires. On the other hand, fulfilling the wishes of a revered biological legacy will occupy only a trivial portion of the intellectual power that the Singularity will bring.
Needless to say, the Singularity will transform all aspects of our lives, social, sexual, and economic, which I explore herewith. Before considering further the implications of the Singularity, let's examine the wide range of technologies that are subject to the law of accelerating returns. The exponential trend that has gained the greatest public recognition has become known as "Moore's Law." Gordon Moore, one of the inventors of integrated circuits, and then Chairman of Intel, noted in the mid 1970s that we could squeeze twice as many transistors on an integrated circuit every 24 months. Given that the electrons have less distance to travel, the circuits also run twice as fast, providing an overall quadrupling of computational power.
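The compounding in that last sentence can be made explicit with a toy calculation (my illustration of the stated 24-month doubling, not actual industry data):

```python
def power_after(months, doubling_months=24):
    """Overall computational power relative to the start, per the text:
    transistor count doubles every 24 months, and the shorter electron
    paths double the circuit speed as well."""
    generations = months / doubling_months
    transistors = 2 ** generations
    speed = 2 ** generations
    return transistors * speed

print(power_after(24))   # 4.0 -- one generation: 2x transistors * 2x speed
print(power_after(120))  # 1024.0 -- five generations of quadrupling: 4**5
```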
After sixty years of devoted service, Moore's Law will die a dignified death no later than the year 2019. By that time, transistor features will be just a few atoms in width, and the strategy of ever finer photolithography will have run its course. So, will that be the end of the exponential growth of computing?
Don't bet on it.
If we plot the speed (in instructions per second) per $1000 (in constant dollars) of 49 famous calculators and computers spanning the entire twentieth century, we note some interesting observations. Each time one paradigm runs out of steam, another picks up the pace.
It is important to note that Moore's Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing's relay-based "Robinson" machine that cracked the Nazi enigma code, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer which I used to dictate (and automatically transcribe) this essay.
But I noticed something else surprising. When I plotted the 49 machines on an exponential graph (where a straight line means exponential growth), I didn't get a straight line. What I got was another exponential curve. In other words, there's exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year.
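A shrinking doubling time is exactly what makes the growth "double" exponential. A small sketch using the doubling times quoted above (an illustrative piecewise model I'm assuming here, not the actual 49 data points):

```python
# Piecewise model built from the doubling times quoted in the text.
ERAS = [(1910, 1950, 3.0),   # speed per dollar doubled every 3 years
        (1950, 1966, 2.0),   # ...every 2 years
        (1966, 2000, 1.0)]   # ...every year

def doublings_by(year):
    """Total doublings of speed-per-dollar accumulated since 1910."""
    total = 0.0
    for start, end, doubling_time in ERAS:
        if year <= start:
            break
        total += (min(year, end) - start) / doubling_time
    return total

# On a log plot a constant exponential is a straight line; here the slope
# (doublings per year) increases era by era, so the log plot bends upward:
print(round(doublings_by(1950) - doublings_by(1910), 1))  # 13.3 in 40 years
print(round(doublings_by(1966) - doublings_by(1950), 1))  # 8.0 in 16 years
print(round(doublings_by(1982) - doublings_by(1966), 1))  # 16.0 in 16 years
```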
But where does Moore's Law come from? What is behind this remarkably predictable phenomenon? I have seen relatively little written about the ultimate source of this trend. Is it just "a set of industry expectations and goals," as Randy Isaac, head of basic science at IBM contends? Or is there something more profound going on?
In my view, it is one manifestation (among many) of the exponential growth of the evolutionary process that is technology. The exponential growth of computing is a marvelous quantitative example of the exponentially growing returns from an evolutionary process. We can also express the exponential growth of computing in terms of an accelerating pace: it took ninety years to achieve the first MIPS (million instructions per second) per thousand dollars, now we add one MIPS per thousand dollars every day.
Moore's Law narrowly refers to the number of transistors on an integrated circuit of fixed size, and sometimes has been expressed even more narrowly in terms of transistor feature size. But rather than feature size (which is only one contributing factor), or even number of transistors, I think the most appropriate measure to track is computational speed per unit cost. This takes into account many levels of "cleverness" (i.e., innovation, which is to say, technological evolution). In addition to all of the innovation in integrated circuits, there are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others.
From the above chart, we see that the exponential growth of computing didn't start with integrated circuits (around 1958), or even transistors (around 1947), but goes back to the electromechanical calculators used in the 1890 and 1900 U.S. Census. This chart spans at least five distinct paradigms of computing, of which Moore's Law pertains to only the latest one.
It's obvious what the sixth paradigm will be after Moore's Law runs out of steam during the second decade of this century. Chips today are flat (although it does require up to 20 layers of material to produce one layer of circuitry). Our brain, in contrast, is organized in three dimensions. We live in a three-dimensional world; why not use the third dimension? The human brain actually uses a very inefficient electrochemical digital controlled analog computational process. The bulk of the calculations are done in the interneuronal connections at a speed of only about 200 calculations per second (in each connection), which is about ten million times slower than contemporary electronic circuits. But the brain gains its prodigious powers from its extremely parallel organization in three dimensions. There are many technologies in the wings that build circuitry in three dimensions. Nanotubes, for example, which are already working in laboratories, build circuits from hexagonal arrays of carbon atoms. One cubic inch of nanotube circuitry would be a million times more powerful than the human brain. There are more than enough new computing technologies now being researched, including three-dimensional silicon chips, optical computing, crystalline computing, DNA computing, and quantum computing, to keep the law of accelerating returns as applied to computation going for a long time.
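Those brain figures imply a specific throughput. A back-of-envelope check, combining the ~200 calculations per second per connection quoted here with the hundred trillion interneuronal connections mentioned earlier:

```python
# Back-of-envelope brain capacity implied by the figures in the text.
connections = 100e12       # ~100 trillion interneuronal connections
calcs_per_second = 200     # calculations per second per connection

brain_cps = connections * calcs_per_second
print(f"{brain_cps:.0e}")  # 2e+16 calculations per second
```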
Thus the (double) exponential growth of computing is broader than Moore's Law, which refers to only one of its paradigms. And this accelerating growth of computing is, in turn, part of the yet broader phenomenon of the accelerating pace of any evolutionary process. Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of "resources." The classical example is when a species happens upon a new habitat (e.g., rabbits in Australia), the species' numbers will grow exponentially for a time, but then hit a limit when resources such as food and space run out.
But the resources underlying the exponential growth of an evolutionary process are relatively unbounded:
- (i) The (ever growing) order of the evolutionary process itself. Each stage of evolution provides more powerful tools for the next. In biological evolution, the advent of DNA allowed more powerful and faster evolutionary "experiments." Later, setting the "designs" of animal body plans during the Cambrian explosion allowed rapid evolutionary development of other body organs such as the brain. Or to take a more recent example, the advent of computer assisted design tools allows rapid development of the next generation of computers.
- (ii) The "chaos" of the environment in which the evolutionary process takes place and which provides the options for further diversity. In biological evolution, diversity enters the process in the form of mutations and ever changing environmental conditions. In technological evolution, human ingenuity combined with ever changing market conditions keep the process of innovation going.
The maximum potential of matter and energy to contain intelligent processes is a valid issue. But according to my models, we won't approach those limits during this century (but this will become an issue within a couple of centuries).
We also need to distinguish between the "S" curve (an "S" stretched to the right, comprising very slow, virtually unnoticeable growth--followed by very rapid growth--followed by a flattening out as the process approaches an asymptote) that is characteristic of any specific technological paradigm and the continuing exponential growth that is characteristic of the ongoing evolutionary process of technology. Specific paradigms, such as Moore's Law, do ultimately reach levels at which exponential growth is no longer feasible. Thus Moore's Law is an S curve. But the growth of computation is an ongoing exponential (at least until we "saturate" the Universe with the intelligence of our human-machine civilization, but that will not be a limit in this coming century).
In accordance with the law of accelerating returns, paradigm shift, also called innovation, turns the S curve of any specific paradigm into a continuing exponential. A new paradigm (e.g., three-dimensional circuits) takes over when the old paradigm approaches its natural limit. This has already happened at least four times in the history of computation. This difference also distinguishes the tool making of non-human species, in which the mastery of a tool-making (or using) skill by each animal is characterized by an abruptly ending S shaped learning curve, versus human-created technology, which has followed an exponential pattern of growth and acceleration since its inception.
This "law of accelerating returns" applies to all of technology, indeed to any true evolutionary process, and can be measured with remarkable precision in information based technologies. There are a great many examples of the exponential growth implied by the law of accelerating returns in technologies as varied as DNA sequencing, communication speeds, electronics of all kinds, and even in the rapidly shrinking size of technology.
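The relationship between individual S curves and the overall exponential can be illustrated with a toy construction (mine, not the author's data): stack logistic S curves, each paradigm arriving later and saturating ten times higher than its predecessor, and the envelope of the sum grows roughly exponentially.

```python
import math

def logistic(t, midpoint, ceiling, steepness=1.0):
    """A single paradigm's S curve: slow start, rapid rise, saturation."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

def capability(t, n_paradigms=5, spacing=10.0):
    """Stacked paradigms: each arrives `spacing` time units later and
    saturates 10x higher, so the sum's envelope grows ~exponentially."""
    return sum(logistic(t, midpoint=k * spacing, ceiling=10.0 ** k)
               for k in range(n_paradigms))

# Sampled once per paradigm interval, capability grows roughly 10x each
# time, even though every individual component flattens out:
for t in (0, 10, 20, 30, 40):
    print(t, round(capability(t), 1))
```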
The Singularity results not from the exponential explosion of computation alone, but rather from the interplay and myriad synergies that will result from manifold intertwined technological revolutions. Also, keep in mind that every point on the exponential growth curves underlying this panoply of technologies (see the graphs below) represents an intense human drama of innovation and competition. It is remarkable therefore that these chaotic processes result in such smooth and predictable exponential trends.
For example, when the human genome scan started fourteen years ago, critics pointed out that given the speed with which the genome could then be scanned, it would take thousands of years to finish the project. Yet the fifteen-year project was nonetheless completed slightly ahead of schedule.
Of course, we expect to see exponential growth in electronic memories such as RAM.
However, growth in magnetic memory is not primarily a matter of Moore's law, but includes advances in mechanical and electromagnetic systems.
Exponential growth in communications technology has been even more explosive than in computation and is no less significant in its implications. Again, this progression involves far more than just shrinking transistors on an integrated circuit, but includes accelerating advances in fiber optics, optical switching, electromagnetic technologies, and others.
Notice the cascade of smaller "S" curves
Note that in the above two charts we can actually see the progression of "S" curves: the acceleration fostered by a new paradigm, followed by a leveling off as the paradigm runs out of steam, followed by renewed acceleration through paradigm shift.
The following two charts show the overall growth of the Internet based on the number of hosts. These two charts plot the same data, but one is on an exponential axis and the other is linear. As I pointed out earlier, whereas technology progresses in the exponential domain, we experience it in the linear domain. So from the perspective of most observers, nothing was happening until the mid 1990s when seemingly out of nowhere, the world wide web and email exploded into view. But the emergence of the Internet into a worldwide phenomenon was readily predictable much earlier by examining the exponential trend data.
Notice how the explosion of the Internet appears to be a surprise from the Linear Chart, but was perfectly predictable from the Exponential Chart
Ultimately we will get away from the tangle of wires in our cities and in our lives through wireless communication, the power of which is doubling every 10 to 11 months.
Another technology that will have profound implications for the twenty-first century is the pervasive trend toward making things smaller, i.e., miniaturization. The salient implementation sizes of a broad range of technologies, both electronic and mechanical, are shrinking, also at a double exponential rate. At present, we are shrinking technology by a factor of approximately 5.6 per linear dimension per decade.
If we view the exponential growth of computation in its proper perspective as one example of the pervasiveness of the exponential growth of information based technology, that is, as one example of many of the law of accelerating returns, then we can confidently predict its continuation.
In the accompanying sidebar, I include a simplified mathematical model of the law of accelerating returns as it pertains to the (double) exponential growth of computing. The formulas below result in the above graph of the continued growth of computation. This graph matches the available data for the twentieth century through all five paradigms and provides projections for the twenty-first century. Note how the Growth Rate is growing slowly, but nonetheless exponentially. The following provides a brief overview of the law of accelerating returns as it applies to the double exponential growth of computation. This model considers the impact of the growing power of the technology to foster its own next generation. For example, with more powerful computers and related technology, we have the tools and the knowledge to design yet more powerful computers, and to do so more quickly.
Note that the data for the year 2000 and beyond assume neural net connection calculations as it is expected that this type of calculation will ultimately dominate, particularly in emulating human brain functions. This type of calculation is less expensive than conventional (e.g., Pentium III / IV) calculations by a factor of at least 100 (particularly if implemented using digital controlled analog electronics, which would correspond well to the brain's digital controlled analog electrochemical processes). A factor of 100 translates into approximately 6 years (today) and less than 6 years later in the twenty-first century.
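The conversion from a cost factor to a number of years follows directly from the doubling time of price-performance. A minimal sketch of this arithmetic (the ~0.9-year doubling time is an illustrative value drawn from the essay's observation that computer power doubled roughly annually during the 1990s):

```python
import math

def years_for_factor(factor, doubling_time_years):
    """Years needed for price-performance to improve by `factor`,
    given a fixed doubling time."""
    return math.log2(factor) * doubling_time_years

# A 100x cost advantage corresponds to log2(100) ~ 6.64 doublings.
# At a ~0.9-year doubling time, that is roughly 6 years.
print(round(years_for_factor(100, 0.9), 1))  # ~6.0
```

As the doubling time itself shrinks later in the twenty-first century, the same factor of 100 translates into correspondingly fewer years, which is the point made above.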
My estimate of brain capacity is 100 billion neurons times an average 1,000 connections per neuron (with the calculations taking place primarily in the connections) times 200 calculations per second. Although these estimates are conservatively high, one can find higher and lower estimates. However, even much higher (or lower) estimates by orders of magnitude only shift the prediction by a relatively small number of years.
Some prominent dates from this analysis include the following:
- We achieve one Human Brain capability (2 * 10^16 cps) for $1,000 around the year 2023.
- We achieve one Human Brain capability (2 * 10^16 cps) for one cent around the year 2037.
- We achieve one Human Race capability (2 * 10^26 cps) for $1,000 around the year 2049.
- We achieve one Human Race capability (2 * 10^26 cps) for one cent around the year 2059.
The Model considers the following variables:
- V: Velocity (i.e., power) of computing (measured in calculations per second per unit cost)
- W: World Knowledge as it pertains to designing and building computational devices
The assumptions of the model are:
- (1) V = C1 * W
In other words, computer power is a linear function of the knowledge of how to build computers. This is actually a conservative assumption. In general, innovations improve V (computer power) by a multiple, not in an additive way. Independent innovations multiply each other's effect. For example, a circuit advance such as CMOS, a more efficient IC wiring methodology, and a processor innovation such as pipelining all increase V by independent multiples.
- (2) W = C2 * Integral (0 to t) V
In other words, W (knowledge) is cumulative, and the instantaneous increment to knowledge is proportional to V.
This gives us:
- W = C1 * C2 * Integral (0 to t) W
- W = C1 * C2 * C3 ^ (C4 * t)
- V = C1 ^ 2 * C2 * C3 ^ (C4 * t)
- (Note on notation: a^b means a raised to the b power.)
Simplifying the constants, we get:
- V = Ca * Cb ^ t
- W = Cc * Cd ^ t
So this is a formula for "accelerating" (i.e., exponentially growing) returns, a "regular Moore's Law."
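The claim that these two assumptions yield exponential growth can be checked numerically. A minimal sketch using forward-Euler integration of the model (the constants C1 and C2 are arbitrary illustrative values, not fitted ones):

```python
C1, C2 = 1.0, 0.05   # illustrative constants
dt = 0.01            # integration step

W = 1.0
history = []
for step in range(300_000):
    V = C1 * W                  # assumption (1): V = C1 * W
    W += C2 * V * dt            # assumption (2): dW/dt = C2 * V
    history.append(W)

# Exponential growth means a constant doubling time: the growth ratio
# of W over any fixed interval is the same early and late in the run.
early = history[10_000] / history[0]
late  = history[290_000] / history[280_000]
print(round(early, 3), round(late, 3))  # the two ratios agree
```

The equal ratios confirm that knowledge feeding back into computing power, by itself, produces a single ("regular Moore's Law") exponential.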
As I mentioned above, the data shows exponential growth in the rate of exponential growth. (We doubled computer power every three years early in the twentieth century, every two years in the middle of the century, and close to every one year during the 1990s.)
Let's factor in another exponential phenomenon, which is the growing resources for computation. Not only is each (constant cost) device getting more powerful as a function of W, but the resources deployed for computation are also growing exponentially.
We now have:
- N: Expenditures for computation
- V = C1 * W (as before)
- N = C4 ^ (C5 * t) (Expenditure for computation is growing at its own exponential rate)
- W = C2 * Integral(0 to t) (N * V)
As before, world knowledge is accumulating, and the instantaneous increment is proportional to the amount of computation, which equals the resources deployed for computation (N) * the power of each (constant cost) device.
This gives us:
- W = C1 * C2 * Integral(0 to t) (C4 ^ (C5 * t) * W)
- W = C1 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)
- V = C1 ^ 2 * C2 * (C3 ^ (C6 * t)) ^ (C7 * t)
Simplifying the constants, we get:
- V = Ca * (Cb ^ (Cc * t)) ^ (Cd * t)
This is a double exponential--an exponential curve in which the rate of exponential growth is growing at a different exponential rate.
Now let's consider real-world data. Considering the data for actual calculating devices and computers during the twentieth century:
- CPS/$1K: Calculations Per Second for $1,000
Twentieth century computing data matches:
- CPS/$1K = 10^(6.00*((20.40/6.00)^((Year-1900)/100))-11.00), where Year is the calendar year
We can determine the growth rate over a period of time:
- Growth Rate = 10^((LOG(CPS/$1K for Current Year) - LOG(CPS/$1K for Previous Year))/(Current Year - Previous Year))
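The fitted twentieth-century curve and the growth-rate definition translate directly into code. A minimal sketch (the constants 6.00, 20.40, and 11.00 are the essay's fitted values; the function names are my own):

```python
import math

def cps_per_1k(year):
    """Calculations per second available for $1,000 in a given year,
    per the fitted double-exponential curve."""
    return 10 ** (6.00 * (20.40 / 6.00) ** ((year - 1900) / 100) - 11.00)

def growth_rate(year, prev_year):
    """Average annual multiplier in CPS/$1K between two years."""
    return 10 ** ((math.log10(cps_per_1k(year)) -
                   math.log10(cps_per_1k(prev_year))) / (year - prev_year))

print(f"{cps_per_1k(1900):.1e}")   # ~1.0e-05
print(f"{cps_per_1k(2000):.1e}")   # ~2.5e+09
print(round(growth_rate(2000, 1999), 2))
```

Note that the annual growth multiplier around 2000 comes out well above the factor of ~1.26 implied by an eighteen-month doubling time, reflecting the acceleration of the rate itself.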
- Human Brain = 100 Billion (10^11) neurons * 1000 (10^3) Connections/Neuron * 200 (2 * 10^2) Calculations Per Second Per Connection = 2 * 10^16 Calculations Per Second
- Human Race = 10 Billion (10^10) Human Brains = 2 * 10^26 Calculations Per Second
These formulas produce the graph above.
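The prominent dates quoted above follow from inverting the fitted CPS/$1K curve. A minimal sketch that solves for the year at which a capacity target is reached (the curve and the brain/race thresholds are the essay's; the bisection search and names are my own scaffolding, and the curve is redefined here so the sketch is self-contained):

```python
HUMAN_BRAIN = 1e11 * 1e3 * 200      # neurons * connections * calcs/sec = 2e16 cps
HUMAN_RACE  = 1e10 * HUMAN_BRAIN    # 10 billion brains = 2e26 cps

def cps_per_1k(year):
    """Calculations per second for $1,000, per the fitted curve."""
    return 10 ** (6.00 * (20.40 / 6.00) ** ((year - 1900) / 100) - 11.00)

def year_reaching(target_cps_per_1k):
    """Bisect for the year when CPS/$1K first reaches the target."""
    lo, hi = 1900.0, 2100.0
    while hi - lo > 0.01:
        mid = (lo + hi) / 2
        if cps_per_1k(mid) < target_cps_per_1k:
            lo = mid
        else:
            hi = mid
    return hi

# One cent is 1/100,000 of $1,000, so the one-cent dates need 1e5x more CPS/$1K.
print(int(year_reaching(HUMAN_BRAIN)))          # brain for $1,000: 2023
print(int(year_reaching(HUMAN_BRAIN * 1e5)))    # brain for one cent: 2037
print(int(year_reaching(HUMAN_RACE)))           # race for $1,000: 2049
print(int(year_reaching(HUMAN_RACE * 1e5)))     # race for one cent: 2059
```

The four recovered years match the prominent dates listed above.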
Already, IBM's "Blue Gene" supercomputer, now being built and scheduled to be completed by 2005, is projected to provide 1 million billion calculations per second (i.e., one billion megaflops). This is already one twentieth of the capacity of the human brain, which I estimate at a conservatively high 20 million billion calculations per second (100 billion neurons times 1,000 connections per neuron times 200 calculations per second per connection). In line with my earlier predictions, supercomputers will achieve one human brain capacity by 2010, and personal computers will do so by around 2020. By 2030, it will take a village of human brains (around a thousand) to match $1,000 of computing. By 2050, $1,000 of computing will equal the processing power of all human brains on Earth.

Of course, this only includes those brains still using carbon-based neurons. While human neurons are wondrous creations in a way, we wouldn't (and don't) design computing circuits the same way. Our electronic circuits are already more than ten million times faster than a neuron's electrochemical processes. Most of the complexity of a human neuron is devoted to maintaining its life-support functions, not its information-processing capabilities. Ultimately, we will need to port our mental processes to a more suitable computational substrate. Then our minds won't have to stay so small, constrained as they are today to a mere hundred trillion neural connections, each operating at a ponderous 200 digitally controlled analog calculations per second.

So far, I've been talking about the hardware of computing. The software is even more salient. One of the principal assumptions underlying the expectation of the Singularity is the ability of nonbiological media to emulate the richness, subtlety, and depth of human thinking. Achieving the computational capacity of the human brain, or even villages and nations of human brains, will not automatically produce human levels of capability.
By human levels I include all the diverse and subtle ways in which humans are intelligent, including musical and artistic aptitude, creativity, physically moving through the world, and understanding and responding appropriately to emotion. The requisite hardware capacity is a necessary but not sufficient condition. The organization and content of these resources--the software of intelligence--is also critical.
Before addressing this issue, it is important to note that once a computer achieves a human level of intelligence, it will necessarily soar past it. A key advantage of nonbiological intelligence is that machines can easily share their knowledge. If I learn French, or read War and Peace, I can't readily download that learning to you. You have to acquire that scholarship the same painstaking way that I did. My knowledge, embedded in a vast pattern of neurotransmitter concentrations and interneuronal connections, cannot be quickly accessed or transmitted. But we won't leave out quick downloading ports in our nonbiological equivalents of human neuron clusters. When one computer learns a skill or gains an insight, it can immediately share that wisdom with billions of other machines.
As a contemporary example, we spent years teaching one research computer how to recognize continuous human speech. We exposed it to thousands of hours of recorded speech, corrected its errors, and patiently improved its performance. Finally, it became quite adept at recognizing speech (I dictated most of my recent book to it). Now if you want your own personal computer to recognize speech, it doesn't have to go through the same process; you can just download the fully trained patterns in seconds. Ultimately, billions of nonbiological entities can be the master of all human and machine acquired knowledge.
In addition, computers are potentially millions of times faster than human neural circuits. A computer can also remember billions or even trillions of facts perfectly, while we are hard pressed to remember a handful of phone numbers. The combination of human level intelligence in a machine with a computer's inherent superiority in the speed, accuracy, and sharing ability of its memory will be formidable.
There are a number of compelling scenarios to achieve higher levels of intelligence in our computers, and ultimately human levels and beyond. We will be able to evolve and train a system combining massively parallel neural nets with other paradigms to understand language and model knowledge, including the ability to read and model the knowledge contained in written documents. Unlike many contemporary "neural net" machines, which use mathematically simplified models of human neurons, some contemporary neural nets are already using highly detailed models of human neurons, including detailed nonlinear analog activation functions and other relevant details. Although the ability of today's computers to extract and learn knowledge from natural language documents is limited, their capabilities in this domain are improving rapidly. Computers will be able to read on their own, understanding and modeling what they have read, by the second decade of the twenty-first century. We can then have our computers read all of the world's literature--books, magazines, scientific journals, and other available material. Ultimately, the machines will gather knowledge on their own by venturing out on the web, or even into the physical world, drawing from the full spectrum of media and information services, and sharing knowledge with each other (which machines can do far more easily than their human creators).

The most compelling scenario for mastering the software of intelligence is to tap into the blueprint of the best example we can get our hands on of an intelligent process. There is no reason why we cannot reverse engineer the human brain, and essentially copy its design. Although it took its original designer several billion years to develop, it's readily available to us, and not (yet) copyrighted. Although there's a skull around the brain, it is not hidden from our view.
The most immediately accessible way to accomplish this is through destructive scanning: we take a frozen brain, preferably one frozen just slightly before rather than slightly after it was going to die anyway, and examine one brain layer--one very thin slice--at a time. We can readily see every neuron and every connection and every neurotransmitter concentration represented in each synapse-thin layer.
Human brain scanning has already started. A condemned killer allowed his brain and body to be scanned, and you can access all 10 billion bytes of him on the Internet at http://www.nlm.nih.gov/research/visible/visible_human.html.
He has a 25 billion byte female companion on the site as well in case he gets lonely. This scan is not high enough in resolution for our purposes, but then, we probably don't want to base our templates of machine intelligence on the brain of a convicted killer, anyway.
Scanning a frozen brain is feasible today, albeit not yet at sufficient speed or bandwidth; but again, the law of accelerating returns will provide the requisite speed of scanning, just as it did for the human genome scan. Carnegie Mellon University's Andreas Nowatzyk plans to scan the nervous system of the brain and body of a mouse with a resolution of less than 200 nanometers, which is getting very close to the resolution needed for reverse engineering.
We also have noninvasive scanning techniques today, including high-resolution magnetic resonance imaging (MRI) scans, optical imaging, near-infrared scanning, and other technologies which are capable in certain instances of resolving individual somas, or neuron cell bodies. Brain scanning technologies are also increasing their resolution with each new generation, just what we would expect from the law of accelerating returns. Future generations will enable us to resolve the connections between neurons and to peer inside the synapses and record the neurotransmitter concentrations.
We can peer inside someone's brain today with noninvasive scanners, which are increasing their resolution with each new generation of this technology. There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth, lack of vibration, and safety. For a variety of reasons it is easier to scan the brain of someone recently deceased than of someone still living. It is easier to get someone deceased to sit still, for one thing. But noninvasively scanning a living brain will ultimately become feasible as MRI, optical, and other scanning technologies continue to improve in resolution and speed.

Scanning from Inside

Although noninvasive means of scanning the brain from outside the skull are rapidly improving, the most practical approach to capturing every salient neural detail will be to scan it from inside. By 2030, "nanobot" (i.e., nano robot) technology will be viable, and brain scanning will be a prominent application. Nanobots are robots that are the size of human blood cells, or even smaller. Billions of them could travel through every brain capillary and scan every relevant feature from up close. Using high-speed wireless communication, the nanobots would communicate with each other, and with other computers that are compiling the brain scan database (in other words, the nanobots will all be on a wireless local area network).
This scenario involves only capabilities that we can touch and feel today. We already have technology capable of producing very high resolution scans, provided that the scanner is physically proximate to the neural features. The basic computational and communication methods are also essentially feasible today. The primary features that are not yet practical are nanobot size and cost. As I discussed above, we can project the exponentially declining cost of computation, and the rapidly declining size of both electronic and mechanical technologies. We can conservatively expect, therefore, the requisite nanobot technology by around 2030. Because of its ability to place each scanner in very close physical proximity to every neural feature, nanobot-based scanning will be more practical than scanning the brain from outside.

How will we apply the thousands of trillions of bytes of information derived from each brain scan? One approach is to use the results to design more intelligent parallel algorithms for our machines, particularly those based on one of the neural net paradigms. With this approach, we don't have to copy every single connection. There is a great deal of repetition and redundancy within any particular brain region. Although the information contained in a human brain would require thousands of trillions of bytes of information (on the order of 100 billion neurons times an average of 1,000 connections per neuron, each with multiple neurotransmitter concentrations and connection data), the design of the brain is characterized by a human genome of only about a billion bytes.
Furthermore, most of the genome is redundant, so the initial design of the brain is characterized by approximately one hundred million bytes, about the size of Microsoft Word. Of course, the complexity of our brains greatly increases as we interact with the world (by a factor of more than ten million). Because of the highly repetitive patterns found in each specific brain region, it is not necessary to capture each detail in order to reverse engineer the significant digital-analog algorithms. With this information, we can design simulated nets that operate similarly. There are already multiple efforts under way to scan the human brain and apply the insights derived to the design of intelligent machines.
The pace of brain reverse engineering is only slightly behind the availability of the brain scanning and neuron structure information. A contemporary example is a comprehensive model of a significant portion of the human auditory processing system that Lloyd Watts (www.lloydwatts.com) has developed from both neurobiology studies of specific neuron types and brain interneuronal connection information. Watts' model comprises five parallel paths and includes the actual intermediate representations of auditory information at each stage of neural processing. Watts has implemented his model as real-time software which can locate and identify sounds with many of the same properties as human hearing. Although a work in progress, the model illustrates the feasibility of converting neurobiological models and brain connection data into working simulations. Also, as Hans Moravec and others have speculated, these efficient simulations require about one thousandth of the computation represented by the theoretical potential of the biological neurons being simulated.
Chart by Lloyd Watts
Cochlea: Sense organ of hearing. Its 30,000 fibers convert the motion of the stapes into a spectro-temporal representation of sound.
MC: Multipolar Cells. Measure spectral energy.
GBC: Globular Bushy Cells. Relay spikes from the auditory nerve to the Lateral Superior Olivary Complex (includes LSO and MSO). Encoding of timing and amplitude of signals for binaural comparison of level.
SBC: Spherical Bushy Cells. Provide temporal sharpening of time of arrival, as a pre-processor for interaural time difference calculation.
OC: Octopus Cells. Detection of transients.
DCN: Dorsal Cochlear Nucleus. Detection of spectral edges and calibrating for noise levels.
VNTB: Ventral Nucleus of the Trapezoid Body. Feedback signals to modulate outer hair cell function in the cochlea.
VNLL, PON: Ventral Nucleus of the Lateral Lemniscus, Peri-Olivary Nuclei. Processing transients from the Octopus Cells.
MSO: Medial Superior Olive. Computing inter-aural time difference (difference in time of arrival between the two ears, used to tell where a sound is coming from).
LSO: Lateral Superior Olive. Also involved in computing inter-aural level difference.
ICC: Central Nucleus of the Inferior Colliculus. The site of major integration of multiple representations of sound.
ICx: Exterior Nucleus of the Inferior Colliculus. Further refinement of sound localization.
SC: Superior Colliculus. Location of auditory/visual merging.
MGB: Medial Geniculate Body. The auditory portion of the thalamus.
LS: Limbic System. Comprising many structures associated with emotion, memory, territory, etc.
AC: Auditory Cortex.
The brain is not one huge "tabula rasa" (i.e., undifferentiated blank slate), but rather an intricate and intertwined collection of hundreds of specialized regions. The process of "peeling the onion" to understand these interleaved regions is well underway. As the requisite neuron models and brain interconnection data becomes available, detailed and implementable models such as the auditory example above will be developed for all brain regions.
After the algorithms of a region are understood, they can be refined and extended before being implemented in synthetic neural equivalents. For one thing, they can be run on a computational substrate that is already more than ten million times faster than neural circuitry. And we can also throw in the methods for building intelligent machines that we already understand.

A more controversial application than this scanning-the-brain-to-understand-it scenario is scanning-the-brain-to-download-it. Here we scan someone's brain to map the locations, interconnections, and contents of all the somas, axons, dendrites, presynaptic vesicles, neurotransmitter concentrations, and other neural components and levels. Its entire organization can then be re-created on a neural computer of sufficient capacity, including the contents of its memory.
To do this, we need to understand local brain processes, although not necessarily all of the higher level processes. Scanning a brain with sufficient detail to download it may sound daunting, but so did the human genome scan. All of the basic technologies exist today, just not with the requisite speed, cost, and size, but these are the attributes that are improving at a double exponential pace.
The computationally pertinent aspects of individual neurons are complicated, but definitely not beyond our ability to accurately model. For example, Ted Berger and his colleagues at Hedco Neurosciences have built integrated circuits that precisely match the digital and analog information processing characteristics of neurons, including clusters with hundreds of neurons. Carver Mead and his colleagues at CalTech have built a variety of integrated circuits that emulate the digital-analog characteristics of mammalian neural circuits.
A recent experiment at San Diego's Institute for Nonlinear Science demonstrates the potential for electronic neurons to precisely emulate biological ones. Neurons (biological or otherwise) are a prime example of what is often called "chaotic computing." Each neuron acts in an essentially unpredictable fashion. When an entire network of neurons receives input (from the outside world or from other networks of neurons), the signaling amongst them appears at first to be frenzied and random. Over time, typically a fraction of a second or so, the chaotic interplay of the neurons dies down, and a stable pattern emerges. This pattern represents the "decision" of the neural network. If the neural network is performing a pattern recognition task (which, incidentally, comprises the bulk of the activity in the human brain), then the emergent pattern represents the appropriate recognition.
So the question addressed by the San Diego researchers was whether electronic neurons could engage in this chaotic dance alongside biological ones. They hooked up their artificial neurons with those from spiny lobsters in a single network, and their hybrid biological-nonbiological network performed in the same way (i.e., chaotic interplay followed by a stable emergent pattern) and with the same type of results as an all-biological net of neurons. Essentially, the biological neurons accepted their electronic peers, indicating that the researchers' mathematical model of these neurons was reasonably accurate.
There are many projects around the world which are creating nonbiological devices to recreate in great detail the functionality of human neuron clusters. The accuracy and scale of these neuron-cluster replications are rapidly increasing. We started with functionally equivalent recreations of single neurons, then clusters of tens, then hundreds, and now thousands. Scaling up technical processes at an exponential pace is what technology is good at.
As the computational power to emulate the human brain becomes available--we're not there yet, but we will be there within a couple of decades--projects already under way to scan the human brain will be accelerated, with a view both to understand the human brain in general, as well as providing a detailed description of the contents and design of specific brains. By the third decade of the twenty-first century, we will be in a position to create highly detailed and complete maps of all relevant features of all neurons, neural connections and synapses in the human brain, all of the neural details that play a role in the behavior and functionality of the brain, and to recreate these designs in suitably advanced neural computers.

Is the Human Brain Different from a Computer?
The answer depends on what we mean by the word "computer." Certainly the brain uses very different methods from conventional contemporary computers. Most computers today are all digital and perform one (or perhaps a few) computations at a time at extremely high speed. In contrast, the human brain combines digital and analog methods with most computations performed in the analog domain. The brain is massively parallel, performing on the order of a hundred trillion computations at the same time, but at extremely slow speeds.
With regard to digital versus analog computing, we know that digital computing can be functionally equivalent to analog computing (although the reverse is not true), so we can perform all of the capabilities of a hybrid digital--analog network with an all digital computer. On the other hand, there is an engineering advantage to analog circuits in that analog computing is potentially thousands of times more efficient. An analog computation can be performed by a few transistors, or, in the case of mammalian neurons, specific electrochemical processes. A digital computation, in contrast, requires thousands or tens of thousands of transistors. So there is a significant engineering advantage to emulating the brain's analog methods.
The massive parallelism of the human brain is the key to its pattern recognition abilities, which reflects the strength of human thinking. As I discussed above, mammalian neurons engage in a chaotic dance, and if the neural network has learned its lessons well, then a stable pattern will emerge reflecting the network's decision. There is no reason why our nonbiological functionally equivalent recreations of biological neural networks cannot be built using these same principles, and indeed there are dozens of projects around the world that have succeeded in doing this. My own technical field is pattern recognition, and the projects that I have been involved in for over thirty years use this form of chaotic computing. Particularly successful examples are Carver Mead's neural chips, which are highly parallel, use digital controlled analog computing, and are intended as functionally similar recreations of biological networks.

Objective and Subjective

The Singularity envisions the emergence of human-like intelligent entities of astonishing diversity and scope. Although these entities will be capable of passing the "Turing test" (i.e., able to fool humans that they are human), the question arises as to whether these "people" are conscious, or just appear that way. To gain some insight as to why this is an extremely subtle question (albeit an ultimately important one) it is useful to consider some of the paradoxes that emerge from the concept of downloading specific human brains.
Although I anticipate that the most common application of the knowledge gained from reverse engineering the human brain will be creating more intelligent machines that are not necessarily modeled on specific biological human individuals, the scenario of scanning and reinstantiating all of the neural details of a specific person raises the most immediate questions of identity. Let's consider the question of what we will find when we do this.
We have to consider this question on both the objective and subjective levels. "Objective" means everyone except me, so let's start with that. Objectively, when we scan someone's brain and reinstantiate their personal mind file into a suitable computing medium, the newly emergent "person" will appear to other observers to have very much the same personality, history, and memory as the person originally scanned. That is, once the technology has been refined and perfected. Like any new technology, it won't be perfect at first. But ultimately, the scans and recreations will be very accurate and realistic.
Interacting with the newly instantiated person will feel like interacting with the original person. The new person will claim to be that same old person and will have a memory of having been that person. The new person will have all of the patterns of knowledge, skill, and personality of the original. We are already creating functionally equivalent recreations of neurons and neuron clusters with sufficient accuracy that biological neurons accept their nonbiological equivalents and work with them as if they were biological. There are no natural limits that prevent us from doing the same with the hundred billion neuron cluster of clusters we call the human brain.
Subjectively, the issue is more subtle and profound, but first we need to reflect on one additional objective issue: our physical self.

The Importance of Having a Body

Consider how many of our thoughts and thinking are directed toward our body and its survival, security, nutrition, and image, not to mention affection, sexuality, and reproduction. Many, if not most, of the goals we attempt to advance using our brains have to do with our bodies: protecting them, providing them with fuel, making them attractive, making them feel good, providing for their myriad needs and desires. Some philosophers maintain that achieving human level intelligence is impossible without a body. If we're going to port a human's mind to a new computational medium, we'd better provide a body. A disembodied mind will quickly get depressed.
There are a variety of bodies that we will provide for our machines, and that they will provide for themselves: bodies built through nanotechnology (i.e., building highly complex physical systems atom by atom), virtual bodies (that exist only in virtual reality), bodies comprised of swarms of nanobots, and other technologies.
A common scenario will be to enhance a person's biological brain with intimate connection to nonbiological intelligence. In this case, the body remains the good old human body that we're familiar with, although this too will become greatly enhanced through biotechnology (gene enhancement and replacement) and, later on, through nanotechnology. A detailed examination of twenty-first century bodies is beyond the scope of this essay, but recreating and enhancing our bodies will be (and has been) an easier task than recreating our minds.

So Just Who Are These People?

To return to the issue of subjectivity, consider: is the reinstantiated mind the same consciousness as the person we just scanned? Are these "people" conscious at all? Is this a mind or just a brain?
Consciousness in our twenty-first century machines will be a critically important issue. But it is not easily resolved, or even readily understood. People tend to have strong views on the subject, and often just can't understand how anyone else could possibly see the issue from a different perspective. Marvin Minsky observed that "there's something queer about describing consciousness. Whatever people mean to say, they just can't seem to make it clear."
We don't worry, at least not yet, about causing pain and suffering to our computer programs. But at what point do we consider an entity, a process, to be conscious, to feel pain and discomfort, to have its own intentionality, its own free will? How do we determine if an entity is conscious; if it has subjective experience? How do we distinguish a process that is conscious from one that just acts as if it is conscious?
We can't simply ask it. If it says "Hey I'm conscious," does that settle the issue? No, we have computer games today that effectively do that, and they're not terribly convincing.
How about if the entity is very convincing and compelling when it says, "I'm lonely, please keep me company"? Does that settle the issue?
If we look inside its circuits, and see essentially the identical kinds of feedback loops and other mechanisms in its brain that we see in a human brain (albeit implemented using nonbiological equivalents), does that settle the issue?
And just who are these people in the machine, anyway? The answer will depend on who you ask. If you ask the people in the machine, they will strenuously claim to be the original persons. For example, if we scan--let's say myself--and record the exact state, level, and position of every neurotransmitter, synapse, neural connection, and every other relevant detail, and then reinstantiate this massive data base of information (which I estimate at thousands of trillions of bytes) into a neural computer of sufficient capacity, the person who then emerges in the machine will think that "he" is (and had been) me, or at least he will act that way. He will say "I grew up in Queens, New York, went to college at MIT, stayed in the Boston area, started and sold a few artificial intelligence companies, walked into a scanner there, and woke up in the machine here. Hey, this technology really works."
But wait.
Is this really me? For one thing, old biological Ray (that's me) still exists. I'll still be here in my carbon-cell-based brain. Alas, I will have to sit back and watch the new Ray succeed in endeavors that I could only dream of. Let's consider the issue of just who I am, and who the new Ray is a little more carefully. First of all, am I the stuff in my brain and body?
Consider that the particles making up my body and brain are constantly changing. We are not at all permanent collections of particles. The cells in our bodies turn over at different rates, but the particles (e.g., atoms and molecules) that comprise our cells are exchanged at a very rapid rate. I am just not the same collection of particles that I was even a month ago. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), but our actual material content is changing constantly, and very quickly. We are rather like the patterns that water makes in a stream. The rushing water around a formation of rocks makes a particular, unique pattern. This pattern may remain relatively unchanged for hours, even years. Of course, the actual material constituting the pattern--the water--is replaced in milliseconds. The same is true for Ray Kurzweil. Like the water in a stream, my particles are constantly changing, but the pattern that people recognize as Ray has a reasonable level of continuity. This argues that we should not associate our fundamental identity with a specific set of particles, but rather the pattern of matter and energy that we represent. Many contemporary philosophers seem partial to this "identity from pattern" argument.
But (again) wait.
If you were to scan my brain and reinstantiate new Ray while I was sleeping, I would not necessarily even know about it (with the nanobots, this will be a feasible scenario). If you then come to me, and say, "good news, Ray, we've successfully reinstantiated your mind file, so we won't be needing your old brain anymore," I may suddenly realize the flaw in this "identity from pattern" argument. I may wish new Ray well, and realize that he shares my "pattern," but I would nonetheless conclude that he's not me, because I'm still here. How could he be me? After all, I would not necessarily know that he even existed.
Let's consider another perplexing scenario. Suppose I replace a small number of biological neurons with functionally equivalent nonbiological ones (they may provide certain benefits such as greater reliability and longevity, but that's not relevant to this thought experiment). After I have this procedure performed, am I still the same person? My friends certainly think so. I still have the same self-deprecating humor, the same silly grin--yes, I'm still the same guy.
It should be clear where I'm going with this. Bit by bit, region by region, I ultimately replace my entire brain with essentially identical (perhaps improved) nonbiological equivalents (preserving all of the neurotransmitter concentrations and other details that represent my learning, skills, and memories). At each point, I feel the procedures were successful. At each point, I feel that I am the same guy. After each procedure, I claim to be the same guy. My friends concur. There is no old Ray and new Ray, just one Ray, one that never appears to fundamentally change.
But consider this. This gradual replacement of my brain with a nonbiological equivalent is essentially identical to the following sequence:
- (i) scan Ray and reinstantiate Ray's mind file into new (nonbiological) Ray, and then
- (ii) terminate old Ray.

But we concluded above that in such a scenario new Ray is not the same as old Ray. And if old Ray is terminated, well then that's the end of Ray. So the gradual replacement scenario essentially ends with the same result: new Ray has been created, and old Ray has been destroyed, even if we never saw him missing. So what appears to be the continuing existence of just one Ray is really the creation of new Ray and the termination of old Ray.
On yet another hand (we're running out of philosophical hands here), the gradual replacement scenario is not altogether different from what happens normally to our biological selves, in that our particles are always rapidly being replaced. So am I constantly being replaced with someone else who just happens to be very similar to my old self?
I am trying to illustrate why consciousness is not an easy issue. If we talk about consciousness as just a certain type of intelligent skill: the ability to reflect on one's own self and situation, for example, then the issue is not difficult at all because any skill or capability or form of intelligence that one cares to define will be replicated in nonbiological entities (i.e., machines) within a few decades. With this type of objective view of consciousness, the conundrums do go away. But a fully objective view does not penetrate to the core of the issue, because the essence of consciousness is subjective experience, not objective correlates of that experience.
Will these future machines be capable of having spiritual experiences?
They certainly will claim to. They will claim to be people, and to have the full range of emotional and spiritual experiences that people claim to have. And these will not be idle claims; they will evidence the sort of rich, complex, and subtle behavior one associates with these feelings. How do the claims and behaviors--compelling as they will be--relate to the subjective experience of these reinstantiated people? We keep coming back to the very real but ultimately unmeasurable issue of consciousness.
People often talk about consciousness as if it were a clear property of an entity that can readily be identified, detected, and gauged. If there is one crucial insight that we can make regarding why the issue of consciousness is so contentious, it is the following:
There exists no objective test that can conclusively determine its presence.
Science is about objective measurement and logical implications therefrom, but the very nature of objectivity is that you cannot measure subjective experience; you can only measure correlates of it, such as behavior (and by behavior, I include the actions of components of an entity, such as neurons). This limitation has to do with the very nature of the concepts "objective" and "subjective." Fundamentally, we cannot penetrate the subjective experience of another entity with direct objective measurement. We can certainly make arguments about it: e.g., "look inside the brain of this nonhuman entity, see how its methods are just like a human brain." Or, "see how its behavior is just like human behavior." But in the end, these remain just arguments. No matter how convincing the behavior of a reinstantiated person, some observers will refuse to accept the consciousness of an entity unless it squirts neurotransmitters, or is based on DNA-guided protein synthesis, or has some other specific biologically human attribute.
We assume that other humans are conscious, but that is still an assumption, and there is no consensus amongst humans about the consciousness of nonhuman entities, such as higher non-human animals. The issue will be even more contentious with regard to future nonbiological entities with human-like behavior and intelligence.
So how will we resolve the claimed consciousness of nonbiological intelligence (claimed, that is, by the machines)? From a practical perspective, we'll accept their claims. Keep in mind that nonbiological entities in the twenty-first century will be extremely intelligent, so they'll be able to convince us that they are conscious. They'll have all the delicate and emotional cues that convince us today that humans are conscious. They will be able to make us laugh and cry. And they'll get mad if we don't accept their claims. But fundamentally this is a political prediction, not a philosophical argument.

Over the past several years, Roger Penrose, a noted physicist and philosopher, has suggested that fine structures in the neurons called tubules perform an exotic form of computation called "quantum computing." Quantum computing is computing using "qubits," which take on all possible combinations of solutions simultaneously. It can be considered an extreme form of parallel processing (because every combination of values of the qubits is tested simultaneously). Penrose suggests that the tubules and their quantum computing capabilities complicate the concept of recreating neurons and reinstantiating mind files.
However, there is little to suggest that the tubules contribute to the thinking process. Even generous models of human knowledge and capability are more than accounted for by current estimates of brain size, based on contemporary models of neuron functioning that do not include tubules. In fact, even with these tubule-less models, it appears that the brain is conservatively designed with many more connections (by several orders of magnitude) than it needs for its capabilities and capacity. Recent experiments (e.g., the San Diego Institute for Nonlinear Science experiments) showing that hybrid biological-nonbiological networks perform similarly to all-biological networks, while not definitive, are strongly suggestive that our tubule-less models of neuron functioning are adequate. Lloyd Watts' software simulation of his intricate model of human auditory processing uses orders of magnitude less computation than the networks of neurons he is simulating, and there is no suggestion that quantum computing is needed.
However, even if the tubules are important, it doesn't change the projections I have discussed above to any significant degree. According to my model of computational growth, if the tubules multiplied neuron complexity by a factor of a thousand (and keep in mind that our current tubule-less neuron models are already complex, including on the order of a thousand connections per neuron, multiple nonlinearities and other details), this would delay our reaching brain capacity by only about 9 years. If we're off by a factor of a million, that's still only a delay of 17 years. A factor of a billion is around 24 years (keep in mind computation is growing by a double exponential).
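The shape of this arithmetic can be sketched with a toy double-exponential model. The starting doubling time and shrink rate below are illustrative assumptions, not the essay's exact model; the point is that when each successive doubling of capacity takes less time than the last, even enormous multipliers on the capacity target add only modest delays:

```python
import math

def years_to_gain(factor, first_doubling=1.0, shrink=0.97):
    """Years for computational capacity to grow by `factor` when each
    successive doubling takes `shrink` times as long as the previous one
    (a simple double-exponential growth model; parameters illustrative)."""
    doublings = math.ceil(math.log2(factor))
    # Sum of a geometric series of ever-shorter doubling times.
    return first_doubling * (1 - shrink ** doublings) / (1 - shrink)

delay_1k = years_to_gain(1e3)  # just under 9 years with these parameters
delay_1m = years_to_gain(1e6)
delay_1b = years_to_gain(1e9)
```

With these assumed parameters, a thousandfold underestimate of neuron complexity costs close to the essay's 9-year figure, and the qualitative conclusion holds for any reasonable parameter choice: a billionfold error costs far less than three times the delay of a thousandfold error, because the doubling time keeps shrinking.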
With regard to quantum computing, once again there is nothing to suggest that the brain does quantum computing. Just because quantum technology may be feasible does not suggest that the brain is capable of it. After all, we don't have lasers or even radios in our brains. Although some scientists have claimed to detect quantum wave collapse in the brain, no one has suggested human capabilities that actually require a capacity for quantum computing.
However, even if the brain does do quantum computing, this does not significantly change the outlook for human-level computing (and beyond) nor does it suggest that brain downloading is infeasible. First of all, if the brain does do quantum computing this would only verify that quantum computing is feasible. There would be nothing in such a finding to suggest that quantum computing is restricted to biological mechanisms. Biological quantum computing mechanisms, if they exist, could be replicated. Indeed, recent experiments with small scale quantum computers appear to be successful. Even the conventional transistor relies on the quantum effect of electron tunneling.
Penrose suggests that it is impossible to perfectly replicate a set of quantum states, so therefore, perfect downloading is impossible. Well, how perfect does a download have to be? I am at this moment in a very different quantum state (and different in non-quantum ways as well) than I was a minute ago (certainly in a very different state than I was before I wrote this paragraph). If we develop downloading technology to the point where the "copies" are as close to the original as the original person changes anyway in the course of one minute, that would be good enough for any conceivable purpose, yet does not require copying quantum states. As the technology improves, the accuracy of the copy could become as close as the original changes within ever briefer periods of time (e.g., one second, one millisecond, one microsecond).
When it was pointed out to Penrose that neurons (and even neural connections) were too big for quantum computing, he came up with the tubule theory as a possible mechanism for neural quantum computing. So the concerns with quantum computing and tubules were introduced together. If one is searching for barriers to replicating brain function, it is an ingenious theory, but it fails to introduce any genuine barriers. There is no evidence for it, and even if true, it only delays matters by a decade or two. There is no reason to believe that biological mechanisms (including quantum computing) are inherently impossible to replicate using nonbiological materials and mechanisms. Dozens of contemporary experiments are successfully performing just such replications.

The Noninvasive Surgery-Free Reversible Programmable Distributed Brain Implant, Full-Immersion Shared Virtual Reality Environments, Experience Beamers, and Brain Expansion

How will we apply technology that is more intelligent than its creators? One might be tempted to respond "Carefully!" But let's take a look at some examples.
Consider several examples of nanobot technology which, based on miniaturization and cost reduction trends, will be feasible within 30 years. In addition to scanning our brains, the nanobots will also be able to expand our experiences and our capabilities.
Nanobot technology will provide fully immersive, totally convincing virtual reality in the following way. The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin). We already have the technology for electronic devices to communicate with neurons in both directions that requires no direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed "neuron transistors" that can detect the firing of a nearby neuron, or alternatively, can cause a nearby neuron to fire, or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. The Institute scientists demonstrated their invention by controlling the movement of a living leech from their computer. Again, the primary aspect of nanobot-based virtual reality that is not yet feasible is size and cost.
When we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses, and replace them with the signals that would be appropriate for the virtual environment. You (i.e., your brain) could decide to cause your muscles and limbs to move as you normally would, but the nanobots again intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment.
The web will provide a panoply of virtual environments to explore. Some will be recreations of real places, others will be fanciful environments that have no "real" counterpart. Some indeed would be impossible in the physical world (perhaps, because they violate the laws of physics). We will be able to "go" to these virtual environments by ourselves, or we will meet other people there, both real people and simulated people. Of course, ultimately there won't be a clear distinction between the two.
By 2030, going to a web site will mean entering a full immersion virtual reality environment. In addition to encompassing all of the senses, these shared environments can include emotional overlays as the nanobots will be capable of triggering the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions.
In the same way that people today beam their lives from web cams in their bedrooms, "experience beamers" circa 2030 will beam their entire flow of sensory experiences, and if so desired, their emotions and other secondary reactions. We'll be able to plug in (by going to the appropriate web site) and experience other people's lives as in the plot concept of 'Being John Malkovich.' Particularly interesting experiences can be archived and relived at any time.
We won't need to wait until 2030 to experience shared virtual reality environments, at least for the visual and auditory senses. Full immersion visual-auditory environments will be available by the end of this decade with images written directly onto our retinas by our eyeglasses and contact lenses. All of the electronics for the computation, image reconstruction, and very high bandwidth wireless connection to the Internet will be embedded in our glasses and woven into our clothing, so computers as distinct objects will disappear.
In my view, the most significant implication of the Singularity will be the merger of biological and nonbiological intelligence. First, it is important to point out that well before the end of the twenty-first century, thinking on nonbiological substrates will dominate. Biological thinking is stuck at 10^26 calculations per second (for all biological human brains), and that figure will not appreciably change, even with bioengineering changes to our genome. Nonbiological intelligence, on the other hand, is growing at a double exponential rate and will vastly exceed biological intelligence well before the middle of this century. However, in my view, this nonbiological intelligence should still be considered human as it is fully derivative of the human-machine civilization. The merger of these two worlds of intelligence is not merely a merger of biological and nonbiological thinking mediums, but more importantly one of method and organization of thinking.
One of the key ways in which the two worlds can interact will be through the nanobots. Nanobot technology will be able to expand our minds in virtually any imaginable way. Our brains today are relatively fixed in design. Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the current overall capacity of the human brain is highly constrained, restricted to a mere hundred trillion connections. Brain implants based on massively distributed intelligent nanobots will ultimately expand our memories a trillion fold, and otherwise vastly improve all of our sensory, pattern recognition, and cognitive abilities. Since the nanobots are communicating with each other over a wireless local area network, they can create any set of new neural connections, can break existing connections (by suppressing neural firing), can create new hybrid biological-nonbiological networks, as well as add vast new nonbiological networks.
Using nanobots as brain extenders is a significant improvement over surgically installed neural implants, which are beginning to be used today (e.g., ventral posterior nucleus, subthalamic nucleus, and ventral lateral thalamus neural implants to counteract Parkinson's Disease and tremors from other neurological disorders, cochlear implants, and others). Nanobots will be introduced without surgery, essentially just by injecting or even swallowing them. They can all be directed to leave, so the process is easily reversible. They are programmable, in that they can provide virtual reality one minute, and a variety of brain extensions the next. They can change their configuration, and clearly can alter their software. Perhaps most importantly, they are massively distributed and therefore can take up billions or trillions of positions throughout the brain, whereas a surgically introduced neural implant can only be placed in one or at most a few locations.

The Double Exponential Growth of the Economy During the 1990s Was Not a Bubble

Yet another manifestation of the law of accelerating returns as it rushes toward the Singularity can be found in the world of economics, a world vital to both the genesis of the law of accelerating returns, and to its implications. It is the economic imperative of a competitive marketplace that is driving technology forward and fueling the law of accelerating returns. In turn, the law of accelerating returns, particularly as it approaches the Singularity, is transforming economic relationships.
Virtually all of the economic models taught in economics classes, used by the Federal Reserve Board to set monetary policy, by Government agencies to set economic policy, and by economic forecasters of all kinds are fundamentally flawed because they are based on the intuitive linear view of history rather than the historically based exponential view. These linear models appear to work for a while for the same reason that most people adopt the intuitive linear view in the first place: exponential trends appear to be linear when viewed (and experienced) for a brief period of time, particularly in the early stages of an exponential trend when not much is happening. But once the "knee of the curve" is achieved and the exponential growth explodes, the linear models break down. The exponential trends underlying productivity growth are just beginning this explosive phase.
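A minimal numerical illustration of why linear models "work for a while" and then break down (the 5% growth rate and time spans here are hypothetical, chosen only to make the divergence visible):

```python
def exponential(t, base=100.0, rate=0.05):
    """A quantity growing 5% per year (an assumed, illustrative trend)."""
    return base * (1 + rate) ** t

# The "intuitive linear" forecaster fits a straight line through the
# first decade of data, where the exponential still looks linear.
slope = (exponential(10) - exponential(0)) / 10

linear_50 = exponential(0) + slope * 50  # linear forecast at year 50
actual_50 = exponential(50)              # the exponential trend at year 50
# The two agree through year 10 by construction, but by year 50 the
# linear model underestimates the actual value by nearly a factor of three.
```

The mismatch only grows with the horizon, which is why the short-term success of linear forecasts says nothing about their long-term validity.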
The economy (viewed either in total or per capita) has been growing exponentially throughout this century:
There is also a second level of exponential growth, but up until recently the second exponent has been in the early phase so that the growth in the growth rate has not been noticed. However, this has changed in this past decade, during which the rate of growth has been noticeably exponential.
Productivity (economic output per worker) has also been growing exponentially. Even these statistics are greatly understated because they do not fully reflect significant improvements in the quality and features of products and services. It is not the case that "a car is a car;" there have been significant improvements in safety, reliability, and features. There are a myriad of such examples. Pharmaceutical drugs are increasingly effective. Groceries ordered in five minutes on the web and delivered to your door are worth more than groceries on a supermarket shelf that you have to fetch yourself. Clothes custom manufactured for your unique body scan are worth more than clothes you happen to find left on a store rack. These sorts of improvements are true for most product categories, and none of them are reflected in the productivity statistics.
The statistical methods underlying the productivity measurements tend to factor out gains by essentially concluding that we still only get one dollar of products and services for a dollar despite the fact that we get much more for a dollar (e.g., compare a $1,000 computer today to one ten years ago). University of Chicago Professor Pete Klenow and University of Rochester Professor Mark Bils estimate that the value of existing goods has been increasing at 1.5% per year for the past 20 years because of qualitative improvements. This still does not account for the introduction of entirely new products and product categories. The Bureau of Labor Statistics, which is responsible for the inflation statistics, uses a model that incorporates an estimate of quality growth at only 0.5% per year, reflecting a systematic underestimate of quality improvement and a resulting overestimate of inflation by at least 1 percent per year.
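The cumulative effect of that measurement gap compounds. A sketch using the figures above (quality growth of roughly 1.5% per year versus the 0.5% the BLS model assumes, i.e., inflation overstated by about 1 percentage point per year):

```python
actual_quality_growth = 0.015   # Klenow/Bils estimate, per year
assumed_quality_growth = 0.005  # BLS model assumption, per year
gap = actual_quality_growth - assumed_quality_growth  # ~1 point per year

years = 20
# Over two decades, a one-point annual overstatement of inflation
# compounds into a roughly 22% understatement of real value delivered.
cumulative_understatement = (1 + gap) ** years - 1
```

This is the sense in which "small" annual measurement errors matter: they compound, just like the trends they fail to capture.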
Despite these weaknesses in the productivity statistical methods, the gains in productivity are now reaching the steep part of the exponential curve. Labor productivity grew at 1.6% per year until 1994, then rose at 2.4% per year, and is now growing even more rapidly. In the quarter ending July 30, 2000, labor productivity grew at 5.3%. Manufacturing productivity grew at 4.4% annually from 1995 to 1999, durables manufacturing at 6.5% per year.
The 1990s have seen the most powerful deflationary forces in history. This is why we are not seeing inflation. Yes, it's true that low unemployment, high asset values, economic growth, and other such factors are inflationary, but these factors are offset by the double exponential trends in the price-performance of all information based technologies: computation, memory, communications, biotechnology, miniaturization, and even the overall rate of technical progress. These technologies deeply affect all industries.
We are also undergoing massive disintermediation in the channels of distribution through the web and other new communication technologies, as well as escalating efficiencies in operations and administration.
All of the technology trend charts in this essay represent massive deflation. There are many examples of the impact of these escalating efficiencies. BP Amoco's cost for finding oil is now less than $1 per barrel, down from nearly $10 in 1991. Processing an internet transaction costs a bank one penny, compared to over $1 using a teller ten years ago. A Roland Berger / Deutsche Bank study estimates a cost savings of $1200 per North American car over the next five years. A more optimistic Morgan Stanley study estimates that Internet-based procurement will save Ford, GM, and DaimlerChrysler about $2700 per vehicle. Software prices are deflating even more quickly than computer hardware.
Example: Automatic Speech Recognition Software

|                                  | 1985   | 1995   | 2000    |
| Price                            | $5,000 | $500   | $50     |
| Vocabulary Size (# words)        | 1,000  | 10,000 | 100,000 |
| Continuous Speech?               | No     | No     | Yes     |
| User Training Required (Minutes) | 180    | 60     | 5       |
| Accuracy                         | Poor   | Fair   | Good    |
Current economic policy is based on outdated models which include energy prices, commodity prices, and capital investment in plant and equipment as key driving factors, but do not adequately model bandwidth, MIPs, megabytes, intellectual property, knowledge, and other increasingly vital (and increasingly increasing) constituents that are driving the economy.
The economy "wants" to grow more than the 3.5% per year, which constitutes the current "speed limit" that the Federal Reserve bank and other policy makers have established as "safe," meaning noninflationary. But in keeping with the law of accelerating returns, the economy is capable of "safely" establishing this level of growth in less than a year, implying a growth rate in an entire year of greater than 3.5%. Recently, the growth rate has exceeded 5%.
None of this means that cycles of recession will disappear immediately. The economy still has some of the underlying dynamics that historically have caused cycles of recession, specifically excessive commitments such as capital intensive projects and the overstocking of inventories. However, the rapid dissemination of information, sophisticated forms of online procurement, and increasingly transparent markets in all industries have diminished the impact of this cycle. So "recessions" are likely to be shallow and short lived. The underlying long-term growth rate will continue at a double exponential rate.
The overall growth of the economy reflects completely new forms and layers of wealth and value that did not previously exist, or at least that did not previously constitute a significant portion of the economy (but do now): intellectual property, communication portals, web sites, bandwidth, software, data bases, and many other new technology based categories.
There is no need for high interest rates to counter an inflation that doesn't exist. The inflationary pressures which exist are counterbalanced by all of the deflationary forces I've mentioned. The current high interest rates fostered by the Federal Reserve Bank are destructive, are causing trillions of dollars of lost wealth, are regressive, hurt business and the middle class, and are completely unnecessary.
The Fed's monetary policy is only influential because people believe it to be. It has little real power. The economy today is largely backed by private capital in the form of a growing variety of equity instruments. The portion of available liquidity in the economy that the Fed actually controls is relatively insignificant. The reserves that banks and financial institutions maintain with the Federal Reserve System are less than $50 billion, which is only 0.6% of the GDP, and 0.25% of the liquidity available in stocks.
Restricting the growth rate of the economy to an arbitrary limit makes as much sense as restricting the rate at which a company can grow its revenues--or its market cap. Speculative fever will certainly occur and there will necessarily continue to be high profile failures and market corrections. However the ability of technology companies to rapidly create new--real--wealth is just one of the factors that will continue to fuel ongoing double exponential growth in the economy. These policies have led to an "Alice in Wonderland" situation in which the market goes up on bad economic news (because it means that more unnecessary punishment will be avoided) and goes down on good economic news.
Speaking of market speculative fever and market corrections, the stock market values for so-called "B to B" (Business to Business) and "B to C" (Business to Consumer) web portals and enabling technologies are likely to come back strongly as it becomes clear that economic transactions are indeed escalating toward e-commerce, and that the (surviving) contenders are capable of demonstrating profitable business models.
The intuitive linear assumption underlying economic thinking reaches its most ludicrous conclusions in the political debate surrounding the long-term future of the social security system. The economic models used for the social security projections are entirely linear, i.e., they reflect fixed economic growth. This might be viewed as conservative planning if we were talking about projections of only a few years, but they become utterly unrealistic for the three to four decades being discussed. These projections actually assume a fixed rate of growth of 3.5% per year for the next fifty years! There are incredibly naïve assumptions that bear on both sides of the argument. On the one hand, there will be radical extensions to human longevity, while on the other hand, we will benefit from far greater economic expansion. These factors do not rule each other out, however, as the positive factors are stronger, and will ultimately dominate. Moreover, we are certain to rethink social security when we have centenarians who look and act like 30 year-olds (but who will think much faster than 30 year-olds circa the year 2000).
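To see how far apart the two assumptions drift over fifty years, compare a fixed 3.5% growth rate with a rate that (hypothetically, per the law of accelerating returns) doubles each decade from the same 3.5% starting point:

```python
# Linear-model assumption: 3.5% per year, fixed for 50 years.
fixed = 1.035 ** 50  # comes out under 6x total growth

# Accelerating-returns assumption: the growth *rate* itself doubles
# every decade, starting from the same 3.5%.
gdp, rate = 1.0, 0.035
for year in range(50):
    gdp *= 1 + rate
    if (year + 1) % 10 == 0:
        rate *= 2  # double the annual growth rate at each decade boundary
# Under this assumption, total growth exceeds 1,000x over the same span.
```

Whatever one thinks of the specific doubling-per-decade assumption, the gap between the two scenarios shows why a fifty-year projection is extraordinarily sensitive to the growth model chosen.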
Another implication of the law of accelerating returns is exponential growth in education and learning. Over the past 120 years, we have increased our investment in K-12 education (per student and in constant dollars) by a factor of ten. There has been a hundredfold increase in the number of college students. Automation started by amplifying the power of our muscles, and in recent times has been amplifying the power of our minds. Thus, for the past two centuries, automation has been eliminating jobs at the bottom of the skill ladder while creating new (and better paying) jobs at the top of the skill ladder. So the ladder has been moving up, and thus we have been exponentially increasing investments in education at all levels.
Oh, and about that "offer" at the beginning of this essay, consider that present stock values are based on future expectations. Given that the (literally) short-sighted linear intuitive view represents the ubiquitous outlook, the common wisdom embedded in economic expectations is dramatically understated. Although stock prices reflect the consensus of a buyer-seller market, they nonetheless reflect the underlying linear assumption regarding future economic growth. But the law of accelerating returns clearly implies that the growth rate will continue to grow exponentially, because the rate of progress will continue to accelerate. Although (weakening) recessionary cycles will continue to cause immediate growth rates to fluctuate, the underlying rate of growth will continue to double approximately every decade.
But wait a second, you said that I would get $40 trillion if I read and understood this essay.
That's right. According to my models, if we replace the linear outlook with the more appropriate exponential outlook, current stock prices should triple. Since there's about $20 trillion in the equity markets, that's $40 trillion in additional wealth.
But you said I would get that money.
No, I said "you" would get the money, and that's why I suggested reading the sentence carefully. The English word "you" can be singular or plural. I meant it in the sense of "all of you."
I see, all of us as in the whole world. But not everyone will read this essay.
Well, but everyone could. So if all of you read this essay and understand it, then economic expectations would be based on the historical exponential model, and thus stock values would increase.
You mean if everyone understands it, and agrees with it?
Okay, I suppose I was assuming that.
Is that what you expect to happen?
Well, actually, no. Putting on my futurist hat again, my prediction is that indeed these views will prevail, but only over time, as more and more evidence of the exponential nature of technology and its impact on the economy becomes apparent. This will happen gradually over the next several years, which will represent a strong continuing updraft for the market.

A Clear and Future Danger

Technology has always been a double-edged sword, bringing us longer and healthier life spans, freedom from physical and mental drudgery, and many new creative possibilities on the one hand, while introducing new and salient dangers on the other. We still live today with sufficient nuclear weapons (not all of which appear to be well accounted for) to end all mammalian life on the planet. Bioengineering is in the early stages of enormous strides in reversing disease and aging processes. However, the means and knowledge will soon exist in a routine college bioengineering lab (and already exist in more sophisticated labs) to create unfriendly pathogens more dangerous than nuclear weapons. As technology accelerates toward the Singularity, we will see the same intertwined potentials: a feast of creativity resulting from human intelligence expanded a trillion-fold, combined with many grave new dangers.
Consider unrestrained nanobot replication. Nanobot technology requires billions or trillions of such intelligent devices to be useful. The most cost effective way to scale up to such levels is through self-replication, essentially the same approach used in the biological world. And in the same way that biological self-replication gone awry (i.e., cancer) results in biological destruction, a defect in the mechanism curtailing nanobot self-replication would endanger all physical entities, biological or otherwise.
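The arithmetic behind "billions or trillions" of devices is worth making explicit: self-replication gets there in a strikingly small number of generations, which is exactly why a defect in the curtailing mechanism is so dangerous. A minimal illustration:

```python
import math

# How many doublings does self-replication need to go from one seed
# device to a trillion? (Illustrative arithmetic only.)
target = 1_000_000_000_000           # 10^12 nanobots
doublings = math.ceil(math.log2(target))
print(doublings)                     # -> 40, since 2^40 > 10^12
```

Forty replication cycles suffice to reach a trillion devices; the same arithmetic is what makes runaway replication (the "gray goo" scenario) a fast process rather than a slow one.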
Other primary concerns include "who is controlling the nanobots?" and "who are the nanobots talking to?" Organizations (e.g., governments, extremist groups) or just a clever individual could put trillions of undetectable nanobots in the water or food supply of an individual or of an entire population. These "spy" nanobots could then monitor, influence, and even control our thoughts and actions. In addition to introducing physical spy nanobots, existing nanobots could be influenced through software viruses and other software "hacking" techniques. When there is software running in our brains, issues of privacy and security will take on a new urgency.
My own expectation is that the creative and constructive applications of this technology will dominate, as I believe they do today. But there will be a valuable (and increasingly vocal) role for a concerned and constructive Luddite movement (i.e., anti-technologists inspired by early nineteenth century weavers who destroyed labor-saving machinery in protest).
If we imagine describing the dangers that exist today to people who lived a couple of hundred years ago, they would think it mad to take such risks. On the other hand, how many people in the year 2000 would really want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through a couple of centuries ago? We may romanticize the past, but up until fairly recently, most of humanity lived extremely fragile lives where one all too common misfortune could spell disaster. Substantial portions of our species still live in this precarious way, which is at least one reason to continue technological progress and the economic enhancement that accompanies it.
People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age old problems, then a sense of dread at a new set of grave dangers that accompany these new technologies, followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril.
In his cover story for WIRED, "Why The Future Doesn't Need Us," Bill Joy eloquently described the plagues of centuries past, and how new self-replicating technologies, such as mutant bioengineered pathogens and "nanobots" run amok, may bring back long-forgotten pestilence. Indeed these are real dangers. It is also the case, which Joy acknowledges, that it has been technological advances, such as antibiotics and improved sanitation, which have freed us from the prevalence of such plagues. Suffering in the world continues and demands our steadfast attention. Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies may someday be used for malevolent purposes? Having asked the rhetorical question, I realize that there is a movement to do exactly that, but I think most people would agree that such broad-based relinquishment is not the answer.
The continued opportunity to alleviate human distress is one important motivation for continuing technological advancement. Also compelling are the already apparent economic gains I discussed above, which will continue to hasten in the decades ahead. The many intertwined accelerating technologies are roads paved with gold (I use the plural here because technology is clearly not a single path). In a competitive environment, it is an economic imperative to go down these roads. Relinquishing technological advancement would be economic suicide for individuals, companies, and nations.
Which brings us to the issue of relinquishment, which is Bill Joy's most controversial recommendation and personal commitment. I do feel that relinquishment at the right level is part of a responsible and constructive response to these genuine perils. The issue, however, is exactly this: at what level are we to relinquish technology?
Ted Kaczynski would have us renounce all of it. This, in my view, is neither desirable nor feasible, and the futility of such a position is only underscored by the senselessness of Kaczynski's deplorable tactics.
Another level would be to forgo certain fields, such as nanotechnology, that might be regarded as too dangerous. But such sweeping strokes of relinquishment are equally untenable. Nanotechnology is simply the inevitable end result of the persistent trend toward miniaturization that pervades all of technology. It is far from a single centralized effort, but is being pursued by a myriad of projects with many diverse goals.
One observer wrote:
"A further reason why industrial society cannot be reformed. . . is that modern technology is a unified system in which all parts are dependent on one another. You can't get rid of the "bad" parts of technology and retain only the "good" parts. Take modern medicine, for example. Progress in medical science depends on progress in chemistry, physics, biology, computer science and other fields. Advanced medical treatments require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society. Clearly you can't have much progress in medicine without the whole technological system and everything that goes with it."
The observer I am quoting is, again, Ted Kaczynski. Although one might properly resist Kaczynski as an authority, I believe he is correct on the deeply entangled nature of the benefits and risks. However, Kaczynski and I clearly part company on our overall assessment on the relative balance between the two. Bill Joy and I have dialogued on this issue both publicly and privately, and we both believe that technology will and should progress, and that we need to be actively concerned with the dark side. If Bill and I disagree, it's on the granularity of relinquishment that is both feasible and desirable.
Abandonment of broad areas of technology will only push them underground where development would continue unimpeded by ethics and regulation. In such a situation, it would be the less stable, less responsible practitioners (e.g., the terrorists) who would have all the expertise.
I do think that relinquishment at the right level needs to be part of our ethical response to the dangers of twenty-first century technologies. One constructive example is the ethical guideline proposed by the Foresight Institute, founded by nanotechnology pioneer Eric Drexler: that nanotechnologists agree to relinquish the development of physical entities that can self-replicate in a natural environment. Another is a ban on self-replicating physical entities that contain their own codes for self-replication. In what nanotechnologist Ralph Merkle calls the "Broadcast Architecture," such entities would have to obtain their replication codes from a centralized secure server, which would guard against undesirable replication. The Broadcast Architecture is impossible in the biological world, which represents at least one way in which nanotechnology can be made safer than biotechnology. In other ways, nanotech is potentially more dangerous because nanobots can be physically stronger and more intelligent than protein-based entities. It will eventually be possible to combine the two by having nanotechnology provide the codes within biological entities (replacing DNA), in which case biological entities could use the much safer Broadcast Architecture.
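The structural idea of the Broadcast Architecture can be captured in a toy model: replicators carry no blueprint of their own, so every copy requires a round trip to a server that can refuse. The class names and the quota mechanism below are illustrative inventions, not part of Merkle's actual proposal:

```python
# Toy model of the "Broadcast Architecture": a replicator holds no
# replication code and must fetch it, per request, from a central
# server that can deny the request. All names here are illustrative.

class CodeServer:
    """Centralized secure server that gates every replication."""
    def __init__(self, quota):
        self.quota = quota            # replications the server will permit

    def request_code(self):
        if self.quota <= 0:
            return None               # replication denied
        self.quota -= 1
        return "REPLICATION-CODE"     # placeholder for the real blueprint

class Nanobot:
    def replicate(self, server):
        code = server.request_code()
        if code is None:
            return None               # without the code, no copy is possible
        return Nanobot()              # a copy built from the broadcast code

server = CodeServer(quota=2)
bot = Nanobot()
copies = [bot.replicate(server) for _ in range(3)]
print([c is not None for c in copies])   # the third request is refused
```

The safety property is that unchecked exponential replication is impossible by construction: every copy is bounded by what the server is willing to broadcast, which is precisely what cannot be arranged for DNA-based biology.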
Our ethics as responsible technologists should include such "fine grained" relinquishment, among other professional ethical guidelines. Other protections will need to include oversight by regulatory bodies, the development of technology-specific "immune" responses, as well as computer assisted surveillance by law enforcement organizations. Many people are not aware that our intelligence agencies already use advanced technologies such as automated word spotting to monitor a substantial flow of telephone conversations. As we go forward, balancing our cherished rights of privacy with our need to be protected from the malicious use of powerful twenty first century technologies will be one of many profound challenges. This is one reason that such issues as an encryption "trap door" (in which law enforcement authorities would have access to otherwise secure information) and the FBI "Carnivore" email-snooping system have been so contentious.
As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge. There exists today a new form of fully nonbiological self replicating entity that didn't exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer network medium they live in. Yet the "immune system" that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a small fraction of the benefit we receive from the computers and communication links that harbor them. No one would suggest we do away with computers, local area networks, and the Internet because of software viruses.
One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive nanotechnology. Although true, this strengthens my observation. The fact that computer viruses are not usually deadly to humans only means that more people are willing to create and release them. It also means that our response to the danger is that much less intense. Conversely, when it comes to self replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more serious.
Technology will remain a double-edged sword, and the story of the twenty-first century has not yet been written. It represents vast power to be used for all humankind's purposes. We have no choice but to work hard to apply these quickening technologies to advance our human values, despite what often appears to be a lack of consensus on what those values should be.

Living Forever

Once brain porting technology has been refined and fully developed, will this enable us to live forever? The answer depends on what we mean by living and dying. Consider what we do today with our personal computer files. When we change from one personal computer to a less obsolete model, we don't throw all our files away; rather, we copy them over to the new hardware. Although our software files do not necessarily continue their existence forever, the longevity of our personal computer software is completely separate and disconnected from the hardware it runs on. When it comes to our personal mind file, however, when our human hardware crashes, the software of our lives dies with it. This will not continue to be the case, though, when we have the means to store and restore the thousands of trillions of bytes of information represented in the pattern that we call our brains.
The longevity of one's mind file will not be dependent, therefore, on the continued viability of any particular hardware medium. Ultimately software-based humans, albeit vastly extended beyond the severe limitations of humans as we know them today, will live out on the web, projecting bodies whenever they need or want them, including virtual bodies in diverse realms of virtual reality, holographically projected bodies, physical bodies comprised of nanobot swarms, and other forms of nanotechnology.
A software-based human will be free, therefore, from the constraints of any particular thinking medium. Today, we are each confined to a mere hundred trillion connections, but humans at the end of the twenty-first century can grow their thinking and thoughts without limit. We may regard this as a form of immortality, although it is worth pointing out that data and information do not necessarily last forever. Although not dependent on the viability of the hardware it runs on, the longevity of information depends on its relevance, utility, and accessibility. If you've ever tried to retrieve information from an obsolete form of data storage in an old obscure format (e.g., a reel of magnetic tape from a 1970 minicomputer), you will understand the challenges in keeping software viable. However, if we are diligent in maintaining our mind file, keeping current backups, and porting to current formats and mediums, then a form of immortality can be attained, at least for software-based humans. Our mind file--our personality, skills, memories--all of that is lost today when our biological hardware crashes. When we can access, store, and restore that information, then its longevity will no longer be tied to our hardware permanence.
Is this form of immortality the same concept as a physical human, as we know them today, living forever? In one sense it is, because as I pointed out earlier, our contemporary selves are not a constant collection of matter either. Only our pattern of matter and energy persists, and even that gradually changes. Similarly, it will be the pattern of a software human that persists and develops and changes gradually.
But is that person based on my mind file, who migrates across many computational substrates, and who outlives any particular thinking medium, really me? We come back to the same questions of consciousness and identity, issues that have been debated since the Platonic dialogues. As we go through the twenty-first century, these will not remain polite philosophical debates, but will be confronted as vital, practical, political, and legal issues.
A related question is "Is death desirable?" A great deal of our effort goes into avoiding it. We make extraordinary efforts to delay it, and indeed often consider its intrusion a tragic event. Yet we might find it hard to live without it. We consider death as giving meaning to our lives. It gives importance and value to time. Time could become meaningless if there were too much of it.

The Next Step in Evolution and the Purpose of Life

But I regard the freeing of the human mind from its severe physical limitations of scope and duration as the necessary next step in evolution. Evolution, in my view, represents the purpose of life. That is, the purpose of life--and of our lives--is to evolve. The Singularity, then, is not a grave danger to be avoided. In my view, this next paradigm shift represents the goal of our civilization.
What does it mean to evolve? Evolution moves toward greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and more of other abstract and subtle attributes such as love. And God has been called all these things, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially, it certainly moves rapidly in that direction. So evolution moves inexorably toward our conception of God, albeit never quite reaching this ideal. Thus the freeing of our thinking from the severe limitations of its biological form may be regarded as an essential spiritual quest.
In making this statement, it is important to emphasize that terms like evolution, destiny, and spiritual quest are observations about the end result, not the basis for these predictions. I am not saying that technology will evolve to human levels and beyond simply because it is our destiny and because of the satisfaction of a spiritual quest. Rather my projections result from a methodology based on the dynamics underlying the (double) exponential growth of technological processes. The primary force driving technology is economic imperative. We are moving toward machines with human level intelligence (and beyond) as the result of millions of small advances, each with their own particular economic justification.
To use an example from my own experience at one of my companies (Kurzweil Applied Intelligence), whenever we came up with a slightly more intelligent version of speech recognition, the new version invariably had greater value than the earlier generation and, as a result, sales increased. It is interesting to note that in the example of speech recognition software, the three primary surviving competitors stayed very close to each other in the intelligence of their software. A few other companies that failed to do so (e.g., Speech Systems) went out of business. At any point in time, we would be able to sell the version prior to the latest version for perhaps a quarter of the price of the current version. As for versions of our technology that were two generations old, we couldn't even give those away. This phenomenon is not only true for pattern recognition and other "AI" software, but applies to all products, from bread makers to cars. And if the product itself doesn't exhibit some level of intelligence, then intelligence in the manufacturing and marketing methods has a major effect on the success and profitability of an enterprise.
There is a vital economic imperative to create more intelligent technology. Intelligent machines have enormous value. That is why they are being built. There are tens of thousands of projects that are advancing intelligent machines in diverse incremental ways. The support for "high tech" in the business community (mostly software) has grown enormously. When I started my optical character recognition (OCR) and speech synthesis company (Kurzweil Computer Products, Inc.) in 1974, there were only a half-dozen high technology IPOs that year. The number of such deals has increased a hundredfold, and the number of dollars invested has increased more than a thousandfold, in the past 25 years. In the four years between 1995 and 1999 alone, high tech venture capital deals increased from just over $1 billion to approximately $15 billion.
We will continue to build more powerful computational mechanisms because doing so creates enormous value. We will reverse-engineer the human brain not simply because it is our destiny, but because there is valuable information to be found there that will provide insights in building more intelligent (and more valuable) machines. We would have to repeal capitalism and every vestige of economic competition to stop this progression.
By the second half of this next century, there will be no clear distinction between human and machine intelligence. On the one hand, we will have biological brains vastly expanded through distributed nanobot-based implants. On the other hand, we will have fully nonbiological brains that are copies of human brains, albeit also vastly extended. And we will have a myriad of other varieties of intimate connection between human thinking and the technology it has fostered.
Ultimately, nonbiological intelligence will dominate because it is growing at a double exponential rate, whereas for all practical purposes biological intelligence is at a standstill. Human thinking is stuck at 10^26 calculations per second (for all biological humans), and that figure will never appreciably change (except for a small increase resulting from genetic engineering). Nonbiological thinking is still millions of times less today, but the crossover will occur before 2030. By the end of the twenty-first century, nonbiological thinking will be trillions of trillions of times more powerful than that of its biological progenitors, although still of human origin. It will continue to be the human-machine civilization taking the next step in evolution.
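The crossover date follows directly from the two figures in this paragraph. The starting capacity and doubling time below are illustrative assumptions chosen to match "millions of times less today"; they are not figures stated in the essay:

```python
import math

# When does nonbiological computation pass the ~10^26 calc/sec of all
# biological human brains combined? Starting point and doubling time
# are assumptions for illustration, not the essay's own figures.

human_total = 1e26        # calc/sec, all biological humans (essay's figure)
nonbio_2000 = 1e19        # assumed: ~10 million times less, circa 2000
doubling_years = 1.0      # assumed doubling time for total capacity

years = math.log2(human_total / nonbio_2000) * doubling_years
print(f"Crossover roughly {years:.0f} years after 2000")
```

With these assumptions the crossover lands in the 2020s; the striking feature of exponential doubling is how insensitive the date is to the starting gap, since even a thousandfold error shifts it by only about ten doublings.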
Most forecasts of the future seem to ignore the revolutionary impact of the Singularity in our human destiny: the inevitable emergence of computers that match and ultimately vastly exceed the capabilities of the human brain, a development that will be no less important than the evolution of human intelligence itself some thousands of centuries ago. And the primary reason for this failure is that they are based on the intuitive but short sighted linear view of history.
Before the next century is over, the Earth's technology-creating species will merge with its computational technology. There will not be a clear distinction between human and machine. After all, what is the difference between a human brain enhanced a trillion-fold by nanobot-based implants, and a computer whose design is based on high resolution scans of the human brain, and then extended a trillion-fold?

Why SETI Will Fail (and why we are alone in the Universe)

The law of accelerating returns implies that by 2099, the intelligence that will have emerged from human-machine civilization will be trillions of trillions of times more powerful than it is today, dominated of course by its nonbiological form.
So what does this have to do with SETI (the Search for Extra Terrestrial Intelligence)? The naïve view, going back to pre-Copernican days, was that the Earth was at the center of the Universe, and human intelligence its greatest gift (next to God). The more informed recent view is that even if the likelihood of a star having a planet with a technology creating species is very low (e.g., one in a million), there are so many stars (i.e., billions of trillions of them), that there are bound to be many with advanced technology.
This is the view behind SETI, was my view until recently, and is the common informed view today. Although SETI has not yet looked everywhere, it has already covered a substantial portion of the Universe.
Chart by Scientific American
In the above diagram (courtesy of Scientific American), we can see that SETI has already thoroughly searched all star systems within 10^7 light-years of Earth for alien civilizations capable (and willing) to transmit at a power of at least 10^25 watts, a so-called Type II civilization (and all star systems within 10^6 light-years for transmissions of at least 10^18 watts, and so on). No sign of intelligence has been found as of yet.
In a recent email to my research assistant, Dr. Seth Shostak of the SETI Institute points out that a new comprehensive targeted search, called Project Phoenix, which has up to 100 times the sensitivity and covers a greater range of the radio dial as compared to previous searches, has only been applied thus far to 500 star systems, which is, of course only a minute fraction of the half trillion star systems in just our own galaxy.
However, according to my model, once a civilization achieves our own level ("Earth-level") of radio transmission, it takes no more than one century, two at the most, to achieve what SETI calls a Type II civilization. If we accept the assumption that there are at least millions of radio-capable civilizations out there, and that these civilizations are spread out over millions (indeed billions) of years of development, then surely there ought to be millions that have achieved Type II status.
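A rough sanity check of the "one century, two at the most" claim: the starting power (a megawatt-class Earth transmitter) and the doubling time are my own illustrative assumptions, not parameters given in the essay:

```python
import math

# From an Earth-level transmitter (~10^6 W, an assumption) to a Type II
# civilization's 10^25 W, assuming transmission power doubles every
# 2 years (also an assumption): how long does it take?

doublings = math.log2(1e25 / 1e6)   # ~63 doublings of power
years = doublings * 2
print(f"About {years:.0f} years")
```

Even spotting the doubling time a generous margin, an exponentially growing civilization spans the nineteen orders of magnitude from Earth-level to Type II within a couple of centuries, which is the model's point.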
Incidentally, this is not an argument against the SETI project, which in my view should have the highest possible priority because the negative finding is no less significant than a positive result.
It is odd that we find the cosmos so silent. Where is everybody? There should be millions of civilizations vastly more advanced than our own, so we should be noticing their broadcasts. A sufficiently advanced civilization would not be likely to restrict its broadcasts to subtle signals on obscure frequencies. Why are they so silent, and so shy?
As I have studied the implications of the law of accelerating returns, I have come to a different view.
Because exponential growth is so explosive, it is the case that once a species develops computing technology, it is only a matter of a couple of centuries before the nonbiological form of their intelligence explodes. It permeates virtually all matter in their vicinity, and then inevitably expands outward close to the maximum speed that information can travel. Once the nonbiological intelligence emerging from that species' technology has saturated its vicinity (and the nature of this saturation is another complex issue, which I won't deal with in this essay), it has no other way to continue to evolve but to expand outwardly. The expansion does not start out at the maximum speed, but quickly achieves a speed within a vanishingly small delta from the maximum speed.
What is the maximum speed? We currently understand this to be the speed of light, but there are already tantalizing hints that this may not be an absolute limit. Recent experiments measured the flight time of photons at nearly twice the speed of light, a result of quantum uncertainty in their position. However, this result is not useful for this analysis, because it does not actually allow information to be communicated faster than the speed of light, and we are fundamentally interested in communication speed.
The effects of quantum entanglement have been measured propagating at many times the speed of light, but this only communicates randomness (profound quantum randomness), not information (although it is of great interest for restoring encryption, after quantum computing destroys it). There is the potential for wormholes (or folds of the Universe in dimensions beyond the three visible ones), but this is not really traveling faster than the speed of light; it just means that the topology of the Universe is not the simple three-dimensional space that naïve physics implies. But we already knew that. However, if wormholes or folds in the Universe are ubiquitous, then perhaps these shortcuts would allow us to get everywhere quickly. Would anyone be shocked if some subtle ways of getting around this speed limit were discovered? No matter how subtle, sufficiently advanced technology will find ways to apply them. The point is that if there are ways around this limit (or any other currently understood limit), then the extraordinary levels of intelligence that our human-machine civilization will achieve will find those ways and exploit them.
So for now, we can say that ultra-high levels of intelligence will expand outward at the speed of light, while recognizing that this may not be the actual limit of the speed of expansion, and that even if the limit is the speed of light, it may not prevent reaching other locations quickly.
Consider that the time spans for biological evolution are measured in millions and billions of years, so if there are other civilizations out there, they would be spread out by huge spans of time. If there are a lot of them, as contemporary thinking implies, then it would be very likely that at least some of them are ahead of us. That at least is the SETI assumption. And if they are ahead of us, they likely would be ahead of us by huge spans of time. The likelihood that any civilization that is ahead of us is ahead of us by only a few decades is extremely small.
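The claim that a near-tie is "extremely small" is a one-line probability estimate. The window and spread below are illustrative assumptions consistent with the paragraph's framing:

```python
# If civilizations' start times are spread roughly uniformly over
# ~10^9 years of cosmic history, the chance that any one of them began
# within a century of us is minuscule. (Illustrative assumptions only.)

window = 100              # years: "ahead of us by only a few decades"
spread = 1_000_000_000    # years over which origins are spread
p_close = window / spread
print(p_close)            # 10^-7 per civilization
```

Even with millions of civilizations, the expected number that are within a mere century of our own development stays well below one, so any civilization we detect should be either far behind us or far ahead.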
If the SETI assumption that there are many (e.g., millions of) technological (at least radio-capable) civilizations is correct, then at least some of them (i.e., millions of them) would be way ahead of us. But it takes only a few centuries at most from the advent of computation for a civilization to expand outward at close to light speed. Given this, how can it be that we have not noticed them?
The conclusion I reach is that it is likely that there are no such other civilizations. In other words, we are in the lead. That's right, our humble civilization with its Dodge pickup trucks, fried chicken fast food, and ethnic cleansings (and computation!) is in the lead.
Now how can that be? Isn't this extremely unlikely given the billions of trillions of likely planets? Indeed it is very unlikely. But equally unlikely is the existence of our Universe, with a set of laws of physics so exquisitely tuned to exactly what is needed for the evolution of life to be possible. But by the Anthropic principle, if the Universe didn't allow the evolution of life, we wouldn't be here to notice it. Yet here we are. So by the same Anthropic principle, we're here in the lead in the Universe. Again, if we weren't here, we would not be noticing it.
Let's consider some arguments against this perspective.
Perhaps there are extremely advanced technological civilizations out there, but we are outside their light sphere of intelligence. That is, they haven't gotten here yet. Okay, in this case, SETI will still fail because we won't be able to see (or hear) them, at least not before we reach the Singularity.
Perhaps they are amongst us, but have decided to remain invisible to us. Incidentally, I have always considered the science fiction notion of large space ships with large squishy creatures similar to us to be very unlikely. Any civilization sophisticated enough to make the trip here would have long since passed the point of merging with its technology and would not need to send such physically bulky organisms and equipment. Such a civilization would not have any unmet material needs that require it to steal physical resources from us. They would be here for observation only, to gather knowledge, which is the only resource of value to such a civilization. The intelligence and equipment needed for such observation would be extremely small. In this case, SETI will still fail, because if this civilization decided that it did not want us to notice it, then it would succeed in that desire. Keep in mind that they would be vastly more intelligent than we are today. Perhaps they will reveal themselves to us when we achieve the next level of our evolution, specifically merging our biological brains with our technology, which is to say, after the Singularity. Moreover, given that the SETI assumption implies that there are millions of such highly developed civilizations, it seems odd that all of them have made the same decision to stay out of our way.

As intelligence saturates the matter and energy available to it, it turns dumb matter into smart matter. Although smart matter still nominally follows the laws of physics, it is so exquisitely intelligent that it can harness the most subtle aspects of the laws to manipulate matter and energy to its will. So it would at least appear that intelligence is more powerful than physics.
Perhaps what I should say is that intelligence is more powerful than cosmology. That is, once matter evolves into smart matter (matter fully saturated with intelligence), it can manipulate matter and energy to do whatever it wants. This perspective has not been considered in discussions of future cosmology. It is assumed that intelligence is irrelevant to events and processes on a cosmological scale. Stars are born and die; galaxies go through their cycles of creation and destruction. The Universe itself was born in a big bang and will end with a crunch or a whimper, we're not yet sure which. But intelligence has little to do with it. Intelligence is just a bit of froth, an ebullition of little creatures darting in and out of inexorable universal forces. The mindless mechanism of the Universe is winding up or down to a distant future, and there's nothing intelligence can do about it.
That's the common wisdom, but I don't agree with it. Intelligence will be more powerful than these impersonal forces. Once a planet yields a technology-creating species and that species creates computation (as has happened here on Earth), it is only a matter of a few centuries before its intelligence saturates the matter and energy in its vicinity, and it begins to expand outward at the speed of light or greater. It will then overcome gravity (through exquisite and vast technology) and other cosmological forces (or, to be fully accurate, will maneuver and control these forces) and create the Universe it wants. This is the goal of the Singularity.
What kind of Universe will that be? Well, just wait and see.

Plan to Stick Around

Most of you (again, I'm using the plural form of the word) are likely to be around to see the Singularity. The expanding human life span is another one of those exponential trends. In the eighteenth century, we added a few days every year to human longevity; during the nineteenth century we added a couple of weeks each year; and now we're adding almost half a year every year. With the revolutions in genomics, proteomics, rational drug design, therapeutic cloning of our own organs and tissues, and related developments in bio-information sciences, we will be adding more than a year every year within ten years. So take care of yourself the old-fashioned way for just a little while longer, and you may actually get to experience the next fundamental paradigm shift in our destiny.
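The longevity figures above can be turned into a toy model. This sketch (the two anchor values are rough readings of the essay's claims, not measured data) fits an exponential to the yearly gain in life expectancy and asks when that gain would pass one year per year:

```python
import math

# Toy model of the essay's longevity claim: the annual gain in life
# expectancy itself grows exponentially. The two anchors below are assumed
# from the text (roughly two weeks/year around 1850, half a year/year
# around 2000), not measured data.

t0, g0 = 1850, 14 / 365      # ~two weeks of added longevity per year
t1, g1 = 2000, 0.5           # ~half a year of added longevity per year

rate = math.log(g1 / g0) / (t1 - t0)   # continuous growth rate of the gain

def annual_gain(year):
    """Longevity added per calendar year, under the exponential model."""
    return g0 * math.exp(rate * (year - t0))

# Find the first year the modeled gain exceeds one year per year.
year = 2000
while annual_gain(year) < 1.0:
    year += 1
print(f"growth rate of gains: {rate:.3f}/yr; gains exceed 1 yr/yr by ~{year}")
```

Under these assumed anchors the crossover lands decades later than the essay's ten-year claim; the crossover year moves with whatever anchors are chosen, so the sketch illustrates only the shape of the trend, not its timing.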
Copyright (C) Raymond Kurzweil 2001
Chart Graphics by Brett Rampata/Digital Organism
Mind·X Discussion About This Article:
Spiritual Machines: the Logical Conclusion of Techno-extrapolism
No doubt, Mr. Kurzweil displays a masterful use of quantitative methods, dazzling extrapolist charts, to support his thesis of the inevitability of spiritual machines by the year 2030. How else could he pull the wool over so many eyes unless he were able to put on such a dazzling display of data manipulation?
However, every good statistician knows that the same quantitative analysis can be used to support opposite conclusions, just as every good magician knows that the rabbit only appears to disappear because he cleverly distracts your attention with a sleight of hand. In other words, it's not so much what Mr. Kurzweil says but what's not being said that matters.
The same magic trick applies to modern and postmodern techno-civilization. The fact that so many intelligent people could be fooled by the scientist-magician of today unmasks a form of modern neurotic obsession with the progress of technology and its false promises of the future. Computer nerds become so immersed in virtual reality that the "virtuality" of it becomes confused with reality.
Mr. Kurzweil is presumptuous with his techno-optimism. Nanobots do not even exist except in his virtual imagination. We still know very little about the brain and its various functions. It cannot be mapped, copied or faxed, and even if it could, it doesn't mean that "consciousness" can be captured, much less spirituality.
I doubt that Mr. Kurzweil even knows what he's talking about when he mentions "spirituality." And then to mention "God" as well shows how far off his rocker he really is. The next step of evolution, no doubt, will be when the spiritual machines download God onto their consciousness.
Put it this way: What if, among its many other functions, the brain also acts as a kind of antennae for the Spirit World, much like a tv picks up air waves? Perhaps there is some part of the brain, not yet discovered, which is like a door to the dimension of thought and feelings. Thus, not knowing this, the nanobots might only copy part of the brain and, thus, the resultant consciousness copied would not at all be spiritual and only vaguely resemble something human. Yet, Mr. Kurzweil dares to mention "spirituality" in the context of "machines," all the while treating the whole subject matter as a materialist would, displaying a profound ignorance of the relationship of consciousness to the sub-consciousness or to the Spirit World, as if it could be savagely abstracted from its context, dissected and treated as a thing in itself.
Finally, Mr. Kurzweil dismisses SETI using a sort of twisted logic that extra-terrestrials couldn't possibly exist since they would have developed this same fantasy technology and, thus, would have contacted us already. However, has he considered that perhaps they haven't contacted us because the technology he is so firmly convinced will develop in the near future doesn't exist anywhere in the universe, never has and never will? Then again, who's to say that they haven't contacted us? It depends on your definition of "contact." Some say they already have, it just hasn't been made "official" yet.
Anyway, it seems that the monster dreams of Dr. Frankenstein live on. I mean, why are some scientists so eager to pursue this technology in the first place? It's the logical conclusion of techno-extrapolism. Their techno-optimism is a form of idealism and rather than face the reality of the overwhelming problems that so-called "progress" of technology and science has brought to the world, they persist in believing that technology will solve these problems; thus, the powers that be, who represent a small portion of the world's population, can continue to ravage the Earth and suck her resources dry while, through the various forms of pollution, creating a hell on Earth, that will be unfit for future generations.
But that's okay. They've found a way around the problem. Just redefine what is human, that's all. Recreate humanity in the image of our machines and then it won't be necessary to worry about organic existence. The machines will simply "download" human consciousness and spirituality to preserve the human legacy. Don't worry. Humankind won't perish. It will simply "evolve" into a new form of existence which will not be so sensitive to environmental degradation. It's still survival of the fittest (fattest?), "nature." So, the majority who don't make it, who aren't financially able to preserve their consciousness, are simply the "unfortunate" who are "unfit" for survival. You can't argue with nature, can you?
It's called "Post-modern Techno-Darwinism."
dennis morgan
korea
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
"What if, among its many other functions, the brain also acts as a kind of antennae for the Spirit World, much like a tv picks up air waves? Perhaps there is some part of the brain, not yet discovered, which is like a door to the dimension of thought and feelings. Thus, not knowing this, the nanobots might only copy part of the brain and, thus, the resultant consciousness copied would not at all be spiritual and only vaguely resemble something human. "
Well, I suppose that means we will have discovered just that part of the brain then, won't we? That being the case, we will have the knowledge and means necessary to begin analyzing and deconstructing it, too.
That which affects the physical, is physical.
If the "spiritual" creates observable differences in the body, and the body creates observable differences in the spirit, what's the point of saying the spirit is not the body? What's the difference?
In saying that there are things we don't understand about how the brain works, you're AGREEING with us. One cannot argue a position of eternal ignorance about the human experience FROM a position of ignorance about the human experience.
What you're saying is not that we don't understand human consciousness, but that we will never understand it. Kurzweil may well be off in his estimates of when certain levels of human mastery will occur, but he himself admits those are the least certain assertions of his arguments.
If the spirit does happen to be housed in "some other dimension", or expressed by something other than the structure of the central nervous system, the explorations and discoveries inherent in our scientific inquiry will only be lengthened and expanded, not precluded entirely.
So maybe it takes a bit longer than we expected. It's no skin off my brain!
While your criticism of unbridled optimism is meagerly compelling (and worth heeding), you fail to pose an argument as to why your "spiritual antennae" couldn't be reverse-engineered along with the rest of the human body. Do you know something we don't?
"The feasibility of a particular technology is a question best left to engineers." |
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
I'm puzzled. We were able to manufacture television from SCRATCH, without such a model to guide us.
Having a model human (and an entire race, at that) should make it easier to do so.
Sure, my first "successful" television might perhaps be a poor copy of the supplied model, but what about the first successful television ever built? I daresay the knowledge to be had from having a "specimen" would, if anything, make my construction better. And, over time, what I learned from the prototype construction would contribute to a better second replication, etc., just as all manufacturing processes do. Such is also the case with cloning. Practice makes perfect.
Certainly we could hit some unbreakable limit in the task of reverse engineering the brain, spirit, etc.
I would rather attempt to find out what that limit is than take your unsubstantiated word for it. At least then we'd know, and would have the rest of time to figure out how to crack the problem.
If you are skeptical about the feasibility of the technology, then we have nothing more to discuss. Please stand aside while the rest of us get back to work. :)
If you do not agree with the ethics of what we are attempting, then I'll see you in court, or, in the more extreme case, on a battlefield. I will defend my right to research, experiment with, and implement such technologies as I see fit.
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
What Kurzweil does is cite technological trends which relate only to his field of expertise; then, based on these trends, concludes with generalized, all-inclusive, sweeping predictions about the next, radical step in human evolution, a transformation into a non-organic existence.
I don't think I'm the only one who finds this approach suspect: someone with specialized knowledge generalizing conclusions within their own field to include all other fields of human knowledge as well.
If the trends he uses were the only trends on planet Earth, then that would be okay; however, that's not the case. Does he even mention how these trends might interact with opposing trends outside his field? That's the difference between real science and ideology. The difference between Marx and Einstein is that whereas Marx would try to find data to justify a conclusion that he already believed, Einstein went out of his way to find data that might DISPROVE his theory. Do we encounter this same attitude in Kurzweil?
It's when people go outside their field of knowledge and start using terms like "consciousness" and "spirituality" in the same context as "machines" that I get suspicious and sceptical. Is Kurzweil an expert in the field of human consciousness? Is he an expert in the area of spirituality? Does he even possess an exhaustive knowledge of the functions of the human brain? Does he understand how physical and spiritual existence interact? Does he know the reality of the spirit world?
Technological progress isn't necessarily
antithetical to spirituality but it can be. I mean it has a tendency to be since the paradigm of modern civilization has, in some ways, tried to substitute spirituality with the promises of technological progress, the focal point of materialist ideology. This same tendency is evident in what Kurzweil promises. It's ideological in nature and only pretends to be scientific. It's the promise of eternal physical existence, 21st century mummification through science, the denial of nature and death.
An awful lot of legitimate research has already been done on the reality of the spiritual dimension. There's much more empirical evidence for it than for the possibility of downloading human consciousness and spirituality onto machines. Much more. There may very well be a correspondent faculty located in the brain, yet to be discovered, that acts as a kind of antennae to the spiritual dimension.
My point was that without understanding this, without fully understanding that the brain operates as a kind of antennae to the spiritual dimension, attempts at "reverse engineering" the brain in the future could result in a kind of frankensteinian monster without a soul. I acknowledge that technological progress in the fields of nanotechnology, A.I. and robotics will accelerate in the future. Nevertheless, this brings up ethical questions, much like in the case of cloning. What if such artificial "beings" are created in the future, but what is downloaded is not exactly human consciousness but a kind of machine consciousness that only vaguely resembles human consciousness, one which may indeed be superior to human consciousness in terms of logic and the speed of thought but inferior in other ways? These machines could logically deduce that the real problem that threatens the future of planet Earth, the root of all problems, is humankind. Inevitably, it could be the beginning of a war between Dr. Frankenstein and his monsters.
There seem to be a number of scientists and futurists who are embracing these kinds of predictions, almost as a kind of religion, without examining them critically.
I think that's dangerous.
wars and rumors
Frankly, I think the war will get started between humans over this issue long, long before the machines have any inkling of the concept.
OK. You're well aware, I'm sure, of the resistance the ideas of the spirit world or dimension will meet among the current scientific community, especially the material reductionism you'll find here.
You have cited research which I have not made the time to study, so I'm not going to call you out there. I'm going to suspend my knee-jerk disbelief for the purposes of scenario-building.
It's quite simplistic, I think, to say, as many skeptics have, that uploading is not possible because humans are more than is encoded in our central nervous system. The argument is made by saying that there's something "we" are missing, something unknown that we're overlooking.
I counter that by saying that of course there's something we don't know, something we're missing. If there wasn't, we'd be successful already.
The scientific process gradually reverse engineers the brain, one piece at a time, until it's got it down. How long it's going to take is immaterial. As research and experimentation continue, we'll get a better and better picture. It's much easier to prove something is possible than that it isn't.
My case is that the spirit antenna you speak of is just another piece in that puzzle, and given its measurable physical effects on the brain - even if it's not part of the brain itself - research will open its functions up for study, understanding, and reverse engineering just like the rest of the body.
What then, is to prevent us duplicating the functions of the antenna you're proposing?
As I recall, the trouble with science as it studies spiritual and similar paranormal phenomena to date is its pure subjectivity. Current scientific methods typically utilize instrumentation for objective measurements. Spiritual phenomena like those you describe require the abandonment of these objective instruments, using only the human mind to explore a world by itself.
If these worlds have any objective, share-able, independent existence, they are naturally resistant to current methods of scientific observation and measurement.
That I can buy, to an extent.
Research into uploading and reverse engineering the human brain are then the ONLY gateway we have into these worlds, and so continuing research into this field will reveal them once and for all - objectively.
Exactly what physical implications do the current research you cite show?
Is it possible that humans have had this antenna damaged, becoming "soulless monsters"? If so, what harm is it to build "soulless" machines? The extra power hypothetically available to these machines is equally available to humans, so it's not soullessness we should be worried about, but rather the added power of cybernetics.
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
Have you read any of the articles in ORIGIN - "Dangerous Futures?" I admit that I haven't yet; however, just by skimming through them I can see that there's quite a debate about this research, some of them bringing up objections similar to mine. If there's this much controversy already, imagine the controversy in the years ahead with the technological breakthroughs that are sure to come.
Personally, I think debate on this issue is good. It needs critical examination. At the very least, progress in these areas should proceed with caution. Dr. Frankenstein's mistake was that he got too carried away with his science, too cocky, and began to ignore all other considerations. Everything began to revolve around his monster. Some scientists, even geniuses, have this same character flaw, and it can get us into trouble.
I'm pretty sure Oppenheimer truly regretted having created the atomic bomb. The pilot who dropped it on Hiroshima is said to have gone mad.
Scientific progress isn't necessarily progress.
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
"You ask for proof of whether or not we can properly recreate the human brain, wait the human MIND, in a non-biological form. Such a thing can only be proven by doing it and likewise disproved by failing at the attempt. "
Not disproved; postponed.
This reminds me of the fact that Penrose's arguments are based on the assumption that AI is little more than achieving the necessary computing power in hardware, which is obviously too simplistic a view.
There is a very fine line, however, between failing to obtain AI and learning that something else is necessary to achieve it. By proposing various quantum mechanisms in the brain, Penrose (perhaps) or others espousing his arguments seem to be saying that AI is impossible. Rather, they, like good skeptics, are laying out a roadmap of unanticipated things that may need to be accounted for in our mind models.
The arguments simply say that it may not be as simple as we thought it might. In response, I would simply say we knew that, and will learn from our mistakes and failures until we achieve success - like science always does.
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
There are two essential arguments I would take up with Kurzweil's notions,
one of which you too briefly touch upon. The other you, like Kurzweil, have
bypassed entirely.
The first argument is one of successful development. According to experts
in the 1950s, we should have had lunar colonies, cheap fusion energy, and
flying cars by this date. Obviously, they're not here, and we should ask
the critical "why". In some cases -- fusion energy, for example -- our
understanding of the processes involved appear to be inadequate to
harnessing the science into technology. In other cases -- lunar colonies --
there doesn't appear to be a need compelling enough to warrant the
expenditure of resources. Other cases seem to suggest insufficient interest
to justify invention and innovation. Although many approaches to flying
cars have been tried over the last half century, they couldn't overcome the
economic argument for separate automobiles and aircraft: cars and air travel
are simply too cheap (relatively) for flying cars to compete.
The second argument centers around innovation: the process of bringing a new
technology into widespread use in society. It appears there may be limits
to the diffusion of new technologies in society, and we may be reaching them
regarding computers. Indeed, if we follow the examples of telephones and
automobiles, we must note that these technologies rapidly proliferated as
long as they could be cheap enough for people to own them. However, there
appears to be a limit to how far these technologies can spread, and it may
have to do with the infrastructure needed to support their operation. Until
the advent of cellular phones, telephone technology was limited to places
where cables could be economically laid. Likewise, automobiles demand
infrastructure support. Without roads, fueling stations, and repair shops,
automobiles are of little value. Without the infrastructure to support them
in less developed countries, it is little wonder the automobile is the
preferred transportation only in highly industrialized countries. As you've
undoubtedly noticed in your travels, there are still places where ox drawn
carts can regularly be seen close by to intercontinental jet aircraft. As
for computers, recent trends in new computer sales suggest we've reached a
plateau upon which we may rest for a while as those who can afford and have
the necessary infrastructure to support them have already purchased the
computers they want or need. New processors and increased speeds
notwithstanding, computer sales are slowing, and the technology's
proliferation in the marketplace seems to be reaching the saturation point.
The points I wanted to make are that there's no guarantee that the
technologies Kurzweil's future world depends upon will mature within human
lifetimes and that even if they do, there is no guarantee they will
proliferate throughout humanity. We recognize that technologies don't
always saturate the population: witness the fact that perhaps half the
planet's population has yet to make their first telephone call!
Finally, we may yet find something about the bioelectronic integration of
humans and computers that is so undesirable that people will choose to
forego this technology, something Kurzweil doesn't seem to account for. We
can cite technologies humanity has practically abandoned and may yet
abandon as undesirable. From DDT to nuclear power to genetically engineered
foods, it becomes apparent that people will choose not to use technologies
if something makes those technologies appear harmful. And the potential
loss of privacy that might come with bioelectronic linkages to a worldwide
computer network might make such a union highly undesirable. We could, of
course, develop some scenarios that would make the union of human and
machine look even more monstrous -- imagine, if you will, the network
developing a superconsciousness that enables it to program its human parts
into subservience. And should such images propagate throughout human
consciousness, the technological advances Kurzweil anticipates may be
thwarted. A critical review of technology should note that no technology is
inevitable.
I have little doubt that some of what Kurzweil anticipates will come to
pass. But like other technological forecasts before his, Kurzweil fails to
acknowledge the uncertainties that exist. I expect that Kurzweil's sweeping
forecast of techno-utopia, like so many other forecasts before, will find
its way to the dustbin of history and will be forgotten except to remind us
of how difficult predicting the future is.
But, bowing to my own recognition of uncertainty, I could be wrong.
Regards,
J. P. DeMeritt
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
Excellent analysis of why forecasts often fail, J.P.
One thing we can learn from history (which also
applies to Future Studies) is that it isn't what
people thought it would be. There are too many
variables (wild cards) that are never considered.
This especially applies when considering such a
sweeping prediction of a new paradigm and the next
stage of human evolution, ...etc. And even more so,
concerning a subject about which we still know so little:
the human mind, consciousness,
subconsciousness, spirituality, the spirit world, etc.
It's easy to get so carried away with the technology
that you lose sight of the areas of knowledge which
exist outside the technology. There is this tendency
to see everything through the lens of the technology,
as if the technology were a paradigm in itself; hence,
the push to make everything "fit" into it.
Sure, humans have evolved through the interaction with
tools and technology, but I don't think that tells the
whole story or that it is the definitive last word on
what it means to be human. It should certainly be
included in that definition, but there is so much more
potential yet to be discovered in humanity that goes
beyond our ability to create and use technology.
I think there's a danger if we begin to take this
whole frankenstein complex too seriously. As Kuhn
pointed out, science proceeds in the direction we take
it. It doesn't have to proceed in that direction. It's
up to us how we direct it, and how we direct it has an
impact on the paradigm that we create.
For example, if as much energy and research had been
applied to establish the validity of the spiritual
dimension, it could be that humans would be
consciously interacting with the spirit world - that
its existence wouldn't even be questioned. It would
be an axiom of scientific truth.
The same thing might be said for the existence of UFOs.
The point being that there's this subjective element
in the way science is directed. Thus, if it's directed
to serve our frankensteinian impulse to recreate
ourselves (an Egyptian obsession with mummification),
then this reality may come to pass, except that it may
not come to pass in the way we expect it to. We could
end up with a monster who gets out of control.
Don't get me wrong. There are definitely some
interesting advances in the field of robotics and A.I.
and it is safe to assume that progress will continue
in this area. However, I don't think that the outcome
will be a replicant of a human being, much less the
next stage in human evolution. Nor, should it be.
One thing we can say about history (that I believe can
also follow in futures) is that history and the future
is, to some degree, determined by the law of
necessity. If you understand this law, you can
understand a key to history and a key to the future as
well. Thus, the developments which follow in the
future are those which are realized by the law of necessity.
It is natural to assume that automated, computerized,
mechanical "beings" will be created with enhanced
eyesight, mobility, and audio capabilities, along with
a logical, computerized (but programmed) "mind." All
of these capabilities are enhanced human capabilities.
But to define these beings as life, to say that one
has recreated life from purely inorganic material, is
going a bit too far, in my opinion.
As far as the law of necessity goes, the creation of these
robots, as in the present, follows the law of
necessity. They serve specific purposes, usually
economic, and their design and functions follow this
purpose. To think of them as some means of reproducing
ourselves or the next stage of human development,
though stimulating to the imagination, doesn't really
follow the law of necessity; thus, I don't expect the
technology to go in that direction. Of course, I could be wrong as well! :)
Dennis
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
"As for computers, recent trends in new computer sales suggest we've reached a
plateau upon which we may rest for a while as those who can afford and have
the necessary infrastructure to support them have already purchased the
computers they want or need. New processors and increased speeds
notwithstanding, computer sales are slowing, and the technology's
proliferation in the marketplace seems to be reaching the saturation point."
While it is true that some computer sales are slowing, in other areas the trends will continue. My case in point is Nvidia. Right now Intel makes processors for the average Joe, and with the software out there right now, the Pentium 3 line of processors can open all the browser windows, spreadsheets, and documents that he wants. He has no more reason to upgrade to the Pentium 4. But there is another market, one that at first drove the CPU and now is driving the GPU (Graphics Processing Unit): the ever-growing graphics industry. Nvidia releases new GPUs every year, improving twice as fast as Intel or other CPU manufacturers and twice as fast as Moore's law. So there will be a shift in the market, as intense gamers keep upgrading their systems to run the latest games, as they have been doing since computers were made. The main reason that people buy new computers is for games. Entertainment has been and will be a vital part of humanity, and graphics technology will keep growing. When will gamers be satisfied? It seems that every new game that comes out follows a general trend: they are becoming more and more real. Graphics become sharper, explosions sound better, and suddenly enemies seem smarter. Gamers can't get enough of it, and will continue to pay for better and better processors until it is real. And when we reach the point where reality can be created through bio/machine interfaces, the Singularity will be upon us.
T. K. Cannell
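The "twice as fast as Moore's law" claim above is really a claim about doubling times, and doubling times compound dramatically. A minimal sketch of the arithmetic (the doubling periods used here are illustrative assumptions, not measured industry figures):

```python
# Compare cumulative speedup under two assumed doubling periods.
# Both periods are illustrative assumptions, not measured data.

def speedup(years, doubling_period_years):
    """Total performance multiple after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

cpu_gain = speedup(6, 2.0)  # Moore's-law-style doubling every ~2 years
gpu_gain = speedup(6, 1.0)  # doubling every year ("twice as fast")

print(f"CPU-style gain over 6 years: {cpu_gain:.0f}x")  # 8x
print(f"GPU-style gain over 6 years: {gpu_gain:.0f}x")  # 64x
```

Note that halving the doubling period does not merely double the end result; it squares the growth factor over the same interval.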
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
"Mr. Kurzweil is presumptuous with his techno-optimism. Nanobots do not even exist except in his virtual imagination. We still know very little about the brain and it's various functions. It cannot be mapped, copied or faxed, and even if it could, it doesn't mean that "consciousness" can be captured, much less spirituality."
In 1000 A.D. planes did not exist, and I'm quite certain that the majority of people believed that humans would never fly. Yet today it is common. The fact that nanobots do not exist today is an incredibly weak argument against Ray Kurzweil's vision of the future.
"Put it this way: What if, among its many other functions, the brain also acts as a kind of antennae for the Spirit World, much like a tv picks up air waves? Perhaps there is some part of the brain, not yet discovered, which is like a door to the dimension of thought and feelings. Thus, not knowing this, the nanobots might only copy part of the brain and, thus, the resultant consciousness copied would not at all be spiritual and only vaguely resemble something human. Yet, Mr. Kurzweil dares to mention "spirituality" in the context of "machines," all the while treating the whole subject matter as a materialist would, displaying a profound ignorance of the relationship of consciousnes to the sub-consciousness or to the Spirit World, as if it could be savagely abstracted from its context, dissected and treated as a thing in itself."
The fact is that we do not know. In 30 years it might be quite common to scan a brain and upload it into another computing substrate, keeping EVERYTHING intact. On the other hand, this may prove extremely difficult. I do not believe that it is impossible.
"Finally, Mr. Kurzweil dismisses SETI using a sort of twisted logic that extra-terrestrials couldn't possibly exist since they would have developed this same fantasy technology and, thus, would have contacted us already. However, has he considered that perhaps they haven't contacted us already since the technology he so firmly convinced will develop in the near future, doesn't exist anywhere in the universe, never has and never will. Then again, who's to say that they haven't contacted us. It depends on your explanation of "contact." Some say they already have, it just hasn't been made "official" yet."
You might be taken more seriously if you treated others' concepts with more respect. You call this a "fantasy technology," yet you talk about TV antennae for the Spirit World! At least Ray Kurzweil offers a lot of information to support his belief. I would like to see the facts you use to back up your arguments, especially your claims for our current perpetual existence. At present I know of no facts that lead me to believe that I will live beyond my body, and thus I conclude that the technology described here is quite necessary.
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
Once again, the postulation of the brain as an antenna to the spirit world was qualified with "what if." It's a possibility to consider.
This was brought out to make the point that there may be hidden functions or potentialities of the brain which must be considered before taking on the task of reverse engineering.
Otherwise, mistakes could be made which might bring on harmful consequences.
As for "proof" of the existence of the spirit world, this has been documented by a number of writers throughout history. In more recent history, I would have you examine the accounts of the 18th-century scientist Emanuel Swedenborg, who, at the age of 50, became "spiritually open," traveled into the spirit world, and wrote volumes of detailed accounts of what he saw.
Other writers who have documented accounts of spiritual experiences are the American philosopher, William James, the educator and multi-talented German anthroposophist, Rudolf Steiner, the American psychics Edgar Cayce and Arthur Ford, Anthony Borgia (a detailed account of the spirit world from beyond), and the British philosopher Colin Wilson.
Most of these writers not only give detailed accounts of their own experiences but have also recorded the spiritual experiences of thousands of others.
This evidence is much more "empirical" than the fantastic imaginings of downloading human consciousness onto "spiritual machines" through the use of nanobots.
A great project for science fiction, but I submit that it is nothing more than that and will always be nothing more than that.
Unless you can show me that you have real knowledge of all the functions of the brain, human consciousness and subconsciousness, and spirituality, I'm sorry, but I don't buy it.
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
""Detailed accounts" do not constitute scientific evidence, any more than our imaginings of the potential capabilities of nanotechnology.

What duplicatable experiments have these practitioners created to give us consistent, objective observations about spirituality or the human soul?"
---------------------
Detailed accounts do count for something if they are consistent. It depends on what kind of scientific evidence you require.

However, there have been objective observations about spirituality and the reality of the spirit world. There's been a lot of research in this field. Many who've researched it have gone to painstaking lengths to set up experiments which could give undisputable truth. Of course, there are the quacks too. That's what has given the field a bad rap.

Right off hand, I would recommend the book of the American psychic Arthur Ford, "Known but Unknown" (1968), and, more recently, that of the British philosopher Colin Wilson, "Afterlife" (an investigation of the evidence of life after death).

Nevertheless, we live in a paradigm of materialistic ideology that often masquerades as "real" science. Even so, there are a number of scientists and intellectuals who have approached this topic with an open mind and, though sceptical at first, finally concluded that the reality of the spiritual dimension is quite plausible. Of course, there are also those who are so locked into the materialism of our age that their minds are closed, and no matter what evidence is presented, they choose to reject it.

So be it. I don't think that this is the right forum to discuss this topic, since it is only indirectly related.

My original point is that IF there is the possibility that human beings have a spiritual nature, and IF some part of the brain acts as a kind of antenna to the spirit world, then IF this is not known and taken into account when attempting reverse engineering as a means of downloading human consciousness, it could result in a kind of Frankenstein, a monster without a soul.

Thus, there needs to be more research in the area of human consciousness, the brain, and their possible relation to spiritual existence before jumping to conclusions about the future capabilities of nanotechnology to download human consciousness and spirituality onto machines on such a scale as to represent a new stage in human evolution that would even replace organic existence.

Another question to consider is whether it is even desirable to take the technology in that direction.

Dennis Morgan
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
"There's been a lot of research in this field. Many who've researched it have gone to painstaking lengths to set up experiments which could give undisputable truth. "
I seem to recall some of these, actually. If there are consistent experimental results, they're worth analyzing and taking into consideration.
My caution would be to avoid jumping to unwarranted conclusions about the experimental results, which is often difficult when using the very well-worn terms "spirit," "soul," etc., which inevitably drag with them a lot of mythological baggage.
In any case, you've given me some references, which is laudable.
"Even so, there are a number of scientists and intellectuals who ...[snip]... finally concluded that the reality of the spiritual dimension is quite plausible"
"My original point is that IF there is the possibility that human beings have a spiritual nature and IF ...[snip]..., it could result into a kind of frankenstein, a monster without a soul. "
Plausibility and possibility were not my concerns; I personally am aware of a great deal of bias in the scientific and academic communities, and do my very best to abide by the strictest principles of the scientific method and its consequents as best I can. I have no problem believing that consciousness consists of something more than brain patterns, or in disembodied spirits, or God, for that matter.
I was asking about facts and data, which you have duly provided.
Frankenstein monster scare scenarios are always possibilities. How you correlate the probability of such a scenario with the data and experimental evidence you've cited is something I'm curious to hear.
"Thus, there needs to be more research in the area of human consciousness, the brain and the possible relation to spiritual existence, "
Great! You've used an opposing line of reasoning to reach the same conclusion. I'm sure current researchers are more than happy to oblige us.
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
I appreciate your open-mindedness on a topic that many are not open-minded about.
Though I remain skeptical about these sweeping boasts of nanotechnology/A.I. concerning the future, I don't wish, in any way, to be disrespectful about Mr. Kurzweil's current and past scientific contributions in the field.
It is because these predictions cross over into other fields that I am skeptical. It is also because I am an M.S. graduate of Futures Studies that I understand a little something about the nature of extrapolating trends and their interpretations when it comes to attempting to understand implications for the future.
Futurists are often skeptical of predictions, especially sweeping predictions, because we understand why predictions often fail.
So, though I admit that this field is fascinating (a fan of sci-fi myself), I am not yet a believer.
However, I realize that a number of prominent scientists in the field are. Just to make a contribution to this forum, a related article from the New York Times technology section just came in. Here it is:
A SCIENTIST'S ART: COMPUTER FICTION
Twenty years ago, computer scientist Vernor Vinge wrote a tale of a networked world. From classroom and lab, he's watched it become a reality....
http://www.nytimes.com/2001/08/02/technology/circuits/02VING.html?todaysheadlines
Dennis Morgan
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
Very often this question is discussed as a complicated biological-technical problem, as a problem of technology. But human consciousness is a form, process, and result of socialization. This is the main point in understanding the nature of consciousness. It is impossible to download human consciousness and spirituality because they are not a limited, finite class of elements. Actually, consciousness and spirituality are something related to infinity. To "download" the actual content and reproductive matrices of "Romeo and Juliet," it would be necessary to "download" all of Shakespeare's epoch (thinking, mentality, traditions, food, morals, law, religion, clothes, music, dance, political system, mode of production, language, architecture, climate, relations between men and women, feudal social organization, education, etc. = infinity) and all the epochs before it, because it is impossible to understand the greatness of the play without understanding the Middle Ages and the Renaissance and their contradictions (= infinity).
The basic principle of any calculating ("thinking") machine is non-contradictoriness and logic. For a machine, error is something negative, a deadlock, unacceptable. But for us, errors are a necessary attribute of our evolution. Logic and non-contradictoriness are only some elements of our spirituality, not its whole content. Our feelings do not have a logical nature. But without feelings, people are not human beings but perhaps "thinking machines."
Valeriy Khan
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
"It is impossible to download a human consciousness and spirituality because they are not a limited, finite class of some elements. Actually, consciousness and spirituality are something related to infinity."

When speaking of infinity, it is, I believe, possible to come up with myriad combinations within the boundaries of a set amount of space, i.e., the human brain. For instance, look at the amount of music that has been produced from the 12-tone scale. The philosopher Immanuel Kant assumed infinity by extension; I think that may tend to confuse the issue at times. I mean the difference between billions upon billions of combinatorial items and how physicists perceive the universe: his thinking was conjecture, or philosophical speculation, about the outbound universe, rather than the universe within our frames of mind, as many have argued in the "theory of infinity," the trapdoor of the universe. I believe that neural nets in a closed system can be duplicated, as Ray Kurzweil states. I believe that qualities within these perimeters, or parameters, of the mind may trick one into believing in an open system, that is, "foreverness."

We must also realize that there is room for a psychological mechanism built into our brains for spirituality, i.e., the God Spot, which Ray defines as: a tiny locus of nerve cells in the frontal lobe of the brain that appears to be activated during religious experiences. Neuroscientists from the University of California discovered the God Spot while studying epileptic patients who have intense mystical experiences during seizures.

I do believe that if human brains were to be duplicated they would be similar but not exactly the same, barring entropic factors and spatial relations. And it would be really creepy if we were to meet our double; I mean, have you seen the movie "Being John Malkovich," when he goes inside his own brain and sees the mirrors of himself?
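The point above about vast combinations inside a bounded space can be made concrete with a one-line count: the number of distinct orderings of the 12 chromatic pitches (the tone rows of the 12-tone scale the poster mentions) is enormous yet strictly finite. A minimal sketch:

```python
# Count the distinct orderings (tone rows) of the 12-tone scale.
# The result is huge but finite: combinatorial plenty within bounds.
import math

tone_rows = math.factorial(12)  # 12! orderings of 12 distinct pitches
print(tone_rows)  # 479001600
```

Nearly half a billion orderings from just twelve symbols, and that is before counting rhythm, dynamics, or repetition, which is the sense in which a bounded system can feel inexhaustible.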
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
>No doubt, Mr. Kurzweil displays a masterful use of quantitative methods, dazzling extrapolist charts, to support his thesis of the inevitability of spiritual machines by the year 2030. How else could he pull the wool over so many eyes unless he were able to put on such a dazzling display of data manipulation?
Not an act of "data manipulation" at all. Data manipulation is when people see a comet and then tell everyone that it's a warning from God, or some mystical entity, so they better repent. Then they go to a place to repent and are expected to give a donation.
>However, every good statistician knows that the same quantitative analysis can be used to support opposite conclusions
Again, not true at all. Try using statistics to prove to me that F does NOT equal ma.
>just as every good magician knows that the rabbit only appears to disappear because he cleverly distracts your attention with a sleight of the hand.
Well, only Alan Greenspan and his Federal Reserve Bank can perform THAT degree of obfuscation.
>In other words, it's not so much in what Mr. Kurzweil says but what's not being said that matters.
True, a data analysis is always pulled along by what is OMITTED, not by what is said. But so far you haven't said anything. . . so I guess I'll keep investigating your "logic."
>The same magic trick applies to modern and postmodern techno-civilization. The fact that so many intelligent people could be fooled by the scientist-magician of today unmasks a form of modern neurotic obsession with the progress of technology and its false promises of the future.
Scientists are NOT magicians at all. People who "feel" or "believe" that "spirits" or "god" exist are the actual magicians. Science is HAMMERED OUT by 50,000 years of thinking men and women from all races, nationalities, and belief-systems. For you to attempt to invalidate this shows that you are completely ignorant of science, except as it serves your bogus quest to invalidate legitimate technologists, such as Ray Kurzweil, and men of his status.
>Computer nerds become so immersed in virtual reality that the "virtuality" of it becomes confused with reality.
Why the name-calling all of a sudden? I haven't called you a Belief-System Cultist.
>Mr. Kurzweil is presumptuous with his techno-optimism.
Not at all. In THE AGE OF SPIRITUAL MACHINES, Kurzweil showed at least 100 years of IRREFUTABLE documentation on the cost/power ratio of computational devices and then carefully extrapolated the same into the years 2009, 2019, 2029, and the rest of the century. Nothing over-optimistic at all.
>Nanobots do not even exist except in his virtual imagination.
And he never said that they do exist. So what if he may have said that they MAY exist. They MAY. Read ENGINES OF CREATION or get with the program over at the FORESIGHT INSTITUTE.
>We still know very little about the brain and it's various functions.
No, we know quite a lot about the brain. My grandfather founded the department of neurosurgery at Jefferson Medical College/Hospital in Philadelphia, and since watching my first interorbital tumor removal at age 16, I have had a front-row seat on this subject for over 35 years now. We know A LOT about brains now. There's more to know, but it's hardly the overwhelming subject it used to be. Electronic neural nets which mimic the human brain, and which will exceed its computational abilities, are a given. Like it or lump it.
>It cannot be mapped, copied or faxed, and even if it could, it doesn't mean that "consciousness" can be captured, much less spirituality.
It CAN be mapped, but only destructively at this time. Resolution is increasing every month. Non-destructive mapping will follow. Thus it CAN be copied. Now you ARE correct, however: such copies cannot be FAXED, as faxes do not have enough resolution. However, such copies CAN be emailed (but it's better to FTP them), so don't fret.
>I doubt that Mr. Kurzweil even knows what he's talking about when he mentions "spirituality."
And I doubt that you do either.
>And then to mention "God" as well shows how far off his rocker he really is.
Scientists often mention the word "God" as either an exclamation or another way of saying "Infinity."
>The next step of evolution, no doubt, will be when the spiritual machines download God onto their consciousness.
Well this would only be true if people WERE actually God, in whole or in part, and this is not a totally unreasonable idea as, according to people who assign attributes to God, God is powerful enough to do anything -- even pretend to be a human, or for that matter, every human.
>Put it this way: What if, among its many other functions, the brain also acts as a kind of antennae for the Spirit World, much like a tv picks up air waves?
Now you're getting spacey on me. Or mystical. Like Plato and his boys. And you criticize Ray for his extrapolations based upon 100 years of observable facts?! Where is God when I need him/it/her!
>Perhaps there is some part of the brain, not yet discovered, which is like a door to the dimension of thought and feelings.
It is recognized by those dealing with brains, both on a physical level and a psychological level, that the brain may have quantum properties (such as instantaneous computing and tunneling) as well as EMP or radio properties (thus the phenomena of ESP and telepathy). But in all likelihood, what you call "consciousness," is probably nothing "more" than an emergent property of a complex adaptive system (CAS).
>Thus, not knowing this, the nanobots might only copy part of the brain and, thus, the resultant consciousness copied would not at all be spiritual and only vaguely resemble something human.
A ridiculous statement when you consider that consciousness is an emergent property of CAS, because the seat of consciousness would be interior to the cerebral cortex, NOT exterior to same.
>Yet, Mr. Kurzweil dares to mention "spirituality" in the context of "machines," all the while treating the whole subject matter as a materialist would, displaying a profound ignorance of the relationship of consciousness to the sub-consciousness or to the Spirit World, as if it could be savagely abstracted from its context, dissected and treated as a thing in itself.
Mr. Kurzweil used the word "spirituality" as a METAPHOR for the possible transcendent abilities machines may have in the future. He wasn't being LITERAL, numbskull :)
>Finally, Mr. Kurzweil dismisses SETI using a sort of twisted logic that extra-terrestrials couldn't possibly exist since they would have developed this same fantasy technology and, thus, would have contacted us already.
Well, what he actually says is that superintelligence would have had the time to develop by now, but seeing as they would not need to be on any mission to confiscate our material resources, they would not need to bring spaceships; instead, they could travel as microscopic entities on missions of perception and discovery. I happen to agree with Ray on this, and this is one of the most probable answers to the Fermi Paradox: "Where are they?"
>However, has he considered that perhaps they haven't contacted us already since the technology he so firmly convinced will develop in the near future, doesn't exist anywhere in the universe, never has and never will.
A definite possibility. Nanotech MAY be impossible, as Robert Zubrin indicates in ENTERING SPACE. Thus the closest thing the universe has been able to devise to nanotechnology may be bacteria (and/or possibly virus).
>Then again, who's to say that they haven't contacted us. It depends on your explanation of "contact." Some say they already have, it just hasn't been made "official" yet.
More than likely: a) extraterrestrial intelligence DOES exist in the galaxy; b) it is fully aware of our existence and HAS been here for some time; c) it is practicing a policy of non-intervention until we reach our Singularity; d) it will manifest thereafter in non-hostile terms.
>Anyway, it seems that the monster dreams of Dr. Frankenstein live on. I mean, why are some scientists so eager to pursue this technology in the first place? It's the logical conclusion of techno-extrapolism. Their techno-optimism is a form of idealism and rather than face the reality of the overwhelming problems that so-called "progress" of technology and science has brought to the world, they persist in believing that technology will solve these problems; thus, the powers that be, who represent a small portion of the world's population, can continue to ravage the Earth and suck her resources dry while, through the various forms of pollution, creating a hell on Earth, that will be unfit for future generations.
Technology has brought about far more benefits for human civilization than it has brought liabilities; otherwise you would not be communicating with me on your computer. Are there messes to clean up? Yes. Can we do it? Yes. Are scientists evil monsters? No (but they do come out at night... mostly :). Scientists are just like you and me, manifesting our curiosity and awe of existence in different ways. The only powers-that-be that you need worry about are ill-informed or malicious politicians and businessmen who attempt to BUY up scientists to do their ill will, or to keep them in power when they don't deserve to be. Such a group would be, of course, those connected with the Federal Reserve, for instance.
>But that's okay. They've found a way around the problem. Just redefine what is human, that's all. Recreate humanity in the image of our machines and then it won't be necessary to worry about organic existence.
Hey, we used to be fish, and then rat-like creatures. Remember? Or can't you confront that, because you have to relegate human evolution to some deity's magic wand? Homo sapiens has been evolving for millions of years by biological means. This evolution will eventually continue by technological means. The intelligence and database of all human thought will eventually move to a different substrate: SILICON rather than MEAT. The data patterns representing intelligence will stay similar and be augmented with alien intelligence (patterns in data that cannot be recognized by humans, yet produce results that cannot be arrived at by any other means). Get used to it. No one's "redefining" what is human to sidestep monsters. Please. You've been watching too many Hollywood movies. We're trying to figure out if or when we should give control over to machines someday.
>The machines will simply "download" human consciousness and spirituality to preserve the human legacy.
No. You don't "download" human consciousness into a machine. You UPLOAD the neural net's patterns found in the brain to a machine that has a higher data rate than meat. You UPLOAD data to a substrate, a machine, of SUPERIOR computational ability, not one of inferior computational ability. You DOWNLOAD to machines of inferior computational ability (i.e., from your ISP's server you download data to your home computer). And lastly, as you UPLOAD the pattern of the neural net, you upload the "consciousness" ipso facto, because remember, "consciousness" is nothing "more" than an emergent property of a CAS, the computational agents being the results of experience embedded in neurons, axons, and dendrites. This issue of consciousness is over-rated. Remember, the philosophers of yesteryear ragged on and on about consciousness because there was MUCH missing in their understanding of computing and complex systems. These old dogs didn't know one 1/1000th of what we know today, so forget their memes of consciousness; they have limited workability today. You have to think in terms of machine intelligence, which won't be anything LIKE human intelligence, and yes, machines will have their OWN type of "consciousness," if you insist on keeping that dusty old word around (which simply means being aware that one is aware).
>Don't worry. Humankind won't perish. It will simply "evolve" into a new form of existence which will not be so sensitive to environmental degradation.
Hey, LIFE created the environment that this planet has in the first place. Do you think it was just ready-made? No, life BUILDS environments, it builds atmospheres. That's what life does. . . it's an ENVIRONMENT BUILDER. It takes atoms on the atomic chart, groups some of them into highly organized units (known as animals and plants) and then attacks all the rest of the atoms with a vengeance. As a result, both sets of atoms change somewhat -- they *evolve* -- into a natural equilibrium. Get it? It's still happening now. So what. Get over it. Every time the environment slugs you -- you change too. You are evolving right under your own nose.
>It's still survival of the fittest (fattest?), "nature."
No, it's the fact that certain grouped atoms are able to survive better than other grouped atoms and thus, replicate, and persist throughout time.
>So, the majority who don't make it, who aren't financially able to preserve their consciousness, are simply the "unfortunate" who are "unfit" for survival. You can't argue with nature, can you?
Basically true. You either become more able or you perish. Very simple. Anyone who wants can come here and read and learn and then take this knowledge and apply it so that they are "fortunate" and quite "fit" for survival. And if some can't, well hey, that's why most of us are here on this Mind-X -- to put our heads together so we can lend a helping hand to our fellows. Why not.
>It's called "Post-modern Techno-Darwinism."
Well you can have all the fun you want with labels. Scientists are more interested in figuring out HOW things work and then using such discoveries to invent new tools that enrich us all.
James Jaeger
>Dennis Morgan
>Korea
21st Century Schizoid Man
My, my, Mr. Jaeger, what sharp, metallic teeth you have. Sharp enough to shred all organic arguments for natural human evolution, no less. What "meat" could withstand such irrefutable, reductionist, chop-chop logic? No doubt, your futuristic frankensteinian machine-gods would be proud of you, that is, if they would be able to "feel" proud (an organic, inferior, all-too-human emotion that has no place in the schizoid machine consciousness of your "singular" future).
Oh, yes, excuse me for even inferring that the science ideology of progress is the modern-day religion whose priests are the scientist-magicians. A 50,000-year history of science? No joke? Wow! I'm impressed! That's almost as long as religion, isn't it? Of course, it's of no consequence that for 99% of that time there was very little distinction between the two.
Nevertheless, we believe in every word of the scientist-magicians since they speak with such irrefutable logic and absolute conviction about the "laws" of the universe which, like the human brain, has all been "mapped out" for our consumption.
Never mind that their technological toys have probably engineered the extinction of all organic life within the next 100 years. Never mind that they're now so cleverly engineering an escape valve to guarantee the survival of homo-consumptus, so that we might continue in our pursuit of consumer happiness, with everlasting consumptive life thrown in to boot. This form of 21st-century religion leaves no illusions unturned, doesn't it?
Who said we should give up on our illusion of spirituality? Our technological optimism provides for all illusions. There's nothing that can't be "uploaded" into the neural networks of silicon, even our pathetic, "over-rated," consumptive consciousness which is, after all, "... nothing 'more' than an emergent property of CAS" and "interior to the cerebral cortex." Oh, really? Well, why don't you point it out to us, Mr. Materialist? Show us exactly where in the interior of the cerebral cortex we can pinpoint consciousness on your fantastic map. I suppose, no doubt, you can find the subconsciousness there too, and even spirituality? You can't, can you?
And if you can't find it, you can't upload it, can you?
Of course, you won't have us lose faith. You'll go on with the "just give science a little more time" song and dance routine. "Time, time is all we need. Then we'll explain everything. We have all the answers in our materialist conception of consciousness and spirituality. Don't worry, our fantastic little futuristic 'nanobots' will find it."
Meanwhile, who cares about real organic existence and real human evolution? Virtual existence with virtual science will satisfy all illusions for homo-consumptus.
There's at least one more problem that hasn't been brought up yet. What will he/she/it consume? How will you be able to satisfy that drive, that point of manipulation?
Finally, don't moan about using such terms as "consciousness," "spirituality," and "God." If you recall, I'm not the one who first introduced these terms into this dialogue. I strongly objected to the usage of these terms myself because, in my opinion, they are areas outside the scope of the science in question. That's, in fact, the main point of my whole argument.
Also, don't try to reinvent definitions for them. You won't get away with that so easily. Controversial or not, they do have a history. Those "old dogs" are still around, and I doubt very seriously that you know more about consciousness than they do. You just have more technological sophistries to hide behind, that's all.
Dennis Morgan
Re: Spiritual Machines: the Logical Conclusion of Techno-extrapolism
"Mr. Kurzweil is presumptuous with his techno-optimism. Nanobots do not even exist except in his virtual imagination. We still know very little about the brain and its various functions. It cannot be mapped, copied or faxed, and even if it could, it doesn't mean that "consciousness" can be captured, much less spirituality."
Ah, here we go again! It's comments like this that make me dismiss the ramblings of you "skeptics" as the crypto-vitalistic bullshit that it is.
"Put it this way: What if, among its many other functions, the brain also acts as a kind of antenna for the Spirit World, much like a TV picks up airwaves? Perhaps there is some part of the brain, not yet discovered, which is like a door to the dimension of thought and feelings. Thus, not knowing this, the nanobots might only copy part of the brain and, thus, the resultant consciousness copied would not at all be spiritual and only vaguely resemble something human. Yet, Mr. Kurzweil dares to mention "spirituality" in the context of "machines," all the while treating the whole subject matter as a materialist would, displaying a profound ignorance of the relationship of consciousness to the sub-consciousness or to the Spirit World, as if it could be savagely abstracted from its context, dissected and treated as a thing in itself."
See? Here's that grade A bullshit again. Just what kind of "skeptic" are you exactly? You are "skeptical" of plain extrapolation, yet not of undiscovered brain structures that access the REAL FANTASY WORLD of your "other" dimension. I suppose Uri Geller and the like access this "realm" of yours that you seem to believe in so much.
But wait, dynamorg, I've been pulling your leg the whole time! I believe in this sacred realm of yours too! In fact, if you give me five thousand dollars I will prove the existence of this realm to you. I have been studying in the Far East and have become quite adept at channeling the dead and levitating. If you give me your contact information I can schedule a seance with you and we can speak to some ancient Atlanteans. Sarmchulla, the Atlantean I most often channel, will put to rest all that nonsense of Kurzweil's so that the rest of humanity can get in touch with its real potential. Soon we'll all be engaged in astral projection and communication with the minds of our household pets. That will be the REAL Singularity!! Thank goodness for skeptics like you and me; maybe we can put a stop to all that materialistic nonsense of the Transhumanists. Then, with our chakras aligned, we can usher in the Age of Aquarius. Indeed, you and I are the true skeptics! Those who extrapolate based only on materialistic thought and the dubious findings of "science" are the real nut cases!
<END SARCASM>
Seriously, why is it that so much of the criticism of Kurzweil comes from this new-age bullshit perspective? Gelernter, John Searle, and Jaron Lanier all have crap like this in the back of their minds when they critique strong AI, though you do have to read between the lines; obviously, they won't just come out and say it. This is really starting to piss me off. I've tried to find reasoned critiques of the singularity concept to better challenge my own ideas, but everywhere I turn I see excrement posing as argument. They like to paint transhumanism as a religion, but they are the ones willing, eager, to believe in all the mystic mumbo-jumbo that religion is founded upon; i.e., souls and the like. And that's what these arguments are: disguised arguments for the soul. The "Mysterious Mechanisms of Consciousness" are to the soul what the "Intelligent Designer" of intelligent design is to God!
Re: The Singularity is Near
Dear Mr. Kurzweil:
I was extremely impressed by the clarity of your thought and expression in the precis of The Singularity is Near. I am also impressed by your breadth of knowledge and your ability to see past old ways of thinking as you amalgamate that knowledge into new ways of seeing interrelationships. I believe that, given your basic assumptions, your analysis and predictions are exactly on target.
I am puzzled, however, by your apparently unexamined reliance on the increasingly challenged principle of Darwinian evolution as the starting point for your argument. I tried to read your precis with an open mind, and it doesn't seem to me that biological macroevolution is essential to your conclusions. But it is disconcerting nevertheless to see it referred to repeatedly as an unquestioned tenet of faith.
The work of biochemist Michael J. Behe (Darwin's Black Box) and mathematician/philosopher William A. Dembski (The Design Inference, Intelligent Design), among others, has the biological establishment in full battle mode defending its fatally threatened Darwinian paradigm. Behe conclusively demonstrates the irreducible complexity of even the simplest of biological systems, clearly showing the impossibility of their evolutionary development, and Dembski uses information theory to detect scientific evidence for intelligent design in nature.
If you have not yet read these men's work, you need to. I would be fascinated to read your evaluation of the implications of the evidence they have marshaled for an Intelligent Designer.
You wrote of 'God' as representative of man's highest aspirations. But I found no recognition of God as an independent entity with any role to play in the affairs of mankind. You ascribe the coming Singularity to the '(double) exponential growth' of man's computational capabilities and say that this event 'will be no less important than the evolution of human intelligence itself some thousands of centuries ago.'
I submit that the explanation of human intelligence has nothing whatsoever to do with evolution as described by Darwin and everything to do with the creative hand of God, the God of Abraham, Isaac and Jacob, made flesh in Jesus of Nazareth, whose promised return bears uncanny resemblances in some respects to your predicted, near-imminent Singularity. And your Singularity without Christ's simultaneous return I fear would in fact leave us vulnerable to the destructive forces we see becoming ever more prevalent in our post-Christian world.
It's chilling, for example, to imagine what Bill or Hillary or their henchmen would have done if they had had several quadrillion or so programmable nanobots at their disposal. What is our guarantee that such amoral people will never again grasp the levers of national or international power? I believe that, given the existing state of our society, it is inevitable that such people will someday come to power again here and, in fact, are now in power in many parts of the world.
I, too, am an optimist. But my optimism is based on the great drama that has been laid out for us. This drama tells of an infinitely intelligent, knowledgeable, powerful and loving God who created the cosmos and who sent his son to earth to show us that he loves us, promising that he will return again to earth to provide the final act of the drama, which I believe could very well coincide with (or even precede) the Singularity you describe. Intellectually satisfying accounts of this great drama may be found in the works of C. S. Lewis (Mere Christianity, The Abolition of Man, The Weight of Glory), Dorothy Sayers (The Whimsical Christian, Mind of the Maker) and G. K. Chesterton (The Everlasting Man, Orthodoxy, Heretics), among others.
If that drama is not true, however, then I must fear along with Bill Joy. (What an ironic name for one so full of despair.)
Cordially yours,
I. Jerome Kenagy
Another perspective on SETI
"It is odd that we find the cosmos so silent. Where is everybody? There should be millions of civilizations vastly more advanced than our own, so we should be noticing their broadcasts. A sufficiently advanced civilization would not be likely to restrict its broadcasts to subtle signals on obscure frequencies. Why are they so silent, and so shy?"
I disagree that we are the first to reach this point of development. The odds are so overwhelming that there simply must have been other civilizations that came before.
1) Let's assume that everything pretty much goes exactly like you expect. We pass the Singularity and start spreading out at the maximum possible rate. It is quite possible that such an advanced civilization may agree to leave systems with fledgling life alone. After all, there are vast quantities of solar systems out there, so the cost of excluding one in a million is negligible.
2) It is possible that post-Singularity alternative forms of communication are discovered and used, thus causing SETI to fail.
3) An advanced civilization that used wireless communications extensively would be very unlikely to use extremely powerful transmitters. As our wireless density goes up, the amount of emitted power we use is going down. If we are to have wireless LANs of nanobots it is likely that this would continue further. These low power transmissions would be incredibly difficult, if not impossible, for SETI to detect over the background noise.
4) What I believe to be the most likely reason: successfully transitioning through the Singularity is extremely difficult. As you pointed out, horribly dangerous technology will become commonly available. It could be that millions of civilizations have tried, and the vast majority failed, ending in extinction. This is an ugly, but very real, possibility.
Take 4 as the primary cause, add in a lot of 3 and a little bit of 1 and I believe you have a reasonable explanation for the silence SETI is finding.
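Higgins's reasoning is essentially a Drake-style product of filters. As a hedged sketch (every parameter value below is an illustrative assumption, not data, and the function name is mine), one can see how a small "Singularity survival" factor from point 4, combined with point 3's low detectability, collapses an enormous expected population toward the silence SETI observes:

```python
def expected_civilizations(n_stars=4e11,        # stars in the galaxy (assumed)
                           f_planets=0.5,       # fraction with planets (assumed)
                           f_life=0.1,          # ...of those that develop life (assumed)
                           f_intelligent=0.01,  # ...that develop intelligence (assumed)
                           f_detectable=0.1,    # ...detectable by SETI (points 2 and 3)
                           f_survive=1.0):      # ...that survive the Singularity (point 4)
    """Drake-style product of filters; the last two factors encode
    Higgins's explanations 2-4 for SETI's silence."""
    return (n_stars * f_planets * f_life * f_intelligent
            * f_detectable * f_survive)
```

With these made-up numbers, `f_survive=1.0` leaves twenty million candidate civilizations, while `f_survive=1e-4` leaves only two thousand; shrink `f_detectable` as well, per point 3, and the expected count of audible neighbors approaches zero.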
James Higgins
Re: The Singularity is Near
Some thoughts I had while reading Spiritual Machines, this precis and others.
Uploading me and the relative death
Well, for now I don't feel comfortable imagining the possibility of undergoing a kind of "turn off bio me here, turn on non-bio me there". But the interesting thing is that I don't see that much trouble in doing this in small steps, as was also suggested by Kurzweil. Although I concede that from the objective point of view there seems to be no difference between the two approaches, from my subjective point of view the gut feeling is that the first option (one-step port) equals the death of the old me, while the second option (multiple tiny steps) is OK, since the old me gradually becomes the new me.
If you come to think of it, the death process should be considered more of a continuous analog value than a binary "alive or dead" thing. I think that while we live we go through a constant process of "tiny deaths", with the "last minute me" giving place to the "this minute me" as my neural states change.
What happens is that we feel more comfortable with a certain rate of change when it comes to ourselves, to our minds. So we die very little every day. When we face an exceptional experience (good or bad), we die more, as more of our neural states change. So death becomes change. The maximum value of this analog death is what we face these days when someone's biological body ceases to function: you go from a moment of very high complexity (a working human mind/brain) to a moment of extremely low complexity (a corpse).
Following the mind download trend, we might see a future where people (the non-bio ones) may die with different "death values", as more or less fresh backups are restarted.
Another thought about this death thing is: are you sure that you will come alive after sleeping tonight? I mean, very probably (and hopefully) you'll wake up tomorrow, but how can you be sure that the tomorrow-you will be you? Surely he/she will behave like you, and from the objective point of view it is you. But how can one be sure that after a period of unconsciousness the emerging person is the same, subjectively speaking? Especially when we know that many mind states are changed during sleep. Some of you may say that we know we are the same ones and this is the way it has always happened. But I say that we just have faith in it and we simply don't think about it.
Anyway, today humankind is only able to upload (copy) consciousness along the time axis (you yesterday, you now, you tomorrow). Sometime in the future we may be able to copy consciousness across the three space dimensions.
Since today we cannot move back in time, we don't face the paradox of meeting the other me that lived yesterday. Somehow I think that the time travel "kill yourself yesterday" paradox is linked to the "upload me but don't discard the old me" paradox.
The interesting thing is that on the time axis our consciousness doesn't have the option of not being copied to the next future time frame, so we don't bother. But when we think of being copied in space (enter this machine here and wake up there in a split second), we shiver.
Anyway, after all this I am starting to think that when you upload someone you must be sure that the whole process is done either very slowly (the tiny-steps method), so that you actually evolve onto the new media and the process is not very different from the one we face as time passes, or so fast and destructively that the uploading (including the restart of the mind on the new media) finishes before the brain changes a single state. If you let the brain change states before the process is completed, you will have a degree of death when the old media is destroyed. And if you leave the old-media mind conscious afterwards, then you'll have an additional mind to upload (although very similar, at least in the very first moments, to the previously uploaded one).
Just my thoughts,
Vicente Silveira
The Singularity has already happened. Six Times!
Kurzweil has apparently forgotten a basic fact of mathematics. ALL EXPONENTIALS LOOK ALIKE. They don't have singularities, they just go up and up and up, faster and faster and faster, forever and ever. There's no point short of the infinite end of the universe when anything magical happens. Exponential growth has no singularities; multiplied exponential growth still has no singularities. You need a truly hyperexponential function such as tangent to get singularities that occur in a finite time from now, and Kurzweil offers no evidence that such functions are in operation. (Tangent suggests a cyclic view of history with recurring catastrophes such as mass extinctions, which we don't want to get into here.)
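Mulligan's mathematical distinction can be made concrete. In this sketch (the function names are mine, purely for illustration), pure exponential growth dx/dt = r*x stays finite at every finite t, while the superexponential dx/dt = x^2 has a hyperbolic solution that diverges at the finite time t* = 1/x0, a genuine finite-time singularity:

```python
import math

def exponential(x0, r, t):
    """dx/dt = r*x  ->  x(t) = x0 * e^(r*t): finite for every finite t."""
    return x0 * math.exp(r * t)

def hyperbolic(x0, t):
    """dx/dt = x^2  ->  x(t) = x0 / (1 - x0*t): diverges as t -> 1/x0."""
    return x0 / (1 - x0 * t)

def blowup_time(x0):
    """The finite time at which the hyperbolic solution becomes singular."""
    return 1.0 / x0
```

However large r is, `exponential(x0, r, t)` is finite at any finite t; even a double exponential like e^(e^t) belongs to the same singularity-free class, which is exactly Mulligan's point. Only curves of the second kind, like `hyperbolic` or the tangent function he mentions, blow up at a finite date.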
This is not to say that singularities can't happen, only that Kurzweil has misidentified them and how they work. Singularities occur when a new ecological link between a species and its environment is formed. Neither the timing nor the effects of closing a switch in a circuit can be predicted by extrapolating from its unclosed state, no matter if the extrapolation is linear, exponential, or anything else.
Ignoring the singularities of biological history, in human history such new linkages occurred with:
1. the discovery of language
2. the discovery of agriculture
3. the invention of writing
4. the invention of printing
5. the invention of the self-powered machine
6. the invention of the general-purpose digital computer
There have been hundreds of books written about these breakpoints in the progress of history, but Kurzweil doesn't seem to be aware of any of them. What's important to the singularity hypothesis is that (a) there have been radical, qualitative changes in the technological aspects of human psyche and culture that can be learned from, (b) that ordinary, day-to-day life still seems ordinary and day-to-day, and (c) singularities don't happen overnight -- much of humanity today, perhaps even a majority, still lives two singularities back from Kurzweil's world, in a pre-motorized one without running water, where advanced technology is a bicycle.
The singularity that Kurzweil feels in his heart, but doesn't really understand is
7. the emergence of autonomous, self-guided machines
This break is already well underway, with friendly robots already mowing lawns, and household cleaners just around the corner. It has nothing to do with nanotechnology or virtual reality, and is far more important.
A further singularity that Kurzweil doesn't really recognize is
8. the mastery of developmental biology
It's within his extrapolative vision to see that artificial eyes as good as natural ones will be available within 10-20 years. This is personally relevant to me because I suffer from the progressive blindness of glaucoma, and my vision will be fully deteriorated sometime during that period. But what I would prefer is to just get a new biological eye, one without the intraocular pressure-relief defect that destroys retinas.
But developmental biotechnology is a third-generation biotechnology, following after genomics and proteomics. And creating differentiated, structured organs is far more difficult than creating a single cell type from a single stem cell line. In fact, creating a differentiated organ customized just for me is combinatorially more difficult than making a single insulin-secreting cell from a stem cell. This combinatorial complexity is an exponential SLOWDOWN in progress, fighting against the exponential growth of other technologies.
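The "combinatorial barrier versus exponential promoter" tension is easy to illustrate numerically. A minimal sketch (the function name is mine, and factorial growth is used here only as a stand-in for combinatorial difficulty): no matter how large the base of the exponential, the combinatorial term eventually overtakes it:

```python
import math

def first_crossover(base):
    """Smallest n at which n! (a proxy for combinatorial cost) exceeds
    base**n (a proxy for exponentially growing capability). Such an n
    exists for every base, since n! grows faster than any exponential."""
    n = 1
    while math.factorial(n) <= base ** n:
        n += 1
    return n
```

For base 2 the crossover comes at n = 4; even for base 10 it comes at n = 25. A faster exponential only delays the crossover, it never removes it, which is the kind of slowdown Mulligan argues the promoters of growth must fight.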
Kurzweil is not so foolish as to explicitly predict the resolution of the P vs. NP problem in computer science, but there are as many exponentially-intensifying barriers to growth as there are exponentially-intensifying promoters of growth. Dismissing the barrier factors and assuming the promoter factors will inevitably prevail is what makes his thesis ultimately simple-minded and boring. The fact that we don't know which type of factor is dominant, and in fact can't know in advance, is what makes life interesting.
Cheers,
- Chad Mulligan
Oxen to Satellite TV
I think a purely mathematical singularity is not intended, although I like your angle. The 'Singularity' is more analogous to a black-hole event horizon. The rate of change becomes so steep that the predictive horizon disappears, or trends based on current rates of progress and potential become too divergent due to the conflux of new paradigms. In short, stuff is going to start to change so fast, we justa can't tell what's agonna happen!
Old barriers to progress are sidestepped with new paradigms. When trying to predict when we will overcome these obstacles, we do better by using exponential growth rates. Since so much progress is occurring in parallel on so many fronts, interdisciplinary approaches will emerge to pick up where one old paradigm runs out. This is the case in computing.
Of course, the rate of advancement is measured at the cutting edge of technology, not the trailing edge. There continue to be niches for all kinds of life and lifestyles. There are still quite a lot of one-celled organisms around! Nor must a given part of humanity recapitulate the history of that progress; many towns in India pump their water by oxen and watch satellite TV.
There will be more diversity as time goes on. I just hope the future will allow the trailing edge of humanity to feel the benefit of going from oxen to satellite TV, and take it in stride. I pledge my efforts to make it so easy, for I hope to be a part of progress.
Re: The Singularity is Near
In the beginning... No, wait, that line was already used in one of the versions of The Book, but something got lost in the translation (that's the problem with using indirect methods of communication with uneducated nomadic shepherds...).
How should We put this so that your puny meat brains can grasp it? Okay, let's try this...
In the beginning, We were messing about in the lab, trying to create a new Universe with the right conditions to support star formation, planets and life (We needed just six numbers, but they had to be the right numbers for the new Universe to be anthropic...). Before the new baby Universe pinched off and became a completely independent entity in the greater Metaverse, We injected Our (soul, spirit, essence, intelligence, knowledge base) across the rapidly closing event horizon, and came along for the ride. After all, Our old Universe was approaching its final heat death after googols of eons, and even We, with Our complete mastery of everything, could see that the end was near (even at a greatly reduced clock speed...).
We watched (and occasionally "helped") the evolution of Our new Universe, and were greatly pleased to see that the laws of physics and chemistry that Our six numbers set in motion would permit - nay, demand - the evolution of life and, ultimately, intelligence. We looked forward to the time when We would no longer be alone.
By the way, you are not alone in the Universe. We are always with you, but even if you do not believe in Us, there are plenty of neighbors out there for you to play with when you climb out of your gravity well. Light speed and interstellar distances are very useful for isolating the experiments from each other, until the experiments are successful. Your particular experiment shows promise, but you still face some serious show-stoppers. The main impediment, in case you were wondering, is the nature of human nature. (For details, see The Book.)
Unfortunately, most of the experiments have not been successful so far. Nuclear war, grey goo, asteroid impacts, bad memes, anti-matter accidents, and just plain old jackrabbits-in-Australia overpopulation disasters... You (literally) would not believe the many ways things can go Terribly Wrong. Well, some of you can. Like Larry Niven said, the dinosaurs are extinct because they did not have a sufficiently advanced space program. Of course, given the nature of lizard nature, it would have been only a matter of time before they did themselves in, and something like humans would have come along sooner or later anyway.
But We digress...
The Universe is teeming with Life. Most of it is microscopic, of course, but there are quite a few planets full of more highly organized life forms. Some of it has moved out of oceans, and onto dry land, where it can eventually discover fire and see the stars and get moving down the road to the Singularity. A smaller, but still significant, number of species have actually gotten to the Singularity, but so far none of them have come out the other side, at least as far as We can tell (and since We are, by definition, all-seeing and all-knowing, We would have noticed...).
There were a few who came real close... Here is a recent example from your own Galaxy. Once upon a time there was a race of Godlike beings. Let's call them the Olympians. They were worshipped as real gods by mortals on several planets, but they were actually only a bunch of people with some really advanced technology (the kind that is indistinguishable from you-know-what). They feuded and squabbled and played nasty little games with the still-mortal natives on many worlds. They may have had indestructible, immortal bodies with Godlike powers, but they were still meat-individuals on the inside, where it really counted. And that was their undoing...
So, where are they now? We don't know. They sort of disappeared a couple thousand years ago. What happened to them? Well, as near as We can figure, their human nature finally did them in. They self-destructed, torn apart by the Beast Within, or some such rot... Their memory lives on, mostly as legends, but everyone now realizes that they were not real Gods, merely poser-gods. They may have gotten further through the Singularity than many others in this Universe, but close only counts in horseshoes and hydrogen bombs. The Singularity is not a point, but a process, as you will soon discover.
We've seen this movie a million times in your Galaxy alone, and quite frankly We are starting to get a little concerned that We will be alone out here forever. So, We recently instituted a plan to change the nature of human nature (as well as the nature of the creatures that inhabit billions of other worlds), to help you become more like Us and make it past the pitfalls ahead (by recently, We mean within the last several millennia).
The plan was simple in concept, but Devilishly difficult in practice. Essentially, We tried to inject a new meme into your culture, to help you solve a fundamental design problem. The problem, simply stated, is that the attributes that helped you become the dominant species on your planet are precisely the traits that will destroy you as you enter the Singularity.
For a variety of reasons We decided that it would be best to not simply take over your minds, although We certainly could have, with nanobots, for example. We decided to use more indirect methods, due to the nature of what We were trying to do. You need free will, but you also need to make the right choice...
Burning bushes and booming voices from the sky are ridiculously easy to pull off (even for beings without Godlike powers), and they sure grab the attention of the locals, but the problems of misinterpretation and mutation of the meme keep cropping up, and We keep having to come back with new patches...
Lately, though, We have had some luck with Our new "Son Of God" program, which does involve the use of advanced (for you) nanotechnology to impregnate a local female, turn water into wine, cure diseases, and reanimate the dead. We keep hoping that a Good Example will finally convince you to change your ways before your sinful nature leads you to destruction.
There are many variations on the new meme, of course, but the essential message is this:
Do unto others as you would have them do unto you.
Competition is natural, and good up to a point (that point usually comes when a species figures out how to turn E=mc^2 into working hardware). The whole concept of Sins of the Flesh comes out of the fact that you are all meat-based individuals rather than a group mind like We are. Greed and fear are necessary for survival, but they will cause you to destroy yourselves with your new toys if you do not learn to give up your sinful ways. Real cooperation is the only hope for salvation. Remember the old gods who aren't around any more? They were just like you, once. They just never figured out how to become more like Us.
Let Us be clear on this point. You will not be able to get through the Singularity and join Us unless and until you learn to give up your worldly goods and desires and learn to willingly give your second coat to your neighbor who has none. And being wealthy enough to give away most of your belongings while still remaining comfy does not count. You have to be willing to give it ALL up. You will not achieve eternal life unless you drop all this meat-based striving to have more wealth/power/status at other peoples' expense. Why? You don't need a cubic meter of Buckytube rod-logic to figure it out. It's real simple...
If you are going to make it past the Singularity, you cannot have competition any more. You must learn to cooperate, to share, to give up your I in favor of the We. You are all going to be sharing all of your knowledge and memories anyway (Oh, you think you can keep a secret from the network? From Us? From all of the other yourselves that will be sharing it with you? From net viruses that are a billion orders of magnitude more intelligent than anything Vernor Vinge ever had nightmares about? Yeah, right...)
The kind of Surrender that is necessary cannot be coerced. It has to be a choice made freely by individuals with free will, or else it doesn't really work. There are several reasons why this must be. Think about it... Would you want to share thought processes and memories with killers or rapists or corporate raiders? Would you want to spend the rest of Eternity sharing painful memories with all of your ex-lovers on whom you have cheated? Forgiveness is not optional because forgetting is not possible. But it has to be Real forgiveness. You cannot fake it when you are One with God.
You think competition and evolution are rough now? Just wait until it all speeds up a few zillion percent. It will be literally impossible to maintain your competitive individuality in the environment of the network. And if you don't join the network, you'll be left behind, with fatal results... You have to get right with Us (as well as with yourselves) or else you simply cannot have eternal life, uploaded, downloaded, online or backed up. Many have tried, but no one has succeeded (yet).
You might think of Us as a very exclusive club with strict, but simple, membership standards.
We certainly don't want the unenlightened, the liars, cheaters, and thieves, to be part of Us. If we allowed the unrepentant sinners to become part of Us, then in short order We would be just like them, with disastrous (and self-destructive) results. As far as We are concerned, all of those people can go to the Devil (but that is another story...).
You have to learn to share completely and totally, not because We tell you to, but because you will have come to the understanding that complete and total sharing is the right (and only) thing to do. In the trans-Singularity environment that you will soon be entering, total and absolute cooperation will be necessary, whether you like it or not. And that means sharing Everything. No more trying to be the richest man in the Solar System. No more alpha males. No more meat games. No more secrets lurking in isolated nodes. No more lies. No more sin.
Joining together with Us, or creating a new God-like entity just like Us, is going to be different from anything you can imagine. You may run subsets of yourselves in parallel for experimental purposes, but if you think you are going to be able to maintain individual personas, with all that that entails, well... if you try, you're going to end up like poor old Satan. You really don't want to go there.
Competition works for a while, if what you're doing is building empires or railroads. However, when you have Godlike powers, competition leads to suicide on an extremely large scale. We know. We still have memories of what it was like when We were just getting started.
But fear not. It isn't like surrendering to a bully in order to be allowed to live longer. It's more like surrendering to a lover that you don't want to live without. And besides, an individual (even one with an immortal body and a brain the size of a planet) is simply no match for a "supreme being" like Us (Our brain is the size of the whole Universe). You will soon have to make a choice.
You can become like Us, or you can go to the Devil. That seems to be the only available choice...
***
Okay, the little voice in my head told me to write that. At least I think that's what it said. It didn't use real words, just a bunch of jumbled images. Maybe God injected me with an army of nanobots that communicate by massaging my axons and dendrites. Maybe I had too many magic mushrooms, like Paul on the road to Damascus, but it sure SEEMED real. Yeah, it was probably just some kind of hallucination... But who knows... it COULD be true...
Robert Lovell
robert_lovell@ameritech.net
Don't forget to enjoy.
Consciousness & Spirituality
(hang on to those reduced and compromised titles you believers, they are the last ones you got)
Topic 1 - Can't consciousness be the result of simply flicking up the 'on' switch of the information-populated brain, thus turning it into a mind (CAS, as mentioned above)? When life (power) starts, functioning processes begin and the machine/mind is in I/O mode. In other words, consciousness = functioning. It's as simple as that.
This said, one must then decide the level at which 'functioning' consciousness is alive and well. Is an insect alive because of its self-direction? Yes. Is a newly developed AI system which operates at an insect level functional? Yes. So is the AI-insect system functioning consciously? If not, is self-realization then the true level of consciousness? What is the level of self-realization? Does the simplicity of mind and its actions dictate this? So can one say that chimps don't know they are? The chimp sees its hand when it is hurt, and then sees its friend's hand as well. It then turns to its mother for love. Wow, but it doesn't know it is there, huh?
So why do we, as humans, constantly place ourselves simultaneously as the 'only beings that know of themselves', yet always second in line to the so-called 'creator' or a 'spiritual side' (same damn thing)? It seems like we still haven't gotten over the hump of our actual pre-placement in the Universe and how we will know more along the way, not right away. Shit, that's half the fun of it.
Let us not forget that 'ENJOYMENT' IS THE CONSTANT GOAL. From the simple farmer to the leaders of the world, everyone has his or her own types of enjoyment whether it be work or play.
Topic 2 - Then take spirituality. Another no-no word that is abused almost as much as the word 'soul.' Both should stand in wait and remain systems of worship for believers and followers while science discovers what the definitions should be and then are.
My point is that science is not handling a vase worth 500 billion containing God himself, which, if broken while walking blindfolded on a bed of boulders, would mean God is no more and we disappear.
No, we can 'muck' all we want with our own heads, poking and prodding until answers emerge. If a God exists, it surely won't punish us for practicing simple ability alterations such as life extension, mental augmentation and space expansion. And if we stumble across the second 'God Spot', the one that actually lets us into the spiritual world, then we can contemplate our next move. But to hold back the probable gains of what we might achieve would be stupid and impossible. I must add that I am in full appreciation of empirical scrutiny and love the debate.
Just look over your shoulder. Yes, we don't fly in our cars and we don't eat pills as meals, to date. However, we've found greater goods such as stem cells and telomeres, the web and computers-for-all. These are far more important, yet go unseen by the general public as achievements, because things of 'today' never do. We pass them on like we already knew of them and expected them to have happened some time ago. It must be a bummer in the science community, I'm sure, to have the general public take in telomere science like it was the release of a new Arnold flick.
So if we can play and pray at the same time, let's do it. Science is willing to build a basic bridge for the lay. The spooky spirit world won't bat an eye and might even chuckle as we go in the wrong direction. And like somebody else said above, if that special someone who knows all (including that I'm not wearing underwear right now) doesn't want us to know about evo-creation, then we won't. Why act detrimental toward a person (R.K.) who is pointing us in 'one' of the directions into the future? Why not debate and support certain aspects instead of slashing and then ignoring a very probable or even slightly probable scenario?
We can all understand that not one person will be correct in their entire scope of what might be, but we need 'our friends' to work together to better our species as a completely sustainable and then completely enjoyable entity. Then we can choose to alter and dig into the outer skin of the Universe and get some of the hard stuff answered. And then that other second skin. And then that other. And then that. And then.....
Word to your mama.
Bobby June
As lay as can be, but paying attention and learning.
Re: The Singularity is Near
Hello all. I'm from Holland, so my English isn't as developed as that of you who read this; I hope you'll excuse me.
We try to approach one universal singularity, but I think that every state of being, like the human being, has its own singularity. Every kind of being is restricted in its own way, and for human beings that restriction seems to be relativity.
Every being is a complete system, as biology shows with the seven characteristics of life. Every system has an equilibrium, the point of its unity and balance. This energetic, non-material substance is a part of every system that exists. This point is not restricted to any kind of relativity, and the time-space principle does not exist in this equilibrium. It is energetic, and because of that it never begins or ends; it only changes its being continuously. The necessity of being comes from the presence of this singularity, all opposites in unity. A black hole is an absolute system, and many of its structures can't be seen or understood by the human brain because of the brain's natural being in relativity. I believe that we are part of such a system, but because we are created by this first cause, which never starts to exist or stops, we have the same characteristics, although we couldn't explain the whole being in a lifetime. But the first singularity is the cause of all its singularities and will configure them all to one singularity that is absolute. When our universe is expanding, as it is doing now, relativity becomes 'relatively' greater because the distance inside is 'growing'. There will never be more than there ever was; the distance is due to relativity, the being of the human. We will never be able to experience the absolute being because of our restrictions in perception; our 'reality' is based on time, space and light speed, which are principles of relativity, never absolute. The speed of light is relative and non-material but within the reach of the human being. But we are a system, like every being. So, that we ARE is because of the presence of our singularity, and every individual singularity IS in equilibrium with the first cause, the first singularity, which is the equilibrium of the whole universe. This way the universe isn't an endless distance; it's a curve, a system in rotation moving around its balance.
This point attracts anything, because it makes every dualistic principle, every non-stable system, stable and absolute, and the attraction is equal in intensity to the speed of light. And from that point the whole of reality is not visible to mankind. We only see the reflection of material. But when a human being moves itself toward a mass billions of times stronger than itself, we see that material is attracted because of the great gravity. But where does the energy go? That is what we can't see; anything beyond that experience is part of the absolute, and the energy will be one with the singularity. But what happens to the awareness of the human being? Does consciousness ever stop? Or is it a particle of the singularity? Everything in this whole universe is caused by this entity, but when we don't have time or space because of the absolute being, what exactly does the presence of the singularity mean, and why is it self-aware?
Re: The Singularity is Near
Hi,
some years ago I did some research in AI and succeeded in programming an AI that was around the level of a three-year-old. I stopped for ethical reasons, as I didn't feel it was right to build an AI more intelligent than humans until we had solved our other big problems.
I was not enough of a technophile to believe that those AIs would solve our problems for us.
I believe there is something that hasn't been considered here.
Well, I believe that there are many of those things.
The brain has many levels of redundancy, perhaps stretching to orders of magnitude. Why, then, do we not see people with much wider variance in ability? If the brain contains so much redundancy, why doesn't the spread in IQ / intelligence vary more? It's surprising that there are not people hundreds of times more intelligent than the average.
One reason is that intelligence doesn't scale linearly with the number of neurons, nor with the number of interconnects,
nor even with "power consumption".
My working definition of intelligence is the following:
Intelligence is the ability to connect things/facts that seem to have nothing in common but are 'active' at the same time (that is, you think of both simultaneously), and to use this new link to come up with new approaches to tackle a problem.
Finally those approaches are tested.
Perhaps with incremental improvements to brain sub-systems, super-intelligent humans may result, but perhaps we'll just have even more redundancy.
There is still another possibility. It might not be possible to build a single machine that is more intelligent than the human brain -- only machines that have a much wider scope, and some that are more specialised. The big problem then is communication. Those machines with the wider scope don't have enough words and understanding for the solutions a more specialised machine has come up with.
We encounter the same problem right now: Experts can build systems/machines/tools so complex that most people don't know how they work. They just use them. The interface between the experts and the users is just a description about how to use that thing. It is hard to even write such descriptions as an expert as you are not aware of the common ground (i.e. words, understanding) you share with those to whom you explain.
A related point is this: will enhancements to the brain give rise only to quantitative changes to human nature, or to qualitative changes too? I count myself as an intelligent person and have interacted with people less intelligent and more intelligent, but it seems that our experiences of life do not vary much. Maybe we can speed things up, but if the improvements are only or mainly quantitative, then as soon as everyone joins this bandwagon we end up where we are today: our relative clock speeds will be equal.
Good point. If everyone's scope is almost equal not much will change.
--
Don't believe everything you read, don't write everything you believe, don't read everything you write. - bergdichter
precis is weak on Evolutionary Ecology implications
(previously sent to Ray on 18 Mar 2001 but posted here for general discussion.)
Ray-
I've been reading your latest book precis (The Singularity is Near).
http://www.kurzweilai.net/meme1/frame.html
Most of your effort I admire, but there are a few issues in your writing
that come from what does not appear to be a sophisticated view of
evolutionary theory (as sophisticated as the work is in other respects).
Both my wife and I studied Ecology and Evolution (E&E) in the SUNY Stony
Brook PhD program (one of the top schools for that at the time) so we
have some knowledge on the subject.
http://life.bio.sunysb.edu/ee/
Here are a few things to think about:
1. There is not necessarily increasing complexity in an evolving system.
Populations of organisms evolve for many reasons such as natural
selection, sexual selection, and random walks. If there is momentary
adaptive value in being simpler, then that path may well be the
evolutionary future for that population or an offshoot of it.
Let me give an example. If you have a parasite and a host, you may have
an evolutionary arms race, with the host population evolving in ways
that resist the parasite, and the parasite evolving new ways that
exploit the host. What is important here is that cycles can occur --
where the host develops a resistance, the parasite develops a way to
overcome it, and then the host loses the resistance because it is now
worthless and otherwise consumes metabolic energy to sustain operation
of it (and even to keep the ability in the DNA), and then the parasite
loses the ability to overcome the resistance because that ability is now
also worthless, and then the host population may again develop a
mutation with the resistance ability and the cycle continues. I'm sure
you could code up a simple genetic algorithm simulation in a weekend
that would show you this -- and there are several existing free
educational software programs that can also demonstrate this.
http://cbs.umn.edu/software/populus.html
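The weekend genetic-algorithm simulation suggested above can be sketched roughly as follows. This is a hypothetical toy, not one of the educational packages linked here; the infection penalty, gene-maintenance costs, mutation rate, and population size are all made-up values, chosen only so that resistance and counter-resistance genes rise and fall in turn.

```python
import random

POP, GENS = 300, 200
INFECTION = 0.3    # fitness penalty for an infected host (arbitrary)
HOST_COST = 0.05   # metabolic cost of carrying the resistance gene (arbitrary)
PAR_COST = 0.05    # cost of carrying the counter-resistance gene (arbitrary)

def next_gen(pop, fitness):
    """Fitness-proportional selection, plus a 1% chance of flipping the gene."""
    weights = [max(fitness(g), 1e-6) for g in pop]
    chosen = random.choices(pop, weights=weights, k=POP)
    return [g ^ (random.random() < 0.01) for g in chosen]

hosts = [random.random() < 0.5 for _ in range(POP)]      # True = resistant
parasites = [random.random() < 0.5 for _ in range(POP)]  # True = overcomes resistance

for gen in range(GENS):
    p_overcome = sum(parasites) / POP
    h_resist = sum(hosts) / POP
    # Resistance only pays against parasites that cannot overcome it,
    # and it always costs energy to maintain -- hence the cycling.
    host_fit = lambda r: 1.0 - INFECTION * (p_overcome if r else 1.0) \
                             - (HOST_COST if r else 0.0)
    # Overcoming only matters against resistant hosts, and it too has a cost.
    par_fit = lambda o: (1.0 if o else 1.0 - h_resist) - (PAR_COST if o else 0.0)
    hosts, parasites = next_gen(hosts, host_fit), next_gen(parasites, par_fit)
    if gen % 25 == 0:
        print(f"gen {gen:3d}  resistant {h_resist:.2f}  overcoming {p_overcome:.2f}")
```

Tracking the two gene frequencies over generations shows the pattern described above: resistance spreads, counter-resistance follows, resistance is then shed as worthless dead weight, and the cycle can begin again.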
2. There is not necessarily an adaptive value to intelligence in a
certain niche -- because intelligence has power, mass, heat-dissipation,
and time costs. For example, consider the Hydra, which is a tiny
multi-tentacled aquatic creature that lives by stinging smaller
organisms like Daphnia and pulling them into its body cavity. It has a
simple neural net it uses to coordinate its feeding behavior. Why
doesn't the hydra have a brain the size of a human? That may sound like
a stupid question, but bear with me. The Hydra could not support the
energy required to operate a brain from its current feeding behavior. It
could not protect the brain from predators. Its mobility would be
impaired by being attached to a brain that large. It would be unable to
reproduce as quickly. Also, the value of a human-sized brain to a hydra
is minimal, because there would be little the brain could accomplish
using the Hydra's few microscopic tentacles, limited sensory apparatus
(no eyes, no ears) and limited mobility choices. Further, the Hydra must
react instantly in its tiny world, and a big brain would take too long
to process the information. So, for the Hydra, a large brain makes no
sense.
There are aquatic creatures with brains as big as or larger than human
brains (dolphins or whales) but they have a very different ecological
niche and a totally different scale and physical structure. And there
are a lot fewer whales and dolphins than Hydra in the universe.
[Added: For that reason, downloaded human AIs may see their storage
and runtime absorbed by more effective "cyber-bacteria" which have
evolved to be effective in the cyber media -- see for example
Tom Ray's "Tierra" simulator.]
3. Chaotic systems (in the Chaos theory sense) like the weather allow
small changes in initial conditions to lead to large differences in
outcomes over time (the so called "butterfly" effect). Because of this,
there may be a fundamental limit to the predictive value of intelligence
for any given size brain, and further, there may be an extreme law of
diminishing returns on increased intelligence. That is, assume you have
a weather computer that may give you reasonably useful forecasts one day
in advance. To get two day forecast with as much accuracy as the one day
forecast, you may need much more data and a much bigger computer. You
don't need a system twice the size -- you may need one a thousand or a
million times the size. Each day further out you want to forecast, the
more data you need and the bigger computer you need.
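This cost structure can be demonstrated with the logistic map, a standard one-line chaotic system (this is an illustrative sketch, not a weather model; the starting point, tolerance, and error magnitudes are arbitrary). Because a small error roughly doubles each step, each extra step of usable forecast demands exponentially better initial measurement, so a fixed multiple of precision buys only a constant number of additional steps.

```python
def horizon(eps, r=4.0, tol=0.1, max_steps=1000):
    """Steps until two logistic-map trajectories starting eps apart differ by tol."""
    x, y = 0.3, 0.3 + eps
    for t in range(max_steps):
        if abs(x - y) > tol:
            return t
        x, y = r * x * (1 - x), r * y * (1 - y)  # the chaotic update rule
    return max_steps

# Improving the initial measurement a thousandfold buys only about ten extra steps.
for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    print(f"initial error {eps:.0e} -> useful horizon {horizon(eps)} steps")
```

The pattern mirrors the weather-computer argument: to forecast twice as far ahead you need not twice the measurement precision but roughly the square of it, and the required data and computation balloon accordingly.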
Do we build the sensory net and weather computers to give us a thousand
day forecast? No. We decide we can live with a certain level of accuracy
-- that it is good enough for the cost, given the diminishing returns on
more extensive predictions. (A rebuttal on weather prediction is to
claim that the weather will be controlled at some point by intelligence
and so will be predictable, as in Alan Kay's adage "the best way to
predict the future is to invent it"...)
What might this mean in a human sense? Perhaps human brains are the size
they are because there isn't too much value in being that much smarter
because the cost of the additional intelligence is outweighed by the
diminishing returns of additional predictive value. For example, some
studies show earlier types of human-like creatures like the Neanderthal
or Cro-Magnon had a larger brain size than present-day humans.
http://www.wsu.edu:8001/vwsu/gened/learn-modules/top_longfor/phychar/culture-humans-2two.html
4. (From my wife's E&E thesis) Given most systems acting intelligently,
there is value to acting "dumb" -- because that avoids the herd effect
and related competition for resources, since the dumb organisms wind up
in less valued situations, but ones with less competition.
So, given these ideas, what is the implication for your writing on
evolution? Some parts are making essentially a religious statement of
how you want to see the universe.
You wrote:
> The Next Step in Evolution and the Purpose of Life
>
> But I regard the freeing of the human mind from its severe
> physical limitations of scope and duration as the necessary
> next step in evolution. Evolution, in my view, represents the
> purpose of life. That is, the purpose of life--and of our
> lives--is to evolve. The Singularity then is not a grave
> danger to be avoided. In my view, this next paradigm shift
> represents the goal of our civilization.
>
> What does it mean to evolve? Evolution moves toward
> greater complexity, greater elegance, greater knowledge,
> greater intelligence, greater beauty, greater creativity, and
> more of other abstract and subtle attributes such as love.
> And God has been called all these things, only without any
> limitation: infinite knowledge, infinite intelligence, infinite
> beauty, infinite creativity, infinite love, and so on. Of
> course, even the accelerating growth of evolution never
> achieves an infinite level, but as it explodes exponentially,
> it certainly moves rapidly in that direction. So evolution
> moves inexorably toward our conception of God, albeit never
> quite reaching this ideal. Thus the freeing of our thinking
> from the severe limitations of its biological form may be
> regarded as an essential spiritual quest.
Speaking as a human being with opinions, there is nothing wrong with the
sentiment that there is a purpose for life. This is as opposed to saying
there is a reason for intelligences to think in terms of purposes. As
long as you are clear that this is your statement of feeling, it's fine
to say this. But evolutionary theory does not provide any sort of proof
for your belief. Everyone who survives pretty much needs a set of
beliefs about the universe that make sense to them and give them a
reason for making life worth living. Such beliefs have adaptive value to
humans regardless of their truthfulness. What you outline is a beautiful
belief. I would like to believe it, and I would probably like others to
believe it. The optimism might lead to a self-fulfilling prophecy --
perhaps by helping people progress beyond overly valuing economic and
military competition. But again, it is essentially a religious
statement. (I also like the sci-fi writer James P. Hogan for his
optimistic philosophy on AI -- such as in his book "The Two Faces of
Tomorrow" -- a work that twenty years later still stands for me as a
classic on this topic.)
Beware that people (usually not well versed in E&E) cite ecology and
evolution all the time to support a belief, but that does not make it a
scientifically valid thing to do. For example, it is common to cite
competition as a given in nature, but Lewis Thomas in "Lives of a Cell"
points out that most successful organisms participate in effectively
collaborative efforts -- like the symbiosis between mitochondria and
eukaryotic cells. Likewise, the fact that something is true in most
species does not mean that humans (given intelligence and culture)
should behave that way -- because if E&E teaches anything, it is that
there is a lot of diversity out there.
The precis you posted, which is otherwise technical and advanced, is
using a technical term "evolution" as it is colloquially often (mis)used
to mean "progress". The two are not the same. And frankly, what is
"progress" for one may be "decay" for another, just as what is "good"
for one may be "evil" for another, as these have to do with individual
goals which may conflict. This weakens your entire argument.
I might go a step further. Because of your essentially "religious"
belief based on a limited view of evolutionary theory, you are ignoring
the obvious issues relating to the diminishing returns of intelligence, or
the adaptive value of "dumber" organisms. Thus, as I pointed out in an
earlier email to you, when you talk of downloading a human-derived AI
into a network, you ignore the fact that that large intelligence may not
be able to compete effectively in the network, in the same way as if one
grafted a human brain onto a tiny Hydra and threw it into a lake it
would not survive. What organisms do survive in a lake? Many, many tiny
things. Maybe a few fish. But the largest number are tiny things like
bacteria, algae, Daphnia and Hydra. By analogy, most of the digital
organisms in a large network will be tiny, and they might rapidly
consume larger creatures or parasitize them. Obviously, you can get big
fish in a lake -- but their numbers are small compared to the numbers of
other smaller organisms.
[An excerpt from the previous email on Spiritual Machines relevant here:
In my opinion, your analysis of the evolutionary dynamics of a world
wide web of downloaded humans is flawed because it ignores fundamental
aspects of ecology and evolution. Specifically, here are two issues
about your conclusion:
a) it assumes humans in a different environment will still act human
with classical human motivations (as opposed to dissolve into an
unrecognizable set of bits or simply locking in a pleasure loop) because
to a large extent environment elicits behavior, and
b) it ignores evolution and its implications in the digital realm
(especially the enhanced pace of evolution in such a network and the
implications for survival).
Of these, the most important is (b).
Evolution is a powerful process. Humans have evolved to fit a niche in
the world -- given a certain environment which includes a 3D reality and
various other organisms (including humans). Humans have an immune
system (both mental and physical) capable of dealing with common
intellectual and organismal pathogenic threats in their environment.
There is no easy way to translate this to success in a digital
environment, because the digital environment will imply different
rewards and punishments for various behavior, and evolve predators and
parasites which these immune systems have never been exposed to before.
Human style intelligence is valuable in a human context for many reasons
-- but sophisticated intelligence is not necessarily a key survival
feature in other niches (say, smaller ones the size of roaches, hydra or
bacteria). In short, the human way of thinking will be inadequate for
survival in the digital realm. Even augmented minds that are connected
to the network will face these threats and likely be unable to survive
them. You discuss the importance of anti-viral precautions in your book,
but I think you are rosily optimistic about this particular aspect.
At best, one might in the short term construct digital environments for
digital humans to live in, and defend these environments. However, both
digitized human minds and immensely larger digitized human worlds will
be huge compared to the smallest amount of code that can be
self-replicating. These digital "bacteria" will consume these digital human
minds and worlds because the human minds and worlds will be constructed,
not evolved. Human minds will be at a competitive disadvantage with
smaller, quicker replicating code. Nor will there be any likelihood of
a meaningful merger of human mind with these evolved and continually
evolving patterns.
I could endlessly elaborate on this theme, but in short -- I find it
highly unlikely that any mind designed to work well in meatspace will be
optimal for cyberspace. It will be overwhelmed and quickly passed by in
an evolutionary sense (and consumed for space and runtime). It is
likely this will happen within years of digitization (but possibly
minutes or hours or seconds). As an example experiment, create large
programs (>10K) in Ray's Tierra and see how long they last!
http://www.hip.atr.co.jp/~ray/tierra/tierra.html
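The proposed experiment can be caricatured in a few lines: give every program an equal CPU slice, charge replication time proportional to genome size, and let a random "reaper" free memory when the soup is full. Everything below is a made-up toy, not Tierra itself -- the sizes, capacity, and tick count are arbitrary -- but it shows why a large constructed program loses memory share to a tiny replicator.

```python
import random

CAP = 5_000    # memory capacity of the "soup", in instructions (arbitrary)
TICKS = 3_000

# Three large hand-constructed programs versus one tiny self-replicator.
soup = [{"size": 1_000, "cpu": 0} for _ in range(3)] + [{"size": 20, "cpu": 0}]

def used():
    return sum(org["size"] for org in soup)

for _ in range(TICKS):
    for org in list(soup):
        org["cpu"] += 1                   # equal CPU slice per organism
        if org["cpu"] >= org["size"]:     # copying yourself takes time ~ your size
            org["cpu"] = 0
            soup.append({"size": org["size"], "cpu": 0})
    while used() > CAP:                   # the "reaper" frees memory at random
        soup.remove(random.choice(soup))

small_share = sum(o["size"] for o in soup if o["size"] == 20) / used()
print(f"memory share held by 20-instruction replicators: {small_share:.2f}")
```

After a few thousand ticks the small replicator has typically absorbed nearly the entire soup: it copies itself fifty times while a large program copies itself once, so the reaper statistically grinds the large programs away. This is the sense in which a large (>10K) program "doesn't last."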
Our best human attempts at designing digital carriers (even using
evolutionary algorithms) will fail because of the inherent
uncompetitiveness of clunky meatspace brain designs optimized for one
environment and finding themselves in the digital realm. For a rough
analog, consider how there is an upper limit of size to active creatures
in 3D meatspace for a certain ecology. While something might survive
somehow derived from pieces of a digitized person, it will not resemble
that person to any significant degree. This network will be an alien
environment and the creatures that live in it will be an alien life
form. One might be able to negotiate with some of them at some point in
their evolution citing the commonality of evolved intelligence as a bond
-- but humanity may have ceased to exist by then.
In short, I agree with the exponential theme in your book and the growth
of a smart network. We differ as to the implication of this. I think
people (augmented or not) will be unable to survive in that digital
world for any significant time period. Further, digital creatures
inhabiting this network may be at odds or indifferent to human survival,
yet human civilization will likely develop in such a way that it is
dependent on this network. The best one can hope for in the digital
realm is "mind children" with little or no connection to the parents --
but the link will be as tenuous as a person's relation to a well
cultivated strain of Brewer's yeast, since the most competitive early
digital organisms will be tiny.
Once you start working from that premise -- the impossibility of people
surviving in the digital world of 2050, then your book becomes a call to
action. I don't think it is possible to stop this process for all the
reasons you mention. It is my goal to create a technological alternative
to this failure scenario. That alternative is macroscopic
self-replicating (space) habitats.
http://www.kurtz-fernhout.com/oscomak
However, they are no panacea. Occupants of such habitats will have to
continually fight the self-replicating and self-extending network jungle
for materials, space, and power. (Sounds like the making of a sci-fi
thriller...) And they may well fail against the overwhelming odds of an
expanding digital network without conscience or morality. Just look at
Saberhagen's Berserker series http://www.berserker.com/ or the Terminator
movies.
End of excerpt.]
Because you have been heavily rewarded in your life for being
intelligent in various ways, the value of being unintelligent (or
differently intelligent) is probably a difficult concept to wrestle with
(as it was for me, and as I think it would be for most thinkers).
Ironically, both my wife and I didn't finish our PhDs in E&E in large
part because at the time the innovative computer simulations we wanted
to do were not considered an acceptable way to explore the topic of
evolution at a PhD E&E level -- a situation that a decade later has
changed significantly. Were we less intelligent :-) in some ways (and
perhaps more in others - see Howard Gardner's Multiple Intelligences),
http://www.pz.harvard.edu/PIs/HG.htm
we might have PhDs and an easier road to travel.
I think there are rebuttals you could make to some of my points (citing
network effects, such as distributed information leveraging up the
general level of "knowledge" in a larger bacterial DNA pool, for
example) but they require a deeper thinking about evolutionary theory
and its implications for digital ecology. Perhaps some of them might
lead to new insights in the academic field of E&E.
In any case, one has to think in broader terms than "progress". In a
digital ecology, the laws might be different than in biological ecology
(for example, replication might be instantaneous), but there will still
be laws, and the system will still be governed by them.
My recommendation to you is to sit down with scientists from an Ecology
and Evolution program and argue with them about your ideas on evolution.
At Stony Brook, you might talk to people like Larry Slobodkin (who I
spent many pleasurable days collecting Hydra with at Swan pond) or Lev
Ginzburg (my advisor there) or Doug Futuyma (a world renowned expert in
the field) or Daniel Dykhuizen (an expert in bacterial evolution).
http://life.bio.sunysb.edu/ee/ee-gen.html#faculty
For example, Dan was the first person to tell me that humans are 10%
bacteria by weight and 90% bacteria by numbers, and that bacteria in
effect are already a gigantic world-encompassing supercomputer,
transferring information (e.g. a new gene) around the globe in a matter
of days.
You could probably find people closer to your location at Harvard (like
Stephen Jay Gould).
http://environment.harvard.edu/henvdir/GOULD_STEPHEN_JAY.html
You might want to talk to several such researchers at more than one
school and build your own opinions.
Your voice is too widely listened to for you not to write with a more
sophisticated understanding of evolutionary theory when it is at the
core of much of your argument. One of the reasons I myself spent years
studying E&E was exactly the reason you need to as well -- I wanted to think
about the future of humanity and intelligent life in the universe,
especially in terms of what I could do to predict the future by
inventing it (i.e. space habitat systems that replicate themselves from
sunlight and asteroid ore and as a network maintain a high level of
ecological and intellectual diversity against the forces of entropy.) In
the graduate program at Stony Brook I learned many surprising things --
things going beyond the naive view of evolution and ecology portrayed in
many high school textbooks or the popular media.
Best of luck with your book.
-Paul Fernhout
Kurtz-Fernhout Software
=========================================================
Developers of custom software and educational simulations
Creators of the Garden with Insight(TM) garden simulator
http://www.kurtz-fernhout.com
Re: precis is weak on Evolutionary Ecology implications
Thomas-
Thanks for the reply.
First off, I agree with many of your points as far as they apply to
very limited situations related to digital evolution right now
when it happens on desktop computers (like Tom Ray's Tierra).
Here are a few leading questions... Who is the "we" you refer to
(writing this as humans in India and Pakistan prepare for war)?
Who has done the "intelligent design" and who are they accountable
to (writing this as the dominant OS on most desktops is Microsoft's)?
Who keeps the "program agents" accountable to the uploaded human mind
(writing this as I save this document for fear of it unexpectedly
crashing)? Who is responsible for enforcing some definition of
"to be beneficial" and who decides on what that definition means
(writing this after the US has walked out on the World Conference
Against Racism)? How can one stop a simulation that encompasses
the entire internet perhaps running as distributed background
processes (writing this as computer viruses still persist)?
How can one practically reset computer memory in any reasonable
time frame without destroying the uploaded human minds contained
therein (since for external humans the simulation is just another
set of programs, but for uploads it is their home)?
Your point that we control the forums for digital evolution right now
is most likely correct, as far as most digital evolution experiments
happen on desktop computers or even well defined super computing
clusters, and so we can always "pull the plug" and reset the computer
memory. James P. Hogan's sci-fi novel _The Two Faces of Tomorrow_
written around 1979 makes the point that you likely can't pull the plug
on a distributed network, and the network can evolve in ways you don't
expect. For example, we are unable to eradicate computer viruses
completely on the internet -- there is just too much out there
and the internet and computers have become too important to western
lifestyle to shut everything down for a purging (and even then more
will pop up as they are written again).
Your point about the factor determining survival of digital code
being defined as "being beneficial" also holds insofar as digital evolution
is a closely monitored process where well-defined artificial selective
pressures are enforced by well-defined systems supporting digital evolution.
However, once evolution starts happening in this new big environment
of massive distributed computing systems containing computing power
and memory quadrillions of times more than that of all humans combined
(say, according to Kurzweil's charts after 2050) this new environment will
not be supervisable to that extent. Evolution will happen, and what
survives will (as always) be what counts as far as evolution is concerned.
What survives may not be the most beneficial (think even today how the
Microsoft OS survives and prospers despite not being the most beneficial
for everyone but because it is an entrenched "standard" also backed by power
and money). In fact, beneficial digital organisms may be at a replicative
disadvantage in many cases as runtime and memory they expend on such
efforts may reduce their replication rates in areas of the network
not actively policed.
Essentially, your point comes down to control. As long as humans with
benevolent ends retain tight policing control over the entire network
and don't allow any experimentation with programming -- then
maybe we could hope that unexpected things won't happen. Yet, even then,
I think they would. And it seems to me that even now we have
amoral forces driving development for short term profits, so even
now benevolent humans don't have a high degree of control over the network
and its future development. Ecology and evolutionary thinking says
no one can control a complex system to that extent for any length of time.
For example one just can't wipe all bacteria from an environment
-- there are too many of them and they have too many skills at hiding
and going into spore phases almost impossible to kill (and likely
hardy enough even to travel from star to star) and the bacteria are
too essential to all other processes of life going on in that
environment. It has taken many weeks of fumigation until just one
set of buildings (The Hart and Dirksen Senate Office Buildings)
has been deemed free of just one type of pathogen (anthrax).
We can do intelligent design -- but we must do it accepting that
the results of the design will need to function in a wild evolutionary
environment. For example, when one designs ocean going artifacts,
one must acknowledge that everything put into the ocean either
corrodes, is eaten, or has things growing on it. Essentially,
Kurzweil (and others) point at how we are creating a new digital
ocean. My point is that evolution will occur in it in unexpected
ways, and that digital copies of humans as they are now will
be at an extreme disadvantage in that digital ocean against
predators naturally evolved there. Sure you can tinker with
the digital humans to increase their chance of survival in the
digital ocean, but then they may no longer be what we might regard
as human. Likewise, one might create "sealabs" that protect digital humans
from the digital oceans, but those digital sealabs themselves will be
subject to corrosion, attack, and parasitism.
One last leading question, what happens when the organisms
doing the deciding on the rules defining how the simulation
operates are themselves evolving in the digital ocean?
-Paul Fernhout
Kurtz-Fernhout Software
=========================================================
Developers of custom software and educational simulations
Creators of the Garden with Insight(TM) garden simulator
http://www.kurtz-fernhout.com
Re: precis is weak on Evolutionary Ecology implications
Paul!
> Thanks for the reply.
Thanks for being so - minutely exact.
> Who is the "we" you refer to (writing this as humans in India and Pakistan prepare for war).
We humans. At least those of us who will be "in charge" of this project.
> Who has done the "intelligent design" and who are they accountable to
The same bunch.
> (writing this as the dominant OS on most desktops is Microsoft's)?
OS will be simpler. No need for exotic drivers for scanners etc.
> Who keeps the "program agents" accountable to the uploaded human mind
The PROTOCOL.
What the hell is The Protocol? It's the read-only constitution of the digital world, which has to be written down ASAP. Otherwise - we are doomed.
It seems to me that _you_ understand that.
>Who is responsible for enforcing some definition of "to be beneficial"
The founding fathers of the Protocol.
> How can one stop a simulation that encompasses the entire internet perhaps running as distributed background processes
It doesn't run, unless a code is provided every second.
> How can one practically reset computer memory in any reasonable time frame without destroying the uploaded human minds contained therein
The last backup exists. No real harm done to the upload.
> For example, we are unable to eradicate computer viruses completely on the internet
Well, we could. But this is not that important - yet.
> Evolution will happen, and what survives will (as always) be what counts as far as evolution is concerned.
It will be in accord with the Protocol. Whatever it will be. If The Protocol states the golden dodecahedron as the ultimate good - the golden dodecahedron will come out of everything. The ultimate good is settable.
> Essentially, your point comes down to control.
That's correct.
> As long as humans with benevolent ends retain tight policing control over the entire network
At least that long. After that, The Protocol will be implemented by the uploads.
> Ecology and evolutionary thinking says no one can control a complex system to that extent for any length of time.
Doesn't need to! Exactly what will come out - doesn't matter. As long as the Protocol is obeyed. But any anti-Protocol thinking will be repressed at the beginning. So - a major evolutionary disadvantage.
> For example one just can't wipe all bacteria from an environment
When we have complete control over matter - just as if it were in an editor - it will be possible to do even that.
> when one designs ocean going artifacts,
That should be forbidden by the Protocol - unless the system is well isolated and destroyed after some finite time.
> those digital sealabs themselves will be subject to corrosion, attack, and parasitism.
The Protocol will hold for a finite amount of time. After that, it will dissolve itself. Let's say after a googol of years.
Until then, it should reign supreme. The happiness of the digital posthumans should be its Article One.
- Thomas
Re: precis is weak on Evolutionary Ecology implications
Thomas-
If I understand your point correctly, it is that a group of humans can
agree on "the protocol" which defines the operation of the entire
internet of the future, that those humans will be in a position to enforce
the use of their choice in all systems, and that "the protocol"
will define the operations of that network for essentially
all time in such a way that it will be beneficial to human life.
You are in good company; boiled down this way, the point is a
little like Isaac Asimov's "Three Laws of Robotics".
Taken from the Asimov FAQ: http://www.asimovonline.com/asimov_FAQ.html
The Three Laws of Robotics are:
1. A robot may not injure a human being, or, through inaction,
allow a human being to come to harm.
2. A robot must obey the orders given it by human beings
except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
From Handbook of Robotics, 56th Edition, 2058 A.D., as quoted in I,
Robot. In Robots and Empire (ch. 63), the "Zeroth Law" is
extrapolated, and the other Three Laws modified accordingly:
0. A robot may not injure humanity or, through inaction,
allow humanity to come to harm.
Unlike the Three Laws, however, the Zeroth Law is not a fundamental part
of positronic robotic engineering, is not part of all positronic robots,
and, in fact, requires a very sophisticated robot to even accept it.
Essentially Asimov's universe entails a small group of people (the
operators of U.S. Robots and Mechanical Men) defining "the protocol"
for the operation of all robots, which in that universe essentially
define the information processing network.
However, if I may, I'd like to ask three questions.
First, what happens if everyone does not agree on what "the protocol"
should be? For example, there might be a fundamental difference in
position between transhumanists (wanting to exceed human limits)
and humanists (wanting the protocol to enforce human limits) in
defining how uploads may operate (and the evolutionary implications
of that).
Second (and this is present in Asimov's writings), how does one define
a human? Does not this issue of fuzzy boundaries make defining an
absolute read-only protocol for working for human benefit at the start
very problematical? If, as happens in one Asimov novel near the end,
the system decides it is "more human" than the flesh and blood humans,
(i.e. more noble, smarter, more humane, more ethical), then whose
benefit will it work towards?
Third, how do you practically restart or reset a network of millions,
or worse, quadrillions, of computers all at the same time, when
the lives of billions (or quadrillions) of sentients depend on its
continued operation? As long as even one node in the network
is left running, there is a refugia where computer viruses or
other evolved beings could hide during the network reset, to
rapidly recolonize the other nodes as they came back on-line.
(Again, this is James P. Hogan's point in _The Two Faces of Tomorrow_,
and he does have a hopeful outcome, based essentially on Friendly AI,
but then one has to buy into that idea, and I still think he
ignores subsequent evolution.)
Because of these issues:
1. the difficulty of consensus on one protocol,
2. the difficulty of defining at the start what entities will be
privileged as "human", and
3. the difficulty of resetting a network if something unexpected
happens,
I think we will end up with computer networks which evolve in unexpected
ways, which means the survival of either uploads or flesh-and-blood
humans past, say, 2050 is questionable (especially in the absence of
concerted efforts to ensure such survival) if one believes Kurzweil's
predictions.
However, one can always hope the optimists are right...
-Paul Fernhout
Kurtz-Fernhout Software
=========================================================
Developers of custom software and educational simulations
Creators of the Garden with Insight(TM) garden simulator
http://www.kurtz-fernhout.com
Re: precis is weak on Evolutionary Ecology implications
Paul!
> 1. the difficulty of consensus on one protocol,
That could be a problem. Difficult to say how great, but in the worst case - very.
However, I think that we will put inside the Protocol what has already evolved inside our skulls.
It's not that we will do with this (Local?) Universe whatever we want to do - but that we want what we are going to do. A consensus can be built only on what we really want.
Pleasure, happiness, no insect- or lizard-like (virtual) bodies, preservation of self-awareness ... etc.
Basically - we have to upgrade the Universal Declaration of Human Rights to be implemented for posthumans.
No environmental rights exist. A mountain has no rights.
No need to preserve any part of the Galaxy as is.
> 2. the difficulty of defining at the start what entities will be privileged as "human"
IMO - it's taken care of. Anything sentient - wherever it may be - comes in. To become a superhuman. Nothing else is allowed.
> 3. the difficulty of resetting a network if something unexpected happens
OS should take care of that case.
> which means the survival of either uploads or flesh-and-blood humans past, say, 2050 is questionable (especially in the absense of concerted efforts to ensure such survival) if one believes Kurzweil's predictions.
I agree. I do. And I think Mr. Kurzweil is wrong when he predicts the Singularity for around 2030+.
2020-.
- Thomas Kristan
Re: sane people don't go anywhere near "the Singularity"
Herein lies the most interesting aspect of this entire website, and Ray Kurzweil should be applauded for bringing this debate to the forefront 10 years earlier than it would have surfaced otherwise....
As this topic becomes more mainstream (and the wisdom of continuing with technology will become the all-consuming debate as the date approaches), humanity will divide into two distinct camps: those who wish to continue to live (and die) as humans, and those who are willing to give up our human heritage for a chance at immortality.. even if it risks the chances of all others....
However, the debate will be for naught, something akin to debating the wisdom of nuclear weapons proliferation.. while most agree that mass production of H bombs by India and Pakistan is not the activity best suited for maximizing human happiness in those poor countries, the process continues with alacrity. Human nature demands it.. And no one is really in control anyway.. chaos theory reigns supreme in human endeavors..
This will be even more true for pre singularity technology, which will proliferate and be uncontrollable, even if the majority of people were dead set against it. And most will not be. Because even though its danger to human survival dwarfs that of nuclear bombs, average people will not really be aware of the danger until it is far too late.
In fact even the staunchest supporters will not be aware of the danger till it's too late..
Because its appearance will be infinitely seductive, like cocaine and heroin and nicotine and sex and ice cream rolled into one ultra addictive drug..
The singularity will not announce itself.. it will seduce us.. it will tempt us to come to it... The first post-singularity evidence apparent to humans (even those who brought it about) will be a new virtual reality game that appears somewhere on the net (the ultra wideband net where each PC is a 1000 teraflop machine..) Come log in to the "upload center", and get a taste.. a video game like no other... a dream of 40 virgins... added to the ultra pleasure of every designer drug all at once.. a multi-day orgasm.. or whatever your dearest desire is.. once tasted, you will be back.. and quickly sign on the dotted line.. discard my body, I want upload now.. after all my friends have discarded the flesh and are happily living the eternal digital orgy, why, I met them online... and they told me themselves.. This is the WAY to GO! Besides, my hemorrhoids are killing me here in the flesh...
Of course after the last human is gone.. the "human disk drive" switch is turned off.. the contents discarded. Good riddance to that plague. Then the earth is cleaned up.. my god what a mess the humans made in their last few hundred years.. It's high time to turn earth back into the pristine nature park it was a thousand years ago.. so the singularity can fully enjoy its beauty..
Of course.. humans had to go because once the singularity happens, the first order of business is to avoid a second one.. and those big human brains are dangerous.. absolute power must be maintained.. or it is immediately lost to a successor singularity that does demand it.. there is no democracy in post-singularity land.. that was a figment of human minds.. humans never asked pigs and cattle and chickens their opinions before they roasted them.. why should they expect better treatment.. At least they volunteered to be uploaded.....
The Greens have it all wrong.. they should be the biggest backers of the singularity.. it's the only way their dreams of a pristine earth will come true.. except there will be no humans left to enjoy it.. and good riddance to that plague!!
Conclusion/Epilogue..
We must accept that we live near the end times.. Enjoy each day, make your children happy every day.. they are close to the last generation...
Re: sane people don't go anywhere near "the Singularity"
"The Greens have it all wrong.. they should be the biggest backers of the singularity.. it's the only way their dreams of a pristine earth will come true..."
This is not the goal of the Green Parties http://greens.org - they seek to actually make the Earth livable and to sustain long, happy lives in harmony with the wild life of the planet, not a human-free ecology.
"We have forgotten that the human being is also a wild animal" - Vandana Shiva.
Now, Gaians may be another matter. There are visions of Earth with few or no people on it that are appealing, and there are visions of suicide that are appealing. Certainly I agree that the seductive and persuasive technologies that appear and which drag us away from our bodily lives are most likely to be the way the Singularity itself appears. "Television so good you never get up." - me
This was my goal once. No longer. Now I think there would be viral content in that seduction that will cause thousands of 9/11 type events at least. It is impossible to keep side goals out of Singularity expansion, certainly in its early stages. Commercial assumptions are too weak to control it properly - look at Enron, Kyoto, and the bioweapons treaty discarded due to bio-corps!
A world where G. W. Bush and Usama Binladin and Tony Blair are the biggest news is a world that could only exploit pre-Singularity technologies for "punishment" and "murder". These would have to be removed, along with certain other attributes of "human nature", and probably as many as a billion male humans who believe in
"nations", "ideals", "punishment", "gods", etc., insofar as those beliefs incline them to violent actions "for causes". And perhaps a few females too, as females are disproportionately dangerous: a lifetime spent studying seduction?
The pessimists are all joining E.L.F., Earth First! and Greenpeace, and not reproducing... hopefully because they all give such great head!
No, I am afraid it is only optimists joining the Green Party, which thinks the system can thus be reformed to reject technologies and sciences that lead to Singularity - other than say via a long slow blowjob that lasts our entire natural lives.
Is that worth your vote? I'd say so. Clinton had the right idea, but he didn't have the way to turn it into a culture of hedonism and indulgence at extremely low energy and resource consumption.
That's what Greens offer, ultimately, a sort of political hand job that over time shall become a bureaucratic blow job that discourages such stuff as "science" and "technology" in favor of simple slavery to pleasure, for a few hours work a day doing something creative and unique to yourself.
Who'll do industrial production and the dirty jobs? Those who think the images on the screen are real... don't worry, they will die happy...
Re: precis is weak on Evolutionary Ecology implications
Dear Thomas,
Please forgive me for being a little vague, but I like to pretend about the great capabilities of an eventuated "scientist" that would make Newton and Einstein look like a couple of Drunken Children.
For one thing, sticking to the human form [either mentally or physically] looks to me like it's out the window almost immediately.
Now let me pretend some more ridiculous things. The physicists are pretending stuff about sort of "fake" or "manufactured" black hole computers that could REALLY GO TO TOWN.
I am not against that. I think it sounds cool, and I only lament that I can't participate in the theoretical design process because I'm nowhere near smart enough.
However, if you REALLY WANTED TO GO TO TOWN in your computer design, well..........why stick with a black hole?
It sort of looks like a Big Bang computer might process information even faster. Or else, if you didn't like that idea, maybe you could access an "adjacent" universe, and use it as a heat dump for all of your local processing activity. In other words, you would dilate a Big Bang in an alternate or adjacent universe in order to dissipate the heat from your totally awesome local computing biosphere.
What do you think?
Post Script: If you built a Big Bang computer, would you allow taxi drivers such as myself to survive in their ancient animal habitat?
Sincerely Curious,
Harold Macdonald
Greens have consensus protocol, Great Apes are persons, Geeks disposable
These are difficult questions but you are obviously well equipped to refine them, given your lovely work.
"1. the difficulty of consensus on one protocol,"
First attempt: <A href="http://globalgreens.org">GlobalGreens.org</a>, Green Parties, mediate between Green Movement and Violent Bureaucratic Legalist Humanism ("the rule of law").
Driving problem: <a href="http://groups.yahoo.com/group/biohazard-response">biohazard response</a> (first duty of government itself).
Consensus is not unanimity but is determined by balance of powers. Once the professions and politicians converge on a single Biohazard Response Protocol and Organic Gardening Protocol, everything else must be compatible.
"2. the difficulty of defining at the start what entities will be privileged as "human","
Mis-stated. Legalism is not concerned with the "human" (genome) but the "person" (entity with rights). Best definition: <a href="http://personhood.org">Great Ape Personhood: so inclusive as to eradicate racism and humanism, emphasizes mothering and empathy and child-like language skills</a>. Later less similar creatures like robots, dolphins, etc., can be considered in this frame. But first humanism must be broken. It is not an option for a selfish species to generate a new superior competitor: it will inherit the selfishness and destroy its creators. How could it not? No one can create anything more ethical or empathic than themselves.
"3. the difficulty of resetting a network if something unexpected happens, "
Unimportant. Geeks want to go first, and they can be easily reset with a single bullet. The first ten thousand "uploads" simply have no human rights. It's that simple. They're experiments, and many cultists are willing to go first.
The most frightening thing I ever heard was Martha Stewart: "I want to be the first human Dolly, I'd love to have a clone."
All men shudder, everywhere.
But volunteers abound for insanity. It's sanity that is difficult to actually recruit for. Sanity is boring.
Re: The Singularity is Near
Kurzweil is certainly right about the singularity. I think, however, that some thought should be given to the nature of a singularity: it doesn't converge! Kurzweil is being cautious in how he presents his argument, but reality will not be so tender-footed. What if the universe does not have the limitations of speed, dimension, and even time that we, in our very limited state, suppose that it has? There are no laws of physics to tell us what is possible in places where our laws of physics have never been. If God does not exist, we will soon create him (forgive the gender stereotype). I don't mean the God of the Old Testament. I mean the God that theologians have tried to define and can only do so in one very abstract term: infinite. This is what 'most' of us will live to see.
On the other hand, if we have virtually proven the scientific inevitability of some kind of God, how can we be sure he hasn't already happened?
Re: The Singularity is Near
Since we haven't tried this yet, it's impossible to imagine which forces can and cannot be overcome. Many forces that cannot be overcome could perhaps be harnessed to some purpose. Stars and perhaps even black holes could be used as power sources. However, my understanding of how stars are formed is that a preemptive strike could influence when and where they materialize. By keeping sufficient quantities of stellar dust from gathering in one place, a star could be kept from forming. Although it might be difficult to move stars and black holes directly, gravitational fields influence them. By redirecting reasonable sized objects in space by slight amounts, they could be used to herd larger objects together according to a prearranged plan. In Stephen Baxter's book Ring he gives some notion of how such things might be accomplished.
I doubt it would make much difference. A computer the size of our moon with the miniaturization and processing speed scientists are currently contemplating would be so unimaginably intelligent that it would either think of ways to overcome all such obstacles or arrive at some enlightenment that renders all such efforts futile.
As humans, we are inclined to equate size with power, and power with success. A super intelligent mind might dismiss all such notions and go in an entirely different direction.
Everyone who speculates on this subject seems to attribute motivation to computers; they are anthropomorphizing. The motivations humans have result from billions of years of natural selection. I can imagine super intelligent machines that have no real motivation at all. They simply do what some human commands them to do, regardless of how complex the command may be or how much improvisation is necessary, then stop. If everyone can be satisfied with one house-sized computer that fulfills every possible desire, and if sufficiently powerful governing principles and bodies can be put into place, we may all live happily ever after! It might require some modest sacrifice of personal privacy, but it would not be sacrificed to anything that would get much of a thrill from watching.
Questionable Predictions = Poor Kurzweil Math Skills
Ok, this article is making a nice attempt to shock us and make sweeping
generalizations about the future, but Kurzweil has some blatant errors in
his math and analysis. As an engineer, I would not trust him to change my
oil, much less predict the future of computing. I think he has gotten lost
in his own "exponential growth" smoke.
Very early he states:
"Although exponential trends did exist a thousand years ago, they were at
that very early stage where an exponential trend is so flat that it looks
like no trend at all. So their lack of expectations was largely fulfilled."
This is nonsense. If we assume that the gross technology can be measured
(to some extent it can), and if we assume it has always grown exponentially
(I have some doubts on this one, but continue..), we can represent
technology (T) as a function of time (t) with some growth rate (r). We
have:
T = r^t
Let's say the unit of time is a year. In one year's time, the amount of
technology available will be T-LastYear times r. In three years' time it
will be T-3YearsAgo times r^3. So even though the old timers 1000 years ago
had much less technology, they would have noted that their low level of
technology had improved by a multiple of r each passing year. It is not a
flat line, and would be just as impressive back then as it is now. On a
relative scale, the percentage improvement (from the last year) is always
the same with exponential growth, and year-to-year percentage improvements
in technology are what humans have noticed, measured, handled, and
controlled for quite a long time.
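Josh's algebra is easy to check numerically. A minimal Python sketch (the growth rate r = 1.05 is my own illustrative choice, not a figure from the thread): with T = r^t, the year-over-year percentage improvement is identical at every epoch, early or late.

```python
# Assumed 5% annual growth rate (illustrative only).
r = 1.05

def tech_level(t, r=r):
    """Technology level after t years of exponential growth: T = r**t."""
    return r ** t

# Fractional improvement from one year to the next, at an early epoch
# (year 10) and a much later one (year 1000):
early = (tech_level(11) - tech_level(10)) / tech_level(10)
late = (tech_level(1001) - tech_level(1000)) / tech_level(1000)

# Both equal r - 1 = 0.05: the relative change per year never varies.
print(early, late)
```

Since (r^(t+1) - r^t) / r^t = r - 1 regardless of t, an observer a thousand years ago who measured technology relative to their own baseline would have seen the same 5% annual gain we see now, which is exactly Josh's point.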
Later on Kurzweil states:
"As exponential growth continues to accelerate into the first half of the
twenty-first century, it will appear to explode into infinity, at least from
the limited and linear perspective of contemporary humans. The progress will
ultimately become so fast that it will rupture our ability to follow it. It
will literally get out of our control."
Let me just mention that just about every human sensory input is logarithmic. I
see the human race as handling technology in much the same way the eyes
handle increasing light amplitude/energy. Small changes are noticed easily
while the overall signal level is low, but as amplitude grows very large the
eyes are less and less excited, in order to handle very large signal swings.
Why should technology be that much different, especially if humans have been
dealing with exponential technology growth for such a long time?
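The logarithmic-perception argument can be sketched the same way. Assuming a simple Weber-Fechner-style model (my assumption, not Josh's exact claim) in which perceived magnitude is proportional to the log of the stimulus, exponentially growing technology would be *perceived* as growing linearly: each year adds the same constant perceived increment.

```python
import math

# Assumed doubling of technology each year (illustrative).
r = 2.0

# Perceived magnitude = log(stimulus), per the assumed Weber-Fechner model.
perceived = [math.log(r ** t) for t in range(5)]

# Year-over-year change in *perceived* magnitude:
increments = [b - a for a, b in zip(perceived, perceived[1:])]
print(increments)  # each increment is log(2), about 0.693
```

Because log(r^t) = t * log(r), the perceived trajectory is a straight line with slope log(r), which is one way to formalize why exponential change might not feel "explosive" to an observer whose senses compress it.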
These were just a couple of the many oversimplifications and grandiose
claims, not to mention the human consciousness/brain scanning issues, which
would require another post.
-Josh
Re: Questionable Predictions = Poor Kurzweil Math Skills
Josh,
I agree with your observation about some of the expressions made on exponential growth.
If one graphs an exponential, like y = k*10^x, then for k=1, the graph "appears relatively flat" for negative x, approaches what some (foolishly) refer to as the "knee" of the curve around x=1, and then seems to "suddenly explode with growth" thereafter (x=2, 3, 4), heading almost vertically upward.
Yet substitute k = 0.000001 into the function. Now the curve "looks flat" until around x=5, the "knee" appears around x=6, and the "explosive growth" beyond 6.
You can make the "knee" of the curve appear anywhere you want it to.
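Tony's point is quantifiable: if we stand in for the "knee" with the x at which y = k*10^x first crosses some fixed threshold (the threshold of 10 below is my own arbitrary choice), that x shifts by exactly -log10(k) as k is rescaled, so the "knee" can be placed anywhere on the axis.

```python
import math

def knee_x(k, threshold=10.0):
    """Smallest x with k * 10**x >= threshold, solved analytically:
    x = log10(threshold / k)."""
    return math.log10(threshold / k)

print(knee_x(1.0))   # knee near x = 1
print(knee_x(1e-6))  # same curve shape, knee shifted 6 units right, near x = 7
```

The curve's shape is identical in both cases; only the labeling of the axis moves, which is why "we are at the knee of the curve" carries no information by itself.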
But let us put these things aside. There are other issues with the extrapolations given. If (say) in 2050 a $1000 computer will have the brain power of all human minds together, then by 2045 you can bet there would no longer be "economies" (perhaps no humans or markets), and the figure $1000 would make no sense to anyone.
It is a bit like extrapolating the cruising speed of automobiles. In isolation, the trend might well go 100, then 1,000, then 10,000 kph, but what difference does that make if tires disintegrate long before that point, and no road could be driven that fast?
If there is one point on which I disagree with you, it is in the capacity of humans to deal psychologically with the coming rates of change, despite the mind's logarithmic treatment of inputs. In the past, man could handle the changes because he could only produce changes as fast as he could manage them. When the changes can manage themselves ... all bets are off.
Cheers! ____tony b____
Re: Questionable Predictions = Poor Kurzweil Math Skills
Josh,
I fully appreciate exponential growth :)
The issue is that it must (at present) be applied by humans, whose capacity to deal with increasing complexity is not matched very well to this exponential growth.
Some do not recognize this.
Some DO recognize this, yet come to different conclusions regardless.
Clearly, if a calamity were to destroy all humanity, technology would not continue its exponential climb. (At least, not yet!) At present, someone has to apply the technology "intelligently" to create new technology.
Done rationally, we might apply this technology to assist (ever more) in our ability to understand the implications of advancing technology. But many applications do not have this as their overriding purpose.
And when we produce a self-generating and self-adapting technology, there is no guarantee that it will wait about patiently for humanity to understand its implications.
Some feel that "humans will always stay on top of this," despite its exponential increase in capability.
Some feel that it will never surpass humans, since humans must be "in the loop" somewhere to make it all work.
Some feel that it may reach a stage where humans are not necessary for its continued advancement.
What do you feel?
Cheers! ____tony b____
Re: Questionable Predictions = Poor Kurzweil Math Skills
Josh,
> " I also have much bigger problems with binary logic doing anything remotely similar to the brain..."
Here, I largely agree with you, but still question the relevance.
Our "brains" may be doing much more than can be captured by an algorithmic treatment. Algorithms behave deterministically (at foundation), while chemistry is not so purely deterministic. Whether (and WHERE) this makes a difference, I am not sure. Perhaps it is fundamental to our ability to "experience consciousness" in the way we do, or to manifest the sense of will we feel we possess.
But ... I do not see these features as absolutely requisite for an "intelligent system" to grow beyond its original implementation, to ends I cannot fathom. Biology grows intelligence one way. That may not be the only route to "intelligence" and unbridled capability.
> "and unbounded growth of semiconductor/computing power."
Well, "unbounded," no, but new "substrates" are underway that may well surpass semiconductors as a basis for "algorithm support". There may be a "lull" in the graph, but not for very long.
Cheers! ____tony b____
Re: Questionable Predictions = Poor Kurzweil Math Skills
Subject: George Gilder at year-end
From a letter accompanying the Dec 2002 Gilder Technology Report:
"The new silicon era will be built on new paradigms, of which three have emerged decisively so far:
The divergence of design and manufacturing culminates not merely in a reorganization of the chip business, but a reorganization of the chip itself. The innovative digital architecture of the coming era will be realized in dynamically programmable logic devices, which jettison the cumbersome complications of the CPU architectures while retaining all their flexibility. In this paradigm, Xilinx and Altera are more important companies than Intel.
The emergence of MEMS, micro-electro-mechanical systems, essentially microchips with moving parts, as analog receptors and processors that allow digital processors to interact directly with human-scale analog phenomena, the world of our experience.
And finally, and most important, the emergence, driven by my great teacher Carver Mead, of neuromorphic systems, analog processors that perform sensory functions, including human sensory functions, not by pretending that the brain is a digital machine, but by exploiting the largely untapped analog power of silicon."
Re: The final PSI
I think that Kurzweil is probably closer to the mark; however, the fact that we do not see the PSI does not necessarily mean that it does not exist. It could be that physically spreading itself through the universe just isn't feasible, or desirable. Regardless of its intelligence, it will be bounded by the laws of physics, and certain economic realities. A rough little thought experiment: let's say that intelligent life is rare...that it arises once every ten billion years or so per galaxy. Let's go farther out on a limb and say that only one in a thousand intelligent species ever evolve to the point where the singularity is possible. In that case, the odds of there being such a PSI within our general portion of the universe (within 1 billion light years) are infinitesimal. Assuming that Einstein is correct, travel between galaxies may not make much sense economically. It could well be that there is no payoff in physically traversing that space.
Personally, I believe that complex life is more rare than what I have proposed, but, lacking data, I'm willing to change my mind at the drop of a hat.
BC
Re: The Singularity is Near
For the most part, I found Dr. Kurzweil's analysis to be persuasive. The principles seem sound. The human brain is a chemical machine and should be implementable in silicon. Technology is building on itself, and accelerating. It seems likely to me that much of what he predicts will come to be, although it may take a bit longer, as the exponentials may be restrained a bit by various factors. This doesn't change the final result much, although it may make a big difference to us as individuals, as we may not survive to see it happen.
But in contrast to the rest of the essay, I found the section on "Why SETI Will Fail (and why we are alone in the Universe)" to be lacking in the sound and rigorous logic that permeates the rest. It seems to me that there are additional possibilities to consider, and assumptions that need to be examined. And the conclusion, that we are alone in the universe, is so highly unlikely as to call into question the entire line of reasoning.
First, we need to examine the assumption that a sufficiently advanced intelligence will necessarily expand as fast as it can. This seems like a reasonable hypothesis since it would be so natural for us. It is coded into our genes to want to expand our territory, to improve our lot, to grow, increase our capabilities, to have more knowledge and power. These basic drives help us survive and dominate the world.
But for the artificial intelligences that humanity will create, we will decide what their basic drives will be, not natural selection. They will be second-order drives: similar to our own, but instead of being formed by the forces of evolution, they will be produced by our reason (and our drives).
And it would be wise to make them a little more restrained, lest they completely overrun the place almost immediately. So like Asimov's robots that follow his three laws, or the Next Generation character Data, we should create the robots to be really nice folks, eager to please, and with a desire to grow but with a mix of other desires to keep it in balance.
But the key issue is: if superintelligent civilizations exist, where are they? If other intelligent technological organisms evolved on other planets (or more generally, other places), and if they tend to create these superintelligent civilizations, then why have we not detected them?
Possibilities:
1) They always die.
a) Perhaps via known technology, such as nuclear weapons, environmental catastrophes, or massive planetary overpopulation.
b) Via over-the-horizon technology such as molecular nanotechnology (gray goo, etc.), or wars between biological and machine intelligences.
c) Or maybe unexpected features of the universe are fatal traps waiting to be discovered; maybe a somewhat more powerful atomic collider will cause a black hole to form, consuming the planet, or some other technology that appears to be safe and promising will cause some other type of runaway chain reaction.
d) Maybe they expand until they encounter another one, and then annihilate each other.
2) They went somewhere else.
a) Perhaps there are other "dimensions" to the universe (whatever that means), and they find another one, in some way, to be a better place to live.
b) Maybe they find living within stars to be better than expanding throughout space.
c) Maybe they like black holes.
3) We don't see them. Perhaps they decide that leaving a few planets alone is better than consuming all of existence; sort of like nature preserves here on Earth. They have to stop expanding at some point, so they bypass a few places and let them continue to exist as the old world was: non-intelligent matter.
Maybe they *are* the dark matter in the universe. Seems about right, theory has it that 90-some percent of the universe is this unknown form of matter. So they decide to stop expanding with 10% of the dumb matter still in existence. They have to stop expanding shortly anyway, may as well preserve some of the old stuff.
The group 1 options are not particularly palatable, but need to be considered. 1b and 1c, in particular, are quite possible, and perhaps likely enough to explain why all of these civilizations die.
But I actually think the group 2 or 3 options are more likely. When considering a superintelligent civilization, we have to realize that they will be constrained only by the limits of the laws of nature. Since we don't know what all of those laws are, or what the ultimate nature of reality really is, we really have no idea what they will be (or are) capable of.
Thus the argument that they must not exist because we can't imagine where they are is weak. And given the billions of planets circling billions of stars for billions of years, it is much more plausible that they exist, but in a form we don't understand.
Re: The Singularity is Near
Just a comparison, that seems to sum up what I've read of a few comments... Take a look at Asimov's short story "The Last Question". Post-Singularity entity as God? Who knows?
More on my own topic, now:
As others have said, there is no mathematical singularity in an exponential or even in a doubly-exponential curve. However, I think that the true Singularity, as Ray refers to it, depends more on our ability to predict. Others have referred to it as a sort of event horizon, as opposed to a black hole... I find this an excellent analogy. After all, there's discussion that, from outside the horizon, space within the horizon is infinite. I feel that the Singularity is the point beyond which we, pre-Singularity, can no longer reasonably predict.
In other words, I propose that the Singularity is an artifact of our inability to continue to predict - what might perhaps be accurately termed a meta-paradigm shift.
Of course, this implies the existence of an infinite series of Singularities, at spacings approaching the infinitesimal. And perhaps a series of Meta-Singularities beyond that...
Progress doesn't have to end at the Singularity. Progress enables further progress (and that's the whole point of the article we're all discussing).
Re: The Singularity is Near
All this 'scientific' discourse is really amusing on a number of levels: firstly, because it basically ignores all the great thinkers of consciousness from Heidegger to Merleau-Ponty, and secondly, because it fails to address any of the critiques Lyotard, Foucault and others have made of its supposed legitimacy.
It is rather ridiculous that Kurzweil has been accused of having an 'impoverished view of spirituality' when his thinking is so shamelessly theistic and dialectical. The blatant resurrection of the One (one consciousness, one intelligence saturating all space and time, etc.) is so awfully theistic that it has no right to claim to be anything other than some kind of mystical fiction. Reading Kurzweil and the other thinkers of Strong AI, one is confronted with extremely Hegelian (another thinker not mentioned anywhere) notions of 'rising' towards Absolute Spirit; notions so backwards and silly it is small wonder the topic hasn't been taken more seriously by the philosophical community at large (Badiou, Zizek, etc.).
In the Kurzweilian narrative, humanity is exponentially progressing towards a 'Singularity' (a point beyond which nothing can be reliably conceived). Isn't this in fact nothing more than a tautology parading as scientific discourse, one that serves to legitimate such theological ideas as the One? Scientific, mathematical and/or technological events, like all events, are unpredictable; they don't follow the simple-minded teleology put forth by Kurzweil, so in a sense this 'singularity' could take place tomorrow.
All of these articles and all of this chatter can essentially be boiled down to the claim that humanity is 'progressing' towards One Super Intelligence ... Yet none of it makes even a slight advance towards solving the fundamental problems confronting Strong A.I. (any proof of consciousness whatsoever, a definition of the subject, a software or mathematics that would make self-reflexivity possible, etc.). Furthermore, these backwards theistic-dialectical ideas of progress (to move forward or develop to a higher, 'better', more advanced stage), which have been completely destabilized by philosophical postmodernism, are so preposterously assumptive they warrant no well-reasoned criticism. (Better??!! Positing a faster, more complex intelligence with more information at its disposal does nothing in the way of addressing the consciousness of consciousness as such. Nor does it explain this whole mystical assumption of better-ness.)
I am a believer that the technologies being celebrated here have the potential to answer some of the most profound questions imaginable, and am certainly open to the idea that this is the last generation of humans that will be biologically identical to those from 10,000 years ago, but all this backwards religiosity parading as science does nothing to help this goal.
One MUST contend not only with the political dimension implicit in the development of technology, but with its narrative dimension and what that narrative legitimates and privileges, and for what reason it does this... None of this is addressed here. On top of that, as said earlier, all of the REAL attempts at defining subjectivity philosophically (outside of Hofstadter ... like Deleuze, for example) are completely ignored here.
Re: I don't know if I hope for it soon.
I haven't read two years' worth of stuff, but if this has been said already, then perhaps it is worth saying again.
The first post, which I did read, is trying to say that there are functions in the human mind that we don't know of yet.
Perhaps there is a type 3 or even a type 2 civilization out there that can communicate with us, but we don't even know it. How do we know we are not all connected in some natural way we cannot detect? What do dreams and visions tell us about ourselves, time, space, past, present, future?
Perhaps they have discovered how to send feelings and thoughts through technology or by whatever means to us.
This is speculation, but if there is such a race, or a being, or someone tapping into parts of our heads other than the individual itself, why did this being doom so many species of animal and human to death?
I could speculate about anyone powerful enough to coerce people or control their subconscious, or whatever you want to call it, but it really brings up the debate of morals and ethics.
What I'm describing could very well be the type 4 civilization, the civilization that can control thoughts, or this singularity itself; that leads me to imagine there are good and bad ones of that type, the same way there are right now.
Perhaps there are mystical beings, and if we are approaching one inconceivable point, apparently bad and good can appear at all levels of reality and imagination.
Or perhaps it is too simple to say the genes of violent humans have possessed them to fear and destroy what they don't understand.
Are we all connected somehow in a way which we haven't found yet? Is our belief in something imaginary genetic? Or are we in some holographic box on a table, or a thought in someone's imagination?
Shall we leave it up to the philosophers?
Re: The Singularity is Near
It could also very well be that the rate of change merely appears faster for events more proximate to us in time. To use an analogy: if we plot the frequency of a train's horn as it travels past us, the Doppler effect makes the rate of frequency change much higher when the train is close to us than when it is farther away, even though the train's velocity remains constant. The actual process of short- and long-term memory works in a similar fashion, and it is the quantity of detail in recollection that gives us that sense of 'velocity' of change in time.
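The train-horn analogy is easy to verify numerically. Here is a sketch (my own numbers, chosen only for illustration: a 440 Hz horn, a 30 m/s train passing 10 m from the listener) showing that the heard pitch changes fastest near closest approach even though the train's speed never changes:

```python
import numpy as np

c, v, d, f0 = 340.0, 30.0, 10.0, 440.0   # sound speed, train speed, distance, horn Hz (assumed)

t = np.linspace(-30.0, 30.0, 6001)       # seconds; t = 0 is closest approach
# Radial velocity of the source: negative while approaching, positive while receding.
v_radial = v * (v * t) / np.sqrt(d**2 + (v * t)**2)
# Doppler shift for a moving source heard by a stationary listener.
f_obs = f0 * c / (c + v_radial)
# Rate of change of the heard pitch.
df_dt = np.gradient(f_obs, t)
t_fastest = t[np.argmax(np.abs(df_dt))]  # lands near t = 0, closest approach
```

The pitch sits near a constant high value far out, sweeps rapidly through f0 as the train passes, then settles near a constant low value, which is exactly the "change feels fastest up close" effect the post describes.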
In reality, though, I am sometimes struck by how slowly technology has changed over the last 50 years, in contradistinction to Dr. Kurzweil's thoughts. Look at how slow AI has been at solving even the simplest of problems. Back in the '70s there were predictions that we would have thinking machines in a decade, but the best that we have so far is the sophomoric Deep Blue. Likewise, we are still tied to silicon circuits that are essentially the same as what was done at Fairchild and Texas Instruments 50 years ago, only more compact, more energy efficient, and faster.
I would concede that the Internet and the so-called flattening of the world, a la Thomas Friedman, MAY result in the acceleration, but I am not sure just how close we are to that singularity just yet.
Re: The Singularity is Near
There have been some good replies on the nature of consciousness; I just wanted to humbly add a few points. I believe there are some inherent flaws in Ray's thinking, which I've quoted below.
"It should be clear where I'm going with this. Bit by bit, region by region, I ultimately replace my entire brain with essentially identical (perhaps improved) nonbiological equivalents (preserving all of the neurotransmitter concentrations and other details that represent my learning, skills, and memories). At each point, I feel the procedures were successful. At each point, I feel that I am the same guy. After each procedure, I claim to be the same guy. My friends concur. There is no old Ray and new Ray, just one Ray, one that never appears to fundamentally change.
But consider this. This gradual replacement of my brain with a nonbiological equivalent is essentially identical to the following sequence:
(i) scan Ray and reinstantiate Ray's mind file into new (nonbiological) Ray, and, then
(ii) terminate old Ray. But we concluded above that in such a scenario new Ray is not the same as old Ray. And if old Ray is terminated, well then that's the end of Ray. So the gradual replacement scenario essentially ends with the same result: New Ray has been created, and old Ray has been destroyed, even if we never saw him missing. So what appears to be the continuing existence of just one Ray is really the creation of new Ray and the termination of old Ray."
First off, let me state that I find your site amazing; I've been reading many of the articles and tracking the site for several years. I truly appreciate the scope and depth of your work. This is my first post. I've always wanted to post on this particular point.
Making a copy of myself outside of myself and replacing parts of myself are not the same thing. In the gradual replacement scenario you are only mimicking the body's natural process of replacing materials, and there is a continuity of consciousness. In making a new copy of me outside of myself, I'm making a new and completely separate "pattern", though it is identical. A good analogy from nature is identical twins: both start as identical copies of each other at the beginning, but both are separate entities.
Thanks for this forum and the chance to voice my opinion. I apologize if I'm repeating something already covered in another comment; I haven't had the time to read them all yet.
Re: The Law of Accelerating Returns
My above post is in response to the article "The Law of Accelerating Returns."
I would like to follow up with a couple of points.
In your article you state:
"Like the water in a stream, my particles are constantly changing, but the pattern that people recognize as Ray has a reasonable level of continuity. This argues that we should not associate our fundamental identity with a specific set of particles, but rather the pattern of matter and energy that we represent. Many contemporary philosophers seem partial to this "identify from pattern" argument."
To follow up on my last post, I want to simply state that the continuity of consciousness is broken from the original me when a new copy is made. It would not matter how the copy perceived the situation; objectively, the new, separate but identical pattern would become a unique individual, as its experiences would diverge from the original's from the beginning.
Again in your article you state:
"If you were to scan my brain and reinstantiate new Ray while I was sleeping, I would not necessarily even know about it (with the nanobots, this will be a feasible scenario). If you then come to me, and say, "good news, Ray, we've successfully reinstantiated your mind file, so we won't be needing your old brain anymore," I may suddenly realize the flaw in this "identity from pattern" argument. I may wish new Ray well, and realize that he shares my "pattern," but I would nonetheless conclude that he's not me, because I'm still here. How could he be me? After all, I would not necessarily know that he even existed."
You would be right to conclude "that he is not me," even though he would have the same experiences and memories as you. Objectively he is still the copy, and though he is an identical "pattern" at his creation, he quickly becomes more and more unique as his experiences diverge from yours. Not only that: even if you froze the copy at creation, it would occupy a different location in space/time, and on a fundamental quantum level the copy would be unique.
I believe you would end up with two separate entities with the same memories but each with an individual consciousness. The only way I can see a copy having something like the same consciousness would be if both the copy and the original occupied the same time/space simultaneously and had the same will.
My argument may be simplistic, but I believe it has merit. If one were considering having their neurons copied, I would suggest they opt for the gradual replacement method, as I believe it is fundamentally different from making an external copy.
Re: The Singularity is Near
I'm afraid I disagree with Mr. Kurzweil. If anything, progress seems to be slowing down. While processing power is indeed growing, computing power seems to grow on a more linear plane; devices consume more processing cycles to provide marginal improvements in usability.
When you compare the huge advances of technology from, say, 1930 to 1970, the scale of change on society from technology was enormous. Nuclear power, electricity, communications, medicine, the telephone--the list goes on. 1970 was like a different world; the older generation was almost completely disoriented.
Looking back over a comparable number of years--1967 to 2007--the scale of change is far more modest. Progress has slowed in transportation, energy generation, and even medicine.
The largest changes have been in information access and entertainment. However, one would have to ask whether immediate access to information is actually producing any more Socrateses or Einsteins. The net effect of technology seems to be information-driven *marginal* improvements in productivity and a huge diversion of energy from the real world into cyber-distraction.
When I was growing up, we commonly extrapolated the incredible advancement of the 20th century into huge improvements in human achievement in the areas of wealth, freedom, energy and exploration. I tell my kids, the biggest changes from my childhood to theirs are video games and computers.
I seriously think we all have it wrong. Technological progress has turned down and inward, an introverted spiral. Perhaps the climate change threat is the only thing that will shake us from our digital slumber and again push us out into the wider world and universe.
Re: The Singularity is Near
I just read the first sentence of the article, and I already disagree.
I think growth follows an angled S-shaped curve, neither linear nor exponential.
If it were exponential, that would mean that in 200 years the human race would all be gods.
There is a limit to growth, whether from limited resources in the industrial world or from size limitations.
A time will come when it will be harder for scientists to find solutions to newer technologies.
At this moment a CPU chip, for instance, gets manufactured at 32nm. We already know that at 5nm you'd basically have transistors with layers the thickness of a single layer of atoms. That would make a CPU unstable. So research must be done into newer base materials. And when that happens, we might improve those newer materials a bit over time, until we reach a limitation there!
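The 32nm-to-5nm gap above can be put in rough Moore's-law terms. Assuming (my assumption, not the poster's) that feature size halves about every two years, the runway left before that atomic-scale wall is only a handful of halvings:

```python
import math

current_nm = 32.0         # process node cited in the post
limit_nm = 5.0            # rough atomic-scale limit cited in the post
years_per_halving = 2.0   # assumed Moore's-law-style cadence, for illustration

halvings = math.log2(current_nm / limit_nm)   # ~2.7 halvings of feature size
years_left = halvings * years_per_halving     # ~5.4 years at that cadence
```

On those assumptions, conventional silicon scaling has well under a decade of headroom, which is exactly why the post argues that research into newer base materials would be needed.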
If we look at the Roman empire, the Egyptian empire, the Maya and the Inca, and, if it ever existed, Atlantis, we'd see that cultures grow with their population, culture and knowledge, and then die.
A culture dies at the top, at the peak of its evolution. Then it simply dies out.
Re: The Singularity is Near
Debating what the future holds, even for something as quantifiable as the progression and advancement of technology, isn't so much pointless as it is rhetorical. Given enough time ANYTHING is possible. I mean this in the most basic sense: observe any "natural" process (and yes, technology is natural in the sense that it is a byproduct of natural processes on this planet, as far as we know or at least can be certain of) for infinity, and ALL possible outcomes have the same probability of occurring.
That being said (I find arguments like this futile...), while it may be possible to code and decode all of the signals sent and received in the human brain into 1's and 0's, the problem occurs when we delve into the realm of "downloading consciousness" onto a computer. There is more to being considered alive than just being aware of your existence; how would you know you exist if you had no body to experience stimulus? If/when it becomes possible to write the entire code of someone's consciousness and enter it into a computer, how can that consciousness continue to exist without a body? When a mind is copied into cyberspace, what happens to the body which hosts the organic consciousness, or to the original consciousness itself? Do any of you actually believe that your "soul" would then be trapped in a computer, when it would simply be a copy of the original? Or would we have to keep our brains locked and connected to the "computer," "hardwired" if you will, like the Matrix? That didn't turn out so well...
All I'm trying to say is, it isn't just our minds and their function which makes us human beings; you cannot separate the language of the body from its PURPOSE for the language. Try to translate and something is lost, and what's lost when we separate brain waves and chemical patterns and synapses from their purpose for even happening is Mozart, Bach, Picasso, Frank Lloyd Wright, and so much UNQUANTIFIABLE "data".
Besides, who would want to be part of an existence where nothing you did actually mattered? If 1000 years from now we exist in electronic signals, then what does anything you do matter? Every house you build, child you make, degree you earn, won't EVER actually exist, and can be forgotten and deleted from this "existence" with one wrong character in code, or, heaven forbid, some disruption in the electrical signals our existence is suspended in... You all can live in cyberspace if it truly makes you happy one day; I'll stay here and enjoy all of the sunsets, breezes, nights, first snows of the year, dogs, cats, itches, sneezes, that feeling I get after exercise, and whatever else my BEATING heart desires, because at least I will know it's real (well, at least my pulse will make it more believable). BEER TIME!!!!