Chapter 10: The Limits to Growth
by K. Eric Drexler



The chess board is the world, the pieces are the phenomena of the universe, the rules of the game are what we call the laws of nature.
- T. H. HUXLEY

IN THE LAST CENTURY we have developed aircraft, spacecraft, nuclear power, and computers. In the next we will develop assemblers, replicators, automated engineering, cheap spaceflight, cell repair machines, and much more. This series of breakthroughs may suggest that the technology race will advance without limit. In this view, we will break through all conceivable barriers, rushing off into the infinite unknown - but this view seems false.

 The laws of nature and the conditions of the world will limit what we do. Without limits, the future would be wholly unknown, a formless thing making a mockery of our efforts to think and plan. With limits, the future is still a turbulent uncertainty, but it is forced to flow within certain bounds.

 From natural limits, we learn something about the problems and opportunities we face. Limits define the boundaries of the possible, telling us what resources we can use, how fast our spacecraft will fly, and what our nanomachines will and won't be able to do.

 Discussing limits is risky: we can be more sure that something is possible than that it isn't. Engineers can make do with approximations and special cases. And given tools, materials, and time, they can demonstrate possibilities directly. Even when doing exploratory design, they can stay well within the realm of the possible by staying well away from the limits. Scientists, in contrast, cannot prove a general theory - and every general claim of impossibility is itself a sort of general theory. No specific experiment (someplace, sometime) can prove something to be impossible (everywhere, forever). Neither can any number of specific experiments.

 Still, general scientific laws do describe limits to the possible. Although scientists cannot prove a general law, they have evolved our best available picture of how the universe works. And even if exotic experiments and elegant mathematics again transform our concept of physical law, few engineering limits will budge. Relativity didn't affect automobile designs.

 The mere existence of ultimate limits doesn't mean that they are about to choke us, yet many people have been drawn to the idea that limits will end growth soon. This notion simplifies their picture of the future by leaving out the strange new developments that growth will bring. Other people favor the vaguer notion of limitless growth - a notion that blurs their picture of the future by suggesting that it will be utterly incomprehensible.

 People who confuse science with technology tend to become confused about limits. As software engineer Mark S. Miller points out, they imagine that new knowledge always means new know-how; some even imagine that knowing everything would let us do anything. Advances in technology do indeed bring new know-how, opening new possibilities. But advances in basic science simply redraw our map of ultimate limits; this often shows new impossibilities. Einstein's discoveries, for example, showed that nothing can catch up with a fleeing light ray.

The Structure of the Vacuum

Is the speed of light a real limit? People once spoke of a "sound barrier" that some believed would stop an airplane from passing the speed of sound. Then at Edwards Air Force Base in 1947, Chuck Yeager split the October sky with a sonic boom. Today, some people speak of a "light barrier," and ask whether it, too, may fall.

 Unfortunately for science fiction writers, this parallel is superficial. No one could ever maintain that the speed of sound was a true physical limit. Meteors and bullets exceeded it daily, and even cracking whips cracked the "sound barrier." But no one has seen anything move faster than light. Distant spots seen by radio telescopes sometimes appear to move faster, but simple tricks of perspective easily explain how this can be. Hypothetical particles called "tachyons" would move faster than light, if they were to exist - but none has been found, and current theory doesn't predict them. Experimenters have pushed protons to over 99.9995 percent of the speed of light, with results that match Einstein's predictions perfectly. When pushed ever faster, a particle's speed creeps closer to the speed of light, while its energy (mass) grows without bound.
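
 The figure of 99.9995 percent of the speed of light can be made concrete with the standard relativistic energy factor, gamma = 1/sqrt(1 - v²/c²). The short Python sketch below is not from the book, and the particular speeds chosen are illustrative; it shows how closing the remaining gap to the speed of light by a factor of a hundred multiplies a particle's energy only about tenfold, while its speed barely changes:

    import math

    def lorentz_gamma(v_fraction_of_c):
        """Ratio of a particle's total energy (mass) to its rest energy."""
        return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

    # At 99.9995 percent of c (the figure in the text), energy is ~316 times
    # the rest energy; a hundred times closer to c only raises that to ~3,160.
    for v in (0.9, 0.99, 0.999995, 0.99999995):
        print(f"v = {v:.8f} c    energy factor = {lorentz_gamma(v):8.1f}")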

 On Earth, a person can walk or sail only to a certain distance, but no mysterious edge or barrier suddenly blocks travel. The Earth is simply round. The speed limit in space no more implies a "light barrier" than the distance limit on Earth implies a wall. Space itself - the vacuum that holds all energy and matter - has properties. One of these is its geometry, which can be described by regarding time as a special dimension. This geometry makes the speed of light recede before an accelerating spaceship much as the horizon recedes before a moving sea ship: the speed of light, like the horizon, is always equally remote in all directions. But the analogy dies here - this similarity has nothing to do with the curvature of space. It is enough to remember that the limiting speed is nothing so crude or so breakable as a "light barrier." Objects can always go faster, just no faster than light.

 People have long dreamed of gravity control. In the 1962 edition of Profiles of the Future, Arthur C. Clarke wrote that "Of all the forces, gravity is the most mysterious and the most implacable," and then went on to suggest that we will someday develop convenient devices for controlling gravity. Yet is gravity really so mysterious? In the general theory of relativity, Einstein described gravity as curvature in the space-time structure of the vacuum. The mathematics describing this is elegant and precise, and it makes predictions that have passed every test yet contrived.

 Gravity is neither more nor less implacable than other forces. No one can make a boulder lose its gravity, but neither can anyone make an electron lose its electric charge or a current its magnetic field. We control electric and magnetic fields by moving the particles that create them; we can control gravitational fields similarly, by moving ordinary masses. It seems that we cannot learn the secret of gravity control because we already have it.

 A child with a small magnet can lift a nail, using a magnetic field to overwhelm Earth's gravitational pull. But unfortunately for eager gravitational engineers, using gravity to lift a nail requires a tremendous mass. Hanging Venus just over your head would almost do the job - until it fell on you.
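
 A rough check of the Venus remark, using assumed textbook values for the gravitational constant and for Venus's mass and radius (a sketch, not part of the original text):

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_VENUS = 4.867e24   # mass of Venus, kg (assumed textbook value)
    R_VENUS = 6.052e6    # radius of Venus, m (assumed textbook value)
    G_EARTH = 9.81       # Earth's surface gravity, m/s^2

    # With Venus's surface "just over your head," a nail sits about one Venus
    # radius from Venus's center, so Venus pulls it upward at:
    g_up = G * M_VENUS / R_VENUS ** 2
    print(f"upward pull {g_up:.2f} m/s^2 vs downward pull {G_EARTH:.2f} m/s^2")
    # ~8.87 m/s^2 up against 9.81 m/s^2 down: almost, but not quite, enough.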

 Engineers stir up electromagnetic waves by shaking electric charges back and forth in an antenna; one can stir up gravity waves by shaking a rock in the air. But again, the gravitational effect is weak. Though a one-kilowatt radio station is nothing extraordinary, all the shaking and spinning of all the masses in the solar system put together fails to radiate as much as a kilowatt of gravity waves.

 We understand gravity well enough; it simply isn't much use in building machines much lighter than the Moon. But devices using large masses do work. A hydroelectric dam is part of a gravity machine (the other part being the Earth) that extracts energy from falling mass. Machines using black holes will be able to extract energy from falling mass with over fifty percent efficiency, based on E = mc². Lowering a single bucket of water into a black hole would yield as much energy as pouring several trillion buckets of water through the generators of a kilometer-high dam.
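
 The bucket comparison is easy to reproduce. The sketch below assumes a ten-kilogram bucket of water and takes the fifty percent mass-to-energy conversion figure from the text; the other numbers are illustrative:

    C = 3.0e8             # speed of light, m/s
    G_EARTH = 9.81        # m/s^2
    BUCKET_KG = 10.0      # assumed ~10-liter bucket of water
    DAM_HEIGHT_M = 1000.0 # the kilometer-high dam mentioned in the text

    energy_black_hole = 0.5 * BUCKET_KG * C ** 2                # half of m*c^2
    energy_per_dam_bucket = BUCKET_KG * G_EARTH * DAM_HEIGHT_M  # m*g*h

    print(f"equivalent buckets through the dam: "
          f"{energy_black_hole / energy_per_dam_bucket:.1e}")
    # ~4.6e12, i.e. several trillion buckets, as the text says.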

 Because the laws of gravity describe how the vacuum curves, they also apply to science-fiction style "space warps." It seems that tunnels from one point in space to another would be unstable, even if they could be created in the first place. This prevents future spaceships from reaching distant points faster than light by taking a shortcut around the intervening space, and this limit to travel in turn sets a limit to growth.

 Einstein's laws seem to give an accurate description of the overall geometry of the vacuum. If so, then the limits that result seem inescapable: you can get rid of almost anything, but not the vacuum itself.

 Other laws and limits seem inescapable for similar reasons. In fact, physicists have increasingly come to regard all of physical law in terms of the structure of the vacuum. Gravity waves are a certain kind of ripple in the vacuum; black holes are a certain kind of kink. Likewise, radio waves are another kind of ripple in the vacuum, and elementary particles are other, very different kinds of kinks (which in some theories resemble tiny, vibrating strings). In this view, there is only one substance in the universe - the vacuum - but one that takes on a variety of forms, including those patterns of particles that we call "solid matter." This view suggests the inescapable quality of natural law. If a single substance fills the universe, is the universe, then its properties limit all that we can do.

 The strangeness of modern physics, however, leads many people to distrust it. The revolutions that brought quantum mechanics and relativity gave rise to talk of "the uncertainty principle," "the wave nature of matter," "matter being energy," and "curved space-time." An air of paradox surrounds these ideas and thus physics itself. It is understandable that new technologies should seem odd to us, but why should the ancient and immutable laws of nature turn out to be bizarre and shocking?

 Our brains and languages evolved to deal with things vastly larger than atoms, moving at a tiny fraction of the speed of light. They do a tolerable job of this, though it took people centuries to learn to describe the motion of a falling rock. But we have now stretched our knowledge far beyond the ancient world of the senses. We have found things (matter waves, curved space) that seem bizarre - and some that are simply beyond our ability to visualize. But "bizarre" does not mean mysterious and unpredictable. Mathematics and experiments still work, letting scientists vary and select theories, evolving them to fit even a peculiar reality. Human minds have proved remarkably flexible, but it is no great surprise to find that we cannot always visualize the invisible.

 Part of the reason that physics seems so strange is that people crave oddities, and tend to spread memes that describe things as odd. Some people favor ideas that coat the world in layers and fill it with grade-B mysteries. Naturally, they favor and spread memes that make matter seem immaterial and quantum mechanics seem like a branch of psychology.

 Relativity, it is said, reveals that matter (that plain old stuff that people think they understand) is really energy (that subtle, mysterious stuff that makes things happen). This encourages a smiling vagueness about the mystery of the universe. It might be clearer to say that relativity reveals energy to be a form of matter: in all its forms, energy has mass. Indeed, lightsails work on this principle, through the impact of mass on a surface. Light even comes packaged in particles.

 Consider also the Heisenberg uncertainty principle, and the related fact that "the observer always affects the observed." The uncertainty principle is intrinsic to the mathematics describing ordinary matter (giving atoms their very size), but the associated "effect of the observer" has been presented in some popular books as a magical influence of consciousness on the world. In fact, the core idea is more prosaic. Imagine looking at a dust mote in a light beam: When you observe the reflected light, you certainly affect it - your eye absorbs it. Likewise, the light (with its mass) affects the dust mote: it bounces off the mote, exerting a force. The result is not an effect of your mind on the dust, but of the light on the dust. Though quantum measurement has peculiarities far more subtle than this, none involves the mind reaching out to change reality.

 Finally, consider the "twin paradox." Relativity predicts that, if one of a pair of twins flies to another star and back at near the speed of light, the traveling twin will be younger than the stay-at-home twin. Indeed, measurements with accurate clocks demonstrate the time-slowing effect of rapid motion. But this is not a "paradox"; it is a simple fact of nature.

Will Physics Again be Upended?

In 1894 the eminent physicist Albert A. Michelson stated: "The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote... Our future discoveries must be looked for in the sixth place of decimals."

 But in 1895, Roentgen discovered X rays. In 1896, Becquerel discovered radioactivity. In 1897, Thomson discovered the electron. In 1905, Einstein formulated the special theory of relativity (and thus explained Michelson's own 1887 observations regarding the speed of light). In 1905, Einstein also presented the photon theory of light. In 1911, Rutherford discovered the atomic nucleus. In 1915, Einstein formulated the general theory of relativity. In 1924-30, de Broglie, Heisenberg, Bohr, Pauli, and Dirac developed the foundations of quantum mechanics. In 1929, Hubble announced evidence for the expansion of the universe. In 1931, Michelson died.

 Michelson had made a memorable mistake. People still point to his statement and what followed to support the view that we shouldn't (ever?) claim any firm understanding of natural law, or of the limits to the possible. After all, if Michelson was so sure and yet so wrong, shouldn't we fear repeating his mistake? The great revolution in physics has led some people to conclude that science will hold endless important surprises - even surprises important to engineers. But are we likely to encounter such an important upheaval again?

 Perhaps not. The content of quantum mechanics was a surprise, yet before it appeared, physics was obviously and grossly incomplete. Before quantum mechanics, you could have walked up to any scientist, grinned maliciously, rapped on his desk and asked, "What holds this thing together? Why is it brown and solid, while air is transparent and gaseous?" Your victim might have said something vague about atoms and their arrangements, but when you pressed for an explanation, you would at best have gotten an answer like "Who knows? Physics can't explain matter yet!" Hindsight is all too easy, yet in a world made of matter, inhabited by material people using material tools, this ignorance of the nature of matter was a gap in human knowledge that Michelson should perhaps have noted. It was a gap not in "the sixth place of decimals" but in the first.

 It is also worth observing the extent to which Michelson was right. The laws of which he spoke included Newton's laws of gravity and motion, and Maxwell's laws of electromagnetism. And indeed, under conditions common in engineering, these laws have been modified only "in the sixth place of decimals." Einstein's laws of gravity and motion match Newton's laws closely except under extreme conditions of gravitation and speed; the quantum electrodynamic laws of Feynman, Schwinger, and Tomonaga match Maxwell's laws closely except under extreme conditions of size and energy.

 Further revolutions no doubt lurk around the edges of these theories. But those edges seem far from the world of living things and the machines we build. The revolutions of relativity and quantum mechanics changed our knowledge about matter and energy, but matter and energy themselves remained unchanged - they are real and care nothing for our theories. Physicists now use a single set of laws to describe how atomic nuclei and electrons interact in atoms, molecules, molecular machines, living things, planets, and stars. These laws are not yet completely general; the quest for a unified theory of all physics continues. But as physicist Stephen W. Hawking states, "at the moment we have a number of partial laws which govern the behavior of the universe under all but the most extreme conditions." And by an engineer's standards, those conditions are extraordinarily extreme.

 Physicists regularly announce new particles observed in the debris from extremely energetic particle collisions, but you cannot buy one of these new particles in a box. And this is important to recognize, because if a particle cannot be kept, then it cannot serve as a component of a stable machine. Boxes and their contents are made of electrons and nuclei. Nuclei, in turn, are made of protons and neutrons. Hydrogen atoms have single protons as their nuclei; lead atoms have eighty-two protons and over a hundred neutrons. Isolated neutrons disintegrate in a few minutes. Few other stable particles are known: photons - the particles of light - are useful and can be trapped for a time; neutrinos are almost undetectable and cannot even be trapped. These particles (save the photon) have matching antiparticles. All other known particles disintegrate in a few millionths of a second or less. Thus, the only known building blocks for hardware are electrons and nuclei (or their antiparticles, for special isolated applications); these building blocks ordinarily combine to form atoms and molecules.

 Yet despite the power of modern physics, our knowledge still has obvious holes. The shaky state of elementary particle theory leaves some limits uncertain. We may find new stable, "boxable" particles, such as magnetic monopoles or free quarks; if so, they will doubtless have uses. We may even find a new long-range field or form of radiation, though this seems increasingly unlikely. Finally, some new way of smashing particles together may improve our ability to convert known particles into other known particles.

 But in general, complex hardware will require complex, stable patterns of particles. Outside the environment of a collapsed star, this means patterns of atoms, which are well described by relativistic quantum mechanics. The frontiers of physics have moved on. On a theoretical level, physicists seek a unified description of the interactions of all possible particles, even the most short-lived particles. On an experimental level, they study the patterns of subatomic debris created by high-energy collisions in particle accelerators. So long as no new, stable, and useful particle comes out of such a collision - or turns up as the residue of past cosmic upheavals - atoms will remain the sole building blocks of stable hardware. And engineering will remain a game that is played with already known pieces according to already known rules. New particles would add pieces, not eliminate rules.

The Limits to Hardware

Is molecular machinery really the end of the road where miniaturization is concerned? The idea that molecular machinery might be a step toward a smaller "nuclear machinery" seems natural enough. One young man (a graduate student in economics at Columbia University) heard of molecular technology and its ability to manipulate atoms, and immediately concluded that molecular technology could do almost anything, even turn falling nuclear bombs into harmless lead bricks at a distance.

 Molecular technology can do no such thing. Turning plutonium into lead (whether at a distance or not) lies beyond molecular technology for the same reason that turning lead into gold lay beyond an alchemist's chemistry. Molecular forces have little effect on the atomic nucleus. The nucleus holds over 99.9 percent of an atom's mass and occupies about 1/1,000,000,000,000,000 of its volume. Compared to the nucleus, the rest of an atom (an electron cloud) is less than airy fluff. Trying to change a nucleus by poking at it with a molecule is even more futile than trying to flatten a steel ball bearing by waving a ball of cotton candy at it. Molecular technology can sort and rearrange atoms, but it cannot reach into a nucleus to change an atom's type.  

Nanomachines may be no help in building nuclear-scale machines, but could such machines even exist? Apparently not, at least under any conditions we can create in a laboratory. Machines must have a number of parts in close contact, but close-packed nuclei repel each other with ferocity. When splitting nuclei blasted Hiroshima, most of the energy was released by the violent electrostatic repulsion of the freshly split halves. The well-known difficulty of nuclear fusion stems from this same problem of nuclear repulsion.

 In addition to splitting or fusing, nuclei can be made to emit or absorb various types of radiation. In one technique, they are made to gyrate in ways that yield useful information, letting doctors make medical images based on nuclear magnetic resonance. But all these phenomena rely only on the properties of well-separated nuclei. Isolated nuclei are too simple to act as machines or electronic circuits. Nuclei can be forced close together, but only under the immense pressures found inside a collapsed star. Doing engineering there would present substantial difficulties, even if a collapsed star were handy.

 This returns us to the basic question, What can we accomplish by properly arranging atoms? Some limits already seem clear. The strongest materials possible will have roughly ten times the strength of today's strongest steel wire. (The strongest material for making a cable appears to be carbyne, a form of carbon having atoms arranged in straight chains.) It seems that the vibrations of heat will, at ordinary pressures, tear apart the most refractory solids at temperatures around four thousand degrees Celsius (roughly fifteen hundred degrees cooler than the Sun's surface).

 These brute properties of matter - strength and heat resistance - cannot be greatly improved through complex, clever arrangements of atoms. The best arrangements seem likely to be fairly simple and regular. Other fairly simple goals include transmitting heat, insulating against heat, transmitting electricity, insulating against electricity, transmitting light, reflecting light, and absorbing light.

 With some goals, pursuit of perfection will lead to simple designs; with others, it will lead to design problems beyond any hope of solving. Designing the best possible switching component for a computer may prove fairly easy; designing the best possible computer will be vastly more complex. Indeed, what we consider to be "the best possible" will depend on many factors, including the costs of matter, energy and time - and on what we want to compute. In any engineering project, what we call "better" depends on indefinitely many factors, including ill-defined and changing human wants. What is more, even where "better" is well defined, the cost of seeking the final increment of improvement that separates the best from the merely excellent may not be worth paying. We can ignore all such issues of complexity and design cost, though, when considering whether limits actually exist.

 To define a limit, one must choose a direction, a scale of quality. With that direction defining "better," there will definitely be a best. The arrangement of atoms determines the properties of hardware, and according to quantum mechanics, the number of possible arrangements is finite - more than just astronomically large, yet not infinite. It follows mathematically that, given a clear goal, some one of these arrangements must be best, or tied for best. As in chess, the limited number of pieces and spaces limits the arrangements and hence the possibilities. In both chess and engineering, though, the variety possible within those limits is inexhaustible.
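
 The chess analogy can be made concrete with a toy model (purely illustrative, not real materials design): give a handful of sites a choice of two atom types, fix some measure of "better," and an exhaustive search must find a best arrangement, or a tie for best, because the list of arrangements is finite:

    from itertools import product

    SITES = 6
    TYPES = ("C", "Si")   # two hypothetical atom types

    def score(arrangement):
        # A made-up quality measure: count adjacent C-C pairs.
        return sum(1 for a, b in zip(arrangement, arrangement[1:]) if a == b == "C")

    arrangements = list(product(TYPES, repeat=SITES))   # finite: 2**6 = 64
    best = max(arrangements, key=score)
    print(f"{len(arrangements)} arrangements; best score {score(best)}: {best}")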

 Just knowing the fundamental laws of matter isn't enough to tell us exactly where all the limits lie. We still must face the complexities of design. Our knowledge of some limits remains loose: "We know only that the limit lies between here (a few paces away) and there (that spot near the horizon)." Assemblers will open the way to the limits, wherever they are, and automated engineering systems will speed progress along the road. The absolute best will often prove elusive, but the runners-up will often be nearly as good.

 As we approach genuine limits, our abilities will, in ever more areas of technology, cease growing. Advances in these fields will stop not merely for a decade or a century, but permanently.

 Some may balk at the word "permanently," thinking "No improvements in a thousand years? In a million years? This must be an overstatement." Yet where we reach true physical limits, we will go no further. The rules of the game are built into the structure of the vacuum, into the structure of the universe. No rearranging of atoms, no clashing of particles, no legislation or chanting or stomping will move natural limits one whit. We may misjudge the limits today, but wherever the real limits lie, there they will remain.

 This look at natural law shows limits to the quality of things. But we also face limits to quantity, set not only by natural law but by the way that matter and energy are arranged in the universe as we happen to find it. The authors of The Limits to Growth, like so many others, attempted to describe these limits without first examining the limits to technology. This gave misleading results.

Entropy: A Limit to Energy Use

Recently, some authors have described the accumulation of waste heat and disorder as ultimate limits to human activity. In The Lean Years - Politics in the Age of Scarcity, Richard Barnet writes:

"It is ironical that the rediscovery of limits coincides with two of the most audacious technological feats in human history. One is genetic engineering, the sudden glimpse of a power to shape the very stuff of life. The other is the colonization of space. These breakthroughs encourage fantasies of power, but they do not break the ecological straightjacket known as the Second Law of Thermodynamics: Ever greater consumption of energy produces ever greater quantities of heat, which never disappear, but must be counted as a permanent energy cost. Since accumulation of heat can cause ecological catastrophe, these costs limit man's adventure in space as surely as on earth."
Jeremy Rifkin (with Ted Howard) has written an entire book on thermodynamic limits and the future of humanity, titled Entropy: A New World View.

 Entropy is a standard scientific measure of waste heat and disorder. Whenever activities consume useful energy, they produce entropy; the entropy of the world therefore increases steadily and irreversibly. Ultimately, the dissipation of useful energy will destroy the basis of life. As Rifkin says, this idea may seem too depressing to consider, but he argues that we must face the terrible facts about entropy, humanity, and the Earth. But are these facts so terrible?

 Barnet writes that accumulating heat is a permanent energy cost, limiting human action. Rifkin states that "pollution is the sum total of all of the available energy in the world that has been transformed into unavailable energy." This unavailable energy is chiefly low-temperature waste heat, the sort that makes television sets get warm. But does heat really accumulate, as Barnet fears? If so, then the Earth must be growing steadily hotter, minute by minute and year by year. We should be roasting now, if our ancestors weren't frozen solid. Somehow, though, continents manage to get cold at night, and colder yet during the winter. During ice ages, the whole Earth cools off.

 Rifkin takes another tack. He states that "the fixed endowment of terrestrial matter that makes up the earth's crust is constantly dissipating. Mountains are wearing down and topsoil is being blown away with each passing second." By "blown away" Rifkin doesn't mean blown into space or blown out of existence; he just means that the mountain's atoms have become jumbled together with others. Yet this process, he argues, means our doom. The jumbling of atoms makes them "unavailable matter," as a consequence of the "fourth law of thermodynamics" propounded by economist Nicholas Georgescu-Roegen: "In a closed system, the material entropy must ultimately reach a maximum," or (equivalently) "unavailable matter cannot be recycled." Rifkin declares that the Earth is a closed system, exchanging energy but not matter with its surroundings, and that therefore "here on earth material entropy is continually increasing and must ultimately reach a maximum," making Earth's life falter and die.

 A grim situation indeed - the Earth has been degenerating for billions of years. Surely the end must be near!

 But can this really be true? As life developed, it brought more order to Earth, not less; the formation of ore deposits did the same. The idea that Earth has degenerated seems peculiar at best (but then, Rifkin thinks evolution is bunk). Besides, since matter and energy are essentially the same, how can a valid law single out something called "material entropy" in the first place?

 Rifkin presents perfume spreading from a bottle into the air in a room as an example of "dissipating matter," of material entropy increasing - of matter becoming "unavailable." The spread of salt into water in a bottle will serve equally well. Consider, then, a test of the "fourth law of thermodynamics" in the Salt-Water Bottle Experiment:

 Imagine a bottle having a bottom with a partition, dividing it into two basins. In one sits salt, in the other sits water. A cork plugs the bottle's neck: this closes the system and makes the so-called fourth law of thermodynamics apply. The bottle's contents are in an organized state: their material entropy is not at a maximum - yet.

 Now, pick up the bottle and shake it. Slosh the water into the other basin, swirl it around, dissolve the salt, increase the entropy - go wild! In such a closed system, the "fourth law of thermodynamics" says that this increase in the material entropy should be permanent. All of Rifkin's alarums about the steady, inevitable increase of Earth's entropy rest on this principle.

 To see if there is any basis for Rifkin's new worldview, take the bottle and tip it, draining the salty water into one basin. This should make no difference, since the system remains closed. Now set the bottle upright, placing the saltwater side in sunlight and the empty side in shade. Light shines in and heat leaks out, but the system remains as closed as the Earth itself. But watch - the sunlight evaporates water, which condenses in the shade! Fresh water slowly fills the empty basin, leaving the salt behind.

 Rifkin himself states that "in science, only one uncompromising exception is enough to invalidate a law." This thought experiment, which mimics how natural salt deposits have formed on Earth, invalidates the law on which he founds his whole book. So do plants. Sunlight brings energy from space; heat radiated back into space carries away entropy (of which there is only one kind). Therefore, entropy can decline in a closed system and flowers can bloom on Earth for age upon age.
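
 The argument in this paragraph can be put in rough numbers. Sunlight arrives as relatively low-entropy radiation from a roughly 5,800 K source; Earth sheds the same power as heat at an effective temperature near 255 K, so far more entropy leaves than arrives. The figures below are assumed round values, and dividing power by temperature is only a simplified estimate of radiation entropy:

    SOLAR_POWER_ABSORBED = 1.2e17   # W absorbed by Earth (assumed round figure)
    T_SUNLIGHT = 5800.0             # K, effective temperature of sunlight
    T_EARTH = 255.0                 # K, Earth's effective radiating temperature

    entropy_in = SOLAR_POWER_ABSORBED / T_SUNLIGHT   # W/K carried in by sunlight
    entropy_out = SOLAR_POWER_ABSORBED / T_EARTH     # W/K carried out as waste heat

    print(f"entropy in:  {entropy_in:.1e} W/K")
    print(f"entropy out: {entropy_out:.1e} W/K (about {entropy_out / entropy_in:.0f}x more)")
    # The excess entropy leaves with the outgoing heat, so order on Earth -
    # salt deposits, plants, flowers - can increase without violating the second law.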

 Rifkin is right in saying that "it's possible to reverse the entropy process in an isolated time and place, but only by using up energy in the process and thus increasing the overall entropy of the environment." But both Rifkin and Barnet make the same mistake: when they write of the environment, they imply the Earth - but the law applies to the environment as a whole, and that whole is the universe. In effect, Rifkin and Barnet ignore both the light of the Sun and the cold black of the night sky.

 According to Rifkin, his ideas destroy the notion of history as progress, transcending the modern worldview. He calls for sacrifice, stating that "no Third World nation should harbor hopes that it can ever reach the material abundance that has existed in America." He fears panic and bloodshed. Rifkin finishes by informing us that "the Entropy Law answers the central question that every culture throughout history has grappled with: How should human beings behave in the world?" His answer? "The ultimate moral imperative, then, is to waste as little energy as possible."

 This would seem to mean that we must save as much energy as possible, seeking to eliminate waste. But what is the greatest nearby energy waster? Why, the Sun, of course - it wastes energy trillions of times faster than we humans do. If taken seriously, it seems that Rifkin's ultimate moral imperative therefore urges: "Put out the Sun!"

 This silly consequence should have tipped Rifkin off. He and many others hold views that smack of a pre-Copernican arrogance: they presume that the Earth is the whole world and that what people do is automatically of cosmic importance.

 There is a genuine entropy law, of course: the second law of thermodynamics. Unlike the bogus "fourth law," it is described in textbooks and used by engineers. It will indeed limit what we do. Human activity will generate heat, and Earth's limited ability to radiate heat will set a firm limit to the amount of Earth-based industrial activity. Likewise, we will need winglike panels to radiate waste heat from our starships. Finally, the entropy law will - at the far end of an immensity of time - bring the downfall of the universe as we know it, limiting the lifespan of life itself.

 Why flog the carcass of Rifkin's Entropy? Simply because today's information systems often present even stillborn ideas as if they were alive. By encouraging false hopes, false fears, and misguided action, these ideas can waste the efforts of people actively concerned about long-range world problems.

 Among those whose praise appears on the back cover of Rifkin's book ("an inspiring work," "brilliant work," "earthshaking," "should be taken to heart") are a Princeton professor, a talk-show host, and two United States senators. A seminar at MIT ("The Finite Earth - World Views for a Sustainable Future") featured Rifkin's book.

 All the seminar's sponsors were from nontechnical departments. Most senators in our technological society lack training in technology, as do most professors and talk-show hosts. Georgescu-Roegen himself, inventor of the "fourth law of thermodynamics," has extensive credentials - as a social scientist.

 The entropy threat is an example of blatant nonsense, yet its inventors and promoters aren't laughed off the public stage. Imagine a thousand, a million similar distortions - some subtle, some brazen, but all warping the public's understanding of the world. Now imagine a group of democratic nations suffering from an infestation of such memes while attempting to cope with an era of accelerating technological revolution. We have a real problem. To make our survival more likely, we will need better ways to weed our memes, to make room for sound understanding to grow. In Chapters 13 and 14 I will report on two proposals for how we might do this.

The Limits to Resources

Natural law limits the quality of technology, but within these limits we will use replicating assemblers to produce superior spacecraft. With them, we will open space wide and deep.

 Today Earth has begun to seem small, arousing concerns that we may deplete its resources. Yet the energy we use totals less than 1/10,000 of the solar energy striking Earth; we worry not about the supply of energy as such, but about the supply of convenient gas and oil. Our mines barely scratch the surface of the globe; we worry not about the sheer quantity of resources, but about their convenience and cost. When we develop pollution-free nanomachines to gather solar energy and resources, Earth will be able to support a civilization far larger and wealthier than any yet seen, yet suffer less harm than we inflict today. The potential of Earth makes the resources we now use seem insignificant by comparison.  
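
 The 1/10,000 figure is a short calculation. The sketch below uses the standard solar constant and Earth's radius, plus a rough figure for global power use around the time the book was written; all three numbers are assumptions, not from the text:

    import math

    SOLAR_CONSTANT = 1361.0    # W/m^2 of sunlight at Earth's distance from the Sun
    EARTH_RADIUS = 6.371e6     # m
    HUMAN_POWER_USE = 1.0e13   # W, rough global figure for the mid-1980s (assumed)

    sunlight_on_earth = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2  # on Earth's disk
    print(f"sunlight striking Earth: {sunlight_on_earth:.1e} W")
    print(f"human energy use: about 1/{sunlight_on_earth / HUMAN_POWER_USE:,.0f} of it")
    # ~1.7e17 W of sunlight versus ~1e13 W of human use: well under 1/10,000.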

Yet Earth is but a speck. The asteroidal debris left over from the formation of the planets will provide materials enough to build a thousand times Earth's land area. The Sun floods the solar system with a billion times the power that reaches Earth. The resources of the solar system are truly vast, making the resources of Earth seem insignificant by comparison.

 Yet the solar system is but a speck. The stars that crowd the night sky are suns, and the human eye can see only the closest. Our galaxy holds a hundred billion suns, and many no doubt pour their light on dead planets and asteroids awaiting the touch of life. The resources of our galaxy make even our solar system seem insignificant by comparison.

 Yet our galaxy is but a speck. Light older than our species shows galaxies beyond ours. The visible universe holds a hundred billion galaxies, each a swarm of billions of suns. The resources of the visible universe make even our galaxy seem insignificant by comparison.

 With this we reach the limits of knowledge, if not of resources. The solar system seems answer enough to Earth's limits - and if the rest of the universe remains unclaimed by others, then our prospects for expansion boggle the mind several times over. Does this mean that replicating assemblers and cheap spaceflight will end our resource worries?

 In a sense, opening space will burst our limits to growth, since we know of no end to the universe. Nevertheless, Malthus was essentially right.

Malthus

In his 1798 Essay on the Principle of Population, Thomas Robert Malthus, an English clergyman, presented the ancestor of all modern limits-to-growth arguments. He noted that freely growing populations tend to double periodically, thus expanding exponentially. This makes sense: since all organisms are descended from successful replicators, they tend to replicate when given a chance. For the sake of argument, Malthus assumed that resources - the food supply - could increase by a fixed amount per year (a process called linear growth, since it plots as a line on a graph). Since mathematics shows that any fixed rate of exponential growth will eventually outstrip any fixed rate of linear growth, Malthus argued that population growth, if unchecked, would eventually outrun food production.
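
 A few lines of arithmetic reproduce Malthus's point. The numbers below are purely illustrative (a population doubling every 25 years against a food supply gaining a fixed amount per year); with any such figures, the doubling eventually wins:

    population, food = 1.0, 10.0      # arbitrary starting units; food begins in surplus
    STEP_YEARS = 25                   # assumed doubling time
    FOOD_GAIN_PER_YEAR = 1.0          # fixed (linear) yearly gain

    for year in range(0, 401, STEP_YEARS):
        if population > food:
            print(f"after about {year} years, population ({population:.0f}) "
                  f"has outrun food ({food:.0f})")
            break
        population *= 2                          # exponential growth
        food += FOOD_GAIN_PER_YEAR * STEP_YEARS  # linear growth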

 Authors have repeated variations on this idea ever since, in books like The Population Bomb and Famine - 1975!, yet food production has kept pace with population. Outside Africa, it has even pulled ahead. Was Malthus wrong?

 Not fundamentally: he was wrong chiefly about timing and details. Growth on Earth does face limits, since Earth has limited room, whether for farming or anything else. Malthus failed to predict when limits would pinch us chiefly because he failed to anticipate breakthroughs in farm equipment, crop genetics, and fertilizers.

 Some people now note that exponential growth will overrun the fixed stock of Earth's resources, a simpler argument than the one Malthus made. Though space technology will break this limit, it will not break all limits. Even if the universe were infinitely large, we still could not travel infinitely fast. The laws of nature will limit the rate of growth: Earth's life will spread no faster than light.

 Steady expansion will open new resources at a rate that will increase as the frontier spreads deeper and wider into space. This will result not in linear growth, but in cubic growth. Yet Malthus was essentially right: exponential growth will outrun cubic growth as easily as it would linear. Calculations show that unchecked population growth, with or without long life, would overrun available resources in about one or two thousand years at most. Unlimited exponential growth remains a fantasy, even in space.
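
 Here is a sketch of the kind of calculation this paragraph refers to, with every number an explicit (and generous) assumption rather than a figure from the text - a present-day population, a doubling time of a few decades, a rough local density of star systems, and an extravagant carrying capacity per system:

    import math

    POP_0 = 1.0e10            # starting population (assumed)
    DOUBLING_YEARS = 25       # assumed doubling time
    STARS_PER_LY3 = 0.004     # rough density of star systems per cubic light-year (assumed)
    PEOPLE_PER_STAR = 1.0e20  # an extravagant capacity per star system (assumed)

    for years in range(DOUBLING_YEARS, 5001, DOUBLING_YEARS):
        population = POP_0 * 2 ** (years / DOUBLING_YEARS)
        radius_ly = years     # a frontier expanding at the speed of light
        capacity = PEOPLE_PER_STAR * STARS_PER_LY3 * (4 / 3) * math.pi * radius_ly ** 3
        if population > capacity:
            print(f"growth overruns the light-speed frontier after roughly {years} years")
            break
    # With these toy numbers the crossover lands near 1,500 years - the same
    # order as the "one or two thousand years" cited above.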

Will Someone Stop Us?

Do other civilizations already own the resources of the universe? If so, then they would represent a limit to growth. The facts about evolution and technological limits shed useful light on this question.

 Since many Sunlike stellar systems are hundreds of millions of years older than our solar system, some civilizations (if any substantial number exist) should be many hundreds of millions of years ahead of ours. We would expect at least some of these civilizations to do what all known life has done: spread as far as it can. Earth is green not just in the oceans where life began, but on shores, hills, and mountains. Green plants have now spread to stations in orbit; if we prosper, Earth's plants will spread to the stars. Organisms spread as far as they can, then a bit farther. Some fail and die, but the successful survive and spread farther yet. Settlers bound for America sailed and sank, and landed and starved, but some survived to found new nations. Organisms everywhere will feel the pressures that Malthus described, because they will have evolved to survive and spread; genes and memes both push in the same direction. If extraterrestrial civilizations exist, and if even a small fraction were to behave as all life on Earth does, then they should by now have spread across space.

 Like us, they would tend to evolve technologies that approach the limits set by natural law. They would learn how to travel near the speed of light, and competition or sheer curiosity would drive some to do so. Indeed, only highly organized, highly stable societies could restrain competitive pressures well enough to avoid exploding outward at near the speed of light. By now, after hundreds of millions of years, even widely scattered civilizations would have spread far enough to meet each other, dividing all of space among them.

 If these civilizations are indeed everywhere, then they have shown great restraint and hidden themselves well. They would have controlled the resources of whole galaxies for many millions of years, and faced limits to growth on a cosmic scale. An advanced civilization pushing its ecological limits would, almost by definition, not waste both matter and energy. Yet we see such waste in all directions, as far as we can see spiral galaxies: their spiral arms hold dust clouds made of wasted matter, backlit by wasted starlight.

 If such advanced civilizations existed, then our solar system would lie in the realm of one of them. If so, then it would now be their move - we could do nothing to threaten them, and they could study us as they pleased, with or without our cooperation. Sensible people would listen if they firmly stated a demand. But if they exist, they must be hiding themselves - and keeping any local laws secret.

 The idea that humanity is alone in the visible universe is consistent with what we see in the sky and with what we know about the origin of life. No bashful aliens are needed to explain the facts. Some say that since there are so many stars, there must surely be other civilizations among them. But there are fewer stars in the visible universe than there are molecules in a glass of water. Just as a glass of water need not contain every possible chemical (even downstream from a chemical plant), so other stars need not harbor civilizations.
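
 The glass-of-water comparison checks out with rough figures (the glass size and star counts below are assumptions consistent with the counts given earlier in the chapter):

    AVOGADRO = 6.022e23
    GLASS_GRAMS = 250.0          # a roughly quarter-liter glass of water (assumed)
    WATER_MOLAR_MASS = 18.0      # grams per mole

    molecules = (GLASS_GRAMS / WATER_MOLAR_MASS) * AVOGADRO   # ~8e24 molecules
    stars = 1.0e11 * 1.0e11      # ~100 billion galaxies x ~100 billion suns each

    print(f"molecules in the glass: {molecules:.1e}")
    print(f"stars in the visible universe: {stars:.1e}")
    # Even with a generous star count, the glass holds hundreds of times more molecules.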

 We know that competing replicators tend to expand toward their ecological limits, and that resources are nonetheless wasted throughout the universe. We have received no envoy from the stars, and we apparently lack even a tolerably humane zookeeper. There may well be no one there. If they do not exist, then we need not consider them in our plans. If they do exist, then they will overrule our plans according to their own inscrutable wishes, and there seems no way to prepare for the possibility. Thus for now, and perhaps forever, we can make plans for our future without concern for limits imposed by other civilizations.

Growth Within Limits

Whether anyone else is out there or not, we are on our way. Space waits for us, barren rock and sunlight like the barren rock and sunlight of Earth's continents a billion years ago, before life crept forth from the sea. Our engineers are evolving memes that will help us create fine spaceships and settlements: we will settle the land of the solar system in comfort. Beyond the rich inner solar system lies the cometary cloud - a vast growth medium that thins away into the reaches of interstellar space, then thickens once more around other star systems, with fresh suns and sterile rock awaiting the touch of life.

 Although endless exponential growth remains a fantasy, the spread of life and civilization faces no fixed bound. Expansion will proceed, if we survive, because we are part of a living system and life tends to spread. Pioneers will move outward into worlds without end. Others will remain behind, building settled cultures throughout the oases of space. In any settlement, the time will come when the frontier lies far away, then farther. For the bulk of the future, most people and their descendants will live with limits to growth.

 We may like or dislike limits to growth, but their reality is independent of our wishes. Limits exist wherever goals are clearly defined.

 But on frontiers where standards keep changing, this idea of limits becomes irrelevant. In art or mathematics the value of work depends on complex standards, subject to dispute and change. One of those standards is novelty, and this can never be exhausted. Where goals change and complexity rules, limits need not bind us. To the creation of symphony and song, paintings and worlds, software, theorems, films, and delights yet unimagined, there seems no end. New technologies will nurture new arts, and new arts will bring new standards.

 The world of brute matter offers room for great but limited growth. The world of mind and pattern, though, holds room for endless evolution and change. The possible seems room enough.

Views of Limits

The idea of great advances within firm limits hasn't evolved to feel pleasing, but to be accurate. Limits outline possibilities, and some may be ugly or terrifying. We need to prepare for the breakthroughs ahead, yet many futurists studiously pretend that no breakthroughs will occur.

 This school of thought is associated with The Limits to Growth, published as a report to the Club of Rome. Professor Mihajlo D. Mesarovic later coauthored Mankind at the Turning Point, published as the second report to the Club of Rome. Professor Mesarovic develops computer models like the one used in The Limits to Growth - each is a set of numbers and equations that purports to describe future changes in the world's population, economy, and environment. In the spring of 1981, he visited MIT to address "The Finite Earth: Worldviews for a Sustainable Future," the same seminar that featured Jeremy Rifkin's Entropy. He described a model intended to give a rough description of the next century. When asked whether he or any of his colleagues had allowed for even one future breakthrough comparable to, say, the petroleum industry, aircraft, automobiles, electric power, or computers - perhaps self-replicating robotic systems or cheap space transportation? - he answered directly: "No."

 Such models of the future are obviously bankrupt. Yet some people seem willing - even eager - to believe that breakthroughs will suddenly cease, that a global technology race that has been gaining momentum for centuries will screech to a halt in the immediate future.

 The habit of neglecting or denying the possibility of technological advance is a common problem. Some people believe in snugly fitting limits because they have heard respected people spin plausible-sounding arguments for them. Yet it seems that some people must be responding more to wish than to fact, after this century of accelerating advance. Snug limits would simplify our future, making it easier to understand and more comfortable to think about. A belief in snug limits also relieves a person of certain concerns and responsibilities. After all, if natural forces will halt the technology race in a convenient and automatic fashion, then we needn't try to understand and control it.

 Best of all, this escape doesn't feel like escapism. To contemplate visions of global decline must give the feeling of facing harsh facts without flinching. Yet such a future would be nothing really new: it would force us toward the familiar miseries of the European past or the Third World present. Genuine courage requires facing reality, facing accelerating change in a world that has no automatic brakes. This poses intellectual, moral, and political challenges of greater substance.

 Warnings of bogus limits do double harm. First, they discredit the very idea of limits, blunting an intellectual tool that we will need to understand our future. But worse, such warnings distract attention from our real problems. In the Western world there is a lively political tradition that fosters suspicion of technology. To the extent that it first disciplines its suspicions by testing them against reality and then chooses workable strategies for guiding change, this tradition can contribute mightily to the survival of life and civilization. But people concerned about technology and the future are a limited resource. The world cannot afford to have their efforts squandered in futile campaigns to sweep back the global tide of technology with the narrow broom of Western protest movements. The coming problems demand more subtle strategies.

 No one can yet say for certain what problems will prove to be most important, or what strategies will prove best for solving them. Yet we can already see novel problems of great importance, and we can discern strategies with varying degrees of promise. In short, we can see enough about the future to identify goals worth pursuing.

Mind·X Discussion About This Article:

Speed limit of light.
posted on 12/03/2001 2:06 PM by mike@mikyo.com

The light barrier would be a definite limitation to how a civilization could spread throughout the universe, since communication is the essence of civilization.

However, doesn't quantum entanglement and teleportation hint that the speed of light may not be the ultimate limit, at least to communication?

Re: Speed limit of light.
posted on 12/04/2001 3:36 AM by tomaz@techemail.com

I don't know. But I do know, that nobody really knows.

Those saying - QM does not violate light speed, if you understand the matter - also don't know.

We need Quantum Gravity complete, to learn it.

IMHO.

- Thomas

Re: Speed limit of light.
posted on 12/13/2002 11:00 PM by CriX

Quantum teleportation can perform instantaneous (I think... or close to it) information transfer but the entangled matter must be separated at normal sub-light speeds.

Hmm... I now realize that I don't know if entangled matter can be used over and over for transferring info. Maybe you can, which could actually make it pretty useful... but it could also be a one shot deal.

Re: Speed limit of light.
posted on 12/14/2002 10:45 AM by Grant

What is information? When the American colonists decided to resist the British army sent to subdue them, a couple of lamps were set in a church tower. The information they conveyed was "One if by land and two if by sea." It could just as easily have been the other way around. In fact, the lights in the tower could have been used to signal just about anything. The actual information conveyed included the method of transport and time of arrival for an entire British force as well as when and where they would likely make their move toward the colonists. All of this encoded in a single bit.

The "information" being transfered is not the method used to encode it. So a single bit can carry as much information as the people who send it have agreed to let the code stand for. It can carry a whole book, if necessary, because the decision about what the code stands for has to be decided before the message is sent. The code merely tells the receiver which of several alternatives the sender has chosen.

Let's say I am having an argument about philosophy and want to tell my cohort which line of thought I am considering. I tell him beforehand "One bit for Spinoza and 0 bits for Aristotle." In this instance, one bit stands for the whole of either philospher's published work.

So the idea that small amounts of encoding can't carry large amounts of information ignores the fact that any symbol can stand for any amount of information. It's the setup process that determines the amount and content carried.

Grant

Re: Speed limit of light.
posted on 12/14/2002 5:47 PM by tony_b

Grant,

(nothing to do with the speed of light, but ...)

If we agree in advance that I am going to read either Moby Dick, or War and Peace, and I will send you a 1-bit, or 0-bit later on, to convey to you which of these I've read ... then I send, and you receive, only 1-bits-worth of information.

Because we have already exchanged a great deal of information about what the "bit" will indicate, there is nothing left for you to learn except "which" of the alternatives transpired.

If you had never read either book, and knew nothing of their content, then the bit sent certainly does not convey anything ABOUT either book.
If we have already exchanged elaborate plans as to how we will defend ourselves, deploy troops, etc., in the event the British come by land or by sea, then we already have BOTH of those plans, and all of their details, in our heads. The only thing conveyed by the lanterns in the tower is the 1-bit switch indicating WHICH of those alternatives to act upon.

Cheers! ____tony b____

Re: Speed limit of light.
posted on 12/14/2002 8:58 PM by Grant

Tony,

What you say is true but the last 20,000 years of building cultures has created a foundation on which small symbols can transmit volumes of information. I need only mention Darwin's name to trigger in your mind the whole philosophy of evolution. The names of other things also trigger the history and concepts behind much larger and more involved ideas. Christmas triggers massive amounts of data within the minds of everyone who hears it. This is not something we went to a lot of trouble to set up. It is just a consequence of the memetic process of building culture. Now even Christmas has been reduced to "xmas" and a picture of a tree with lights on it or a man in a red suit and a white beard can speak volumes.

It's no longer something we have to build; the building has been going on long enough to perpetuate itself and continue building on itself even after the human race becomes transhuman and intelligence becomes a shared part of everything we build. In other words, inert objects are and will continue to become capable of communication, that is, receiving and sending signals based on changes in their environment.

Have you heard about the watch you can now wave in front of the gas pump to pay for your gas? Or the key chain that does the same? There is no limit to what will contain this ability in the near future and almost no limit to what will be done with it. The storage and transmission of information is becoming ubiquitous and the amount of signal needed to pass it is decreasing at the same exponential rate. Think of a zip file for the universe. ;-)>

Grant

Re: Speed limit of light.
posted on 12/14/2002 10:53 PM by Dimitry V

I hear Maxwell's Demon whistling at you.

Re: Speed limit of light.
posted on 12/15/2002 4:54 AM by T P

It is a semantic discussion. Frege and Wittgenstein come to mind.

The problem as far as I see it is that of inputting new information to the database or accessing already existing information in the database through indexing.

The first one requires all the information in raw format, full size, unless the receiver has some sort of algorithm that needs less to unpack the information.

The last one is what Grant is referring to and is the ability to index already learned information and be able to retrieve it.

Re: Speed limit of light.
posted on 12/14/2002 6:09 PM by tony_b

CriX,

It's actually a bit more subtle than that (and hard for me to get my head around, so to speak).

Nothing of "mass" can approach/reach lightspeed, and no information in the formal sense (conveying "choice" or "intent") can be transmitted faster than light.

If I send you a QM-entangled photon-pair (I keep one and send you the other), and they are both polarized either vertically or horizontally (we don't know which), it will still take 4.3 years to reach you on Alpha Centauri.

If a third photon, of unknown polarization, is "given" to me, I have two choices. I can "determine" if photon-3 agrees or disagrees with my photon by one interaction, which will destroy the entanglement, OR I can cause them to interact in a way that I never learn of the agreement, but you do (instantaneously, btw). However, all you learn is the boolean "agrees/disagrees", rather than the value.

In other words, there is (as yet) no theoretical way that you, on Alpha Centauri, and someone here on Earth, BOTH learn something at the same time, or that something "intended" at this end can be conveyed to you at your end, faster than C.

If you end up getting the "instantaneous state", it is because WE do not learn it at our end.

Very subtle, and not an easy concept to convey (even below lightspeed. :)

Cheers! ____tony b____

Re: Speed limit of light.
posted on 12/16/2002 9:42 AM by CriX

Thanks. Yeah, this stuff is pretty complicated. The main thing I got from your post was that at most, entangled matter would be useful only once and for a one way transfer of information.

I DO think info can be sent instantaneously though. (I should probably have researched this before writing this response but I'm pretty damn sure that I've read it and heard about it several times before... one source being a friend that's taking a quantum computation course.)

So we take 10 years to separate ourselves, you on Earth, me around Alpha Centauri, and we both have some matter which has been entangled. It's my understanding that you could do something to the matter on your end (without viewing that change I suppose, only inferring it) and then I could have my matter interact with something and from that product infer the state that you put the matter on your side in. Something like that.... hrrmmm

Re: Speed limit of light.
posted on 12/17/2002 2:12 AM by tony_b

Crix,

Yeah, I think you got it.

If we were both in possession of pair-entangled "qubits", and I receive a "package", I can either choose to "open" the package and learn what is inside, OR I can have it interact with the qubit in a way that will (with a bit of extra work) reveal the contents of the package to you, instantaneously, no matter how distant you are. Unfortunately in that case, I cannot learn the contents of the package.

It's a weird info-teleportation that disallows duplication. Distant points cannot BOTH know the thing upon the moment of its intention/creation.

Strange Universe :)

Cheers! ____tony b____

Re: Speed limit of light.
posted on 12/15/2002 11:32 PM by James Jaeger

>Most senators in our technological society lack training in technology, as do most professors and talk-show hosts. . . .

This is an excellent point. I have posted extensively here that Congress is staffed by too few individuals with technical knowledge. It's almost all lawyers. How are THEY going to lead us into the 21st Century?!

>The entropy threat is an example of blatant nonsense, yet its inventors and promoters aren't laughed off the public stage. Imagine a thousand, a million similar distortions - some subtle, some brazen, but all warping the public's understanding of the world. Now imagine a group of democratic nations suffering from an infestation of such memes while attempting to cope with an era of accelerating technological revolution. We have a real problem. To make our survival more likely, we will need better ways to weed our memes, to make room for sound understanding to grow.

Probably true. But as soon as all human knowledge has been transcribed from paper to bytes, AI data mining techniques should make this process more possible.

James Jaeger




The Limits to Growth
posted on 12/16/2002 12:23 AM by James Jaeger

Okay, so what Dr. Drexler is saying is that there ARE real physical limits, but we haven't gotten anywhere near them ... yet.

Further:

>Warnings of bogus limits do double harm. First, they discredit the very idea of limits, blunting an intellectual tool that we will need to understand our future. But worse, such warnings distract attention from our real problems.

Well, seems to me if the media is eventually owned by one multi-national corporation and congress is filled with nothing but a bunch of non-technical lawyers -- we are in for one bozo future full of all sorts of bogus claims about limits, science, technology, goals, dangers, breakthroughs. In other words, Grant, the pseudo-memes will be a-flying. And I guess we'll have to start measuring memes in half-lives like we do radioactive isotopes and TV series.

>In the Western world there is a lively political tradition that fosters suspicion of technology. To the extent that it first disciplines its suspicions by testing them against reality and then chooses workable strategies for guiding change, this tradition can contribute mightily to the survival of life and civilization. But people concerned about technology and the future are a limited resource.

Hey, does this include us here at the MIND-X?! Are we a valuable resource, even though we're a little off-beat at times? Does anyone know the ratio of scientific breakthroughs that came from corporate labs vs. private basements?

>The world cannot afford to have their efforts squandered in futile campaigns to sweep back the global tide of technology with the narrow broom of Western protest movements. The coming problems demand more subtle strategies.

Like how about this subtle strategy to avoid squandering our efforts: Ray Kurzweil gives each of us here at the MIND-X a grant of say $100,000 to go off to some corner of the planet and experiment, research and/or observe the universe with the proviso that we have to meet back here by January 1, 2005 and discuss what we have come up with or learned. And just because someone is a big mucky-muck scientist or has published a bunch of books should be no cause to give them extra grant money or consideration. New and original research and technology can come out of any quarter, at least according to Isaac Asimov.

James Jaeger