
The Computational Universe
by Seth Lloyd

The amount of information you could process if you were to use all the energy and matter of the universe is 10^90 bits, and the number of elementary operations it could have performed since the Big Bang is about 10^120. Perhaps the universe is itself a computer, and what it's doing is performing a computation. If so, that's why the universe is so complex, and these numbers say how big that computation is. Also, that means Douglas Adams was right (the answer is "42").


Originally published on Edge, Oct. 24, 2002. Published on KurzweilAI.net Oct. 24, 2002.

On July 21, 2002, Edge brought together leading thinkers to speak about their "universe." Other participants:

The Emotion Universe by Marvin Minsky
The Intelligent Universe by Ray Kurzweil
The Inflationary Universe by Alan Harvey Guth
The Cyclic Universe by Paul Steinhardt

I'm a professor of mechanical engineering at MIT. I build quantum computers that store information on individual atoms and then massage the normal interactions between the atoms to make them compute. Rather than having the atoms do what they normally do, you make them do elementary logical operations like bit flips, NOT operations, AND gates, and OR gates. This allows you to process information not only on a small scale, but in ways that are not possible using ordinary computers. In order to figure out how to make atoms compute, you have to learn how to speak their language and to understand how they process information under normal circumstances.

It's been known for more than a hundred years, ever since Maxwell, that all physical systems register and process information. For instance, this little inchworm right here has something on the order of Avogadro's number of atoms, and dividing by Boltzmann's constant, its entropy is on the order of Avogadro's number of bits. This means that it would take about Avogadro's number of bits to describe that little guy and how every atom and molecule is jiggling around in his body in full detail. Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.
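To make the unit conversion concrete, here is a back-of-envelope sketch in Python. It assumes, as in the paragraph above, an entropy of order Avogadro's number times Boltzmann's constant, and converts that to bits by dividing by k_B ln 2; the exact prefactor for a real inchworm would of course differ.

import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K
N_A = 6.02214076e23  # Avogadro's number

S = N_A * k_B                    # assumed entropy: of order N_A * k_B
bits = S / (k_B * math.log(2))   # thermodynamic entropy expressed in bits
print(f"{bits:.2e} bits")        # ~8.7e23, i.e. of order Avogadro's number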

A few years ago I wrote a paper in Nature called "Ultimate Physical Limits to Computation," in which I showed that you could rate the information-processing power of physical systems. Say that you're building a computer out of some collection of atoms. How many logical operations per second could you perform? Also, how much information could these systems register? Using relatively straightforward techniques you can show, for instance, that the number of elementary logical operations per second that you can perform with an amount of energy E is proportional to that energy - precisely, it's 2E divided by pi times h-bar. (H-bar is essentially 10^-34 joule-seconds.) If you have a kilogram of matter, which has mc^2 - or around 10^17 joules - worth of energy, and you ask how many ops per second it could perform, the answer is roughly 10^17 joules divided by h-bar, or about 10^50 ops per second. It would be really spanking if you could have a kilogram of matter - about what a laptop computer weighs - that could process at this rate. Using all the conventional techniques that were developed by Maxwell, Boltzmann, and Gibbs, and then developed by von Neumann and others back at the early part of the 20th century for counting numbers of states, you can count how many bits it could register. What you find is that if you were to turn the thing into a nuclear fireball - which is essentially turning it all into radiation, probably the best way of having as many bits as possible - then you could register about 10^30 bits. That's actually many more bits than you could register if you just stored a bit on every atom, because Avogadro's number of atoms store only about 10^24 bits.
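A quick numerical check of the bound quoted above - 2E divided by pi times h-bar, with E = mc^2 for one kilogram. The constants are standard values; the result is only an order-of-magnitude figure.

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 1.0                  # one kilogram, roughly a laptop's worth of matter

E = m * c**2                               # ~9e16 J
ops_per_sec = 2 * E / (math.pi * hbar)     # the 2E/(pi*hbar) bound
print(f"{ops_per_sec:.1e} ops per second") # ~5.4e50, i.e. about 10^50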

Having done this paper to calculate the capacity of the ultimate laptop, and also to raise some speculations about the role of information processing in, for example, things like black holes, I thought that this was actually too modest a venture, and that it would be worthwhile to calculate how much information you could process if you were to use all the energy and matter of the universe. This came up because back when I was doing a master's in the philosophy of science at Cambridge, I studied with Stephen Hawking and people like that, and I had an old cosmology text. I realized that I could estimate the amount of energy that's available in the universe, and I knew that if I looked in this book it would tell me how to count the number of bits that could be registered, so I thought I would look and see. If you want to build the most powerful computer you can, you can't do better than including everything in the universe that's potentially available. In particular, if you want to know when Moore's Law, this fantastic exponential doubling of the power of computers every couple of years, must end, it would have to be before every single piece of energy and matter in the universe is used to perform a computation. Actually, just to telegraph the answer, Moore's Law has to end in about 600 years, without doubt. Sadly, by that time the whole universe will be running Windows 2540, or something like that. 99.99% of the energy of the universe will have been listed by Microsoft by that point, and they'll want more! They really will have to start writing efficient software, by gum. They can't rely on Moore's Law to save their butts any longer.
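The 600-year figure can be checked with a rough doubling argument. The starting point below (10^12 bits for a present-day machine) is an illustrative assumption, not a number from the article; the endpoint is the 10^90-bit cosmic limit discussed here.

import math

current_bits = 1e12    # assumed scale of a present-day computer (illustrative only)
limit_bits = 1e90      # bits available if all matter in the universe is used
doubling_time = 2.0    # years per doubling, per Moore's Law as stated above

doublings = math.log2(limit_bits / current_bits)       # ~259 doublings
print(f"about {doublings * doubling_time:.0f} years")  # roughly five to six centuries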

I did this calculation, which was relatively simple. You take, first of all, the observed density of matter in the universe, which is roughly one hydrogen atom per cubic meter. The universe is about thirteen billion years old, and using the fact that there are pi times 10^7 seconds in a year, you can calculate the total energy that's available in the whole universe. Having that amount of energy, you then divide by Planck's constant - which tells you how many ops per second can be performed - and multiply by the age of the universe, and you get the total number of elementary logical operations that could have been performed since the universe began. You get a number that's around 10^120. It's a little bigger - 10^122 or something like that - but that's within astrophysical accuracy, where if you're within a factor of a hundred you feel that you're okay.
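The same estimate, written out numerically. The horizon radius is taken crudely as the speed of light times the age of the universe, and one hydrogen atom per cubic meter stands in for the observed matter density, as in the paragraph above; both are rough assumptions, so only the exponent matters.

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m_H = 1.67e-27           # mass of a hydrogen atom, kg

age = 13e9 * math.pi * 1e7            # age of the universe in seconds (~4e17 s)
R = c * age                           # crude horizon radius
volume = (4 / 3) * math.pi * R**3
E = volume * m_H * c**2               # energy of ~1 hydrogen atom per cubic meter

total_ops = 2 * E / (math.pi * hbar) * age           # ops/s times elapsed time
print(f"{total_ops:.1e} ops since the Big Bang")     # of order 1e120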

The other way you can calculate it is by calculating how it progresses as time goes on. The universe has evolved up to now, but how long could it go? One way to figure this out is to take the phenomenological observation of how much energy there is, but another is to assume, in a Guthian fashion, that the universe is at its critical density. Then there's a simple formula for the critical density of the universe in terms of its age; G, the gravitational constant; and the speed of light. You plug that into this formula, assuming the universe is at critical density, and you find that the total number of ops that could have been performed over the time T since the universe began is actually the age of the universe divided by the Planck time - the time scale at which quantum gravity becomes important - quantity squared. That is, it's the age of the universe squared, divided by the Planck time squared. This is really just taking the energy divided by h-bar and plugging in the formula for the critical density, and that's the answer you get.
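In numbers, this second version of the count is simply the age of the universe divided by the Planck time, squared; a rough value of the age is used here.

t_planck = 5.39e-44    # Planck time, s
age = 13e9 * 3.15e7    # age of the universe, s (rough)

ops = (age / t_planck) ** 2
print(f"{ops:.1e} ops")    # ~6e121, again of order 1e120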

This is just a big number, reminiscent of other famous big numbers that are bandied about by numerologists. These large numbers are, of course, associated with all sorts of terrible crank science. For instance, there's the famous Eddington-Dirac number, 10^40. It's the ratio between the size of the universe and the classical size of the electron, and also the ratio between the electromagnetic force and the gravitational force in, say, a hydrogen atom. Dirac went down the garden path trying to make a theory in which this large number had to be what it was. The number that I've come up with is suspiciously reminiscent of (10^40)^3. This number, 10^120, is normally regarded as a coincidence, but in fact it's not a coincidence that the number of ops that could have been performed since the universe began is that number cubed, because it actually turns out to be the first ratio squared times the other one. So whether those two ratios are the same could be a coincidence, but the fact that this number is equal to their cube is not.

Having calculated the number of elementary logical operations that could have been performed since the universe began, I went and calculated the number of bits, which is a similar, standard sort of calculation. Say that we took all of this beautiful matter around us on lovely Eastover Farm and vaporized it into a fireball of radiation. This would be the maximum entropy state, and would enable it to store the largest possible amount of information. You can easily calculate how many bits could be stored by the amount of matter that we have in the universe right now, and the answer turns out to be 10^90. Just by standard cosmological calculations, this is (10^120)^(3/4). So we can store 10^90 bits in matter. And if one believes in somewhat speculative theories about quantum gravity such as holography - in which the amount of information that can be stored in a volume is bounded by the area of the surface enclosing that volume, divided by the Planck length squared - and if you assume that somehow information can be stored mysteriously on unknown gravitational degrees of freedom, then again you get 10^120. This is because, of course, the age of the universe divided by the Planck time, quantity squared, is equal to the size of the universe divided by the Planck length, quantity squared. So we can do 10^120 ops on 10^90 bits.
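Both numbers in this paragraph can be checked the same way: the matter count follows from bits ~ (ops)^(3/4), and the holographic count from (horizon size / Planck length)^2. The ops value plugged in below is the article's 10^120 figure, and the horizon size is again the crude light-travel estimate used earlier.

l_planck = 1.616e-35   # Planck length, m
c = 3.0e8              # speed of light, m/s
age = 13e9 * 3.15e7    # age of the universe, s (rough)
R = c * age            # rough horizon size

ops = 1e120            # elementary operations since the Big Bang (from above)
print(f"{ops ** 0.75:.1e} bits storable in matter")          # 1e90
print(f"{(R / l_planck) ** 2:.1e} bits, holographic bound")  # ~6e121, of order 1e120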

[Continued on Edge.org.]

Copyright © 2002 by Edge Foundation, Inc. Published on KurzweilAI.net with permission.

   
 

Mind·X Discussion About This Article:

Bits deep inside
posted on 11/13/2002 4:31 PM by Thomas Kristan


I very much recommend this article. Read it, please!

- Thomas

Re: Bits deep inside
posted on 11/14/2006 8:42 AM by extrasense


Use your mind!

One group of nitwits is saying that nanorobots are going to be extremely capable.

Another group of nitwits is saying that the whole universe has limits as to its computations.

Idiots rule!

es

Re: The Computational Universe
posted on 11/14/2002 2:22 PM by /:setAI


this is a good concept- but I don't understand why he assumes that particles are the ultimate in processing units. Particles are only HALF-WAY down the scale of smallness- like giant stars compared to Planck-scale structures- which may include a vast array of weird stuff like quantum-scale singularities/wormholes- cosmic string-stuff- probabilistic fluctuations- dimensional enfoldments- and the like-

using a relatively stable/sustainable Planck-scale structure could give you 10^100 bits of processing in just a cubic FEMTOMETER- let alone the whole universe!

Re: The Computational Universe
posted on 11/14/2002 2:42 PM by 6


The problem is that it could take an unimaginable amount of energy to make or measure that Planck-computronium.

Perhaps not, who knows?

We're not posthuman yet, and I think our simple human minds should just consider the nano-scale for now.

Interestingly, if Kurzweil's estimates are correct, then we would reach the Planck level in about 500 years. What would we do then?

Other than expanding, of course.

Re: The Computational Universe
posted on 11/13/2006 2:15 PM by CSCD03-Student


Maybe Microsoft won't have to 'rely on Moore's Law to save their butts' after all. After reaching the 600-year mark when Moore's Law would no longer apply, I'm not really sure why they would want to stop at particle processing units either, when they would probably be able to use the infinitely small proportions given by a spatial singularity (since space-time curvature reaches infinity there). This may not agree with quantum gravity, but if we assume the existence of point masses, they would have infinite density and zero volume to work with, given the predictions of General Relativity (i.e. there would not be any physical limit on the size of particles). That all works assuming a metric that allows both coordinate and absolute singularities wherever a point mass occurs in the equations. This would give us infinite density, infinite force of gravity, and infinite curvature of space around that point mass. Thus Microsoft would be able to rely on harnessing this if they can find a way around the minor problem of escaping the event horizon.

Let us even consider disallowing point masses, by several means - for example, the wavelength limits suggested by quantum theory, or even postulations in string theory regarding the dimensions of space-time. One must take into account, of course, that quantum theory is incompatible with General Relativity inside a black hole (assuming that a singularity occurs within a black hole), so there could be some other explanation, which we do not yet know, of why point masses cannot occur.

Even in this case, we can consider a source of processing units other than the matter we know of in our universe. Could the key possibly lie in some kind of traversable (inter-universe) Einstein-Rosen bridge? I am not an expert on this, but from what I understand this could be possible, although the vacuum solutions to the Einstein field equations are understood to produce unstable wormholes (through which not even light can get). If we assume that Lorentzian traversable (inter-universe) wormholes are possible, then we might have a solution. These types of wormholes in general relativity were demonstrated by Kip Thorne and Mike Morris in a 1988 paper. This would imply the existence of exotic matter, where a spherical shell of exotic matter could be used to hold open the 'throat' of a Morris-Thorne wormhole. Also, Matt Visser showed other types of traversable wormholes, held open by cosmic strings, that were solutions to general relativity. So for now let us assume that we can in fact have such an Einstein-Rosen bridge.

Given all of this, Microsoft would possibly have the option of traversing such a wormhole to an alternate universe to take advantage of a new source of matter-energy for processing units. Another idea this brings up is higher dimensional space, where instead of using a cube of matter, you use something like a tesseract or measure polytope of some kind. A good way to visualize this is to use Michio Kaku's flatland analogy. A 3D sphere passing through such a 2D universe could be seen as a circle with increasing and decreasing radius. If a flatlander could manipulate the sphere somehow, maybe by linearly combining all the circles, he would have the volume of the sphere at his disposal, would he not? So couldn't Microsoft instead use a tesseract in the 5th spatial dimension for example, to provide more options in processing units, or would this not really make any sense? Another interesting question is what role time could play in all of this, and whether time can somehow be used to provide more processing options?

Re: The Computational Universe
posted on 12/27/2004 6:55 PM by satulite


I am not very apt at math, but one thought that did pop into mind was whether the big bang could be compared to the sperm and egg. What would the calculation be for this expansion and "collapse"?
And if there are some similarities, and we are on the verge of pushing these limits (living for 300 years), could we find other ways to expand on this idea?

Re: The Computational Universe
posted on 11/13/2006 7:14 PM by Extropia


The trouble with all these speculations about the ultimate capabilities of far-flung civilizations is that they use the Big Bang cosmological model as their foundation.

Unfortunately, seeing as how this theory was falsified 40 years ago, any speculation built on the assumption that it is true is obviously wrong as well.

Nonetheless there may be some way of reconciling Lloyd's work with the superior EU model.

Re: The Computational Universe
posted on 11/14/2006 7:27 AM by maryfran^



***thinking outside the box***
big bang and before big bang = quantum intermittences beyond conventional time and space, i would say ....

http://www.universetoday.com/am/publish/beyond_big_bang.html

mf

Re: The Computational Universe
posted on 11/14/2006 10:20 PM by thehigherearth


Extremely pleasant...

Re: The Computational Universe
posted on 11/15/2006 5:30 AM by Extropia


I was thinking of writing an essay detailing how type II civilizations could build a galactic xylophone out of the crystal spheres that house the planets. But then it occurred to me that, since numerous observations have falsified this model of the solar system, it was probably a waste of time.

Similarly, observations have falsified the assumption that redshift=expansion and so the very foundation of Big Bang cosmology has no choice but to collapse in utter humiliation.

But why is the Big Bang still spoken of as the only choice for the serious scientist? Well, this is what some 'serious scientists' say about it:

'The big bang theory can boast of no quantitative predictions that have subsequently been validated by observation. The successes claimed by the theory's supporters consist of its ability to retrospectively fit observations with a steadily increasing array of adjustable parameters, just as the old Earth-centred cosmology of Ptolemy needed layer upon layer of epicycles.

Yet the big bang is not the only framework available for understanding the history of the universe. Plasma cosmology and the steady-state model both hypothesise an evolving universe without beginning or end. These and other alternative approaches can also explain the basic phenomena of the cosmos, including the abundances of light elements, the generation of large-scale structure, the cosmic background radiation, and how the redshift of far-away galaxies increases with distance. They have even predicted new phenomena that were subsequently observed, something the big bang has failed to do.

Supporters of the big bang theory may retort that these theories do not explain every cosmological observation. But that is scarcely surprising, as their development has been severely hampered by a complete lack of funding. Indeed, such questions and alternatives cannot even now be freely discussed and examined. An open exchange of ideas is lacking in most mainstream conferences. Whereas Richard Feynman could say that "science is the culture of doubt", in cosmology today doubt and dissent are not tolerated, and young scientists learn to remain silent if they have something negative to say about the standard big bang model. Those who doubt the big bang fear that saying so will cost them their funding.

Even observations are now interpreted through this biased filter, judged right or wrong depending on whether or not they support the big bang. So discordant data on red shifts, lithium and helium abundances, and galaxy distribution, among other topics, are ignored or ridiculed. This reflects a growing dogmatic mindset that is alien to the spirit of free scientific enquiry.

Today, virtually all financial and experimental resources in cosmology are devoted to big bang studies. Funding comes from only a few sources, and all the peer-review committees that control them are dominated by supporters of the big bang. As a result, the dominance of the big bang within the field has become self-sustaining, irrespective of the scientific validity of the theory.

Giving support only to projects within the big bang framework undermines a fundamental element of the scientific method -- the constant testing of theory against observation. Such a restriction makes unbiased discussion and research impossible. To redress this, we urge those agencies that fund work in cosmology to set aside a significant fraction of their funding for investigations into alternative theories and observational contradictions of the big bang. To avoid bias, the peer review committee that allocates such funds could be composed of astronomers and physicists from outside the field of cosmology.

Allocating funding to investigations into the big bang's validity, and its alternatives, would allow the scientific process to determine our most accurate model of the history of the universe'.

This open letter to the scientific community was signed by the following people.

Halton Arp, Max-Planck-Institute Fur Astrophysik (Germany)
Andre Koch Torres Assis, State University of Campinas (Brazil)
Yuri Baryshev, Astronomical Institute, St. Petersburg State University (Russia)
Ari Brynjolfsson, Applied Radiation Industries (USA)
Hermann Bondi, Churchill College, Cambridge (UK)
Timothy Eastman, Plasmas International (USA)
Chuck Gallo, Superconix, Inc.(USA)
Thomas Gold, Cornell University (emeritus) (USA)
Amitabha Ghosh, Indian Institute of Technology, Kanpur (India)
Walter J. Heikkila, University of Texas at Dallas (USA)
Michael Ibison, Institute for Advanced Studies at Austin (USA)
Thomas Jarboe, Washington University (USA)
Jerry W. Jensen, ATK Propulsion (USA)
Menas Kafatos, George Mason University (USA)
Eric J. Lerner, Lawrenceville Plasma Physics (USA)
Paul Marmet, Herzberg Institute of Astrophysics(retired) (Canada)
Paola Marziani, Istituto Nazionale di Astrofisica, Osservatorio Astronomico di Padova (Italy)
Gregory Meholic, The Aerospace Corporation (USA)
Jacques Moret-Bailly, Université Dijon (retired) (France)
Jayant Narlikar, IUCAA (emeritus) and College de France (India, France)
Marcos Cesar Danhoni Neves, State University of Maringá (Brazil)
Charles D. Orth, Lawrence Livermore National Laboratory (USA)
R. David Pace, Lyon College (USA)
Georges Paturel, Observatoire de Lyon (France)
Jean-Claude Pecker, College de France (France)
Anthony L. Peratt, Los Alamos National Laboratory (USA)
Bill Peter, BAE Systems Advanced Technologies (USA)
David Roscoe, Sheffield University (UK)
Malabika Roy, George Mason University (USA)
Sisir Roy, George Mason University (USA)
Konrad Rudnicki, Jagiellonian University (Poland)
Domingos S.L. Soares, Federal University of Minas Gerais (Brazil)
John L. West, Jet Propulsion Laboratory, California Institute of Technology (USA)
James F. Woodward, California State University, Fullerton (USA).

A trump card that is often played by Big Bang supporters is that its papers have undergone the peer review process. But there is at least one instance that proves this process is not as perfect at sorting out science from quackery as you might suppose.

The disgraced Korean scientist Hwang Woo-suk was not outed by the peer review process. Indeed, the well respected journals Nature and Science published his papers. Rather, he was found out in chat room discussions.

Where cosmology is concerned, the peer review process has turned into a monoculture where any paper not Big Bang-orientated is dismissed out-of-hand. Given the superior success of plasma cosmology and EU this is clearly not a healthy way of getting to the truth.

Perhaps the story of Woo-suk should serve as a guide that forums open to free-flowing ideas and no-holds-barred criticism are now superior avenues for scientific discussion?