
Reflections on Stephen Wolfram's 'A New Kind of Science'
by Ray Kurzweil

In his remarkable new book, Stephen Wolfram asserts that cellular automata operations underlie much of the real world. He even asserts that the entire Universe itself is a big cellular-automaton computer. But Ray Kurzweil challenges the ability of these ideas to fully explain the complexities of life, intelligence, and physical phenomena.


Stephen Wolfram's A New Kind of Science is an unusually wide-ranging book covering issues basic to biology, physics, perception, computation, and philosophy. It is also a remarkably narrow book in that its 1,200 pages discuss a singular subject, that of cellular automata. Actually, the book is even narrower than that. It is principally about cellular automata rule 110 (and three other rules which are equivalent to rule 110), and its implications.

It's hard to know where to begin in reviewing Wolfram's treatise, so I'll start with Wolfram's apparent hubris, evidenced in the title itself. A new science would be bold enough, but Wolfram is presenting a new kind of science, one that should change our thinking about the whole enterprise of science. As Wolfram states in chapter 1, "I have come to view [my discovery] as one of the more important single discoveries in the whole history of theoretical science."1

This is not the modesty that we have come to expect from scientists, and I suspect that it may earn him resistance in some quarters. Personally, I find Wolfram's enthusiasm for his own ideas refreshing. I am reminded of a comment made by the Buddhist teacher Guru Amrit Desai, when he looked out of his car window and saw that he was in the midst of a gang of Hell's Angels. After studying them in great detail for a long while, he finally exclaimed, "They really love their motorcycles." There was no disdain in this observation. Guru Desai was truly moved by the purity of their love for the beauty and power of something that was outside themselves.

Well, Wolfram really loves his cellular automata. So much so, that he has immersed himself for over ten years in the subject and produced what can only be regarded as a tour de force on their mathematical properties and potential links to a broad array of other endeavors. In the end notes, which are as extensive as the book itself, Wolfram explains his approach: "There is a common style of understated scientific writing to which I was once a devoted subscriber. But at some point I discovered that more significant results are usually incomprehensible if presented in this style…. And so in writing this book I have chosen to explain straightforwardly the importance I believe my various results have."2 Perhaps Wolfram's successful technology business career may also have had its influence here, as entrepreneurs are rarely shy about articulating the benefits of their discoveries.

So what is the discovery that has so excited Wolfram? As I noted above, it is cellular automata rule 110, and its behavior. There are some other interesting automata rules, but rule 110 makes the point well enough. A cellular automaton is a simple computational mechanism that, for example, changes the color of each cell on a grid based on the color of adjacent (or nearby) cells according to a transformation rule. Most of Wolfram's analyses deal with the simplest possible cellular automata, specifically those that involve just a one-dimensional line of cells, two possible colors (black and white), and rules based only on the two immediately adjacent cells. For each transformation, the color of a cell depends only on its own previous color and that of the cell on the left and the cell on the right. Thus there are 2^3 = 8 possible input situations (i.e., three cells, each with two possible colors). Each rule maps all combinations of these eight input situations to an output (black or white). So there are 2^8 = 256 possible rules for such a one-dimensional, two-color, adjacent-cell automaton. Half of the 256 possible rules map onto the other half because of left-right symmetry. We can map half of them again because of black-white equivalence, so we are left with 64 rule types. Wolfram illustrates the action of these automata with two-dimensional patterns in which each line (along the Y axis) represents a subsequent generation of applying the rule to each cell in that line.
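
To make the mechanics concrete, here is a minimal sketch (my own, not Wolfram's code) of the one-dimensional, two-color, nearest-neighbor automaton described above. The rule number (0 to 255) encodes the output bit for each of the eight possible neighborhoods:

```python
def step(cells, rule):
    """Apply one generation of the given elementary CA rule (0-255)."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[(i - 1) % n]       # wrap around at the edges
        center = cells[i]
        right = cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # 0..7
        out.append((rule >> neighborhood) & 1)               # rule bit for that neighborhood
    return out

def run(rule, width=31, generations=15):
    """Start from a single black cell and return each generation as a text row."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(generations):
        rows.append("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)
    return rows

for row in run(110):
    print(row)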

Most of the rules are degenerate, meaning they create repetitive patterns of no interest, such as cells of a single color, or a checkerboard pattern. Wolfram calls these rules Class 1 automata. Some rules produce arbitrarily spaced streaks that remain stable, and Wolfram classifies these as belonging to Class 2. Class 3 rules are a bit more interesting in that recognizable features (e.g., triangles) appear in the resulting pattern in an essentially random order. However, it was the Class 4 automata that created the "aha" experience that resulted in Wolfram's decade of devotion to the topic. The Class 4 automata, of which Rule 110 is the quintessential example, produce surprisingly complex patterns that do not repeat themselves. We see artifacts such as lines at various angles, aggregations of triangles, and other interesting configurations. The resulting pattern is neither regular nor completely random. It appears to have some order, but is never predictable.

Why is this important or interesting? Keep in mind that we started with the simplest possible starting point: a single black cell. The process involves the repetitive application of a very simple rule3. From such a repetitive and deterministic process, one would expect repetitive and predictable behavior. There are two surprising results here. One is that the results produce apparent randomness. The results pass every statistical test for randomness that Wolfram could muster; they are completely unpredictable, and remain (through any number of iterations) effectively random. However, the results are more interesting than pure randomness, which itself would become boring very quickly. There are discernible and interesting features in the designs produced, so the pattern has some order and apparent intelligence. Wolfram shows us many examples of these images, many of which are rather lovely to look at.

Wolfram makes the following point repeatedly: "Whenever a phenomenon is encountered that seems complex it is taken almost for granted that the phenomenon must be the result of some underlying mechanism that is itself complex. But my discovery that simple programs can produce great complexity makes it clear that this is not in fact correct."4

I do find the behavior of Rule 110 rather delightful. However, I am not entirely surprised by the idea that simple mechanisms can produce results more complicated than their starting conditions. We've seen this phenomenon in fractals (i.e., repetitive application of a simple transformation rule on an image), chaos and complexity theory (i.e., the complex behavior derived from a large number of agents, each of which follows simple rules, an area of study that Wolfram himself has made major contributions to), and self-organizing systems (e.g., neural nets, Markov models), which start with simple networks but organize themselves to produce apparently intelligent behavior. At a different level, we see it in the human brain itself, which starts with only 12 million bytes of specification in the genome, yet ends up with a complexity that is millions of times greater than its initial specification5.

It is also not surprising that a deterministic process can produce apparently random results. We have had random number generators (e.g., the "randomize" function in Wolfram's program "Mathematica") that use deterministic processes to produce sequences that pass statistical tests for randomness. These programs go back to the earliest days of computer software, e.g., early versions of Fortran. However, Wolfram does provide a thorough theoretical foundation for this observation.
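
The point is easy to demonstrate. The following sketch uses the classic Park-Miller linear congruential generator (my illustrative choice, not the actual Mathematica or Fortran routine) to show a purely deterministic rule producing statistically random-looking output:

```python
def lcg(seed, n, a=16807, m=2**31 - 1):
    """Return n pseudo-random values in [0, 1) from a deterministic rule."""
    x = seed
    values = []
    for _ in range(n):
        x = (a * x) % m          # the entire "random" process is this one line
        values.append(x / m)
    return values

sample = lcg(seed=42, n=10000)
mean = sum(sample) / len(sample)
print(mean)  # close to 0.5, as a genuinely uniform random source would give
```

Re-running with the same seed reproduces the sequence exactly, which is precisely the sense in which the randomness is only apparent.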

Wolfram goes on to describe how simple computational mechanisms can exist in nature at different levels, and that these simple and deterministic mechanisms can produce all of the complexity that we see and experience. He provides a myriad of examples, such as the pleasing designs of pigmentation on animals, the shape and markings of shells, and the patterns of turbulence (e.g., smoke in the air). He makes the point that computation is essentially simple and ubiquitous. Since the repetitive application of simple computational transformations can cause very complex phenomena, as we see with the application of Rule 110, this, according to Wolfram, is the true source of complexity in the world.

My own view is that this is only partly correct. I agree with Wolfram that computation is all around us, and that some of the patterns we see are created by the equivalent of cellular automata. But a key question is this: just how complex are the results of Class 4 automata?

Wolfram effectively sidesteps the issue of degrees of complexity. There is no debate that a degenerate pattern such as a chessboard has no effective complexity. Wolfram also acknowledges that mere randomness does not represent complexity either, because pure randomness becomes predictable in its pure lack of predictability. It is true that the interesting features of a Class 4 automaton are neither repeating nor purely random, so I would agree that they are more complex than the results produced by other classes of automata. However, there is nonetheless a distinct limit to the complexity produced by Class 4 automata. The many images of Class 4 automata in the book all have a similar look to them, and although they are non-repeating, they are interesting (and intelligent) only to a degree. Moreover, they do not continue to evolve into anything more complex, nor do they develop new types of features. One could run these automata for trillions or even trillions of trillions of iterations, and the image would remain at the same limited level of complexity. They do not evolve into, say, insects, or humans, or Chopin preludes, or anything else that we might consider of a higher order of complexity than the streaks and intermingling triangles that we see in these images.

Complexity is a continuum. In the past, I've used the word "order" as a synonym for complexity, which I have attempted to define as "information that fits a purpose."6 A completely predictable process has zero order. A high level of information alone does not necessarily imply a high level of order either. A phone book has a lot of information, but the level of order of that information is quite low. A random sequence is essentially pure information (since it is not predictable), but has no order. The output of a Class 4 automaton does possess a certain level of order, and it does survive, like other persisting patterns. But the pattern represented by a human being has a far higher level of order or complexity. Human beings fulfill a highly demanding purpose in that they survive in a challenging ecological niche. Human beings represent an extremely intricate and elaborate hierarchy of other patterns. Wolfram regards all patterns that combine recognizable features with unpredictable elements as effectively equivalent to one another, but he does not show how a Class 4 automaton can ever increase its complexity, let alone become a pattern as complex as a human being.

There is a missing link here in how one gets from the interesting, but ultimately routine, patterns of a cellular automaton to the complexity of persisting structures that demonstrate higher levels of intelligence. For example, these Class 4 patterns are not capable of solving interesting problems, and no amount of iteration moves them closer to doing so. Wolfram would counter that a rule 110 automaton could be used as a "universal computer."7 However, a universal computer by itself is not capable of solving intelligent problems without what I would call "software." It is the complexity of the software that runs on a universal computer that is precisely the issue.

One might point out that the Class 4 patterns I'm referring to result from the simplest possible cellular automata (i.e., one-dimensional, two-color, two-neighbor rules). What happens if we increase the dimensionality, allow multiple colors, or even generalize these discrete cellular automata to continuous functions? Wolfram addresses all of this quite thoroughly. The results produced from more complex automata are essentially the same as those of the very simple ones. We obtain the same sorts of interesting but ultimately quite limited patterns. Wolfram makes the interesting point that we do not need to use more complex rules to get the complexity (of Class 4 automata) in the end result. But I would make the converse point that we are unable to increase the complexity of the end result through either more complex rules or through further iteration. So cellular automata only get us so far.

So how do we get from these interesting but limited patterns of Class 4 automata to those of insects, or humans or Chopin preludes? One concept we need to add is conflict, i.e., evolution. If we add another simple concept to that of Wolfram's simple cellular automata, i.e., an evolutionary algorithm, we start to get far more interesting, and more intelligent results. Wolfram would say that the Class 4 automata and an evolutionary algorithm are "computationally equivalent." But that is only true on what I would regard as the "hardware" level. On the software level, the order of the patterns produced is clearly different, and of a different order of complexity.

An evolutionary algorithm can start with randomly generated potential solutions to a problem. The solutions are encoded in a digital genetic code. We then have the solutions compete with each other in a simulated evolutionary battle. The better solutions survive and procreate in a simulated sexual reproduction in which offspring solutions are created, drawing their genetic code (i.e., encoded solutions) from two parents. We can also introduce a rate of genetic mutation. Various high-level parameters of this process, such as the rate of mutation, the rate of offspring, etc., are appropriately called "God parameters" and it is the job of the engineer designing the evolutionary algorithm to set them to reasonably optimal values. The process is run for many thousands of generations of simulated evolution, and at the end of the process, one is likely to find solutions that are of a distinctly higher order than the starting conditions. The results of these evolutionary (sometimes called genetic) algorithms can be elegant, beautiful, and intelligent solutions to complex problems. They have been used, for example, to create artistic designs, designs for artificial life forms in artificial life experiments, as well as for a wide range of practical assignments such as designing jet engines. Genetic algorithms are one approach to "narrow" artificial intelligence, that is, creating systems that can perform specific functions that used to require the application of human intelligence.
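
As a concrete illustration (a toy construction of my own, not one of the applications mentioned above), here is the process in miniature: randomly generated bit-string "genomes" compete on a deliberately simple fitness function, the better solutions survive and reproduce with crossover and mutation, and the "God parameters" are set by hand:

```python
import random

GENOME_LEN = 32
POP_SIZE = 60
MUTATION_RATE = 0.01     # a "God parameter", chosen by the engineer
GENERATIONS = 200

def fitness(genome):
    return sum(genome)                  # toy problem: maximize the number of 1 bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]            # offspring draws genetic code from two parents

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve(seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]           # the better solutions survive
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(fitness(g) for g in pop)

print(evolve())  # far better than the random-start average of about 16
```

Even this crude version reliably climbs from mediocre random genomes to near-optimal ones, while also illustrating the asymptote discussed below: once the toy problem is solved, no amount of further iteration produces anything qualitatively new.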

But something is still missing. Although genetic algorithms are a useful tool in solving specific problems, they have never achieved anything resembling "strong AI," i.e., aptitude resembling the broad, deep, and subtle features of human intelligence, particularly its powers of pattern recognition and command of language. Is the problem that we are not running the evolutionary algorithms long enough? After all, humans evolved through an evolutionary process that took billions of years. Perhaps we cannot recreate that process with just a few days or weeks of computer simulation. However, conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won't help.

A third level (beyond the ability of cellular processes to produce apparent randomness and genetic algorithms to produce focused intelligent solutions) is to perform evolution on multiple levels. Conventional genetic algorithms only allow evolution within the confines of a narrow problem, and with a single means of evolution. The genetic code itself needs to evolve; the rules of evolution need to evolve. Nature did not stay with a single chromosome, for example. There have been many levels of indirection incorporated in the natural evolutionary process. And we require a complex environment in which evolution takes place.

To build strong AI, we will instead short-circuit this process by reverse engineering the human brain, a project already well under way, thereby benefiting from the evolutionary process that has already taken place. We will be applying evolutionary algorithms within these solutions just as the human brain does. For example, the fetal wiring is initially random in certain regions, with the majority of connections subsequently being destroyed during the early stages of brain maturation as the brain self-organizes to make sense of its environment and situation.

But back to cellular automata. Wolfram applies his key insight, which he states repeatedly, that we obtain surprisingly complex behavior from the repeated application of simple computational transformations, to biology, physics, perception, computation, mathematics, and philosophy. Let's start with biology.

Wolfram writes, "Biological systems are often cited as supreme examples of complexity in nature, and it is not uncommon for it to be assumed that their complexity must be somehow of a fundamentally higher order than other systems. . . . What I have come to believe is that many of the most obvious examples of complexity in biological systems actually have very little to do with adaptation or natural selection. And instead . . . they are mainly just another consequence of the very basic phenomenon that I have discovered. . . .that in almost any kind of system many choices of underlying rules inevitably lead to behavior of great complexity."8

I agree with Wolfram that some of what passes for complexity in nature is the result of cellular-automata type computational processes. However, I disagree with two fundamental points. First, the behavior of a Class 4 automaton, as the many illustrations in the book depict, does not represent "behavior of great complexity." It is true that these images have a great deal of unpredictability (i.e., randomness). It is also true that they are not just random but have identifiable features. But the complexity is fairly modest. And this complexity never evolves into patterns that are at all more sophisticated.

Wolfram considers the complexity of a human to be equivalent to that of a Class 4 automaton because they are, in his terminology, "computationally equivalent." But Class 4 automata and humans are only computationally equivalent in the sense that any two computer programs are computationally equivalent, i.e., both can be run on a Universal Turing machine. It is true that computation is a universal concept, and that all software is equivalent on the hardware level (i.e., with regard to the nature of computation), but it is not the case that all software is of the same order of complexity. The order of complexity of a human is greater than the interesting but ultimately repetitive (albeit random) patterns of a Class 4 automaton.

I also disagree that the order of complexity that we see in natural organisms is not a primary result of "adaptation or natural selection." The phenomenon of randomness readily produced by cellular automaton processes is a good model for fluid turbulence, but not for the intricate hierarchy of features in higher organisms. The fact that we have phenomena greater than just the interesting but fleeting patterns of fluid turbulence (e.g., smoke in the wind) in the world is precisely the result of the chaotic crucible of conflict over limited resources known as evolution.

To be fair, Wolfram does not negate adaptation or natural selection, but he over-generalizes the limited power of complexity resulting from simple computational processes. When Wolfram writes, "in almost any kind of system many choices of underlying rules inevitably lead to behavior of great complexity," he is mistaking the random placement of simple features that result from cellular processes for the true complexity that has resulted from eons of evolution.

Wolfram makes the valid point that certain (indeed most) computational processes are not predictable. In other words, we cannot predict future states without running the entire process. I agree with Wolfram that we can only know the answer in advance if somehow we can simulate a process at a faster speed. Given that the Universe runs at the fastest speed it can run, there is usually no way to short circuit the process. However, we have behind us the billions of years of evolution that are responsible for the greatly increased order of complexity in the natural world. We can now benefit from that process by using our evolved tools to reverse-engineer the products of biological evolution.

Yes, it is true that some phenomena in nature that may appear complex at some level are simply the result of simple underlying computational mechanisms that are essentially cellular automata at work. The interesting pattern of triangles on a "tent olive" shell or the intricate and varied patterns of a snowflake are good examples. I don't think this is a new observation, in that we've always regarded the design of snowflakes to derive from a simple molecular computation-like building process. However, Wolfram does provide us with a compelling theoretical foundation for expressing these processes and their resulting patterns. But there is more to biology than Class 4 patterns.

I do appreciate Wolfram's strong argument, however, that nature is not as complex as it often appears to be. Some of the key features of the paradigm of biological systems, which differs from much of our contemporary designed technology, are that it is massively parallel, and that apparently complex behavior can result from the intermingling of a vast number of simpler systems. One example that comes to mind is Marvin Minsky's theory of intelligence as a "Society of Mind," in which intelligence may result from a hierarchy of simpler intelligences, with simple agents not unlike cellular automata at the base.

However, cellular automata on their own do not evolve sufficiently. They quickly reach a limited asymptote in their order of complexity. An evolutionary process involving conflict and competition is needed.

For me, the most interesting part of the book is Wolfram's thorough treatment of computation as a simple and ubiquitous phenomenon. Of course, we've known for over a century that computation is inherently simple, i.e., we can build any possible level of complexity from a foundation of the simplest possible manipulations of information.

For example, Babbage's computer provided only a handful of operation codes, yet could perform (within its memory capacity and speed) the same kinds of transformations as modern computers. The complexity of Babbage's invention stemmed only from the details of its design, which indeed proved too difficult for Babbage to implement using the 19th-century mechanical technology available to him.

The "Turing Machine," Alan Turing's theoretical conception of a universal computer in 1936, provides only 7 very basic commands9, yet can be organized to perform any possible computation. The existence of a "Universal Turing Machine," which can simulate any possible Turing Machine (that is described on its tape memory), is a further demonstration of the universality (and simplicity) of computation. In what is perhaps the most impressive analysis in his book, Wolfram shows how a Turing Machine with only two states and five possible colors can be a Universal Turing Machine. For forty years, we've thought that a Universal Turing Machine had to be more complex than this10. Also impressive is Wolfram's demonstration that Cellular Automaton Rule 110 is capable of universal computation (given the right software).

In my 1990 book, I showed how any computer could be constructed from "a suitable number of [a] very simple device," namely the "nor" gate11. This is not exactly the same demonstration as a universal Turing machine, but it does demonstrate that any computation can be performed by a cascade of this very simple device (which is simpler than Rule 110), given the right software (which would include the connection description of the nor gates).12
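
The universality claim for the nor gate is easy to verify in a few lines. This sketch (mine, with arbitrary helper names) derives NOT, OR, AND, and XOR from nor alone and checks them against Python's built-in Boolean operators:

```python
def nor(a, b):
    """The single primitive: true only when both inputs are false."""
    return not (a or b)

def not_(a):
    return nor(a, a)

def or_(a, b):
    return not_(nor(a, b))

def and_(a, b):
    return nor(not_(a), not_(b))

def xor_(a, b):
    return and_(or_(a, b), not_(and_(a, b)))

# Verify every derived gate against Python's own Boolean operators.
for a in (False, True):
    for b in (False, True):
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
        assert xor_(a, b) == (a != b)
print("NOT, OR, AND, and XOR all reproduced from NOR alone")
```

Since any Boolean function can be built from NOT, AND, and OR, a cascade of nor gates (plus the connection description, i.e., the "software") suffices for any computation.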

The most controversial thesis in Wolfram's book is likely to be his treatment of physics, in which he postulates that the Universe is a big cellular-automaton computer. Wolfram is hypothesizing that there is a digital basis to the apparently analog phenomena and formulas in physics, and that we can model our understanding of physics as the simple transformations of a cellular automaton.

Others have postulated this possibility. Richard Feynman wondered about it in considering the relationship of information to matter and energy. Norbert Wiener heralded a fundamental change in focus from energy to information in his 1948 book Cybernetics, and suggested that the transformation of information, not energy, was the fundamental building block for the Universe.

Perhaps the most enthusiastic proponent of an information-based theory of physics was Edward Fredkin, who in the early 1980s proposed what he called a new theory of physics based on the idea that the Universe was ultimately composed of software. We should not think of ultimate reality as particles and forces, according to Fredkin, but rather as bits of data modified according to computation rules.

Fredkin is quoted by Robert Wright in the 1980s as saying "There are three great philosophical questions. What is life? What is consciousness and thinking and memory and all that? And how does the Universe work? The informational viewpoint encompasses all three. . . . What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity, life, DNA - you know, the biochemical functions - are controlled by a digital information process. Then, at another level, our thought processes are basically information processing. . . . I find the supporting evidence for my beliefs in ten thousand different places, and to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, where is this animal? I say, Well he was here, he's about this big, this that, and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there. . . . What I see is so compelling that it can't be a creature of my imagination."13

In commenting on Fredkin's theory of digital physics, Robert Wright writes, "Fredkin . . . is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold. . . There is no way to know the answer to some question any faster than what's going on. . . . Fredkin believes that the Universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news / bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places."14

Fredkin went on to show that although energy is needed for information storage and retrieval, we can arbitrarily reduce the energy required to perform any particular example of information processing, and there is no lower limit to the amount of energy required15. This result made plausible the view that information rather than matter and energy should be regarded as the more fundamental reality.

I discussed Wiener's and Fredkin's view of information as the fundamental building block for physics and other levels of reality in my 1990 book The Age of Intelligent Machines16.

The complexity of casting all of physics in terms of computational transformations proved to be an immensely challenging project, but Fredkin has continued his efforts.17 Wolfram has devoted a considerable portion of his efforts over the past decade to this notion, apparently with only limited communication with some of the others in the physics community who are also pursuing the idea.

Wolfram's stated goal "is not to present a specific ultimate model for physics,"18 but in his "Note for Physicists,"19 which essentially equates to a grand challenge, Wolfram describes the "features that [he] believe[s] such a model will have."

In The Age of Intelligent Machines, I discuss "the question of whether the ultimate nature of reality is analog or digital," and point out that "as we delve deeper and deeper into both natural and artificial processes, we find the nature of the process often alternates between analog and digital representations of information."20 As an illustration, I noted how the phenomenon of sound flips back and forth between digital and analog representations. In our brains, music is represented as the digital firing of neurons in the cochlea representing different frequency bands. In the air and in the wires leading to loudspeakers, it is an analog phenomenon. The representation of sound on a music compact disk is digital, which is interpreted by digital circuits. But the digital circuits consist of thresholded transistors, which are analog amplifiers. As amplifiers, the transistors manipulate individual electrons, which can be counted and are, therefore, digital, but at a deeper level are subject to analog quantum field equations.21 At a yet deeper level, Fredkin, and now Wolfram, are theorizing a digital (i.e., computational) basis to these continuous equations. It should be further noted that if someone actually does succeed in establishing such a digital theory of physics, we would then be tempted to examine what sorts of deeper mechanisms are actually implementing the computations and links of the cellular automata. Perhaps, underlying the cellular automata that run the Universe are yet more basic analog phenomena, which, like transistors, are subject to thresholds that enable them to perform digital transactions.

Thus establishing a digital basis for physics will not settle the philosophical debate as to whether reality is ultimately digital or analog. Nonetheless, establishing a viable computational model of physics would be a major accomplishment. So how likely is this?

We can easily establish an existence proof that a digital model of physics is feasible, in that continuous equations can always be expressed to any desired level of accuracy in the form of discrete transformations on discrete changes in value. That is, after all, the basis for the fundamental theorem of calculus22. However, expressing continuous formulas in this way is an inherent complication and would violate Einstein's dictum to express things "as simply as possible, but no simpler." So the real question is whether we can express the basic relationships that we are aware of in more elegant terms, using cellular-automata algorithms. One test of a new theory of physics is whether it is capable of making verifiable predictions. In at least one important way that might be a difficult challenge for a cellular automata-based theory because lack of predictability is one of the fundamental features of cellular automata.
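
The existence proof is easy to make concrete. In this sketch (my example, with arbitrary step sizes), the continuous law dx/dt = -x, whose exact solution is x(t) = e^-t, is re-expressed as a repeated discrete transformation whose error shrinks as the steps get finer:

```python
import math

def discrete_decay(x0, t_end, steps):
    """Repeatedly apply the discrete transformation x <- x + dt * (-x)."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x = x + dt * (-x)        # forward-Euler version of dx/dt = -x
    return x

exact = math.exp(-1.0)           # the continuous solution at t = 1
for steps in (10, 100, 1000, 10000):
    error = abs(discrete_decay(1.0, 1.0, steps) - exact)
    print(steps, error)          # error shrinks roughly in proportion to 1/steps
```

This is exactly the sense in which "continuous equations can always be expressed to any desired level of accuracy" in discrete form, and it also illustrates the complication Einstein's dictum warns against: the discrete version needs ever more steps to match the elegance of one closed-form expression.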

Wolfram starts by describing the Universe as a large network of nodes. The nodes do not exist in "space," but rather space, as we perceive it, is an illusion created by the smooth transition of phenomena through the network of nodes. One can easily imagine building such a network to represent "naïve" (i.e., Newtonian) physics by simply building a three-dimensional network to any desired degree of granularity. Phenomena such as "particles" and "waves" that appear to move through space would be represented by "cellular gliders," which are patterns that are advanced through the network for each cycle of computation. Fans of the game of "Life" (a popular game based on cellular automata) will recognize the common phenomenon of gliders, and the diversity of patterns that can move smoothly through a cellular automaton network. The speed of light, then, is the result of the clock speed of the celestial computer since gliders can only advance one cell per cycle.
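The glider phenomenon is easy to reproduce. Here is a minimal sketch of Conway's game of Life (the sparse-grid encoding is my own) showing the classic five-cell glider reappearing, shifted one cell diagonally, every four generations, i.e., a pattern advancing smoothly through the network.

```python
# Conway's Game of Life: live cells are a set of (row, col) pairs.
# A dead cell with exactly 3 live neighbors is born; a live cell with
# 2 or 3 live neighbors survives; all other cells are dead.
from itertools import product

def step(live):
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                counts[(r + dr, c + dc)] = counts.get((r + dr, c + dc), 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four generations the glider has advanced one cell down-right.
shifted = {(r + 1, c + 1) for (r, c) in glider}
print(state == shifted)  # True
```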

Einstein's General Relativity, which describes gravity as perturbations in space itself, as if our three-dimensional world were curved in some unseen fourth dimension, is also straightforward to represent in this scheme. We can imagine a four-dimensional network and represent apparent curvatures in space in the same way that one represents normal curvatures in three-dimensional space. Alternatively, the network can become denser in certain regions to represent the equivalent of such curvature.

A cellular-automata conception proves useful in explaining the apparent increase in entropy (disorder) that is implied by the second law of thermodynamics. We have to assume that the cellular-automata rule underlying the Universe is a Class 4 rule (otherwise the Universe would be a dull place indeed). Wolfram's primary observation that a Class 4 cellular automaton quickly produces apparent randomness (despite its determinate process) is consistent with the tendency towards randomness that we see in Brownian motion, and that is implied by the second law.

Special relativity is more difficult. There is an easy mapping from the Newtonian model to the cellular network. But the Newtonian model breaks down in special relativity. In the Newtonian world, if a train is going 80 miles per hour, and I drive behind it on a nearby road at 60 miles per hour, the train will appear to pull away from me at a speed of 20 miles per hour. But in the world of special relativity, if I leave Earth at a speed of three-quarters of the speed of light, light will still appear to me to move away from me at the full speed of light. In accordance with this apparently paradoxical perspective, both the size of objects and the subjective passage of time for two observers will vary depending on their relative speed. Thus our fixed mapping of space and nodes becomes considerably more complex; essentially, each observer needs his own network. We can, however, apply the same conversion to our "Newtonian" network that we apply to Newtonian space, although it is not clear that we achieve greater simplicity in representing special relativity in this way.
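The breakdown of the Newtonian picture can be made concrete with the standard relativistic composition of velocities (the numeric values below are illustrative):

```python
# Relativistic composition of parallel velocities: an observer moving at v
# sees an object moving at u (lab frame) recede at
#     u' = (u - v) / (1 - u*v / c^2).
# At everyday speeds the correction term u*v/c^2 is negligible, giving the
# Newtonian u - v; for light (u = c) the formula returns c for any v.
C = 299_792_458.0  # speed of light in m/s

def relative_speed(u, v):
    return (u - v) / (1 - u * v / C ** 2)

# The train example: ~80 mph and ~60 mph expressed in meters per second.
train, car = 35.8, 26.8
print(relative_speed(train, car))       # indistinguishable from u - v = 9.0

# The spaceship example: light observed from a ship moving at 0.75c.
print(relative_speed(C, 0.75 * C) / C)  # ~1.0: light still recedes at c
```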

A cellular node representation of reality may have its greatest benefit in understanding some aspects of the phenomenon of quantum mechanics. It could provide an explanation for the apparent randomness that we find in quantum phenomena. Consider, for example, the sudden and apparently random creation of particle-antiparticle pairs. The randomness could be the same sort of randomness that we see in Class 4 cellular automata. Although predetermined, the behavior of Class 4 automata cannot be anticipated (other than by running the cellular automata) and is effectively random.

This is not a new view, and is equivalent to the "hidden variables" formulation of quantum mechanics, which states that there are some variables that we cannot otherwise access that control what appears to be random behavior that we can observe. The hidden-variables conception of quantum mechanics is not inconsistent with the formulas for quantum mechanics. It is possible, but it is not popular with quantum physicists because it requires a large number of assumptions to work out in a very particular way. However, I do not view this as a good argument against it. The existence of our Universe is itself very unlikely and requires many assumptions to all work out in a very precise way. Yet here we are.

A bigger question is how could a hidden-variables theory be tested? If based on cellular automata-like processes, the hidden variables would be inherently unpredictable, even if deterministic. We would have to find some other way to "unhide" the hidden variables.

Wolfram's network conception of the Universe provides a potential perspective on the phenomenon of quantum entanglement and the collapse of the wave function. The collapse of the wave function, which renders apparently ambiguous properties of a particle (e.g., its location) retroactively determined, can be viewed from the cellular network perspective as the interaction of the observed phenomenon with the observer itself. As observers, we are not outside the network, but exist inside it. We know from cellular mechanics that two entities cannot interact without both being changed, which suggests a basis for wave function collapse.

Wolfram writes that "If the Universe is a network, then it can in a sense easily contain threads that continue to connect particles even when the particles get far apart in terms of ordinary space." This could provide an explanation for recent dramatic experiments showing nonlocality of action in which two "quantum entangled" particles appear to continue to act in concert with one another even though separated by large distances. Einstein called this "spooky action at a distance" and rejected it, although recent experiments appear to confirm it.

Some phenomena fit more neatly into this cellular-automata network conception than others. Some of the suggestions appear elegant, but as Wolfram's "Note for Physicists" makes clear, the task of translating all of physics into a consistent cellular automata-based system is daunting indeed.

Extending his discussion to philosophy, Wolfram "explains" the apparent phenomenon of free will as decisions that are determined but unpredictable. Since there is no way to predict the outcome of a cellular process without actually running the process, and since no simulator could possibly run faster than the Universe itself, there is, therefore, no way to reliably predict human decisions. So even though our decisions are determined, there is no way to predetermine what these decisions will be. However, this is not a fully satisfactory examination of the concept. This observation concerning the lack of predictability can be made for the outcome of most physical processes, e.g., where a piece of dust will fall onto the ground. This view thereby equates human free will with the random descent of a piece of dust. Indeed, that appears to be Wolfram's view when he states that the process in the human brain is "computationally equivalent" to those taking place in processes such as fluid turbulence.

Although I will not attempt a full discussion of this issue here, it should be noted that it is difficult to explore concepts such as free will and consciousness in a strictly scientific context because these are inherently first-person subjective phenomena, whereas science is inherently a third-person objective enterprise. There is no such thing as the first person in science, so inevitably concepts such as free will and consciousness end up being meaningless. We can either view these first-person concepts as mere illusions, as many scientists do, or we can view them as the appropriate province of philosophy, which seeks to expand beyond the objective framework of science.

There is a philosophical perspective to Wolfram's treatise that I do find powerful. My own philosophy is that of a "patternist," which one might consider appropriate for a pattern recognition scientist. In my view, the fundamental reality in the world is not stuff, but patterns.

If I ask the question, 'Who am I?' I could conclude that perhaps I am this stuff here, i.e., the ordered and chaotic collection of molecules that comprises my body and brain.

However, the specific set of particles that comprises my body and brain is completely different from the atoms and molecules that comprised me only a short while (on the order of weeks) ago. We know that most of our cells are turned over in a matter of weeks. Even those that persist longer (e.g., neurons) nonetheless change their component molecules in a matter of weeks.

So I am a completely different set of stuff than I was a month ago. All that persists is the pattern of organization of that stuff. The pattern changes also, but slowly and in a continuum from my past self. From this perspective I am rather like the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules (of water) change every millisecond, but the pattern persists for hours or even years.

It is patterns (e.g., people, ideas) that persist, and in my view constitute the foundation of what fundamentally exists. The view of the Universe as a cellular automaton provides the same perspective, i.e., that reality ultimately is a pattern of information. The information is not embedded as properties of some other substrate (as in the case of conventional computer memory) but rather information is the ultimate reality. What we perceive as matter and energy are simply abstractions, i.e., properties of patterns. As a further motivation for this perspective, it is useful to point out that, based on my research, the vast majority of processes underlying human intelligence are based on the recognition of patterns.

However, the intelligence of the patterns we experience in both the natural and human-created world is not primarily the result of Class 4 cellular automata processes, which create essentially random assemblages of lower level features. Some people have commented that they see ghostly faces and other higher order patterns in the many examples of Class 4 images that Wolfram provides, but this is an indication more of the intelligence of the observer than of the pattern being observed. It is our human nature to anthropomorphize the patterns we encounter. This phenomenon has to do with the paradigm our brain uses to perform pattern recognition, which is a method of "hypothesize and test." Our brains hypothesize patterns from the images and sounds we encounter, followed by a testing of these hypotheses, e.g., is that fleeting image in the corner of my eye really a predator about to attack? Sometimes we experience an unverifiable hypothesis that is created by the inevitable accidental association of lower-level features.

Some of the phenomena in nature (e.g., clouds, coastlines) are explained by repetitive simple processes such as cellular automata and fractals, but intelligent patterns (e.g., the human brain) require an evolutionary process (or, alternatively, the reverse-engineering of the results of such a process). Intelligence is the inspired product of evolution, and is also, in my view, the most powerful "force" in the world, ultimately transcending the powers of mindless natural forces.

In summary, Wolfram's sweeping and ambitious treatise paints a compelling but ultimately overstated and incomplete picture. Wolfram joins a growing community of voices that believe that patterns of information, rather than matter and energy, represent the more fundamental building blocks of reality. Wolfram has added to our knowledge of how patterns of information create the world we experience and I look forward to a period of collaboration between Wolfram and his colleagues so that we can build a more robust vision of the ubiquitous role of algorithms in the world.

The lack of predictability of Class 4 cellular automata underlies at least some of the apparent complexity of biological systems, and does represent one of the important biological paradigms that we can seek to emulate in our human-created technology. It does not explain all of biology. It remains at least possible, however, that such methods can explain all of physics. If Wolfram, or anyone else for that matter, succeeds in formulating physics in terms of cellular-automata operations and their patterns, then Wolfram's book will have earned its title. In any event, I believe the book to be an important work of ontology.


1 Wolfram, A New Kind of Science, page 2.

2 Ibid, page 849.

3 Rule 110 states that a cell becomes white if its previous color and its two neighbors are all black or all white or if its previous color was white and the two neighbors are black and white respectively; otherwise the cell becomes black.
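As an illustrative check (the encoding is my own), the verbal description above can be implemented directly and verified against Wolfram's numbering scheme, in which the rule's outputs for the eight neighborhoods, read as a binary number, give 110:

```python
# Rule 110 exactly as the footnote states it (black = 1, white = 0):
# a cell becomes white if the triple (left, self, right) was all black or
# all white, or if self was white with a black left neighbor and a white
# right neighbor; otherwise it becomes black.
def rule110(left, center, right):
    if (left, center, right) in {(1, 1, 1), (0, 0, 0), (1, 0, 0)}:
        return 0
    return 1

# Outputs for neighborhoods 111, 110, ..., 000 read as binary give 110.
bits = [rule110(*t) for t in
        [(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0),
         (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)]]
print(int("".join(map(str, bits)), 2))  # 110

# Evolving a single black cell for a few steps (wrapping at the edges)
# yields a pattern that is neither repetitive nor obviously structured.
width, row = 31, [0] * 31
row[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = [rule110(row[(i - 1) % width], row[i], row[(i + 1) % width])
           for i in range(width)]
```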

4 Wolfram, A New Kind of Science, page 4.

5 The genome has 6 billion bits, which is roughly 800 million bytes, but there is enormous repetition, e.g., the sequence called "Alu," which is repeated 300,000 times. Applying compression to the redundancy, the genome is approximately 23 million bytes compressed, of which about half specifies the brain's starting conditions. The additional complexity (in the mature brain) comes from the use of stochastic (i.e., random within constraints) processes used to initially wire specific areas of the brain, followed by years of self-organization in response to the brain's interaction with its environment.

6 See my book The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking, 1999), the sections titled "Disdisorder" and "The Law of Increasing Entropy Versus the Growth of Order" on pages 30 - 33.

7 A computer that can accept as input the definition of any other computer and then simulate that other computer. It does not address the speed of simulation, which might be slow in comparison to the computer being simulated.

8 Wolfram, A New Kind of Science, page 383.

9 The seven commands of a Turing Machine are: (i) Read Tape, (ii) Move Tape Left, (iii) Move Tape Right, (iv) Write 0 on the Tape, (v) Write 1 on the Tape, (vi) Jump to another command, and (vii) Halt.
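A toy interpreter for these seven commands can be sketched as follows. The instruction encoding is my own, and I assume for illustration that "Jump" branches only when the most recently read cell contained a 1 (the footnote does not specify the branching condition):

```python
# A toy interpreter for a Turing machine with exactly the seven commands
# above.  Assumption (mine): "JUMP" is taken only if the last READ saw a 1.
def run(program, tape, head=0, max_steps=10_000):
    tape = dict(tape)          # sparse tape: cell index -> 0 or 1
    last_read, pc = 0, 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == "READ":
            last_read = tape.get(head, 0)
        elif op[0] == "LEFT":
            head -= 1
        elif op[0] == "RIGHT":
            head += 1
        elif op[0] == "WRITE0":
            tape[head] = 0
        elif op[0] == "WRITE1":
            tape[head] = 1
        elif op[0] == "JUMP":
            if last_read == 1:
                pc = op[1]
                continue
        elif op[0] == "HALT":
            return tape
        pc += 1
    raise RuntimeError("no halt within step budget")

# Example program: scan right past a run of 1s, then extend it by one cell.
program = [("READ",), ("JUMP", 4), ("WRITE1",), ("HALT",),
           ("RIGHT",), ("JUMP", 0)]
result = run(program, {0: 1, 1: 1})
print([result.get(i, 0) for i in range(4)])  # [1, 1, 1, 0]
```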

10 As Wolfram points out, the previous simplest Universal Turing machine, presented in 1962, required 7 states and 4 colors. See Wolfram, A New Kind of Science, pages 706 - 710.

11 The "nor" gate transforms two inputs into one output. The output of "nor" is true if and only if neither A nor B is true.
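A short sketch (my own, not from the book) confirming this definition and the standard universality property: NOT, OR, and AND can each be composed from "nor" alone.

```python
# NOR is 1 if and only if neither input is 1; every other Boolean gate
# can be built from it alone, which is why a universal computer can be
# constructed from "nor" gates.
def nor(a, b):
    return int(not (a or b))

def not_(a):    return nor(a, a)               # NOT a  = a NOR a
def or_(a, b):  return nor(nor(a, b), nor(a, b))  # OR = NOT(NOR)
def and_(a, b): return nor(nor(a, a), nor(b, b))  # AND = NOR(NOT a, NOT b)

for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == (a | b)
        assert and_(a, b) == (a & b)
    assert not_(a) == 1 - a
print("NOR reproduces NOT, OR, and AND")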

12 See my book The Age of Intelligent Machines, section titled "A nor B: The Basis of Intelligence?," pages 152 - 157.

13 Edward Fredkin, as quoted in Did the Universe Just Happen by Robert Wright.

14 Ibid.

15 Many of Fredkin's results come from studying his own model of computation, which explicitly reflects a number of fundamental principles of physics. See the classic Edward Fredkin and Tommaso Toffoli, "Conservative Logic," International Journal of Theoretical Physics 21, numbers 3-4 (1982). Also, a set of concerns about the physics of computation analytically similar to those of Fredkin's may be found in Norman Margolus, "Physics and Computation," Ph.D. thesis, MIT.

16 See The Age of Intelligent Machines, section titled "Cybernetics: A new weltanschauung," pages 189 - 198.

17 See the web site: www.digitalphilosophy.org, including Ed Fredkin's essay "Introduction to Digital Philosophy." Also, the National Science Foundation sponsored a workshop during the Summer of 2001 titled "The Digital Perspective," which covered some of the ideas discussed in Wolfram's book. The workshop included Ed Fredkin Norman Margolus, Tom Toffoli, Charles Bennett, David Finkelstein, Jerry Sussman, Tom Knight, and Physics Nobel Laureate Gerard 't Hooft. The workshop proceedings will be published soon, with Tom Toffoli as editor.

18 Stephen Wolfram, A New Kind of Science, page 1,043.

19 Ibid, pages 1,043 - 1,065.

20 The Age of Intelligent Machines, pages 192 - 198.

21 Ibid.

22 The fundamental theorem of calculus establishes that differentiation and integration are inverse operations.
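Stated in symbols, with the discrete counterpart that is the sense in which continuous equations can be expressed as discrete transformations (this standard formulation is mine, not the text's):

```latex
\int_a^b f'(x)\,dx \;=\; f(b) - f(a),
\qquad
\sum_{k=0}^{n-1} f'(a + kh)\,h \;\longrightarrow\; f(b) - f(a)
\quad \bigl(h = \tfrac{b-a}{n},\; n \to \infty\bigr).
```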

Mind·X Discussion About This Article:

CA
posted on 05/14/2002 2:10 AM by tomaz@techemail.com

It's Mr. Stephen Wolfram, who's right here.

(Edward Fredkin had the same ideas, as far as I know.)

It doesn't matter who; the point is that this concept of "everything is CA" just can't be wrong. In the world, which is quite ordered, you can always invent enough CAs to explain everything.

Arguing that this is not enough is pointless. It is _just_ enough (as it should be)!

I expect we'll soon see practical implications also.

- Thomas Kristan

Re: CA
posted on 05/14/2002 3:19 AM by grantc4@hotmail.com

I haven't read the book yet, but it seems to me that Mr. Wolfram's automata rules could just as easily apply to any dichotomy such as on and off, 1 and 0 or even yin and yang. The question in my mind is whether this is the basis for the universe itself or just how we perceive the universe. Is the universe a computer, or have we just learned how to capitalize on our ability to divide up the universe in our minds and use it as a computer? Ultimately, what is the relationship between our perception and reality?

Re: CA
posted on 05/14/2002 2:16 PM by mindxmoderator@kurzweilai.net

I think you misunderstand the primary point of my review. I'm not saying that Wolfram is "wrong" in saying that "everything is CA," although neither Wolfram nor anyone else has yet proven that. That's really not the point I am making. Rather, I am objecting to Wolfram's position that most of the complexity we see in the world is due to the complexity created by a Class 4 CA. I point out that, although a Class 4 CA does create some complexity (randomness combined with identifiable features), the order of complexity (from a Class 4 CA) is limited and never evolves into anything more intelligent.

-Ray Kurzweil

Re: CA
posted on 05/14/2002 3:17 PM by bhoffarth@hotmail.com

I wonder if there exist any man-made evolutionary exercises that could create more complex patterns than Class 4 automata. Are they powerful enough to make them "intelligent"? Finally, since it is so important to watch the entire process in order to see what the outcome will be, we may be unable to see such complexity emerge due to our current level of computational power.

Barrett Hoffarth

Re: CA
posted on 05/14/2002 5:01 PM by tomaz@techemail.com

I see! We discuss the _level_ of complexity only. ;)

If there is no Singularity by 2020 ... the world is more complex than I think it is.

But if Stephen Wolfram is right (if I understand what his point is) - the Singularity _will_ be near!

- Thomas

Re: CA
posted on 05/14/2002 7:03 PM by mizell_ken@hotmail.com

-my first post.. I've been reading for a month or two.

I agree that Wolfram is on to something here.. I've been wondering if chemistry is exactly this.. a predetermined set of rules on the smallest particles added up to be a more complex set of rules for, say... an atom, and an even more complex set of rules for a molecule.

Is biochemistry such a stretch?..

One other point.. the dichotomy of analog and digital mentioned in Ray's review.. what if Wolfram is right about CAs, and each CA is set to its original starting point based on an analog wave pattern and the time it was created in the system? Could this explain the level of randomness needed to "accidentally" (at its predetermined time billions of years later) create intelligence? I firmly believe that Ray is right in saying that "I" am a continuing pattern.. the residual of what I was, incremented forward to its next iteration. If this is true (and I believe it is), then if we can extend our bodies long enough to upload our "pattern" we will definitely "live" forever.

Yes we are all stardust... but is stardust only patterns of information represented as energy and matter? I think so.

P.S. Thomas, you seem incredibly bright. I enjoy all of your comments and agree with many of them.

Re: CA
posted on 07/15/2002 3:46 PM by RandallBouza@hotmail.com

I have been contemplating this type of thing for a few years, and have had meager success when attempting to articulate it to co-workers and/or friends, and when attempting to get them to understand the significance and ramifications. It's nice to finally encounter someone with similar thinking. If our brains/minds (patterns) can in fact be "uploaded" to some other machine/system, what a wondrous and strange facet of reality that will present. As you stated, shortly before a person dies (i.e., the pattern ceases to exist) it would be imperative to "upload". Along a similar line, since the pattern changes throughout the decades (albeit possibly within somewhat restrictive "individual" person constraints), couldn't several versions of patterns be uploaded, and prior to death, a selection could be made as to the "best" one (e.g., age 30), which will be the one that you wish to represent you in perpetuity? Who would "own" your pattern? Would there be any limitations on if and how often your "pattern" could be reproduced? What if somebody purposefully or accidentally deleted your pattern(s) ("killed" you)? These questions seem somewhat similar to, but even more significant than, those being wrestled with by the human genome ethicists.

Re: CA
posted on 07/26/2002 5:04 AM by bobdole@loser.com

Yes, but would a so-called identical pattern also contain the same consciousness (or mind, with respect to Descartes)? I doubt it. Anyway, the 'uploading of your pattern' idea is much too far-fetched to be conceivable any time soon. Even trying to define someone's 'pattern' would be far too complicated, much less trying to recreate it. So try not to compare this fantasy with the ethics of real-life applications.

Re: CA
posted on 07/26/2002 8:58 AM by tomaz@techemail.com

Mr. (to be President?)!

I wonder what (urge) makes you express that opinion here?

And more - what grounds is it based on?

Or is it just another "Don't try to fly, for God's sake! It's not possible and very dangerous experimentation." statement?

- Thomas

Re: CA
posted on 07/26/2002 1:13 PM by azb@llnl.gov

The brain-substrate supporting your "identity-pattern" does not contain the same consciousness, from moment to moment, either. It just feels like it does, due to its inherited-memory embedding.

If we can engineer "new substrates" for mind, we would be "creating new minds" in the process, never really "uploading" ours, any more than copying/duplicating a running software program "sucks the dynamic state" out of one into the other.

"We" have nothing to gain by creating new minds saddled with our current sense-of-self memory embeddings. Do we want our "children" to be raised in the same "poverty" which bore us? Better to start them fresh with improved patterns and embeddings.

Cheers! ____tony b____

Re: CA
posted on 09/30/2007 6:45 PM by cleariver

We are not machines that are programmed and "cease to exist" upon death. Each of us is a "specific energetic signature" that is unique in all of creation. We exist as ONE within Source (All That Is), and move according to function within THAT. We are waves of consciousness that exist before and after the physical body does. We create an "ego personality" to assist us in navigating the earth experience here. Our ego can be merged with our Vibrational Essence (signature) as we raise our consciousness to "remember" who we truly are... If you imagine that God/All That Is is like the human body (metaphor), then each of us is a cell in that body. We move according to function within the body. A brain cell is aware of the toe cell, but would never want to move there or be that. We each also are like a drop of water that knows itSelf as THAT until it falls into the ocean... then... knows itself as a drop of water AND the ocean. How does that sound?...

Re: CA
posted on 05/19/2002 1:29 AM by cyberdiction@hotmail.com

Hi,

I'm wondering about the reverse-engineering part of the article. Minsky pointed out the difficulty in trying to evolve the mind level of software starting from a batch soup (NN). However, trying to emulate the function of the mind from the finished product seems a lot more doable. But I have not heard of much headway with what IMO amounts to a definition of consciousness, which has proven elusive.

I enjoyed the article. I also notice that the CA usage of randomness would include Pi, although Algorithmic Information Theory would disallow it.

Best regards,
Stephen

the secret-of-the-universe in one line of code
posted on 05/20/2002 10:52 AM by sblake@steveblakedesign.com

In response to this piece from the "Wired" article by Steven Levy.

As dessert is served, I bring up the secret-of-the-universe question. Wolfram's theory that there is a single rule at the heart of everything - a single simple algorithm that, in effect, generates all the rules of physics and everything else - is bound to be one of his most controversial claims, a theory that even some of his close friends in physics aren't buying. Furthermore, Wolfram rubs our faces in the dreary implications of his contention. Not only does a single measly rule account for everything, but if one day we actually see the rule, he predicts, we'll probably find it unimpressive. "One might expect," he writes, "that in the end there would be nothing special about the rule for our universe - just as there has turned out to be nothing special about our position in the solar system or the galaxy."

This "New Kind of Science" is really very old science. The "single measly rule" is actually a proportion.

(The square root of 5, plus 1) divided by 2.


It's kind of shocking that Wolfram could spend 10 years writing this book and then have the nerve to publish it without the answer.

Stephen Blake

Re: the secret-of-the-universe in one line of code
posted on 02/19/2003 6:31 AM by PSYBERTRON

I've scan-read ANKOS about 5 times, and admit I've still not felt compelled to buy and read the whole thing. I think we need to separate cause from effect, and new from old, here. My observations are two-fold:

(1) There seems little doubt that mind or consciousness is emergent from the complex evolved assembly we know as the brain, and that this assembly consists of biological / bio-chemical matter which is itself based on the basic physics underlying every other phenomenon that exists in the physical world (including the paranormal, I suspect). At some basic level the laws of physics are few and simple - it's the relationships and assemblies that can be complex. So nothing new here.

(2) Phenomena like the apparent significance of the golden ratio (the root of five, plus one, over 2, above), and other numerical patterns in seemingly unmathematical aspects of real-world human life (like 80/20 rules and power laws, for example), arise because they too share a common heredity in the same underlying simple physics and mathematics (see quantum information and quantum genetics work). It is not because life-sized real-world phenomena are directly explainable in terms of simple formulae like the golden ratio or CA generator rules.

Re: CA
posted on 05/22/2002 2:34 PM by NigelRaymondo@aol.com

I hope Mr Wolfram did not rely too much on his own program when preparing his book. Until 1996 the pseudorandom number generator in Mathematica sucked.

Nigel Raymond

Re: CA
posted on 05/22/2002 3:11 PM by tomaz@techemail.com

I'll spend 14 days with this book in July. Well, half of this time. Days mainly.

I wonder if useful. Might be. Might be not.

How practical is this theory - is the main question for me. That it's theoretically okay, I know already.

- Thomas

Re: CA
posted on 05/23/2002 11:49 AM by karun@tranquilmoney.com


I enjoyed Ray's review of Wolfram's book. I wrote a similar point of view for a critique on an economics list at http://maelstrom.stjohns.edu/CGI/wa.exe?A2=ind0205&L=hayek-l&D=1&O=D&P=13386. I think Wolfram's work is great, but I have some epistemological differences. Were I to write a complete review, I would have to add in much of the praise Kurzweil has taken the trouble to write. Here is the critique anyway:

---------------------------------------

I have now done a brief speed read of Wolfram's book. The core idea is that complex (NP-complete) systems can evolve out of simple rules. This is indeed Hayekian, but not new in and of itself. I am trying to find what else it says. In the beginning he claims it has implications in philosophy, social science, etc., but unfortunately the book is not organized by such areas. Rather, it proceeds by examining particular sets of rules he found that exhibited interesting, life-like behavior and shapes of entities. Along the way, he says how this is relevant to regular science, technology, etc.

Unfortunately, somewhere along the way he goes overboard and seems to adopt Leibniz's metaphysics. He claims that if complex systems can emerge out of nodes acting per simple rules, then it is plausible that the Universe *actually* consists of discrete nodes interacting per some Grand Simple Rules which we have yet to find out (he does not claim to have found them yet). This begs the question of continuous systems as being fundamentally analog and not digital or discrete. That reality can be fairly accurately modeled using an atomic system does not mean that it IS atomic (i.e., consisting of 'uncuttable' -- atomic -- particles). Newton's metaphysics transcended this dilemma by using continuous-time equations, and he invented much of calculus in order to do so. In fact, I think Kant split with the Leibnizians because of this metaphysics, though Kant's metaphysics tried to transcend Newton's basic 3D moving in time.

In short, I find interesting experiments, and see some use in his painstaking discovery of which particular systems of rules produce lifelike objects and behavior. But I do not find a new Science, and regular Mathematics provides a LOT more than Wolfram does in this book (Leibniz's contributions to math are phenomenal, even if his metaphysics is flawed). A far better (and shorter) book with far better real math (which makes it a very hard read) is John Holland's Adaptation in Natural and Artificial Systems.

Since the view on metaphysics is the most important to an Austrian group, I think that is where attention is due. I think Kant had it almost right, though he inverts what I believe is important -- I believe that reality itself and the data it provides are most important, and logic and math are only tools that we use. The awe and respect should not be of such 'transcendental' relationships, but of reality; the transcendental relationships of mental constructs are only tools that we can use to discover how reality (including us) works. The problem here is whether this is too much of a blow to the Enlightenment prime mover of exaltation of the ego and man's capabilities. I think sentience -- the ability to use language, awareness that we use language to model reality, and awareness of the distinction between models and reality itself -- is enough to sustain ego as something positive and worth holding important, without going overboard and becoming enraptured by transcendental / apodeictic aspects of math and logic and forgetting about reality in the process. It is around this principle that I find Hayek's "knowledge problem" and critique of planning different from the apriorists'.

Certainly Wolfram also emphasizes time and again in this book that even
complex systems generated from simple axiomatic systems display "knowledge
problems" that are insurmountable. But this does not mean that logic and
math are not important and useful. Holland is an example of useful math in
complex system analysis (NOT in the generation of complex systems but in the
analysis of complex systems one finds already given in the world). Economics
too could potentially have a different math that largely agrees with the
classical economists and still provides causal reasons for commonly known
economic principles -- it simply remains for someone to abstract what they
are saying into mathematical equations -- hard, but not impossible.

Regards,

Karun
--
Karun Philip
Author: "Zen and the Art of Funk Capitalism: A General Theory of
Fallibility"
http://www.k-capital.com/HA.htm

Re: CA
posted on 05/23/2002 4:44 PM by serambin@ailink.org

[Top]
[Mind·X]
[Reply to this post]

In your critique, you speak of reality as something concrete. One could submit that reality, as you propose it, is hard to find. Relativity shows that reality changes depending on your frame of reference. Gödel assures us that no theorem can be proven. Quantum Mechanics reveals that measuring things always changes them, and therefore changes reality.

When I was in school, a physics teacher defined reality as the ultimate frame of reference. However, in order to see the view, you must be at the foot of the Great Toad. I've been looking for road signs ever since.

Stan Rambin

Re: CA
posted on 05/23/2002 5:38 PM by karun@tranquilmoney.com

Stan wrote:

"In your critique, you speak of reality as something concrete. One could submit that reality, as you propose it, is hard to find. Relativity shows that reality changes depending on your frame of reference. Gödel assures us that no theorem can be proven. Quantum Mechanics reveals that measuring things always changes them, and therefore changes reality."

What I mean by the word reality is not something that is hard to experience -- but perhaps hard to understand. I just mean the material universe, whatever it may *actually* be (energy, unknowable in toto, etc.). People do tend to distance themselves from reality because they are thinking of their absolutely certain knowledge about it, instead of simply going out and smelling a rose -- reality is easy to experience even if hard to understand.

In the book, I indeed begin with Gödel, and point to J.R. Lucas' demonstration that the implication of Gödel's meta-mathematics for mathematical logic is the non-refutability of fallibility. There are levels of abstraction from reality in our brains. Our perceptions are (most probably, unrefutedly so far) neural responses to stimuli from something "out there". I do not believe fallibility has major philosophical implications at the perception level (though technically one has to simply conjecture that there is a reality out there and have faith in that belief at least until and unless it is refuted). There can be perceptual illusions, mismeasurement, limitations of the senses, etc., but machines can usually be designed to perceive more accurately, at least to some degree of accuracy (let's leave QM out of it for now). From perceptions we have words. When we perceive difference in reality -- any difference -- we assign words to each entity/attribute/process that is perceived as different. So language evolves as we perceive ever finer differences. At the next level of abstraction we have logic. Further out, we have mathematics, and then meta-mathematics.

Kant's theory was that these levels of abstraction are analogous (like a Fourier or Laplace transform, for any engineers out there), and so one can use deductions at one level of abstraction -- say mathematics -- and apply the deduced rule to logic or reality. The problem is that while this is a powerful technique, it introduces fallibility -- a theorem may be apodeictic (necessarily or demonstrably true) in math, but its application to reality is heuristic only, and this introduces fallibility. 1+1 = 2, but if I take one stone and place it near another stone and the second one breaks when I put it down, then I have three stones. (Obviously this does not mean 1+1 = 3!!)

Now, coming to QM, I do not see the problem as lying with the nature of reality, but with the limitation of our knowledge of it. See, I am talking about the first level of abstraction I mentioned above -- an object and knowledge of it are two things. One is inside our heads and the other is "out there". The rose and the name of the rose are two entities, though there is a direct correspondence between them. The real rose is complete, though, and the neural perception is not, per the heuristic implications of Gödel. In normal terms, when you remember the rose, it is never the complete thing -- just some of the attributes that you noticed. That we don't know the position and momentum of the electron is our problem. The uncertainty is Heisenberg's and not the particle's. Even the concept of a particle is in serious philosophical trouble. Where exactly does the table in front of you end and the atmosphere begin? Where exactly does Mount Everest end and some other mountain begin? Nevertheless, we use coherence to create atomic constructs -- particle, Mount Everest -- because it is very useful, given our capability for language, logic and math, which need atomic base entities to function. This does not mean that the actual subset of reality you refer to is an atomic (uncuttable) entity. Even in superstring theory, some are now claiming that once the 3D string equation is written, those strings are the uncuttable entities that constitute the universe. But even if there is no further cutting, what about the choice in selecting the coordinate system? There is an Axiom of Choice in math which is probably related both to Gödel and to this apparent problem. But the problem is only a limitation of knowledge, not a question of reality's nature, which is whatever it is and whose origins are whatever its origins are. Newton's metaphysics simply stated that, and explicitly declined to try to explain the ultimate cause.

Theories are our atomic/linguistic/mathematical models that are (hopefully) approximations to whatever reality is. Theories are always conjectures subject to refutation through logic or counter-evidence. With ingenuity, we are able to shift paradigms from time to time and switch to theories that better explain more of whatever perceptions and measurements we get from reality.

Regards,

Karun.
--
Karun Philip
Author: Zen and the Art of Funk Capitalism: A General Theory of Fallibility
http://www.k-capital.com

Re: CA
posted on 05/25/2002 4:11 AM by serambin@ailink.org


My comments were perhaps more concerned with Kant's concept of truth, rather than reality (maybe a difference without a distinction). In the introduction to his Logic, Kant declares, 'A universal, material criterion of truth is not possible' -- indeed, it is even self-contradictory. 'But material truth must consist in the agreement of a cognition with that definite object to which it refers.' My contention is that our perception of 'reality' is only a coarse graining of truth, and that the coarseness is determined by the limitations of our perception and empirical experience combined with the limitations of our cognition. The desk supporting the monitor in front of me appears to be solid and substantial. It is counterintuitive to speak of it as being mostly empty space, even more so to assert that the monitor is not being supported by what I see, but rather by the unseen forces between that which I see.

I choose to view Stephen Wolfram's book not in the context of its universality of truth, but instead by its contribution (and it may be considerable) both to the discussion of complexity and to the four philosophical questions of Kant.

1. What can I know? (Answered by Metaphysics)
2. What ought I do? (Answered by Morality)
3. What may I hope? (Answered by Religion/Science)
4. What is Man? (If we can answer this one, we will know the answer to the other three)

Thanks,

Stan Rambin

Re: CA
posted on 05/25/2002 9:23 AM by karun@tranquilmoney.com

Stan wrote:

"1. What can I know? (Answered by Metaphysics)
2. What ought I do? (Answered by Morality)
3. What may I hope? (Answered by Religion/Science)
4. What is Man? (If we can answer this one, we will know the answer to the other three)"

I would put the questions:

1. What is? (Answered by Metaphysics)
2. What can I know? (Answered by Epistemology)
3. What ought I do? (Answered by Morality)
4. What may I hope? (Answered by Religion/Science)
5. What is Man? (Answered by Sentience. Also answered by Ray Kurzweil as a being with a perceptron. Answered by me as a being with a perceptron complex enough to develop language)



Re: CA
posted on 05/28/2002 10:38 PM by jhughes@noao.edu

1. What can I know
vs.
1. What is

That is the whole point Kant is trying to make...
Saying 'What is' presupposes getting under the
skin of the Noumena. 'What can I know' is much
more cautious and, IMHO, more likely to keep us
out of the 'formula worship' bramble which I
detect in Stephen Wolfram's motorcycle.

I think Ray K. is MUCH closer to the mark and
thank him for saving me $44 which I was about to
spend on a 1,200-page paperweight.

Re: CA
posted on 05/29/2002 9:02 AM by karun@tranquilmoney.com

"1. What can I know
vs.
1. What is

That is the whole point Kant is trying to make...
Saying 'What is' presupposes getting under the
skin of the Noumena. 'What can I know' is much
more cautious"

I agree that it is more cautious. But this is exactly what I disagree with in Kant and Plato. They claim that noumena are "higher" forms than phenomena. They are just more abstract (no normative judgement applied). Neural excitations are still phenomena -- undoubtedly with distinct properties not found in other phenomena. In other words, noumena are just particular types of phenomena -- the types found in perceptrons.

There is no need to get "under the skin" of noumena in order to experience phenomena. Kant does not deny this -- he just disses phenomena as "base". I diss noumena as fallible. Kant exalts the transcendental subset of noumena (which he again sees as non-noumenal and non-phenomenal, which is false -- the transcendental is a subset of the noumenal, which is a subset of the phenomenal), but Gödel proves the transcendental incomplete.

Once you get beyond exalting the transcendental, you can cease conflating metaphysics and epistemology. Yes, reality is not understandable with completeness, but reality is experienceable. Inferences based on observations are fallible, but fallible does not mean false -- just the possibility of falsehood along with a non-zero possibility of truth. As Popper put it, conjectural inferences subject to debate and refutation, and which survive debate and attempted refutation, are considered acceptable by modern Science. It seems to work quite well. Using transcendental concepts (such as the Hamiltonian formulation in representing the mathematics of physics) is useful, but as Einstein showed, Newton was still wrong -- there is no instantaneous force in the universe (as Newton described gravity). Yet we still use Newton's laws of physics and not Einstein's to build bridges and spacecraft -- they are a close enough approximation to reality for the purpose. Of course, we engineers are more realistic than scientists, so we always over-engineer and test, test, test...

Regards,

Karun.
--
Karun Philip
Author: Zen and the Art of Funk Capitalism: A General Theory of Fallibility
http://www.k-capital.com

Re: CA
posted on 05/29/2002 9:56 AM by grantc4@hotmail.com

This reminds me of a man walking into a French restaurant and telling the waiter: "I'd like some dim sum."

"We don't serve dim sum," replies the waiter.

"Well, then" says the customer, "What the hell good are you?" and he walks out.

If you're looking for what Kant has to offer, read Kant. If you're looking for what Wolfram has to offer, read Wolfram. If you feel you don't want what Wolfram has to offer, then for you the book is indeed little more than an expensive paperweight. If you're looking for dim sum, French fare will not satisfy you.

Re: CA
posted on 06/03/2002 6:27 PM by serambin@ailink.org

You Wrote:
This reminds me of a man walking into a French restaurant and telling the waiter: "I'd like some dim sum."

Let us look at your comment. Under what circumstances might this be a perfectly acceptable question to ask?

1. If the man knew that the chef was named Wan Foo and prepared Dim Sum for the staff everyday.
2. If the chef had a personal interest in Chinese cooking and had invited the man to try his Dim Sum.
3. If the French restaurant next door served Dim Sum and the man was in the wrong establishment.
4. Perhaps he was misinformed about the Canapé du Jour being Dim Sum.

The similarities between Dim Sum and French appetizers are considerable. They are both served in small quantities. Both have many of the same ingredients. Both are considered finger food.

If science uses logical assumptions and methodologies to arrive at a conclusion (as opposed to empirical ones), then you must read Wolfram or Kurzweil with an eye on the framework of Logic extolled by Kant.

Re: CA
posted on 06/03/2002 10:51 PM by grantc4@hotmail.com

>If science uses logical assumptions and methodologies to arrive at a conclusion (as opposed to empirical ones), then you must read Wolfram or Kurzweil with an eye on the framework of Logic extolled by Kant.

Why? Is there only one possible logical framework? Is Kant the only source for such a framework? I fail to see what your point is. Wolfram's assumptions seem to me to be just as logical as any others I've seen. He lays out his case step by step, demonstrating each one as he goes along. What is he lacking in your opinion?

Re: CA
posted on 06/05/2002 1:21 AM by serambin@ailink.org

>Why? Is there only one possible logical framework? Is Kant the only source for such a framework? I fail to see what your point is. Wolfram's assumptions seem to me to be just as logical as any others I've seen. He lays out his case step by step, demonstrating each one as he goes along. What is he lacking in your opinion?

Logic, as a modern scientific methodology, begins with Descartes, continues with Leibniz, and culminates in Kant. If support for a position is based on logic, then there is only one framework. Logical methodology is meant to illuminate a view, to make it clear and distinct from opposing views, and to work from the components to the whole of the conclusion. Science is not based on opinions, regardless of their beauty or simplicity.

This brings us to the problem. The conclusion that 'everything' can be explained by CA is not supported by a structure, either of logic or of empirical evidence, adequate for making such a broad claim. It seems as if he moves from A = B, B = C, to C = Everything. The conclusion is more than the sum of the parts.

This does not mean that much of the treatise is not provocative and well presented. It does represent a serious effort (and a mostly successful one) to shed light on a basic question about the nature of the universe. But I think it is safe to say the jury is still out on the 'answer to everything' part. Big Ideas require Big peer review.

Re: CA
posted on 06/05/2002 2:29 AM by tomaz@techemail.com

> Big Ideas require Big peer review.

Yes! The peers have to have enough time to see that there is little for them to say. :)

- Thomas

Re: CA
posted on 06/13/2002 9:57 PM by mitchs@enteract.com

>If science uses logical assumptions and methodologies to arrive at a conclusion (as opposed to empirical ones), then you must read Wolfram or Kurzweil with an eye on the framework of Logic extolled by Kant.

>Why? Is there only one possible logical framework? Is Kant the only source for such a framework? I fail to see what your point is. Wolfram's assumptions seem to me to be just as logical as any others I've seen. He lays out his case step by step, demonstrating each one as he goes along. What is he lacking in your opinion?

------

Certainly, Kant is not the only possible source for such a framework. However, his motivating question was whether or not metaphysics could be possible as a science. He was prompted in this direction by the retreat to skepticism embraced by Hume. Moreover, contrary to the suggestion of other posts on this topic, Kant did not formulate transcendental idealism lightly. Rather, it was the only means by which he could establish a framework under which causality could be realized as a function of the sentient organism. Without this, Hume's skepticism must reign.

(For the record, he retracted his use of the term "transcendental" in favor of the term "critical" so as to reduce the confusion it was causing. In either case, his use of the term is a restricted reference to the cognitive faculty. It does not refer to cognition itself. Nor is it a reference to hyperphysical experience.)

You see, Hume is very convincing in his rejection that causal relationships may be attributed to some independent reality. As a matter of pragmatism, we must each adopt some assumption concerning such independence lest we fall into the trap of solipsism. Nevertheless, the logical progression from Descartes' meditations leads to the conclusion that our understanding of causality is internally constructed rather than observed. That was Hume's contribution and, in Kant's opinion, he failed to recognize how he might recover from his conclusions.

Kant may not be the only source for a logical framework; however, the problem which led him to write "Critique of Pure Reason" had the potential to undermine all logical frameworks. The question he addressed was far more devastating than the incompleteness of arithmetic discovered by Goedel. Unfortunately, the pragmatic assumptions of those who practice the "hard" sciences often allow them to dismiss these philosophical debates as if they lacked relevance.

With regard to your question concerning uniqueness, I can only offer my own thoughts on the matter.

Kant begins with a definition of space and time which are not the platonistic concepts of modern physics. Quite simply, he states that time is the form of inner sense and space, by all appearances, is the form of outer sense. Note that the term "appearance" has a technical meaning as the presentation of sensory perception associated with an object. The nature of objects in and of themselves is not known.

In any case, time and space are forms of sensibility for Kant. They do not necessarily constitute an environment independent of the sensuous organism as they did for Newton. They are simply a bifurcation of sensory experience into an internal form and an external form.

Now, one of the motivations for pursuing a foundation for mathematics in the nineteenth and twentieth centuries was to understand the concept of certainty -- and, possibly, to identify that which could be known for certain. A diverse collection of accumulated results was reorganized under axiomatic systems. At the same time, Cantor's idea concerning collections taken as objects in their own right began to demonstrate utility. Eventually, set theory became the focus of foundational mathematics.

It would seem that logical thought has been converging to a common logical framework. But few would ever credit Kant with a prediction of this event.

You see, Kant recognized that time and space, as forms of sensibility, were the foundation for the intuitions of mathematicians. He is very clear about this and characterizes mathematical intuition as synthetic a priori cognition. This is to be contrasted with the analytic cognition typical of metaphysical concepts. The difference between the two lies in the fact that synthetic cognitions must be accompanied by a mental visualization. Otherwise, the two types of cognition share only the simple constraint that they not be self-contradictory.

One may think of the visualization characterizing synthetic cognitions as a rule of detachment that allows a language user to formulate an opinion concerning the certainty of the cognition based on personal experience. In contrast, opinions concerning metaphysical questions in general often depend on how credible the source of information is believed to be.

Now, in "Prolegomena to Any Future Metaphysics" Kant is very clear about the fact that the relationship between appearances is described using mathematics. It is this assertion which I believe constitutes a prediction concerning the evolution of a logical framework. Moreover, it suggests that mathematics is a sublanguage--or, portable subgrammar--whose continued refinement is governed by our attempts to explain natural phenomena.

To the extent that the Kantian framework is valid, the assumed validity of physical laws adopted by the modern scientific community should lead to a common logical framework.

There are two unfortunate discrepancies in this scenario. On the one hand, Kant's detractors were able to focus criticism on some of his assertions because his assumptions concerning the absoluteness of Euclidean geometry were about to be challenged by the discovery of curved geometries. On the other, the twentieth century mathematicians failed to complete their work in foundations. As for the former, one need only focus on Kant's general statements to assess their continued relevance. I shall return to the latter momentarily.

This post began with the observation that Kant was motivated by Hume's arguments concerning causality. For Kant, nature is not the existence of things in and of themselves. Rather, it is the complex of all objects of experience as given to us through time and space as forms of sensibility. Although subtle, this distinction is the link which permits Kant to recover causal intuition.

Indeed, temporality as an internalized experience yields a notion of causality which can be attributed to the sentient organism a priori. Namely, there is a distinction between asserting that everything has a cause and asserting that every observed event has an antecedent which it follows according to a universal law. The former assumes knowledge concerning the nature of reality. In contrast, the latter merely reflects the ability of the cognitive faculty to organize its experiences according to an ordering. Naturally, Kant adopted the presumption of antecedents because he believed it reflected conditions on which experience was possible that could be known a priori to any possible experience.

In the end, however, Kant concluded that metaphysics as a science was not possible. In order for our knowledge of mathematics and natural science to be secure, our knowledge of reality must be obscured. Mathematics and natural science are valid with respect to appearances. However, reality is hidden behind the filter of our sensuous experience. As the translator's preface to my copy of the "Prolegomena to Any Future Metaphysics" observes, his transcendental philosophy "is a lesson in intellectual humility."

Returning momentarily to the earlier observation that the mathematical community failed to complete their work, I direct your attention to conventional first-order model theory, where the identity predicate is taken to be a "logical" symbol of the language. This fails a use case analysis of mathematicians as language users. Specifically, note that mathematicians use definitions to introduce language elements. Neither the identity predicate nor the membership predicate of set theory has a definition in conventional models. Moreover, given that no model of the axioms of set theory can be proven to be a model of the class universe, the interpretation of these predicates is without foundation.

For the last seventeen years I have been working on a different foundation for set theory. The sentences which resolve the problems of the preceding paragraph are trivial and uninteresting from a mathematical perspective. They were formulated in such a manner as to not impact any existing results in mathematics. Consequently, they solve no problems of current interest.

Nevertheless, the sentences are philosophically interesting. As the predicates are obtained via definition, the first predicate of the language is necessarily self-defining. It is a strict transitive predicate which is subsequently interpreted as strict set containment when characterized in terms of the membership predicate by a theorem.

Now, the transitivity of the initial predicate is necessary to the definition of a membership relation. To see this, visualize membership relative to ascending nestings of superclasses. That is, X is an element of Y if X is an element of every superset of Y.
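
For what it's worth, in a finite toy model this nesting picture can be checked mechanically: since Y is among its own supersets, "X belongs to every superset of Y within a universe" coincides with plain membership. A small sketch (the function names and the finite universe are my own illustration, not the poster's system):

```python
from itertools import combinations

def supersets(y, universe):
    """All subsets of `universe` that contain `y` (finite toy model)."""
    elems = list(universe)
    return [
        frozenset(c)
        for r in range(len(elems) + 1)
        for c in combinations(elems, r)
        if y <= frozenset(c)
    ]

def member_via_supersets(x, y, universe):
    """X 'is an element of' Y iff X belongs to every superset of Y."""
    return all(x in z for z in supersets(y, universe))

u = {1, 2, 3}
y = frozenset({1, 2})
print(member_via_supersets(1, y, u))  # agrees with ordinary membership of 1 in y
print(member_via_supersets(3, y, u))  # agrees with ordinary membership of 3 in y
```

The equivalence is immediate in this finite setting precisely because Y occurs in its own list of supersets; the poster's point is about recovering membership from a transitive containment predicate, which this sketch only gestures at.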

Finally, the identity predicate is characterized in terms of topological separation rather than extensionality. More precisely, distinctness is characterized thusly. This result, however, requires axioms. Contrast this with conventional model theory where one must assume that an identity predicate is understood a priori.

In any case, the use of circular reference to obtain a language for mathematics completes its foundation. The self-referential syntactic forms constitute a construction from first principles. Circularity, however, makes those principles void of meaning without a supporting visualization in the sense of a synthetic cognition. It, therefore, becomes necessary to introduce Venn diagrams which illustrate the underlying intuitions. The axioms which subsequently yield an identity predicate are derived from manipulations of Venn diagrams expressing topological separation. It is quite elegant.

So, it has been my experience that there is precisely one logical framework. Its purpose is fundamentally grammatical. That is, in order for us to communicate, we must share a capability to process linguistic constructions. Whether or not language processing is binary, the data provided to each of us is obtained bitwise from the firing of neurons. Ultimately, our ability to agree on a representation for this framework necessitates a self-referential form. This follows from the fact that our neural networks are disconnected.

That is, we are agreeing on conscious experience. Therefore, we must also agree on the self-contained nature of that experience.

The existence of other frameworks derives from constructions similar to those of conventional model theory. They depend on metalinguistic support which, in turn, depends on metametalinguistic support and so on. This is just a grammatical expression of Russell's unsatisfactory theory of types. His fear of self-reference completely biased the foundations of mathematics to reject any use of circular reference. Thus, a generation of mathematicians failed to properly investigate nonparadoxical uses of self-referencing syntactic forms.

As for your final question, it would be difficult to contend with your observation that Mr. Wolfram's assumptions seem logical. As noted earlier, opinions formulated about analytic cognitions are closely bound with the credibility of the information source. And there is no problem with that. In spite of Kant's own conclusions, he recognized that humans would never stop seeking answers for their questions. He discussed incompleteness long before Goedel. Goedel simply expressed it in a form which would silence Hilbert. Thank goodness.

Mr. Wolfram is pursuing answers in the best of traditions. He is lacking nothing. Personally, however, I do not find his work compelling.

Re: CA
posted on 06/14/2002 11:04 AM by grantc4@hotmail.com

That was the most excellent and well thought-out reply I've ever received, and I really appreciate the effort that went into it. I'm also convinced that mathematics is just a subset of language, and it's the restrictions placed on the use of language by mathematics that make it so powerful. Unfortunately, those same restrictions also take away some of the power of language.

I also find the space-time thing especially interesting, as I came to the opposite conclusion from Kant, in that I see space as internal and time as external. I may be looking at different aspects of time and space than he was, but if you think about dimensions and how we use them, space is concerned with the internal dimensions of an object and time is concerned with the external dimensions.

I say this because length, width and height measure the object within its prescribed boundaries while time measures the movement of objects (including the internal dimensions but excluding the external dimensions) through the universe outside the object.

But all of these measurements, it seems to me, are arbitrary and contained within the linguistic framework by which we divide up our experience with the universe. Therefore, another type of being might approach the description of the universe from another point of view not based upon the linguistic structure that is the foundation of our shared experience. We divide the world up into the units we do because our language leads us down this path. Every new concept is built on an older one and is made to fit into the structure that language has built.

But if you read Whorf, you will see that other linguistic groups, such as the American Indians for example, divided the universe of their experience in a different way. The fact that they came around to the European model in the end has more to do with the size and complexity of that model (in my opinion), and with the course of history in which they were absorbed and their culture extinguished, than it has to do with the model we now use being the only possible model.

We are creatures of our own cultural evolution as well as our genetic evolution. Our culture influences the shape of the universe as we perceive it as well as the tools with which we are able to examine it. What we are missing is all the things our culture doesn't include -- which are beyond our imagination simply because our culture hasn't grown large and complex enough to include them yet.

This is starting to run on and run away from the things you were talking about, so I'll just say there were some other areas where the Kantian viewpoint also conflicts with my own, but they are trivial by comparison. Still, we shouldn't forget that the sciences of chaos and complexity show that seemingly trivial parts of a complex adaptive system can cause tremendous changes in the development of that system, and that's what Wolfram's work seems aimed at dealing with.

Grant

Re: CA
posted on 06/14/2002 6:19 PM by mitchs@enteract.com

I expended much effort trying to understand Kant's concept of time. Having grown up with an avid interest in science, I had never questioned the platonistic environment assumed by the scientific community. Ultimately, my deliberations on Kant's view led me to understand time as the partitioning of my stream of consciousness by the internal vocalization of my thoughts. In essence, the demarcation of each syllable delineated the progression of a counting protocol.

I had hoped that Wolfram's work might offer insight concerning a slightly different problem. Namely, could discrete automata explain the assumption of a uniform progression of time underlying modern physics? Typically, computation is an iterative process which is understood in terms of clock cycles. It had been my hope that Wolfram might have been flipping this relationship in some way that would justify this belief in uniformity. Instead, he implicitly relies on this very assumption to run his automata.
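
The implicit clock is easy to see in any toy implementation of an elementary cellular automaton such as rule 110. In this sketch (my own illustration, not Wolfram's code) every cell reads the previous generation and all cells are rewritten at once -- one global tick per iteration, with wrap-around boundaries assumed for simplicity:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.

    Every cell reads its neighborhood from the *same* previous generation,
    and all cells are rewritten together -- an implicit global clock tick.
    The rule number's bits give the output for each 3-cell neighborhood.
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Run a few generations from a single live cell.
row = [0] * 9
row[4] = 1
for _ in range(4):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Note that nothing inside `step` derives the tick; the uniform progression of "time" is supplied from outside by the loop, which is the assumption the post is pointing at.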

Another possibility might have dealt with causal connectedness. The probabilistic interpretation of quantum phenomena is at odds with traditional assumptions concerning causality. The idea that an automata-based model might organize events into a causal structure has some appeal to me. After all, evolution proceeds with respect to environmental conditions, and causal structure is precisely the environment in which the next particle interaction realizes statefulness. What simple rule translates the outcome of that interaction into causal constraints for creations, interactions, and annihilations which have yet to occur?

It would, however, be incorrect to infer that I do not deeply respect the possibilities presented by chaos and complexity. Let me share a topological fantasy...

Imagine the positive real quadrant of a three dimensional complex space. Let a Cantor cube be situated so that one of its corners is coincident with the origin of the space. Moreover, suppose that any macroscopic attempt to measure the edge length of any such Cantor cube always yields the same result. Thus, metric structure fails because of the self-similarity of intermediate Cantor cubes arising from the iterative construction.

Suppose that the distinction between related iterations is temporal and suppose further that the "smaller" Cantor cubes converging to the origin of the space are actually three dimensional projections of higher dimensional cubes. More precisely, think of the iterations as a counting protocol and think of the action of that protocol as increasing the dimension by one for each "smaller" Cantor cube obtained as part of the iteration.

The reason for this increasing dimension lies with the length of the cube diagonals. As the dimension increases the diagonal length increases. Moreover, the angle between the cube edges and the diagonal increases with increasing dimension.

Now, interpreting this counting protocol temporally, only finitely many iterations can have occurred since the beginning of the universe. Whatever the number of iterations may be, it constitutes a local measurement of time. There is no reason to think that this value is universally the same.

How then are we to arrive at a comparison for two such elements? For one thing, we must assume convergence to the origin of the space. This is a bit of a solipsistic approach which asks, "What would I be measuring if I were at that other point?" Instead of thinking about the answer in the usual sense, consider that you must now regress temporally to the beginning of time and then progress temporally so as to arrive at the other point about which the question was asked.

As for the other relevant issue, the only mechanism for universally capturing "arbitrarily large but finite" in the absence of known bounds is by assuming global convergence through an infinite regression. It is my opinion that the assumption of uniform temporal progression throughout the universe is simply obfuscating an infinite assumption.

Now observe that in this limit, the diagonal of the cubes becomes an independent dimension orthogonal to all spatial dimensions from which it was derived. Moreover, it becomes infinitely long. Finally, the construction is situated in the positive real quadrant of an infinite dimensional complex space.

I admit that this is a total fantasy. But, it offered me a framework for things I was compelled to reconcile. I needed to see how the infinite dimensional complex spaces of quantum mechanics might arise from the spatiality I experience. Furthermore, I needed to realize a platonistic time dimension in terms of the spatial elements of my experience.

You should note, however, the central role of the Cantor cube in this construction. The self-similarity is essential to any utility it may have. So, you see, I have a great deal of respect for the contribution chaos and complexity might have for fundamental theories. I believe they may hold the key to understanding temporality as a topological feature of space.

When I first studied Kant's ideas of space and time I was highly motivated to understand its relationship to the platonistic space and time of modern physics. To a large extent I have found an answer that I can live with. However, I doubt it is the kind of answer with which others would be comfortable. Whether or not one is speaking of the ideality of space and time or the world view inherent to our natural languages, there are clearly limitations on our ability to describe the universe. Cellular automata may provide some new perspectives, but it is unlikely that they will circumvent these constraints.

Re: CA
posted on 02/19/2003 6:47 AM by PSYBERTRON

In response to Karun Philip, I quote: "continuous systems as being fundamentally analog and not digital or discrete"

This misses the key point about the physics underlying the world - at some quantum level everything does indeed come in discrete packets, (though since these always seem to involve conjugate variables, binary is probably an inappropriate term, as quantum information work is showing).

Strange error, considering the extensive reference in this correspondence to the history of physics in views of the world.

Ian Glendinning
Reading, UK

Re: CA
posted on 02/19/2003 8:18 AM by karunphilip

I have not been to this site for a long time, but got a message pointing to this reply. I like the Kant discussion that followed -- mitchs has obviously done a lot of homework.

Your point that QM presumes a discrete substructure is correct in that that is the mainstream theory right now. However, there is an interpretation of superstring theory that models reality as a (continuous) superfluid (much like Newton's aether), with superstring theory simply an attempt to express the equations of that superfluid. On another list I wrote a more detailed description of this concept, but the archives are not public.

All theories are subject to revision -- just because a discrete quantum model exists, don't assume that it is the end of science.

Regards,

Karun.

Re: CA
posted on 02/19/2003 8:51 AM by PSYBERTRON

I understand the point Karun.

I'm not suggesting that QM is the end (base) of physics - ripples in strings in the 26-dimensional ether, or whatever they think of next - 42 dimensions perhaps :-) ...

What I am saying though, is that there is a level (in the physics of everything) where everything is quantised.

Now whilst I agree all science is open to revision in the light of new knowledge, are you suggesting there is any evidence to revise this particular view?

Ian Glendinning
Reading, UK

PS I did peruse the entire correspondence on Ray's paper - researching alternate views - I didn't pick on your contribution specifically - just interjected where I saw interesting points.

Re: CA
posted on 02/19/2003 8:56 AM by karunphilip

There is also a level of physics where atoms are uncuttable (that is the meaning of the Greek word a-tom). It is generally useful to work on discrete models (which is why I think Wolfram's book is well within the scientific tradition and worthy of praise). I only object to the metaphysical commitment that shows up from time to time. As mitchs says, there are some insurmountable problems that Wolfram not only does not address but appears to be unaware of.

Note that there are already quantum tunneling transistors that were produced using the discrete quantum model. So theories, despite being fallible, are nevertheless useful. Some cultures even developed complex architecture based on what they thought were theological principles.

Regards,

Karun.


Re: CA
posted on 02/19/2003 9:11 AM by PSYBERTRON

OK, so I clearly need to follow up mitchs' threads, but for now are we agreeing that "life the universe and everything" is indeed quantised at (more than) one level at least?

DNA-bases, Atoms, Quanta ... whatever ?

I'm not suggesting this is the answer to any momentous question in itself - just a "useful" "fact" to bear in mind.

Perhaps it might be simpler (for me) if you explained what you originally intended by "fundamentally continuous".

On the subject of Wolfram, I'm sceptical as to whether he has found anything fundamentally new, except a new focus of debate. However, what keeps bringing me back to it (despite the fact I'm sceptical over its originality and its hyped ANKOS title) is that it does seem to lead the debate very quickly to metaphysics, avoidable or not.

Re: CA
posted on 02/19/2003 9:33 AM by karunphilip

Instead of saying reality "is" quantum at one (or more) levels, I prefer to say reality "can be modeled" as a discrete or quantized system. There is a difference between the model and whatever the noumenon is, which is mitchs' point.

The reason I am tentatively committed to a "fundamentally continuous" model of reality is that it explains why a discrete model of that reality (mathematical, linguistic, or whatever) would be unable to fully explain it. Think of a continuous function f(t) as a wavy line on a graph. What we have using language (words) are only discrete entities. We can have correspondence between the model and the continuous thing it models, but not completeness. If we get finer points to work with we can test and then refine our model. Language evolves into ever finer differentiation of concepts as ever finer distinctions in reality are perceived or conjectured. But no theory of reality will ever *equal* it.
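Karun's wavy-line argument can be sketched numerically. This is my own illustration, not from the thread: a step-function ("discrete") model of a continuous f(t) improves as we use finer points, yet its worst-case error never reaches zero. The choice of sin and of the sample counts is arbitrary.

```python
import math

def discrete_model(f, n, t0=0.0, t1=1.0):
    """Model f on [t0, t1] with n constant samples -- 'words' for a wavy line."""
    h = (t1 - t0) / n
    return [f(t0 + (i + 0.5) * h) for i in range(n)]

def max_error(f, n, t0=0.0, t1=1.0, probes=1000):
    """Worst observed gap between f and its n-sample step model."""
    h = (t1 - t0) / n
    samples = discrete_model(f, n, t0, t1)
    worst = 0.0
    for k in range(probes):
        t = t0 + (t1 - t0) * k / (probes - 1)
        i = min(int((t - t0) / h), n - 1)
        worst = max(worst, abs(f(t) - samples[i]))
    return worst

# Finer points refine the model (correspondence), but the error never hits zero.
for n in (4, 16, 64):
    print(n, max_error(math.sin, n))
```

Each refinement shrinks the gap, but any finite vocabulary of samples leaves a residual gap -- correspondence without completeness.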

While I agree that Kant had discovered this fundamental problem before Goedel or Lucas, it is Lucas' paper (referenced in my book -- it's on the web) that crystallized the concept for me. Popper is the philosopher who spoke most about fallibilism, but he was a disagreeable personality and hence didn't get too popular. I think Hayek is the one who applied the concept to the wider area of social science and economics, which is the important thing. So the book on my site (and on Amazon) uses Hayek as a starting point and goes on to describe a philosophy of commerce.

Regards

Karun.
--
Karun Philip
Author: Zen and the Art of Funk Capitalism: A General Theory of Fallibility
http://www.k-capital.com

Re: CA
posted on 02/19/2003 10:21 AM by PSYBERTRON

Karun, this is enormously interesting to my research, but the day-job calls on my time right now. I think I will review your own work / web-site when I get the chance ....

The issues of interest here are (all rhetorical questions for now):

Any real distinction between how the thing in itself "is" vs. "is modelled"?

Whether the discretisation of one is any different from that of the other?

The difference between the discretisation in the linguistic model and any other possible view of the thing?

Pretty fundamental philosophical/metaphysical issues IMHO.

Bye for now
Ian Glendinning
Reading, UK

Re: CA
posted on 02/19/2003 11:25 AM by karunphilip

I do not think it is possible to have anything but the linguistic discretized model -- the noumenon in and of itself is indeed unknowable at some level, as Kant put it. However, as I mentioned, it is eminently experienceable. Just go out and smell a rose. The point of some meditative techniques (focusing on breathing, for instance) is to help one get close to (but not quite reach) the pure experience without one's fallible conjectures intruding. Unfortunately, these techniques make many people forget about reality entirely and focus on their own conjectures. In the end it is impossible to tell the difference: true nirvana is only for dead people ;-).

Like Kurzweil, I also worked in pattern recognition, neural networks, and character recognition. I came to a point where I realized that neural networks, by the nature of their structure, are inherently fallible. I realized the only way to solve character recognition tolerably was to at least achieve human capability. But if we did this we would have a complete AI. The key to it, I felt, was to have a neural model of language. How can language emerge spontaneously in a neural network receiving stimulus and creating abstract analogies?

Hayek provides a structure and notes that once neural networks learn a pattern, they hold expectations ahead of perceiving. Then the perception comes in and is compared, modifying the expectations a bit if necessary. The key to a neural model of language is noting that perception is the perception of difference. Even if everything out there (and in here) is just superstrings, the particular organizations of superstrings that make up a cat are different from those of a dog. Blue is different from red. When we perceive a difference there is an implied perception of similarity, or categorization. We then assign words to both sides of the difference, and language evolves by noticing ever finer differences.

In an artificial neural network (ANN), we could have a preprocessor that behaves like a back-prop network, and once a difference is found, it can be assigned a handle that is stored in a content-addressable memory (also neurally constructed -- there are well-known models). When the word is recalled, the weights of the back-prop section are recreated to form the expectation. This ANN would keep learning by itself, with the goal of building a linguistic (handle-based) model of reality, while never assuming a model to be final.
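The expectation/perception loop described above can be sketched in toy form. This is my illustration, not Hayek's actual model: the update rule, learning rate, and vectors are invented. A stored expectation is compared against each perception and nudged toward it, so the perceived difference shrinks with repetition.

```python
def update_expectation(expectation, perception, rate=0.2):
    """Nudge the stored expectation toward what was actually perceived."""
    return [e + rate * (p - e) for e, p in zip(expectation, perception)]

def difference(expectation, perception):
    """Perception as the perception of *difference* from expectation."""
    return sum(abs(p - e) for e, p in zip(expectation, perception))

expectation = [0.0, 0.0, 0.0]           # no expectations yet
stimulus = [1.0, 0.0, 1.0]              # the same pattern, seen repeatedly
before = difference(expectation, stimulus)
for _ in range(20):
    expectation = update_expectation(expectation, stimulus)
after = difference(expectation, stimulus)
print(before, round(after, 3))          # 2.0 0.023 -- the pattern is now "expected"
```

Once the difference falls below some threshold, that is the point at which a handle (a word) could be assigned and stored.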

I further propose (not in my book) that creative conjecture proceeds solely by a process of analogy (also neurally implemented, but better done in analog rather than digital electronics). Syllogism itself is an analogy to causality (whether such causality is real or apparent is unimportant). We experience such causality and develop its linguistic analogy, logic. Conjectures are eliminated by testing their logic with the logical rule as well as through evidence of new perceptions.

Building an AI, in my opinion, is useless, practically speaking. We might as well hire someone from China. Its sole use may be to explain ourselves better. I wonder whether it could become a better programmer than us -- somehow I doubt it: when its creativity reaches our level, the degradation due to emotion may be as large as it is in us.

I abandoned research in order to start a company and make real money through regular means, and in the process found myself applying my supposed understanding of myself (i.e., my abstract neural model) to business. As I learnt, I developed my philosophy of business, which my book is about; it ends with a proposal that I claim will largely get rid of extreme global poverty (and yes, it has to do with giving people the knowledge and philosophy of how to maximize the chance of building a successful small business). The last claim I will have to test by actually trying it out, at least at a small scale. 2004 is my goal.

Regards,

Karun.
--
Karun Philip
Author: Zen and the Art of Funk Capitalism: A General Theory of Fallibility
http://www.k-capital.com

Re: CA
posted on 02/19/2003 11:31 AM by PSYBERTRON

You opened with - quote "I do not think it is possible to have anything but the linguistic discretized model"

I avoided saying that for now - but it's what I happen to believe too.

I like your style "Zen and the ...." I really must dig into your stuff and blog interesting links when I get time in a day or two.

Thanks for your thoughts.
I really must switch off for a while :-)
Ian G
www.psybertron.org

Re: CA
posted on 07/29/2002 3:38 PM by Notavailable@home.com

Ray,

You point out that "...although a Class 4 CA does create some complexity (randomness combined with identifiable features), the order of complexity (from a Class 4 CA) is limited and never evolves into anything more intelligent."

I would suggest that in viewing a CA in a closed environment, it is behaving just as it should: evolving outward to fill all the possible forms available to it. Suggesting that it is limited, which it is, and that it will not evolve within its closed environment into anything more intelligent, would seem to be obvious. If we look at a closed system, or create a closed system within nature itself, we could expect to come to the same conclusion: that system's complexity will be limited, and it will not evolve into anything more intelligent over what we may observe in a given period of time.

When rule 110, for instance, is evolving outward, we can change the course of its evolution by forcibly interacting with it at some point in its evolution. In viewing a system in nature, we observe changes in what may be its normal course of evolution through its interaction with other systems. Beyond using a CA (rule 110) as representative of one system in a world of many, we can also use it as an example of the universe itself, viewing it as a closed system (which some may believe the universe is) and observing the interaction of randomness with identifiable features, which themselves may be viewed as evolving constructs (systems) within the larger rule 110 "universe".
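The "forcibly interacting" experiment is easy to sketch. This is my own toy version, assuming the standard Wolfram rule-110 numbering with fixed zero boundaries; the grid size, flip site, and flip timing are arbitrary choices. Run two copies of the same rule-110 history, flip one cell midway in one copy, and the two histories diverge.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary CA (zeros beyond the edges)."""
    table = {(a, b, c): (rule >> (4 * a + 2 * b + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    padded = [0] + cells + [0]
    return [table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

n, t = 80, 40
row_a = [0] * n
row_a[n // 2] = 1                      # a single live cell as initial state
row_b = list(row_a)
for gen in range(t):
    row_a = step(row_a)
    row_b = step(row_b)
    if gen == t // 2:
        row_b[10] = 1 - row_b[10]      # forcibly interact: flip one cell
print(sum(x != y for x, y in zip(row_a, row_b)))  # nonzero: histories diverge
```

The flipped cell acts as a new seed whose consequences propagate, which is the "interaction with other systems" changing the course of evolution.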

Further, I would suggest that to ask, "Just how complex are the results of Class 4 Automata?" is to go back down the road that Wolfram diverged from in recognizing that complexity is destiny and natural selection is not all that important.

You have given me some things to consider relative to Wolfram's work...thank you for your thoughts relative to "hardware, software". The topic does get a little deep at times for us average folk that Wolfram's book seemed to target.

Best.

Lester.

Re: CA
posted on 08/25/2002 7:50 AM by jerryrosenberg@hotmail.com

1) Thank you for engendering some interesting thoughts and comments; much like cellular automata, your review generated an overwhelming complexity of responses, which leads me to

2) Could you (in your copious free time) switch to a response system that organizes responses by thread/theme? It is too overwhelming to process the discussion in a linear fashion, and some of the discussions may have a narrow audience.

3) I have a thousand questions, but...

a) Regarding the discussion of analog vs. digital modeling of the universe, and analytic vs. CA computational solutions: my memory of texts in fluid dynamics (population dynamics, etc.) is always of the derivation of the (analytic) differential equations, such as the Navier-Stokes equations, starting with a picture of small interconnecting volumes (cells) and the interaction of mass or momentum between those neighboring cells; in essence we derive the analytic from the cellular. Conversely, we often arrive at discrete (digital) solutions using analytic wave equations, so I do not quite understand the perceived dichotomy of analog and cellular/computational models. It seems there is some deep connection between the two.
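The "derive the analytic from the cellular" move in (a) can be sketched for the simplest case, 1-D diffusion. This is my illustration; the grid size and exchange coefficient are arbitrary. Each cell trades mass with its neighbors, and in the limit of small cells and time steps this rule becomes the continuous equation du/dt = D d2u/dx2.

```python
def diffuse(u, d=0.2):
    """One neighbor-exchange step on a ring of cells.

    Each cell gains d times each neighbor's excess over itself; this is the
    discrete Laplacian whose continuum limit is the diffusion equation.
    """
    n = len(u)
    return [u[i] + d * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

u = [0.0] * 20
u[10] = 1.0                     # all the "mass" starts in one cell
for _ in range(50):
    u = diffuse(u)
print(round(sum(u), 6))         # 1.0 -- cell-to-cell exchange conserves mass
print(max(u) < 1.0)             # True -- while the blob spreads out
```

The conservation and smoothing seen here are exactly the properties the analytic equation inherits from the cellular picture.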

b) What grounds are there for believing that "nature" will select the fastest method of CA computation, i.e., that we cannot find the solution to complex natural problems faster than nature can compute it herself?

c) If CA solutions can represent shell patterns, why couldn't a class 4 solution represent a Chopin nocturne? Has anyone tried playing the solutions rather than viewing them? (Did Chopin write nocturnes?) Likewise, how do we know that the solutions do not represent the position of each penguin in an Antarctic rookery from 8:00 AM to 2:00 PM (AADT)? I don't believe that it is likely, but I don't see why it isn't possible. It seems that Wolfram could be right that fairly simple interactions between individual penguins (cellular automata) could lead to what appears to us as an extremely complex distribution pattern. And if these are possible, then it isn't quite clear to me why it isn't POSSIBLE for the class 4 solutions to represent "higher order" phenomena such as human behavior.

Once again, thanks for your site and insight

Re: CA
posted on 09/09/2002 2:58 AM by azb0@earthlink.net

Jerry,

> "what grounds are there for believing that "nature" will select the fastest method of ca computation; i.e. we cannot find the solution to complex natural problems faster than nature can compute it herself?"

Whether one considers the "substrate" (underlying physical manifestation) to be CAs, or simply the interactions of particles and forces that have given rise to chemistry, biology, you and I, the weather, etc., it should be recognized that nature does not "solve problems".

What would nature find to be a "problem to be solved"? All life everywhere could go extinct, and nature would not find this to be a problem.

I assume that what you mean is, nature is "computing her future" as fast as she can, and your question becomes, "why can we not outdo nature in this regard?" I believe it is because this would be a contradiction in terms. We ARE nature, and we cannot outdo ourselves.

Suppose we could calculate the weather (in detail) faster than nature can "produce the weather". We might then act to change the weather in advance. Is this evidence that "nature was wrong in her calculation"? No; such activity on our part would be "nature's work" as much as anything else.

We cannot calculate the future ahead of schedule. If we could do so accurately, we could change the future and prove our calculations wrong.

Thinking of the substrate as "CAs producing patterns" is a useful way to examine the universe. But the analogy is not very close to CAs as we might produce them for a Pentium processor. Why?

In the latter case, the CAs are software, and variable by us, with the (relatively unchanging) Pentium processor as the "real substrate".

In contrast (if we take the universe-as-CAs viewpoint), those CAs ARE the substrate. They are not software executing on yet another substrate; they are the rules of the substrate itself. Anything we "build" or engineer is still a manifestation of that same substrate.

We might devise ever more capable "computers" -- molecular computers, quantum computers, whatever -- and ever more clever software, but all of that still executes at the mercy of the underlying physics, the "universal founding CAs". They cannot be modified or moved, any more than you could shift the location of this universe -- you have nowhere to stand from which to move the universe.

Perhaps I have missed the point of your question. If so, let me know. I am not sure what "improving on nature's rate of CA-calculation" is supposed to "effect".

Cheers! ____tony b____

Re: CA: Don't ever post at 4:58 AM
posted on 08/25/2002 8:04 AM by jerryrosenberg@hotmail.com

Please belay my previous comment about the lack of threads at this site; I just noticed the view-index option. I promise never again to post at 4:58 AM.

Jerry Rosenberg

Re: CA
posted on 12/29/2003 7:27 PM by fredzim

I fail to accept your argument on this precise point. If rule 110 is Turing-complete, this means it is able to emulate any conceivable algorithm. Now if you accept that a computer is able to emulate a human being (for instance, by just emulating the behaviour of the atoms in his brain), then you should accept that rule 110 is able to emulate a human being. It's just a matter of starting from the right initial state (presumably a highly complex one). Then rule 110 is able to produce the same level of complexity as a human being. Of course, this behaviour of rule 110 can't show up if we examine mere thousands of bits of its evolution. That's the same problem as if we were able to examine the behaviour of a single neuron, or maybe even a few neurons: it's boring, and it's hard to imagine how a more intelligent "big picture" can arise from it. But the intelligent big picture does arise.

The missing point above is: can computers produce human-like intelligence and complexity ?

I fail to grasp your position on this last question. Your statement:

"but class 4 automata and humans are only computational equivalent in the sense that any two computer programs are computationally equivalent, i.e., both can be run on a Universal Turing machine"

is simply incorrect, and leaves me wondering whether you really know what universality of Turing machines actually is. (For programs P and Q to be equivalent, it is *not* sufficient that both can run on some universal machine U; in some sense, the possible states of Q must contain a representation of the possible states of P, and conversely, so that state transitions are preserved. Which is to say that Q is able to emulate P and conversely.)
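The parenthetical condition can be made concrete with two toy machines, both entirely invented here for illustration: Q emulates P when there is an encoding of P's states into Q's states under which one P-step corresponds to one Q-step.

```python
def p_step(s):                 # P: an abstract counter over states 0..3
    return (s + 1) % 4

def q_step(state):             # Q: a 2-bit binary counter (hi, lo)
    hi, lo = state
    lo = 1 - lo
    if lo == 0:                # the low bit wrapped around: carry
        hi = 1 - hi
    return (hi, lo)

def encode(s):                 # represent a P-state inside Q's state space
    return (s // 2, s % 2)

# Emulation = the diagram commutes: encode(p_step(s)) == q_step(encode(s)).
print(all(encode(p_step(s)) == q_step(encode(s)) for s in range(4)))  # True
```

Mere co-residence of two programs on some universal machine U provides no such encoding; the commuting diagram is the extra content of "Q emulates P".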

Frankly, from my limited current point of view you seem to have failed to grasp the true power of universal Turing machines in general and of rule 110 in particular. I can't take your argumentation seriously with that doubt lingering in my mind.

Re: CA
posted on 05/24/2009 11:56 PM by Sergio123Cabral

Dear Mr Ray , and all here ...

I would like to recommend investigating 3 points:

Point 1 - What will be the perspectives of CA in the Singularity age?

Point 2 - Recursivity and CA

Point 3 - Can one CA (CA-1) build another CA (CA-2) for a purpose different from CA-1's, in a scenario where no information about CA-2's purpose exists in CA-1?

Best Regards, and congrats

--Sergio / ideavalley.com.br / Rio de Janeiro / Brasil

Re: CA
posted on 05/26/2009 9:57 PM by /:setAI

Point 1 - What will be the perspectives of CA in the Singularity age?


I think we will find sets of simple but strong universal quantum cellular automata to generate the raw data space in which to simulate the Multiverse and thus use quantum computers to sort for the patterns of software/objects/beings/worlds to download/reconstruct or to upload into-

it will be the end of design through additive processes of building/writing- instead we will sort for UTILITY from the set of all sets- searching for the best forms and processes that accomplish whatever arbitrary goal you are seeking- for instance instead of writing code for AI we will search for already intelligent code in the multiverse and download it into our networks and our reality- and CA are the best candidates to provide the recursive algorithms

Re: CA
posted on 05/26/2009 11:41 PM by Sergio123Cabral

Thanks for your answer.
Yes, it sounds very coherent, and it reminds me a little of The Matrix (I have read Ray Kurzweil's view of The Matrix, and I am not saying that you are supporting the Matrix possibility) in terms of the concept of downloading objects as modules that collaborate and interact with some task goals -- which can create problems and/or create solutions -- instead of writing code for known situations. But in terms of the Singularity age, I feel that we will be in a position to discover, or approximate, what I call the IQUAP, the Intelligence Quantum Particle. I am not sure yet whether CAs can help us start thinking about these as a minimalistic building-block structure for AI software, and we would try to analyze how Nature worked on this some time ago. I feel that only by manipulating the IQUAPs will we really have the power to reconfigure the environment and achieve infinite possibilities for reality. If we don't find the IQUAPs, we will only be extending or shrinking -- i.e., distorting -- reality, and this will be, to me, the first consequence of the Singularity age. If we accept the nearness of the Singularity age, and the possibility of discovering the IQUAPs, we will probably have the power to reconfigure the universe, i.e., to write or create programs that use the universe as a computer.
Could you help us with your comments about it?
Sergio Cabral - IdeaValley.com.br / Rio de Janeiro / Brasil

Re: CA
posted on 05/14/2002 2:55 PM by bclary@mac.com

I haven't read Wolfram's book yet. I pre-ordered it from Amazon, and hope it is shipped soon. I find the general thrust of his idea interesting, but will reserve substantive comment until I've read the whole thing.
One thing strikes me: we computer people are always trying to shoehorn ontology into a computerized box--oftentimes, we see computing as the most fundamental idea in the universe. Of course, we're not alone in this--I know biologists who believe that some form of natural selection is actually the fundamental truth, and I've known economists who saw markets everywhere in nature.
One thing I've learned over the years is that we know a lot less than we think--and I'd be surprised if Wolfram has found the Holy Grail yet.


C


Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/14/2002 6:37 PM by Doug@breiterman.com

The complexity of the software still belies the basic conversion to, and processing of, information on a binary basis -- the very simplicity of model found in Class 4. The decisions made by the creator of the software, reduced down to the most basic sequence of logical choices to be executed in the face of a defined pattern of 0s and 1s, are at root no more complex than the 110 automaton.

The suggestion that there has to be a higher level of complexity, whether computational or not, in order to produce the empirical complexity evident around us -- and that our consciousness of the patterns and apparent order around us is in any way a function of power and control over the outcome of the billions of decisions made from nanosecond to nanosecond -- seems to me to operate more in the realm of wishful thinking.

If one were to eliminate the concept or perception of time from the equation, and effectively profile the universe and our perception of it as a balance sheet rather than a cash-flow analysis, then in its standing state the ordering of information into a specific and peculiarly human perceptual vernacular -- of things with mass and relative states of location to other things -- is simply a matter of cumulative and communal beliefs or awarenesses, resulting from experiences as data inputs and from primal responses and conclusions to those inputs.

Isn't the simplicity of fight-or-flight as a behavioral choice simpler than the cellular automata described, and as a practical matter just as quickly arrived at in the face of the stimulus provoking it? Is there any reason to imbue that choice with any more significance, weight, or pretense of a greater level of processing or sophistication in the instant the decision is made and executed? I think, objectively, from simply a taking-of-inventory perspective, not.

If one were to accept Wolfram's thesis, then the questions raised by your comments seem much more to involve the self-imposed perceptual constructs of man's reasoning, resulting from acquiescence to the concept of time as an immutable feature of reality, and the dimensional limitations and linearity of perception that result.

If one were to suppose that the concepts of evolution and progression are merely derived from Monday-morning quarterbacking of whence man has come, then it is possible to concede that the next instant is as wildly unpredictable on all levels and dimensions as postulated; however, that does not detract from the extraordinary complexity and experiential joy of a sunny spring morning or a Mozart sonata.

Perhaps the complexity around us is actually a perceptual orthodoxy to shield and protect our relatively primitive understanding of both the universe at large and the operation of our brains within it. On some level, to concede the level of simplicity posited by Wolfram is to require a taking off of the blinders and assumptions implicit in the codified body of scientific knowledge, and to admit of a much broader field of inquiry, including the possibility that we are not perceptually or intellectually evolved enough to understand.

how exactly are you measuring complexity?
posted on 05/15/2002 3:58 AM by john@spamsystems.com

[Top]
[Mind·X]
[Reply to this post]

You consider people, insects, and Chopin preludes to be of a higher "order of complexity" than Wolfram's "streaks and intermingling triangles". How, pray tell, did you come to that conclusion? And what do you mean by "order of complexity", anyway?

To the best of my knowledge, no precise definition of "complexity" has become widely accepted. It seems to possess a porn-like I-know-it-when-I-see-it undefinability, which leads me to reflexively question the premises it is based on.

I read a funny little paper on attempts to quantify self-organization a while ago... Where did I put that... Ah:

http://www.santafe.edu/~shalizi/Self-organization/soup-done/

If quantifying and defining self-organization is as difficult as that paper suggests, and if complexity is as related to self-organization as my intuition says it is, then "people are of a higher order of complexity than Wolfram's streaks and intermingling triangles" seems like an untenable position.

-----
Out with the spam, in with the fnord!
If you want to email me, that is...

Re: how exactly are you measuring complexity?
posted on 07/15/2002 10:31 PM by RandallBouza@hotmail.com

To the best of my knowledge, no precise definition of "complexity" has become widely accepted. It seems to possess a porn-like I-know-it-when-I-see-it undefinability, which leads me to reflexively question the premises it is based on.

----------

Me too.

Emergence?
posted on 05/15/2002 3:55 PM by lauri.grohn@ske.pp.fi

Haven't received Wolfram's book yet, but I wonder:

1) Mr. Kurzweil didn't mention emergence in his review. What about Mr. Wolfram and emergence?

2) How does Wolfram define the relation between a model and "reality"?

http://personal.inet.fi/musiikki/lauri.grohn/

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/16/2002 6:29 PM by slsreeni@yahoo.com

Thanks, Mr. Kurzweil, for the illuminating perspective you offer on the book. I don't know about others, but for me your lucid reasoning throughout not only made for delightful reading but crystallised and gave voice to some of my own misgivings about the hype surrounding Wolfram's book. After decades of innuendo and awed whispers, the answer to Life, the Universe and Everything is now revealed, and it appears it is not 42 but 110!
Sorry, but for people working in fields like AI and nanotechnology, fields demanding intelligent design, that facile numeric solution is a fantasy we cannot afford to indulge. To people looking yearningly toward Wolfram for a paradigm shift that will somehow overthrow the tyranny of having to think, I say: time to stop playing the Game of Life and start living and thinking.

Re: Reflections on "ANKOS"
posted on 05/18/2002 11:25 AM by RhodesR@BottomLayer.com

What a terrific review. I, too, have pre-ordered from Amazon and am bitterly disappointed that I am not the first one on my block to get it. Your comments will help immensely when I finally get to crack the cover.

Thanks for pulling together so many threads, and for adding some of the antecedents that Mr. Wolfram neglected to mention (like the work of Edward Fredkin and his group). I suppose if your publisher limits you to 1,200 pages you have to cut something. ;-)

- Ross Rhodes
www.bottomlayer.com

Re: Reflections on "ANKOS"
posted on 05/18/2002 1:20 PM by tomaz@techemail.com

Indeed! Wolfram SHOULD have mentioned Edward Fredkin. But that is a question of who was first. As far as I know, Fredkin was into "digital physics" more than 10 years ago, at MIT.

Wolfram did it also, more than 10 (20?) years ago, in his firm, which is a kind of byproduct of his ideas.

I don't know. I know only that Raymond Kurzweil has been wrong twice: when he put the Singularity 20 years too far in the future, and when he thinks that "it is a little more complex than that". ;)

- Thomas

Re: Reflections on "ANKOS"
posted on 05/18/2002 2:59 PM by RhodesR@BottomLayer.com

Well, I think the ideas are the most interesting part, and I'm happy to see Wolfram join the fray. But on the matter of who thought of it first, Wolfram had this to say to Forbes magazine in November 2000: http://www.forbes.com/asap/2000/1127/162_4.html

<Quote:>

Wolfram later recalled this breakthrough when he told author Ed Regis in 1987, "It was sort of amusing. I was thinking about these models of mine for a month or so, and then I happened to have dinner with some people from MIT, from the Lab for Computer Science, and I was telling everybody about them...and somebody said, 'Oh yeah, those things have been studied a bit in computer science; they're called cellular automata.'

[MIT? Lab for Computer Science? Sound familiar?]
[Snip]

Soon he was at an informal conference on the physics of computation. It took place in January 1982 on a small Caribbean island privately owned by computer scientist/physicist Ed Fredkin, then an MIT faculty member.

[Snip]

What was on Wolfram's mind was something he'd seen at the conference: a computer programmed to become a cellular automata machine. The Life game was on that machine, as was every other recent attempt to generate two-dimensional automata. Wolfram could sit at the keyboard and put in various conditions, and the cells would grow across the screen. "I find it really remarkable that such simple things can make such complicated patterns," he told Computing magazine. The experience would set the trajectory of his life for the next 18 years.

<End quote.>

'Nuff said? This is hard to reconcile with Athena springing from Wolfram's forehead.

- Ross Rhodes

Re: Reflections on "ANKOS"
posted on 05/18/2002 3:54 PM by tomaz@techemail.com

Interesting, very interesting.

At least it will not be a Leibniz-Newton calculus case. Everything will be clear soon - and unimportant.

Not that it is not important now. Not that the matter is not equally important.

One was lucky to be the first to have this idea. Who cares who. The idea is too important - if correct. ;)

- Thomas


Shocking
posted on 05/21/2002 6:58 PM by lionlad@qwest.net

It's shocking to me that Fredkin and Toffoli were not mentioned at all in Wolfram's book, since their work in "digital physics" was ground-breaking.

I should note that I did my bachelor's thesis at MIT under Dr. Toffoli's supervision, and that my thesis topic specifically dealt with the use of cellular automata for modeling physical systems.

Thankfully, Kurzweil and others have the intellectual honesty to bring these facts to light.

Re: Shocking
posted on 05/21/2002 8:17 PM by tony@spammenot.spies.com

There is at least one mention of Fredkin in the book, on page 1027...

http://www.wolframscience.com/preview/nks_pages/?NKS1027.gif

Re: Shocking
posted on 05/21/2002 9:18 PM by lionlad@qwest.net

Yeah, I see Fredkin's name mentioned, but only in the context of discrete space. Hmmm. Fredkin definitely pioneered the idea of the universe as one giant cellular automaton, and wrote extensively on the topic before I graduated from MIT in 1992. So I think Wolfram is sort of a latecomer to the party; he's just the guy who is popularizing the idea for the current generation.

The words "intellectual dishonesty" come to mind. But I've seen the same tactic used by other academics, including Benoit Mandelbrot: mentioning the people who pioneered the work before you came onto the scene without really mentioning their contributions to the idea-space or the tool-space. From the synopses of the book I've read (I'm waiting to obtain my copy), Fredkin touched on everything that Wolfram has touched on, and he did it earlier, but all he gets credit for in Wolfram's book is the idea that space is discrete? And for that matter, he's mentioned in the same breath as Marvin Minsky!

Not so...
posted on 05/26/2002 6:18 AM by lauri.grohn@ske.pp.fi

Fredkin is mentioned in at least 7 different places. Wolfram thanks hundreds of named persons in his Preface. Using references might have added a few hundred more pages to the book ...

Re: Not so...
posted on 05/26/2002 9:50 AM by tomaz@techemail.com

Well, well ... I am very happy that it's so.

Fredkin was always my hero.

- Thomas

Re: Not so...
posted on 05/28/2002 12:06 PM by lionlad@qwest.net

Well, if Wolfram wanted people to take his work seriously in an academic sense, and if he didn't want to step on anyone's toes by claiming to have originated ideas he didn't originate, he could have been a helluva lot clearer. You claim citing references would have added hundreds of pages to the book. What, he couldn't have boiled it down to a selected bibliography? Even Carl Sagan (whom many are loath to call a serious scientist) put bibliographies in his books, even if they weren't the most comprehensive.

Wolfram seems to be lacking enough humility and intellectual honesty to admit that he didn't even originate these ideas. The sad fact, though, is that this puts him in the same category as most scientists and professors I've known in my life.

Re: Not so...
posted on 05/28/2002 12:30 PM by lauri.grohn@ske.pp.fi

>Wolfram seems to be lacking enough humility and intellectual honesty to admit that he didn't even originate these ideas.

Could you give some explicit examples of this?

Consciousness and intelligence overrated?
posted on 05/19/2002 10:50 AM by ts@meme.com.au

While I find Ray's review very useful as I wait for Wolfram's tome to wing its way across the Pacific, I guess that no matter how hard we try, we don't seem to be able to truly escape every possible lapse into anthropocentrism.

Having followed the rise of complex systems since I got a chance to play with Fredkin's self-replicating cellular automata in 1983, I have lately become convinced that the main problem we have in getting a handle on what we subjectively experience as consciousness is that it is actually far simpler than anybody wants it to be.

The seemingly great organisational complexity of human society is not a product of intelligent design, but rather of prolific streams of recursive grammar (as per Pinker) from which a vast network of not very smart businesses and other social institutions has emerged.

My idea of a proper measure of complexity is what might be left after the most efficient possible data compression (some cross between factor analysis and object-oriented instantiation) was applied. Given that considerably fewer than 10^10 words rattle around inside our heads in a human lifetime, I suspect that even the contribution of people as bright as Wolfram to the human project could fit easily on a CD. Could any of the major natural systems on this planet be represented with significantly less than that?
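The compression measure proposed above can be made concrete. A minimal sketch, using Python's zlib as an admittedly poor stand-in for "the most efficient possible data compression":

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form: a crude upper bound
    on the amount of information in the input."""
    return len(zlib.compress(data, 9))

random.seed(0)
ordered = b"AB" * 5000                                      # highly regular
noise = bytes(random.randrange(256) for _ in range(10000))  # incompressible

print(compressed_size(ordered))  # tiny: the pattern compresses away
print(compressed_size(noise))    # roughly 10000: randomness does not
```

Note that by this yardstick pure noise scores highest of all, which is one reason raw compression length fails to capture the intuitive "complexity" this thread is debating.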

Ray's discussion of the interplay between digital and analogue (discrete and continuous) models was particularly useful, helping with the struggle not to be seduced by how much easier it appears to be to account for an ultimately discrete structure, even in the face of knowledge of how discrete phenomena emerge readily within a continuum.

And his observations about the connection between the world of patterns and the pattern recognition strengths of neural networks should be essential preliminary reading for anybody who wants to play in these games.

It should be interesting to see how A New Kind of Science gets integrated with the progress others have been making with complex systems while Wolfram has been off writing it.

Emergence
posted on 05/20/2002 12:58 PM by david@thecharboneaus.net

The complaint about the inability of cellular automata to generate complex, ordered structures like animals, cars, etc. ignores the phenomenon of emergence. Ray's complaint amounts to: Wolfram doesn't demonstrate that rule 110 gives rise to a spontaneously emergent syntax. I've read to chapter 5 and haven't seen this either, but that doesn't mean it isn't there. Assuming that rule 110 gives rise to such a syntax, especially one as complex as rule 110 itself, it isn't hard to imagine this process bootstrapping itself up to a reality as rich and full as ours. The problem is that Ray got lost in the background noise - if I am right that rule 110 does give rise to an emergent syntax as complex as the originating rule itself. And if it doesn't, well... that doesn't prove that no cellular automaton does. Either way, emergence in such a system answers Ray's complaint quite nicely, I think.

Re: Emergence
posted on 05/20/2002 1:51 PM by tomaz@techemail.com

It is only a question of HOW complex the rule must be. Stephen Wolfram thinks: not that much.

I agree. This world COULD be much more random looking. But it isn't.

- Thomas

Re: Emergence
posted on 05/20/2002 4:13 PM by grantc4@hotmail.com

I've only read the first chapter, but the first thing that popped up on my internal screen was that in the first few examples, the first line of each looks like the binary numbers from 1 to 7 if you give black the value 0 and white the value 1. So all of his examples in the first chapter start with a line of binary 1 to 7, and the variation that changes the pattern all occurs on line 2.

It sort of reminds me of DNA, which also has two chemicals on a helical ribbon in an order that could also take on the arbitrary values of 1 and 0, with a + or - to account for the orientation of the chemicals within the helix. It is these two chemicals and their orientation that create the information for all of the complexity of life on earth.
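The binary-numbering observation can be checked directly. A minimal sketch (mine, not from the book): an elementary CA rule number is just eight output bits, one per three-cell neighborhood, and the eight neighborhoods themselves enumerate the binary numbers 0 to 7:

```python
def rule_table(rule: int) -> dict:
    """Decode an elementary CA rule number into its lookup table.
    The neighborhood (left, center, right), read as a 3-bit number
    0..7, selects one bit of the rule number as the output cell."""
    return {(n >> 2 & 1, n >> 1 & 1, n & 1): (rule >> n) & 1 for n in range(8)}

# The much-discussed rule 110, spelled out neighborhood by neighborhood:
for neighborhood, out in sorted(rule_table(110).items(), reverse=True):
    print(neighborhood, "->", out)
```

Reading the outputs for neighborhoods 7 down to 0 gives 01101110 in binary, which is 110 in decimal; that is the whole content of Wolfram's rule-numbering scheme.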

Re: Emergence
posted on 05/20/2002 5:24 PM by tomaz@techemail.com

Exactly! And how diverse is the life on Earth.

Every one of Wolfram's rules is in fact a number. Some numbers - not very big - produce shocking patterns.

What if Wolfram IS wrong, and you would need a million-bit rule to produce the Universe?

Nothing changes; he's still almost right.

He just can't be really wrong.

- Thomas

Re: Emergence
posted on 05/21/2002 10:08 AM by grantc4@hotmail.com

I goofed when I said "1 to 7." I should have said 0 to 7.

Re: Emergence
posted on 05/21/2002 1:38 PM by david@thecharboneaus.net

Well, I made it to chapter 6. Take a look at the illustration on page 229, and read the discussion of localized structures and their interactions in the evolution of rule 110 from random initial conditions. This is a spontaneous emergent syntax - structures (aka symbols) spontaneously arise which have rules (aka syntax) to their interactions. Did Ray read the book?
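For readers still waiting for the book, the behavior described here is easy to reproduce. A minimal sketch of rule 110 run from a random initial row (periodic boundaries; cell counts and step counts are arbitrary choices of mine):

```python
import random

RULE = 110  # Wolfram's elementary rule number

def step(cells):
    """One synchronous update of an elementary CA with wrap-around
    boundaries: each (left, center, right) neighborhood, read as a
    3-bit number, indexes a bit of RULE."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(1)
row = [random.randint(0, 1) for _ in range(64)]
for _ in range(24):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Run for a few hundred steps on a wider row and the localized structures the poster mentions show up as persistent diagonal streaks drifting through a periodic background, occasionally colliding.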

Re: Emergence
posted on 06/13/2002 8:08 AM by zbschoening@attbi.com

The key difference, which Mr. Kurzweil did not explicitly state, is that an evolutionary process that uses a genetic algorithm (GA) "intelligently" traverses a search space for an optimal solution; the individuals of one generation are evaluated for fitness and are proportionally represented in the following generation based on those values. Consequently, a GA process can have a direction, goal, endpoint, etc., and can produce increasingly complex solutions.

A CA, on the other hand, simply uses a set of fixed rules to generate each generation from the preceding one. A CA system may produce interesting patterns or even emergent behavior, but it is not the result of an optimization search. Subsequent generations are "improved" only at random, not by an optimization process.

Or maybe I'm completely wrong; I'm not a mathematician or computer scientist.
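The distinction drawn above can be sketched in code. This is a toy illustration of mine, not anything from the book: truncation selection stands in for the fitness-proportional scheme described, and the fitness function (counting 1-bits) is arbitrary.

```python
import random

random.seed(0)

def ga_generation(pop):
    """Evolutionary step: score by fitness (count of 1-bits), keep the
    better half, and emit mutated offspring. The search is biased
    toward better solutions."""
    parents = sorted(pop, key=sum, reverse=True)[: len(pop) // 2]
    return [[bit ^ (random.random() < 0.05) for bit in p]  # 5% bit-flip mutation
            for p in parents for _ in range(2)]

def ca_generation(row, rule=110):
    """CA step: a fixed local rule, no fitness. The next generation
    depends only on the previous one, never on any objective."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
for _ in range(40):
    pop = ga_generation(pop)
print(max(sum(ind) for ind in pop))  # climbs toward 20; a CA has no such arrow
```

The GA's best fitness ratchets upward because the evaluation feeds back into who reproduces; the CA update consults no objective at all, which is exactly the poster's point.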

Re: Emergence
posted on 06/13/2002 11:13 AM by dcharbon@us.ibm.com

True, there is no optimizing direction (at the base) to the development of these programs. This is why Wolfram states that he has come to view natural selection as not the primary force in evolution. However, it is important to observe that a fitness constraint could, and indeed should and does, emerge in the evolution of a CA. Look at rule 110: structures emerge, and structures that lack "fitness" decay rapidly and do not persist. Further, the structures interact. As the system progresses and grows, these interactions could, and should, give rise to yet larger structures with their own rules for interaction and emergent fitness constraints for propagation of the pattern... Natural selection, the imposition of selective constraints on the growth and propagation of particular patterns, is emergent in the system.

Re: Emergence
posted on 06/13/2002 12:50 PM by tomaz@techemail.com

I see no problem if evolution gets a broader explanation in Wolfram's theory.

It makes (possible) perfect sense to me.

I expected that evolution would somehow become the cornerstone of the Theory of Everything.

But if it has to be sacrificed for the sake of CAs - fine with me.

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/20/2002 11:49 PM by docdtv@deeteevee.com

I always enjoy reading Ray Kurzweil because I get to read important numbers (or at least educated guesses) people seem to have scrupulously avoided offering elsewhere, probably out of fear of error.

The numbers that fascinate me now are the "compression ratio" of the human genome, which is given as 800/23, or about 35, and the fraction of the information dedicated to structuring the brain (half). But on to my actual point.

Now, while 23 megabytes of C-G-A-T sequencing is still pretty big, I wonder how fair it is to say that that is the amount of information which starts a human on his trajectory from NOTHING to a complicated multi-cellular being, versus merely from a single-celled ZYGOTE to that final place.

What I am getting at is a critique of the genome as human sine qua non which somewhat parallels Searle's critique of AI. Specifically, what good are those 23 megabytes without the CONTEXT of a human zygote? (i.e. that fraction which is NOT DNA (about 85% of the non-water mass?))

I can hear people offering the opinion that the 85% dry mass which is "merely" fats, sugar, proteins and "a bit" of structural organization really doesn't amount to ALL that much compared to the IMMENSE negentropy of the DNA sequence.
Maybe that is so - but maybe not.

Even if I can DESCRIBE the extra-nuclear part of the zygote in a compact manner - does that mean I have accounted for the vast number of designs, disregarded via natural selection, that DON'T WORK? How many jillion experiments failed? Sure, you can describe a brick-house as being made of a set of bricks at a certain set of relative locations - but you have assumed someone knows the brick and the technology around making and joining it, as distinguished from chopped liver.

I am not a biologist, but even I know that while some nuclear material is transplantable between species (one might say viruses hold important patents in the area!), that doesn't mean there isn't a complicated interplay between the "designs" stored in the nucleic acids and the "mere" COMPATIBLE factory in the cytoplasm which fabricates the incredible structure of subsequent cells - including the differentiated tissues and organs of massive multi-cellular organisms like Homo sapiens.

For that matter, why is the very fact that Nature uses C, G, T, and A "monomers" reckoned as constituting basically "zero" information? Think of all the millions upon millions of molecules of that size - let alone of ever-larger sizes - which are NOT used to store the information which the usual sequence of bases do. Let alone the jillions of four-tuple sets (or 148-tuple sets) which fail to pass muster.

Sure, the "essence" of the Pentium microprocessor is in the incredibly complicated circuit designs instantiated as masks - but they would be useless without the billion-dollar fabs which are FAR more complicated than Egyptian stone pyramids. The CHIP FAB ALSO represents a GREAT DEAL of the complexity of the brains in your PC - even if that complexity is *common* to the humble 7400 quad-gate IC and the multi-megatransistor Pentium.

Goodness knows I have no way to estimate the "information" in the structure of the cell sans nucleic acids - but what good are those 23 megabytes unless IMPLEMENTED as monomers which can INTERACT with an incredibly complicated machine like the living cell?

Sorry if this was too far off-topic, but I guess I think the "central dogma" of genetic science may get a bit too much respect.

Ron Feigenblatt

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 10:17 AM by grantc4@hotmail.com

If you look at the cell as a computer and the DNA as the software, you can see a pretty clear relationship. The DNA holds the basic program for creating an organism and running it from conception to death, but without the cell there would be no way to carry out that program.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 11:28 AM by mattb@alumni.stanford.org

I think DNA is more like data memory, which is used as part of an instruction set where physics is the machine.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 7:05 PM by mizell_ken@hotmail.com

CMOS... and yes i see your point and agree.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 1:49 PM by tomaz@techemail.com

The amount of information inside the Pentium's structure has _nothing_ to do with the factory where it was manufactured.

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/23/2002 7:16 PM by apawi@aol.com

Re your comment on DNA: You are correct--you cannot explain an organism solely by its DNA. Except for the simplest organisms, the outside environment is also extremely important in determining how it behaves and how the species changes over time.
I have read the first five chapters of Wolfram's book, and have skipped around some of the notes in the back. I'm reserving final judgement, but it looks to me so far that, at least in terms of biology, Wolfram has gotten part of the picture right but has underestimated the importance of natural selection. I am an anthropologist, but my undergraduate and master's work was in biology.
I became interested in anthropology after spending two years in swamps studying the social structures and behavior of Ardea herodias, the Great Blue Heron.
Looking at Wolfram's CA diagrams, and thinking of my old pals the herons, I can see that he's right in a sense--in explaining the heron's plumage, for example. The long feathers on his breast apparently have little or no survival value--and thus are probably not the product of natural selection.
But I cannot believe that something as simple as a CA program could be solely responsible for their behavior. Herons are clever, highly adaptable birds; their range has expanded to include almost the entire US because they learn quickly and can adapt to just about any water source that contains fish, amphibians and crustaceans. I have personally watched individual birds successfully hone their skills catching different prey over a period of months. The most successful become highly efficient carnivores over their 20-year life span. My point is that one cannot explain the heron's behavior without examining the pressures of his environment. Each generation of heron becomes ever-so-slightly more adept at finding food, and at replacing other shorebirds in various ecosystems. It is the pressure of selection that explains this.
Similarly, herons are loners, and when it is time for them to mate and nest they respond with highly ritualistic behaviors--I don't think they could tolerate being so close to other individuals unless they were completely driven. Their behavior then shows much less variation--but even so, it's much more complicated than what I think you could get by running a CA program... and it works the way it works because of natural selection.
SW

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 01/07/2004 3:17 PM by tombronson


Twenty trades: bricklayers, rockers, roofers, siders, formsetters, painters, tile-setters, glaziers, ... , trade N.

Twenty amino acid types: glycine, alanine, valine, ... , glutamine.

All amino acids have in common the amine + carboxylate groups on carbon 1. The more electron orbitals in harmonic equilibrium with the electron orbitals of adjacent atoms, the deeper the energy well of the Van der Waals bond(s).

Where any number of atoms form a continuously adjacent set, and there is at least one electron orbital in each atom having an identical frequency of resonance at ground, those orbitals mesh like gears. Energy, and hence signal, may be transferred along that chain of gears.

I am not a biologist, but even I know that a very high degree of order is required to produce the functionality of the cellular nucleus. On construction sites I observe a highly ordered group of functional modules (module = tradesman = 'amino acid') that indirectly reference a reproducible document (document = DNA) using off-prints ('off-prints' = mRNA) of the comprehensive plan set to provide disposable instructions definitive of the elemental substructures (structures = proteins) immediately required.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 2:58 PM by fkampas@msn.com

It is interesting to note that a network approach to the structure of space-time already exists in loop quantum gravity.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 3:30 PM by joe@joe.to

i could kick stephen wolfram's ass. who would be smarter then?

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 6:27 PM by geurgi_ivanov@hotmail.com

Word (the f--k) up! I got your back!
-Geurgi

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 6:38 PM by geurgi_ivanov@hotmail.com

I have a couple of questions. I have not yet read the book, but I am planning to. Hopefully someone can help with some answers.

First, how does Wolfram's cellular automata work compare to all the game theory work that has been done in the sciences?

Second, falsifiability is the demarcation of science. Does the book actually put forward falsifiable theories that can be compared with measured data? Someone earlier said that it can't be wrong...

Let me know if you have any answers, I can't wait to get the book and find out more.

Thanks,
Geurgi

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 10:20 PM by gottliebpet@cs.com

All this discussion of CA as another (possibly most important) model in computer science is fine. However, a replacement for all the fundamental rules of physics will have to be held to a higher standard. The discussions of Wolfram's book mention that he has alluded to being able to better describe turbulent fluid motion and to have a more correct version of the second law of thermodynamics. It is not easy to devise experiments for the latter, but the former has been subject to much experimental investigation. Until Dr. Wolfram can give an improved explanation of some of those experiments, any physicist must regard his speculations as having no more connection to reality than does cold fusion. And that's what physics is about: describing the physical world, not beautiful mathematical theories.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/21/2002 11:13 PM by grantc4@hotmail.com

The way I understand the process, all descriptions of the physical world that come under the label of physics begin with a theory. The theory is then tested until enough people either agree with it or disagree with it to pronounce it acceptable or discard it. Are you now proposing that we discard the theory portion of the process?

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/22/2002 4:46 PM by geurgi_ivanov@hotmail.com

I think in general, observation precedes theory, and in order for the theory to be considered physical, it has to make observable predictions.

What gottliebpet seems to be saying is that Wolfram's theory doesn't explain presently observed phenomena, not that we should abolish theory.

Also, there is no democracy in science. A theory is successful if its predictions are closer to measurements than other theories'. The idea of democratic science you have is related to reproducibility. Cold fusion ultimately failed as a theory because it wasn't reproducible.

A CA map is not the terrain
posted on 05/22/2002 4:59 AM by lauri.grohn@ske.pp.fi

Hi,

I wonder what Wolfram's CA simulations can prove in the first place.

Proving that a map has some properties does not prove that the terrain has them.

Taking just another example:

One can't prove that some system is "chaotic" by proving that its model is chaotic.

Lauri Gröhn
Metacomposer
Brussels

Re: A CA map is not the terrain
posted on 05/22/2002 1:31 PM by tomaz@techemail.com

It's always this way. Many ... most ... don't like it if it's new.

Therefore ... a lot of silly arguments. ;)

- Thomas

Re: A CA map is not the terrain
posted on 05/22/2002 4:49 PM by geurgi_ivanov@hotmail.com

On the other hand, most new ideas are wrong;|

Re: A CA map is not the terrain
posted on 05/22/2002 5:12 PM by tomaz@techemail.com

Most, but please do not react as if they were all wrong. Consider them with much care. Wolfram will not be a fool even if he is wrong; in that case, a possible solution will have been eliminated.

Bell was not wrong about his aeroplane. He just didn't manage to fly. Those, saying it's a silly idea to fly - were wrong. Silly even.

- Thomas

Re: A CA map is not the terrain
posted on 05/23/2002 6:08 AM by thp@studiooctopussy.com

So very true Tomaz

Re: A CA map is not the terrain
posted on 05/23/2002 6:12 PM by geurgi_ivanov@hotmail.com

I think you misunderstand. The way we can tell good ideas from bad is through scrutiny. Not only is it natural to try to disprove a theory, it is essential - certainly not silly.

I don't know too much about Bell and his bellocopter; maybe that's because it didn't work.

No one believed flight was impossible unless they believed birds didn't exist. They just believed his design wouldn't work, and ultimately they were right, not silly. The people that believed it would work miscalculated and were not all that silly either. But the people who claimed that criticism was silly... that it was just a fear of new ideas... oh boy... silly!

What about refusing to submit to peer review? Same type of silliness.

Geurgi

Re: A CA map is not the terrain
posted on 05/24/2002 3:47 PM by tomaz@techemail.com

> No one believed flight was impossible unless they believed birds didn't exist.

It was just not the case. In fact, they thought either that God didn't intend people to fly,

or that a machine heavier than air would never fly. (Birds are NOT machines, but God's creatures or living animals - that was the "common knowledge".)

> But the people who claimed that criticism was silly... that it was just a fear of new ideas... oh boy... silly!

I've said - if it is of any use. If. That, I'll see. The criticism which just says NO - is the silly one.

> What about refusing to submit to peer review

They can peer review now, if they want.

- Thomas

I Ching: determinism from randomness
posted on 05/22/2002 2:13 PM by ca314159@bestweb.net

randomness from determinism:
"...so we are left with 64 rule types."

determinism from randomness:
The 64 hexagrams of the I Ching:
http://members.ozemail.com.au/~ddiamond/table.html

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/23/2002 11:37 AM by serambin@ailink.org

If you equate complexity with order and loosely define them as 'information that fits a purpose', then a telephone book would appear highly complex, at least under the Effective Complexity concept. Define Effective Complexity as the length of the schema required to describe a string of symbols and to allow one to predict or approximate the upcoming symbol(s). A schema, for this purpose, is an incomplete or compressed representation of an object or event - like 2^4 as opposed to 20000. When an object or event is uncompressed or uncompressible, it can be called high in Algorithmic Information Content (AIC), like a random list of integers.


For example, arriving at the page of a specific name in a phone book with one million listings, without the 'order' of an alphabetical listing, would require a page-by-page search. Moreover, every name on every page would have to be examined to determine whether you were on the right page. With a binary search in an alphabetically ordered list, it would require a maximum of 20 tries to arrive at the correct page.
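The arithmetic here checks out: 2^20 = 1,048,576 > 1,000,000, so a binary search needs at most 20 probes. A small sketch, with synthetic zero-padded "names" standing in for a real phone book:

```python
import math

def binary_search_probes(sorted_names, target):
    """Count the probes a binary search makes to locate target."""
    lo, hi, probes = 0, len(sorted_names) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if sorted_names[mid] == target:
            return probes
        elif sorted_names[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return probes  # target absent; probes still bounds the search cost

# One million synthetic "names"; zero-padding keeps them in sorted order.
book = ["name%07d" % i for i in range(1_000_000)]
worst = max(binary_search_probes(book, t)
            for t in (book[0], book[-1], book[500_000]))
print(worst, "probes; log2 bound:", math.ceil(math.log2(len(book))))
```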

In fact, a telephone book comes close to the maximum in Effective Complexity. Effective Complexity reaches its maximum when compressibility and randomness combine - that is, at the balance between pure order (only a very short schema is required: next cell = Black) and maximum AIC (no schema or compression is possible: a truly random list of integers, where the only way to express it is by writing each integer out). The phone book has relatively high AIC because each name must be presented. But the purpose of the information can be achieved by following a schema (using a tree approach). This schema is fairly complex but is much shorter than going through the list in a sequential and exhaustive way.

Stan Rambin

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/23/2002 12:39 PM by tomaz@techemail.com

After all, if the _standard_ mathematics is "under everything" - Wolfram translated it into Mathematica long ago.

The compiler he used translated it into a sequence of Intel instructions. It could be simplified further.

Now, he is only making the whole situation official.

- Thomas

Is it worth reading?
posted on 05/23/2002 2:19 PM by pemiku@nic.fi

Is it worth reading, or is it bad? Will it go way over the head of someone who has studied advanced mathematics and advanced physics?

Listen (midi) to Wolfram's CA Rule 110
posted on 05/23/2002 2:56 PM by lauri.grohn@ske.pp.fi

Please, open your ears to Stephen Wolfram's
Cellular Automata Rule 110:

http://personal.eunet.fi/pp/ske/musitives/2002.html

Rule adapted from Stephen Wolfram: A New Kind of Science. The music has been automatically generated from the picture shown, in 5 seconds.

Re: Listen (midi) to Wolfram's CA Rule 110
posted on 05/23/2002 4:04 PM by grantc4@hotmail.com

It sounds to me like music for alien ears. To my ears it's all tone and no tune. A jazz musician, though, might be able to take it and run with it. And although the pictures are visually different, the musical pieces don't sound much different to me.

Re: Listen (midi) to Wolfram's CA Rule 110
posted on 05/23/2002 5:08 PM by lauri.grohn@ske.pp.fi

OK. Classical minimalism, Riley, Reich, Glass etc. is different. Many don't like it. It takes time to get used to it...

Re: Listen (midi) to Wolfram's CA Rule 110
posted on 05/23/2002 10:32 PM by grantc4@hotmail.com

>. It takes time to get used to it...

What's the incentive? It takes time to get used to living in prison, too. But, again, what's the incentive? Answer: you can't get out! In the world of music, there are too many alternatives to spend a lot of time getting used to something that has no obvious redeeming features.

Re: Listen (midi) to Wolfram's CA Rule 110
posted on 05/24/2002 1:21 AM by lauri.grohn@ske.pp.fi

Hi,

I don't disagree with you. But how many of those roughly 50 musitives have you listened to, and how many of them more than once? What kind of music do you normally listen to? How much so-called contemporary classical music?

In the musitives paradigm, composing means finding pictures that generate interesting music and then trying to make the piece even better by changing the parameters. The concept is still under development. Getting, e.g., music lasting 2 hours in 5 seconds automatically has many uses:

http://personal.eunet.fi/pp/ske/midi/gallery.html

Yours,

LG
Brussels

PS.

Composing all 256 of Wolfram's RULES would be an interesting experience...

Re: Listen (midi) to Wolfram's CA Rule 110
posted on 05/27/2002 5:30 PM by jonny@joyofvb.com

That music is remarkable. It's discordant but obviously patterned. Have you considered using an evolutionary algorithm to search for CAs that play melodies?

Imagine if you found a familiar tune - Beethoven, perhaps. That would raise some eyebrows.

Jonny

Universal melody emulator etc.
posted on 05/28/2002 8:02 AM by lauri.grohn@ske.pp.fi

>Have you considered using an evolutionary algorithm to search for CAs that play melodies?

I am going to look at what I can do with CAs, but first I am going to read Wolfram's interesting book; I am on page 656 now...

>Imagine if you found a familiar tune, Bethoven perhaps. That would raise some eyebrows.

The software is open to "virtual scales", making it possible to use any melodies "behind the stage". For example, I have used the famous Japanese song Sakura, Sakura a few times. You only need these 12 numbers in the Java source code:

{0,0,0,3,3,5,5,5,6,6,10,10}, //#5 sakura

If someone would program a user interface, it would be easy to add a "universal seed melody option" to emulate any melody in some approximation, outside copyright restrictions. But on the other hand, any block of music could also be emulated in some approximation just by "finding" the right picture. Or painting it...

See the source if you are more interested:

http://personal.eunet.fi/pp/ske/musitives/synestesia.html
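As a rough illustration of the "virtual scale" idea (my own guess at the mapping, not the actual synestesia.java logic): the 12-entry array picks a scale degree for each image value, and larger values shift the octave.

```python
# Hypothetical sketch: map cell/pixel values onto a 12-entry
# "virtual scale" (the Sakura array quoted in the post).
SAKURA = [0, 0, 0, 3, 3, 5, 5, 5, 6, 6, 10, 10]  # #5 sakura
BASE_NOTE = 60  # MIDI middle C

def value_to_midi(value, scale=SAKURA, base=BASE_NOTE):
    """Illustrative guess: the value modulo 12 picks a scale degree,
    the quotient shifts the octave (wrapped into a 4-octave range)."""
    degree = value % len(scale)
    octave = (value // len(scale)) % 4
    return base + 12 * octave + scale[degree]

melody = [value_to_midi(v) for v in (0, 3, 5, 8, 13, 25)]
print(melody)
```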

Re: Listen (midi) to Wolfram's CA Rule 110
posted on 05/28/2002 11:34 AM by grantc4@hotmail.com

Watching Willie Nelson and Friends on TV last night made me realize that what seems to be missing is a good rhythm section. Program your midi to back up the tones with some drums and guitars. Timing is everything.

Re: Listen (midi) to Wolfram's CA Rule 110
posted on 05/28/2002 12:11 PM by lauri.grohn@ske.pp.fi

You are quite right. Minimalism and popular music differ with respect to their rhythmic bases.

I would be happy to program a software version generating pop music, but I still don't have any idea how to generate a rhythm from the picture used for generation. CAs might give some new ideas.

LG

The best "rhythms" so far can be found in the piece Trixie here:

http://personal.eunet.fi/pp/ske/musitives/2002.html

Digital "ether hypothesis"?
posted on 05/24/2002 5:14 AM by lauri.grohn@ske.pp.fi

Kurzweil writes:

" Fans of the game of "Life" (a popular game based on cellular automata) will recognize the common phenomenon of gliders, and the diversity of patterns that can move smoothly through a cellular automaton network. The speed of light, then, is the result of the clock speed of the celestial computer since gliders can only advance one cell per cycle."

Isn't this a kind of digital "ether hypothesis"?
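The glider claim in the quoted passage is easy to check directly: in Conway's Life the standard glider reproduces itself shifted one cell diagonally every four generations, so its speed is bounded by the update rate. A minimal sparse-set simulation:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life over a sparse set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with exactly 3 neighbors, or 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # same shape, moved (1, 1)
```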

Re: Digital "ether hypothesis"?
posted on 05/24/2002 5:48 PM by Citizen Blue

I think Mr. Wolfram may be dreaming in digital. Quanta, so I've heard, have been known to have wave properties. Is the Universe a computer? I see too much randomness, and whoever heard of chaos being deterministic? If something goes from point B to point A, couldn't we just say that in our expanding universe that was the only way it could have happened? If so, then from the Big Bang to the Big Crunch and back again, we may be fated to live this life over and over, unbeknownst to us.
If WE are a type of machine, then maybe it could be possible for there to be larger, more bizarre types of machines running on rules that aren't as systematic as we once thought, leading up to the final mother of all machines: the Universe.

Re: Digital "ether hypothesis"?
posted on 05/24/2002 5:52 PM by Citizen Blue

This wasn't supposed to be part of Re: Digital "ether hypothesis"? I made a mistake.

Wolfram's Idea
posted on 05/24/2002 5:55 PM by Citizen Blue

Citizen Blue.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/24/2002 5:37 PM by wwalker3@austin.rr.com

First of all, kudos to Mr. Kurzweil for the excellent review of the book. I've been a Kurzweil fan for many years, ever since his sampling keyboard days.

I haven't received my copy of "A New Kind of Science" yet, but from what I can tell, Mr. Wolfram is entirely dismissing a field of mathematics that's both very old, and a superset of cellular automata (CA) -- namely, differential equations (DEs).

The DE dh(x)/dt = F(h(x)) is a one-dimensional, continuous-valued, spatially-continuous cellular automaton with the next-state function F(h). As long as F(h) is "local", the next state at any point in x depends only on the previous state at that point and the previous state at neighboring points.

Just as in CAs, DEs have to be simulated to find the value at some point in the future -- you can't magically look ahead to find the end result. And, just as in CAs, they can start with simple initial conditions and evolve to great apparent complexity that defies conventional analysis.
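The DE-as-CA correspondence can be made concrete by discretizing a simple local PDE, the 1-D heat equation dh/dt = d²h/dx², with explicit Euler steps on a ring: each cell's next value depends only on itself and its two neighbors, exactly the locality of a (continuous-valued) CA. A generic sketch, not taken from the post:

```python
def heat_step(h, alpha=0.2):
    """One explicit-Euler step of the 1-D heat equation on a ring.

    The update uses only a cell and its two neighbors, so it is a
    continuous-valued, nearest-neighbor 'cellular automaton'."""
    n = len(h)
    return [h[i] + alpha * (h[(i - 1) % n] - 2.0 * h[i] + h[(i + 1) % n])
            for i in range(n)]

# A sharp spike of heat spreads out over the ring; total heat is conserved.
h = [0.0] * 16
h[8] = 1.0
for _ in range(50):
    h = heat_step(h)
print(round(sum(h), 9), max(h))
```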

The Swiss mathematician Leonhard Euler (1707 - 1783) wrote a set of DEs describing ideal fluids that has defied conventional analysis for more than 200 years. The Euler equations and the Navier-Stokes equations (discovered in 1823) describe time-evolving systems vastly more complex than CA 110. And, just like in CA 110, they can start with simple (smooth) initial conditions and rapidly evolve into unpredictable, though predetermined, complexity.

The study of DEs has spawned a huge number of numerical techniques, due to the inability of conventional techniques to solve them satisfactorily. Most of these numerical techniques amount to analog CAs on discretized spatial and temporal grids. They've been studied using computers since computers were even remotely fast enough (see Fermi, Pasta, and Ulam's work from 1955).

My point is this: since most current theories of physics boil down to a set of DEs that are being studied using analog CAs, and have been for decades, Kurzweil is really just restating the obvious.

Indeed, one can't help but think that he chose discrete CAs instead of analog merely because in the discrete case, you can easily enumerate the entire problem domain, divide it into four parts, and thereby give yourself the illusion that you've made some meaningful classification. Analog CAs resist such simple treatment, since the problem domain is infinite.

Mr. Wolfram's "new kind of science" is exactly like the old kind of science, just digital instead of analog. It's easy to say that it should be possible to describe the universe with a discrete CA instead of an analog one, but it's infinitely more difficult to actually write down one that works. I suspect that's why he's left that little detail as an exercise for the reader.

Wade Walker

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/24/2002 5:45 PM by wwalker3@austin.rr.com

Oops, correction: I meant to say "Wolfram is restating the obvious", not "Kurzweil is restating the obvious". Apologies to Mr. Kurzweil, who I have great respect for.

Wade Walker

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/28/2002 11:16 PM by ftldevice@yahoo.com

As far as the observation that you can't use a CA to "jump ahead" since the universe only computes so fast, I disagree, since gravity would compress the "nodes" which Wolfram says make up space itself.

A gravitational field alters the geometry of space/time. Space itself becomes compressed. In the mobile automaton/CA model proposed by Wolfram, it occurs to me that if space itself is compressed (which I guess in Wolfram's scenario would mean the 'nodes' are closer together), then more computation can occur in that area.

Interestingly, this node concept may actually agree with relativity since even though more computation occurs in that area, the information about that "faster" computation can't travel outside this area at the accelerated rate since it must eventually hit "normally-spaced" nodes.

So it should be possible to skip-ahead in a high-gravity environment. But communicating that information outside the area would be a problem.

Hmmm... Could CA's actually predict event horizons?

Of course, you could easily send a yes/no answer by having it compute at an accelerated rate the development of a CA and answer yes/no about some question you give it.

-Chris Elfers

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/29/2002 4:55 PM by tlundberg@wausaufs.com

Thanks for pointing out that intimate connection between discrete cellular automata and continuous differential equations. I knew it was there... somewhere.

As Kurzweil also pointed out:

"We can easily establish an existence proof that a digital model of physics is feasible, in that continuous equations can always be expressed to any desired level of accuracy in the form of discrete transformations on discrete changes in value. That is, after all, the basis for the fundamental theorem of calculus."
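The quoted point is the ordinary Riemann-sum picture: a continuous quantity is approximated to any accuracy by discrete transformations on discrete changes in value. A generic check of my own (the integral of 2x over [0, 1], which is exactly 1):

```python
def riemann(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] with n equal steps."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: 2.0 * x  # antiderivative x**2, so the exact integral on [0, 1] is 1
errors = [abs(riemann(f, 0.0, 1.0, n) - 1.0) for n in (10, 100, 1000)]
print(errors)  # the error shrinks as the discretization is refined
```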

This whole chicken/egg argument about which is more fundamental - analog or digital operations, discrete or continuous functions - is one of those philosophical questions that I don't think science can answer, because the problem isn't well defined. It's impossible to imagine one without the other. Is space really "made" of pre-existing points? Or is a point an abstract limit representing a possible localization in space? One which may not even apply to our universe, if string theory is correct and there is a lower limit to the whole notion of distance.

I think a more interesting question is whether the universe is, in fact, computable. Can the linguistic constructs of science, whether CA algorithms or differential equations or something else, really simulate nature perfectly?

David Deutsch made some very interesting observations about this in "The Fabric of Reality" (another book full of very BIG ideas).
He claims that the universe incorporates a kind of Strong Turing Principle that implies that any environment within it could be perfectly simulated with an appropriate program running on a physically realizable computer. With the proper man/machine interface, no scientist would be able to perform any experiment in such a virtual reality that could distinguish between the reality and the illusion.

Such a program would have to incorporate knowledge of the laws of physics, biology, etc., in order to perfectly simulate the behavior of real objects, creatures, and, yes, even real people. But this could be done in principle.

Since a Turing machine can be embodied by a CA, of course a CA could perform such a computation and thereby explain the universe. But so could a variety of other mechanisms. What matters is not what parts the computer is made from, but that the universe allows itself to be modeled in this way. It has a kind of fractal self-similarity in which the part can mirror/model the whole and this is a fundamental property of the universe.

Incidentally, this also challenges the notion that the only way to simulate certain processes and "jump ahead to see what happens" is to "run the actual program" - i.e., watch what the universe actually does. According to the Strong Turing Principle, it should be possible to simulate any such process in our universal computer and watch what happens there.

Tony Lundberg

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/04/2002 2:15 PM by malatmals@msn.com

He does show interesting parallels between DE's and CA's at the end of Chapter 4.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/29/2002 3:40 AM by paul_sagi@astro.com.my

I sent the message below to Stephen Wolfram's site:

stephen,

Seems to me the appearance of unexpected patterns (patterns not following the rules) after many permutations is a manifestation of Gödel's theorem. Interesting that some of the patterns should come to resemble snowflakes and other things in nature. Some relation to fractals, perhaps? Perhaps what you have in some cases is production of the rules of fractals.

sincerely,
____________________________

Fertilize a mind - plant an idea.
____________________________

Paul Sagi
Service Engineer
Airtime Management & Programming Sdn Bhd
All Asia Broadcast Centre
Technology Park Malaysia
Bukit Jalil
57000 Kuala Lumpur
Malaysia
Mobile: 012-2845140
Tel: 60 03 9543 8888 ext: 7263
Fax: 60 03 9543 5688
email: paul_sagi@astro.com.my

Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 12:18 AM by ben@goertzel.org


Ray,

I agree that the "universality" theme is overstated in Wolfram's book, but I would use a slightly different phrasing to explain why.

The theory of universal computation is not Wolfram's invention of course; and it has a thorn in its side that he does not emphasize in his book. This thorn is *infinity*. Any "universal computer" can simulate any other computer, but it can only do so by running a simulation program that may slow it down by an arbitrarily large finite amount, and expand its memory usage by an arbitrarily large finite amount.

This means that it's not enough to say that some particular universal computer (e.g. CA rule 110) is *in principle capable* of modeling the universe, or the brain, or biological evolution, or whatever. One has to show why the universal computer in question is *pragmatically effective* for modeling the systems in question.

If you have another modeling framework M that you think is better than CA rule 110 for modeling, say, the brain or the universe -- then you are guaranteed that your whole modeling framework M can be simulated using CA rule 110.

Thus, when you complain in your review that the pictures of class 4 CA's that you see in his book are not as complex as lifeforms, universes, brains, etc. -- he can always retort "Yes, but a sufficiently large class 4 CA picture *would* have patterns of that complexity and variety in it, although they might not be visually discernable." But then you must retort: "So what? If it takes Class 4 CA's with initial conditions of length 10^50 and runtimes of 10^20 generations to emulate the brain, what use is the emulation?"

The point is that his Principle of Computational Universality ignores the question of computational efficiency at real-world scales. He proposes a statement going beyond universal computation theory, which is: Almost any dynamical system that doesn't lead to random or transparently fixed or oscillatory behavior, is likely to be a universal computer. This is a fascinating insight, though I'm not yet 100% convinced it's true. But even if it is true, so what? Each of these theoretically-universal dynamical systems is going to lead to different behaviors *within reasonable space and time constraints*. And that is what is important, because the physical world and the mental world have to exist within fairly tight space and time constraints.

I thought his book was very interesting, but I found no insights in it that were applicable to my own work in artificial general intelligence (AGI). AGI work is all about spacetime efficiency. It's easy to make a thinking machine if you assume (as Wolfram does in his Principle of Computational Universality) infinite space and time resources. What the human brain is all about -- and what the first digital minds will be all about -- is achieving *relatively broad scope* computation within *relatively limited resources*.

In this sense, I'm afraid Wolfram's "new kind of science" is not that new after all. He has given us a funky new angle on some familiar complexity-science ideas, and introduced some valid technical and conceptual insights. But, he has imported from standard computation theory the focus on *infinite resources*. I think the new kind of science we need is one that deals with the finite world we live in, i.e. one that considers average-case spacetime complexity as absolutely theoretically fundamental, not as an irritating technicality to be brushed aside. I was hoping to see something in this direction in Wolfram's book, but, no cigar.

Yours,
Ben Goertzel
Novamente LLC
www.realai.net

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 6:09 AM by tomaz@techemail.com

Ben!

> thorn is *infinity*. Any "universal computer" can simulate any other computer, but it can only do so by running a simulation program that may slow it down by an arbitrarily large finite amount, and expand its memory usage by an arbitrarily large finite amount.

Sometimes, however, we can simulate faster - by putting something aside, or even deliberately cutting it away as a disturbance.

> One has to show why the universal computer in question is *pragmatically effective* for modeling the systems in questions.

We saw the examples already. But why ask Newton to actually calculate the Solar system? Or Quantum Theory to model the whole monkey DNA?

> "So what? If it takes Class 4 CA's with initial conditions of length 10^50 and runtimes of 10^20 generations to emulate the brain, what use is the emulation?"

Then it's useless, of course. But I don't think that will be (or is) the case. So what, if I think that way, you may ask? I _guess_ that way. You _guess_ the opposite.

Besides, I see a tremendous acceleration in the computation field, if CA will be proved doable. Every cell may be a (very simple) processor/RAM cell. The computing power should go unprecedentedly higher than it ever has before.

> because the physical world and the mental world have to exist within fairly tight space and time constraints.

I don't see why. Maybe. Maybe not.

> AGI work is all about spacetime efficiency. It's easy to make a thinking machine if you assume (as Wolfram does in his Principle of Computational Universality) infinite space and time resources.

That's true. Very easy. So it is with the energy (or money ... or whatever) supply in an infinite world. But it's just an irrelevant Cantor's game.

This case has been seen before. Many times during the last few centuries.


- the new kind of science always goes in an unpredicted direction.

- opposition always has two components: that it's nothing really new, and that it's evidently wrong and fruitless. The author should just do something else, where he may even be good at something.

- "must be something deeper, it's too prosaic"

The pattern is too strong here not to spot it. But it doesn't prove anything either.

That's life! ;-)


- Thomas Kristan

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 10:03 AM by ben@goertzel.org


Thomas Kristan,

I'm afraid you misinterpreted one aspect of my comment.

I don't doubt that there is a "new kind of science" abrew. The science of complex systems has been growing for some time, fueled by computer technology among other things. It's very exciting.

I think that Wolfram's work is an important part of this emerging new-kind-of-science -- but not quite as important a part of it as he thinks ;)

I doubt that CA's are going to emerge as the core modeling tool of the science of complex systems. Rather, I think we'll continue to see a diversity of modeling tools, and that the science of complex systems -- when it finally emerges as a robust science rather than just a philosophy together with a diverse collection of specialized tools and examples -- will focus on the formalized study of *emergent system patterns* rather than focusing on one specific class of dynamical systems for generating such patterns.

I doubt that Wolfram's MO of visually identifying complexity via 2D patterns is going to take him to a real complexity-based physics theory, or a complexity-based theory of mind-brain, evolution or even fluid dynamics. (To name some of the disciplines he touches in his book). For that I think some kind of math/science that focuses on emergent patterns rather than generating iterations will be necessary.

-- Ben Goertzel
Novamente LLC
www.realai.net


Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 10:30 AM by tomaz@techemail.com

Ben!

I just don't understand why you think it is not possible to link several simple CA automata to obtain every pattern of this world.

I am not absolutely sure it is below a one-million-bit rule. Only pretty sure.

[After all, inside PI there is a whole world. An RGB picture of every human who ever lived. Resolution 1024x1024 Targa.]

If somebody doesn't trust my word on that, please see: http://www.nature.com/nsu/010802/010802-9.html



And you need less than 100 bits to define PI, as 4 - 4/3 + 4/5 - 4/7 + ...!
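The series meant here is presumably the Gregory-Leibniz series, pi = 4 - 4/3 + 4/5 - 4/7 + ..., which indeed has a tiny finite description, though it converges slowly. A quick check of a partial sum (my own illustration):

```python
import math

def leibniz_pi(terms):
    """Partial sum of the Gregory-Leibniz series 4 - 4/3 + 4/5 - ..."""
    return sum((-1) ** k * 4.0 / (2 * k + 1) for k in range(terms))

approx = leibniz_pi(100_000)
print(approx, abs(approx - math.pi))  # the error is roughly 1/terms
```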

So, I tend toward the Wolfram's conjecture.

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 10:50 AM by ben@goertzel.org


Tomaz...

There may be our whole world encoded in the decimal expansion of Pi, but this is a *very inefficient* representation of the world, because along with our world it also encodes a hell of a lot of random and quasirandom nonsense.

This is the same as my complaint about CA's. Yeah, by universal computing theory, all the patterns in our world can be made to emerge from some CA. But how complex does the initial condition of the CA have to get? Sure, Rule 110 is simple, but this just means that for modeling anything complex, or getting any really complex behavior, all the complexity is pushed into the initial conditions.
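Rule 110's simplicity is easy to exhibit in code: the whole rule is an 8-bit lookup table (110 = 01101110 in binary), indexed by a cell and its two neighbors. A minimal sketch:

```python
def eca_step(cells, rule=110):
    """One step of an elementary CA on a ring, in Wolfram's numbering:
    bit k of `rule` is the next state for a neighborhood spelling k in binary."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# From a single live cell, rule 110 grows a pattern leftward.
row = [0] * 11
row[5] = 1
history = [row]
for _ in range(4):
    row = eca_step(row)
    history.append(row)
for r in history:
    print("".join(".X"[c] for c in r))
```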

I think the "graph-rewriting" systems Wolfram discusses in his physics chapter have a lot of promise, and I like his growing combinator systems too. My own taste would be to intermix self-rewriting combinator expressions with self-rewriting graphs. My intuition is that this is likely to lead to more compact models of many real-world systems than CA's (e.g. the brain, the physical universe at a low level). Whereas CA's are obviously a great modeling tool for fluid dynamics and other domains that depend simply and critically on local transmission of information, between a point in space and its neighbors.

One way or another though, I think the big thing missing in Wolfram's book is a systematic and mathematical way of analyzing, interrelating and synthesizing *emergent patterns*. He plays with iterations and identifies emergent patterns visually, then proves some things about them in special cases, or draws analogies between what he sees and various real-world systems. But until we have some real science & math about the emergent patterns in these complex systems -- even if there is a small collection of CA-ish rules giving rise to the universe as we know it, we'll never be able to find this collection...

-- Ben Goertzel
Novamente LLC
www.realai.net

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 11:03 AM by tomaz@techemail.com

Ben!

> even if there is a small collection of CA-ish rules giving rise to the universe as we know it, we'll never be able to find this collection...

Why do you say that? If something like that exists - I think it well might - what is going to hide it from us?

On the contrary! If we don't find "this collection" relatively soon, it probably does not exist. The universe is trans-CA, then.



- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 11:18 AM by ben@goertzel.org


Tomaz,

Even for "relatively small" CA or CA-ish rules, the search space is very very large.

One can only get so far by visual inspection of emergent patterns in such dynamical systems. Maybe Wolfram has gone almost as far as one can go using visual inspection.

So my guess is that, if we're not going to rely on the human visual system to find the rules giving rise to complex patterns, we need either

a) a solid math theory telling us what kinds of rules and initial conditions will give rise to what emergent patterns (we're very far from having such a thing, at this point), OR

b) an AI system that can do far better than humans or existing heuristic search techniques, at searching the space of possible dynamic rules and initial conditions in a systematic and intelligent way

I had sort of hoped Wolfram would provide a), but he has not (yet), and nor has anyone else.

-- Ben Goertzel
Novamente LLC
www.realai.net

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/01/2002 11:29 AM by tomaz@techemail.com


> a solid math theory telling us what kinds of rules and initial conditions will give rise to what emergent patterns

If those rules are the simplest, there is no point in searching for "something better" to explain them.

You just have to run one after another, and - here I do agree with you - some automated matching checker should be installed, to inform you that, say, rule number 110110110110 produces a hydrogen-atom-like object.

How difficult is it to write such a program? Not very.
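A minimal sketch of what such a program might look like (everything here - the function names, the thresholds, the use of zlib compressibility as a stand-in for real structure detection - is an illustrative assumption, not anything from the thread): enumerate the 256 elementary CA rules, run each from a single live cell, and score how compressible the resulting space-time history is.

```python
# Hypothetical "automated matching checker" sketch: score each elementary
# CA rule by how compressible its space-time history is. A very low score
# suggests a trivial rule; a high score suggests noise; rules in between
# may be worth a closer look. zlib compressibility is only a crude proxy.
import zlib

def step(cells, rule):
    """One update of an elementary (binary, nearest-neighbor) CA with wraparound."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def score(rule, width=64, steps=64):
    """Compression ratio of the space-time history (near 0 = very regular)."""
    cells = [0] * width
    cells[width // 2] = 1                  # single live cell as initial condition
    history = bytearray()
    for _ in range(steps):
        history.extend(cells)
        cells = step(cells, rule)
    return len(zlib.compress(bytes(history))) / len(history)

scores = {rule: score(rule) for rule in range(256)}
print(f"rule 0:   {scores[0]:.3f}")        # dies out immediately: very regular
print(f"rule 30:  {scores[30]:.3f}")       # chaotic: much less compressible
print(f"rule 110: {scores[110]:.3f}")      # structured but non-repeating
```

Recognizing "a hydrogen-atom-like object" rather than mere statistical structure is, of course, the hard part that this crude checker does not touch.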

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/06/2002 11:00 PM by im4xlns@hotmail.com


NOTE: The following is a copy of an email I sent to various colleagues in the areas of genetics, government, and organizational behavior (the latter two being my fields). I started out just reading a short piece in Newsweek about the new book by Stephen Wolfram. From my own (admittedly limited) knowledge of these subjects, I felt that a) Wolfram was really egotistical, and b) the article left out some really important people from whom - oops - Wolfram learned a lot. As the lengthy piece below reveals, this matter really resonated with me. I don't know if any of you will find this interesting - but I hope you do, in some way. If you do read on and have some thoughts to share, please do write back. Thanks,

Bruce Waltuck



Hi... I don't know how many of you see publications like Time, Newsweek, Wired, or The New York Times regularly. This past week marked the publication of a remarkable and controversial book. Stephen Wolfram has self-published a huge volume that purports to define "A New Kind of Science." I get Newsweek, and when I saw a brief piece about the book, it struck a major chord (pun intended) with me.

According to Newsweek, the central thesis of Wolfram's book is that pretty much any and all complex systems and phenomena in the universe come from the iterative computation of a few simple rules. Wolfram, an undeniably egotistical and brilliant guy (quit Oxford out of boredom; awarded a Ph.D. by Caltech at 20; awarded a MacArthur "genius" grant the next year; creator of Mathematica, the multi-million-selling computational software), is quoted as saying that as far as he knows, no one else is thinking/doing the same science that he is.

Well, I am not a MacArthur genius (though I met one recently, James Randi), but I took exception to Mr. Wolfram's assertion. As friends and colleagues, most of you know the principal work I did in the quality improvement field. In 1989-1990, together with my colleague Jim Armshaw, I built an employee involvement and quality improvement system for the U.S. Department of Labor. We spent six months doing research and devising a system that would meet the needs of the very diverse organizations within the USDOL. Our research taught us that we could expect - guess what - very complex behaviors based on a very simple set of rules (make decisions by consensus; create teams of people to solve process improvement problems; use meaningful and valid data to understand the problem, etc.). Not quite rocket science, and certainly no scientific "revolution" (as Wolfram claims for his work).

Back in the early 1990s I was heavily influenced by the work of the late W. Edwards Deming. Dr. Deming had summed up his own approach to organizational improvement in a set of just 14 "points" (simple rules like "drive fear out of the organization" and "maintain a constancy of purpose"). Not rocket science either, but to use a word that Dr. Deming favored, arguably "profound."
In those early years of my quality improvement work, I really thought that we had our hands on The Answer - that we had built, or come close to building, a system of improvement that solved all problems and answered all concerns.

Of course, the DOL's experiences over the past 12 years have proven me wrong. Our system was not perfect, not as complete as we had thought. A few years ago, an experience at a kids' science museum on a rainy day in San Francisco changed my perception of organizational and other systemic behaviors forever. The exhibit "Turbulent Landscapes" (still viewable at the Exploratorium website) featured simple science systems and toys that showed how - guess what - simple rules could create astonishingly complex behaviors. In particular, my observation of a magnetic pendulum system (available as an "executive desk toy" called R.O.M.P.) gave me a flash of insight into human and organizational behavior that has informed and transformed my work ever since.

Many of you have heard me ramble on and on in the last few years about chaos, complexity, complex adaptive systems, and so on. I have read every book I could find (and understand) on the subject. From well-known sources like Meg Wheatley's landmark "Leadership and the New Science" from 1993, I branched out, just as Stephen Wolfram has done. The more I asked the "simple" questions of "how do people behave in organizations?" and "what makes a new idea become a shift in paradigm?", the farther afield my inquiries went. Before I knew it, I was reading about the pioneers of the Santa Fe Institute - Chris Langton, John Holland, Stuart Kauffman, Brian Arthur - and exploring new ideas about everything from economics to evolution and biology. It seemed that ideas about how complexity influences system behavior were on the minds of many truly great thinkers and scientists.

So now we have Stephen Wolfram, an acknowledged genius, claiming that he, and he alone, has figured out, well, everything. He has said he expects to be mentioned alongside Newton and Einstein some day.

Well, I don't know about that. There are, as the also very smart Ray Kurzweil has written, some parts "missing" in Wolfram's massive work.

For myself, I (currently) believe that:

- we live in a quantum universe. Reality, as we know it, is defined in large part by our inter-relationship with things.
- as "independent agents" living and moving through our organizations, we perceive, appreciate, understand, and act.
- Unlike Wolfram's simple computer program rules, humans also desire, and intend. To the extent that a bit of computer code can mimic this behavior, it is a reflection of the design/intention of the program's author (see the very cool FRAMSTICKS website to watch computer "life forms" do their things).
- I increasingly believe that our behavior in organizations (and in general) is governed in a rather quantum way by both a mechanistic/Newtonian world of rules, causes, and effects, and a simultaneous co-equal world of desires, intentions, perceptions, and behaviors. These worlds BOTH are real, and both rely on the interaction of observer and observed to take form. I recently gave a talk on "A New Definition of Quality" that draws on these ideas, and described strategies and methods that are implied by this "Quantum Quality" (tm) model.
- Finally, and maybe my own most controversial notion, is that the real nature of the way things function in the universe was discovered by speculative thinkers as far back as the second century, and emerged in a body of thought and literature in the 13th century. Only in recent years has this body of knowledge become widely available in English, and generated a renewed level of interest and understanding. I really think these people had it right, centuries ago. I just don't know how they figured it all out (see "God and the Big Bang" by Berkeley professor Daniel Matt).


Well, if you have read this far, you either know what this is all about, or you think I am "losing it." I am writing to Newsweek to criticize Wolfram's egotistical claim to be alone in this field of endeavor (he is not), and the glaring omission of at least one foundational thinker in Wolfram's own area.

POSTNOTE: I was referring to John Holland, whom I understand to have really pioneered work with cellular automata, his "genetic algorithm," and so on. I have not seen mention of John Holland in connection with the Wolfram book in the various articles and reviews I have read. What do you, the obviously bright and well-informed posters here, know about any of this?

Thanks

Bruce A. Waltuck
United States Department of Labor

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/07/2002 2:09 AM by tomaz@techemail.com


I think the probability that Wolfram is right about the simple rules is very high.

As I've learned, he does mention some people.

But it doesn't matter at all if he is arrogant or selfish or has no respect for peers. That's his problem.

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/07/2002 9:03 AM by erietman@tritonsys.com




Kurzweil is right about rule 110 (or any other rule, for that matter) not being able to "evolve" highly complex systems like ants or humans. But I don't think that is the point of Wolfram's work. His work shows that automata interact with their neighbors to produce complex patterns. That is all he is saying at this stage. But the beauty of that is you can now see how complex systems, from atoms to anthills, are possible.

Automata on the subatomic scale, through their dynamics, give rise to atoms. Automata on the molecular scale give rise to complex molecular systems. Automata on the scale of large molecules give rise to life forms. Automata on the life form scale give rise to ants. Automata on the ant scale give rise to anthills. Etc.
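The neighbor interaction ed describes can be seen concretely in a few lines. This is a generic sketch (not code from the book): each cell's next state is a lookup, by 3-cell neighborhood, into the bits of the number 110, and printing successive rows shows the complex space-time pattern emerging from that single simple rule.

```python
# Rule 110 as a neighbor-interaction rule: the new state of each cell is
# determined only by its own state and its two neighbors' states, via the
# 8-entry lookup table encoded in the binary digits of 110.
RULE = 110

def evolve(cells):
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]                 # a single live cell at the right edge
for _ in range(16):                    # 16 generations as a space-time diagram
    print("".join("#" if c else "." for c in cells))
    cells = evolve(cells)
```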

ed

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/09/2002 10:53 AM by svamd@aol.com


Ed,

I think you've nailed it.

I'm only about a quarter of the way through NKS, but I don't find that Wolfram has considered this issue of scale. Kurzweil's complaint about the lack of holarchic depth (i.e., degrees or levels of complexity) in Wolfram's CAs revolves around this, as far as I can see.

Scott Anderson

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/07/2002 10:23 AM by grantc4@hotmail.com


In the preface of his book, Mr. Wolfram devotes nearly three pages of small type to listing name after name of people who contributed thoughts and ideas that helped him develop his concept. Some of them are the same names you mentioned in your message above. Throughout the notes at the back of the book he gives references to other people and their ideas. I think what he is saying is that the way he is approaching the subject is different from the way anyone else is doing it today. I don't see anywhere in the book a claim that he invented the mathematical ideas on his own. He was also a contributor to the SFI and mentions in his book the names of Murray Gell-Mann, John Holland, Stuart Kauffman, the people he worked with at Princeton, etc.

But to worry about who he gives credit to is to miss the point of the book, which is to introduce the reader to a new approach to numbers and mathematics and how we should think about them. He points out, rightly in my opinion, that a lot of the processes we use to do mathematical manipulations lead to fuzzy or untrue conclusions. He thinks his approach will be a more useful method in the long run and will leave out a lot of the complexity that confuses such matters today. It's his way of looking at the whole system that is unique and not being done by other thinkers -- not the individual concepts that he has organized into his own particular approach.

To worry about Wolfram's ego is to miss the point of what he is trying to tell us. The man is pointing to the stars and, instead of seeing what he is pointing at, people are arguing about the shape of his fingers and whether they are worthy of pointing at such a high and mighty feature of the universe.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/08/2002 2:59 PM by lauri.grohn@ske.pp.fi


"POSTNOTE: I was referring to John Holland, who I understand to have really pioneered work with Cellular Automata, his "genetic algorithm" and so on. "

From ANKOS p. 985: "And starting in the mid-1980s, extensive work was done on biologically motivated so-called genetic algorithms, which had been advocated by John Holland since the 1960s."

P.S. You forgot to mention the other few hundred people whom Wolfram thanks in his preface. Why? Why do you think the reviewers didn't mention all those names?

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/08/2002 5:13 PM by tomaz@techemail.com


> Why? Why do you think the reviewers didn't mention all those names?

Yeah. It's to discredit Stephen Wolfram, I guess. Low.

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/07/2002 5:46 PM by scottwall5@attbi.com


I became intrigued by Kurzweil's review and started reading Wolfram's book. I found Wolfram's argument to be quite convincing.

Here is an observation: If you accept John Searle's rather authoritative conclusion that syntactical computation cannot generate consciousness, and Wolfram's conclusion that the universe is a giant syntactical computer, it amounts to an embarrassingly simple set-theoretical proof that consciousness (the soul, if you will) must be independent of the physical universe.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/08/2002 5:14 AM by quim@bronze.lcs.mit.edu


Although I think most of the review is on target, I found the discussion of genetic algorithms strangely facile. Kurzweil says that we need the genetic code itself to evolve, and that the currently static coding schemes somehow explain why genetic algorithms don't reach the complexity of living systems.

After such rigorous analysis of the meanings of complexity and information and what we can and cannot say about certain interesting CAs, it strikes me as odd that Kurzweil would at the same time give such casual treatment to another serious subject (GAs).

Kurzweil's hypothesis that somehow the key to evolution is having the genetic code itself evolve is not supported by a shred of evidence (which does not imply it is incorrect, but rather that it is presently naive). Although it worked well for nature, having the genetic code itself evolve inside of a computer would result in a massive explosion of the search space being covered by evolution. In such a scenario, many candidates would be eliminated not because they did not perform sufficiently well, but rather because, in some sense, their genetic coding itself accidentally had self-imposed limitations. Thus, a large portion of the computational resources would be dedicated to dead-end encodings, in addition to all the poor-quality solutions that normal GAs suffer from.

Not only that, but who defines the space of encodings being searched? And who is to say that THAT space is not itself somehow suboptimal? We don't even know if nature got it right. Perhaps evolution would have been a thousand times faster if only the genetic encoding WASN'T allowed to evolve, thereby constraining the search space. The space of representations is likely to be rife with pitfalls, which are just as likely to decelerate evolution as to accelerate it.

Computers are also fundamentally different from nature in that they can only perform so many operations in parallel. Nature could afford a lot more "throw-away" encodings because they were evaluated in parallel with more robust strategies. In a computer, we don't want to waste valuable resources on throwing things out that actually perform well! Remember, Kurzweil is suggesting the encoding itself would evolve, implying that some poorly encoded solutions might actually perform well during their lifetime! That's a waste of resources.

I would suggest that the real "solution" here involves a much more sophisticated analysis of what it means to be a good encoding, and an understanding of how the encoding constrains the search space of organisms. I only bring this whole line of argument up to point out that when you are being rigorous in general, you need to be careful about making broad statements about entire fields of inquiry that are hardly yet understood.
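For concreteness, this is what the "normal GA" with a fixed, non-evolving encoding discussed above looks like in miniature. Everything here (the OneMax fitness, truncation selection, all parameter values) is an illustrative assumption, not anything from the thread or from Wolfram's book:

```python
# Minimal fixed-encoding genetic algorithm: bit strings of a fixed length,
# one-point crossover, bit-flip mutation, OneMax fitness (count the 1s),
# truncation selection with elitism (the top half survives unchanged).
import random

random.seed(0)
LENGTH, POP, GENS = 32, 40, 60

def fitness(bits):                      # OneMax: count the 1 bits
    return sum(bits)

def crossover(a, b):                    # one-point crossover
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):            # independent bit flips
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]           # keep the fittest half
    pop = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]

best = max(pop, key=fitness)
print(fitness(best))
```

Note that the encoding (a flat bit string, read the same way every generation) is frozen by the programmer. Letting the interpretation of the bit string itself mutate is what would blow up the search space in the way Q. argues above.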

I believe a similar criticism could be made about the whole "reverse engineering" broadstroke undertaking, but I'm sure I've gone on long enough.

Q.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 4:06 PM by scottwall5@attbi.com


>Computers are also fundamentally different from nature in that they can only perform so many operations in parallel. Nature could afford a lot more "throw-away" encodings because they were evaluated in parallel with more robust strategies.

This is the key problem I find with Kurzweil's review. The search space on earth was unimaginably large compared to the size of the encodings in question, and all of the encodings were being tested in parallel. If you look at the history of life on earth, there are many periods where absolutely nothing seems to happen. Then there is an explosion of change, such as the Cambrian period. Who knows how long you would have to stare at a CA, and how large a search space you would need, before you saw the level of complexity we are interested in?

Re: 1st person subjective view
posted on 06/08/2002 1:05 PM by thoughts@reflection-idea.com


I offer the book 'Reflection,' which can be read at www.reflection-idea.com, as a "1st person subjective view" of spiritual phenomena that hints very strongly at CA underpinnings of it all.

"Spiritual" experiences are often discounted by the logically minded but, as with the pre-plate-tectonics critics of continental drift, they are discounted because there is no physical model of the world that accounts for them.

The Net of Indra is a spiritual idea that seems tantalizingly close to Wolfram's CA-is-the-meaning-of-everything ideas.

I welcome any and all comments on the ideas in 'Reflection.'

Dennis Merrit

what science?
posted on 06/09/2002 3:38 AM by abhirej@netscape.net


dear all

maybe its time we look for the emperor's clothing...

a (?) new (?) kind (?) of science(?) ?

i believe all of the above ?-marks individually challenge all of wolfram's propositions in that darn book of his (and, yes, i've a copy and i've read through most of it).

1. a (?)
fredkin et al. speculated (and note the emphasis on that word) on the idea that "the universe may be a computer". so what's new? and besides, i see ABSOLUTELY NO evidence for that claim, even if he (wolfram, that is) is correct to that effect. ceo wolfram: please predict for us what your "new kind of science" says the top quark mass is... or better still, can you predict the right value for the cosmological constant? or forget that: as for your tall claims about gould's shell shape calculations being wrong, for any arbitrary shape, can you GENERATE that pattern exactly (in the kantian sense)?

2. new(?)
what's new in the book? at best it's an exhaustive catalogue of cellular automata, a catalogue nevertheless, exemplifying what a billionaire software guru does to kill spare time. conceptually, it is misguided: remember the wise man who said that the map is not the territory? heck, even medieval cartography was more sane, the lack of sanity being related to the lack of predictive power. more mathematically, the argument about computational irreducibility can be formally shown to be equivalent to turing's original universal machine argument, just in a different guise. to see it more clearly (i.e., to make it more 'bimbo-friendly'), try visualizing turing machines as post production systems. in effect, one can do all sorts of things with these universal machines that are nevertheless NP-complete: ever heard of coupled tape machines?

3. kind (?)
this is going to be more philosophical: 'kinds' of science? the last time i checked, there were two ways of doing things: either you were a positivist or a realist. now what exactly does ceo wolfram stand for? this is certainly not positivism, with all the talk about immutable programs generating phenomenological appearances. unfortunately, it fails to satisfy realism for the same reasons.

4. science (?)
hardly. one question would suffice: does your "new kind of science" have any predictive power? if it does, answer the questions laid down in q.1. if not, accept the fact that this is another of those hippie (figuratively :-)) approaches to things, pursued to nurse your bruised ego and, perhaps, inspire a following amongst semi-literate programmers who think of you as the best thing since sliced bread.

cheers

rej

Re: what science?
posted on 06/09/2002 4:12 AM by tomaz@techemail.com



> a
> fredkin et. al. speculated

Before Einstein we had Lorentz's relativity...

> (?) new

It's new to the mainstream. It's new in its extent.

It was possible to say to Darwin also - "What's new here? We had that kind of blasphemy already!"

> (?) kind

Of course. A new mathematical approach. At least. Imagine (your) opposition if it were an even more radical shift.

> of science(?) ?
> does your "new kind of science" have any predictive power?

Sure it has. After one month, you already demand some "1919 eclipse" experiment to have been done?



What's bothering you so much that you are so eager and hasty to dismiss it???


- Thomas

Re: what science?
posted on 06/09/2002 12:34 PM by richard@skydancer.com


> What's bothering you so much, that you are that eager and hasty to dismiss it quickly???
>
> - Thomas

I'm glad rej had the sense to ask the basic scientific question about this book which is - is it actually any use to anyone?

If you consider the biggest advances in science, such as Newton's Principia, Relativity, QM, The Origin of Species, et al, they made two contributions.

1. They proposed a new conceptual framework for dealing with a given set of phenomena. Okay, we (arguably) have something like that here.

2. They offered the *immediate* possibility of using that framework in a predictive or otherwise scientifically useful way.

Put simply, *they actually solved some real problems.*

Wolfram's book does nothing like this. It has some interesting ideas, but as rej has pointed out, most of these aren't all that new. (As a former Life follower I know that people were drawing similar parallels between CA and The Entire Universe more than twenty years ago.)

Still, that's not the real issue. The real issue is that this book makes *no* testable predictions, contributes *no* solutions to any of the problems that currently haunt science, and describes in detail *no* new technological applications. There are hints and possibilities that some of these ideas might work, but not so much as a definitive example.

So (to use an old, but still appropriate cliche) where's the beef?

If Wolfram can use his CA approach to solve a real problem in any discipline, then I think people should sit up and take notice. We don't even have to ask for the mythical Theory of Everything here, even though that's clearly Wolfram's implication and ultimate desired destination.

Something simpler, such as a worked example of how to reformulate an old problem using his CA approach in a way that provides useful new insights or cuts down substantially on computational cost, would be a good start. Even something that *potentially* but still clearly, obviously and unambiguously does the above would be enough, even if it turned out we couldn't implement it with current technology.

But until he does that, or someone else takes the time to do it for him, I think we should all be wondering just what there is of substance here.

Now, my prediction is that this is never going to happen. The reason is that the principles in this book aren't worked through to the level where it's possible to apply them practically in all the areas that Wolfram claims.

More than that, I'm not convinced that they *can* be worked through to that level because I don't see that the computational sophistication is really there - for reasons that Ray has explained already.

Of course I could be wrong about that. But when you're faced with a book that relentlessly repeats 'It's my belief that...', 'I strongly believe that...', 'It is my expectation that...' over and over, without providing proofs or examples for any of these beliefs or expectations, I think it's reasonable, rational and wise to be sceptical until something more concrete is presented.

Richard Wentk

Re: what science?
posted on 06/09/2002 1:15 PM by tomaz@techemail.com


No substantive criticism yet - only some character-damaging assertions.

If people knew that it was always so - always, whenever it came to some paradigm shift - they would treat NKS with some positive scepticism.

- Thomas


p.s.

I will read it in July.

Re: what science?
posted on 06/11/2002 4:04 AM by scottwall5@attbi.com


The deeper I get into Wolfram's book, the more I realize that it has value, if in no other way, then just as a source of inspiration. One finds oneself seeing all sorts of things differently, from social life to organizational strategies. If no one can get any good out of reading it, that will only show a shortage of imagination.

I seem to recall that the inspiration for Darwin's exposition was a chance encounter with an observation of Thomas Robert Malthus. Few economists give much credence to Malthus, but the theory of natural selection was worth the enterprise.

Re: what science?
posted on 06/09/2002 4:04 PM by grantc4@hotmail.com


As I remember (although not too clearly off the top of my head) there were a number of matrices developed in the 1800s that people had no clear use for until the science of quantum mechanics was developed. First came the math and then came a way to use the math for something concrete. Now, instead of a four dimensional universe, we have scientists talking about a ten or more dimensional universe. My point is that it often takes a while to find a use for a new tool that was not invented to solve a specific problem.

I, too, found myself asking what I can use this new way of representing numbers and developing patterns on paper for. I want to find a way of dealing with memes and how they propagate. The subject seems a lot like predicting the weather. I see culture as a complex adaptive system that should be amenable to that kind of analysis. But so far, I haven't found anything in Wolfram's book that makes it useful for what I want to do. But, as with other purely mathematical developments, it may take a while.

Re: what science?
posted on 06/09/2002 11:38 PM by abhirej@netscape.net


dear all

first a response to thomas' post:

1. there is NO such thing as lorentzian relativity: at best, there is a certain contraction factor derived by lorentz that relates rest length to kinetic length. (special) relativity starts out with the assumption that the transformations between the coordinate systems of (inertial) observers must involve a NON-TRIVIAL transformation of the time coordinate (this demolishes the notion of "simultaneity"), apart from non-trivial space coordinate transformations (some of which had already been deduced by poincare on the basis of lorentz's work). that is relativity, and its beginning (at least from a professional physicist's viewpoint). this kind of thinking started in 1905 with einstein. end of story.

2. stuff about blasphemies/paradigm shifts: if mr. wolfram had been able to predict ONE thing (however improbable testing that one thing might be at the present time, say, due to the computational costs involved), at least his "theory" would have been falsifiable. isn't that a criterion for science? you know, popper has not been dead for that long... also, sane and rather simplistic questions such as mine are easily dismissed by the zealous lot rather than answered directly. in a forum more formal than this, sane questioning (such as mine) by university scientists is often dismissed by the same zealous lot as the "harassment of the academic mafia". all this points to one new research program: "a new kind of (academic) sociology"...

3. about prediction: all the canons of modern science to which wolfram and his cronies have been comparing ANKOS either (a) made direct predictions (einstein's 1916 paper had the mercury perihelion shift prediction IN THE SAME PAPER as his revolutionary theory of gravitation from general covariance and equivalence) or (b) were based on DIRECT observations of nature ITSELF (and not of some "model"), such as darwin's. none of the aforementioned canons engaged in anything remotely as quixotic as ANKOS in terms of "all talk and no predictions".

as for the person who talked about matrices and QM: there is no such thing as specific matrices in QM. matrices are invoked in QM because of the noncommutativity of the product of observables (operators), from which the uncertainty principle is directly derivable. consequently, talking about SPECIFIC matrices that were available in the 1800s and were later used in QM (presumably in order to invoke a comparison with the "tools" in ANKOS) is meaningless. i personally suspect that the correspondent's way of thinking about QM has been colored by reading too many popular books on the subject. a good "conservative" textbook would be a good remedy.
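For reference, the two standard results alluded to above, written out in conventional notation (added for illustration, not part of the original post): the non-trivial time transformation of special relativity from point 1, and the commutation relation behind the matrices-in-QM remark.

```latex
% Lorentz transformation (frames in relative motion at speed v along x);
% note that t' mixes t and x -- the non-trivial time transformation that
% demolishes absolute simultaneity.
\[
  t' = \gamma\Bigl(t - \frac{vx}{c^{2}}\Bigr), \qquad
  x' = \gamma\,(x - vt), \qquad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\]
% Canonical commutation relation and the uncertainty bound it implies:
\[
  [\hat{x}, \hat{p}] = i\hbar
  \quad\Longrightarrow\quad
  \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}
\]
```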

cheers

rej

Re: what science?
posted on 06/10/2002 2:09 AM by tomaz@techemail.com


rej

> lorentzian relativity

Man, don't you have Google?

At least check for Wallace's theory of evolution as well.

- Thomas

Re: what science?
posted on 06/11/2002 4:46 AM by shai_op@netvision.net.il


I totally agree with Rej.
The science of ALife and complexity has failed to bring any new prediction, new understanding, or new conclusion about the real world.

Shai.

Re: what science?
posted on 06/11/2002 5:49 AM by tomaz@techemail.com


You are quick to draw conclusions. Besides, they are universal negatives, which are generally hard to prove. Why the hurry?

- Thomas

Re: what science?
posted on 06/11/2002 10:12 AM by shai_op@netvision.net.il


I am not in a hurry. I have followed ALife since its inception, more than 12 years ago, but so far - you'd agree - it is disappointing.

Re: what science?
posted on 06/11/2002 2:33 PM by abhirej@netscape.net


dear all

thanks to the people who see reason in my reasoning
guess for those who don't, i'll have to add some seasoning!

i will post one personal criticism of ANKOS every day in this forum. these criticisms will be of a technical nature and will generally avoid any reference to the philosophical quagmire of the whole ANKOS enterprise. though tempted, i shall refrain from pointing out the more obvious problems in ANKOS w.r.t. wolfram's writing style, idiosyncrasies and maddening disregard of what others have already done in the field (and i'm referring to papers in extremely famous journals, not the work of some lone soul toiling away in secrecy in some basement in the valley). i challenge all those who believe in ANKOS in the ontological sense ;-) to refute each criticism as it is posted. please refrain from vague replies.

"SIMPLE" VERSUS "COMPLEX": WHERE ARE THE DEFINITIONS?

i shall start with the most obvious problem with the book: the lack of basic definitions. let's go back to the mantra of the book, "simple programs generate complex behavior". what is "simple" and what is "complex"? let's start with "simple". suppose you have a rule that you want to implement. implementation could mean configuration-space evolution or phase-space evolution, and evolution could mean iterative application of a set of rules with respect to some configuration-space parameter (say, time). in order to implement this rule, you need a suitable formal language. the rule is then encoded in this formal language to get a program. "simple program" could then mean two things: the naive meaning is that the length of the program is small. the more sophisticated meaning is the kolmogorov-chaitin one: the program is simple if it is algorithmically compressible. in the first case, the meaning of "simple" is context-dependent on the formal language being used to encode the rule. clearly, one can implement CAs in mathematica 4.0 with built-in functions, so the "naive length" is very small. the second meaning of "simple", as something that is algorithmically compressible, leads to a conclusion directly opposite to wolfram's. i'm referring to chaitin's book "the unknowable": to wolfram, something is simple if it looks very regular and non-random, and his notion of complex is the converse. for chaitin, however, something is simple if it is compressible. $\pi$ for chaitin is simple; $\pi$ for wolfram is complex. and this is where the problem begins: for wolfram, "simple behavior" is what has a non-random representation, but "simple program" is not defined. as i've argued, it could mean either programs that are more compressible than others, or programs that are not lengthy. the first is obviously not the case, because it implies chaitin's conclusions, which are diametrically opposite to wolfram's. the second option is just too naive and context-dependent to be of any use. so what exactly is a "simple program"? the same argument applies to "complex", when "complex" is seen as the inverse of "simple". i guess what i'm driving home is that while simple and complex representations in ANKOS have a semblance of a definition (in terms of how close or how far the representations are from being random), simple and complex programs have no definitions, as i've argued.

cheers

rej
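rej's "naive length" point can be made concrete. The following is an editorial sketch, not from the thread (all function names are mine): an elementary cellular automaton such as rule 110 takes only a few lines of plain Python, yet that length depends entirely on what the host language supplies for free.

```python
def ca_step(cells, rule):
    """Apply an elementary CA rule number to one row of 0/1 cells.

    Bit k of the rule number gives the new state for the neighborhood
    whose (left, center, right) bits encode the value k. Boundaries
    are held at 0.
    """
    padded = [0] + cells + [0]
    return [
        (rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def run_ca(rule, width=31, steps=5):
    """Evolve a single seed cell under the given rule; return all rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        rows.append(row)
    return rows

# Rule 110 (Wolfram's candidate for universality), seeded with one cell:
for r in run_ca(110, width=15, steps=5):
    print("".join("#" if c else "." for c in r))
```

The same few lines run any of the 256 elementary rules (e.g. `run_ca(30)`), which is exactly why "short in some language" is a slippery measure of simplicity.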

Re: what science?
posted on 06/11/2002 3:18 PM by tomaz@techemail.com

Rej!

What are you talking about?

The Kolmogorov complexity of ANY data is the length of the shortest possible program that produces this data.

And yes - a short program is always a simple program according to Kolmogorov, because there is a short program which produces it. A direct "printf PROGRAM" does it.

Say it is not true, if you dare! ;)

- Thomas
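Thomas's "printf" argument is the standard upper bound in Kolmogorov-Chaitin theory: any string s is produced by a literal program of length len(s) plus a constant, so no string has complexity much above its own length; compressibility asks whether a far shorter program exists. An editorial sketch (function names are mine), using zlib as a crude, computable stand-in for algorithmic compressibility:

```python
import random
import zlib

def literal_program(s: str) -> str:
    """The trivial 'printf'-style program that outputs s verbatim."""
    return f"print({s!r})"

def compressed_size(s: str) -> int:
    """A rough, computable stand-in for algorithmic compressibility."""
    return len(zlib.compress(s.encode()))

regular = "01" * 500                     # a short rule generates this
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))

# The literal program bounds K(s) by len(s) + O(1) for both strings,
# but only the regular one admits a much shorter description.
print(len(literal_program(regular)))
print(compressed_size(regular), compressed_size(noisy))
```

The gap between the two compressed sizes is the whole dispute in miniature: both strings are "simple" under the printf bound, but only one is compressible.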

Re: what science?
posted on 06/11/2002 7:17 PM by abhirej@netscape.net

thomas

things are not that simple. if they were, then how come $\pi$ is simple for chaitin while $\pi$ is complex for wolfram? (direct quote from "the unknowable": "in my view $\pi$ is not all that complex, but to wolfram, it's infinitely complex, because it looks completely random" (p. 107).)

my claim stands.

cheers

rej

Re: what science?
posted on 06/11/2002 7:27 PM by abhirej@netscape.net

thomas

an afterthought: the definition of algorithmic compressibility (in terms of shortest possible programs, etc.) leads to the claim made by chaitin about $\pi$ being simple. wolfram does not seem to agree that $\pi$ (or any other transcendental number, for that matter) is simple. the two definitions of simple programs i presented (naive minimum length versus kolmogorov-chaitin) do not coincide. for the aforesaid reasons, wolfram's premises are different from k-c. if that's the case, the only recourse is naive minimum-length arguments, which, you'd agree, are fairly trivial and useless. the basic point of my post was that definitions are lacking in wolfram; this just illustrates the case. if you believe otherwise, kindly back it up with a more exhaustive explanation.

cheers

rej

Re: what science?
posted on 06/12/2002 4:42 AM by scottwall5@attbi.com

Much of your argument against Wolfram's book reminds me of a scene in the play Inherit the Wind in which Clarence Darrow is expected to defend John Scopes without any reference to the theory of evolution. You say that any argument against your 'daily' criticism must be framed in precise technical terms, terms that you have largely restricted to your own specialty. Even this approach to criticizing Wolfram's book suggests to me that you have largely missed the point of his discussion.

If I understand what Wolfram is trying to say, and I suspect that I am beginning to, his point is that science based on precise definitions and proceeding through a lattice of deductive reasoning has largely failed. He even points out that extended logical arguments are basically only symbolic cellular automata and subject to the same limitations.

By staying away from overly restrictive definitions and simply accepting simplicity and complexity for what they seem to be, he frees himself to study the progress of these objects as if he were observing life forms. His science is, admittedly, more like art than science, but that is more to the point.

When Wolfram says that he is expounding a new 'kind' of science, he means literally that: not that he is proposing a superior kind of science or an inferior kind of science or a new branch of science, but that he is proposing, literally, a new way of undertaking the enterprise. If Wolfram had started off by making the precise definitions that you are demanding, he might eventually have been found to be in contradiction with the method he is ultimately proposing. That is, if I understand his method, and I suspect that I am beginning to.

Has anyone ever figured out for sure what Machiavelli had in mind when he wrote The Prince? Was it a satire or literally a guidebook for statehood? Did Jonathan Swift really expect people to eat their children? When an obviously accomplished person writes a book whose methodology eludes you, you need to be very cautious in how you interpret it.

I am the last one to make an appeal to authority, but I find it difficult to believe that the creator of a program with the syntactical depth and complexity of Mathematica could have overlooked the possible need for precise definitions. Certainly, he had someone review his book, and that someone noticed the lack of precise definitions. It is on this basis that I am willing to consider that he may have something else in mind. I'm not sure yet exactly what it is that he has in mind (perhaps it is what a few paranoids are undoubtedly suspecting: merely a way to boost Mathematica sales), but I am taking my Prozac and lithium carbonate and hoping for the best. Who knows, maybe we are seeing the degeneration of another Howard Hughes. As Wolfram might say, I strongly suspect that this is not the case.

As Thomas has pointed out, you need to give his ideas a longer look.

Re: what science?
posted on 06/11/2002 2:56 PM by tomaz@techemail.com

On the contrary! Did you know that a pattern exists inside Conway's Life which produces prime numbers!?

Very simple rules: a live cell survives if it has 2 or 3 live neighbors, a dead cell is born if it has exactly 3; otherwise a cell dies (or stays dead).

From this, a prime number generator pattern is devised!

- Thomas
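The rules Thomas describes are Conway's Game of Life. An editorial sketch of one update step, on a sparse set of live (x, y) cells (the prime-generating pattern itself is far too large to reproduce here; the function name is mine):

```python
from collections import Counter

def life_step(live: set) -> set:
    """One generation of Conway's Life: a live cell survives with 2 or 3
    live neighbors; a dead cell is born with exactly 3; all else dies."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates with period 2 under these rules.
blinker = {(0, -1), (0, 0), (0, 1)}
print(life_step(blinker))                            # the horizontal phase
print(life_step(life_step(blinker)) == blinker)      # → True
```

Despite the three-line rule, patterns built from gliders and glider guns can compute anything a Turing machine can, which is why a prime-number generator is possible.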

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/09/2002 4:56 PM by scottwall5@attbi.com

I seem to recall On the Origin of Species having all the problems currently ascribed to Wolfram's book. In a vacuum of knowledge about genetics, and with so many bewildering obstacles to overcome, Darwin's new mechanism was barely discernible. Was Darwin mistaken in writing the book?

I agree that Wolfram is at times quite pompous. I work with so many wise, intelligent, and generally correct people who have that particular character flaw that it goes right past me.

Wolfram may be wrong, but the book was not a mistake.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/12/2002 5:13 AM by asimov@msri.org

I cannot comment on the future scientific potential for cellular automata, but I do want to defend the use of mathematics in physics and other areas of science.

A huge number of questions that scientists and engineers want to answer about the universe are more than adequately approximated by pretending that space and time are mathematical continua rather than the discrete objects that quantum mechanics dictates.

For these purposes, the use of calculus, differential equations, linear algebra, statistics and probability, differential geometry, and other branches of math as they now exist -- in combination with mathematically formulated laws of science -- has been extraordinarily successful in answering questions that have yielded enormous CONCRETE advances in science and technology.

These include detecting and finding cures for diseases, building skyscrapers, making computers, getting airplanes to fly, and manufacturing new materials.

To claim that mathematics is a failure because, for example, it cannot at present predict much about the stock market or, say, a hurricane's future is like saying the telescope is a failure because it cannot penetrate past the known universe.

OF COURSE there will be calculations which are so complicated that our best computers cannot complete them in a reasonable amount of time. Not to mention the fact that even if our computers were fast enough, we are still unable to collect the requisite amount of data for the calculation to be especially accurate.

In the case of a hurricane, this would be the position, velocity, temperature and electrical charge of the air and water, at a very fine spatial and temporal scale. There may even be a macro "Heisenberg uncertainty principle" at play here, since the very collection of the data might alter what is being observed. (This is certainly true with regard to measuring the physical state of the human brain.)

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/12/2002 5:43 AM by scottwall5@attbi.com

I don't disagree with a single word you've said. No one could dispute that even Wolfram's constructions have been made possible by the very mathematics you describe. However, I get the impression that in addition to proposing a new 'kind' of science, Wolfram is proposing a new 'kind' of mathematics. His new mathematics seems to revolve around the idea of creating the axioms and simply letting the proofs play out. Of course, this is not really a new idea, though perhaps his exact approach to it is.

Wolfram seems to believe that we can get further faster by, in a sense, letting go...

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/13/2002 7:24 PM by alanw@dtl.nos.pam.co.nz

I'm glad to see more comments now from people who have actually read the book. I guess I'm prejudiced in its favor because I have followed a similar trajectory on a much lower plane from hard science research through computing to business.

Personally, I find most of his arguments convincing and very interesting, though many are so far just suggestive rather than conclusive.

Regarding his egotism, one can be sure that it will impel opponents to publish refutations asap - if refutations exist. I have yet to see any.

He does not dismiss existing mathematics. He is saying it is inappropriate as an underlying explanation for many natural phenomena, though it does give a satisfactory account at a macro level in many circumstances.

I don't agree that there is nothing in the book that is already useful. Kurzweil's review itself clearly refutes that claim.

On the matter of genetics and evolution, I was impressed by Kurzweil's criticism before I read Wolfram, but now I find it less compelling. Time will tell, but I tend to think Wolfram has the better position.

Alan Wilkinson

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/14/2002 10:23 AM by scottwall5@attbi.com

Yes, reading the book really helps.

I think I have at last found a specific answer to the question on the definition of complexity. On page 58, bottom paragraph, Wolfram says, 'But to what extent is it possible to define a notion of complexity that is independent of the details of specific methods of perception and analysis? In this chapter I argue that essentially all common forms of perception and analysis correspond to rather simple programs.'

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/14/2002 10:27 AM by scottwall5@attbi.com

Oops, sorry. That's page 558.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/15/2002 5:58 PM by scottwall5@attbi.com

I'm a little uncomfortable with Wolfram's discussion of randomness in chapter 7. Specifically, why does he not consider the Heisenberg uncertainty principle, or something like it, as a source of continuously injected randomness? In the case of his automata, it could manifest as blocks simply changing their choice of color for no apparent reason. Such an occurrence would, of course, be counterintuitive (it is the 'God playing dice' mechanism that Einstein was so uncomfortable with), but it is nevertheless implicit in quantum mechanics. I am guessing that at the level of granularity Wolfram is interested in, he considers the uncertainty principle to be merely another consequence of intrinsic randomness, but I can find no specific reference. Has anyone found a specific reference that might clarify this issue? Is there some obvious external explanation?

Also, if there were any randomness in the system (the universe) initially, could this not account for the indefinite appearance of the randomness amplified through the kneading process he describes on page 307? Am I missing something?

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/15/2002 11:00 PM by claire@cthisspace.com

Wolfram is trying to make people see something similar in a different way. And it will work.


Claire

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/16/2002 1:14 AM by scottwall5@attbi.com

A year or two from now, when I'm a little closer to figuring out what Wolfram is actually saying, I'll be ready to take on issues like that.

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/17/2002 4:04 PM by lanceamiller@starpower.net

In his discussion of Wolfram, and elsewhere, Kurzweil both states his belief that PATTERNS are the real unit of existence, and refers to the notion of COMPLEXITY for various reasons.

I would appreciate his elaborating on this notion of PATTERNS, for I do not yet understand it, and I think it may well require an elucidation of his concept of COMPLEXITY.

For example, PATTERNS are definable both in spatial and in temporal terms. Presumably, spatial patterns occur in some kind of multi-dimensional feature space. Also, presumably, it is useful to think of patterns as differing along some dimension of COMPLEXITY -- e.g., simple linear alternation of the values of one feature is not high in complexity, while the production of drawings of all human faces is more so.

So, then, some of my questions:
1. What constitutes a pattern vs. a non-pattern? How is one sure that, given a finite measurement, it does or does not contain a "pattern"? Or is the recognition of a pattern solipsistic -- one man's noise is another's pattern?
2. What are the alternative ways in which patterns may be defined? (e.g., by a summarization formula, by a brief verbal description, by matching a stored image, ...)? Is this set closed?
3. How do the definitions depend on whether the pattern is a temporal, a spatial, or a mixed one?
4. Is it meaningful to talk about "patterns of patterns" as a pattern? (e.g., the pattern of one drum rhythm against another and both against a pattern of tonal melody)
5. Is it meaningful to talk about the "evolution" of patterns, in the sense of ascribing their appearance to some underlying dynamics?
6. If it is meaningful to talk about a "pattern of patterns" then some notion of complexity has been invoked. What is an appropriate definition of complexity re patterns? In particular, how does he feel about Coveney and Highfield's ("Frontiers of Complexity", 1995) definition: "... complexity is the study of the behavior of macroscopic collections of such units [many basic but interacting units, be they atoms or molecules ...] that are endowed with the potential to evolve in time" [p7]?

Many thanks. ... Lance Miller

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/17/2002 11:36 PM by scottwall5@attbi.com

I think I understand your concern, though it is difficult to explain.

For a pattern to exist, it seems that there must be something to interpret it. But the existence of an 'interpreter' implies something like the homunculus that John Searle wants so much to dismiss from his concept of consciousness. If we admit to the existence of such a homunculus, then the pattern is really in the homunculus and not in whatever the homunculus is interpreting. Hence, we have explained nothing.

Suppose that we have a long thin straight wire that is utterly unbroken and has absolutely no pattern in it. Now imagine an invisible curved surface that bends back and forth, something like a wave, and intersects the wire in a complex pattern. This pattern of intersections now represents a pattern in the wire. Now suppose that there is nothing (such as yourself) to imagine the invisible surface. Does the pattern in the wire still exist, or do we now have nothing but a long thin patternless wire?

If you think about it, there are no patterns in anything that do not have the same fallacy as the invisible plane intersecting the wire. We can always bend our perception in such a way as to create or demolish any supposed pattern.

Avoiding all the esoteric language and complex arguments that people employ, the pattern concept simply doesn't work. In truth, I am beginning to realize that none of the complicated syntactical structures that people employ really explain anything. We are no closer to a real understanding of our reality than primitive rock-throwers were. Now, we throw much smaller rocks (we call them atoms) and we have more complicated rules to describe their motion (we call them mathematics) but none of that really amounts to a hill of beans.

Am I close?

Of course, I have basically pointed out the reason why hard science takes the position that it does. Hard science does not try to explain anything; it simply models. If a particular model gives predictable results, the model is retained. It seems like Wolfram's automata model might be as good as any other, but it needs to be investigated much further. As many have pointed out, we need to find something testable about it.

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/18/2002 3:40 AM by tomaz@techemail.com

What we are observing are the bit togglings.

No bit switching - no observation.

At least some bits in our head should change their state, for us to see that there is not much change going on.

The pace of time is somehow connected to this bit write/erase count.

That is another reason why I like those CAs.

- Thomas

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/18/2002 10:16 AM by grantc4@hotmail.com

>Now, we throw much smaller rocks (we call them atoms) and we have more complicated rules to describe their motion (we call them mathematics) but none of that really amounts to a hill of beans.

But look what we've been able to do with it! Look at the civilizations that have arisen from this ability to divide things into smaller and smaller segments and turn them into something new. What started as a rock becomes a temple, a building, a skyscraper. A polished rock becomes a marble floor or a marble bathtub. It is our ability to recognize the patterns of usage in the things around us that gives us control over them.

Until we divided a plant into leaves, stem, flower, and root, we couldn't use the leaves for tea, the stem for firewood, the flower to scent the air, and the root as a medical treatment. Sure, the divisions were created by us, and I'm quite sure the plant is not aware of them, but this ability to divide and conquer is the foundation of our culture and civilization.

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/18/2002 12:15 PM by tomaz@techemail.com

I am sick and tired of those Capra holistic guys who keep saying that "we shouldn't divide, but see as a whole."

It's a well packed bull, nothing more.

You are right Grant, as in about 90% of the time. When not - I am. ;))

- Thomas

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/18/2002 4:45 PM by scottwall5@attbi.com

>But look what we've been able to do with it!

I'll even add to that. I really love my TV, my computer and my car. I especially love my car. All of these things are a result of what we have been able to do. But you notice that your emphasis is on the word 'do'. We can do things that our ancient rock-throwing ancestors could not have imagined and that they might, upon a cursory examination, have taken for real understanding.

If Aristotle could be brought forward in time, he might be impressed for about a day. After that, he would see a rock for a rock and perhaps hold us in contempt for our complacency. Of course, his judgement would be unfair: he would not immediately understand the reasons for our resignation to the scientific method.

> It's a well packed bull, nothing more.

It's a piñata!

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/18/2002 5:10 PM by grantc4@hotmail.com

Even in Aristotle's day, the Greeks and Egyptians were able to do some interesting things with rocks. It's what you do with what you have that makes the difference between what we were and what we are. Without that, there's no difference between us and the chimps or the microbes.

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/18/2002 8:54 PM by scottwall5@attbi.com

>It's what you do with what you have that makes the difference between what we were and what we are. Without that, there's no difference between us and the chimps or the microbes.

So let's take that to its logical conclusion. Suppose, for the sake of argument, it turns out that Wolfram's theory is correct. Since rule 110 appears to be universal, let us even assume that we are just rule 110. Automata cannot go below the level of their own programming, so that would be the end of science. Now, let's throw in some of the other predictions that are being made. In thirty years, the singularity is achieved. If we are lucky (who knows), we may end up in the optimal position: we are commanders over electronic gods with perfect obedience. They can make or do anything we wish. There is nothing more for science to do, but Gödel has guaranteed that there will be no end to the mathematical theorems we can prove, that is, until we run out of RAM. What a waste of good RAM! However, taking Wolfram to heart, it really only amounts to the unpredictable behavior of automata. What do we do?

Do we spend endless hours, days, years, millennia in simulations satisfying hedonistic fantasies? Of course, direct stimulation of certain centers of the brain would make those fantasies much more intense than anyone can imagine. They would certainly become addictive; I can't imagine how they would not. Even if not, why would any rational human resist? After all, the machines can keep us running in perfect order indefinitely. Nothing we do in that capacity should cause permanent damage. No need to worry about someone sneaking up on us while we are indulging in one of our perfect fantasies: the machines can protect us much better than we could ever protect ourselves. No need to worry about growing old: aging would be the first thing to go; the nanobots would see to that. Do you see where I'm heading with this? Are we still diverging from the aforementioned chimps?

There's got to be more than rocks!!!!!

Re: Wolfram -- End of Science
posted on 06/19/2002 7:16 PM by alanw@dtl.nos.pam.co.nz

We need to keep in mind the distinction between Engineering and Science which Wolfram also notes frequently. Engineering relies on useful approximations restricted to predictable outcomes at the macro level. Science tries for exact models of fundamental components.

Even if space-time/matter is proven to arise from some simple CA rules, predicting and controlling the consequences of complex life systems may remain difficult, with CA mathematics being just another tool in the armory - sometimes applicable and sometimes not, depending on the macro environment under consideration.

Alan Wilkinson

Re: Wolfram -- End of Science
posted on 06/19/2002 10:16 PM by scottwall5@attbi.com

I see your point: there may always be the potential to build a better mousetrap. I might add, however, that all of the things I have alluded to are considered by many (not all) futurists to be reasonable extrapolations of current engineering principles.

I am among those who think Wolfram could be right. However, if we, as a species, cannot find it within ourselves to believe that there is something 'deeper' under the rules of those CA's, my prediction is likely to come to pass. Fifty billion to ten trillion years is a lot of time to kill!

I personally think the bridge to that deeper belief is consciousness, but Thomas, Grant, and I have already gone around about that ;)

Re: Wolfram -- End of Science
posted on 06/20/2002 3:06 PM by scottwall5@attbi.com

Whew... I'm glad that spam attack got cleared out! Don't worry, Thomas and Grant, I saw your comments before they got deleted with the spam. I agree with both of you: educate people, don't frighten them. Now, where were we?

Re: Wolfram -- End of Science
posted on 06/20/2002 5:14 PM by grantc4@hotmail.com

As far as I've read in Wolfram's book, all the CAs seem to be two dimensional. We still have two more dimensions to consider and write rules for. Of course, he may get into that later in the book, but in any case, no rule can ever encompass the totality of our existence. Rules only extract the essence of something. Mathematics, for example, reduces language to a subset of all we can talk or write about. But there is a lot more to what we can express than what is contained in mathematics. I think the same applies to CA. There is a lot more to the world we live in than can be expressed with CA.

Re: Wolfram -- End of Science
posted on 06/20/2002 5:47 PM by tomaz@techemail.com

> But there is a lot more to what we can express than what is contained in mathematics.

I don't think so.

> I think the same applies to CA.

Not if rule 110 - or any other - is a universal Turing machine.

- Thomas

Re: Wolfram -- End of Science
posted on 06/20/2002 6:33 PM by scottwall5@attbi.com

> As far as I've read in Wolfram's book, all the CAs seem to be two dimensional. We still have two more dimensions to consider and write rules for. Of course, he may get into that later in the book...

As a matter of fact, he does deal with higher dimensions (starting with page 169). Also, when he deals with networks (starting on page 193), he points out that they can effectively represent any number of dimensions.

>> But there is a lot more to what we can express than what is contained in mathematics.

>I don't think so.

I just don't see how you could think that. I once read a line: 'There is nothing deader than a page full of equations. The trick is waving a magic wand and bringing them to life.' Also, think about this: how does a cell know what to do when its turn comes up? Why does it not suddenly decide that it wants to be a rule 30 automaton instead of a rule 110 automaton? There's something missing from our whole perception of reality. I get the impression that if we could somehow step outside of our reality it would be obvious, but of course, I can't imagine what that something is. It would be a cheat to say that it is God or something like that, but I get the impression that if there is a God, he/she/it can see the thing I am referring to.

Re: Wolfram -- Kurzweil's "Patterns" ??
posted on 06/24/2002 10:05 AM by dennis@amzi.com

The biggest problem with "all is one" is that we can't directly talk or consciously think about it. The very nature of language, used for both, is to divide.

Even the phrase "all is one" implies its opposite. "All" implies a collection of things.

We can only think and talk around the outskirts of that idea. The only way to be immersed in the idea is to abandon language and conscious thought.

So they say "think of the whole"; well, not with the way our brains are wired.

Dennis

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 1:36 AM by scottwall5@attbi.com

I have come up with a way to explain the whole universe. It is based on Wolfram's automata, and inspired by some of his other ideas.

I am going to use his rule 110 as a kind of analogy, though the connection is somewhat abstract.

Suppose the universe began with no rules whatsoever. I do not mean that it was random, but that it literally had no rules. It was not empty, or full of matter. It did not have zero dimensions or many dimensions. It did not contain consciousness or a lack thereof. It simply had no state whatsoever. I use the term 'began' somewhat loosely, since a beginning would itself imply the rule of chronological time.

Now, we subject this rule-deficient universe to an application of logic. The logic I speak of is not our logic or any particular branch of logic, but THE logic. It is the logic that is to our logic what the largest cardinal number is to countable infinity.

In its initial state, without any rules whatsoever, the universe is analogous to rule 110 with a random initial setting. When this ultimate logic is applied to it, it begins to evolve much as rule 110 evolves. Immediately, there is a great deal of interaction as different aspects of the totally rule-deficient universe unfold. Just as the automata, in a sense, fight each other to try to settle into a regular pattern, the different aspects of the universe fight each other to settle into a regular pattern.

There are contradictions that have to be worked out. Exactly what these contradictions are may not be understandable in terms of our logic. However, once again, there are useful analogies. In this case, the analogies come from common experience. For example, a space cannot be both full and empty. There cannot be both an infinite number of dimensions and none at all. As I stated, these are only analogies: we cannot know for sure what will and will not be permitted within this ultimate logic.

Eventually, the universe starts to settle into a regular pattern. This pattern is not a result of the application of rules, but the result of pure exclusion. In a sense, it is like a giant proof by contradiction. Everything assumes the only form it can that is not in contradiction with everything else. Though the universe tries to settle into a regular pattern, there is no guarantee that every issue can be resolved. These issues are analogous to structures in automata that continually shift between two or more shapes. A real-world example might be wave-particle duality. Neither the wave nor the particle satisfies the ultimate logic, so photons switch back and forth from one state to another depending on how they are scrutinized. Another subtler example is the simple movement of matter. When an object moves from point A to point B, it does so because to remain at point A would constitute a contradiction. Everything happens because of exclusion. Everything from the motion of the smallest apparent particle to the formation of a great civilization happens because any other event would constitute a contradiction.

Now, here is the interesting thing. In Wolfram's treatise on automata, he explores the concept that a logical argument is essentially just the evolution of a symbolic cellular automaton. So, in a sense, since the evolution I am describing is the evolution of pure logic, we are not seeing something that is analogous to automata but, in fact, actual automata.

The beautiful thing about this model is that it relies on nothing that would not, in some limiting case, have to be true. Therefore, the model must be correct. It is even testable, though a test would probably be very difficult to devise. The test would involve setting up simulations of various aspects of the universe and demonstrating that any other construct would lead to a contradiction.

What do you think Thomas, do I get a Nobel prize?

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 3:08 AM by tomaz@techemail.com

I don't know if it will be you who gets the Nobel prize for this.

Because this is almost exactly the way I've understood Smolin.

I would almost bet that it is something like that.

Sounds right to me! ;)

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 3:20 AM by tomaz@techemail.com

Sometimes I just look through my window and see a screen, in accordance with the holographic principle.

There is no difference between what is happening behind the screen - or on the screen. It's also arbitrary where you put the screen. So I put one on my window. And my flat window becomes the quantum computer screen, emulating the Universe behind.

The name of the man who will get the Nobel prize for this is 't Hooft. (A strange name, but a well-known person.)

Now, it's very conceivable to me that rule 110 governs the screen's picture.

We might be close already. ;)

- Thomas




Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 4:59 AM by scottwall5@attbi.com

I had a suspicion I wasn't the first one to think of it. It's really hard to think of anything original any more. My great experience using the Internet has been that it is impossible to think or feel anything that someone else has not turned into a major cause. I hold that observation itself as evidence that we are on the verge of evolving into something new.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 6:27 AM by scottwall5@attbi.com

I might add something to this model of the universe.

Wolfram's automata provide the element that has always been missing from any such conception of how the universe may have unfolded. There is a natural tendency to think that a rule-deficient universe subjected to the ultimate logic must evolve into a rather bland and uniform state. Wolfram's automata show that this need not be the case. Just as rule 110, when started from the simplest possible condition, leads to a complex structure, the rule-deficient universe, subjected to the ultimate logic, could develop into all of the myriad complexities we are witness to.
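The claim above, that rule 110 run from the simplest possible initial condition produces a complex structure, is easy to check for yourself. Here is a minimal Python sketch (my own illustration, not code from Wolfram's book; the grid width and step count are arbitrary choices):

```python
RULE = 110  # the rule number's bits encode the next state for each 3-cell neighborhood

def step(cells, rule=RULE):
    """Advance an elementary cellular automaton one step on a fixed-width row."""
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood value 0..7; cells beyond the edges are treated as white (0).
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        idx = (left << 2) | (cells[i] << 1) | right
        out.append((rule >> idx) & 1)
    return out

def run(width=63, steps=20):
    """Start from a single black cell and return the full history of rows."""
    row = [0] * width
    row[width // 2] = 1  # the simplest nontrivial initial condition
    history = [row]
    for _ in range(steps):
        row = step(row)
        history.append(row)
    return history

for r in run():
    print("".join("#" if c else "." for c in r))
```

Even these few lines already show the irregular, leftward-growing wedge from Wolfram's pictures.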

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 11:17 AM by grantc4@hotmail.com

To my mind, there are no rules to the universe. There are patterns or regularities that we discover and make rules to describe. Sometimes our descriptions are accurate and sometimes they aren't, but the rules only exist inside us. The patterns reflect our perception of the universe, not the universe itself. Every time some event can be likened to some event we've observed before we see the possibility of a pattern. But a lot of what we think is the same thing really isn't. Some things just coincidentally resemble each other to our eyes.

To the color-blind man a green light looks the same as a red light. The man who can see color makes a rule that says "If the light is green I can go." The color-blind man makes a rule that says, "If the top light is on I must stop. If the bottom light is on I can go." Both rules solve the problem of deciding when to stop or go, but they are based on different perceptions. The light itself is unaware of the rules being used to obey it. Neither rule is an apt description of the light and what it is doing.

If the rules are being made by beings who see a different part of the light spectrum, their rules will be different from ours. If they grew up on a planet where water was always solid until someone melted it, their perception of water would not be the same as ours. The rules they would make to describe water and life would no doubt be different from the ones we made to help us cope with the world in which we live.

Our rules are based on our perceptions of what is happening in the world around us. Considering the size of the universe and the number of things in it we haven't even dreamed of yet, we can't really say the rules we've made to describe it apply to anything more than what we've been able to see thus far. They certainly don't describe everything about it.

That's why we have to keep changing the rules we make. When we observe new phenomena (new to us) we either have to adjust an old rule to include it or make a new rule to explain it. In either case, the universe is not obeying any rules -- it isn't even aware of them. And the rules we make only help us use the regularities we've observed to control our own destiny, not that of the universe.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 2:22 PM by scottwall5@attbi.com

>To my mind, there are no rules to the universe. There are patterns or regularities that we discover and make rules to describe.

I agree completely. Also, very well explained. Actually, this discussion has resolved many of the dilemmas that have lingered in my mind, some never quite in focus, as to how the universe could exist at all.

It is also clear that Wolfram's toilings are anything but useless. I wouldn't have thought of the universe-by-exclusion idea if I hadn't been reading his book. Someone else reading it is bound to think of something useful ;)

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/21/2002 7:33 PM by scottwall5@attbi.com

This will make everyone equally mad at me, and quite justifiably so, but I just couldn't resist.

Consider this line: 'In the beginning was the word and the word was with God and the word was God.' God is the aforementioned, ultimate, immutable logic! They are one and the same!

Well...now that I've explained everything to everyone's satisfaction, I can go back to reading my comic books.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/22/2002 4:44 AM by tomaz@techemail.com

In 1495, Montezuma expected a white-bearded man to come from the east to finish his empire and himself.

In 1496, Pizarro came and fulfilled the prophecy.

A pure coincidence, if you ask me.

Now, to expect that God was the initial seed "word", then processed by rule 110 ... I don't know! ;))

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/22/2002 2:57 PM by scottwall5@attbi.com

Quite true.

However, consider this. The avenue through which we arrived at this idea creates the impression that the ultimate absolute logic that dictated the unfolding of the universe must, in some sense, be cold and sterile. And yet, if the universe we see appears to contain the potential for love, justice, beauty, goodness and many of the other high ideals we hold so dear, those things must have been inherent to the initial conditions. Our natural inclination is to think that if these things exist they must have been created. Yet, if they are actually part of the ultimate absolute logic, the universe could not have unfolded any other way. Maybe it is not the case that we suppose these things to be true because we want to believe in them; maybe we want to believe in them because they are true. Another thing: it seems to me that there is not much real difference between the word 'logical' and the word 'true'. If you substitute the word truth where I have used the word logic, you have something very comparable to a biblical testament.

Well, I am not a theologian; nor do I wish to become one. The real theologians will probably either shake their heads in dismay or laugh at the audacity of some novice reproducing an idea that was invented and discarded in some dusty old book they are all privy to. Besides...I don't think God likes this idea ;)

I only wanted to determine, through some attempt at application, if Wolfram's ideas have any real potential. In my mind, I have shown that they do, so I will continue reading.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/09/2002 5:26 AM by syed_hasan_murtaza@yahoo.com

>Consider this line: 'In the beginning was the
>word and the word was with God and the word was
>God.' God is the aforementioned, ultimate,
>immutable logic! They are one and the same!


On that note, how about this (the 'declaration of faith' for Muslims):


There is no creator except The Creator.

(more commonly translated as there is no god except Allah.)

How about that rule for explaining all that exists in the universe? Mystics have claimed great things about that declaration above.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/10/2002 12:51 AM by scottwall5@attbi.com

>There is no creator except The Creator.

Actually, I was only making an idle observation, but my thinking on that particular idle observation has advanced somewhat.

There are three things that bother me about the concept of cellular automata being the basic algorithm of the universe. The first is that there has to be some initial condition. The second is that a rule must be chosen. The third is that something has to make the universe follow the specified rule ad infinitum. If we really wish to explain the universe, we must come up with a rule that works independently of any particular physical representation. The only candidate for such a rule, to my knowledge, is logic. If we make the not entirely unreasonable assumption that logic exists independent of physical representation and that it is the only primordial construct, then the controlling mechanism of the universe must be logic. The universe will obey nothing that is either inconsistent with logic or subsidiary to logic. It will always do exactly what logic dictates.

Now, imagine the universe as being somewhat like an automaton grid that has not yet been programmed. It does not have an initial state, it does not have a rule to follow, and it does not have anything, such as an electronic machine, to make it follow that rule. Somehow, these three things need to be implemented. It is tempting to think that if no choices are made, the board will automatically be blank, but this is a misconception. A blank board is as much of a choice as a full one or any other variation.

To this undecided board, we introduce logic. I use the term 'introduce' somewhat loosely, since it must be assumed that logic, if it is the primordial construct I am taking it for, must have been present from the start. The law of the excluded middle demands that the color of each and every square be chosen. In the case of the automaton model, we need only consider the choice of black or white. Some may argue that multi-valued logics invalidate the assumption that choices 'must' be made. I suspect that multi-valued logics, though useful analytical tools, do not have the primordial characteristic that two-valued logic has. I strongly suspect that it is not actually possible for something in the universe to be undecided. If it is, then Einstein, Podolsky, and Rosen were wasting their time writing the EPR paper, and we may as well throw critical reasoning to the wind. Ultimately, it will be seen that I am proposing an alternative to either true indeterminacy or hidden variables.

So, we are stuck with this undecided board in which every square must be either black or white. Choices must be made and something has to choose. In the real universe a great many choices would need to be made, probably an infinite number of choices.

It is tempting to think that the real universe has some primordial state that it will assume by default. Of the three candidates I can think of, the first has already been dismissed by science: that of an empty three-dimensional vacuum with linear time. The other two candidates I can think of are nonexistence and a single dimensionless point. These represent two distinct choices. I tend to doubt that these, or any other choices, are actually primordial. Why should they be if the only primordial construct is logic?

We have a paradox. The universe has to be something, and there is nothing to decide what it will be. It can't just be everything: the law of the excluded middle dictates that it must be something specific. Something has to come into existence to resolve the paradox. People faced with such dilemmas often go insane. Machines faced with such dilemmas often explode. However, in the case of the universe, insanity or exploding would simply be two of many possible choices.

More than just a paradox, we have a loophole in reality: a gigantic loophole that demands to be filled. Into this loophole, we introduce a choice function. This choice function will probably need to be capable of making an infinite number of decisions at an infinite rate. Also, since the choice function's decisions exist for the purpose of breaking the paradox, the choices are, by definition, universal and total: i.e. omniscient and omnipotent. You can think of this choice function, if you will, as the much-sought choice function of set theory that must choose one element from every set. In order to distinguish between choices, this choice function must have motivation, preference, and all the other attributes we associate with consciousness. We may as well call it a consciousness. This 'consciousness' would not be a physical thing, but a transcendental thing: it would be 'the process of choosing'. Since it is, by definition, omniscient, omnipotent, and conscious, we may as well call this consciousness God. Whether it is called the God of the Muslims, the Christians, or some other faith is immaterial.

Now, we have God. The existence of God must have preceded the existence of the human strand of consciousness. God, realizing the etiology of God's own existence, would know how to create more such beings. I am guessing that God would not want to create another God, but subsidiary consciousnesses. The key would be to create a region with limited logical structure: our universe. That is possibly why the universe we inhabit has limited logical discrepancies like the EPR paradox. Those things exist so that our consciousness will have to emerge to resolve them. Penrose has the right idea, but he is confusing cause and effect: quantum collapse does not generate consciousness; it is generated by consciousness, the idea that Einstein wouldn't consider.

I have encountered many efforts to argue the existence of God from basic premises. The efforts of Hegel are very comparable to my own. Unlike Hegel, I did not start from the unstated premise that God must exist and look for a proof. I stumbled upon this construct accidentally while reading Wolfram's book.

Judge for yourself: is this a legitimate argument or merely the mad ravings of a misinformed dilettante?

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 11/23/2002 12:11 PM by Anshu Sharma

In my humble opinion, the author tries to show that-
--there may exist a "Simple Rule" for "Generating Existence".
--and that rule is probably computational (CA) rather than a mathematical equation.

Let's do the following- think of the Existence (I prefer the word over Universe) as a 4-dimensional structure (or n-dimensional if you choose). Now let's assume you could model the Existence as a string of 1s and 0s- a very large string, probably.

Now, here are a few observations:
- Each human (and to be precise, each human life) is a substring in this representation.
- One substring (Stephen, myself, or you) is trying to come up with a rule that describes the entire string. In my case, I simply abstract the string by calling it "Existence", physicists want to come up with a "bunch of equations" whose solution would be this string, Mr. Wolfram predicts that it is a CA that can generate this string, and Mr. Kurzweil thinks it's a combination of equations and CAs.

If you go by the simplest- "Existence" is a good small explanation. All you have to do is first accept it intellectually.

Our human brain (itself a substring) has the somewhat unique capability of finding patterns in the String (Existence) so that we can abstract it and store it in our brains (also called "understanding"). We want to come up with a rule because the String is too large.

Let's take the following example- reduce our problem to considering a "Rose" instead of "Existence".

"Rose"-
I would call it a "Rose", observe it, and realize that I cannot store the entire Rose-string in my brain.

Mr. Wolfram contends that maybe we can come up with a CA such that it will generate the Rose-string pattern.

Some Physicists contend that you can come up with an equation that will generate the Rose-string.

I think the understandable obsession to capture the underlying Rose-rule is just an obsession- there is no proof, as far as I have seen, that shows it is possible to come up with a rule that will generate this Rose-string. Coming up with equations and rules that generate complex Strings is just what it is- rules and equations that generate complex strings. Now if we apply our cognitive abilities to see patterns similar to "Existence"- I think that's a testimony to our pattern-matching ability (we are probably running a Largest Common Substring algorithm!).
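The "Largest Common Substring algorithm" mentioned in passing can be made concrete. Below is a standard dynamic-programming sketch in Python; the "banana"/"ananas" inputs are made-up stand-ins, since the real "Existence string" is obviously unavailable:

```python
def longest_common_substring(a: str, b: str) -> str:
    """Return the longest contiguous substring shared by a and b."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)  # prev[j]: length of the common suffix of a[:i-1] and b[:j]
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

print(longest_common_substring("banana", "ananas"))  # -> anana
```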

Let me put it another way- this Existence string can be mathematically interpreted as an Integer value- say 284,49....537. And that, my friends, is another abstract way of saying I found the underlying "Rule", the underlying "Number", the underlying "Equation" to Existence, that which Is.

Existence Is.

Let's keep matching patterns, especially the useful ones, to improve our lives (which are "pre-determined in 4 dimensions" but not "pre-evaluated in the time domain").

Let's not lull ourselves into a false sense of belief that there is a simple rule. It "MAY" be that the Existence Number I just picked is 2 raised to the power 2002002 minus 1. Does that make it a "Simple Rule" explaining existence?

Or if the Existence Number is the 10-trillionth prime number minus 32: Is that a "Simple Rule"?

Think about it- a "simple rule" for existence already exists- it's what I call the "Existence Number".
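The point is easy to illustrate: a number can be astronomically large while the rule that generates it stays tiny. A throwaway Python sketch (the exponent 20021 is a smaller stand-in I picked so it runs instantly; the 2^2002002 above behaves the same way):

```python
# A 12-character rule that generates a number with thousands of digits.
description = "2**20021 - 1"
n = 2**20021 - 1
print(len(str(n)), len(description))  # prints: 6027 12
```

Whether such a short description counts as an "explanation" of the number is exactly the question being debated here.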

What we really want to find is a rule that is simple enough for our comprehension but complex enough that we can do interesting stuff with it. But what if the Existence Number really is what I just gave you!

-Anshu Sharma

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 11/23/2002 1:04 PM by Thomas Kristan

> Or if the Existence Number is the 10-trillionth prime number minus 32: Is that a "Simple Rule"?

I think it would be.

How many bits are needed to construct this world? Only several thousand?

Wolfram - it seems - thinks so. It's plausible, if you ask me.

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 11/23/2002 11:42 PM by Scott Wall

>I think the understandable obsession to capture the underlying Rose-rule is just an obsession- there is no proof as far as I have seen that shows that it is possible to come up with a rule that will generate this Rose-string.

I agree. The universe may simply exist exactly as it is, and the only underlying rule may be existence in its entirety. The Universe is not small enough for humans to grasp (except with a mathematical abstraction), and the atoms it is made up of are not large enough for humans to grasp (again, except with a mathematical abstraction). Why should we assume that the basic underlying rule is simple enough for humans to grasp? The underlying rule may be as complex as all of existence itself. Moreover, the rules we think we have pinned down may stop functioning tomorrow and be replaced by any number of apocalyptic fantasies--or things may simply fall apart.

Wolfram seems to be saying that he has discovered evidence that any kind of existence can be accounted for by a simple rule. In fact, he has only discovered evidence that any kind of pattern can be accounted for by a simple rule. Existence is another story. Existence is, and possibly always will be, too elusive for our philosophies to delineate. The presence of that quality of our experience that we have dubbed 'consciousness' adds to the perplexity of forming such a construct.

Maybe it is time that all of us came to terms with seven essential ideas.

1. Science will never explain the universe, only model it. Nor has it purported to do otherwise.
2. As far as actual explanations go, the beliefs of a Hindu monk and the beliefs of a Catholic priest have as much validity as the beliefs of a positivist physicist like Stephen Hawking or the toilings of a Stephen Wolfram.
3. We can still continue to model the universe in such a way that allows us to build a more reliable toaster or a faster jet ski, but these are only reliable models and not explanations of anything.
4. Eating, sleeping, taking hot showers, having adult relations, looking at sunsets, giving gifts, and staying alive and healthy for as long as possible are still desirable things even if we cannot prove it mathematically.
5. Blowing someone up or admonishing them because they disagree with us are still undesirable things, no matter what we believe.
6. There is no point in giving up a comforting belief just because it seems like the majority of rational people have rejected it. There is always another layer of explanation that may somehow make that belief seem more plausible.
7. Someone may prove me wrong tomorrow.

If I have offended anyone or seemed ignorant or irrelevant, I apologize. As far as the issue of existence goes, I'm afraid I have come to the point where such questions no longer interest me. However, I am looking for a more reliable toaster!

- Scott

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 11/24/2002 1:25 PM by Anshu Sharma

"As far as the issue of existence goes, I'm afraid I have come to the point where such questions no longer interest me. However, I am looking for a more reliable toaster!"

I don't know how you arrived at the "point", but in many eastern philosophies it's called the point of "Realization" or "Enlightenment". Personally, I have also made the journey from a pure physics/mathematics view of the world to the point of accepting that Existence Is.

I would be interested in how you made that journey. It would equally be interesting to know if Stephen Wolfram and Kurzweil ever paid attention to these ideas of 'realization', and what they think about it- (interesting but not necessarily the gospel truth).

I am still an admirer of patterns, rules, and symmetries, but that's because of how my brain is wired. But to be an admirer is different from believing that Existence is nothing but patterns, rules, and symmetries. (The trees-forest analogy vaguely applies here.)

The beauty of the Existence is that there are so many rules, patterns and symmetries including you and me- and our Consciousness that arises out of Existence (as a pattern probably) and upon death disperses back into Existence.
-Anshu

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 11/26/2002 1:14 AM by Scott Wall

>I don't know how you arrived at the "point", but in many eastern philosophies it's called the point of "Realization" or "Enlightenment". Personally, I have also made the journey from a pure physics/mathematics view of the world to the point of accepting that Existence Is.

I suspect that you are seeing more in what I am saying than is actually there, but perhaps this excerpt from my personal journal will help to clarify my perspective.

'I think I see a pattern of emergence. But, the whole concept of pattern may be a local phenomenon. Maybe pattern emerged...or did something else besides emerge. The evidence is indicating that time is a local phenomenon. That's a strange enough idea. Why couldn't pattern be a local phenomenon? Maybe emergence emerged...but that would be truly impossible! I really don't know anything! I don't even have a case--or really even any evidence. I can make depressing speculations, but they are only worst-case scenarios. They are not, by any means, the most likely scenarios. On the scale of everything, my speculations have a zero percent chance of being correct.'

This excerpt from my journal tells only part of the story. The thing I have begun to realize is that there is no bottom line to human reasoning. The only definite trend in my personal thinking about the universe is that every time I think I have found that bottom line, or a good approximation to it, I am ultimately proved wrong. When you begin to suspect that there is no bottom line to human reasoning, you are freed from the constraints of reason itself--no basis, no structure.

I still have an interest in science, but it is now more like my interest in solving puzzles or playing video games. I have fewer stakes in it.

I am still looking for a better toaster!

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/26/2002 7:30 PM by ponchovia@thomaskilla.com

Thomas is actually Stephen Wolfram. Either that or he is on the payroll.

Either way I'm gonna punch him in the nose. Yeah that's right. You can hear me. Thomas!!!

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/27/2002 10:03 AM by tomaz@techemail.com

What's the matter with you?

Moderator sleeps?

- Thomas

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/27/2002 1:40 PM by youwantsome@thatsright.com

There is nothing wrong with me. I am just doing what any rational person would do: punch you in the nose. Here's an ASCII image of a cellular automaton my computer turned out:

..--...---.....
.----...-......
..Thomas!!.....
...--...---....
..----I.----...
..Must.-.Punch.
..-------You---

Can you see the pattern? It's magnificent!
This definitely disproves the second law of thermodynamics, just like Wolfram said it would. Once I punch you, there will be more order in the universe.

I am the smartest man alive!

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/27/2002 1:42 PM by unbiased@observer.com

I think "youwantsome" brings up a good point.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/27/2002 4:55 PM by fonicowardlireturn@ddresses.common

Please: refutation, not elimination.

T for two
posted on 06/27/2002 5:54 PM by iyavhadenufad@kind.crap

Yea. Debate, don't defecate!

Re: T for two
posted on 06/27/2002 6:01 PM by thp@studiooctopussy.com

Instead of wasting our time faking your email addresses and speaking of what you know nothing about, be gone.

Otherwise I am afraid you are going to be in a lot of trouble.

Re: T for two
posted on 06/27/2002 7:51 PM by ohno!@I'msoscared.org

Was that some kind of threat?

Listen, the point is I know more about this than any of you, and to prove it I am gonna have to start battling each of you in a steel cage match.

I will call myself the Cellular Automaton!

Beware!

Re: T for two
posted on 06/27/2002 8:12 PM by iamsosmart@mathematica.com

Oh, I'm sorry, I should have made a rhyme.

Rhymes are good arguments.

Let's see...
Your arguments lack rationality.
The only recourse is brutality.

How's that, pretty good, hunh?
Now let's get it on!
Thomas!!

Reflections in the Mirror
posted on 06/28/2002 12:18 AM by growth@lil.sizedthing

I think they have what you're looking for here: http://www.natural-penis-enlargement-pill.com/flash.html. Let us know how it works out.

Now, can we continue our discussion!

Re: Reflections in the Mirror
posted on 06/28/2002 2:16 PM by thecelularautomaton@kurzweilai.net

Ok,
I'm really sorry.

I just have a couple of quick questions.

1. Is that the big trouble you spoke of?
Oh no, please don't enlarge my penis!

2. Am I supposed to have been insulted?
Penis enlargements are a very serious matter to a large number of people, as I am sure you are aware, being that you know the URL.

3. Do you understand nothing of science?
The size of my penis is the source of my powers! Enlargement will only make me stronger! It is the enormity of my penis that has allowed me to overtake this web board and best all of you in this battle of wits! I am the smartest man alive!

4. THOMAS!!

Reflation of someone's Sheepdom
posted on 06/28/2002 4:59 PM by whatam@eryoua.er

Witness the dude that found the short line,
Passing up everything, even the sign.


Bye for now. See everyone back here when this minnow gets lost.

Re: Reflation of someone's Sheepdom
posted on 06/28/2002 6:36 PM by mutton@iwon.com

Ok, you should please punch yourself. It's important.
So, am I the one who won't get into line with everyone else, or am I the sheep who just follows the herd?
I am two contrary metaphors at the same time! Witness the awesome power of the Cellular Automaton!
Are these some of the simple rules that make such complexity in our universe? Contradicting truths... super complex!
Very confusing, but I am glad you put your argument in the form of a rhyme, otherwise none of these other people would understand.
The important lesson for the rest of you to learn is that I am the winner. I have reflated this web board!

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 06/29/2002 3:48 AM by disillusioned@observer.com

Just use the solution John Nash employed in A Beautiful Mind. That is, if there is anything more to be said. I don't think there is. I think that is the real reason why we heard from our friend.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/01/2002 11:35 AM by grantc4@hotmail.com

For an in-depth review of Wolfram's book by someone other than Ray, check out the following:
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0503.html

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/06/2002 7:04 AM by lunderwood181408MI@comcast.net

Cellular automata, hmm, it seems a lot easier applying this theory in dimensions.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/09/2002 5:49 PM by kingmobone@hotmail.com

Excellent review of Wolfram's book. I agree with Mr. Kurzweil on all his points, which is refreshing, since I have several programmer friends who have become Wolfram disciples.


If I get more time I will return to read all your responses


BTW the Octave CAT and the PPG Wave still OWN the K-2500 ;)


Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/15/2002 6:39 PM by therealbonelson@hotmail.com

[Top]
[Mind·X]
[Reply to this post]

There is a good review of this book in Physics Today by a physicist from the University of Chicago.

Here's the URL:
http://www.aip.org/pt/vol-55/iss-7/p55.html

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/21/2002 2:12 AM by oprand@yahoo.com

[Top]
[Mind·X]
[Reply to this post]

I didn't read all the comments above, so sorry if I'm repeating something that's been brought up.

Mr. Kurzweil's main criticism of Wolfram's book seems to be that the CA that Wolfram refers back to so often is complex but "not complex enough". Now, maybe Mr. Kurzweil is right, but maybe his definition of complex is incorrect. True, the same triangles and more or less regular (and random) behavior occur throughout the CA. But do you really need to see some more intricate designs in there to account for, say, a human brain? Imagine that sequence written out to a point where the bottom line contains an enormous number of black and white dots (I won't even try to somehow represent that number; I mean something very large). Wouldn't that sequence or pattern be classified as "extremely complex"?

Now, that CA can be drawn out arbitrarily large, so there could be an arbitrary number of those "extremely complex" patterns that would account for a human brain, or whatever, along with all the other stuff in our universe. What I'm saying is: does the random structure of that CA really have to be very intricate at its lowest level to account for stuff in the universe? Can't it just be HUGE and account for everything? So its "computational equivalence" really is all that matters, and the "software" isn't really needed. Actually, I haven't gotten to the chapter about computational equivalence yet, so I'm not entirely sure what it means, but maybe what I said still makes sense :P

/nik

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 07/21/2002 2:14 AM by oprand@yahoo.com

[Top]
[Mind·X]
[Reply to this post]

And I just noticed that this article is over two months old too, so no one will read my comment. But maybe that's a good thing :P

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 08/14/2002 11:11 AM by jackogreen@qwest,.net

[Top]
[Mind·X]
[Reply to this post]

Greetings,

I came across this article on new research results from the Center for Computational Genetics and Biological Modeling at Stanford. It seems to lend support to Wolfram's assertions, as outlined in A New Kind of Science, regarding natural selection's limited role in producing biological complexity (in this case, gene networks). Below is an excerpt and a link to the article:

--Scientists used to think that developmental fidelity evolved via natural selection, principally through survival and reproduction of organisms with redundant genetic systems -- that is, ones with copies of important gene sequences. But Siegal and Bergman's results indicate that redundancy may only be one small manifestation of a bigger theme: the complexity of gene networks. In short, more complex systems are more resistant to change in their outputs.

"It is typically assumed that important properties of organisms are crafted by natural selection," says Dmitri Petrov, assistant professor of biological sciences. "What Siegal and Bergman show is that robustness in the face of mutation, or canalization, may be a byproduct of complexity itself and therefore that robustness may be only very indirectly a product of natural selection."

Says Siegal: "It might be that the complex nature of the genetic system itself is going to give you canalization independent of natural selection. This complexity goes beyond mere redundancy, incorporating all kinds of elaborate connections in the gene network."

That doesn't mean natural selection doesn't play an important role. Continues Petrov: "Natural selection has shaped the genetic networks of complex organisms so that they produce appropriate phenotypes -- the more highly interconnected these networks are, the more robust the corresponding phenotypes are. The importance of this result is that it shifts the focus of the field away from abstract models of natural selection and toward actual genetic networks. In so doing, it will provide a new perspective for analyzing and understanding the current outpouring of genetic data in model organisms." --

http://www.stanford.edu/dept/news/pr/02/bergman87.html



CMR
<--gratuitous quotation that implies my profundity goes here-->

CAs + Quantum Computing Concepts = ???
posted on 09/09/2002 1:06 AM by egh@istar.ca

[Top]
[Mind·X]
[Reply to this post]

--------------------------------------------------------
Quantum Computational Cosmology??
-------------------------------------------------------
On reading Wolfram's book, and in particular the part about physics as CAs operating on
a network to produce space-time, matter, energy, I was prompted to have the following
ideas. Please excuse the lack of rigour. I'm just trying to convey intuitions here
and get some feedback on whether anyone thinks there's promise in this direction
or if there are other references people can point me to.

These questions arise:
1. What would the network of nodes and arcs between nodes
be made of? i.e. what is the substrate of Wolfram's universe network?

2. How do we define the "time arrow" and what makes the universe
as it appears to be?

My essential concepts are these:

----------------
Principle 1
----------------------------------------------------------------------------------------------------
The substrate is simply (all possible arrangements of "differences")
----------------------------------------------------------------------------------------------------
or perhaps put another way "the capacity for all possible information", or if we want
to imagine a "growing" multiverse, it would be
all possible arrangements of "differences" which can be represented
in a bitstring of length n, and n is growing.

The fundament is the binary difference. Each "direct difference" is an arc
and nodes are created simply by virtue of being the things at either end of
a "direct difference".

e.g. The multiverse is
a universe with just one "thing" and no differences (boring) +
a universe with one difference (ergo, two things) +
all possible configurations of two differences +
all possible configurations of three differences + etc.

I believe but am not certain that the number of different possible network
configurations representable with up to n bits is:

2^((n^2 - n) / 2)

Imagine an n x n matrix whose ith row and ith column represent the ith
posited-to-exist "node", with each entry recording whether that node is
directly-different (arc-joined) from each other node.
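As a quick sanity check (a sketch of my own; the function name is made up), this count is exactly the number of undirected graphs on n labeled nodes: each of the (n^2 - n)/2 unordered pairs of nodes either carries a "direct difference" (an arc) or it does not, independently:

```python
def network_configurations(n: int) -> int:
    # Each unordered pair of distinct nodes is either arc-joined
    # or not, independently, so the count is 2 to the number of pairs.
    pairs = (n * n - n) // 2
    return 2 ** pairs

for n in range(1, 6):
    print(n, network_configurations(n))
# n = 3 gives 8, n = 4 gives 64, n = 5 gives 1024
```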

----------------
Principle 2
---------------------------------------------------------------------------------------------------------
If the multiverse is all possible states, "simultaneously", of a length-n bitstring,
then the "time-arrow" and the "actual universe" are defined as an order-producing
"selection" or "view" of a subset of the "potential states" of the multiverse.
--------------------------------------------------------------------------------------------------------

If we imagine the multiverse as kind of holding (or being the potential for)
all possible states of the long bitstring, then you can make a selection
from all of those states. i.e. you can define
U1 = a particular sequence of states of the bitstring.
The word "sequence" rather than "set" is chosen deliberately here,
because my contention is that, of all possible sequences Ui, some
sequences will be "order-producing".

So why don't we just make the bold claim that "the time arrow"
is the direction through state-space from the beginning to the end
of an "order-producing" sequence (U1) of states. And that U1,
the "order-producing" sequence of states, is the "observable
universe".

Why is U1 the observable universe? Because its evolution of states
was order-producing in just the right measure: the right mixture of
randomness and order to yield matter and energy, the rules of physics,
and "emergent behaviour" systems such as intelligent observers.

----------------------------------------------------------
How does this relate to Wolfram's CAs?
----------------------------------------------------------
Well we can define programs as being simply the things which
specify the state transitions from "state i to state i+1" of a sequence Ui
of states of the multiverse.
In (vaguely recollected) Hoare logic terms, my contention is that the
multiverse can be "viewed" as simultaneously (in an extra-time sense)
executing every possible program Pij such that S(i) Pij -> S(j).
Most sequences of executions of programs will be like executing
"garbage" programs on "garbage data", but some sequences of
program executions will produce interesting evolutions of states exhibiting
complexity and order, and emergent behaviour.
A good guess is that some of the "interesting" program executions
may be understandable as, modellable as,
executions of particular universal CLASS 4 automata.
The details are left to the wonks ;-)

---------------------------
--- Summary --------
---------------------------
The multiverse (or substrate for our universe) is precisely
the potential for all information. That is, it is equivalent
to the "simultaneous" exhibition of or capacity for
all possible states of a long and possibly growing bitstring.

The time-arrow of an observed universe is the order of visitation of the
information-states of the multiverse which corresponds to the
execution of "Wolfram-automata" which locally-evolve the configuration
of the individuals and differences of the substrate in such a way as to
generate just the right mix of randomness and order which produces
stable, organized systems and ultimately observers.

With this formulation, we need not assume that there is some
magical, extra-universal "supercomputer" busily computing a
"Fredkin-Wolfram-information" universe for us. Computation
of ordered complexity just falls out, as just being a particular
path through a very large set of information-states all of
which co-exist in the multiverse substrate. The huge, but
unrealized set of information states in the substrate, in
fact IS the substrate. It only becomes (or hosts) an observable
universe when viewed by observers existing within
a set of its states that is consistent enough to be "real".

---------
I'd welcome any feedback. Feel free to be ruthless with this
bizarre and possibly obvious or poorly thought-through concept.

Eric Hawthorne

Re: CAs + Quantum Computing Concepts = ???
posted on 09/29/2002 1:22 AM by scottwall5@attbi.com

[Top]
[Mind·X]
[Reply to this post]

>On reading Wolfram's book, and in particular the part about physics as CAs operating on a network to produce space-time, matter, energy, I was prompted to have the following ideas.

Yes, I like this idea. Arguably, it is the only possible explanation. However, I am guessing that someone has thought of it before, and to my knowledge it is utterly untestable.

But don't feel bad about either observation. No law of reality ever guaranteed that the truth would be testable when someone thought of it, and no law of psychology ever guaranteed that the truth would be so difficult to formulate that few or no persons would think of it.

What we may be discovering here is that the truth is ultimately unknowable, and that we would all be a lot happier if we just stuck with usable, testable engineering principles and built much better toys.

Scott

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/20/2002 8:18 PM by greedy4it@hotmail.com

[Top]
[Mind·X]
[Reply to this post]

Without taking anything away from Wolfram, I feel that his research demonstrates a lot of what we already know; he has just pushed our frame of thinking a little further. I will state it 'as simply as possible, but no simpler': on any level, evolution in itself is inevitable. For the most part I feel that Wolfram's 'cellular-automaton computer' theory is a great tool for visualization purposes. However, I would've liked to have seen the theory pushed a little further, and I look forward to the day that our technology has advanced enough for more in-depth research!

Gregg Zimmerman

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/27/2002 7:02 PM by mkelly@stuart.iit.edu

[Top]
[Mind·X]
[Reply to this post]

I thought that this review provided a good explanation of the first half of Wolfram's book, on CAs and the computation-theoretic analysis of nature, but failed to link this with the notion of computational equivalence, which is essentially the main idea of the second half of the book. That said, I especially appreciated the effort and detail that Dr Kurzweil expended in explaining his points of difference.

However, it is erroneous to directly compare computer-graphical CAs, whose basic elements are colored grid cells, with fully developed organisms, which are constructed from far more complex units such as organic cells, and then conclude that the latter are more complex than the former. This is a classic apples-to-oranges argument, and it confuses the issue of complexity by not sufficiently distinguishing between the basic components out of which the two structures are built.

If we take a functional approach we can see that we have two quite different structures:
CA[graphicalCells, Rules[graphicalCells] ], and
Organism[organicCells, Rules[organicCells] ].
The point that Wolfram is making is that when considered as computational systems
Rules[graphicalCells] and Rules[organicCells] are equivalent as universal systems. But because organicCells are themselves complex structures described by
organicCell[cellElements, Rules[cellElements] ] then the resultant Organism object can be seen to be a hierarchical structure:
Organism[
organicCell[cellElements,
Rules[cellElements] ],
Rules[organicCells] ]

and its supposedly greater complexity derives from viewing the above structure as:
Organism[cellElements, ComplexRules[cellElements]]

But such an approach is erroneous and suffers from a confusion about levels of representation. Since both systems (graphical CAs and organisms) are universal then they can represent anything and so the former can describe the latter, so that the latter cannot be any more complex!

Best Regards

Michael Kelly, IIT.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/27/2002 10:14 PM by azb0@earthlink.net

[Top]
[Mind·X]
[Reply to this post]

Michael,

I agree, and would apply as analogy a "bias" in not recognizing an associativity. Stretching the meaning of functional notation just a bit ...

Begin with Objects-level-1 (Objs1), and Rules-level-1 (Funcs1), and define

Objs2 = Funcs1(Objs1)

Continue with Objs3 = Funcs2(Objs2).

Now, Objs3 = Funcs2(Funcs1(Objs1)). The composition Funcs2(Funcs1()) might be designated "FuncsB". We then have

Objs3 = Funcs2(Objs2), or equivalently

Objs3 = FuncsB(Objs1).

The intermediary complexity can be viewed as either "pushed down" into the argument "Objs2" of Funcs2, or "pulled up" into the function "FuncsB" over Objs1.
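That associativity is easy to see in a toy sketch (my own; the concrete functions are arbitrary stand-ins for the post's Funcs1 and Funcs2):

```python
def funcs1(objs1):
    # level-1 rules acting on level-1 objects
    return [x * 2 for x in objs1]

def funcs2(objs2):
    # level-2 rules acting on level-2 objects
    return [x + 1 for x in objs2]

def compose(g, f):
    # FuncsB = Funcs2 composed with Funcs1
    return lambda objs: g(f(objs))

funcsB = compose(funcs2, funcs1)

objs1 = [1, 2, 3]
objs2 = funcs1(objs1)                    # intermediary level
assert funcs2(objs2) == funcsB(objs1)    # same Objs3 either way
```

Either reading produces the same Objs3; only where we draw the object/function boundary changes.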

Another way to view it: For all the computational power of Deep Blue, everything it can do "functionally" can be accomplished with my lowly PC, albeit rather slowly and tediously.

At least, I think this was your intent.

Cheers! ____tony b____

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/27/2002 10:29 PM by azb0@earthlink.net

[Top]
[Mind·X]
[Reply to this post]

Michael,

(Addendum)

Of course, the functional complexity mapping between CAs and "organisms" only holds to the degree that organisms can be viewed as acting "algorithmically".

Although we can certainly abstract behaviors of organisms that align with such a view, it should be recognized that such a conscious abstraction is taking place in the comparison, and it may involve discarding (ignoring) possible non-algorithmic forces at work in organisms that are absent in (common) CAs.

In particular, CAs are generally considered to operate according to purely deterministic laws. CA state changes must occur at levels large enough not to be influenced by small and unpredictable QM fluctuations. We have no such predicate for biological organisms.

Question for the CA-heads out there: Has anyone studied the effects of "ersatz" random fluctuations (rule violations) in the large-scale behaviors of Cellular Automata? Do "Class-X" automata look instead as "Class-Y" automata? Does a "Fifth Class" arise?
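One way to probe this empirically (a sketch of my own, not from any published study; the rule number and noise rate are arbitrary choices) is to flip each computed cell with small probability and compare the resulting space-time texture against a noiseless run:

```python
import random

def step(cells, rule, noise=0.0, rng=random):
    """One synchronous update of an elementary CA with periodic
    boundaries, flipping each output bit with probability `noise`."""
    n = len(cells)
    out = []
    for i in range(n):
        nb = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        bit = (rule >> nb) & 1
        if rng.random() < noise:
            bit ^= 1          # an "ersatz" rule violation
        out.append(bit)
    return out

rng = random.Random(0)
start = [rng.randint(0, 1) for _ in range(80)]
clean = noisy = start
for _ in range(40):
    clean = step(clean, 110)                       # pure rule 110
    noisy = step(noisy, 110, noise=0.01, rng=rng)  # with violations
```

Plotting the two histories side by side would show whether sporadic violations merely perturb the class-4 texture or push it toward a different class.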

Cheers! ____tony b____

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/28/2002 1:41 PM by mkelly@stuart.iit.edu

[Top]
[Mind·X]
[Reply to this post]

Hi Tony

I concur fully with your emails, and in answer to your second:
I chose organisms specifically because that was the "greater complexity" example discussed by Dr Kurzweil.
Also, I like your argument that "CAs are generally considered to operate according to purely deterministic laws. CA state changes must occur at levels large enough not to be influenced by small and unpredictable QM fluctuations. We have no such predicate for biological organisms". It is a complete refutation of the philosophical nonsense found in Roger Penrose's books, The Emperor's New Mind and Shadows of the Mind. Penrose is under the belief (delusion?) that consciousness is a direct product of the effects of the unpredictability of QM on brain behaviour! There is no experimental evidence to support this absurdity, but you can see that Penrose also suffers from the same inability to appropriately distinguish levels of operational effect. He is driven to this ridiculous conclusion by an irrational fear and misunderstanding of AI. I have no doubt that he is probably frothing at Wolfram's ANKS.

Cheers aussi

Michael

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 10/11/2002 10:27 AM by guest@aol.com

[Top]
[Mind·X]
[Reply to this post]

This is a superb work.
When I read it at one sitting I knew he had advanced science a hundred years. Concepts I had thought of but hadn't properly formulated were there, tabulated.
He'd gone further than I had, and I knew I was good.

The integrity between maths patterns and music, or any fractals, was set down.

It's a monolith

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 11/11/2002 5:17 PM by Genaro Juarez Martinez, Juan Carlos Seck Tuoh Mora

[Top]
[Mind·X]
[Reply to this post]


Some doubts and commentaries on cellular automata rule 110,

An error in "A New Kind of Science" ...

I believe there is an error in the graphics depicted in the book: they
do not correspond exactly to the schematic diagram illustrated in the
figure on page 683.

The parts needed to begin the specification of the cyclic tag system
can be constructed separately using phases; these parts must therefore
reproduce the book's figures corresponding to the schematic diagram,
with suitable alignments and distances.

The figure on page 681 presents the eight parts of the system; we shall
rename each one to make them easier to identify when codifying and
visualizing them.


book                                                   label
-------------------------------------------------------------
1. a black element in the sequence                     1Ele_C2
2. a white element in the sequence                     0Ele_C2
3. a black element in a block                          1Blo_Eb
4. a white element in a block                          0Blo_Eb
5. the initial form of some separator between blocks   Sep1_EEb
6. the later form of some separator between blocks     Sep0_EEb
7. a black element ready to be added                   1Add_Eb
8. a white element ready to be added                   0Add_Eb


Following the schematic diagram of page 683 (each gray tone in the
diagram represents a particular block of structures), then the sequence
can be codified in the following way:

[4_4A]-[*e*]-[1Ele_C2]-[Sep0_EEb]-[1Blo_Eb]-[Sep1_EEb]-[1Blo_Eb]-[0Blo_Eb]-[1Blo_Eb]- ..., and so on successively. ("e" represents a space defined by ether.)

A commentary in the book says that the distances in the schematic
diagram are not represented to scale, but the important point of the
diagram is to show how each of the parts must interact to represent the
cyclic tag system, as illustrated in figure (d) on page 679.

The construction of complex configurations in Rule 110 is very
sensitive: changing a single bit or cell disturbs the whole process, as
we can verify in the reproduction of the cyclic tag system.
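That sensitivity is easy to demonstrate in miniature (a sketch of my own, far smaller than the authors' 20,000-cell configurations): evolve rule 110 from two rows differing in a single cell and watch the disagreement persist and spread:

```python
def rule110_step(cells):
    # One synchronous rule-110 update with periodic boundaries.
    n = len(cells)
    return [(110 >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                     | cells[(i + 1) % n])) & 1 for i in range(n)]

a = [0] * 100
a[50] = 1
b = list(a)
b[51] ^= 1                        # flip a single cell
for _ in range(30):
    a, b = rule110_step(a), rule110_step(b)

diff = sum(x != y for x, y in zip(a, b))  # cells where the runs disagree
```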

In order to calculate each of the components on page 681, the phase
properties of the ether are used. With this property the glider phases
are known, and the components can be suitably grouped.

The phase and distance must be precise, otherwise a different collision
takes place.

In this way, in block one `1Blo_Eb' the phase of the first Ebar aligns
all the other gliders (12 Ebar's altogether) and the distances among
them are: 10-1-2-8-8-2-10-1-2-8-8 (we count the number of mosaics T3
between gliders).

In the case of block zero `0Blo_Eb' their distances are
10-1-2-8-8-8-10-1-2-8-8. The difference in both blocks is the distance
between the sixth and the seventh Ebar.

Figure (a) on page 684 begins with an element one `1Ele_C2'; then an
initial separator `Sep1_EEb' formed by 8 gliders arrives, and a block
one `1Blo_Eb' must arrive following the diagram on page 683.

From the ninth Ebar a block one `1Blo_Eb' begins, but its distances do
not correspond. The distances of these last four Ebar's in Figure (a)
are: 4-6-2. The piece of block one `1Blo_Eb' in Figure (c) shows the
remaining nine Ebar's.

Thus, joining the previous two parts, the distances of the corrected
block one `1Blo_Eb' must be: 4-6-2-8-8-2-10-1-2-8-8.

In Figure (g) we again have the first four Ebar's associated with block
one `1Blo_Eb', corresponding to the distances 4-6-2. In Figure (i) the
last eight Ebar's of block one are shown; we can only check this detail
in Figures (a) and (g).

However, we made the construction using the original block one `1Blo_Eb'
in the book.

We obtain a valid collision in the sense that it produces a "reading"
one `1Add_Eb', but the phase in which it originates does not allow it
to yield the element one `1Ele_C2' on colliding with the group of 4A's:
instead of producing the first C2, the collision generates the glider
sequence B-Bbar-F.

In this production there is not much room to maneuver because the phase
is unique: 1 of the 6 possible collisions is a soliton (4A -> Ebar), so
when the group of 4A's crosses the first three Ebar's as solitons, the
fourth Ebar must produce a C2.

In the lower part of Figure (a) there is a pile of 8 C2's. This pile
cannot be obtained with the original block presented in the book, only
with the corrected block; with the original block a pile of 20 C2's is
produced, and for this reason another phase is generated which yields
an unwanted collision.

Another proof of this error can be verified in Figure (b): the distance
between the first Ebar and the second Ebar consists of 27 mosaics T3,
and this distance is only yielded by the corrected block one `1Blo_Eb'.

With the original block in the book the distance between these two
Ebar's consists of 29 mosaics T3.


Another error in the figures of page 681 !! ...

The black reading element `1Add_Eb' is wrong because the first Ebar
does not correspond with this block; the other three are fine.

Perhaps they took a bad picture: instead of selecting the four gliders
corresponding to the component `1Add_Eb', they took the first three
gliders and the remaining Ebar from a separator `Sep0_EEb'.

This error is easily verified because the distances in the element
`1Add_Eb' must be 27-21-27, and not 20-27-21 as depicted in the book.


The first operations have been implemented by calculating the sequence
1s1s0s10s1s (s = separator), using approximately 20,000 cells in the
initial configuration and checking the result over 13,300 generations.

With this, it was possible to verify that the operations "addition",
"reading" and "erasing" work suitably.

Distances of the corrected blocks:

1. 1Blo_Eb: 4-6-2-8-8-2-10-1-2-8-8
2. 0Blo_Eb: 4-6-2-8-8-8-10-1-2-8-8
3. 1Add_Eb: 27-21-27

The errors in the book look like mistakes made when selecting the
figures, but if somebody wishes to see how the system works and takes
the components shown in the book, it does not work in the way shown by
the other figures.

This does not mean that the system is wrong; we just want to say that
the precision required, both in the collisions and in the glider
groups, is a very difficult feature to obtain. We believe that 10 years
of research are well represented in the implementation of the cyclic
tag system in Rule 110.

Other remaining tasks are:

I- To verify how the blocks of elements (1Blo_Eb, 0Blo_Eb) must cross
the sequences of elements (1Ele_C2, 0Ele_C2) like solitons and continue
with their operation.

II- To verify the correct operation of the element zero `0Ele_C2'. Up
to now the corrected block is working, but in Figure (h) the complete
block zero `0Blo_Eb' can be seen, and it is equal to the one presented
on page 681. Since the block is erased by the gliders D1 and 3A's and
is not operating, we have to verify the process at all its points, that
is, when it generates `0Add_Eb' and crosses an element `1Ele_C2' or
`0Ele_C2' like a soliton.

Using the original "zero" block `0Blo_Eb', beginning with the sequence
...-1Ele_C2-Sep0_EEb-0Blo_Eb-..., the element `1Add_Eb' is
well-produced, but it does not appear with the required phase, and when
the group of 4A's arrives to read this element, an unwanted collision
is generated.

The corrected zero block `0Blo_Eb' generates a valid `0Add_Eb', and
when it later hits the group of 4A's, it produces an adequate
`0Ele_C2'.

We have to verify that it crosses an element "one" or "zero" like a
soliton.

The groups of 4A's are static, but one might think that the left part
of the system can simply be copied and pasted without changing any
phase or distance. The distances between these gliders are:
4A-27e-4A-23e-4A-25e-4A.

The necessary distance between these groups of 4A's is the same, but
their phases are different. A block of 4A's has three phases, and to
produce a soliton we need to look for the corresponding phase;
therefore we have to take care of the phase in which the 4A's arrive,
to avoid the decomposition of the system.


When the first simulations were made using the corrected "one" block
`1Blo_Eb', we realized that if the distances in the groups of 4A's are
changed, then it is possible to handle a "one" `1Ele_C2' and a "zero"
`0Ele_C2' with a registry.

It is the group of 4A's that distinguishes 1 from 0, not the blocks of
elements; for example:

if it is 1 the distances are: 4A-27e-4A-27e-4A-25e-4A
if it is 0 the distances are: 4A-27e-4A-19e-4A-25e-4A

then the central distance determines whether the registry produces a
"0" or a "1". Finally:

1. In order to perform a particular operation, is it necessary to
construct a completely different initial configuration?

2. Does the first question not contradict universal computation?


A free program to study Rule 110, for OpenStep and Mac OS X, is
available at: http://delta.cs.cinvestav.mx/~mcintosh/comun/s2001/s2001.html


Atte.
Genaro Juarez Martinez
Juan Carlos Seck Tuoh Mora


Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 11/23/2002 12:05 PM by Anshu Sharma

[Top]
[Mind·X]
[Reply to this post]

In my humble opinion, the author tries to show that:
-- there may exist a "Simple Rule" for "Generating Existence";
-- and that rule is probably computational (a CA) rather than a mathematical equation.

Let's do the following: think of the Existence (I prefer the word over Universe) as a 4-dimensional structure (or n-dimensional if you choose). Now let's assume you could model the Existence as a string of 1s and 0s, a very large string, probably.

Now, here are a few observations:
- Humans (and, to be precise, each human life) are substrings in this representation.
- One substring (Stephen, myself, or you) is trying to come up with a rule that describes the entire string. In my case, I simply abstract the string by calling it "Existence"; physicists want to come up with a "bunch of equations" whose solution would be this string; Mr. Wolfram predicts that it is a CA that can generate this string; and Mr. Kurzweil thinks it's a combination of equations and CAs.

If you go by the simplest- "Existence" is a good small explanation. All you have to do is first accept it intellectually.

Our human brain (itself a substring) has the somewhat unique capability of finding patterns in the String (Existence) so that we can abstract it and store it in our brains (also called "understanding"); we want to come up with a rule because the String is too large.

Let's take the following example: reduce our problem to considering a "Rose" instead of "Existence".

"Rose":
I would call it a "Rose", observe it, and realize that I cannot store the entire Rose-string in my brain.

Mr. Wolfram contends that maybe we can come up with a CA such that it will generate the Rose-string pattern.

Some Physicists contend that you can come up with an equation that will generate the Rose-string.

I think the understandable obsession to capture the underlying Rose-rule is just that, an obsession: there is no proof, as far as I have seen, that it is possible to come up with a rule that will generate this Rose-string. Coming up with equations and rules that generate complex strings is just what it is: rules and equations that generate complex strings. Now if we apply our cognitive abilities to see patterns similar to "Existence", I think that's a testimony to our pattern-matching ability (we are probably running a Longest Common Substring algorithm!).

Let me put it another way: this Existence string can be mathematically interpreted as an Integer value, say 284,49....537. And that, my friends, is another abstract way of saying I found the underlying "Rule", the underlying "Number", the underlying "Equation" to Existence, that which Is.
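The string-as-integer reading is literal (a toy sketch of my own; the bits here are arbitrary):

```python
# Any finite bitstring -- a stand-in "Rose-string" here -- can be
# read as a single integer, the post's "Existence Number".
rose_string = "101101110001"
existence_number = int(rose_string, 2)
print(existence_number)  # 2929

# And back again, losing nothing:
recovered = format(existence_number, "0{}b".format(len(rose_string)))
assert recovered == rose_string
```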

Existence Is.

Let's keep matching patterns, especially the useful ones, to improve our lives (which are nothing but "pre-determined in 4 dimensions" yet not "pre-evaluated in the time domain").

Let's not lull ourselves into a false sense of belief that there is a simple rule. It "MAY" be that the Existence Number I just picked is 2 raised to the power 2002002, minus 1. Does that make it a "Simple Rule" explaining existence?

Or if the Existence Number is the 10-trillionth prime number minus 32: is that a "Simple Rule"?

Think about it: a "simple rule" for existence already exists; it's what I call the "Existence Number".

What we really want is a rule that is simple enough for our comprehension but complex enough that we can do interesting stuff with it. But what if the Existence Number really is what I just gave you!

-Anshu Sharma


Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/08/2004 7:54 PM by howard

[Top]
[Mind·X]
[Reply to this post]

Assume that the universe is not matter but a pattern computed by cellular automata. Now we need to verify this; unless we compute some part of it, we cannot verify it. Wolfram says that rule 110 is universal and therefore that it can compute a tree with all of its leaves. However, just because something is computable does not mean that it is actually computed. We cannot deny the experience of existence: we know that something exists, for we sense it, and it has some reality and behaviour. So what exists are the only "complex" patterns that were "allowed" to become computed. There must be both a struggle and conditions for the computation to take place, for otherwise we would witness a lot more variability in the universe than that which we see.


As soon as we measure it, we try to explain it with more complicated patterns that are not there. In your example of a complex rule, such as finding a very large prime number and subtracting 32: ok, that is something that you can imagine and describe but that you cannot yourself easily compute.
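Rule 110, which the posts above keep invoking, is easy to run directly; a minimal sketch on a finite row with fixed zero boundaries (row width and step count are arbitrary choices here):

```python
# Minimal elementary cellular automaton, rule 110: each cell's next state
# depends on (left neighbor, itself, right neighbor).
RULE = 110  # the rule number's binary digits are the 8-entry lookup table

def step(cells):
    """One update of a finite row; cells outside the row are treated as 0."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # neighborhood value 0..7 indexes into rule 110's bit pattern
        idx = (padded[i] << 2) | (padded[i + 1] << 1) | padded[i + 2]
        out.append((RULE >> idx) & 1)
    return out

row = [0] * 15 + [1]  # single live cell at the right edge
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Running a few steps shows the familiar leftward-growing triangle; that the same table is computationally universal is, of course, the nontrivial part of Wolfram's claim.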

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 01/08/2004 2:35 AM by TwinBeam

[Top]
[Mind·X]
[Reply to this post]

Hmm.

What if the rules were not the same at each point? E.g., if there were two rules, rule0 and rule1, arranged in a pattern, say a checkerboard.

Taking that a bit further, what if one level of CA generated a pattern of on/off (0/1), which in turn selected rule0 or rule1 for corresponding cells of a second level?

If the lower level were cyclical, e.g. a simple "flipping" checkerboard, it might create a regular, time-like function for the second level.
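The first of these ideas is easy to prototype: apply one of two rules depending on the parity of the cell's position (and, for the "flipping" variant, the time step). A sketch, where rules 90 and 30 are arbitrary illustrative choices, not anything specified in the post:

```python
# Sketch of a two-rule CA: rule choice alternates in a checkerboard over
# space-time. RULE0 and RULE1 are arbitrary picks for illustration.
RULE0, RULE1 = 90, 30

def step(cells, t):
    """One update; the rule used at cell i, time t depends on (i + t) % 2."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        idx = (padded[i] << 2) | (padded[i + 1] << 1) | padded[i + 2]
        rule = RULE0 if (i + t) % 2 == 0 else RULE1  # checkerboard selection
        out.append((rule >> idx) & 1)
    return out

row = [0] * 8 + [1] + [0] * 8
for t in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row, t)
```

The two-level version, where a lower CA's output selects the rule per cell, would replace the `(i + t) % 2` test with a lookup into the lower level's current row.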

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/23/2006 6:58 PM by anyguy

[Top]
[Mind·X]
[Reply to this post]

"Pure randomness becomes predictable by its lack of predictability."

So can we say:

with computation, the unknown becomes just another pattern (an algorithm)

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/12/2006 4:05 PM by mindx back-on-track

[Top]
[Mind·X]
[Reply to this post]

back-on-track

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/28/2006 1:08 AM by JeanDuvalloise

[Top]
[Mind·X]
[Reply to this post]

If you are stimulated by new ideas and if you can think for yourself rather than simply accept what Dr. Stephan Wolfram dishes out, I think you will find this letter of interest. Let us note first of all that Wolfram wants to persecute the innocent and let the guilty go unpunished. Who does he think he is? I mean, he, already oppressive with his rancorous, empty-headed scribblings, will perhaps be the ultimate exterminator of our human species -- if separate species we be -- for his reserve of unguessed horrors could never be borne by mortal brains if loosed upon the world. If you think that that's a frightening thought, then consider that Wolfram is extraordinarily brazen. We've all known that for a long time. However, his willingness to spit on sacred icons sets a new record for brazenness. Evil individuals are acting in concert with other evil individuals for an evil purpose. Of course, this sounds simple, but in reality, the real issue is simple: He is the ultimate source of alienation and repression around here. Despite the fact that a lot of people may end up getting hurt before the final spasm of Wolfram's rage is played out, Wolfram is blinded by greed. Why do I tell you this? Because these days, no one else has the guts to. As a parting thought, remember that Dr. Stephan Wolfram's toadies coerce children into becoming activists willing to serve, promote, spy, and fight for his harangues.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 09/28/2006 3:09 AM by eldras

[Top]
[Mind·X]
[Reply to this post]

Wolfram's opus is his contribution to science, specifically mathematical computing.

He earned his PhD at 20 years old (Caltech), and his new kind of science, specifically the idea that shapes, and not digits, can do hugely faster factored calculations, is a revolution.

The fact that he is a scientist billionaire isn't a bad thing, as he didn't take money; he created it.

Many self-actualisers who make huge contributions to Civilisation have been subject to inordinate guardian pressure.

In almost every case, unless they become self-actualisers, they are dysfunctional.


I don't know if Stephen Wolfram is dysfunctional, and it is his opus, not his idiosyncrasies, that is interesting to me.


My god, man, don't wince that the dog sings badly but wonder that it sings at all.

In this dimension he is a god.

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 10/05/2006 10:33 PM by JeanDuvalloise

[Top]
[Mind·X]
[Reply to this post]

Idiot. That letter was automatically generated. THINK FOR YOURSELF SOMETIME

Re: Reflections on Stephen Wolfram's "A New Kind of Science"
posted on 05/27/2009 12:06 AM by Sergio123Cabral

[Top]
[Mind·X]
[Reply to this post]

Just for the record: I am postulating the existence of a quantum particle for Intelligence ...

Chat with Kurzweil in a discussion about Wolfram's CAs ...

Thanks for your answer.
Yes, it sounds very coherent, and it reminds me a little of The Matrix (I have read Ray Kurzweil's vision of The Matrix, and I am not saying that you are supporting the Matrix possibility) in terms of the concept of downloading objects as modules that collaborate and interact toward some task goals, which can create problems and/or create solutions, instead of writing code for known situations. But in terms of the singularity age, I feel that we will have the conditions to discover, or approximate, what I call the IQUAP, the Intelligence Quantum Particle. I am not sure yet whether CAs can help us start to think of them as a minimalistic building-block structure for AI software, and we would try to analyze how Nature worked on this some time ago. I feel that only by manipulating the IQUAPs will we really have the power to reconfigure the environment and achieve infinite possibilities for reality. If we don't find the IQUAPs, we will only be extending or shrinking, i.e. distorting, reality, and this will be, to me, the first consequence of the Singularity age. If we accept the nearness of the Singularity age, and the possibility of discovering the IQUAPs, we will probably have the power to reconfigure the universe, i.e. writing or creating programs that will use the universe as a computer.
Could you help us with your comments about it?
Sergio Cabral - IdeaValley.com.br / Rio de Janeiro / Brasil