Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0464.html
Reflections on Stephen Wolfram's 'A New Kind of Science'
In his remarkable new book, Stephen Wolfram asserts that cellular automata operations underlie much of the real world. He even asserts that the entire Universe itself is a big cellular-automaton computer. But Ray Kurzweil challenges the ability of these ideas to fully explain the complexities of life, intelligence, and physical phenomena.
Stephen Wolfram's A
New Kind of Science is an unusually wide-ranging book covering
issues basic to biology, physics, perception, computation, and philosophy.
It is also a remarkably narrow book in that its 1,200 pages discuss
a singular subject, that of cellular automata. Actually, the book
is even narrower than that. It is principally about cellular automata
rule 110 (and three other rules which are equivalent to rule 110),
and its implications.
It's hard to know where to begin in reviewing Wolfram's treatise,
so I'll start with Wolfram's apparent hubris, evidenced in the title
itself. A new science would be bold enough, but Wolfram is presenting
a new kind of science, one that should change our thinking
about the whole enterprise of science. As Wolfram states in chapter
1, "I have come to view [my discovery] as one of the more important
single discoveries in the whole history of theoretical science."1
This is not the modesty that we have come to expect from scientists,
and I suspect that it may earn him resistance in some quarters.
Personally, I find Wolfram's enthusiasm for his own ideas refreshing.
I am reminded of a comment made by the Buddhist teacher Guru Amrit
Desai, when he looked out of his car window and saw that he was
in the midst of a gang of Hell's Angels. After studying them in
great detail for a long while, he finally exclaimed, "They
really love their motorcycles." There was no disdain in this
observation. Guru Desai was truly moved by the purity of their love
for the beauty and power of something that was outside themselves.
Well, Wolfram really loves his cellular automata. So much so, that
he has immersed himself for over ten years in the subject and produced
what can only be regarded as a tour de force on their mathematical
properties and potential links to a broad array of other endeavors.
In the end notes, which are as extensive as the book itself, Wolfram
explains his approach: "There is a common style of understated
scientific writing to which I was once a devoted subscriber. But
at some point I discovered that more significant results are usually
incomprehensible if presented in this style…. And so in writing
this book I have chosen to explain straightforwardly the importance
I believe my various results have."2
Perhaps Wolfram's successful technology business career may also
have had its influence here, as entrepreneurs are rarely shy about
articulating the benefits of their discoveries.
So what is the discovery that has so excited Wolfram? As I noted
above, it is cellular automata rule 110, and its behavior. There
are some other interesting automata rules, but rule 110 makes the
point well enough. A cellular automaton is a simple computational
mechanism that, for example, changes the color of each cell on a
grid based on the color of adjacent (or nearby) cells according
to a transformation rule. Most of Wolfram's analyses deal with the
simplest possible cellular automata, specifically those that involve
just a one-dimensional line of cells, two possible colors (black
and white), and rules based only on the two immediately adjacent
cells. For each transformation, the color of a cell depends only
on its own previous color and that of the cell on the left and the
cell on the right. Thus there are eight possible input situations
(i.e., 2^3 combinations of three two-colored cells). Each rule maps
each of these eight input situations to an output (black or white).
So there are 2^8 = 256 possible rules for such a one-dimensional,
two-color, adjacent-cell automaton. Half of the 256 possible rules
map onto the other half because of left-right symmetry. We can map
half of them again because of black-white equivalence, so we are
left with 64 rule types. Wolfram illustrates the action of these
automata with two-dimensional patterns in which each line (along
the Y axis) represents a subsequent generation of applying the rule
to each cell in that line.
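As a concrete illustration of the mechanism just described, here is
a minimal sketch (in Python, with names and display choices that are
my own rather than Wolfram's) of such a one-dimensional, two-color,
nearest-neighbor cellular automaton. Setting RULE to 110 selects the
Class 4 rule discussed below; any value from 0 to 255 selects one of
the 256 possible rules.

```python
# A minimal sketch of an elementary (one-dimensional, two-color,
# nearest-neighbor) cellular automaton. RULE, WIDTH, and STEPS are
# illustrative choices, not values taken from the book.
RULE = 110
WIDTH = 63          # number of cells in the row
STEPS = 30          # number of generations to print

# Decode the rule number into a lookup table: for each of the eight
# neighborhoods (left, center, right), the new cell value is the
# corresponding bit of the rule number.
rule_table = {
    (l, c, r): (RULE >> (4 * l + 2 * c + r)) & 1
    for l in (0, 1) for c in (0, 1) for r in (0, 1)
}

# The simplest possible starting point: a single black cell.
row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # Apply the rule to every cell simultaneously (edges wrap around).
    row = [rule_table[(row[i - 1], row[i], row[(i + 1) % WIDTH])]
           for i in range(WIDTH)]
```

Each printed line is one generation, so the output corresponds to the
two-dimensional patterns Wolfram uses to illustrate these automata.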
Most of the rules are degenerate, meaning they create repetitive
patterns of no interest, such as cells of a single color, or a checkerboard
pattern. Wolfram calls these rules Class 1 automata. Some rules
produce arbitrarily spaced streaks that remain stable, and Wolfram
classifies these as belonging to Class 2. Class 3 rules are a bit
more interesting in that recognizable features (e.g., triangles)
appear in the resulting pattern in an essentially random order.
However, it was the Class 4 automata that created the "ah ha"
experience that resulted in Wolfram's decade of devotion to the
topic. The Class 4 automata, of which Rule 110 is the quintessential
example, produce surprisingly complex patterns that do not repeat
themselves. We see artifacts such as lines at various angles, aggregations
of triangles, and other interesting configurations. The resulting
pattern is neither regular nor completely random. It appears to
have some order, but is never predictable.
Why is this important or interesting? Keep in mind that we started
with the simplest possible starting point: a single black cell.
The process involves repetitive application of a very simple rule.3
From such a repetitive and deterministic process, one would expect
repetitive and predictable behavior. There are two surprising results
here. One is that the results produce apparent randomness. Applying
every statistical test for randomness that Wolfram could muster,
the results are completely unpredictable, and remain (through any
number of iterations) effectively random. However, the results are
more interesting than pure randomness, which itself would become
boring very quickly. There are discernible and interesting features
in the designs produced, so the pattern has some order and apparent
intelligence. Wolfram shows us many examples of these images, many
of which are rather lovely to look at.
Wolfram makes the following point repeatedly: "Whenever a
phenomenon is encountered that seems complex it is taken almost
for granted that the phenomenon must be the result of some underlying
mechanism that is itself complex. But my discovery that simple programs
can produce great complexity makes it clear that this is not in
fact correct."4
I do find the behavior of Rule 110 rather delightful. However,
I am not entirely surprised by the idea that simple mechanisms can
produce results more complicated than their starting conditions.
We've seen this phenomenon in fractals (i.e., repetitive application
of a simple transformation rule on an image), chaos and complexity
theory (i.e., the complex behavior derived from a large number of
agents, each of which follows simple rules, an area of study that
Wolfram himself has made major contributions to), and self-organizing
systems (e.g., neural nets, Markov models), which start with simple
networks but organize themselves to produce apparently intelligent
behavior. At a different level, we see it in the human brain itself,
which starts with only 12 million bytes of specification in the
genome, yet ends up with a complexity that is millions of times
greater than its initial specification.5
It is also not surprising that a deterministic process can produce
apparently random results. We have had random number generators
(e.g., the "randomize" function in Wolfram's program "Mathematica")
that use deterministic processes to produce sequences that pass
statistical tests for randomness. These programs go back to the
earliest days of computer software, e.g., early versions of Fortran.
However, Wolfram does provide a thorough theoretical foundation
for this observation.
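As a small illustration of that point, here is a sketch (in Python)
of a deterministic process whose output nonetheless looks
statistically random: a classic linear congruential generator. The
constants are the widely published "Numerical Recipes" choices; this
is my own example, not the specific algorithm used by Mathematica or
by early Fortran libraries.

```python
def lcg(seed, count, a=1664525, c=1013904223, m=2**32):
    """Yield `count` pseudo-random values in [0, 1) from a fixed seed."""
    state = seed
    for _ in range(count):
        state = (a * state + c) % m   # a fully deterministic update
        yield state / m

# The same seed always reproduces the same "random" sequence.
print(list(lcg(seed=42, count=5)))
```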
Wolfram goes on to describe how simple computational mechanisms
can exist in nature at different levels, and that these simple and
deterministic mechanisms can produce all of the complexity that
we see and experience. He provides a myriad of examples, such as
the pleasing designs of pigmentation on animals, the shape and markings
of shells, and the patterns of turbulence (e.g., smoke in the air).
He makes the point that computation is essentially simple and ubiquitous.
Since the repetitive application of simple computational transformations
can cause very complex phenomena, as we see with the application
of Rule 110, this, according to Wolfram, is the true source of complexity
in the world.
My own view is that this is only partly correct. I agree with Wolfram
that computation is all around us, and that some of the patterns
we see are created by the equivalent of cellular automata. But a
key question to ask is this: just how complex are the results
of Class 4 automata?
Wolfram effectively sidesteps the issue of degrees of complexity.
There is no debate that a degenerate pattern such as a chessboard
has no effective complexity. Wolfram also acknowledges that mere
randomness does not represent complexity either, because pure randomness
also becomes predictable in its pure lack of predictability. It
is true that the interesting features of Class 4 automata are
neither repeating nor purely random, so I would agree that they
are more complex than the results produced by other classes of automata.
However, there is nonetheless a distinct limit to the complexity
produced by these Class 4 automata. The many images of Class 4 automata
in the book all have a similar look to them, and although they are
non-repeating, they are interesting (and intelligent) only to a
degree. Moreover, they do not continue to evolve into anything more
complex, nor do they develop new types of features. One could run
these automata for trillions or even trillions of trillions of iterations,
and the image would remain at the same limited level of complexity.
They do not evolve into, say, insects, or humans, or Chopin preludes,
or anything else that we might consider of a higher order of complexity
than the streaks and intermingling triangles that we see in these
images.
Complexity is a continuum. In the past, I've used the word "order"
as a synonym for complexity, which I have attempted to define as
"information that fits a purpose."6
A completely predictable process has zero order. A high level of
information alone does not necessarily imply a high level of order
either. A phone book has a lot of information, but the level of
order of that information is quite low. A random sequence is essentially
pure information (since it is not predictable), but has no order.
The output of Class 4 automata does possess a certain level of order,
and they do survive like other persisting patterns. But the pattern
represented by a human being has a far higher level of order or
complexity. Human beings fulfill a highly demanding purpose in that
they survive in a challenging ecological niche. Human beings represent
an extremely intricate and elaborate hierarchy of other patterns.
Wolfram regards all patterns that combine some recognizable features
and unpredictable elements as effectively equivalent to one another,
but he does not show how a Class 4 automaton can ever increase its
complexity, let alone to become a pattern as complex as a human
being.
There is a missing link here in how one gets from the interesting,
but ultimately routine patterns of a cellular automaton to the complexity
of persisting structures that demonstrate higher levels of intelligence.
For example, these class 4 patterns are not capable of solving interesting
problems, and no amount of iteration moves them closer to doing
so. Wolfram would counter that a rule 110 automaton could be used
as a "universal computer."7
However, by itself a universal computer is not capable of solving
intelligent problems without what I would call "software."
It is the complexity of the software that runs on a universal computer
that is precisely the issue.
One might point out that the Class 4 patterns I'm referring to
result from the simplest possible cellular automata (i.e., one-dimensional,
two-color, two-neighbor rules). What happens if we increase the
dimensionality, e.g., go to multiple colors, or even generalize
these discrete cellular automata to continuous functions? Wolfram
addresses all of this quite thoroughly. The results produced from
more complex automata are essentially the same as those of the very
simple ones. We obtain the same sorts of interesting but ultimately
quite limited patterns. Wolfram makes the interesting point that
we do not need to use more complex rules to get the complexity (of
Class 4 automata) in the end result. But I would make the converse
point that we are unable to increase the complexity of the end result
through either more complex rules or through further iteration.
So cellular automata only get us so far.
So how do we get from these interesting but limited patterns of
Class 4 automata to those of insects, or humans or Chopin preludes?
One concept we need to add is conflict, i.e., evolution. If we add
another simple concept to that of Wolfram's simple cellular automata,
i.e., an evolutionary algorithm, we start to get far more interesting,
and more intelligent results. Wolfram would say that the Class 4
automata and an evolutionary algorithm are "computationally
equivalent." But that is only true on what I would regard as
the "hardware" level. On the software level, the order
of the patterns produced is clearly different, and of a different
order of complexity.
An evolutionary algorithm can start with randomly generated potential
solutions to a problem. The solutions are encoded in a digital genetic
code. We then have the solutions compete with each other in a simulated
evolutionary battle. The better solutions survive and procreate
in a simulated sexual reproduction in which offspring solutions
are created, drawing their genetic code (i.e., encoded solutions)
from two parents. We can also introduce a rate of genetic mutation.
Various high-level parameters of this process, such as the rate
of mutation, the rate of offspring, etc., are appropriately called
"God parameters" and it is the job of the engineer designing
the evolutionary algorithm to set them to reasonably optimal values.
The process is run for many thousands of generations of simulated
evolution, and at the end of the process, one is likely to find
solutions that are of a distinctly higher order than the starting
conditions. The results of these evolutionary (sometimes called
genetic) algorithms can be elegant, beautiful, and intelligent solutions
to complex problems. They have been used, for example, to create
artistic designs, designs for artificial life forms in artificial
life experiments, as well as for a wide range of practical assignments
such as designing jet engines. Genetic algorithms are one approach
to "narrow" artificial intelligence, that is, creating
systems that can perform specific functions that used to require
the application of human intelligence.
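The following is a minimal sketch (in Python) of such an
evolutionary algorithm. The toy problem (maximize the number of 1
bits in a short genome), the parameter values, and the function names
are illustrative assumptions of mine; a practical application would
replace the fitness function with a scoring of decoded candidate
designs.

```python
import random

GENOME_LEN = 40
POP_SIZE = 60
GENERATIONS = 200
MUTATION_RATE = 0.01          # one of the "God parameters"

def fitness(genome):
    # Toy objective: count the 1 bits. A real application would score
    # a decoded design (e.g., a set of jet-engine parameters).
    return sum(genome)

def crossover(parent_a, parent_b):
    # Offspring draw their genetic code from two parents.
    cut = random.randrange(1, GENOME_LEN)
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Start with randomly generated candidate solutions.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the better half survives and procreates.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", max(fitness(g) for g in population))
```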
But something is still missing. Although genetic algorithms are
a useful tool in solving specific problems, they have never achieved
anything resembling "strong AI," i.e., aptitude resembling
the broad, deep, and subtle features of human intelligence, particularly
its powers of pattern recognition and command of language. Is the
problem that we are not running the evolutionary algorithms long
enough? After all, humans evolved through an evolutionary process
that took billions of years. Perhaps we cannot recreate that process
with just a few days or weeks of computer simulation. However, conventional
genetic algorithms reach an asymptote in their level of performance,
so running them for a longer period of time won't help.
A third level (beyond the ability of cellular processes to produce
apparent randomness and genetic algorithms to produce focused intelligent
solutions) is to perform evolution on multiple levels. Conventional
genetic algorithms only allow evolution within the narrow confines
of a single, narrowly defined problem, and with a single means of
evolution. The genetic
code itself needs to evolve; the rules of evolution need to evolve.
Nature did not stay with a single chromosome, for example. There
have been many levels of indirection incorporated in the natural
evolutionary process. And we require a complex environment in which
evolution takes place.
To build strong AI, we will short circuit this process, however,
by reverse engineering the human brain, a project well under way,
thereby benefiting from the evolutionary process that has already
taken place. We will be applying evolutionary algorithms within
these solutions just as the human brain does. For example, the fetal
wiring is initially random in certain regions, with the majority
of connections subsequently being destroyed during the early stages
of brain maturation as the brain self-organizes to make sense of
its environment and situation.
But back to cellular automata. Wolfram applies his key insight,
which he states repeatedly (that we obtain surprisingly complex
behavior from the repeated application of simple computational
transformations), to biology, physics, perception, computation,
mathematics, and philosophy. Let's start with biology.
Wolfram writes, "Biological systems are often cited as supreme
examples of complexity in nature, and it is not uncommon for it
to be assumed that their complexity must be somehow of a fundamentally
higher order than other systems. . . . What I have come to believe
is that many of the most obvious examples of complexity in biological
systems actually have very little to do with adaptation or natural
selection. And instead . . . they are mainly just another consequence
of the very basic phenomenon that I have discovered . . . that in
almost any kind of system many choices of underlying rules inevitably
lead to behavior of great complexity."8
I agree with Wolfram that some of what passes for complexity in
nature is the result of cellular-automata type computational processes.
However, I disagree with two fundamental points. First, the behavior
of a Class 4 automaton, as the many illustrations in the book depict,
do not represent "behavior of great complexity." It is
true that these images have a great deal of unpredictability (i.e.,
randomness). It is also true that they are not just random but have
identifiable features. But the complexity is fairly modest. And
this complexity never evolves into patterns that are at all more
sophisticated.
Wolfram considers the complexity of a human to be equivalent to
that of a Class 4 automaton because they are, in his terminology,
"computationally equivalent." But Class 4 automata and humans are
only computationally equivalent in the sense that any two computer
programs are computationally
equivalent, i.e., both can be run on a Universal Turing machine.
It is true that computation is a universal concept, and that all
software is equivalent on the hardware level (i.e., with regard
to the nature of computation), but it is not the case that all software
is of the same order of complexity. The order of complexity of a
human is greater than the interesting but ultimately repetitive
(albeit random) patterns of a Class 4 automaton.
I also disagree with the claim that the order of complexity that we
see in natural organisms is not primarily a result of "adaptation or
natural selection." The phenomenon of randomness readily produced by
cellular automaton processes is a good model for fluid turbulence,
but not for the intricate hierarchy of features in higher organisms.
The fact that we have phenomena greater than just the interesting
but fleeting patterns of fluid turbulence (e.g., smoke in the wind)
in the world is precisely the result of the chaotic crucible of
conflict over limited resources known as evolution.
To be fair, Wolfram does not negate adaptation or natural selection,
but he over-generalizes the limited power of complexity resulting
from simple computational processes. When Wolfram writes, "in
almost any kind of system many choices of underlying rules inevitably
lead to behavior of great complexity," he is mistaking the
random placement of simple features that result from cellular processes
for the true complexity that has resulted from eons of evolution.
Wolfram makes the valid point that certain (indeed most) computational
processes are not predictable. In other words, we cannot predict
future states without running the entire process. I agree with Wolfram
that we can only know the answer in advance if somehow we can simulate
a process at a faster speed. Given that the Universe runs at the
fastest speed it can run, there is usually no way to short circuit
the process. However, we have the benefits of the mill of billions
of years of evolution, which is responsible for the greatly increased
order of complexity in the natural world. We can now benefit from
it by using our evolved tools to reverse-engineer the products of
biological evolution.
Yes, it is true that some phenomena in nature that may appear complex
at some level are simply the result of simple underlying computational
mechanisms that are essentially cellular automata at work. The interesting
pattern of triangles on a "tent olive" shell or the intricate
and varied patterns of a snowflake are good examples. I don't think
this is a new observation, in that we've always regarded the design
of snowflakes to derive from a simple molecular computation-like
building process. However, Wolfram does provide us with a compelling
theoretical foundation for expressing these processes and their
resulting patterns. But there is more to biology than Class 4 patterns.
I do appreciate Wolfram's strong argument, however, that nature
is not as complex as it often appears to be. Some of the key features
of the paradigm of biological systems, which differs from much of
our contemporary designed technology, are massive parallelism and
the fact that apparently complex behavior can result from the
intermingling of a vast number of simpler systems. One example that
comes to mind
is Marvin Minsky's theory of intelligence as a "Society of
Mind" in which intelligence may result from a hierarchy of
simpler intelligences with simple agents not unlike cellular automata
at the base.
However, cellular automata on their own do not evolve sufficiently.
They quickly reach a limited asymptote in their order of complexity.
An evolutionary process involving conflict and competition is needed.
For me, the most interesting part of the book is Wolfram's thorough
treatment of computation as a simple and ubiquitous phenomenon.
Of course, we've known for over a century that computation is inherently
simple, i.e., we can build any possible level of complexity from
a foundation of the simplest possible manipulations of information.
For example, Babbage's computer provided only a handful of operation
codes, yet provided (within its memory capacity and speed) the same
kinds of transformations as do modern computers. The complexity
of Babbage's invention stemmed only from the details of its design,
which indeed proved too difficult for Babbage to implement using
the 19th century mechanical technology available to him.
The "Turing Machine," Alan Turing's theoretical conception
of a universal computer in 1936, provides only 7 very basic commands,9
yet can be organized to perform any possible computation. The existence
of a "Universal Turing Machine," which can simulate any
possible Turing Machine (that is described on its tape memory),
is a further demonstration of the universality (and simplicity)
of computation. In what is perhaps the most impressive analysis
in his book, Wolfram shows how a Turing Machine with only two states
and five possible colors can be a Universal Turing Machine. For
forty years, we've thought that a Universal Turing Machine had to
be more complex than this.10 Also
impressive is Wolfram's demonstration that Cellular Automaton Rule
110 is capable of universal computation (given the right software).
In my 1990 book, I showed how any computer could be constructed
from "a suitable number of [a] very simple device," namely
the "nor" gate.11 This
is not exactly the same demonstration as a universal Turing machine,
but it does demonstrate that any computation can be performed by
a cascade of this very simple device (which is simpler than Rule
110), given the right software (which would include the connection
description of the nor gates).12
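To illustrate why such a cascade suffices, here is a small sketch of
my own (in Python, not the construction from The Age of Intelligent
Machines): NOT, OR, and AND can each be built from NOR alone, and
those three gates are already enough to build any logic circuit.

```python
def NOR(a, b):
    return int(not (a or b))

def NOT(a):
    return NOR(a, a)

def OR(a, b):
    return NOT(NOR(a, b))

def AND(a, b):
    return NOR(NOT(a), NOT(b))

# Verify the derived gates against their truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == int(a or b)
        assert AND(a, b) == int(a and b)
    assert NOT(a) == int(not a)
print("NOT, OR, and AND all reduce to NOR")
```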
The most controversial thesis in Wolfram's book is likely to be
his treatment of physics, in which he postulates that the Universe
is a big cellular-automaton computer. Wolfram is hypothesizing that
there is a digital basis to the apparently analog phenomena and
formulas in physics, and that we can model our understanding of
physics as the simple transformations of a cellular automaton.
Others have postulated this possibility. Richard Feynman wondered
about it in considering the relationship of information to matter
and energy. Norbert Wiener heralded a fundamental change in focus
from energy to information in his 1948 book Cybernetics,
and suggested that the transformation of information, not energy,
was the fundamental building block for the Universe.
Perhaps the most enthusiastic proponent of an information-based
theory of physics was Edward Fredkin, who in the early 1980s proposed
what he called a new theory of physics based on the idea that the
Universe was composed ultimately of software. We should not think
of ultimate reality as particles and forces, according to Fredkin,
but rather as bits of data modified according to computation rules.
Fredkin is quoted by Robert Wright in the 1980s as saying "There
are three great philosophical questions. What is life? What is consciousness
and thinking and memory and all that? And how does the Universe
work? The informational viewpoint encompasses all three. . . . What
I'm saying is that at the most basic level of complexity an information
process runs what we think of as physics. At the much higher level
of complexity, life, DNA - you know, the biochemical functions -
are controlled by a digital information process. Then, at another
level, our thought processes are basically information processing.
. . . I find the supporting evidence for my beliefs in ten thousand
different places, and to me it's just totally overwhelming. It's
like there's an animal I want to find. I've found his footprints.
I've found his droppings. I've found the half-chewed food. I find
pieces of his fur, and so on. In every case it fits one kind of
animal, and it's not like any animal anyone's ever seen. People
say, where is this animal? I say, Well he was here, he's about this
big, this that, and the other. And I know a thousand things about
him. I don't have him in hand, but I know he's there. . . . What
I see is so compelling that it can't be a creature of my imagination."13
In commenting on Fredkin's theory of digital physics, Robert Wright
writes, "Fredkin . . . is talking about an interesting characteristic
of some computer programs, including many cellular automata: there
is no shortcut to finding out what they will lead to. This, indeed,
is a basic difference between the "analytical" approach
associated with traditional mathematics, including differential
equations, and the "computational" approach associated
with algorithms. You can predict a future state of a system susceptible
to the analytic approach without figuring out what states it will
occupy between now and then, but in the case of many cellular automata,
you must go through all the intermediate states to find out what
the end will be like: there is no way to know the future except
to watch it unfold. . . There is no way to know the answer to some
question any faster than what's going on. . . . Fredkin believes
that the Universe is very literally a computer and that it is being
used by someone, or something, to solve a problem. It sounds like
a good-news / bad-news joke: the good news is that our lives have
purpose; the bad news is that their purpose is to help some remote
hacker estimate pi to nine jillion decimal places."14
Fredkin went on to show that although energy is needed for information
storage and retrieval, we can arbitrarily reduce the energy required
to perform any particular example of information processing, and
there is no lower limit to the amount of energy required.15
This result made plausible the view that information rather than
matter and energy should be regarded as the more fundamental reality.
I discussed Wiener's and Fredkin's view of information as the fundamental
building block for physics and other levels of reality in my 1990
book The
Age of Intelligent Machines.16
The complexity of casting all of physics in terms of computational
transformations proved to be an immensely challenging project, but
Fredkin has continued his efforts.17
Wolfram has devoted a considerable portion of his efforts over the
past decade to this notion, apparently with only limited communication
with some of the others in the physics community who are also pursuing
the idea.
Wolfram's stated goal "is not to present a specific ultimate
model for physics,"18 but in
his "Note for Physicists,"19
which essentially equates to a grand challenge, Wolfram describes
the "features that [he] believe[s] such a model will have."
In The
Age of Intelligent Machines, I discuss "the
question of whether the ultimate nature of reality is analog or
digital," and point out that "as we delve deeper and deeper
into both natural and artificial processes, we find the nature of
the process often alternates between analog and digital representations
of information."20 As an illustration,
I noted how the phenomenon of sound flips back and forth between
digital and analog representations. In our brains, music is represented
as the digital firing of neurons in the cochlea, representing different
frequency bands. In the air and in the wires leading to loudspeakers,
it is an analog phenomenon. The representation of sound on a music
compact disc is digital and is interpreted by digital circuits.
But the digital circuits consist of thresholded transistors, which
are analog amplifiers. As amplifiers, the transistors manipulate
individual electrons, which can be counted and are, therefore, digital,
but at a deeper level are subject to analog quantum field equations.21
At a yet deeper level, Fredkin, and now Wolfram, are theorizing
a digital (i.e., computational) basis to these continuous equations.
It should be further noted that if someone actually does succeed
in establishing such a digital theory of physics, we would then
be tempted to examine what sorts of deeper mechanisms are actually
implementing the computations and links of the cellular automata.
Perhaps, underlying the cellular automata that run the Universe
are yet more basic analog phenomena, which, like transistors, are
subject to thresholds that enable them to perform digital transactions.
Thus establishing a digital basis for physics will not settle the
philosophical debate as to whether reality is ultimately digital
or analog. Nonetheless, establishing a viable computational model
of physics would be a major accomplishment. So how likely is this?
We can easily establish an existence proof that a digital model
of physics is feasible, in that continuous equations can always
be expressed to any desired level of accuracy in the form of discrete
transformations on discrete changes in value. That is, after all,
the basis for the fundamental theorem of calculus.22
However, expressing continuous formulas in this way is an inherent
complication and would violate Einstein's dictum to express things
"as simply as possible, but no simpler." So the real question
is whether we can express the basic relationships that we are aware
of in more elegant terms, using cellular-automata algorithms. One
test of a new theory of physics is whether it is capable of making
verifiable predictions. In at least one important way that might
be a difficult challenge for a cellular automata-based theory because
lack of predictability is one of the fundamental features of cellular
automata.
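As a small illustration of that existence proof, the sketch below
(in Python; the particular equation and step sizes are my own
illustrative choices) re-expresses the continuous equation
dy/dt = -y as discrete transformations on discrete changes in value,
and shows the discrete result approaching the exact solution
y = e^(-t) as the step shrinks.

```python
import math

def discrete_solution(t_end, dt):
    y, t = 1.0, 0.0
    while t < t_end:
        y += dt * (-y)      # discrete update standing in for dy/dt = -y
        t += dt
    return y

for dt in (0.1, 0.01, 0.001):
    approx = discrete_solution(1.0, dt)
    print(f"dt={dt}: discrete={approx:.6f}, exact={math.exp(-1.0):.6f}")
```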
Wolfram starts by describing the Universe as a large network of
nodes. The nodes do not exist in "space," but rather space,
as we perceive it, is an illusion created by the smooth transition
of phenomena through the network of nodes. One can easily imagine
building such a network to represent "naïve" (i.e.,
Newtonian) physics by simply building a three-dimensional network
to any desired degree of granularity. Phenomena such as "particles"
and "waves" that appear to move through space would be
represented by "cellular gliders," which are patterns
that are advanced through the network for each cycle of computation.
Fans of the game of "Life" (a popular game based on cellular
automata) will recognize the common phenomenon of gliders, and the
diversity of patterns that can move smoothly through a cellular
automaton network. The speed of light, then, is the result of the
clock speed of the celestial computer since gliders can only advance
one cell per cycle.
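For readers who have not seen the game, here is a minimal sketch
(in Python, with grid size and step count chosen arbitrarily) of the
standard Life rules and a glider: a small pattern that advances one
diagonal cell every four update cycles, much like the "cellular
gliders" described above.

```python
SIZE, STEPS = 12, 8
grid = [[0] * SIZE for _ in range(SIZE)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # a glider
    grid[r][c] = 1

def step(g):
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            neighbors = sum(g[(r + dr) % SIZE][(c + dc) % SIZE]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0))
            # Life rule: a cell survives with 2-3 neighbors, is born with 3.
            new[r][c] = 1 if neighbors == 3 or (g[r][c] and neighbors == 2) else 0
    return new

for _ in range(STEPS):
    print("\n".join("".join("#" if x else "." for x in row) for row in grid))
    print()
    grid = step(grid)
```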
Einstein's General Relativity, which describes gravity as perturbations
in space itself, as if our three-dimensional world were curved in
some unseen fourth dimension, is also straightforward to represent
in this scheme. We can imagine a four-dimensional network and represent
apparent curvatures in space in the same way that one represents
normal curvatures in three-dimensional space. Alternatively, the
network can become denser in certain regions to represent the equivalent
of such curvature.
A cellular-automata conception proves useful in explaining the
apparent increase in entropy (disorder) that is implied by the second
law of thermodynamics. We have to assume that the cellular-automata
rule underlying the Universe is a Class 4 rule (otherwise the Universe
would be a dull place indeed). Wolfram's primary observation that
a Class 4 cellular automaton quickly produces apparent randomness
(despite its deterministic process) is consistent with the tendency
towards randomness that we see in Brownian motion, and that is implied
by the second law.
Special relativity is more difficult. There is an easy mapping
from the Newtonian model to the cellular network. But the Newtonian
model breaks down in special relativity. In the Newtonian world,
if a train is going 80 miles per hour, and I drive behind it on
a nearby road at 60 miles per hour, the train will appear to pull
away from me at a speed of 20 miles per hour. But in the world of
special relativity, if I leave Earth at a speed of three-quarters
of the speed of light, light will still appear to me to move away
from me at the full speed of light. In accordance with this apparently
paradoxical perspective, both the size and subjective passage of
time for two observers will vary depending on their relative speed.
Thus our fixed mapping of space and nodes becomes considerably more
complex. Essentially each observer needs his own network. However,
in considering special relativity, we can essentially apply the
same conversion to our "Newtonian" network as we do to
Newtonian space. It is not clear, though, that we are achieving
greater simplicity in representing special relativity in this way.
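For concreteness, the standard relativistic velocity-composition
formula (my addition here, not part of the original article) shows
why light still recedes at the full speed of light from the departing
observer, whatever the observer's own speed u:

$$w = \frac{u + v}{1 + uv/c^{2}}, \qquad v = c \;\Rightarrow\; w = \frac{u + c}{1 + u/c} = c .$$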
A cellular node representation of reality may have its greatest
benefit in understanding some aspects of the phenomenon of quantum
mechanics. It could provide an explanation for the apparent randomness
that we find in quantum phenomena. Consider, for example, the sudden
and apparently random creation of particle-antiparticle pairs. The
randomness could be the same sort of randomness that we see in Class
4 cellular automata. Although predetermined, the behavior of Class
4 automata cannot be anticipated (other than by running the cellular
automata) and is effectively random.
This is not a new view, and is equivalent to the "hidden variables"
formulation of quantum mechanics, which states that there are some
variables that we cannot otherwise access that control what appears
to be random behavior that we can observe. The hidden variables
conception of quantum mechanics is not inconsistent with the formulas
for quantum mechanics. It is possible, but it is not popular with
quantum physicists, because it requires a large number of assumptions
to work out in a very particular way. However, I do not view this
as a good argument against it. The existence of our Universe is
itself very unlikely and requires many assumptions to all work out
in a very precise way. Yet here we are.
A bigger question is how could a hidden-variables theory be tested?
If based on cellular automata-like processes, the hidden variables
would be inherently unpredictable, even if deterministic. We would
have to find some other way to "unhide" the hidden variables.
Wolfram's network conception of the Universe provides a potential
perspective on the phenomenon of quantum entanglement and the collapse
of the wave function. The collapse of the wave function, which renders
apparently ambiguous properties of a particle (e.g., its location)
retroactively determined, can be viewed from the cellular network
perspective as the interaction of the observed phenomenon with the
observer itself. As observers, we are not outside the network, but
exist inside it. We know from cellular mechanics that two entities
cannot interact without both being changed, which suggests a basis
for wave function collapse.
Wolfram writes that "If the Universe is a network, then it
can in a sense easily contain threads that continue to connect particles
even when the particles get far apart in terms of ordinary space."
This could provide an explanation for recent dramatic experiments
showing nonlocality of action in which two "quantum entangled"
particles appear to continue to act in concert with one another
even though separated by large distances. Einstein called this "spooky
action at a distance" and rejected it, although recent experiments
appear to confirm it.
Some phenomena fit more neatly into this cellular-automata network
conception than others. Some of the suggestions appear elegant,
but as Wolfram's "Note for Physicists" makes clear, the
task of translating all of physics into a consistent cellular automata-based
system is daunting indeed.
Extending his discussion to philosophy, Wolfram "explains"
the apparent phenomenon of free will as decisions that are determined
but unpredictable. Since there is no way to predict the outcome
of a cellular process without actually running the process, and
since no simulator could possibly run faster than the Universe itself,
there is, therefore, no way to reliably predict human decisions.
So even though our decisions are determined, there is no way to
predetermine what these decisions will be. However, this is not
a fully satisfactory examination of the concept. This observation
concerning the lack of predictability can be made for the outcome
of most physical processes, e.g., where a piece of dust will fall
onto the ground. This view thereby equates human free will with
the random descent of a piece of dust. Indeed, that appears to be
Wolfram's view when he states that the process in the human brain
is "computationally equivalent" to those taking place
in processes such as fluid turbulence.
Although I will not attempt a full discussion of this issue here,
it should be noted that it is difficult to explore concepts such
as free will and consciousness in a strictly scientific context
because these are inherently first-person subjective phenomena,
whereas science is inherently a third person objective enterprise.
There is no such thing as the first person in science, so inevitably
concepts such as free will and consciousness end up being meaningless.
We can either view these first person concepts as mere illusions,
as many scientists do, or we can view them as the appropriate province
of philosophy, which seeks to expand beyond the objective framework
of science.
There is a philosophical perspective to Wolfram's treatise that
I do find powerful. My own philosophy is that of a "patternist,"
which one might consider appropriate for a pattern recognition scientist.
In my view, the fundamental reality in the world is not stuff, but
patterns.
If I ask the question, 'Who am I?' I could conclude that perhaps
I am this stuff here, i.e., the ordered and chaotic collection of
molecules that comprise my body and brain.
However, the specific set of particles that comprise my body and
brain is completely different from the atoms and molecules that
comprised me only a short while (on the order of weeks) ago. We
know that most of our cells are turned over in a matter of weeks.
Even those that persist longer (e.g., neurons) nonetheless change
their component molecules in a matter of weeks.
So I am a completely different set of stuff than I was a month
ago. All that persists is the pattern of organization of that stuff.
The pattern changes also, but slowly and in a continuum from my
past self. From this perspective I am rather like the pattern that
water makes in a stream as it rushes past the rocks in its path.
The actual molecules (of water) change every millisecond, but the
pattern persists for hours or even years.
It is patterns (e.g., people, ideas) that persist, and in my view
constitute the foundation of what fundamentally exists. The view
of the Universe as a cellular automaton provides the same perspective,
i.e., that reality ultimately is a pattern of information. The information
is not embedded as properties of some other substrate (as in the
case of conventional computer memory) but rather information is
the ultimate reality. What we perceive as matter and energy are
simply abstractions, i.e., properties of patterns. As a further
motivation for this perspective, it is useful to point out that,
based on my research, the vast majority of processes underlying
human intelligence are based on the recognition of patterns.
However, the intelligence of the patterns we experience in both
the natural and human-created world is not primarily the result
of Class 4 cellular automata processes, which create essentially
random assemblages of lower level features. Some people have commented
that they see ghostly faces and other higher order patterns in the
many examples of Class 4 images that Wolfram provides, but this
is an indication more of the intelligence of the observer than of
the pattern being observed. It is our human nature to anthropomorphize
the patterns we encounter. This phenomenon has to do with the paradigm
our brain uses to perform pattern recognition, which is a method
of "hypothesize and test." Our brains hypothesize patterns
from the images and sounds we encounter, followed by a testing of
these hypotheses, e.g., is that fleeting image in the corner of
my eye really a predator about to attack? Sometimes we experience
an unverifiable hypothesis that is created by the inevitable accidental
association of lower-level features.
Some of the phenomena in nature (e.g., clouds, coastlines) are
explained by repetitive simple processes such as cellular automata
and fractals, but intelligent patterns (e.g., the human brain) require
an evolutionary process (or, alternatively the reverse-engineering
of the results of such a process). Intelligence is the inspired
product of evolution, and is also, in my view, the most powerful
"force" in the world, ultimately transcending the powers
of mindless natural forces.
In summary, Wolfram's sweeping and ambitious treatise paints a
compelling but ultimately overstated and incomplete picture. Wolfram
joins a growing community of voices that believe that patterns of
information, rather than matter and energy, represent the more fundamental
building blocks of reality. Wolfram has added to our knowledge of
how patterns of information create the world we experience and I
look forward to a period of collaboration between Wolfram and his
colleagues so that we can build a more robust vision of the ubiquitous
role of algorithms in the world.
The lack of predictability of Class 4 cellular automata underlies
at least some of the apparent complexity of biological systems,
and does represent one of the important biological paradigms that
we can seek to emulate in our human-created technology. It does
not explain all of biology. It remains at least possible, however,
that such methods can explain all of physics. If Wolfram, or anyone
else for that matter, succeeds in formulating physics in terms of
cellular-automata operations and their patterns, then Wolfram's
book will have earned its title. In any event, I believe the book
to be an important work of ontology.
1 Wolfram, A New Kind of
Science, page 2.
2 Ibid, page 849.
3 Rule 110 states that a cell becomes
white if its previous color and its two neighbors are all black
or all white or if its previous color was white and the two neighbors
are black and white respectively; otherwise the cell becomes black.
4 Wolfram, A New Kind of
Science, page 4.
5 The genome has 6 billion bits, which
is 800 million bytes, but there is enormous repetition, e.g., the
sequence "ALU" which is repeated 300,000 times. Applying compression
to the redundancy, the genome is approximately 23
million bytes compressed, of which about half specifies the brain's
starting conditions. The additional complexity (in the mature brain)
comes from the use of stochastic (i.e., random within constraints)
processes used to initially wire specific areas of the brain, followed
by years of self-organization in response to the brain's interaction
with its environment.
6 See my book The
Age of Spiritual Machines, When Computers Exceed Human Intelligence
(Viking, 1999), the sections titled "Disorder" and "The Law of
Increasing Entropy Versus the Growth of Order" on pages 30 - 33.
7 A computer that can accept as input
the definition of any other computer and then simulate that other
computer. It does not address the speed of simulation, which might
be slow in comparison to the computer being simulated.
8 Wolfram, A New Kind of
Science, page 383.
9 The seven commands of a Turing Machine
are: (i) Read Tape, (ii) Move Tape Left, (iii) Move Tape Right,
(iv) Write 0 on the Tape, (v) Write 1 on the Tape, (vi) Jump to
another command, and (vii) Halt.
10 As Wolfram points out, the previous
simplest Universal Turing machine, presented in 1962, required 7
states and 4 colors. See Wolfram, A New Kind of Science,
pages 706 - 710.
11 The "nor" gate transforms two
inputs into one output. The output of "nor" is true if and only if
neither A nor B is true.
12 See my book The
Age of Intelligent Machines, section titled "A nor
B: The Basis of Intelligence?," pages 152 - 157.
13 Edward Fredkin, as quoted in Did
the Universe Just Happen by Robert Wright.
14 Ibid.
15 Many of Fredkin's results come
from studying his own model of computation, which explicitly reflects
a number of fundamental principles of physics. See the classic Edward
Fredkin and Tommaso Toffoli, "Conservative Logic," International
Journal of Theoretical Physics 21, numbers 3-4 (1982). Also,
a set of concerns about the physics of computation analytically
similar to those of Fredkin's may be found in Norman Margolus, "Physics
and Computation," Ph.D. thesis, MIT.
16 See The
Age of Intelligent Machines, section titled "Cybernetics:
A new weltanschauung," pages 189 - 198.
17 See the web site: www.digitalphilosophy.org,
including Ed Fredkin's essay "Introduction to Digital Philosophy."
Also, the National Science Foundation sponsored a workshop during
the Summer of 2001 titled "The Digital Perspective," which covered
some of the ideas discussed in Wolfram's book. The workshop included
Ed Fredkin, Norman Margolus, Tom Toffoli, Charles Bennett, David
Finkelstein, Jerry Sussman, Tom Knight, and Physics Nobel Laureate
Gerard 't Hooft. The workshop proceedings will be published soon,
with Tom Toffoli as editor.
18 Stephen Wolfram, A New
Kind of Science, page 1,043.
19 Ibid, pages 1,043 - 1,065.
20 The
Age of Intelligent Machines, pages 192 - 198.
21 Ibid.
22 The fundamental theorem of calculus
establishes that differentiation and integration are inverse operations.
Mind·X Discussion About This Article:
Re: CA
I have been contemplating this type of thing for a few years, and have had meager success when attempting to articulate it to co-workers and/or friends, and when attempting to get them to understand the significance and ramifications. It's nice to finally encounter someone with similar thinking. If our brains/minds (patterns) can in fact be "uploaded" to some other machine/system, what a wondrous and strange facet of reality that will present. As you stated, shortly before a person dies (i.e., the pattern ceases to exist) it would be imperative to "upload". Along a similar line, since the pattern changes throughout the decades (albeit possibly within somewhat restrictive "individual" person constraints), couldn't several versions of patterns be uploaded, and prior to death, a selection could be made as to the "best" one (e.g., age 30), which would be the one that you wish to represent you in perpetuity? Who would "own" your pattern? Would there be any limitations on if and how often your "pattern" could be reproduced? What if somebody purposefully or accidentally deleted your pattern(s) ("killed" you)? These questions seem somewhat similar to but even more significant than those being wrestled with by the human genome ethicists.
the secret-of-the-universe in one line of code
In response to this piece from the "Wired" article by Steven Levy.
As dessert is served, I bring up the secret-of-the-universe question. Wolfram's theory that there is a single rule at the heart of everything - a single simple algorithm that, in effect, generates all the rules of physics and everything else - is bound to be one of his most controversial claims, a theory that even some of his close friends in physics aren't buying. Furthermore, Wolfram rubs our faces in the dreary implications of his contention. Not only does a single measly rule account for everything, but if one day we actually see the rule, he predicts, we'll probably find it unimpressive. "One might expect," he writes, "that in the end there would be nothing special about the rule for our universe - just as there has turned out to be nothing special about our position in the solar system or the galaxy."
This "New Kind of Science" is really very old science. The "single measly rule" is actually a proportion.
(The square root of 5, plus 1) divided by 2.
It's kind of shocking that Wolfram could spend 10 years writing this book and then have the nerve to publish it without the answer.
Stephen Blake
Re: CA
I enjoyed Ray's review of Wolfram's book. I wrote a similar point of view for a critique on an economics list at http://maelstrom.stjohns.edu/CGI/wa.exe?A2=ind0205&L=hayek-l&D=1&O=D&P=13386. I think Wolfram's work is great, but have some epistemological differences. Were I to write a complete review I would have to add in much of the praise Kurzweil has taken the trouble to write. Here is the critique anyway:
---------------------------------------
I have now done a brief speed read of Wolfram's book. The core idea is that
complex (NP-complete) systems can evolve out of simple rules. This is indeed
Hayekian, but not new in and of itself. I am trying to find what else it
says. In the beginning he claims it has implications in philosophy, social
science, etc., but unfortunately the book is not organized by such
areas. It rather proceeds by examining particular sets of rules he found
that exhibit interesting, life-like behavior and shapes of entities. On the
way, he says how this is relevant to regular science, technology, etc.
Unfortunately somewhere along the way he goes overboard and seems to adopt
Leibniz's metaphysics. He claims that if complex systems can emerge out of
nodes acting per simple rules, then it is plausible that the Universe
*actually* consists of discrete nodes interacting per some Grand Simple
Rules which we have yet to find out (he does not claim to have found them
yet). This begs the question of continuous systems as being fundamentally
analog and not digital or discrete. That reality can be fairly accurately
modeled using an atomic system does not mean that it IS atomic (ie
consisting of 'uncuttable' -- atomic -- particles). Newton's metaphysics
transcended this dilemma by using continuous time equations and invented
much of calculus in order to do so. In fact I think Kant split with the
Leibnizians because of this metaphysics, though Kant's metaphysics tried to
transcend Newton's basic 3D moving in time.
In short, I find interesting experiments, and see some use in his
painstaking discovery of which particular systems of rules produce lifelike
objects and behavior. But I do not find a new Science, and the regular
Mathematics provides a LOT more than Wolfram does in this book (Leibniz's
contributions to math are phenomenal, even if his metaphysics is flawed). A
far better (and shorter) book with far better real math (which makes it a
very hard read) is John Holland's Adaptation in Natural and Artificial
Systems.
Since the view on metaphysics is the most important to an Austrian group, I
think that is where attention is due. I think Kant had it almost right,
though he inverts what I believe is important -- I believe that reality
itself and the data it provides is most important, and logic and math are
only tools that we use. The awe and respect should not be of such
'transcendental' relationships, but of reality; and the transcendental
relationships of mental constructs are only tools that we can use to
discover how reality (including us) works. The problem here is whether this
is too much of a blow to the Enlightenment prime mover of exaltation of the
ego and man's capabilities. I think sentience -- the ability of using
language, awareness that we use language to model reality, and awareness of
the distinction between models and reality itself -- is enough to sustain
ego as something positive and worth holding important, without going
overboard and becoming enraptured by transcendental / apodeictic aspects of
math and logic and forgetting about reality in the process. It is around
this principle that I find Hayek's "knowledge problem" and critique of
planning different from apriorists.
Certainly Wolfram also emphasizes time and again in this book that even
complex systems generated from simple axiomatic systems display "knowledge
problems" that are insurmountable. But this does not mean that logic and
math are not important and useful. Holland is an example of useful math in
complex system analysis (NOT in the generation of complex systems but
analysis of complex systems one finds given in the world already). Economics
too could potentially have a different math that could largely agree with
classical economists and still provide causal reasons for commonly known
economic principles -- it simply remains for someone to abstract what they
are saying into mathematical equations -- hard but not impossible.
Regards,
Karun
--
Karun Philip
Author: "Zen and the Art of Funk Capitalism: A General Theory of
Fallibility"
http://www.k-capital.com/HA.htm
Re: CA
Stan wrote:
"In your critique, you speak of reality as something concrete. One could submit that reality, as you propose it, is hard to find. Relativity shows that reality changes depending on your frame of reference. Gödel assures us that no theorem can be proven. Quantum Mechanics reveals that measuring things always changes them and therefore reality."
What I mean by the word reality is not something that is hard to experience -- but perhaps hard to understand. I just mean the material universe, whatever it may *actually* be (energy, unknowable in toto, etc.). People do tend to distance themselves from reality because they are thinking of their absolutely certain knowledge about it, instead of simply going out and smelling a rose -- it is easy to experience even if hard to understand.
In the book, I indeed begin with Gödel, and point to JR Lucas' demonstration that the implication of Gödel's meta-mathematics in mathematical logic is the non-refutability of fallibility. There are levels of abstraction from reality in our brains. Our perceptions are (most probably, unrefutedly so far) neural outputs in response to stimuli from something "out there". I do not believe fallibility has major philosophical implications at the perception level (though technically one has to simply have a conjecture that there is a reality out there and have faith in that belief at least until and unless it is refuted). There can be perceptual illusions, mismeasurement, limitations of the senses, etc., but machines can usually be designed to perceive more accurately, at least to some degree of accuracy (let's leave QM out of it for now). From perceptions we have words. When we perceive difference in reality -- any difference -- we assign words to each entity/attribute/process that is perceived as different. So language evolves as we perceive ever finer differences. At the next level of abstraction we have logic. Further out, we have mathematics, and then meta-mathematics.
Kant's theory was that these levels of abstraction are analogous (like a Fourier or Laplace Transform, for any engineers out there), and so one can use deductions at one level of abstraction -- say mathematics -- and apply the deduced rule to logic or reality. The problem is that while this is a powerful technique, it introduces fallibility -- a theorem may be apodeictic (necessarily or demonstrably true) in math, but its application to reality is heuristic only, and this introduces fallibility. 1+1 = 2, but if I take one stone and place it near another stone and the second one breaks when I put it down, then I have three stones. (Obviously this does not mean 1+1 = 3!!).
Now, coming to QM, I do not see the problem as being with the nature of reality, but with the limitation of our knowledge of it. See, I am talking about the first level of abstraction I mentioned above -- an object and knowledge of it are two things. One is inside our heads and the other is "out there". The rose and the name of the rose are two entities, though there is a direct correspondence between them. The real rose is complete, though, while the neural perception is not, per the heuristic implications of Gödel. In normal terms, when you remember the rose, it is never the complete thing -- just some of the attributes that you noticed. That we don't know the position and momentum of the electron is our problem. The uncertainty is Heisenberg's and not the particle's. Even the concept of a particle is in serious philosophical trouble. Where exactly does the table in front of you end and the atmosphere begin? Where exactly does Mount Everest end and some other mountain begin? Nevertheless, we use coherence to create atomic constructs -- particle, Mount Everest -- because it is very useful, given our capability of language, logic and math, which need atomic base entities to function. This does not mean that the actual subset of reality you refer to is an atomic (uncuttable) entity. Even in superstring theory, some are now claiming that once the 3D string equation is written, those strings are the uncuttable entities that constitute the universe. But, even if there is no further cutting, what about choice in selecting the coordinate system? There is an Axiom of Choice in math which is probably related to both Gödel and this apparent problem. But the problem is only a limitation of knowledge. Not a question of reality's nature, which is whatever it is and whose origins are whatever its origins are. Newton's metaphysics simply stated that, and explicitly declined to try to explain the ultimate cause.
Theories are our atomic/linguistic/mathematical models that are (hopefully) approximations to whatever reality is. Theories are always conjectures subject to refutation through logic or counter-evidence. With ingenuity, we are able to shift paradigms from time to time and switch to theories that better explain more of whatever perceptions and measurements we get from reality.
Regards,
Karun.
--
Karun Philip
Author: Zen and the Art of Funk Capitalism: A General Theory of Fallibility
http://www.k-capital.com
Re: CA
My comments were perhaps more concerned with Kant's concept of truth, rather than reality (maybe a difference without a distinction). In the introduction to his Logic, Kant declares that "a universal, material criterion of truth is not possible -- indeed, it is even self-contradictory," and that "material truth must consist in the agreement of a cognition with that definite object to which it refers." My contention is that our perception of 'reality' is only a coarse-graining of truth, and that the coarseness is determined by the limitations of our perception and empirical experience combined with the limitations of our cognition. The desk supporting the monitor in front of me appears to be solid and substantial. It is counterintuitive to speak of it as being mostly empty space, even more so to assert that the monitor is not being supported by what I see, but rather by the unseen forces between that which I see.
I choose to view Stephen Wolfram's book not in the context of its universality of truth, but instead in terms of its contribution (and it may be considerable) both to the discussion of complexity and to the four philosophical questions of Kant.
1. What can I know? (Answered by Metaphysics)
2. What ought I do? (Answered by Morality)
3. What may I hope? (Answered by Religion/Science)
4. What is Man? (If we can answer this one, we will know the answer to the other three)
Thanks,
Stan Rambin
Re: CA
"1. What can I know
vs.
1. What is
That is the whole point Kant is trying to make...
Saying 'What is' presupposes getting under the
skin of the Noumena.'What can I know' is much
more cautious"
I agree that it is more cautious. But this is exactly what I disagree with in Kant and Plato. They claim that noumena are "higher" forms than phenomena. They are just more abstract (no normative judgement applied). Neural excitations are still phenomena -- undoubtedly with distinct properties not found in other phenomena. In other words, noumena are just particular types of phenomena -- the types found in perceptrons.
There is no need to get "under the skin" of noumena in order to experience phenomena. Kant does not deny this -- he just disses phenomena as "base". I diss noumena as fallible. Kant exalts the transcendental subset of noumena (which he again sees as non-noumenal and non-phenomenal, which is false -- transcendental is a subset of noumenal is a subset of phenomenal) but Gödel proves the transcendental incomplete.
Once you get beyond exalting the transcendental, you can cease conflating metaphysics and epistemology. Yes, reality is not understandable with completeness, but reality is experienceable. Inferences based on observations are fallible, but fallible does not mean false -- just the possibility of falsehood along with a non-zero possibility of truth. As Popper put it, conjectural inferences subject to debate and refutation, and which survive debate and attempted refutation, are considered acceptable by modern science. It seems to work quite well. Using transcendental concepts (such as Newton's use of the Hamiltonian in representing the mathematics of physics) is useful, but as Einstein showed, Newton was still wrong -- there is no instantaneous force in the universe (as Newton described gravity). Yet we still use Newton's laws of physics and not Einstein's to build bridges and spacecraft -- they are a close enough approximation to reality for the purpose. Of course, we engineers are more realistic than scientists, so we always over-engineer and test, test, test...
Regards,
Karun.
--
Karun Philip
Author: Zen and the Art of Funk Capitalism: A General Theory of Fallibility
http://www.k-capital.com
Re: CA
>Why? Is there only one possible logical framework? Is Kant the only source for such a framework? I fail to see what your point is. Wolfram's assumptions seem to me to be just as logical as any others I've seen. He lays out his case step by step, demonstrating each one as he goes along. What is he lacking in your opinion?
Logic, as a modern scientific methodology, begins with Descartes, continues with Leibniz, and culminates in Kant. If support for a position is based on logic, then there is only one framework. Logical methodology is meant to illuminate a view, to make it clear and distinct from opposing views, and to work from the components to the whole of the conclusion. Science is not based on opinions, regardless of their beauty or simplicity.
This brings us to the problem. The conclusion that 'everything' can be explained by CA is not supported with a structure either of logic or empirical evidence satisfactory for making such a broad claim. It seems as if he moves from A = B, B = C, to C = Everything. The conclusion is more than the sum of the parts.
This does not mean that much of the treatise is not provocative and well presented. It does represent a serious effort (and mostly a successful one) to shed light on a basic question about the nature of the universe. But, I think it is safe to say the jury is still out on the 'answer to everything' part. Big Ideas require Big peer review.
Re: CA
>If science uses logical assumptions and methodologies to arrive at a conclusion (opposed to empirical), then you must read Wolfram or Kurzweil with an eye on the framework of Logic extolled by Kant.
>Why? Is there only one possible logical framework? Is Kant the only source for such a framework? I fail to see what your point is. Wolfram's assumptions seem to me to be just as logical as any others I've seen. He lays out his case step by step, demonstrating each one as he goes along. What is he lacking in your opinion?
------
Certainly, Kant is not the only possible source for such a framework. However, his motivating question was whether or not metaphysics could be possible as a science. He was prompted in this direction by the retreat to skepticism embraced by Hume. Moreover, contrary to the suggestion of other posts on this topic, Kant did not formulate transcendental idealism lightly. Rather, it was the only means by which he could establish a framework under which causality could be realized as a function of the sentient organism. Without this, Hume's skepticism must reign.
(For the record, he retracted his use of the term "transcendental" in favor of the term "critical" so as to reduce the confusion it was causing. In either case, his use of the term is a restricted reference to the cognitive faculty. It does not refer to cognition itself. Nor is it a reference to hyperphysical experience.)
You see, Hume is very convincing in his rejection that causal relationships may be attributed to some independent reality. As a matter of pragmatism, we must each adopt some assumption concerning such independence lest we fall into the trap of solipsism. Nevertheless, the logical progression from Descartes' meditations leads to the conclusion that our understanding of causality is internally constructed rather than observed. That was Hume's contribution and, in Kant's opinion, he failed to recognize how he might recover from his conclusions.
Kant may not be the only source for a logical framework; however, the problem which led him to write the "Critique of Pure Reason" had the potential to undermine all logical frameworks. The question he addressed was far more devastating than the incompleteness of arithmetic discovered by Goedel. Unfortunately, the pragmatic assumptions of those who practice the "hard" sciences often allow them to dismiss these philosophical debates as if they lack relevance.
With regard to your question concerning uniqueness, I can only offer my own thoughts on the matter.
Kant begins with a definition of space and time which are not the platonistic concepts of modern physics. Quite simply, he states that time is the form of inner sense and space, by all appearances, is the form of outer sense. Note that the term "appearance" has a technical meaning as the presentation of sensory perception associated with an object. The nature of objects in and of themselves is not known.
In any case, time and space are forms of sensibility for Kant. They do not necessarily constitute an environment independent of the sensuous organism as they did for Newton. They are simply a bifurcation of sensory experience into an internal form and an external form.
Now, one of the motivations for pursuing a foundation for mathematics in the nineteenth and twentieth centuries was to understand the concept of certainty--and, possibly, to identify that which could be known for certain. A diverse collection of accumulated results was reorganized under axiomatic systems. At the same time, Cantor's idea concerning collections taken as objects in their own right began to demonstrate utility. Eventually, set theory became the focus of foundational mathematics.
It would seem that logical thought has been converging to a common logical framework. But, few would ever attribute Kant with a prediction of this event.
You see, Kant recognized that time and space, as forms of sensibility, were the foundation for the intuitions of mathematicians. He is very clear about this and characterizes mathematical intuition as synthetic a priori cognition. This is to be contrasted with the analytic cognition typical of metaphysical concepts. The difference between the two lies in the fact that synthetic cognitions must be accompanied by a mental visualization. Otherwise, the two types of cognition must share the simple constraint that they not be self-contradictory.
One may think of the visualization characterizing synthetic cognitions as a rule of detachment that allows a language user to formulate an opinion concerning the certainty of the cognition based on personal experience. In contrast, opinions concerning metaphysical questions in general often depend on how credible the source of information is believed to be.
Now, in "Prolegomena to Any Future Metaphysics" Kant is very clear about the fact that the relationship between appearances is described using mathematics. It is this assertion which I believe constitutes a prediction concerning the evolution of a logical framework. Moreover, it suggests that mathematics is a sublanguage--or, portable subgrammar--whose continued refinement is governed by our attempts to explain natural phenomena.
To the extent that the Kantian framework is valid, the assumed validity of physical laws adopted by the modern scientific community should lead to a common logical framework.
There are two unfortunate discrepancies in this scenario. On the one hand, Kant's detractors were able to focus criticism on some of his assertions because his assumptions concerning the absoluteness of Euclidean geometry were about to be challenged by the discovery of curved geometries. On the other, the twentieth century mathematicians failed to complete their work in foundations. As for the former, one need only focus on Kant's general statements to assess their continued relevance. I shall return to the latter momentarily.
This post began with the observation that Kant was motivated by Hume's arguments concerning causality. For Kant, nature is not the existence of things in and of themselves. Rather, it is the complex of all objects of experience as given to us through time and space as forms of sensibility. Although subtle, this distinction is the link which permits Kant to recover causal intuition.
Indeed, temporality as an internalized experience yields a notion of causality which can be attributed to the sentient organism a priori. Namely, there is a distinction between asserting that everything has a cause and asserting that every observed event has an antecedent which it follows according to a universal law. The former assumes knowledge concerning the nature of reality. In contrast, the latter merely reflects the ability of the cognitive faculty to organize its experiences according to an ordering. Naturally, Kant adopted the presumption of antecedents because he believed it reflected conditions on which experience was possible that could be known a priori to any possible experience.
In the end, however, Kant concluded that metaphysics as a science was not possible. In order for our knowledge of mathematics and natural science to be secure, our knowledge of reality must be obscured. Mathematics and natural science are valid with respect to appearances. However, reality is hidden behind the filter of our sensuous experience. As the translator's preface to my copy of the "Prolegomena to Any Future Metaphysics" observes, his transcendental philosophy "is a lesson in intellectual humility."
Returning momentarily to the earlier observation that the mathematical community failed to complete their work, I direct your attention to conventional first-order model theory, where the identity predicate is taken to be a "logical" symbol of the language. This fails a use case analysis of mathematicians as language users. Specifically, note that mathematicians use definitions to introduce language elements. Neither the identity predicate nor the membership predicate of set theory has a definition in conventional models. Moreover, given that no model of the axioms of set theory can be proven to be a model of the class universe, the interpretation of these predicates is without foundation.
For the last seventeen years I have been working on a different foundation for set theory. The sentences which resolve the problems of the preceding paragraph are trivial and uninteresting from a mathematical perspective. They were formulated in such a manner as not to impact any existing results in mathematics. Consequently, they solve no problems of current interest.
Nevertheless, the sentences are philosophically interesting. As the predicates are obtained via definition, the first predicate of the language is necessarily self-defining. It is a strict transitive predicate which is subsequently interpreted as strict set containment when characterized in terms of the membership predicate by a theorem.
Now, the transitivity of the initial predicate is necessary to the definition of a membership relation. To see this, visualize membership relative to ascending nestings of superclasses. That is, X is an element of Y if X is an element of every superset of Y.
Finally, the identity predicate is characterized in terms of topological separation rather than extensionality. More precisely, distinctness is characterized thusly. This result, however, requires axioms. Contrast this with conventional model theory where one must assume that an identity predicate is understood a priori.
In any case, the use of circular reference to obtain a language for mathematics completes its foundation. The self-referential syntactic forms constitute a construction from first principles. Circularity, however, makes those principles void of meaning without a supporting visualization in the sense of a synthetic cognition. It, therefore, becomes necessary to introduce Venn diagrams which illustrate the underlying intuitions. The axioms which subsequently yield an identity predicate are derived from manipulations of Venn diagrams expressing topological separation. It is quite elegant.
So, it has been my experience that there is precisely one logical framework. Its purpose is fundamentally grammatical. That is, in order for us to communicate, we must share a capability to process linguistic constructions. Whether or not language processing is binary, the data provided to each of us is obtained bitwise from the firing of neurons. Ultimately, our ability to agree on a representation for this framework necessitates a self-referential form. This follows from the fact that our neural networks are disconnected.
That is, we are agreeing on conscious experience. Therefore, we must also agree on the self-contained nature of that experience.
The existence of other frameworks derives from constructions similar to those of conventional model theory. They depend on metalinguistic support which, in turn, depends on metametalinguistic support and so on. This is just a grammatical expression of Russell's unsatisfactory theory of types. His fear of self-reference completely biased the foundations of mathematics to reject any use of circular reference. Thus, a generation of mathematicians failed to properly investigate nonparadoxical uses of self-referencing syntactic forms.
As for your final question, it would be difficult to contend with your observation that Mr. Wolfram's assumptions seem logical. As noted earlier, opinions formulated about analytic cognitions are closely bound with the credibility of the information source. And there is no problem with that. In spite of Kant's own conclusions, he recognized that humans would never stop seeking answers for their questions. He discussed incompleteness long before Goedel. Goedel simply expressed it in a form which would silence Hilbert. Thank goodness.
Mr. Wolfram is pursuing answers in the best of traditions. He is lacking nothing. Personally, however, I do not find his work compelling.
Re: CA
That was the most excellent and well thought-out reply I've ever received, and I really appreciate the effort that went into it. I'm also convinced that mathematics is just a subset of language, and it's the restrictions mathematics places on the use of language that make it so powerful. Unfortunately, those same restrictions also take away some of the power of language.
I also find the space-time thing especially interesting, as I came to the opposite conclusion from Kant in that I see space as internal and time as external. I may be looking at different aspects of time and space than he was, but if you think about dimensions and how we use them, space is concerned with the internal dimensions of an object and time is concerned with the external dimensions.
I say this because length, width and height measure the object within its prescribed boundaries while time measures the movement of objects (including the internal dimensions but excluding the external dimensions) through the universe outside the object.
But all of these measurements, it seems to me, are arbitrary and contained within the linguistic framework by which we divide up our experience with the universe. Therefore, another type of being might approach the description of the universe from another point of view not based upon the linguistic structure that is the foundation of our shared experience. We divide the world up into the units we do because our language leads us down this path. Every new concept is built on an older one and is made to fit into the structure that language has built.
But if you read Whorf, you will see that other linguistic groups, such as the American Indians, for example, divided the universe of their experience in a different way. The fact that they came around to the European model in the end has more to do with the size and complexity of that model (in my opinion), and with the course of history in which they were absorbed and their culture extinguished, than it has to do with the model we now use being the only possible model.
We are creatures of our own cultural evolution as well as our genetic evolution. Our culture influences the shape of the universe as we perceive it as well as the tools with which we are able to examine it. What we are missing is all the things our culture doesn't include -- which are beyond our imagination simply because our culture hasn't grown large and complex enough to include them yet.
This is starting to run on and run away from the things you were talking about, so I'll just say there were some other areas where the Kantian viewpoint also conflicts with my own but they are trivial by comparison. But we shouldn't forget that the sciences of chaos and complexity show that seemingly trivial parts of a complex adaptive system can cause tremendous changes in the development of that system and that's what Wolfram's work seems aimed at dealing with.
Grant
Re: CA
I expended much effort trying to understand Kant's concept of time. Having grown up with an avid interest in science, I had never questioned the platonistic environment assumed by the scientific community. Ultimately, my deliberations on Kant's view led me to understand time as the partitioning of my stream of consciousness by the internal vocalization of my thoughts. In essence, the demarcation of each syllable delineated the progression of a counting protocol.
I had hoped that Wolfram's work might offer insight concerning a slightly different problem. Namely, could discrete automata explain the uniform progression of time assumption underlying modern physics? Typically, computation is an iterative process which is understood in terms of clock cycles. It had been my hope that Wolfram might have been flipping this relationship in some way that would justify this belief in uniformity. Instead he implicitly relies on this very assumption to run his automata.
Another possibility might have dealt with causal connectedness. The probabilistic interpretation of quantum phenomena is at odds with traditional assumptions concerning causality. The idea that an automata-based model might organize events into a causal structure has some appeal to me. After all, evolution proceeds with respect to environmental conditions, and, causal structure is precisely the environment in which the next particle interaction realizes statefulness. What simple rule translates the outcome of that interaction into causal constraints for creations, interactions, and annihilations which have yet to occur?
It would, however, be incorrect to infer that I do not deeply respect the possibilities presented by chaos and complexity. Let me share a topological fantasy...
Imagine the positive real quadrant of a three dimensional complex space. Let a Cantor cube be situated so that one of its corners is coincident with the origin of the space. Moreover, suppose that any macroscopic attempt to measure the edge length of any such Cantor cube always yields the same result. Thus, metric structure fails because of the self-similarity of intermediate Cantor cubes arising from the iterative construction.
Suppose that the distinction between related iterations is temporal and suppose further that the "smaller" Cantor cubes converging to the origin of the space are actually three dimensional projections of higher dimensional cubes. More precisely, think of the iterations as a counting protocol and think of the action of that protocol as increasing the dimension by one for each "smaller" Cantor cube obtained as part of the iteration.
The reason for this increasing dimension lies with the length of the cube diagonals. As the dimension increases the diagonal length increases. Moreover, the angle between the cube edges and the diagonal increases with increasing dimension.
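A quick numerical check of those two claims (a Python sketch of my own, not part of the construction above): for a unit n-cube the main diagonal has length sqrt(n), and the angle between any edge and that diagonal is arccos(1/sqrt(n)), so both grow with the dimension.

import math

for n in (3, 4, 10, 100):
    diagonal = math.sqrt(n)                          # length of the main diagonal of a unit n-cube
    angle = math.degrees(math.acos(1.0 / diagonal))  # angle between an edge and that diagonal
    print(f"n={n:3d}  diagonal={diagonal:7.3f}  edge-to-diagonal angle={angle:5.1f} degrees")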
Now, interpreting this counting protocol temporally, only finitely many iterations can have occurred since the beginning of the universe. Whatever the number of iterations may be, it constitutes a local measurement of time. There is no reason to think that this value is universally the same.
How then are we to arrive at a comparison for two such elements? For one thing, we must assume convergence to the origin of the space. This is a bit of a solipsistic approach which asks, "What would I be measuring if I were at that other point?" Instead of thinking about the answer in the usual sense, consider that you must now regress temporally to the beginning of time and then progress temporally so as to arrive at the other point about which the question was asked.
As for the other relevant issue, the only mechanism for universally capturing "arbitrarily large but finite" in the absence of known bounds is by assuming global convergence through an infinite regression. It is my opinion that the assumption of uniform temporal progression throughout the universe is simply obfuscating an infinite assumption.
Now observe that in this limit, the diagonal of the cubes becomes an independent dimension orthogonal to all spatial dimensions from which it was derived. Moreover, it becomes infinitely long. Finally, the construction is situated in the positive real quadrant of an infinite dimensional complex space.
I admit that this is a total fantasy. But, it offered me a framework for things I was compelled to reconcile. I needed to see how the infinite dimensional complex spaces of quantum mechanics might arise from the spatiality I experience. Furthermore, I needed to realize a platonistic time dimension in terms of the spatial elements of my experience.
You should note, however, the central role of the Cantor cube in this construction. The self-similarity is essential to any utility it may have. So, you see, I have a great deal of respect for the contribution chaos and complexity might have for fundamental theories. I believe they may hold the key to understanding temporality as a topological feature of space.
When I first studied Kant's ideas of space and time I was highly motivated to understand its relationship to the platonistic space and time of modern physics. To a large extent I have found an answer that I can live with. However, I doubt it is the kind of answer with which others would be comfortable. Whether or not one is speaking of the ideality of space and time or the world view inherent to our natural languages, there are clearly limitations on our ability to describe the universe. Cellular automata may provide some new perspectives, but it is unlikely that they will circumvent these constraints.
Re: CA
I do not think it is possible to have anything but the linguistic discretized model -- the noumena in and of itself is indeed unknowable at some level, as Kant put it. However, as I mentioned, it is eminently experiencable. Just go out and smell a rose. The point of some meditative techniques (focusing on breathing, for instance) is supposedly to help reach close to (but not quite) the pure experience without one's fallible conjectures intruding. Unfortunately, these techniques make many people forget about reality entirely and focus on their own conjectures. In the end it is impossible to tell the difference: true nirvana is only for dead people ;-).
Like Kurzweil, I also worked in pattern recognition, neural networks, and character recognition. I came to a point where I realized that neural networks, by the nature of their structure, are inherently fallible. I realized the only way to solve character recognition tolerably was to at least achieve human capability. But if we did this we would have a complete AI. The key to it, I felt, was to have a neural model of language. How can language emerge spontaneously in a neural network receiving stimuli and creating abstract analogies? Hayek provides a structure and notes that once neural networks learn a pattern, they hold expectations ahead of perceiving. Then the perception comes in and is compared, modifying the expectations a bit if necessary. The key to a neural model of language is noting that perception is the perception of difference. Even if everything out there (and in here) is just superstrings, the particular organizations of superstrings that make up a cat are different from those that make up a dog. Blue is different from red. When we perceive a difference there is an implied perception of similarity or categorization. We then assign words to both sides of the difference, and language evolves by noticing ever finer differences. In an artificial neural network (ANN), we could have a preprocessor which behaves like a backprop network, and once a difference is found, it can be assigned a handle that is stored in a content-addressable memory (also neurally constructed -- there are well-known models). When the word is recalled, the weights of the backprop section are recreated to form the expectation. This ANN would keep learning by itself, with the goal of building a linguistic (handle-based) model of reality, while never assuming a model to be final.
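A minimal sketch of that scheme (illustrative only: the class and function names, the toy feature vectors, and the use of a plain dictionary in place of a neurally constructed content-addressable memory are all assumptions of mine, not anything from the book or the post):

class HandleMemory:
    """Toy stand-in for a content-addressable memory: handle -> stored weight pattern."""
    def __init__(self):
        self.store = {}

    def assign(self, handle, weights):
        self.store[handle] = list(weights)   # learn: keep the pattern under a name (a word)

    def expect(self, handle):
        return self.store[handle]            # recall: recreate the weights as an expectation

def surprise(expected, perceived):
    """How far a new perception departs from the recalled expectation."""
    return sum((e - p) ** 2 for e, p in zip(expected, perceived)) ** 0.5

memory = HandleMemory()
memory.assign("rose", [0.9, 0.1, 0.3])       # toy feature vectors standing in for learned weights
memory.assign("cat",  [0.2, 0.8, 0.5])

new_input = [0.85, 0.15, 0.35]
for word in ("rose", "cat"):
    print(word, round(surprise(memory.expect(word), new_input), 3))
# The handle with the least surprise is the current best conjecture; if every handle is
# badly surprised, a new difference has been perceived and can be assigned a new word.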
I further propose (not in my book) that creative conjecture proceeds solely by a process of analogy (also neurally implemented, but better done in analog rather than digital electronics). Syllogism itself is an analogy to causality (whether such causality is real or apparent is unimportant). We experience such causality and develop its linguistic analogy of logic. Conjectures are eliminated by testing their logic with the logical rule as well as through evidence of new perceptions.
Building an AI, in my opinion, is useless, practically speaking. We might as well hire someone from China. Its sole use may be to explain ourselves better. I wonder whether it could become a better programmer than us -- somehow I doubt it: when the creative level reaches our levels the degradation due to emotion may be as large as in us.
I abandoned research in order to start a company and make real money through regular means, and in the process found myself applying my supposed understanding of myself (i.e., my abstract neural model) to business. As I learnt, I developed my philosophy of business, which is what my book is about; it ends with a proposal that I claim will largely get rid of extreme global poverty (and yes, it has to do with giving people the knowledge and philosophy of how to maximize the chance of building a successful small business). The last claim I will have to test by actually trying it out, at least at a small scale. 2004 is my goal.
Regards,
Karun.
--
Karun Philip
Author: Zen and the Art of Funk Capitalism: A General Theory of Fallibility
http://www.k-capital.com
Re: CA
Ray,
You point out that "...although a Class 4 CA does create some complexity (randomness combined with identifiable features), the order of complexity (from a Class 4 CA) is limited and never evolves into anything more intelligent."
I would suggest that in viewing CA in a closed environment, it is behaving just as it should...evolving outward to fill all the possible forms available to it. Suggesting that it is limited, which it is, and that it will not evolve within its closed environment into anything more intelligent would seem to be obvious. If we look at a closed system or create a closed system within nature itself we could expect to come to the same conclusion...that system's complexity will be limited and it will not evolve into anything more intelligent over what we may observe in a given period of time.
When rule 110, for instance, is evolving outward, we can change the course of its evolution through forcibly interacting with it at some point in its evolution. In viewing a system in nature we observe changes in what may be its normal course of evolution through the interaction with other systems. Beyond using CA (rule 110) as representative of one system in a world of many, we can also use it as an example of the universe itself, viewing it as a closed system, which some may believe the universe is, and observing the interaction of randomness with identifiable features which themselves (the randomness and identifiable features) may be viewed as evolving constructs (systems) within the larger rule 110 "universe".
Further, I would suggest that to ask, "Just how complex are the results of Class 4 Automata?" is to go back down the road that Wolfram diverged from in recognizing that complexity is destiny and natural selection is not all that important.
You have given me some things to consider relative to Wolfram's work...thank you for your thoughts relative to "hardware, software". The topic does get a little deep at times for us average folk that Wolfram's book seemed to target.
Best.
Lester.
Re: CA
1) Thank you for engendering some interesting thoughts and comments; much like cellular automata, your review generated an overwhelming complexity of responses, which leads me to
2) Could you (in your copious free time) switch to a response system that organizes responses by thread/theme? It is too overwhelming to process the discussion in a linear fashion, and some of the discussions may have a narrow audience.
3) I have a thousand questions, but ...
a) Regarding the discussion of analog vs. digital modeling of the universe, and analytic vs. CA computational solutions: my memory of texts in fluid dynamics (population dynamics, etc.) is always of the derivation of the (analytic) differential equations, such as the Navier-Stokes equations, starting with a picture of small interconnecting volumes (cells) and the interaction of mass or momentum between those neighboring cells; in essence we derive the analytic from the cellular. Conversely, we often arrive at discrete (digital) solutions using analytic wave equations, so I do not quite understand the perceived dichotomy of analog and cellular/computational models. It seems there is some deep connection between the two models.
b) What grounds are there for believing that "nature" will select the fastest method of CA computation; i.e., that we cannot find the solution to complex natural problems faster than nature can compute it herself?
c) If CA solutions can represent shell patterns, why couldn't a class 4 solution represent a Chopin nocturne? Has anyone tried playing the solutions rather than viewing them? (Did Chopin write nocturnes?) Likewise, how do we know that the solutions do not represent the position of each penguin in an Antarctic rookery from 8:00 AM to 2:00 PM (AADT)? I don't believe that it is likely, but I don't see why it isn't possible. It seems that Wolfram could be right that fairly simple interactions between individual penguins (cellular automata) could lead to what appears to us as an extremely complex distribution pattern. And if these are possible, then it isn't quite clear to me why it isn't POSSIBLE for the class 4 solutions to represent "higher order" phenomena such as human behavior.
Once again, thanks for your site and insight
Re: CA
Jerry,
> "what grounds are there for believing that "nature" will select the fastest method of ca computation; i.e. we cannot find the solution to complex natural problems faster than nature can compute it herself?"
Whether one considers the "substrate" (the underlying physical manifestation that has given rise to chemistry, biology, you and me, the weather, etc.) to be "CAs" or simply the interactions of particles and forces, it should be recognized that nature does not "solve problems".
What would nature find to be a "problem to be solved"? All life everywhere could go extinct, and nature would not find this to be a problem.
I assume that what you mean is, nature is "computing her future" as fast as she can, and your question becomes, "why can we not outdo nature in this regard?" I believe it is because this would be a contradiction in terms. We ARE nature, and we cannot outdo ourselves.
Suppose we could calculate the weather (in detail) faster than nature can "produce the weather". We might then act to change the weather, in advance. Is this evidence that "nature was wrong in her calculation"? No. Such activity on our part would be "nature's work" as much as anything else.
We cannot calculate the future ahead of schedule. If we could do so accurately, we could change the future and prove our calculations wrong.
Thinking of the substrate as "CAs producing patterns" is a useful way to examine the universe. But the analogy is not very close to "CAs" as we might produce them for a Pentium processor. Why?
In the latter case, the "CAs" are software, and variable by us, with the (relatively unchanging) Pentium processor as the "real substrate".
In contrast, (if we take the universe as CAs viewpoint), those CAs ARE the substrate. They are not software executing on yet another substrate, they are the rules of the substrate itself. Anything we "build" or engineer is still a manifestation of that same substrate.
We might devise ever more capable "computers", molecular computers, quantum computers, whatever, and ever more clever software, but all of that still executes at the mercy of the underlying physics, the "universal founding CAs". They cannot be modified or moved, any more than you could shift the location of this universe - you have nowhere to stand in which to move the universe.
Perhaps I have missed the point of your question. If so, let me know. I am not sure what "improving on nature's rate of CA-calculation" is supposed to "effect".
Cheers! ____tony b____
Re: CA
It is precisely on this point that I fail to accept your argumentation. If rule 110 is Turing-complete, this means it is able to emulate any conceivable algorithm. Now if you accept that a computer is able to emulate human beings (for instance by just emulating the behaviour of the atoms in their brains) then you should accept that rule 110 is able to emulate a human being. It's just a matter of starting from the right initial state (presumably a highly complex initial state). Then rule 110 is able to produce the same level of complexity as a human being. Of course, this behaviour of rule 110 can't show up if we examine mere thousands of bits of its evolution. That's the same problem as if we were only able to examine the behaviour of a single neuron, or maybe even a few neurons: it's boring, and it's hard to imagine how a more intelligent "big picture" can arise from it. But the intelligent big picture does arise.
The missing point above is: can computers produce human-like intelligence and complexity ?
I fail to grasp your position on this last question. Your statement: "but class 4 automata and humans are only computational equivalent in the sense that any two computer programs are computationally equivalent, i.e., both can be run on a Universal Turing machine" is simply incorrect, and leaves me wondering whether you really know what universality of Turing machines actually is. (For programs P and Q to be equivalent, it is *not* sufficient that both can run on some universal machine U; in some sense, the possible states of Q must contain a representation of the possible states of P, and conversely, so that state transitions are preserved. Which is to say that Q is able to emulate P and conversely.)
Frankly, from my limited current point of view, you seem to have failed to grasp the true power of universal Turing machines in general and of rule 110 in particular. I can't take your argumentation seriously with that doubt lingering in my mind.
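For concreteness, a minimal rule 110 runner (a sketch assuming a binary row with wrap-around boundaries; the width, step count and rendering are arbitrary choices of mine, and running it demonstrates nothing about universality -- it only shows that any initial state can be supplied):

def step(cells, rule=110):
    """One update of an elementary CA: the rule number's bits give the output
    for each of the eight possible (left, center, right) neighborhoods."""
    n = len(cells)
    nxt = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> neighborhood) & 1)
    return nxt

row = [0] * 63 + [1]    # any initial state can go here; a single 1 gives the familiar picture
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)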
Re: CA
Thanks for your answer.
Yes, it sounds very coherent, and it reminds me a little of The Matrix (I have read Ray Kurzweil's view about The Matrix, and I am not saying that you are supporting the Matrix possibility) in terms of the concept of downloading objects as modules that collaborate and interact toward some task goals -- which can create problems and/or create solutions -- instead of writing code for known situations. But in terms of the Singularity age, I feel that we will be in a position to discover, or approximate, what I call the IQUAP, the Intelligence Quantum Particle. I am not sure yet whether CAs can help us start to think about them as a minimalistic building-block structure for AI software, and whether we could try to analyze how Nature worked on this some time ago. I feel that only by manipulating the IQUAPs will we really have the power to reconfigure the environment and achieve infinite possibilities for reality. If we don't find the IQUAPs, we will only be extending or shrinking -- i.e., distorting -- reality, and this will be, to me, the first consequence of the Singularity age. If we accept the nearness of the Singularity age, and the possibility of discovering the IQUAPs, we will probably have the power to reconfigure the universe -- i.e., to write or create programs that will use the universe as a computer.
Could you help us with your comments about this?
Sergio Cabral - IdeaValley.com.br / Rio de Janeiro / Brasil
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
The complexity of the software still belies the basic conversion to and processing of information on a binary basis -- the very simplicity of model found in Class 4. The decisions made by the creator of the software, reduced down to the most basic sequence of logical choices to be executed in the face of a defined pattern of 0s and 1s, are at root no more complex than the rule 110 automaton.
The suggestion that there has to be a higher level of complexity, whether computational or not, in order to produce the empirical complexity evident around us -- and that our consciousness of the patterns and apparent order around us is in any way a function of power and control over the outcome of the billions of decisions made from nanosecond to nanosecond -- seems to me to operate more in the realm of wishful thinking.
If one were to eliminate the concept or perception of time from the equation, and effectively profile the universe and the perception of the universe as a balance sheet, rather than a cash-flow analysis, then in its standing state the ordering of information into a specific and peculiarly human perceptual vernacular of things with mass and relative states of location to other things is simply a matter of cumulative and communal beliefs or awarenesses, resulting from experiences as data inputs and primal responses and conclusions to those inputs.
Isn't fight or flight as a behavioral choice simpler than the cellular automata described, and, as a practical matter, just as quickly arrived at in the face of the stimulus provoking it? Is there any reason to imbue that choice with any more significance, weight or pretense reflecting a greater level of processing or sophistication, in the instance the decision is made and executed? I think, objectively, from simply a taking-of-inventory perspective, not.
If one were to accept Wolfram's thesis, then the questions raised by your comments seem much more to involve the self-imposed perceptual constructs of man's reasoning resulting from acquiescence to the concept of time as an immutable feature of reality, and the dimensional limitations and linearity of perception that result.
If one were to suppose that the concepts of evolution and progression are merely derived from the Monday morning quarterbacking of from whence man has come, then it is possible to concede that the next instant is as wildly unpredictable on all levels and dimensions as postulated; however, that does not detract from the extraordinary complexity and experiential joy of a sunny Spring morning or a Mozart Sonata.
Perhaps the complexity around us is actually a perceptual orthodoxy to shield and protect our relatively primitive understanding of both the universe at large, and the operation of our brains within it. On some level, to concede the level of simplicity posited by Wolfram is to require a taking off of the blinders and assumptions implicit in the codified body of scientific knowledge, and to admit of a much broader field of inquiry, including the possibility that we are not perceptually or intellectually evolved enough to understand.
How exactly are you measuring complexity?
You consider people, insects, and Chopin preludes to be of a higher "order of complexity" than Wolfram's "streaks and intermingling triangles". How, pray tell, did you come to that conclusion? And what do you mean by "order of complexity", anyway?
To the best of my knowledge, no precise definition of "complexity" has become widely accepted. It seems to possess a porn-like I-know-it-when-I-see-it undefinability, which leads me to reflexively question the premises it is based on.
I read a funny little paper on attempts to quantify self-organization a while ago... Where did I put that... Ah:
http://www.santafe.edu/~shalizi/Self-organization/soup-done/
If quantifying and defining self-organization is as difficult as that paper suggests, and if complexity is as related to self-organization as my intuition says it is, then "people are of a higher order of complexity than Wolfram's streaks and intermingling triangles" seems like an untenable position.
-----
Out with the spam, in with the fnord!
If you want to email me, that is...
Re: Reflections on "ANKOS"
Well, I think the ideas are the most interesting part, and I'm happy to see Wolfram join the fray. But on the matter of who thought of it first, Wolfram had this to say to Forbes magazine in November 2000 http://www.forbes.com/asap/2000/1127/162_4.html
<Quote:>
Wolfram later recalled this breakthrough when he told author Ed Regis in 1987, "It was sort of amusing. I was thinking about these models of mine for a month or so, and then I happened to have dinner with some people from MIT, from the Lab for Computer Science, and I was telling everybody about them...and somebody said, 'Oh yeah, those things have been studied a bit in computer science; they're called cellular automata.'
[MIT? Lab for Computer Science? Sound familiar?]
[Snip]
Soon he was at an informal conference on the physics of computation. It took place in January 1982 on a small Caribbean island privately owned by computer scientist/physicist Ed Fredkin, then an MIT faculty member.
[Snip]
What was on Wolfram's mind was something he'd seen at the conference: a computer programmed to become a cellular automata machine. The Life game was on that machine, as was every other recent attempt to generate two-dimensional automata. Wolfram could sit at the keyboard and put in various conditions, and the cells would grow across the screen. "I find it really remarkable that such simple things can make such complicated patterns," he told Computing magazine. The experience would set the trajectory of his life for the next 18 years.
<End quote.>
'Nuff said? This is hard to reconcile with Athena springing from Wolfram's forehead.
- Ross Rhodes
Emergence
The complaint about the inability of cellular automata to generate complex, ordered structures like animals, cars, etc. ignores the phenomenon of emergence. Ray's complaint amounts to: Wolfram doesn't demonstrate that rule 110 gives rise to a spontaneously emergent syntax. I've read to chapter 5 and haven't seen this either, but this doesn't mean that it doesn't. Assuming that rule 110 gives rise to such a syntax, especially one that is as complex as rule 110 itself, it isn't hard to imagine this process bootstrapping itself on up to a reality as rich and full as ours. The problem is that Ray got lost in the background noise - if I am right that rule 110 does give rise to an emergent syntax as complex as the generating rule itself. And, if it doesn't, well... this doesn't prove that no cellular automaton does. Either way, emergence in such a system answers Ray's complaint quite nicely, I think.
Re: Emergence
The key difference, which Mr. Kurzweil did not explicitly state, is that an evolutionary process that uses a genetic algorithm (GA) "intelligently" traverses a search space for an optimal solution; the individuals of one generation are evaluated for fitness and are proportionally represented in the following generation based on those values. Consequently, a GA process can have a direction, goal, endpoint, etc., and can produce increasingly complex solutions.
A CA, on the other hand, simply uses a set of fixed rules to generate each generation from the preceding one. A CA system may produce interesting patterns or even emergent behavior, but it is not the result of an optimization search. Subsequent generations are "improved" only at random, not by an optimization process.
Or maybe I'm completely wrong; I'm not a mathematician or computer scientist.
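A toy illustration of the contrast (the fitness function, mutation scale and population size here are made up for illustration, not taken from Kurzweil or Wolfram): because each generation is sampled in proportion to fitness, the population drifts toward an optimum -- exactly the directedness a fixed-rule CA update lacks.

import random

def next_generation(population, fitness, mutation=0.1):
    """One GA step: sample parents in proportion to fitness, then copy with mutation."""
    weights = [fitness(x) for x in population]
    parents = random.choices(population, weights=weights, k=len(population))
    return [p + random.gauss(0, mutation) for p in parents]

population = [random.uniform(-5, 5) for _ in range(100)]
fitness = lambda x: 1.0 / (1.0 + x * x)        # made-up fitness function, peaked at x = 0

for _ in range(50):
    population = next_generation(population, fitness)

print("mean after 50 generations:", sum(population) / len(population))   # drifts toward 0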
Re: Emergence
True, there is no optimizing direction (at the base) to the development of these programs. This is why Wolfram states that he has come to view natural selection as not the primary force in evolution. However, it is important to observe that a fitness constraint could, and indeed should and does, emerge in the evolution of a CA. Look at rule 110: structures emerge - structures that lack "fitness" decay rapidly and do not persist. Further, the structures interact. As the system progresses and grows, these interactions could, and should, give rise to yet larger structures with their own rules for interaction and emergent fitness constraints for propagation of the pattern... Natural selection, the imposition of selective constraints on the growth and propagation of particular patterns, is emergent in the system.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
I always enjoy reading Ray Kurzweil because I get to read important numbers (or at least educated guesses) people seem to have scrupulously avoided offering elsewhere, probably out of fear of error.
The numbers which fascinate me now are the "compression ratio" of the human genome, which is given as 800/23 or 35, and the fraction of the information dedicated to structuring the brain (half). But on to my actual point.
Now, while 23 megabytes of C-G-A-T sequencing is still pretty big, I wonder how fair it is to say that is the amount of information which starts a human on his trajectory from NOTHING to a complicated multi-cellular being versus merely from a single-celled ZYGOTE to that final place.
What I am getting at is a critique of the genome as human sine qua non which somewhat parallels Searle's critique of AI. Specifically, what good are those 23 megabytes without the CONTEXT of a human zygote? (i.e. that fraction which is NOT DNA (about 85% of the non-water mass?))
I can hear people offering the opinion that the 85% dry mass which is "merely" fats, sugar, proteins and "a bit" of structural organization really doesn't amount to ALL that much compared to the IMMENSE negentropy of the DNA sequence.
Maybe that is so - but maybe not.
Even if I can DESCRIBE the extra-nuclear part of the zygote in a compact manner - does that mean I have accounted for the vast number of designs, disregarded via natural selection, that DON'T WORK? How many jillion experiments failed? Sure, you can describe a brick-house as being made of a set of bricks at a certain set of relative locations - but you have assumed someone knows the brick and the technology around making and joining it, as distinguished from chopped liver.
I am not a biologist, but even I know that while some nuclear material is transplantable between species (one might say viruses hold important patents in the area!), that doesn't mean there isn't a complicated interplay between the "designs" stored in the nucleic acids and the "mere" COMPATIBLE factory in the cytoplasm which fabricates the incredible structure of subsequent cells - including the differentiated tissues and organs of massive multi-cellular organisms like Homo sapiens.
For that matter, why is the very fact that Nature uses C, G, T, and A "monomers" reckoned as constituting basically "zero" information? Think of all the millions upon millions of molecules of that size - let alone of ever-larger sizes - which are NOT used to store the information which the usual sequence of bases does. Let alone the jillions of four-tuple sets (or 148-tuple sets) which fail to pass muster.
Sure, the "essence" of the Pentium microprocessor is in the incredibly complicated circuit designs instantiated as masks - but they would be useless without the billions-dollar fabs which are FAR more complicated than Egyptian stone pyramids. The CHIP FAB ALSO represents a GREAT DEAL of the complexity of the brains in your PC - even if that complexity is *common* to the humble 7400 quad-gate IC and the multi-megatransistor Pentium.
Goodness knows I have no way to estimate the "information" in the structure of the cell sans nucleic acids - but what good are those 23 megabytes unless IMPLEMENTED as monomers which can INTERACT with an incredibly complicated machine like the living cell?
Sorry if this was too far off-topic, but I guess I think the "central dogma" of genetic science may get a bit too much respect.
Ron Feigenblatt
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Re your comment on DNA: You are correct--you cannot explain an organism solely by its DNA. Except for the most simple organisms, the outside environment is also extremely important in determining how it behaves, and, how the species changes over time.
I have read the first five chapters of Wolfram's book, and have skipped around some of the notes in the back. I'm reserving final judgement, but it looks to me so far that at least in terms of biology, Wolfram has gotten part of the picture right, but has underestimated the importance of natural selection. I am an anthropologist, but my undergraduate and master's work was in biology.
I became interested in anthropology after spending two years in swamps studying the social structures and behavior of Ardea herodias, the American Great Blue Heron.
Looking at Wolfram's CA diagrams, and thinking of my old pals the herons, I can see that he's right in a sense--in explaining the heron's plumage, for example. The long feathers on his breast apparently have little or no survival value--and thus, are probably not the product of natural selection.
But I cannot believe that something as simple as a CA program could be solely responsible for their behavior. Herons are clever, highly adaptable birds; their range has expanded to include almost the entire US because they learn quickly, and can adapt to just about any water source that contains fish, amphibians and crustaceans. I have personally watched individual birds successfully hone their skills catching different prey over a period of months. The most successful become highly efficient carnivores over their 20-year life span. My point behind this is that one cannot explain the heron's behavior without examining the pressures of his environment. Each generation of heron becomes ever-so-slightly more adept at finding food, and in replacing other shorebirds in various ecosystems. It is the pressure of selection that explains this.
Similarly, herons are loners, and when it is time for them to mate and nest, they respond with highly ritualistic behaviors--I don't think they could tolerate being so close to other individuals unless they were completely driven. Their behavior then shows much less variation--but even so, it's much more complicated than what I think you could get by running a CA program...and it works the way it works because of natural selection.
SW
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Twenty trades: bricklayers, rockers, roofers, siders, formsetters, painters, tile-setters, glaziers, ... , trade N.
Twenty amino acid types: glycine, alanine, valine, ... , glutamine.
All amino acids have in common the ammonia+carboxylate groups on carbon 1. The more electron orbitals in harmonic equilibrium with the electron orbitals of adjacent atoms, the deeper the energy well of the Van der Waals bond(s).
Where there are any number of atoms forming a continuously adjacent set and there is at least one electron orbital in each atom having an identical frequency of resonance at ground, those orbitals mesh like gears. Energy, and hence signal, may be transferred along that chain of gears.
I am not a biologist, but even I know that a very high degree of order is required to produce the functionality of the cellular nucleus. On construction sites I observe a highly ordered group of functional modules (module=tradesman='amino acid') that indirectly reference a reproducible document (document=DNA) using off-prints ('off-prints'=mRNA) of the comprehensive planset to provide disposable instructions definitive of elemental substructures (structures=proteins) immediately required.
Universal melody emulator etc.
>Have you considered using an evolutionary algorithm to search for CAs that play melodies?
I am going to look at what I can do with CAs, but first I'm going to finish reading Wolfram's interesting book - I'm on page 656 now...
>Imagine if you found a familiar tune, Beethoven perhaps. That would raise some eyebrows.
The software is open for "virtual scales", making it possible to use any melody "behind the stage". For example, I have used the famous Japanese song Sakura, Sakura a few times. You only need these 12 numbers in the Java source code:
{0,0,0,3,3,5,5,5,6,6,10,10}, //#5 sakura
If someone were to program a user interface, it would be easy to add a "universal seed melody option" to emulate any melody in some approximation, outside copyright restrictions.
But on the other hand, any block of music could also be emulated in some approximation just by "finding" the right picture. Or painting it...
See the source if you are more interested:
http://personal.eunet.fi/pp/ske/musitives/synestesia.html
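(To illustrate how a 12-number "virtual scale" like the one above might be used, here is a hypothetical sketch of my own in Java - it is not the code at the URL, and the class name, the base MIDI note of 60, and the octave handling are all my assumptions:)

public class ScaleMapper {
    // The 12 semitone offsets quoted above (#5 sakura).
    static final int[] SCALE = {0, 0, 0, 3, 3, 5, 5, 5, 6, 6, 10, 10};

    // Map a CA cell value and row to a MIDI note: the cell value picks a
    // slot in the scale, the row shifts the octave.  60 = middle C.
    static int pitchFor(int cellValue, int row) {
        int slot = Math.floorMod(cellValue, SCALE.length);
        int octave = Math.floorMod(row, 3);   // stay within three octaves
        return 60 + SCALE[slot] + 12 * octave;
    }

    public static void main(String[] args) {
        for (int v = 0; v < 12; v++)
            System.out.println("cell value " + v + " -> MIDI note " + pitchFor(v, 0));
    }
}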
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
First of all, kudos to Mr. Kurzweil for the excellent review of the book. I've been a Kurzweil fan for many years, ever since his sampling keyboard days.
I haven't received my copy of "A New Kind of Science" yet, but from what I can tell, Mr. Wolfram is entirely dismissing a field of mathematics that's both very old, and a superset of cellular automata (CA) -- namely, differential equations (DEs).
The DE dh(x)/dt = F(h(x)) is a one-dimensional, continuous-valued, spatially-continuous cellular automaton with the next-state function F(h). As long as F(h) is "local", the next state at any point in x depends only on the previous state at that point and the previous state at neighboring points.
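(A sketch of my own to make the correspondence concrete, not part of the original post: if $F$ involves $h$ and its spatial derivatives, then replacing those derivatives by finite differences on a grid of spacing $\Delta x$ and stepping forward in time by $\Delta t$ gives the update rule $h_i^{n+1} = h_i^n + \Delta t\,F(h_{i-1}^n, h_i^n, h_{i+1}^n)$ with $h_i^n \approx h(i\,\Delta x, n\,\Delta t)$ - exactly the local, neighbors-only structure of a continuous-valued CA.)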
Just as in CAs, DEs have to be simulated to find the value at some point in the future -- you can't magically look ahead to find the end result. And, just as in CAs, they can start with simple initial conditions and evolve to great apparent complexity that defies conventional analysis.
The Swiss mathematician Leonhard Euler (1707 - 1783) wrote a set of DEs describing ideal fluids that has defied conventional analysis for more than 200 years. The Euler equations and the Navier-Stokes equations (discovered in 1823) describe time-evolving systems vastly more complex than CA 110. And, just like in CA 110, they can start with simple (smooth) initial conditions and rapidly evolve into unpredictable, though predetermined, complexity.
The study of DEs has spawned a huge number of numerical techniques, due to the inability of conventional techniques to solve them satisfactorily. Most of these numerical techniques amount to analog CAs on discretized spatial and temporal grids. They've been studied using computers since computers were even remotely fast enough (see Fermi, Pasta, and Ulam's work from 1955).
My point is this: since most current theories of physics boil down to a set of DEs that are being studied using analog CAs, and have been for decades, Wolfram is really just restating the obvious.
Indeed, one can't help but think that he chose discrete CAs instead of analog merely because in the discrete case, you can easily enumerate the entire problem domain, divide it into four parts, and thereby give yourself the illusion that you've made some meaningful classification. Analog CAs resist such simple treatment, since the problem domain is infinite.
Mr. Wolfram's "new kind of science" is exactly like the old kind of science, just digital instead of analog. It's easy to say that it should be possible to describe the universe with a discrete CA instead of an analog one, but it's infinitely more difficult to actually write down one that works. I suspect that's why he's left that little detail as an exercise for the reader.
Wade Walker
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Thanks for pointing out that intimate connection between discrete cellular automata and continuous differential equations. I knew it was there... somewhere.
As Kurzweil also pointed out:
"We can easily establish an existence proof that a digital model of physics is feasible, in that continuous equations can always be expressed to any desired level of accuracy in the form of discrete transformations on discrete changes in value. That is, after all, the basis for the fundamental theorem of calculus."
This whole chicken/egg argument about which is more fundamental - analog or digital operations, discrete or continuous functions - is one of those philosophical questions that I don't think science can answer because the problem isn't well defined. It's impossible to imagine one without the other. Is space really "made" of pre-existing points? Or is a point an abstract limit representing a possible localization in space? One which may not even apply to our universe if string theory is correct and there is a lower limit to the whole notion of distance.
I think a more interesting question is whether the universe is, in fact, computable. Can the linguistic constructs of science, whether CA algorithms or differential equations or something else, really simulate nature perfectly?
David Deutsch made some very interesting observations about this in "The Fabric of Reality" (another book full of very BIG ideas).
He claims that the universe incorporates a kind of Strong Turing Principle that implies that any environment within it could be perfectly simulated with an appropriate program running on a physically realizable computer. With the proper man/machine interface, no scientist would be able to perform any experiment in such a virtual reality that could distinguish between the reality and the illusion.
Such a program would have to incorporate knowledge of the laws of physics, biology, etc., in order to perfectly simulate the behavior of real objects, creatures, and, yes, even real people. But this could be done in principle.
Since a Turing machine can be embodied by a CA, of course a CA could perform such a computation and thereby explain the universe. But so could a variety of other mechanisms. What matters is not what parts the computer is made from, but that the universe allows itself to be modeled in this way. It has a kind of fractal self-similarity in which the part can mirror/model the whole and this is a fundamental property of the universe.
Incidentally, this also challenges the notion that the only way to simulate certain processes and "jump ahead to see what happens" is to "run the actual program" - i.e., watch what the universe actually does. According to the Strong Turing Principle, it should be possible to simulate any such process in our universal computer and watch what happens there.
Tony Lundberg
Reflections on Stephen Wolfram's "A New Kind of Science"
Ray,
I agree that the "universality" theme is overstated in Wolfram's book, but I would use a slightly different phrasing to explain why.
The theory of universal computation is not Wolfram's invention of course; and it has a thorn in its side that he does not emphasize in his book. This thorn is *infinity*. Any "universal computer" can simulate any other computer, but it can only do so by running a simulation program that may slow it down by an arbitrarily large finite amount, and expand its memory usage by an arbitrarily large finite amount.
This means that it's not enough to say that some particular universal computer (e.g. CA rule 110) is *in principle capable* of modeling the universe, or the brain, or biological evolution, or whatever. One has to show why the universal computer in question is *pragmatically effective* for modeling the systems in question.
If you have another modeling framework M that you think is better than CA rule 110 for modeling, say, the brain or the universe -- then you are guaranteed that your whole modeling framework M can be simulated using CA rule 110.
Thus, when you complain in your review that the pictures of class 4 CA's that you see in his book are not as complex as lifeforms, universes, brains, etc. -- he can always retort "Yes, but a sufficiently large class 4 CA picture *would* have patterns of that complexity and variety in it, although they might not be visually discernable." But then you must retort: "So what? If it takes Class 4 CA's with initial conditions of length 10^50 and runtimes of 10^20 generations to emulate the brain, what use is the emulation?"
The point is that his Principle of Computational Universality ignores the question of computational efficiency at real-world scales. He proposes a statement going beyond universal computation theory, which is: almost any dynamical system that doesn't lead to random or transparently fixed or oscillatory behavior is likely to be a universal computer. This is a fascinating insight, though I'm not yet 100% convinced it's true. But even if it is true, so what? Each of these theoretically-universal dynamical systems is going to lead to different behaviors *within reasonable space and time constraints*. And that is what is important, because the physical world and the mental world have to exist within fairly tight space and time constraints.
I thought his book was very interesting, but I found no insights in it that were applicable to my own work in artificial general intelligence (AGI). AGI work is all about spacetime efficiency. It's easy to make a thinking machine if you assume (as Wolfram does in his Principle of Computational Universality) infinite space and time resources. What the human brain is all about -- and what the first digital minds will be all about -- is achieving *relatively broad scope* computation within *relatively limited resources*.
In this sense, I'm afraid Wolfram's "new kind of science" is not that new after all. He has given us a funky new angle on some familiar complexity-science ideas, and introduced some valid technical and conceptual insights. But, he has imported from standard computation theory the focus on *infinite resources*. I think the new kind of science we need is one that deals with the finite world we live in, i.e. one that considers average-case spacetime complexity as absolutely theoretically fundamental, not as an irritating technicality to be brushed aside. I was hoping to see something in this direction in Wolfram's book, but, no cigar.
Yours,
Ben Goertzel
Novamente LLC
www.realai.net
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Ben!
> thorn is *infinity*. Any "universal computer" can simulate any other computer, but it can only do so by running a simulation program that may slow it down by an arbitrarily large finite amount, and expand its memory usage by an arbitrarily large finite amount.
Sometimes, however, we can simulate faster. By putting something aside. Even deliberately cutting it away, since it is only a disturbance anyway.
> One has to show why the universal computer in question is *pragmatically effective* for modeling the systems in questions.
We saw the examples already. But why ask Newton to actually calculate the Solar system? Or Quantum Theory to model the whole monkey DNA?
> "So what? If it takes Class 4 CA's with initial conditions of length 10^50 and runtimes of 10^20 generations to emulate the brain, what use is the emulation?"
Then it's useless, of course. But I don't think that will be (or is) the case. So what, if I think that way, you may ask? I _guess_ that way. You _guess_ the opposite.
Besides, I see a tremendous acceleration in the computation field, if CAs prove doable. Every cell may be a (very simple) processor/RAM cell. Computing power should go unprecedentedly higher than it ever has before.
> because the physical world and the mental world have to exist within fairly tight space and time constraints.
I don't see why. Maybe. Maybe not.
> AGI work is all about spacetime efficiency. It's easy to make a thinking machine if you assume (as Wolfram does in his Principle of Computational Universality) infinite space and time resources.
That's true. Very easy. So it is with the energy (or money ... or whatever) supply in an infinite world. But it's just an irrelevant Cantor's game.
I have seen this case before. Many times during the last few centuries.
- the new kind of science always goes in an unpredicted direction.
- the opposition always has two components: that it's nothing really new, and that it's evidently wrong and fruitless. The author should just do something else, where he may even be good at it.
- "there must be something deeper, it's too prosaic"
The pattern here is too strong not to spot it. But it doesn't prove anything either.
That's life! ;-)
- Thomas Kristan
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Tomaz...
Our whole world may be encoded in the decimal expansion of Pi, but that is a *very inefficient* representation of the world, because along with our world it also encodes a hell of a lot of random and quasirandom nonsense.
This is the same as my complaint about CAs. Yeah, by universal computing theory, all the patterns in our world can be made to emerge from some CA. But how complex does the initial condition of the CA have to get? Sure, Rule 110 is simple, but this just means that for modeling anything complex, or getting any really complex behavior, all the complexity is pushed into the initial conditions.
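(To make the "simple rule, complicated initial data" point concrete, here is a minimal rule-110 stepper of my own - written in Java, like the snippet quoted earlier in the thread, and purely illustrative rather than anything from Wolfram's book. The eight-bit rule table is the entire "program"; everything else about a run lives in the seed string passed on the command line:)

public class Rule110 {
    public static void main(String[] args) {
        // Seed: a single 1 in a field of 0s unless a bit string is supplied.
        String seed = args.length > 0 ? args[0] : "0000000000000000000000000000001";
        boolean[] cells = new boolean[seed.length()];
        for (int i = 0; i < cells.length; i++) cells[i] = seed.charAt(i) == '1';
        int rule = 110;  // the entire "program": one byte
        for (int step = 0; step < 30; step++) {
            StringBuilder row = new StringBuilder();
            for (boolean c : cells) row.append(c ? '#' : '.');
            System.out.println(row);
            boolean[] next = new boolean[cells.length];
            for (int i = 0; i < cells.length; i++) {
                // Read the 3-cell neighborhood (wrapping at the edges)
                // as a number 0-7 and look up that bit of the rule.
                int idx = (cells[(i + cells.length - 1) % cells.length] ? 4 : 0)
                        + (cells[i] ? 2 : 0)
                        + (cells[(i + 1) % cells.length] ? 1 : 0);
                next[i] = ((rule >> idx) & 1) == 1;
            }
            cells = next;
        }
    }
}

Run it and you get a 30-step space-time diagram ('#' for 1, '.' for 0); any richer behavior has to be paid for with a richer seed.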
I think the "graph-rewriting" systems Wolfram discusses in his physics chapter have a lot of promise, and I like his growing combinator systems too. My own taste would be to intermix self-rewriting combinator expressions with self-rewriting graphs. My intuition is that this is likely to lead to more compact models of many real-world systems than CA's (e.g. the brain, the physical universe at a low level). Whereas CA's are obviously a great modeling tool for fluid dynamics and other domains that depend simply and critically on local transmission of information, between a point in space and its neighbors.
One way or another, though, I think the big thing missing in Wolfram's book is a systematic and mathematical way of analyzing, interrelating and synthesizing *emergent patterns*. He plays with iterations and identifies emergent patterns visually, then proves some things about them in special cases, or draws analogies between what he sees and various real-world systems. But until we have some real science & math about the emergent patterns in these complex systems, then even if there is a small collection of CA-ish rules giving rise to the universe as we know it, we'll never be able to find that collection...
-- Ben Goertzel
Novamente LLC
www.realai.net
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
NOTE: Following is a copy of an email I sent to various colleagues in the areas of genetics, government, and organizational behavior (the latter two being my fields). I started out just reading a short piece in Newsweek about the new book by Stephen Wolfram. From my own (admittedly limited) knowledge of these subjects, I felt that a) Wolfram was REALLY egotistical, and b) the article left out some really important people that - oops - Wolfram learned a lot from. As the lengthy piece below reveals, this matter really resonated with me. I don't know if any of you will find this interesting - but I hope you do, in some ways. If you do read on about this, and have some thoughts to share, please do write back. Thanks,
Bruce Waltuck
Hi... I don't know how many of you see publications like Time, Newsweek, Wired, or The New York Times regularly. This past week marked the publication of a remarkable and controversial book. Stephen Wolfram has self-published a huge volume that purports to define "A New Kind of Science." I get Newsweek, and when I saw a brief piece about the book, it struck a major chord (pun intended) with me.
According to Newsweek, the central thesis of Wolfram's book is that pretty much any and all complex systems and phenomena in the universe come from the iterative computation of a few simple rules. Wolfram, an undeniably egotistical and brilliant guy (quit Oxford out of boredom; awarded a Ph.D. by Caltech at 20; awarded a MacArthur "genius" grant the next year; creator of Mathematica, the multi-million-selling computational software), is quoted as saying that, as far as he knows, no one else is thinking/doing the same science that he is.
Well, I am not a MacArthur genius (though I met one recently, James Randi), but I took exception to Mr. Wolfram's assertion. As friends and colleagues, most of you know the principal work I did in the quality improvement field. Together with my colleague Jim Armshaw, I built an employee involvement and quality improvement system for the U.S. Department of Labor in 1989-1990. We spent six months doing research and devising a system that would meet the needs of the very diverse organizations within the USDOL. Our research taught us that we could expect - guess what - very complex behaviors based on a very simple set of rules (make decisions by consensus; create teams of people to solve process improvement problems; use meaningful and valid data to understand the problem, etc.). Not quite rocket science, and certainly no scientific "revolution" (as Wolfram claims for his work).
Back in the early 1990's I was heavily influenced by the work of the late W. Edwards Deming. Dr. Deming had summed up his own approach to organizational improvement in a set of just 14 "points" (simple rules like "drive fear out of the organization" and "maintain a constancy of purpose"). Not rocket science either, but to use a word that Dr. Deming favored, arguably "profound."
In those early years of my quality improvement work, I really thought that we had our hands on The Answer - that we had built, or come close to building, a system of improvement that solved all problems, answered all concerns.
Of course, the DOL's experiences over the past 12 years have proven me wrong. Our system was not perfect, not as complete as we had thought. A few years ago, an experience at a kids' science museum on a rainy day in San Francisco changed my perception of organizational and other systematic behaviors forever. The exhibit "Turbulent Landscapes" (still viewable at the Exploratorium website) featured simple science systems and toys that showed how - guess what - simple rules could create astonishingly complex behaviors. In particular, my observation of a magnetic pendular system (available as an "executive desk toy" called R.O.M.P.) gave me a flash of insight into human and organizational behavior that has informed and transformed my work ever since.
Many of you have heard me ramble on and on in the last few years about chaos, complexity, complex adaptive systems, and so on. I have read every book I could find (and understand) on the subject. From the well-known sources like Meg Wheatley's landmark "Leadership and the New Science" from 1993, I branched out just as Stephen Wolfram has done. The more I asked the "simple" questions of "how do people behave in organizations?" and "what makes a new idea become a shift in paradigm?" the farther afield my inquiries went. Before I knew it, I was reading about the pioneers of the Santa Fe Institute - Chris Langton, John Holland, Stuart Kauffman, Brian Arthur - and exploring new ideas about everything from economics to evolution and biology. It seemed that the ideas about how complexity influenced system behavior were on the minds of many truly great thinkers and scientists.
So now we have Stephen Wolfram, an acknowledged genius, claiming that he, and he alone, has figured out, well, everything. He has said he expects to be mentioned alongside Newton and Einstein some day.
Well, I don't know about that. There are, as the also very smart Ray Kurzweil has written, some parts "missing" in Wolfram's massive work.
For myself, I (currently) believe that:
- we live in a quantum universe. Reality, as we know it, is defined in large part by our inter-relationship with things.
- as "independent agents" living and moving through our organizations, we perceive, appreciate, understand, and act.
- Unlike Wolfram's simple computer program rules, humans also desire, and intend. To the extent that a bit of computer code can mimic this behavior, it is a reflection of the design/intention of the program's author (see the very cool FRAMSTICKS website to watch computer "life forms" do their things).
- I increasingly believe that our behavior in organizations (and in general) is governed in a rather quantum way by both a mechanistic/Newtonian world of rules, causes, and effects, and a simultaneous co-equal world of desires, intentions, perceptions, and behaviors. These worlds BOTH are real, and both rely on the interaction of observer and observed to take form. I recently gave a talk on "A New Definition of Quality" that draws on these ideas, and described strategies and methods that are implied by this "Quantum Quality" (tm) model.
- Finally, and maybe my own most controversial notion, is that the real nature of the way things function in the universe was discovered by speculative thinkers as far back as the second century, and emerged in a body of thought and literature in the 13th century. Only in recent years has this body of knowledge become widely available in English, and generated a renewed level of interest and understanding. I really think these people had it right, centuries ago. I just don't know how they figured it all out (see "God and the Big Bang" by Berkeley professor Daniel Matt).
Well, if you have read this far, you either know what this is all about, or you think I am "losing it." I am writing to Newsweek to criticize Wolfram's egotistical claims of being alone in this field of endeavor (he is not), and the glaring omission of at least one foundational thinker in Wolfram's own area.
POSTNOTE: I was referring to John Holland, who I understand to have really pioneered work with Cellular Automata, his "genetic algorithm" and so on. I have not seen mention of John Holland in connection to the Wolfram book in the various articles and reviews I have read. What do you, the obviously bright and well-informed posters here, know about any of this?
Thanks
Bruce A. Waltuck
United States Department of Labor
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
In the preface of his book, Mr. Wolfram devoted nearly three pages of small type to listing name after name of people who contributed thoughts and ideas that helped him develop his concept. Some of them are the same names you mentioned in your message above. Throughout the notes at the back of the book he gives references to other people and their ideas. I think what he was saying is that the way he is approaching the subject is different from the way anyone else is doing it today. I don't see anywhere in the book a claim that he invented the mathematical ideas on his own. He was also a contributor to the SFI and mentions in his book the names of Murray Gell-Mann, John Holland, Stuart Kauffman, the people he worked with at Princeton, etc., etc.
But to worry about who he gives credit to is to miss the point of the book, which is to introduce the reader to a new approach to numbers and mathematics and how we should think about them. He points out, rightly in my opinion, that a lot of the processes we use to do mathematical manipulations lead to fuzzy or untrue conclusions. He thinks his approach will be a more useful method in the long run and will leave out a lot of the complexity that confuses such matters today. It's his way of looking at the whole system that is unique and not being done by other thinkers -- not the individual concepts that he has organized into his own particular approach.
To worry about Wolfram's ego is to miss the point of what he is trying to tell us. The man is pointing to the stars and, instead of seeing what he is pointing at, people are arguing about the shape of his fingers and whether they are worthy of pointing at such a high and mighty feature of the universe.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Although I think most of the review is on target, I found the discussion of genetic algorithms strangely facile. Kurzweil says that we need the genetic code itself to evolve, and that the currently static coding schemes somehow explain why genetic algorithms don't reach the complexity of living systems.
After such rigorous analysis of the meanings of complexity and information and what we can and cannot say about certain interesting CAs, it strikes me as odd that Kurzweil would at the same time give such casual treatment to another serious subject (GAs).
Kurzweil's hypothesis that somehow the key to evolution is having the genetic code itself evolve is not supported by a shred of evidence (which does not imply it is incorrect, but rather that it is presently naive). Although it worked well for nature, having the genetic code itself evolve inside a computer would result in a massive explosion of the search space being covered by evolution. In such a scenario, many candidates would be eliminated not because they did not perform sufficiently well, but rather because, in some sense, their genetic coding itself accidentally had self-imposed limitations. Thus, a large portion of the computational resources would be dedicated to dead-end encodings, in addition to all the poor-quality solutions that normal GAs suffer from.
Not only that, but who defines the space of encodings being searched? And who is to say that THAT space is not itself somehow suboptimal? We don't even know if nature got it right. Perhaps evolution would have been a thousand times faster if only the genetic encoding WASN'T allowed to evolve, thereby constraining the search space. The space of representations is likely to be rife with pitfalls, which are just as likely to decelerate evolution as to accelerate it.
Computers are also fundamentally different from nature in that they can only perform so many operations in parallel. Nature could afford a lot more "throw-away" encodings because they were evaluated in parallel with more robust strategies. In a computer, we don't want to waste valuable resources on throwing things out that actually perform well! Remember, Kurzweil is suggesting the encoding itself would evolve, implying that some poorly encoded solutions might actually perform well during their lifetime! That's a waste of resources.
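(To make the structure of that argument concrete, here is a deliberately tiny GA skeleton of my own - in Java, like the snippet quoted earlier in the thread, and not code from Kurzweil, Wolfram, or the poster. It evolves bit strings against a toy fitness function with a fixed encoding; the comment in the inner loop marks the single place where an evolvable encoding would enter, and with it the search-space explosion described above:)

import java.util.Random;

public class SimpleGa {
    static final Random RNG = new Random(42);
    static final int GENOME_LEN = 32, POP = 50, GENS = 200;

    // Toy fitness: number of 1-bits ("OneMax"); the encoding of a candidate
    // is FIXED - a plain bit string interpreted the same way every generation.
    static int fitness(boolean[] genome) {
        int f = 0;
        for (boolean b : genome) if (b) f++;
        return f;
    }

    // Tournament selection: return the fitter of two random individuals.
    static boolean[] pick(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(pop.length)], b = pop[RNG.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][GENOME_LEN];
        for (boolean[] g : pop)
            for (int i = 0; i < GENOME_LEN; i++) g[i] = RNG.nextBoolean();

        for (int gen = 0; gen < GENS; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int k = 0; k < POP; k++) {
                boolean[] a = pick(pop), b = pick(pop);
                boolean[] child = new boolean[GENOME_LEN];
                int cut = RNG.nextInt(GENOME_LEN);          // one-point crossover
                for (int i = 0; i < GENOME_LEN; i++) child[i] = i < cut ? a[i] : b[i];
                if (RNG.nextInt(10) == 0)                   // occasional point mutation
                    child[RNG.nextInt(GENOME_LEN)] ^= true;
                // If the ENCODING itself were allowed to evolve, each child would
                // also carry (and mutate) its own genotype-to-phenotype map here,
                // multiplying the space the search has to cover - the explosion
                // the post describes.
                next[k] = child;
            }
            pop = next;
        }
        int best = 0;
        for (boolean[] g : pop) best = Math.max(best, fitness(g));
        System.out.println("best fitness after " + GENS + " generations: " + best + "/" + GENOME_LEN);
    }
}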
I would suggest that the real "solution" here involves a much more sophisticated analysis of what it means to be a good encoding, and an understanding of how the encoding constrains the search space of organisms. I only bring this whole line of argument up to point out that when you are being rigorous in general, you need to be careful about making broad statements about entire fields of inquiry that are hardly yet understood.
I believe a similar criticism could be made about the whole "reverse engineering" broadstroke undertaking, but I'm sure I've gone on long enough.
Q.
what science?
dear all
maybe it's time we look for the emperor's clothing...
a (?) new (?) kind (?) of science(?) ?
i believe all of the above ?-marks individually challenge all of wolfram's propositions in that darn book of his (and, yes, i've a copy and i've read through most of it).
1. a (?)
fredkin et al. speculated (and note the emphasis on that word) on the idea that "the universe may be a computer". so what's new? and besides, i see ABSOLUTELY NO evidence for that claim, even if he (wolfram, that is) is correct to that effect. ceo wolfram: please predict for us what your "new kind of science" says the top quark mass is... or better still, can you predict the right value of the cosmological constant? or forget that - what about your tall claims that gould's shell-shape calculations are wrong? for any arbitrary shape, can you GENERATE that pattern exactly (in the kantian sense)?
2. new(?)
what's new in the book? at best it's an exhaustive catalogue of cellular automata - a catalogue nevertheless, exemplifying what a billionaire software guru does to kill spare time. conceptually, it is misguided: remember the wise man who said that the map is not the territory? heck, even medieval cartography was more sane, the lack of sanity here being related to the lack of predictive power. more mathematically, the argument about computational irreducibility can be formally shown to be equivalent to turing's original universal machine argument, just in a different guise. to see it more clearly (i.e., make it more "bimbo-friendly"), try visualizing turing machines as post production systems. in effect, one can do all sorts of things with these universal machines that are nevertheless NP-complete: ever heard of coupled tape machines?
3. kind (?)
this is going to be more philosophical: "kinds" of science? the last time i checked, there were two methods of doing things: either you were a positivist or a realist. now what exactly does ceo wolfram stand for? this is certainly not positivism, with all the talk about some immutable programs generating phenomenological appearances. unfortunately, it fails to satisfy realism for the same reasons.
4. science (?)
hardly. one question would suffice: does your "new kind of science" have any predictive power? if it does, answer the questions laid down in q.1. if not, accept the fact that this is another of those hippie (figuratively :-)) approaches to things, pursued to nurse your bruised ego and, perhaps, inspire some following amongst semi-literate programmers who think of you as the best thing since sliced bread.
cheers
rej
Re: what science?
>What's bothering you so much, that you are that eager and hasty to dismiss it quickly???
>- Thomas
I'm glad rej had the sense to ask the basic scientific question about this book, which is: is it actually any use to anyone?
If you consider the biggest advances in science, such as Newton's Principia, Relativity, QM, The Origin of Species, et al, they made two contributions.
1. They proposed a new conceptual framework for dealing with a given set of phenomena. Okay, we (arguably) have something like that here.
2. They offered the *immediate* possibility of using that framework in a predictive or otherwise scientifically useful way.
Put simply, *they actually solved some real problems.*
Wolfram's book does nothing like this. It has some interesting ideas, but as rej has pointed out, most of these aren't all that new. (As a former Life follower I know that people were drawing similar parallels between CA and The Entire Universe more than twenty years ago.)
Still, that's not the real issue. The real issue is that this book makes *no* testable predictions, contributes *no* solutions to any of the problems that currently haunt science, and describes in detail *no* new technological applications. There are hints and possibilities that some of these ideas might work, but nothing so much as a definitive example.
So (to use an old, but still appropriate cliche) where's the beef?
If Wolfram can use his CA approach to solve a real problem in any discipline, then I think people should sit up and take notice. We don't even have to ask for the mythical Theory of Everything here, even though that's clearly Wolfram's implication and ultimate desired destination.
Something simpler, such as a worked example of how to reformulate an old problem using his CA approach in a way that provides useful new insights or cuts down substantially on computational cost, would be a good start. Even something that *potentially* but still clearly, obviously and unambiguously does the above would be enough, even if it turned out we couldn't implement it with current technology.
But until he does that, or someone else takes the time to do it for him, I think we should all be wondering just what there is of substance here.
Now, my prediction is that this is never going to happen. The reason is that the principles in this book aren't worked through to the level where it's possible to apply them practically in all the areas that Wolfram claims.
More than that, I'm not convinced that they *can* be worked through to that level because I don't see that the computational sophistication is really there - for reasons that Ray has explained already.
Of course I could be wrong about that. But when you're faced with a book that relentlessly repeats 'It's my belief that...', 'I strongly believe that...', 'It is my expectation that...' over and over, without providing proofs or examples for any of these beliefs or expectations, I think it's reasonable, rational and wise to be sceptical until something more concrete is presented.
Richard Wentk
Re: what science?
dear all
first a response to thomas' post:
1. there is NO such thing as lorentzian relativity: at best, there is a certain contraction factor derived by lorentz that relates rest length to kinetic length. (special) relativity starts out with the assumption that the transformations between the coordinate systems of (inertial) observers must involve a NON-TRIVIAL transformation between time coordinates (this demolishes the notion of "simultaneity") apart from non-trivial space coordinate transformations (some of which were already deduced by poincare on the basis of lorentz's work). that is relativity, and its beginning (at least from a professional physical viewpoint). this kind of thinking started in 1905 with einstein. end of story.
2. stuff about blasphemies/paradigm shifts: if mr. wolfram had been able to predict even ONE thing (however improbable testing it might be at the present time, say due to the computational costs involved), at least his "theory" would have been falsifiable. isn't that a criterion for science? you know, popper has not been dead for that long... also, sane and rather simplistic questions such as mine are easily dismissed by the zealous lot rather than answered directly. in a forum more formal than this, sane questioning (such as mine) by university scientists is often dismissed by the same zealous lot as the "harassment of the academic mafia". all this points to one new research program: "a new kind of (academic) sociology"...
3. about prediction: all canons of modern science to which wolfram and his cronies have been comparing ANKOS either (a) made direct predictions (einstein's 1916 paper had the mercury perihelion shift prediction IN THE SAME PAPER as his revolutionary theory of gravitation from general covariance and equivalence) or (b) were based on DIRECT observations of nature ITSELF (and not some "model"), such as Darwin's. none of the aforementioned canons engaged in anything remotely as quixotic as ANKOS in terms of "all talk and no predictions".
as for whoever talked about matrices and QM etc., there is no such thing as specific matrices in QM. matrices are invoked in QM because of the "noncommutativity of the product of observable operators" property, from which the uncertainty principle is directly derivable. consequently, talking about SPECIFIC matrices that were available in the 1800s and were later used in QM (presumably in order to invoke a comparison with the "tools" in ANKOS) is meaningless. i personally suspect that the correspondent's way of thinking about QM has been colored by reading too many popular books on the subject. a good "conservative" textbook would be a good remedy.
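(A footnote of my own, not rej's: the noncommutativity in question is the canonical commutation relation $[\hat{x}, \hat{p}] = i\hbar$, and the Robertson inequality $\Delta x\,\Delta p \ge \tfrac{1}{2}\,|\langle [\hat{x}, \hat{p}] \rangle| = \hbar/2$ is the uncertainty principle that is "directly derivable" from it.)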
cheers
rej
Re: what science?
dear all
thanks to the people who see reason in my reasoning
guess for those who don't, i have to add some seasoning!
i will post one personal criticism of ANKOS every day in this forum. these criticisms will be of a technical nature, and will generally avoid any reference to the philosophical quagmire of the whole enterprise of ANKOS. though tempting, i shall refrain from pointing out the more obvious problems in ANKOS w.r.t. wolfram's writing style, idiosyncrasies and maddening disregard of what others have already done in the field (and i'm referring to papers in extremely famous journals, not the work of some lone soul toiling away in secrecy in some basement in the valley). i challenge all those who believe in ANKOS in the ontological sense ;-) to refute each criticism as it is posted. please refrain from vague replies.
"SIMPLE" VERSUS "COMPLEX": WHERE ARE THE DEFINITIONS?
i shall start out with the most obvious problem with the book: the lack of basic definitions. let's go back to the mantra of the book, "simple programs generate complex behavior". what is "simple" and what is "complex"? let's start with "simple". suppose you have a rule that you want to implement. implementation could mean configuration space evolution or phase space evolution. furthermore, evolution could mean iterative application of a set of rules with respect to some configuration space parameter (say, time). in order to implement this rule, you need a suitable formal language. the rule is then encoded in this formal language to get a program. "simple program" could then mean two things: the naive meaning is that the length of the program is small. the more sophisticated meaning is the kolmogorov-chaitin one: the program is simple if it is algorithmically compressible. in the first case, the meaning of "simple" is context-dependent on the level of the formal language being used to encode the rule. clearly, one can implement CAs in mathematica 4.0 with inbuilt functions; "the naive length" is very small. the second meaning of "simple", as something that is algorithmically compressible, leads to a conclusion directly opposite to wolfram's. i'm referring to chaitin's book "the unknowable": to wolfram, something is simple if it looks very regular and non-random, and his notion of complex is the converse. for chaitin, however, something is simple if it is compressible. $\pi$ for chaitin is simple. $\pi$ for wolfram is complex. and this is where the problem begins: for wolfram, "simple behavior" is what has a non-random representation, but "simple program" is not defined. as i've argued, it could mean either programs that are more compressible than others, or programs that are not lengthy. obviously the first is not the case, because that implies chaitin's conclusions, which are diametrically opposite to wolfram's. the second option is just too naive and context-dependent to be of any use. so what exactly is a "simple program"? the same argument applies to "complex", when "complex" is seen as the inverse of "simple". i guess what i'm driving home is that while simple and complex representations in ANKOS have a semblance of definitions (in terms of how close or how far the representations are from being random), simple and complex programs have no definitions, as i've argued.
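(For reference, and strictly as a gloss on what rej is invoking rather than anything he wrote: the textbook Kolmogorov-Chaitin definition fixes a universal machine $U$ and sets $K_U(x) = \min\{|p| : U(p) = x\}$. A string $x$ is "compressible" - simple in Chaitin's sense - when $K_U(x)$ is much smaller than $|x|$, and algorithmically random when no program appreciably shorter than $x$ prints it.)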
cheers
rej
Re: what science?
Much of your argument against Wolfram's book reminds me of a scene in the play Inherit the Wind in which Clarence Darrow is expected to defend John Scopes without any reference to the theory of evolution. You say that any argument against your "daily" criticism must be framed in precise technical terms -- terms that you have largely restricted to your own specialty. Even this approach to criticizing Wolfram's book suggests to me that you have largely missed the point of his discussion.
If I understand what Wolfram is trying to say, and I suspect that I am beginning to, his point is that science based on precise definitions and proceeding through a lattice of deductive reasoning has largely failed. He even points out that extended logical arguments are basically only symbolic cellular automata and subject to the same limitations.
By staying away from overly restricting definitions and simply accepting simplicity and complexity for what they seem to be, he frees himself to study the progress of these objects as if he were observing life forms. His Science is, admittedly, more like art than science, but that is more to the point.
When Wolfram says that he is expounding a new 'kind' of science, he means literally that: not that he is proposing a superior kind of science or an inferior kind of science or a new branch of science, but that he is proposing, literally, a new way of undertaking the enterprise. If Wolfram had started off by making the precise definitions that you are demanding, he may eventually have been found out to be in contradiction with the method he is ultimately proposing. That is, if I understand his method, and I suspect that I am beginning to.
Has anyone ever figured out for sure what Machiavelli had in mind when he wrote The Prince? Was it a satire or literally a guidebook for statehood? Did Jonathan Swift really expect people to eat their children? When an obviously accomplished person writes a book whose methodology eludes you, you need to be very cautious in how you interpret it.
I am the last one to make an appeal to authority, but I find it difficult to believe that the creator of a program with the syntactical depth and complexity of Mathematica could have overlooked the possible need for precise definitions. Certainly, he had someone review his book, and that someone noticed the lack of precise definitions. It is on the basis of this that I am willing to consider that he may have something else in mind. I'm not sure yet exactly what it is that he has in mind -- perhaps it is what a few paranoids are undoubtedly suspecting, and merely a way to boost Mathematica sales -- but I am taking my Prozac and lithium carbonate and hoping for the best. Who knows, maybe we are seeing the degeneration of another Howard Hughes. As Wolfram might say, I strongly suspect that this is not the case.
As Thomas has pointed out, you need to give his ideas a longer look.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
I cannot comment on the future scientific potential for cellular automata, but I do want to defend the use of mathematics in physics and other areas of science.
A huge number of things that scientists and engineers want to find out about the universe are more than adequately approximated by pretending that space and time are mathematical continua rather than the discrete objects that quantum mechanics dictates.
For these purposes, the use of calculus, differential equations, linear algebra, statistics and probability, differential geometry, and other branches of math as they now exist -- in combination with mathematically-formulated laws of science -- has been extraordinarily successful in giving answers to questions that have yielded enormous CONCRETE advances in science and technology.
These include detecting and finding cures for diseases, building skyscrapers, making computers, getting airplanes to fly, and manufacturing new materials.
To claim that mathematics is a failure because, for example, it cannot at present predict much about the stock market or, say, a hurricane's future is like saying the telescope is a failure because it cannot penetrate past the known universe.
OF COURSE there will be calculations which are so complicated that our best computers cannot complete them in a reasonable amount of time. Not to mention the fact that even if our computers were fast enough, we are still unable to collect the requisite amount of data for the calculation to be especially accurate.
In the case of a hurricane, this would be the position, velocity, temperature and electrical charge of the air and water, at a very fine spatial and temporal scale. There may even be a macro "Heisenberg uncertainty principle" at play here, since the very collection of the data might alter what is being observed. (This is certainly true with regard to measuring the physical state of the human brain.)
Re: Wolfram -- Kurzweil's "Patterns" ??
In his discussion of Wolfram, and elsewhere, Kurzweil both states his belief that PATTERNS are the real unit of existence, and refers to the notion of COMPLEXITY for various reasons.
I would appreciate his elaborating on this notion of PATTERNS, for I do not yet understand it, and I think it may well require an elucidation of his concept of COMPLEXITY.
For example, PATTERNS are definable both in spatial and in temporal terms. Presumably, spatial patterns occur in some kind of multi-dimensional feature space. Also, presumably, it is useful to think of patterns as differing along some dimension of COMPLEXITY -- e.g., simple linear alternation of the values of one feature is not high in complexity, while the production of drawings of all human faces is more so.
So, then, some of my questions:
1. What constitutes a pattern vs. a non-pattern? How is one sure that, given a finite measurement, it does or does not contain a "pattern"? Or, is the recognition of a pattern solipsistic -- one man's noise is another's pattern?
2. What are the alternative ways in which patterns may be defined? (e.g., by a summarization formula, by a brief verbal description, by matching a stored image, ...)? Is this set closed?
3. How do the definitions depend on whether the pattern is a temporal, a spatial, or a mixed one?
4. Is it meaningful to talk about "patterns of patterns" as a pattern? (e.g., the pattern of one drum rhythm against another and both against a pattern of tonal melody)
5. Is it meaningful to talk about the "evolution" of patterns, in the sense of ascribing their appearance to some underlying dynamics?
6. If it is meaningful to talk about a "pattern of patterns" then some notion of complexity has been invoked. What is an appropriate definition of complexity re patterns? In particular, how does he feel about Coveney and Highfield's ("Frontiers of Complexity", 1995) definition: "... complexity is the study of the behavior of macroscopic collections of such units [many basic but interacting units, be they atoms or molecules ...] that are endowed with the potential to evolve in time" [p7]?
Many thanks. ... Lance Miller
Re: Wolfram -- Kurzweil's "Patterns" ??
I think I understand your concern, though it is difficult to explain.
For a pattern to exist, it seems that there must be something to interpret it. But the existence of an 'interpreter' implies something like the homunculus that John Searle wants so much to dismiss from his concept of consciousness. If we admit to the existence of such a homunculus, then the pattern is really in the homunculus and not in whatever the homunculus is interpreting. Hence, we have explained nothing.
Suppose that we have a long thin straight wire that is utterly unbroken and has absolutely no pattern in it. Now imagine an invisible curved surface that bends back and forth, something like a wave, and intersects the wire in a complex pattern. This pattern of intersections now represents a pattern in the wire. Now suppose that there is nothing (such as yourself) to imagine the invisible surface. Does the pattern in the wire still exist, or do we now have nothing but a long thin patternless wire?
If you think about it, there are no patterns in anything that do not have the same fallacy as the invisible plane intersecting the wire. We can always bend our perception in such a way as to create or demolish any supposed pattern.
Avoiding all the esoteric language and complex arguments that people employ, the pattern concept simply doesn't work. In truth, I am beginning to realize that none of the complicated syntactical structures that people employ really explain anything. We are no closer to a real understanding of our reality than primitive rock-throwers were. Now, we throw much smaller rocks (we call them atoms) and we have more complicated rules to describe their motion (we call them mathematics) but none of that really amounts to a hill of beans.
Am I close?
Of course, I have basically pointed out the reason why hard science takes the position that it does. Hard science does not try to explain anything; it simply models. If a particular model gives predictable results, the model is retained. It seems like Wolfram's automata model might be as good as any other, but it needs to be investigated much further. As many have pointed out, we need to find something testable about it.
Re: Wolfram -- Kurzweil's "Patterns" ??
>It's what you do with what you have that makes the difference between what we were and what we are. Without that, there's no difference between us and the chimps or the microbes.
So let's take that to its logical conclusion. Suppose, for the sake of argument, it turns out that Wolfram's theory is correct. Since rule 110 appears to be universal, let us even assume that we are just rule 110. Automata cannot go below the level of their own programming, so that would be the end of science. Now, let's throw in some of the other predictions that are being made. In thirty years, the singularity is achieved. If we are lucky (who knows) we may end up in the optimal position: we are commanders over electronic gods with perfect obedience. They can make or do anything we wish. There is nothing more for science to do, but Gödel has guaranteed that there will be no end to the mathematical theorems we can prove -- that is, until we run out of RAM. What a waste of good RAM! However, taking Wolfram to heart, it really only amounts to the unpredictable behavior of automata. What do we do?
Do we spend endless hours, days, years, millennia in simulations satisfying hedonistic fantasies? Of course, direct stimulation of certain centers of the brain would make those fantasies much more intense than anyone can imagine. They would certainly become addictive. I can't imagine how they would not. Even if not, why would any rational human resist? After all, the machines can keep us running in perfect order indefinitely. Nothing we do in that capacity should cause permanent damage. No need to worry about someone sneaking up on us while we are indulging in one of our perfect fantasies: the machines can protect us much better than we could ever protect ourselves. No need to worry about growing old: aging would be the first thing to go -- the nanobots would see to that. Do you see where I'm heading with this? Are we still diverging from the aforementioned chimps?
There's got to be more than rocks!!!!!
Re: Wolfram -- End of Science
> As far as I've read in Wolfram's book, all the CAs seem to be two dimensional. We still have two more dimensions to consider and write rules for. Of course, he may get into that later in the book.
As a matter of fact, he does deal with higher dimensions (starting with page 169). Also, when he deals with networks (starting on page 193), he points out that they can effectively represent any number of dimensions.
>> But there is a lot more to what we can express than what is contained in mathematics.
>I don't think so.
I just don't see how you could think that. I once read a line: 'There is nothing deader than a page full of equations. The trick is waving a magic wand and bringing them to life.' Also, think about this: how does a cell know what to do when its turn comes up? Why does it not suddenly decide that it wants to be a rule 30 automaton instead of a rule 110 automaton? There's something missing from our whole perception of reality. I get the impression that if we could somehow step outside of our reality it would be obvious, but of course, I can't imagine what that something is. It would be a cheat to say that it is God or something like that, but I get the impression that if there is a God, he/she/it can see the thing I am referring to.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
I have come up with a way to explain the whole universe. It is based on Wolfram's automata, and inspired by some of his other ideas.
I am going to use his rule 110 as a kind of analogy, though the connection is somewhat abstract.
Suppose the universe began with no rules whatsoever. I do not mean that it was random, but that it literally had no rules. It was not empty, or full of matter. It did not have zero dimensions or many dimensions. It did not contain consciousness or a lack thereof. It simply had no state whatsoever. I use the term 'began' somewhat loosely, since a beginning would itself imply the rule of chronological time.
Now, we subject this rule-deficient universe to an application of logic. The logic I speak of is not our logic or any particular branch of logic, but THE logic. It is the logic that is to our logic what the largest cardinal number is to countable infinity.
In its initial state, without any rules whatsoever, the universe is analogous to rule 110 with a random initial setting. When this ultimate logic is applied to it, it begins to evolve much as rule 110 evolves. Immediately, there is a great deal of interaction as different aspects of the totally rule-deficient universe unfold. Just as the automata, in a sense, fight each other to try and settle into a regular pattern, the different aspects of the universe fight each other to settle into a regular pattern.
There are contradictions that have to be worked out. Exactly what these contradictions are may not be understandable in terms of our logic. However, once again, there are useful analogies. In this case, the analogies come from common experience. For example, a space cannot be both full and empty. There cannot be both an infinite number of dimensions and none at all. As I stated, these are only analogies: we cannot know for sure what will and will not be permitted within this ultimate logic.
Eventually, the universe starts to settle into a regular pattern. This pattern is not a result of the application of rules, but the result of pure exclusion. In a sense, it is like a giant proof by contradiction. Everything assumes the only form it can that is not in contradiction with everything else. Though the universe tries to settle into a regular pattern, there is no guarantee that every issue can be resolved. These issues are analogous to structures in automata that continually shift between two or more shapes. A real-world example might be the wave particle duality. Neither the wave nor the particle satisfies the ultimate logic, so photons switch back and forth from one state to another depending on how they are scrutinized. Another subtler example is the simple movement of matter. When an object moves from point A to point B, it does so because to remain at point A would constitute a contradiction. Everything happens because of exclusion. Everything from the motion of the smallest apparent particle to the formation of a great civilization happens because any other event would constitute a contradiction.
Now, here is the interesting thing. In Wolfram's treatise on automata, he explores the concept that a logical argument is essentially just the evolution of a symbolic cellular automaton. So, in a sense, since the evolution I am describing is the evolution of pure logic, we are not seeing something that is analogous to automata, but, in fact, actual automata.
The beautiful thing about this model is that it relies on nothing that would not, in some limiting case, have to be true. Therefore, the model must be correct. It is even testable, though a test would probably be very difficult to devise. The test would involve setting up simulations of various aspects of the universe and demonstrating that any other construct would lead to a contradiction.
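As a concrete reference for the rule 110 analogy above, here is a minimal simulation sketch (an editor's illustration in Python, not anything from the book or this thread; the width, step count, and display characters are arbitrary choices). Run from a random initial row, it shows the kind of interacting, colliding structures the analogy leans on.

import random

# Rule 110 lookup table: each (left, center, right) neighborhood maps to the
# new center cell. 110 in binary is 01101110, read across the 8 neighborhoods.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    # One synchronous update with wrap-around boundaries.
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

random.seed(0)
row = [random.randint(0, 1) for _ in range(80)]   # the "random initial setting"
for _ in range(40):
    print("".join("#" if c else "." for c in row))
    row = step(row)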
What do you think, Thomas? Do I get a Nobel Prize?
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
To my mind, there are no rules to the universe. There are patterns or regularities that we discover and make rules to describe. Sometimes our descriptions are accurate and sometimes they aren't, but the rules only exist inside us. The patterns reflect our perception of the universe, not the universe itself. Every time some event can be likened to some event we've observed before we see the possibility of a pattern. But a lot of what we think is the same thing really isn't. Some things just coincidentally resemble each other to our eyes.
To the color-blind man a green light looks the same as a red light. The man who can see color makes a rule that says "If the light is green I can go." The color-blind man makes a rule that says, "If the top light is on I must stop. If the bottom light is on I can go." Both rules solve the problem of deciding when to stop or go, but they are based on different perceptions. The light itself is unaware of the rules being used to obey it. Neither rule is an apt description of the light and what it is doing.
If the rules are being made by beings who see a different part of the light spectrum, their rules will be different from ours. If they grew up on a planet where water was always solid until someone melted it, their perception of water would not be the same as ours. The rules they would make to describe water and life would no doubt be different from the ones we made to help us cope with the world in which we live.
Our rules are based on our perceptions of what is happening in the world around us. Considering the size of the universe and the number of things in it we haven't even dreamed of yet, we can't really say the rules we've made to describe it apply to anything more than what we've been able to see thus far. They certainly don't describe everything about it.
That's why we have to keep changing the rules we make. When we observe new phenomena (new to us) we either have to adjust an old rule to include them or make a new rule to explain them. In either case, the universe is not obeying any rules -- it isn't even aware of them. And the rules we make only help us use the regularities we've observed to control our own destiny, not that of the universe.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Quite true.
However, consider this. The avenue through which we arrived at this idea creates the impression that the ultimate absolute logic that dictated the unfolding of the universe must, in some sense, be cold and sterile. And yet, if the universe we see appears to contain the potential for love, justice, beauty, goodness and many of the other high ideals we hold so dear, those things must have been inherent to the initial conditions. Our natural inclination is to think that if these things exist they must have been created. Yet, if they are actually part of the ultimate absolute logic, the universe could not have unfolded any other way. Maybe it is not the case that we suppose these things to be true because we want to believe in them; maybe we want to believe in them because they are true. Another thing: it seems to me that there is not much real difference between the word 'logical' and the word 'true'. If you substitute the word truth where I have used the word logic, you have something very comparable to a biblical testament.
Well, I am not a theologian; nor do I wish to become one. The real theologians will probably either shake their heads in dismay or laugh at the audacity of some novice reproducing an idea that was invented and discarded in some dusty old book they are all privy to. Besides...I don't think God likes this idea ;)
I only wanted to determine, through some attempt at application, if Wolfram's ideas have any real potential. In my mind, I have shown that they do, so I will continue reading.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
>There is no creator except The Creator.
Actually, I was only making an idle observation, but my thinking on that particular idle observation has advanced somewhat.
There are three things that bother me about the concept of cellular automata being the basic algorithm of the universe. The first is that there has to be some initial condition. The second is that a rule must be chosen. The third is that something has to make the universe follow the specified rule ad infinitum. If we really wish to explain the universe, we must come up with a rule that works independently of any particular physical representation. The only candidate for such a rule, to my knowledge, is logic. If we make the not entirely unreasonable assumption that logic exists independent of physical representation and that it is the only primordial construct, then the controlling mechanism of the universe must be logic. The universe will obey nothing that is either inconsistent with logic or subsidiary to logic. It will always do exactly what logic dictates.
Now, imagine the universe as being somewhat like an automaton grid that has not yet been programmed. It does not have an initial state, it does not have a rule to follow, and it does not have anything, such as an electronic machine, to make it follow that rule. Somehow, these three things need to be implemented. It is tempting to think that if no choices are made, the board will automatically be blank, but this is a misconception. A blank board is as much of a choice as a full one or any other variation.
To this undecided board, we introduce logic. I use the term 'introduce' somewhat loosely, since it must be assumed that logic, if it is the primordial construct I am taking it for, must have been present from the start. The law of the excluded middle demands that the color of each and every square be chosen. In the case of the automaton model, we need only consider the choice of black or white. Some may argue that multi-valued logics invalidate the assumption that choices 'must' be made. I suspect that multi-valued logics, though useful analytical tools, do not have the primordial characteristic that two-valued logic has. I strongly suspect that it is not actually possible for something in the universe to be undecided. If it were possible, then Einstein, Podolsky, and Rosen were wasting their time in writing the EPR paper, and we may as well throw critical reasoning to the wind. Ultimately, it will be seen that I am proposing an alternative to both true indeterminacy and hidden variables.
So, we are stuck with this undecided board in which every square must be either black or white. Choices must be made and something has to choose. In the real universe a great many choices would need to be made, probably an infinite number of choices.
It is tempting to think that the real universe has some primordial state that it will assume by default. Of the three candidates I can think of, the first has already been dismissed by science: that of an empty three-dimensional vacuum with linear time. The other two candidates I can think of are nonexistence and a single dimensionless point. These represent two distinct choices. I tend to doubt that these, or any other choices, are actually primordial. Why should they be if the only primordial construct is logic?
We have a paradox. The universe has to be something, and there is nothing to decide what it will be. It can't just be everything: the law of the excluded middle dictates that it must be something specific. Something has to come into existence to resolve the paradox. People faced with such dilemmas often go insane. Machines faced with such dilemmas often explode. However, in the case of the universe, insanity or exploding would simply be two of many possible choices.
More than just a paradox, we have a loophole in reality: a gigantic loophole that demands to be filled. Into this loophole, we introduce a choice function. This choice function will probably need to be capable of making an infinite number of decisions at an infinite rate. Also, since the choice function's decisions exist for the purpose of breaking the paradox, the choices are, by definition, universal and total: i.e. omniscient and omnipotent. You can think of this choice function, if you will, as the much-sought choice function of set theory that must choose one element from every set. In order to distinguish between choices, this choice function must have motivation, preference, and all the other attributes we associate with consciousness. We may as well call it a consciousness. This 'consciousness' would not be a physical thing, but a transcendental thing: it would be 'the process of choosing'. Since it is, by definition, omniscient, omnipotent, and conscious, we may as well call this consciousness God. Whether it is called the God of the Muslims, the Christians, or some other faith is immaterial.
Now, we have God. The existence of God must have preceded the existence of the human strand of consciousness. God, realizing the etiology of God's own existence, would know how to create more such beings. I am guessing that God would not want to create another God, but subsidiary consciousnesses. The key would be to create a region with limited logical structure: our universe. That is possibly why the universe we inhabit has limited logical discrepancies like the EPR paradox. Those things exist so that our consciousness will have to emerge to resolve them. Penrose has the right idea, but he is confusing cause and effect: quantum collapse does not generate consciousness; it is generated by consciousness, the idea that Einstein wouldn't consider.
I have encountered many efforts to argue the existence of God from basic premises. The efforts of Hegel are very comparable to my own. Unlike Hegel, I did not start from the unstated premise that God must exist and look for a proof. I stumbled upon this construct accidentally while reading Wolfram's book.
Judge for yourself: is this a legitimate argument or merely the mad ravings of a misinformed dilettante?
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
In my humble opinion, the author tries to show that-
--there may exist a "Simple Rule" for "Generating Existence".
--and that rule is probably computational (CA) rather than a mathematical equation.
Let's do the following- think of the Existence (I prefer the word over Universe) as a 4-dimensional structure (or n-dimensional if you choose). Now let's assume you could model the Existence as a string of 1s and 0s- a very large string, probably.
Now, here are a few observations:
- Each human (and, to be precise, each human life) is a substring in this representation.
- One substring (Stephen, myself, or you) is trying to come up with a rule that describes the entire string. In my case, I simply abstract the string by calling it "Existence"; physicists want to come up with a "bunch of equations" whose solution would be this string; Mr. Wolfram predicts that it is a CA that can generate this string; and Mr. Kurzweil thinks it's a combination of equations and CAs.
If you go by the simplest, "Existence" is a good small explanation. All you have to do is first accept it intellectually.
Our human brain (itself a substring) has the somewhat unique capability of finding patterns in the String (Existence). So that we can abstract it and store it in our brains (also called "understanding"), we want to come up with a rule, because the String is too large.
Let's take the following example: reduce our problem to considering a "Rose" instead of "Existence".
"Rose":
I would call it a "Rose", observe it, and realize that I cannot store the entire Rose-string in my brain.
Mr. Wolfram contends that maybe we can come up with a CA that will generate the Rose-string pattern.
Some physicists contend that you can come up with an equation that will generate the Rose-string.
I think the understandable obsession to capture the underlying Rose-rule is just that: an obsession. There is no proof, as far as I have seen, that it is possible to come up with a rule that will generate this Rose-string. Coming up with equations and rules that generate complex strings is just what it is: rules and equations that generate complex strings. Now if we apply our cognitive abilities to see patterns similar to "Existence", I think that's a testimony to our pattern-matching ability (we are probably running a Longest Common Substring algorithm!).
Let me put it another way: this Existence string can be mathematically interpreted as an integer value, say 284,49....537. And that, my friends, is just another abstract way of saying I have found the underlying "Rule", the underlying "Number", the underlying "Equation" of Existence, that which Is.
Existence Is.
Let's keep matching patterns, especially the useful ones, to improve our lives (which are nothing but "pre-determined in 4 dimensions" yet not "pre-evaluated in the time domain").
Let's not lull ourselves into a false sense of belief that there is a simple rule. It "MAY" be that the Existence Number I just picked is 2 raised to the power 2002002, minus 1. Does that make it a "Simple Rule" explaining existence?
Or if the Existence Number is the 10-trillionth prime number minus 32: is that a "Simple Rule"?
Think about it: a "simple rule" for existence already exists; it's what I call the "Existence Number".
What we really want to find is a rule that is simple enough for our comprehension but complex enough that we can do interesting stuff with it. But what if the Existence Number really is what I just gave you!
-Anshu Sharma
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
>I think the understandable obsession to capture the underlying Rose-rule is just that: an obsession. There is no proof, as far as I have seen, that it is possible to come up with a rule that will generate this Rose-string.
I agree. The universe may simply exist exactly as it is, and the only underlying rule may be existence in its entirety. The Universe is not small enough for humans to grasp (except with a mathematical abstraction), and the atoms it is made up of are not large enough for humans to grasp (again, except with a mathematical abstraction). Why should we assume that the basic underlying rule is simple enough for humans to grasp? The underlying rule may be as complex as all of existence itself. Moreover, the rules we think we have pinned down may stop functioning tomorrow and be replaced by any number of apocalyptic fantasies--or things may simply fall apart.
Wolfram seems to be saying that he has discovered evidence that any kind of existence can be accounted for by a simple rule. In fact, he has only discovered evidence that any kind of pattern can be accounted for by a simple rule. Existence is another story. Existence is, and possibly always will be, too elusive for our philosophies to delineate. The presence of that quality of our experience that we have dubbed 'consciousness' adds to the perplexity of forming such a construct.
Maybe it is time that all of us came to terms with seven essential ideas.
1. Science will never explain the universe, only model it. Nor has it purported to do otherwise.
2. As far as actual explanations go, the beliefs of a Hindu monk and the beliefs of a Catholic priest have as much validity as the beliefs of a positivist physicist like Stephen Hawking or the toilings of a Stephen Wolfram.
3. We can still continue to model the universe in such a way that allows us to build a more reliable toaster or a faster jet ski, but these are only reliable models and not explanations of anything.
4. Eating, sleeping, taking hot showers, having adult relations, looking at sunsets, giving gifts, and staying alive and healthy for as long as possible are still desirable things even if we cannot prove it mathematically.
5. Blowing someone up or admonishing them because they disagree with us are still undesirable things, no matter what we believe.
6. There is no point in giving up a comforting belief just because it seems like the majority of rational people have rejected it. There is always another layer of explanation that may somehow make that belief seem more plausible.
7. Someone may prove me wrong tomorrow.
If I have offended anyone or seemed ignorant or irrelevant, I apologize. As far as the issue of existence goes, I'm afraid I have come to the point where such questions no longer interest me. However, I am looking for a more reliable toaster!
- Scott
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
>I don't know how you arrived at the "point" but in many eastern philosophies its called the point of "Realization" or "Enlightenment". Personally, I have also made journey from a pure physics/mathematics view of the world to the point of accepting Existence Is.
I suspect that you are seeing more in what I am saying than is actually there, but perhaps this excerpt from my personal journal will help to clarify my perspective.
'I think I see a pattern of emergence. But, the whole concept of pattern may be a local phenomenon. Maybe pattern emerged...or did something else besides emerge. The evidence is indicating that time is a local phenomenon. That's a strange enough idea. Why couldn't pattern be a local phenomenon? Maybe emergence emerged...but that would be truly impossible! I really don't know anything! I don't even have a case--or really even any evidence. I can make depressing speculations, but they are only worst-case scenarios. They are not, by any means, the most likely scenarios. On the scale of everything, my speculations have a zero percent chance of being correct.'
This excerpt from my journal tells only part of the story. The thing I have begun to realize is that there is no bottom line to human reasoning. The only definite trend in my personal thinking about the universe is that every time I think I have found that bottom line, or a good approximation to it, I am ultimately proved wrong. When you begin to suspect that there is no bottom line to human reasoning, you are freed from the constraints of reason itself--no basis, no structure.
I still have an interest in science, but it is now more like my interest in solving puzzles or playing video games. I have fewer stakes in it.
I am still looking for a better toaster!
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
I didn't read all the comments above, so I apologize if I'm repeating something that's already been brought up.
Mr. Kurzweil's main criticism of Wolfram's book seems to be that the CA Wolfram refers back to so often is complex but "not complex enough". Now, maybe Mr. Kurzweil is right, but maybe his definition of complex is incorrect. True, the same triangles and more or less regular (and random) behavior occur throughout the CA. But do you really need to see some more intricate designs in there to account for, say, a human brain? Imagine that sequence written out to a point where the bottom line contains an enormous number of black and white dots (I won't even try to represent that number; I mean something very large). Wouldn't that sequence or pattern be classified as "extremely complex"?
Now, that CA can be drawn out arbitrarily large, so there could be an arbitrary number of those "extremely complex" patterns that would account for a human brain, or whatever, along with all the other stuff in our universe. What I'm saying is: does the random structure of that CA really have to be very intricate at its lowest level to account for stuff in the universe, or can't it just be HUGE and account for everything? So its "computational equivalence" really is all that matters, and the "software" isn't really needed. Actually, I haven't gotten to the chapter about computational equivalence yet, so I'm not entirely sure what it means, but maybe what I said still makes sense :P
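One rough way to probe this intuition (an editor's sketch, not a measure proposed in the thread or the book): evolve rule 110 to a wide row and use compressed size as a crude, admittedly debatable proxy for how "complex" the bottom line has become. The width, step counts, and starting condition below are arbitrary choices.

import zlib

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    # One rule 110 update with wrap-around boundaries.
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def compressed_size(cells):
    # Crude complexity proxy: how well the row resists zlib compression.
    return len(zlib.compress(bytes(cells)))

row = [0] * 2000
row[-1] = 1                      # single black cell, as in many of the book's pictures
report_at = {0, 250, 500, 1000}
for t in range(1001):
    if t in report_at:
        print(t, compressed_size(row))
    row = step(row)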
/nik
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Greetings,
I came across this article on new research results from the Center for Computational Genetics and Biological Modeling at Stanford. It seems to lend support to Wolfram's assertions, outlined in A New Kind of Science, regarding natural selection's limited role in producing biological complexity (in this case, gene networks). Below is an excerpt and a link to the article:
--Scientists used to think that developmental fidelity evolved via natural selection, principally through survival and reproduction of organisms with redundant genetic systems -- that is, ones with copies of important gene sequences. But Siegal and Bergman's results indicate that redundancy may only be one small manifestation of a bigger theme: the complexity of gene networks. In short, more complex systems are more resistant to change in their outputs.
"It is typically assumed that important properties of organisms are crafted by natural selection," says Dmitri Petrov, assistant professor of biological sciences. "What Siegal and Bergman show is that robustness in the face of mutation, or canalization, may be a byproduct of complexity itself and therefore that robustness may be only very indirectly a product of natural selection."
Says Siegal: "It might be that the complex nature of the genetic system itself is going to give you canalization independent of natural selection. This complexity goes beyond mere redundancy, incorporating all kinds of elaborate connections in the gene network."
That doesn't mean natural selection doesn't play an important role. Continues Petrov: "Natural selection has shaped the genetic networks of complex organisms so that they produce appropriate phenotypes -- the more highly interconnected these networks are, the more robust the corresponding phenotypes are. The importance of this result is that it shifts the focus of the field away from abstract models of natural selection and toward actual genetic networks. In so doing, it will provide a new perspective for analyzing and understanding the current outpouring of genetic data in model organisms." --
http://www.stanford.edu/dept/news/pr/02/bergman87.html
CMR
<--gratuitous quotation that implies my profundity goes here-->
CAs + Quantum Computing Concepts = ???
--------------------------------------------------------
Quantum Computational Cosmology??
-------------------------------------------------------
On reading Wolfram's book, and in particular the part about physics as CAs operating on a network to produce space-time, matter, and energy, the following ideas occurred to me. Please excuse the lack of rigour; I'm just trying to convey intuitions here and get some feedback on whether anyone thinks there's promise in this direction, or whether there are other references people can point me to.
These questions arise:
1. What would the network of nodes and arcs between nodes be made of? That is, what is the substrate of Wolfram's universe network?
2. How do we define the "time arrow", and what makes the universe appear as it does?
My essential concepts are these:
----------------
Principle 1
----------------------------------------------------------------------------------------------------
The substrate is simply (all possible arrangements of "differences")
----------------------------------------------------------------------------------------------------
or perhaps put another way "the capacity for all possible information", or if we want
to imagine a "growing" multiverse, it would be
all possible arrangements of "differences" which can be represented
in a bitstring of length n, and n is growing.
The fundament is the binary difference. Each "direct difference" is an arc
and nodes are created simply by virtue of being the things at either end of
a "direct difference".
e.g. The multiverse is
a universe with just one "thing" and no differences (boring) +
a universe with one difference (ergo, two things) +
all possible configurations of two differences +
all possible configurations of three differences + etc.
I believe, but am not certain, that the number of different possible network configurations on n posited nodes is 2^((n^2 - n)/2). Imagine an n x n matrix whose ith row and ith column represent the ith posited-to-exist "node", with each entry recording whether that node is directly different (arc-joined) from each other node or not.
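For what it's worth, that closed form checks out when n is read as the number of nodes: there are n(n-1)/2 possible arcs, each independently present or absent. A tiny editor's sketch in Python confirms 2^((n^2 - n)/2) against brute-force enumeration for small n:

from itertools import combinations, product

def count_networks_bruteforce(n):
    # Enumerate every on/off assignment to the n*(n-1)/2 possible arcs.
    arcs = list(combinations(range(n), 2))
    return sum(1 for _ in product([0, 1], repeat=len(arcs)))

def count_networks_formula(n):
    return 2 ** ((n * n - n) // 2)

for n in range(1, 6):
    assert count_networks_bruteforce(n) == count_networks_formula(n)
    print(n, count_networks_formula(n))   # 1, 2, 8, 64, 1024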
----------------
Principle 2
---------------------------------------------------------------------------------------------------------
If the multiverse is all possible states, "simultaneously", of a length-n bitstring, then the "time-arrow" and the "actual universe" are defined as an order-producing "selection" or "view" of a subset of the "potential states" of the multiverse.
--------------------------------------------------------------------------------------------------------
If we imagine the multiverse as kind of holding (or being the potential for)
all possible states of the long bitstring, then you can make a selection
from all of those states. i.e. you can define
U1 = a particular sequence of states of the bitstring.
The word "sequence" rather than "set" is chosen deliberately here,
because my contention is that, of all possible sequences Ui, some
sequences will be "order-producing".
So why don't we just make the bold claim that "the time arrow"
is the direction through state-space from the beginning to the end
of an "order-producing" sequence (U1) of states. And that U1,
the "order-producing" sequence of states, is the "observable
universe".
Why is U1 the observable universe? Well because its evolution
of states was order-producing in just the right measure to
produce just the right mixture of randomness and order to
produce matter and energy, the rules of physics, and
"emergent behaviour" systems such as intelligent observers.
----------------------------------------------------------
How does this relate to Wolfram's CAs?
----------------------------------------------------------
Well we can define programs as being simply the things which
specify the state transitions from "state i to state i+1" of a sequence Ui
of states of the multiverse.
In (vaguely recollected) Hoare logic terms, my contention is that the
multiverse can be "viewed" as simultaneously (in an extra-time sense)
executing every possible program Pij such that S(i) Pij -> S(j).
Most sequences of executions of programs will be like executing
"garbage" programs on "garbage data", but some sequences of
program executions will produce interesting evolutions of states exhibiting
complexity and order, and emergent behaviour.
A good guess is that some of the "interesting" program executions
may be understandable as, modellable as,
executions of particular universal CLASS 4 automata.
The details are left to the wonks ;-)
---------------------------
--- Summary --------
---------------------------
The multiverse (or substrate for our universe) is precisely
the potential for all information. That is, it is equivalent
to the "simultaneous" exhibition of or capacity for
all possible states of a long and possibly growing bitstring.
The time-arrow of an observed universe is the order of visitation of the
information-states of the multiverse which corresponds to the
execution of "Wolfram-automata" which locally-evolve the configuration
of the individuals and differences of the substrate in such a way as to
generate just the right mix of randomness and order which produces
stable, organized systems and ultimately observers.
With this formulation, we need not assume that there is some
magical, extra-universal "supercomputer" busily computing a
"Fredkin-Wolfram-information" universe for us. Computation
of ordered complexity just falls out, as just being a particular
path through a very large set of information-states all of
which co-exist in the multiverse substrate. The huge, but
unrealized set of information states in the substrate, in
fact IS the substrate. It only becomes (or hosts) an observable
universe when viewed by observers existing within
a set of its states that is consistent enough to be "real".
---------
I'd welcome any feedback. Feel free to be ruthless with this bizarre and possibly obvious or poorly thought-through concept.
Eric Hawthorne
Re: CAs + Quantum Computing Concepts = ???
>On reading Wolfram's book, and in particular the part about physics as CAs operating on a network to produce space-time, matter, energy, I was prompted to have the following ideas.
Yes, I like this idea. Arguably, it is the only possible explanation. However, I am guessing that someone has thought of it before, and to my knowledge it is utterly untestable.
But don't feel bad about either observation. No law of reality ever guaranteed that the truth would be testable when someone thought of it, and no law of psychology ever guaranteed that the truth would be so difficult to formulate that few or no persons would think of it.
What we may be discovering here is that the truth is ultimately unknowable, and that we would all be a lot happier if we just stuck with usable, testable engineering principles and built much better toys.
Scott
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
I thought that this review provided a good explanation of the first half of Wolfram's book, on CAs and the information- and computation-theoretic analysis of nature, but failed to link this with the notion of computational equivalence, which is essentially the main idea of the second half of the book. I did, though, especially appreciate the effort and detail that Dr Kurzweil expended in explaining his points of difference.
However, it is erroneous to directly compare computer-graphical CAs, whose basic elements are colored grid cells, with fully developed organisms, which are constructed from far more complex units such as organic cells, and then to conclude that the latter are more complex than the former. This is a classic apples-against-oranges argument, and it confuses the issue of complexity by not sufficiently distinguishing between the basic components out of which the two structures are built.
If we take a functional approach we can see that we have two quite different structures:
CA[graphicalCells, Rules[graphicalCells] ], and
Organism[organicCells, Rules[organicCells] ].
The point that Wolfram is making is that, when considered as computational systems, Rules[graphicalCells] and Rules[organicCells] are equivalent as universal systems. But because organicCells are themselves complex structures described by organicCell[cellElements, Rules[cellElements]], the resultant Organism object can be seen to be a hierarchical structure:
Organism[organicCell[cellElements, Rules[cellElements]], Rules[organicCells]]
and its supposedly greater complexity derives from viewing the above structure as:
Organism[cellElements, ComplexRules[cellElements]]
But such an approach is erroneous and suffers from a confusion about levels of representation. Since both systems (graphical CAs and organisms) are universal, they can represent anything, so the former can describe the latter, and therefore the latter cannot be any more complex!
Best Regards
Michael Kelly, IIT.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Michael,
I agree, and would offer as an analogy a "bias" that comes from not recognizing an associativity. Stretching the meaning of functional notation just a bit ...
Begin with Objects-level-1 (Objs1), and Rules-level-1 (Funcs1), and define
Objs2 = Funcs1(Objs1)
Continue with Objs3 = Funcs2(Objs2).
Now, Objs3 = Funcs2(Funcs1(Objs1)). The composition Funcs2(Funcs1()) might be designated "FuncsB". We then have
Objs3 = Funcs2(Objs2), or equivalently
Objs3 = FuncsB(Objs1).
The intermediary complexity can be viewed as either "pushed down" into the argument "Objs2" of Funcs2, or "pulled up" into the function "FuncsB" over Objs1.
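A throwaway sketch of the same associativity in Python (the names mirror the post's placeholders, and the particular transformations are arbitrary): whether the intermediate complexity lives in the data (Objs2) or in the composed rule (FuncsB), the level-3 result is identical.

def compose(g, f):
    # Return the function x -> g(f(x)).
    return lambda x: g(f(x))

funcs1 = lambda objs1: [x * 2 for x in objs1]   # level-1 rules produce level-2 objects
funcs2 = lambda objs2: sum(objs2)               # level-2 rules produce level-3 objects
funcsB = compose(funcs2, funcs1)                # the "pulled up" rule over level-1 objects

objs1 = [1, 2, 3]
objs2 = funcs1(objs1)                           # complexity "pushed down" into the data
assert funcs2(objs2) == funcsB(objs1)           # Objs3 is the same either way
print(funcsB(objs1))                            # -> 12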
Another way to view it: For all the computational power of Deep Blue, everything it can do "functionally" can be accomplished with my lowly PC, albeit rather slowly and tediously.
At least, I think this was your intent.
Cheers! ____tony b____
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Some doubts and comments on cellular automaton rule 110, and an error in "A New Kind of Science" ...
I believe there is an error in the graphics depicted in the book: they do not correspond exactly with the schematic diagram in the figure on page 683.
The parts needed to begin the specification of the cyclic tag system can be constructed separately using phases; hence these parts must reproduce the figures of the book corresponding with the schematic diagram, using suitable alignments and distances.
In the figure on page 681 the eight parts of the system are presented; we rename each one to identify them more easily when encoding and visualizing them.
book                                                  label
------------------------------------------------------------
1. a black element in the sequence                    1Ele_C2
2. a white element in the sequence                    0Ele_C2
3. a black element in a block                         1Blo_Eb
4. a white element in a block                         0Blo_Eb
5. the initial form of some separator between blocks  Sep1_EEb
6. the later form of some separator between blocks    Sep0_EEb
7. a black element ready to be added                  1Add_Eb
8. a white element ready to be added                  0Add_Eb
Following the schematic diagram on page 683 (each gray tone in the diagram represents a particular block of structures), the sequence can be codified in the following way:
[4_4A]-[*e*]-[1Ele_C2]-[Sep0_EEb]-[1Blo_Eb]-[Sep1_EEb]-[1Blo_Eb]-[0Blo_Eb]-[1Blo_Eb]- ..., and so on. ("e" represents a space defined by ether.)
A commentary in the book says that the distances in the schematic diagram are not accurately represented, but the important point of the diagram is to show how each of the parts must interact to represent the cyclic tag system, as illustrated in figure (d) on page 679.
The construction of complex configurations in Rule 110 is very sensitive: changing a single bit or cell disturbs the whole process, as we can verify in the reproduction of the cyclic tag system.
In order to calculate each of the components on page 681, the phase properties of ether are used. With this property the glider phases are known and these components can be suitably grouped.
The phase and distance must be precise; otherwise a different collision takes place.
In this way, in block one `1Blo_Eb' the phase of the first Ebar aligns
all the other gliders (12 Ebar's altogether) and the distances among
them are: 10-1-2-8-8-2-10-1-2-8-8 (we count the number of mosaics T3
between gliders).
In the case of block zero `0Blo_Eb' the distances are 10-1-2-8-8-8-10-1-2-8-8. The difference between the two blocks is the distance between the sixth and the seventh Ebar.
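A trivial cross-check of that statement, using only the two distance sequences quoted above (an editor's sketch in Python): the eleven T3 gaps differ only at the sixth position, i.e. between the sixth and the seventh Ebar.

# Gaps, in T3 mosaics, between the 12 Ebar gliders of each block (as listed above).
block_one  = [10, 1, 2, 8, 8, 2, 10, 1, 2, 8, 8]
block_zero = [10, 1, 2, 8, 8, 8, 10, 1, 2, 8, 8]

diffs = [i + 1 for i, (a, b) in enumerate(zip(block_one, block_zero)) if a != b]
print(diffs)   # -> [6]: only the gap between the sixth and seventh Ebar differs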
Figure (a), on page 684, begins with an element one `1Ele_C2'; then an initial separator `Sep1_EEb' formed by 8 gliders arrives, and a block one `1Blo_Eb' must arrive, following the diagram on page 683.
From the ninth Ebar a block one `1Blo_Eb' begins, but its distances do not correspond. The distances of these last four Ebar's in Figure (a) are: 4-6-2. The piece of block one `1Blo_Eb' in Figure (c) shows the remaining nine Ebar's.
Thus, joining the previous two parts, the distances of the corrected block one `1Blo_Eb' must be: 4-6-2-8-8-2-10-1-2-8-8.
In Figure (g) we again have the first four Ebar's associated with block one `1Blo_Eb', corresponding with the distances 4-6-2. In Figure (i) the last eight Ebar's of block one are shown; we can only check this detail in Figures (a) and (g).
However, we made the construction using the original block one `1Blo_Eb'
in the book.
We obtain a valid collision in the sense that it produces a "reading" one `1Add_Eb', but the phase in which it originates does not allow it to yield the element one `1Ele_C2' on colliding with the group of 4A's, because the collision, instead of producing the first C2, generates the glider sequence B-Bbar-F.
In this production there is not much to look for, because the phase is unique: 1 of 6 possible collisions is a soliton (4A -> Ebar), so when the group of 4A's crosses the first three Ebar's as solitons, the fourth Ebar must produce a C2.
In the lower part of Figure (a) there is a pile of 8 C2's. This pile cannot be obtained with the original block presented in the book, only with the corrected block; with the original block a pile of 20 C2's is produced, and for this reason another phase is generated which yields an unwanted collision.
Another proof of this error can be verified in Figure (b): the distance between the first Ebar and the second Ebar consists of 27 T3 mosaics, and this distance is only yielded by the corrected block one `1Blo_Eb'. With the original block in the book the distance between these two Ebar's consists of 29 T3 mosaics.
Another error in the figures on page 681!
The black reading element `1Add_Eb' is wrong, because the first Ebar does not correspond with this block; the other three are fine.
Perhaps they took a bad picture and, instead of selecting the four gliders corresponding with the component `1Add_Eb', they took the first three gliders and the remaining Ebar from a separator `Sep0_EEb'.
This error is easily verified because the distances in the element `1Add_Eb' must be 27-21-27, and not 20-27-21 as depicted in the book.
The first operations have been implemented by calculating the sequence 1s1s0s10s1s (s = separator), using approximately 20,000 cells in the initial configuration and checking the result over 13,300 generations.
With this, it was possible to verify that the "addition", "reading" and "erasing" operations work suitably.
Distances of the corrected blocks:
1. 1Blo_Eb: 4-6-2-8-8-2-10-1-2-8-8
2. 0Blo_Eb: 4-6-2-8-8-8-10-1-2-8-8
3. 1Add_Eb: 27-21-27
The errors in the book look like mistakes made when the figures were selected, but if somebody wishes to see how the system works and takes the components shown in the book, it does not work in the way shown by the other figures.
This does not mean that the system is wrong; we just want to say that the precision required, both in the collisions and in the glider groups, is a very difficult feature to obtain. We believe that 10 years of research are well represented in the implementation of the cyclic tag system in Rule 110.
Other remaining tasks are:
I- To verify how the blocks of elements (1Blo_Eb, 0Blo_Eb) must cross the sequences of elements (1Ele_C2, 0Ele_C2) like solitons and continue with their operation.
II- To verify the correct operation of the element zero `0Ele_C2'. Up to now the corrected block is working, but in Figure (h) the complete block zero `0Blo_Eb' can be seen and it is equal to the one presented on page 681. But since the block is erasing with the gliders D1 and 3A's and it is not operating, we have to verify the process at all its points, that is, when it generates `0Add_Eb' and crosses an element `1Ele_C2' or `0Ele_C2' like a soliton.
Using the original "zero" block `0Blo_Eb' and beginning with the sequence ...-1Ele_C2-Sep0_EEb-0Blo_Eb-..., the element `1Add_Eb' is well produced, but it does not appear with the required phase, and when the group of 4A's arrives to read this element an unwanted collision is generated.
The corrected zero block `0Blo_Eb' generates a valid `0Add_Eb', and when it later hits the group of 4A's it produces an adequate `0Ele_C2'.
We still have to verify that it crosses an element "one" or "zero" like a soliton.
The groups of 4A's are static, but that is because one assumes that the left part of the system can be copied and pasted without changing any phase or distance. The distances between these gliders are: 4A-27e-4A-23e-4A-25e-4A.
The necessary distance between these groups of 4A's is the same, but their phases are different. A block of 4A's has three phases, and to produce a soliton we need to look for the corresponding phase; therefore we have to take care of the phase in which the 4A's arrive in order to avoid the decomposition of the system.
When the first simulations were made using the corrected "one" block `1Blo_Eb', we realized that if the distances in the groups of 4A's are changed, then it is possible to handle a "one" `1Ele_C2' and a "zero" `0Ele_C2' with a register.
It is the group of 4A's that distinguishes 1 from 0, not the blocks of elements. For example:
if it is 1 the distances are: 4A-27e-4A-27e-4A-25e-4A
if it is 0 the distances are: 4A-27e-4A-19e-4A-25e-4A
The central distance then determines whether the register produces a "0" or a "1". Finally:
1. In order to perform a particular operation, is it necessary to construct a completely different initial configuration?
2. Doesn't the first question contradict universal computation?
A free program to study Rule 110, for OpenStep and Mac OS X, is available at: http://delta.cs.cinvestav.mx/~mcintosh/comun/s2001/s2001.html
Sincerely,
Genaro Juarez Martinez
Juan Carlos Seck Tuoh Mora
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
If you are stimulated by new ideas and if you can think for yourself rather than simply accept what Dr. Stephen Wolfram dishes out, I think you will find this letter of interest. Let us note first of all that Wolfram wants to persecute the innocent and let the guilty go unpunished. Who does he think he is? I mean, he, already oppressive with his rancorous, empty-headed scribblings, will perhaps be the ultimate exterminator of our human species -- if separate species we be -- for his reserve of unguessed horrors could never be borne by mortal brains if loosed upon the world. If you think that that's a frightening thought, then consider that Wolfram is extraordinarily brazen. We've all known that for a long time. However, his willingness to spit on sacred icons sets a new record for brazenness. Evil individuals are acting in concert with other evil individuals for an evil purpose. Of course, this sounds simple, but in reality, the real issue is simple: He is the ultimate source of alienation and repression around here. Despite the fact that a lot of people may end up getting hurt before the final spasm of Wolfram's rage is played out, Wolfram is blinded by greed. Why do I tell you this? Because these days, no one else has the guts to. As a parting thought, remember that Dr. Stephen Wolfram's toadies coerce children into becoming activists willing to serve, promote, spy, and fight for his harangues.
Re: Reflections on Stephen Wolfram's "A New Kind of Science"
Just for the record: I am postulating the existence of a quantum particle for Intelligence ...
From a chat with Kurzweil in a discussion about Wolfram's CAs:
Thanks for your answer.
Yes, it sounds very coherent, and it reminds me a little of The Matrix (I have read Ray Kurzweil's view of The Matrix, and I am not saying that you are supporting the Matrix possibility) in terms of the concept of downloading objects as modules that collaborate to interact with some task goals, which can create problems and/or create solutions, instead of writing code for known situations. But in terms of the singularity age, I feel we will have the conditions to discover, or approximate, what I call the IQUAP, the Intelligence Quantum Particle. I am not sure yet whether CAs can help us start to think about them as a minimalistic building-block structure for AI software, and we would try to analyze how Nature worked on this some time ago. I feel that only by manipulating the IQUAPs will we really have the power to reconfigure the environment and achieve infinite possibilities for reality. If we don't find the IQUAPs, we will only be extending or shrinking, i.e. distorting, reality, and this will be, to me, the first consequence of the Singularity age. If we accept the nearness of the Singularity age, and the possibility of discovering the IQUAPs, we will probably have the power to reconfigure the universe, i.e. to write or create programs that use the universe as a computer.
Could you help us with your comments about it ?
Sergio Cabral - IdeaValley.com.br / Rio de Janeiro / Brasil