Chapter 2: I Married a Computer
John Searle challenges Ray Kurzweil's predictions, such as downloading our minds onto hardware, nanotech-enhanced new bodies, evolution without DNA, virtual sex, personal immortality, and conscious computers. He uses his famous "Chinese Room" argument to show how machines cannot really understand human language or be conscious. Searle's conclusion is that Kurzweil's ideas on "strong AI" are based on "conceptual confusions."
Originally published in print June 18, 2002 in Are
We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI
by the Discovery
Institute. Published on KurzweilAI.net on June 18, 2002.
Kurzweil’s Central Argument
Moore’s Law on Integrated Circuits was first formulated by
Gordon Moore, former head of Intel, in the mid-Sixties. I have seen
different versions of it, but the basic idea is that better chip
technology will produce an exponential increase in computer power.
Every two years you get twice as much computer power and capacity
for the same amount of money. Anybody who, like me, buys a new computer
every few years observes Moore’s Law in action. Each time I
buy a new computer I pay about the same amount of money as, and
sometimes even less than, I paid for the last computer, but I get
a much more powerful machine. And according to Ray Kurzweil, who
is himself a distinguished software engineer and inventor, “There
have been about thirty-two doublings of speed and capacity since
the first operating computers were built in the 1940s.”
Furthermore, we can continue to project this curve of increased
computing power into the indefinite future. Moore’s Law itself
is about chip technology, and Kurzweil tells us that this technology
will reach an upper limit when we reach the theoretical possibilities
of the physics of silicon in about the year 2020. But Kurzweil tells
us not to worry, because we know from evolution that some other
technology will take over and “pick up where Moore’s Law
will have left off, without missing a beat.” We know this,
Kurzweil assures us, from “The Law of Accelerating Returns,”
which is a basic attribute of the universe; indeed it is a sublaw
of “The Law of Time and Chaos.” These last two laws are
Kurzweil’s inventions.
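To make the doubling arithmetic concrete, here is a small Python sketch; the start year of 1945 is my own assumption (Kurzweil speaks only of "the 1940s"), and the implied doubling interval is a back-of-the-envelope figure, not one he gives.
    # Rough arithmetic behind "about thirty-two doublings since the 1940s".
    # Only the doubling count comes from the text; the years are assumed.
    doublings = 32
    start_year, end_year = 1945, 1999   # first operating computers -> book's publication
    total_factor = 2 ** doublings       # overall growth in speed and capacity
    months_per_doubling = (end_year - start_year) * 12 / doublings
    print(f"overall growth factor: {total_factor:,}")                      # about 4.3 billion
    print(f"implied doubling interval: {months_per_doubling:.1f} months")  # about 20 months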
It is fair to say that The Age of Spiritual Machines is an extended
reflection on the implications of Moore’s Law, and is a continuation
of a line of argument begun in his earlier book, The Age of Intelligent
Machines. He begins by placing the evolution of computer technology
within the context of evolution in general, and he places that within
the history of the universe. The book ends with a brief history
of the universe, which he calls “Time Line,” beginning
at the Big Bang and going to 2099.
So what, according to Kurzweil and Moore’s Law, does the future
hold for us? We will very soon have computers that vastly exceed
us in intelligence. Why does increase in computing power automatically
generate increased intelligence? Because intelligence, according
to Kurzweil, is a matter of getting the right formulas in the right
combination and then applying them over and over, in his sense “recursively,”
until the problem is solved. With sheer computational brute force,
he thinks, you can solve any solvable problem. It is true, Kurzweil
admits, that computational brute force is not enough by itself,
and ultimately you will need “the complete set of unifying
formulas that underlie intelligence.” But we are well on the
way to discovering these formulas: “Evolution determined an
answer to this problem in a few billion years. We’ve made a
good start in a few thousand years. We are likely to finish the
job in a few more decades.”
Let us suppose for the sake of argument that we soon will have
computers that are more “intelligent” than we are. Then
what? This is where Kurzweil’s book begins to go over the edge.
First off, according to him, living in this slow, wet, messy hardware
of our own neurons may be sentimentally appealing, like living in
an old shack with a view of the ocean, but within a very few decades,
sensible people will get out of neurons and have themselves “downloaded”
onto some decent hardware. How is this to be done? You will have
your entire brain and nervous system scanned, and then, when you
and the experts you consult have figured out the programs exactly,
you reprogram an electronic circuit with your programs and database.
The electronic circuit will have more “capacity, speed, and
reliability” than neurons. Furthermore, when the parts wear
out they permit much easier replacement than neurons do.
So that is the first step. You are no longer locked into wet, slow,
messy, and above all decaying hardware; you are upgraded into the
latest circuitry. But it would be no fun just to spend life as a
desktop in the office, so you will need a new body. And how is that
to be done? Nanotechnology, the technology of building objects atom
by atom and molecule by molecule, comes to the rescue. You replace
your old body atom by atom. “We will be able to reconstruct
any or all of our bodily organs and systems, and do so at the cellular
level. ...We will then be able to grow stronger, more capable organs
by redesigning the cells that constitute them and building them
with far more versatile and durable materials.” Kurzweil does
not tell us anything at all about what these materials might be,
but they clearly will not be flesh and blood, calcium bones and
nucleoproteins.
Evolution will no longer occur in organic carbon-based materials
but will pass to better stuff. However, though evolution will continue,
we as individuals will no longer suffer from mortality. Even if
you do something stupid like get blown up, you still keep a replacement
copy of your programs and database on the shelf so you can be completely
reconstructed at will. Furthermore, you can change your whole appearance
and other characteristics at will, “in a split second.”
You can look like Marlon Brando one minute and like Marlene Dietrich
the next.
In Kurzweil’s vision, there is no conflict between human beings
and machines, because we will all soon, within the lifetimes of
most people alive today, become machines. Strictly speaking we will
become software. As he puts it, “We will be software, not hardware”
(italics his) and can inhabit whatever hardware we like best. There
will not be any difference between robots and us. “What, after
all, is the difference between a human who has upgraded her body
and brain using new nanotechnology, and computational technologies
and a robot who has gained an intelligence and sensuality surpassing
her human creators?” What, indeed? Among the many advantages
of this new existence is that you will be able to read any book
in just a few seconds. You could read Dante’s Divine Comedy
in less time than it takes to brush your teeth.
Kurzweil recognizes that there are some puzzling features of this
utopian dream. If I have my programs downloaded onto a better brain
and hardware but leave my old body still alive, which one is really
me? The new robot or the old pile of junk? A problem he does not
face: Suppose I make a thousand or a million copies of myself. Are
they all me? Who gets to vote? Who owns my house? Who is my spouse
married to? Whose driver’s license is it, anyhow?
What will sex life be like in this brave new world? Kurzweil offers
extended, one might even say loving, accounts. His main idea is
that virtual sex will be just as good as, and in many ways better
than, old-fashioned sex with real bodies. In virtual sex your computer
brain will be stimulated directly with the appropriate signal without
the necessity of any other human body, or even your own body. Here
is a typical passage:
Virtual touch has already been introduced, but the all-enveloping,
highly realistic, visual-auditory-tactile virtual environment will
not be perfected until the second decade of the twenty-first century.
At this point, virtual sex becomes a viable competitor to the real
thing. Couples will be able to engage in virtual sex regardless
of their physical proximity. Even when proximate, virtual sex will
be better in some ways and certainly safer. Virtual sex will provide
sensations that are more intense and pleasurable than conventional
sex, as well as physical experiences that currently do not exist.
The section on prostitution is a little puzzling to me:
Prostitution will be free of health risks, as will virtual
sex in general. Using wireless, very-high-bandwidth communication
technologies, neither sex workers nor their patrons need to leave
their homes.
But why pay, if it is all an electrically generated fantasy anyway?
Kurzweil seems to concede as much when he says, “Sex workers
will have competition from simulated—computer generated—partners.”
And, he goes on, “once the simulated virtual partner is as
capable, sensual, and responsive as a real human virtual partner,
who’s to say that the simulated virtual partner isn’t
a real, albeit virtual, person?”
It is important to emphasize that all of this is seriously intended.
Kurzweil does not think he is writing a work of science fiction,
or a parody or satire. He is making serious claims that he thinks
are based on solid scientific results. He is himself a distinguished
computer scientist and inventor and so can speak with some authority
about current technology. One of his rhetorical strategies is to
cite earlier successful predictions he has made as evidence that
the current ones are likely to come true as well. Thus he predicted
within a year when a computer chess machine would be able to beat
the world chess champion, and he wants us to take his prediction
that we will all have artificial brains within a few decades as
just more of the same sort of solidly based prediction. Because
he frequently cites the IBM chess-playing computer Deep Blue as
evidence of superior intelligence in the computer, it is worth examining
its significance in more detail.
When it was first announced that Deep Blue had beaten Garry Kasparov,
the media gave it a great deal of attention, and I suspect that
the attitude of the general public was that what was going on inside
Deep Blue was much the same sort of thing as what was going on inside
Kasparov, only Deep Blue was better at that sort of thing and was
doing a better job. This reveals a total misunderstanding of computers,
and the programmers, to their discredit, did nothing to remove the
misunderstanding. Here is the difference: Kasparov was consciously
looking at a chessboard, studying the position and trying to figure
out his next move. He was also planning his overall strategy and
no doubt having peripheral thoughts about earlier matches, the significance
of victory and defeat, etc. We can reasonably suppose he had all
sorts of unconscious thoughts along the same lines. Kasparov was,
quite literally, playing chess. None of this whatever happened inside
Deep Blue. Nothing remotely like it.
Here is what happened inside Deep Blue. The computer has a bunch
of meaningless symbols that the programmers use to represent the
positions of the pieces on the board. It has a bunch of equally
meaningless symbols that the programmers use to represent options
for possible moves. The computer does not know that the symbols
represent chess pieces and chess moves, because it does not know
anything. As far as the computer is concerned, the symbols could
be used to represent baseball plays or dance steps or numbers or
nothing at all.
If you are tempted to think that the computer literally understands
chess, then remember that you can use a variation on the Chinese
Room Argument against the chess-playing computer. Let us call it
the Chess Room Argument. Imagine that a man who does not know how
to play chess is locked inside a room, and there he is given a set
of, to him, meaningless symbols. Unknown to him, these represent
positions on a chessboard. He looks up in a book what he is supposed
to do, and he passes back more meaningless symbols. We can suppose
that if the rule book, i.e., the program, is skillfully written,
he will win chess games. People outside the room will say, “This
man understands chess, and in fact he is a good chess player because
he wins.” They will be totally mistaken. The man understands
nothing of chess; he is just a computer. And the point of the parable
is this: If the man does not understand chess on the basis of running
the chess-playing program, neither does any other computer solely
on that basis.
The Chinese Room Argument shows that just carrying out the steps
in a computer program is not by itself sufficient to guarantee cognition.
Imagine that I, who do not know Chinese, am locked in a room with
a computer program for answering written questions, put to me in
Chinese, by providing Chinese symbols as answers. If properly programmed
I will provide answers indistinguishable from those of native Chinese
speakers, but I still do not understand Chinese. And if I don’t,
neither does any other computer solely on the basis of carrying
out the program. See my “Minds, Brains and Programs,”
Behavioral and Brain Sciences, Vol. 3 (1980) for the first
statement of this argument. See also “The Myth of the Computer,”
published in the New York Review of Books, April 29, 1982.
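For readers who want the structure of the point in miniature, the following toy sketch (mine, not Searle's, written in Python) does nothing but match incoming symbol strings against a rule book and emit the listed reply; the Chinese entries are arbitrary placeholders, not a real conversational program.
    # A toy "rule book" responder: pure symbol matching, no understanding anywhere.
    # The entries are invented placeholders for illustration only.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def room_reply(symbols_in: str) -> str:
        # Nothing in this function depends on what the symbols mean.
        return RULE_BOOK.get(symbols_in, "对不起，我不明白。")

    print(room_reply("你好吗？"))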
The ingenuity of the hardware engineers and the programmers who
programmed Deep Blue was manifested in this: from the point of view
of mathematical game theory, chess is a trivial game because each
side has perfect information. You know how many pieces you and your
opponent have and what their locations are. You can theoretically
know all of your possible moves and all of your opponent’s
possible countermoves. It is in principle a solvable game. The interest
of chess for human beings and the problem for programmers arises
from what is called a combinatorial explosion. In chess at any given
point there is a finite number of possible moves. Suppose I am white
and I have, say, eight possible moves. For each of these moves there
is a set of possible countermoves by black and to them a set of
possible moves by white, and so on up exponentially. After a few
levels the number of possible positions on the board is astronomical
and no human being can calculate them all. Indeed, after a few more
moves the numbers are so huge that no existing computer can calculate
them. At most a good chess player might calculate a few hundred.
This is where Deep Blue had the advantage. Because of the increased
computational power of the machinery, it could examine 200 million
positions per second; so, according to the press accounts at the
time, the programmers could program the machine to follow out the
possibilities to twelve levels: first white, then black, then white,
and so on, so that the number of positions examined grows roughly as
the branching factor raised to the twelfth power. For some positions the machine could
calculate as far as forty moves ahead. Where the human player can
imagine a few hundred possible positions, the computer can scan
billions.
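The explosion is easy to see numerically. Here is a short sketch using the text's example of eight possible moves per position (real middlegame positions typically offer more, which only makes the blow-up worse):
    # Growth of the game tree with search depth, assuming 8 moves per position.
    branching = 8
    for depth in (2, 4, 8, 12):
        print(f"{depth:>2} plies: {branching ** depth:,} positions")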
But what does it do when it has finished scanning all these positions?
Here is where the programmers have to exercise some judgment. They
have to design a “scoring function.” The machine attaches
a numerical value to each of the final positions of each of the
possible paths that developed in response to each of the initial
moves. So for example a situation in which I lose my queen has a
low number, a position in which I take your queen has a high number.
Other factors are taken into consideration in determining the number:
the mobility of the pieces (how many moves are available), the position
of the pawns, etc. IBM experts are very secretive about the details
of their scoring function, but they claim to use about 8,000 factors.
Then, once the machine has assigned a number to all the final positions,
it assigns numbers to the earlier positions leading to the final
positions depending on the numbers of those final positions. The
machine then selects the symbol that represents the move that leads
to the highest number. It is that simple and that mechanical, though
it involves a lot of symbol shuffling to get there. The real competition
was not between Kasparov and the machine, but between Kasparov and
a team of engineers and programmers.
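The back-up procedure described here is, in essence, minimax search over the scored positions. The sketch below is a generic illustration of that procedure, not IBM's code: legal_moves, apply_move, and score are hypothetical stand-ins for a real engine's move generator and for Deep Blue's secret scoring function.
    # Generic minimax back-up of scores; an illustrative sketch, not Deep Blue's
    # implementation. `legal_moves`, `apply_move`, and `score` are hypothetical.
    def minimax(position, depth, maximizing, legal_moves, apply_move, score):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return score(position), None          # number attached to a final position
        best_value, best_move = None, None
        for move in moves:
            value, _ = minimax(apply_move(position, move), depth - 1,
                               not maximizing, legal_moves, apply_move, score)
            better = best_value is None or (value > best_value if maximizing
                                            else value < best_value)
            if better:
                best_value, best_move = value, move   # back the number up one level
        return best_value, best_move                  # move leading to the best number
The machine's "choice" is then simply the move attached to the best backed-up number; everything else is bookkeeping.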
Kurzweil assures us that Deep Blue was actually thinking. Indeed
he suggests that it was doing more thinking than Kasparov. But what
was it thinking about? Certainly not about chess, because it had
no way of knowing that these symbols represent chess positions.
Was it perhaps thinking about numbers? Even that is not true, because
it had no way of knowing that the symbols assigned represented numerical
values. The symbols in the computer mean nothing at all to the computer.
They mean something to us because we have built and programmed the
computer so that it can manipulate symbols in a way that is meaningful
to us. In this case we are using the computer symbols to represent
chess positions and chess moves.
Now, with all this in mind, what psychological or philosophical
significance should we attach to Deep Blue? It is, of course, a
wonderful hardware and software achievement of the engineers and
the programmers, but as far as its relevance to human psychology
is concerned, it seems to me of no interest whatsoever. Its relevance
is similar to that of a pocket calculator for understanding human
thought processes when doing arithmetic. I was frequently asked
by reporters at the time of the triumph of Deep Blue if I did not
think that this was somehow a blow to human dignity. I think it
is nothing of the sort. Any pocket calculator can beat any human
mathematician at arithmetic. Is this a blow to human dignity? No,
it is rather a credit to the ingenuity of programmers and engineers.
It is simply a result of the fact that we have a technology that
enables us to build tools to do things that we cannot do, or cannot
do as well or as fast, without the tools.
Kurzweil also predicts that the fact that a machine can beat a
human being in chess will lead people to say that chess was not
really important anyway. But I do not see why. Like all games, chess
is built around the human brain and body and its various capacities
and limitations. The fact that Deep Blue can go through a series
of electrical processes that we can interpret as “beating the
world champion at chess” is no more significant for human chess
playing than it would be significant for human football playing
if we built a steel robot which could carry the ball in a way that
made it impossible for the robot to be tackled by human beings.
The Deep Blue chess player is as irrelevant to human concerns as
is the Deep Blue running back.
Some Conceptual Confusions
I believe that Kurzweil’s book exhibits a series of conceptual
confusions. These are not all Kurzweil’s fault; they are common
to the prevailing culture of information technology, and especially
to the subculture of artificial intelligence, of which he is a part.
Much of the confusion in this entire field derives from the fact
that people on both sides of the debate tend to suppose that what
is at issue is the success or failure of computational simulations.
Are human beings “superior” to computers or are computers
superior? That is not the point at issue at all. The question is
not whether computers can succeed at doing this or that. For the
sake of argument, I am just going to assume that everything Kurzweil
says about the increase in computational power is true. I will assume
that computers both can and will do everything he says they can
and will do, that there is no question about the capacity of human
designers and programmers to build ever faster and more powerful
pieces of computational machinery. My point is that to the issues
that really concern us about human consciousness and cognition,
these successes are irrelevant.
What, then, is at issue? Kurzweil’s book exhibits two sets
of confusions, which I shall consider in order.
(1) He confuses the computer simulation of a phenomenon with a
duplication or re-creation of that phenomenon. This comes out most
obviously in the case of consciousness. Anybody who is seriously
considering having his “program and database” downloaded
onto some hardware ought to wonder whether or not the resulting
hardware is going to be conscious. Kurzweil is aware of this problem,
and he keeps coming back to it at various points in his book. But
his attempt to solve the problem can only be said to be plaintive.
He does not claim to know that machines will be conscious, but he
insists that they will claim to be conscious, and will continue
to engage in discussions about whether they are conscious, and consequently
their claims will be largely accepted. People will eventually just
come to accept without question that machines are conscious.
But this misses the point. I can already program my computer so
that it says that it is conscious—i.e., it prints out “I
am conscious”—and a good programmer can even program it
so that it will carry on a rudimentary argument to the effect that
it is conscious. But that has nothing to do with whether or not
it really is conscious. Actual human brains cause consciousness
by a series of specific neurobiological processes in the brain.
What the computer does is a simulation of these processes, a symbolic
model of the processes. But the computer simulation of brain processes
that produce consciousness stands to real consciousness as the computer
simulation of the stomach processes that produce digestion stands
to real digestion. You do not cause digestion by doing a computer
simulation of digestion. Nobody thinks that if we had the perfect
computer simulation running on the computer, we could stuff a pizza
into the computer and it would thereby digest it. It is the same
mistake to suppose that when a computer simulates the processes
of a conscious brain it is thereby conscious.
The computer, as we saw in our discussion of the chess-playing
program, succeeds by manipulating formal symbols. The symbols themselves
are quite meaningless; they have only the meaning we have attached
to them. The computer knows nothing of this, it just shuffles the
symbols. And those symbols are not by themselves sufficient to guarantee
equivalent causal powers to actual biological machinery like human
stomachs and human brains.
Kurzweil points out that not all computers manipulate symbols.
Some recent machines simulate the brain by using networks of parallel
processors called “neural nets,” which try to imitate
certain features of the brain. But that is no help. We know from
the Church-Turing Thesis, a mathematical result, that any computation
that can be carried out on a neural net can be carried out on a
symbol-manipulating machine. The neural net gives no increase in
computational power. And simulation is still not duplication.
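A minimal sketch makes the point about neural nets concrete: a single "neuron" update is just ordinary arithmetic on numbers, the kind of symbol manipulation any conventional machine carries out (the weights and inputs below are arbitrary illustrative values).
    # One "neural net" unit written out as the plain arithmetic it is.
    import math

    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias   # weighted sum
        return 1.0 / (1.0 + math.exp(-total))                        # sigmoid output

    print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))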
But, someone is bound to ask, can you prove that the computer is
not conscious? The answer to this question is: Of course not. I
cannot prove that the computer is not conscious, any more than I
can prove that the chair I am sitting on is not conscious. But that
is not the point. It is out of the question, for purely neurobiological
reasons, to suppose that the chair or the computer is conscious.
The point for the present discussion is that the computer is not
designed to be conscious. It is designed to manipulate symbols in
a way that carries out the steps in an algorithm. It is not designed
to duplicate the actual causal powers of the brain to cause consciousness.
It is designed to enable us to simulate any process that we can
describe precisely.
Kurzweil is aware of this objection and tries to meet it with a
slippery-slope argument: We already have brain implants, such as
cochlear implants in the auditory system, that can duplicate and
not merely simulate certain brain functions. What is to prevent
us from a gradual replacement of all the brain anatomy that would
preserve and not merely simulate our consciousness and the rest
of our mental life? In answer to this, I would point out that he
is now abandoning the main thesis of the book, which is that what
is important for consciousness and other mental functions is entirely
a matter of computation. In his words, we will become software,
not hardware.
I believe that there is no objection in principle to constructing
an artificial hardware system that would duplicate the powers of
the brain to cause consciousness using some chemistry different
from neurons. But to produce consciousness any such system would
have to duplicate the actual causal powers of the brain. And we
know, from the Chinese Room Argument, that computation by itself
is insufficient to guarantee any such causal powers, because computation
is defined entirely in terms of the manipulation of abstract formal
symbols.
(2) The confusion between simulation and duplication is a symptom
of an even deeper confusion in Kurzweil’s book, and that is
between those features of the world that exist intrinsically, or
independently of human observation and conscious attitudes, and
those features of the world that are dependent on human attitudes—the
distinction, in short, between features that are observer-independent
and those that are observer-relative.
Examples of observer-independent features are the sorts of things
discussed in physics and chemistry. Molecules, and mountains, and
tectonic plates, as well as force, mass, and gravitational attraction,
are all observer-independent. Since relativity theory we recognize
that some of their limits are fixed by reference to other systems,
but none of them are observer-dependent in the sense of requiring
the thoughts of conscious agents for their existence. On the other
hand, such features of the world as money, property, marriage, government,
and football games require conscious observers and agents in order
for them to exist as such. A piece of paper has intrinsic or observer-independent
chemical properties, but a piece of paper is a dollar bill only
in a way that is observer-dependent or observer-relative.
In Kurzweil’s book many of his crucial notions oscillate between
having a sense that is observer-independent, and another sense that
is observer-relative. The two most important notions in the book
are intelligence and computation, and both of these exhibit precisely
this ambiguity. Take intelligence first.
In a psychological, observer-independent sense, I am more intelligent
than my dog, because I can have certain sorts of mental processes
that he cannot have, and I can use these mental capacities to solve
problems that he cannot solve. But in this psychological sense of
intelligence, wristwatches, pocket calculators, computers, and cars
are not candidates for intelligence, because they have no mental
life whatever.
In an observer-relative sense, we can indeed say that lots of machines
are more intelligent than human beings because we have designed
the machines in such a way as to help us solve problems that we
cannot solve, or cannot solve as efficiently, in an unaided fashion.
Chess-playing machines and pocket calculators are good examples.
Is the chess-playing machine really more intelligent at chess than
Kasparov? Is my pocket calculator more intelligent than I at arithmetic?
Well, in an intrinsic or observer-independent sense, of course not,
the machine has no intelligence whatever, it is just an electronic
circuit that we have designed, and can ourselves operate, for certain
purposes. But in the metaphorical or observer-relative sense, it
is perfectly legitimate to say that the chess-playing machine has
more intelligence, because it can produce better results. And the
same can be said for the pocket calculator.
There is nothing wrong with using the word “intelligence”
in both senses, provided you understand the difference between the
observer-relative and the observer-independent. The difficulty is
that this word has been used as if it were a scientific term, with
a scientifically precise meaning. Indeed, many of the exaggerated
claims made on behalf of “artificial intelligence” have
been based on this systematic confusion between observer-independent,
psychologically relevant intelligence and metaphorical, observer-relative,
psychologically irrelevant ascriptions of intelligence. There is
nothing wrong with the metaphor as such; the only mistake is to
think that it is a scientifically precise and unambiguous term.
A better term than “artificial intelligence” would have
been “simulated cognition.”
Exactly the same confusion occurs over the notion of “computation.”
There is a literal sense in which human beings are computers because,
for example, we can compute 2+2=4. But when we design a piece of
machinery to carry out that computation, the computation 2+2=4 exists
only relative to our assignment of a computational interpretation
to the machine. Intrinsically, the machine is just an electronic
circuit with very rapid changes between such things as voltage levels.
The machine knows nothing about arithmetic just as it knows nothing
about chess. And it knows nothing about computation either, because
it knows nothing at all. We use the machinery to compute with, but
that does not mean that the computation is intrinsic to the physics
of the machinery. The computation is observer-relative, or to put
it more traditionally, “in the eye of the beholder.”
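A small illustration of the point (the particular bit pattern is arbitrary): the very same 32 bits can be read as an integer, as four characters, or as a floating-point number, depending entirely on the interpretation we bring to them.
    # The same 32 bits under three interpretations that we, not the circuit, supply.
    import struct

    bits = 0x41424344
    print(bits)                                             # read as an unsigned integer
    print(struct.pack(">I", bits).decode("ascii"))          # read as the characters "ABCD"
    print(struct.unpack(">f", struct.pack(">I", bits))[0])  # read as an IEEE-754 float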
This distinction is fatal to Kurzweil’s entire argument, because
it rests on the assumption that the main thing humans do in their
lives is compute. Hence, on his view, if—thanks to Moore’s
Law—we can create machines that can compute better than humans,
we have equaled and surpassed humans in all that is distinctively
human. But in fact humans do rather little that is literally computing.
Very little of our time is spent working out algorithms to figure
out answers to questions. Some brain processes can be usefully described
as if they were computational, but that is observer-relative. That
is like the attribution of computation to commercial machinery,
in that it requires an outside observer or interpreter.
Another result of this confusion is a failure on Kurzweil’s
part to appreciate the significance of current technology. He describes
the use of strands of DNA to solve the Traveling Salesman Problem—the
problem of finding the shortest route that takes a salesman through
each city exactly once—as if it were the same sort of
thing as the use, in some cases, of neural implants to cure Parkinson’s
Disease. But the two cases are completely different. The cure for
Parkinson’s Disease is an actual, observer-independent causal
effect on the human brain. But the sense in which the DNA strands
stand for or represent different cities is entirely observer-relative.
The DNA knows nothing about cities.
It is worth pointing out here that when Alan Turing first invented
the idea of the computer, the word “computer” meant “person
who computes.” “Computer” was like “runner”
or “skier.” But as commercial computers have become such
an essential part of our lives, the word “computer” has
shifted in meaning to mean “machinery designed by us to use
for computing,” and, for all I know, we may go through a change
of meaning so that people will be said to be computers only in a
metaphorical sense. It does not matter as long as you keep the conceptual
distinction clear between what is intrinsically going on in the
machinery, however you want to describe it, and what is going on
in the conscious thought processes of human beings. Kurzweil’s
book fails throughout to perceive these distinctions.
We are now in the midst of a technological revolution that is full
of surprises. No one thirty years ago was aware that one day household
computers would become as common as dishwashers. And those of us
who used the old Arpanet of twenty years ago had no idea that it
would evolve into the Internet. This revolution cries out for interpretation
and explanation. Computation and information processing are both
harder to understand and more subtle and pervasive in their effects
on civilization than were earlier technological revolutions such
as those of the automobile and television. The two worst things
that experts can do when explaining this technology to the general
public are first to give the readers the impression that they understand
something they do not understand, and second to give the impression
that a theory has been established as true when it has not.
Kurzweil’s book suffers from both of these defects. The title
of the book is The Age of Spiritual Machines. By “spiritual,”
Kurzweil means conscious, and he says so explicitly. The implications
are that if you read his book you will come to understand the machines
and that we have overwhelming evidence that they now are or will
shortly be conscious. Both of these implications are false. You
will not understand computing machinery from reading Kurzweil’s
book. There is no sustained effort to explain what a computer is
and how it works. Indeed one of the most fundamental ideas in the
theory of computation, the Church-Turing Thesis, is stated in a
way which is false.
Here is what Kurzweil says:
This thesis says that all problems that a human being
can solve can be reduced to a set of algorithms, supporting the
idea that machine intelligence and human intelligence are essentially
equivalent.
That definition is simply wrong. The actual thesis comes in different
formulations (Church’s is different from Turing’s, for
example), but the basic idea is that any problem that has an algorithmic
solution can be solved on a Turing machine, a machine that manipulates
only two kinds of symbols, the famous zeroes and ones.
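To show the kind of machine the thesis refers to, here is a minimal Turing-machine simulator in Python; the transition table is a toy example of my own (it flips 0s and 1s and halts at the first blank), chosen only to exhibit the mechanism of reading, writing, and moving over symbols.
    # A minimal Turing machine: a finite control reading and writing tape symbols.
    def run_turing_machine(tape, transitions, state="start", blank="_"):
        tape = list(tape)
        head = 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            write, move, state = transitions[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Toy transition table: flip every bit, halt on the first blank square.
    FLIP = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("0110", FLIP))   # prints "1001_"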
Where consciousness is concerned, the weaknesses of the book are
even more disquieting. One of its main themes, in some ways the
main theme, is that increased computational power gives us good,
indeed overwhelming, reason to think we are moving into an era when
computing machinery artifacts, machines made by us, will be conscious,
“the age of spiritual machines.” But from everything we
know about the brain, and everything we know about computation,
increased computational power in a machine gives us no reason whatever
to suppose that the machine is duplicating the specific neurobiological
powers of the brain to create consciousness. Increased computer
power by itself moves us not one bit closer to creating a conscious
machine. It is just irrelevant.
Suppose you took seriously the project of building a conscious
machine. How would you go about it? The brain is a machine, a biological
machine to be sure, but a machine all the same. So the first step
is to figure out how the brain does it and then build an artificial
machine that has an equally effective mechanism for causing consciousness.
These are the sorts of steps by which we built an artificial heart.
The problem is that we have very little idea of how the brain does
it. Until we do, we are most unlikely to produce consciousness artificially
in nonbiological materials. When it comes to understanding consciousness,
ours is not the age of spiritual machines. It is more like the age
of neurobiological infancy, and in our struggles to get a mature
science of the brain, Moore’s Law provides no answers.
A Brief Recapitulation
In response to my initial review of Kurzweil’s book in The
New York Review of Books, Kurzweil wrote both a letter to the editor
and a more extended rebuttal on his website. He claims that I presented
a “distorted caricature” of his book, but he provided
no evidence of any distortion. In fact I tried very hard to be scrupulously
accurate both in reporting his claims and in conveying the general
tone of futuristic techno-enthusiasm that pervades the book. So
at the risk of pedantry, let’s recapitulate briefly the theses
in his book that I found most striking:
(1) Kurzweil thinks that within a few decades we will be able to
download our minds onto computer hardware. We will continue to exist
as computer software. “We will be software, not hardware”
(p. 129, his italics). And “the essence of our identity will
switch to the permanence of our software” (p.129).
(2) According to him, we will be able to rebuild our bodies, cell
by cell, with different and better materials using “nanotechnology.”
Eventually, “there won’t be a clear difference between
humans and robots” (p.148).
(3) We will be immortal, not only because we will be made of better
materials, but because even if we were destroyed we will keep copies
of our programs and databases in storage and can be reconstructed
at will. “Our immortality will be a matter of being sufficiently
careful to make frequent back-ups,” he says, adding the further
caution: “If we’re careless about this, we’ll have
to load an old backup copy and be doomed to repeat our recent past”
(p. 129). (What is this supposed to mean? That we will be doomed
to repeat our recent car accident and spring vacation?)
(4) We will have overwhelming evidence that computers are conscious.
Indeed there will be “no longer any clear distinction between
humans and computers” (p. 280).
(5) There will be many advantages to this new existence, but one
he stresses is that virtual sex will soon be a “viable competitor
to the real thing,” affording “sensations that are more
intense and pleasurable than conventional sex” (p. 147).
Frankly, had I read this as a summary of some author’s claims,
I might think it must be a “distorted caricature,” but
Kurzweil did in fact make each of these claims, as I show by extensive
quotation. In his letter he did not challenge me on any of these
central points. He conceded by his silence that my understanding
of him on these central issues is correct. So where is the “distorted
caricature?”
I then point out that his arguments are inadequate to establish
any of these spectacular conclusions. They suffer from a persistent
confusion between simulating a cognitive process and duplicating
it, and even worse confusion between the observer-relative, in-the-eye-of-the-beholder
sense of concepts like intelligence, thinking, etc., and the observer-independent
intrinsic sense.
What has he to say in response? Well, about the main argument he
says nothing. About the distinction between simulation and duplication,
he says he is describing neither simulations of mental powers nor
re-creations of the real thing, but “functionally equivalent
re-creations.” But the notion “functionally equivalent”
is ambiguous precisely between simulation and duplication. What
exactly functions to do exactly what? Does the computer simulation
function to enable the system to have external behavior, which is
as if it were conscious, or does it function to actually cause internal
conscious states? For example, my pocket calculator is “functionally
equivalent” to (indeed better than) me in producing answers
to arithmetic problems, but it is not thereby functionally equivalent
to me in producing the conscious thought processes that go with
solving arithmetic problems. Kurzweil’s argument about consciousness
is based on the assumption that the external behavior is
overwhelming evidence for the presence of the internal conscious
states. He has no answer to my objection that once you know that
the computer works by shuffling symbols, its behavior is no evidence
at all for consciousness. The notion of functional equivalence does
not overcome the distinction between simulation and duplication;
it just disguises it for one step.
In his letter he told us he is interested in doing “reverse
engineering” to figure out how the brain works. But in the
book there is virtually nothing about the actual working of the
brain and how the specific electro-chemical properties of the thalamo-cortical
system could produce consciousness. His attention rather is on the
computational advantages of superior hardware.
On the subject of consciousness there actually is a “distorted
caricature,” but it is Kurzweil’s distorted caricature
of my arguments. He said, “Searle would have us believe that
you can’t be conscious if you don’t squirt neurotransmitters
(or some other specific biological process).” Here is what
I actually wrote: “I believe there is no objection in principle
to constructing an artificial hardware system that would duplicate
the causal powers of the brain to cause consciousness using some
chemistry different from neurons.” Not much about the necessity
of squirting neurotransmitters there. The point I made, and repeat
again, is that because we know that brains cause consciousness with
specific biological mechanisms, any nonbiological mechanism has
to share with brains the causal power to do it. An artificial brain
might succeed by using something other than carbon-based chemistry,
but just shuffling symbols is not enough, by itself, to guarantee
those powers. Once again, he offers no answer to this argument.
He challenges my Chinese Room Argument, but he seriously misrepresents
it. The argument is not the circular claim that I do not understand
Chinese because I am just a computer, but rather that I don’t,
as a matter of fact, understand Chinese and could not acquire an
understanding by carrying out a computer program. There is nothing
circular about that. His chief counterclaim is that the man is only
the central processing unit, not the whole computer. But this misses
the point of the argument. The reason the man does not understand
Chinese is that he does not have any way to get from the symbols,
the syntax, to what the symbols mean, the semantics. But if the
man cannot get the semantics from the syntax alone, neither can
the whole computer. It is, by the way, a misunderstanding on his
part to think that I am claiming that a man could actually carry
out the billions of steps necessary to carry out a whole program.
The point of the example is to illustrate the fact that the symbol
manipulations alone, even billions of them, are not constitutive
of meaning or thought content, conscious or unconscious. To repeat,
the syntax of the implemented program is not semantics.
Concerning other points in his letter: He says that I am wrong
to think that he attributes superior thinking to Deep Blue. But
here is what he wrote in response to the charge that Deep Blue just
does number crunching and not thinking: “One could say that
the opposite is the case, that Deep Blue was indeed thinking through
the implications of each move and countermove, and that it was Kasparov
who did not have the time to think very much during the tournament”
(p. 290).
He also says that on his view Moore’s Law is only a part of
the story. Quite so. In my review I mention other points he makes
such as, importantly, nanotechnology.
I cannot recall reading a book in which there is such a huge gulf
between the spectacular claims advanced and the weakness of the
arguments given in their support. Kurzweil promises us our minds
downloaded onto decent hardware, new bodies made of better stuff,
evolution without DNA, better sex without the inconvenience of actual
partners, computers that convince us that they are conscious, and
above all personal immortality. The main theme of my critique is
that the existing technological advances that are supposed to provide
evidence in support of these predictions, wonderful though they
are, offer no support whatever for these spectacular conclusions.
In every case the arguments are based on conceptual confusions.
Increased computational power by itself is no evidence whatever
for consciousness in computers.
Copyright © 2002 by the Discovery Institute. Used with permission.
Mind·X Discussion About This Article:
Re: Searle is stuck in the past
yhuang,
You missed my point, which is that the computer's programmer made all the decisions about strategy and game analysis. All the computer did was carry out the programmer's strategy, which was a brute force approach to examining every possible move on the board. The chess master, on the other hand, devised his own strategy and made his own decisions and, until that last game, consistently came up with a winning game plan.
What you guys seem to me to be doing is giving the computer credit for the work of the programmer and saying the computer came up with the strategy that beat Kasparov. There's more to the game of chess than running through an algorithm. Sure, Kasparov used algorithms. But his was the mind that either chose or devised them. For Deep Blue, that's what the programmer did. So who really beat Kasparov? Was it the strategist who devised the method and planned the strategy or the machine that carried it out?
To my mind, without the strategist, the machine wouldn't have had a chance. The programmer was making changes to his program right up to the end of the game. So who was really doing the thinking there? IMHO it was a person who really beat Kasparov, not a machine.
Grant
Re: Searle is stuck in the past
To grantc4@hotmail.com:
Thanks for your quick reply. I'll try to convince you of my point of view.
In short I think what you are saying is that there is a human programmer behind the computer's strategy, but there isn't a programmer behind the chess master. So therefore the computer doesn't think and the chess master does. Am I correct?
"Sure, Kasparov used algorithms. But his was the mind that either chose or devised them. For Deep Blue, that's what the programmer did. "
If it is simply the case that the guy who programmed a machine is the ultimate thinker, then if a computer programmed another computer (it's possible even now in a limited way), then that makes the parent computer a "thinker"?
As to the "brute force" approach to examining every possible move (and comparing it with historical moves), it is a legitamate strategy used by chess masters as well. I fail to see how it is celebrated when a chessmaster does it but not when a computer does it.
"You missed my point, which is that the computer's programmer made all the decisions about strategy and game analysis."
I very much doubt the human programmers behind Deep Blue are so good that they can replicate what Deep Blue can do with chess. They probably can't even predict Deep Blue's next move. It would be kind of like giving a football coach (the programmer) all the credit for a team victory (Deep Blue). Sure, the coach came up with a strategy, but the teammates have to implement it.
To my mind, if the prerequisite of thinking is merely being able to program another computer (biological or otherwise), then some computers are already "thinking".
I don't think that the only way to think is the way people do it. It would be a bit unfair. Like saying a car needs to run on legs or else it won't really be moving. If the end result between a thinking person and a machine is the same, we might as well consider a machine to be thinking no matter how it does it...
Respectfully yours
YJ
Re: Chess vs. Language
the reason I am not impressed by the Chinese Room is that it really boils down to (once you eliminate all of the excess baggage): "I
don't think machines can be conscious because I just don't think they can be." This is not an argument, it is circular and an inappropriate extension of a naive intuition which is hard-wired into our neurology but nevertheless
misleading (I can say something about the hard-wiring later).
It is a circular argument. If you don't believe machines can be conscious, then of course a Chinese Room cannot be conscious. But you are assuming the conclusion! Formally the Chinese Room is equivalent to a computer, yet he
constructs an elaborate example including a person in it just to make the intellectual subterfuge even more complete. I have seen other examples of this; making a giant brain out of beer cans and valves, etc., and asserting
that a giant complex of beer cans and valves could not possibly be conscious, etc. The beer can brain argument doesn't have the little
homunculus inside of it carrying out the operations, but it is the same stupid idea: blow up a computer so it is really really big and you can see the insides of it and whamo: must not be conscious. Yet: why? No answer to this, it is just "obvious". But again --- only obvious if you were convinced of the result to begin with. This is philosophy? It is utterly ludicrous. I mean, any field that could take this sort of absurd argument seriously cannot possibly be worth much.
To the extent the Chinese Room is aware, it is obviously not the human operator that would be aware. It would be the room itself, as a system.
But again, the sad thing about analytical philosophy, even the "scientific" sort, is that, ironically, they don't really even understand the
implications of philosophy of science: i.e., Kuhn, Lakatos, etc. Any philosophy that cannot come to terms with Kuhn or Lakatos is philosophy that I don't care to bother with.
Re: Chess vs. Language
Of course the operator of the room does not
understand Chinese. But then, the neurotransmitters and neurons in my brain
do not understand English, either. There is absolutely nothing whatsoever
surprising or strange about that.
Bateson's argument goes like this: "awareness" does not arise as a _state_ of a material substance, but rather it arises because of a causal
arrangement of a feedback system, and is therefore fundamentally holistic.
I.e., it is not localized in space and time, and yet it is not separate from physical process. Bateson's model basically says that consciousness and mental phenomena are based in a material substrate but are not themselves material --- rather, they are based on information relationships between successive material states when organized in certain kinds of feedback loops
of a certain kind of complexity.
I like this view because it is both monist and yet it acknowledges and to some extent explains why we have this impression of mental phenomena as not being physical in some sense. Bateson says yes, mental phenomena are not
physical.
For example, suppose I had a room full of billions of petri dishes, each one
of them containing one live neuron. Now, let's say I took a snapshot of my brain, somehow, and determined the precise state of every one of my neurons at time t. Then I electrochemically stimulated each one of the neurons in these
dishes to be in precisely the same state that the neurons in my brain are in at time t. Would the room full of disconnected neurons in petri dishes
then, for a split second, be "in the same mental state" as my brain at time
t?
No. Because mental state, as Bateson convincingly argues, depends upon
not just physical state, but comparison of physical state from one moment
to the next: difference. This difference can only be realized when there
are feedback loops. Therefore: MENTAL STATES ARE NOT PHYSICAL STATES, but
rather require relationships between physical states in feedback loops.
A beautiful viewpoint which really gets far less airplay so to speak than
it deserves.
Re: Chapter 2: I Married a Computer
You already know about Searle's room, now I want to tell you about Clark's Chinese Room. You are a professor of Chinese Literature and are in a room with me and the great Chinese Philosopher and Poet Laotse. Laotse writes something in his native language on a paper and hands it to me. I walk 10 feet and give it to you. You read the paper and are impressed with the wisdom of the message and the beauty of its language. Now I tell you that I don't know a word of Chinese, can you find any deep implications from that fact? I believe Clark's Chinese Room is just as profound as Searle's Chinese Room.
Not very.
All Searle did was come up with a wildly impractical model (the Chinese Room) of an intelligence in which a human being happens to play a trivial part. Consider what's in Searle's model:
1) An incredible book, larger than the observable universe even if the writing was microfilm sized.
2) An equally large or larger book of blank paper.
3) A pen, several trillion galaxies of ink, and oh yes I almost forgot, your little man.
Searle claims to have proven something profound when he shows that a trivial part does not have all the properties that the whole system does. In his example the man could be replaced with a simple machine made with a few vacuum tubes or even mechanical relays, and it would do a better job. It's like saying the synaptic transmitter dopamine does not understand how to solve differential equations, dopamine is a small part of the human brain thus the human brain does not understand how to solve differential equations.
Yes, it does seem strange that consciousness is somehow hanging around the room as a whole, even if slowed down by a factor of a billion trillion or so, but no stranger than the fact that consciousness is hanging around 4 pounds of gray goo in our head, and yet we know that it does. It's time to just face the fact that consciousness is a property matter has when it is organized in certain complex ways.
John K Clark jonkc@att.net
Re: Chapter 2: I Married a Computer
I agree that "consciousness is a property matter has when it is organized in certain complex ways", but I think we're still a long way from truly understanding what those "complex ways" are. I think Ray Kurzweil's visions will
eventually come to fruition, but not until we really understand how the brain works and what, from an objective scientific standpoint, consciousness really is. As a programmer, I cannot believe that all my thoughts and especially my *emotions* (the most important part of consciousness, IMHO) can be reduced to function calls, for-loops, and if-then-else blocks. We have to figure out how the brain/consciousness functions and then build a new kind of "computer" to emulate it (Quantum computers, perhaps? I dunno...). Trying to shoehorn our minds into a conventional computational model is, I think, doomed to failure.
--
Dave
Re: Chapter 2: I Married a Computer
> > As a programmer, I cannot believe that all my thoughts and especially my *emotions* (the most important part of consciousness, IMHO) can be reduced to function calls, for-loops, and if-then-else blocks.
> Well, at least we can emulate the physics/chemistry with C++ or Java. Can't we?
Yeah. If we want, we can emulate all of the physics/chemistry with a sufficiently large abacus. A bit slow, but so what? It's the computation that counts, right?
But if QM effects play a "significant role" in living systems, no less conscious sentience, the abacus may not suffice.
Cheers! ____tony____
Re: Chapter 2: I Married a Computer
Tomaz,
The qubit-processing quantum computer (TransTuring, if you like), although exploiting QM indeterminacy to enhance efficiency and parallelism, is still effectively executing "algorithm". As long as it is limited to algorithm execution, there is nothing that a qubit-processing computer can do that an ordinary Turing machine cannot, given enough time to execute.
Just because a great deal of intelligent behavior, and apparent mental processing, can be modelled algorithmically with a computer/TuringMachine, it does not follow that algorithmic processing is all that is significant to "mind", no less the entertainment of "sense of consciousness."
I do not posit that the quality of consciousness cannot be manifest in non-biological substrate. Indeed, I see no reason that consciousness cannot manifest in silico, or other substrate ... but that is not the same as saying that "the substrate matters not at all".
*** We are manifestations of a physics that can be modelled, and not merely manifestations of a model. ***
One can posit a perfectly operational Turing machine, manifest entirely with paper and pencils. A largely un-intelligent robot reading, writing, and erasing 0/1 symbols, and thus executing the given algorithm. The robot is just a "dumb processor" moving the symbols about. The symbols it reads and writes are both its input/output data, as well as the very "algorithm" it is executing. Thus, if the algorithm is sufficiently complex and self-referential, this "system" can effectively re-write and improve upon the very algorithm it is executing. If this system is enclosed in a box, and operated "fast enough", it could well appear quite intelligent to us as outside observers, and might even convince us of a claim to self-aware consciousness. Indeed, lacking the ability to "peer inside", we would have no right to deny its claim.
But this is no guarantee that it would be "conscious" the way that you or I use the term, as a "waking awareness".
Our evolved neural-structures exhibit something that is akin to "computation", and so there is certainly something to be gained in exploring and emulating that functionality. But there may also be what I call "proximity effects" between neurons, ancillary to the "wiring diagram". These effects (EM, QM, other) may also contribute to "sense of consciousness", and not every substrate able to support "algorithm" may be sufficient to support these other physical effects. I suspect that the pencil-and-paper turing machine may be insufficient in this regard.
People tend to notice that the brain supports "electrical activity", and that logic circuits also exercise electrical activity, see both as doing "algorithms", and make the leap that mind is algorithm. Perhaps the "physics" itself is important.
I view mind as a purely physical manifestation, but requiring perhaps "more" of the physics than the behaviors of arbitrary components able to manifest "algorithm".
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
Thomas,
Modeling math is as good as any math. Modeling fire is not as good as fire.
We, both our bodies and the consciousness they support or generate, are still manifestations of physics, not simply of physics concepts.
The physics can be modeled in detail (say). Ok, that means we are a manifestation of a physics that can be modeled. That does not mean we are models, or manifestations of a model.
We might model ourselves to any extent, and the "read-outs" (so to speak) might indicate that our "model selves" behave like real selves. That might mean that they "act conscious" (the same way that properly modeled water might act like ice or steam.) But that does not imply that the "modeled selves" are actually experiencing consciousness as we have come to "feel it".
Modeling "intelligence" (rational thought) is much simpler, and might be done to such a degree that the AI far surpasses our abilities to be creative, etc. But the "result" of proper LOGICAL thought, whether in our brain or an AI, is the same; you arrive at a "right answer". There, how the "model acts" is no different than how the bio-rational mind acts, in terms of a result.
But consciousness is by its nature a pure subjective experience. You cannot know if or when something else "has it", only whether it "acts" like it has it.
For an AI that merely needs to be good at what it does, one might say "who cares whether it is actually conscious, or merely acts conscious". No problem.
But if we expect our "sense of awareness", qualia-wise, also to be supported in a foreign medium, we would like to ensure that it will be so, and not merely a modeling of this experience.
Of course, whatever new substrate might support our kind of consciousness (or an improved one), the super-AI will likely devise what that must be. It might end up being more than just algorithmic modeling of pattern. It may be a physically-sensitive manifestation.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
John K Clark,
"Nature" seems to have evolved "emotion" before intelligence, granted. But that is not a proof that intelligence-capability requires emotion-capability. One might posit that our computers will be the first creatures "evolved" to possess (at first) only that latter.
There is a sort of trichotomy in viewing emotion (E), intelligence (I), and consciousness (C) as separable aspects of phenomena.
Take the stance of a conscious observer, viewing an interaction between two other entities (minds) A and B. If both minds are capable of only emotional behaviors, they would still likely react to one another in ways that we would interpret by saying "A recognized the emotion in B". Neither entity would need to act "intelligently". But if A also possessed significant intelligence, and acted with this faculty, entity B would likely not "act as recognizing" or appreciating the intelligence of A.
Consciousness is an even more problematic measure. To some, it is merely "awakeness", the subjective sensation that can only be inferred in others by projection, or extension of one's self-sensations. In this sense, minds that are capable of only emotion certainly exhibit evidence of consciousness. If intelligence is ADDED to this mix, then a mind can be "deliberately reflective" of its waking state, a sensation that I assume a merely emotional mind would not experience.
Our current Turing-machine approach to emulating intelligent behavior leads us to emulate "good decision making" (which is actually easier to quantify, and closer to the facilities of the computational substrate) before we emulate "emotion". In "biologics", emotion is tied into chemical and hormonal mind-body effects, leading to amplifications of state, which a "merely intelligent algorithm" has neither the need nor the mechanism to display. Even an intelligent algorithm capable of self-evolution need not necessarily develop emotion except insofar as it serves a purpose in communicating with its environment.
The curious thing about evolution is that, despite the specific nature of the "beginnings" (biologic or otherwise), those facilities that eventually manifest are those that serve some "persistence utility", and in many ways, these will be the same utilities, a sort of universal convergence toward effective patterns.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
John B Davey,
I know you were responding to John K Clark with your post, but allow me to walk a very fine line.
If sufficiently complex algorithmically-based entities could access "senses" (light, sound, unpredictable experience from "others"), and could also "mate" (a la genetic algorithms, sharing/inheriting traits and capabilities), compete with similar algorithms having similar "non-deterministic sensory input", and could thus effect unbounded evolution ... then I am not sure that "semantic" could not be "generated". At least, if "we" truly generate semantic, as opposed to merely integrating and transforming environmental noise into new forms, and calling it semantic because it pleases us.
Thus, I do not say that "something mind-like" might not erupt from an algorithmic basis (perhaps needing access to QM indeterminacy in some way as "input", to keep itself from falling into attractor basins.) It might display "superior intelligence", and even lay claim to acting "creatively", appreciating nuance, etc.
That is why I say that we would have no right to deny its own claim to consciousness (if it made such a claim to us, and convinced us through its behaviors.)
However ... it may not have the "consciousness experience" as we have come to experience it. Whether this should be important to anyone is really the question.
Those who hope to "upload their consciousness" into a new substrate might feel concerned, because they imagine that "they" would want to continue "experiencing awakeness" (so to speak.) But I feel this is based upon the fallacy that there is any such beast as "continuity of consciousness". Consciousness, or the "conscious I", is a "sensation of the moment", and such an "I" never lasts more than a moment. In this sense, the "I" that writes these words will never experience waking up tomorrow morning. Rather, a "new I" wakes up, and believes itself to be a continuation of "the same consciousness" because of the common memory context.
In other words, even if I can successfully upload "my mind" into a new machine, "I" never go anywhere. The "I" that performed the upload never gets to find out whether the uploaded "I" is, or is not conscious, no matter how well it might behave to other observers. Just as the "I" that types at this keyboard never finds out whether a "new I" continues tomorrow or not.
So, in this respect, to whom should it matter whether the "uploaded minds" are actually conscious in the way we experience waking consciousness, or merely behave as if such subjective sensation is present?
Could they not be just as "productive"?
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
Even Searle says there is nothing supernatural involved in consciousness; it just involves science we don't understand yet, so call it Process X. Being rational, that means we can use our minds to examine what sort of thing it might turn out to be. It seems pretty clear that information processing can produce something that's starting to look like intelligence, but we'll assume that Process X can do this too, and that in addition Process X can generate consciousness and a feeling of self, something mere information processing cannot do.
What Process X does is certainly not simple, so it's very hard to avoid concluding that Process X itself is not simple. If it's complex it can't be made of only one thing; it must be made of parts. If Process X is not to act in a random, incoherent way, some order must exist between the parts. A part must have some knowledge of what the other parts are doing, and the only way to do that is with information. It could be that communication among the parts is of only secondary importance and that the major work is done by the parts themselves, but then the parts must be very complex and be made of subparts. The simplest possible subpart is one that can change in only one way, say, on to off. It's getting extremely difficult to tell the difference between Process X and information processing.
Re: Chapter 2: I Married a Computer
John,
You write:
- "It seems pretty clear that information processing can produce something that's starting to look like intelligence, but we'll assume that Process X [assumed requisite for consciousness] can do this too, and in addition Process X can generate consciousness and a feeling of self, something mere information processing can not do.
Another possibility is that "Process X" accesses, is "sensitive to", or interfaces with the information-processing system (say, the neural signalling complex) but cannot "do" the information processing on its own. Otherwise, your "Process X" is de-facto complex because the term "includes" the information-processing complexities, as opposed to being some resonance-manifestation ancillary to the particular physics of the information processing activity.
- "What Process X does is certainly not simple, so it's very hard to avoid concluding that Process X itself is not simple."
That depends upon what you attribute to "Process X".
Let me offer one of my patented "Very Poor Analogies":
Suppose you have a Zildjian cymbal, such as might be part of an expensive percussion/drum set. You strike it simultaneously near its center with a padded wooden mallet, and on its edge, offset by 113 degrees, with a small steel rod.
The harmonics that are set up are enormously complex, shifting and modulating with an ever-changing sonic character.
Suppose we fashion a "paper cymbal", capable of supporting no such vibrations at all, but we attach a few billion tiny actuators to its paper surface, tiny needles each with their own "piston action". We might be able, through sufficient programming, to signal all of these actuators such that the paper manifests "vibrations" that appear to match those of the Zildjian cymbal, but an awful lot of information would need to be generated and delivered.
In contrast, it was "easy" to have the real cymbal sustain these effects, due to the physics of the real medium. You might argue that pressure waves propagating through the metal molecular matrix act as "information", phonon propagation from and between the inter-molecular binding forces. But that is still quite a stretch from a billion actuators that must receive signals algorithmically generated from afar.
I liken the sense of consciousness to the shifting harmonies of the physical cymbal. Complex to map in detail, but not complex in actual "mechanism".
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
You are reiterating the points you made earlier and, as most AI enthusiasts do with equanimity, operate on the basis of no scientific evidence whatsoever. Well, let me put you out of your misery. There is no need for science to 'prove' that algorithms can't create consciousness: logic and epistemology work just fine.
It's as absurd to suggest that modelling a brain reproduces a brain (via representing it) as to say that modelling an electron via a mathematical model reproduces an electron. That, pretty much, is it. If you don't understand this elementary difference between non-mental 'stuff' such as electrons, neutrons and bits of wood, consciousness, noises from a car exhaust, for instance - and mental, syntactical, observer-relative 'stuff' such as 'processes', 'algorithms', 'computation' - you will remain forever trying to bash square bits of wood into round holes. And forget complexity, it's a load of hogwash. Each complex algorithm can be reduced to a larger number of very simple ones by the very definition of algorithmic mathematics in any case - the complexity argument, weak as it is, doesn't even add up on mathematical grounds.
I can stare at a block of atoms in my desk and deduce a supremely complex algorithm from their vibrations, but that doesn't mean to say my desk is thinking.
Representation is not reality - a duck is not the same as a painting of a duck.
Re: Chapter 2: I Married a Computer
John B,
I agree that modeling/simulating an explosion is not the same as a physical one.
But place yourself for a moment in a world where entities with consciousness exist in "some other physical substrate". One of them is a scientist who postulates that some myriad of atoms, arranged so as to form a theoretical carbon-based life, with a concentration of neurons in a "brain", might engender mind and consciousness. A colleague says no, that construction would merely be supporting a conflux of signals that might "simulate" mind and consciousness (or conscious-like behaviors.) If the first scientist actually went ahead and produced these carbon-based forms (like you and me), the other scientist still need not be convinced that we are conscious, rather than merely behaving so.
I honestly do not know how consciousness arises, of course, so I cannot say what forms of substrate might support such experience.
But my analogy of the "cymbal" can be extended. We must agree that the ersatz "paper cymbal" with its billion tiny programmed actuators, if able to manifest the precise complex of vibrations that the real-metal cymbal displays, would produce the same "sound" (what an external observer gets to judge). Moreover, the actuators could also be programmed to vibrate the "paper cymbal" in ways that the metal one might never be able to support, no matter how it was struck. This is perhaps analogous to the recognition that we could produce AI that was "smarter, more capable, and even more creative" than ourselves. But no matter how the paper was vibrated, it would not be reproducing the actual intermolecular forces that occurred in the metal cymbal.
The proposition (neither proven nor disproven, simply offered) is that consciousness may be likened to some artifact of the phonon propagation occurring in the metal cymbal, and absent in the paper cymbal, no matter how sophisticated the noises from the paper cymbal might be.
And none of this says that there cannot be a non-biological manifestation of conscious experience. However, not every physics employed to effect the model can be treated equally. Most of all, modeling only "intelligent behavior", even if it manifests creativity and unexpected adaptability, is not the same as modeling consciousness-as-we-experience-it.
I surmise that such cannot be ascertained by any external means. It is inescapably a subjective valuation.
Why do I assume that other people have a conscious-waking experience as I have? I cannot prove they do, but I recognize that they are made of the "same stuff", born in the "same way", etc., as I manifest, thus it will naturally be the default assumption. It would be pure solipsism to assume that I am really conscious, while other people are merely "cleverly behaving as if conscious".
But when we produce powerful and self-adaptive AI, in all manner of substrates, we lose this default assumption. We cannot "identify" with such entities, and have no reason OTHER than behaviors to infer the presence of "wakeful awareness" as humans experience it.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
John K,
> "so if a computer acts like it?s conscious then it probably is."
That would have to be my "safe assumption" as well. I reason that, it either is, or is not, and what are the consequnces of my behaviors toward it in either case.
If it behaves consciously, but somehow is not, they I "mistakenly" respect the "rights" of a non-sentient. No big deal.
If it actually is conscious, and I treat it as a brick, I would be doing harm to a sentient, which I hold to be a great big deal.
My discourse on the possible "ancillary physics" of the substrate, above algorithmic processing, in the support of consciousness (as we experience it) is intended more to address the concerns of those who might hope to "transfer themselves" into a "new computronium substrate". If one could only do this as an all-or-nothing leap, one would like some additional assurance that the new substrate would support at least (if not more than) our current experience of consciousness, as opposed to being a receptacle that merely allowed our mind-logic plus memories to drive outwardly convincing behaviors.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
A standard confusion about consciousness relates to issues about 'proof', reasonable beliefs, and a constant, never-ending confusion of subjective experience with objective facts.
The fact that your conscious experience is subjective in NATURE does not exclude it from being the objective fact of your consideration. You have no proof of a lot of things you can't know through the senses: atoms, molecules and electrons are all based upon assumptions about the nature of matter: no formal logical proof can be made for their existence: the existence of atoms is based upon faith in science and constitutes what philosophers would call a 'reasonable belief'.
When placing your consciousness in the universe of objective facts you must make a case for your own uniqueness (in absolute terms) in order to make an OBJECTIVE assertion of the denial of consciousness in others. If you assume a) you are not the only human in the universe and b) most human beings are similar, then you have no conclusion to come to other than that if you have consciousness, then so must everybody else.
This is the difference between looking at the subjective experience of consciousness and its analysis in epistemological terms. To say that 'I can only be sure of my own consciousness' is correct if and only if you also maintain that the belief that there are no people other than yourself in the universe constitutes a 'reasonable belief'. Personally I don't think that I am the only person in the universe: I don't consider it a reasonable belief: so I don't consider it a reasonable belief to assume I can't say consciousness exists in others, if I can say that "grass is green" or "rotten eggs smell" or "he is black".
One thing is for sure: unlike brains, we know PRECISELY how computers work: they work via observer-relative mechanical implementations: they don't have a physical existence. As mental events are semantical and not syntactical, this means that you can never, ever generate mental events from computation, any more than you can produce bricks, wind, electrons, planets, or any other form of matter, or any other semantical components of the universe such as time and space.
Computers can't be conscious because you can't make apples from oranges, and it really is quite as simple as that.
Re: Chapter 2: I Married a Computer
John B,
You write, "As mental events are semantical and not syntactical"
Other than pure faith, how do you know that we entertain semantic, and not merely perform something "akin to" sophisticated transformations of "environmental inputs" to "outputs"? I tend to think that our "processing" is not purely causal (if that is what you mean by "mechanical") but how is semantic more than a "claim"? We may feel that we generate semantic (and I feel that we do, perhaps unprovably). But that is not something that could be said to be "obvious".
Surely, there is no reason to hold that a future artificial brain, formed perhaps of a dense matrix of silicone gel, carbon nanotubes, and tea leaves, with sensory i/o to the world, "simply cannot manifest a conscious intelligent state".
I agree that, although we can "interpret" (model) our brains (at least, their intelligent-like behaviors) to be performing "syntactic symbol manipulation", that does not mean they are merely manipulating symbols. Thus, a greatly sophisticated "future symbol processor" might not necessarily support the "sense of consciousness" we feel subjectively. But that is NOWHERE the same as saying that an artificially constructed system, which might employ symbol processing in part, cannot become a conscious individual, essentially a person.
There is no basis for supposing that our physical manifestation is the only one that "works", nor that we are more than physical manifestation.
As far as "quacks like a duck":
Difficult as the engineering might be, we can posit that an artificial person, as in "Data" from Star Trek, might someday exist. Even if given the same original "programming" (for want of a better term) three individual Data might diverge in behaviors as they re-adjust their own programming (beliefs if you like) and "experience" different histories. All three, I would assume, would avoid walking into a blast furnace, surmising that this would not help them to fulfill some sense of a long-term mission. As outside observers, we might say "they fear their own destruction", even though we really would have no idea of what they might feel, or fear.
Along the banks of a raging river, a crowd gathers, including these three "Data". A mother screams as her child is swept away by the current. The crowd, and two of the Data, hold back, but one of the Data leaps into the water, risking being swept away, to attempt to rescue the child.
Folks might say, "That was a brave thing to do" (or a foolish thing). We have no idea whether the other two Data were reluctant because (a) they feared their own destruction, or (b) they calculated risk > reward, or (c) something else.
I would call the rescuer brave, since I could not know either way what, if anything, this artificial person "felt".
It quacks like a duck. Call it a duck.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
John B,
> "Without semantic there is no language - without semantic there would be no conversation. This sentence would mean nothing to you."
:)
It might "mean nothing", even though I might respond to it intelligently, correct?
If a thousand monkeys bang at a keyboard and produce the phrase "I am hungry", that familiar phrase has no semantic content (we judge) because we assume (correctly, I surmise) that the monkeys had no "intention" of writing what they did. We, at least, "feel" that we "intend" when we offer communication, and upon receiving it from another, tend to hold that "other" as having "intended" what they wrote. Thus, semantic content is an assumption, strengthened when two sides of a communication seem to be acting with a consistent "sense" of the thread of "conversation".
You tell me "I am hungry", I hear it and bring you an apple. Since the communication you intended resulted in "successful activity on the part of the other entity", we judge that the communication conveyed "semantic".
A robot discovers it needs more "coal" and issues a request (symbols) to another robot. If that other robot goes off and returns with more coal ... the message thus conveyed no semantic? Curious indeed! Did the robots need "our sense of consciousness" to effect this activity?
None of that is "proof" that semantic is more than our belief that we "intend". I happen to belive we "intend", but my belief is no proof to anyone but me.
If (ala Turing Test) you communicate with (unknown entity), how do you judge whether the other "entity" conveys semantic, as opposed to transforms symbols in rich and complex learned ways? How will you do so in 10 years, 20 years, as artificial symbol processing systems become far more sophisticated?
Care to elaborate?
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
John B,
You wrote:
> "The company employing the programmer needs the coal, so he formalises that requirement symbolically."
This is like saying that God wanted certain work to be done, and programmed us to perform that work. Thus we "need nothing" (who says we "need") and all we are doing by passing "semantic content" to and fro is to carry out God's instructions.
Unless you think you can prove, or disprove the existence of God, and thus the existence or non-existence of "intent" on our part, the viewpoint that we convey meaning or intent is quite a matter of faith.
We may not be simple algorithmic symbol processors, easily deconstructed into obvious mechanical parts, but unless you think that we have some magical mind-fluid that allows us somehow to entertain meaning and purpose in an objective sense, I do not understand the basis for your argument.
Question: Just how far from a "symbol processing computer" must a device get before it is no longer "ridiculous" to entertain the possibility of its sentience?
Important:
You have noticed, it seems, that I tend to argue both sides of the fence. If I sense someone is taking it for granted that any Turing machine (say, built of rubber bands and colored marbles) running a sufficiently complex and self-evolving algorithm is thus definitively sentient when it acts with sufficient intelligence, I argue why the very physics of the processing MAY be a determining factor, at least in matching our subjective experience of what sentience "feels like". Alternately, when someone (guess who) demands that no "machine" can be sentient because it merely passes "symbols with no semantic", I challenge you to establish how we are not machines passing symbols lacking semantic, except in terms of unsubstantiable belief on our part.
You seem to denigrate playing both sides. Perhaps you think that discussion is about "winning" rather than spurring thought and further investigation. Your choice of words to characterize opposing arguments (half-assed, ridiculous) is a distraction, a technique of communication designed to head off further investigation of the issues. Such words are a defensive reaction, and serve to sway others to your side out of fear of appearing "ridiculous" otherwise, at least in your eyes.
Then, perhaps it is just your habit of speech.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
"This is like saying that God wanted certain work to be done, and programmed us to perform that work. "
No it isn't. I stick to statements issued on planet Earth.
"Unless you think you can prove, or disprove the existence of God, and thus the existence or non-existence of "intent" on our part, the viewpoint that we convey meaning or intent is quite a matter of faith. "
No it isn't. You can't deny the existence of meaning when making an argument in linguistic terms, as it's contradictory - making, with meaning, the suggestion that there is no meaning. Like most AI enthusiasts, you are seeking to solve the problem by pretending the problem isn't clear, which is a fraudulent and petty act. You are wasting time.
"I do not understand the basis for your argument. "
Too damn right you don't.
"I challenge you to establish how we are not machines passing symbols lacking semantic, except in terms of unsubstantiable belief on our part. "
In scientific terms, as I am making no positive assertion about the creation of consciousness through computers, I am under no obligation to prove anything. It's up to you to prove that computers can generate consciousness in scientific terms, and that starts with you having a theory of consciousness, which it doesn't sound to me like you have the capacity to produce.
However, although you have, in contravention of the entire history of science, placed the burden of disproof upon ME to 'prove' your case, rather than doing your own work yourself, I'll set you a simple test that, if you can pass it, will allow me to believe that mental events are syntactical.
Describe, preferably in a more succinct textual style than you're used to, the colour blue to me, without reference to the colour blue.
Re: Chapter 2: I Married a Computer
John B,
> "Describe, preferably in a more succinct textual style than you're used to, the colour blue to me, without reference to the colour blue."
I do not believe that is possible, which is exactly the point I am making. When I look at what you might call the "blue sky", I may be experiencing the "color" that you (and perhaps others) call "red". But I have learned to identify that color by the name "blue". You might then present to me a set of colored cards and ask "which is colored blue?" I pick the "correct" one, of course (the one that appears "red" in my subjective experience, and "blue" in yours), out of training.
This tells me that semantic is contextual, and not intrinsic. We might objectively measure the wavelength and agree that "sky" and your "blue card" match. That does not establish what I am experiencing, but only a correspondence on terms we thus deem "semantic agreement".
You object to my "coal-fetching" robot as engaging in semantic "on its own", and insist it is merely passing dead symbols to effect the semantic of the original programmer.
If the programmer had not specified "coal-fetching" per se, but rather "build more steel girders", and the (system of) robots manage to apply reason to create a steel factory, determine that coal is needed at some stage, and one "tells another" to fetch more coal, that message still contains no semantic in your view, even though it serves to effect the gathering of more coal.
If the programmer had specified "build better cars", or "build a better physical infrastructure for human society", and some system of robots use this "simple directive" to establish factories and self-sustaining systems, you will still maintain that the directive has "semantic for the human", but was merely "symbols" to the robo-system.
Although you fail to articulate it, I can only surmise that your view is based, at heart, upon the notion that "artificial stuff" is completely causally constrained, thus all artificial actions are uncreative (being always attributable to a prior cause that can be traced back to a human at some stage), and that humans escape this very causal void of semantic. Somehow, "we intend X" and no artificial system can "intend".
If you felt that I was asking you to "prove something", I was not.
But I would like to know your view of how far an artificial system needs to depart from "symbol manipulation" before it might be capable of "originating intent", as you would judge it.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
Grant,
I agree that you might describe to someone what the "color of sky or water" makes you feel (cool, calm, etc). But if that is what I subjectively experience as your color "red", then to me, it is the "cool red lake" and the "cool red sky", which I have been trained from birth to call by the name "blue".
So, perhaps John B can explain which form of "describe blue to me" he was seeking. I suspect he sought the latter "subjective visual experience", and if so, I do not believe that it can be conveyed in any sense of the term "objective". You say "makes me feel cool or relaxed", I say the same, so we refer to the "same thing" (color of water or sky), even though we might interpret the purely visual sensation differently.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
"But everybody else is not conscious, those who are sleeping are not or those under anesthesia or those who are dead. There is only one way I can tell the difference, by their actions. "
I'm afraid you are hoist by your own petard here, in addition to being confused. Nobody is talking about consciousness detection systems here: none has been invented (yet), although there is no fundamental reason why one should not be. What is under discussion here is the question of objective belief about consciousness. That you may believe or not believe somebody is conscious at a particular point in time due to their actions is not the same question as 'is it a reasonable belief to assume I am the only conscious person in the world?'.
AI enthusiasts try to have their cake and eat it. They try to say on the one hand that consciousness of other people and animals is judged by their external behaviour, and on the other that, as it is undetectable in others, all we need to do is create externally manifested conscious behaviour and hey presto! as we can't prove it ISN'T conscious, we must assume that it is.
It is important to note that this line of argument is unscientific drivel of the most bogus kind, and some individuals have been living off it for years.
In the first place the case rests on consciousness not being objectively testable, and on that restriction being inviolable. This is a false assumption and belies a colossal ignorance of science. Consciousness has a first-party, subjective quality, but that doesn't stop it from being a third-party phenomenon of the universe and thus the subject of scientific hypothesis. Given a theory of consciousness (relating to MATTER, and not 'process'), detection of consciousness would be perfectly possible via the usual hypothesis/experimentation/verification route used in all other branches of science and physics. The only restriction on measuring consciousness is a complete absence of consciousness theories.
The other point is this: if I say on the one hand that I only know that other people are conscious because of the way they behave, then I already have a personal theory of other people's consciousness based upon their being like me. But what I have is a theory of the STATE of that person's consciousness: not his CAPACITY to be conscious, which I've assumed is implicit in his being a human being and pretty much like me. This is by any measure a reasonable belief. So your behaviour theory of consciousness state in humans is predicated on one assumption: other human beings have the capacity to be conscious.
Now when we change focus to the computer we see an abrupt change in argument. There is no reason to assume that a block of silicon has the capacity to be conscious. So we get this facile response: how do we know it's not conscious, particularly if it behaves like it? After all, we only know a human is conscious because of his behaviour. Not so! You assume a person is conscious because he's like you: that's it. His behaviour relates merely to your personal theory of consciousness STATE. You associate conscious behaviours with certain acts, such as speaking, laughing and (in the case of dogs, for instance) barking. But it's YOU who makes the connection between the behaviour and the idea that the possessor of the behaviour may be conscious: YOUR theory is PREDICATED upon the reasonable belief that other humans, and to a lesser extent animals, are pretty much like yourself. You can't suddenly switch your domain to boxes of silicon, which you don't have any reasonable belief are like you, and suddenly say 'if I get animal behaviour from a box of silicon I can apply the same theories about consciousness that I apply to humans and animals', because computers AREN'T humans or animals. The argument is all washed up and based upon confused notions of theory and proof.
"Not so. It would only take a few minutes to write a computer program to search for an even number greater than 4 that is not the sum of two odd primes, that is all the primes except 1 and 2, and then stop. What will this simple little program do, will it ever stop? I don't know, you don't know, nobody knows. "
This does not imply in any way, shape or form that we don't know how computers work. Computer programs that never stop are still working in the same way as ones that do.
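For concreteness, the "simple little program" quoted above might be sketched like this (my own minimal version; the function names are invented). It halts only if it ever finds an even number greater than 4 that is not the sum of two odd primes, and nobody knows whether that will ever happen.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def search_for_counterexample():
    n = 6
    while True:
        # Is n the sum of two odd primes p and (n - p)?
        if not any(is_prime(p) and is_prime(n - p) for p in range(3, n // 2 + 1, 2)):
            return n        # found a counterexample: the program stops here
        n += 2              # otherwise keep searching, perhaps forever

Whether that loop ever returns is exactly the open question, yet every individual step of it is perfectly well understood.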
Re: Chapter 2: I Married a Computer
BC,
I understand your point, as one of degree. Let us compare and contrast the brain with (at least today's) sophisticated computer.
I agree that it is unfair simply to compare our understanding of chemistry (yet lack of understanding of mind/consciousness) with our understanding of the computer substrate (yet inability to know the end-state of a complex process).
With even the most complex "program over processor", we could in principle do every step with pencil and paper. It would simply take a very long time. In this case, at least, we understand both the "base operations" and in principle how all consequences must follow.
Today, even if we completely understood all of the rules of chemistry, we could not take paper and pencil and arrive reliably at the future state of a brain, even in terms of future chemical state, much less ferret out what "thoughts" that brain might be entertaining.
The issue is one of "destiny in principle". If we knew the state of every atmospheric molecule (and the flapping wings of every butterfly, metaphorically) AND the physics of the universe were deterministic (no strong QM), then we could, in principle, calculate all future weather. Dense, moist air at location X would imply a hot muggy day at location X.
But calculating ALL future ionic concentrations in the brain, and the sequence of every future neural cascade, still does not tell us what that brain might be "thinking". We might INFER that it is entertaining fear or happiness or anger, because the concentrations or activity correlate strongly with such states, but we have no way to be certain that the "consciousness" feels as such, at least until we have an amazingly complete theory of consciousness.
So the issue becomes: if we eventually succeed in understanding everything about the brain's mechanics, even to the point of (again, in principle) affording a "pencil and paper" calculation of what that brain will be "thinking", then as a consequence, our "thoughts" are just as "mechanically uncreative and incapable of (objective) semantic" as the microprocessor-supported algorithm. We may not know what the original program looked like, but the current state-of-the-algorithm would be sufficient to calculate all future state.
In such a case, our "sensation of conveying meaning" by our thoughts and actions would be a form of self-delusion. We would be merely "robots fetching coal", to draw upon the post I made to John B.
How are we, in principle, to escape this cold, mechanical conclusion?
A) "We cannot figure out the brain to that degree".
So we get to maintain forever that there is some "inexplicable reason" that WE entertain real meaning, as opposed to carrying out some chemical destiny written long before we were born, whereas the most sophisticated algorithmic systems cannot entertain real meaning, because they are fully explicable, and must carry out the algorithmic destiny of their original programming.
B) "We figure out the brain perfectly, including a correct theory of consciousness, but still cannot predict what a brain today will be thinking tomorrow, even in principle, because we understand how it exploits fundamental QM-indeterminacy."
This would suggest that a deterministic algorithm cannot "generate semantic", but one that might exploit QM-indeterminacy in a sufficiently similar way may give rise to a conscious, semantic-generating system, a "mindful being".
C) "The voice of God booms from the Heavens that He gave us mind and will and meaning, and we all believe Him (or Her.)"
Not much to gain in further discussion of consciousness in that case.
D) (----------------) (fill in the blank.)
Thoughts?
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
"What does that have to do with the price of tea in China? You seem to be implying that a deterministic process could never produce something interesting like consciousness, "
I never said anything of the sort. You commented that weather, I recall, was difficult to predict and hence its mechanisms not 'understood' somehow. I pointed out weather was difficult to predict for one reason: too much initial information required, so a random element is always present in weather behaviour. You had tried in a half-hearted way to compare the progress of the weather with a computer program. Something completely invalid, as an isolated computer program is 100% deterministic, as I said.
". Everything, absolutely everything, happens because of cause and effect OR it does not happen because of cause and effect and is random, "
Gibberish. Even quantum systems have cause and effect - quantum mechanics is about measurement.
"there is no third alternative. I don't see how that has anything to do with consciousness, intelligence or free will, whatever that is "
I don't know where on earth you got the impression I thought that consciousness wasn't caused by something. Certainly not from anything I've said, so it must be something you THINK I've said.
"I am very familiar with every one of the 26 letters Shakespeare used in his work thus I understand every nuance he tried to express in his writing. "
They use the same letters in France - are you as familiar with the works of Proust? You need more than an understanding of syntax to get a grasp of semantic.
"We have known the genetic code for 40 years and now we have the complete genome, ..thus we know all there is to know about human biology. "
Ho ho! You'd better tell all the scientists grafting away all over the world to put their coats on - you've sorted it all out! It was all so easy in the end ...
Re: Chapter 2: I Married a Computer
John B,
You seem like an intelligent fellow, so I am interested in your opinions. I pose a few questions, and would like to hear your views.
1. Do you think that it is possible to attain a theory of consciousness sufficient to explain "mind and thought" in terms of the properties of physics, or do you think there is some insurmountable barrier to such a theory?
2. If a theory of mind/consciousness were attained, and shown to be a pure consequence of physics, would this make our actions "completely determined" (analogous to an algorithm), or can we somehow "originate thought or action" (free will, so to speak)?
3. If we are "as if algorithmically determined" (by consequences of chemistry, etc.), then would our "sense of entertaining semantic" be anything more than "our sense of it"? That is, would semantic be "in the mind of the beholder"?
4. How far from a pure "symbol-processing automaton" must a thing be before it can be considered an "intentional actor"?
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
1. Do you think that it is possible to attain a theory of consciousness sufficient .." in terms of the properties of physics, ..there is some insurmountable barrier to such a theory? "
Physics gives an account of matter but it does not explain it. So physics can give us an account of what matter DOES but not what matter IS. To that extent physics is a syntactical discipline that jumps off a boatload of semantic 'givens'. One 'given' is time: time is never explained by physics in terms of itself: it is merely represented formally in mathematical statements. If time/matter/space are "de-semanticized" by physics it's usually into a form of grand semantic such as mass-energy. Physics refers to its semantic elements as dimensions and these represent the limits of its capacity to explain. So the simple answer is no, I don't think there is a syntactical expression that will lead naturally to the conclusion that people can think. Semantic cannot be DERIVED from physics. But there may well be a syntactical expression of some kind that gets ASSOCIATED with thinking. This is, in fact, all physics tends to do in any case. The charge of an electron is not derived from base principles; it's a semantic idea produced by human beings after a lot of experimentation and hypothesis.
"2. If a theory of mind/consciousness were attained, ..thought or action" (free will, so to speak.)
Don't know. I suspect that the problem may well lie in the question somewhere. We have predeterministic assumptions about physics currently. They may well turn out to be incomplete.
3. If we are "as if algorithmically determined" (by consequences of chemistry,
Chemistry is about stuff. Real, semantic stuff. That there are laws about chemicals does not make us algorithmically determined. It makes us chemically determined.
"then would our "sense of entertaining semantic" be anything more than "our sense of it".
No. Wrong ontologies; you're getting confused. "Sense" in this context is not a physical sense but a logical one. We have non-physical semantic too - the idea of other people being the most obvious example.
"4. How far from a pure "symbol-processing automaton" must a thing be before it can be considered an "intentional actor"? "
This is a question akin to "when did you stop beating your wife?": its base assumptions are erroneous but it nonetheless demands a binary response. The answer is very simple: a symbol processor doesn't physically exist, so it doesn't even belong in the realm of intentional actors; the problem lies in the question - which assumes there is some kind of boundary between the two, which there is not.
Re: Chapter 2: I Married a Computer
John B,
> > 1. Do you think that it is possible to attain a theory of consciousness sufficient .." in terms of the properties of physics, ..there is some insurmountable barrier to such a theory? "
> ... the simple answer is no , I don't think there is a syntactical expression of that will lead naturally to the conclusion that people can think. Semantic cannot be DERIVED from physics.
I would agree that theories of physics are generally axiomatic constructs, such that "selected premises" lead to derivable consequences that will match physical observations. But are you implying that expressions of theory are merely syntactic?
However "artificial" or syntactic all theories of physics may be, they generally lead to our ability to predict with greater accuracy. It would seem that we could "predict all future thoughts" given a sufficiently effective theory (not practical, but in principle.) It is not practical, at least, because of the amount of "state" that would need to be known with accuracy, so that issue is not interesting.
My second and third questions were related: (2) Would an "in principle" theory, or understanding that we were purely "consequences of physics" reduce us to "non-willful beings", and (3) How could non-willful beings "mean anything" (intend semantic) by their utterings.
Apologies if I did not word them as clearly.
I happen to hold that we are willful, that we can "originate intent". If otherwise, then I meant exactly the "sensory" sense of "sense" (whew) when I implied that our "sense of engaging in semantic" would be a joke, as it were. I cannot fathom how the words I might utter tomorrow can bear a semantic that "I intend", if they were in fact completely determined a century ago by the state of the universe. Fortunately, our understanding of physics leads us to find it not strictly determined.
Even so, it would seem that, as I wrote, semantic is in the eye of the beholder. Hypothesize an advanced race of beings that happen upon us, and judge us to be "robots". They have no particular reason to establish that we are conscious, intentional, or willful. Our messages (written, voice, etc.) are fancy symbols being passed about as we go about our business "fetching coal".
One might argue (taking the omniscient view) that our "symbols" convey semantic because "we consciously intend them" to convey a specific meaning, instruction, or directive.
In this view, "semantic" and "conscious intent" become synonymous. If two artificially constructed "beings" pass messages between themselves, they are "conscious" IFF "they pass semantic".
> > 4. How far from a pure "symbol-processing automaton" must a thing be before it can be considered an "intentional actor"?
> "its base assumptions are erroneous but nonethless requires a binary response. The answer is very simple : a symbol processor doesn't physically exist so doesn't even belong in the realm of intentional actors."
Binary response? It was not a "yes or no" question. I asked "how far from", not "how big and complex need it be."
I think that "algorithm" (as an abstract logical description) and "robot with processors effecting algorithms" (effecting physical consequences) are indeed two different beasts. I have argued on this list that a Turing machine running a billion lines of sophisticated, self-modifying code, does nothing more than I could to with pencil, paper, and a great deal of spare time.
But a "robot" (or whatever we might artificially construct that employs algorithms in some degree) is not an isolated system. A robot is rather useless without some way of interacting with the rest of the physical world. It might have visual systems, aural systems, tactile systems, as well as adaptively learn from dealing with its unpredictable surroundings. In this sense, is it still merely a symbol-processing automaton?
If yes, is this because its behaviors are "determined"? This begins to seem a stretch of the word "determined", since two such robots, "raised" in different surroundings and "experiencing" (intentional quotation marks) different situations, would end up adapting differently, drawing different inferences, etc. Their subsequent behavior is no more predictable than the weather.
Would this (plus evidence of their intelligence) demand they be conscious? I don't think it is demanded, no.
But at some point of sophistication, the term "robot" becomes unnecessarily pejorative, suggesting the simple mechanical man. One can no longer argue that it is all simply "determined behavior" due to the mechanical nature of its physical logic gates, unless we too are merely "determined behavior" due to the mechanical nature of chemistry.
Is there a scale across which this "physical manifestation called robot" and "intentional actor" meet?
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
john.b.davey@btinternet.com
'The reason that you can't tell one atom is going to decay rather than another is because you can't know the exact position AND velocity of a particle simultaneously, the fundamental principle of MEASUREMENT that is quantum mechanics. Cause and effect still apply '
But it's much deeper than just a measurement problem. Take the old two-slit experiment, for example: it's not that the photon goes through one slit and we just don't know which one; it must go through the left slit only, and the right slit only, and both slits, and no slit at all, and it must do all these things at the same time.
Shine a light on two closely spaced slits and it will produce a complex interference pattern on a film, even if the light beam is so weak the photons (or any other particles) are sent out one at a time. If a particle goes through one slit, it wouldn't seem to matter whether the other slit, the one it didn't go through, was there or not, but it does.
Even stranger, place a polarizing filter set at 0 degrees over one slit and one set at 90 degrees over the other, and the interference pattern disappears. Now place a third filter set at 45 degrees one inch in front of the film and 10 light years from the slits. The interference pattern comes back, even though you didn't decide to put the filter in front of the film until 10 years after the photons passed the slits! Heisenberg's Uncertainty Principle does not enter into any of this. Quantum Mechanics may or may not be a good idea but one thing is certain: it's the law.
Re: Chapter 2: I Married a Computer
John B,
You fastidiously avoid addressing the issue of artificial sentience, which seems to be the central theme of most of the posts made in this thread. Such artifice need not be purely "the algorithm in abstraction", but can be an element of a physical manifestation (e.g., a "robot").
The issue is not whether a simulated rainstorm, however accurately modeled, will require that I grab an umbrella.
I don't happen to feel that a Pentium 4, plus "cool algorithm", acting in concert, represents what I would consider a "sentient entity" (or "intentional actor"). You would appear to agree with this position.
Yet I ask: wherein lies the difficulty-in-principle of producing artificial sentience (or that which would be indistinguishable from sentient artificial intelligence)?
1. Is this simply a matter of degree, lack of sufficient complexity, parallelism, or real-world interactions?
2. Is it that the "state of the system" (despite variable outside interference) is in principle "explicable"?
3. Is it that siliconic "logic gates" are not mushy and grey enough?
4. Is it that it would lack a "soul"?
5. Is it Something Else?
You are good at taking pot-shots, and yet are reluctant to stand up even a straw man of your own. Surely you must have conjectures of your own to make on this issue. Why not reveal them?
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
"You fastidiously avoid addressing the issue of artificial sentience,"
Give me an example.
"..which seems to be the central theme to most of the posts made in this thread. Such artifice need not be purely "the algorithm in abstraction", but as an element of a physical manifestation (e.g., "robot"). "
A robot does not exist other than as an "algorithm in abstraction". A robot is a functional entity, not a physical one. A robot delivers "service", for want of a better word. It has an arbitrary physical implementation. What you seem to be suggesting is that the syntactical "service" components as defined by the observer (robot user) are somehow conferred onto/into the physical device implementing the "service" requirements.
Impossible.
"Yet I ask, wherein lies the difficulty-in-principle, of producing artificial sentience (or that which would be indistiguishable from sentient artificial intelligence)? "
None. If those artificial factors have the causal powers to create consciousness, then they have the power to create consciousness. My personal belief is that biologists have the only identifiable access to those causal powers: namely brain tissue. If anybody synthesises consciousness/thinking in the near future it will be them. On the other hand, syntactical objects have no causal powers whatsoever, because they are abstract.
"1. Is this simply a matter of degree, lack of sufficient complexity, parallelism, or real-world interactions? "
Forget algorithms and other syntactical objects. They don't have physical existence, which thinking and consciousness do. There is no capacity for the interaction of the two.
"You are good at taking pot-shots, and yet are reluctant to stand up even a straw man of your own. Surely you must have conjectures of your own to make on this issue. Why not reveal them? "
You are constrained by the limitations of your own argument set. You think that anybody who doesn't believe that computers can generate mental events must be driven by religious motives. In fact I happen to think that the religious case, though clearly contrary to my point of view, makes more consistent sense than the AI case, as at least an all-powerful deity would have causal powers, were he/she/it to exist. What makes AI so bizarre is its adherents' insistence that observer-relative syntactical objects DO have causal powers, which is totally nonsensical.
Re: Chapter 2: I Married a Computer
Oh no, here we go again, the old 'you can't tell the difference so we can assume they're both the same' nonsense - otherwise known as the assertion that 'a duck is as good as a painting of a duck to an insensate AI enthusiast'. Well, for one thing you are YET AGAIN confusing (how many times are you actually going to do this) the EXISTENCE of consciousness with a PROOF of consciousness. The answer to the question, not that it remotely affects the objective existence of consciousness and so is MONSTROUSLY irrelevant, is simple. The robot is not conscious. The robot is lying - or it would be, if it had the capacity for intentionality, which it doesn't. For the simple reason that our 'intelligent, man-made' function server fulfills the delivery of abstract needs to humans, and has no causal powers to create consciousness, for the simple reason that it wasn't built to be a conscious agent.
And I don't know why you bang on so (like so many AI types) about 'intelligence'. They belong to different ontologies, different realms.
It's perfectly possible to be conscious and completely unintelligent, after all.
Re: Chapter 2: I Married a Computer
John B,
I might argue:
"The 2-slit experiment means that cause and effect don't apply (at QM levels of activity) because to assume that a (hidden) "cause" exists would imply that, in principle, there are "variables and values" (however inaccessible) that if they could be known, would render the universe entirely deterministic, and leads to statistics on QM observations (via Bell's Theorem) that would differ to arbitrary degree from the observed statistics."
Alternately, I might argue against fundamental QM-indeterminacy using the inscrutable John-B method:
"Behavior at the QM-level is still cause and effect, because, well, everything has to have a cause! It is so obvious, only an idiot would think otherwise, duh!"
Likewise, your only way of elucidating why "machine-consciousness" is impossible (different ontologies) is that it is "obviously" different ontologies, which explicates nothing in particular.
In a purely-causal universe, we are as much "machine" as any robot, unable to "intend" or generate "meaning", despite having developed a "sense of intention". There is no escape from such a conclusion, (without appeal to "magic".)
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
"Likewise, your only ability to elicidate why "machine-consciousness" is impossible (different ontologies) is that it is "obviously" different ontologies.......ing in particular. "
Are they or are they not different? Is there no difference between an observable feature of a piece of matter and the piece of matter itself?
I don't make an 'assertion' that matter is distinct from the observable features of matter: it's a fact. It is not up for discussion. Likewise there is no argument with the fact that syntactical objects have no causal powers - it's a fact.
If not explain otherwise. Explain how the number 2 has causal power.
"machine-consciousness" is impossible "
More unfounded nonsense. If the machine has been designed with materials that have the causal powers of consciousness, then the machine will have consciousness. If the machine has been designed purely as a "function server" defined totally in terms of what it delivers to the user (and having no intrinsic/specific causal content of its own) like all contemporary 'robots', then no, it will not have consciousness.
"we are as much "machine" as any robot, "
Correct. Meat machines. Consciousness machines.
"unable to "intend" or generate "meaning", "
You didn't intend to write this mail about the subject of consciousness?
". There is no escape from such a conclusion, (without appeal to "magic".) "
Absolute cataclysmic drivel, archetypal AI tactic to pretend that any disagreement with its view is based upon superstition. As I've said before, I'm not religious, but at least a God, were he/she/it to exist, would have causal powers. The number 2 could not, by any stretch of the imagination, have causal powers, yet such assertions are at the heart of AI hocus pocus.
Re: Chapter 2: I Married a Computer
john.b.davey@btinternet.com
'The robot is not conscious. The robot is lying - or it would be, if it had the capacity for intentionality, which it doesn't.'
=========
So you say, but saying it does not make it so, not even if your arguments are all in CAPITAL LETTERS. The independent third party could conclude just as logically that the human is not conscious. The human is lying - or he would be, if he had the capacity for intentionality, which he doesn't.
============
'And I don't know why you bark on so ( like so many AI types ) about 'intelligence'. They belong to different ontologies, different realms.'
=============
Since you admit you don't know, I'll tell you exactly why. There is no way evolution, that is, random mutation and natural selection, could tell the difference between intelligent conscious behavior and intelligent non-conscious behavior, so if the two were not linked, if they were really in 'different realms', then there is absolutely no way evolution could ever have produced consciousness.
Re: Chapter 2: I Married a Computer
BC,
> "Randomness does not necessarily imply acausality"
True. But acausality would imply statistical randomness of some sort.
I can think of perhaps three forms of randomness.
1. Conformance to "random-like statistics". All forms of (apparent) randomness would need to pass tests such as these: things like distributions of run-lengths, long-term convergence in certain properties such as mean and variance, etc.
The digits of Pi, or most any irrational decimal expansion, tend to make a good "(pseudo) random" sequence, even though produced by a relatively short algorithm (thus possessing only finite information, despite never repeating). (A small sketch after this list illustrates the idea.)
2. Strongly Random sequences. In theory, such a sequence cannot be expressed or generated by any finite algorithm. They are "non-computable", and informationally incompressible. Such a sequence's "shortest encoding" is itself.
No algorithm can generate such a sequence, unless it already has the sequence "in its back pocket", and is merely reiterating the values. But that would make the algorithm itself infinite in length.
The decimal expansions of "almost all" real numbers are strongly random (since there are only a countable infinity of finite algorithms, yet an uncountable infinity of real numbers). Unfortunately, we have no way to know when we are "in possession" of a strongly random sequence, at least by inspection.
3. Acausal "generation" (really, "observation", since it is probably an abuse of the term "generate" to "acausally generate".)
I had suggested that if a mix of C-14 and C-12 were gradually replenished with a flow of C-14, sufficient to counter-balance the decay-rate of C-14 in the mixture, then a count of "decays-per-unit-time" might serve as the source of a strongly-random sequence, by virtue of acausality.
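(As promised above, here is a minimal sketch of the type-1 idea, in C like the fear() snippet elsewhere in this thread. The linear congruential generator is simply my stand-in for "a relatively short algorithm"; none of it comes from the posts above.)

#include <stdio.h>

/* A short, fully deterministic generator whose output nonetheless
   passes crude "random-like" checks: roughly half the bits are ones,
   and the run structure looks plausible, even though the whole
   sequence is fixed by a few bytes of seed and formula. */

static unsigned long state = 12345UL;

static int next_bit(void)
{
    state = state * 1103515245UL + 12345UL;  /* classic LCG step */
    return (int)((state >> 30) & 1UL);       /* take a high-order bit */
}

int main(void)
{
    const int n = 100000;
    int ones = 0, runs = 0, prev = -1;

    for (int i = 0; i < n; i++) {
        int b = next_bit();
        ones += b;
        if (b != prev) { runs++; prev = b; }
    }
    printf("fraction of ones: %f\n", (double)ones / n);
    printf("runs: %d (about %d expected of a random stream)\n", runs, n / 2);
    return 0;
}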
Curiously, if one imagines the universe to be "playing out" a huge random "generation", one can almost associate three camps of "believers" according to which form of randomness the universe is exhibiting.
The "Order Sufficiently Complex Leads To Apparent Chaos" folks might subscribe to type-1. Some "small seed" is sufficient.
The "Weak-QM" folk (hidden variables, et al) might subscribe to type-2. The seed is infinitly long and undiscernable, "but it exists already".
The "Strong-QM" folk would subscribe to type-3.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
BC,
I find myself increasingly in "camp-3", fundamentally.
Of course there is definitely something to camp-1, at least in explaining how the "averages of aggregations" and the consequent rise of causality lead to some "forms" being (more or less) universal.
The "laws" of physics, while perhaps incapable of telling us when an atom will decay, may lead to precisely why the "rates and proportions" manifest. These "values" interrelate to make certain "patterns" effectively attractors. On the "very small" they are very strong attractors, hence the ubiquity of protons, etc.
I guess I see "physics" (the theories) serving to explain how "type-3-fundamentals" lead to "type-1-stable manifestations".
In these terms, I see "type-2" as being a short-cut that is a "cut too short".
I am reminded of something Einstein said, with respect to formulating the laws of physics, (paraphrasing):
"They should be made as simple as possible, and no simpler."
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
You wrote:
"I am very familiar with every one of the 26 letters Shakespeare used in his work thus I understand every nuance he tried to express in his writing."
The two cases (weather and Shakespeare) aren't similar at all.
While I think John overstated the case when he implied that we completely understand the weather, no one doubts that the weather on the earth is deterministic at the classical level, even if chaotic. Weather is a physical process.
On the other hand, the relationship between the alphabet and Shakespeare's writing is different. First, the alphabet that Shakespeare used and the one that we use are subtly different. And the English language has changed dramatically since the 16th Century. So, perhaps it is not true that you completely understand the writing that Shakespeare used to encode his meaning. But more importantly, aesthetics is not a science (at least yet, thank goodness.) The question of what Shakespeare meant is not a scientific one. Even if Shakespeare were still alive and we could ask him, he might not even realize everything that he meant. To borrow an idea from postmodern philosophy, when I read Shakespeare, it "means" something different from what it "means" when you read it.
"We have known the genetic code for 40 years and now we have the complete genome, in effect we have the human blueprints and we know the language it is written in, thus we know all there is to know about human biology."
Here at least we are dealing with two scientific questions. But again, you are comparing apples and oranges. No, we don't know "everything there is to know about human biology." But John did not claim that we know "everything there is to know about the weather" either. He said we understand the mechanism by which the weather works. We do not yet know as much about human genetics as we do about the weather, but we learn more every day. When I received my PhD in anthropology back in the dark ages, we did not understand how to use mitochondrial DNA to trace kinship relations. Now we do. More importantly, even though we don't know how every gene works (for example, we don't know why half my beard went white before the other), we do have very useful knowledge (which gene combination contributes to cystic fibrosis, for example.) And I would say that within a few years, we will have as good a working knowledge of the genome as we do of the weather.
Finally, I'd like to say that there are no "final answers" in science. I doubt that we will ever know everything there is to know, even if we do emerge into the glorious posthuman future that we talk about in these forums. But one doesn't have to have perfect knowledge to understand how something works. Otherwise, we would never understand anything.
BC
Re: Chapter 2: I Married a Computer
john.b.davey Wrote:
'You need to read some work on episetemology, as claarly you don't know any.'
===========
Mr. Davey, spare me the condescending tone, your philosophical acumen has not impressed me. By the way, the word is spelled 'epistemology'.
=========
'it doesn't mean anything unless it produces a logical consequence we can test via an experiment.'
===========
I could not agree with you more. If you asked me to come up with a theory to explain intelligence I would be at a loss; you would be able to find an experiment that would blow a hole in any theory I came up with in no time. Intelligence is hard. Consciousness on the other hand is easy; even my theory about feet causing consciousness, stupid as it may be, is not contradicted by any objective fact. A good consciousness theory needs to do more than just 'explain consciousness', because that just means 'tell me a story about consciousness'. Stories are cheap and most of them are fiction; experiments are neither. I cannot imagine any experiment that could tell the difference between a good consciousness theory and a bad consciousness theory. If you know of such an experiment please don't be shy, tell the world!
Re: Chapter 2: I Married a Computer
"Mr. Davey, spare me the condescending tone, your philosophical acumen has not impressed me. "
Well you could impress me by at least stumping up with a half-hearted coherent argument.
"I could not agree with you more. If you asked me to come up with a theory to explain intelligence I would be at a loss"
That is simple. Intelligence is a qualitative feature of brains with no formal properties, largely dependent upon the opinion and largesse of the observer. Consciousness is a phenomenon with formal properties that exists whether an observer is there to view it or not. Yet again, you are mixing ontologies willy-nilly.
"I would be at a loss, you would be able find an experiment that would blow a hole in any theory I came up with in no time. "
Yes. My brain supports consciousness and it's not a size 11 foot. Next.
"Consciousness on the other hand is easy, even my theory about feet causing consciousness, stupid as it may be, is not contradicted by any objective fact. "
It is stupid, as it suggests that people without feet can't be conscious. I don't even need to consider it. Your theory is a useful one in that it does produce objectively measurable conclusions, which can lead to us quite quickly concluding that it is false.
"A good consciousness theory needs to do more than just 'explain consciousness' because that just means 'tell me a story about consciousness'."
Yet again a misunderstanding of science. Science needs to give an account of consciousness, that is all: the circumstances surrounding its production and all causal factors. It doesn't need to explain what consciousness, for want of a better word, 'is'. No more than physics tells you what matter 'is'. Physics merely gives an account of the formal properties of matter.
" I can not imagine any experiment that could tell the difference between a good consciousness theory and a bad consciousness theory."
I'm sorry, but I don't think this is formal grounds for disproof. A bad consciousness theory would be one that was circular and provided no grounds for falsification via experiment. Most AI theories fall into this category. A good one would make a positive statement about matter and the physical world that would enable it to be DISPROVED via experiment. Positive verification of the already existing body of known facts is generally considered to be insufficient grounds for acceptance.
Re: Chapter 2: I Married a Computer
"do you think people who are sleeping are conscious, or those under anesthesia or those who are dead and if not why not? "
You really haven't read the mail, have you?
If you think that people are conscious because of the way they are behaving, that is NOT an objective statement about consciousness. It is an objective statement about how most people have simple working theories (as part of their normal working psychology) about the RELATIONSHIP between consciousness and behaviour.
Am I making myself clear now? You don't use people's behaviour as proof, per se, of the existence of consciousness per se (in the way that you seem to be suggesting): rather you connect certain behaviours with a consciousness STATE. This is a theory based upon a combination of personal experience and an instinctive belief that all humans are similar.
What you are describing is not how to produce an objective test of consciousness: you are describing how human psychology uses simple theories of consciousness state on a daily basis - but this theory is PREMISED on two ideas: a) that consciousness is an objective fact and b) that all humans are quite similar.
So I do judge/guess people's consciousness state by looking at them - because like most people I use the theory. But the catch is my theory only applies to other humans and (to a lesser extent) animals.
The other thing to point out is that I am aware, from a third party point of view, that my theory is strictly limited. I could be fooled into thinking that I've seen a human when I haven't. For instance, a robot. In which case I am aware that the primitive test I apply to consciousness conditions based upon behaviour is fundamentally flawed, as I can't always tell whether something is a human being or not. From the small set of external and easily reproducible criteria (behaviours, appearance, noise) I cannot conclude that the appropriate INTERNAL criteria for something being a human being have been met. In short, a painting of a duck is not the same thing AS a duck.
I don't think people under anaesthesia are conscious.
That is because anaesthetists have working theories of consciousness based upon a combination of third party metrics: heart rate, anaesthetic level, oxygen level and (in some circumstances) brain scans monitoring alpha activity in the brain. Anaesthetists came to the conclusion a long time ago that consciousness wasn't best assessed via a visual assessment of the patient. That is observable and of use, but on its own is not a good guide to consciousness level.
The question of whether people who are sleeping are conscious is more complicated. People who are asleep may be dreaming, and this indicates a level of consciousness. They may be dozing, drifting in and out of consciousness and semi-consciousness. So I wouldn't come to any conclusions about the conscious state of sleeping people by looking at them. That is why scientists have a preference for monitoring alpha activity in the brain when studying sleeping people, as it's a good guide to changes in mental state.
What you see in both these cases is science in action: scientists using theories of consciousness based upon non-sensory criteria, as the usual human-psychology version is woefully inadequate.
But in case you still haven't got the point, I reiterate: when you say "you think people are conscious because of their behaviour", what you are commenting on is human behaviour. You are not making an objective statement about consciousness. You cannot therefore conclude that if you see human behaviour in robots you can assume they are conscious.
Re: Chapter 2: I Married a Computer
"you have said the old cause and effect assumption is as strong today as it was in the days before the Quantum Mechanics revolution, "
No I didn't. Classical causality may be dead, but causality per se is not. Read what I said. What I stated, repeatedly, was that quantum mechanics and the 2-slit experiment does not mean the end of cause and effect. And it doesn't. And if you think it does, give me your explanation, as I'd be only too glad to hear it.
"You have said the awesome profundities exposed in the 2 slit experiment are no more interesting than a pop up book, "
No I didn't. Wrong again. I said I'm familiar with them and boggled by the results. I said it was my assumption you'd read a pop-up book rather than studied the subject, which is true.
"You are confusing the existence of simple theories of consciousness in everyday use by human beings as prima facae evidence that the test for consciousness is synonymous with the test for human behaviour. "
Need I say more than repeat it again? A test for human behaviour is a test for human behaviour. Not a test for consciousness. We can only extend the test for consciousness to the test for human behaviour if, and only if, the object we are studying is a human being. We cannot conclude that a non-human has consciousness just because it acts like a human, as our everyday theory rests on the requirement that the object of our theory is, actually, a human being, and not just a mimic.
It is, in short, nonsense to think that a duck is the same thing as a painting of a duck.
Re: Chapter 2: I Married a Computer
"What I stated, repeatedly, was that quantum mechanics and the 2-slit experiment does not mean the end of cause and effect. And it doesn't. And if you think it does give me your explanation, as I'd be only too glad to hear it."
=======
Ok, well. When a photon of undetermined polarization hits a polarizing filter there is a 50% chance it will make it through. For many years physicists who disliked the idea that God played dice with the universe figured there must be a hidden variable inside the photon that told it what to do. By "hidden variable" they meant something different about that particular photon that we just don't know about. They meant something equivalent to a lookup table inside the photon that for one reason or another we are unable to access, but the photon can when it wants to know if it should go through a filter or be stopped by one. We now understand that is impossible. In 1964 John Bell showed that correlations that work by hidden variables must be less than or equal to a certain value; this is called Bell's inequality. In experiment it was found that some correlations are actually greater than that value. Quantum Mechanics can explain this; classical physics or even classical logic can not.
Even if Quantum Mechanics is someday proven to be untrue, Bell's argument is still valid; in fact the derivation of his inequality uses no Quantum Mechanics at all. His point was that any successful theory about the world must explain why his inequality is violated. I will attempt to show how to find the inequality, show why it is perfectly logical, and demonstrate that nature refuses to be sensible and just doesn't work the way you'd think it should.
I have a black box, it has a red light and a blue light on it, it also has a rotary switch with 6 connections at the 12,2,4,6,8 and 10 o'clock positions. The red and blue light blink in a manner that passes all known tests for being completely random, this is true regardless of what position the rotary switch is in. Such a box could be made and still be completely deterministic by just pre-computing 6 different random sequences and recording them as a lookup table in the box. Now the box would know which light to flash.
I have another black box. When both boxes have the same setting on their rotary switch they both produce the same random sequence of light flashes. This would also be easy to reproduce in a classical physics world, just record the same 6 random sequences in both boxes.
The set of boxes has another property, if the switches are set to opposite positions, 12 and 6 o'clock for example, there is a total negative correlation, when one flashes red the other box flashes blue and when one box flashes blue the other flashes red. This just makes it all the easier to make the boxes because now you only need to pre-calculate 3 random sequences, then just change every 1 to 0 and every 0 to 1 to get the other 3 sequences and record all 6 in both boxes.
The boxes have one more feature that makes things very interesting, if the rotary switch on a box is one notch different from the setting on the other box then the sequence of light flashes will on average be different 1 time in 4. How on Earth could I make the boxes behave like that? Well, I could change on average one entry in 4 of the 12 o'clock lookup table (hidden variable) sequence and make that the 2 o'clock table. Then change 1 in 4 of the 2 o'clock and make that the 4 o'clock, and change 1 in 4 of the 4 o'clock and make that the 6 o'clock. So now the light flashes on the box set at 2 o'clock is different from the box set at 12 o'clock on average by 1 flash in 4. The box set at 4 o'clock differs from the one set at 12 by 2 flashes in 4, and the one set at 6 differs from the one set at 12 by 3 flashes in 4.
But I said before that boxes at opposite settings should have a 100% anti-correlation; the flashes on the box set at 12 o'clock should differ from the box set at 6 o'clock by 4 flashes in 4, NOT 3 flashes in 4. Thus if the boxes work by hidden variables, then when one is set to 12 o'clock and the other to 2 there can be at most a 2/3 correlation, at 4 at most a 1/3 correlation, and of course at 6 no correlation at all.
A correlation greater than 2/3, such as 3/4, for adjacent settings produces paradoxes, at least it would if you expected everything to work mechanistically because of some hidden variable involved.
Does this mean it's impossible to make two boxes that have those specifications? Nope, but it does mean hidden variables can not be involved and that means something very weird is going on. Actually it would be quite easy to make a couple of boxes that behave like that, it's just not easy to understand how that could be.
Photons behave in just this spooky manner, so to make the boxes all you need is 4 things:
1) A glorified light bulb, something that will make two photons of unspecified but identical polarization moving in opposite directions so you can send one to each box. An excited calcium atom would do the trick, or you could turn a green photon into two identical lower energy photons with a crystal of potassium dihydrogen phosphate.
2) A light detector sensitive enough to observe just one photon. Incidentally the human eye is not quite good enough to do that, but a frog's is; for a frog, when light gets very weak it must stop getting dimmer and appear to flash.
3) A polarizing filter, we've had these for a century or more.
4) Some gears and pulleys so that each time the rotary switch is advanced one position the filter is advanced by 30 degrees. This is because it's been known for many years that the fraction of light polarized at 0 degrees that will make it through a polarizing filter set at x degrees is [COS(x)]^2, and if x = 30 degrees then the value is .75. If light is made of photons, that translates to the probability that any individual photon will make it through the filter being 75%.
The bottom line of all this is that there can not be something special about a specific photon, some internal difference, some hidden variable that determines if it makes it through a filter or not. Thus the universe is either non-deterministic or non-local, that is, everything influences everything else and does so without regard for time or space. One thing is certain, whatever the truth is it's weird.
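(For concreteness, here is a minimal sketch of my own in C - in the spirit of the fear() snippet elsewhere in this thread - that just prints the arithmetic behind the boxes. The only inputs are Malus's law and the one-notch and three-notch figures above.)

#include <stdio.h>
#include <math.h>

/* Illustration only: the numbers behind the black-box argument.
   - Malus's law: light polarized at 0 degrees passes a filter at x
     degrees with probability cos(x)^2, so boxes one notch (30 degrees)
     apart agree 3/4 of the time and differ 1/4 of the time.
   - Lookup-table (hidden-variable) boxes: a flash that differs between
     the 12 and 6 o'clock settings must differ across at least one of
     the three intermediate notches, so per-notch differences of 1/4
     can add up to at most 3/4 - never the required 4 flashes in 4. */

int main(void)
{
    const double pi = 3.14159265358979323846;
    double agree_per_notch  = pow(cos(30.0 * pi / 180.0), 2.0);  /* 0.75 */
    double differ_per_notch = 1.0 - agree_per_notch;             /* 0.25 */

    printf("QM agreement per notch:       %.4f\n", agree_per_notch);
    printf("QM difference per notch:      %.4f\n", differ_per_notch);
    printf("Three notches of differences: %.4f (hidden variables need 1.0000)\n",
           3.0 * differ_per_notch);
    return 0;
}

Any lookup-table scheme therefore needs at least a 1/3 difference per notch (at most a 2/3 correlation) to reach the perfect anti-correlation at opposite settings, and that is exactly the bound the photons beat.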
Re: Chapter 2: I Married a Computer
"How do we know there is no understanding? "
I think Searle's Chinese Room points out one thing: that digital computers can never have a sense of what they are actually doing.
Think about it - I'll assume you're familiar with computers.
All a computer consists of is commands that say "take the contents of register a, do something with it , possibly with the contents of another register b, and then stick it somewhere in memory or disk".
It repeats these operations time and time again: it can have no aggregate understanding of what it is meant to be. In fact, as most programmers know, you can't even work that out from the source code - you need to ask the programmer.
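(To make that concrete, a minimal sketch in C, in the same style as the fear() example below; the particular values and the addition are arbitrary choices of mine, purely for illustration.)

#include <stdio.h>

/* A toy version of the only kind of step a computer ever takes:
   fetch the contents of "register a" and "register b", do something
   with them, and stick the result somewhere in memory. */

int main(void)
{
    int memory[4] = {3, 4, 0, 0};
    int a, b;

    a = memory[0];       /* take the contents of register a          */
    b = memory[1];       /* ...and the contents of register b        */
    memory[2] = a + b;   /* do something with them, store the result */

    printf("%d\n", memory[2]);   /* prints 7 - but 7 of what? Only the
                                    programmer who chose these values
                                    could say. */
    return 0;
}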
So let's look at an emotion like fear. Fear is purely semantic: what it is like to be scared can only be described in terms of itself. There may be objective features you could point to (increase in heart rate, sweat, palpitation etc.) but the subjective, first party sense of terror is not communicable in anything other than words referring to terror.
Subjective fear is semantic: it consists of nothing else.
Now let's look at a (very simple) implementation of 'fear':
#include <stdio.h>

void fear()
{
    printf("J'ai peur");
}
Now this is an implementation for the French market. The computer of course will see the characters as digits and will merely move digits from one place to another. It wouldn't have a clue that it was actually meant to be feeling quite scared. A non-French-speaking programmer similarly would not be able to discern what the source was meant to do.
The only way, in fact, to find out what the meaning of the program is, is to find the human being who wrote the program, who produced this arbitrary implementation of his own 'fear' in the first place, and ask him what it's meant to be.
It is impossible, therefore, to discern MEANING from Turing representations, from computer programs. The whole point about the man in the Chinese Room is that he actually doesn't have the slightest idea what the function of the Chinese Room is meant to be , as only the inventor of the Room would be aware of that.
At this point we hear from the AI crowd the typical refrain, "it doesn't matter what happens internally, from the outside it appears to be doing the job properly". But this assumes that the existence of mental states is CONDITIONAL upon the existence of observers.
That I am conscious if, and only if, other people are there to see it. Either that, or I have "another" observing person in my own head. In which case where the hell did he come from? Is he a computer program too then?
The other thing to point out is that computers themselves are arbitrarily implemented. Most digital computers use variable voltage levels, but it could be anything. So a computer is basically a block of silicon with arbitrarily varying voltage: only the designer (observer) actually has the ability to see what bits of the silicon block are doing the computing in the first place. So the computer doesn't know it's a computer in any case. Its existence is actually in the designer's mathematical world, not the physical one. It wouldn't know where it begins or ends, as its existence is observer-relative. So it wouldn't even know it was executing these arbitrary instructions in the first place - the Chinese Room is actually too generous to AI, as it doesn't acknowledge that syntax is not intrinsic to nature but to thinking.
Re: Chapter 2: I Married a Computer
John B,
You wrote:
> "Is there no difference between an obervervable feature of a piece of matter and the piece of matter itself ?
I don't make an 'assertion' that matter is distinct from the observable fetures of matter : it's a fact. It is not up for discussion. Likewise there is no argument with the fact that syntactical objects have no causal powers - it's a fact."
I repeatedly "assert" that same "fact", as my "paper and crayons" Turing machine is intended to clarify.
> "If the machine has been designed with materials that have the causal powers of consciousness, then the machine will have consciousness. If the machine has been designed purely as a "function server" defined totally in terms of what it delivers to the user ( and having no intrinsic/specific causal content of its own ) like all contemporary 'robots', then no, it will not have consciousness."
You effectively rule out consciousness by the qualifier "having no intrinsic/specific causal content of its own", to make the latter statement tautological.
And I assume that you mean "materials whose interactions" support consciousness, and not that DNA, lipids, or proteins themselves (no less carbon, oxygen, protons or neutrons) automatically bestow causal powers of consciousness, independent of "arrangement".
> > (A purely deterministic universe would imply that we humans are) "unable to "intend" or generate "meaning", "
> You didn't intend to write this mail about the subject of consciousness ?
I "feel that I intend", but would not a purely deterministic universe imply that my "intention to write" is no different than a tree's "intention to fall over"? Just could not be helped. I could not "choose to write" nor "intend", except delusionally do.
> "All a computer consists of is commands that say "take the contents of register a, do something with it , possibly with the contents of another register b, and then stick it somewhere in memory or disk".
Why not reduce it further? "All a computer does is open and close electrostatic gates in response to electrical potentials consequent to previous flows of electrons through gates." That description purposely blurs the distinction between transistor-electric-behavior and (gross) neural-electric behavior, and forces us to ask "what else is significantly different" (there ARE still significant differences). The fact that the initial "settings" for such a system (computer) are (usually) the intention of a programmer, and thus can be interpreted to represent "syntactic manipulation", does not make the underlying physics "less physical".
Granted, I don't believe that the "physics of a digital processor" are sufficient to support consciousness "as I feel consciousness", but the reason must be more subtle. We are arguing "physics" in each case, not comparing the "physical brain" to a syntactic abstraction.
I see that there remain two significant "differences".
1. There is "more going on" to neurons than a mere analogy to transistors can represent (greater variety of physical effects, not simply "signal cascades".)
2. The "algorithmic processor", even down to the level of "gate-charge-transfer", is fully explicable and (in principle) fully determined (as long as the "system" never interacts with the "unpredictable outside world" and thus incorporates "changes" whose consequences were not forseen in advance by the programmer.)
I find (1) the more compelling as an argument that "machine consciousness is difficult", especially "computer-wise", but (2) may play a role as well. If the "brain" were ever "mechanically understood" so well that its behaviors could be interpreted as "syntactic manipulation", its ability to support consciousness may require that internal "state" be disturbed by "unpredictable outside influence", at least at some point in its development.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
"You effectively rule out consciousness by the qualifier "having no intrinsic/specific causal content of its own", to make the latter statement tautological. "
I wasn't being clear. What I should have emphasised is that you don't care HOW your robot is implemented physically, so any causal powers sufficient to create consciousness in the machine would be coincidental and not a result of your design, as your design was concerned solely with delivery of function to you as user/observer.
"And I assume that you mean "materials whose interactions" support consciousness, and not that DNA, lipids, or proteins themselves (no less carbon, oxygen, protons or neutrons) automatically bestow causal powers of consciousness, independent of "arrangement".
Correct.
"I "feel that I intend", "
That's all you need as proof. Really.
"but would not a purely deterministic universe imply that my "intention to write" is no different than a tree's "intention to fall over"?"
You have to differentiate between intentionality in acts of thinking and determinism per se. Intentionality is a component of thinking that is directed toward something in the outside world - it is a feature of psychology. Your intention to write is also a thought act. That you DID intend to write something (i.e. had the psychological processes associated with intending) and then did so doesn't affect, impede, or have anything to do with, in any way, the issue of whether it was pre-determined or not. It's possible to go through a process of intentional mental thinking in a deterministic world. It's that old chestnut - different ontologies. Determinism is about objective facts in the world of physics. Intentionality is an objective fact about the subjective workings of the brain.
"does not make the underlying physics "less physical". "
There are no physics to computers. They are not physical objects. They can be implemented arbitrarily - water and sluice gates work more slowly, but just as reliably, as electronics. Physically comparing the brain to a computer serves no purpose for this exact reason. They exist in the syntax world of the user.
"1. There is "more going on" to neurons than a mere analogy to transistors can represent (greater variety of physical effects, not simply "signal cascades".) "
Yes - they are matter. Somehow they generate consciousness. But they are not (and this is very important) defined by their extrinsic content (their metrics), as they ARE matter, after all. They are defined by what they ARE. This is a difference which AI people just don't seem to get. Reproducing the movement and arrangement of a certain set of neural metrics (the "signal" patterns, for instance) can't be used as a basis for reproducing the mental effect associated with them. It CAN be used as a basis for investigation. But the signal patterns and external metrics of neural activity do not CAUSE consciousness. They are associated with it. The metrics (signal patterns etc.) are syntactical, and have no causal powers. It's a bit like modelling a nuclear reaction. We can model and simulate a nuclear reaction on a computer, but we do not have the causal powers available in the form of the semantical (and not completely understood) matter of uranium.
Re: Chapter 2: I Married a Computer
"He asked the man if he understood, he said no, "
Well..... he didn't exactly ask the man, face to face, if he understood. He asked us all, in a kind of general sense, if what the man was doing constituted the same thing as understanding. The guy was basically translating Chinese without understanding Chinese, you must concede that. Or if not that guy, then put me - who doesn't understand Chinese, I guarantee - in his place.
You see, the whole point is SOMEBODY has to understand Chinese in order to create the Chinese Room. Somebody has to 'understand' which Chinese words mean which English words in order to produce the translation lexicon used by the Chinese Room. Otherwise how do you tie the two things together? At some point, if I am talking about, say, a human, I need to know what a human IS without reference to any kind of symbol - I need to know "what a human is" and have a sense of that semantic in order to be able to translate it into Chinese. This is what the man in the Chinese Room is not doing - he's just using a lookup table in the manner of a standard computer program.
"The series of 0s and 1s is a question written in a language you don't understand, to find the answer ask a computer "
I do understand them. All I need to do is work out the mathematical operations each of those 0s and 1s entails. Even after all that I am still none the wiser as to what the program is actually meant to do.
"you have any integrity you will retract that despicable accusation of plagiarism"
Unreservedly. I don't know what came over me.
"
|
|
|
|
|
|
|
|
|
Re: Chapter 2: I Married a Computer
"How do you know? Let's say you have developed a marvelous new conscious theory, how do you know it is correct? The only way to test it is by observing behavior. Your theory may predict that my current brain state should produce a feeling of sadness, you may even see tears in my eyes, but the only way to know if I have the subjective experience you expect, or any subjective experience at all for that matter is to ask me, take note of the sounds produced by my mouth, and hope I'm telling the truth. That doesn't sound very objective to me."
I think you are getting ontology confused with epistemology.
Consciousness is a fact. I am conscious at this moment. While you are reading this, you are conscious. Consciousness exists.
However, I cannot prove that you are conscious. I cannot *know* that any other entity, whether human, sentient animal, or computer is conscious. I can only infer it from behavior. But that does not mean that consciousness does not exist. If you've ever spent any time with any of the higher non-human primates, you will know what I'm talking about here. You look in their eyes, and you sense intuitively that there's somebody home, so to speak. But there is absolutely no way for me to prove that they are, in fact, conscious beings, absent telepathy, which, alas, seems to not exist.
BC
Re: Chapter 2: I Married a Computer
BC,
The "fact" you speak of is still a subjective one. As you follow, you "infer" (albeit very reasonably) its existence in your reader, as I (your reader) infer its existence in you the writer.
Do we agree that the "human sensation of consciousness" is at least a manifestation of the physical world (the body in particular) and not some ethereal, or even physical "substance independent of the body"?
If so, then the issue of telepathy becomes quite problematic, even if, for the sake of argument, we imagine it could exist.
Suppose I employ some god-like genie to grant my wishes. Rather than ask the genie "please tell me whether that entity actually is, or is not, experiencing what I would recognize as a conscious waking state", I ask the genie, "Please allow me to experience the mind of that entity (if indeed it has a mind) for one minute."
The genie grants my wish. If the entity is you, I experience you. If it is Ramona, or a typical AI of the day, perhaps I experience nothing (I am unconscious), or if there were a "sufficiently complex AI-like-thing", I experience something very weird. Then the minute is up.
I turn to the genie and say, "Hey, I thought you said you would grant my wish!" The genie replies, "I did. Did you expect to retain some memory of that experience?"
I say, "Sure, why not?", The genie replies, "That would only be possible if you retain some sense of being YOU, while also experiencing that other entity. But that is NOT what the entity you wished to investigate experiences, and that is what you asked for. It simply experiences itself. If you retain some sense of yourself while experiencing the other, you cannot know whether the difference represents part of what the other entity experiences, or merely your interpretation of the disturbance created by the mixing." The experiences of the other entity remain where they are entertained, naturally, with that entity. Its in the physics of that entity."
I say, "Could you not bestow upon me some hybrid memory, alter my chemisry, whatever, to allow me now to have the memory of what that was like? The genie replies, "Just how would I do that?" I cannot know what you, or that other entity experiences without losing myself in the process. I would have no standard for accuracy, and I might create any one of a thousand possible hybrid memories. You could infer nothing in particular from that.
My point in this hypothetical genie-powered telepathy exercise is to point out that, if we could create a technology to "read other human minds", it would probably operate because of some correlated artifacts of the physics/biology. I suppose it is possible, but it may not resolve the "new-machine-consciousness" issue at all.
You might use the device to accurately discover my thoughts, the "hidden number I am thinking of", but not that of the "artificial". That says nothing about whether the artificial entertains a consciousness, since that "sensation" may be modulated by a different medium for which we have no proper correlates. The idea that it could would suggest that "conscious mind" is some sort of uber-fluid that gains an existence of its own, independent of the physics of the substrate.
You might find "an aura" or a "wave manifestation" or something similar, and hold that to be somehow (correlated to) "the presence of consciousness", but that would not tell you whether the other entity "experiences" by that manifestation, even though you might extract the "right hidden number".
I think that the subject of "consciousness" is where epistemology and ontology become indistinguishable (and quite dark.)
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
BC,
I too believe consciousness is an epiphenomenon of the brain. It is "generated" by the underlying physics. To use the candle flame as an analogy, I do not believe that by extinguishing the candle flame, the former epiphenomenal "glow" goes floating through the ether in search of a new home in which to manifest. The "glow" has no existence except as an artifact of the chemical transitions.
QM-indeterminacy exists as well in the candle flame as everywhere else, but leads to no observable "special effect". The structure (or lack of structure) in the flame body is incapable of amplifying "tunneling" (for instance) to effect any phenomena of substantial difference. (It may play a role the very moment an oily-rag bursts spontaneously into combustion, but thereafter it is all effectively causal.)
But in physical systems that are highly structured, wherein certain small events can be coherently amplified, QM effects may play a larger role. This is still "brain". Certainly, "epiphenomena of brain" does not mean "of brain chemistry in complete isolation to QM-physics."
> "The really hard question is why does consciousness exist at all?"
The "Why" question is the real can of worms. If you ask "what is consciousness" or "how does it get generated", we have a chance of finding an answer. But "why" suggests (perhaps) purposefulness which may or may not exist.
We might ask "why protons and neutrons". We can perhaps show how they are a consequence of the fundamental forces, but this just pushes the question back to "why does the universe manifest that particular division of forces".
The "fact" that we (and most mammals likely) manifest degrees of consciousness is evidence that it serves to enhance "persistence". It is more useful having it than not. (By "useful", I intend purposeless tendency to persist in the environment. One might argue the individual persists so to perpetuate the species, but again, the species persists to what purpose? Purpose is in the eye of the beholder.)
However the forces that make the H2O molecule a "favorable pattern" originated, they did not do so (I believe) in order to manifest the variety of beautiful snowflakes that occur. Yet those beautiful and intricate snowflake patterns are, in an extended sense, patterns embedded in the relationships that exist among the natural forces.
(I don't know if that is a particular answer to "why consciousness", but that its very existence implies it to be a favorable manifestation.)
ASIDE: Some may argue that the feature of "consistency at a distance" which QM indicates may make consciousness some sort of "universal", which we merely interpret as "individuals" due to the circumstance of the physical locality of our "instruments". My eyeballs are usually in the same room as my hands, so to speak, so the "connectedness" is not glaringly apparent. There may be something to this viewpoint.
Cheers! ____tony b____
Re: Chapter 2: I Married a Computer
John K,
> "One other thing, if behavior can not indicate consciousness then why on earth did evolution produce it?"
I would not say behavior cannot "indicate" consciousness; it is just not a "proof" that the entity experiences the awareness as we do.
It may well be that evolution, taking the "bottom-up" path of chemistry to forge reactive systems, emotional systems, and finally our analytical capabilities, "found" the artifact of subjective awareness (as we know it) to be valuable, or to be a natural adjunct to the manner in which the processing occurs.
In contrast, we can take Boolean-logic-based mechanics, and create stuff that BEGINS with the analytic activity. Today's computers can do many complex analytic jobs better than a human, and we don't think they are conscious. Thus, it is at least possible that we could create ever-more-clever analytic-based constructs without (necessarily) engendering the "sense of sentience" we possess. That might depend upon how we implement the processes in a physical manifestation. There may be thousands of possible analytic/sensory-processing physical-basis architectures (what I call "substrates"). Transistors in silicon is just one of these. Perhaps all of them can be capable of supporting "analysis", and only 10% capable of supporting "minds of subjective conscious sensation".
Total conjecture, of course. Real proofs or counter proofs are encouraged. So are good arguments.
Cheers! ____tony b____