Chapter 2: I Married a Computer
by John Searle

Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0499.html

John Searle challenges Ray Kurzweil's predictions, such as downloading our minds onto hardware, nanotech-enhanced new bodies, evolution without DNA, virtual sex, personal immortality, and conscious computers. He uses his famous "Chinese Room" argument to show how machines cannot really understand human language or be conscious. Searle's conclusion is that Kurzweil's ideas on "strong AI" are based on "conceptual confusions."


Originally published in print June 18, 2002 in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI by the Discovery Institute. Published on KurzweilAI.net on June 18, 2002.

Kurzweil’s Central Argument

Moore’s Law on Integrated Circuits was first formulated by Gordon Moore, former head of Intel, in the mid-Sixties. I have seen different versions of it, but the basic idea is that better chip technology will produce an exponential increase in computer power. Every two years you get twice as much computer power and capacity for the same amount of money. Anybody who, like me, buys a new computer every few years observes Moore’s Law in action. Each time I buy a new computer I pay about the same amount of money as, and sometimes even less than, I paid for the last computer, but I get a much more powerful machine. And according to Ray Kurzweil, who is himself a distinguished software engineer and inventor, “There have been about thirty-two doublings of speed and capacity since the first operating computers were built in the 1940s.”
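
The claim is pure exponential arithmetic. A minimal sketch in Python, assuming only the two-year doubling period and the thirty-two doublings quoted above (the function name and figures are illustrative, not from the book), makes the numbers concrete:

```python
# A minimal sketch of the doubling arithmetic behind Moore's Law as stated
# above: computing power per dollar doubles roughly every two years.
def relative_power(years, doubling_period=2.0):
    """Computing power per dollar after `years`, relative to the starting point."""
    return 2 ** (years / doubling_period)

# Kurzweil's "about thirty-two doublings" since the 1940s implies a factor of
# 2**32, roughly 4.3 billion.
print(f"32 doublings -> {2 ** 32:,}x")
print(f"After 10 years: {relative_power(10):.0f}x; after 20 years: {relative_power(20):,.0f}x")
```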

Furthermore, we can continue to project this curve of increased computing power into the indefinite future. Moore’s Law itself is about chip technology, and Kurzweil tells us that this technology will reach an upper limit when we reach the theoretical possibilities of the physics of silicon in about the year 2020. But Kurzweil tells us not to worry, because we know from evolution that some other technology will take over and “pick up where Moore’s Law will have left off, without missing a beat.” We know this, Kurzweil assures us, from “The Law of Accelerating Returns,” which is a basic attribute of the universe; indeed it is a sublaw of “The Law of Time and Chaos.” These last two laws are Kurzweil’s inventions.

It is fair to say that The Age of Spiritual Machines is an extended reflection on the implications of Moore’s Law, and is a continuation of a line of argument begun in his earlier book, The Age of Intelligent Machines. He begins by placing the evolution of computer technology within the context of evolution in general, and he places that within the history of the universe. The book ends with a brief history of the universe, which he calls “Time Line,” beginning at the Big Bang and going to 2099.

So what, according to Kurzweil and Moore’s Law, does the future hold for us? We will very soon have computers that vastly exceed us in intelligence. Why does an increase in computing power automatically generate increased intelligence? Because intelligence, according to Kurzweil, is a matter of getting the right formulas in the right combination and then applying them over and over, in his sense “recursively,” until the problem is solved. With sheer computational brute force, he thinks, you can solve any solvable problem. It is true, Kurzweil admits, that computational brute force is not enough by itself, and ultimately you will need “the complete set of unifying formulas that underlie intelligence.” But we are well on the way to discovering these formulas: “Evolution determined an answer to this problem in a few billion years. We’ve made a good start in a few thousand years. We are likely to finish the job in a few more decades.”

Let us suppose for the sake of argument that we soon will have computers that are more “intelligent” than we are. Then what? This is where Kurzweil’s book begins to go over the edge. First off, according to him, living in this slow, wet, messy hardware of our own neurons may be sentimentally appealing, like living in an old shack with a view of the ocean, but within a very few decades, sensible people will get out of neurons and have themselves “downloaded” onto some decent hardware. How is this to be done? You will have your entire brain and nervous system scanned, and then, when you and the experts you consult have figured out the programs exactly, you reprogram an electronic circuit with your programs and database. The electronic circuit will have more “capacity, speed, and reliability” than neurons. Furthermore, when the parts wear out they permit much easier replacement than neurons do.

So that is the first step. You are no longer locked into wet, slow, messy, and above all decaying hardware; you are upgraded into the latest circuitry. But it would be no fun just to spend life as a desktop in the office, so you will need a new body. And how is that to be done? Nanotechnology, the technology of building objects atom by atom and molecule by molecule, comes to the rescue. You replace your old body atom by atom. “We will be able to reconstruct any or all of our bodily organs and systems, and do so at the cellular level. ...We will then be able to grow stronger, more capable organs by redesigning the cells that constitute them and building them with far more versatile and durable materials.” Kurzweil does not tell us anything at all about what these materials might be, but they clearly will not be flesh and blood, calcium bones and nucleoproteins.

Evolution will no longer occur in organic carbon-based materials but will pass to better stuff. However, though evolution will continue, we as individuals will no longer suffer from mortality. Even if you do something stupid like get blown up, you still keep a replacement copy of your programs and database on the shelf so you can be completely reconstructed at will. Furthermore, you can change your whole appearance and other characteristics at will, “in a split second.” You can look like Marlon Brando one minute and like Marlene Dietrich the next.

In Kurzweil’s vision, there is no conflict between human beings and machines, because we will all soon, within the lifetimes of most people alive today, become machines. Strictly speaking we will become software. As he puts it, “We will be software, not hardware” (italics his) and can inhabit whatever hardware we like best. There will not be any difference between robots and us. “What, after all, is the difference between a human who has upgraded her body and brain using new nanotechnology and computational technologies, and a robot who has gained an intelligence and sensuality surpassing her human creators?” What, indeed? Among the many advantages of this new existence is that you will be able to read any book in just a few seconds. You could read Dante’s Divine Comedy in less time than it takes to brush your teeth.

Kurzweil recognizes that there are some puzzling features of this utopian dream. If I have my programs downloaded onto a better brain and hardware but leave my old body still alive, which one is really me? The new robot or the old pile of junk? A problem he does not face: Suppose I make a thousand or a million copies of myself. Are they all me? Who gets to vote? Who owns my house? Who is my spouse married to? Whose driver’s license is it, anyhow?

What will sex life be like in this brave new world? Kurzweil offers extended, one might even say loving, accounts. His main idea is that virtual sex will be just as good as, and in many ways better than, old-fashioned sex with real bodies. In virtual sex your computer brain will be stimulated directly with the appropriate signal without the necessity of any other human body, or even your own body. Here is a typical passage:

Virtual touch has already been introduced, but the all-enveloping, highly realistic, visual-auditory-tactile virtual environment will not be perfected until the second decade of the twenty-first century. At this point, virtual sex becomes a viable competitor to the real thing. Couples will be able to engage in virtual sex regardless of their physical proximity. Even when proximate, virtual sex will be better in some ways and certainly safer. Virtual sex will provide sensations that are more intense and pleasurable than conventional sex, as well as physical experiences that currently do not exist.

The section on prostitution is a little puzzling to me:

Prostitution will be free of health risks, as will virtual sex in general. Using wireless, very-high-bandwidth communication technologies, neither sex workers nor their patrons need to leave their homes.

But why pay, if it is all an electrically generated fantasy anyway? Kurzweil seems to concede as much when he says, “Sex workers will have competition from simulated—computer generated—partners.” And, he goes on, “once the simulated virtual partner is as capable, sensual, and responsive as a real human virtual partner, who’s to say that the simulated virtual partner isn’t a real, albeit virtual, person?”

It is important to emphasize that all of this is seriously intended. Kurzweil does not think he is writing a work of science fiction, or a parody or satire. He is making serious claims that he thinks are based on solid scientific results. He is himself a distinguished computer scientist and inventor and so can speak with some authority about current technology. One of his rhetorical strategies is to cite earlier successful predictions he has made as evidence that the current ones are likely to come true as well. Thus he predicted within a year when a computer chess machine would be able to beat the world chess champion, and he wants us to take his prediction that we will all have artificial brains within a few decades as just more of the same sort of solidly based prediction. Because he frequently cites the IBM chess-playing computer Deep Blue as evidence of superior intelligence in the computer, it is worth examining its significance in more detail.

Deep Blue and the Chinese Room

When it was first announced that Deep Blue had beaten Gary Kasparov, the media gave it a great deal of attention, and I suspect that the attitude of the general public was that what was going on inside Deep Blue was much the same sort of thing as what was going on inside Kasparov, only Deep Blue was better at that sort of thing and was doing a better job. This reveals a total misunderstanding of computers, and the programmers, to their discredit, did nothing to remove the misunderstanding. Here is the difference: Kasparov was consciously looking at a chessboard, studying the position and trying to figure out his next move. He was also planning his overall strategy and no doubt having peripheral thoughts about earlier matches, the significance of victory and defeat, etc. We can reasonably suppose he had all sorts of unconscious thoughts along the same lines. Kasparov was, quite literally, playing chess. None of this whatever happened inside Deep Blue. Nothing remotely like it.

Here is what happened inside Deep Blue. The computer has a bunch of meaningless symbols that the programmers use to represent the positions of the pieces on the board. It has a bunch of equally meaningless symbols that the programmers use to represent options for possible moves. The computer does not know that the symbols represent chess pieces and chess moves, because it does not know anything. As far as the computer is concerned, the symbols could be used to represent baseball plays or dance steps or numbers or nothing at all.

If you are tempted to think that the computer literally understands chess, then remember that you can use a variation on the Chinese Room Argument against the chess-playing computer. Let us call it the Chess Room Argument. Imagine that a man who does not know how to play chess is locked inside a room, and there he is given a set of, to him, meaningless symbols. Unknown to him, these represent positions on a chessboard. He looks up in a book what he is supposed to do, and he passes back more meaningless symbols. We can suppose that if the rule book, i.e., the program, is skillfully written, he will win chess games. People outside the room will say, “This man understands chess, and in fact he is a good chess player because he wins.” They will be totally mistaken. The man understands nothing of chess; he is just a computer. And the point of the parable is this: If the man does not understand chess on the basis of running the chess-playing program, neither does any other computer solely on that basis.

The Chinese Room Argument shows that just carrying out the steps in a computer program is not by itself sufficient to guarantee cognition. Imagine that I, who do not know Chinese, am locked in a room with a computer program for answering written questions, put to me in Chinese, by providing Chinese symbols as answers. If properly programmed I will provide answers indistinguishable from those of native Chinese speakers, but I still do not understand Chinese. And if I don’t, neither does any other computer solely on the basis of carrying out the program. See my “Minds, Brains and Programs,” Behavioral and Brain Sciences, Vol. 3 (1980) for the first statement of this argument. See also “The Myth of the Computer,” published in the New York Review of Books, April 29, 1982.

The ingenuity of the hardware engineers and the programmers who programmed Deep Blue was manifested in this: from the point of view of mathematical game theory, chess is a trivial game because each side has perfect information. You know how many pieces you and your opponent have and what their locations are. You can theoretically know all of your possible moves and all of your opponent’s possible countermoves. It is in principle a solvable game. The interest of chess for human beings and the problem for programmers arises from what is called a combinatorial explosion. In chess at any given point there is a finite number of possible moves. Suppose I am white and I have, say, eight possible moves. For each of these moves there is a set of possible countermoves by black and to them a set of possible moves by white, and so on up exponentially. After a few levels the number of possible positions on the board is astronomical and no human being can calculate them all. Indeed, after a few more moves the numbers are so huge that no existing computer can calculate them. At most a good chess player might calculate a few hundred.
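
The explosion is easy to quantify. Here is a rough sketch, assuming Searle's example figure of eight moves per turn (real chess positions typically offer more, which only makes the numbers worse):

```python
# A rough sketch of the combinatorial explosion described above: with b
# possible moves at every turn, a search d half-moves (plies) deep must
# consider on the order of b**d move sequences.
def sequences(branching_factor, plies):
    return branching_factor ** plies

for plies in (2, 4, 8, 12):
    print(f"{plies:2d} plies at 8 moves each: {sequences(8, plies):,}")
```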

This is where Deep Blue had the advantage. Because of the increased computational power of the machinery, it could examine 200 million positions per second; so, according to the press accounts at the time, the programmers could program the machine to follow out the possibilities to twelve levels: first white, then black, then white, and so on to the twelfth power. For some positions the machine could calculate as far as forty moves ahead. Where the human player can imagine a few hundred possible positions, the computer can scan billions.

But what does it do when it has finished scanning all these positions? Here is where the programmers have to exercise some judgment. They have to design a “scoring function.” The machine attaches a numerical value to each of the final positions of each of the possible paths that developed in response to each of the initial moves. So, for example, a situation in which I lose my queen has a low number; a position in which I take your queen has a high number. Other factors are taken into consideration in determining the number: the mobility of the pieces (how many moves are available), the position of the pawns, etc. IBM experts are very secretive about the details of their scoring function, but they claim to use about 8,000 factors. Then, once the machine has assigned a number to all the final positions, it assigns numbers to the earlier positions leading to the final positions, depending on the numbers of those final positions. The machine then selects the symbol that represents the move that leads to the highest number. It is that simple and that mechanical, though it involves a lot of symbol shuffling to get there. The real competition was not between Kasparov and the machine, but between Kasparov and a team of engineers and programmers.
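
The procedure described here, scoring the final positions and backing the numbers up the tree, is the classic minimax search. The following is a generic sketch of that idea, not IBM's unpublished Deep Blue code; score, legal_moves, and apply_move are hypothetical stand-ins for a real engine's machinery:

```python
# A generic minimax sketch of the procedure described above: assign a number
# to each final position with a scoring function, back the numbers up the
# tree, and pick the move that leads to the highest number.

def minimax(position, depth, maximizing, score, legal_moves, apply_move):
    """Backed-up numerical value of `position`, searched `depth` plies deep."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return score(position)            # the "scoring function" assigns a number
    children = (minimax(apply_move(position, m), depth - 1, not maximizing,
                        score, legal_moves, apply_move) for m in moves)
    return max(children) if maximizing else min(children)

def best_move(position, depth, score, legal_moves, apply_move):
    """The move whose subtree backs up the highest number for the mover."""
    return max(legal_moves(position),
               key=lambda m: minimax(apply_move(position, m), depth - 1, False,
                                     score, legal_moves, apply_move))
```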

Kurzweil assures us that Deep Blue was actually thinking. Indeed he suggests that it was doing more thinking than Kasparov. But what was it thinking about? Certainly not about chess, because it had no way of knowing that these symbols represent chess positions. Was it perhaps thinking about numbers? Even that is not true, because it had no way of knowing that the symbols assigned represented numerical values. The symbols in the computer mean nothing at all to the computer. They mean something to us because we have built and programmed the computer so that it can manipulate symbols in a way that is meaningful to us. In this case we are using the computer symbols to represent chess positions and chess moves.

Now, with all this in mind, what psychological or philosophical significance should we attach to Deep Blue? It is, of course, a wonderful hardware and software achievement of the engineers and the programmers, but as far as its relevance to human psychology is concerned, it seems to me of no interest whatsoever. Its relevance is similar to that of a pocket calculator for understanding human thought processes when doing arithmetic. I was frequently asked by reporters at the time of the triumph of Deep Blue if I did not think that this was somehow a blow to human dignity. I think it is nothing of the sort. Any pocket calculator can beat any human mathematician at arithmetic. Is this a blow to human dignity? No, it is rather a credit to the ingenuity of programmers and engineers. It is simply a result of the fact that we have a technology that enables us to build tools to do things that we cannot do, or cannot do as well or as fast, without the tools.

Kurzweil also predicts that the fact that a machine can beat a human being in chess will lead people to say that chess was not really important anyway. But I do not see why. Like all games, chess is built around the human brain and body and its various capacities and limitations. The fact that Deep Blue can go through a series of electrical processes that we can interpret as “beating the world champion at chess” is no more significant for human chess playing than it would be significant for human football playing if we built a steel robot which could carry the ball in a way that made it impossible for the robot to be tackled by human beings. The Deep Blue chess player is as irrelevant to human concerns as is the Deep Blue running back.

Some Conceptual Confusions

I believe that Kurzweil’s book exhibits a series of conceptual confusions. These are not all Kurzweil’s fault; they are common to the prevailing culture of information technology, and especially to the subculture of artificial intelligence, of which he is a part. Much of the confusion in this entire field derives from the fact that people on both sides of the debate tend to suppose that what is at issue is the success or failure of computational simulations. Are human beings “superior” to computers or are computers superior? That is not the point at issue at all. The question is not whether computers can succeed at doing this or that. For the sake of argument, I am just going to assume that everything Kurzweil says about the increase in computational power is true. I will assume that computers both can and will do everything he says they can and will do, that there is no question about the capacity of human designers and programmers to build ever faster and more powerful pieces of computational machinery. My point is that to the issues that really concern us about human consciousness and cognition, these successes are irrelevant.

What, then, is at issue? Kurzweil’s book exhibits two sets of confusions, which I shall consider in order.

(1) He confuses the computer simulation of a phenomenon with a duplication or re-creation of that phenomenon. This comes out most obviously in the case of consciousness. Anybody who is seriously considering having his “program and database” downloaded onto some hardware ought to wonder whether or not the resulting hardware is going to be conscious. Kurzweil is aware of this problem, and he keeps coming back to it at various points in his book. But his attempt to solve the problem can only be said to be plaintive. He does not claim to know that machines will be conscious, but he insists that they will claim to be conscious, and will continue to engage in discussions about whether they are conscious, and consequently their claims will be largely accepted. People will eventually just come to accept without question that machines are conscious.

But this misses the point. I can already program my computer so that it says that it is conscious—i.e., it prints out “I am conscious”—and a good programmer can even program it so that it will carry on a rudimentary argument to the effect that it is conscious. But that has nothing to do with whether or not it really is conscious. Actual human brains cause consciousness by a series of specific neurobiological processes in the brain. What the computer does is a simulation of these processes, a symbolic model of the processes. But the computer simulation of brain processes that produce consciousness stands to real consciousness as the computer simulation of the stomach processes that produce digestion stands to real digestion. You do not cause digestion by doing a computer simulation of digestion. Nobody thinks that if we had the perfect computer simulation running on the computer, we could stuff a pizza into the computer and it would thereby digest it. It is the same mistake to suppose that when a computer simulates the processes of a conscious brain it is thereby conscious.

The computer, as we saw in our discussion of the chess-playing program, succeeds by manipulating formal symbols. The symbols themselves are quite meaningless; they have only the meaning we have attached to them. The computer knows nothing of this, it just shuffles the symbols. And those symbols are not by themselves sufficient to guarantee equivalent causal powers to actual biological machinery like human stomachs and human brains.

Kurzweil points out that not all computers manipulate symbols. Some recent machines simulate the brain by using networks of parallel processors called “neural nets,” which try to imitate certain features of the brain. But that is no help. We know from the Church-Turing Thesis, a mathematical result, that any computation that can be carried out on a neural net can be carried out on a symbol-manipulating machine. The neural net gives no increase in computational power. And simulation is still not duplication.
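
The point is easy to illustrate: a neural net's forward pass is itself nothing but arithmetic on symbols, so an ordinary program can carry it out step by step. A toy sketch, with a two-neuron layout and weights invented purely for illustration:

```python
import math

# A toy neural net reduced to ordinary symbol manipulation: multiplications,
# additions, and a squashing function, all executable on any conventional
# symbol-manipulating machine. The weights here are invented for illustration.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))     # logistic activation

inputs = [0.5, -1.0]
hidden = [neuron(inputs, [0.8, 0.2], 0.1),
          neuron(inputs, [-0.4, 0.9], 0.0)]
output = neuron(hidden, [1.5, -2.0], 0.3)
print(f"network output: {output:.4f}")
```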

But, someone is bound to ask, can you prove that the computer is not conscious? The answer to this question is: Of course not. I cannot prove that the computer is not conscious, any more than I can prove that the chair I am sitting on is not conscious. But that is not the point. It is out of the question, for purely neurobiological reasons, to suppose that the chair or the computer is conscious. The point for the present discussion is that the computer is not designed to be conscious. It is designed to manipulate symbols in a way that carries out the steps in an algorithm. It is not designed to duplicate the actual causal powers of the brain to cause consciousness. It is designed to enable us to simulate any process that we can describe precisely.

Kurzweil is aware of this objection and tries to meet it with a slippery-slope argument: We already have brain implants, such as cochlear implants in the auditory system, that can duplicate and not merely simulate certain brain functions. What is to prevent us from a gradual replacement of all the brain anatomy that would preserve and not merely simulate our consciousness and the rest of our mental life? In answer to this, I would point out that he is now abandoning the main thesis of the book, which is that what is important for consciousness and other mental functions is entirely a matter of computation. In his words, we will become software, not hardware.

I believe that there is no objection in principle to constructing an artificial hardware system that would duplicate the powers of the brain to cause consciousness using some chemistry different from neurons. But to produce consciousness any such system would have to duplicate the actual causal powers of the brain. And we know, from the Chinese Room Argument, that computation by itself is insufficient to guarantee any such causal powers, because computation is defined entirely in terms of the manipulation of abstract formal symbols.

(2) The confusion between simulation and duplication is a symptom of an even deeper confusion in Kurzweil’s book, and that is between those features of the world that exist intrinsically, or independently of human observation and conscious attitudes, and those features of the world that are dependent on human attitudes—the distinction, in short, between features that are observer-independent and those that are observer-relative.

Examples of observer-independent features are the sorts of things discussed in physics and chemistry. Molecules, and mountains, and tectonic plates, as well as force, mass, and gravitational attraction, are all observer-independent. Since relativity theory we recognize that some of their limits are fixed by reference to other systems, but none of them are observer-dependent in the sense of requiring the thoughts of conscious agents for their existence. On the other hand, such features of the world as money, property, marriage, government, and football games require conscious observers and agents in order for them to exist as such. A piece of paper has intrinsic or observer-independent chemical properties, but a piece of paper is a dollar bill only in a way that is observer-dependent or observer-relative.

In Kurzweil’s book many of his crucial notions oscillate between having a sense that is observer-independent, and another sense that is observer-relative. The two most important notions in the book are intelligence and computation, and both of these exhibit precisely this ambiguity. Take intelligence first.

In a psychological, observer-independent sense, I am more intelligent than my dog, because I can have certain sorts of mental processes that he cannot have, and I can use these mental capacities to solve problems that he cannot solve. But in this psychological sense of intelligence, wristwatches, pocket calculators, computers, and cars are not candidates for intelligence, because they have no mental life whatever.

In an observer-relative sense, we can indeed say that lots of machines are more intelligent than human beings because we have designed the machines in such a way as to help us solve problems that we cannot solve, or cannot solve as efficiently, in an unaided fashion. Chess-playing machines and pocket calculators are good examples. Is the chess-playing machine really more intelligent at chess than Kasparov? Is my pocket calculator more intelligent than I at arithmetic? Well, in an intrinsic or observer-independent sense, of course not, the machine has no intelligence whatever, it is just an electronic circuit that we have designed, and can ourselves operate, for certain purposes. But in the metaphorical or observer-relative sense, it is perfectly legitimate to say that the chess-playing machine has more intelligence, because it can produce better results. And the same can be said for the pocket calculator.

There is nothing wrong with using the word “intelligence” in both senses, provided you understand the difference between the observer-relative and the observer-independent. The difficulty is that this word has been used as if it were a scientific term, with a scientifically precise meaning. Indeed, many of the exaggerated claims made on behalf of “artificial intelligence” have been based on this systematic confusion between observer-independent, psychologically relevant intelligence and metaphorical, observer-relative, psychologically irrelevant ascriptions of intelligence. There is nothing wrong with the metaphor as such; the only mistake is to think that it is a scientifically precise and unambiguous term. A better term than “artificial intelligence” would have been “simulated cognition.”

Exactly the same confusion occurs over the notion of “computation.” There is a literal sense in which human beings are computers because, for example, we can compute 2+2=4. But when we design a piece of machinery to carry out that computation, the computation 2+2=4 exists only relative to our assignment of a computational interpretation to the machine. Intrinsically, the machine is just an electronic circuit with very rapid changes between such things as voltage levels. The machine knows nothing about arithmetic just as it knows nothing about chess. And it knows nothing about computation either, because it knows nothing at all. We use the machinery to compute with, but that does not mean that the computation is intrinsic to the physics of the machinery. The computation is observer-relative, or to put it more traditionally, “in the eye of the beholder.”

This distinction is fatal to Kurzweil’s entire argument, because it rests on the assumption that the main thing humans do in their lives is compute. Hence, on his view, if—thanks to Moore’s Law—we can create machines that can compute better than humans, we have equaled and surpassed humans in all that is distinctively human. But in fact humans do rather little that is literally computing. Very little of our time is spent working out algorithms to figure out answers to questions. Some brain processes can be usefully described as if they were computational, but that is observer-relative. That is like the attribution of computation to commercial machinery, in that it requires an outside observer or interpreter.

Another result of this confusion is a failure on Kurzweil’s part to appreciate the significance of current technology. He describes the use of strands of DNA to solve the Traveling Salesman Problem—the problem of plotting the shortest route that takes a salesman through each city exactly once—as if it were the same sort of thing as the use, in some cases, of neural implants to cure Parkinson’s Disease. But the two cases are completely different. The cure for Parkinson’s Disease is an actual, observer-independent causal effect on the human brain. But the sense in which the DNA strands stand for or represent different cities is entirely observer-relative. The DNA knows nothing about cities.
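
For reference, the problem the DNA experiment encoded can be written out as an ordinary brute-force program; the cities below are invented, and it is only our interpretation of the output that makes it a "solution" to anything:

```python
import math
from itertools import permutations

# A brute-force sketch of the Traveling Salesman Problem: find the shortest
# tour that visits every city exactly once and returns home. The cities and
# coordinates are invented for illustration.
cities = {"A": (0, 0), "B": (3, 4), "C": (6, 0), "D": (3, -2)}

def tour_length(order):
    legs = zip(order, order[1:] + order[:1])   # close the loop back to the start
    return sum(math.dist(cities[a], cities[b]) for a, b in legs)

best = min(permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))
```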

It is worth pointing out here that when Alan Turing first invented the idea of the computer, the word “computer” meant “person who computes.” “Computer” was like “runner” or “skier.” But as commercial computers have become such an essential part of our lives, the word “computer” has shifted in meaning to mean “machinery designed by us to use for computing,” and, for all I know, we may go through a change of meaning so that people will be said to be computers only in a metaphorical sense. It does not matter as long as you keep the conceptual distinction clear between what is intrinsically going on in the machinery, however you want to describe it, and what is going on in the conscious thought processes of human beings. Kurzweil’s book fails throughout to perceive these distinctions.

The Problem of Consciousness

We are now in the midst of a technological revolution that is full of surprises. No one thirty years ago was aware that one day household computers would become as common as dishwashers. And those of us who used the old Arpanet of twenty years ago had no idea that it would evolve into the Internet. This revolution cries out for interpretation and explanation. Computation and information processing are both harder to understand and more subtle and pervasive in their effects on civilization than were earlier technological revolutions such as those of the automobile and television. The two worst things that experts can do when explaining this technology to the general public are first to give the readers the impression that they understand something they do not understand, and second to give the impression that a theory has been established as true when it has not.

Kurzweil’s book suffers from both of these defects. The title of the book is The Age of Spiritual Machines. By “spiritual,” Kurzweil means conscious, and he says so explicitly. The implications are that if you read his book you will come to understand the machines and that we have overwhelming evidence that they now are or will shortly be conscious. Both of these implications are false. You will not understand computing machinery from reading Kurzweil’s book. There is no sustained effort to explain what a computer is and how it works. Indeed one of the most fundamental ideas in the theory of computation, the Church-Turing Thesis, is stated in a way which is false.

Here is what Kurzweil says:

This thesis says that all problems that a human being can solve can be reduced to a set of algorithms, supporting the idea that machine intelligence and human intelligence are essentially equivalent.

That definition is simply wrong. The actual thesis comes in different formulations (Church’s is different from Turing’s, for example), but the basic idea is that any problem that has an algorithmic solution can be solved on a Turing machine, a machine that manipulates only two kinds of symbols, the famous zeroes and ones.
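
The corrected statement can be made concrete: a Turing machine is nothing more than a read/write head driven by a finite transition table over a small alphabet. A minimal sketch, with an invented example machine that inverts a binary string:

```python
# A minimal Turing-machine sketch of the idea stated above: a machine that
# manipulates only a small set of symbols ('0', '1', and a blank) under a
# finite transition table. The example machine, which inverts a binary
# string, is invented for illustration.
def run_turing_machine(tape, transitions, state="start", blank="_"):
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# In state "start": flip each bit and move right; halt on the first blank.
flip = {("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt")}

print(run_turing_machine("10110", flip))   # prints 01001
```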

Where consciousness is concerned, the weaknesses of the book are even more disquieting. One of its main themes, in some ways the main theme, is that increased computational power gives us good, indeed overwhelming, reason to think we are moving into an era when computing machinery artifacts, machines made by us, will be conscious, “the age of spiritual machines.” But from everything we know about the brain, and everything we know about computation, increased computational power in a machine gives us no reason whatever to suppose that the machine is duplicating the specific neurobiological powers of the brain to create consciousness. Increased computer power by itself moves us not one bit closer to creating a conscious machine. It is just irrelevant.

Suppose you took seriously the project of building a conscious machine. How would you go about it? The brain is a machine, a biological machine to be sure, but a machine all the same. So the first step is to figure out how the brain does it and then build an artificial machine that has an equally effective mechanism for causing consciousness. These are the sorts of steps by which we built an artificial heart. The problem is that we have very little idea of how the brain does it. Until we do, we are most unlikely to produce consciousness artificially in nonbiological materials. When it comes to understanding consciousness, ours is not the age of spiritual machines. It is more like the age of neurobiological infancy, and in our struggles to get a mature science of the brain, Moore’s Law provides no answers.

A Brief Recapitulation

In response to my initial review of Kurzweil’s book in The New York Review of Books, Kurzweil wrote both a letter to the editor and a more extended rebuttal on his website. He claims that I presented a “distorted caricature” of his book, but he provided no evidence of any distortion. In fact I tried very hard to be scrupulously accurate both in reporting his claims and in conveying the general tone of futuristic techno-enthusiasm that pervades the book. So at the risk of pedantry, let’s recapitulate briefly the theses in his book that I found most striking:

(1) Kurzweil thinks that within a few decades we will be able to download our minds onto computer hardware. We will continue to exist as computer software. “We will be software, not hardware” (p. 129, his italics). And “the essence of our identity will switch to the permanence of our software” (p. 129).

(2) According to him, we will be able to rebuild our bodies, cell by cell, with different and better materials using “nanotechnology.” Eventually, “there won’t be a clear difference between humans and robots” (p. 148).

(3) We will be immortal, not only because we will be made of better materials, but because even if we were destroyed we will keep copies of our programs and databases in storage and can be reconstructed at will. “Our immortality will be a matter of being sufficiently careful to make frequent back-ups,” he says, adding the further caution: “If we’re careless about this, we’ll have to load an old backup copy and be doomed to repeat our recent past” (p. 129). (What is this supposed to mean? That we will be doomed to repeat our recent car accident and spring vacation?)

(4) We will have overwhelming evidence that computers are conscious. Indeed there will be “no longer any clear distinction between humans and computers” (p. 280).

(5) There will be many advantages to this new existence, but one he stresses is that virtual sex will soon be a “viable competitor to the real thing,” affording “sensations that are more intense and pleasurable than conventional sex” (p. 147).

Frankly, had I read this as a summary of some author’s claims, I might think it must be a “distorted caricature,” but Kurzweil did in fact make each of these claims, as I show by extensive quotation. In his letter he did not challenge me on any of these central points. He conceded by his silence that my understanding of him on these central issues is correct. So where is the “distorted caricature”?

I then point out that his arguments are inadequate to establish any of these spectacular conclusions. They suffer from a persistent confusion between simulating a cognitive process and duplicating it, and from an even worse confusion between the observer-relative, in-the-eye-of-the-beholder sense of concepts like intelligence, thinking, etc., and the observer-independent intrinsic sense.

What has he to say in response? Well, about the main argument he says nothing. About the distinction between simulation and duplication, he says he is describing neither simulations of mental powers nor re-creations of the real thing, but “functionally equivalent re-creations.” But the notion “functionally equivalent” is ambiguous precisely between simulation and duplication. What exactly functions to do exactly what? Does the computer simulation function to enable the system to have external behavior, which is as if it were conscious, or does it function to actually cause internal conscious states? For example, my pocket calculator is “functionally equivalent” to (indeed better than) me in producing answers to arithmetic problems, but it is not thereby functionally equivalent to me in producing the conscious thought processes that go with solving arithmetic problems. Kurzweil’s argument about consciousness is based on the assumption that the external behavior is overwhelming evidence for the presence of the internal conscious states. He has no answer to my objection that once you know that the computer works by shuffling symbols, its behavior is no evidence at all for consciousness. The notion of functional equivalence does not overcome the distinction between simulation and duplication; it just disguises it for one step.

In his letter he told us he is interested in doing “reverse engineering” to figure out how the brain works. But in the book there is virtually nothing about the actual working of the brain and how the specific electro-chemical properties of the thalamo-cortical system could produce consciousness. His attention rather is on the computational advantages of superior hardware.

On the subject of consciousness there actually is a “distorted caricature,” but it is Kurzweil’s distorted caricature of my arguments. He said, “Searle would have us believe that you can’t be conscious if you don’t squirt neurotransmitters (or some other specific biological process).” Here is what I actually wrote: “I believe there is no objection in principle to constructing an artificial hardware system that would duplicate the causal powers of the brain to cause consciousness using some chemistry different from neurons.” Not much about the necessity of squirting neurotransmitters there. The point I made, and repeat again, is that because we know that brains cause consciousness with specific biological mechanisms, any nonbiological mechanism has to share with brains the causal power to do it. An artificial brain might succeed by using something other than carbon-based chemistry, but just shuffling symbols is not enough, by itself, to guarantee those powers. Once again, he offers no answer to this argument.

He challenges my Chinese Room Argument, but he seriously misrepresents it. The argument is not the circular claim that I do not understand Chinese because I am just a computer, but rather that I don’t, as a matter of fact, understand Chinese and could not acquire an understanding by carrying out a computer program. There is nothing circular about that. His chief counterclaim is that the man is only the central processing unit, not the whole computer. But this misses the point of the argument. The reason the man does not understand Chinese is that he does not have any way to get from the symbols, the syntax, to what the symbols mean, the semantics. But if the man cannot get the semantics from the syntax alone, neither can the whole computer. It is, by the way, a misunderstanding on his part to think that I am claiming that a man could actually carry out the billions of steps necessary to carry out a whole program. The point of the example is to illustrate the fact that the symbol manipulations alone, even billions of them, are not constitutive of meaning or thought content, conscious or unconscious. To repeat, the syntax of the implemented program is not semantics.

Concerning other points in his letter: He says that I am wrong to think that he attributes superior thinking to Deep Blue. But here is what he wrote in response to the charge that Deep Blue just does number crunching and not thinking: “One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have the time to think very much during the tournament” (p. 290).

He also says that on his view Moore’s Law is only a part of the story. Quite so. In my review I mention other points he makes such as, importantly, nanotechnology.

I cannot recall reading a book in which there is such a huge gulf between the spectacular claims advanced and the weakness of the arguments given in their support. Kurzweil promises us our minds downloaded onto decent hardware, new bodies made of better stuff, evolution without DNA, better sex without the inconvenience of actual partners, computers that convince us that they are conscious, and above all personal immortality. The main theme of my critique is that the existing technological advances that are supposed to provide evidence in support of these predictions, wonderful though they are, offer no support whatever for these spectacular conclusions. In every case the arguments are based on conceptual confusions. Increased computational power by itself is no evidence whatever for consciousness in computers.

Copyright © 2002 by the Discovery Institute. Used with permission.

Mind·X Discussion About This Article:

Searle is stuck in the past
posted on 06/18/2002 12:57 PM by derecho@prodigy.net


What Gary Kasparov did during his famous chess match is the same thing that Deep Blue did. Calculate. Just because Deep Blue did not think about what the chess pieces are, or the history of chess matches, or other unnecessary junk does not mean that it is not intelligent. What Gary Kasparov's brain did is...at its most basic level...the same thing Deep Blue did. It is just that Deep Blue was more focused. Not only does Kasparov think slower...he thinks of many things that are not relevant to winning the chess match. Deep Blue could be programmed to recognize the pieces, the history of chess, the different schemes for winning...etc., but it is all irrelevant to winning the game. As far as the Chinese room argument goes...I do not get the logic. Whether there is a human or a computer in the room does not seem to matter. The tangible result or the "output" of the room is all that matters. The Turing test is essentially equivalent to the Chinese room argument. Let us say a computer passes the Turing test. It is programmed so well that no human on the planet can say whether it is human or computer. Since we (humans) are the final arbiters of what is human (and essentially what is conscious) then as long as no one looks inside the "Chinese Room" or behind the wall of the Turing test, for all practical purposes the "being" behind the wall is human. With respect to human beings we are now starting to look "behind the wall" (into the skull - the Chinese room). What do we find? A machine. Gary Kasparov and Deep Blue can both be deconstructed into electrons, protons, and neutrons. We can mix all the particles together. If it were possible to perfectly reconstruct both Deep Blue and Kasparov, what is the difference between them? Only the pattern of reconstruction. They are both machines made of the same basic particles.

Re: Searle is stuck in the past
posted on 06/18/2002 2:43 PM by tomaz@techemail.com


derecho!

There are two kinds of people. Those who understand this - and those who do not.

And there is a third kind of people. Those who will understand it. If not moderately soon - they belong to the second kind already.

We still have several years time, to fight them on this, and other forums, though. ;)

Good work, BTW.

- Thomas


Re: Searle is stuck in the past
posted on 06/18/2002 4:18 PM by thp@studiocotopussy.com


On top of that, the Rain Man syndrome should also be an indicator that humans are in fact able to work in a more focused way, like computers. And the fact that these people are mostly not able to communicate properly with the rest of us should be an indicator that they in fact resemble Deep Blue more than any normal person does.

Tomaz, as always I agree with you completely.

Re: Searle is stuck in the past
posted on 06/18/2002 5:00 PM by grantc4@hotmail.com


Saying that Deep Blue beat Gary Kasparov at chess is a lot like saying that the poker machine that took my money at the casino beat me at poker. The difference for Gary Kasparov and Deep Blue is that Gary thought he was playing chess and Deep Blue didn't think anything. It was merely carrying out a set of instructions created by a programmer based on mathematical calculations. The same thing my calculator does when I ask it to calculate the interest rate on a loan for a house and the monthly payment.

The calculator doesn't know what it is making the calculations for. It just adds up the 1s and 0s and spits out the result. I'm the one who takes that result and uses it to evaluate the relative merits of borrowing money from a particular lender and buying a house based on the calculation.

Deep Blue is not playing chess, either. The programmer of Deep Blue is using the calculations produced by the computer to make decisions about which pieces to move in response to where Gary Kasparov places his pieces on the board. The fact that the result makes Gary feel like he lost a game to something that was not playing a game is irrelevant. That's Gary's problem.

But the algorithm that produced this result was created by a human being, not a computer. The computer exhibited no thought or intelligence whatsoever. It just added up the 1s and 0s and spit out the result. It did not come up with a strategy or a method of outwitting a chess master. The programmer did that. So what did Deep Blue do that my calculator or the slot machine at the casino can't do? In my estimation, nothing.

Re: Searle is stuck in the past
posted on 06/18/2002 6:03 PM by derecho@prodigy.net


"The difference for Gary Kasparov and Deep Blue is that Gary thought he was playing chess and Deep Blue didn't think anything. It was merely carrying out a set of instructions created by a programmer based on mathematical calculations."

To this...all I can say is that Gary Kasparov was just merely carrying out a set of instructions also. Deep Blue is a simple program compared to the human mind. If we had sufficient tools to examine what neurons in Gary's brain were firing at what time during the match/moves we would find he was doing the exact same thing as Deep Blue. Again, Deep Blue was more focused and can think faster and therefore was able to defeat Kasparov.

Re: Searle is stuck in the past
posted on 06/20/2002 2:41 AM by yhuang@sfu.ca


To grantc4@hotmail.com:

People generally like to think of themselves as special somehow. Whether it is because we've got a "soul" or we can think, we need to feel that there is something we can do better than anything else out there. There is a real fear that if we discovered enough, we might find that we are not exempt from the laws of nature. That we might be a "machine" full of moving parts and gears that control our every action like programs control Deep Blue.

I think at the most basic level of your arguments, you are saying that "If it doesn't make decisions the way a human does, then it ain't real thinking." What does it matter if in the end, computer calculations beat out our much flaunted "thinking" processes? I bet Paul Bunyan the lumberjack is probably still complaining that a gas-powered saw isn't a "real" tree cutter.

I agree with derecho@prodigy.net in that people "make calculations" according to some physical law or logic as well. For a really detailed explanation of this, just take a look at "The Brain" section of KurzweilAI.net.

Maybe once that poker machine is programmed to do mortgages too, its capacity for calculations will be one step closer to ours.

Respectfully yours.
YJ

Re: Searle is stuck in the past
posted on 06/20/2002 10:22 AM by grantc4@hotmail.com


yhuang,

You missed my point, which is that the computer's programmer made all the decisions about strategy and game analysis. All the computer did was carry out the programmer's strategy, which was a brute force approach to examining every possible move on the board. The chess master, on the other hand, devised his own strategy and made his own decisions and, until that last game, consistently came up with a winning game plan.

What you guys seem to me to be doing is giving the computer credit for the work of the programmer and saying the computer came up with the strategy that beat Kasparov. There's more to the game of chess than running through an algorithm. Sure, Kasparov used algorithms. But his was the mind that either chose or devised them. For Deep Blue, that's what the programmer did. So who really beat Kasparov? Was it the strategist who devised the method and planned the strategy or the machine that carried it out?

To my mind, without the strategist, the machine wouldn't have had a chance. The programmer was making changes to his program right up to the end of the game. So who was really doing the thinking there? IMHO it was a person who really beat Kasparov, not a machine.

Grant

Re: Searle is stuck in the past
posted on 06/20/2002 1:06 PM by yhuang@sfu.ca


To grantc4@hotmail.com:

Thanks for your quick reply. I'll try to convince you of my point of view.

In short I think what you are saying is that there is a human programmer behind the computer's strategy, but there isn't a programmer behind the chess master. So therefore the computer doesn't think and the chessmaster does. Am I correct?

"Sure, Kasparov used algorithms. But his was the mind that either chose or devised them. For Deep Blue, that's what the programmer did. "

If it is simply the case that the guy who programmed a machine is the ultimate thinker, then if a computer programmed another computer (it's possible even now in a limited way), then that makes the parent computer a "thinker"?

As to the "brute force" approach to examining every possible move (and comparing it with historical moves), it is a legitamate strategy used by chess masters as well. I fail to see how it is celebrated when a chessmaster does it but not when a computer does it.

"You missed my point, which is that the computer's programmer made all the decisions about strategy and game analysis."

I very much doubt the human programmers behind Deep Blue are so good that they can replicate what Deep Blue can do with chess. They probably can't even predict Deep Blue's next move. It would be kind of like giving a football coach (the programmer) all the credit for a team victory (Deep Blue). Sure the coach came up with a strategy, but the teammates have to implement it.

To my mind, if the prerequisite of thinking is merely being able to program another computer (biological or otherwise), then some computers are already "thinking".

I don't think that the only way to think is the way people do it. It would be a bit unfair. Like saying a car needs to run on legs or else it won't really be moving. If the end result between a thinking person and a machine is the same, we might as well consider a machine to be thinking no matter how it does it...

Respectfully yours
YJ

Re: Searle is stuck in the past
posted on 06/20/2002 5:34 PM by grantc4@hotmail.com


>If it is simply the case that the guy who programmed a machine is the ultimate thinker, then if a computer programmed another computer (it's possible even now in a limited way), then that makes the parent computer a "thinker"?

I agree that the computer may be thinking in a limited way, but what we were doing was comparing the thinking of the machine with the thinking of Kasparov and I don't believe there is any comparison between the two. Some day the computer may surpass us, but that day is still far off and was not even close to being realized by Deep Blue. It's like comparing the mind of Sun Tzu with an abacus.

Respectfully,

Grant

Re: Searle - a programmer behind the chessmaster
posted on 07/04/2002 8:36 AM by tharsaile@yahoo.com


Y Huang, I agree

>>If it is simply the case that the guy who programmed a machine is the ultimate thinker, then if a computer programmed another computer (it's possible even now in a limited way), then that makes the parent computer a "thinker"? <<

>>In short I think what [grantc4@hotmail.com is] saying is that there is a human programmer behind the computer's strategy, but there isn't a programmer behind the chess master<<

And I was wondering, could we also argue that there is a programmer of sorts behind the chessmaster? Evolution I mean. It may be a collection of 'blind' processes, but evolution built and 'programmed' the human brain. Environment and interaction with other humans may also have programmed and 'updated' a growing human, the same way that Deep Blue's programmer was fiddling with it up until the end.

Re: Searle - a programmer behind the chessmaster
posted on 07/04/2002 11:05 AM by tomaz@techemail.com


> And I was wondering, could we also argue that there is a programmer of sorts behind the chessmaster? Evolution I mean.

Exactly! A simple (evolution) algorithm's steps mounted up - and we got what we now call intelligence.

There is no thin red line dividing "natural" algorithms from "intelligent" ones. It's a question of degree. And we use some simple algorithms in our thinking process ... and some not that simple.

We are executing them faster than most systems do. But Deep Blue is even faster in executing some of them.

Eventually ... every and all algorithms will be executed faster than we run them today. And many new algorithms that we are unable to run at all - also.

Is there anything else? I think not.

- Thomas




Re: Searle - a programmer behind the chessmaster
posted on 01/07/2004 12:02 PM by kridya

But we don't know evolution is the grand programmer. Some of us just postulate that it is.

-chris

Re: Searle - a programmer behind the chessmaster
posted on 01/07/2004 12:25 PM by Tomaz_(Thomas)_Kristan

Some of us who postulate that will try to convert it into a usable technology. The most usable so far.

I guess the chances are that we will be proved right. Worth a try.

Re: Searle is stuck in the past
posted on 05/27/2003 10:06 AM by pookie

Kasparov came up with his own strategy because he learnt to play chess using some set of learning algorithms. Deep Blue didn't have any learning algorithms and, as a result, had to be "given" a set of chess-playing algorithms.

Gary's brain ran his chess algorithms.
Deep Blue's hardware ran its chess algorithms.

Deep Blue didn't reflect on, or take pride in, its win because it only has the capacity to play chess.

Intelligence is not a good word to contrast what Kasparov and Deep Blue did, but essentially they did the same thing:

low-level processing of higher-level algorithms that leads to an intelligent output. They may have used different algorithms, but they were both playing chess.

Re: Searle is stuck in the past
posted on 07/02/2002 6:57 PM by omhats@optonline.net

Q: Describe the quality of the chess-playing experience.

Q: What is the value of playing chess?

Somehow I think Gary can give us more interesting answers to these questions than Deep Blue can, or ever will. Unless qualia can be totally reduced to quantity.

--Tom

Re: Searle is stuck in the past
posted on 07/04/2002 11:32 AM by yhuang@sfu.ca

Q: Describe the quality of the chess-playing experience.

Well, Gary has better speech tools than Deep Blue does, so of course he might be able to "explain" his moves better. I'm not an avid chess player myself, so I cannot say for sure, but I have a feeling that when you get up there looking at hundreds of moves, you'd sound somewhat like a computer too when you explain it. ("I looked at those likely possibilities and past history and decided on blah move.")

Q: What is the value of playing chess?

Just because we can make better pitching machines than players nowadays doesn't mean that baseball is obsolete. Just because computers can memorise the correct spelling of every word doesn't mean spelling bees are obsolete.

If the value of playing chess isn't to prove "we are more intelligent than everything in the world" but lies more in the competitive spirit of any sporting event, then chess is still a worthwhile endeavour. (The chess championships will just outlaw computers.)

A question for you all: do you know of any computer that plays Go well? I hear it is nigh impossible to search out the possibilities of Go, since there are far too many...

Respectfully
-YJ

Re: Searle is stuck in the past
posted on 03/03/2003 4:33 PM by Shafee

1. By stating that a chess player computes, you are assuming what you so blatantly seem to argue for, from your position in the future, I presume.

2. Obviously we do not know how certain "machines" (e.g., the human brain) create consciousness. We do not even know how such machines evolved from non-conscious ones. But we definitely know that they did not evolve by increasing computational power.

3. Try to imagine applying the Turing Test to the question "Are women men?" to see that the Turing Test is faulty.

4. Your chair and your body are made up of the same particles. Does that mean that you are a chair in a different format?

Re: Chapter 2: I Married a Computer
posted on 06/20/2002 1:26 AM by g2002@prisco.info

A simple confutation of the Chinese Room, or Chess Room, argument:
Consciousness requires that there be both something to be conscious of, and someone to be conscious of it. In the Chinese Room there is the something (the rules of chess or Chinese) but not the someone: the system (room + I/O + rules + human) just does not have the properties that permit the emergence of consciousness.
This does not mean that the same rules cannot be implemented on a system which does have these properties: in that case we would have someone who is conscious of the fact that (s)he is applying the rules.
---
Giu1i0

Re: Chapter 2: I Married a Computer
posted on 06/20/2002 3:06 AM by yhuang@sfu.ca

To Giu1i0:

Thanks for the thoughtful post. I'll try my best to poke some holes in that argument.

"Consciousness requires that there are both something to be conscious of, and someone to be conscious of it."

Forgive my confusion, but to me it appears to be more a play on words than a logical definition.

I mean, if I program a computer with "I am conscious = True", the computer will assume this to be true as well. How do we know for certain that we are really conscious and not just hardwired (by our neurons) to take it for a fact that we are? Whatever way we try to prove that we are conscious, we could probably program some computer to act the same, given enough time.
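
That hard-wiring takes exactly one line (a deliberately silly sketch in Python; the class and flag are invented for illustration, nothing more):

    class Machine:
        def __init__(self):
            self.conscious = True   # a hard-wired "belief", not evidence

        def report(self):
            # the machine can only consult its own wiring, as we consult ours
            return "I am conscious." if self.conscious else "I am not."

    print(Machine().report())   # prints: I am conscious.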

And if by definition a conscious thing is a "someone", then that computer will be a "someone" too. The fact that we reject this now might merely be because computers cannot behave completely human yet.

Maybe thinking people are just made up of a room (our brain), I/O (eyes, ears and mouth), rules (How to Behave Like a Human for Dummies)...

Although I for one am hoping they find something unique about people that would prove our consciousness. (Darn, I guess I'm hardwired to believe I'm conscious too...)

YJ

Chess vs. Language
posted on 06/20/2002 2:17 PM by s_douglas_peters@yahoo.com

Why consider Chess to be a standard of intelligence? Chess is a few-rule simple-domain task that few humans really master. Rather, let's consider a many-rule complex-domain task that nearly all humans master at an early age: language.

Just look at the Loebner Prize transcripts to see what great strides have been made in the last decade in this area... (there have been none). And once one layers speech recognition into the task appropriately, we observe computers failing miserably.

Is this simply a question of computation? Not at all. There are "no-computational-limit" speech recognition competitions for which the "winners" achieve less than 70% word accuracy in relatively quiet conditions. In noise, speech recognition is considerably more difficult.
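
For reference, "word accuracy" in such evaluations is normally derived from the edit distance between the recognizer's output and a reference transcript. A minimal sketch of that measurement in Python (standard dynamic-programming Levenshtein over words; the example sentences are invented):

    def word_errors(ref, hyp):
        # minimal edits (substitutions + insertions + deletions) over words
        r, h = ref.split(), hyp.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                cost = 0 if r[i - 1] == h[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(r)][len(h)]

    ref = "the chinese room argument is about semantics"
    hyp = "a chinese room argument is about semantic"
    accuracy = 1 - word_errors(ref, hyp) / len(ref.split())
    print(f"word accuracy: {accuracy:.0%}")   # 71% for this toy pair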

Searle's requirement for reaching the semantics past the syntax is the essence of the Chinese Room argument, and while his critics can avoid the issue when it comes to the problem of chess, they will be hard pressed to avoid it for the problem of language.

regards, dp

Re: Chess vs. Language
posted on 06/20/2002 3:20 PM by thp@studiooctopussy.com

How on earth can you compare 10,000 years of evolution with the computer's roughly 60 years?

When a human calculates, he or she uses logic; the computer does the same, just much faster. The human brain could in fact have been wired so that we were walking calculators if needed, but evolution did not favor it.

Chess playing, when two humans play against each other, is ultimately about thinking as many steps ahead as possible. Deep Blue could of course do much more calculation of the many possible future moves, but it could not get an idea.
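
That "thinking steps ahead" is exactly the part a machine mechanizes. A minimal sketch of the lookahead at the heart of any chess engine (plain minimax in Python, no pruning; the toy game below stands in for real chess logic):

    def minimax(state, depth, maximizing, moves, evaluate):
        # score a state by searching `depth` plies ahead
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state)          # static judgment at the horizon
        scores = (minimax(s, depth - 1, not maximizing, moves, evaluate)
                  for s in options)
        return max(scores) if maximizing else min(scores)

    # toy game: a state is a number, each move adds 1 or 2, bigger is better
    moves = lambda n: [n + 1, n + 2]
    evaluate = lambda n: n
    print(minimax(0, 6, True, moves, evaluate))   # 9: +2, +1, +2, +1, +2, +1

Deep Blue's advantage was doing this, with an enormous evaluation function, hundreds of millions of times per second.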

And now Deep Blue has won. Focusing on one part of the myriad of human perception and thinking, it actually won. That does not make it human, but it does not need to be human anyway. Animals are sophisticated computers; they are not self-aware like us, but they are favored for various survival skills, again focusing and reacting.

The point is that so will computers: they will be favored, and in time they will develop a form of awareness that you and I will never understand. Their agenda is most probably also going to be different.

Re: Chess vs. Language
posted on 06/20/2002 3:27 PM by s_douglas_peters@yahoo.com

> How on earth can you compare 10,000 years of
> evolution with the computer's roughly 60 years?
Frankly, I wasn't. But now that you mention it, I will: your 10,000 years of evolution had no intelligent design involved, did they? Kurzweil claims that a couple of decades will be sufficient to achieve the "singularity". Do you disagree with him? The "evolution" of technology is intelligence-directed.

The rest of your post seems not to address my points at all. Were you perhaps responding to someone else?

regards, dp

Re: Chess vs. Language
posted on 06/20/2002 3:33 PM by thp@studiooctopussy.com

The reason I am not impressed by the Chinese Room is that it really boils down to (once you eliminate all of the excess baggage): "I don't think machines can be conscious because I just don't think they can." This is not an argument; it is circular, and an inappropriate extension of a naive intuition which is hard-wired into our neurology but nevertheless misleading (I can say something about the hard-wiring later).

It is a circular argument. If you don't believe machines can be conscious, then of course a Chinese Room cannot be conscious. But you are assuming the conclusion! Formally the Chinese Room is equivalent to a computer, yet he constructs an elaborate example including a person in it just to make the intellectual subterfuge even more complete. I have seen other examples of this: making a giant brain out of beer cans and valves, etc., and asserting that a giant complex of beer cans and valves could not possibly be conscious, etc. The beer-can brain argument doesn't have the little homunculus inside of it carrying out the operations, but it is the same stupid idea: blow up a computer so it is really, really big and you can see the insides of it and, whammo, it must not be conscious. Yet: why? No answer to this; it is just "obvious". But it is only obvious if you were convinced of the result to begin with. This is philosophy? It is utterly ludicrous. I mean, any field that could take this sort of absurd argument seriously cannot possibly be worth much.

To the extent the Chinese Room is aware, it is obviously not the human operator that would be aware. It would be the room itself, as a system.

But again, the sad thing about analytical philosophy, even the "scientific" sort, is that, ironically, they don't really even understand the implications of philosophy of science: i.e., Kuhn, Lakatos, etc. Any philosophy that cannot come to terms with Kuhn or Lakatos is philosophy that I don't care to bother with.

Re: Chess vs. Language
posted on 06/20/2002 3:44 PM by s_douglas_peters@yahoo.com

I'm afraid that you misunderstand the Chinese Room. The essence of the argument is the great divide between phonetics and semantics. The reason that I mentioned language is because spanning this divide is integral to solving the problems of Natural Language Understanding. One can pretend, as you have done, that the divide does not exist by considering simple problems like chess.

The fact is that researchers are NOT making incremental steps toward an NLU solution, in spite of Moore's Law. Frankly, a little study of the history of NLU work will give anyone a greater appreciation of Searle's insights, Kuhn and Lakatos notwithstanding.

regards, Doug

Re: Chess vs. Language
posted on 06/20/2002 4:09 PM by thp@studiooctopussy.com

Of course the operator of the room does not understand Chinese. But then, the neurotransmitters and neurons in my brain do not understand English, either. There is absolutely nothing whatsoever surprising or strange about that.

Bateson's argument goes like this: "awareness" does not arise as a _state_ of a material substance, but rather it arises because of a causal arrangement of a feedback system, and is therefore fundamentally holistic. I.e., it is not localized in space and time, and yet it is not separate from physical process. Bateson's model basically says that consciousness and mental phenomena are based in a material substrate but are not themselves material; rather, they are based on information relationships between successive material states when organized in certain kinds of feedback loops of a certain kind of complexity.

I like this view because it is both monist and yet it acknowledges and to some extent explains why we have this impression of mental phenomena as not being physical in some sense. Bateson says yes, mental phenomena are not physical.

For example, suppose I had a room full of billions of petri dishes, each one of them containing one live neuron. Now, let's say I took a snapshot of my brain, somehow, and determined the precise state of every one of my neurons at time t. Then I electrochemically stimulated each one of the neurons in these dishes to be in precisely the same state that the neurons in my brain are in at time t. Would the room full of disconnected neurons in petri dishes then, for a split second, be "in the same mental state" as my brain at time t?

No. Because mental state, as Bateson convincingly argues, depends upon not just physical state, but comparison of physical state from one moment to the next: difference. This difference can only be realized when there are feedback loops. Therefore: MENTAL STATES ARE NOT PHYSICAL STATES, but rather require relationships between physical states in feedback loops. A beautiful viewpoint which really gets far less airplay, so to speak, than it deserves.
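
The difference-in-a-feedback-loop idea is easy to caricature in code (a toy sketch in Python; the point is only that the signal lives in the comparison of successive states, not in any single frozen state):

    def differences(states):
        # a "mental event", on this view, is carried by the change
        # between successive physical states, not by a snapshot
        previous = None
        for state in states:
            if previous is not None:
                yield state - previous   # the information-bearing quantity
            previous = state

    print(list(differences([5, 5, 7, 4])))   # [0, 2, -3]

A single snapshot, like the room of petri dishes, yields nothing at all.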

Re: Chess vs. Language
posted on 06/21/2002 7:08 AM by tomaz@techemail.com

> MENTAL STATES ARE NOT PHYSICAL STATES, but rather require relationships between physical states in feedback loops


That we are not material is very much in accordance with my views!

What we are is data (a process) running on a material substrate. It may very well be that energy is a lousy substrate. Or that it is more than excellent. Maybe both ways.

But we _are_ _pure_ _data_! An emergent property of some (evolution-made) computation. Running wherever we can.

We are not Homo sapiens. We are a _data_ _parasite_ on that animal.

What's still bothering me is the (non)possibility of spatial computation, besides temporal ...

Well, anyway ... I agree with this "relationships between physical states in feedback loops" theory.

- Thomas

Re: Chapter 2: I Married a Computer
posted on 07/02/2002 6:50 PM by omhats@optonline.net

But what is the quality of the chess-playing experience? Of what value is playing chess?

I bet Gary can give us a more interesting answer than Deep Blue.

--Tom

Re: Chapter 2: I Married a Computer
posted on 07/02/2002 10:41 PM by derecho@prodigy.net

Sure it would be more interesting... to a human. But that doesn't mean that Deep Blue was not thinking (at least at some basic level). Gary Kasparov's program is just more complicated and can give more interesting answers.

Re: Chapter 2: I Married a Computer
posted on 07/23/2002 12:41 PM by jonkc@att.com

You already know about Searle's room; now I want to tell you about Clark's Chinese Room. You are a professor of Chinese Literature and are in a room with me and the great Chinese philosopher and poet Laotse. Laotse writes something in his native language on a piece of paper and hands it to me. I walk 10 feet and give it to you. You read the paper and are impressed with the wisdom of the message and the beauty of its language. Now I tell you that I don't know a word of Chinese; can you find any deep implications in that fact? I believe Clark's Chinese Room is just as profound as Searle's Chinese Room.
Not very.

All Searle did was come up with a wildly impractical model (the Chinese Room) of an intelligence in which a human being happens to play a trivial part. Consider what's in Searle's model:

1) An incredible book, larger than the observable universe even if the writing were microfilm-sized.
2) An equally large or larger book of blank paper.
3) A pen, several trillion galaxies of ink, and, oh yes, I almost forgot, your little man.

Searle claims to have proven something profound when he shows that a trivial part does not have all the properties that the whole system does. In his example the man could be replaced with a simple machine made with a few vacuum tubes or even mechanical relays, and it would do a better job. It's like saying that the synaptic transmitter dopamine does not understand how to solve differential equations, that dopamine is a small part of the human brain, and that thus the human brain does not understand how to solve differential equations.

Yes, it does seem strange that consciousness is somehow hanging around the room as a whole, even if slowed down by a factor of a billion trillion or so, but no stranger than the fact that consciousness is hanging around 4 pounds of gray goo in our head, and yet we know that it does. It's time to just face the fact that consciousness is a property matter has when it is organized in certain complex ways.

John K Clark jonkc@att.net

Re: Chapter 2: I Married a Computer
posted on 07/23/2002 2:18 PM by tomaz@techemail.com

Well said. Very well.

- Thomas

Re: Chapter 2: I Married a Computer
posted on 07/24/2002 3:02 PM by thp@studiooctopussy.com

Yes that was very well said.

Re: Chapter 2: I Married a Computer
posted on 07/28/2002 7:30 PM by daveh47@mindspring.com

I agree that "consciousness is a property matter has when it is organized in certain complex ways", but I think we're still a long way from truly understanding what those "complex ways" are. I think Ray Kurzweil's visions will
eventually come to fruition, but not until we really undestand how the brain works and what, from an objective scientific standpoint, consciousness really is. As a programmer, I cannot believe that all my thoughts and especially my *emotions* (the most important part of consciousness, IMHO) can be reduced to function calls, for-loops, and if-then-else blocks. We have to figure out how the brain/conciousness funtions and then build a new kind of "computer" to emulate it (Quantum computers, perhaps? I dunno...). Trying to shoehorn our minds into a conventional computational model, I think is doomed to failure.

--
Dave

Re: Chapter 2: I Married a Computer
posted on 07/29/2002 1:11 PM by tomaz@techemail.com

> As a programmer, I cannot believe that all my thoughts and especially my *emotions* (the most important part of consciousness, IMHO) can be reduced to function calls, for-loops, and if-then-else blocks.

Well, at least we can emulate the physics/chemistry with C++ or Java. Can't we?

And that's enough. A BASIC program could simulate the whole human body, with brains inside, with thinking inside.
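
In the spirit of that claim (and only as a toy), here is what "emulating the chemistry" looks like at its absolute smallest, in Python rather than C++/Java/BASIC: forward-Euler integration of a single invented rate equation, A -> B with rate constant k. A body would be this, times a few trillion:

    # toy chemistry: A -> B, d[A]/dt = -k*[A]
    k, dt = 0.5, 0.01
    a, b = 1.0, 0.0
    for _ in range(1000):            # simulate 10 seconds in small steps
        flow = k * a * dt
        a, b = a - flow, b + flow
    print(round(a, 4), round(b, 4))  # [A] decays toward exp(-k*t), about 0.0067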

That's the whole point.

- Thomas

Re: Chapter 2: I Married a Computer
posted on 07/29/2002 4:45 PM by azb@llnl.gov

> > As a programmer, I cannot believe that all my thoughts and especially my *emotions* (the most important part of consciousness, IMHO) can be reduced to function calls, for-loops, and if-then-else blocks.

> Well, at least we can emulate the physics/chemistry with C++ or Java. Can't we?

Yeah. If we want, we can emulate all of the physics/chemistry with a sufficiently large abacus. A bit slow, but so what? It's the computation that counts, right?

But if QM effects play a "significant role" in living systems, no less conscious sentience, the abacus may not suffice.

Cheers! ____tony____

Re: Chapter 2: I Married a Computer
posted on 08/10/2002 7:38 AM by tomaz@techemail.com

tony,

What if we are transTuring machines?

In that case, we will need some QM devices built into the computorium. Some qubit code written.

We will probably have that all, anyway.

But I doubt we are transTuring thinking machines.

- Thomas

Re: Chapter 2: I Married a Computer
posted on 08/10/2002 8:47 PM by azb0@earthlink.net

Tomaz,

The qubit-processing quantum computer (TransTuring, if you like), although exploiting QM indeterminacy to enhance efficiency and parallelism, is still effectively executing "algorithm". As long as it is limited to algorithm execution, there is nothing that a qubit-processing computer can do that an ordinary Turing machine cannot, given enough time to execute.
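
For what it's worth, that claim is easy to make concrete: a classical program can simply track the quantum state vector, just exponentially slowly as qubits are added. A minimal sketch (Python with numpy; one Hadamard gate on one qubit):

    import numpy as np

    state = np.array([1.0, 0.0])                   # one qubit, starting in |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    state = H @ state
    print(np.abs(state) ** 2)   # [0.5 0.5]: a fair quantum coin, done classically

    # the catch: n qubits need 2**n amplitudes, hence "given enough time" (and memory)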

Just because a great deal of intelligent behavior, and apparent mental processing, can be modelled algorithmically with a computer/Turing machine, it does not follow that algorithmic processing is all that is significant to "mind", no less the entertainment of a "sense of consciousness."

I do not posit that the quality of consciousness cannot be manifest in non-biological substrate. Indeed, I see no reason that consciousness cannot manifest in silico, or other substrate ... but that is not the same as saying that "the substrate matters not at all".

*** We are manifestations of a physics that can be modelled, and not merely manifestations of a model. ***

One can posit a perfectly operational Turing machine, manifest entirely with paper and pencils. A largely un-intelligent robot reading, writing, and erasing 0/1 symbols, and thus executing the given algorithm. The robot is just a "dumb processor" moving the symbols about. The symbols it reads and writes are both its input/output data, as well as the very "algorithm" it is executing. Thus, if the algorithm is sufficiently complex and self-referential, this "system" can effectively re-write and improve upon the very algorithm it is executing. If this system is enclosed in a box, and operated "fast enough", it could well appear quite intelligent to us as outside observers, and might even convince us of a claim to self-aware consciousness. Indeed, lacking the ability to "peer inside", we would have no right to deny its claim.
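
That paper-and-pencil machine is easy to caricature in code (a minimal Turing-machine interpreter in Python; the "dumb robot" is the while-loop, the table is the algorithm it blindly follows, and the toy table is invented):

    def run_turing(tape, table, state="start"):
        # the "dumb robot": read, write, move, change state -- nothing more
        pos = 0
        while state != "halt":
            symbol = tape.get(pos, "0")
            write, move, state = table[(state, symbol)]
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return tape

    # toy table: walk right, flipping 1s to 0s, halt at the first blank ("0")
    table = {
        ("start", "1"): ("0", "R", "start"),
        ("start", "0"): ("1", "R", "halt"),
    }
    print(run_turing({0: "1", 1: "1"}, table))   # {0: '0', 1: '0', 2: '1'}

Store the table itself on the tape, and you have the self-rewriting version described above.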

But this is no guarantee that it would be "conscious" the way that you or I use the term, as a "waking awareness".

Our evolved neural-structures exhibit something that is akin to "computation", and so there is certainly something to be gained in exploring and emulating that functionality. But there may also be what I call "proximity effects" between neurons, ancillary to the "wiring diagram". These effects (EM, QM, other) may also contribute to "sense of consciousness", and not every substrate able to support "algorithm" may be sufficient to support these other physical effects. I suspect that the pencil-and-paper turing machine may be insufficient in this regard.

People tend to notice that the brain supports "electrical activity", and that logic circuits also exercise electrical activity, see both as doing "algorithms", and make the leap that mind is algorithm. Perhaps the "physics" itself is important.

I view mind as a purely physical manifestation, but requiring perhaps "more" of the physics than the behaviors of arbitrary components able to manifest "algorithm".

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 4:18 AM by tomaz@techemail.com

tony,

True randomness - which officially exists in QM - IS a transTuring phenomenon.

I doubt however, that it has any real significance in our mind. A Turing pseudorandom is just as good.
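
(A pseudorandom generator, for reference, is just more algorithm. A minimal sketch of a linear congruential generator in Python, using the textbook ANSI C constants; deterministic, yet statistically random-looking:)

    def lcg(seed, a=1103515245, c=12345, m=2**31):
        # same seed, same "random" stream, forever -- no QM required
        while True:
            seed = (a * seed + c) % m
            yield seed / m

    gen = lcg(42)
    print([round(next(gen), 3) for _ in range(5)])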

> But there may also be what I call "proximity effects" between neurons, ancillary to the "wiring diagram". These effects (EM, QM, other) may also contribute to "sense of consciousness", and not every substrate able to support "algorithm" may be sufficient to support these other physical effects. I suspect that the pencil-and-paper Turing machine may be insufficient in this regard.

Why do you think so? Proximity effects which are classical and EM could easily be dealt with by a Turing machine.

At what level does something that can't be modeled by a Turing machine arise, according to you?

- Thomas


Re: Chapter 2: I Married a Computer
posted on 08/15/2002 5:17 AM by azb0@earthlink.net

Thomas,

Modeling math is as good as any math. Modeling fire is not as good as fire.

We, both our bodies and the consciousness they support or generate, are still manifestations of physics, not simply of physics concepts.

The physics can be modeled in detail (say). Ok, that means we are a manifestation of a physics that can be modeled. That does not mean we are models, or manifestations of a model.

We might model ourselves to any extent, and the "read-outs" (so to speak) might indicate that our "model selves" behave like real selves. That might mean that they "act conscious" (the same way that properly modeled water might act like ice or steam.) But that does not imply that the "modeled selves" are actually experiencing consciousness as we have come to "feel it".

Modeling "intelligence" (rational thought) is much simpler, and might be done to such a degree that the AI far surpasses our abilities to be creative, etc. But the "result" of proper LOGICAL thought, whether in our brain or an AI, is the same; you arrive at a "right answer". There, how the "model acts" is no different than how the bio-rational mind acts, in terms of a result.

But consciousness is by its nature a pure subjective experience. You cannot know if or when something else "has it", only whether it "acts" like it has it.

For an AI that will be good, one might say: who cares if it is actually conscious, or merely "acts conscious"? No problem.

But if we expect that beings with our "sense of awareness", qualia-wise, are also to be supported in a foreign medium, we would like to ensure that it will be so, and not merely a modeling of this experience.

Of course, whatever new substrate might support our kind of consciousness (or an improved one), the super-AI will likely devise what that must be. It might end up being more than just algorithmic modeling of pattern. It may be a physically-sensitive manifestation.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 9:21 AM by thp@studiooctopussy.com

"...Modeling math is as good as any math. Modeling fire is not as good as fire. "

Fire is modeled.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 11:45 AM by john.b.davey@btinternet.com

Modelled on what?

Re: Chapter 2: I Married a Computer
posted on 08/03/2002 12:39 PM by jonkc@att.com

The issue is whether something that has intelligence and only intelligence is even possible. The answer is clearly no; that's why evolution never came up with such a thing.

The fact is that the emotion-creating mechanism must be much simpler than the intelligence-creating one; that's why lots of drugs can make you feel almost any emotion you care to name, but no known drug will make you smarter: you'd need more than one small molecule to do something that complex. The fossil record tells us the same thing: animals have had emotion for about 500 million years but intelligence for only about one million.

It would make an interesting exercise in sociology to figure out how the myth got started that emotions are what distinguish humans from everything else, the idea that they are almost mystical and certainly much more difficult to produce than intelligence. Although the idea existed before TV, I think Star Trek and the entire Mr. Spock unemotional nonsense had a lot to do with popularizing it.

John K Clark jonkc@att.net

Re: Chapter 2: I Married a Computer
posted on 08/10/2002 7:38 PM by azb0@earthlink.net

John K Clark,

"Nature" seems to have evolved "emotion" before intelligence, granted. But that is not a proof that intelligence-capability requires emotion-capability. One might posit that our computers will be the first creatures "evolved" to possess (at first) only that latter.

There is a sort of trichotomy in viewing emotion (E), intelligence (I), and consciousness (C) as separable aspects of phenomena.

Take the stance of a conscious observer, viewing an interaction between two other entities (minds) A and B. If both minds are capable of only emotional behaviors, they would still likely react to one another in ways that we would interpret by saying "A recognized the emotion in B". Neither entity would need to act "intelligently". But if A also possessed significant intelligence, and acted with this faculty, entity B would likely not "act as recognizing" or appreciating the intelligence of A.

Consciousness is even more a problematic measure. To some, it is merely "awakeness", the subjective sensation that can only be inferred in others by projection, or extension of one's self-sensations. In this sense, minds that are capable of only emotion certainly exhibit evidence of consciousness. If intelligence is ADDED to this mix, then a mind can be "deliberately reflective" of its waking state, a sensation that I assume a merely emotional mind would not experience.

Our current path of Turing-machine approaches to emulating intelligent behavior leads us first to emulate "good decision making" (that which is actually easier to quantify, and closer to the facilities of the computational substrate) rather than to emulate "emotion". In "biologics", emotion is tied into chemical and hormonal mind-body effects, leading to amplifications of state, which a "merely intelligent algorithm" has neither the need nor the mechanism to display. Even an intelligent algorithm capable of self-evolution need not necessarily develop emotion, except insofar as it serves a purpose in communicating with its environment.

The curious thing about evolution is that, despite the specific nature of the "beginnings" (biologic or otherwise), those facilities that eventually manifest are those that serve some "persistence utility", and in many ways, these will be the same utilities, a sort of universal convergence toward effective patterns.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 11/24/2002 2:10 PM by mustafa

Guys,

Maybe you should check out the site below and accept the fact that we are already robots.

http://www.well.com/user/jct

have fun.

Re: Chapter 2: I Married a Computer
posted on 08/14/2002 5:31 AM by john.b.davey@btinternet.com

"You already know about Searle's room, now I want to tell you about Clark's Chinese Room. "
In fact Clark's room demonstrates something very important about the nature of "information". In your scenario your are distributing a syntactical media from one person to another via a process,albeit a not particulalrly interesting one ( you walking across a room ),the object of the exercise being to deliver semantic content from out resident poet to your Chinese speaking listener. In exactly the same way a computer delivers semantical content from programmer to user : it adds no semantical content of its own at all, which is the basis of the Chinese Room argument.
The Chinese Room argument is also not about consciousness. It is about computing and how syntax can under no circumstance generate semantic : how , in other words, 'matter properties', as you put them, can never be generated from their representations. There is no doubt WHATSOEVER that consciousness is a matter property : but matter properties are semantical and not syntactical.

Re: Chapter 2: I Married a Computer
posted on 08/14/2002 5:35 AM by john.b.davey@btinternet.com

"Yes, it does seem strange that consciousness is somehow hanging around the room as a whole, even if slowed down by a factor of a billion trillion or so"
What scientific basis do you have for assuming that consciousness is a time derivative, allowing you to make these claims, unsubstantiated as they are?

Re: Chapter 2: I Married a Computer
posted on 08/14/2002 6:39 AM by azb0@earthlink.net

John B Davey,

I know you were responding to John K Clark with your post, but allow me to walk a very fine line.

If sufficiently complex algorithmically-based entities could access "senses" (light, sound, unpredictable experience from "others"), and could also "mate" (a la genetic algorithms, sharing/inheriting traits and capabilities), compete with similar algorithms having similar "non-determinist sensory input", and could thus effect unbounded evolution ... then I am not sure that "semantic" could not be "generated". At least, if "we" truly generate semantic, as opposed to merely integrating and transforming environmental noise into new forms, and calling it semantic because it pleases us.

Thus, I do not say that "something mind-like" might not erupt from an algorithmic basis (perhaps needing access to QM indeterminacy in some way as "input", to keep itself from falling into attractor basins.) It might display "superior intelligence", and even lay claim to acting "creatively", appreciating nuance, etc.

That is why I say that we would have no right to deny its own claim to consciousness (if it made such a claim to us, and convinced us through its behaviors.)

However ... it may not have the "consciousness experience" as we have come to experience it. Whether this should be important to anyone is really the question.

Those who hope to "upload their consciousness" into a new substrate might feel concerned, because they imagine that "they" would want to continue "experiencing awakeness" (so to speak.) But I feel this is based upon the fallacy that there is any such beast as "continuity of consciousness". Consciousness, or the "conscious I", is a "sensation of the moment", and such an "I" never lasts more than a moment. In this sense, the "I" that writes these words will never experience waking up tomorrow morning. Rather, a "new I" wakes up, and believes itself to be a continuation of "the same consciousness" because of the common memory context.

In other words, even if I can successfully upload "my mind" into a new machine, "I" never go anywhere. The "I" that performed the upload never gets to find out whether the uploaded "I" is, or is not conscious, no matter how well it might behave to other observers. Just as the "I" that types at this keyboard never finds out whether a "new I" continues tomorrow or not.

So, in this respect, to whom should it matter whether the "uploaded minds" are actually conscious in the way we experience waking consciousness, or merely behave as if such subjective sensation is present?

Could they not be just as "productive"?

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/14/2002 1:13 PM by jonkc@att.com

Even Searle says there is nothing supernatural involved in consciousness; it just involves science we don't understand yet. Call it Process X. Being rational, that means we can use our minds to examine what sort of thing it might turn out to be. It seems pretty clear that information processing can produce something that's starting to look like intelligence, but we'll assume that Process X can do this too, and that in addition Process X can generate consciousness and a feeling of self, something mere information processing can not do.

What Process X does is certainly not simple, so it's very hard to avoid concluding that Process X itself is not simple. If it's complex it can't be made of only one thing, it must be made of parts. If Process X is not to act in a random, incoherent way some order must exist between the parts. A part must have some knowledge of what the other parts are doing and the only way to do that is with information. It could be that communication among the parts is of only secondary importance and that the major work is done by the parts themselves but then the parts must be very complex and be made of sub parts. The simplest possible sub part is one that can change in only one way, say, on to off. It's getting extremely difficult to tell the difference between Process X and information processing.


Re: Chapter 2: I Married a Computer
posted on 08/14/2002 3:10 PM by azb@llnl.gov

[Top]
[Mind·X]
[Reply to this post]

John,

You write:

- "It seems pretty clear that information processing can produce something that's starting to look like intelligence, but we'll assume that Process X [assumed requisite for consciousness] can do this too, and in addition Process X can generate consciousness and a feeling of self, something mere information processing can not do.

Another possibility is that "Process X" accesses, is "sensitive to", or interfaces with the information-processing system (say, the neural signalling complex) but cannot "do" the information processing on its own. Otherwise, your "Process X" is de-facto complex because the term "includes" the information-processing complexities, as opposed to being some resonance-manifestation ancillary to the particular physics of the information processing activity.

- "What Process X does is certainly not simple, so it's very hard to avoid concluding that Process X itself is not simple."

That depends upon what you attribute to "Process X".

Let me offer one of my patented "Very Poor Analogies":

Suppose you have a Ziljan cymbal, such as might be part of an expensive percussion/drum set. You strike it simultaneously near its center with a padded wooden mallet, and on its edge, offset by 113 degrees, with a small steel rod.

The harmonics that are set up are enormously complex, shifting and modulating with an ever-changing sonic character.

Suppose we fashion a "paper cymbal", capable of supporting no such vibrations at all, but we attach a few billion tiny actuators to its paper surface, tiny needles each with their own "piston action". We might be able, through sufficient programming, to signal all of these actuators such that the paper manifests "vibrations" that appear to match those of the Ziljan cymbal, but an awful lot of information would need to be generated and delivered.

In contrast, it was "easy" to have the real cymbal sustain these effects, due to the physics of the real medium. You might argue that pressure waves propagating through the metal molecular matrix act as "information", phonon propagation from and between the inter-molecular binding forces. But that is still quite a stretch from a billion actuators that must receive signals algorithmically generated from afar.
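
The contrast survives even in simulation: the metal's behavior falls out of a purely local update rule, while the paper version needs the whole pattern computed and delivered from outside. A minimal 1-D finite-difference sketch of a vibrating medium, in Python (grid size, wave speed, and the "strike" are all invented):

    n, c = 64, 0.5                      # grid points, wave speed (stable: c <= 1)
    prev = [0.0] * n
    curr = [0.0] * n
    curr[n // 2] = 1.0                  # the "strike"
    for _ in range(100):                # time steps
        nxt = [0.0] * n
        for i in range(1, n - 1):
            neighbours = curr[i - 1] - 2 * curr[i] + curr[i + 1]
            nxt[i] = 2 * curr[i] - prev[i] + c * c * neighbours
        prev, curr = curr, nxt
    print(max(curr), min(curr))         # the ripple has spread from the strike

Each point is driven only by its neighbours; nothing "from afar" tells it what to do.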

I liken the sense of consciousness to the shifting harmonies of the physical cymbal. Complex to map in detail, but not complex in actual "mechanism".

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/14/2002 3:39 PM by john.b.davey@btinternet.com

You are reiterating the points you made earlier and, as most AI enthusiasts do with equanimity, operating on the basis of no scientific evidence whatsoever. Well, let me put you out of your misery. There is no need for science to 'prove' that algorithms can't create consciousness: logic and epistemology work just fine.
It's as absurd to suggest that modelling a brain reproduces a brain (via representing it) as to say that modelling an electron via a mathematical model reproduces an electron. That, pretty much, is it. If you don't understand this elementary difference between non-mental 'stuff' (such as electrons, neutrons and bits of wood, consciousness, noises from a car exhaust, for instance) and mental, syntactical, observer-relative 'stuff' (such as 'processes', 'algorithms', 'computation'), you will remain forever trying to bash square bits of wood into round holes. And forget complexity; it's a load of hogwash. Each complex algorithm can be reduced to a larger number of very simple ones by the very definition of algorithmic mathematics in any case; the complexity argument, weak as it is, doesn't even add up on mathematical grounds.
I can stare at a block of atoms in my desk and elicit a supremely complex algorithm from their vibrations, but that doesn't mean to say my desk is thinking.
Representation is not reality: a duck is not the same as a painting of a duck.

Re: Chapter 2: I Married a Computer
posted on 08/14/2002 4:38 PM by azb@llnl.gov

John B,

I agree that modeling/simulating an explosion is not the same as a physical one.

But place yourself for a moment in a world where entities with consciousness exist in "some other physical substrate". One of them is a scientist who postulates that some myriad of atoms, arranged so as to form a theoretical carbon-based life, with a concentration of neurons in a "brain", might engender mind and consciousness. A colleague says no, that construction would merely be supporting a conflux of signals that might "simulate" mind and consciousness (or conscious-like behaviors.) If the first scientist actually went ahead and produced these carbon-based forms (like you and me), the other scientist still need not be convinced that we are conscious, rather than merely behaving so.

I honestly do not know how consciousness arises, of course, so I cannot say what forms of substrate might support such experience.

But my analogy of the "cymbal" can be extended. We must agree that the ersatz "paper cymbal" with its billion tiny programmed actuators, if able to manifest the precise complex of vibrations that the real-metal cymbal displays, would produce the same "sound" (what an external observer gets to judge). Moreover, the actuators could also be programmed to vibrate the "paper cymbal" in ways that the metal one might never be able to support, no matter how it was struck. This is perhaps analogous to the recognition that we could produce AI that was "smarter, more capable, and even more creative" than ourselves. But no matter how the paper was vibrated, it would not be reproducing the actual intermolecular forces that occurred in the metal cymbal.

The proposition (neither proven nor disproven, simply offered) is that consciousness may be likened to some artifact of the phonon propagation occurring in the metal cymbal, and absent in the paper cymbal, no matter how sophisticated the noises from the paper cymbal might be.

And none of this says that there cannot be a non-biological manifestation of conscious experience. However, not every physics employed to effect the model can be treated equally. Most of all, modeling only "intelligent behavior", even if manifesting creativity and unexpected adaptability, is not the same as modeling consciousness-as-we-experience-it.

I surmise that such cannot be ascertained by any external means. It is inescapably a subjective valuation.

Why do I assume that other people have a conscious-waking experience as I have? I cannot prove they do, but I recognize that they are made of the "same stuff", born in the "same way", etc., as I manifest, thus it will naturally be the default assumption. It would be pure solipsism to assume that I am really conscious, while other people are merely "cleverly behaving as if conscious".

But when we produce powerful and self-adaptive AI, in all manner of substrates, we lose this default assumption. We cannot "identify" with such entities, and have no reason OTHER than behaviors to infer the presence of "wakeful awareness" as humans experience it.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 12:08 AM by jonkc@att.com

The more abstract something is, the more accurate the simulation can be. A simulated brick is not the same as a real brick, but simulated arithmetic is the same as real arithmetic, and simulated music is the same as real music. The question you have to ask yourself is: are we more like bricks or more like symphonies? Put me in the symphony camp. Besides, I've always had a rule of thumb that I've found useful: if something looks like a duck and walks like a duck and quacks like a duck, then it's probably a duck. So if a computer acts like it's conscious, then it probably is.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 12:29 AM by azb0@earthlink.net

John K,

> "so if a computer acts like it?s conscious then it probably is."

That would have to be my "safe assumption" as well. I reason that it either is, or is not, and ask what the consequences of my behaviors toward it are in either case.

If it behaves consciously, but somehow is not, then I "mistakenly" respect the "rights" of a non-sentient. No big deal.

If it actually is conscious, and I treat it as a brick, I would be doing harm to a sentient, which I hold to be a great big deal.

My discourse on the possible "ancillary physics" of the substrate, above algorithmic processing, in the support of consciousness (as we experience it) is intended more to address the concerns of those who might hope to "transfer themselves" into a "new computronium substrate". If one could only do this as an all-or-nothing leap, one would like some additional assurance that the new substrate would support at least (if not more than) our current experience of consciousness, as opposed to being a receptacle that merely allowed our mind-logic plus memories to drive outwardly convincing behaviors.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 9:23 AM by john.b.davey@btinternet.com

"so if a computer acts like it's conscious then it probably is. "
and your scientific basis for this is ....
Well done, so you're the AI enthusiast who finally admits that as far as he's concerned a painting of a duck IS a duck. External, observer-relative matter metrics are indistinguishable from matter itself: the logical conclusion of a train of thought as embedded in lunacy as the Flat Earth Society.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 10:19 AM by grantc4@hotmail.com

If a computer is aware of what it is doing and able to think about that and make changes based on what it thought, I'd say it was conscious. If all the computer is able to do is carry out programmed reactions to situations based on someone else's input, I'd say that doesn't quite qualify as conscious. The automatic pilot in a 747 is capable of very complex behavior, but I wouldn't call it conscious. Likewise, a program that makes automatic diagnoses of patients based on evaluation of blood and other tests, although it may be more accurate statistically than a human doctor, is still not conscious. It can't think about its own behavior and change it accordingly.

Consciousness, to my mind, is being aware of what you are doing and using that awareness to decide what to do next.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 9:19 AM by john.b.davey@btinternet.com

"I honestly do not know how consciousness arises, of course, so I cannot say what forms of substrate might support such experience. "
Then why waste such a mass of joules on vacuous projections that defy basic epistemology in order even to be entertained, let alone implemented? Until you understand the CIRCUMSTANCES (and NOT the PROCESSES) that are associated with consciousness, you are wasting your time. And that can only be done by looking at skulls' contents, not data-flow diagrams and self-referential multibyzantial interdependential point-to-point crackpot integrity systems.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 12:49 PM by jonkc@att.com

I'll never be able to prove that a conscious-acting computer is really conscious, but it seems like a pretty good bet; I'll never be able to prove a conscious-acting human being is really conscious either, but that seems like a pretty good bet too. The only consciousness I can prove to exist is my own, and that proof is available only to me.

One other thing: if behavior cannot indicate consciousness, then why on earth did evolution produce it? However important consciousness may be to us, from evolution's point of view only intelligent external behavior is important. If the two are always linked, or at least if one is a shortcut for the other, then we will find it easier to make an intelligent conscious computer than an intelligent non-conscious computer. If the two are not linked, then you need to resort to miracles to explain how consciousness ever appeared on this planet.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 1:20 PM by john.b.davey@btinternet.com

A standard confusion about consciousness relates to issues about 'proof' and reasonable beliefs, and a constant, never-ending confusion of subjective experience with objective facts.
The fact that your conscious experience is subjective in NATURE does not exclude it from being the objective fact of your consideration. You have no proof of a lot of things you can't know through the senses: atoms, molecules and electrons are all based upon assumptions about the nature of matter; no formal logical proof can be made for their existence; the existence of atoms is based upon faith in science and constitutes what philosophers would call a 'reasonable belief'.
When placing your consciousness in the universe of objective facts, you must make a case for your own uniqueness (in absolute terms) in order to make an OBJECTIVE assertion of the denial of consciousness in others. If you assume (a) you are not the only human in the universe and (b) most human beings are similar, then you have no conclusion to come to other than that if you have consciousness, then so must everybody else.
This is the difference between looking at the subjective experience of consciousness and its analysis in epistemological terms. To say 'I can only be sure of my own consciousness' is correct if and only if you also maintain that thinking there are no people other than yourself in the universe constitutes a 'reasonable belief'. Personally, I don't think that I am the only person in the universe; I don't consider that a reasonable belief; so I don't consider it unreasonable to say consciousness exists in others, just as I can say that "grass is green" or "rotten eggs smell" or "he is black".
One thing is for sure: unlike brains, we know PRECISELY how computers work. They work via observer-relative mechanical implementations; they don't have a physical existence. As mental events are semantical and not syntactical, this means that you can never, ever generate mental events from computation, any more than you can produce bricks, wind, electrons, planets, or any other form of matter, or any other semantical components of the universe such as time and space.
Computers can't be conscious because you can't make apples from oranges, and it really is quite as simple as that.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 4:11 PM by azb@llnl.gov

John B,

You write, "As mental events are semantical and not syntactical"

Other than pure faith, how do you know that we entertain semantic, and not merely perform something "akin to" sophisticated transformations of "environmental inputs" to "outputs"? I tend to think that our "processing" is not purely causal (if that is what you mean by "mechanical") but how is semantic more than a "claim"? We may feel that we generate semantic (and I feel that we do, perhaps unprovably). But that is not something that could be said to be "obvious".

Surely, there is no reason to hold that a future artificial brain, formed perhaps of a dense matrix of silicone gel, carbon nanotubes, and tea leaves, with sensory i/o to the world, "simply cannot manifest a conscious intelligent state".

I agree that, although we can "interpret" (model) our brains (at least, their intelligent-like behaviors) as performing "syntactic symbol manipulation", that does not mean they are merely manipulating symbols. Thus, a greatly sophisticated "future symbol processor" might not necessarily support the "sense of consciousness" we feel subjectively. But that is nowhere near the same as saying that no artificially constructed system, even one employing symbol processing in part, can become a conscious individual, essentially a person.

There is no basis for supposing that our physical manifestation is the only one that "works", nor that we are more than physical manifestation.

As far as "quacks like a duck":

Difficult as the engineering might be, we can posit that an artificial person, as in "Data" from Star Trek, might someday exist. Even if given the same original "programming" (for want of a better term), three individual Data might diverge in behaviors as they re-adjust their own programming (beliefs, if you like) and "experience" different histories. All three, I would assume, would avoid walking into a blast furnace, surmising that this would not help them fulfill some sense of a long-term mission. As outside observers, we might say "they fear their own destruction", even though we really would have no idea of what they might feel, or fear.

Along the banks of a raging river, a crowd gathers, including these three "Data". A mother screams as her child is swept away by the current. The crowd, and two of the Data, hold back, but one of the Data leaps into the water, risking being swept away, to attempt to rescue the child.

Folks might say, "That was a brave thing to do" (or a foolish thing). We have no idea whether the other two Data were reluctant because (a) they feared their own destruction, or (b) they calculated risk > reward, or (c) something else.

I would call the rescuer brave, since I could not know either way what, if anything, this artificial person "felt".

It quacks like a duck. Call it a duck.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 6:37 PM by john.b.davey@btinternet.com

"Other than pure faith, how do you know that we entertain semantic, and not merely perform something "akin to" sophisticated transformations of "environmental inputs" to "outputs"? "
This statement is so ridiculous/bizarre I don't even know where to begin. Why don't strong AI enthusiasts use one useful English word instead of twenty useless ones?
Now there's an example, English: that's semantic. If you don't know what the word means, buy a dictionary. Buy a dictionary anyway.
Without semantic there is no language; without semantic there would be no conversation. This sentence would mean nothing to you. On the other hand, a lot of your sentences use words with only vague connections to semantical objects, hence the distinct impression that a lot of what you say doesn't actually mean anything: this is the difference between syntax and semantic.
If all language were syntactical, it would have nothing to refer to and no damn 'inputs' and 'outputs' to identify in the first place, as all you would have is a spaceless, timeless, matterless world.
There is NOTHING to stop consciousness from possibly appearing in physical forms other than brains, or 'substrates' as you call them. I wouldn't seek to deny it: I don't deny that artificial consciousness is a possibility. What is definitely and most certainly a probability is that computational syntax vapour objects will never, ever create semantic.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 7:02 PM by azb@llnl.gov

John B,

> "Without semantic there is no language - without semantic there would be no conversation. This sentence would mean nothing to you."

:)

It might "mean nothing", even though I might respond to it intelligently, correct?

If a thousand monkeys bang at a keyboard and produce the phrase "I am hungry", that familiar phrase has no semantic content (we judge) because we assume (correctly, I surmise) that the monkey had no "intention" of writing what it did. We, at least, "feel" that we "intend" when we offer communication, and upon receiving it from another, tend to hold that "other" as having "intended" what they wrote. Thus, semantic content is an assumption, strengthened when two sides of a communication seem to be acting with a consistent "sense" of the thread of "conversation".

You tell me "I am hungry", I hear it and bring you an apple. Since the communication you intended resulted in "successful activity on the part of the other entity", we judge that the communication conveyed "semantic".

A robot discovers it needs more "coal" and issues a request (symbols) to another robot. If that other robot goes off and returns with more coal ... the message thus conveyed no semantic? Curious indeed! Did the robots need "our sense of consciousness" to effect this activity?

None of that is "proof" that semantic is more than our belief that we "intend". I happen to believe we "intend", but my belief is no proof to anyone but me.

If (ala Turing Test) you communicate with (unknown entity), how do you judge whether the other "entity" conveys semantic, as opposed to transforms symbols in rich and complex learned ways? How will you do so in 10 years, 20 years, as artificial symbol processing systems become far more sophisticated?

Care to elaborate?

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 4:46 AM by john.b.davey@btinternet.com

"If a thousand monkeys bang at a keyboard and produce the phrase 'I am hungry', that familiar phrase has no semantic content (we judge) because we assume (correctly, I surmise) that the monkey had no 'intention' of writing what it did."
This half-assed argument is tiresome and trundled out on a regular basis. That proper syntax is capable of being randomly generated underlines the fact (yet again) that syntax has arbitrary forms and is thus incapable of generating semantic and meaning.
"A robot discovers it needs more 'coal' and issues a request (symbols) to another robot."
Impossible. A robot never needs anything, because it's a computer, and computers don't need anything because they're symbol processors: they don't even have a physical existence. The company employing the programmer needs the coal, so he formalises that requirement symbolically.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 6:07 AM by azb0@earthlink.net

John B,

You wrote:

> "The company employing the programmer needs the coal, so he formalises that requirement symbolically."

This is like saying that God wanted certain work to be done, and programmed us to perform that work. Thus we "need nothing" (who says we "need") and all we are doing by passing "semantic content" to and fro is to carry out God's instructions.

Unless you think you can prove, or disprove the existence of God, and thus the existence or non-existence of "intent" on our part, the viewpoint that we convey meaning or intent is quite a matter of faith.

We may not be simple algorithmic symbol processors, easily deconstructed into obvious mechanical parts, but unless you think that we have some magical mind-fluid that allows us somehow to entertain meaning and purpose in an objective sense, I do not understand the basis for your argument.

Question: Just how far from a "symbol processing computer" must a device get before it is no longer "ridiculous" to entertain the possibility of its sentience?

Important:

You have noticed, it seems, that I tend to argue both sides of the fence. If I sense someone is taking it for granted that any Turing machine (say, built of rubber bands and colored marbles) running a sufficiently complex and self-evolving algorithm is thus definitively sentient when it acts with sufficient intelligence, I argue why the very physics of the processing MAY be a determining factor, at least in matching our subjective experience of what sentience "feels like". Alternatively, when someone (guess who) demands that no "machine" can be sentient because it merely passes "symbols with no semantic", I challenge you to establish how we are not machines passing symbols lacking semantic, except in terms of unsubstantiable belief on our part.

You seem to denigrate playing both sides. Perhaps you think that discussion is about "winning" rather than spurring thought and further investigation. Your choice of words to characterize opposing arguments (half-assed, ridiculous) is a distraction, a technique of communication designed to head off further investigation of the issues. It is a defensive reaction, and serves to sway others to your side out of fear of appearing "ridiculous" otherwise, at least in your eyes.

Then, perhaps it is just your habit of speech.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 6:38 AM by john.b.davey@btinternet.com

"This is like saying that God wanted certain work to be done, and programmed us to perform that work. "
No it isn't. I stick to statements issued on planet Earth.

"Unless you think you can prove, or disprove the existence of God, and thus the existence or non-existence of "intent" on our part, the viewpoint that we convey meaning or intent is quite a matter of faith. "
No it isn't. You can't deny the existence of meaning when making an argument in linguistic terms, as it's contradictory - to make, with meaning, the suggestion that there is no meaning. Like most AI enthusiasts, you are seeking to solve the problem by pretending the problem isn't clear, which is a fraudulent and petty act. You are wasting time.

"I do not understand the basis for your argument. "
Too damn right you don't.


"I challenge you to establish how we are not machines passing symbols lacking semantic, except in terms of unsubstantiable belief on our part. "
In scientific terms, as I am making no positive assertion about the creation of consciousness through computers, I am under no obligation to prove anything. It's up to you to prove that computers can generate consciousness in scientific terms, and that starts with you having a theory of consciousness - which, it sounds to me, you haven't the capacity to produce.
However, although you have, in contravention of the entire history of science, placed the burden of disproof upon ME to 'prove' your case rather than doing your own work, I'll set you a simple test that, if you can pass it, will allow me to believe that mental events are syntactical.

Describe, preferably in a more succinct textual style than you're used to, the colour blue to me, without reference to the colour blue.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 2:49 PM by azb@llnl.gov

John B,

> "Describe, preferably in a more succinct textual style than you're used to, the colour blue to me, without reference to the colour blue."

I do not believe that is possible, which is exactly the point I am making. When I look at what you might call the "blue sky", I may be experiencing the "color" that you (and perhaps others) call "red". But I have learned to identify that color by the name "blue". You might then present to me a set of colored cards and ask "which is colored blue?" I pick the "correct" one, of course (the one that appears "red" in my subjective experience, and "blue" to you), out of training.

This tells me that semantic is contextual, and not intrinsic. We might objectively measure the wavelength and agree that "sky" and your "blue card" match. That does not establish what I am experiencing, but only a correspondence on terms we thus deem "semantic agreement".

You object to my "coal-fetching" robot as engaging in semantic "on its own", and insist it is merely passing dead symbols to effect the semantic of the original programmer.

If the programmer had not specified "coal-fetching" per se, but rather "build more steel girders", and the (system of) robots manage to apply reason to create a steel factory, determine that coal is needed at some stage, and one "tells another" to fetch more coal, that message still contains no semantic in your view, even though it serves to effect the gathering of more coal.

If the programmer had specified "build better cars", or "build a better physical infrastructure for human society", and some system of robots use this "simple directive" to establish factories and self-sustaining systems, you will still maintain that the directive has "semantic for the human", but was merely "symbols" to the robo-system.

Although you fail to articulate it, I can only surmise that your view is based, at heart, upon the notion that "artificial stuff" is completely causally constrained - thus all artificial actions are uncreative (being always attributable to a prior cause that can be traced back to a human at some stage) - and that humans somehow escape this causal void of semantic. Somehow, "we intend X", and no artificial system can "intend".

If you felt that I was asking you to "prove something", I was not.

But I would like to know your view of how far an artificial system needs to depart from "symbol manipulation" before it might be capable of "originating intent", as you would judge it.

Cheers! ____tony b____


Re: Chapter 2: I Married a Computer
posted on 08/16/2002 4:24 PM by grantc4@hotmail.com

> "Describe, preferably in a more succinct textual style than you're used to, the colour blue to me, without reference to the colour blue."

I can't explain "blue" without using the word blue because there would be nothing to explain. But I can tell a blind person what the color blue means to me and what it makes me feel.

Think about that patch of skin above your nose and beneath your brow that is infinitely more sensitive than the rest of your skin. And when the sun shines down on the back of your hand and warms it, it also shines down on that patch of skin, the eye, which sees a contrast between the warmth of the sun and the coolness of air or water. The light from the sun creates in the eye a texture a thousand times stronger than what you feel on your hand. We call it yellow. The cool air creates a contrasting texture we call blue.

The yellow of the sun we usually associate with happiness. But sometimes the sun can be too warm. It makes us hot and lethargic. The coolness of water provides a refreshing contrast to the hot sun. We associate the color we call blue with the air that flows around us and the water that quenches our thirst. The water we swim in is also cool and blue, as is the air that stretches for miles above us. Blue is the watery color of empty sky and rolling sea. We are surrounded by it, even on a bright and sunny day.

In music, the sound we call blue reminds us of when we are sad -- perhaps because it brings to our eyes the blue water of our tears.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 4:44 PM by azb@llnl.gov

Grant,

I agree that you might describe to someone what the "color of sky or water" makes you feel (cool, calm, etc). But if that is what I subjectively experience as your color "red", then to me, it is the "cool red lake" and the "cool red sky", which I have been trained from birth to call by the name "blue".

So, perhaps John B can explain which form of "describe blue to me" he was seeking. I suspect he sought the latter "subjective visual experience", and if so, I do not believe that it can be conveyed in any sense of the term "objective". You say "makes me feel cool or relaxed", I say the same, so we refer to the "same thing" (color of water or sky), even though we might interpret the purely visual sensation differently.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 12:17 PM by john.b.davey@btinternet.com

This is an immensely confused response, I may say. On the one hand you accept that colors cannot be explained in terms of anything other than themselves. You thus accept that semantic exists, and meaning exists - a 180-degree turnaround from your previous suggestion. Then you claim that this is PROOF (i.e. that you cannot describe a color in anything other than color terms) that "semantic is contextual" and somehow non-existent. At least your fudged argument admits that semantic exists. That it is "contextual" is a misunderstanding, although my personal suspicion is that you're looking for a way out.
Semantic either exists or it doesn't. Full stop. There is no logical alternative. What you are not denying, in fact, is the existence of semantic - you admit every individual has his "version" of blue. You are denying that there is any such thing as a global experience of blue, which is a completely separate claim. That you think everybody has color experiences is 100% support for the idea that brains have innate semantic support.
You may be trying to convince yourself that the hypothesis of variable sensory function means the same thing as "there is no semantic in thinking", but naturally it doesn't, and (yet again) you are confusing ontologies. If humans have color experiences in their thinking THEN human thinking supports semantic. Full stop. That an external object (a light ray) has a slightly different sensory impact on one individual as opposed to another is utterly and completely irrelevant, and a huge red herring. The key point is that the mental event of color experience in ALL humans has one feature about it: it is semantical.

The other point to make, obviously, is that there is no reason to assume that most people DO see colors particularly differently. In fact we know that about 8% of males see them differently, as they are color blind. But the whole point is that we know about color blindness. That every individual may be wandering around in a completely different sensory space is not backed up by experience: we can quickly tell, via OBJECTIVE tests, whether color interpretations differ. There is no reason to assume, either, that there are vast differences between humans in mental life - that a punch in the face hurts some humans while others feel happy.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 12:20 AM by jonkc@att.com

john.b. Wrote:

=========
If you assume a) you are not the only human in the universe and b) most human beings are similar, then you have no conclusion to come to other than: if you have consciousness, then so must everybody else.
=========

But not everybody else is conscious: those who are sleeping are not, nor those under anesthesia, nor those who are dead. There is only one way I can tell the difference - by their actions.

=======
One thing is for sure : unlike brains we know PRECISELY how computers work
========

Not so. It would only take a few minutes to write a computer program to search for an even number greater than 4 that is not the sum of two odd primes (that is, of primes other than 2), and then stop. What will this simple little program do - will it ever stop? I don't know, you don't know, nobody knows.
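One possible Python rendering of that program (a sketch; a search cutoff is added so the demo halts when run as-is - the program jonkc means has no cutoff, and nobody knows whether it would ever stop):

    def is_prime(n):
        # Simple trial division; slow but adequate for a demonstration.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def sum_of_two_odd_primes(n):
        # True if some odd prime p has an odd-prime partner n - p.
        return any(is_prime(p) and is_prime(n - p)
                   for p in range(3, n // 2 + 1, 2))

    n = 6
    while n <= 10_000:              # demo cutoff; the "real" program has none
        if not sum_of_two_odd_primes(n):
            print(f"Counterexample to Goldbach: {n}")
            break
        n += 2
    else:
        print("No counterexample below the cutoff; the search goes on.")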

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 4:28 AM by john.b.davey@btinternet.com

"But everybody else is not conscious, those who are sleeping are not or those under anesthesia or those who are dead. There is only one way I can tell the difference, by their actions. "
I'm afraid you are hoist by your own petard here, in addition to being confused. Nobody is talking about consciousness detection systems here: none has been invented (yet), although there is no fundamental reason why one should not be. What is under discussion here is the question of objective belief about consciousness. That you may believe or not believe somebody is conscious at a particular point in time due to their actions is not the same question as "is it a reasonable belief to assume I am the only conscious person in the world?".
AI enthusiasts try to have their cake and eat it. They try to say on the one hand that the consciousness of other people and animals is judged by their external behaviour, and on the other that, as it is undetectable in others, all we need to do is create externally manifested conscious behaviour and hey presto! - as we can't prove he ISN'T conscious, we must assume that he is.
It is important to note that this line of argument is unscientific drivel of the most bogus kind, and some individuals have been living off it for years.
In the first place, the case rests on consciousness not being objectively testable, and on that restriction being permanent. This is a false assumption and betrays a colossal ignorance of science. Consciousness has a first-party, subjective quality, but that doesn't stop it from being a third-party phenomenon of the universe and thus the subject of scientific hypothesis. Given a theory of consciousness (relating to MATTER, and not 'process'), detection of consciousness would be perfectly possible via the usual hypothesis/experimentation/verification route used in all other branches of science and physics. The only restriction on measuring consciousness is a complete absence of consciousness theories.
The other point is this: if I say on the one hand that I only know that other people are conscious because of the way they behave, then I already have a personal theory of other people's consciousness based upon their being like me. But what I have is a theory of the STATE of that person's consciousness - not of his CAPACITY to be conscious, which I've assumed is implicit in his being a human being and pretty much like me. This is by any standard a reasonable belief. So your behaviour theory of consciousness state in humans is predicated on one assumption: other human beings have the capacity to be conscious.
Now when we change focus to the computer we see an abrupt change in argument. There is no reason to assume that a block of silicon has the capacity to be conscious. So we get this facile response: how do we know it's not conscious, particularly if it behaves like it? After all, we only know a human is conscious because of his behaviour. Not so! You assume a person is conscious because he's like you: that's it. His behaviour relates merely to your personal theory of consciousness STATE. You associate conscious behaviours with certain acts, such as speaking, laughing and (in the case of dogs, for instance) barking. But it's YOU who makes the connection between the behaviour and the idea that the possessor of the behaviour may be conscious: YOUR theory is PREDICATED upon the reasonable belief that other humans, and to a lesser extent animals, are pretty much like yourself. You can't suddenly switch your domain to boxes of silicon, which you have no reasonable belief are like you, and suddenly say "if I get animal behaviour from a box of silicon I can apply the same theories about consciousness that I apply to humans and animals", because computers AREN'T humans or animals. The argument is all washed up, and based upon confused notions of theory and proof.

"Not so. It would only take a few minutes to write a computer program to search for an even number greater than 4 that is not the sum of two odd primes, that is all the primes except 1 and 2, and then stop. What will this simple little program do, will it ever stop? I don't know, you don't know, nobody knows. "

This does not imply in any way, shape or form that we don't know how computers work. Computer programs that never stop are still working in the same way as ones that do.


Re: Chapter 2: I Married a Computer
posted on 08/16/2002 10:30 AM by jonkc@att.com

I said:

"It would only take a few minutes to write a computer program to search for an even number greater than 4 that is not the sum of two odd primes, that is all the primes except 1 and 2, and then stop. What will this simple little program do, will it ever stop? I don't know, you don't know, nobody knows. "

john.b. said

'This does not imply in any way, shape or form that we don't know how computers work.'
==========
Don't be silly, of course it does. You may understand how a flip-flop switch works, but that doesn't mean you understand how this computer works, any more than knowing how to type means you understand Shakespeare. You don't understand something if you don't know what it will do; in the example I gave, the only way to know what the computer will do is watch it and see. I think you are the one confused about the semantical and the syntactical.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 11:22 AM by john.b.davey@btinternet.com

"Don't be silly, of course it does. You may understand how a flip flop switch works but that doesn't mean you understand how this computer works anymore than knowing how to type means you understand Shakespeare. You don't understand something if you don't know what it will do, in the example I gave the only way to know what the computer will do is watch it and see. I think you are the one confused about the semantical and the syntactical. "

Complete drivel of the very highest order. Are you saying that if we start a car on a motorway, glue the pedal to the floor and set it on its way, we don't know how the car works just because we don't know when it stops? We know how computers work - that they are capable of an infinity of separate tasks has nothing to do with their construction.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 12:12 PM by jonkc@att.com

john.b.davey Wrote

'Are you saying that if we start a car on a motorway, glue the pedal to the floor and set its on its way we don't know how the car works just because we don't know when it stops ?'
========
Well, we certainly don't understand the car's trajectory, I'll tell you that; and in my example we not only don't know when it will stop, we don't even know if it will stop. Unlike the car, the computer is constantly changing its internal logical state and there is no shortcut to figure out how it is changing - you must simply watch it and see. To say there is no mystery in this situation because you understand how a flip-flop circuit works is foolish.

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 10:19 AM by john.b.davey@btinternet.com

You are confusing the computer's mechanism with its execution path. How a computer works is known, because its mechanism is known. If you don't believe me, then tell Dell they don't know how their computers work. That some programs may be infinite in duration is irrelevant - and completely irrelevant to their likelihood of producing 'consciousness'.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 5:41 PM by wclary5424@aol.com

"Don't be silly, of course it does. You may understand how a flip flop switch works but that doesn't mean you understand how this computer works anymore than knowing how to type means you understand Shakespeare. You don't understand something if you don't know what it will do, in the example I gave the only way to know what the computer will do is watch it and see. I think you are the one confused about the semantical and the syntactical."

When you remove the six-dollar words, what you are saying is this:

If we do not know how a process will end, we cannot understand it.

Were this to be true, we couldn't understand most of what happens in the world.

I can write a simple cellular automaton program which will exhibit behavior that is, for all intents and purposes, completely random from now until the day the sun becomes a red giant. I will never be able to predict precisely what happens, but I understand the program completely.
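For concreteness, a minimal sketch of such a program (assuming one concrete choice, Wolfram's Rule 30 - a one-line update rule whose center column nevertheless behaves, for practical purposes, like a random stream):

    # Rule 30 cellular automaton on a ring of cells; '#' marks a live cell.
    RULE, WIDTH, STEPS = 30, 65, 24
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1                     # single live cell in the middle

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        # Each cell's next state is the rule bit indexed by its 3-cell neighborhood.
        cells = [(RULE >> (cells[(i - 1) % WIDTH] * 4
                           + cells[i] * 2
                           + cells[(i + 1) % WIDTH])) & 1
                 for i in range(WIDTH)]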

BC

Re: Chapter 2: I Married a Computer
posted on 08/17/2002 1:04 AM by jonkc@att.com

BC Wrote:
"I will never be able to predict precisely what happens, but I understand the program completely."

It's easy to understand that the weather is caused by the movement of gas molecules; does that mean you understand the weather?

Re: Chapter 2: I Married a Computer
posted on 08/17/2002 3:41 AM by azb0@earthlink.net

BC,

I understand your point, as one of degree. Let us compare and contrast the brain with (at least today's) sophisticated computer.

I agree that it is unfair to simply compare our understanding of chemistry (alongside our lack of understanding of mind/consciousness) with our understanding of the computer substrate (alongside our inability to know the end-state of a complex process).

With even the most complex "program over processor", we could in principle do every step with pencil and paper. It would simply take a very long time. In this case, at least, we understand both the "base operations" and in principle how all consequences must follow.

Today, even if we completely understood all of the rules of chemistry, we could not take paper and pencil and arrive reliably at the future state of a brain, even in terms of its future chemical state, much less ferret out what "thoughts" that brain might be entertaining.

The issue is one of "destiny in principle". If we know the state of every atmospheric molecule (and the flapping wings of every butterfly, methaphorically) AND the physics of the universe were deterministic (No Strong QM) then we could, in principle, calculate all future weather. Dense, moist air at location X would imply a hot muggy day at location X.

But calculating ALL future ionic concentrations in the brain, and the sequence of every future neural cascade, still does not tell us what that brain might be "thinking". We might INFER that it is entertaining fear or happiness or anger, because the concentrations or activity correlate strongly with such states, but we have no way to be certain that the "consciousness" feels as such, at least until we have an amazingly complete theory of consciousness.

So the issue becomes: if we succeed eventually in understanding everything about the brain's mechanics, even to the point of (again, in principle) affording a "pencil and paper" calculation of what that brain will be "thinking", then as a consequence our "thoughts" are just as "mechanically uncreative and incapable of (objective) semantic" as the microprocessor-supported algorithm. We may not know what the original program looked like, but the current state-of-the-algorithm would be sufficient to calculate all future state.

In such a case, our "sensation of conveying meaning" by our thoughts and actions would be a form of self-delusion. We would be merely "robots fetching coal", to draw upon the post I made to John B.

How are we, in principle, to escape this cold, mechanical conclusion?

A) "We cannot figure out the brain to that degree".

So we get to maintain forever that there is some "inexplicable reason" that WE entertain real meaning, as opposed to carrying out some chemical destiny written long before we were born, whereas the most sophisticated algorithmic systems cannot entertain real meaning, because they are fully explicable, and must carry out the algorithmic destiny of their original programming.

B) "We figure out the brain perfectly, including a correct theory of consciousness, but still cannot predict what a brain today will be thinking tomorrow, even in principle, because we understand how it exploits fundamental QM-indeterminacy."

This would suggest that a deterministic algorithm cannot "generate semantic", but one that exploits QM-indeterminacy in a sufficiently similar way may give rise to a conscious, semantic-generating system, a "mindful being".

C) "The voice of God booms from the Heavens that He gave us mind and will and meaning, and we all believe Him (or Her.)"

Not much to gain in further discussion of consciousness in that case.

D) (----------------) (fill in the blank.)


Thoughts?

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 11:10 AM by john.davey@btinternet.com

More misunderstandings. If all the movements of the gas particles in the universe were known then yes, you would be able to PREDICT the weather. But even if we can't predict the weather, it doesn't mean we don't understand it. The weather is governed by chaos theory and an overload of information requirements: the mechanisms of the weather are well known.
In any case, an isolated algorithm is 100% deterministic and incomparable to the weather.
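To make the weather point concrete, here is a small sketch (assuming the standard Lorenz toy model of atmospheric convection, with its usual parameters - an illustration, not anything from the thread). The mechanism is three fully known equations, yet two runs whose starting points differ by one part in a billion part company within a few model-time units:

    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz equations.
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-9, 1.0, 1.0)       # perturbed by one part in a billion

    for step in range(4001):
        if step % 800 == 0:
            print(f"t = {step * 0.01:5.1f}   |x_a - x_b| = {abs(a[0] - b[0]):.3e}")
        a = lorenz_step(*a)
        b = lorenz_step(*b)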

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 2:41 PM by jonkc@att.com

john.b.davey Wrote:

'an isolated alogorithm is 100% deterministic'
========
What does that have to do with the price of tea in China? You seem to be implying that a deterministic process could never produce something interesting like consciousness, and that's pretty silly. Everything, absolutely everything, happens because of cause and effect OR it does not happen because of cause and effect and is random; there is no third alternative. I don't see how that has anything to do with consciousness, intelligence or free will, whatever that is.
=============
'even if we don't think we can predict the weather it doesn't mean we don't understand it.'
=============
I am very familiar with every one of the 26 letters Shakespeare used in his work; thus I understand every nuance he tried to express in his writing.

We have known the genetic code for 40 years and now we have the complete genome; in effect we have the human blueprints and we know the language they are written in; thus we know all there is to know about human biology.

Foolish arguments, don't you think?


Re: Chapter 2: I Married a Computer
posted on 08/18/2002 6:09 PM by john.b.davey@btinternet.com

"What does that have to do with the price of tea in China? You seem to be implying that a deterministic process could never produce something interesting like consciousness, "
I never said anything of the sort. You commented, I recall, that weather was difficult to predict and hence its mechanisms somehow not 'understood'. I pointed out that weather is difficult to predict for one reason: too much initial information is required, so a random element is always present in weather behaviour. You had tried, in a half-hearted way, to compare the progress of the weather with a computer program - something completely invalid, as an isolated computer program is 100% deterministic, as I said.


". Everything, absolutely everything, happens because of cause and effect OR it does not happen because of cause and effect and is random, "

Gibberish. Even quantum systems have cause and effect - quantum mechanics is about measurement.

"there is no third alternative. I don't see how that has anything to do with consciousness, intelligence or free will, whatever that is "

I don't know where on earth you got the impression I thought that consciousness wasn't caused by something. Certainly not from anything I've said, so it must be something you THINK I've said.

"I am very familiar with every one of the 26 letters Shakespeare used in his work thus I understand every nuance he tried to express in his writing. "
They use the same letters in France - are you as familiar with the works of Proust? You need more than an understanding of syntax to get a grasp of semantic.

"We have known the genetic code for 40 years and now we have the complete genome, ..thus we know all there is to know about human biology. "

Ho ho! You'd better tell all the scientists grafting away all over the world to put their coats on - you've sorted it all out! It was all so easy in the end ...


Re: Chapter 2: I Married a Computer
posted on 08/18/2002 7:10 PM by azb0@earthlink.net

John B,

You seem like an intelligent fellow, so I am interested in your opinions. I pose a few questions, and would like to hear your views.

1. Do you think that it is possible to attain a theory of consciousness sufficient to explain "mind and thought" in terms of the properties of physics, or do you think there is some insurmountable barrier to such a theory?

2. If a theory of mind/consciousness were attained, and shown to be a pure consequence of physics, would this make our actions "completely determined" (ala, analogous to algorithm) or can we somehow "originate thought or action" (free will, so to speak.)

3. If we are "as if algorithmically determined" (by consequences of chemistry, etc.,) then would our "sense of entertaining semantic" be anything more than "our sense of it". That is, would semantic be "in the mind of the beholder"?

4. How far from a pure "symbol-processing automaton" must a thing be before it can be considered an "intentional actor"?


Cheers! ____tony b____


Re: Chapter 2: I Married a Computer
posted on 08/19/2002 5:10 AM by john.b.davey@btinternet.com

"1. Do you think that it is possible to attain a theory of consciousness sufficient to explain 'mind and thought' in terms of the properties of physics, or do you think there is some insurmountable barrier to such a theory?"
Physics gives an account of matter, but it does not explain it. So physics can give us an account of what matter DOES but not of what matter IS. To that extent physics is a syntactical discipline that jumps off from a boatload of semantic 'givens'. One 'given' is time: time is never explained by physics in terms of itself; it is merely represented formally in mathematical statements. If time/matter/space are "de-semanticized" by physics, it is usually into a form of grand semantic such as mass-energy. Physics refers to its semantic elements as dimensions, and these represent the limits of its capacity to explain. So the simple answer is no, I don't think there is a syntactical expression that will lead naturally to the conclusion that people can think. Semantic cannot be DERIVED from physics. But there may well be a syntactical expression of some kind that gets ASSOCIATED with thinking. This is, in fact, all physics tends to do in any case. The charge of an electron is not derived from base principles; it is a semantic idea produced by human beings after a lot of experimentation and hypothesis.
"2. If a theory of mind/consciousness were attained, and shown to be a pure consequence of physics, would this make our actions 'completely determined' (ala, analogous to algorithm) or can we somehow 'originate thought or action' (free will, so to speak)?"
Don't know. I suspect that the problem may well lie in the question somewhere. We currently have predeterministic assumptions about physics; they may well turn out to be incomplete.


3. If we are "as if algorithmically determined" (by consequences of chemistry,
Chemistry is about stuff. Real, semantic stuff. That there are laws about chemicals does not make us algorithmically determined. It makes us chemically determined.

"then would our "sense of entertaining semantic" be anything more than "our sense of it".
No . Wrong ontologies , you're getting confused. "Sense" in this context is not a physical sense but a logical one. We have non-physical semantic too - the idea of other people being the most obvious example.

"4. How far from a pure "symbol-processing automaton" must a thing be before it can be considered an "intentional actor"? "

This is a question akin to : "when did you stop beating your wife ? " : its base assumptions are erroneous but nonethless requires a binary response. The answer is very simple : a symbol processor doesn't physically exist so doesn't even belong in the realm of intentional actors , so the problem lies in the question - which assumes there is some kind of boundary between the two, which there is not.



Re: Chapter 2: I Married a Computer
posted on 08/19/2002 8:29 AM by azb0@earthlink.net

John B,

> > 1. Do you think that it is possible to attain a theory of consciousness sufficient to explain "mind and thought" in terms of the properties of physics, or do you think there is some insurmountable barrier to such a theory?

> ... the simple answer is no, I don't think there is a syntactical expression that will lead naturally to the conclusion that people can think. Semantic cannot be DERIVED from physics.

I would agree that theories of physics are generally axiomatic constructs, such that "selected premises" lead to derivable consequences that will match physical observations. But are you implying that expressions of theory are merely syntactic?

However "artificial" or syntactic all theories of physics may be, they generally lead to our ability to predict with greater accuracy. It would seem that we could "predict all future thoughts" given a sufficiently effective theory (not practical, but in principle.) It is not practical, at least, because of the amount of "state" that would need to be known with accuracy, so that issue is not interesting.

My second and third questions were related: (2) Would an "in principle" theory, or understanding that we were purely "consequences of physics" reduce us to "non-willful beings", and (3) How could non-willful beings "mean anything" (intend semantic) by their utterings.

Apologies if I did not word them as clearly.

I happen to hold that we are willful, that we can "originate intent". If otherwise, then I meant exactly the "sensory" sense of "sense" (whew) when I implied that our "sense of engaging in semantic" would be a joke, as it were. I cannot fathom how the words I might utter tomorrow can bear a semantic that "I intend", if they were in fact completely determined a century ago by the state of the universe. Fortunately, our understanding of physics leads us to find it not strictly determined.

Even so, it would seem that, as I wrote, semantic is in the eye of the beholder. Hypothesize an advanced race of beings that happen upon us and judge us to be "robots". They have no particular reason to establish that we are conscious, intentional, or willful. Our messages (written, voice, etc.) are fancy symbols being passed about as we go about our business "fetching coal".

One might argue (taking the omniscient view) that our "symbols" convey semantic because "we consciously intend them" to convey a specific meaning, instruction, or directive.

In this view, "semantic" and "conscious intent" become synonymous. If two artificially constructed "beings" pass messages between themselves, they are "conscious" IFF "they pass semantic".

> > 4. How far from a pure "symbol-processing automaton" must a thing be before it can be considered an "intentional actor"?

> "its base assumptions are erroneous but nonethless requires a binary response. The answer is very simple : a symbol processor doesn't physically exist so doesn't even belong in the realm of intentional actors."

Binary response? It was not a "yes or no" question. I asked "how far from", not "how big and complex need it be."

I think that "algorithm" (as an abstract logical description) and "robot with processors effecting algorithms" (effecting physical consequences) are indeed two different beasts. I have argued on this list that a Turing machine running a billion lines of sophisticated, self-modifying code, does nothing more than I could to with pencil, paper, and a great deal of spare time.

But a "robot" (or whatever we might artificially construct that employs algorithms in some degree) is not an isolated system. A robot is rather useless without some way of interacting with the rest of the physical world. It might have visual systems, aural systems, tactile systems, as well as adaptively learn from dealing with its unpredictable surroundings. In this sense, is it still merely a symbol-processing automaton?

If yes, is this because its behaviors are "determined"? This begins to seem a stretch of the word "determined", since two such robots, "raised" in different surroundings and "experiencing" (intentional quotations) different situations, would end up adapting differently, drawing different inferences, etc. Their subsequent behavior is no more predictable than the weather.

Would this (plus evidence of their intelligence) demand they be conscious? I don't think it is demanded, no.

But at some point of sophistication, the term "robot" becomes unnecessarily pejorative, suggesting the simple mechanical man. One can no longer argue that it is all simply "determined behavior" due to the mechanical nature of its physical logic gates, unless we are merely "determined behavior" due to the mechanical nature of chemistry.

Is there a scale across which this "physical manifestation called robot" and "intentional actor" meet?

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 9:16 AM by john.b.davey@btinternet.com

". But are you implying that expressions of theory are merely syntactic?"
The mathematical axioms are - by very definition .The theories themselves are synthetic.

". It would seem that we could "predict all future thoughts" given a sufficiently effective theory (not practical, but in principle.) "
Possibly. But it seems unlikely that there is a theory that predicted this conversation.

..rest of response

I think you are getting once more into the confused mire of proof/disproof of internal mental life from an external viewpoint. This really is a colossal non-point. The viewpoint of external observers is totally, completely irrelevant to the discussion. If we assume that there exists such a thing as internal mental life AS an objective feature of the universe, then how it is detected is not remotely relevant to the issues around it. We assume its existence from the outset of our theorising, in much the same way we assume matter exists before conducting physical experiments. It doesn't matter what robots/aliens/God-knows-what may think, any more than it matters that a man at a distance thinks something is a duck when it is in fact a painting of a duck.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 1:49 AM by jonkc@att.com

John.b.davey Wrote:

'Even quantum systems have cause and effect'
=========
Wrong. There is no law of logic that demands every event have a cause, and when modern physics looks at a tritium atom and sees it decay, there is no reason it did so right now and not an hour from now or a century from now. True randomness does exist.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 4:30 AM by john.b.davey@btinternet.com

"Wrong. There is no law ...cause and when modern physics looks at a tritium ..now and not an hour from now or a century from now. True randomness does exist. "
Utter nonsense. Truie randomness does exist but that doesn't contradict the principles of cause and effect. What quantum states is that we can't know the exact position AND velocity of a particle at any point and so the movements of particles in force fields are indeterminstic. The 'cause' and 'effect' parameters in these examples are intranuclear forces - its merely the simultaneous measurement of position and velocity that causes problems.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 11:21 AM by jonkc@att.com

john.b.davey@btinternet.com Wrote:

'True randomness does exist, but that doesn't contradict the principles of cause and effect.'
===========
Ok, let me get this straight: something is random, but it is still caused by something. Oh well, it's not the first time you've said something that didn't make sense.
===========
'The 'cause' and 'effect' parameters in these examples are intranuclear forces'
===========
If you think these 'intranuclear forces' of yours (I assume you mean the weak nuclear force and beta decay) can explain why one tritium atom will decay now and another identical tritium atom will not decay for a century, then I can only conclude that you have not read a book about physics published in the last 70 years. I should be accustomed to it by now, but it still amazes me when people feel confident to make grand philosophical proclamations and yet are ignorant of basic high-school science.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 4:17 PM by john.b.davey@btinternet.com

"If you think these 'intranuclear forces' of yours (I assume you mean the weak nuclear force and beta decay) can explain why one tritium atom will decay now and another identical tritium atom will not decay for a century then I can only conclude that you have not read a book about physics published in the last 70 years. "

I have a degree in physics, if you must know. Which is why I know that cause and effect still apply in quantum systems (it'd be dreadfully difficult to do any calculations otherwise), whose particle positions and momenta are governed by the uncertainty principle, making it impossible to know which particles are going to decay. It's really not that difficult, and doesn't counter the idea of cause and effect: in fact you can have a good stab at deriving the half-life of nuclei from first principles using simple force-field mathematics, thermodynamics and aggregation, a la conventional statistical physics of gases.
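As an aside, the aggregate lawfulness both posters are circling is easy to exhibit: give every atom the same fixed, memoryless decay probability per tick (an assumed toy number below, not real tritium data) and exponential decay with a stable half-life emerges, even though nothing picks out which atom goes next:

    import random

    random.seed(1)
    survivors, p_decay = 100_000, 0.05     # toy per-tick decay probability

    for tick in range(1, 31):
        # Each surviving atom independently survives this tick with chance 0.95.
        survivors = sum(1 for _ in range(survivors) if random.random() > p_decay)
        if tick % 5 == 0:
            print(f"tick {tick:2d}: {survivors:6d} atoms remain")

    # Expected half-life: ln(2) / -ln(1 - 0.05), about 13.5 ticks.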

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 12:40 AM by jonkc@att.com

john.b.davey@btinternet.com Wrote:

'in fact you can have a good stab at deriving the half life of nuclei from first principles'
=========
Very true, but I'm not interested in the half-life; I want to know the reason this particular tritium atom decays now while that tritium atom will survive another century. The answer of course is that there is no reason - it's random.

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 3:36 AM by john.b.davey@btinternet.com

"Very true, but I'm not interested in the half-life; I want to know the reason this particular tritium atom decays now while that tritium atom will survive another century. The answer of course is that there is no reason - it's random."
http://www2.slac.stanford.edu/vvc/theory/stronginteract.html

This will give you an overview of intranuclear forces and the cause of nuclear decay. The reason that you can't tell that one atom is going to decay rather than another is that you can't know the exact position AND velocity of a particle simultaneously - the fundamental principle of MEASUREMENT that is quantum mechanics. Cause and effect still apply: if they didn't, 70 years of non-stop theoretical development of the quantum mechanics of bodies in force fields would all have been a waste of time.

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 10:52 AM by jonkc@att.com

john.b.davey@btinternet.com

'The reason that you can't tell one atom is going to decay rather than another is because you can't know the exact position AND velocity of a particle simultaneously, the fundamental principle of MEASUREMENT that is quantum mechanics. Cause and effect still apply '

But it's much deeper than just a measurement problem. Take the old 2-slit experiment for example: it's not that the photon goes through one slit and we just don't know which one; it must go through the left slit only, and the right slit only, and both slits, and no slit at all, and it must do all these things at the same time.

Shine a light on 2 closely spaced slits and it will produce a complex interference pattern on a film, even if the light beam is so weak the photons (or any other particles) are sent out one at a time. If a particle goes through one slit it wouldn't seem to matter whether the other slit, the one it didn't go through, was there or not - but it does.

Even stranger: place a polarizing filter set at 0 degrees over one slit, and one set at 90 degrees over the other, and the interference pattern disappears. Now place a third filter set at 45 degrees one inch in front of the film and 10 light years from the slits. The interference pattern comes back, even though you didn't decide to put the filter in front of the film until 10 years after the photons passed the slits! Heisenberg's Uncertainty Principle does not enter into any of this. Quantum Mechanics may or may not be a good idea but one thing is certain: it's the law.
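A numerical sketch of that last, polarizer-tagged version (an assumed rendering in terms of standard two-path quantum amplitudes, not anything from the posts; "phase" is the path-length phase difference at a point on the film):

    import cmath, math

    def screen_intensity(phase, erase_45):
        # State at a screen point: (e^{i*phase}|H> + |V>) / sqrt(2), where H and V
        # are the orthogonal polarization tags the two slit filters imprint.
        amp_H = cmath.exp(1j * phase) / math.sqrt(2)
        amp_V = 1.0 / math.sqrt(2)
        if erase_45:
            # The 45-degree filter keeps only the |D> = (|H>+|V>)/sqrt(2) component,
            # so the two path amplitudes are recombined coherently.
            return abs((amp_H + amp_V) / math.sqrt(2)) ** 2
        # Without it, H and V are distinguishable: probabilities add, no fringes.
        return abs(amp_H) ** 2 + abs(amp_V) ** 2

    for deg in range(0, 361, 60):
        phi = math.radians(deg)
        print(f"phase {deg:3d}: tagged {screen_intensity(phi, False):.2f}, "
              f"erased {screen_intensity(phi, True):.2f}")

The "tagged" column comes out flat (no interference), while the "erased" column swings between 0 and 1 (fringes restored), matching the behavior described above.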

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 4:38 PM by john.b.davey@btinternet.com

I am perfectly familiar with all these examples, as they are introductory fare - a kind of "welcome to the wide world of quantum" for beginners.
And I am now 100% certain that prior to today you knew nothing about quantum mechanics whatsoever, or at least had only read some pop book on the subject. Congratulations, you've learned something! But of course the points you made about the Uncertainty Principle were a total bucket of pigswill. Nonetheless, it's not your fault, and I don't know what on earth it has to do with your main assertion that we don't know how computers work.

Re: Chapter 2: I Married a Computer
posted on 08/21/2002 12:00 AM by jonkc@att.com

"the points you made about the Uncertainty Principle were a total bucket of pigswill."
=================
Mr. Davey, the points I made were very definitely not pigswill; I was accurately describing the way the world works. If you have a problem with that then don't blame me, blame God. I'm sure you could have done a better job than He did, but unfortunately the position was already taken.

Re: Chapter 2: I Married a Computer
posted on 08/21/2002 3:54 AM by john.b.davey@btinternet.com

That may be, and I'm glad you're personally aware of the deity recruitment marketplace. It still doesn't make the quantum world non-causal: the examples you gave were the Uncertainty Principle in action. Cause and effect still apply.

Re: Chapter 2: I Married a Computer
posted on 08/21/2002 12:48 PM by jonkc@att.com

[Top]
[Mind·X]
[Reply to this post]

Mr. Davey Wrote:

'It still doesn't make the quantum world non-causal: the examples you gave were the Uncertainty Principle in action. Cause and effect still apply.'
=============
That doesn't make sense; this experiment clearly shows it is not just a measurement problem. You seemed rather bored with the 2-slit experiment and even dismissed it as introductory pop-book fare, but others have had a great deal more respect for it. No less a person than Richard Feynman was very impressed by it; he said it was 'a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery ... the basic peculiarities of all quantum mechanics.'

There is one other quotation of interest, this one by Niels Bohr: 'Anybody who is not shocked by Quantum Mechanics does not understand it.' Mr. Davey, you do not appear to find anything shocking in the 2-slit experiment, hmm.

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 3:35 AM by john.b.davey@btinternet.com

"Mr. Davey you do not appear to find anything shocking in the 2 slit experiment, "
How do you know I find nothing shocking about it ? That is pure guesswork on your part for which you have absolutely no evidence. I do find quantum mechanics boggling : but then again I find most of science boggling. And I notice in your quotes that neither Bohr nor Feynman were suggesting that the 2-slit experimement meant the end of cause and effect as we know it.Very odd that, isn't it ? Maybe not.

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 10:31 AM by jonkc@att.com

'I notice in your quotes that neither Bohr nor Feynman were suggesting that the 2-slit experiment meant the end of cause and effect as we know it.'
==============
That was the clear implication; but if they were not good enough for you, try this quote from Stephen Hawking: 'The quantum effects of black holes suggest that not only does God play dice, He sometimes throws them where they cannot be seen.'

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 11:56 AM by john.b.davey@btinternet.com

"That was the clear implication, but if they were not good enough for you try this quote .. Steven Hawking '..them where they cannot beseen.'
So tell me - again - why this means that there is no cause and effect in quantum systems ?

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 12:59 PM by jonkc@att.com

john.b.davey@btinternet.com

'So tell me - again - why this means that there is no cause and effect in quantum systems ?'
=======
No. I'm tired of spoon-feeding you; read a book someday.


Re: Chapter 2: I Married a Computer
posted on 08/22/2002 2:50 PM by john.b.davey@btinternet.com

Now, there there. No - I need an explanation. Please forgive me if I don't possess your superior intellect. Please please please explain to me why the 2-slit effect means that things happen for no reason whatsoever. You've stumbled across one of the greatest discoveries in the history of mankind, I think - don't keep it to yourself, spread the word!
Complete the following paragraph:-
"I am the only person in the history of the world to realise that the 2-slit experiment means that cause and effect don't apply because ...."

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 5:23 AM by tomaz@techemail.com

This is important. If it were the case that atoms decay for no reason, then the relativistic effect - the slower decay at 0.9c - wouldn't be observable.

There is at least this real causality between decaying and accelerating. You can preserve 100 radium atoms against decay: just accelerate them enough for a billion years, and they will still be like new. ;)

- Thomas
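The arithmetic behind Thomas's claim, as a quick sketch (assuming radium-226, whose rest-frame half-life is roughly 1600 years): the lab-frame half-life stretches by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2).

    import math

    half_life_rest = 1600.0                     # years, radium-226 (approx.)
    for beta in (0.0, 0.9, 0.99, 0.999999):
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        print(f"v = {beta}c: lab-frame half-life ~ {half_life_rest * gamma:,.0f} years")

At 0.9c the stretch is only a factor of about 2.3; "like new for a billion years" needs speeds very much closer to c.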

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 10:58 AM by jonkc@att.com

tomaz@techemail.com

'You can preserve 100 radium atoms against decay. Just accelerate them enough for a billion years. And they will still be like new.'

You can't be sure of that. No matter how fast you go, some atoms will decay and some will not, and there is no way to tell which one will and which one will not, because there is no reason for it, no cause - it's random.

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 5:08 PM by tomaz@techemail.com

There's a strong correlation between speed and radium decay.

That's enough to say that it's not 100% random.

- Thomas

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 6:14 PM by azb@llnl.gov

John B,

You fastidiously avoid addressing the issue of artificial sentience, which seems to be the central theme of most of the posts made in this thread. Such artifice need not be purely "the algorithm in abstraction", but can be an element of a physical manifestation (e.g., "robot").

The issue is not whether a simulated rainstorm, however accurately modeled, will require that I grab an umbrella.

I don't happen to feel that a Pentium 4, plus "cool algorithm", acting in concert, represents what I would consider a "sentient entity" (or "intentional actor"). You would appear to agree with this position.

Yet I ask, wherein lies the difficulty-in-principle of producing artificial sentience (or that which would be indistinguishable from sentient artificial intelligence)?

1. Is this simply a matter of degree, lack of sufficient complexity, parallelism, or real-world interactions?

2. Is it that the "state of the system" (despite variable outside interference) is in principle "explicable"?

3. Is it that siliconic "logic gates" are not mushy and grey enough?

4. Is it that it would lack a "soul"?

5. Is it Something Else?

You are good at taking pot-shots, and yet are reluctant to stand up even a straw man of your own. Surely you must have conjectures of your own to make on this issue. Why not reveal them?

Cheers! ____tony b____


Re: Chapter 2: I Married a Computer
posted on 08/22/2002 3:52 AM by john.b.davey@btinternet.com

"You fastidiously avoid addressing the issue of artificial sentience,"
Give me an example.

"..which seems to be the central theme to most of the posts made in this thread. Such artifice need not be purely "the algorithm in abstraction", but as an element of a physical manifestation (e.g., "robot"). "

A robot does not exist other than as an "algorithm in abstraction". A robot is a functional entity, not a physical one. A robot delivers "service", for want of a better word. It has an arbitrary physical implementation. What you seem to be suggesting is that the syntactical "service" components as defined by the observer (the robot user) are somehow conferred onto/into the physical device implementing the "service" requirements.
Impossible.

"Yet I ask, wherein lies the difficulty-in-principle, of producing artificial sentience (or that which would be indistiguishable from sentient artificial intelligence)? "
None. If those artificial factors have the causal powers to create consciousness, then they have the power to create consciousness. My personal belief is that biologists have the only identifiable access to those causal powers: namely brain tissue. If anybody synthesises consciousness/thinking in the near future it will be them. On the other hand, syntactical objects have no causal powers whatsoever, because they are abstract.

"1. Is this simply a matter of degree, lack of sufficient complexity, parallelism, or real-world interactions? "
Forget algorithms and other syntactical objects. They don't have physical existence, which thinking and consciousness do. There is no capacity for the interaction of the two.

"You are good at taking pot-shots, and yet are reluctant to stand up even a straw man of your own. Surely you must have conjectures of your own to make on this issue. Why not reveal them? "
You are constrained by the limitations of your own argument set. You think that anybody who doesn't believe that computers can generate mental events must be motivated by religious motives. In fact I happen to think that the religious case, though clearly contrary to my point of view, makes more consistent sense than the AI case, as at least an all-powerful deity would have causal powers, were he/she/it to exist. What makes AI so bizarre is its adherents' insistence that observer-relative syntactical objects DO have causal powers, which is totally nonsensical.




Re: Chapter 2: I Married a Computer
posted on 08/22/2002 1:14 PM by jonkc@att.com

John B Davey and an intelligent robot are having a debate. Mr. Davey says:

'A robot does not exist other than as an "algorithm in abstraction". A robot is a functional entity, not a physical one. A robot delivers "service", for want of a better word. My personal belief is that biologists have the only identifiable access to those causal powers: namely brain tissue.'

The intelligent robot says:

'A human does not exist other than as an "algorithm in abstraction". A human is a functional entity, not a physical one. A human delivers "service" for want of a better word. My personal belief is that electronics have the only identifiable access to those causal powers: namely circuit boards.'

How could an independent third party determine who, if anyone, was correct?


Re: Chapter 2: I Married a Computer
posted on 08/22/2002 3:01 PM by john.b.davey@btinternet.com

oh no, here we go again, the old 'you can't tell the difference so we can assume they're both the same' nonsense - otherwise known as the assertion that 'a duck is as good as a painting of a duck to an insensate AI enthusiast'. Well, for one thing you are YET AGAIN confusing ( how many times are you actually going to do this? ) the EXISTENCE of consciousness with a PROOF of consciousness. The answer to the question - not that it remotely affects the objective existence of consciousness, and so is MONSTROUSLY irrelevant - is simple. The robot is not conscious. The robot is lying - or it would be , if he had the capacity for intentionality, which it doesn't. Our 'intelligent, man-made' function server fulfills the delivery of abstract needs to humans, and has no causal powers to create consciousness, for the simple reason that it wasn't built to be a conscious agent.
And I don't know why you bark on so ( like so many AI types ) about 'intelligence'. They belong to different ontologies, different realms.
It's perfectly possible to be conscious and completely unintelligent, after all.

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 4:41 PM by azb@llnl.gov

John B,

I might argue:

"The 2-slit experiment means that cause and effect don't apply (at QM levels of activity) because to assume that a (hidden) "cause" exists would imply that, in principle, there are "variables and values" (however inaccessible) that if they could be known, would render the universe entirely deterministic, and leads to statistics on QM observations (via Bell's Theorem) that would differ to arbitrary degree from the observed statistics."

Alternately, I might argue against fundamental QM-indeterminacy using the inscrutable John-B method:

"Behavior at the QM-level is still cause and effect, because, well, everything has to have a cause! It is so obvious, only an idiot would think otherwise, duh!"

Likewise, your only way to elucidate why "machine-consciousness" is impossible (different ontologies) is that it is "obviously" different ontologies, which explicates nothing in particular.

In a purely-causal universe, we are as much "machine" as any robot, unable to "intend" or generate "meaning", despite having developed a "sense of intention". There is no escape from such a conclusion, (without appeal to "magic".)

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 3:41 AM by john.b.davey@btinternet.com

"Likewise, your only ability to elicidate why "machine-consciousness" is impossible (different ontologies) is that it is "obviously" different ontologies.......ing in particular. "
Are they or are they not different ? Is there no difference between an obervervable feature of a piece of matter and the piece of matter itself ?
I don't make an 'assertion' that matter is distinct from the observable fetures of matter : it's a fact. It is not up for discussion.Likewise there is no argument with the fact that syntactical objects have no causal powers - it's a fact.
If not explain otherwise. Explain how the number 2 has causal power.
"machine-consciousness" is impossible "
More unfounded nonsense. If the machine has been designed with materials that have the causal powers of conscionsess, then the machine will have consciousness. If the machine has been designed purely as a "function server" defined totallly in terms of what it delivers to the user ( and having no intrinsic/specific causal content of its own ) like all contemporary 'robots', then no, it will not have consciousness.

"we are as much "machine" as any robot, "
Correct. Meat machines.Consciousness machines.

"unable to "intend" or generate "meaning", "
You didn't intend to write this mail about the subject of consciousness?


". There is no escape from such a conclusion, (without appeal to "magic".) "

Absolute cataclysmic drivel - the archetypal AI tactic of pretending that any disagreement with its view is based upon superstition. As I've said before, I'm not religious, but at least a God, were he/she/it to exist, would have causal powers. The number 2 could not, by any stretch of the imagination, have causal powers, yet such assertions are at the heart of AI hocus pocus.

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 11:52 PM by jonkc@att.com

john.b.davey@btinternet.com

'The robot is not conscious. The robot is lying - or it would be , if he had the capacity for intentionality, which it doesn't.'
=========
So you say, but saying it does not make it so, not even if your arguments are all in CAPITAL LETTERS. The independent third party could conclude just as logically that the human is not conscious. The human is lying - or it would be, if he had the capacity for intentionality, which it doesn't.
============
'And I don't know why you bark on so ( like so many AI types ) about 'intelligence'. They belong to different ontologies, different realms.'
=============
Since you admit you don't know, I'll tell you exactly why. There is no way evolution, that is random mutation and natural selection, could tell the difference between intelligent conscious behavior and intelligent non-conscious behavior, so if the two were not linked, if they were really in 'different realms', then there is absolutely no way evolution could ever have produced it.

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 3:50 AM by john.b.davey@btinternet.com

"So you say but saying it does not make it so, not even if your arguments are all in CAPATAL LETTERS. The independent third party could conclude just as logically that the human is not conscious. The human is lying - or it would be , if he had the capacity for intentionality, which it doesn't. "
No no no no. You are assuming that
a) Consciousness cannot exist independently of a test
b) Consequently that, de facto, the OPINION of the third party is relevant to the issue of whether or not the human is conscious. It is not. You are pointing out potential shortfalls in the day-to-day theory of consciousness used by a lot of people, which is based upon the detection of human behaviour ( as opposed to consciousness ).


"Since you admit you don't know I'll tell you exactly why. There is no way evolution, that is random mutation and natural selection, could tell the difference between intelligent conscious behavior and intelligent non conscious behavior so if the two were not linked, if they were really in 'different realms', then there is absolutely no way evolution could ever have ever produced it. "

There is no way that 'evolution' the process would ever look at anything and describe it as 'intelligent', as 'intelligence' is a human value judgment and a SUBJECTIVE fact. Consciousness is an OBJECTIVE fact like a brick or a human being or a mountain or a large vat of water. It is part of the world that does not need human observers in order to exist.

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 10:50 AM by jonkc@att.com

john.b.davey Wrote:

'There is no way that 'evolution' the process would ever look at anything and describe it as 'intelligent''
=================

From Evolution's point of view, 'intelligent' means good decision maker and problem solver. From Evolution's point of view, 'good' means passing your genes on to the next generation. From Evolution's point of view, consciousness is irrelevant; the fact that it nevertheless exists means it can only be an inevitable byproduct of intelligence.

Re: Chapter 2: I Married a Computer
posted on 08/21/2002 1:05 PM by jonkc@att.com

If things were 100% random, science would be impossible; but things don't need to be 100% deterministic either for it to work pretty well, and in fact they are not.

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 3:37 AM by john.b.davey@btinternet.com

I think you know that I know that you know not a great deal on the subject of quantum mechanics. That is all that I have to say.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 6:24 PM by wclary5424@aol.com

You wrote:

"Ok, let me get this straight, something is random but it is still caused by something. Oh well, it's not the first time you said something that didn't make sense."

The two aren't related at all. Randomness does not necessarily imply acausality.

BC

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 9:12 PM by azb0@earthlink.net

BC,

> "Randomness does not necessarily imply acausality"

True. But acausality would imply statistical randomness of some sort.

I can think of perhaps three forms of randomness.

1. Conformance to "random-like statistics". All forms of (apparent) randomness would need to pass tests such as these: things like distributions of run-lengths, long-term convergence in certain properties such as mean and variance, etc.

The digits of Pi, or most any irrational decimal expansion, tend to make a good "(pseudo) random" sequence, even though produced by a relatively short algorithm (thus possessing only finite information, despite never repeating.)
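
For illustration, a rough sketch in C of that kind of "type-1" randomness (the generator constants are the classic textbook LCG values, chosen arbitrarily here, and taking a high-order digit is just one possible choice): a completely deterministic few lines whose output stream looks random, even though the whole sequence is fixed by one small seed.

#include <stdio.h>

/* A tiny linear congruential generator: deterministic and seeded by
   one small number, yet the digit stream it prints looks random. */
int main(void)
{
    unsigned long x = 12345;                    /* the "small seed" */
    for (int i = 0; i < 40; i++) {
        x = (1103515245UL * x + 12345UL) % 2147483648UL;
        printf("%lu", (x >> 16) % 10);          /* one high-order digit per step */
    }
    printf("\n");
    return 0;
}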

2. Strongly Random sequences. In theory, such a sequence cannot be expressed or generated by any finite algorithm. They are "non-computable", and informationally incompressible. Such a sequence's "shortest encoding" is itself.

No algorithm can generate such a sequence, unless it already has the sequence "in its back pocket", and is merely reiterating the values. But that would make the algorithm itself infinite in length.

The decimal expansions of "almost all" real numbers are strongly random (since there are only a countable infinity of finite algorithms, yet an uncountable infinity of real numbers.) Unfortunately, we have no way to know when we are "in possession" of a strongly random sequence, at least by inspection.

3. Acausal "generation" (really, "observation", since it is probably an abuse of the term "generate" to say "acausally generate".)

I had suggested that if a mix of C-14 and C-12 were gradually replaced with a flow of C-14, sufficient to counter-balance the decay-rate of C-14 in the mixture, then a count of "decays-per-unit-time" might serve as the source of a strongly-random sequence, by virtue of acausality.

Curiously, if one imagines the universe to be "playing out" a huge random "generation", one can almost associate three camps of "believers" according to which form of randomness the universe is exhibiting.

The "Order Sufficiently Complex Leads To Apparent Chaos" folks might subscribe to type-1. Some "small seed" is sufficient.

The "Weak-QM" folk (hidden variables, et al) might subscribe to type-2. The seed is infinitly long and undiscernable, "but it exists already".

The "Strong-QM" folk would subscribe to type-3.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 9:43 PM by wclary5424@aol.com

Tony wrote:

"Curiously, if one imagines the universe to be "playing out" a huge random "generation", one can almost associate three camps of "believers" according to which form of randomness the universe is exhibiting.

The "Order Sufficiently Complex Leads To Apparent Chaos" folks might subscribe to type-1. Some "small seed" is sufficient.

The "Weak-QM" folk (hidden variables, et al) might subscribe to type-2. The seed is infinitly long and undiscernable, "but it exists already".

The "Strong-QM" folk would subscribe to type-3."

And, just for curiosity's sake, which camp are you in, if any? I bounce around between #'s 2 and 3, with an occasional visit to #1.

BC



Re: Chapter 2: I Married a Computer
posted on 08/19/2002 10:04 PM by azb0@earthlink.net

BC,

I find myself increasingly in "camp-3", fundamentally.

Of course there is definitely something to camp-1, at least in explaining how the "averages of aggregations" and the consequent rise of causality lead to some "forms" being (more or less) universal.

The "laws" of physics, while perhaps incapable of telling us when an atom will decay, may lead to precisely why the "rates and proportions" manifest. These "values" interrelate to make certain "patterns" effectively attractors. On the "very small" they are very strong attractors, hence the ubiquity of protons, etc.

I guess I see "physics" (the theories) serving to explain how "type-3 fundamentals" lead to "type-1-stable manifestations".

In these terms, I see "type-2" as being a short-cut that is a "cut too short".

I am reminded of something Einstein said, with respect to formulating the laws of physics, (paraphrasing):

"They should be made as simple as possible, and no simpler."

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 6:13 PM by wclary5424@aol.com

You wrote:

"Wrong. There is no law of logic that demands every effect have a cause and when modern physics looks at a tritium atom and sees it decay there is no reason it did so right now and not an hour from now or a century from now. True randomness does exist."

Actually, we don't know enough to say one way or the other. It depends on how you interpret the equations. Heisenberg can be interpreted in several different ways.


BC


Re: Chapter 2: I Married a Computer
posted on 08/19/2002 6:07 PM by wclary5424@aol.com

You wrote:
"I am very familiar with every one of the 26 letters Shakespeare used in his work thus I understand every nuance he tried to express in his writing."

The two cases (weather and Shakespeare) aren't similar at all.

While I think John overstated the case when he implied that we completely understand the weather, no one doubts that the weather on the earth is deterministic at the classical level, even if chaotic. Weather is a physical process.

On the other hand, the relationship between the alphabet and Shakespeare's writing is different. First, the alphabet that Shakespeare used and the one that we use are subtly different, and the English language has changed dramatically since the 16th Century. So perhaps it is not true that you completely understand the writing that Shakespeare used to encode his meaning. But more importantly, aesthetics is not a science (at least yet, thank goodness.) The question of what Shakespeare meant is not a scientific one. Even if Shakespeare were still alive and we could ask him, he might not realize everything that he meant. To borrow an idea from postmodern philosophy, when I read Shakespeare it "means" something different from what it "means" when you read it.


"We have known the genetic code for 40 years and now we have the complete genome, in effect we have the human blueprints and we know the language it is written in, thus we know all there is to know about human biology."

Here at least we are dealing with two scientific questions. But again, you are comparing apples and oranges. No, we don't know "everything there is to know about human biology." But John did not claim that we know "everything there is to know about the weather" either. He said we understand the mechanism by which the weather works. We do not yet know as much about human genetics as we do about the weather, but we learn more every day. When I received my PhD in anthropology back in the dark ages, we did not understand how to use mitochondrial DNA to trace kinship relations. Now we do. More importantly, even though we don't know how every gene works (for example, we don't know why half my beard went white before the other), we do have very useful knowledge (which gene combination contributes to cystic fibrosis, for example.) And I would say that within a few years, we will have as good a working knowledge of the genome as we do of the weather.

Finally, I'd like to say that there are no "final answers" in science. I doubt that we will ever know everything there is to know, even if we do emerge into the glorious posthuman future that we talk about in these forums. But one doesn't have to have perfect knowledge to understand how something works. Otherwise, we would never understand anything.


BC



Re: Chapter 2: I Married a Computer
posted on 08/16/2002 11:04 AM by jonkc@att.com

john.b.davey Wrote'

'The only restriction on measuring consciousness is a complete absence of consciousness theories.'
==========
Nope, the problem is not that there are too few consciousness theories but too many. Consciousness theories (but not intelligence theories!) are a dime a dozen; the reason they are so cheap is that there are no objective facts they need to explain, so even if by blind luck you did stumble onto the correct theory there would be no way to ever know it is the correct theory. That being the case we must make do with second best: if something acts conscious then it probably is. Like most things in life there is no guarantee we won't make a mistake, but it's the best we can do.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 11:14 AM by john.b.davey@btinternet.com

"Nope, the problem is not that there are too few consciousness theories but too many. "

Give me one. Full hypothesis, experimental scenarios, brain function explanation, the lot.

I think you'll find there aren't any. There may be some dippy, half-assed, drawing-board pseudoscientific nonsense masquerading as a theory, but as that's hogwash that doesn't comply with the set of standards normally known as 'science', I'm afraid it doesn't count.

If on the other hand you can find one genuine hypothetical scientific explanation of consciousness, backed up by a complete explanation of brain function ( verified by experiment ), I'll give you $1000.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 12:01 PM by jonkc@att.com

I Wrote:

"the problem is not that there are too few consciousness theories but too many.'

john.b.davey Wrote:

'Give me one.'

Ok, consciousness is generated by matter but only if the matter is in the shape of a size 11 foot. This theory is not contradicted by any objective or subjective fact I'm aware of. Like I said they're a dime a dozen.

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 10:01 AM by john.davey@btinternet.com

"Ok, consciousness is generated by matter but only if the matter is in the shape of a size 11 foot. ..Like I said they're a dime a dozen. "

You need to read some work on episetemology, as clearly you don't know any. Not all hypotheses have equal value, and that means that some pseudoscientific garbage about 'algorithms' leading through some kind of 'complexity' to consciousness does not constitute a useful working hypothesis, as it is circular and untestable. I could argue there is a giant worm at the centre of Jupiter or that Pluto is made of green cheese, but it doesn't mean anything unless it produces a logical consequence we can test via experiment. When you say 'they are a dime a dozen' and fail to give any examples, what you really mean is you don't know any. And there is a good reason for that - there aren't any.


Re: Chapter 2: I Married a Computer
posted on 08/18/2002 3:59 PM by jonkc@att.com

john.b.davey Wrote:

'You need to read some work on episetemology, as clearly you don't know any.'
===========
Mr. Davey, spare me the condescending tone, your philosophical acumen has not impressed me. By the way, the word is spelled 'epistemology'.
=========
'it doesn't mean anything unless it produces a logical consequence we can test via experiment.'
===========
I could not agree with you more. If you asked me to come up with a theory to explain intelligence I would be at a loss; you would be able to find an experiment that would blow a hole in any theory I came up with in no time. Intelligence is hard. Consciousness on the other hand is easy; even my theory about feet causing consciousness, stupid as it may be, is not contradicted by any objective fact. A good consciousness theory needs to do more than just 'explain consciousness', because that just means 'tell me a story about consciousness'. Stories are cheap and most of them are fiction; experiments are neither. I cannot imagine any experiment that could tell the difference between a good consciousness theory and a bad consciousness theory. If you know of such an experiment please don't be shy, tell the world!

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 5:55 PM by john.b.davey@btinternet.com

"Mr. Davey, spare me the condescending tone, your philosophical acumen has not impressed me. "
Well, you could impress me by at least stumping up even a half-hearted coherent argument.

"I could not agree with you more. If you asked me to come up with a theory to explain intelligence I would be at a loss"
That is simple. Intelligence is a qualitative feature of brains with no formal properties, largely dependent upon the opinion and largesse of the observer. Consciousness is a phenomenon with formal properties that exists whether an observer is there to view it or not. Yet again, you are mixing ontologies willy-nilly.

"I would be at a loss, you would be able find an experiment that would blow a hole in any theory I came up with in no time. "
Yes. My brain supports consciousness and it's not a size 11 foot. Next.

"Consciousness on the other hand is easy, even my theory about feet causing consciousness, stupid as it may be, is not contradicted by any objective fact. "

It is stupid, as it suggests that people without feet can't be conscious. I don't even need to consider it. Your theory is a useful one in that it does produce objectively measurable conclusions, which can lead to us quite quickly concluding that it is false.

"A good consciousness theory needs to do more than just 'explain consciousness' because that just means 'tell me a story about consciousness'."

Yet again, a misunderstanding of science. Science needs to give an account of consciousness, that is all: the circumstances surrounding its production and all causal factors. It doesn't need to explain what consciousness, for want of a better word, 'is' - no more than physics tells you what matter 'is'. Physics merely gives an account of the formal properties of matter.

" I can not imagine any experiment that could tell the difference between a good consciousness theory and a bad consciousness theory."

I'm sorry, but I don't think this is formal grounds for disproof. A bad consciousness theory would be one that was circular and provided no grounds for falsification via experiment. Most AI theories fall into this category. A good one would make a positive statement about matter and the physical world that would enable it to be DISPROVED via experiment. Positive verification of the already existing body of known facts is generally considered to be insufficient grounds for acceptance.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 1:20 AM by jonkc@att.com

john.b.davey Wrote:

'It is stupid as it suggests that people without feet can't be conscious.'
========
Exactly, and as I said it is not contradicted by any objective fact I know of; it so happens that I have feet. Stupid, yes, but no more stupid than any other consciousness theory.
==========

'it does produce objectively measurable conclusions which can lead to us quite quickly concluding that it is false.'
=========

And how do you objectively measure the consciousness of people without feet, or people with feet for that matter? There are an infinite number of consciousness theories, all different, that can never be disproved by experiment. It follows that there will never be a consciousness theory that is worth a bucket of warm spit. Lacking a theory I must use a rule of thumb: if it acts intelligently then it is probably conscious, even if it is made of silicon and not meat.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 4:41 AM by john.b.davey@btinternet.com

"And how do you objectively measure the consciousness of people without feet, or people with feet for that matter? "
In the absence of a more detailed theory of consciousness you can only do so with a simple test based upon good assumptions. Currently that would involve something like an assumption that the brain causes consciousness. A marked change in electrical activity and a loss of consciousness in the patient ( who would no longer be able to communicate and whose eyes would now be shut ) following the removal of feet would be experimental grounds for further hypothesis/investigation. This is no different to any other scientific theory.


"There are an infinite number of consciousness theories, all different, that can never be disproved by experiment. "
There are NONE. You are confusing, CONSTANTLY, the subjective nature of mental phenomena with their third-party existence. There is no test for 'matter' either: only its formal properties within theoretical frameworks. That does not mean to say that matter cannot be examined within an objective framework.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 12:21 PM by jonkc@att.com

john.b.davey@btinternet.com Wrote:

'In the absence of a more detailed theory of consciousness you can only do so with a simple test based upon good assumptions. Currently that would involve something like an assumption that the brain causes consciousness.'
===========
People have thought most of their fellow human beings (those who were not sleeping or dead, that is) were conscious for as long as there have been human beings. During almost all of that time they knew nothing about the brain; the Greeks thought it was just an organ to cool the blood and that the heart was the seat of the soul. They nevertheless thought other people were conscious because they acted that way, and we do the same thing today. All I ask is that when judging the consciousness, or lack thereof, of a computer you play by the same rules.

Re: Chapter 2: I Married a Computer
posted on 08/19/2002 4:05 PM by john.b.davey@btinternet.com

Standard AI thought trap. You DON'T accept people's behaviour as "proof of consciousness". In one sense you are falling yet again into confusion between objective criteria for proof and the subjective experience of mental life.

What you describe is the psychological, subjective experience of one person seeing another. You DON'T think people are conscious 'just because they act like you'. First of all, if you see something 'acting like you', what you think is 'this is another human'. Your brain moves into a state of expectations and function based upon the idea that there is another human being around. Up to now it's instinct. Abstract notions like consciousness don't even come into it.

It may be the case that, after some quiet reflection in the presence of another human being, it may cross your mind to think that he is conscious. But if you do, it is only because you have made a SUBSEQUENT, intellectual step - namely, that 'humans are like me', therefore 'they have the attributes of consciousness like me' - which leads you to the belief/conclusion that another human being has consciousness BECAUSE you have it, and you BELIEVE that human beings are basically the same.

But the important point is that your statement 'we think a human is conscious because of behaviour' is an objective reflection ABOUT human behaviour - not an objective reflection on the nature of consciousness.

That consciousness exists is a statement we either believe or not. That it exists objectively in all humans is a sound belief upon which to base scientific investigation. That it is subjective in nature does not disqualify it from objective investigation.

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 12:41 PM by jonkc@att.com

'You DON'T accept people's behaviour as "proof of consciousness".
=========
Hey, speak for yourself. I judge people by their actions, not by the amount of pigment in their skin or even the amount of silicon in it. By the way, do you think sleeping people are conscious, or dead people? If not, why not?

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 4:20 PM by john.b.davey@btinternet.com

"Hey, speak for yourself. I judge people by their actions, not ..people are conscious, or dead people, if not why not. "
Did you even remotely read the response I gave you? I don't think you did. Reread it - and make an effort to understand it. That way you may end up learning something.

Re: Chapter 2: I Married a Computer
posted on 08/20/2002 11:49 PM by jonkc@att.com

I took your advice and reread your post, and I learned twice as much as I did the first time, but two times zero is still zero. And I've asked this question 3 times but received no answer: do you think people who are sleeping are conscious, or those under anesthesia, or those who are dead, and if not, why not?

Re: Chapter 2: I Married a Computer
posted on 08/21/2002 4:35 AM by john.b.davey@btinternet.com

"do you think people who are sleeping are conscious, or those under anesthesia or those who are dead and if not why not? "
You really haven't read the mail have you ?
If you think that people are conscious because of the way they are behaving, that is NOT an objective statement about consciousness. It is an objective statement about how most people have simple working theories ( as part of their normal working psychology ) about the RELATIONSHIP between consciousness and behaviour.
Am I making myself clear now ? You don't use people's behaviour as proof, per se, of the notion of the existence of consciousness per se ( in the way that you seem to be suggesting ) : rather you connect certain behaviours with consciousness STATE. This is a theory based upon a combination of personal experience and an instinctive belief that all humans are similar.
What you are describing is not how to produce an objective test of consciousness : you are describing how human psychology uses simple theories of consciousness state on a daily basis - but this theory is PREMISSED on two ideas : a)that consciousness is an objective fact and b) all humans are quite similar.
So I do judge/guess/ people's consciousness state by looking at them - because like most people I use the theory. But the catch is my theory only applies to other humans and ( to a lesser extent ) animals.
The other thing to point out that I am aware from a third party point of view that my theory is stricly limited. I could be fooled into thinking that I've seen a human when I haven't. For instance , a robot . In which case I am aware that the primitive test I apply to consciousness conditions based upon behaviour is fundamentally flawed,as I can't always tell wether something is a human being or not. From the small set of external and easily reproducible criteria ( behaviours, appareance , noise ) I cannot conclude that the appropriate INTERNAL criteria for something being a human being has been met. In short, a painting of a duck is not the same thing AS a duck.

I don't think people under anaesthesia are conscious.
That is because anaesthetists have working theories of consciousness based upon a combination of third party metrics : heart rate, anaesthetic level,oxygen level and ( in some circumstances ) brain scans monitoring various alpha activity in the brain. Anaesthetists came to the conclusion a long time ago that consciousness wasn't best assessed via a visual assessment ofthe patient.This is an observable and of use : but of its own is not a good guide to consciousness level.
The question of wether people who are sleeping are conscious is more complicated. People who are asleep may be dreaming, and this indicates a level of consciousness. They may be dozing and in and out of consciousness and semi-consciousness. So I wouldn't come to any conclusions about the conscious state of sleeping people by looking at them. That is why scientists have a preference for monitoring alpha activity in the brain when studying sleeping people, as its a good guide to changes in mental state.
What you see in both these cases is science in action : scientists using theories of consciousness based upon non-sensory criteria as the usual human psychology version is woefully inadequate.
But in case you haven't still got the point I reiterate : when you say "you think people are concious because of their behaviour" what you are commenting about is human bevaiour . You are not making an objective statement about consciousness. You cannot therefore conclude that if you see human behaviour in robots you can assume they are conscious.

Re: Chapter 2: I Married a Computer
posted on 08/21/2002 11:43 AM by jonkc@att.com

'If you think that people are conscious because of the way they are behaving, that is NOT an objective statement about consciousness.'
=========
A keen grasp of the obvious. I didn't say it's an objective fact about consciousness because there are none, I said it's a useful rule of thumb. Actually, it would be useful even if it were untrue, because I would be unable to function if I thought I was the only conscious being in the universe.
=========
'I do judge/guess/ people's consciousness state by looking at them - because like most people I use the theory. But the catch is my theory only applies to other humans and ( to a lesser extent ) animals.'
==============
Then tell me why it would be totally unreasonable to go one more step and make the following modification of your words:

I do judge/guess/ men's consciousness state by looking at them - because like most men I use the theory. But the catch is my theory only applies to other men and ( to a lesser extent ) women.
==========
'I don't think people under anaesthesia are conscious. That is because anaesthetists have working theories of consciousness based upon a combination of third party metrics'
==========
Baloney. Most people don't know anything about anesthesia theory but still don't have the slightest trouble telling if somebody has passed out. And the idea that a squiggle on an EEG graph paper is more important in determining consciousness than intelligent behavior is absolutely ridiculous.

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 4:13 AM by john.b.davey@btinternet.com

"I didn't say it's an objective fact about consciousness because there are none, I said it's a useful rule of thumb. "
You did actually. You maintained it was useful as an objective test of consciousness in non-humans.

"I do judge/guess/ men's consciousness state by looking at them - because like most men I use the theory. But the catch is my theory only applies to other men and ( to a lesser extent ) women. "
Because women are human and have brains too.

"Most people don't know anything about anesthesia theory but still don't have the slightest trouble telling if somebody has passed out."

Sure. But there are levels of "passing out": levels of consciousness. What you have stated is exactly what I said before, namely that most humans operate a primitive theory of consciousness state. But that theory is of no use to anaesthetists, who need more detail than 'on' or 'off'. They need to know that somebody is sufficiently 'on' not to be dead and sufficiently 'off' not to wake up during an operation. So they need other apparatus and theory. They need more information, and that involves additional theory about consciousness and additional data. It's not sophisticated, but it IS better than visual evidence alone.


"And the idea that a squiggle on a EEG graph paper is more important in determining consciousness than intelligent behavior is absolutely ridiculous. "
What the hell has intelligent behaviour got to do with it ? Where did that come from ? Are you suggesting anaesthetists ask their patients a few questions about share prices to see if they're conscious ?

Consciousness is a natural phenomenon: it exists regardless of the existence of observers. If I were the last man in the universe I would still be conscious. Intelligent behaviour is a value judgement of one human being by another. It enables one human being to gauge the mental capacities of another. It is not, in the sense you describe it, a 'test of consciousness' - the consciousness of the other human is already assumed. It is a test of the capabilities of one human by another.

You are confusing the existence of simple theories of consciousness in everyday use by human beings with prima facie evidence that the test for consciousness is synonymous with the test for human behaviour. Nonsense.

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 12:48 PM by jonkc@att.com

john.b.davey Wrote:

'You are confusing the existence of simple theories of consciousness in everyday use by human beings with prima facie evidence that the test for consciousness is synonymous with the test for human behaviour. Nonsense.'
===========
Yes, that is exactly, precisely, what I am saying; and I can not find the slightest particle of 'nonsense' in it. As for being confused, let's review a bit. You have said the awesome profundities exposed in the 2-slit experiment are no more interesting than a pop-up book; you have said something can be random but still be caused by something; you have said the old cause and effect assumption is as strong today as it was in the days before the Quantum Mechanics revolution; and although you don't say why, you even insist that the famous quote from Stephen Hawking, 'The quantum effects of black holes suggests that not only does God play dice, He sometimes throws them where they cannot be seen', in no way contradicts your view. Given all that, it would seem to me you are in no position to call anyone 'confused'.

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 1:47 PM by john.b.davey@btinternet.com

"you have said the old cause and effect assumption is as strong today as it was in the days before the Quantum Mechanics revolution, "
No I didn't. Classical causality may be dead, but causality per se is not. Read what I said. What I stated, repeatedly, was that quantum mechanics and the 2-slit experiment do not mean the end of cause and effect. And they don't. And if you think they do, give me your explanation, as I'd be only too glad to hear it.

"You have said the awesome profundities exposed in the 2 slit experiment are no more interesting than a pop up book, "
No I didn't. Wrong again. I said I'm familiar with them and boggled by the results. I said it was my assumption that you'd read a pop-up book rather than studied the subject, which is true.

"You are confusing the existence of simple theories of consciousness in everyday use by human beings as prima facae evidence that the test for consciousness is synonymous with the test for human behaviour. "

Need I say more than repeat it again? A test for human behaviour is a test for human behaviour, not a test for consciousness. We can only extend the test for consciousness to the test for human behaviour if, and only if, the object we are studying is a human being. We cannot conclude that a non-human has consciousness just because it acts like a human, as our everyday theory rests on the requirement that the object of our theory is, actually, a human being, and not just a mimic.

It is, in short, nonsense to think that a duck is the same thing as a painting of a duck.


Re: Chapter 2: I Married a Computer
posted on 08/24/2002 12:32 AM by jonkc@att.com

"What I stated, repeatedly, was that quantum mechanics and the 2-slit experiment does not mean the end of cause and effect. And it doesn't. And if you think it does give me your explanation, as I'd be only too glad to hear it."
=======

Ok, I will. When a photon of undetermined polarization hits a polarizing filter there is a 50% chance it will make it through. For many years, physicists who disliked the idea that God played dice with the universe figured there must be a hidden variable inside the photon that told it what to do. By "hidden variable" they meant something different about that particular photon that we just don't know about. They meant something equivalent to a lookup table inside the photon that, for one reason or another, we are unable to access, but the photon can when it wants to know if it should go through a filter or be stopped by one. We now understand that is impossible. In 1964 (but not published until 1967) John Bell showed that correlations that work by hidden variables must be less than or equal to a certain value; this is called Bell's inequality. In experiment it was found that some correlations are actually greater than that value. Quantum Mechanics can explain this; classical physics, or even classical logic, can not.

Even if Quantum Mechanics is someday proven to be untrue, Bell's argument is still valid; in fact his original paper had no Quantum Mechanics in it. His point was that any successful theory about the world must explain why his inequality is violated. I will attempt to show how to find the inequality, show why it is perfectly logical, and demonstrate that nature refuses to be sensible and just doesn't work the way you'd think it should.

I have a black box. It has a red light and a blue light on it; it also has a rotary switch with 6 connections at the 12, 2, 4, 6, 8 and 10 o'clock positions. The red and blue light blink in a manner that passes all known tests for being completely random, and this is true regardless of what position the rotary switch is in. Such a box could be made and still be completely deterministic by just pre-computing 6 different random sequences and recording them as a lookup table in the box. Now the box would know which light to flash.

I have another black box. When both boxes have the same setting on their rotary switch they both produce the same random sequence of light flashes. This would also be easy to reproduce in a classical physics world: just record the same 6 random sequences in both boxes.

The set of boxes has another property: if the switches are set to opposite positions, 12 and 6 o'clock for example, there is a total negative correlation; when one flashes red the other box flashes blue, and when one box flashes blue the other flashes red. This just makes it all the easier to make the boxes, because now you only need to pre-calculate 3 random sequences, then just change every 1 to 0 and every 0 to 1 to get the other 3 sequences, and record all 6 in both boxes.

The boxes have one more feature that makes things very interesting: if the rotary switch on a box is one notch different from the setting on the other box, then the sequences of light flashes will on average be different 1 time in 4. How on Earth could I make the boxes behave like that? Well, I could change on average one entry in 4 of the 12 o'clock lookup table (hidden variable) sequence and make that the 2 o'clock table. Then change 1 in 4 of the 2 o'clock and make that the 4 o'clock, and change 1 in 4 of the 4 o'clock and make that the 6 o'clock. So now the light flashes on the box set at 2 o'clock differ from the box set at 12 o'clock on average by 1 flash in 4. The box set at 4 o'clock differs from the one set at 12 by 2 flashes in 4, and the one set at 6 differs from the one set at 12 by 3 flashes in 4.

But I said before that boxes at opposite settings should have a 100% anti-correlation: the flashes on the box set at 12 o'clock should differ from the box set at 6 o'clock by 4 flashes in 4, NOT 3 flashes in 4. Thus if the boxes work by hidden variables then, when one is set to 12 o'clock and the other to 2, there MUST be at most a 2/3 correlation; at 4, at most a 1/3 correlation; and of course at 6 no correlation at all.
A correlation greater than 2/3, such as 3/4, for adjacent settings produces paradoxes - at least it would if you expected everything to work mechanistically because of some hidden variable involved.

Does this mean it's impossible to make two boxes that have those specifications? Nope, but it does mean hidden variables can not be involved and that means something very weird is going on. Actually it would be quite easy to make a couple of boxes that behave like that, it's just not easy to understand how that could be.
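
A rough sketch in C of that lookup-table argument (the table length and seed are arbitrary illustrative choices, not anything specified by the boxes themselves): build the 12 o'clock table at random, derive each neighbouring table by flipping a disjoint quarter of the entries - the most favourable case for a hidden-variable scheme - and count how often the 12 and 6 o'clock settings disagree. The best it can manage is 3 flashes in 4, never the 4 in 4 that the observed 100% anti-correlation demands.

#include <stdio.h>
#include <stdlib.h>

#define N 4000  /* table length; any multiple of 4 will do */

int main(void)
{
    static int t12[N], t2[N], t4[N], t6[N];
    srand(42);
    for (int i = 0; i < N; i++)
        t12[i] = rand() % 2;  /* 0 = blue flash, 1 = red flash */

    /* Each neighbouring setting flips a disjoint quarter of the
       entries, so adjacent settings differ exactly 1 flash in 4. */
    for (int i = 0; i < N; i++) {
        t2[i] = (i < N/4)               ? !t12[i] : t12[i];
        t4[i] = (i >= N/4 && i < N/2)   ? !t2[i]  : t2[i];
        t6[i] = (i >= N/2 && i < 3*N/4) ? !t4[i]  : t4[i];
    }

    int diff = 0;
    for (int i = 0; i < N; i++)
        if (t12[i] != t6[i]) diff++;

    /* Prints 0.75: three quarters differ, not the required 100%. */
    printf("12 vs 6 o'clock: %d of %d flashes differ (%.2f)\n",
           diff, N, (double)diff / N);
    return 0;
}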

Photons behave in just this spooky manner, so to make the boxes all you need is 4 things:

1) A glorified light bulb: something that will make two photons of unspecified but identical polarization moving in opposite directions, so you can send one to each box. An excited calcium atom would do the trick, or you could turn a green photon into two identical lower-energy red photons with a crystal of potassium dihydrogen phosphate.

2) A light detector sensitive enough to observe just one photon. Incidentally, the human eye is not quite good enough to do that, but a frog's is; for frogs, when light gets very weak it must stop getting dimmer and appear to flash.

3) A polarizing filter; we've had these for a century or more.

4) Some gears and pulleys so that each time the rotary switch is advanced one position the filter is advanced by 30 degrees. This is because it has been known for many years that the fraction of light polarized at 0 degrees that will make it through a polarizing filter set at x degrees is [cos(x)]^2; and if x = 30 degrees then the value is .75. If light is made of photons, that translates to a probability of 75% that any individual photon will make it through the filter.

The bottom line of all this is that there can not be something special about a specific photon, some internal difference, some hidden variable that determines if it makes it through a filter or not. Thus the universe is either non-deterministic or non-local - that is, everything influences everything else, and does so without regard for time or space. One thing is certain: whatever the truth is, it's weird.
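
For completeness, a rough sketch of the quantum side of that ledger (it simply evaluates the cos-squared rule quoted in point 4; the angles are the notch offsets described above): the chance that the two boxes' flashes differ at relative filter angle x comes out as sin(x)^2 - 1 in 4 at one notch (30 degrees), but a full 4 in 4 at three notches (90 degrees), exactly the combination no lookup table can reproduce.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    double deg[] = {30.0, 60.0, 90.0};  /* 1, 2 and 3 notches apart */
    for (int i = 0; i < 3; i++) {
        double r = deg[i] * PI / 180.0;
        /* fraction of flashes that differ at this relative angle */
        printf("%3.0f degrees: differ %.2f of the time\n",
               deg[i], sin(r) * sin(r));
    }
    return 0;
}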

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 9:25 AM by john.davey@btinternet.com

And this means that there is no such thing as cause and effect because..... well, no such argument is given, because no such argument can be inferred - nor could one even be given.

Unfortunately, the clean textual style and lucid, coherent sense of this reply belies the fact that your chances of owning the authorship ( after the first sentence ) of this response are at about 0%.

However, this does not constitute a disproof of causality. It constitutes a critique of one possible classically-based solution to quantum mechanics, namely the idea of 'hidden variables', based upon classicists' ( in the physics sense ) refusal to believe that measurement per se is an effective contributor to a quantum system's state.

Even assuming that hidden variable theory may be plausible, it STILL doesn't constitute a defeat of cause and effect. If I have an electron wave moving in an electrical field then the electron wave is distorted by it. Cause and effect. A nucleus disintegrates not 'because it feels like it', or because 'the random hand of causality wanders in and does what it likes', but because the intranuclear forces create potential energy states that favour disintegration. Nothing in this proof, which is essentially a critique of a questionable idea, changes that.

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 10:24 AM by jonkc@att.com

john.b.davey Wrote:

'Unfortunately, the clean textual style and lucid, coherent sense of this reply belies the fact that your chances of owning the authorship ( after the first sentence ) of this response are at about 0%.'
==============
Hey Bozo, plagiarism is a serious charge! This is not the first time I've debated matters of this sort with people online; I wrote this critique of the Bell Inequality on June 03, 2000 and sent it to the Extropian list on that day - check the archives if you don't believe me. And before you accuse somebody of being a thief it would be wise to do a bit of checking; it's not hard to do nowadays, just take a sentence from it and do a Google search. I assume you've heard of Google. I have nothing more to say to a man who unjustly and publicly called me a thief, except to say you have proven moral integrity can not be used to distinguish man from machine.

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 1:16 PM by john.b.davey@btinternet.com

I'm sorry. I was being naughty.
You should have said you prepared it earlier! The change of style was too much of a shock. Please accept my heartfelt apologies.

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 2:31 PM by jonkc@att.com

"Please accept my heartfelt aplogies."

Apology accepted, Mr. Davey, no hard feelings.

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 9:21 AM by john.b.davey@btinternet.com

[Top]
[Mind·X]
[Reply to this post]

"I didn't say it's an objective fact about consciousness because there are none, "
btw, how come you think that people under anaesthetic are unconscious if, quote, "there are no objective facts about consciousness"?

Re: Chapter 2: I Married a Computer
posted on 08/22/2002 10:24 AM by jonkc@att.com

Because, just like everybody else, I believe a lot of things I can not prove, including most of the important things. That also means, just like everybody else, some of the things I am absolutely positively 100% certain of are probably untrue. Such is life. As for consciousness I agree it is a fact, one of them anyway, but it is not objective, it is a subjective fact.


Re: Chapter 2: I Married a Computer
posted on 08/22/2002 12:01 PM by john.b.davey@btinternet.com

" As for consciousness I agree it is a fact, one of them anyway, but it is not objective, it is a subjective fact. "
May I suggest the acquisition of a dictionary. The very fact that we are talking about consciousness makes it an objective fact, and the fact we have been talking about its states means that we have objective facts ABOUT the objective fact of consciousness. What you mean to say, I think, is that the nature of consciousness is private and subjective, as are all mental experiences. As usual you are confusing, in a way (may I say) classically typical of these pages, the nature and structure of consciousness with its actual objective existence.


Re: Chapter 2: I Married a Computer
posted on 08/23/2002 12:14 AM by jonkc@att.com

I want to get back to Searle's Chinese Room. He claims to have proven that only humans can produce understanding; he does this by producing a system that acts intelligently but has no understanding. How do we know there is no understanding? He assumes the only part of the system that could possibly have it is the human being, so he just asks him. The trouble is, that was the very thing he was trying to prove.



Re: Chapter 2: I Married a Computer
posted on 08/23/2002 4:21 AM by john.b.davey@btinternet.com

"How do we know there is no understanding? "

I think Searle's Chinese Room points out one thing: that digital computers can never have a sense of what they are actually doing.
Think about it - I'll assume you're familiar with computers.

All a computer consists of is commands that say "take the contents of register a, do something with it , possibly with the contents of another register b, and then stick it somewhere in memory or disk".

It repeats these operations time and time again: it can have no aggregate understanding of what it is meant to be doing. In fact, as most programmers know, you can't even work that out from the source code - you need to ask the programmer.
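
To make that concrete, a minimal sketch of such a machine in C (the opcodes and the little program are invented purely for illustration): every step just moves or combines register contents, and nothing at this level records what the program is for.

#include <stdio.h>

enum { LOAD, ADD, STORE, HALT };

int main(void)
{
    int mem[8] = {7, 5};  /* cells 0 and 1 hold two operands */
    int a = 0;            /* the single register */
    /* a = mem[0]; a += mem[1]; mem[2] = a; halt.
       A price total? A score? A dosage? The code cannot say. */
    int prog[][2] = { {LOAD, 0}, {ADD, 1}, {STORE, 2}, {HALT, 0} };

    for (int pc = 0; ; pc++) {
        int op = prog[pc][0], arg = prog[pc][1];
        if (op == LOAD)       a = mem[arg];
        else if (op == ADD)   a += mem[arg];
        else if (op == STORE) mem[arg] = a;
        else break;  /* HALT */
    }
    printf("mem[2] = %d\n", mem[2]);
    return 0;
}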

So let's look at an emotion like fear. Fear is pure semantics: what it is like to be scared can only be described in terms of itself. There may be objective features you could point to ( increase in heart rate, sweat, palpitation etc. ) but the subjective, first-person sense of terror is not communicable in anything other than words referring to terror.

Subjective fear is semantic : it consists of nothing else.

Now let's look at a ( very simple ) implementation of 'fear':

#include <stdio.h>

/* the machine just copies these bytes to an output stream;
   nothing here records what the bytes are supposed to mean */
void fear(void)
{
    printf("J'ai peur");
}

Now this is an implementation for the French market ('J'ai peur' is French for 'I am scared'). The computer of course will see the characters as digits and will merely move digits from one place to the other. It wouldn't have a clue that it was actually meant to be feeling quite scared. A non-French-speaking programmer similarly would not be able to discern what the source was meant to do.

The only way, in fact, to find out what the meaning of the program is, is to find the human being who wrote the program, who produced this arbitrary implementation of his own 'fear' in the first place, and ask him what it's meant to be.

It is impossible, therefore, to discern MEANING from Turing representations, from computer programs. The whole point about the man in the Chinese Room is that he actually doesn't have the slightest idea what the function of the Chinese Room is meant to be, as only the inventor of the Room would be aware of that.

At this point we hear from the AI crowd the typical refrain: "it doesn't matter what happens internally, from the outside it appears to be doing the job properly". But this assumes that the existence of mental states is CONDITIONAL upon the existence of observers.

That I am conscious if, and only if, other people are there to see it. Either that, or I have "another" observing person in my own head. In which case, where the hell did he come from? Is he a computer program too, then?

The other thing to point out is that computers themselves are arbitrarily implemented. Most digital computers use variable voltage levels, but it could be anything. So a computer is basically a block of silicon with arbitrarily varying voltage: only the designer ( observer ) actually has the ability to see what bits of the silicon block are doing the computing in the first place. So the computer doesn't know it's a computer in any case. Its existence is actually in the designer's mathematical world, not the physical one. It wouldn't know where it begins or ends, as its existence is observer-relative. So it wouldn't even know it was executing these arbitrary instructions in the first place - the Chinese Room is actually too generous to AI, as it doesn't acknowledge that syntax is intrinsic not to nature but to thinking.

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 6:23 AM by azb0@earthlink.net

John B,

You wrote:

> "Is there no difference between an obervervable feature of a piece of matter and the piece of matter itself ?
I don't make an 'assertion' that matter is distinct from the observable fetures of matter : it's a fact. It is not up for discussion. Likewise there is no argument with the fact that syntactical objects have no causal powers - it's a fact."

I repeatedly "assert" that same "fact", as my "paper and crayons" Turing machine is intended to clarify.

> "If the machine has been designed with materials that have the causal powers of consciousness, then the machine will have consciousness. If the machine has been designed purely as a "function server" defined totally in terms of what it delivers to the user ( and having no intrinsic/specific causal content of its own ) like all contemporary 'robots', then no, it will not have consciousness."

You effectively rule out consciousness by the qualifier "having no intrinsic/specific causal content of its own", to make the latter statement tautological.

And I assume that you mean "materials whose interactions" support consciousness, and not that DNA, lipids, or proteins themselves (no less carbon, oxygen, protons or neutrons) automatically bestow causal powers of consciousness, independent of "arrangement".

> > (A purely deterministic universe would imply that we humans are) "unable to "intend" or generate "meaning", "

> You didn't intend to write this mail about the subject of consciousness ?

I "feel that I intend", but would not a purely deterministic universe imply that my "intention to write" is no different than a tree's "intention to fall over"? Just could not be helped. I could not "choose to write" nor "intend", except delusionally do.

> "All a computer consists of is commands that say "take the contents of register a, do something with it , possibly with the contents of another register b, and then stick it somewhere in memory or disk".

Why not reduce it further? "All a computer does is open and close electrostatic gates in response to electrical potentials consequent to previous flows of electrons through gates." That description purposely blurs the distinction between transistor-electric-behavior and (gross) neural-electric behavior, and forces us to ask "what else is significantly different" (there ARE still significant differences). The fact that the initial "settings" for such a system (computer) are (usually) the intention of a programmer, and thus can be interpreted to represent "syntactic manipulation", does not make the underlying physics "less physical".
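To make that reduction concrete, here is a sketch (my own toy example, nothing more) in which every "logical" operation is assembled from a single NAND primitive, the way a processor is assembled from transistor switches:

#include <stdio.h>

static int nand(int a, int b) { return !(a && b); }

/* Everything below is built purely from NAND. */
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

int main(void)
{
    int a = 1, b = 1; /* a one-bit adder: just gates opening and closing */
    printf("sum=%d carry=%d\n", xor_(a, b), and_(a, b));
    return 0;
}

Whether we call that "syntactic manipulation" or "gates responding to potentials" is a matter of description; the physics is the same either way.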

Granted, I don't believe that the "physics of a digital processor" are sufficient to support consciousness "as I feel consciousness", but the reason must be more subtle. We are arguing "physics" in each case, not comparing the "physical brain" to a syntactic abstraction.

I see that there remain two significant "differences".

1. There is "more going on" to neurons than a mere analogy to transistors can represent (greater variety of physical effects, not simply "signal cascades".)

2. The "algorithmic processor", even down to the level of "gate-charge-transfer", is fully explicable and (in principle) fully determined (as long as the "system" never interacts with the "unpredictable outside world" and thus incorporates "changes" whose consequences were not forseen in advance by the programmer.)

I find (1) the more compelling as an argument that "machine consciousness is difficult", especially "computer-wise", but (2) may play a role as well. If the "brain" were ever "mechanically understood" so well that its behaviors could be interpreted as "syntactic manipulation", its ability to support consciousness may require that its internal "state" be disturbed by "unpredictable outside influence", at least at some point in its development.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 7:30 AM by john.b.davey@btinternet.com

"You effectively rule out consciousness by the qualifier "having no intrinsic/specific causal content of its own", to make the latter statement tautological. "
I wasn't being clear. What I should have emphasised is that you don't care about HOW your robot is implemented physically, and so any causal powers sufficient to create consciousness in the machine would be coincidental and not a result of your design, as your design was concerned solely with delivery of function to you as user/observer.

"And I assume that you mean "materials whose interactions" support consciousness, and not that DNA, lipids, or proteins themselves (no less carbon, oxygen, protons or neutrons) automatically bestow causal powers of consciousness, independent of "arrangement".

Correct.

"I "feel that I intend", "
That's all you need as proof. Really.

"but would not a purely deterministic universe imply that my "intention to write" is no different than a tree's "intention to fall over"?"

You have to differentiate between intentionality in acts of thinking and determinism per se. Intentionality is a component of thinking that is directed toward something in the outside world - it is a feature of psychology. Your intention to write is also a thought act. That you DID intend to write something (i.e. had the psychological processes associated with intending) and then did so doesn't affect, impede, or have anything to do with, in any way, the issue of whether it was pre-determined or not. It's possible to go through a process of intentional mental thinking in a deterministic world. It's that old chestnut - different ontologies. Determinism is about objective facts in the world of physics. Intentionality is an objective fact about the subjective workings of the brain.


"does not make the underlying physics "less physical". "

There are no physics to computers. They are not physical objects. They can be implemented arbitrarily - water and sluice gates work more slowly but just as reliably as electronics. Physically comparing the brain to a computer serves no purpose for this exact reason. They exist in the syntax world of the user.


"1. There is "more going on" to neurons than a mere analogy to transistors can represent (greater variety of physical effects, not simply "signal cascades".) "
Yes - they are matter. Somehow they generate consciousness. But they are not (and this is very important) defined by their extrinsic content (their metrics), as they ARE matter, after all. They are defined by what they ARE. This is a difference which AI people just don't seem to get. Reproducing the movement and arrangement of a certain set of neural metrics (the "signal" patterns, for instance) can't be used as a basis for reproducing the mental effect associated with them. It CAN be used as a basis for investigation. But the signal patterns and external metrics of neural activity do not CAUSE consciousness. They are associated with it. The metrics (signal patterns etc.) are syntactical, and have no causal powers. It's a bit like modelling a nuclear reaction. We can model and simulate a nuclear reaction on a computer, but we do not have the causal powers available in the form of the semantical (and not completely understood) matter of uranium.
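To labour the nuclear example, a simulation of uranium decay is just arithmetic (a rough sketch, using the textbook U-238 half-life):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double half_life = 4.47e9;            /* U-238 half-life, years */
    const double lambda = log(2.0) / half_life; /* decay constant */
    double atoms = 1.0e6;                       /* initial atom count */

    for (int gyr = 0; gyr <= 5; gyr++) {
        printf("t = %d Gyr, N = %.0f\n", gyr, atoms);
        atoms *= exp(-lambda * 1.0e9);          /* advance one billion years */
    }
    return 0;
}

The numbers march down exactly as the physics textbook says they should, and the computer running them emits not one particle of radiation. The causal powers stay with the uranium.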

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 11:35 AM by jonkc@att.com

john.b.davey Wrote:

'The whole point about the man in the Chinese Room is that he actually doesn't have the slightest idea what the function of the Chinese Room is meant to be'
========
The Chinese Room has no point because it assumes the very thing it is trying to prove. The man is the only thing that could understand; the man does not understand; thus there is no understanding anywhere. From that he claims to have proven the man is the only thing that can understand. Idiotic.
=============
'The only way , in fact , to find out what the meaning of the program is, is to find the human being who wrote the program'
==============
In many cases the only way to know what the program will do is run it on a computer and see; the human programmer who wrote it will have no better idea of what it will do than your average caveman. The reason I emphasize what the program will do is that that is objective; meaning, on the other hand, is not.

When talking about meaning and purpose you have to ask "purpose for whom?". What is the purpose of a violin? To the manufacturer the purpose is to get a paycheck. To a drowning man its purpose is to act as a life-preserver. To a musician its purpose is to make music, and to his tone-deaf child its purpose is to be used as a club to swat a bug. To a rock the violin has no purpose at all. To the computer the purpose of the program may be to keep running forever, or maybe it will be to stop after a time; the only way to know is to ask the computer and see what it does.

Re: Chapter 2: I Married a Computer
posted on 08/23/2002 1:56 PM by john.b.davey@btinternet.com

"The Chinese Room has no point because it assumes the very thing it is trying to prove. ..From that he claims to have proven the man is the only thing that can understand. Idiotic."
It is not. What he was trying to state is that symbol processors have no capacity to elucidate meaning, which they don't. They can't possibly know from their symbols what they're 'meant' to be.

"In many cases the only way to know what the program will do is run it on a computer and see, the human programmer who wrote it will have no better idea of what it will do than your average caveman. The reason I emphasize what the program will do is because that is objective, meaning on the other hand is not. "

What rubbish. If I intend to run a program simulating a rainstorm, the meaning of the program is a rainstorm. And rainstorms are objective.

"When talking about meaning and purpose you have to ask "..way to know is ask the computer and see
what it does. "
Instead of trying to windbag your way out/around/up/down, why the hell don't you acknowledge the problem like everybody else does? A program can't tell what it's meant to be. So how do you program a 'human brain' and expect your computer to know that it's meant to be a brain (with an internal conscious state) and not a word processor?

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 12:18 AM by jonkc@att.com

john.b.davey Wrote:

'What he was trying to state is that symbol processors have no capacity to elucidate meaning'
==========
Sure, that's what he was trying to prove and he failed miserably, if you assume the thing you are trying to prove then you can 'prove' anything.
============
'They can't possible know from their symbols what they're 'meant' to be.'
============
Yea yea, we've all heard you say that before, many times in fact, but even an astronomical number of repetitions does not constitute a proof. The way to figure out if someone or something understands something is to ask questions on the topic and judge the wisdom of the response.
============
'If I intend to run a program simulating a rainstorm, the meaninmg of the program is a rainstorm.'
==========
You may have written the program just because you thought it would be fun to simulate a rainstorm, but why should I care why you did it? I don't even like you. For me the meaning may be to predict the weather, or to make money selling the program, or to hear the pleasant sound of rain falling.

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 9:41 AM by john.b.davey@btinternet.com

"Sure, that's what he was trying to prove and he failed miserably, if you assume the thing you are trying to prove then you can 'prove' anything. "
At what point did he assume, in constructing the Chinese Room, that it was not possible for meaning to be implied from symbolic manipulation? Tell me please. I thought he was just creating a simple version of the inside of a computer.

"The way to figure out if someone or something understands something is to ask questions on the topic and judge the wisdom of the response. "
The duck is the painting of the duck.

"You may have writing the program just because you thought it would be fun to simulate a rainstorm but why should I care why you did it, I don't even like you. "
I must confess to a sniggering admiration for infantility of this sort in what would profess to be serious debate. However, maybe you would like to ask: if I were simulating a rainstorm and had no graphical interface (I know you are the sort of person who would need pictures), only producing a series of numbers, how would you know that the program was meant to be a rainstorm?

"For me the meaning may be to predict the weather or make money selling the program it or to hear the pleasant sound of rain falling. "

In which case you would have a program to implement making money OR a program to make the sound. But how would I know, looking at a series of 0s and 1s?
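To press the point, here is the sort of thing I mean (a toy sketch, not a real weather model; the drift rule is made up):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    srand(42);          /* fixed seed: the same "storm" every run */
    double rate = 1.0;  /* nominal rainfall, mm per minute */

    for (int minute = 0; minute < 10; minute++) {
        rate += (rand() % 101 - 50) / 100.0; /* random drift */
        if (rate < 0.0) rate = 0.0;
        printf("%.2f\n", rate);              /* bare numbers, no labels */
    }
    return 0;
}

Strip the comments and the variable names and all you have is a column of figures. Rainstorm? Stock prices? Heart rate? The program can't tell you, and neither can the computer.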

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 10:44 AM by jonkc@att.com

'At what point did he assume, in constructing the Chinese Room, that , it was not possible for meaning to be implied from symbolic manipluation?'
============
He asked the man if he understood; the man said no. He assumed the man was the only place understanding could reside; he concluded the system did not understand; thus man is the only place understanding can reside. Idiotic.
=============
'how would I know lokking a a series of 0s and 1s'
========
The series of 0s and 1s is a question written in a language you don't understand, to find the answer ask a computer and see what it does.

One more thing Mr. Davey, if you have any integrity you will retract that despicable accusation of plagiarism you made against me.

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 1:29 PM by john.b.davey@btinternet.com

"He asked the man if he understood, he said no, "
Well..... he didn't exactly ask the man , face to face , if he understood. He wasked us all, in a kind of general sense, if what he was doing constituted the same thing as an understanding. The guy was basically translating Chinese without understanding Chinese, you must concede that. Or if not that guy , then put myself, who doesn't understand Chinese, I guarantee, in his place.
You see, the whole point is SOMEBODY has to understand Chinese in order to create the Chinese Room. Somebody has to 'understand' which Chinese words mean which English words in order to produce the translation lexicon used by the Chinese Room. Otherwise how do you tie the two things together? At some point, if I am on about, say, a human, I need to know what a human IS without reference to any kind of symbol - I need to know "what a human is" and have a sense of that semantic in order to be able to translate it into Chinese. This is what the man in the Chinese Room is not doing - he's just using a lookup table in the manner of a standard computer program.
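What the man is doing is, in effect, no more than this (a toy sketch; the word pairs are placeholder romanizations, not a real lexicon):

#include <stdio.h>
#include <string.h>

struct entry { const char *english; const char *chinese; };

static const struct entry lexicon[] = {
    { "human", "ren"  },
    { "water", "shui" },
    { "fear",  "pa"   },
};

int main(void)
{
    const char *input = "human";
    for (size_t i = 0; i < sizeof lexicon / sizeof lexicon[0]; i++) {
        if (strcmp(input, lexicon[i].english) == 0) {
            /* The match is made by byte comparison alone - at no point
               does anything here know what a human IS. */
            printf("%s -> %s\n", input, lexicon[i].chinese);
        }
    }
    return 0;
}

The table only exists because somebody who DID understand both languages wrote it down. The loop that consults it understands nothing.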


"The series of 0s and 1s is a question written in a language you don't understand, to find the answer ask a computer "
I do understand them. All I need to do is work out the mathematical operations each of those 0s and 1s entail. Even after all that I am still none the wiser as to what the program is actually meant to do.

"you have any integrity you will retract that despicable accusation of plagiarism"
Unreservedly. I don't know what came over me.
"

Re: Chapter 2: I Married a Computer
posted on 08/24/2002 3:59 PM by jonkc@att.com

John B Davey Wrote:

'The guy was basically translating Chinese without understanding Chinese, you must concede that.'
==========
I concede that Chinese was being translated and I concede that the man did not know Chinese, but I do not concede that Chinese was not understood. The man was just one small part of the system, like a single neuron in a brain. The way to investigate the system is to write in Chinese 'Do you understand Chinese?' and submit the question to the room. I'll bet the answer will be 'yes'; an answer of 'no' would be a bit of a paradox.
=============
'I do understand them [0s and 1s]. All I need to do is work out the mathematical operations each of those 0s and 1s entail.'
=============
It's deterministic, yes, but so what? To understand it, to figure out what it will do, you need to imitate what the computer does; but the computer is much better at that sort of thing than humans are, so the best thing for you to do is just watch the computer and see what it does.
============
'Even after all that I am still none the wiser as to what the program is actually meant to do.'
============
What the program writer meant for it to do is irrelevant because that may have no relation to what the program actually does. With just a few lines of code I can write a program that will behave in ways even God can not predict.
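Here is one such program, essentially the famous Collatz iteration (a few lines, as promised; whether this loop halts for every starting value is an open mathematical problem):

#include <stdio.h>

int main(void)
{
    unsigned long long n = 27; /* pick any positive starting value */
    int steps = 0;

    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        steps++;
    }
    printf("reached 1 after %d steps\n", steps);
    return 0;
}

Nobody has ever proved it must reach 1 for all inputs; to find out what it does for a given n, you run it and watch.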

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 11:16 AM by john.b.davey@btinternet.com

"reason they are so cheap it that there are no objective facts they need to explain "

Rubbish. Consciousness IS an objective fact. It has a subjective quality when experienced, but it is an objective fact. It sounds like you need some kind of introductory explanation of the basic principles of philosophy and ontology.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 12:23 PM by jonkc@att.com

john.b.davey Wrote

'Consciousness IS an objective fact.'

How do you know? Let's say you have developed a marvelous new theory of consciousness; how do you know it is correct? The only way to test it is by observing behavior. Your theory may predict that my current brain state should produce a feeling of sadness, you may even see tears in my eyes, but the only way to know if I have the subjective experience you expect, or any subjective experience at all for that matter, is to ask me, take note of the sounds produced by my mouth, and hope I'm telling the truth. That doesn't sound very objective to me.

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 5:26 PM by wclary5424@aol.com

"How do you know? Let's say you have developed a marvelous new conscious theory, how do you know it is correct? The only way to test it is by observing behavior. Your theory may predict that my current brain state should produce a feeling of sadness, you may even see tears in my eyes, but the only way to know if I have the subjective experience you expect, or any subjective experience at all for that matter is to ask me, take note of the sounds produced by my mouth, and hope I'm telling the truth. That doesn't sound very objective to me."

I think you are getting ontology confused with epistemology.

Consciousness is a fact. I am conscious at this moment. While you are reading this, you are conscious. Consciousness exists.

However, I cannot prove that you are conscious. I cannot *know* that any other entity, whether human, sentient animal, or computer is conscious. I can only infer it from behavior. But that does not mean that consciousness does not exist. If you've ever spent any time with any of the higher non-human primates, you will know what I'm talking about here. You look in their eyes, and you sense intuitively that there's somebody home, so to speak. But there is absolutely no way for me to prove that they are, in fact, conscious beings, absent telepathy, which, alas, seems to not exist.

BC

Re: Chapter 2: I Married a Computer
posted on 08/16/2002 6:57 PM by azb@llnl.gov

BC,

The "fact" you speak of is still a subjective one. As you follow, you "infer" (albeit very reasonably) its existence in your reader, as I (your reader) infer its existence in you the writer.

Do we agree that the "human sensation of consciousness" is at least a manifestation of the physical world (the body in particular) and not some ethereal, or even physical "substance independent of the body"?

If so, then the issue of telepathy becomes quite problematic, even if, for the sake of argument, we imagine it could exist.

Suppose I employ some god-like genie to grant my wishes. Rather than ask the genie "please tell me whether that entity actually is, or is not, experiencing what I would recognize as a conscious waking state", I ask the genie, "Please allow me to experience the mind of that entity (if indeed it has a mind) for one minute."

The genie grants my wish. If the entity is you, I experience you. If it is Ramona, or a typical AI of the day, perhaps I experience nothing (I am unconscious), or if there were a "sufficiently complex AI-like-thing", I experience something very weird. Then the minute is up.

I turn to the genie and say, "Hey, I thought you said you would grant my wish!" The genie replies, "I did. Did you expect to retain some memory of that experience?"

I say, "Sure, why not?", The genie replies, "That would only be possible if you retain some sense of being YOU, while also experiencing that other entity. But that is NOT what the entity you wished to investigate experiences, and that is what you asked for. It simply experiences itself. If you retain some sense of yourself while experiencing the other, you cannot know whether the difference represents part of what the other entity experiences, or merely your interpretation of the disturbance created by the mixing." The experiences of the other entity remain where they are entertained, naturally, with that entity. Its in the physics of that entity."

I say, "Could you not bestow upon me some hybrid memory, alter my chemisry, whatever, to allow me now to have the memory of what that was like? The genie replies, "Just how would I do that?" I cannot know what you, or that other entity experiences without losing myself in the process. I would have no standard for accuracy, and I might create any one of a thousand possible hybrid memories. You could infer nothing in particular from that.

My point in this hypothetical genie-powered telepathy exercise is that, if we could create a technology to "read other human minds", it would probably operate because of some correlated artifacts of the physics/biology. I suppose it is possible, but it may not resolve the "new-machine-consciousness" issue at all.

You might use the device to accurately discover my thoughts, the "hidden number I am thinking of", but not that of the "artificial". That says nothing about whether the artificial entertains a consciousness, since that "sensation" may be modulated by a different medium for which we have no proper correlates. The idea that it could would suggest that "conscious mind" is some sort of uber-fluid that gains an existence of its own, independent of the physics of the substrate.

You might find "an aura" or a "wave manifestation" or something similar, and hold that to be somehow (correlated to) "the presence of consciousness", but that would not tell you whether the other entity "experiences" by that manifestation, even though you might extract the "right hidden number".

I think that the subject of "consciousness" is where epistemology and ontology become indistinguishable (and quite dark.)

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/17/2002 9:40 PM by wclary5424@aol.com

Tony,

I *believe* consciousness to be an epiphenomenon of the brain. Occam's razor seems to require it. Sometimes when I think about Bell's Theorem, I consider other possibilities, but that's a can of worms I don't particularly want to pry open.
The really hard question is: why does consciousness exist at all?


BC

Re: Chapter 2: I Married a Computer
posted on 08/17/2002 11:30 PM by azb0@earthlink.net

BC,

I too believe consciousness is an epiphenomenon of the brain. It is "generated" by the underlying physics. To use the candle flame as an analogy, I do not believe that by extinguishing the candle flame, the former epiphenomenal "glow" goes floating through the ether in search of a new home in which to manifest. The "glow" has no existence except as an artifact of the chemical transitions.

QM-indeterminacy exists in the candle flame as everywhere else, but leads to no observable "special effect". The structure (or lack of structure) in the flame body is incapable of amplifying "tunneling" (for instance) to effect any phenomena of substantial difference. (It may play a role at the very moment an oily rag bursts spontaneously into combustion, but thereafter it is all effectively causal.)

But in physical systems that are highly structured, wherein certain small events can be coherently amplified, QM effects may play a larger role. This is still "brain". Certainly, "epiphenomena of brain" does not mean "of brain chemistry in complete isolation from QM-physics."

> "The really hard question is why does consciousness exist at all?"

The "Why" question is the real can of worms. If you ask "what is consciousness" or "how does it get generated", we have a chance of finding an answer. But "why" suggests (perhaps) purposefulness which may or may not exist.

We might ask "why protons and neutrons". We can perhaps show how they are a consequence of the fundamental forces, but this just pushes the question back to "why does the universe manifest that particular division of forces".

The "fact" that we (and most mammals likely) manifest degrees of consciousness is evidence that it serves to enhance "persistence". It is more useful having it than not. (By "useful", I intend purposeless tendency to persist in the environment. One might argue the individual persists so to perpetuate the species, but again, the species persists to what purpose? Purpose is in the eye of the beholder.)

However the forces that make the H2O molecule a "favorable pattern" originated, they did not do so (I believe) in order to manifest the variety of beautiful snowflakes that occur. Yet those beautiful and intricate snowflake patterns are, in an extended sense, patterns embedded in the relationships that exist among the natural forces.

(I don't know if that is a particular answer to "why consciousness", but its very existence implies it to be a favorable manifestation.)

ASIDE: Some may argue that the feature of "consistency at a distance" which QM indicates may make consciousness some sort of "universal", which we merely interpret as "individuals" due to the circumstance of the physical locality of our "instruments". My eyeballs are usually in the same room as my hands, so to speak, so the "connectedness" is not glaringly apparent. There may be something to this viewpoint.

Cheers! ____tony b____

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 10:14 AM by john.b.davey@btinternet.com

"The "fact" you speak of is still a subjective one. "
No it isn't! It's an objective fact, for instance, that consciousness is subjective in nature. Consciousness as a natural phenomenon is an OBJECTIVE fact. The retort that there is no proof of consciousness is not a response to this, as in this domain, the ontological domain, we are referring to consciousness as the IDEA which we both understand to have some meaning.

Re: Chapter 2: I Married a Computer
posted on 08/18/2002 10:07 AM by john.b.davey@btinternet.com

How do you know matter is made of atoms? You can't see them directly. You can't touch them. You can't talk to them and ask them if they're atoms. But if atoms don't exist it'd be one hell of a surprise. The nature of scientific discovery is like that: ultimately you don't know if atoms exist, but it's a very reasonable belief that they do.
Similarly, a test for consciousness would be based upon a hypothesis about what causes consciousness. In objective scientific terms, if our theory of consciousness's cause seems to hold up, and is backed by experiment, then we can grant an objective consciousness test the same validity as ANY OTHER scientific test, from the size of atoms to the size of the universe to the age of the earth - all facts scientific in nature and based upon bedrocks of experimentation and hypothesis.

Re: Chapter 2: I Married a Computer
posted on 08/15/2002 5:36 PM by azb@llnl.gov

John K,

> "One other thing, if behavior can not indicate consciousness then why on earth did evolution produce it?"

I would not say behavior cannot "indicate" consciousness; it is just not a "proof" that the entity experiences the awareness as we do.

It may well be that evolution, taking the "bottom-up" path of chemistry to forge reactive systems, emotional systems, and finally our analytical capabilities, "found" the artifact of subjective awareness (as we know it) to be valuable, or a natural adjunct to the manner in which the processing occurs.

In contrast, we can take Boolean-logic-based mechanics, and create stuff that BEGINS with the analytic activity. Today's computers can do many complex analytic jobs better than a human, and we don't think they are conscious. Thus, it is at least possible that we could create ever-more-clever analytic-based constructs without (necessarily) engendering the "sense of sentience" we possess. That might depend upon how we implement the processes in a physical manifestation. There may be thousands of possible analytic/sensory-processing physical-basis architectures (what I call "substrates"). Transistors in silicon is just one of these. Perhaps all of them can be capable of supporting "analysis", and only 10% capable of supporting "minds of subjective conscious sensation".

Total conjecture, of course. Real proofs or counter proofs are encouraged. So are good arguments.

Cheers! ____tony b____
