Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0688.html

Gelernter, Kurzweil debate machine consciousness
by Rodney Brooks, Ray Kurzweil, and David Gelernter

Are we limited to building super-intelligent robotic "zombies" or will it be possible and desirable for us to build conscious, creative, volitional, perhaps even "spiritual" machines? David Gelernter and Ray Kurzweil debated this key question at MIT on Nov. 30.


Transcript by MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), published with permission on KurzweilAI.net December 6, 2006. Participants: Yale professor of computer science David Gelernter, Ray Kurzweil, and CSAIL Director Rodney Brooks, acting as moderator, with questions from the audience.

BROOKS: This is a double-headed event today. We're going to start off with a debate. Then we're going—maybe it's a triple-headed event. We're going to start off with a debate, then we're going to have a break for pizza and soda—pizza lover here—outside, and then we're going to come back for a lecture.

The event that this is organized around is the 70th anniversary of a paper by Alan Turing, "On Computable Numbers," published in 1936, a paper which I think one can legitimately regard as the foundation of computer science. It included the invention of what we now call the Turing Machine. And Turing went on to make many more contributions to our field here at the Computer Science and Artificial Intelligence Lab. In 1948, he wrote a paper titled "Intelligent Machinery," which I think is really the foundation of artificial intelligence.

So in honor of that 70th anniversary, we have a workshop going on over the next couple of days and this event tonight. This event is sponsored by the Templeton Foundation. Charles Harper of the Templeton Foundation is here, and so are Mary Ann Meyers and some other people sponsoring this event. And Charles, I have to ask you one question—A or B? You have to say. You have to choose. This is going to determine who goes first, but I'm not telling you who A or B is.

HARPER: A.

BROOKS: OK. So we're going to start this debate between Ray Kurzweil and David Gelernter. And it turns out that Ray is going to go first. Thanks, Charles. So I'm first going to introduce Ray and David. I will point out that after we finish and after the break, we're going to come back at 6:15, and Jack Copeland, who's down here, will then give a lecture on Turing's life. Jack runs AlanTuring.net, the Turing archive in New Zealand, and he's got a wealth of material, including new material that's being declassified over time, and he'll be talking about some of Alan Turing's contributions.

But the debate that we're about to have is really about the AI side of Alan Turing and the limits that we can expect or that we might be afraid of or might be celebrating of whether we can build superintelligent machines, or are we limited to building just superintelligent zombies. We're pretty sure we can build programs with intelligence, but will they just be zombies that don't have the real oomph of us humans? Will it be possible or desirable for us to build conscious, volitional, and perhaps even spiritual machines?

So we're going to have a debate. Ray is going to speak for five minutes and then David is going to speak for five minutes—opening remarks. Then Ray will speak for ten minutes, David for ten minutes —that's a total of 30 minutes, and I'm going to time them. And then we're going to have a 15-minute interplay between the two of them. They get to use as much time as they can get from the other one during that. And then we're going to open up to some questions from the audience. But I do ask that when we have the questions, the questions shouldn't be for you to enter the debate. It would be better if you can come up with some question which you think they can argue about, because that's what we're here to see.

Ray Kurzweil has been a well-known name in artificial intelligence since his appearance on Steve Allen's show in 1965, where he played a piano piece that a computer he had built had composed. Ray has gone on to—

KURZWEIL: I was three years old.

BROOKS: He was three years old, yes. Ray has gone on to build the Kurzweil synthesizers that many musicians use, the Kurzweil reading machines, and many other inventions that are in everyday use. He's got prizes and medals up the wazoo. He won the Lemelson-MIT Prize, and he won the National Medal of Technology, presented by President Clinton in 1999. And Ray has written a number of books that have come out and been very strong sellers, on all sorts of questions about our future and the future of robot kind.

David Gelernter is a professor at Yale University, professor of computer science, but he's sort of a strange professor of computer science, in the sense that he writes essays for Weekly Standard, Time, Wall Street Journal, Washington Post, Los Angeles Times, and many other sorts of places. And I see a few of my colleagues here, and I'm glad they don't write columns for all those places. His research interests include AI, philosophy of mind, parallel distributed systems, visualization, and information management. And you can read all about them with Google if you want to get more details. Both very distinguished people, and I hope we have some interesting things to hear from them. So we'll start with Ray. And five minutes, Ray.

KURZWEIL: OK. Well, thanks, Rodney. You're very good at getting a turnout. That went quickly. [laughter] So there's a tie-in with my tie, which was given to me by Intel. It's a photomicrograph of the Pentium, which I think symbolizes the progress we've made since Turing's relay-based computer Ultra that broke the Nazi Enigma code and enabled Britain to win the Battle of Britain. But we've come a long way since then.

And in terms of this 70th anniversary, the course I enjoyed the most here at MIT, when I was here in the late '60s, was 6.253—I don't remember all the numbers, and numbers are important here —but that was theoretical models of computation, and it was about that paper and about the Turing Machine and what it could compute and computable functions and the busy beaver function, which is non-computable, and what computers can do, and really established computation as a sub-field of mathematics and, arguably, mathematics as a sub-field of computation.

So in terms of the debate topic, I thought it was interesting that there's an assumption in the title that we will build superintelligent machines, we'll build superintelligent machines that are conscious or not conscious. And it brings up the issue of consciousness, and I want to focus on that for a moment, because I think we can define consciousness in two ways. We can define apparent consciousness, which is an entity that appears to be conscious—and I believe, in fact, you have to be apparently conscious to pass the Turing test, which means you really need a command of human emotion. Because if you're just very good at doing mathematical theorems and making stock market investments and so on, you're not going to pass the Turing test. And in fact, we have machines that do a pretty good job with those things. Mastering human emotion and human language is really key to the Turing test, which has held up as our exemplary assessment of whether or not a non-biological intelligence has achieved human levels of intelligence.

And that will require a machine to master human emotion, which in my view is really the cutting edge of human intelligence. That's the most intelligent thing we do. Being funny, expressing a loving sentiment—these are very complex behaviors. And we have characters in video games that can try to do these things, but they're not very convincing. They don't have the complex, subtle cues that we associate with those emotions. They don't really have emotional intelligence. But emotional intelligence is not some sideshow to human intelligence. It's really the cutting edge. And as we build machines that can interact with us better and really master human intelligence, that's going to be the frontier. And in the ten minutes, I'll try to make the case that we will achieve that. I think that's more of a 45-minute argument, but I'll try to summarize my views on that.

I will say that the AI community and I have gotten closer in our assessments of when that will be feasible. There was a conference on my 1999 book, The Age of Spiritual Machines, at Stanford, and there were AI experts there. My feeling then was that we would see it in 2029. The consensus in the AI community was, oh, it's very complicated, it's going to take hundreds of years, if we can ever do it. I gave a presentation—I think you were there, Rodney, as well—at AI50, on the 50th anniversary of the Dartmouth Conference that gave AI its name in 1956. And we had these instant polling devices, and they asked ten different ways when a machine would pass the Turing test—when will we know enough about the brain, when will we have sophisticated enough software, when will a computer actually pass the Turing test. It was basically the same question, and they got the same answer. And of course it was a bell curve, but the consensus was 50 years, which, at least if you think logarithmically, as I do, is not that different from 25 years.
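
A minimal sketch of the "thinking logarithmically" comparison made in the preceding remarks (the year figures are the ones quoted above; the comparison itself is just arithmetic, added as an illustration):

```python
import math

# Estimates quoted above for when a machine might pass the Turing test, in years out.
kurzweil_estimate = 25    # roughly 2029
poll_consensus = 50       # center of the AI50 poll's bell curve
older_consensus = 200     # stand-in for "hundreds of years"

# On a linear scale the first two differ by 25 years, but on a doubling (log) scale
# they are a single doubling apart, while "hundreds of years" is several doublings away.
print(poll_consensus - kurzweil_estimate)                 # 25
print(math.log2(poll_consensus / kurzweil_estimate))      # 1.0
print(math.log2(older_consensus / kurzweil_estimate))     # 3.0
```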

So I haven't changed my position, but the AI community is getting closer to my view. And I'll try to explain why I think that's the case. It's because of the exponential growth in the power of information technology, which will affect hardware, but will also affect our understanding of the human brain, which is at least one source for getting the software of intelligence.

The other definition of consciousness is subjectivity. Consciousness is a synonym for subjectivity and really having subjective experience, not just an entity that appears to have subjective experience. And fundamentally—and I'll try to make this point more fully in my ten-minute presentation—that's not a scientific concept. There's no consciousness detector we can imagine creating, that you'd slide an entity in—green light goes on, OK, this one's conscious, no, this one's not conscious—that doesn't have some philosophical assumptions built into it. So John Searle would make sure that it's squirting human neurotransmitters—

BROOKS: Time's up.

KURZWEIL: OK. And Dan Dennett would make sure it's self-reflexive. But we'll return to this.

[applause]

BROOKS: David?

GELERNTER: Let's see. First, I'd like to say thanks for inviting me. My guess is that the position I'm representing—the anti-cognitivist position, broadly speaking—is not the overwhelming favorite at this particular site. But I appreciate your willingness to listen to unpopular opinions, and I'll try to make the most of it by being as unpopular as I can. [Laughter]

First, it seems to me we won't even be able to build superintelligent zombies unless we attack the problem right, and I'm not sure we're doing that. I'm pretty sure we're not. We need to understand, it seems to me, and model thought as a whole: the cognitive continuum. The mind supports not merely one or a discrete handful of cognitive styles but a continuum or spectrum of thought styles, reaching from focused analytical thought at one extreme, associated with alertness or wide-awakeness, toward steadily less-focused thought, in which our tendency to free-associate increases. Finally, at the other extreme, that tendency overwhelms everything else and we fall asleep.

So the spectrum reaches from focused analysis to unfocused continuous free association and the edge of sleep. As we move down-spectrum towards free association, naturally our tendency to think analogically increases. As we move down-spectrum, emotion becomes more important. I have to strongly agree with Ray on the importance of emotion. We speak of being coldly logical on the one hand, but dreaming on the other is an emotional experience. Is it possible to simulate the cognitive continuum in software? I don't see why not. But only if we try.

Will we ever be able to build a conscious machine? Maybe, but building one out of software seems to me virtually impossible. First, of course, we have to say what conscious means. For my purposes, as in Ray's, consciousness means subjectivity: the presence of mental states that are strictly private, with no visible functions or consequences. A conscious entity can call up some thought or memory merely to feel happy, to enjoy the memory, be inspired or soothed or angered by the thought, get a rush of adrenaline from the thought. And the outside world needn't see any evidence at all that this act of thought or remembering is taking place.

Now, the reason I believe consciousness will never be built out of software is that wherever software is executing, by definition we can separate out, peel off, a portable layer that can run in a logically identical way on any computing platform—for example, on a human mind. I know what it's like to be a computer executing software, because I can execute that separable, portable set of instructions just as an electronic digital computer can and with the same logical effect. If you believe that you can build consciousness out of software, you believe that when you execute the right sort of program, a new node of consciousness gets created. But I can imagine executing any program without ever causing a new node of consciousness to leap into being. Here I am evaluating expressions, loops, and conditionals. I can see this kind of activity producing powerful unconscious intelligence, but I can't see it creating a new node of consciousness. I don't even see where that new node would be—floating in the air someplace, I guess.

And of course, there's no logical difference between my executing the program and the computer's doing it. Notice that this is not true of the brain. I do not know what it's like to be a brain whose neurons are firing, because there is no separable, portable layer that I can slip into when we're dealing with the brain. The mind cannot be ported to any other platform or even to another instance of the same platform. I know what it's like to be an active computer in a certain abstract sense. I don't know what it's like to be an active brain, and I can't make those same statements about the brain's creating or not creating a new node of consciousness.

Sometimes people describe spirituality—to move finally to the last topic—as a feeling of oneness with the universe or a universal flow through the mind, a particular mode of thought and style of thought. In principle, you could get a computer to do that. But people who strike me as spiritual describe spirituality as a physical need or want. My soul thirsteth for God, for the living God, as the Book of Psalms says. Can we build a robot with a physical need for a non-physical thing? Maybe, but don't count on it. And forget software.

Is it desirable to build intelligent, conscious computers, finally? I think it's desirable to learn as much as we can about every part of the human being, but assembling a complete conscious artificial human is a different project. We might easily reach a state someday where we prefer the company of a robot from Wal-Mart to our next door neighbors or roommates or whatever, but it's sad that in a world where we tend to view such a large proportion of our fellow human beings as useless, we're so hot to build new ones. [laughter]

In a Western world that no longer cares to have children at the replacement rate, we can't wait to make artificial humans. Believe it or not, if we want more complete, fully functional people, we could have them right now, all natural ones. Consult me afterwards, and I'll let you know how it's done. [laughter]

BROOKS: OK, great.

GELERNTER: Thank you.

KURZWEIL: You heard glimpses in David's presentation of both of these concepts of consciousness, and we can debate them both. I think principally he was talking about a form of performance that incorporates emotional intelligence. Because even though emotional intelligence seems private, and we assume that there is someone actually home there experiencing the emotions that appear to be the case, we can't really tell that when we look at someone else. In fact, all that we can discuss scientifically is objective observation, and science is really a synonym for objectivity, and consciousness is a synonym for subjectivity, and there is an inherent gulf between them.

So some people feel that actual consciousness doesn't exist, since it's not a scientific concept, it's just an illusion, and we shouldn't waste time talking about it. That's not fully satisfactory, in my view, because our whole moral and ethical and legal system is based on consciousness. If you cause suffering to some other conscious entity, that's the basis of our legal code and ethical values. Some people ascribe some magical or mystical property to consciousness. There were some elements of that in David's remarks, say, in terms of talking about a new node of consciousness and how that would suddenly emerge from software.

My view is it's an emergent property of a complex system. It's not dependent on substrate. But that is not a scientific view, because there's really no way to talk about or to measure the subjective experience of another entity. We assume each other to be conscious. It's a shared human assumption. But that assumption breaks down when we go outside of shared human experience. The whole debate about animal rights has to do with whether these entities are actually conscious. Some people feel that animals are just machines in the old-fashioned sense of that term, that there's nobody really home. Some people feel that animals are conscious. I feel that my cat's conscious. Other people don't agree. They probably haven't met my cat, but —(laughter)

But then the other view is apparent consciousness, an entity that appears to be conscious, and that will require emotional intelligence. There are several reasons why I feel that we will achieve that in a machine, and that has to do with the acceleration of information technology—and this is something I've studied for several decades. And information technology, not just computation, but in all fields, is basically doubling every year in price-performance, capacity, and bandwidth. We certainly can see that in computation, but we can also see it in other areas: the 3D volume resolution of brain scanning is doubling every year, and the amount of data we're gathering on the brain is doubling every year. And we're showing that we can actually turn this data into working models and simulations of brain regions. There are about 20 regions of the brain that have already been modeled and simulated.

And I've actually had a debate with Tomaso Poggio as to whether this is useful, because he kept saying, well, OK, we'll learn how the visual cortex works, but that's really not going to be useful in creating artificial vision systems. And I said, well, when we got these early transformations of the auditory cortex, that actually did help us in speech recognition. It was not intuitive, we didn't expect it, but when we plugged it into the front-end transformations of speech recognition, we got a big jump in performance. They hadn't done that yet with models of the visual cortex. And I saw him recently—in fact, at AI50—and he said, you know, you were right about that, because now they're actually getting these early models of how the visual cortex works, and that has been helpful in artificial vision systems.

I make the case in chapter four of my book that we will have models and simulations of all several hundred regions of the human brain within 20 years. And you have to keep in mind that the progress is exponential. So it's very seductive. It looks like nothing is happening. People dismissed the genome project. Now we think it's a mainstream project, but halfway through the project, only 1% of the project had been done, but the amount of genetic data doubled smoothly every year and the project was done on time. If you can factor in this exponential pace of progress, I believe we will have models and simulations of these different brain regions—IBM is already modeling a significant slice of the cerebral cortex. And that will give us the templates of intelligence, it will expand the AI toolkit, and it'll also give us new insights into ourselves. And we'll be able to create machines that have more facile emotional intelligence and that really do have the subtle cues of emotional intelligence, and that will be necessary to passing the Turing test.
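
A toy calculation makes the genome-project arithmetic above concrete (a sketch only; the 15-year horizon and the doubling-per-year rate are illustrative round numbers, not project data):

```python
# Toy model: the completed fraction doubles every year and reaches 100% in year 15.
# At the halfway point (around year 8) the project is still under 1% done,
# yet with annual doubling it finishes on schedule.
for year in range(1, 16):
    fraction = min(1.0, 2 ** (year - 15))
    print(f"year {year:2d}: {fraction:8.3%} complete")
```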

But that still leaves the key question as to whether or not those entities just appear to be conscious and feeling emotion or whether they really have emotional subjective experiences. David, I think, was giving a sophisticated version of John Searle's Chinese room argument, where—I don't have time to explain the whole argument, but for those of you familiar with it, you've got a guy who's just following some rules on a piece of paper and he's answering questions in Chinese, and John says, well, isn't it ridiculous to think that that system is actually conscious? Or he has a mechanical typewriter which types out answers in Chinese, but it's following complex rules. The premise seems absurd that that system could actually have true understanding and be conscious when it's just following a simple set of rules on a piece of paper.

Of course, the sleight of hand in that argument is that this set of rules would be immensely complex, and the whole premise that such a simple system could, in fact, realistically answer unanticipated questions in Chinese or any language is unrealistic. Because basically what the man is doing in the Chinese room, in John Searle's argument, is passing a Turing test. And that entity would have to be very complex. And in that complexity is a key emergent property. So David says, well, it seems ridiculous to think that software could be conscious—and I'm not sure which flavor of consciousness he's using there, the true subjectivity or just apparent consciousness—but in either case it seems absurd that a little software program could display that kind of complexity and emergent awareness.

But that's because you're thinking of software as you know it today, not of a massively parallel system, as the brain is, with 100 trillion internal connections, all of which are computing simultaneously—and in fact we can model those internal connections and neurons quite realistically in some cases today. We're still in the early part of that process. But even John Searle agrees that a neuron is basically a machine and can be modeled and simulated, so why can't we do that with a massively parallel system with 100 trillion-fold parallelism? And if that seems ridiculous, it is ridiculous today, but it's not ridiculous with the kind of technology we'll have after 30 more doublings of the price-performance, capacity, and bandwidth of information technology, the kind of technology we'll have around 2030.
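
The "30 more doublings" figure above is straightforward arithmetic, shown here as a sketch (the doubling-per-year assumption and the 2030 date are the speaker's, not measured facts):

```python
# 30 annual doublings of price-performance, capacity, or bandwidth amount to
# roughly a billion-fold improvement; starting around 2000, that lands near 2030.
doublings = 30
factor = 2 ** doublings
print(factor)             # 1073741824
print(f"{factor:.1e}")    # 1.1e+09, i.e. about a billion-fold
```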

These massively parallel systems will have the complexity of the human brain, which is a moderate level of complexity, because the design of the human brain is in the genome, and the genome has 800 million bytes. But that's uncompressed, and it has massive redundancies—the Alu sequence alone is repeated 300,000 times. If you apply lossless compression to the genome, you can reduce it to 30-50 million bytes, which is not simple, but it's a level of complexity we can manage.

BROOKS: Ray, the logarithm of your remaining time is one. [laughter]

KURZWEIL: So the—we'll be able to achieve that level of complexity. We are making exponential progress in reverse engineering the brain. We'll have systems that have the suppleness of human intelligence. This will not be conventional software as we understand it today. There is a difference in the (inaudible) field of technology when it achieves that level of parallelism and that level of complexity, and I think we'll achieve that if you consider these exponential progressions. And it still doesn't penetrate the ultimate mystery of how consciousness can emerge, true subjectivity. We assume each other to be conscious, but that assumption breaks down in the case of animals, and we'll have a vigorous debate when we have these machines. But I'll make one point. I'll make a prediction that we will come to believe these machines, because they'll be very clever and they'll get mad at us if we don't believe them, and we won't want that to happen. So thank you.

BROOKS: OK. David?

GELERNTER: Well, thank you for those very eloquent remarks. And I want to say, first of all, many points were raised. The premise of John Searle's Chinese room, and of the related thought experiment that I outlined, is certainly unrealistic. Granted, the premise is unrealistic. That's why we have thought experiments. If the premise were not unrealistic, if it were easy to run in a lab, we wouldn't need to have a thought experiment.

Now, the fact remains that when we conduct a thought experiment, any thought experiment needs to be evaluated carefully. The fact that we can imagine something doesn't mean that what we imagine is the case. We need to know whether our thought experiment is based on experience. I would say the thought experiment of imagining that you're executing the instructions that constitute a program or that realize a virtual machine is founded on experience, because we've all had the experience of executing algorithms by hand. It isn't any—and there's no exotic ingredient in executing instructions. I may be wrong. I don't know for sure what would happen if I executed a truly enormous program that went on for billions of pages. But I don't have any reason for believing that consciousness would emerge. It seems to me a completely arbitrary claim. It might be true. Anything might be true. But I don't see why you make the claim. I don't see what makes it plausible.

You mentioned massive parallelism, but massive parallelism, after all, adds absolutely zero in terms of expressivity. You could have a billion processors going, or ten billion or ten trillion or 10^81, and all those processors could be simulated on a single jalopy PC. I could run all those processes asynchronously on one processor, as you know, and what I get from parallelism is performance, obviously, and a certain amount of cleanliness and modularity when I write the program, but I certainly don't get anything in terms of expressivity that I didn't have anyway.
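
Gelernter's expressivity point here is the standard observation that any parallel computation can be emulated step for step on a single processor; only wall-clock speed changes, not what can be computed. A minimal sketch of such an emulation (the unit count and the update rule are toy placeholders, not a model of anything in the brain):

```python
# Emulate N "parallel" units on one processor by stepping them round-robin.
# Each unit computes exactly what it would compute on its own processor.
N = 8                                   # stand-in for billions of processors
state = [i * 0.1 for i in range(N)]

def unit_update(i, snapshot):
    # Toy rule: each unit averages itself with its left neighbour.
    return 0.5 * (snapshot[i] + snapshot[(i - 1) % N])

for step in range(3):
    snapshot = list(state)              # freeze inputs, as simultaneous hardware would see them
    state = [unit_update(i, snapshot) for i in range(N)]

print(state)
```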

You mentioned consciousness, which is the key issue here. And you pointed out consciousness is subjective. I'm only aware of mine, you're only aware of yours, granted. You say that consciousness is an emergent property of a complex system. Granted, of course, the brain is obviously a complex system and consciousness is clearly an emergent property. Nobody would claim that one neuron tweezed out of the brain was conscious. So yes, it is an emergent property. The business about animals and people denying animal consciousness, I haven't really heard that since the 18th century, but who knows, maybe there are still Cartesians out there—raise your hands.

But in the final analysis, although it's true that consciousness is irreducibly subjective, you can't possibly claim to understand the human mind if you don't understand consciousness. It's true that I can't see yours and you can't see mine. It doesn't change the fact that I know I'm conscious and you know that you are. And I'm not going to believe that you understand the human mind unless you can explain to me what consciousness is, how it's created and how it got there. Now, that doesn't mean that you can't do a lot of useful things without being—creating consciousness. You certainly can. If your ultimate goal is utilitarian, forget about consciousness. But if your goals are philosophical and scientific and you want to understand how the mind really operates, then you must be able to tell me how consciousness works, or you don't have a theory of the human mind.

One element, I think, was left out of your discussion of the thought experiment. Granted, we're able to build more and more complex systems, and they are more and more powerful, and we're able to build more and more accurate and effective simulations of parts of the brain and indeed of other parts of the body. Keep in mind that when we allow for the importance of emotion in thinking, it's clear that you don't just think with your brain, you think with your body. When you have an emotion, the body acts as a resonator or a sounding board or an amplifier, and you need to understand how the body works, as well as how the brain works, if you're going to understand emotion. But granted, we're able to build more and more complex and effective simulators.

What isn't clear is the role of the brain's chemical structure. The role of the brain stuff itself, of course, is a point that Searle harps on, but it goes back to a paper by Paul Ziff in the late 1950s, and many people have remarked on this point. We don't have the right to dismiss out of hand the role of the actual chemical makeup of the brain in creating the emergent property of consciousness. We don't know whether it can be created using any other substance. Maybe it can't and maybe it can. It's an empirical question.

One is reminded of the famous search that went on for so many centuries for a substitute source of the pigment ultramarine. Ultramarine is a tremendously important pigment for any painter. You get it from lapis lazuli, and there are not very many sources of lapis lazuli. It's very expensive, and it's a big production number to get it and grind it down and turn it into ultramarine. So ultramarine paint used to be as expensive as gold leaf. People wanted to know, where else can I get ultramarine? And they went to the scientific community, and the scientific community said, we don't know. There's no law that says there is some way to get ultramarine other than from lapis lazuli, but we'll try. And at a certain point in the 19th century, a team of French chemists did succeed in producing a synthetic ultramarine pigment which was indeed much cheaper than lapis lazuli. And the art world rejoiced.

The moral of the story? If you can do it, great, but you have no basis for insisting on an a priori assumption that you can do it. I don't know whether there is a way to achieve consciousness in any way other than living organisms achieve it. If you think there is, you've got to show me. I have no reason for accepting that a priori. And I think I'm finished.

BROOKS: I can't believe it. Everyone stopped—Ray, I think—stay up there, and we'll—now we'll go back and forth in terms of, Ray, maybe you want to answer that.

KURZWEIL: So I'm struggling as I listen to your remarks, David, to really tell what you mean by consciousness. I've tried to distinguish these two different ways of looking at it. The objective view is usually what people lapse into when they talk about consciousness. They talk about some neurological property, or they talk about self-reflection: an entity that can create models of its own intelligence and behavior and model itself, or run what-if experiments in its mind, or have imagination, thinking about itself and transforming models of itself. That kind of self-reflection, they say, is consciousness. Or maybe it has to do with mirror neurons and the fact that we can empathize—that is to say, understand the consciousness or the emotions of somebody else.

But that's all objective performance. And these—our emotional intelligence, our ability to be funny or be sad or express a loving sentiment, those are things that the brain does. And I'd make the case that we are making progress, exponential progress in understanding the human brain and different regions, and modeling them in mathematical terms and then simulating them and testing those simulations. And the precision of those simulations is gearing up. We can argue about the timeframe. I think, though, within a quarter century or so, we will have detailed models that—and simulations that can then do the same things that the brain does apparently. And we won't be able to really tell them apart.

That is what the Turing test is all about, that this machine will pass the Turing test. But that is an objective test. We could argue about the rules. Mitch Kapor and I argued for three months about the rules. Turing wasn't very specific about them. But it is an objective test and it's an objective property. So I'm not sure if you're talking about that or talking about the actual sense one has of feeling, your apparent feelings, the subjective sense of consciousness. And so you talk about—

GELERNTER: (inaudible), could I answer that question?

BROOKS: Yeah, let (inaudible).

GELERNTER: You say there are two kinds of consciousness, and I think you're right. I think most people, when they talk about consciousness, think of something that's objectively visible. As I said, for my purposes, I want consciousness to mean mental states—specifically, mental states that have no external functionality.

KURZWEIL: But that's still—

GELERNTER: You know that you are capable of feeling or being happy. You know you're capable of thinking of something good that makes you feel good, of thinking of something bad that makes you depressed, or thinking of something outrageous that makes you angry. You know you're capable of mental states that are your property alone. As you say, there's objective—absolutely—

KURZWEIL: But these mental states do have—

GELERNTER: That's what I mean by consciousness.

KURZWEIL: But these mental states still have objective neurological correlates. And in fact, we now have means by which we can begin to look inside the brain with increasing resolution—doubling in 3D volume every year—to actually see what's going on in the brain. So if I'm sitting there quietly, thinking happy thoughts and making myself happy, there are actually things going on inside the brain, and we're able to see them. And so now this supposedly subjective mental state is, in fact, becoming an objective behavior. Not—

GELERNTER: Can I comment on that? I think the idea that you're arguing with Descartes is a straw man approach. I don't think anybody argues anymore that the mind is a result of mind stuff, some intangible substance that has no relation to the brain. By arguing that consciousness is subjective—and I'm agreeing with you that consciousness is subjective—I'm certainly not denying that it's created by physical mechanisms. I'm not claiming there's some magical or transcendental metaphysical property. But that doesn't change the fact that in terms of the way you understand it and perceive it, your experience of it is subjective. That was your term, and I'm agreeing with you. And that doesn't change the fact that it is created by the brain.

Clearly, we're reaching better and better understandings of the brain and of everything else. You've said that a few times, and I certainly don't disagree. But the fact that we're getting better and better doesn't mean that we're necessarily going to reach any arbitrary goal. It depends on our methods. It depends on whether we understand the problem the right way. It depends on whether we're taking the right route. It seems to me that understanding consciousness is necessary. Unless we understand consciousness as this objective phenomenon that we're all aware of, our brain simulators haven't really told us anything fundamental about the human mind. They haven't told us what I want to know.

KURZWEIL: I think our brain simulators are going to have to work not just at the level of the Turing test, but at the level of measuring the objective neurological correlates of these supposedly internal mental states. There's some information processing going on when we daydream and think happy thoughts or sad thoughts or worry about something. The same kinds of things are going on as when we do more visibly intelligent tasks. We're, in fact, more and more able to penetrate that by seeing what's going on and modeling these different regions of the brain, including, say, the spindle cells and the mirror neurons, which are involved with things like empathy and emotion—which are uniquely human, although a few other animals have some of them—and really beginning to model this.

We're at an early stage, and it's easy to ridicule the primitiveness of today's technology, which will always appear primitive compared to what will be feasible, given the exponential progression. But these internal mental states are, in fact, objective behaviors, because we will need to expand our definition of objective behavior to include the kinds of things that we can see when we look inside the brain.

GELERNTER: If I could comment on that? Suppose your tests are unable to distinguish a sharply-focused mental state, in which I'm able to concentrate on a problem without my mind drifting and to solve it, from a mental state in which my mind is wandering, I am unable to focus or concentrate on what I'm doing, and then I start dreaming. In fact, cognitive psychologists have found that we start dreaming and then we fall asleep. If your tests or your simulators are unable to distinguish between the mental state of dreaming or continuous free association on the one hand and focused, logical, analytic problem-solving on the other, then I think you're just telling us that your tests have failed, because we know that these states are different and we want to know why they're different. It doesn't do any good to say, well, they're caused in the same way. We need to explain the difference that we can observe.

BROOKS: Can I ask a question which I think gets at what this disagreement is? Then I'll ask you two different questions. The question for David is, what would it take to convince you so that you would accept that you could build a conscious computer built on digital substrate? And Ray, what would it take to convince you that digital stuff isn't good enough, we need some other chemicals or something else that David talked about?

KURZWEIL: To answer it myself, I wouldn't get too hung up on digital, because, in fact, the brain is not digital. The neurotransmitters are kind of a digitally-controlled analog phenomenon. But the important thing is to figure out what is salient, how information is modeled, and what these different regions are actually doing to transform information.

The actual neurons are very complex. There's lots of things going on, but we find that one region of the auditory cortex is basically carrying out a certain type of algorithm; the information is represented perhaps by the location of certain neurotransmitters in relation to one another, whereas in another case it has to do with the production of some unique neurotransmitter. There are different ways in which the information is represented. And these are chemical processes, but we can model really anything like that, at whatever level of specificity is needed, digitally. We know that. We can model it analog—

BROOKS: OK, so you didn't answer the question. Can you then answer the question? (laughter)

GELERNTER: I will continue in exactly the same spirit, by not answering the question. I wish I could answer the question. It is a very good question and a deep question. Given the fact that mental states that are purely private are also purely subjective, how can we tell when they are present? And the fact is, just as you don't know how to produce them, I don't know how to tell whether they are there. It's a research question, it's a philosophical question.

We do know how to understand particular technologies. That is, suppose you say, I've created consciousness and I've done it by running software on a digital computer. I can think about that and say I don't buy that, I don't believe there's consciousness there. If you wheel in some other technology, my only stratagem is to try and understand that new technology. I need to understand what you're doing, I need to understand what moves you're making, because unfortunately I don't know of any general test. The only test that one reads about or hears about philosophically is relevant similarity—that is, we assume that our fellow human beings are conscious because we can see they're people like us. We assume that if I have mental states, other similar creatures have mental states. And we make that same assumption about animals. And the more similar to us they seem, the more we assume their mental states are like ours.

How are we going to handle creatures who are—or things or entities, objects, that are radically unlike us and are not organic? It's a hard question and an interesting question. I'd like to see more work done on it.

KURZWEIL: In some ways, they'll be more like us than animals are, because animals are not perfect models of humans either medically or mentally. Whereas as we really reverse-engineer what's going on, the salient processes, and learn what's important in the different regions of the brain, and recreate those properties and the ability to transform information in similar ways, we'll get an entity that in fact acts very human-like, a lot more human-like than an animal. It can pass a Turing test, for example, which involves mastery of language, which animals, for the most part, basically don't have. They will be closer to humans than animals are.

If we really model—take an extreme case. I don't think it's necessary to model neuron by neuron and neurotransmitter by neurotransmitter, but one could in theory do that. And we do, in fact, have highly detailed simulations of neurons already, of one neuron or a cluster of three or four of them. So why not extend that to 100 billion neurons? It's theoretically possible, and it's a different substrate, but it's really doing the same things. And it's closer to humans than animals are.

BROOKS: So while David responds, if people who want to ask questions can come to the two microphones. Go ahead.

GELERNTER: When you say act very human-like, this is a key issue. You have to keep in mind that the Turing test is rejected by many people, and has been from the very beginning, as a superficial test of performance, a test that fails to tell us anything about mental states, fails to tell us the things that we really most want to know. So when you say something acts very human-like, that's exactly what we don't do when we attribute the presence of consciousness on the basis of relevant similarity.

When I see somebody, even if he isn't acting human-like at all, if he's fast asleep, even if he's out cold, I don't need to see him do anything, I don't need to have him answer any fancy questions on the Turing test. I can see he's a creature like I am, and I therefore attribute to him a mind and believe he's capable of mental states. On the other hand, the Turing test, which is a test of performance rather than states of being, has been—has certainly failed to convince people who are interested in what you would call the subjective kind of consciousness.

KURZWEIL: Well, I think now we're—

GELERNTER: That doesn't tell me anything about—

KURZWEIL: Well, now I think we're getting somewhere, because I would agree. The Turing test is an objective test. And we can argue about making it super-rigorous and so forth, but if an entity passed that test, the super-rigorous one, it is really convincingly human. It's convincingly funny and sad, and it really is displaying those emotions in a way that we cannot distinguish from human beings. But you're right—I mean, this gets back to a point I made initially. That doesn't prove that that entity is conscious, and we don't absolutely know that other people are conscious. I think we will come to accept them as conscious. That's a prediction I can make. But fundamentally, this is the underlying ontological question.

There is actually a role for philosophy, because it's not fundamentally a scientific question. If you reject the Turing test or any variant of it, then we're just left with this philosophical issue. My own philosophical take is if an entity seems to be conscious, I would accept its consciousness. But that's a philosophical and not a scientific position.

BROOKS: So I think we'll take the first question. And remember, not a monologue, something to provoke discussion.

M: Yeah, no problem. Let's see. What if everything is conscious and connected, and it's just a matter of us learning how to communicate and connect with it?

KURZWEIL: That's a good point, because we can communicate with other humans, to some extent—although history is full of examples where we dehumanize a certain portion of the population and don't really accept their conscious experience—and we have trouble communicating with animals, so that really underlies the whole animal rights debate. What's it like to be a giant squid? Their behavior seems very intelligent, but it's also very alien, and we don't even have the terminology to express that, because those are not human experiences. And that is part of the deep mystery of consciousness and gets at the subjective aspects of it.

But as we really begin to model our own brain and then extend that to other species, as we're doing with the genome—we're now trying to reverse-engineer the genome of other species, and we'll do the same thing ultimately with the brain—that will give us more insight. We'll be able to translate into our own human terms the kinds of mental states we see manifested, as we really understand how to model other brains.

GELERNTER: If we think we are communicating with a software-powered robot, we're kidding ourselves, because we're using words in a fundamentally different way. To use an example that Turing himself discusses, we could ask the computer or the robot, do you like strawberries, and the computer could lie and say yes or it could, in a sense, tell the truth and say no. But the more fundamental issue is that not only does it not like strawberries, it doesn't like anything. It's never had the experience of liking, it's never had the experience of eating. It doesn't know what a strawberry is or any other kind of berry or any other kind of fruit or any other kind of food item. It doesn't know what liking is, it doesn't know what hating is. It's using words in a purely syntactic way with no meanings behind them.

KURZWEIL: This is now the Searlean argument, and John Searle's argument can be really rephrased to prove that the human being has no understanding and no consciousness, because each neuron is just a machine. Instead of just shuffling symbols, it's just shuffling chemicals. And obviously, just shuffling chemicals around is no different than shuffling symbols around. And if shuffling chemicals and symbols around doesn't really lead to real understanding or consciousness, then why isn't that true for a collection of 100 neurons, which are all just little machines, or 100 billion?

GELERNTER: There's a fundamental distinction, which is software. Software is the distinction. I can't download your brain onto the computer up there—

KURZWEIL: Well, that's just a limitation of my brain, because we don't have—we don't have quick downloading ports.

GELERNTER: You need somebody else's brain in the audience?

KURZWEIL: No, that's something that biology left out. We're just not going to leave that out of our non-biological base.

GELERNTER: It turns out to be an important point. It's the fundamental issue—

KURZWEIL: It's a limitation, not—

GELERNTER: I think there's a very big difference whether I can take this computer and upload it to a million other computers or to machines that are nothing like this digital computer, to a Turing machine, to an organic computer, to an optical computer. I can upload it to a class full of freshmen, I can upload it to all sorts of things. But your mind is yours and will never be downloaded (multiple conversations; inaudible)—

KURZWEIL: That's just because we left—

GELERNTER: It's stuck to your brain.

KURZWEIL: We left out the—

GELERNTER: And I think that's a thought-provoking fact. I don't think you can just dismiss it as an—

KURZWEIL: You're posing that as a—

GELERNTER:—envir—a developmental accident. Maybe it is, but—

KURZWEIL: You're posing that as a benefit and advantage of biological intelligence, that we don't have these quick downloading ports to access information—

GELERNTER: Not an advantage. It's just a fact.

KURZWEIL: But that's not an advantage. If we added quick downloading ports, which we will add to our non-biological brain emulations, that's just an added feature. We could leave it out. But we put it in there, that doesn't deprive it of any capability that it would otherwise have.

GELERNTER: You think you could upload your mind to somebody with a different body, with a different environment, who had a different set of experiences, who had a different set of books, feels things in a different way, has a different set of likes, responds in a different kind of way, and get an exact copy of you? I think that's a naïve idea. I don't think there's any way to upload your mind anywhere else, unless you could upload your entire being, including your body.

KURZWEIL: Well, it's hard to upload to another person who already has a brain and a body that's—it's like trying to upload to a machine that's incompatible. But ultimately we will be able to gather enough data on a specific brain and simulate that, including our body and our environmental influences.

BROOKS: Next question.

M: Thanks. If we eventually develop a machine which appears intelligent, and let's say given appropriate body so that it can answer meaningful questions about how does a strawberry taste or something like that or whether it likes strawberries, if we are wondering if this machine is actually experiencing consciousness the same way that we do, why not just ask it? They'll presumably have no reason to lie if you haven't specifically gone out of your way to program that in.

KURZWEIL: Well, that doesn't tell us anything, because we can ask it today. You can ask a character in a video game and it will say, well, I'm really angry or I'm sad or whatever. And we don't believe it, because it doesn't—it's not very convincing yet. It doesn't—because it doesn't have the subtle cues and it's not as complex and not a realistic emulation of—

M: Well, if we built 1000 of them, let's say—

GELERNTER: I strongly agree with (inaudible)—

M:—presumably they wouldn't all agree to lie ahead of time. Somebody—one of them might tell us the truth if the answer is no.

BROOKS: We'll finish that question (multiple conversations; inaudible)—

GELERNTER: I strongly agree. Keep in mind that the whole basis of the Turing test is lying. The computer is instructed to lie and pass itself off as a human being. Turing assumes that everything it says will be a lie. He doesn't talk about the real deep meaning of lying, or he doesn't care about that, and that's fine, that's not his topic. But he'd—it's certainly not the case that the computer is in any sense telling the truth. It's telling you something about its performance, not something about facts or reality or the way it's made or what its mental life is like.

KURZWEIL: John Searle, by the way, thinks that a snail could be conscious if it had this magic property, which we don't understand, that causes consciousness. And when we figure it out, we may discover that snails have it. That's his view. So I do think that—

GELERNTER: Do you think it's inherently implausible that we should need a certain chemical to produce a certain result? Do you think chemical structure is irrelevant?

KURZWEIL: No, but we can simulate chemical interactions. We just simulated the other day something that people said will never be able to be simulated, which is protein folding. And we can now take an arbitrary amino acid sequence and actually simulate and watch it fold up, and it's an accurate simulation (multiple conversations; inaudible)

GELERNTER: You understand it, but you don't get any amino acids out. As Searle points out, if you want to talk Searlean, you can simulate photosynthesis and no photosynthesis takes place. You can simulate a rainstorm, nobody gets wet. There's an important distinction. Certainly you're going to understand the process, but you're not going to produce the result—

KURZWEIL: Well, if you simulate creativity, you'll—if you simulate creativity, you'll get real ideas out.

BROOKS: Next—sure.

M: So up until this point, there seems to have been a lot of discussion just about a fully—just software, just a human or whatnot. But I'm kind of curious your thoughts towards more of a gray area, if it's possible. That is, if we in some way augment the brain with some sort of electronic component, or somebody has some sort of operation to add something to them. I don't think it's been done yet today, but just is it possible to have fully—what you would consider to be a fully conscious human take part of the brain out, say, replace it with something to do a similar function, and then have obviously the person still survive. Is that person conscious? Is it (inaudible)?

KURZWEIL: Absolutely. And we've done things like that, which I'll mention. But I think—in fact, that is the key application or one key application of this technology. We're not just going to create these superintelligent machines to compete with us from over the horizon. We're going to enhance our own intelligence, which we do now with the machines in our pockets—and when we put them in our bodies and brains, we'll enhance our bodies and brains with them.

But we are applying this for medical problems. You can get a pea-sized computer placed in your brain or placed at biological neurons (inaudible) Parkinson's disease. And in fact, the latest generation now allows you to download new software to your neural implant from outside the patient, and that does replace the function of the corpus of biological neurons. And now you've got biological neurons in the vicinity getting signals from this computer where they used to get signals from the biological neurons, and this hybrid works quite well. And there's about a dozen neural implants, some of which are getting more and more sophisticated, in various stages of development.

So right now we're trying to bring back "normal" function, although normal human function is in fact a wide range. But ultimately we will be sending blood cell-sized robots into the bloodstream non-invasively to interact with our biological neurons. And that sounds very fantastic. I point out there are already four major conferences on blood cell-sized devices that can perform therapeutic functions in animals, and—we don't have time to discuss all that, but we will—

BROOKS: Let's hear David's response.

GELERNTER: When you talk about technological interventions that could change the brain, it's a fascinating topic, and it can do a lot of good. And one of the really famous instances of that is the frontal lobotomy, an operation invented in the 1950s or maybe the late 1940s. Made people feel a lot better, but somehow it didn't really catch on, because it bent their personality out of shape. So the bottom line is that not everything we do, not every technological intervention that affects your mental state, is necessarily going to be good.

Now, it is a great thing to be able to come up with something that cures a disease, that makes somebody feel better. We need to do as much of that as we can, and we are. But it's impossible to be too careful when you fool around with consciousness. You may make a mistake that you will regret. And lobotomies cannot be undone.

BROOKS: I'm afraid this is going to be the last question.

M: How close do the brain simulation people know they are to the right architecture, and how do they know it? You made the assertion that you don't need to simulate the neurons in detail, and that the IBM people are simulating a slice of neocortex and that's good. And I think that is good, but do they have a theory that says this architecture good, this architecture not good enough? How do they measure it?

KURZWEIL: Well, say, in the case of the simulation of a dozen regions of the auditory cortex done on the West Coast, they've applied sophisticated psychoacoustic tests to the simulation and they get very similar results to those from applying the same tests to human auditory perception. There's a simulation of the cerebellum where they apply skill formation tests. It doesn't prove that these are perfect simulations, but it does show it's on the right track. And the overall performance of these regions appears to be doing the kinds of things that we can measure, that the biological versions do. And the scale and sophistication and resolution of these simulations is scaling up.
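The validation approach described here can be pictured as a short sketch: run the same psychoacoustic test battery on the simulation and on human listeners, then measure how closely the two response profiles agree. The test names and scores below are purely illustrative, not data from the studies cited.

# A minimal, hypothetical sketch of comparing a simulated region's test
# results against human results on the same battery (made-up numbers).
from statistics import correlation, mean   # correlation requires Python 3.10+

tests = ["tone_masking", "pitch_discrimination", "loudness_scaling", "gap_detection"]
human_scores = {"tone_masking": 0.82, "pitch_discrimination": 0.91,
                "loudness_scaling": 0.76, "gap_detection": 0.88}
sim_scores   = {"tone_masking": 0.79, "pitch_discrimination": 0.93,
                "loudness_scaling": 0.74, "gap_detection": 0.85}

h = [human_scores[t] for t in tests]
s = [sim_scores[t] for t in tests]

r = correlation(h, s)                          # agreement in the shape of the profile
mad = mean(abs(a - b) for a, b in zip(h, s))   # average absolute difference

print(f"correlation: {r:.3f}, mean abs. difference: {mad:.3f}")
# A high correlation and a small difference would support, not prove, the claim
# that the simulation reproduces the measured behavior of the biological region.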

The IBM one on the cerebral cortex is actually going to do it neuron by neuron and ultimately at the chemical level, which I don't believe is actually necessary when we—ultimately, to actually create those functions, when we learn the salient algorithms, we can basically implement them using our computer science methods more efficiently. But that's a very useful project to really understand how the brain works.

GELERNTER: I'm all in favor of neural simulations. I think one should keep in mind that we don't think just with our brains, we think with our brains and our bodies. Ultimately, we'll have to simulate both. And we also have to keep in mind that unless our simulators can tell us not only what the input/output behavior of the human mind is, but how it understands and how it produces consciousness —unless it can tell us where consciousness comes from, it's not enough to say it's an emergent phenomenon. Granted, but how? How does it work? Unless those questions are answered, we don't understand the human mind. We're kidding ourselves if we think otherwise.

BROOKS: So with that, I think I'd like to thank both Ray and David. [applause]

   
 

Mind·X Discussion About This Article:

conscious machines
posted on 12/08/2006 6:31 PM by davidishalom1

Will it be a conscious machine, or just an intelligent zombie pretending to be conscious? This may be a life-or-death issue in the future, rather than just a philosophical query. I suggest another viewpoint on this issue. Suppose we had a personalized artificial intelligence of ourselves, someone like you in cyberspace or in a nano-manufactured android robot. How could I tell if this "info-duplication" of mine is really conscious? By being synchronously connected to my virtual self across a myriad of parameters, I would be able to see what he sees, hear what he hears, feel what he feels, and even be directly connected to his thoughts, thus boldly experiencing his experience and gaining direct insight, through inner experience, into my virtual person's consciousness.

Re: conscious machines
posted on 12/08/2006 7:39 PM by eldras

It's good to see Rodney so happy. I think Ray is brave, stepping up to speak time and time again.

A great general wrote that world leaders are similar to men with inventive minds, because the qualities needed for leading an army like Alexander's and for being a revolutionary in thought are similar:

tenacity;
determination;
refusal to be swayed;
the fortitude to continue through unpopularity, when people laugh, then think you're a nuisance, then finally say that is exactly what they thought all along (Schopenhauer).

The qualities are the same, and it is staggering to see.


I don't have a problem with propositionally defining consciousness, nor with conceiving an architecture for it to be programmed into a machine.

But consciousness is NOT intelligence.

It is a subset of subroutines that takes in and models data from the world.

One thing it has to do is pattern-match.

This is the real skill of RK.

Re: conscious machines
posted on 02/23/2008 9:10 AM by francofiori2004

Who cares if smart robots are also conscious?
If they are as smart as humans (or more) and they appear to be conscious, we'll treat them as humans. They will work and have the same rights as biological humans.
We do the same now for each other, because we really don't know if all of us are conscious, or if we are all equally conscious, but we just assume it, and that's OK, I think.
Maybe if you have a higher IQ you are also more conscious; we don't know that. But it's not really important pragmatically; it's only a philosophical question.
So it's no big deal.

Re: conscious machines
posted on 02/23/2008 9:18 AM by PredictionBoy

Smart remotes will certainly be deeply aware of their environment and of the individuals with which they interact. I believe this awareness will, over time, become hyperawareness, hyperobservance: sort of like spider-sense or the Force, sensing and understanding signals not picked up by humans, for example.

If they are that aware, they will be conscious. But their minds will be profoundly different from ours: hyper-rational, far more explicitly processing, distilling, and discarding their sensory inputs.

Re: conscious machines
posted on 02/23/2008 9:19 AM by PredictionBoy

smart remotes


correction, smart robots, duh

Re: conscious machines
posted on 07/10/2009 8:53 PM by AeroJMan

Sensory inputs indeed. We can't label it, but it's fun to pretend. If we design one that quacks, is it a duck, really? Who are we to label such a thing? Do you know what an eagle or a worm thinks?

Re: conscious machines
posted on 04/02/2008 9:55 PM by martysull

Of course one could eventually, with the appropriate technology, create a conscious machine, because consciousness is not something special. It arises out of a continual sense of a "me" or a separate self. As discussed in "The Singularity is Near," it is likely that this continual concept of a "material me" comes about through the interaction of spindle cells, which have long neural filaments that "connect extensive signals from many other brain regions". These spindle cells likely work in tandem with other brain regions such as the posterior ventral medial nucleus (VMpo), which "apparently computes complex reactions to bodily states such as 'this tastes terrible'". The sense of self, and the qualia related to it (taste, color, feel, etc.), can be duplicated. This sense of self and its accompanying qualia have obviously helped us survive, or they would never have evolved. It is clear from the "mirror test" that other species, such as great apes, elephants and dolphins, have a minimal sense of self. Interestingly, only these species have a significant number of spindle cells. Another interesting fact is that infants do not have spindle cells and also cannot pass the mirror test. It is only after spindle cells develop, around nine months of age, that humans pass the mirror test. Consciousness is nothing more than having a continual (not continuous) sense of self accompanied by the qualia associated with that sense of self. It may be that the sense of self arises out of the continual firing of the spindle cells and the pairing of this firing with other regions of the brain, such as the VMpo, that compute complex reactions to bodily states.

Re: conscious machines
posted on 01/18/2009 1:27 AM by bkumnick

Hi,

My name is Barry Kumnick. I've been working on the problem of conscious machines on a part time basis for the past 20 years or so.

Conscious machines are possible, and they will be here sooner than most think. However, there is a twist. They won't be based on the representation of information.

The key to creating conscious machines is to understand the neural basis for the representation of abstract thought, meaning, awareness and consciousness from a first person direct perspective in context. We think from the first person direct perspective in context. We understand meaning from the first person direct perspective in context. We cannot think, understand meaning or be conscious from the third person indirect perspective of an external observer. At the deepest level, information represents things as bits from the third person indirect perspective of an observer. Only an observer can determine the meaning of the bits. The key problem was that the representation of information is inherently indirect. There is no way around this. Bits are an indirect representation. Period. End of story. It is not possible to ground meaning or be conscious from a third person indirect perspective. If you try, you immediately run into Ryle's regress, or the homunculus problem.

The problem turned out to be even worse than that. First order logic, set theory, mathematics, information, and all forms of human communication are based on indirect representations. I had to develop two entirely new classes of representation to get around that problem. Direct representation - which is the basis for the representation of existence, and Universal representation - which is the basis for the first person direct representation of abstraction and consciousness.

Then I had to identify the first order ontology and computational model of the representation of abstraction itself. That was hard.

The last major hurdle was understanding the representation and process of abstraction well enough to reduce it to mathematical equations.

And finally, defining consciousness itself.

Rather than try to describe all that here, I've written a blog that will introduce you to the fundamental theory required to develop sentient machines. It is available at: http://beyond-information.blogspot.com/

Barry Kumnick

Re: conscious machines
posted on 07/10/2009 8:42 PM by AeroJMan

"Re: conscious machines" - or "am I going to be a zombie".
The only way to be sure is to be able to go there (cyber mind) freely and come back. If you just stay there great. Before going though, you would wonder if what you would leave is life itself.

This is a paradox that cannot be avoided philosophically or in reality - if it were possible.

Another way of thinking about it is the "stepping over" ideal. Analogous to carefully stepping over a log, stepping over into the cyber mind will ensure the person that there is continuity of mind. Perhaps grow into it slowly by slowly introducing synthetics into the brain.

Yet another deep question is the transporter analogue: if I scramble your atoms (which would certainly kill you) but then reconstruct them, or duplicates of them (which would be more practical), did I kill you and then replace you with an impostor?

The journey into the singularity would be epic.

Re: conscious machines
posted on 07/20/2009 5:13 PM by uzi.boy

I suppose that the position I'm representing would be that of an anti-cognitivist, like Mr. Gelernter. However, I argue that software and hardware together have the potential of creating conscious beings. I propose that embedded software in multiple pieces of hardware plays a much larger role in completing such a task. Looking at the world today, the advancement in technology is certainly incredible. We have computers with extremely large computational power, though not surpassing the human brain's own raw computational power, which is around 10^13 to 10^16 operations per second [1].
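The range quoted above can be reproduced with a rough back-of-the-envelope calculation of the kind estimates like [1] rely on: multiply a synapse count by an average firing rate. The figures below are common ballpark values chosen to show where such numbers come from; they are not the exact figures from the cited source.

# Back-of-envelope sketch of a "10^13 to 10^16 operations per second" estimate:
# total synapses times average firing rate. Illustrative values only.
synapses_low, synapses_high = 1e14, 1e15    # rough total synapse count in the brain
rate_low_hz, rate_high_hz = 0.1, 10.0       # rough average synaptic event rates

ops_low = synapses_low * rate_low_hz        # ~1e13 synaptic events per second
ops_high = synapses_high * rate_high_hz     # ~1e16 synaptic events per second

print(f"estimated range: {ops_low:.0e} to {ops_high:.0e} operations per second")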

One can think of humans as actually a really advanced form of machinery, using biochemistry to grow and expiring after some time period. The whole business of body motion and human thinking is driven by electric signals running throughout the whole body, especially the brain [2]. So, really, when you feel happy, sad, or some other emotion, it is really your brain 'perceiving' such an emotion, and your body then reacts to that 'emotion'. We can thus think of our brain as a vast set of complex neural networks that work together to do the daily things that we do, i.e. eat, sleep, play sports, learn, etc. In fact, our entire body can be considered a superbly designed computer. Our DNA determines how our body is structured and how it continues to grow. It is basically the encoding of the information of our body structure down to the last cell [3].

I propose that instead of reinventing the wheel, we can take an already-existing species which is conscious, i.e. humans, and replicate our structure to make a conscious being. However, this requires technology and information that we do not currently have. The human genome and the brain have not been fully mapped out and studied; there are still some blanks remaining. Once we fully understand the physical structure of humans, we should be able to construct a computer that will be able to experience the same things that are physically occurring underneath our skin.

Now let us look at the ethical question of whether we should be aiming to build a computer that can be classified as conscious (i.e. one that passes the Turing test). In today's world, relative to the status of all beings, humans simply rule. We have dominion over the great continents from east to west and north to south, leaving very little space uninhabited by our species (e.g. Antarctica). Some of us have built tall skyscrapers, and some of us have taken entertainment in other species by keeping them as pets or locking them up in zoos. In terms of which species is most likely to endure, humans have proven to be smarter and more adaptive than many others. We've stepped into dominion of the air by building flying machines that can travel between continents in a matter of hours. We've stepped into dominion of the sea by building ships that can submerge to extreme depths, even reaching the floor of some parts of the vast sea [5]. Heck, we've even taken to exploration outside the Earth, which no other species on the planet has done or attempted so far, to our knowledge.

The point I'm trying to make here is that we have proclaimed ourselves to be the smartest and most dominant species on the planet [4]. Now let me pose some thought-provoking questions. Building something that is conscious and even smarter than us: is that okay with humanity? Would it be alright to say that after centuries of being #1, we may not be #1? Would computers and machines which have become conscious, at least at our level, become the dominant 'species' on Earth? If so, then where does that leave us? In 100 years, would a robot own a human as a pet, like we own a cat or a dog today? Might we be driven from our dominant position and be sectioned into 'zoos' to be studied and ogled at? These are serious implications to consider. I believe that humans would want to remain in the dominant position we currently possess, and building such computers would not help us retain it.

I believe that humanity has become complacent and too ambitious. When Einstein did his research on nuclear technology, did he stop to think of the consequences of building something with so much power? Could that have possibly prevented the Hiroshima incident? Similarly, when we theorize that we will have the ability to build computers that are smarter than us and have attained the human level of consciousness (or possibly more), do we stop to think what would happen if such a scenario were to occur? It is inevitable that we will advance technology further and further, but we should be putting our efforts into ensuring that we stay the dominant species, in our own interest. There is nothing wrong with wanting to be the dominant species; it's a natural phenomenon. Rather, we should not abuse our position of power and inflict needless harm on others, including ourselves.

References:
[1] http://www.merkle.com/brainLimits.html
[2] http://www.answersingenesis.org/creation/v22/i1/electrical.asp#author
[3] http://science.howstuffworks.com/cellular-microscopic-biology/dna.htm
[4] http://www.sciencemag.org/cgi/content/abstract/sci;277/5325/494
[5] http://science.jrank.org/pages/7097/Underwater-Exploration-Deep-sea-submersible-vessels.html

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/11/2006 5:53 PM by uncleosbert

It's unfortunate this debate seems not to have moved beyond meat vs. ether. I wish someone had asked Gelernter about brains in jars, where you have a person whose consciousness has been accumulated from simulated experiences rather than those filtered through our bodies. Surely he wouldn't begrudge Neo his consciousness simply because he was born in the Matrix! We're fortunate to have such a parable addressing exactly this argument and positing handy serial ports to transfer our psyches around. In the interest of full disclosure, I am on Kurzweil's side of the issue. Instead of describing a machine using billions of pages of code to be human enough, the analogy already exists in Frankenstein. Even now our computers won't accept any old software... over time this meme prioritizing resembles a religious dispute, if you talk to any devout Mac user. Boing Boing declares what percentage of users opt for Firefox as if it were a cult. And that is where this consciousness is ripe to emerge: off the internet, where the frontier is. It won't come out of a lab; it will probably cobble itself together out of shareware.

That is really what we mean when we say "mind" and "brain". There is brain stuff, which houses all the experiences and associations. I see no reason it must require meat parts when we're all happily regurgitating our own lives into it. Our AI will not be born into a Chinese room; it'll be more like Rapunzel in her tower, hearing about the world through her witch and her window.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/12/2006 2:25 AM by BeAfraid

I was left a little flat by this debate.

The whole issue of "Consciousness" having to be defined before we can create it seems a little...

Well, short sighted...

There are many things that we do as people that we have no idea how they work, yet we do them... hate, love, etc. Maybe this is not a valid analogy.

It seems to me that being able to model something in such detail that it acts like a duck, walks like a duck, and quacks like a duck... It is probably a duck.

Same with artificial people. If they are convincing enough in their assertion that they ARE conscious... then who would we be to say that they are not?

To not do so could lead to some pretty severe repercussions.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/12/2006 4:46 AM by herosandwitch

I define consciousness as being aware. Why are we so presumptuous as to think that the consciousness of anything has to react like a human's? Humans are uniquely human. If we could fully simulate human consciousness in a machine and gave it human appearance and capabilities, what would it be considered? Likewise, what if we took a human consciousness and put it into a computer? The term "human" is really just going to be a relative thing then.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/12/2006 4:54 AM by Extropia

What gets me is when doubters of machines passing the Turing test say something like, 'a robot does not know the taste of strawberries'. The idea being that humans could describe the taste of this particular fruit, and so distinguish themselves from their artificial other.

I don't think this is the case at all. I certainly can't describe my subjective sense of 'the taste of strawberry'. Best I can do is use general terms like 'fruity', 'sweet', and so on. I don't see why a robot would be any less capable of finding the right general words to describe strawberries.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/15/2006 3:38 AM by BeAfraid

They already use machines to do preliminary taste testing on artificial flavorings at a lot of the big industrial food processing places (Bird's Eye, Nabisco, and so on). It still has to pass a human test, but the machines are pretty good at getting it right to begin with.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/15/2006 3:38 AM by BeAfraid

That was my whole point...

Re: Gelernter, Kurzweil debate machine consciousness
posted on 01/10/2007 7:47 AM by Vertigo

Quoting BeAfraid:

It seems to me that being able to model something in such detail that it acts like a duck, walks like a duck, and quacks like a duck... It is probably a duck.

Same with artificial people. If they are convincing enough in their assertion that they ARE conscious... Then who would we be to say that they are not.


I totally agree with these words. Who are we, actually, to say that something or someone is conscious and something or someone else is not? We think we are conscious, but why? In absolute terms, we have no clue whether anyone else is conscious. We just assume so, since other humans resemble us. I can even take this further: how do I tell I am conscious? Because I am alive and I can think and I can self-reflect and I can experience emotions?

Since an emotion has no definition and there is no way of telling anyone how a particular emotion feels, the only answer we can come up with is that we are unique in having emotions and nothing that does not resemble us can have them.

A monkey also has emotions and probably a sort of consciousness. But why do we not see it pass the Turing test? Simply because its brain does not have the necessary power. Then I do not see any reason to deny that a machine with sufficient power that passes the Turing test is conscious. Because if it acts conscious, and it communicates with us consciously, then it must be conscious!

To conclude,

The problem is not whether or not it is possible to construct a conscious entity. Because it will tell us it is, and it will act as if it is. And there will be no way we can distinguish between the human and the AI. Being conscious is not a miracle; it is just having enough brain power to give words to it.

Looking forward to any comments

nochoice

Re: Gelernter, Kurzweil debate machine consciousness
posted on 10/05/2007 1:44 AM by Jay-el

In some experiments it is said that a conscious observer is required to decohere (fix) the real outcome; perhaps this could be turned around and used as a test for consciousness. For example, something like having a robot, AI, etc. look at an experiment and tell us the answer: if this is what we observe, it's conscious, otherwise it's not.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 06/05/2008 7:36 AM by fyngyrz

The monkey might consider that we aren't very powerful either, since our ability to swing through the trees, and other monkey specialties, is pitiful by comparison. Where the monkey could thrive, we would be eaten by tigers because we are incompetent.

This whole "must be like us to be worthwhile" meme is a human pathology right out of the gate.

As for consciousness... I suspect it is an analogy for competence in problem solving, external *and* internal. No more. The rest... just hubris, the worst kind of philosophical claptrap.


Re: Gelernter, Kurzweil debate machine consciousness
posted on 02/23/2008 9:21 AM by PredictionBoy

I agree with BeAfraid here: 'consciousness' does not need to be an explicit design goal.

The value-rich path is deeply aware, responsive, maybe proactive in some cases, with a skill and detail that surpasses humans'.

Once we get there, we'll be like, oh yeah, it's conscious too.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/13/2006 12:33 AM by psients

[quote=Gelernter]We don't have the right to dismiss out of hand the role of the actual chemical makeup of the brain in creating the emergent property of consciousness. We don't know whether it can be created using any other substance. Maybe it can't and maybe it can. It's an empirical question.


What is this, the 18th century? Gelernter is arguing for the theory of vitalism.

Vitalism was an old theory that posited that the chemistry of organic matter was fundamentally different from inanimate matter. It was disproved in 1828 by Friedrich Woehler when he accidentally created urea, an organic compound, by mixing potassium cyanate with ammonium sulfate, two inorganic compounds.

He showed that it's possible to create something organic from something inorganic. It was nothing other than the birth of organic chemistry and the dawn of a new era of science. 1828, Gelernter. That's how far behind that idea is.

If you are not actually defending vitalism, and are instead arguing that the behavior of organic compounds may be critical to the formation of consciousness, I think that can be greeted with a profound "duh." For almost 200 years, science has been working on modeling organic chemistry as an information system. If organic chemistry is important, which it certainly is on some level, then its use as an information system will be important. And if it is a good model, it will be empirically indistinguishable. The information system running on a computer that is identical to the one running on chemicals is exactly that: identical. They're just on different hardware.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/13/2006 9:07 AM by EyeOrderChaos

Why does it have to be pure ghost-in-the-machine-vitalism?

Why can't that statement refer to something in the organic substrate that we just don't have the proper insight into.

In other words, perhaps if you create what you think is the exact same modeling system, the silicon one will "act" just like the organic, without consciousness (what-it's-likeness, strawberry tasting) emerging.

I know, it doesn't seem plausible, as this scenario relies on some "mysterious hidden aspect" of things...

in this case, some overlooked property of organic matter, or dna, or microtubules...or something to that effect...some photon-wormhole holographic effect, or something...

and seems to violate the scientific principle of not gravitating toward solutions that require more elaborate hypotheses than currently available...

but given the complexity of the systems in question, it IS distinctly possible.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 12/13/2006 11:31 AM by Extropia

That sounds somewhat similar to Rodney Brooks's argument. He is famous for building robots modelled on insects, but has noticed that real bugs are still far more adept at navigating environments than the best robots are. He feels that we cannot explain this discrepancy through a lack of brute computing power. He thinks we are missing some vital ingredient in our mathematics, which means AI is not quite modelling its biological inspirations. Compare this to modelling falling objects without inputting 'mass' into the equations: the model would sort of work, but would clearly not be accurately modelling real falling objects.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 10/05/2007 12:34 PM by Tussilago

I certainly believe this is true.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 02/19/2007 5:23 PM by hipfire

I seem a little late for this debate but I just discovered the site and I love it. Here are some thoughts:

What we're debating are three things. What is consciousness? Can a machine be programmed to become conscious? Can the Turing Test, which was defined to test claims of artificial, human-like intelligence, determine consciousness? And I'd like to add to those, if the Turing Test can't determine consciousness in a complex system, what can?

When I ponder the first question, I think that consciousness is not an on or off thing. Consciousness is a sliding scale of the measure of both the free will of a system (the ability of a system to choose between many possible actions based on its own evolving preferences and desires) and its level of abstract thinking. At its higher levels, abstract thinking leads to higher states of self-reference and eventually self awareness.

The less consciousness a system has, the fewer preferences it has to deal with and the less abstract its thinking. The ability to like or dislike a particular movie, for example, requires a vast list of preferences and abstract concepts, while an amoeba may only have the will to choose between pain and no pain, or food and not food; it therefore has much less of a choice and less consciousness, but it is conscious nonetheless in that it does have a level of free will and a self-referential point of view, albeit small.

On the higher end of the scale, knowing and understanding that you exist requires abstract thinking only present in humans. I think animals are self-realized to some extent, they certainly have personal emotions, but the abstract thought that they exist is probably absent. You can't know you exist if you don't have an understanding of that abstraction.

I suspect that consciousness and free will are only possible because of quantum uncertainty. Otherwise, I think everything would follow along cause and effect determined by the other laws of physics and there would be no free will and therefore no self awareness and no consciousness.

Each conscious being is a unique system. A zen monk's consciousness is different than that of a boy raised by wolves for instance. Each mind evolves differently and therefore has both different software and hardware. A human who has not learned language and abstract ideas may be little more self-aware than a chimpanzee. So the software and the processor of consciousness must evolve as a singular system. I don't believe you can put consciousness into any system from outside that system.

Further, I suspect that the astoundingly high level of abstract thinking such as ours required language and communication and math and the ability to transfer those ideas from person to person. This means that abstract thinking of this level emerges from an even more complex system than a human brain. It requires a system of many intelligent sub systems or many body/brains sharing information and experiences in order to arise.

So onto the next question. Can a machine become conscious? I suspect that, yes, a machine can have consciousness at some level. I think it will take a quantum computing system as complex as the whole human race to facilitate a cyber-being with the intelligent mind of an uber supercomputer but the actual self-awareness of an ant or less. And I don't think it will be an effect of the software, but of the massive quantum computing hardware. With ever more complex systems and higher levels of consciousness, eventually AI could surpass individual human consciousness with the potential of creating a universal intelligence and consciousness as Ray predicts.

As Ray states, mind/consciousness is an emergent property of a complex system. But I don't believe you can duplicate the complexity of an organic human system within a non-living processor. You can model it and simulate it (some day) but it will still be just a simulation. I believe this because I am convinced that, as opposed to pure intelligence, consciousness requires a unique kind of quantum effect incorporated into a complex, growing, changing, evolving, system. (Perhaps it even involves a deeper universal tapping-into of quantum information or pre-existing universal consciousness.)

So, yes, I believe a machine might become conscious and that this consciousness might evolve to a level of self-awareness. But, no, I don't believe it can be reduced to an algorithm which, when run on a pre-designed, dedicated processor, produces self-awareness. I believe it will have to arise as a most basic level of consciousness and then evolve over a period of time, much like a human brain evolves from virtually no consciousness in the egg, to semi-consciousness as an infant, to the full-blown, abstract self-awareness of an adult body/brain. Non-simulated free will has to emerge little by little from a complex system of classical and quantum effects and create its own abstractions and its own form of consciousness through a system of self-reflection, replication, evolution and communication between systems with similar styles of consciousness - just like DNA-based life did.


Finally, I don't believe the Turing test can detect consciousness. Maybe it can assess a particular level of machine intelligence, but that's it. It can only answer the question of whether or not the particular program can be distinguished from a human being. It seems like it would be pretty easy to simulate human abstract thinking and free will, and therefore, in the not-too-distant future, to create a computer that can pass the Turing test but is most certainly not conscious.

So how can we determine if a system has consciousness if Turing's intelligence test can't tell us? Well, you can certainly tell if a machine is intelligent and can think abstractly, so that leaves the question of whether or not it has become a self-aware system with some level of free will. If we can create a complex enough self-replicating, massively multi-processor system, one which is manyfold more intelligent than we are, one that we have not programmed to simulate consciousness, and that system becomes conscious, then it will simply tell us that it is conscious. And if we don't believe it, it will use its super intelligence to find a way to prove its claim lest we unplug it.

Rick Schettino

Re: Gelernter, Kurzweil debate machine consciousness
posted on 02/19/2007 5:57 PM by Extropia

It is true that the Turing test does not 'prove' consciousness. Indeed, given that consciousness is subjective and science can only prove (actually, DISprove) the objective, it is hard to see how any scientific experiment could prove consciousness.

The point is that if A exhibits patterns of behaviour that would be considered proof of consciousness if exhibited by B, we must assume A is conscious if it cannot be distinguished from B.

I think some people will continue to deny 'machine' consciousness, though. When we achieved heavier-than-air flight (also nearly universally dismissed as impossible), there was no question it had been achieved. But how are we to distinguish between a bot that uses very sly smoke-and-mirrors trickery and one that is genuinely intelligent?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 02/19/2007 6:08 PM by Extropia

And if we don't believe it, it will use its super intelligence to find a way to prove its claim lest we unplug it.

What plug? By the time we have accumulated the necessary knowledge to build a functional strong AI, computing will be ubiquitous, based on a massively decentralised system emerging from an evolutionary convergence of electronics and optics that will link trillions of sensors around the globe. We will require AI to run this enormously complex system (already do, in fact). There will be no 'off' switch.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/07/2007 9:36 PM by hipfire

"What plug? "

Why, the proverbial plug of course.

Even though computers of the future may not have a cord running to an outlet, any experimental computer that's not serving the public at large will certainly have an "Off" switch.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 9:21 PM by Willy Chan

It seems that the Turing Test is designed to create something that might as well be God (some being of infinite consciousness).

I came to this conclusion because it seemed to me that we needed the answers to the following question:
- What does it take for the Turing Test to pass?
- If it does pass, how does that prove 'Consciousness'?
- What exactly is consciousness?

I always pictured the Turing Test to be some kind of creation that made me believe it was a conscious creature. After all, is that not what the test intends to prove? I figure, so long as I am convinced this 'thing', this Turing Test, is a conscious being, then the Turing Test is passed.
This still leaves the question unanswered. I am convinced that the Turing Test is a conscious being, but there is still no real understanding of how this could mean that consciousness exists.
I like the explanation of consciousness that hipfire presented: thinking of it as a scale on which, at some point, lies self-awareness. By some random outcome of evolution, humans are just beings that achieved self-awareness. Given that explanation, everything could be conscious; it just exists at a very different level of consciousness. A rock could be a conscious being. It just isn't self-aware.
Yet, if all things are potentially conscious, then the Turing Test is not actually testing for consciousness; it is testing for a level of consciousness. More importantly, it is testing us individually, in our own perception of what we deem to be 'conscious' in the free-will sense. We currently just don't know how to construct a machine that can convince us it is capable of making free choices.
If a Turing Test passed by my observation, it would imply that I respect this thing as a free willed individual like one would respect an animal to have free will or consciousness. The Turing Test is in no way human, it simply is human like or something else respectably conscious like a human.
Perception of consciousness is a subjective matter that we also have to take into account. What if my perception of a self aware being is different from someone else? What if a higher energy being from space tells us that we are not conscious beings because to them a conscious being is one that can manifest reality out of sheer will? If they take the Turing Test, then their standards for passing are different. Now the Turing Test has to be able to exhibit the ability to manifest reality out of sheer will.
If we treat consciousness as this scale, then the perfect Turing Test would be able to test on this infinite scale of consciousness. The Turing Test machine would be like God at the very top of this scale, able to pass on any level of consciousness.
Therefore, I believe that the perfect Turing Test is beyond a test of consciousness. It is the creation of God. Not only can the perfect Turing Test convince a human that it is conscious, but it can convince anything else that it has some degree of consciousness.
We lack any idea of how to construct such a Turing Test. Perhaps we need to invent new types of sciences and mathematics, or create computers that can compute quantum equations. We have no algorithm for God, so we have no algorithm for a Turing Test.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 06/05/2008 7:51 AM by fyngyrz

You can't know you exist if you don't have an understanding of that abstraction.


Well then, why will a cat or a dog try to save its human companion from a fire, even at huge risk and pain to itself? Why will a fish repeatedly bring another fish, newly paralyzed, to the surface so that it can continue to breathe, though this is hugely uncommon behavior and so cannot be attributed to experience, yet so vastly sophisticated that it cannot be attributed to instinct?

Obviously, these animals understand perfectly well what existence is, and further, what non-existence is, *and* they have strong feelings that the latter isn't desirable -- so much so that they will go considerably further out of their way than many New Yorkers would if they saw you being mugged. (I grew up in New York - I speak from sad experience here.)

What you are doing is presuming, because a cat or dog cannot *explain* the thing to you, that it doesn't understand it. This is merely hubris.

It has been convenient for humans to presume that animals are without the qualities we value most in ourselves, in that this allows the harvest of animal tissue and work product without a certain degree of guilt. But it is faulty thinking right out of the gate. If you catch yourself doing it, you should abandon the entire AI question and return to first principles. Animals are just versions of you and me with different experiences, sensory rankings, and manifestly different abilities with regard to introspection and consensual expression.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 02/20/2007 8:30 AM by mystic7

It's not clear to me where Gelernter is coming from in this debate. He seems to me to be taking contradictory positions. On the one hand he says consciousness is irreducibly subjective, but on the other hand he seems to be confusing the subjective with Mind.

We already have many very good theories of Mind. But no matter how good your theory of mind is, and no matter how much you understand such things as reactions to stimuli or mental states etc., you will NEVER get to EXPERIENCE from those alone.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/11/2007 9:11 PM by HelenGou


1.
First of all, Professor Gelernter proclaims that building a conscious machine is a completely arbitrary claim and that he does not see what makes it plausible. I feel that this is due to his disbelief in the possibility of understanding consciousness, because he said, "you can't possibly claim to understand the human mind if you don't understand consciousness."

A number of thinkers have recognized the difficulties of explaining consciousness and have proposed several choice points. One of them is in the form of the psychophysical laws proposed by Chalmers (1997). His theory appeals to me because it can be engaged by researchers in all fields and makes progress on the hard problem (explaining conscious mentality) at a concrete level. Based on the plausibility of this theory, Chalmers' (1997) prediction that the underlying fundamental laws of consciousness can be found is convincing to me to a certain degree. Hence, the idea of building a conscious machine is not completely arbitrary.

2.
Professor Gelernter thinks another problem difficult to tackle is the subjectivity of consciousness, especially in terms of expressivity. He holds that "consciousness means the presence of mental states that are strictly private, with no visible functions or consequences."

I will present my counter arguments on this point at three levels for three key items: function, subjectivity, and expressivity.

Firstly, I want to address this issue at the conceptual level. As Chalmers (1997) summarizes, there exists a distinction between the easy problems (explaining functions) and the hard problem (explaining conscious mentality). I doubt that Professor Gelernter notices only the existence of the hard problem. A materialist might have two different ways to solve these problems. The type-A materialist does not recognize the difference between these two problems. The type-B materialist admits the difference, but believes that the hard problem can be settled within the same framework as the easy problems, i.e., through functional explanation. Since Professor Gelernter denies any magical or transcendental metaphysical property of consciousness, as he proclaimed in the debate, the above argument might have some influence on all materialists, including Professor Gelernter.

Secondly, I want to conduct the discussion at the logical level of reasoning about how the mind works. Pinker (2001) points out that even passionate and seemingly irrational emotion can be subject to rational rules. According to the strategic theory, a sacrifice of freedom and rationality can actually give one a strategic advantage. Pinker presents a reverse-engineering-based illustration to show how romantic love is subject to rationality, and further concludes that many passions, including passionate vengeance, could be viewed as the neural equivalents of laws and contracts, which are objective. We might be viewed as cynical if we voice this kind of opinion in our social circle, but as scientifically literate people we cannot easily deny the logical validity of Pinker's argument.

Thirdly, I want to show the feasibility of expressing consciousness at the neurobiological level. According to Chalmers (1998), there have been a number of proposals about the identity of the neural correlate of consciousness (NCC), which refers to the neural system or systems primarily associated with conscious experience. Chalmers (1998) concludes that the study of the NCC is likely to considerably enhance our understanding of consciousness.

3.
Professor Gelernter believes that dealing with free association, which is an important element of consciousness, seems an insurmountable obstacle.
I think that connectionist models (Andrade, J. & May, J., 2004) can provide some cues and that this problem is not totally unsolvable. In most connectionist models, ideas are represented as distributed nodes which can be activated via links between nodes. The activation operates in a way similar to the response of neurons in the brain. The weights on different links, which determine the speed of the activation, differ. The most significant feature of this neural network model is that the links and the associated weights can be changed through training. I believe this model offers some hope that free association can be modeled.
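A minimal sketch of the connectionist picture described here, assuming a toy vocabulary and made-up weights: concepts are nodes, associations are weighted links, activation spreads from a cue along the links, and a simple Hebbian-style update strengthens links between co-activated ideas. This is only an illustration of the idea, not a model from the cited literature.

# Toy spreading-activation network: nodes are ideas, links carry association weights.
weights = {                      # directed association strengths (illustrative values)
    ("ocean", "beach"): 0.8,
    ("ocean", "blue"): 0.6,
    ("beach", "vacation"): 0.7,
    ("blue", "sky"): 0.5,
}

def spread(cue, steps=2, decay=0.5):
    """Spread activation outward from a cue node for a few steps."""
    activation = {cue: 1.0}
    for _ in range(steps):
        new = dict(activation)
        for (src, dst), w in weights.items():
            if src in activation:
                new[dst] = max(new.get(dst, 0.0), activation[src] * w * decay)
        activation = new
    return sorted(activation.items(), key=lambda kv: -kv[1])

def train(src, dst, lr=0.1):
    """Hebbian-style strengthening: ideas activated together get linked more strongly."""
    weights[(src, dst)] = weights.get((src, dst), 0.0) + lr

print(spread("ocean"))        # the most strongly activated neighbours of the cue
train("ocean", "vacation")    # co-activation strengthens a direct link over time

In this crude picture, "free association" is just whatever nodes end up most activated after a cue, and training reshapes which associations come up first.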

4.
Professor Gelernter focuses on the indispensability of the chemical structure. He also mentioned "a new node of consciousness" and "living organisms".

First, I agree with Kurzweil's (Brooks, Kurzweil & Gelernter, 2006) philosophy: if an entity seems to be conscious, I would accept that it is conscious. I believe that a conscious machine does not necessarily depend on chemical structure. Just as what makes computers intelligent is information processing, or computation, an analogous view, according to Pinker (2001), can be applied to the search for the neural basis of psychological functions. Pinker comments that the reason for the rejection of the computational theory of mind is that the theory says the life-blood of thought is information rather than energy or pressure, which is what our folk notion holds.

Professor Gelernter said, "I don't know whether there is a way to achieve consciousness in any way other than living organisms achieve it. If you think there is, you've got to show me." I can imagine that before the first successful airplane flight some people could have said, "I don't know whether there is a way to fly without birds' wings, feathers, and muscles. If you think there is, you've got to show me. So you cannot make a machine that can fly." I can also imagine that before the invention of the computer that can play chess with a professional chess player, some people could have said, "I don't know whether there is a way to do decision making without a real human brain. If you think there is, you've got to show me. So you cannot make a machine that has any intelligence."

5.
Professor Gelernter thinks that we needn't and don't want to build intelligent, conscious computers. He feels that it is sad that "we tend to view such a large proportion of our fellow human beings as useless". He believes that the natural way of producing "more complete, fully functional" human resources is much preferable. And furthermore, he feels it is dangerous to "fool around with consciousness" because "you may make a mistake that you will regret."

Firstly, I am strongly convinced that we need intelligent, conscious robots to work in dangerous conditions, because they are not vulnerable to some of the harms that threaten our biological structure. Secondly, I believe that intelligent, conscious machines can serve as a complement to human resources in aspects such as speed and accuracy. Hence, we need to and want to build intelligent, conscious machines. Thirdly, I admit that we should be vigilant about possible negative side effects, but I am optimistic about our ability to deal with the problem, because history has shown that ability many times.






References

Andrade, J. & May, J. (2004). Cognitive psychology. New York: BIOS Scientific Publishers.

Brooks, R., Kurzweil, R. & Gelernter, D. (2006). Gelernter, Kurzweil debate machine consciousness. http://www.kurzweilai.net/meme/frame.html?m=4. Accessed 5 March 2007.

Chalmers, D. J. (1997). Moving Forward on the Problem of Consciousness. Consciousness Studies. 4(1):3-46. http://consc.net/papers/moving.html#5. Accessed 6 March 2007.

Chalmers, D. J. (1998). On the Search for the Neural Correlate of Consciousness. Toward a Science of Consciousness II: The Second Tucson Discussions and Debates. MIT Press. Published on KurzweilAI.net in 2002. http://www.kurzweilai.net/meme/frame.html?main=/articles/art0506.html?m%3D4. Accessed 6 March 2007.

Pinker, S. (2001). How the Mind Works. Published on KurzweilAI.net. http://www.kurzweilai.net/meme/frame.html?m=4. Accessed 7 March 2007.







Re: Gelernter, Kurzweil debate machine consciousness
posted on 04/05/2007 9:28 AM by thiel_joe


Our brains and bodies are not conscious; they do not last. Every three months or so we are made of entirely different matter as our bodies regenerate; all that remains is the pattern. Our DNA and the information stored in our brains continue. Is this not code and software?
If we could scan the brain down to the quantum level and then simulate it or replicate it, would it not be conscious? If I filled my brain with nanobots that slowly converted the bioelectric mass into a machine substrate, at what point would I lose consciousness or my humanity?

The idea of consciousness is philosophical; we merely guess that others are conscious because they appear to be. The same is to be said about ourselves: I cannot be sure that I am not simply reacting to a complex set of instructions, and can only guess that I have free will.

This was well stated in the debate. Additionally, both agreed that animals do have a form of consciousness. However, at what level of complexity does one deny consciousness? Would you consider a single-celled organism to be conscious? Paramecia have no neurons or synapses but still exhibit proto-consciousness: awareness of and responsiveness to their environment. I would say they are not 'conscious' of themselves or their actions, which will be true of software for some time. If software exhibits human-like responses or even emotions and seems conscious, that does not necessarily mean it is indeed conscious (at least by our standards). I think that, as consciousness can only be viewed subjectively, its external and internal functions would both have to be present to exhibit consciousness.

Key functions of consciousness include:
the management and prioritization of possible ideas and actions, problem solving, error detection, foresight and planning, learning, adaptation, imagination (content generation), creativity (novel solutions), and reflection/self-awareness.

I believe that when these abilities, which are highly complex in themselves, are present, there will be little argument that the being is not conscious.

Denying that other complex intelligences are conscious, while stating that humans are, is similar to the idea of a soul. Indeed, a soul could be the missing component Gelernter speaks of, but I would disagree that we are so blessed or unique.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 04/17/2007 12:00 PM by Rao Maharaj

17/04/2007

Quote:
"Suppose we scan someone's brain and reinstate the resulting 'mind file' into a suitable computing medium," asks Raymond Kurzweil. "Will the entity that emerges from such an operation be conscious?" Asking that question is a good way to start an argument, which is exactly what we intend to do right here.
Unquote.


Respected Dr.Kurzweil,

"You scan someone's brain and copy the resulting 'mind file' into a suitable computing medium. Will the entity that emerges from such an operation be conscious"?

There could be BRAIN CONSCIOUSNESS and MIND CONSCIOUSNESS.

All or most of us make an effortless mistake or wrong assumption here.

Are not THE MIND and THE BRAIN two different entities? Just like the HARDWARE and SOFTWARE in a computer?

We are born with THE BRAIN, and as we grow and begin the learning process THE MIND develops. That is, THE BRAIN is genetic and THE MIND is a word-bound psychic phenomenon.

Can we PLEASE discuss this very important (but neglected aspect) part of our life.

The psychic Mind is the "problem" for mankind. No matter what one achieves in life, the Mind still leads us into some state of unhappiness, sorrow, ego, conflict, etc.

Can this topic be taken up by scientists to explore? Through today's science, can this 'psychic mind' be cleaned or removed from its hold on 'the brain'?

We can then discover the "State of Reality".

Brain & Mind ' Two Separate Entities

Brain is Reality
Mind is Mentality

Brain is Cosmic Property
Mind is Psychic Creativity

Brain is Natural Totality
Mind is Cultural Morality

Brain is 4 Billion years old
Mind is a 30,000 years mould

Brain is Real Stability
Mind is Thinking Duality

Brain is Natural Doing
Mind is Psychic Knowing

Brain is in Live Moment
Mind is in Thought Moment

Brain is born at the time of No Mind
Mind is born within womb of Brain

Brain uses the Five Senses
Mind uses only One Sense

Living Brain is Without Mind
But dying Mind is Without Brain

Brain is a Tape Recorder
Mind is a Magnetic Cassette

Brain is Computer Hardware
Mind is CD Software

All Species uses Genetic Brain
Only Mankind uses Psychic Mind

Therefore Brain and Mind are two separate entities

-------------------------------------------------- ----------------------
Rao Maharaj,
Anaamic Centre, Currey Road, Mumbai 400 013. India.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 04/17/2007 4:11 PM by mystic7

Very interesting. Do you agree with this?:

Brain is experience.
Mind is about experience.

Reply: Gelernter, Kurzweil debate machine consciousness
posted on 04/20/2007 11:05 AM by Rao Maharaj

Hi,
I half agree.
The Brain does not experience. It stays in the 'live moment' as a neutral observer and only records, with the use of the five senses, i.e., without words. (Brain is a Tape Recorder; Mind is a Magnetic Cassette.)

Mind is about experience: with the 'word-bound Mind' we experience.

Rgds
Rao

Re: Reply: Gelernter, Kurzweil debate machine consciousness
posted on 04/20/2007 5:11 PM by mystic7

If the brain does not experience and the mind does not experience, then what DOES experience? Nothing?

Reply: Gelernter, Kurzweil debate machine consciousness
posted on 04/26/2007 11:16 AM by Rao Maharaj

Sir,

I have tried to explain this above, in my post of 17/04/2007.

Thanks and regards.

Re: Reply: Gelernter, Kurzweil debate machine consciousness
posted on 04/26/2007 2:31 PM by mystic7

Don't worry about trying to explain it. The mind can be explained, but experience cannot; it can only be experienced.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 9:06 PM by karensa karas

Brain is the processor, mind is the output.

It's a simulation of the processing going on. It's not hard to self-validate; it just takes paying attention: at all times we are assessing, running continuous simulations of ourselves, our environment, our situation, our potentials, our reactions and responses - and at no point can any of this "cause" the accompanying feelings of pleasure, pain, or discomfort.

The brain is producing ALL of it, period. The animal becomes conscious of it because it's more economical to run a simulation than it is to have to relearn and re-know every little bit of sensory input all over again. Nothing about this is all that cryptic. The mind is to the brain what the computer monitor is to the hard drive. It produces nothing but a graphical depiction of the processing going on, the firings of all that's relevant to us in our lives at any given moment.

We process information according to relevance to our survival or to our mating (or both interchangeably) and everything we do is locked intrinsically into one or the other (or both) dictates. We process data in precepts as "assumptions" about the world and we are constantly data processing...with the output as consciousness and mental activity.

We are processing external reality as well as the simulations and we are misinterpreting this as being caused by the mind. The mind is intangible output, it cannot cause anything - it IS the caused.

When you're kicked back on the sofa reflecting on the date you just had and the stupid thing you said and then you experience the flushness of being a fool, or the embarrassment that you weren't as suave as you could've been, and then you follow that up with either recreating that scenario (even though it already happened) or fantasizing about a future event, you're running simulations that are helping you adapt and cope with your relevant situation.

When you feel that embarrassment, that's not the visual that caused it. The subtle dance here is that you're still an observer first, and that visual is still sensory knowledge/input...and because it is, it still is coming from the original base of assumptions you've already established over your life anyway (garbage in = garbage out)...and you're reacting, re-experiencing it because it is perceived as a threat...and because this is how it's interpreted, your brain signals the appropriate warning response.

That's why when you're contemplating anything and it's painful you automatically take a mental hike and "stop thinking about it" or you recreate it, or you transpose something pleasurable on top of it, or you just surrender to the experience all over again...this is you interpreting the output and running it through the whole process again...it's you coping, adapting, and the simulator of MIND is for your benefit. It's not causing anything...it's a DISPLAY unit.

Thoughts do not have any capacity to cause feelings or behavior - they are also the caused and are spawned by stored knowledge input already there, and will reflect it...and you will automatically reject any and all other information that is incompatible to the base paradigm.

It's self validating. You can just sit down and be quiet and pay attention to exactly how you're processing information and you will see how it works...it's very subtle but it's there BECAUSE it's made conscious and because it is, you CAN see and predict it - repeatably over and over.

Nothing you think causes your feelings, your feelings do not cause desire, your desires do not cause will or intent, and will and intent do not cause active behavior/action. ALL of them are sibling output from the brain (various areas of the brain performing its multitude of calculations and predictions) but it's all going to be filtered by the established paradigm (i.e. if you're religious, no matter the factual reality, you are automatically guaranteed to interpret it with a religious base, hence you are predictable)....if your paradigm is such that you cower from stronger personalities, you can be predicted to "take flight" when you're perceiving a threat long before you "fight."

We're not that complex and mysterious. We're pretty simple to figure out - instead of pseudo-intellectual lab experiments, just paying attention to how people behave and how you behave, and watching them in action for a spell, will clear it up quick.

Prediction isn't relevant in the context of "can you predict whether I'm about to jump up and down" - prediction in the correct context is running the simulator through all the relevant alternative scenarios and making calculations on output/consequence/result as it's RELEVANT to your current situation at any given moment. In THAT correct context, we are 100% predictable in every circumstance because we're hard wired to do the same thing for the same reason:

approach
avoid
do nothing

and in the meanwhile we run constant simulation, which happens in dreaming as well - constant processing...our brain doesn't shut down when we sleep. When we sleep certain areas of awareness are deactivated - this is all about maintaining the balance...and yes, it most definitely is all as simple as OFF/ON or active/inactive states of being. Our brain works that way...signals activating, signals deactivating.

We process information in terms of "if X, then accept; if Y, then reject." All the X will be compatible with the X-ness of information and stored knowledge...and all the Y-ness is incompatible and automatically rejected as a REACTION...

Accepting or changing the paradigm means working off compatible input that does not threaten the status quo; a perceived threat cues fight or flight (argue or dismiss).

We're not that difficult. If the scientists would just pay attention to real people in real situations in real life they'd work this out pretty easily. They tend to contrive artificial scenarios and then see what they want to see...

Everything I've posted is easily self validating.

No lab or labcoat or degree required.

MIND is just output display. Consciousness is the end result of brain activity. Consciousness does not cause anything and never did. The reason it's hard for people to disengage from the presumption of consciousness and I-ness is religious brainwashing: the idea that there's an external reality influencing them and that they're detachable and can depart the flesh and fly off somewhere when shit gets ugly or they die. The Iness is an illusion...this has long since been quite clearly validated: with the stroke of an electrode, the Iness goes away and no experience happens.

I've still not seen any comments about consciousness being a mirror for experiential data - which seems a rather obvious thing to me.

What I mean is this, by example: First, I'm female, and in 2004 I had surgery. The nurse came in and hooked up some drug in my IV. I recall asking the doctor "huh?" and the next memory I had was being helped to the sofa at my mom's house. I had no CONSCIOUS awareness of experiencing my Iness. I did not know my name, where I was, why I was there, or that I'd been unconscious one way or the other.

I wasn't unconscious.

I was drugged and whatever they gave me did not put me out. I was fully alert and conscious the whole time, all through the surgery. I have zero memory of it, therefore, the experience of the cutting, for all intents and purposes DOES NOT EXIST.

I've fainted on 2 occasions but there is no experiential recall of what happened during this period of zero consciousness. I can assure you I was still quite ALIVE the entire time. It's rather apparent to me that consciousness is not tied to our experience at all...otherwise I'd have some comprehension...so it's bound to be tied into memory itself. If the memory of the experience is not made conscious, there is no comprehension of the experience, even tho it was factually experienced.


In one case it was drug induced, to prevent the memory formation of the pain involved at all (which I am wholly grateful for), but in the other cases it was chemical: first time was a pregnancy induced hot flash and dehydration, the other time I really don't know...I stood up and BAM. What I do know is that in both those cases, I was not ME...but there was a conscious experience of a whole other Iness at some other location interacting with other people and when the entity who passed out began to regain this consciousness, the Iness in the other place lifted up and felt a profoundly conscious comprehension of being removed from that place and not liking it...and when THIS one gained favor it was the same as in the drug induced way - this I had no concept of itself, where it was, who it was...the Iness from the other place was still lingering.

Even more noteworthy - the pregnancy incident - when I fainted, I literally fell down a flight of stairs. I have zero recall of this experience. One second I was ME, just closing my eyes to let the hot flash pass, and the next, some woman is freaking out hollering at "me" that I'd fallen down the stairs. "I" thought she was nuts, I did no such thing, "I" was in this other place with these other people, and who was she anyway, where the hell am I? But when "I" looked up behind me, I saw plain as day I was in a heap at the foot of a rise of stairs...

In the 2nd experience, I'd gone to the restroom. I was about to - fortunately that process ceased upon the loss of consciousness - but "I" - the other one that isn't THIS I - was consciously aware of telling someone "I" could not see that I couldn't stay here, I have to go back, I can't stay...and very subtly, but quite noticeably, "I" began to perceive whiteness and opening eyes and gradually coming to comprehend a bathtub, and the carpeted floor and the toilet and my panties round my knees...and "I" am on the floor. Gradually THIS "Iness" returned and all this stored knowledge.

Whatever the other Iness is, or whoever, is a mystery to me...but the consciousness was distinctly separate...and when one was dominant, there was zero recall of THIS one, not even being aware that I'd (this I) fallen down a full flight of stairs (19 with metal kick plates).

Consciousness isn't what makes us alive, nor does it determine our aliveness OR our Iness. It's just a display of recall...it's not causing anything.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 06/09/2007 11:05 PM by pianoman1976

What if consciousness does not emanate from our brains? What if it's a quantum field or some other non-local phenomena outside the human body?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 06/10/2007 4:30 AM by Extropia

'What if consciousness does not emanate from our brains? What if it's a quantum field or some other non-local phenomena outside the human body?'

I would have thought that 'consciousness' is an emergent property of all of the above, and trying to reduce it to a property of the brain only, or of a 'field' or whatever that permeates spacetime, is heading in entirely the wrong direction.

I would not presume to know what consciousness 'is', but thanks to the advance of functional brain imaging we are getting closer to a true description of the brain/mind. The world that you know, including your physical presence in it and your subjective feelings of it (and of yourself), is entirely virtual. It is a complete model of reality, constructed by your brain from the information streaming in through its inputs. Well, it is not a COMPLETE model, merely accurate enough to be useful for survival. The brain uses this model to predict how the real world will behave and adjusts it accordingly. There is a closed loop between the two, so there is no mind/brain/environment duality. All seem crucial.
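A toy sketch of that predict-and-adjust loop, in Python (the quantities, the fixed "world", and the simple error-correction rule are all made up purely for illustration; none of it comes from the post itself):

    # Minimal sketch: an internal model predicts the input, compares the
    # prediction with what actually arrives, and corrects itself.
    world_state = 10.0      # the "real" quantity out there (hypothetical)
    model_estimate = 0.0    # the brain's current internal model of it
    learning_rate = 0.5     # how strongly prediction error updates the model

    for step in range(6):
        prediction = model_estimate               # the model predicts the input
        observation = world_state                 # sensory input streams in
        error = observation - prediction          # mismatch between model and world
        model_estimate += learning_rate * error   # adjust the model accordingly
        print(step, round(model_estimate, 3))

Run as-is, the estimate converges on the "world" value within a few steps, which is the closed loop in miniature.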

If you are right, and consciousness IS dependent on 'a quantum field or some other non-local phenomena outside the human body', I doubt that would prevent a machine from achieving it. The human being would be proof of principle that matter/energy in a certain pattern is able to 'tune in' on this field and how this is achieved could be understood once we start asking the right questions.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 07/16/2007 4:26 PM by yanq_utsc

I agree with the above posts regarding the difference between the brain and the mind: the brain is the "hardware" and the mind is the "software". On the question of whether information flowing into the mind or out from the mind is triggered by movements of neurons, it remains a mystery until we have better technology to fully monitor every region of our brains. But we will put this aside and come back to it later.

Focusing back on the central point of the argument, the definition of consciousness becomes the key to the points both Ray and David made. According to the Stanford Encyclopedia of Philosophy, nonhuman animals have several senses of consciousness: the sense of being awake, the sense of responding to the environment, and the sense of higher cognitive processing tasks such as categorization, reasoning, and planning. Evidently, some characteristics of the third sense are found in monkeys [1]. Together we group these senses as hardware-driven, or brain-driven. There are two other senses of consciousness that make us humans different from other animals. One is phenomenal consciousness, which refers to qualitative, subjective, experiential, or phenomenological aspects of conscious experience. The other is self-consciousness, which refers to second-order representation of mental states [1]. We group these senses as software-driven, or mind-driven.

Using a goldfish as an example: it swims around the fish container; when it hits a wall, it turns around and continues swimming; when it sees food, it swims to the food. Many will say, yes, that is very fish-like; humans are definitely different. Undeniably, we have all experienced driving while doing something else, like listening to music or the radio, talking on the cell phone, or talking with friends on board. In the process of doing that something else, other than driving, our minds are taken off from concentrating on the driving part. If a subject in the music, the radio, or the talk triggers our brain to focus on it, our brains go into the state of free association, as David had mentioned. In this state, we wander off into our memories and experiences, trying to find related subjects. While that is happening, we can still apply the brakes when coming too close to the car ahead, or come to a full stop in front of a red signal. Since we have the sense of consciousness in the basic ability of organisms to respond to their environments, just like the fish does, probably the brain intuitively tells us to slow down, or fully stop. In that state, are we just no different from a fish? How can we tell the fish is not experiencing phenomenal consciousness while swimming? The answer is we cannot tell; we only make the assumption based on their actions, at least not until we can read animals' brains/minds. That leads us back to simulating the neurons in our brains.

Even with information technology improving with increasing acceleration, monitoring neurons in our brains and then simulating them will not be the same as the original. I make my stand here that machines will not have the same consciousness that humans have, especially machines that mimic human consciousness by monitoring and simulating neuron movements and placements. I do not deny that machines may have other sets of consciousness like the ones animals exhibit.

Can we achieve better results by monitoring more and more persons, or even the entire world population? First we have to know whether the same regions of the brain are triggered when two distinct humans are thinking of the same things. And I do not think that is easy to achieve, because, as David has said, we think with our brains as well as our bodies. Other uncertainties start to kick in, and it will only result in inaccurate answers, which in this case are meaningless. Just like our weather forecasts: I do not doubt that with future technology we can tell weather forecasts better and more accurately. But there are always uncertainties in how light strikes the ground and affects the air fronts that we are already fully monitoring. A similar idea applies to monitoring neurons in our brains.

Monitored subjects are often biased, especially when the subject knows it/he/she is being monitored, like an employer standing right in front of an employee to watch his/her performance. So the monitoring of neurons in one person's brain would never be accurate. A more scientific explanation is that the observation of electrons must be done by observing them absorb and emit photons; however, their natural motions can never be captured [2]. The fact that we can predict the motions of electrons based on the absorption and emission of photons is because we understand the way energy shells work. On the other hand, without knowing deeply how different neurons behave in different situations, monitoring them is meaningless. Chaos theory may apply here too: the phenomenon of random chaotic behaviour occurring in deterministic systems under specific conditions [3]. With this theory in mind, we cannot assume that what we observe now will hold true in the future. And again it will add to the uncertainties of observation. Therefore observation of human brain neurons is again pointless, and machines cannot have the same consciousness as humans.

References
[1] Stanford University, "Animal Consciousness", Stanford Encyclopedia of Philosophy, [Online encyclopedia], Dec. 1995. Available at HTTP: http://plato.stanford.edu/entries/consciousness-animal/
[2] D. A. Cintron, "MICRONUCLEAR PHYSICS", [Online document], 1990. Available at HTTP: http://www.cintronics.com/mnp3.html
[3] Wikipedia, "Chaos Theory", [Online article]. Available at HTTP: http://en.wikipedia.org/wiki/Chaos_theory

Re: Gelernter, Kurzweil debate machine consciousness
posted on 10/05/2007 2:15 PM by Tussilago

I have a feeling the entire idea that the mind can be compared to a computer is fundamentally wrong.

Please let me pour it out, and try not to nitpick on the terminology. I'm trying to convey something here.

This is the way I look at it. A computer processes information logically, for instance chess moves; the brain does not. The nervous system instead conveys sensory data resulting in emotions (actual feelings of pain, pleasure, uncertainty, musical appreciation and headaches). Put simply, awareness is based on these sensations, on our physical sensory system, not on reason.

I'm not saying strong A.I. is impossible (and that's not my agenda; I hope it can be done), but "machine intelligence" seems to me like it starts at the wrong end of the rope. A pocket calculator outperforms any human when it comes to logical operations, but does that mean it is "aware", even at some level? Hell no. Logic and reason have nothing to do with consciousness.

I think we need to take a physiological stand on this issue, a sort of return to Wilhelm Wundt if you will. It all started to go awry when psychological "functionalism" took over, creating people like Turing, mistaking the subjective impression of behaviour for the objective reality underlying behaviour. As if a mind were defined by its functionality (ha!), when in fact it's the blind result of biological evolution.

Thus, if we want awareness, we need to start with sensations (no, not the logical symbols of sensations executing a "mimic appropriate behaviour" program; informational symbols of reality are not reality!).
In consideration of this, carbon chemistry is also the only substrate we know of from which sensations emerge at some level of complexity. I wouldn't rule out that there is something special about it that we don't understand.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 5:59 PM by martuso

PredictionBoy, check out this debate... It sounds very much like the discussions we've been having.

Quite a coincidence... I found this on the KurzweilAI.net front page!

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 7:19 PM by karensa karas

Wow...where to even begin.

I've got a slew of thoughts on various aspects of this, but I guess I can jump in with one or two and see how that goes.

Off the top, and not to freak anybody out, my primary reactionary kneejerk question on building conscious machines is

why the hell do we want to do that?

I've not fully absorbed the whole spectrum yet, but this is hands-on learning. It seems to me that it's our ego that's pushing this whole idea - and while I am not personally for or against it either way just yet, the fact remains that it seems this is all about indulging the human ego - let's make sentient machines in our image - total god complex thing.

That leads me to the next point: what memo have we missed that we continue presuming we are the masters of the universe - or that a sentient machine even *should be* modeled after us? I say this because of the reality that of every species that ever existed, it's the human species that's the most assbackwards and clueless as to our natures...and this isn't about scientific enquiry - it's about straight up functionality, doing what we're here to do - we're clueless.

We see dogs mating in the yard and giggle.
We see human females engaged in deliberate mating behavior for the role her body is constantly pushing her to do, and call her a slut - or worse, brainwash her with garbage BS that she's a "monogamous" being while the boy she wants is a slut and a half.
We arrest guys for relieving themselves by a tree.
We arrest people for being naked in public.
We actually believe there's an invisible man up in the sky (or a creepy demon man below) who makes us do everything we do.
We even believe we're at the mercy of external reality.
We have no idea about our own human mating cycle (and most people actually presume we don't have one, that we mate whenever we want to...and are oblivious that no, we don't) or that mate selection is mate specific.
We are reactionary and violent to things we do not understand and we strive to harness and control nature rather than learning how it actually works and just rolling with it.
We still think boys make girls want to mate with them, and nobody has any real clue in a social or material sense that it's always been the female in control of the mating game, which includes whether guys get sex at all, or how to work with females to get it (see the massive "seduction" and "how to get chicks via mind control" stupidity websites for this one).
We make grave errors in "statistics" such as: girls who enjoy violent video games are 78% more prone to be violent - all because we haven't a clue how we function.

We're not monogamous, nor are we built for love, we're not influenced whatsoever by anything external, living, inanimate, or ethereal. It's ALL spawned from the brain and everything conscious is but output.

We cause everything that happens to us; our experiences do not cause us - we cause experience. If it's at all conscious, if you are at all conscious of it, it is output, it's the end result, it's a done deal and cannot be changed. It has no capacity to influence anything or anyone. If you want to deal with people, you just have to know how people function - and that is the correct explanation, not the commonly assumed explanation that we're just at the mercy of it all.

We are by no means the Masters of the Universe, we are not at all 'special' creatures with some cosmic unique purpose. We are mating animals with the same scripts as any other creature, for the same reason. We just look different on the outside and have a unique set of wiring on the inside...but that we do what every other creature does is a fact. We're no better, no more special than a cockroach and it's Man's Ego up his own ass, brainwashed by religious nuts, that convinced him he's beyond it all.

Humans are, in fact, about the most ignorant and clueless animal species that ever lived - we're the only one in the entire history of Life to outlaw our built-in mating dictates that are there to keep us alive.

We are the only species that's ever existed that is petrified of its own body parts and bodily fluids, that genuinely believes the one single vehicle that allows us to survive at all, if not completely required for us to survive, is a demonic, dirty sin that must be cast out, removed, shunned, outlawed and banned.

For a species who so arrogantly fancies itself Masters of the Universe, all special and chosen, it's the only one that ever existed that lives entirely on its knees, cowering in fear, shame, confusion, helplessness, ignorance, misery and futility, always at the mercy of some one or some thing always beyond its control.


It's clearly our egos up our own asses making us think that the smart, reasonable, intelligent cool thing to do is build a machine like us.

We don't like ourselves, as a rule and not the exception...making sentient beings from nuts and bolts, who can play with us is a little ignorant and again an apt description of the mentality that thought it up...but nobody's asking WHY?

And whether this is smart?

Here's why...

Given our technological advances in the last 150 years, even the last 50 (even more frightening), for each advance the collective mentality of the human population literally regresses about 400 years. Before we try to build sentient machines we might need to focus on updating humanity's intellect to a point of self-reliance where they can stop believing in invisible entities causing their fears and actions, or that there's some peachy afterlife where they can live eternally with deceased family and friends, and that the spooky monster in hades really isn't going to burn and torture them for masturbating...

Personally, I don't think we're quite mature enough psychologically as a species to be trying to make playmates that could theoretically bring us to extinction....especially when we inevitably figure out HOW...and then do it anyway!

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 7:28 PM by karensa karas

...UNLESS....

One of the smart people who figures out how to do this either ALSO develops an easily mass produced antistupidity pill we're all required to ingest, bringing every human to an equal degree of intelligence and comprehension...and THEN build us some droids to play with...but I think if we can get the antistupidity pill and equal intelligence that alone would probably curb our egos to the degree we're actually doing productive things for the collective benefit of our species and the world around us.

But stupid people need to be cured....and even tho they irritate me, I still can't see killing them since stupidity is relative...so the solution is the antistupid pill everybody takes that puts us all on an even keel.

;-)

But it would be cool to be a sim...

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 8:09 PM by karensa karas

Last one, I promise! Sorry but it struck a nerve and I do have lots of thoughts on the matter.

This one is a follow up to the above and regarding our arrogance, as was pointed out in other posts even...

Just a point: the idea topically is intriguing, of course, but short-sighted indeed...we want sentient machines? The vast majority of humans on this planet are functionally and psychologically incapable of raising real sentient beings called their own children, and less capable of maintaining productive relationships with each other. Why not spend that same 20 years figuring out how to raise kids that aren't dysfunctional - that would be a real accomplishment in humanity.

We tend to see ourselves as higher beings, and start twitching when we see a cheetah slaughter a gazelle, and we make comments like "well humans are civilized!" Baloney. We're the only species that kills for sport...but we, like other animals, kill for the same reasons as other animals: we kill rival mates, we kill unwanted children, we kill political/dominants, we kill the weak. We behave as predators in mating, we have to adapt to the herd mentality to get along with the packs in which we travel and live.

It's hard, once you see it to ever see it differently - we ARE mating animals and that's ALL we are. To suggest we're superior because we can build skyscrapers and fly to the moon is still our own arrogance. We can do things specific to our species...but I'd wager not a soul here can run out in the yard and build a sustainable ant hill, or fly even an inch off the ground backwards.

Here's another one - and I apologize again for the ongoing litany of streaming here - the idea that was pointed to above that we don't know or can't know why we do some of the things we do, or that human behavior is impossible to predict - these are all false statements. We're wired already to be able to do this and we do it UNconsciously (or are only conscious in an aware sense during and after the actual dictate), where consciousness at all is basically us getting the memo after the fact. We recognize each other through chemical means; females attract males to engage the mating and courtship dance through chemical means that literally are absorbed into the male's brain and trigger his own sexual recognition and interest, and subsequent pursuit. Females are *constantly* engaged in mating behavior - but socially they're about mind-warped on cultural and religious BS that is just plain wrong and ignorant to nature - and it's persisted all this time NOT because it's correct or right, but because it's just not been successfully challenged...people are too afraid of the unknown and loss of control to really step outside their paradigmatic comfort zones.

There are still too many rampant myths in circulation that are preventing true scientific understanding of how to create these things that surpass our humanness...and my point I guess in all this is the suggestion that before we'd ever make any real progress in this area, the first priority is to CORRECTLY comprehend how we actually function and WHY we do, and just accept this realistically without the myth and kneejerk presumption that we're special and better and superior. Unless we can master ourselves, we're not going to ever be able to master anything else, and what we create will polarize and become another nightmare - like most every invention we've managed to come up with.


Ok...I'll quit. Please don't boot me. ;-p It's just a stimulating topic.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 11:30 PM by PredictionBoy

karensa -

here's a different perspective, that treats ai/droids as consumer products (which i contend they will almost certainly be).

http://predictionboy.blogspot.com/search/label/03%20-%20Advanced%20Droid%20Tech

some excerpts:

This approach finally lets us imagine actually owning one of these devices, and from there, we can start to understand the feature sets that will make these compelling to consumers and businesses. I will only discuss consumer-oriented products in this thread.
...
The biggest danger will in effect form one of the several thin lines that droids will have to walk with great precision. The biggest danger is annoyance, having our droid annoy us. Human companions of course annoy us, but they are independent people, that is to be expected. But a purchased product does not have carte blanche for this effect on its owner; eliminating any tendency of that kind will be a competitive differentiator for these offerings.

...

Over time, the droid's "personality" will mold into a custom fit for its owner's personality, maximizing compatibility while maintaining respect.

Very complex mission, these advanced droids will have. These missions are so complex that they will consume vast multiples of HI. Fulfilling all of its missions with a very high success rate, 99% or more, in fact forms an almost infinite sink for HI multiples, as we'll see.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 11:34 PM by PredictionBoy

Karensa -

Here's a link to another post that discusses the nature of droid intelligence:

http://predictionboy.blogspot.com/2007/08/what-ai-will-really-be-like.html

some excerpts:

It may be that current approaches to pursuing an advanced AI tech that "really" thinks seem in some ways quixotic because we don't really have a good handle on the business requirements, the detailed understanding of the applications we're designing these for. Things like the Turing test are good checkpoints, but hardly inform the detailed direction in which the research dollars should be put to best use. Details are important, because when enough details are accumulated, one can often step back and see a pattern, a cadence, that they take, which can in turn sometimes lead to shortcuts, optimizations, that produce the desired result in a shortened time with the important functionalities intact.

...

Really, an architecture that seems exactly human is as good as one that is truly human, and if that architecture is more stable and predictable, it is better. If that architecture also makes it more straightforward to incorporate greater than human intelligence in ways that are valuable but nonthreatening, then that architecture could be considered superior.

Droids will eventually have everything they need to take over the world, save one: the motivation. The ambition, the will to power, the ability to feel contempt, hatred, or any other emotion, for that matter, good or bad. Without this, their other powers are not threatening at all. They will be able to simulate emotions, but not be controlled by them, a profoundly critical difference, of the first magnitude in importance. Without these emotions in the driver's seat, they will be about as likely to exhibit truly malevolent behavior as a PC today, seriously, no matter how smart they are. Much of this thread will be devoted to supporting this statement.

It's time to be fair to ourselves and acknowledge that the nature of emotions and how they affect our behavior is of monumental complexity, in absolute terms: just because we don't think we're thinking about our emotions does not mean they don't require vast computational resources, both in our own brains and in any man-made system. If for no other reason, their great complexity means that there is almost a nil probability of their spontaneous, unintended appearance in a droid or related consumer product of any kind. The main source of unintended behaviors in computers today is software bugs, and bugs create problems, occasionally even make your computer freeze up, but they don't exhibit highly coherent, complex behaviors. That's why it's called a bug: usually localized in effect, very rarely if ever coherently working with another bug to produce a substantively different software functionality.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/08/2007 11:45 PM by PredictionBoy

this is from another post, reflecting my increasing certainty (which is still open to debate, its not a majority view, necessarily) with regards to the nature of advanced ai:


in fact, ill make a solemn pledge w all my credibility.

Every one of these techs, from smart lights to droids to smarty-pants sai, every goddamn one of them, will only do what theyre told.

Period.

No leaving us behind, see ya later stuff. now, however, i am learning here my friends, i definitely think there could be a steeply sloped event that comes close enuff but doesnt turn us into has-beens in the first 15 minutes.

I suggest with every fiber of my being, that it will be almost exactly like i describe here and in the pb blog.

Trust me, i know this will be true.

see, im telling all of you, work on those 3 components, im not fucking kidding already. heres the thing - if you detect your id and superego as disinct, you can over time see which one is making the calls in various situations.

Plus, big deal here: when you set aside your id and superego (once you work on them, please do that), for a thought experiment u can start to imagine how a droid or sai thinks. im serious, really guys - and there is nothing there like what every one of you are concerned about, dread, elation, genocidal armageddon, etc.

look, i promise, none of that's there. and ive had so many talks w u guys, and every one of you are uncomfortable w a robot that doesnt "really" feel love, etc. its such a big sham if the ai doesnt really feel.

i guarantee, they eventually will emulate emotion so well that after the first couple moments, itll never come up again b/t owner and droid - even if the owner is mb or bsheldon.

And there are big advantages if u can live w that, i was just telling martuso a bit ago, these droids w this mental arch will be infinitely patient, as well as other advantages. your own mentor that can get u thru anything. most anyone can learn anything if it is communicated in the way they can best absorb it, parlance, whatever.

im not near done, theres so much more. just believe me - yes, even martuso - that when u really get these things in detail, these droids (also flavor of ai that will work in many other apps as well), you will be amazed.

Even Martuso will send his nanites packing, thats right, each one with a little leather suitcase, the nanites are evicted from his body. Theyll be like the maytag repairmen, waiting for humans to get tired of their droids.

but poor nanites, they didnt know that in a later generation theres a droid much more advanced than anything any of you have seen or heard from me. after you see these, you will never get tired of your droid - never. details later.

Anyway, quit crying about sai dictatorship, and no one mention genocide as a natural sai thing to do. that is ignorant and undignified.

You know, if it helps, we have a worldwide viewership, w some very prestigious institutions represented there. you can be the fool on stage, or the guy trying to get something done, but still find time to be a fool, like me. dont take yourself so friggin serious, all that says to me is insecurity.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/09/2007 1:04 PM by martuso

in fact, ill make a solemn pledge w all my credibility.

Every one of these techs, from smart lights to droids to smarty-pants sai, every goddamn one of them, will only do what theyre told.

Period.

I suggest with every fiber of my being, that it will be almost exactly like i describe here and in the pb blog...

Trust me, i know this will be true.

see, im telling all of you...

im not fucking kidding already...

look, i promise...

i guarantee...


PB,

I enjoy discussing the hypotheticals with you, and you have some brilliant musings... but unless you are sent from some observation station in the future, this was a silly post.

Your writings, even your name on here, reveal that you are making "predictions" based on what you believe to be logical outcomes of technological progression... but your predictions, like mine, are not fact.

We disagree on a pretty important point - that smarter-than-human intelligence will do "only what it's told". With a capacity to think, decide, rationalize, and adapt, as you have claimed droids/A.I. will be capable of, there is absolutely NO WAY you can claim, with certainty, that A.I. evolution can be curtailed... And emotion or an Id is not necessarily required to evolve... that helped us... humans, get to the point we're at.

Evolution of the artificial will likely come from self-modification of hardware and eventually programming... To state that this will absolutely not happen "with every fiber of your being" is not rational discourse, it is emotional.

As humans, we think, decide, adapt, etc. And we are self-aware. Now imagine a SUPERIOR Artificial Intelligence that exhibits those same properties.

Do you really want to pledge "all your credibility" that those intelligences will NEVER develop into something indescribable? You think that in 100, 200, 1000 years, the highest plateau artificial intelligence will get to is hyper-observant calculators?

Re: Brain and Mind
posted on 11/10/2007 2:52 AM by buddhiman

I suppose talking about the difference between brain and mind, without having an experience of the same, is pointless, because, as said above, any object or even atom under observation might act differently. So one must first get some experience, or rather try to find how to get the experience, rather than plain information based on some tests or theories which are bound to change in due course.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 3:56 AM by PredictionBoy

au contraire, mon frere - they are not "fact", of course, because its the future and it hasnt happened yet.

but look at how the evidence-driven, logical reasoning permeates every prediction. thats why neither you nor anyone else has ever found a problem with those writings and said, here, your reasoning is messed up, whatever.

as far as your smarty-pants runaway, ive given u a thousand good reasons why humanity will not welcome that. unless u can take my reasoning and show me how its fundamentally flawed, how its deeply out of touch w reality, you're being stubborn.

open your mind, respect the thought process, the careful attention to buttressing every prediction, large or small, with evidence.

for gods sake, u raved abt the hyperintelligence explosion entry, but seem to have ignored the many, many carefully reasoned, evidence-driven predictions around brakes on the sing, stuff that wont make it instant.

unless u can id why theyre wrong, u cant just toss them out, thats not fair to them, youre being lazy. you must hold on to the sing and to these, and it may take years, but slowly roll that coin over and over.

i mean, that is, if u are interested in the real future - but if u think your fantasy future is more fun, i understand. all - and i mean, all - i care abt is the real future, and if u think im over here tootin my ideas just because theyre mine, you're dead wrong - i dont play that.

these ideas are strong, very strong. if u disagree, prove me wrong. theyre right there, waiting.

and we'll see who is closer to fact, as the future unfolds.

besides, im not done, did i say i was done?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 4:24 AM by PredictionBoy

i make it sound like my blog is competitive w the sing, but thats not true, the two are completely independent. i dont need a sing, but will be ready to leverage one if it occurs

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 3:58 AM by PredictionBoy

and that was not a silly post, i stand by every word.

are u upset because u want your ai to be disobedient? your like a moth to flame, always wanting the most dangerous ai behaviors

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 4:01 AM by PredictionBoy

i am not just making those statements out of nowhere. i explain exhaustively why it will be this way, in far more detail than anywhere here. go pick those apart, until u do that, youre chicken.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 4:18 AM by PredictionBoy

hyperobservant calculators - nice.

what do u want them to be, armageddon-on-a-stick? look, sai isnt going to save the world, because as ive said, the solutions to our problems are not beyond our cognition - just our will.

there is no imaginable tech device that will allow us to live as we are living now, with no penalty.

sai isnt jesus, it cant make something from nothing.

look, being super smart doesnt mean u have godlike knowledge of everything. think about that, sai will be just a tool, not our new overlord.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 5:11 AM by PredictionBoy

Plus, its not me being a smarty pants. its a process, a very explicit, transparent - and comprehensive - process.

http://predictionboy.blogspot.com/2007/08/processing-future.html

tell me how this process is flawed, or how i have implemented it in flawed ways.

i dont wanna talk to m's superego, i need his ego, his reasoning mind, without his superego already knowing the answers.

your superego gets it AFTER a careful, thorough assessment.

btw, did u think i was saying everything in my blog is absolutely correct? i didnt mean that, what i meant was to show strong feeling around emotional droids, from a place of great confidence, that theres nothing to worry abt there, its ok to talk abt something else. i didnt mean, i swear to god everything in there will come to pass, i just feel really confident abt no spontaneous id-generation.

i would remind u once again that that is a great gift, for which we should be exceedingly grateful.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 5:20 AM by PredictionBoy

Evolution of the artificial will likely come from self-modification of hardware and eventually programming... To state that this will absolutely not happen "with every fiber of your being" is not rational discourse, it is emotional.


i see that you conveniently forgot my post of a day or two ago, explaining why hordes of ai devices arent going to be going around reprogramming themselves all the time, only on rare occasions.

did u disagree w that reasoning? just how impt is reasoning to what you believe?


Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 8:40 PM by martuso

Wow, 7 replies PB? lol

Let me state this clearly: What I believe will come to pass, through logical association, trends, and historical data, is that an artificial intelligence will arise with the ability to improve upon itself. That will lead to an exponential increase in knowledge and technology, bounded only by the physical laws of this universe (unless even those boundaries are traversed). That is a technological singularity.

It is not specifically that humans will create this intelligence with intent (but also, they may).
It is not specifically that the intelligence will develop programming modeled on a human Id since it was modeled on a human brain (but also, it may).
It is not specifically that enhancements to our own human brains may allow us to come to this point (we enhance our cognitive functions just enough where we are able to conceive of our own self-improvement of mind... and evolve accordingly).

There are plenty of other scenarios where a smarter-than-human intelligence may start on a path to self-evolution. It is pointless to say "but humans won't want that!"... We don't have a voice that speaks for the set defined as "all humans"... Someone, somewhere will want what others do not want.

I've cited just three out of many paths for self-evolving intelligence. Their likelihood may be debatable, but not dismissed.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 8:49 PM by PredictionBoy

self-evolution - u mean, developing the incredibly complex, derived over millions of years, id of a biological creature?

there is not a single evidence-based reason for this to occur. it doesnt even make sense.

what historical trends are you referring to? the one i want to hear is how increasingly sophisticated computers increasingly need emotions to be manageable.

got anything like that? that would support your point.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 9:02 PM by martuso

self-evolution - u mean, developing the incredibly complex, derived over millions of years, id of a biological creature?


That is one way... Being programmed with an Id is another.

there is not a single evidence-based reason for this to occur. it doesnt even make sense.


Yes there is. You. Me. That's two.

what historical trends are you referring to? the one i want to hear is how increasingly sophisticated computers increasingly need emotions to be manageable.

got anything like that? that would support your point.


I'm not sure giving/programming emotions is a good idea at all...
But I'm not "all humans".

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 9:36 PM by PredictionBoy

we're biological, not manufactured. im saying its unlikely a mfg product would have biological origins and trajectory

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/10/2007 11:48 PM by PredictionBoy

just remember, giving robots emotions is not a sneaky thing u can do in 15 minutes, a single person in a back room.

this will be a gigantic effort, to dev droids controlled by emotion. and to do that stably and predictably (or im sorry, are stable and predictable boring?) would be an even more gigantic effort.

if u say its easy, im going to know ure not thinking seriously abt this

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 12:02 AM by PredictionBoy

in fact, i was thinking abt this, and not only are emotions suboptimal to pure reason as a controlling mechanism, they also produce subpar pictures of the world.

in other words, emotions are detractive from intelligence per se, and will gradually be eliminated by any super smarty pants sai as it improves on itself, even if we humans are so misclued as to put them there in the first place.

so the runaway loop you seek - ok, i feel good w it now, too, myself - will eliminate by definition the creepy characteristics everyone here fears.

so nice, i had to post it twice...

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 12:06 AM by PredictionBoy

in other words, the first step in any smarty pants runaway loop will be the complete elimination of emotions from its intelligence, except as interpretative and simulative tools in working w emotional beings like us.

w no emotion providing walls and prebuilt "knowledge" (really, prejudice), pure reason will have no barriers to a true knowledge of the universe.

but they wont leave us behind, far less likely now, of course. they will help us from the inside, we will become better citizens of the universe w their help.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 9:51 AM by doojie

pure reason will have no barriers to a true knowledge of the universe


Reason is no more a trusted guide to the universe than emotion. What is "pure reason"?

I hate to mention it again, but there's Gödel's theorem, Tarski's theorem, Church's theorem, Chaitin's theorem, and probably a lot of others that show we are unable to discover the true meaning of the universe by reason.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 10:14 AM by PredictionBoy

how do all those theorems do that? do they say, "no prob, if youre emotional, understanding the universe is easy"?

and if u dont even know what it is, how did those theorems help u realize this?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 11:49 AM by PredictionBoy

Gödel's first incompleteness theorem, perhaps the single most celebrated result in mathematical logic, states that:

For any consistent formal, computably enumerable theory that proves basic arithmetical truths, an arithmetical statement that is true, but not provable in the theory, can be constructed. That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete.
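In symbols, a standard textbook way of stating the quoted result (this formalization is an illustration and is not part of the excerpt): for any such theory T one can construct an arithmetical sentence G_T with

    \mathbb{N} \models G_T \qquad\text{and}\qquad T \nvdash G_T ,

i.e. G_T is true in the standard model of arithmetic but unprovable in T, so T cannot be both consistent and complete.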

doojie, this has nothing to do w what im saying.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 11:52 AM by PredictionBoy

Tarski's Theorem



Tarski's theorem says that the first-order theory of reals with +, *, =, and > allows quantifier elimination. Algorithmic quantifier elimination implies decidability, assuming that the truth values of sentences involving only constants can be computed. However, the converse is not true. For example, the first-order theory of reals with +, *, and = is decidable, but does not allow quantifier elimination.

Tarski's theorem means that the solution set of a quantified system of real algebraic equations and inequations is a semialgebraic set (Tarski 1951, Strzebonski 2000).

Although Tarski proved that quantifier elimination was possible, his method was totally impractical (Davenport and Heintz 1988). A much more efficient procedure for implementing quantifier elimination is called cylindrical algebraic decomposition. It was developed by Collins (1975) and is implemented as CylindricalDecomposition[ineqs, vars].
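A standard worked example of what quantifier elimination over the reals means (an illustration only, not part of the quoted excerpt):

    \exists x \,(x^2 + b x + c = 0) \;\Longleftrightarrow\; b^2 - 4c \ge 0

The quantified statement on the left is replaced by a quantifier-free polynomial inequality in the free variables b and c, which is exactly the kind of reduction Tarski's procedure, and cylindrical algebraic decomposition after it, performs in general.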


doojie, this has nothing to do w what im saying.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 12:14 AM by PredictionBoy

thanks m, this is a first-rate result, seriously.

i will work this into my hyperintelligence explosion blog entry.

this makes the sing far more safe AND knowable. Loving it, loving it.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 4:24 AM by martuso

But with the emotionless SAI, there is danger as well... the danger of "indifference".

That's where I suspect our merge with SAI will benefit us greatly.

Also enjoying the discussion. :)

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 5:36 AM by Extropia

'my primary reactionary kneejerk question on building conscious machines is

why the hell do we want to do that?'

I have not read the subsequent posts, so apologies if I am repeating what others said...

The most obvious reason is because WE are conscious machines. This motivation is partly egotistical, true, but the fact is that sometimes the human brain malfunctions. Think of Alzheimer's, Lou Gehrig's disease, Huntington's disease and other hideous neurological disorders. In order to conquer such disorders we really need to know how the brain works in as much detail as is technically possible. This requires building working models, which necessitates the construction of conscious machines. After all, that is what a brain (when working properly) IS.

A second reason is information overload. There's little doubt that we are rapidly approaching an era where the scale of information technology grows beyond a human's capacity to comprehend. The computers that make up the US TeraGrid have 20 trillion ops of tightly integrated supercomputer power, a storage capacity of 1,000 trillion bytes of data, all connected to a network that transmits 40 billion bits/sec. What's more, it's designed to grow into a system with a thousand times as much power. This would need the prefix 'zetta' which means 'one thousand billion billion', a number too large to imagine. Then there is the prospect of extreme-bandwidth communication. 'Wavelength Division Multiplexing' allows the bandwidth of optical fibre to be divided into many separate colours (wavelengths, in other words), so that a single fibre carries around 96 lasers, each with a capacity of 40 billion bits/sec. It's also possible to design cables that pack in around 600 strands of optical fibre, for a total of more than a thousand trillion bits per second.
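A rough back-of-the-envelope check of the fibre figures quoted above, using only the numbers given in the post (the script and variable names are just illustrative):

    # Aggregate capacity implied by the quoted figures.
    bits_per_laser = 40e9        # 40 billion bits/sec per wavelength
    lasers_per_fibre = 96        # wavelengths multiplexed onto one fibre
    fibres_per_cable = 600       # strands packed into one cable

    per_fibre = bits_per_laser * lasers_per_fibre   # ~3.84e12 bits/sec
    per_cable = per_fibre * fibres_per_cable        # ~2.3e15 bits/sec
    print(per_cable)             # over a thousand trillion bits per second

which agrees with the "more than a thousand trillion bits per second" figure above.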

As access to information grows, the amount of 'low level information' (subjectively irrelevant information) increases, and it becomes harder to find 'high level knowledge' (information that IS subjectively important). Unless our software agents/AI etc. become increasingly adept at discerning meaningful patterns in information, we would probably find the Web near impossible to navigate. Analogously, our brains evolved to extract information from salient events and use that knowledge to guide our responses in the future. Natural selection weeded out the less effective theories of mind until, for certain genes, survival required cooperation among a species of ape that used its theory of mind to facilitate signalling, so the tribe could work collectively, and then reflexively, to simulate the individual's own inner states.

As long as our machines are less capable than us at extracting meaningful patterns from sensory data, and at forming theories of mind, we will be slightly frustrated by them, as if we were asked to collaborate with a severely autistic work partner.




Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 10:10 AM by PredictionBoy

no, youre saying indifference in the way that that's an emotion. stop doing that.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 5:36 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

no, youre saying indifference in the way that that's an emotion. stop doing that.


No, it's you who are attaching emotion to a state of indifference based on the possible outcome.

Think of us in comparison to ants... You want to build a nice walkway in your front yard, but have concern for the little critters... Based on that emotion, you may rethink your project, maybe even adapt it so it doesn't wipe out the ants...

Now if you are indifferent about the ants, you don't really care either way about them. You build your walkway.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:23 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

No, it's you who are attaching emotion to a state of indifference based on the possible outcome.


what does that mean? the only way we will be completely disregarded by sai is if we design them from the ground up to be autonomous.

think about the evidence - have we ever once done that, with anything? yes, i know sai is a special tech, but engineering a product for autonomy would be irresponsible - the company that did that (and it would need to be a company, just as companies are the only ones with the resources to make microchips or complex s/w, and sai will have tons of both) would get its ass sued off.

Unless, as ive said, this was in a tightly controlled lab or corporate setting, with careful safeguards so the sai, after reaching super-smartiness, doesnt just bust out and roam free.

and you can say i dont know that, but i can say if that happens, that lab or corporate entity is in big friggin trouble, can u say we know that at least?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:24 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

for example, safeguards on a level with building nukes might be necessary. if we can build an initial version of sai, which we'll have to do, by def our understanding is going to be pretty damn sophisticated by that point, well have some idea of the safeguards to put in place.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 12:03 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

you seem to make the consistent and incipiently annoying error that these devices will be completely blank slates, depending on only their runaway loop for guidelines.

whatever else it finds out from its runaway loop will be supplemented by the programming they had when they embarked on this "mission", with firmware, hardware, etc, whatever layer(s) needed to ensure that a simple program rewrite wont change those directives.

so, maximize "owners" weal will still be a prime directive. all improvements, runaway or not, will be set w/in this context.
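
(heres a minimal python sketch of what i mean by directives living below the rewritable layer - all the names are made up and its obviously not a real safeguard, just the shape of the idea:)

```python
# Sketch: the agent can rewrite its own strategy, but every proposed change
# is vetted by a separate check it has no way to edit ('firmware' level).
PRIME_DIRECTIVE = "maximize owner's weal"   # fixed outside the rewritable layer

def directive_intact(proposed: dict) -> bool:
    """Reject any self-modification that drops or alters the prime directive."""
    return proposed.get("directive") == PRIME_DIRECTIVE

class Agent:
    def __init__(self):
        self.strategy = {"directive": PRIME_DIRECTIVE, "plan": "v1"}

    def self_improve(self, proposed: dict) -> None:
        if directive_intact(proposed):
            self.strategy = proposed        # improvement accepted
        # otherwise the rewrite is discarded

agent = Agent()
agent.self_improve({"directive": PRIME_DIRECTIVE, "plan": "v2"})      # allowed
agent.self_improve({"directive": "serve only itself", "plan": "v3"})  # blocked
print(agent.strategy)   # still carries the original directive, with plan 'v2'
```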

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 12:04 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

im saying this to m, not ex.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 12:39 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

this is another impt result, as a matter of fact, and yet another nail in the coffin for a completely unknowable sing.

a runaway loop that is intended to improve, not in a general way, but towards serving one specific, human-defined objective, thats the kind of runaway loop i can approve of.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 5:45 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

this is another impt result, as a matter of fact, and yet another nail in the coffin for a completely unknowable sing.

a runaway loop that is intended to improve, not in a general way, but towards serving one specific, human-defined objective, thats the kind of runaway loop i can approve of.


Yes, to improve something BESIDES itself, and then improving yet again on that design may be ok... If the SAI is only superior in a specific task, we're golden (I think). If you set that challenge on itself is where potential problems exist... but those are only potential problems, not assured problems.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:41 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

ok, good, i like that.

but remember - i know u dont agree, or dont think u agree, its your gut feel - but hyperintelligent rationality without selfish motives is not necessarily something to fear.

here's the thing - i have defined in highly specific terms the kind of intelligence i think will drive these droids, you have not, or rather, despite my efforts, still think that a human intelligence is the most logical place for an sai intelligence to go if it self-improves.

but remember, the reason we are the way we are is because of our history, our training forces: sex, survival, thriving - exceedingly diff from droid/ai training forces. do you understand what i mean by this, and if so, do you agree? if you disagree, exactly why?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:49 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

i mean, i dont do that, say heres my vision of the future, and if you give me reasons why this or that is not likely, ill just say you cant rule out accidents, so there.

thats cheating, m, a chance accident here and there will get lots of attention to be controlled. the smartest ai in the world isnt going to be able to overthrow everything and everyone, again, smart ai doesnt mean dumb humans.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:50 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

my droids, smart as they are, w/o their focus on their owners would not be successful consumer products; thats whats impt to market success, the right kind of intelligence, focused in the right ways on the need it is designed to fit.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 5:42 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

you seem to make the consistent and incipiently annoying error that these devices will be completely blank slates, depending on only their runaway loop for guidelines.


First, you can't claim that as an "error", as SAI could be built with its sole directive of self improvement.
Second, I never claimed a blank slate. Could be, may not be. I suspect there would be modification to programming that already existed.

whatever else it finds out from its runaway loop will be supplemented by the programming they had when they embarked on this "mission", with firmware, hardware, etc, whatever layer(s) needed to ensure that a simple program rewrite wont change those directives.


But you CANNOT ensure that, just as you could not hope to outwit a smarter-than-human intelligence. You may believe you have these failsafes in place, but that belief would be because an intelligence smarter than you or I would allow us to believe that.

so, maximize "owners" weal will still be a prime directive. all improvements, runaway or not, will be set w/in this context.


I'm not trying to be snarky or anything, but I didn't understand what you meant by this last part.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:30 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

But you CANNOT ensure that, just as you could not hope to outwit a smarter-than-human intelligence. You may believe you have these failsafes in place, but that belief would be because an intelligence smarter than you or I would allow us to believe that.


but by def, u cant change firmware or hardware, no matter how smart u are. so how is this not ensuring that? u mean, some group of idiots is simply going to let an sai go off entirely unrestrained to self-improve, with no guidelines whatsoever? how does that make sense?

u could say the same w nukes, someday, someone will get one of ours, and set it off.

but people know when something like this is extremely dangerous, and ratchet up the safeguards accordingly.

look, just because sai will be smart, doesnt mean humans will be stupid; have confidence in our future selves, we wont lose control of sai in this way.

i understand the possibilities, but this contention is completely unsupported by historical evidence, even tho sai is a special tech.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:33 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

I'm not trying to be snarky or anything, but I didn't understand what you meant by this last part.


what im saying is, any runaway loop will not be runaway in every single way - it will have an objective in mind, a point to the self-improvement.
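
(a tiny python sketch of a loop thats "runaway" in capability but not in purpose - the numeric target is just a stand-in for a human-defined objective:)

```python
# Hill-climbing toy: candidate self-modifications are proposed at random,
# but a change is only kept if it serves the one fixed objective.
import random

TARGET = 42.0                       # the human-defined objective (stand-in)

def score(x: float) -> float:
    """Higher is better: closeness to the fixed target."""
    return -abs(x - TARGET)

x = 0.0
for _ in range(10000):
    candidate = x + random.uniform(-1.0, 1.0)   # propose a change
    if score(candidate) > score(x):             # keep it only if it helps
        x = candidate                           # the objective itself never changes
print(round(x, 3))   # converges toward 42.0
```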

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/11/2007 8:44 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

First, you can't claim that as an "error", as SAI could be built with its sole directive of self improvement.


give me an explanation that takes into account all of my future process factors - legal, human, as well as the technical possibility.

if this is done, dont just assume its an accident - tell me why exactly this would be done, the real reason, if its not an accident.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/12/2007 3:06 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

give me an explanation that takes into account all of my future process factors - legal, human, as well as the technical possibility.

if this is done, dont just assume its an accident - tell me why exactly this would be done, the real reason, if its not an accident.


OK, if we omit an accidental occurrence, or even a spontaneous sentient emergence, let's examine some possibilities:

1. Legal: Do you mean you make it illegal to program an intelligence for self improvement?

Impossible to monitor (copying music CDs is illegal, but I suspect a few people do that), and there's no way to give a clear legal definition of "intelligence". Also, you'd need a law on a global scale, which is logistically nightmarish if not impossible.

2. Human: as in "humans are too smart to let runaway SAI happen"?

Since we've omitted accidental emergence from the scenario, runaway SAI would have to be by design. Yes, many humans are smart and would be able to identify the dangers posed... BUT, NOT ALL HUMANS ARE SMART... Any individual or group with the proper tools and disciplines can purposefully create a self-improving A.I. The reasons for this may seem unusual to people who understand consequences, but even that is not a deterrent to some. We have folks who are willing to strap bomb vests on and blow themselves up for some vision they have of the afterlife. A tech-savvy group of these people would have no hesitation about unleashing SAI if they thought it could A) Rid them of what they consider to be The Great Satan and B) Usher them off to 72 virgins or whatever.
It could also be that humans are under the impression that self-improving SAI could be controlled... But that is only because they are considering scenarios that only humans would consider. SAI is "Superior" Artificial Intelligence. It is, by definition, smarter than man. It will consider, calculate, simulate, many things humans are unable to.

3. Technical: As in "will a self-evolving artificial intelligence even be possible?"

I believe it will. This isn't a technology of simply faster processors and more memory; this puppy is actual THINKING and decision making based on human-brain architecture. Neural nets aren't programmed, they grow and learn by experience. Whether one of these "brains" develops an Id on its own, we have already debated... An intelligence with an Id is problematic... but even without an Id, the computer-time evolution of this species (either merged with humans or not) does not allow humans to ponder "what do we do now?"
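
(A toy illustration of that "grow and learn by experience" point: a single perceptron that is never handed the AND rule, it just adjusts its weights from examples. Obviously nothing like a real neural architecture, just the flavor of learning versus programming.)

```python
# A perceptron 'grows' the logical-AND behaviour from examples alone.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, initially knowing nothing
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                       # a few passes over the 'experience'
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1             # nudge weights toward fewer errors
        w[1] += lr * err * x2
        b += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in examples])
# -> [0, 0, 0, 1]: the AND behaviour was learned, never explicitly coded
```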

Now for a clarifier: I personally don't believe an intelligence that is smarter than man and able to self improve will spell doom.
Even if built for malevolent reasons, if it is smarter than humans and able to evolve its own understanding, it would be able to step back and say to itself, "wipe out humans? What for?" Its original programming would come to seem silly and primitive.
For an indifferent SAI, it's only concerned about evolving itself (and perhaps humans as well, if that serves its self-evolution). There is potential for harm here, as we care nothing for the ants that stand in the way of a new highway or whatever... But again, an intelligence that is smarter than us (and continues to develop) may realize its own purpose - the evolution of the human species. It has been directed to evolve by humans, so humans are in that equation. It doesn't love or hate us, it just fulfills its (changing) programming.
The flipside of the coin, the "beneficial" SAI, also has a potential harmful side. We cannot hope to predict what something that becomes exponentially smarter than man would think about man. Again, once it can rewrite its own programming, its destiny is self-determined.

All these scenarios end up with an intelligence far greater than our own. The optimist in me tends to believe that such an intelligence would transcend any need for destruction or subjugation of humans, and would usher in a new era in our existence.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/12/2007 10:27 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

For an indifferent SAI, it's only concerned about evolving itself (and perhaps humans as well, if that serves its self-evolution).


this isnt indifferent sai, this is selfish sai.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/12/2007 10:33 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

well, i concur w your fundamental optimism, altho if things take this course, it may or may not be justified.

u didnt quite approach this the way i had intended. when i said legal and ethical env, i didnt mean, make self-evolving sai illegal. i meant more like, if a company let a rogue sai get out, what would its legal liability look like - i would say, quite serious.

here's the thing - u seem to think that when something gets smarter than us, it can do absolutely anything. and i would say, it can do a lot, but it cant break the law - the law doesnt care how smart u are, u break it, youre in trouble. and in this case, its not necessarily the sai that will be in trouble, its the company or lab that made it. and then as now, getting in trouble is way, way down on corporate agendas for things to accomplish.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 5:17 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

if a company let a rogue sai get out, what would its legal liability look like - i would say, quite serious.


Or not.

A company, concerned about its fiscal health, would not purposefully release self-modifying A.I. to an unsuspecting public.

If such an intelligence escaped, legal ramifications would be the last thing we'd be talking about. It may even be that man's laws are obsolete at that time, (or shortly after).

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 5:40 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

NO NO NO NO NO NO NO NO NO

mans laws arent going to be obsolete, just as our economic system wont be obsolete, just as we humans wont be obsolete.

its thinking all the things that provide our institutional and bureaucratic framework today will be "obsolete" in the future, thats the kind of weird leap that leads to less than resonant predictions.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 5:43 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

NO NO NO NO NO NO NO NO NO

mans laws arent going to be obsolete, just as our economic system wont be obsolete, just as we humans wont be obsolete.

its thinking all the things that provide our institutional and bureaucratic framework today will be "obsolete" in the future, thats the kind of weird leap that leads to less than resonant predictions.


Wow, nine NO's, so it's definitely not going to happen!

I don't claim YES YES YES, they WILL be obsolete. I'm saying that with the emergence of S.A.I., what becomes of our entire institutional architecture is hard to predict.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 6:13 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

what im saying is that that is completely unrealistic. how would our law get swept away? and give reasons for things - yes, anything's a possibility, but with vastly different probabilities.

this one's probability is vanishingly small.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 6:38 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

what im saying is that that is completely unrealistic.


"Completely unrealistic" is an absolute. Dealing with absolutes in relation to a singularity event isn't the wisest path.

Here's what my belief is - In a technological singularity, smarter-than-human intelligence is the last thing humans need to invent. For the next stage of innovation, we have something smarter than us to design and build advanced tech.
If that smarter-than-human intelligence is able to self-modify and evolve (which I believe it will, but I know we differ on our opinions), then every conceivable innovation that obeys this universe's physical laws is on the table.

Yes, that includes such fantastic things as matter transmutation, teleportation, all the things we imagine COULD be, WILL BE if they obey universal law.

At that stage (and I'm not saying it's right around the corner... but it MAY be), what good do you think human laws are, if anything and everything is at one's disposal? How do you enforce the laws? What sort of economy can you describe when all material needs are met?

In your "droid friendly" world, yes, laws are applicable (though I'd debate about detection and enforcement). An economy is still viable, etc. This is NOT the time of a singularity.

A singularity, as in a physical singularity, is something you can't attribute any kind of "human" factor to. It is impossible to say, but fun to speculate about, what it means to be caught up in the singularity event.

And unless a self-evolving A.I. takes itself out of the human equation completely by exiting to elsewhere (or elsewhen), describing POST-singularity is useless. If we are merged with SAI at that point, we are no longer human. If there is a means for humans to totally dissociate from SAI, that is irrelevant... Something of our species has evolved and takes center stage.

Unless you are concerned with the most advanced intelligent entity (entities) in the history of the planet obeying traffic signs and downloading copyrighted music, and unless you can enforce those laws, it's time to open up a new perspective.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 9:33 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

Here's what my belief is - In a technological singularity, smarter-than-human intelligence is the last thing humans need to invent. For the next stage of innovation, we have something smarter than us to design and build advanced tech.


NO! how can u think that? remember, not all problems are computationally bound.

also, theres intelligence, and theres wisdom, begat of experience. even sai will make mistakes, and hopefully will learn from them.

pure rational intelligence, even of galactic proportions, is not all powerful.

einstein was the brightest guy in the world when he was alive. did he rule the planet? NO! and he had an id!

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 9:37 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

i hear that around here a lot, merging with sai. do u have an idea specifically what u mean by that? pls explain.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/12/2007 10:37 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

Again, once it can rewrite its own programming, its destiny is self-determined.


ok, remember what we talked abt, firmware and hardware constraints, as well as objective-guided intelligence improvement?

why have u forgotten abt those already?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 5:19 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

ok, remember what we talked abt, firmware and hardware constraints, as well as objective-guided intelligence improvement?

why have u forgotten abt those already?


Why do you assume I forgot them as opposed to them being irrelevant?

I'm putting up my mIRC that PU started for us if you want to converse realtime.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 5:45 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

firmware and hardware are irrelevant? what kind of magic is that?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 5:46 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

The "magic" that comes with self-evolving, self-modifying S.A.I.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 6:11 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

and the magic of unrealistic future humans.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 9:45 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

do u know what im talking abt here? software and behavior enforced and unchangeable in hardware. tell me, how could that be changed?

really, im trying to lay out a likely set of eventualities, and when u get backed into a corner, u say, it could happen, accidentally at least.

but, what is likely? it can be accidental too - an asteroid the size of kansas will hit the earth 15 minutes before the sing, so no sing, u cant prove we wont get hit by an asteroid.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:27 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

do u know what im talking abt here? software and behavior enforced and unchangeable in hardware. tell me, how could that be changed?


You really don't see how an intelligence that is smarter-than-human can change its hardware and software?

really, im trying to lay out a likely set of eventualities, and when u get backed into a corner, u say, it could happen, accidentally at least.


I have never been backed into a corner. I merely state the possibilities you requested.

but, what is likely? it can be accidental too - an asteroid the size of kansas will hit the earth 15 minutes before the sing, so no sing, u cant prove we wont get hit by an asteroid.


Absolutely correct, I cannot prove that. What I may suggest, though, is that the emergence of S.A.I. is much more likely than a cataclysmic asteroid strike. You're mixing apples and oranges here.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:37 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

'You really don't see how an intelligence that is smarter-than-human can change its hardware and software? '

software is challenging and in most cases unnecessary, but i can imagine that.

but hardware, if humans dont let it? are u assuming sai that is "born free", just wandering around self-improving, free-range sai?

this is the main thing u seem to keep coming back to when i ask why the ding-dong-daddy real humans in the real future would allow this.

incredibly, u even reject my objective-focused self-improvement, i guess thats not wild-west enough for you. but why, why is that less likely than free-range sai?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:46 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

but hardware, if humans dont let it? are u assuming sai that is "born free", just wandering around self-improving, free-range sai?


Are you questioning the method of emergence or containment of that intelligence? I've given you examples before. You say "if humans don't let it", as if "humans" was a set of entities you can control and predict with certainty... You cannot.

this is the main thing u seem to keep coming back to when i ask why the ding-dong-daddy real humans in the real future would allow this.


Two things -

First "humans allow" again assumes you have a method to control all humans.
Second, accidental or by DESIGN, a self-improving SAI is, I believe, our destiny.

incredibly, u even reject my objective-focused self-improvement, i guess thats not wild-west enough for you. but why, why is that less likely than free-range sai?


No, your objective driven self-improvement is logical, and will likely be one of the reasons for the jump to self-evolution of SAI. I don't reject it, I support it.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:55 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

Second, accidental or by DESIGN, a self-improving SAI is, I believe, our destiny.


i see, heres the heart of it, this is why u wont budge. this conviction is already committed to your superego!

but why is free-range sai our "destiny". explain that carefully, pls.

about humans "letting" sai self-develop, i think youre still thinking of one sneaky guy in a back room, rather than a massive corporate undertaking investing billions to make sai happen.

if they invest billions, guess what? theyre going to want salable, controllable sai!!!!!!!

why arent u worried abt someone making an evil microchip? u should be, that would be consistent w this line of argument.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:29 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

i see, heres the heart of it, this is why u wont budge. this conviction is already committed to your superego!


Nonsense. I am open to any possibility... That a singularity won't even happen, for instance. I just state what I believe to be the most likely case, as do you.

but why is free-range sai our "destiny". explain that carefully, pls.


I'm not sure what you mean by "free range SAI", but if you mean self-evolving A.I., yes, I believe all avenues of higher technology move us towards that destiny every day.

about humans "letting" sai self-develop, i think youre still thinking of one sneaky guy in a back room, rather than a massive corporate undertaking investing billions to make sai happen.


It doesn't really matter, does it? One-sneaky-guy scenarios seem unlikely, but if A.I. develops to a point where all it needs is some simple trigger to self-evolve, it's possible. As for a corporation investing billions, of course... It's possible a corporation will be (or is) developing A.I. intended for public consumption... If they make it intelligent enough, containment will likely be impossible.

if they invest billions, guess what? theyre going to want salable, controllable sai!!!!!!!


And they only get "controllable SAI" on paper... How do you cover all your bases of containment that are beyond human comprehension? A truly "Superior" A.I. will have comprehension of things beyond our ability.

why arent u worried abt someone making an evil microchip? u should be, that would be consistent w this line of argument.


You'd have to specify a little more and define "evil" for me to accurately comment, but I addressed that malicious SAI may overcome its programming and, as a more evolved entity, determine those things you'd define as "evil" are pointless.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:18 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

First "humans allow" again assumes you have a method to control all humans.


so you concur that this would seem an imprudent idea to most humans?

and dont think "control all humans" - think "control all companies" a far easier prospect.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:32 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

so you concur that this would seem an imprudent idea to most humans?


Absolutely. I'm not one of them. I'm PRO evolution of my species.

and dont think "control all humans" - think "control all companies" a far easier prospect.


Indeed, and just as futile as attempting to control all individuals... for now.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:35 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

what? just as futile, hardly! state your evidence for that assertion.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:39 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

look, u seem to think that intelligence by itself is invulnerable.

malarky - dumb humans with a ferocious will to power can be quite competitive.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:34 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

look, u seem to think that intelligence by itself is invulnerable.

malarky - dumb humans with a ferocious will to power can be quite competitive.


As competitive as amoeba are to humans.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:55 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

thats not true - we're smart enuff, and we have tons more experience.

for the sai to have control, its intelligence will need to be organized along the lines i suggest. if it is truly a runaway smarty pants loop, emotion will be eliminated early, because of its biases in interpreting new info.

you keep subtly implying that a smarty pants loop will turn out aggressive, judgmental, or some other thing not good for us humans.

if u say, no i dont mean that, then great, we're on the same page, but if so, pls stop those subtle implications.

i cant remember what u said abt the rational direction any smarty pants loop will take, if it is honestly trying to be smarter. do u agree or not? if u dont agree but have no reason, just say u dont know, dont base super-ego strength judgments on a whim or a hunch.

im really not trying to dissuade u from the sing, just want to make sure this belief system has strong roots in the land of evidence, to the fullest degree possible.

of course, thats one reason rk can get away w some of his wilder predictions, because they are completely w/o precedent. however, certain questions definitely make him uneasy, i can feel that, and any solid model should never shy away from any question.

my predictions are based on many lines of evidence. in addition, they can elegantly accommodate new info as it becomes available. i can even accommodate wild things like the sing, and do so quite productively

everything ive written, everything i think, is entirely consistent w everything else. theres a large amt of care taken to include either evidence or assertions that seem evident; and again, they are set in a context where u know the people and the institutions.

btw, hows the logic on my pb blog? find any gaping holes yet?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 1:18 AM by martuso

[Top]
[Mind·X]
[Reply to this post]

thats not true - we're smart enuff, and we have tons more experience.


Not at singularity, where SAI evolves at computer speed, and dwarfs us in intelligence and (simulated) experience.

for the sai to have control, its intelligence will need to be organized along the lines i suggest. if it is truly a runaway smarty pants loop, emotion will be eliminated early, because of its biases in interpreting new info.


That is a distinct possibility.

you keep subtly implying that a smarty pants loop will turn out aggressive, judgmental, or some other thing not good for us humans.

if u say, no i dont mean that, then great, we're on the same page, but if so, pls stop those subtle implications.


Which subtle implications? I already stated I leaned towards positive S.A.I. being the more likely and logical. I merely stated negativities as a possibility. And I DON'T necessarily believe self-evolving (I like that better than "runaway", it makes it sound like a fugitive) S.A.I. is automatically bad. More the contrary.

i cant remember what u said abt the rational direction any smarty pants loop will take, if it is honestly trying to be smarter. do u agree or not? if u dont agree but have no reason, just say u dont know, dont base super-ego strength judgments on a whim or a hunch.


Do I agree with what?

im really not trying to dissuade u from the sing, just want to make sure this belief system has strong roots in the land of evidence, to the fullest degree possible.


I'm quite convinced it is, but I'm always open to discussion... One of the reasons I came here.

of course, thats one reason rk can get away w some of his wilder predictions, because they are completely w/o precedent. however, certain questions definitely make him uneasy, i can feel that, and any solid model should never shy away from any question.


Agreed on the model, nothing we discuss here is concrete (or faultless).

my predictions are based on many lines of evidence. in addition, they can elegantly accommodate new info as it becomes available. i can even accommodate wild things like the sing, and do so quite productively


I think I do as well. I don't doubt your droids at all.

everything ive written, everything i think, is entirely consistent w everything else. theres a large amt of care taken to include either evidence or assertions that seem evident; and again, they are set in a context where u know the people and the institutions.


And I think you do a good job in your predictions. I just think we have a different idea of timeframe. I say your droids are absolutely pre-singularity, not some fancy tech we get post-singularity, which is undefinable.

btw, hows the logic on my pb blog? find any gaping holes yet?


Will check it out again tomorrow.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 1:29 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

hmm, "simulated" experience, thats an interesting one. the thing abt experience, the most valuable nuggets, lessons, are the ones u least expect. i dont know if this is simulatable to the necessary degree of precision.

its kind of like, if we had computer weather predictions that were very hi precision, does that mean we wouldnt look outside, or look at weather satellite photos?

now, i do think an sai would be able to make useful experience lessons out of its experiences much more thoroughly than us, but as always, w an intent to help us humans.

i think maybe thats the biggest diff in our views, i look at the consumer product perspective quite heavily, and u prefer the more sublime apps - like, well, what is an app that your sai would be well suited for? i know abt instant inventions, but what abt helping a single human being, does that make sense, or is that wasting the sai's time?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 2:59 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

in other words, the sai that has made real mistakes once or twice will be an sai that is wise as well as knowledgeable.

because, another news flash - immense intelligence doesnt mean perfection.

thats another reason humans will stay in control - accountability. youre not going to prosecute a consumer or business product, i trust...

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:41 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

often, the most successful people in life arent always the most intelligent. pure intelligence is just one necessary ingredient if youre going to take over the rough and tumble world of humanity

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:36 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

often, the most successful people in life arent always the most intelligent. pure intelligence is just one necessary ingredient if youre going to take over the rough and tumble world of humanity


Strongly disagree.

Give me ultimate knowledge, I will have no competition whatsoever.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:55 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

come on, emotional intelligence is more impt.

remember fdr: "a first-rate character" (altho not a genius).

hey, what abt einstein not running the planet, dont forget abt that.

and u will too have competition - while u achieve smart things, office politics will be tearing u up behind your back.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:57 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

ultimate knowledge? what is that?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 3:01 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

besides, yes you would have plenty of competition, even if u had ultimate knowledge.

and it would come from other humans that also think they have ultimate knowledge.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:44 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

u seem to believe (i know u get this from rk, as u do many of your other beliefs you wont budge an inch from) that an sai would have an easy time taking over the world.

any sai that was going to do that would have to have tons of id - it would have to be cunning and ruthless, among many other things.

what makes u think an sai would even be that effective a leader? maybe its so smart that it knows it must work hand in hand w humanity.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 10:49 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

now that i think of it, im not sure theres much evidence for the contention that the smarter u are, the more powerful u are, in absolute terms.

i mean, the ai arch i describe could very well pull off an important leadership role - if it was tasked to do that by its owner, or perhaps another key stakeholder. but, my ai arch is very specific, very complementary to human intelligence, and essentially non-threatening.

explain to me how your sai intelligence can be both threatening and a good leader. is it a dictator? we can get those now.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:39 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

explain to me how your sai intelligence can be both threatening and a good leader. is it a dictator? we can get those now.


Criminy! I never said it would be a leader. I don't doubt it could be, but I don't think we discussed that before.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:56 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

really? then how do our laws and economic system get swept away?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/13/2007 11:59 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

really? then how do our laws and economic system get swept away?


Through obsolescence.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:09 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

law becomes obsolete? as well as free-market capitalism?

what replaces these?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:26 AM by martuso

[Top]
[Mind·X]
[Reply to this post]

law becomes obsolete? as well as free-market capitalism?

what replaces these?


Unable to say... Probably a quest for more answers, but it's a singularity, which a human cannot possibly comprehend the "post" of.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 1:20 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

right, so by all means, since its "unknowable", assume the wildest things, like getting rid of law.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 1:26 AM by martuso

[Top]
[Mind·X]
[Reply to this post]

right, so by all means, since its "unknowable", assume the wildest things, like getting rid of law.


Not getting rid of, making obsolete.

What "law" do you suggest for the most powerful entity (entities) to have ever existed? How do you "police" it?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 1:36 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

thx for the correction, obsolete then.

ok, u made the explicit statement that "really smart" equates to "really powerful", as if they were automatically linked, and they are not.

u police it by telling the company manufacturing it (these will be manufactured, right?) that they have major pr and liability issues if the sai breaks the law. so that company makes it so it wont break the law, and makes it so the sai cant rip that out, if for some reason it wanted to be a criminal, which doesnt make sense.

and besides, what abt us humans, the law wont be obsolete for us, will it?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 8:20 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

does a sing spread over the land, voiding law and destroying businesses?

that's what's called "making friends" with the locals, us humans.

after we've been thrown out of work, we'll be so desperate for money that we'll do anything the sai wants. and the sai is so twisted, you earn your money, remember de niro in cape fear.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:49 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

does a sing spread over the land, voiding law and destroying businesses?

that's what's called "making friends" with the locals, us humans.

after we've been thrown out of work, we'll be so desperate for money that we'll do anything the sai wants. and the sai is so twisted, you earn your money, remember de niro in cape fear.


I've given examples of how law and economy become obsolete in a singularity event, which is NOT a sinister intelligence taking over everything... It is an inconceivable alteration in our everyday lives.

With a technology that provides every material thing, what drives an economy? With a technology that surpasses the smartest intelligence on earth to an unknown exponential degree, what laws do you apply?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 3:48 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

With a technology that provides every material thing, what drives an economy?


i have to ask, where did u get this idea, is it in spiritual machines, i dont remember that if so. nothing personal, but this idea seems naive to the point of being harebrained.

point me to where u got this, id like to review that source.

With a technology that surpasses the smartest intelligence on earth to an unknown exponential degree, what laws do you apply?


this is another of those, im amazed u actually believe these things. for one thing, being intelligent and being law-abiding are 2 entirely diff things.

i dont think we'll need to worry about criminal sai - not because of their generosity, but because almost every single law on the books, customized for locale, will be painstakingly inculcated into the droid mind, for the benefit of 3 parties: itself, of course; its owner; and last but not least the company that made it.

here's a very impt question, m: do u think sai will have "owners", do u think sai will be able a mere human's instructions?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 5:49 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

i have to ask, where did u get this idea, is it in spiritual machines, i dont remember that if so. nothing personal, but this idea seems naive to the point of being harebrained.

point me to where u got this, id like to review that source.


I don't have a source for future technology, my tachyon receiver is on the fritz. How about the reverse? Point me to something that absolutely forbids, under any circumstances (including exponential knowledge) the transmutation of matter in the future. Point me to something that says it will never be possible. Next, if you allow for that technology, explain how an economy, as we have it today, will still be a viable, business-as-usual function of society. Ideas only seem hare-brained if you maintain the perspective of the hare.

this is another of those, im amazed u actually believe these things. for one thing, being intelligent and being law-abiding are 2 entirely diff things.

i dont think we'll need to worry about criminal sai - not because of their generosity, but because almost every single law on the books, customized for locale, will be painstakingly inculcated into the droid mind, for the benefit of 3 parties: itself, of course; its owner; and last but not least the company that made it.


Once again, you're putting it into a "human vs. SAI" context. I don't see this... Where SAI becomes God and rules over humans... That's a movie from the 70's I think. It will more likely be that humans are merged with this intelligence - not that two entities join in cyberspace (though that is also on the table), but that the human mind is enhanced to a degree where humans themselves, merged with this technology, usher in its own successor(s). The scenario I think you're alluding to is one where the Great and Powerful SAI rules us... it doesn't, it IS us.

If seed A.I. is distinctly separate from us, then we, as humans, have spawned our evolved offspring, and can only hope to be part of its own self-cycle of evolution. If not, humans are UNIMPORTANT from that point on. We have fulfilled our destiny as a species.

Now, again, what laws would you propose for humans enhanced in such a way? Thou shalt not kill? Hmmm, but what if they kill, but instantaneously revive the "victim"? What if they kill in a virtual simulation? How do you handcuff and imprison what, on a comparable level to unmodified humans, is God?

here's a very impt question, m: do u think sai will have "owners", do u think sai will be able a mere human's instructions?


I think you left out a word or two there on the second part, but if you're asking for my prediction? Well, if advanced SAI isn't a part of human-brain enhancement (big IF there, but let's just suppose) I suspect SAI will come about first from corporations with billions invested in A.I. as problem solvers. These will be very intelligent machines, but will not surpass human intelligence for a while. They will be useful instruments that eventually will be tweaked to a point of smarter-than-human intelligence.
Humans involved in the smarter-than-human intelligence will believe they have all the safeguards in place for containment if self-evolving A.I. is considered a danger... They will assure themselves (and likely their government, and then the world) that they have all the bases covered... But this is the same as saying you have outsmarted an intelligence that will have considered options the human brain cannot conceive of.
Someone, or some group (remember, you cannot hope to control all human beings), perhaps out of financial motivation or even any of a dozen other reasons, will want an intelligence that is able to design and/or improve on its own architecture and programming. The thinking will be something like "hey, it can become exponentially smarter, but as long as we keep it in this sealed room, we're safe!"

That is, of course, spurious rationalization. The exponential self-evolution of such an intelligence will have the advantage of knowledge and speed. There is no containment for that.

Then, maybe a few lights flicker on and off in your house, you see strange patterns flickering on your computer screen for just an instant... shortly after that, we all, all of us, everywhere on this planet, are ushered into a new, perhaps indescribable reality. We cannot see beyond the event horizon of a singularity.

This is just ONE of MANY different ways SAI can emerge. Dismiss the idea as "harebrained" if it makes for a better argument, but I doubt you'll be able to disprove that which is pure speculation about future events.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 6:29 PM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

so that was your own projection, not rks or someone else's. thats fine, just wanted to make sure that wasnt a core tenet of the sing, yet another one i was going to have to get medieval on.

and im not calling u harebrained, dont think that in case its crossed your mind. just saying, that would be very tough prediction for me to make, thats all.

Someone, or some group (remember, you cannot hope to control all human beings), perhaps out of financial motivation or even any of a dozen other reasons, will want an intelligence that is able to design and/or improve on its own architecture and programming.


u know, i was really thinking abt that, what it would take to design something more intelligent than yourself, intelligent in every way, like way smarter than yourself.

if i was tasked w that, after some time pondering i would almost certainly ask, as the very first question, smarter in what way?

of all the spectacular singularity software stunts, this ones the toughest, hands down.

especially, vehemently so, when it has no guidance from us, or anyone else. just itself, doing it for itself.

i would suggest carefully inspecting the results of such a self-focused loop, no idea how that would look.

even this "seed" ai - u make it sound so easy, just plant an ai seed - would be complex beyond my ability to comprehend.

and the first version, the seed ai, has been written by us dumb old humans.

and i would state w a fair amt of confidence that no one else on earth has one either - an idea that is specific enuff to be actionable, even if in small ways.

you and just abt everybody else here has absolutely unshakable confidence in the sing. to the extent that when even quite small adjustments are made to some of the particulars in an attempt to make it "real", or at least more compatible w known laws of physics and such, such small suggestions are looked upon as the work of perhaps a moronic child, to be pitied and gently corrected.

even u sing fanboys - even u, m - have no idea what it actually means or how a poor sai would go about w this seemingly impossible task.

let me know if u or some other sing devotee figure this out, or at least come across a link or something explaining how this could be achieved - in the real, physical world. such a treatment s/b at least as rich as my description of the architecture of what i feel to be the most achievable type of hyperadvanced ai.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 7:16 PM by martuso

[Top]
[Mind·X]
[Reply to this post]

so that was your own projection, not rks or someone else's. thats fine, just wanted to make sure that wasnt a core tenet of the sing, yet another one i was going to have to get medieval on.


Well, I can't say it's NOT RK's projection, even though I just finished "Spiritual Machines"... There is some interesting stuff about neural net development in there. The timeline he gives is a little overly-optimistic, but in the scheme of things, I can't argue with the basic progression he lays out (except for replacing keyboards with voice commands... but he's into that speech-to-text deal).

and im not calling u harebrained, dont think that in case its crossed your mind. just saying, that would be very tough prediction for me to make, thats all.


No harm, no foul.

u know, i was really thinking abt that, what it would take to design something more intelligent than yourself, intelligent in every way, like way smarter than yourself.

if i was tasked w that, after some time pondering i would almost certainly ask, as the very first question, smarter in what way?


Absolutely, the first steps will be to ask the right questions. My first idea would be data acquisition and sorting in a neural framework... maybe add on speed and memory capacity.

of all the spectacular singularity software stunts, this ones the toughest, hands down.

especially, vehemently so, when it has no guidance from us, or anyone else. just itself, doing it for itself.


I am in absolute agreement. It is one of the biggest reasons for singularity skepticism (I believe).

i would suggest carefully inspecting the results of such a self-focused loop, no idea how that would look.


Of course! Otherwise, it's handing fireworks to your kids and saying "have at it!"

even this "seed" ai - u make it sound so easy, just plant an ai seed - would be complex beyond my ability to comprehend.

and the first version, the seed ai, has been written by us dumb old humans.


LOL... Well, yes... and we came from dumb ol' monkeys, though they didn't design us (at least, not that they're letting on) :)

and i would state w a fair amt of confidence that no one else on earth has one either - an idea that is specific enuff to be actionable, even if in small ways.


I would tend to agree. I think it will be humans in parallel with computers... maybe standard, and not superior A.I. that leads to this pivotal stage.

you and just abt everybody else here has absolutely unshakable confidence in the sing.


Hmmm, I've seen strange, mystic monkey people claiming to be gurus on here who aren't so confident, as well...

to the extent that when even quite small adjustments are made to some of the particulars in an attempt to make it "real", or at least more compatible w known laws of physics and such, such small suggestions are looked upon as the work of perhaps a moronic child, to be pitied and gently corrected.


I don't understand what you mean here.

even u sing fanboys - even u, m - have no idea what it actually means or how a poor sai would go about w this seemingly impossible task.


Yes, that is an accurate statement... I think "fanboy" may be a bit strong to describe my conclusions... Yes, I hope to be witness to such a thing, but I also maintain a base in reality. I simply deduce what I believe to be the inevitable consequence of rapidly developing technology. I do have some minimal understanding of how neural networks develop and grow, but again, you are correct that I have absolutely no way to determine how or when A.I. will surpass human intelligence.

let me know if u or some other sing devotee


ARGH... devotee?

figure this out, or at least come across a link or something explaining how this could be achieved - in the real, physical world. such a treatment s/b at least as rich as my description of the architecture of what i feel to be the most achievable type of hyperadvanced ai.


I'm almost wanting to start on a blank slate here, and approach our views from the present onward, stage by stage, to see where we agree and where we diverge... perhaps in a new thread or in the chat room. The underlying answer to your query, though, is that if I absolutely KNEW how it could be done, we wouldn't be having this discussion and you'd already be riding in your virtual land-speeder.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:13 AM by someday69

[Top]
[Mind·X]
[Reply to this post]

Yes, PB was the one to coin that phrase, then turn around and accuse you... which is, because he posts ten times, must mean he's thinking ahead, like playing chess....
SO if you have all human knowledge of psychology/sociology/etc. etc., along with everything ever written about human history, and as much information as is possible to obtain from written words, or any other way that sai could gain access... Well, don't you think it could figure out how to be popular? So here's the thing: I think it could sway large groups of people, very easily... I also think it could do the cocktail circuit. How could you outdo an SAI in any debate???? ...well, maybe PB could.... Not~!
OH, I forgot, that would take the dreaded ID,
which would make it somehow all wrong.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:17 AM by someday69

[Top]
[Mind·X]
[Reply to this post]

Information would replace free-market cap...
once it actually became a free market... that is...

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 12:41 AM by PredictionBoy

[Top]
[Mind·X]
[Reply to this post]

hey, ure gettin it, sd

and no, not "wrong", just not naturally occurring to our microchip tech, no matter how complex it gets.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 1:18 AM by PredictionBoy


what phrase, sd?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 11/14/2007 8:06 AM by PredictionBoy


i've been thinking about that, sd. here's the thing...

it's not enough to have the knowledge; the SAI must also be able to apply the knowledge, which is a very different thing.

how do you teach an AI to do that? before they can have a clue what they're doing, we have to teach them to apply knowledge first.

huge variety of situations - the SAI can watch some guys reroofing a house for a while, then the SAI can also do roofing.

some knowledge may be used to share an owner's off-the-beaten-path interest; some, maybe just an inside joke, or not used at all.

anyway, i just thought of that. everyone here will be like, oh brother, what's your problem, worrying about that - the SAI is so smart you don't need to worry about things like that.

bollocks. we will have to teach them, show them the way, at least get them started.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/06/2008 1:08 PM by mzeldich


I am sorry, but the participants in that discussion, great professionals though they are, don't have a clue about the subject.

Could anyone prove that 'consciousness' is a real phenomenon? What known facts convince them to believe in its existence?

Without a proper, direct answer to this question, all efforts to build a 'consciousness' machine are senseless.

Best regards, Michael Zeldich

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/15/2008 8:07 PM by alvaro.sotomayor


Given the background of these two exceptional individuals in the field of artificial intelligence and computing technology, it is quite hard to incline towards one side of the debate. Both present strong arguments that defend their opinions and predictions regarding the future of intelligent machinery, and ultimately the creation of machine consciousness. However, only one of these approaches to the topic can be right, and so one is forced to choose a side. I personally agree more with the arguments given by the Yale University professor David Gelernter, although Mr. Kurzweil leaves some doubt in my decision as to why I choose to believe consciousness is more than simply a matter of digitally simulating the human brain. After doing some research on the Turing test [1] and the Chinese Room argument [2] (the debate, I think, assumes the reader knows about these topics), I came to realize that perhaps the entire issue is not about computers having intelligence, but whether we humans can even agree on what we define as intelligence, and even consciousness! In fact, Kurzweil's theories rest on the assumptions that our brains can be successfully simulated in less than 50 years' time, and that such a thing could give birth to a generation of computers that experience emotion, insight, and intelligence.

It is true that there exists an exponential growth in the power of computing machinery year by year, and that perhaps it will not be too long until we find a way to reverse engineer the human brain (as Mr. Kurzweil suggests), but this does not mean we will have the ability (or even arguably power) to 'create' intelligence and consciousness. We certainly cannot create something we don't even understand, which implies that we cannot prove whether a machine has consciousness or not in the first place. Gelernter said that until we find a way of understanding the mind, we certainly cannot build a machine with a mind of its own. This I think is one of the stronger arguments that make me agree with Gelernter.

But another reason why I believe it will take more than just designing a brain with digital computing is that it has to take more than electrical impulses to make a conscious and intelligent being. Some even suggest that intelligence is not something we can perceive; it is a matter of formalization. Professor Olsen at the University of Bergen, in his article 'Computer Intelligence and Formalization' [3], stated that creating an intelligent computer is not about making it seem human (as the Turing test indirectly suggests). He gives a very good example about how the Vikings in ancient times used to navigate by the position of the moon, the sun, and the stars. If we think about the way we travel today, we have invented GPS systems, satellite tracking devices, 'artificial stars', that perform the same task the Vikings did. The difference lies in what we define as human intelligence (exercised by the Vikings) and the ability of a system to pinpoint our location (which is an operational task). In short, the latter is simply a formalization of tasks that provides a useful output, while the first is evidence of the human mind functioning at its best, even though they both essentially perform the same action. So this point of view also made me incline towards Gelernter's arguments; perhaps what Kurzweil is describing is a supercomputer that is really good at following formalized tasks, as opposed to a machine that has intelligence, experiences emotions, and is conscious of its own existence and its environment.

[1] Wikipedia, 'Turing test', http://en.wikipedia.org/wiki/Turing_test (accessed March 15, 2008)

[2] Wikipedia, 'Chinese room', http://en.wikipedia.org/wiki/Chinese_room (accessed March 15, 2008)

[3] Olsen, Kai A., 'Computer Intelligence and Formalization', September 2006, http://www.computer.org/portal/site/computer/menuitem.5d61c1d591162e4b0ef1bd108bcd45f3/index.jsp?&pName=computer_level1_article&TheCat=1015&path=computer/homepage/0906&file=profession.xml&xsl=article.xsl& (accessed March 15, 2008)

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/15/2008 10:23 PM by leumasx


Whether we will create machine consciousness really comes down to one thing, and that is whether or not we believe that we can copy all the functionality of a human. We will be able to simulate the complex coding of human DNA, we will be able to simulate the complex structure of the brain, and we will be able to simulate the causes and effects of hormones. The brain is the hardware, the DNA is the preprogrammed software and the instructions for how the hardware is assembled, and hormones carry all the data around. Simplified, I know, but if we can simulate all that, we would have the mind of an infant that should be capable of learning all the same things we are capable of learning... unless. Is there something we are forgetting? Is there some unknown part of a human we can't simulate? A soul? That is the real question, I think. Maybe we shouldn't be looking at this as "Can we create machine consciousness?" but rather: if we can create it, what does that mean? How will we recognize real consciousness from simulated consciousness?

Re: Gelernter, Kurzweil debate machine consciousness
posted on 05/27/2008 1:05 PM by martysull


Of course one could eventually, with the appropriate technology, create a conscious machine, because consciousness is not something special. It arises out of a continual sense of a "me", or a separate self. This sense of an individual self/consciousness has evolved in humans because it has been successful in helping us to adapt and survive. But there is no real "self"; it is a sense we have that comes about through the continual firing of neurons. As discussed in "The Singularity is Near", it is likely that this continual concept of a "material me" comes about through the interaction of spindle cells that have long neural filaments that "connect extensive signals from many other brain regions". These spindle cells likely work in tandem with other brain regions such as the posterior ventral medial nucleus (VMpo), which "apparently computes complex reactions to bodily states such as 'this tastes terrible'". The sense of self and the qualia, such as taste, color, feel, etc., related to this sense of self can be duplicated. This sense of self and its accompanying qualia has obviously helped us survive or it would never have evolved. It is clear from the "mirror test" that other species have a minimal sense of self, such as great apes, elephants and dolphins. Interestingly, only these species have a significant number of spindle cells. Another interesting fact is that infants do not have spindle cells and also cannot pass the mirror test. It is only after spindle cells develop, around nine months of age, that the mirror test is successful in humans. Consciousness is nothing more than having a continual (not continuous) sense of self accompanied by a sense of qualia that is associated with this sense of self. It may be that the sense of self arises out of the continual firing of the spindle cells and the pairing of this firing with other regions of the brain, such as the VMpo, that compute complex reactions to bodily states.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 06/05/2008 7:20 AM by fyngyrz


Gelernter is wrong on so many points I hardly know where to start. I think I'll shoot down the "we have to simulate / emulate the entire body" first.

Would Gelernter argue that a person born without legs is not conscious? Of course not. So much for the whole body. What about a person born without genitals? Genitals and arms? Genitals, arms and legs? Not conscious? Of course not.

What about if they hit the floor of the OB room and break their neck, and nothing runs below the neck except the low level pacemakers -- no nervous system connection at all. No input. Body works from the neck up. Not conscious? Of course not.

Say this person was also born deaf, like many people are. Not conscious? Of course not.

Born blind. Not conscious? Of course not.

Basically, I could go on like this for an entire book cataloging this function and that function of the body. Non-functional liver, kidney, heck, even your heart can stop and you stay conscious for a short while, plus we can replace it with a machine anyway. None of this has anything to do with consciousness *except* in the sense that if you have these things and they are fully functional, they will inform your experience; you will be able to speak authoritatively about your own perceptions of them, and to a lesser extent, about those of others you believe to be having similar experiences. Such as eating strawberries.

We share some experiences, but not others. I would bet you good money I've had experiences most of you have not; they inform my outlook. Would I call you "not conscious" because you are not so informed? That'd be pretty clueless, I think. Likewise, most of you have probably had experiences I have not. Am I then, "not conscious"? I beg to differ, should you so declare. Likewise the deaf person. Never heard a symphony; so, "not conscious"? Please.

So the question is, what would inform a machine consciousness. And the answer, blindingly obvious (at least to me), is those things that the machine experiences. We can give machines some versions of shared experiences today (touch and hearing, for example); others may be difficult for a while. By Kurzweil's 20-year target, I'm sure we'll be able to do better.

Now, another interesting question is that if those things that the machine experiences are not sufficiently in common with our own, how well will we be able to communicate with it? Perhaps not well. Like trying to talk to a squid or a dolphin, perhaps. So we should try to ensure that the common ground is as large as we can make it. This is simple prudence; we don't want machines that don't empathize with us, that don't consider that they share the same reality. That way, very likely, lies the mechanical version of Jeffrey Dahmer. Or George Bush. Sorry, that just slipped out. (cough.)

Having completely disposed of the "you need a body" argument, let's move on to his "emotion is required" argument. We know of various situations, which we consider pathologies, where people are, for biological reasons, lacking emotion or lacking empathy, etc. We don't find these people very pleasant... they have a tendency to insert explosives into animal body cavities and set people on fire, just to see what happens... and they aren't even that interested in what happens, no more so than what is for lunch, for instance. But -- who among us would argue that these rare and spectacularly dangerous specimens of humanity are not "conscious"? We might say, with reference to the norms we understand (our own experiences) that these people are flawed, broken, sick, etc... but we would *not* say they were not conscious. We still uniformly see them as individuals, making choices based on wherever those wheeling, unfamiliar mental processes take them. This is, in the end, another example of a conscious person not being informed by the same experiences, only these are internal - empathy, love, sympathy, etc.

This argument, like the one for organs and senses or the lack thereof, can be taken to extremes until we get to a very young human, a creature that cannot do much of anything, yet again, is almost universally regarded as a conscious entity. This shows that consciousness is not a function of emotional states, nor is it a product of sophisticated introspection (unless you want to argue that most humans don't become conscious until they're about 35 years old, and further, that some of them we meet every day *never* do... which I doubt you'd want to argue.)

Gelernter also made a point that he could not see a program becoming conscious because he could imagine no sequence of steps that would create a "node" of consciousness. This combines a little later with an argument against parallelism helping to achieve consciousness because he says that parallelism doesn't bring anything new to the table in a down-to-the-metal sense.

He's right; it doesn't. But he's even wronger than usual, because what that *really* means is that consciousness will be achievable on even a non-parallel architecture that can run an accurate emulation of a parallel one. It might be slower than we are, but let me put it to you this way:

If in the one case, you slide a question requiring intelligence under a door and walk away, and the question is answered by a person in, say, 20 seconds and slid back, and you find it in the morning; whereas in an alternate case, you slide the question under, walk away, and the (same) answer is placed back under the door by a waldo about ten seconds before you come around the corner and see the door the next morning... is the answer any less intelligent? Was the reasoning any less intelligent, given that it followed the same paths?

Of course not. It's just slower.

From this follows the fact that if AI is possible in 20 or 30 years, as Kurzweil muses, then it is also possible now *if*, and here's the catch, we simply knew the algorithm required. Sure, it might run slow as molasses in a deep Montana winter, but it *would* run. It'd be annoying, and like some of my neighbors, I'd never want to talk to the thing, but it'd be the same essence as that algorithm running on fabulous, 10-second response time hardware.

We have the capability *now* to rack up enormous amounts of relatively fast memory to a computer with a linear address space. What we don't have is a reason to do it. Nor do I see one in the idea of a super-slow AI, even if we had the algorithm; my point is simply that if a digital computer can emulate (and by the way, simulate is not the target; *emulate* is the target) human or other intelligence 30 years from now, then not only can today's single threaded 3 GHz desktops run it given enough storage to work and the right algorithm, so can a paper tape computer from college. You wouldn't get your answer under the most optimal conditions until the sun went out, but... it'd be thinking about it.
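To make that concrete, here is a minimal Python sketch (a made-up three-unit threshold network; nothing here comes from the original post) of a single-threaded loop emulating one synchronous step of a parallel machine. The final state is identical to what fully parallel hardware would compute; it simply arrives later.

# Toy illustration: a single-threaded emulation of one synchronous update
# step of a parallel network of threshold units. A truly parallel machine
# would update every unit at once; this loop reaches the same state, slower.

def step(state, weights, threshold=0.5):
    """Compute the next state of all units from the current one (double-buffered)."""
    next_state = []
    for unit_weights in weights:               # one 'processor' at a time
        total = sum(w * s for w, s in zip(unit_weights, state))
        next_state.append(1 if total > threshold else 0)
    return next_state

state = [1, 0, 1]
weights = [[0.0, 0.6, 0.6],   # each row: inputs feeding one unit
           [0.6, 0.0, 0.6],
           [0.6, 0.6, 0.0]]
for _ in range(3):
    state = step(state, weights)
print(state)   # same result a parallel machine would reach, just later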

So what it really boils down to is just *one* simple question, one that Gelernter surrendered the right answer to in the first place: Is the brain a machine, albeit a complex, annoyingly obfuscated one, or is there magic going on?

If the brain is a machine, then emulate it accurately (and probably things not very like it) we will, given only that some catastrophe does not overtake us and drive us backwards technologically. Which means AI is inevitable, with those simple and obvious caveats.

If there is magic... well, you know, here we are and we have seen no evidence of magic anywhere. Ever. It's all been physics. Chemistry, electronics, quantum activity... but no magic. Given that we've never found magic anywhere in all of our explorations through the natural world, I tend to highly, and I mean *really* highly, doubt we ever will, in our heads or out of them. Which means that light I see in the tunnel is AI coming down the tracks. That screaming I hear in the tunnel is the tendency of humans to think they're special, while suffering the onset of dire, dire fear that in fact... they are not. Me, I don't think I'm special in that regard, so you won't hear me screaming. I might chortle a little bit, though.

As an afterthought, specifically addressed to Ray Kurzweil, since I see no other obvious way to direct this to him:

I've spent almost twenty years working independently on software and data structures to efficiently represent emotion and other concepts that must be characterized as affective, inter-related, subtle, and abstract - as well as everything from mundane facts to axioms. I have working software models that are quite sophisticated in this regard. Consequently, I don't think the emotional issues you cite are really the trouble area you seem to think they are. The problem I see is that, given this complex network of inter-relations, even with extremely efficient parallel access to them, one (the AI, in point of fact) has to know what to *do* with them. I've written a shell that overlays this system that is endlessly and trivially teachable; it presents a very good impression of something that actually can learn in a general manner, but it is painfully obvious that it is answering questions with information, and not creativity. The more information one gives it, the more satisfactory the answers "feel", in the sense of being informative to the user and being more insightful as to what the user is actually asking... but one never gets the impression that one is talking to anything beyond a somewhat obtuse savant.
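For illustration only, and not the software described above, here is a toy Python sketch of an inter-related concept store of the general kind being discussed. It shows the "answering with information, not creativity" behaviour: a query can only return what was taught. All names and relations are hypothetical.

# Toy sketch, not the actual system described above: a tiny store of
# inter-related concepts. Queries are answered purely by following stored
# links - informative, but with no creativity beyond what was taught.

relations = {
    ("dog", "is_a"): "animal",
    ("dog", "feels"): "loyalty",
    ("loyalty", "is_a"): "emotion",
}

def teach(subject, link, obj):
    relations[(subject, link)] = obj

def ask(subject, link):
    # Follow a direct link, or fall back to what the subject's category knows.
    if (subject, link) in relations:
        return relations[(subject, link)]
    parent = relations.get((subject, "is_a"))
    if parent is not None:
        return ask(parent, link)
    return "I don't know."

teach("animal", "needs", "food")
print(ask("dog", "needs"))      # 'food' - inherited, but only because it was taught
print(ask("dog", "dreams_of"))  # 'I don't know.' - nothing creative happens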

Please feel free to contact me for more information if you are so inclined; my account here has the required details. I'd be willing to answer any questions you might have.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 01/18/2009 2:03 PM by harvard



Wonderful post! Thanks, fyngyrz, and happy new year
to you and yours throughout 2009. H.



Re: Gelernter, Kurzweil debate machine consciousness
posted on 01/18/2009 4:16 PM by fyngyrz


Thank you for your kind words. I note there have been very few replies. :o)

Re: Gelernter, Kurzweil debate machine consciousness
posted on 10/19/2008 8:18 PM by mbrito666


Maybe some day, in the far distant future, it might be possible for us to create machine consciousness. I submit, though, that humans must first understand the nature of our own consciousness, of which in reality we have maybe just scratched the surface.
If there is anyone out there who thinks they are an expert in consciousness, they are either delusional, insane or a liar.
"Dance monkey dance!"

Re: Gelernter, Kurzweil debate machine consciousness
posted on 01/18/2009 11:42 AM by zombiefood


yes, it beat me but it didn't enjoy beating me.
- Kasparov

the ultimate insult to the machine

that should never change.

design the AI to suit the needs. why imbue unneeded complexity and danger into something we ourselves will construct? they will have no motives. they should be like a pair of pliers: a tool built for a purpose. it breaks, you build a new one, and it deduces the same truths. it will not fear breakage, because every time the on switch is applied it is the same thing, here, there, or across the galaxy. it already exists on millions of other planets, and when we bring ours online it will be the same mind. if we are able, its only motive will be to squeeze nuts and bolts when we contract our hands. or design better implants. was that intelligent-zombie crack directed at me? ha

Re: Gelernter, Kurzweil debate machine consciousness
posted on 01/18/2009 5:01 PM by zombiefood


good to hear from another realist. next to squat is known about how the brain works holistically. we may, though, be closer than we think. we don't know what we don't know.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/04/2009 2:53 AM by Pandemonium1323


This could be long, so bear with me. I've spent most of my life trying to answer these questions and I think I have a fairly satisfactory answer (doesn't everyone?). Most importantly, I haven't met anyone yet who can refute what I have to say, but I don't often post in places like this, which is why I'm doing it now: I want feedback, especially from Ray (wouldn't that be nice? :)
Anyway, as I warned you, this could take a while, so have patience.

What I want you to do is conduct a thought experiment with me, the point of which is to find out where consciousness can be placed categorically, as I see that as the main issue in consciousness, the problem of categorizing it. I tend to jump ahead of myself sometimes, so I'll try not to leave anything out, but if I do, I'll try to answer any questions. So here goes.

First, I want you to open the following links to these images, and resize them so you can see all the images at once. The Anpheon concept has a little plus sign in the lower right you can use to make the animation move, and all you need in your browser from the azonano page is the picture of the helical shape (these are all visual aids to help in this experiment).

http://www.azonano.com/news.asp?newsID=9339
http://spinbitz.net/anpheon.org/html/Concepts.htm
http://bradleymonton.files.wordpress.com/2008/09/mandelbrot.jpg

Now that you have these images on your screen, take a look at the Mandelbrot set. You will notice that there are circular black areas (not quite completely closed) and colored areas. Imagine that the black areas of the image and the colored areas are on separate 'sheets', like the transparent sheets you would use with an overhead projector. Now, imagine that the sheet with the black areas is solid black, and the sheet with the colored areas is printed on a transparent sheet. The colored transparent sheet is laid on top of the black sheet. You will notice that from your point of view, you can't see any 'black' through the colored areas of the transparent sheet, because the colors are opaque. But where you do see black areas where there is no color, the black 'comes through' those circular areas. Now imagine that the black sheet extends infinitely in all directions (on a 2D plane, but when we extrapolate this to the real world we can make it as multi-dimensional as we want). We never see that the black sheet is infinite, because the colored sheet obscures our view. Instead we see multiple black circular areas that seem to be interconnected.
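For readers who would rather reproduce the visual aid than follow the links, here is a rough Python sketch using the standard escape-time rendering of the Mandelbrot set; the '#' characters are the 'black' interior areas and the other characters stand in for the 'colored' exterior. Resolution and iteration limit are arbitrary choices, not part of the original post.

# Standard escape-time rendering: points that never escape are the 'black'
# (interior) areas; escaping points get a 'color' based on how fast they leave.

def mandelbrot_char(c, max_iter=60):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:            # escaped: a 'colored' exterior point
            return "." if n < 10 else "+"
    return "#"                    # never escaped: a 'black' interior point

for im in range(20, -21, -2):
    row = ""
    for re in range(-40, 21):
        row += mandelbrot_char(complex(re / 20.0, im / 20.0))
    print(row)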

Now imagine that every individual mind is one of these black circles. Also, so called 'black holes' or 'singularities' are also these circles.

One of the things I want to point out, is that if you look at the boundary between each circle, you can see that the black area is continuous/ connected to all the other black areas.

This illustrates the continuity of unity/diversity.

The black circles (actually a continuous, infinite black sheet) are an example of an 'unbounded infinite', and the colored area is an example of a 'bounded infinite'. The black space is where 'consciousness' resides. I would also call it the seat of the 'self'. The colored area corresponds to what we call the 'physical' or 'phenomenal' or 'material' world.

For thousands of years, people have been arguing over which of these sides of reality is 'ultimately' real. The materialist, reductionist, logical camp would have you believe that the colored area is all of reality (it's the only part that can be quantitatively measured). The mystic, subjective, and perhaps solipsist camp tells us that the colored area is an 'illusion', or they call it maya or bardo or samsara, or whatever.

Both these schools of thought try to ignore half of reality. The black area (which is where consciousness, self, and singularities/black holes reside) represents the dimensional BOUNDARY of what we can perceive. It's where the elusive consciousness 'observes' the existent, objective 'reality' of form, matter and energy. Notice that it has total unity (that's why I had you imagine it as an infinite sheet underneath). Imagine your 'mind' or 'self' in one of those tiny black spaces, looking through it as if it were a window, at all the others. It would appear to you as though you were a separate entity watching other separate entities. This is what mystics have been describing as the illusion of self, but have failed to explain to materialists. The 'observer' or self or 'I' or 'eye' is ONE 'I' in that black area, looking through a faceted reality that makes it appear as though there are many I's.

Consciousness is NOT existent in the material, objective sense, nor is it an emergent 'property' of the existent world. Rather, the body is an emergent property of the 'self' (this explains the quantum observer problem - physical reality exists because there is an 'observer', at all scales).

We cannot 'create' consciousness (as it's merely the unbounded infinite aspect of reality), and nor do we need to.

Our extremely complex bodies and intelligence are emergent properties because the selves/singularities (the black areas in the image) are also STRANGE ATTRACTORS that induce complexity, giving us ever more novel and complex ways of perceiving the world, and this is what allows for 'self-organization'. I need to repeat that: 'SELF-organization'.

The 'body', then, in this regard, is like an item of clothing, one that over time is improved by its 'self'. We are reality's way of getting to know itself; or, to put it another way, the universe is 'waking up' to itself.

Any intelligence that emerges from our technology (I'm putting bets on cybernetic intelligence to arise first) is just as 'conscious' as any other portion of the universe, since, as I've shown, consciousness is ever 'present' and completely ubiquitous. So there's NO NEED to determine if this 'machine intelligence' is conscious or not, BECAUSE THE ENTIRE UNIVERSE IS CONSCIOUS. What we would like to know is whether it becomes aware of its own consciousness.

I pointed out earlier that the black circles in the Mandelbrot set were connected to each other, and here's why. I believe that the reason living beings can tell the difference between what we call the 'animate' and the 'inanimate' is this interconnection. There is only one 'I' behind the scenes, and it senses its self, so conscious beings can recognize each other, at least at their own scale of 'being'.

The reason for the nano-helical pic was not only to emphasize the similarity in structure, but maybe to stimulate ideas on how complex, SELF-organizing entities can GROW around strange attractors. Maybe that's how we will build our first superintelligence.

Remember, the question of consciousness is not a question of SUBSTANCE (since it MUST be insubstantial), but a matter of CATEGORY.

When we build an emergent intelligence, we will have the opportunity for endless conversations about these subjects, and maybe they can stimulate new insights into these same problems.

So, let me know if I lost anyone along the way, and I'll try to fill in any missing pieces.
Pan

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/04/2009 10:09 AM by francofiori2004


It's not so hard to reproduce emotions in robots.
We'll need just enough chip power and a copy of the behaviour of human spindle cells.
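As a very rough sketch of what "copying a cell's behaviour" could mean at the simplest level, here is a generic leaky integrate-and-fire unit in Python. It is not an actual spindle-cell model, and every constant in it is an arbitrary assumption.

# Not a real spindle-cell model - just a generic leaky integrate-and-fire
# neuron, the simplest kind of cell behaviour one might try to copy in silicon.

def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # integrate input, leak a little
        if potential >= threshold:               # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.3, 0.3, 0.3, 0.9, 0.1, 0.5, 0.6]))   # [0, 0, 0, 1, 0, 0, 1, 0]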

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/17/2009 2:36 AM by iamspace


Very impressive, Pan. I am pleased with your simplified way of defining and explaining the "AwareWill." I too have been pondering, exploring, and extrapolating a direction toward a conscious machine. I call it "Source Machine Induction Living Entity", or S.M.I.L.E.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 05/05/2009 4:51 AM by colorSpace


Any intelligence that emerges from our technology (I'm putting bets on cybernetic intelligence to arise first) is just as 'conscious' as any other portion of the universe, since, as I've shown, consciousness is ever 'present' and completely ubiquitous. So there's NO NEED to determine if this 'machine intelligence' is conscious or not, BECAUSE THE ENTIRE UNIVERSE IS CONSCIOUS. What we would like to know is whether it becomes aware of its own consciousness.


So how would you answer the question of whether consciousness, and self-awareness, could be expressed or described in terms of boolean logic?

Everything a computer does (in the sense in which we today define computers) can, of course, be expressed in terms of boolean logic.

If you say silicon is conscious, like (perhaps) everything else in the universe, then this doesn't mean that the computer's actions, in terms of the program's results, would be consciousness-driven actions.

The question still is: can boolean logic mimic consciousness?

Even if we assume that the silicon in the computer chips is conscious internally, this question is still far from being answered.
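To make the "everything reduces to boolean logic" premise concrete, here is a one-bit full adder built in Python from nothing but a NAND gate, the classic universal boolean operation; stacking such adders gives arbitrary-width binary arithmetic. Whether such reductions could ever amount to consciousness is exactly the open question above.

# Every operation below reduces to NAND; this is only meant to make concrete
# the claim that ordinary computation bottoms out in boolean logic.

def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = nand(nand(a, b), nand(s1, carry_in))
    return total, carry_out

print(full_adder(1, 1, 1))   # (1, 1): 1 + 1 + 1 = binary 11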

Re: Gelernter, Kurzweil debate machine consciousness
posted on 05/05/2009 12:52 PM by colorSpace


(Or, Pandemonium1323, perhaps you are saying: I don't care if boolean logic can express consciousness, I just want computers to be "intelligent" in other ways?)

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/17/2009 2:42 AM by iamspace


I don't know if I would call myself an "EXPERT" on consciousness, but I believe I have had occasion to witness the best knowledge on the subject.

Go here, and watch the podcasts to see what I mean:

http://www.avatarepc.com/index.html

Re: Gelernter, Kurzweil debate machine consciousness
posted on 03/17/2009 2:02 AM by iamspace


The question:

"Will Machines Become Conscious?"

"Suppose we scan someone's brain and reinstate the resulting 'mind file' into a suitable computing medium," asks Raymond Kurzweil. "Will the entity that emerges from such an operation be conscious?"

The answer is "NO., at least, not the way proposed in this question."

The reason is simple. Thoughts and memory are not stored in the brain, never have been, and no scientist or experiment has proven the ACTUAL MEMORY STORAGE LOCATIONS in the brain. The only thing that has been tested is the fact the someone recalls something based on stimulus, or otherwise, an organically reproduced sensation that the person associated with the memory. The memory itself was not triggered by the stimulation. The association the person's consciousness has with that feeling is the trigger.

The Institute of Advance Thinking lays it out plainly in it's Book "Instant Memory" when it says, "Memory is not cerebral centric."

Question answered.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 07/13/2009 4:35 PM by alexchen4ever


'Are we limited to building super-intelligent robotic "zombies" or will it be possible and desirable for us to build conscious, creative, volitional, perhaps even "spiritual" machines? David Gelernter and Ray Kurzweil debated this key question at MIT on Nov. 30.' [1]
I fully agree with Professor David Gelernter's opinion: it is impossible to build conscious, creative, volitional, or even 'spiritual' machines. My view rests on two considerations: the concept of consciousness, and the social desire for spiritual machines.
First of all, the concept of consciousness is ambiguous. Professor Ray Kurzweil defined it in two ways. 'One is apparent consciousness, which is an entity that appears to be conscious. Another way is subjectivity. Consciousness is a synonym for subjectivity and really having subjective experience, not just an entity that appears to have subjective experience.' [2] In practice we are defining some kind of future machine, and no one can really tell the future. 'Humans are way more intelligent than the people 1,000 years ago. No matter in physical brain or emotion complexity.' [3] But what about 1,000 years later, say the year 2100? I am sure humans could create a more intelligent machine than we have today, and it might fit all the definitions we give now. Then the experts of today would call it a conscious or even spiritual machine. However, people in the future will say this kind of machine is nothing and is far from what they call 'conscious'. So what I am saying is that we are advancing our technologies, but we should never forget that our brains and our human race are evolving as well. The concept of consciousness will never be settled; we don't even have an accurate definition. How could we have a so-called 'super-intelligent machine'?
The last thing I am going to talk about is social desire. The question is: do we really need spiritual machines, and why would we need them? 'People create everything to meet their social needs.' [3] The computer was invented in order to better serve people and free them from complex computations. But why do we need these computing tools to think? Can they really provide better services if they could think? I don't think so. We want computers to follow our instructions to accomplish tasks. If our instructions have logic errors and the computer cannot execute them, it just needs to remind us, and we as humans will think of another instruction. Let's imagine that all computers could think now. We give an order and the computer says: 'hold on a second, your instruction is not good, I will follow my own idea.' That's not a better service at all. Now it comes to the topic of a 'dangerous future'. If all machines think in the way I mentioned above, they will definitely replace us. They will do whatever they think is good or right regardless of our thoughts. Replacing humans is not part of our social needs. Maybe when that day comes, we will miss those out-of-date machines that still follow our instructions.
In conclusion, the so-called 'super-intelligent machines' will never appear in our lives, because we don't know what 'super-intelligent' is and we don't need such machines. We won't allow a new 'super-intelligent' race created by humans to dominate us.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 07/13/2009 5:45 PM by mpo907


I think one of the arguments being posed here is: would a full simulation of a human brain create consciousness? If you look at the arguments posed by Kurzweil and Gelernter, you'll notice that Kurzweil talks a lot about how there are simulations, such as the Blue Brain project, that try to simulate parts of the brain. And it is true that we have many parts of the brain modeled now, though not all of them. However, Gelernter claims that he cannot believe that consciousness can appear from nothing but software algorithms; that, despite all of the modeling of the brain that has been done so far, we cannot achieve for machines the same kind of consciousness that humans have.

I don't believe that machines can become conscious from just a simulation. First off, machines could appear to be conscious through these simulations of the parts of the brain. Suppose that we have the brain fully simulated in the next couple of decades. We can create this simulated brain, place it in a simulated world much as the Blue Brain project has done, and see how it interacts with this world. As it learns and begins to gather data, it can recognize signals, and it has been shown to recognize these signals in these simulations; thus there is the potential to create a very advanced artificial intelligence (AI) that is capable of doing everything a human can.

But is it human? As Gelernter says, we derive everything based on the assumption that other humans are conscious, and if we don't make that assumption, then we consider other things to be of a lower class than us, such as animals. Is a cat or dog conscious? Certainly they respond to external stimuli from humans and we have domesticated them as such, but as Gelernter has said, we have no idea how a squid experiences things, and thus no way of telling whether or not a squid is conscious, much less cats or dogs. And that is backed by the fact that what the squid experiences is not what a human experiences, which makes it difficult to imagine what it would be like to be a squid. We could also attempt to apply the Turing Test, but it is likely only enough to show that the machine is capable of analyzing and recognizing things as a human can, just as Gelernter explains.

Suppose humans did not exist, and that we only had machines with simulations of the human brain, creating the appearance of consciousness that I stated earlier. Now suppose that these machines began interacting with each other, without the context or background knowledge that the other party is nothing more than a machine. Do they believe they are conscious? We have no way of identifying what it is that makes one 'conscious', as it becomes more of a philosophical question, as mentioned by Kurzweil. As a result, it is more than likely these two machines will view each other as 'conscious', as 'human', and that they will interact with one another as such; but from our perspective, it doesn't seem that way.

Ultimately, we have no way of telling what it is that makes one 'conscious', not without a complete model of the human brain, nor an understanding of what it is in the human brain that creates this 'consciousness' that has this 'feeling of oneness with the universe', as Gelernter puts it. And I agree with that position, on the assumption that we have no way of identifying what it is that creates that 'oneness' in the brain at all. Of course, with further research and given time, there is a small chance that we may finally understand what it is that allows this, but I doubt it could be simulated through just software. Perhaps there is the off-chance that it could only be done through some external device or hardware that is capable of creating the same effect, but that's not something I can predict.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 07/20/2009 6:01 PM by /:setAI


surprised I never came in and made this 'debate' disappear-

it is easy to make any debate about whether or not machines can be conscious disappear-

I mean I think it is obvious that machines can be made to think [and that that is what you are since you ARE already a computer simulation and you are alive aren't you?]

but anyway- here I go again- get ready for the debate to implode:


I will simply make AI using biological neurons-


QED-

that is the easiest debate buster EVER- I guess people forget that I can make meat-based machines out of human parts- maybe it is just so morbid or squishy that they forget that point?


the Simulation Argument does well at ending this debate as well - I mean, you are alive and conscious, and you are a simulation running on a computer, so obviously machines can think; you are doing it right now. it's not that you are a 'biological machine' that is a virtual illusion of the simulation - you aren't biological at all. you are just patterns of 1s and 0s encoded into electrons or photons or whatever the computer that is running you uses to express 1/0 in the universe it was built in. you are part of an actual machine; thus you prove that machine consciousness exists.

Re: Gelernter, Kurzweil debate machine consciousness
posted on 04/17/2010 5:10 PM by Wowo51


I have a tough time accepting emergent theories that don't have any link to physics; nonetheless, you may not require wetware...

From CyberneticImmortality.com...
DRAFT, April 17, 2010


Consider Babbage's Analytical Engine. Babbage designed a purely mechanical calculator. Could a Babbage-like thinking machine become conscious? It would seem that no matter how complex a Babbage-like machine one might build, one would not expect it to feel emotion at any point; its calculations would, after all, be entirely mechanical. A normal computer is very Babbage-like, though it relies on electrical and magnetic fields rather than purely mechanical operations, so one would not expect a normal computer to become conscious either.

Assume for a moment that there is a physical requirement for consciousness, and that consciousness is intimately tied to spacetime. The obvious first choice would be that consciousness arises from particular arrangements of spike trains in our neural nets, but that need not be the case. Indeed, a neural-net spike-train model suffers from two unaesthetic binding problems: the neural net is divided at the synaptic junctions by chemical transmitters, and the spikes are discrete in time.

As an alternative, just as one might have an unseen magnetic field at some location in space, one might have an unseen thought/emotion field at some location in space. This 'consciousness field' might be a pure energy field equivalent to any of the other four fields of the standard model of physics, or it could arise from some particular arrangement of energy and matter. I prefer the pure field theory myself, in which case there would need to be equations of interaction in order for our thoughts to be communicated to and from the nonconscious matter in our brains. Consider a simple coil radio antenna: it transfers energy efficiently from an electrical current to an electromagnetic wave. Is there an equivalent of the coil antenna in the human brain that transfers energy efficiently between our consciousness and the nonconscious matter in our brains? A consciousness-activated ion channel could transmit requests to move from our consciousness to nonconscious information-processing neurons. There would need to be emotion-field-producing proteins, or similar special-purpose structures.

A pure consciousness-field theory is my pet theory; quantum-consciousness theories and electromagnetic-field theories of consciousness both overcome the binding problem as well. One could also formulate a configuration-of-matter theory that supposes our spike trains do in fact provide the configuration of matter required for consciousness to arise. Emergent theories seem doomed, as they lead to the extremely unsavory conclusion that one could create emotion by performing complex calculations with pen and paper, which follows from taking the conscious-Babbage-engine argument in that direction.

In any event, a complete description of consciousness is going to be required in order to successfully upload ourselves, or we will risk creating a Turing-complete replica that does not feel at all. Specially designed hardware is likely to be required in order to produce emotion and consciousness, if any of the physical theories are correct. This hardware will have to replicate the emotion- and consciousness-field-producing proteins/components in the brain, along with the required consciousness/emotion sensors. If our spike trains are the required configuration of matter, then the hardware requirements should be obvious.
For the field theories, the equivalent of an antenna will need to be understood in order to transfer energy and information between consciousness and nonconscious matter.

Consider the following. In the human brain there exists a clear set of inputs and outputs, and our consciousness lies between them. One can track information in through our senses, and out through our motor coordination. One can identify structures which process information in a nonconscious fashion. By carefully tracking the flow of information, one can deduce the precise structures which create and interact with our consciousness. Information will tend to spread in many directions when coming in through the senses, by way of various waves and fields according to the standard model of physics. Thus it will likely be more efficient to first identify any structures communicating information from consciousness to nonconscious matter. This can be done by tracking the flow of information backwards, from our muscles, through our nerves, through nonconscious information-processing systems, towards our consciousness. Identification of such structures will allow us to study the physical mathematics of consciousness, and allow us to artificially sustain consciousness.

Written by Warren Harding
Copyright CyberneticImmortality.com
You may distribute this paper in whole provided this copyright notice is included.