Consciousness in Human and Robot Minds
by Daniel Dennett

AI skeptics offer several reasons why robots could never become conscious. MIT's humanoid Cog robot project may give them pause.


Originally published May 1997 in the book Cognition, Computation, and Consciousness. Published on KurzweilAI.net on June 6, 2002.

1. Good and Bad Grounds for Skepticism

The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating according to the same well-understood principles that govern all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance. It may be wildly over-ambitious to suppose that human artificers can repeat Nature's triumph, with variations in material, form, and design process, but this is not a deep objection. It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe -- or in any event want to believe -- that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons.

Conscious robots probably will always simply cost too much to make. Nobody will ever synthesize a gall bladder out of atoms of the requisite elements, but I think it is uncontroversial that a gall bladder is nevertheless "just" a stupendous assembly of such atoms. Might a conscious robot be "just" a stupendous assembly of more elementary artifacts -- silicon chips, wires, tiny motors and cameras -- or would any such assembly, of whatever size and sophistication, have to leave out some special ingredient that is requisite for consciousness?

Let us briefly survey a nested series of reasons someone might advance for the impossibility of a conscious robot:

(1) Robots are purely material things, and consciousness requires immaterial mind-stuff. (Old-fashioned dualism)

It continues to amaze me how attractive this position still is to many people. I would have thought a historical perspective alone would make this view seem ludicrous: over the centuries, every other phenomenon of initially "supernatural" mysteriousness has succumbed to an uncontroversial explanation within the commodious folds of physical science. Thales, the Pre-Socratic proto-scientist, thought the loadstone had a soul, but we now know better; magnetism is one of the best understood of physical phenomena, strange though its manifestations are. The "miracles" of life itself, and of reproduction, are now analyzed into the well-known intricacies of molecular biology. Why should consciousness be any exception? Why should the brain be the only complex physical object in the universe to have an interface with another realm of being? Besides, the notorious problems with the supposed transactions at that dualistic interface are as good as a reductio ad absurdum of the view. The phenomena of consciousness are an admittedly dazzling lot, but I suspect that dualism would never be seriously considered if there weren't such a strong undercurrent of desire to protect the mind from science, by supposing it composed of a stuff that is in principle uninvestigatable by the methods of the physical sciences.

But if you are willing to concede the hopelessness of dualism, and accept some version of materialism, you might still hold:

(2) Robots are inorganic (by definition), and consciousness can exist only in an organic brain.

Why might this be? Instead of just hooting this view off the stage as an embarrassing throwback to old-fashioned vitalism, we might pause to note that there is a respectable, if not very interesting, way of defending this claim. Vitalism is deservedly dead; as biochemistry has shown in matchless detail, the powers of organic compounds are themselves all mechanistically reducible and hence mechanistically reproducible at one scale or another in alternative physical media; but it is conceivable -- if unlikely -- that the sheer speed and compactness of biochemically engineered processes in the brain are in fact unreproducible in other physical media (Dennett, 1987). So there might be straightforward reasons of engineering that showed that any robot that could not make use of organic tissues of one sort or another within its fabric would be too ungainly to execute some task critical for consciousness. If making a conscious robot were conceived of as a sort of sporting event -- like the America's Cup -- rather than a scientific endeavor, this could raise a curious conflict over the official rules. Team A wants to use artificially constructed organic polymer "muscles" to move its robot's limbs, because otherwise the motor noise wreaks havoc with the robot's artificial ears. Should this be allowed? Is a robot with "muscles" instead of motors a robot within the meaning of the act? If muscles are allowed, what about lining the robot's artificial retinas with genuine organic rods and cones instead of relying on relatively clumsy color-TV technology?

I take it that no serious scientific or philosophical thesis links its fate to the fate of the proposition that a protein-free conscious robot can be made, for example. The standard understanding that a robot shall be made of metal, silicon chips, glass, plastic, rubber and such, is an expression of the willingness of theorists to bet on a simplification of the issues: their conviction is that the crucial functions of intelligence can be achieved by one high-level simulation or another, so that it would be no undue hardship to restrict themselves to these materials, the readily available cost-effective ingredients in any case. But if somebody were to invent some sort of cheap artificial neural network fabric that could usefully be spliced into various tight corners in a robot's control system, the embarrassing fact that this fabric was made of organic molecules would not and should not dissuade serious roboticists from using it -- and simply taking on the burden of explaining to the uninitiated why this did not constitute "cheating" in any important sense.

I have discovered that some people are attracted by a third reason for believing in the impossibility of conscious robots:

(3) Robots are artifacts, and consciousness abhors an artifact; only something natural, born not manufactured, could exhibit genuine consciousness.

Once again, it is tempting to dismiss this claim with derision, and in some of its forms, derision is just what it deserves. Consider the general category of creed we might call origin essentialism: only wine made under the direction of the proprietors of Chateau Plonque counts as genuine Chateau Plonque; only a canvas every blotch on which was caused by the hand of Cezanne counts as a genuine Cezanne; only someone "with Cherokee blood" can be a real Cherokee. There are perfectly respectable reasons, eminently defensible in a court of law, for maintaining such distinctions, so long as they are understood to be protections of rights growing out of historical processes. If they are interpreted, however, as indicators of "intrinsic properties" that set their holders apart from their otherwise indistinguishable counterparts, they are pernicious nonsense. Let us dub origin chauvinism the category of view that holds out for some mystic difference (a difference of value, typically) due simply to such a fact about origin. Perfect imitation Chateau Plonque is exactly as good a wine as the real thing, counterfeit though it is, and the same holds for the fake Cezanne, if it is really indistinguishable by experts. And of course no person is intrinsically better or worse in any regard just for having or not having Cherokee (or Jewish, or African) "blood."

And to take a threadbare philosophical example, an atom-for-atom duplicate of a human being, an artifactual counterfeit of you, let us say, might not legally be you, and hence might not be entitled to your belongings, or deserve your punishments, but the suggestion that such a being would not be a feeling, conscious, alive person as genuine as any born of woman is preposterous nonsense, all the more deserving of our ridicule because if taken seriously it might seem to lend credibility to the racist drivel with which it shares a bogus "intuition".

If consciousness abhors an artifact, it cannot be because being born gives a complex of cells a property (aside from that historic property itself) that it could not otherwise have "in principle". There might, however, be a question of practicality. We have just seen how, as a matter of exigent practicality, it could turn out after all that organic materials were needed to make a conscious robot. For similar reasons, it could turn out that any conscious robot had to be, if not born, at least the beneficiary of a longish period of infancy. Making a fully equipped conscious adult robot might just be too much work. It might be vastly easier to make an initially unconscious or nonconscious "infant" robot and let it "grow up" into consciousness, more or less the way we all do. This hunch is not the disreputable claim that a certain sort of historic process puts a mystic stamp of approval on its product, but the more interesting and plausible claim that a certain sort of process is the only practical way of designing all the things that need designing in a conscious being.

Such a claim is entirely reasonable. Compare it to the claim one might make about the creation of Steven Spielberg's film, Schindler's List: it could not have been created entirely by computer animation, without the filming of real live actors. This impossibility claim must be false "in principle," since every frame of that film is nothing more than a matrix of gray-scale pixels of the sort that computer animation can manifestly create, at any level of detail or "realism" you are willing to pay for. There is nothing mystical, however, about the claim that it would be practically impossible to render the nuances of that film by such a bizarre exercise of technology. How much easier it is, practically, to put actors in the relevant circumstances, in a concrete simulation of the scenes one wishes to portray, and let them, via ensemble activity and re-activity, provide the information to the cameras that will then fill in all the pixels in each frame. This little exercise of the imagination helps to drive home just how much information there is in a "realistic" film, but even a great film, such as Schindler's List, for all its complexity, is a simple, non-interactive artifact many orders of magnitude less complex than a conscious being.

When robot-makers have claimed in the past that in principle they could construct "by hand" a conscious robot, this was a hubristic overstatement analogous to what Walt Disney might once have proclaimed: that his studio of animators could create a film so realistic that no one would be able to tell that it was a cartoon, not a "live action" film. What Disney couldn't do in fact, computer animators still cannot do, but perhaps only for the time being. Robot makers, even with the latest high-tech innovations, also fall far short of their hubristic goals, now and for the foreseeable future. The comparison serves to expose the likely source of the outrage so many skeptics feel when they encounter the manifestos of the Artificial Intelligencia. Anyone who seriously claimed that Schindler's List could in fact have been made by computer animation could be seen to betray an obscenely impoverished sense of what is conveyed in that film. An important element of the film's power is the fact that it is a film made by assembling human actors to portray those events, and that it is not actually the newsreel footage that its black-and-white format reminds you of. When one juxtaposes in one's imagination a sense of what the actors must have gone through to make the film with a sense of what the people who actually lived the events went through, this reflection sets up reverberations in one's thinking that draw attention to the deeper meanings of the film. Similarly, when robot enthusiasts proclaim the likelihood that they can simply construct a conscious robot, there is an understandable suspicion that they are simply betraying an infantile grasp of the subtleties of conscious life. (I hope I have put enough feeling into that condemnation to satisfy the skeptics.)

But however justified that might be in some instances as an ad hominem suspicion, it is simply irrelevant to the important theoretical issues. Perhaps no cartoon could be a great film, but they are certainly real films -- and some are indeed good films; if the best the roboticists can hope for is the creation of some crude, cheesy, second-rate, artificial consciousness, they still win. Still, it is not a foregone conclusion that even this modest goal is reachable. If you want to have a defensible reason for claiming that no conscious robot will ever be created, you might want to settle for this:

(4) Robots will always just be much too simple to be conscious.

After all, a normal human being is composed of trillions of parts (if we descend to the level of the macromolecules), and many of these rival in complexity and design cunning the fanciest artifacts that have ever been created. We consist of billions of cells, and a single human cell contains within itself complex "machinery" that is still well beyond the artifactual powers of engineers. We are composed of thousands of different kinds of cells, including thousands of different species of symbiont visitors, some of whom might be as important to our consciousness as others are to our ability to digest our food! If all that complexity were needed for consciousness to exist, then the task of making a single conscious robot would dwarf the entire scientific and engineering resources of the planet for millennia. And who would pay for it?

If no other reason can be found, this may do to ground your skepticism about conscious robots in your future, but one shortcoming of this last reason is that it is scientifically boring. If this is the only reason there won't be conscious robots, then consciousness isn't that special, after all. Another shortcoming of this reason is that it is dubious on its face. Everywhere else we have looked, we have found higher-level commonalities of function that permit us to substitute relatively simple bits for fiendishly complicated bits. Artificial heart valves work very well, but they are orders of magnitude simpler than organic heart valves, heart valves born of woman or sow, you might say. Artificial ears and eyes that will do a serviceable (if crude) job of substituting for lost perceptual organs are visible on the horizon, and anyone who doubts they are possible in principle is simply out of touch. Nobody ever said a prosthetic eye had to see as keenly, or focus as fast, or be as sensitive to color gradations as a normal human (or other animal) eye in order to "count" as an eye. If an eye, why not an optic nerve (or acceptable substitute thereof), and so forth, all the way in?

Some (Searle, 1992; Mangan, 1993) have supposed, most improbably, that this proposed regress would somewhere run into a non-fungible medium of consciousness, a part of the brain that could not be substituted on pain of death or zombiehood. Once the implications of that view are spelled out (Dennett, 1993a, 1993b), one can see that it is a non-starter. There is no reason at all to believe that some one part of the brain is utterly irreplaceable by prosthesis, provided we allow that some crudity, some loss of function, is to be expected in most substitutions of the simple for the complex. An artificial brain is, on the face of it, as "possible in principle" as an artificial heart, just much, much harder to make and hook up. Of course once we start letting crude forms of prosthetic consciousness -- like crude forms of prosthetic vision or hearing -- pass our litmus tests for consciousness (whichever tests we favor) the way is open for another boring debate, over whether the phenomena in question are too crude to count.

2. The Cog Project: A Humanoid Robot

A much more interesting tack to explore, in my opinion, is simply to set out to make a robot that is theoretically interesting independent of the philosophical conundrum about whether it is conscious. Such a robot would have to perform a lot of the feats that we have typically associated with consciousness in the past, but we would not need to dwell on that issue from the outset. Maybe we could even learn something interesting about what the truly hard problems are without ever settling any of the issues about consciousness.

Such a project is now underway at MIT. Under the direction of Professors Rodney Brooks and Lynn Andrea Stein of the AI Lab, a group of bright, hard-working young graduate students are laboring as I speak to create Cog, the most humanoid robot yet attempted, and I am happy to be a part of the Cog team. Cog is just about life-size -- that is, about the size of a human adult. Cog has no legs, but lives bolted at the hips, you might say, to its stand. It has two human-length arms, however, with somewhat simple hands on the wrists. It can bend at the waist and swing its torso, and its head moves with three degrees of freedom just about the way yours does. It has two eyes, each equipped with both a foveal high-resolution vision area and a low-resolution wide-angle parafoveal vision area, and these eyes saccade at almost human speed. That is, the two eyes can complete approximately three fixations a second, while you and I can manage four or five. Your foveas are at the center of your retinas, surrounded by the grainier low-resolution parafoveal areas; for reasons of engineering simplicity, Cog's eyes have their foveas mounted above their wide-angle vision areas.

This is typical of the sort of compromise that the Cog team is willing to make. It amounts to a wager that a vision system with the foveas moved out of the middle can still work well enough not to be debilitating, and the problems encountered will not be irrelevant to the problems encountered in normal human vision. After all, nature gives us examples of other eyes with different foveal arrangements. Eagles have three different foveas in each eye, for instance, and rabbit eyes are another story altogether. Cog's eyes won't give it visual information exactly like that provided to human vision by human eyes (in fact, of course, it will be vastly degraded), but the wager is that this will be plenty to give Cog the opportunity to perform impressive feats of hand-eye coordination, identification, and search. At the outset, Cog will not have color vision.

Since its eyes are video cameras mounted on delicate, fast-moving gimbals, it might be disastrous if Cog were inadvertently to punch itself in the eye, so part of the hard-wiring that must be provided in advance is an "innate" if rudimentary "pain" or "alarm" system to serve roughly the same protective functions as the reflex eye-blink and pain-avoidance systems hard-wired into human infants.

Cog will not be an adult at first, in spite of its adult size. It is being designed to pass through an extended period of artificial infancy, during which it will have to learn from experience, experience it will gain in the rough-and-tumble environment of the real world. Like a human infant, however, it will need a great deal of protection at the outset, in spite of the fact that it will be equipped with many of the most crucial safety-systems of a living being. It has limit switches, heat sensors, current sensors, strain gauges and alarm signals in all the right places to prevent it from destroying its many motors and joints. It has enormous "funny bones" -- motors sticking out from its elbows in a risky way. These will be protected from harm not by being shielded in heavy armor, but by being equipped with patches of exquisitely sensitive piezo-electric membrane "skin" which will trigger alarms when they make contact with anything. The goal is that Cog will quickly "learn" to keep its funny bones from being bumped -- if Cog cannot learn this in short order, it will have to have this high-priority policy hard-wired in. The same sensitive membranes will be used on its fingertips and elsewhere, and, like human tactile nerves, the "meaning" of the signals sent along the attached wires will depend more on what the central control system "makes of them" than on their "intrinsic" characteristics. A gentle touch, signaling sought-for contact with an object to be grasped, will not differ, as an information packet, from a sharp pain, signaling a need for rapid countermeasures. It all depends on what the central system is designed to do with the packet, and this design is itself indefinitely revisable -- something that can be adjusted either by Cog's own experience or by the tinkering of Cog's artificers.
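To make this last point concrete, here is a toy sketch in Python (every name and number is invented for illustration; nothing below is the Cog team's actual software). The very same information packet elicits grasping or withdrawal depending entirely on the central system's current, revisable policy:

    from dataclasses import dataclass

    @dataclass
    class SkinPacket:
        patch_id: str      # e.g. "left_elbow", "right_fingertip"
        pressure: float    # normalized 0.0 .. 1.0

    def make_policy(guarding_grasp: bool):
        """Return a handler; the mapping from packet to response is the
        policy, not anything intrinsic to the packet itself."""
        def handle(packet: SkinPacket) -> str:
            if packet.patch_id.endswith("elbow"):
                # "Funny bone" patches: any contact is treated as an alarm.
                return "withdraw_arm"
            if guarding_grasp and packet.pressure < 0.3:
                # The same gentle contact reads as sought-for touch here...
                return "close_fingers"
            # ...and as trouble under a different central disposition.
            return "release_and_retract"
        return handle

    handle = make_policy(guarding_grasp=True)
    print(handle(SkinPacket("right_fingertip", 0.2)))   # close_fingers
    print(handle(SkinPacket("left_elbow", 0.2)))        # withdraw_arm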

One of its most interesting "innate" endowments will be software for visual face recognition. Faces will "pop out" from the background of other objects as items of special interest to Cog. It will further be innately designed to "want" to keep its "mother's" face in view, and to work hard to keep "mother" from turning away. The role of mother has not yet been cast, but several of the graduate students have been tentatively tapped for this role. Unlike a human infant, of course, there is no reason why Cog can't have a whole team of mothers, each of whom is innately distinguished by Cog as a face to please if possible. Clearly, even if Cog really does have a Lebenswelt, it will not be the same as ours.

Decisions have not yet been reached about many of the candidates for hard-wiring or innate features. Anything that can learn must be initially equipped with a great deal of unlearned design. That is no longer an issue; no tabula rasa could ever be impressed with knowledge from experience. But it is also not much of an issue which features ought to be innately fixed, for there is a convenient trade-off. I haven't mentioned yet that Cog will actually be a multi-generational series of ever improved models (if all goes well!), but of course that is the way any complex artifact gets designed. Any feature that is not innately fixed at the outset, but does get itself designed into Cog's control system through learning, can then be lifted whole into Cog-II, as a new bit of innate endowment designed by Cog itself -- or rather by Cog's history of interactions with its environment. So even in cases in which we have the best of reasons for thinking that human infants actually come innately equipped with pre-designed gear, we may choose to try to get Cog to learn the design in question, rather than be born with it. In some instances, this is laziness or opportunism -- we don't really know what might work well, but maybe Cog can train itself up. This insouciance about the putative nature/nurture boundary is already a familiar attitude among neural net modelers, of course. Although Cog is not specifically intended to demonstrate any particular neural net thesis, it should come as no surprise that Cog's nervous system is a massively parallel architecture capable of simultaneously training up an indefinite number of special-purpose networks or circuits, under various regimes.

How plausible is the hope that Cog can retrace the steps of millions of years of evolution in a few months or years of laboratory exploration? Notice first that what I have just described is a variety of Lamarckian inheritance that no organic lineage has been able to avail itself of. The acquired design innovations of Cog-I can be immediately transferred to Cog-II, a speed-up of evolution of tremendous, if incalculable, magnitude. Moreover, if you bear in mind that, unlike the natural case, there will be a team of overseers ready to make patches whenever obvious shortcomings reveal themselves, and to jog the systems out of ruts whenever they enter them, it is not so outrageous a hope, in our opinion. But then, we are all rather outrageous people.
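The generational shortcut lends itself to a minimal sketch, assuming an invented file format and an invented parameter name (the real transfer would involve whole trained network configurations, not a one-entry dictionary):

    import json

    def save_acquired_design(trained_weights: dict, path: str) -> None:
        # At the end of Cog-I's "lifetime", dump whatever learning produced.
        with open(path, "w") as f:
            json.dump(trained_weights, f)

    def innate_endowment(path: str, default: dict) -> dict:
        # Cog-II boots with its predecessor's acquisitions as innate gear,
        # falling back to the hand-designed default for a first generation.
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return default

    # Generation 1 learns something; generation 2 is "born" with it.
    save_acquired_design({"reach_gain": 0.82}, "cog1_design.json")
    cog2_start = innate_endowment("cog1_design.json", default={"reach_gain": 0.5})
    print(cog2_start)  # {'reach_gain': 0.82}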

One talent that we have hopes of teaching to Cog is a rudimentary capacity for human language. And here we run into the fabled innate language organ or Language Acquisition Device made famous by Noam Chomsky. Is there going to be an attempt to build an innate LAD for our Cog? No. We are going to try to get Cog to build language the hard way, the way our ancestors must have done, over thousands of generations. Cog has ears (four, because it's easier to get good localization with four microphones than with carefully shaped ears like ours!) and some special-purpose signal-analyzing software is being developed to give Cog a fairly good chance of discriminating human speech sounds, and probably the capacity to distinguish different human voices. Cog will also have to have speech synthesis hardware and software, of course, but decisions have not yet been reached about the details. It is important to have Cog as well-equipped as possible for rich and natural interactions with human beings, for the team intends to take advantage of as much free labor as it can. Untrained people ought to be able to spend time -- hours if they like, and we rather hope they do -- trying to get Cog to learn this or that. Growing into an adult is a long, time-consuming business, and Cog -- and the team that is building Cog -- will need all the help it can get.
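The point of the four microphones deserves a word of explanation: each pair of microphones yields an arrival-time difference, and each difference constrains one angle of the sound source's direction. Here is a heavily idealized far-field sketch (not Cog's actual signal-analyzing software, whose details are still being developed):

    import math

    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def bearing_from_pair(dt: float, mic_spacing: float) -> float:
        """Angle (radians) of a distant source off the pair's broadside,
        from the time difference dt (seconds) between the two microphones."""
        s = max(-1.0, min(1.0, SPEED_OF_SOUND * dt / mic_spacing))
        return math.asin(s)

    # One left-right pair gives azimuth; an up-down pair gives elevation.
    azimuth = bearing_from_pair(dt=0.00025, mic_spacing=0.20)
    elevation = bearing_from_pair(dt=-0.00010, mic_spacing=0.20)
    print(round(math.degrees(azimuth)), round(math.degrees(elevation)))  # 25 -10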

Obviously this will not work unless the team manages somehow to give Cog a motivational structure that can be at least dimly recognized, responded to, and exploited by naive observers. In short, Cog should be as human as possible in its wants and fears, likes and dislikes. If those anthropomorphic terms strike you as unwarranted, put them in scare quotes or drop them altogether and replace them with tedious neologisms of your own choosing: Cog, you may prefer to say, must have goal-registrations and preference-functions that map in rough isomorphism to human desires. This is so for many reasons, of course. Cog won't work at all unless it has its act together in a daunting number of different regards. It must somehow delight in learning, abhor error, strive for novelty, and recognize progress. It must be vigilant in some regards, curious in others, and deeply unwilling to engage in self-destructive activity. While we are at it, we might as well try to make it crave human praise and company, and even exhibit a sense of humor.
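For readers who want the tedious-neologism version made fully explicit, here is a deliberately cartoonish sketch of goal-registrations and preference-functions; every drive, weight and feature below is invented for illustration:

    # Innate drives and their weights (all numbers hypothetical).
    DRIVES = {"learning": 1.0, "novelty": 0.6, "progress": 0.8,
              "self_preservation": 2.0, "human_company": 0.7}

    def preference(action_features: dict) -> float:
        # Higher score = more "wanted"; features predicting harm carry
        # negative values and are abhorred in proportion to the weight.
        return sum(DRIVES[k] * v for k, v in action_features.items())

    reach_for_toy = {"learning": 0.5, "novelty": 0.9}
    poke_own_eye = {"novelty": 0.9, "self_preservation": -1.0}
    print(preference(reach_for_toy) > preference(poke_own_eye))  # True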

Let me switch abruptly from this heavily anthropomorphic language to a brief description of Cog's initial endowment of information-processing hardware. The computer-complex that has been built to serve as the development platform for Cog's artificial nervous system consists of four backplanes, each with 16 nodes; each node is basically a Mac-II computer -- a 68332 processor with a megabyte of RAM. In other words, you can think of Cog's brain as roughly equivalent to sixty-four Mac-IIs yoked in a custom parallel architecture. Each node is itself a multiprocessor, and they all run a special version of parallel Lisp developed by Rodney Brooks, and called, simply, L. Each node has an interpreter for L in its ROM, so it can execute L files independently of every other node.

Each node has 6 assignable input-output ports, in addition to the possibility of separate i-o (input-output) to the motor boards directly controlling the various joints, as well as the all-important i-o to the experimenters' monitoring and control system, the Front End Processor or FEP (via another unit known as the Interfep). On a bank of separate monitors, one can see the current image in each camera (two foveas, two parafoveas), the activity in each of the many different visual processing areas, or the activities of any other nodes. Cog is thus equipped at birth with the equivalent of chronically implanted electrodes for each of its neurons; all its activities can be monitored in real time, recorded and debugged. The FEP is itself a Macintosh computer in more conventional packaging. At startup, each node is awakened by a FEP call that commands it to load its appropriate files of L from a file server. These files configure it for whatever tasks it has currently been designed to execute. Thus the underlying hardware machine can be turned into any of a host of different virtual machines, thanks to the capacity of each node to run its current program. The nodes do not make further use of disk memory, however, during normal operation. They keep their transient memories locally, in their individual megabytes of RAM. In other words, Cog stores both its genetic endowment (the virtual machine) and its long term memory on disk when it is shut down, but when it is powered on, it first configures itself and then stores all its short term memory distributed one way or another among its 64 nodes.
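The startup sequence just described can be rendered as a toy sketch (in Python rather than L, and with invented file names; the real nodes are 68332 processors awakened by calls from the FEP):

    NUM_BACKPLANES, NODES_PER_BACKPLANE = 4, 16
    RAM_PER_NODE_MB = 1

    class Node:
        def __init__(self, node_id: int):
            self.node_id = node_id
            self.program = None   # the virtual machine this node will run
            self.transient = {}   # short-term memory: local RAM only

        def wake(self, file_server: dict) -> None:
            # On a FEP call, each node fetches its currently assigned L
            # files and configures itself; the same hardware can thus
            # become any of a host of different virtual machines.
            self.program = file_server.get(self.node_id, "idle.l")

    file_server = {0: "saccade-control.l", 1: "face-popout.l"}
    nodes = [Node(i) for i in range(NUM_BACKPLANES * NODES_PER_BACKPLANE)]
    for n in nodes:
        n.wake(file_server)

    print(len(nodes), "nodes,", len(nodes) * RAM_PER_NODE_MB, "MB of RAM in all")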

The space of possible virtual machines made available and readily explorable by this underlying architecture is huge, of course, and it covers a volume in the space of all computations that has not yet been seriously explored by artificial intelligence researchers. Moreover, the space of possibilities it represents is manifestly much more realistic as a space to build brains in than is the space heretofore explored, either by the largely serial architectures of GOFAI ("Good Old Fashioned AI", Haugeland, 1985), or by parallel architectures simulated by serial machines. Nevertheless, it is arguable that every one of the possible virtual machines executable by Cog is minute in comparison to a real human brain. In short, Cog has a tiny brain. There is a big wager being made: the parallelism made possible by this arrangement will be sufficient to provide real-time control of importantly humanoid activities occurring on a human time scale. If this proves to be too optimistic by as little as an order of magnitude, the whole project will be forlorn, for the motivating insight for the project is that by confronting and solving actual, real time problems of self-protection, hand-eye coordination, and interaction with other animate beings, Cog's artificers will discover the sufficient conditions for higher cognitive functions in general -- and maybe even for a variety of consciousness that would satisfy the skeptics.

It is important to recognize that although the theoretical importance of having a body has been appreciated ever since Alan Turing (1950) drew specific attention to it in his classic paper, "Computing Machinery and Intelligence," within the field of Artificial Intelligence there has long been a contrary opinion that robotics is largely a waste of time, money and effort. According to this view, whatever deep principles of organization make cognition possible can be as readily discovered in the more abstract realm of pure simulation, at a fraction of the cost. In many fields, this thrifty attitude has proven to be uncontroversial wisdom. No economists have asked for the funds to implement their computer models of markets and industries in tiny robotic Wall Streets or Detroits, and civil engineers have largely replaced their scale models of bridges and tunnels with computer models that can do a better job of simulating all the relevant conditions of load, stress and strain. Closer to home, simulations of ingeniously oversimplified imaginary organisms foraging in imaginary environments, avoiding imaginary predators and differentially producing imaginary offspring are yielding important insights into the mechanisms of evolution and ecology in the new field of Artificial Life. So it is something of a surprise to find this AI group conceding, in effect, that there is indeed something to the skeptics' claim (e.g., Dreyfus and Dreyfus, 1986) that genuine embodiment in a real world is crucial to consciousness. Not, I hasten to add, because genuine embodiment provides some special vital juice that mere virtual-world simulations cannot secrete, but for the more practical reason -- or hunch -- that unless you saddle yourself with all the problems of making a concrete agent take care of itself in the real world, you will tend to overlook, underestimate, or misconstrue the deepest problems of design.

Besides, as I have already noted, there is the hope that Cog will be able to design itself in large measure, learning from infancy, and building its own representation of its world in the terms that it innately understands. Nobody doubts that any agent capable of interacting intelligently with a human being on human terms must have access to literally millions if not billions of logically independent items of world knowledge. Either these must be hand-coded individually by human programmers -- a tactic being pursued, notoriously, by Douglas Lenat and his CYC team in Dallas -- or some way must be found for the artificial agent to learn its world knowledge from (real) interactions with the (real) world. The potential virtues of this shortcut have long been recognized within AI circles (e.g., Waltz, 1988). The unanswered question is whether taking on the task of solving the grubby details of real-world robotics will actually permit one to finesse the task of hand-coding the world knowledge. Brooks, Stein and their team -- myself included -- are gambling that it will.

At this stage of the project, most of the problems being addressed would never arise in the realm of pure, disembodied AI. How many separate motors might be used for controlling each hand? They will have to be mounted somehow on the forearms. Will there then be room to mount the motor boards directly on the arms, close to the joints they control, or would they get in the way? How much cabling can each arm carry before weariness or clumsiness overcomes it? The arm joints have been built to be compliant -- springy, like your own joints. This means that if Cog wants to do some fine-fingered manipulation, it will have to learn to "burn" some of the degrees of freedom in its arm motion by temporarily bracing its elbows or wrists on a table or other convenient landmark, just as you would do. Such compliance is typical of the mixed bag of opportunities and problems created by real robotics. Another is the need for self-calibration or re-calibration in the eyes. If Cog's eyes jiggle away from their preset aim, thanks to the wear and tear of all that sudden saccading, there must be ways for Cog to compensate, short of trying continually to adjust its camera-eyes with its fingers. Software designed to tolerate this probable sloppiness in the first place may well be more robust and versatile in many other ways than software designed to work in a more "perfect" world.
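One possible shape for that self-recalibration, purely as illustration (nothing says the team will solve it this way): maintain a software offset that absorbs mechanical drift, nudged whenever a saccade to a known target lands off-center.

    class EyeCalibration:
        def __init__(self, learning_rate: float = 0.2):
            self.offset = (0.0, 0.0)   # pan/tilt correction, in degrees
            self.lr = learning_rate

        def correct(self, command):
            # Apply the current correction to every aiming command.
            return (command[0] + self.offset[0], command[1] + self.offset[1])

        def observe_error(self, landing_error):
            # The target's displacement from the fovea's center after a
            # saccade is evidence of drift; nudge the offset to cancel it.
            self.offset = (self.offset[0] - self.lr * landing_error[0],
                           self.offset[1] - self.lr * landing_error[1])

    cal = EyeCalibration()
    for _ in range(20):   # simulate a persistent 1-degree pan drift
        cal.observe_error((1.0 + cal.offset[0], 0.0))
    print(cal.correct((10.0, 0.0)))   # roughly (9.0, 0.0): drift absorbed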

Earlier I mentioned a reason for using artificial muscles, not motors, to control a robot's joints, and the example was not imaginary. Brooks is concerned that the sheer noise of Cog's skeletal activities may seriously interfere with the attempt to give Cog humanoid hearing. There is research underway at the AI Lab to develop synthetic electro-mechanical muscle tissues, which would operate silently as well as being more compact, but this will not be available for early incarnations of Cog. For an entirely different reason, thought is being given to the option of designing Cog's visual control software as if its eyes were moved by muscles, not motors, building in a software interface that amounts to giving Cog a set of virtual eye-muscles. Why might this extra complication in the interface be wise? Because the "opponent-process" control system exemplified by eye-muscle controls is apparently a deep and ubiquitous feature of nervous systems, involved in control of attention generally and disrupted in such pathologies as unilateral neglect. If we are going to have such competitive systems at higher levels of control, it might be wise to build them in "all the way down," concealing the final translation into electric-motor-talk as part of the backstage implementation, not the model.
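In outline, the virtual eye-muscle interface might look like this (an invented sketch; the actual interface has not been designed, and the scaling is arbitrary):

    def virtual_muscle_pair(agonist: float, antagonist: float) -> float:
        """Opponent-process control: two activations (0..1) compete, and
        only their difference reaches the plant; co-activation cancels."""
        return max(0.0, agonist) - max(0.0, antagonist)

    def motor_command(net_drive: float, gain: float = 40.0) -> float:
        # Backstage implementation detail: translate net muscle drive
        # into the actual electric motor's velocity (degrees per second).
        return gain * net_drive

    # A leftward gaze shift: the left "muscle" pulls harder than the right.
    net = virtual_muscle_pair(agonist=0.7, antagonist=0.2)
    print(motor_command(net))   # 20.0 deg/s toward the agonist's side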

Other practicalities are more obvious, or at least more immediately evocative to the uninitiated. Three huge red "emergency kill" buttons have already been provided in Cog's environment, to ensure that if Cog happens to engage in some activity that could injure or endanger a human interactor (or itself), there is a way of getting it to stop. But what is the appropriate response for Cog to make to the KILL button? If power to Cog's motors is suddenly shut off, Cog will slump, and its arms will crash down on whatever is below them. Is this what we want to happen? Do we want Cog to drop whatever it is holding? What should "Stop!" mean to Cog? This is a real issue about which there is not yet any consensus.
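The alternatives can be laid out in a toy sketch, with a stub standing in for Cog; none of this is a settled design, it merely makes the unanswered question explicit:

    class CogStub:
        """Stand-in with just enough behavior to show the alternatives."""
        def __init__(self):
            self.log = []
        def cut_motor_power(self):
            self.log.append("slump: arms crash down on whatever is below")
        def hold_current_posture(self):
            self.log.append("freeze: joints braked where they are")
        def set_down_held_object(self):
            self.log.append("held object set down deliberately")
        def lower_arms_slowly(self):
            self.log.append("arms folded to a safe rest pose")

    def emergency_stop(cog: CogStub, mode: str = "freeze") -> None:
        if mode == "cut_power":
            cog.cut_motor_power()        # safe for us, not for Cog or cargo
        elif mode == "freeze":
            cog.hold_current_posture()   # simple, but keeps motors live
        elif mode == "yield":
            cog.set_down_held_object()   # deliberate, but slower to finish
            cog.lower_arms_slowly()
        else:
            raise ValueError("unknown stop mode: " + mode)

    cog = CogStub()
    emergency_stop(cog, mode="yield")
    print(cog.log)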

There are many more details of the current and anticipated design of Cog that are of more than passing interest to those in the field, but on this occasion, I want to use the little remaining time to address some overriding questions that have been much debated by philosophers, and that receive a ready treatment in the environment of thought made possible by Cog. In other words, let's consider Cog merely as a prosthetic aid to philosophical thought-experiments, a modest but by no means negligible role for Cog to play.

3. Some Philosophical Considerations

A recent criticism of "strong AI" that has received quite a bit of attention is the so-called problem of "symbol grounding" (Harnad, 1990). It is all very well for large AI programs to have data structures that purport to refer to Chicago, milk, or the person to whom I am now talking, but such imaginary reference is not the same as real reference, according to this line of criticism. These internal "symbols" are not properly "grounded" in the world, and the problems thereby eschewed by pure, non-robotic, AI are not trivial or peripheral. As one who discussed, and ultimately dismissed, a version of this problem many years ago (Dennett, 1969, p.182ff), I would not want to be interpreted as now abandoning my earlier view. I submit that Cog moots the problem of symbol grounding, without having to settle its status as a criticism of "strong AI". Anything in Cog that might be a candidate for symbolhood will automatically be "grounded" in Cog's real predicament, as surely as its counterpart in any child, so the issue doesn't arise, except as a practical problem for the Cog team, to be solved or not, as fortune dictates. If the day ever comes for Cog to comment to anybody about Chicago, the question of whether Cog is in any position to do so will arise for exactly the same reasons, and be resolvable on the same considerations, as the parallel question about the reference of the word "Chicago" in the idiolect of a young child.

Another claim that has often been advanced, most carefully by Haugeland (1985), is that nothing could properly "matter" to an artificial intelligence, and mattering (it is claimed) is crucial to consciousness. Haugeland restricted his claim to traditional GOFAI systems, and left robots out of consideration. Would he concede that something could matter to Cog? The question, presumably, is how seriously to weigh the import of the quite deliberate decision by Cog's creators to make Cog as much as possible responsible for its own welfare. Cog will be equipped with some "innate" but not at all arbitrary preferences, and hence provided of necessity with the concomitant capacity to be "bothered" by the thwarting of those preferences, and "pleased" by the furthering of the ends it was innately designed to seek. Some may want to retort: "This is not real pleasure or pain, but merely a simulacrum." Perhaps, but on what grounds will they defend this claim? Cog may be said to have quite crude, simplistic, one-dimensional pleasure and pain, cartoon pleasure and pain if you like, but then the same might also be said of the pleasure and pain of simpler organisms -- clams or houseflies, for instance. Most, if not all, of the burden of proof is shifted by Cog, in my estimation. The reasons for saying that something does matter to Cog are not arbitrary; they are exactly parallel to the reasons we give for saying that things matter to us and to other creatures. Since we have cut off the dubious retreats to vitalism or origin chauvinism, it will be interesting to see if the skeptics have any good reasons for declaring Cog's pains and pleasures not to matter -- at least to it, and for that very reason, to us as well. It will come as no surprise, I hope, that more than a few participants in the Cog project are already musing about what obligations they might come to have to Cog, over and above their obligations to the Cog team.

Finally, J.R. Lucas (1994) has raised the claim that if a robot were really conscious, we would have to be prepared to believe it about its own internal states. I would like to close by pointing out that this is a rather likely reality in the case of Cog. Although Cog will be equipped with an optimal suite of monitoring devices that will reveal the details of its inner workings to the observing team, its own pronouncements could very well come to be a more trustworthy and informative source of information on what was really going on inside it. The information visible on the banks of monitors, or gathered by the gigabyte on hard disks, will be at the outset almost as hard to interpret, even by Cog's own designers, as the information obtainable by such "third-person" methods as MRI and CT scanning in the neurosciences. As the observers refine their models, and their understanding of their models, their authority as interpreters of the data may grow, but it may also suffer eclipse. Especially since Cog will be designed from the outset to redesign itself as much as possible, there is a high probability that the designers will simply lose the standard hegemony of the artificer ("I made it, so I know what it is supposed to do, and what it is doing now!"). Into this epistemological vacuum Cog may very well thrust itself. In fact, I would gladly defend the conditional prediction: if Cog develops to the point where it can conduct what appear to be robust and well-controlled conversations in something like a natural language, it will certainly be in a position to rival its own monitors (and the theorists who interpret them) as a source of knowledge about what it is doing and feeling, and why.

References

Dennett, Daniel C., 1969, Content and Consciousness, London: Routledge & Kegan Paul.

Dennett, Daniel C., 1987, "Fast Thinking," in Dennett, The Intentional Stance, Cambridge, MA: MIT Press, pp. 323-37.

Dennett, Daniel C., 1993a, review of John Searle, The Rediscovery of the Mind, Journal of Philosophy, 90, pp. 193-205.

Dennett, Daniel C., 1993b, "Caveat Emptor," Consciousness and Cognition, 2, pp.48-57.

Dreyfus, Hubert & Dreyfus, Stuart, 1986, Mind Over Machine, New York: MacMillan.

Harnad, Stevan, 1990, "The Symbol Grounding Problem," Physica D, 42, pp.335-46.

Haugeland, John, 1985, Artificial Intelligence: The Very Idea, Cambridge MA: MIT Press.

Lucas, J. R., 1994, presentation to the Royal Society, Conference on Artificial Intelligence, April 14, 1994.

Mangan, Bruce, 1993, "Dennett, Consciousness, and the Sorrows of Functionalism," Consciousness and Cognition, 2, pp. 1-17.

Searle, John, 1992, The Rediscovery of the Mind, Cambridge, MA: MIT Press.

Turing, Alan, 1950, "Computing Machinery and Intelligence," Mind, 59, pp.433-60.

Waltz, David, 1988, "The Prospects for Building Truly Intelligent Machines," Daedalus, 117, pp.191-222.

 

From Cognition, Computation, and Consciousness, Masao Ito, ed., pp. 17-31. © 1997 Oxford University Press. Reprinted by permission of Oxford University Press.

Mind·X Discussion About This Article:

Consciousness, Presence, and AC current
posted on 06/07/2002 2:16 PM by england@spatialharmonics.com


It is well known, scientifically speaking, that we are physical manifestations of solar energies. The question of consciousness is like looking toward our local source, and like looking at the sun, it is difficult to address such a fundamental question as "what is this energy that moves me - that gives me life?" We all know, or should at least have considered, the scenario of someone we know dying and leaving the physical body. What we see in the remains of the body is simply a thing - a concentration of matter and nothing else. Everything that was that person in the essence of life force, character, personality - and the pure energy that moved the blood - is gone. So often and so easily we mistake the body and the body's functions for our source.
The question of artificial intelligence becoming conscious can only be approached in an accurate way if one specifically defines, with consensus, the notion of consciousness. And herein is the dilemma for the question at hand.
The question I am raising is - are the people engaged in this debate/discussion satisfied with the idea that this unquestionably mysterious energy that literally makes use of and merely uses the physical apparatus we call our body - is it merely an equivalent (for the question of AI) to plugging in to an electrical socket to power my computer?
Are we indeed satisfied with equating this energy with AC current? Until the question of consciousness ventures beyond the realm of merely the functional - that is, into the realms of Will and Being - we can only ever be speaking of mechanical replication of mechanical processes. What I am suggesting is that that has nothing to do with Consciousness. So again, what is understood or assumed about consciousness? More fundamentally, am I content with my assumption that I know my own energetic source, and with the assumptions I have with respect to the range of its influence beyond the functional aspects of my body/machine/brain?


Wayne England
Santa Monica, CA.






Re: Consciousness, Presence, and AC current
posted on 06/24/2002 12:05 PM by smile_on@hotmail.com


You have some valid points.
The problem with your assumptions is that you are comparing humans with robots! Robots are well-thought-out entities that are created with some initial goal in mind. And if some scientist initiates such a project in order to end up with a prototype that would stand on a par with humans, I suppose he must have missed two nights' sleep! My point is: robots are built for entirely different purposes than those we (the humans) generally pursue.
....ASIM

Re: Consciousness, Presence, and AC current
posted on 06/24/2002 12:42 PM by tomaz@techemail.com


The kind of view you have helps people get through the days - and especially the nights.

If I believed something like that, I couldn't sleep a minute.

While I am just a machine (or its software) ... everything is well, as well as it can be presently. But if I had a 'soul' somewhere attached to God knows what ... I would worry a lot.

- Thomas

Re: Consciousness, Presence, and AC current
posted on 06/24/2002 4:51 PM by grantc4@hotmail.com


>The question I am raising is - are the people engaged in this debate/discussion satisfied with the idea that this unquestionably mysterious energy that literally makes use of and merely uses the physical apparatus we call our body - is it merely an equivalent (for the question of AI) to plugging in to an electrical socket to power my computer?

No more so for us than for any other animal roaming the Earth. Have you noticed that your dog or cat has a certain amount of intelligence and personality? Do they not feel pain and sorrow? Doesn't their blood flow as freely as ours and stop as quickly and finally when they die? Individual animals have made efforts of will that people have described as heroic.

Whatever this mysterious energy is, we can't say it belongs to us alone. The only real question is, can we endow our machines with it? And as the art of building intelligent machines borrows more and more from the life forms doing the building, the harder it will become to tell what is living and what is not. When the energy sources for nanobots become dependent on things like mitochondria, what will be the difference between where they get it and where we get it?

Right now there is a clear difference between them and us. But in the days to come, we will as likely build our robots with DNA as with metal tools and power them with the same kind of fuel we use to run our bodies. Then the primary difference between us will be that instead of having evolved into existence, they will have been engineered. Their designs will no doubt be cleaner and better suited to the jobs for which they were built.

We, on the other hand, were only created to exist. We are too general purpose to do as good a job at anything as a well designed robot. It's that very lack of purpose in design that may be our undoing.

We can, of course, always be redesigned as we go along. That's what knowing the genetic code and combining it with our knowledge of AI and the creation of new materials can do for us. We can become borgs with the best features of both man and machine. That might keep us around for a while. But who knows what we will look like at that stage?

A body made of materials that never existed before and that will never wear out, combined with senses that can detect every frequency of every spectrum in the universe, and a brain that can combine those frequencies of input to examine, in concert with our fellow man and machine, everything the universe has to offer might be worth keeping. But it might not be recognizable by homo sapiens.

Re: Consciousness, Presence, and AC current
posted on 06/25/2002 4:23 AM by smile_on@hotmail.com


Yes, as far as the spark of life is concerned, we call it "mysterious"; not because we don't know how to appreciate human life, but because AI as a field of science can't be very useful by itself, if we are to really fathom human existence.
My second point refers to Artificial Intelligence in general. As an honors-year computer science student, I have come to realize that apart from always comparing robots with humans, it would be worthwhile to consider the robot as a separate entity that would carry some static parameters, which may help the entity define its "life".
Last point: our insistence on comparing robots with humans may result in our consequent inability to solve major deadlocks in AI we face as of today. Please do think it over.
...ASIM


Re: Consciousness, Presence, and AC current
posted on 06/27/2002 6:56 PM by Citizen Blue


These postings reinforce my ideas on the subject. Thank you.

Re: Consciousness, Presence, and AC current
posted on 07/29/2004 10:31 PM by Incanus


Greetings Friend,

I will tell you what this energy is. It is called quintessence.

If you begin searching for the meaning of this word, you will learn what this energy is. When I say search for the meaning of the word, I mean actually come to know its meaning through experience (from your words, you have already, unconsciously, realised it slightly).

If you wish to know more, or if you wish me to suggest an excellent work based completely on experiments with this "life spark" energy, then please contact me privately.

In LVX
Ora et lege et Labora.
Incanus

Re: Consciousness in Human and Robot Minds
posted on 10/02/2002 1:20 AM by england@spatialharmonics.com


Haven't been here in a while - but I see there are a few comments now - and would like to just add another thought that follows this questioning of consciousness - and what it is and what it isn't - bearing in mind that 'comprehension' is directly linked to the mysterious force/spirit within. (I am not my mind - I am not my body - I am not my emotions) - but I am witness.
What is this term consciousness?? Consciousness is not related in any way to the on-and-off mechanics of mechanical process. One might as well bounce a thousand balls up and down and claim there could have been some "consciousness" that just happened. (I am referring of course to a microchip with respect to directional motions.) There is no ghost in the machine - unless we are speaking of ourselves - and that was the point. These people who are suggesting I'll be able to download my brain onto a computer some day and live on are confusing the world of FACT (existentiality - and functions - physical mechanical processes) with the world of VALUE (essential comprehension of meaning - and the experience of BEING). The body is merely an instrument that houses what I am more fundamentally. It is a question of LEVELS - there are different levels of consciousness - and different qualities of energy associated with those levels. For example, Meister Eckhart had a different level or quality of energy moving through him (or he was host to) than, say, a stubborn person who believes he is right without questioning. And this is where it gets interesting - because different qualities of energies manifest different kinds of processes and patternings. To continue with the example above - Meister Eckhart was known to almost always be in a state of questioning - a certain openness and application of those aspects within that enable the intelligent sense of creative wonder. It could be argued (as it apparently is by someone) that this kind of patterning of inner relatedness and knowledge could be "transposed into" algorithms - replicating patternings of relationships and forces in association with sequences of variables etc... But this can only ever be a surface, hollow replication of appearances - like a music sampler can repeat the sound of my voice. A robot could never know it was speaking - except to replicate an (appearance) of knowing.

This is obvious stuff - but it concerns me that there are people who believe that computers are going to manifest and be host to conscious energy, or that I can be alive in a computer?

Again - we don't know what we are talking about collectively when we refer to this term consciousness. Like so many other words, we all have a different take - and whole sets of assumptions. This is why there is the new Toward a Science of Consciousness conference. Finally, for western science, the question is up - the problem is most people are simply not up for the question.

Re: Consciousness in Human and Robot Minds
posted on 10/02/2002 3:55 AM by azb0@earthlink.net


England,

I claim I am conscious. It certainly feels that way to me.

You claim you are conscious.

Suppose, in all seriousness, I do not believe you, and believe instead that you are just mechanically and chemically "reacting" to me and your environment, albeit in very complex and "reasonable" ways.

What evidence can you present that should convince me that you actually experience the sensation of "consciousness"?

I mean, besides, "Well, because it's just obvious, duh!"

Just curious.

Cheers! ____tony b____

Re: Consciousness in Human and Robot Minds
posted on 10/02/2002 2:00 PM by england@spatialharmonics.com


The proof can only be experienced by you -
but if you are willing to do an experiment, you will see so clearly for yourself that you will repeat the quote at the end of your posting to yourself. The problem is, people are not serious about the question of consciousness - that is to say, there is an overriding assumption that the consciousness I am familiar with is the extent of the matter, or IT.

If you are serious about the question of consciousness - are you willing, for 30 days, to try quietly, with intention - and the consciousness you claim you have - to sit still in the morning and simply observe for 30 minutes what occurs in your mind, feelings and body? It is precisely the effort of watching and staying with the intention (like a scientist) that will produce a quality of energy in you that will be the "proof" that I am not my functions - that is to say - I am not my mind, body or feelings, but feelings, thoughts and sensations move through me - and (I can know them and be witness) to them. The illusion is the assumption that that is how it is anyway - that I am aware of everything that is going on in me... but if I look - for a moment - I see that I am not. This is a terrible blow to my idea of myself. (Serious stuff.) Because on that level (on my ordinary level) I am like a machine - and the point is a machine cannot know itself... a machine cannot be known to itself... things just pass through it - it just churns. The witness in myself is almost always buried beneath the functions (thoughts, feelings, sensations). Or in other words, the witness is there - but is buried by conclusions, assumptions and almost constant distraction - that I give my identity to.

The problem is - that to approach the question of consciousness I need to approach it with an open mind - like a scientist... but the unavoidable scenario is that I myself am and can only be the subject - test and control - if I am to approach the question seriously. The question becomes: how intelligent a scientist am I in relation to the subject of knowing myself?
Am I just a machine - just a computer processing data in complex ways? Is that it? Can I prove to myself there is nothing more to me? Or is it not quite that simple a matter? If it isn't - that can only be good news.

The question lies with the individual when the question is about consciousness - and in that sense there is a reason that it has been avoided by science - I myself need to put all my cherished assumptions and opinions into question - and who wants to do that? Just as, who wants to become (a serious scientific) experiment to their own observation? 30 minutes for 30 days. Suffering enters the frame - and I have very little bearing on that subject.

Words themselves are only symbols for meanings.
A computer cannot UNDERSTAND meanings (Values) - it can only "KNOW" (Facts). The question becomes the difference between UNDERSTANDING - walking the path - as opposed to KNOWING - talking about the path.

Among other places...
It's all in "The MATRIX" - the movie - that masterpiece of symbolism. Is that film not all about me and my scenario? My problem is I am constantly popping blue pills without knowing it.

ENGLAND.

Re: Consciousness in Human and Robot Minds
posted on 07/30/2004 4:13 PM by ubermouth

I know I'm many years late, but how exactly are you witness? What exactly does this relation look like?

Re: Consciousness in Human and Robot Minds
posted on 07/30/2004 4:17 PM by ubermouth

Secondly, if you are going to say that you are witness to these things via experiencing them, then I must say that you are made of those things, i.e. emotion, body, thought, in that you are a mode, i.e. witness, i.e. experiencer, and experience is constituted of those things. You cannot be connected to those things as a removed observer; you must experience them, which entails that you are in an interaction with them. If not in an interactive mode, then how do you relate to them?

Re: Consciousness in Human and Robot Minds
posted on 07/24/2004 10:56 AM by jack d

Tony,

I claim I am a duck. It certainly feels that way to me.

You claim you are a duck.

Suppose, in all seriousness, I do not believe you, and believe instead that you are just mechanically and chemically "reacting" to me and your environment, albeit in very complex and "reasonable" ways.

What evidence can you present that should convince me that you actually are a duck?

I mean, besides, "Well, because it's just obvious, duh!"

*****
The answer is that you are confusing the subjective experience of consciousness with its objective characteristics. So, for instance, there are certain objective characteristics we associate with being a duck. They have beaks, go quack and taste quite nice. So in order to convince you I may be a duck, I would invite you to check my external, objective characteristics for the presence of features normally associated, amongst knowledgeable society, with a duck. I would not expect you to take my word for it. In order to confirm that I am a duck there needs to be the capacity for you, the observer, to interact with me, the duck. This is possible in a very easy way with a duck, as a duck is made of matter. (Nonetheless I may be kidding you, as I may not be a duck but a heavily disguised finch - my DNA may provide better evidence.)

The only objective facts we have about consciousness (there aren't many) are that it's associated with brains, it appears when we wake up in the morning and disappears when we drink too much or spend too much time hanging around police stations in Iraq. There is little around to objectively measure consciousness. Anaesthetists make a go of it. But it clearly is not as easy for an external observer to verify the existence or not of consciousness in somebody's brain as it is to see whether something is or is not a duck. But nonetheless, the fact that something is more difficult is not the same thing as saying it is impossible.

Consciousness is a closed phenomenon - its EXPERIENCE is only available to the person experiencing it. Nonetheless its objective existence is available to third parties - fortunately, as otherwise our hospitals wouldn't function too well if all their anaesthetists weren't able to work on that basis. Similarly, I would suggest, Tony, that you exercise that belief (that others are conscious just like you) on a daily basis - a belief as valid as all the other necessary beliefs, e.g. the existence of an external world, the existence of matter, other human beings, and all that.

Re: Consciousness in Human and Robot Minds
posted on 07/24/2004 10:35 AM by jack d

Who says that western science cannot 'tackle' the problem of consciousness? On what basis is this exaggerated claim made? Consciousness is a phenomenological semantic, just like matter and time. It is no more immune to the probing of the scientific method than the reproductive methods of vampire bats. It may be difficult, but to state that it is epistemologically impossible is a philosophical statement that is unsupportable.

Where is Consciousness?
posted on 10/18/2002 1:31 PM by entell

I think it is quite useless to try to replicate the brain in silicon (or whatever) to find consciousness or make intelligent machines. I believe the brain is simply a tool for intelligence to present itself on. Much like software running on PCs. The PC can do and be many things based on what software it runs. The software is what makes the PC what it is. The hardware is just the platform for the software to run itself on. If researchers are trying to replicate the brain to get to intelligence, I believe they will be very disappointed when they assemble the artificial brain. I can't believe that the brain would be the source of intelligence. If so, there is a very interesting question to be asked: what is the difference between a dead person and a living one? Or even better, a normal living person and a person in a coma. The brain is alive and ticking, yet there is no consciousness. Therefore consciousness cannot possibly be a product of the brain. I have my doubts about whether we can ever replicate it in a machine. I know this might sound very unscientific, but what about the role of the soul in all this? The soul is perhaps not a scientifically proven fact yet; however, just because it is not proven doesn't mean it does not exist. It is quite possible that there is a higher-level "software" (which I refer to as the soul) running on our brains that is the real source of the intelligence that we are trying to replicate in a machine. Once the person dies, there goes the soul (and the software). This explains why it is not enough to keep the brain cells alive to have a conscious person.

Apart from consciousness, this thing I refer to as the soul could also be the source of creativity. You know how sometimes you come up with solutions to problems that you didn't know you knew, and they just pop up at the least expected time? Where do those creative thoughts come from? From consciousness? From neurons firing in a completely new pattern (chaos)?

I am also familiar with the concept of genetic programming. In that case, there is yet again no consciousness. It's simply exploring a limited space of solutions to a given problem in a 'brute force' fashion. It is a directed search so it is not completely a dumb process, but still there is no conscious thinking involved. I am a hardware engineer myself and I do quite a bit of creative thinking on a daily basis. Sometimes I wonder if a machine could do my job completely on its own, and if it could, what it would need to be able to accomplish that. For simple things, genetic programming seems good enough, but once the project gets a tiny little bit complicated, I don't think an uncreative process like genetic programming (or any other method) will accomplish much. A creative tool, which doesn't purely depend on just IF-THEN-ELSE statements or 'brute force' searches for the possible best solutions, would be needed. No less will do.

I have read quite a few of the articles on this website, but I was so disappointed, because they either re-state known facts, or they are very, very focused on "replicating the brain", as if there were a 100% guarantee that doing so will create consciousness. I would think really open-minded scientists would be questioning a lot more before taking on such a difficult task.

I do realize, however, that in simple replication of brain-like behavior, we see some intelligence. When we create artificial neurons, and tie them up the way they are tied up in the brain, we see that they make good speech synthesizers and handwriting recognizers. However, replicating functionality does not equal replicating intelligence or creativity or consciousness. The handwriting recognition hardware did not have the "experience" of doing so. It did not even do so at a conscious level. That's all it is meant to do and it did it because as far as it is concerned, there is nothing else to do. Perhaps we need to be able to communicate with the recognizer to really know if it experienced anything or not. :)


Anyone with any comments on any of the things I said? Please let me know. I would like to hear your comments.

Re: Where is Consciousness?
posted on 10/18/2002 2:46 PM by tony_b

entell,

I appreciate your sentiments, but scientific methodology is based upon what can be observed and the efficacy of function.

You say that you are conscious. I claim that I am. If we were to meet, I would likely be convinced that you are indeed conscious, but my reasoning would not be terribly scientific. It would be a "leap of faith". It would be silly (statistically) to imagine that I am conscious, and yet all others that "look and act like me" are merely chemical reactions evolved to present the appearance of consciousness through clever interactions with the environment.

But ultimately, consciousness is "what it feels like to be conscious", and that is a subjective measure. Only the "beholder" can be ultimately convinced. You say that the comatose person is not conscious, but what criteria are you employing? They do not respond to you. So what? They exhibit no brain waves. OK, brain waves have been shown to correlate strongly with certain patterns of thought. But the lack of brain waves does not constitute "proof" of loss of internal consciousness. I will grant you, it probably does, but that is not really "provable".

You mention that genetic algorithms (and other AI techniques) yield the appearance of some intelligence or cleverness in a "limited domain". True, but as more is learned and these systems are given more general heuristics, that "limited domain" can certainly grow. There is no a priori reason it cannot grow to the point that it is (behaves) "more clever" than humans, in a very general sense. Conscious? Who really cares. Behavior is all that matters, functionally.

I firmly believe that science (and logic) can never answer whether "soul", as you put it, exists or not. But from a scientific viewpoint, it can be of no service either way.

Science proceeds by (hoping to) delineate what can be made predictable (in the broad sense). Actions that might originate from a "beyond physics" somewhere cannot be "dealt with", and thus cannot contribute to the functional formulations.

I tend not to hold to anything "beyond the physical", because I cannot see how a non-physical can affect the physical. The "bridge" would need to be one or the other, and then the problem re-emerges at one end of the bridge: a seemingly impossible "non-physical-to-physical" interface.

(But, nowhere is it written that the universe must obey logical rules, so I am always open to possibilities, philosophically.)

Suppose a very "smart" robot reacts intelligently to your conversations and seems to "care" about you, but after being "struck in the head" its program gets scrambled and it falls into a "stupid loop" (the "program" locks up). The robot is now "comatose", despite the fact that the processor is still cycling. So, "mind" is not merely "any processing", but the manifestation of "effective processing".

I hope this clarifies (at least) my position. I look forward to your thoughts!

Cheers! ____tony b____

Re: Where is Consciousness?
posted on 10/18/2002 4:29 PM by entell

I agree with you that consciousness does not have to be created. It is functionally not required for a robot to experience what it is doing or feeling. In fact, that might be a bad thing if we are going to continue making the robots do things for us. :)

On the other hand, I wonder if simply increasing the speed of processors, and being able to process more and more data in a given time, will ever create truly intelligent entities. For example, Deep Blue can beat Garry Kasparov because it can "process" more moves than Garry can in a given amount of time. In my opinion, that does not make Deep Blue intelligent. It makes it very, very fast, and it makes it "seem" intelligent. I think this is your point too. As long as a machine can pretend it is intelligent by being fast, then who cares that it does not have a consciousness.

I agree with you, except I don't know if it is ever possible to mimic or even surpass human intelligence by simply being fast. In the case of Garry vs. Deep Blue, Deep Blue seems to be more intelligent; however, chess is a game where the number of moves is limited. The number of moves available is a very, very large number (practically infinite for humans), but it is still limited. I can see a computer "seeming" to be more intelligent than a human when the number of possibilities is limited and there is a single solution, as in chess, backgammon, go, or any other such game.

How about when the possibilities are infinitely many? How about when there is more than one answer, with most of the possible answers being pretty much acceptable based on different trade-offs (as in an engineering project)?

I am sure the computer will figure out the answers much faster than a human and "seem" very intelligent. This is also the playing field for genetic programming. It tries to find solutions that match or surpass some criteria. And I am sure it will do a better job as processors get faster, provided that we have a way to describe the problem at hand in terms of chromosomes. Most problems are not very suitable for that, except for simple things. In John R. Koza's genetic programming books (more like encyclopedias), there are a great many examples of how genetic programming can be applied to real-world problems, and he also claims the method re-invented some patented technology. Well, I don't call that intelligence, simply because trying all (or most) possibilities to figure out the best result does not require much intelligence, either from a person or a machine. It is just a more directed search than inventing.

Real intelligence is when Einstein comes up with the theory of relativity, or Edison invents the electric light, or Newton figures out how objects behave... out of the blue... when no such thing was known to exist. That's what I call intelligence!

Therefore, what I call intelligence is when a computer is NOT told everything, yet it can figure out the missing pieces on its own and come up with the solution. In the real world, we don't always know everything needed to solve a problem. I can hear you say genetic programming does not need everything. Artificial neural networks can also re-construct some information based on what is given, so they are good for handwriting recognition etc. Very true. However, consider this: let's imagine an electrical circuit. Since I am an electrical engineer, I picked that example. If I want my "intelligent" computer to generate me a circuit, I am sure it will do a great job as long as I can explain my problem to it in the way it expects. Assuming all went well, and I told it the problem, and it is ready to start investigating possible solutions (with genetic programming or some artificial neural network setup), there is still one problem. It still has to know what "parts" to use in creating the circuit. I have to give it the basic parts to use, from which it might create more complex parts. What if there is yet another basic part that I did not tell it about, and it needs that basic part to create the ultimate best solution? Then what? It will give me a solution or a few solutions, but they won't be the best, and I probably won't know that they are not the best. The computer did not "create" anything. It simply presented the best fit given the inputs. It still missed the ultimate best answer because it could not create an input it was never given, which would have resulted in the best solution. This happens a lot in engineering. For example, the transistor was not invented until 1947. Before then, computers as we know them were impossible. Then came the FETs, which are like transistors but better in many ways. So on and so forth. I think you get the idea.
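To make that concrete, here is a minimal genetic-search sketch in Python (the part names and toy fitness function are invented for illustration, not from any real EDA tool). Whatever it evolves, the result can only ever be an arrangement of the parts it was handed:

    import random

    # Toy "circuit synthesis" by genetic search. A candidate is just a list
    # of part names; fitness stands in for "how well the circuit meets the
    # spec". Note the parts list contains no "FET".
    PARTS = ["resistor", "capacitor", "inductor", "BJT"]

    def fitness(circuit):
        # Pretend the ideal design needs exactly two BJTs and one capacitor.
        return -abs(circuit.count("BJT") - 2) - abs(circuit.count("capacitor") - 1)

    def mutate(circuit):
        c = circuit[:]
        c[random.randrange(len(c))] = random.choice(PARTS)  # swap in a given part
        return c

    def evolve(generations=200, size=30, length=5):
        pop = [[random.choice(PARTS) for _ in range(length)] for _ in range(size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            keep = pop[:size // 2]
            pop = keep + [mutate(random.choice(keep)) for _ in range(size - len(keep))]
        return max(pop, key=fitness)

    print(evolve())  # the best arrangement of the GIVEN parts; never a "FET"

If the spec could only be met with a FET, the search would still return its best BJT arrangement, and nothing in the run would tell you a better part exists.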

When the computer can do that kind of "thinking", I will bow in front of it and accept its intelligence. Please note that this intelligence does not require consciousness, unless consciousness is the root of such intelligence. Once again, in the case of Deep Blue, the universe of all possible moves is already known, so it is a matter of time to pick the best move. That does not require the intelligence I mentioned above. Only speed! In the engineering project example I mentioned, the universe of possible solutions is unknown. The computer might have to "invent" something new to reach the best solution. I don't think it will ever be possible to do this with "IF-THEN-ELSE"s, or fast processors, or artificial neural networks where the output is some mathematical function of the input. The output is always limited by what was on the input, even though the given inputs could be mixed up to form seemingly new outputs. If the inputs are not rich enough to span the entire space of possible outputs, then our intelligent machine has no luck creating certain outputs.


Lastly, the reason I think it is useless to re-create the brain in silicon is that the brain is simply the platform where thoughts and experiences exist. I don't think the brain generates "new" ideas. If it did, that would mean that every thought that can ever exist is already in our brains somewhere. It would mean that the theories Einstein "discovered" already existed in his brain, unknown to him, until he thought them into existence... I think it is quite optimistic to imagine that "new" ideas come from new connections in the brain between neurons, for the reason I explained in the previous paragraph. How could neurons cause a completely new thought to appear from nowhere when that thought was not present in the system in the first place? Well, it might happen in the brain, but the computers that we are trying to make intelligent don't work that way. What goes in comes out, and what can happen to the input is based on what is known. :)

My few cents on the topic. Please feel free to comment on them!

Re: Where is Consciousness?
posted on 10/18/2002 5:42 PM by Grant

A brain surgeon whose book I read recently said words to the effect that consciousness is merely that part of the input and processing that our minds are paying attention to at the moment. In other words, it's what our minds are focused on. When AI has the task of dealing with multiple inputs and complex tasks simultaneously to the point that it can't give full attention to more than a few at a time, it will have to pick and choose which require immediate attention and which can run in the background -- in other words, unconsciously. When the AI mind is divided between this kind of conscious and unconscious processing, it will no doubt be a lot like us. This does not seem like an impossible thing to reproduce in a complex machine.

Grant

Re: Where is Consciousness?
posted on 10/18/2002 5:53 PM by Grant

P.S. Think of an AI built around multiple processors. One processor handles sight, another handles hand and arm or other types of movement, another handles sound or other input from inside the machine and its operations, and a final one that overrides them all selects between them to decide which requires the AI's immediate attention. The mind of the AI would be the communication that takes place between all the processors as it acts on the information coming from all of them simultaneously.
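A minimal sketch of that arbiter idea, in Python (the processor functions and their salience scores are invented stand-ins): each "processor" reports how urgent its input is, the overriding one attends to the loudest, and the rest run on in the background, "unconsciously":

    import random

    # Each "processor" returns (salience, report). The arbiter attends to
    # whichever is most salient; the others keep running unattended.
    def vision():   return (random.random(), "ball moving left")
    def hearing():  return (random.random(), "loud noise behind")
    def motion():   return (random.random(), "left arm position nominal")
    def internal(): return (random.random(), "battery at 72%")

    PROCESSORS = {"sight": vision, "sound": hearing,
                  "movement": motion, "internal": internal}

    def attend_once():
        reports = {name: proc() for name, proc in PROCESSORS.items()}
        focus = max(reports, key=lambda n: reports[n][0])  # highest salience wins
        conscious = reports[focus][1]                      # what the AI is "aware" of
        unconscious = [n for n in reports if n != focus]   # background processing
        return focus, conscious, unconscious

    for _ in range(3):
        print(attend_once())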

Re: Where is Consciousness?
posted on 10/18/2002 8:55 PM by tony_b

entell,

I will be the first to agree that Deep Blue, for all its chess-move "intelligence", is no general intelligence, and nowhere near what we might call an artificial "mind".

And I have long expressed reservations about the ability of any algorithmic system to "be a real mind" in the conscious, sentient sense to which we are accustomed, because the brain is not merely neurons signalling one another, but also involves subtle EM patterns, chemical pressures, etc. Moreover, the hyper-fine energy states between "close" neural wave patterns may be a phenomenon that is sensitive to deep QM indeterminacy, something that a typical transistor-based algorithm support system is specifically designed NOT to be sensitive toward.

But having said all of that, I think we need to question whether qualities attributed to "humans" really are something more, despite our belief that they are.

Today (still) the human brain is many magnitudes more complex than any "computer + program" ever built. The entire internet *may* have capacity, processing power, and complexity rivaling a human brain, but it is nowhere near as well "structured", and was not designed to support any centralized sense of operation or agenda. Simple "speed" will not yield intelligence, even the artificial kind.

However, our brain's very complexity leads us to "manage" things whose "mechanism" is so deeply removed that we often attribute it to supporting things it may not support, or that may not exist at all. That is, lacking a full understanding or appreciation of its mechanisms, it remains "magic" to us.

Creativity is an excellent example. We get a "novel idea", and assume that it somehow was "created", as if it has a substance, rather than being a pattern or arrangement of previous elements whose new "harmony" causes it to become replayed and embedded into memory.

When you need to solve a problem, you do not consciously "search a huge space foolishly", nor do you consciously call upon a "good path-trimming heuristics algorithm" to find a good solution quickly. That does not mean you are not effectively doing the same thing, however creative or "intuitively-directed" it might feel to you.

Deep Blue "could" look through a billion stupid choices to find the best one. Given the heuristics the programmers gave it, it was able to trim away 99.99% of "really stupid" paths, and apply more attention to those that remain. If the programmers wanted to, they could have had it (periodically) investigate 1% of the really-stupid paths anyway, since it might learn new gambits on occasion.
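For what it's worth, that trimming is essentially what alpha-beta pruning does in a game-tree search. A bare-bones sketch over an abstract game (the moves/apply_move/score interface is invented here; Deep Blue's actual machinery was of course far more elaborate):

    import math

    def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, score):
        # moves(state) -> legal moves; apply_move(state, m) -> next state;
        # score(state) -> heuristic value. All three are supplied by the caller.
        ms = moves(state)
        if depth == 0 or not ms:
            return score(state)
        if maximizing:
            best = -math.inf
            for m in ms:
                best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                           alpha, beta, False, moves, apply_move, score))
                alpha = max(alpha, best)
                if beta <= alpha:   # the opponent would never allow this line,
                    break           # so the rest of the subtree dies unexamined
            return best
        best = math.inf
        for m in ms:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, moves, apply_move, score))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

The two break lines are where the vast majority of "really stupid" paths get discarded without ever being looked at.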

What if the programmers, instead, gave it the heuristics to experiment with heuristics in general? Again, seen in isolation at "only that level", it seems like "intelligence in a limited domain - heuristics development", but remember that as it discovers/invents improved heuristics, it is also employing them in (say) chess playing, or who knows what.

Now, it is no longer the programmers who are programming the system about good chess playing, and to say that it cannot "output" anything that was not specifically "input" is harder to maintain from a functional viewpoint. It will have "learned" to play better chess "on its own", viewed from the higher layer of chess-playing.

Allow it to expand its domain to "games in general", or "problems in general", along with the ability to interact with and absorb the environment (as we do), and the boundaries between "inside and outside" begin to break down completely. As the "heuristics space" becomes more general, the kinds of behaviors that can manifest are no more limited than humans are.

The notion that we (individuals) are really separated, and have a real "inside and outside", is a nice conceptual approximation, but may be entirely false. When you think you have generated a new idea, can you honestly claim that nothing you ever absorbed from the "outside" had any influence on the ability of that idea to come forth? More esoterically, if we and the universe are really QM-entangled, then anything we build is likewise entangled, and if it is designed to be QM-sensitive, there will be no plausible rationale for claiming it cannot become as "mindful, willful, and creative" as we believe ourselves to be.

As much as I enjoy the metaphysical, I am not "convinced" that our mind is anything other than a manifestation of pure physics. To claim that we "originate thoughts" is to say that we never absorb outside variety. We may instead be very good pattern-correlators and good harmonic generators from the complex of inputs we receive. If this is the case, then there is no reason "in principle" that a different substrate could not manifest equivalent functionality.

Perhaps, as "conscious manifestations", we are all really a single quantum-connected mind-system, and our feelings about "soul" or "self-hood" are really just misinterpretations of that quality, from individual and limited viewpoints. But I see the "individual soul thing" largely as a way to try to answer "where do I go when my body ceases, what happens to ME", in the hope that there is such an "I" that goes. I do not see other real evidence that "I" is anything different from my current sense-of-the-moment.

If we are purely physics (even if some of the physics is not yet discovered or understood), then we will apply the physics as we discover it. Why assume otherwise?


Cheers! ____tony b____

Re: Where is Consciousness?
posted on 10/18/2002 11:09 PM by entell

tony_b,


I thank you for sharing your thoughts with me.
I think you should write an article or two for this website (perhaps you already have). The ones I have read on this website so far have not been as interesting as our dialog.

It is interesting that you mention how things are one at a quantum level. I know for a fact that many scientists came up with the same inventions/discoveries simultaneously around the world (even though most of the American scientists were given the tiara by historians). I always thought that was a weird coincidence until I started reading about quantum theory and how at that level the boundaries between the basic particles disappear. Assuming that is correct, those of us who are sensitive enough to pick up vibes from one another must be "hearing" the others' thoughts or something along those lines. We are perhaps one at some level, even though at the physical level we live in, we appear to be separate individuals. And perhaps creativity is nothing more than the brain acting as a radio receiver and picking up transmissions from each other and from other places.

It is also interesting that if you read biographies of scientists, most admit that it is when they are not consciously thinking about their problems that the solution arrives... well, at least the ones humble enough to admit it.
Perhaps thought creation is not a conscious activity at all.

Looking at the brain to find the roots of thought and creativity would be worthwhile, I suppose, since it does create, or at least bring forth, what is created.

Since I make computer systems all day long, and I know how they work, I think getting them to do what we can do in terms of creating solutions is the toughest thing there is. Every time I hear a story about some "smart" machine, I read it hoping that someone has found the source of thought, or that someone has built a processor that can do more than jump from state to state. That's all a processor is, by the way, even though it seems magical to many people. It moves from state to state based on its current state and its inputs. Each state is a function it can perform, limited to simple math functions, comparisons and a few basic logic operations. It sometimes amazes me to observe what computers can do today, but when I dig down into it to find the "smartness", all I find is a few digitized inputs connected to some comparison functions and some logic operations. Perhaps a few adds and multiplies. Nothing magical at all.
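That state-to-state picture is just a finite state machine. A toy version in Python (the three-state table is invented, purely to show the shape of the thing):

    # A toy finite state machine: the next state is a pure function of
    # (current state, input), which is all a processor ever does.
    TRANSITIONS = {
        ("idle",    "start"): "running",
        ("running", "pause"): "idle",
        ("running", "halt"):  "stopped",
    }

    def step(state, inp):
        return TRANSITIONS.get((state, inp), state)  # unknown input: stay put

    state = "idle"
    for inp in ["start", "pause", "start", "halt"]:
        state = step(state, inp)
        print(inp, "->", state)  # ends in "stopped"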

My point is that I think most of us are VERY disappointed with AI today, after decades of research, because computers can't even separate junk e-mail from good e-mail. I think we had very high expectations in the first place. The computers today simply lack the capability to come up with "creative" solutions to the problems we have. Their hardware is simply not sophisticated enough. For some time, it looked like if we had enough "if-then-else"s and a fast enough machine equipped with some kind of "learning" algorithm, we could conquer the universe. It turns out that's not the case. This scheme does work well for a limited number of situations. Dr. Rodney Brooks can get robots to behave like animals by having them learn how to walk on their own. We can have computers learn on their own how to read English sentences out loud, just like a baby learns how to speak and read. However, it is much tougher to teach a computer which piece of code is a virus, and which is simply trying to re-order the data on the hard drive so that it can be accessed more easily.

Apparently the brain is much better in many ways at these more difficult issues. Apart from being complex, it seems to have connections to something that allows it to be more than its own parts. I do agree with you that most of the time the solutions we find are re-arrangements of previous knowledge we gained consciously and unconsciously, but I can't believe that 100% of the solutions we find are that way. Inventors, scientists and entrepreneurs always brag about how they had to "re-invent the process" or "re-invent themselves" to do what they did. In fact, I believe such people are the gifted ones who can "somehow" see things differently than the rest of us. They in fact reject what is input to them, and they "create" their own new ways. I am not sure if this rejection is really what is going on in their heads. Perhaps they are indeed re-arranging what is there and presenting it in a completely new way, so it looks all different to us in the end. However, I think it is more likely that they indeed introduce new concepts, thoughts and ideas into the system. I am not sure where they get them, but until we figure out where and how, our computers will be doomed to relay good e-mail together with the junk e-mail. Maybe this "quantum computer" concept will bring some new blood to the field. Yet again, I have my doubts. This 1's and 0's business has worked out well so far, but I doubt it will suffice to create the thinking machines we want. If binary were the ultimate thing, I am sure we would have seen more of it in nature.

As you said, perhaps there is something fundamentally unique about these tiny neurons and what happens at the gaps and with all the electro-chemical reactions going on around them. Perhaps they are small enough to be very sensitive to some other "things" that we can't measure or see in the lab yet. The most amazing thing is that they can look at themselves and be curious about themselves and try to figure out how they work. You would think that they would know how they themselves worked! :)

Re: Where is Consciousness?
posted on 12/27/2002 5:50 PM by Grant

On the subject of binary coding, nature has been doing it for 3.5 billion years. DNA is a form of binary code using a chemical medium. Think of AT as 1 (A and T are always bound to each other) and GC as 0 (also bound together) and you have the equivalent of a Turing tape with 1s and 0s operating a cell/factory to turn out various kinds of products.

A cell is little different from a factory that uses software to produce cars or motherboards. The software is lines of DNA, and the hardware is basically proteins and other chemicals. DNA passes information to RNA, and RNA uses that information to construct a variety of products which are used by the cell to communicate with other cells, fight off invaders, and reproduce itself. Some invaders, though, such as viruses, use the cell's RNA to reproduce themselves, and sometimes destroy the cell in the process, just as a computer virus can destroy the information products used to operate a computer.

So nature does use binary code to create and operate all living things. It always has.

Grant

Re: Where is Consciousness?
posted on 12/27/2002 7:06 PM by tony_b

Grant,

Actually, you need two bits for each of A, C, G, T, since it matters which side of the helix each pair appears on.

That is:

. A - A - C - C - T
. T - T - G - G - A

is NOT the same code as

. A - T - C - G - T
. T - A - G - C - A
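In code, the point is that strand orientation makes the alphabet four symbols wide, so each position costs two bits (a quick Python sketch, not anything from a real bioinformatics library):

    # Two bits per base: it matters whether an A-T pair has the A on the
    # "top" strand or the "bottom" one, so four symbols are needed, not two.
    BITS = {"A": "00", "T": "01", "C": "10", "G": "11"}

    def encode(top_strand):
        return "".join(BITS[base] for base in top_strand)

    print(encode("AACCT"))  # 0000101001
    print(encode("ATCGT"))  # 0001101101 -- a different code, as above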

But whether binary, quaternary, or any other n-ary, the REAL issue is unchanged. By our method of viewing the world, we can break it down into changes between distinct states, or see sequences of items from a distinct "alphabet" of codes.

And indeed, the living cell is very much like a state-machine, at that level of description. AND given a machine that acts on binary code, all "n-ary" codes can be represented, along with the machines to interpret those codes.

BUT, as to the topic of consciousness ... it is not entirely clear that merely reproducing the same "logical sequences" in an alien machine will produce the same result (consciousness, or not).

Given a completely different set of fundamental forces (say, excluding electric charge) you might (conceptually) invent a family of particles that could form a "chemistry", form the "logical equivalent" of A, C, G, T, and even (thus) produce the equivalent of living cells (and animals, etc). But if the phenomenon of "consciousness" is (for some peculiar reason) keyed to only "electrical behaviors/manifestations", these would be absent from the new alien "physics".

So, the debate rages about the nature of the "quality" of consciousness that we "feel" we possess (experience), and what is required to support it.

And all of this, despite any homomorphic logical identity of machines.

A silicon brain may well support consciousness ... but only that "brain" will know it. The rest of us will simply be convinced ... or not.


Cheers! ____tony b____

Re: Where is Consciousness?
posted on 12/27/2002 11:12 PM by Grant

Tony,

For an interesting look at the roots of consciousness, you might give this paper a look. It's a publication of the Cognitive Science department of the University of California at San Diego (UCSD).

http://cogsci.ucsd.edu/cogsci/publications/97_05.pdf

Grant

Re: Where is Consciousness?
posted on 12/28/2002 12:54 AM by tony_b

Grant,

The paper you cite, "Perceiving a New Category: The Neurological Basis of Perceptual Categorization", is interesting in relating lower-level functioning/perception to higher-level conceptualizations (models and categories).

But all of it could be ascribed to many algorithmic processes that we generally believe to be quite "unconscious" in the sense we believe ourselves to "feel conscious".

In other words, the paper is good at formalisms, but does not really address "the subjective experience of consciousness" at all.

The paper argues that (our) low-level "perceptions" do not merely precede the later "conceptualization of categories", but that the very categorization is part and parcel of the effective operation of perception in the first place. OK, I can buy that. But we could use this insight to develop a better automatic garage-door opener, and never come to believe that the device "experiences consciousness".

I tend to the view that consciousness is a necessarily subjective experience, and that the only one who "knows" it is present is "the beholder". Everyone else will either be convinced, or not.

Can consciousness manifest in an alien substrate? I don't SEE why not.

Will understanding the "formal" mechanics and organization of data-flows in a biological brain AID in producing systems that behave as-if conscious? Sure, it could only help.

Will any amount of such theorizing and engineering produce a PROOF that the consciously-behaving product actually EXPERIENCES consciousness, as we feel we do? I don't see that as possible. I believe that only "that entity" will know, beyond a doubt, whether it is a "waking awareness".

And we will have to trust it, or not, when it behaves well enough to challenge us with the very question.

Cheers! ____tony b____

Re: Where is Consciousness?
posted on 12/28/2002 10:11 AM by Grant

Tony,

You may be trying to make more out of consciousness than is really there. I think what we call consciousness is just the brain function we are paying attention to at any moment. All of our senses and all of our internal brain functions are working all the time. But when we pick one out to pay attention to, such as moving a finger or identifying a sound, we become conscious of that experience at that time. I believe consciousness is the act of paying attention. Of focusing that portion of the brain which coordinates the activities of mind and body on specific tasks to concentrate those activities on a specific goal. Someone throws a ball and we coordinate eye and hand and awareness of time and distance to catch it. We are always engaged in something, even when sitting still. What the mind is engaged in is consciousness. What it is not specifically engaged in we call the unconscious. The brain processes all of the sights and sounds and smells, tastes and feelings including the memories triggered by those inputs and we continue to process the data about these things in the background, but we only remember what we were paying attention to. What we remember is what we were conscious of. And that's consciousness. IMHO.

Grant

Re: Where is Consciousness?
posted on 12/28/2002 10:18 PM by tony_b

Grant,

You are absolutely right about that. Yes, that which we are conscious of is the activity to which we are "paying attention".

But what defines "paying attention"?

Consider the robot Aibo (or any of its kin). It has numerous processors taking care of mundane housekeeping, while some few of them attempt to manage the "main task", which might be to track the motion of a ball in order to kick it in a given direction. Lower-order processors help maintain balance and orientation, while the main processors "pay attention" to the ball.

So is Aibo conscious? Maybe. The mere fact that Aibo does not (likely) think "why am I chasing this silly ball" certainly makes him "seem" less than conscious, but that may be just a human chauvinism.

Your PC may have only one physical processor, but it is timesharing many tasks, one of which is just to keep that little display clock on the correct time, another is to manage which process is using what portion of memory so they do not step on each other. But it pays its HIGHEST attention to the GUI, for it must respond to your typing and mouse-clicks immediately. In a sense, it is "paying attention" to you (via the interface.)
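That division of attention can be sketched as a toy time-slicing scheduler in Python (task names and periods invented for illustration; real OS schedulers are far more involved). The GUI's short period means it gets the processor far more often than the housekeeping tasks:

    import heapq

    # Toy time-slicing: the lowest time-counter runs next; after a task
    # runs, its counter advances by its "period", so urgent tasks (small
    # period, like the GUI) get the CPU far more often than housekeeping.
    def schedule(tasks, slices=8):
        queue = [(0, name, period, work) for name, period, work in tasks]
        heapq.heapify(queue)
        for _ in range(slices):
            t, name, period, work = heapq.heappop(queue)
            work()                                   # this task gets the slice
            heapq.heappush(queue, (t + period, name, period, work))

    schedule([
        ("gui",    1, lambda: print("respond to typing/mouse")),
        ("clock",  4, lambda: print("tick the display clock")),
        ("memory", 8, lambda: print("tidy memory tables")),
    ])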

So, is your PC "conscious"?

My bottom line is, the sensation that I have of being conscious, what it "feels like" to me, is something I can only surmise is similar for you, and I would have no idea at all whether even the most "paying attention" robot actually "felt" as I do as a consciousness. It might, and it might not.

I say, you would have to "be it", to know for certain.

Cheers! ____tony b____

Re: Where is Consciousness?
posted on 12/27/2002 12:55 PM by stevenm@highwire.com

I don't get the logic of your last paragraph... You say that the brain does not generate new ideas because that would mean all ideas must pre-exist in the brain. I take 'generate' to mean creating something that did not exist before. To 'discover', on the other hand, would mean to identify a pre-existent fact... I know this could just be semantics, but if I switch the words to conform to my understanding of them, it means that the human brain generates new ideas all the time. Ideas that conform with varying degrees to perceived reality, and live or die based on a lot of factors, but hopefully the greatest of these factors is how well they model reality.

Can a silicon brain achieve creativity? I've found at least one source that suggests that they can. See http://www.imagination-engines.com/

Re: Where is Consciousness?
posted on 11/13/2002 5:13 AM by Ankit Narang

Hello
I am Ankit from India. I am really short of time right now. I am doing a B.Tech in Information and Communication Technology at DA-IICT Gandhinagar (www.da-iict.org), and we have to submit a project for a course called Introduction to Design. My topic for the design is "Computer processor vs. Human brain", and I have to come up with a (concrete) product. And I don't know why I am mailing you. The point I wanted to emphasise is that it would be shit on our part to say that we can't replicate the human brain. I think and believe that we can and shall.
Goodbye
Ankit Narang
India

Re: Where is Consciousness?
posted on 06/11/2003 5:26 PM by metaphysics

Yes, the possibility of replicating the actual functioning of a human brain is plausible, but the brain has aspects that seem to demonstrate that there is another component, such as instincts, that seems to be beyond the simple physical elements of the brain. We may replicate the physical elements of the brain, but the pattern of its functioning is just so complex that even now we hardly know what abilities the brain has.

Re: Where is Consciousness?
posted on 06/11/2003 5:52 PM by /:setAI

this notion that unconscious processes like intuition and epiphany must be "other" than neurofunction is based on poor reasoning and a poor examination of the evidence- [Penrose makes this error]

all that "hidden" unconscious processes like intuition imply is that the structures responsible for generating the "idea" are simply not WHOLLY interconnected with the modular hierarchical process of consciousness- basically- there are regions in the brain- particularly the ancient/emotional areas like the limbic system- that operate in a different way than the rich interconnectedness of the cortexes- these primitive structures were adapted by survival pressures in less-conscious animals- thus they are more "automatic" when certain signals are sent to their inputs- their sub-modules are relatively isolated from the networks of consciousness- so the conclusions and commands they output seem as if they have no known source! "the idea came from nowhere"- but it DIDN'T- only the processes which formulate these intuitions and epiphanies are hidden-

but why are they hidden? because much of these processes involve direct reactions and hard-wired survival/planning functions which wouldn't make very much sense in a conscious memory-web comparison process- it is no coincidence that intuitions/epiphanies arise out of activity in the primitive regions of the brain where the rules are more automatic

Dennett's Dopey Idea
posted on 04/15/2003 5:13 PM by Clifford

When I read that Steven Rose (the "British Carl Sagan", and a man who literally WROTE THE BOOK on Darwin's theory of natural selection) says Darwin's books state NOTHING about consciousness, I was puzzled.

But then Noam Chomsky and physicist Michio Kaku both told me that Dennett's assertion that Charles Darwin proved strong AI is a contention that just plain BLOWS.

Aside from Dennett's science-fiction riff on the Darwin legend, there's some question as to why Chuck got buried next to Isaac Newton. The British Crown knew Darwin was one of history's MAJOR PLAYERS, but someone forgot to tell the lab biologists- you know, the people who actually DO things instead of WRITE things. Lab biologists make NO use of Darwin's theory. His book just sits on their shelf next to Aristotle's "Physics", and everybody rests easier knowing that the universe has been EXPLAINED.

But if you're the type who's into UFOs and channeling, you might dare take a peek at MENSA science writer Richard Milton's book, "Shattering the Myths of Darwinism". Oxford gasbag Richard Dawkins threatened to have Milton's ass hauled off to a Psychiatric "hospital".

What Dennett, Dawkins and Francis Crick REALLY should do is pack into the Evangelical Atheism Mystery Tour Bus and let everybody in on the painful TRUTH. Maybe they could hook up with G. Gordon Liddy and slap around ideas about life in another mail-order video debate.

Re: Dennett's Dopey Idea
posted on 04/15/2003 5:57 PM by subtillioN

"Lab biologists make NO use of Darwin's theory."

Lab biology doesn't take place on evolutionary time scales. Nevertheless, the small-scale details of evolution have been witnessed in the laboratory.

Re: Dennett's Dopey Idea
posted on 07/18/2004 6:28 AM by nomade

A very amusing and enjoyable post, exposing the tendency of Dennett & Co. to interpret science in order to justify a particular metaphysic- a procedure based on an unscientific attitude.

Re: Dennett's Dopey Idea
posted on 07/20/2004 2:57 AM by keithprosser

After a few thousand years of kicking around the same ideas, I think it's time we tried to find out why every philosopher since Aristotle has gone around in circles regarding consciousness.

The idea that consciousness is 'something' is based on the observation (call it that) that one is conscious or 'has consciousness'. We know better than that for everything else. Perception of X does not mean X exists. It only means X is 'neurally represented', so suppose we say that consciousness is also just that - the neural (mis)representation of mechanistic brain action?

The brain does interpret neural representations - that is its job. If it represents its own function (which is in reality just a deterministic, if complex, process) as something strange and mysterious, then we will perceive 'consciousness' as strange and mysterious. But that does not mean there is anything strange and mysterious going on - we only perceive that there is.

The brain (or a machine) does not need to implement 'subjective consciousness' to account for self-perception of that strange phenomenon. (And there is only evidence of it being self-perceived, and none for its actual existence).

All the brain has to do is neurally (mis)represent its own function and be able to interpret such neural representations, both of which are (almost!) indisputably basic actions of the brain, and presumably this would be much easier to implement artificially than real 'subjective consciousness', which I deny the human brain really does/has.

Does this help?

Re: Consciousness in Human and Robot Minds
posted on 11/15/2004 11:30 AM by faiz_is_studying

Of course it is theoretically possible for a robot to obtain a consciousness. This whole essay is trying to prove the obvious. Dennett has carefully manoeuvred his wording so that he is only required to prove that robots can acquire a consciousness if there were an enormous amount of funding and time available. Money can buy just about everything. It is obvious that these 'economic reasons' can theoretically make it possible for programmers to give robots a consciousness.

Although I do believe it is possible for a robot to have a consciousness, it will not be happening any time in the near or distant future. I believe that the four arguments made against the possibility of a conscious robot were manipulated by Dennett to appear unintelligent. In fact, they are logical arguments, and I feel it is only fair to defend them in more detail.

'(1) Robots are purely material things, and consciousness requires immaterial mind-stuff.'

This statement does not even use proper grammar; what grammar it does use sounds like a second-grade child wrote it. Readers should not be fooled by Dennett's manipulative wording into believing what he writes.

'(2) Robots are inorganic (by definition), and consciousness can exist only in an organic brain.'

All life known to mankind is organic, and as such our brains can only measure that which we know. As a result, this argument may be true, since we have only experienced evidence of consciousness within organic beings. Thus, there is no proof that inorganic robots can develop a consciousness.

'(3) Robots are artefacts, and consciousness abhors an artifact; only something natural, born not manufactured, could exhibit genuine consciousness.'

Consciousness that is genuine cannot be interpreted using a mathematical formula developed by a computer scientist. A genuine consciousness can never be copied, as it is unique to every person. Although Cog uses experience as a learning basis, there are too many possible actions that will affect Cog over its lifetime.

Furthermore, a machine cannot experience the same things that real humans can. A machine does not know the feeling of eating food, drinking water, or even getting angry. In addition, sometimes a genuine consciousness would like to forget about its worries in life and become intoxicated with alcohol. A machine would have major problems obtaining a real and genuine consciousness.

'(4) Robots will always just be much too simple to be conscious.'

Not enough explanation was given to support this statement. The human mind is extremely complex and is dependent on the actions that occur around one's life. These external actions are countless. There are way too many for a computer scientist to keep track of.

Human feelings are like an extensive series of webs, interlinked and weaved together. There can be random feelings going on at any given moment in the human mind, mixed with other random feelings. Once again, it would be very difficult, if not nearly impossible, for a computer scientist to program something this complex. There are simply too many feelings that can go on in one's mind.

One extremely difficult component of the human mind to create is affection. One can become affectionate with other individuals for different reasons. I do not believe it is necessarily experience in life that can decide whom you become affectionate with. Some things are always left to be totally unexpected.

Personally, from a computer scientist's point of view, I feel it is a very difficult task to give a robot a consciousness. Just allowing a robot to learn things by itself from experience will not work alone. There would need to be billions of static lines of code to make a robotic consciousness become genuine. Maintaining this code would be just about impossible. However, given the brightest resources to work on this task, I am sure that in five hundred years it could happen.

Re: Consciousness in Human and Robot Minds
posted on 11/15/2004 2:06 PM by grantcc

A couple of points your argument overlooks.

One, you imply that a robot cannot be organic. Once we learn to use the DNA and proteomic code, there seems to me to be no limit to how we can combine metal and organic substances to produce robotic beings. We just have to stop thinking of robots as tin men who can only do what they are preprogrammed to do.

Two, you imply that programming will only be done in a linear fashion as it is being done right now. What is to keep us from developing robots that program themselves the same way we program ourselves? The brain is a complex adaptive system that uses experience and evolutionary (in the sense of adapting itself to survive in whatever environment it finds itself) adjustments to program itself.

If a few thousand brain cells from a mouse can control an airplane, what is to keep a few billion brain cells from controlling a robot? By "control" here I mean "self control" rather than control by an external entity. We could just give a robot an assignment and expect it to carry it out with a pre-program that consists of learning and practice rather than stuffing ones and zeros into its head. The airplane that flies with the aid of mouse brain cells is as much a robot as a Japanese toy that can walk and dance.

Re: Consciousness in Human and Robot Minds
posted on 01/12/2005 10:46 AM by anyguy

I think consciousness is something that comes with your ability to control your environment. Toolmaking or technology. Put some more order into your environment (R. Kurzweil, Law of Time and Chaos) and soon you will reach a perception of time. Because you have control over the sequence of events.

Consciousness is an inevitable phenomenon, simply because at a certain point you have so much control over and intervention with the physical world around you that you suddenly find yourself to be 'I', the very subject(1). Since you have the capacity to act like one.

Consciousness may be defined as constructing a conceptual interaction among self, universe and time. Time comes with consciousness, when you have so much control over the universe (or, in a way, your self) - let's say, being able to mate or eat anytime you want, either because you have better tools or the intellect to hunt or collect and keep - that a different or more meaningful sequence of events comes to your perception. Simply because you have, to some degree, control over it, enough to make it a sequence - in other words, to create information.

(1) Gottlob Frege, the great logician of the early 20th century, made the obvious but crucial observation that a first-person subject has to be the subject of something. In which case we can ask, what kind of something is up to doing the job? What kind of thing is of sufficient metaphysical weight to supply the experiential substrate of a self - or, at any rate, a self worth having?

R. Kurzweil's:

Law of Accelerating Returns: As order exponentially increases, time exponentially speeds up (i.e., the time interval between salient events grows shorter as time passes).
Law of Increasing Chaos: As chaos exponentially increases, time exponentially slows down (i.e., the time interval between salient events grows longer as time passes).
Law of Time and Chaos: In a process, the time interval between salient events (i.e., events that change the nature of the process, or significantly affect the future of the process) expands or contracts along with the amount of chaos.

Re: Consciousness in Human and Robot Minds
posted on 07/25/2007 1:24 AM by undercovers

Well, obviously, consciousness will have to be re-defined if we are to include robots within our current terminology! Are we defining consciousness as the ability to learn and adapt? Is there more to it? Humans and robots are incredibly different, and if we change our ideas of what both are and consolidate them together, then we will have to change our philosophy as well. Are we just as programmed as the robots that we create, or is there something more to it?

I guess what I'm saying is, humans make decisions based on past experiences and how they have previously seen others act in certain situations. If robots are doing the exact same thing, what is the difference? I feel that there is much more, but I don't want to assume too much and get a big head about my species!!

"Robots are well-thought-of entities that are created with some initial goal in mind...My point is: Robots are built for entire different purposes than we (the Humans) carry out generally."

Re: Consciousness in Human and Robot Minds
posted on 07/25/2007 6:23 AM by doojie

Humans do not have such well thought out goals because there is a need for adaptation to circumstance.

Since humans are embedded in a genetic system that codes similarly to other genetic systems, we tend to enter into competition among systems, so that intelligence and consciousness are also embedded within those systems.

The flaw in consciousness is when we assume that we can take the "software" of our symbol system and extend it in such a way that it transcends the physical substrate in which we are embedded.

Religion and government follow this process, extending a symbol system over the "neurons" of the people themselves and expecting the symbol system to expand, but the result is a crash, in the form of revolution or speciation of systems.

Essentially what we recognize as consciousness reacts to this process of crash and re-crash, and we adapt to that system using our own symbol system at a more individual level.

But there is always a tendency toward rapid growth in some systems, and we select those traits that foster even greater growth (war, authority, tyranny), which are usually mechanical processes geared to an ideal and created to be resistant to change.

Robots are programmed with these "ideals" in mechanical fashion, and are not embedded in a greater genetic or symbolic system.

The problem with conscious robots is that consciousness operates at so many levels, even transcending the group and forcing us to react in adaptive processes. How to do that with robots?