Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0282.html

Consciousness
by John Searle

Can consciousness be measured scientifically? What exactly is consciousness? John Searle approaches the scientific investigation of consciousness and its possible neurobiological roots from a philosophical perspective.


Abstract

Originally published October 8, 1999 as an academic paper, University of California at Berkeley. Posted on KurzweilAI.net August 13, 2001. The article is also available on John Searle's home page.

Until very recently, most neurobiologists did not regard consciousness as a suitable topic for scientific investigation. This reluctance was based on certain philosophical mistakes, primarily the mistake of supposing that the subjectivity of consciousness made it beyond the reach of an objective science. Once we see that consciousness is a biological phenomenon like any other, then it can be investigated neurobiologically. Consciousness is entirely caused by neurobiological processes and is realized in brain structures. The essential trait of consciousness that we need to explain is unified qualitative subjectivity. Consciousness thus differs from other biological phenomena in that it has a subjective or first-person ontology, but this subjective ontology does not prevent us from having an epistemically objective science of consciousness. We need to overcome the philosophical tradition that treats the mental and the physical as two distinct metaphysical realms. Two common approaches to consciousness are those that adopt the building block model, according to which any conscious field is made of its various parts, and the unified field model, according to which we should try to explain the unified character of subjective states of consciousness. These two approaches are discussed and reasons are given for preferring the unified field theory to the building block model. Some relevant research on consciousness involves the subjects of blindsight, the split-brain experiments, binocular rivalry, and gestalt switching.

I. Resistance to the Problem

As recently as two decades ago there was little interest among neuroscientists, philosophers, psychologists and cognitive scientists generally in the problem of consciousness. Reasons for the resistance to the problem varied from discipline to discipline. Philosophers had turned to the analysis of language, psychologists had become convinced that a scientific psychology must be a science of behavior, and cognitive scientists took their research program to be the discovery of the computer programs in the brain that, they thought, would explain cognition. It seemed especially puzzling that neuroscientists should be reluctant to deal with the problem of consciousness, because one of the chief functions of the brain is to cause and sustain conscious states. Studying the brain without studying consciousness would be like studying the stomach without studying digestion, or studying genetics without studying the inheritance of traits. When I first got interested in this problem seriously and tried to discuss it with brain scientists, I found that most of them were not interested in the question.

The reasons for this resistance were various but they mostly boiled down to two. First, many neuroscientists felt--and some still do--that consciousness is not a suitable subject for neuroscientific investigation. A legitimate brain science can study the microanatomy of the Purkinje cell, or attempt to discover new neurotransmitters, but consciousness seems too airy-fairy and touchy-feely to be a real scientific subject. Others did not exclude consciousness from scientific investigation, but they had a second reason: "We are not ready" to tackle the problem of consciousness. They may be right about that, but my guess is that a lot of people in the early 1950s thought we were not ready to tackle the problem of the molecular basis of life and heredity. They were wrong, and I suggest the same may hold for the current question: the best way to get ready to deal with a research problem may be to try to solve it.

There were, of course, famous earlier twentieth century exceptions to the general reluctance to deal with consciousness, and their work has been valuable. I am thinking in particular of the work of Sir Charles Sherrington, Roger Sperry, and Sir John Eccles.

Whatever was the case 20 years ago, today many serious researchers are attempting to tackle the problem. Among neuroscientists who have written recent books about consciousness are Cotterill (1998), Crick (1994), Damasio (1999), Edelman (1989, 1992), Freeman (1995), Gazzaniga (1988), Greenfield (1995), Hobson (1999), Libet (1993), and Weiskrantz (1997). As far as I can tell, the race to solve the problem of consciousness is already on. My aim here is not to try to survey this literature but to characterize some of the neurobiological problems of consciousness from a philosophical point of view.

II. Consciousness as a Biological Problem

What exactly is the neurobiological problem of consciousness? The problem, in its crudest terms, is this: How exactly do brain processes cause conscious states and how exactly are those states realized in brain structures? So stated, this problem naturally breaks down into a number of smaller but still large problems: What exactly are the neurobiological correlates of conscious states (NCC), and which of those correlates are actually causally responsible for the production of consciousness? What are the principles according to which biological phenomena such as neuron firings can bring about subjective states of sentience or awareness? How do those principles relate to the already well understood principles of biology? Can we explain consciousness with the existing theoretical apparatus or do we need some revolutionary new theoretical concepts to explain it? Is consciousness localized in certain regions of the brain or is it a global phenomenon? If it is confined to certain regions, which ones? Is it correlated with specific anatomical features, such as specific types of neurons, or is it to be explained functionally with a variety of anatomical correlates? What is the right level for explaining consciousness? Is it the level of neurons and synapses, as most researchers seem to think, or do we have to go to higher functional levels such as neuronal maps (Edelman 1989, 1992), or whole clouds of neurons (Freeman 1995), or are all of these levels much too high and we have to go below the level of neurons and synapses to the level of the microtubules (Penrose 1994 and Hameroff 1998a, 1998b)? Or do we have to think much more globally in terms of Fourier transforms and holography (Pribram 1976, 1991, 1999)?

As stated, this cluster of problems sounds similar to any other such set of problems in biology or in the sciences in general. It sounds like the problem concerning microorganisms: How, exactly, do they cause disease symptoms and how are those symptoms manifested in patients? Or the problem in genetics: By what mechanisms exactly does the genetic structure of the zygote produce the phenotypical traits of the mature organism? In the end I think that is the right way to think of the problem of consciousness--it is a biological problem like any other, because consciousness is a biological phenomenon in exactly the same sense as digestion, growth, or photosynthesis. But unlike other problems in biology, there is a persistent series of philosophical problems that surround the problem of consciousness and before addressing some current research I would like to address some of these problems.

III. Identifying the Target: The Definition of Consciousness.

One often hears it said that "consciousness" is frightfully hard to define. But if we are talking about a definition in common sense terms, sufficient to identify the target of the investigation, as opposed to a precise scientific definition of the sort that typically comes at the end of a scientific investigation, then the word does not seem to me hard to define. Here is the definition: Consciousness consists of inner, qualitative, subjective states and processes of sentience or awareness. Consciousness, so defined, begins when we wake in the morning from a dreamless sleep and continues until we fall asleep again, die, go into a coma or otherwise become "unconscious." It includes all of the enormous variety of the awareness that we think of as characteristic of our waking life. It includes everything from feeling a pain, to perceiving objects visually, to states of anxiety and depression, to working out crossword puzzles, playing chess, trying to remember your aunt's phone number, arguing about politics, or to just wishing you were somewhere else. Dreams on this definition are a form of consciousness, though of course they are in many respects quite different from waking consciousness.

This definition is not universally accepted and the word consciousness is used in a variety of other ways. Some authors use the word only to refer to states of self-consciousness, i.e. the consciousness that humans and some primates have of themselves as agents. Some use it to refer to the second-order mental states about other mental states; so according to this definition, a pain would not be a conscious state, but worrying about a pain would be a conscious state. Some use "consciousness" behavioristically to refer to any form of complex intelligent behavior. It is, of course, open to anyone to use any word any way he likes, and we can always redefine consciousness as a technical term. Nonetheless, there is a genuine phenomenon of consciousness in the ordinary sense, however we choose to name it; and it is that phenomenon that I am trying to identify now, because I believe it is the proper target of the investigation.

Consciousness has distinctive features that we need to explain. Because I believe that some, not all, of the problems of consciousness are going to have a neurobiological solution, what follows is a shopping list of what a neurobiological account of consciousness should explain.

IV. The Essential Feature of Consciousness: The Combination of Qualitativeness, Subjectivity and Unity

Consciousness has three aspects that make it different from other biological phenomena, and indeed different from other phenomena in the natural world. These three aspects are qualitativeness, subjectivity, and unity. I used to think that for investigative purposes we could treat them as three distinct features, but because they are logically interrelated, I now think it best to treat them together, as different aspects of the same feature. They are not separate because the first implies the second, and the second implies the third. I discuss them in order.

Qualitativeness

Every conscious state has a certain qualitative feel to it, and you can see this clearly if you consider examples. The experience of tasting beer is very different from hearing Beethoven's Ninth Symphony, and both of those have a different qualitative character from smelling a rose or seeing a sunset. These examples illustrate the different qualitative features of conscious experiences. One way to put this point is to say that for every conscious experience there is something that it feels like, or something that it is like to have that conscious experience. Nagel (1974) made this point over two decades ago when he pointed out that if bats are conscious, then there is something that "it is like" to be a bat. This distinguishes consciousness from other features of the world, because in this sense, for a nonconscious entity such as a car or a brick there is nothing that "it is like" to be that entity. Some philosophers describe this feature of consciousness with the word qualia, and they say there is a special problem of qualia. I am reluctant to adopt this usage, because it seems to imply that there are two separate problems, the problem of consciousness and the problem of qualia. But as I understand these terms, "qualia" is just a plural name for conscious states. Because "consciousness" and "qualia" are coextensive, there seems no point in introducing a special term. Some people think that qualia are characteristic only of perceptual experiences, such as seeing colors and having sensations such as pains, but that there is no qualitative character to thinking. As I understand these terms, that is wrong. Even conscious thinking has a qualitative feel to it. There is something it is like to think that two plus two equals four. There is no way to describe it except by saying that it is the character of thinking consciously "two plus two equals four". 
But if you believe there is no qualitative character to thinking that, then try to think the same thought in a language you do not know well. If I think in French "deux et deux fait quatre," I find that it feels quite different. Or try thinking, more painfully, "two plus two equals one hundred eighty-seven." Once again I think you will agree that these conscious thoughts have different characters. However, the point must be trivial; that is, whether or not conscious thoughts are qualia must follow from our definition of qualia. As I am using the term, thoughts definitely are qualia.

Subjectivity

Conscious states only exist when they are experienced by some human or animal subject. In that sense, they are essentially subjective.

I used to treat subjectivity and qualitativeness as distinct features, but it now seems to me that properly understood, qualitativeness implies subjectivity, because in order for there to be a qualitative feel to some event, there must be some subject that experiences the event. No subjectivity, no experience. Even if more than one subject experiences a similar phenomenon, say two people listening to the same concert, all the same, the qualitative experience can only exist as experienced by some subject or subjects. And even if the different token experiences are qualitatively identical, that is they all exemplify the same type, nonetheless each token experience can only exist if the subject of that experience has it. Because conscious states are subjective in this sense, they have what I will call a first-person ontology, as opposed to the third-person ontology of mountains and molecules, which can exist even if no living creatures exist. Subjective conscious states have a first-person ontology ("ontology" here means mode of existence) because they only exist when they are experienced by some human or animal agent. They are experienced by some "I" that has the experience, and it is in that sense that they have a first-person ontology.

Unity

All conscious experiences at any given point in an agent's life come as part of one unified conscious field. If I am sitting at my desk looking out the window, I do not just see the sky above and the brook below shrouded by the trees, and at the same time feel the pressure of my body against the chair, the shirt against my back, and the aftertaste of coffee in my mouth; rather, I experience all of these as part of a single unified conscious field. This unity of any state of qualitative subjectivity has important consequences for a scientific study of consciousness. I say more about them later on. At present I just want to call attention to the fact that the unity is already implicit in subjectivity and qualitativeness for the following reason: If you try to imagine that my conscious state is broken into 17 parts, what you imagine is not a single conscious subject with 17 different conscious states but rather 17 different centers of consciousness. A conscious state, in short, is by definition unified, and the unity will follow from the subjectivity and the qualitativeness, because there is no way you could have subjectivity and qualitativeness except with that particular form of unity.

There are two areas of current research where the aspect of unity is especially important. These are first, the study of the split-brain patients by Gazzaniga (1998) and others (Gazzaniga, Bogen, and Sperry 1962, 1963), and second, the study of the binding problem by a number of contemporary researchers. The interest of the split-brain patients is that both the anatomical and the behavioral evidence suggest that in these patients there are two centers of consciousness that after commissurotomy are communicating with each other only imperfectly. They seem to have, so to speak, two conscious minds inside one skull.

The interest of the binding problem is that it looks like this problem might give us in microcosm a way of studying the nature of consciousness, because just as the visual system binds all of the different stimulus inputs into a single unified visual percept, so the entire brain somehow unites all of the variety of our different stimulus inputs into a single unified conscious experience. Several researchers have explored the role of synchronized neuron firings in the range of 40 Hz to account for the capacity of different perceptual systems to bind the diverse stimuli of anatomically distinct neurons into a single perceptual experience (Llinas 1990; Llinas and Pare 1991; Llinas and Ribary 1992, 1993; Singer 1993, 1995; Singer and Gray 1995). For example, in the case of vision, anatomically separate neurons specialized for such things as line, angle and color all contribute to a single, unified, conscious visual experience of an object. Crick (1994) extended the proposal for the binding problem to a general hypothesis about the NCC. He put forward a tentative hypothesis that the NCC consists of synchronized neuron firings in the general range of 40 Hz in various networks in the thalamocortical system, specifically in connections between the thalamus and layers four and six of the cortex.
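The binding-by-synchrony idea can be made concrete with a toy simulation. The sketch below is my own illustration, not Crick's model or anything from the literature cited above: it uses a Kuramoto model of coupled oscillators (an assumption on my part; it is a standard idealization, not a model of real neurons) with natural frequencies jittered around 40 Hz, and measures phase coherence. With mutual coupling, initially independent oscillators pull toward near-unison, a formal analogue of anatomically distinct neurons coming to fire in synchrony.

```python
import math
import random
import cmath

def order_parameter(phases):
    """Phase coherence r in [0, 1]: 1 means all phases aligned."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

def simulate(n=20, coupling=30.0, seconds=1.0, dt=1e-3, seed=0):
    """Euler-integrate a Kuramoto model of n oscillators near 40 Hz.

    Returns the final phase coherence of the population.
    """
    rng = random.Random(seed)
    # Natural frequencies jittered around 40 Hz, in rad/s.
    omega = [2 * math.pi * rng.gauss(40.0, 1.0) for _ in range(n)]
    # Random initial phases: the population starts incoherent.
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(int(seconds / dt)):
        # Mean field: r is the current coherence, psi the mean phase.
        mean_field = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(mean_field), cmath.phase(mean_field)
        # Each oscillator is pulled toward the mean phase psi.
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return order_parameter(theta)

r_sync = simulate(coupling=30.0)  # strong mutual coupling
r_free = simulate(coupling=0.0)   # independent oscillators
```

With the coupling switched on the coherence should approach 1, while with it switched off the same population stays incoherent; the binding hypothesis turns on exactly this contrast between synchronized and unsynchronized firing.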

This kind of instantaneous unity has to be distinguished from the organized unification of conscious sequences that we get from short term or iconic memory. For nonpathological forms of consciousness at least some memory is essential in order that the conscious sequence across time can come in an organized fashion. For example, when I speak a sentence I have to be able to remember the beginning of the sentence at the time I get to the end if I am to produce coherent speech. Whereas instantaneous unity is essential to, and is part of, the definition of consciousness, organized unity across time is essential to the healthy functioning of the conscious organism, but it is not necessary for the very existence of conscious subjectivity.

This combined feature of qualitative, unified subjectivity is the essence of consciousness and it, more than anything else, is what makes consciousness different from other phenomena studied by the natural sciences. The problem is to explain how brain processes, which are objective, third-person biological, chemical, and electrical processes, produce subjective states of feeling and thinking. How does the brain get us over the hump, so to speak, from events in the synaptic cleft and the ion channels to conscious thoughts and feelings? If you take seriously this combined feature as the target of explanation, I believe you get a different sort of research project from what is currently the most influential. Most neurobiologists take what I will call the building block approach: Find the NCC for specific elements in the conscious field such as the experience of color, and then construct the whole field out of such building blocks. Another approach, which I will call the unified field approach, would take the research problem to be one of explaining how the brain produces a unified field of subjectivity to start with. On the unified field approach, there are no building blocks; rather, there are just modifications of the already existing field of qualitative subjectivity. I say more about this later.

Some philosophers and neuroscientists think we can never have an explanation of subjectivity: We can never explain why warm things feel warm and red things look red. To these skeptics there is a simple answer: We know it happens. We know that brain processes cause all of our inner qualitative, subjective thoughts and feelings. Because we know that it happens we ought to try to figure out how it happens. Perhaps in the end we will fail but we cannot assume the impossibility of success before we try.

Many philosophers and scientists also think that the subjectivity of conscious states makes it impossible to have a strict science of consciousness. For, they argue, if science is by definition objective, and consciousness is by definition subjective, it follows that there cannot be a science of consciousness. This argument is fallacious. It commits the fallacy of ambiguity over the terms objective and subjective. Here is the ambiguity: We need to distinguish two different senses of the objective-subjective distinction. In one sense, the epistemic sense ("epistemic" here means having to do with knowledge), science is indeed objective. Scientists seek truths that are equally accessible to any competent observer and that are independent of the feelings and attitudes of the experimenters in question. An example of an epistemically objective claim would be "Bill Clinton weighs 210 pounds". An example of an epistemically subjective claim would be "Bill Clinton is a good president". The first is objective because its truth or falsity is settleable in a way that is independent of the feelings and attitudes of the investigators. The second is subjective because it is not so settleable. But there is another sense of the objective-subjective distinction, and that is the ontological sense ("ontological" here means having to do with existence). Some entities, such as pains, tickles, and itches, have a subjective mode of existence, in the sense that they exist only as experienced by a conscious subject. Others, such as mountains, molecules and tectonic plates have an objective mode of existence, in the sense that their existence does not depend on any consciousness. The point of making this distinction is to call attention to the fact that the scientific requirement of epistemic objectivity does not preclude ontological subjectivity as a domain of investigation. 
There is no reason whatever why we cannot have an objective science of pain, even though pains only exist when they are felt by conscious agents. The ontological subjectivity of the feeling of pain does not preclude an epistemically objective science of pain. Though many philosophers and neuroscientists are reluctant to think of subjectivity as a proper domain of scientific investigation, in actual practice, we work on it all the time. Any neurology textbook will contain extensive discussions of the etiology and treatment of such ontologically subjective states as pains and anxieties.

V. Some Other Features

To keep this list short, I mention some other features of consciousness only briefly.

Feature 2. Intentionality

Most important, conscious states typically have "intentionality," that property of mental states by which they are directed at or about objects and states of affairs in the world. Philosophers use the word intentionality not just for "intending" in the ordinary sense but for any mental phenomena at all that have referential content. According to this usage, beliefs, hopes, intentions, fears, desires and perceptions all are intentional. So if I have a belief, I must have a belief about something. If I have a normal visual experience, it must seem to me that I am actually seeing something, etc. Not all conscious states are intentional and not all intentionality is conscious; for example, undirected anxiety lacks intentionality, and the beliefs a man has even when he is asleep lack consciousness then and there. But I think it is obvious that many of the important evolutionary functions of consciousness are intentional: For example, an animal has conscious feelings of hunger and thirst, engages in conscious perceptual discriminations, embarks on conscious intentional actions, and consciously recognizes both friend and foe. All of these are conscious intentional phenomena and all are essential for biological survival. A general neurobiological account of consciousness will explain the intentionality of conscious states. For example, an account of color vision will naturally explain the capacity of agents to make color discriminations.

Feature 3. The Distinction Between Center and Periphery of Attention.

It is a remarkable fact that within my conscious field at any given time I can shift my attention at will from one aspect to another. So for example, right now I am not paying any attention to the pressure of the shoes on my feet or the feeling of the shirt on my neck. But I can shift my attention to them any time I want. There is already a fair amount of useful work done on attention.

Feature 4. All Human Conscious Experiences Are in Some Mood or Other.

There is always a certain flavor to one's conscious states, always an answer to the question "How are you feeling?". The moods do not necessarily have names. Right now I am not especially elated or annoyed, not ecstatic or depressed, not even just blah. But all the same I will become acutely aware of my mood if there is a dramatic change, if I receive some extremely good or bad news, for example. Moods are not the same as emotions, though the mood we are in will predispose us to having certain emotions.

We are, by the way, closer to having pharmacological control of moods with such drugs as Prozac than we are to having control of other internal features of consciousness.

Feature 5. All Conscious States Come to Us in the Pleasure/Unpleasure Dimension

For any total conscious experience there is always an answer to the question of whether it was pleasant, painful, unpleasant, neutral, etc. The pleasure/unpleasure feature is not the same as mood, though of course some moods are more pleasant than others.

Feature 6. Gestalt Structure.

The brain has a remarkable capacity to organize very degenerate perceptual stimuli into coherent conscious perceptual forms. I can, for example, recognize a face, or a car, on the basis of very limited stimuli. The best known examples of Gestalt structures come from the researches of the Gestalt psychologists.

Feature 7. Familiarity

There is in varying degrees a sense of familiarity that pervades our conscious experiences. Even if I see a house I have never seen before, I still recognize it as a house; it is of a form and structure that is familiar to me. Surrealist painters try to break this sense of the familiarity and ordinariness of our experiences, but even in surrealist paintings the drooping watch still looks like a watch, and the three-headed dog still looks like a dog.

One could continue this list, and I have done so in other writings (Searle 1992). The point now is to get a minimal shopping list of the features that we want a neurobiology of consciousness to explain. In order to look for a causal explanation we need to know what the effects are that need explanation. Before examining some current research projects, we need to clear more of the ground.

VI. The Traditional Mind-Body Problem and How to Avoid It.

The confusion about objectivity and subjectivity I mentioned earlier is just the tip of the iceberg of the traditional mind-body problem. Though ideally I think scientists would be better off if they just ignored this problem, the fact is that they are as much victims of the philosophical traditions as anyone else, and many scientists, like many philosophers, are still in the grip of the traditional categories of mind and body, mental and physical, dualism and materialism, etc. This is not the place for a detailed discussion of the mind-body problem, but I need to say a few words about it so that, in the discussion that follows, we can avoid the confusions it has engendered.

The simplest form of the mind-body problem is this: What exactly is the relation of consciousness to the brain? There are two parts to this problem, a philosophical part and a scientific part. I have already been assuming a simple solution to the philosophical part. The solution, I believe, is consistent with everything we know about biology and about how the world works. It is this: Consciousness and other sorts of mental phenomena are caused by neurobiological processes in the brain, and they are realized in the structure of the brain. In a word, the conscious mind is caused by brain processes and is itself a higher level feature of the brain.

The philosophical part is relatively easy but the scientific part is much harder. How, exactly, do brain processes cause consciousness and how, exactly, is consciousness realized in the brain? I want to be very clear about the philosophical part, because it is not possible to approach the scientific question intelligently if the philosophical issues are unclear. Notice two features of the philosophical solution. First, the relationship of brain mechanisms to consciousness is one of causation. Processes in the brain cause our conscious experiences. Second, this does not force us to any kind of dualism because the form of causation is bottom-up, and the resulting effect is simply a higher level feature of the brain itself, not a separate substance. Consciousness is not like some fluid squirted out by the brain. A conscious state is rather a state that the brain is in. Just as water can be in a liquid or solid state without liquidity and solidity being separate substances, so consciousness is a state that the brain is in without consciousness being a separate substance.

Notice that I stated the philosophical solution without using any of the traditional categories of "dualism," "monism," "materialism," and all the rest of it. Frankly, I think those categories are obsolete. But if we accept those categories at face value, then we get the following picture: You have a choice between dualism and materialism. According to dualism, consciousness and other mental phenomena exist in a different ontological realm altogether from the ordinary physical world of physics, chemistry, and biology. According to materialism, consciousness as I have described it does not exist. Neither dualism nor materialism, as traditionally construed, allows us to get an answer to our question. Dualism says that there are two kinds of phenomena in the world, the mental and the physical; materialism says that there is only one, the material. Dualism ends up with an impossible bifurcation of reality into two separate categories and thus makes it impossible to explain the relation between the mental and the physical. But materialism ends up denying the existence of any irreducible subjective qualitative states of sentience or awareness. In short, dualism makes the problem insoluble; materialism denies the existence of any phenomenon to study, and hence of any problem.

On the view that I am proposing, we should reject those categories altogether. We know enough about how the world works to know that consciousness is a biological phenomenon caused by brain processes and realized in the structure of the brain. It is irreducible not because it is ineffable or mysterious, but because it has a first-person ontology, and therefore cannot be reduced to phenomena with a third-person ontology. The traditional mistake that people have made in both science and philosophy has been to suppose that if we reject dualism, as I believe we must, then we have to embrace materialism. But on the view that I am putting forward, materialism is just as confused as dualism because it denies the existence of ontologically subjective consciousness in the first place. Just to give it a name, I call the resulting view, which denies both dualism and materialism, biological naturalism.

VII. How Did We Get Into This Mess? A Historical Digression

For a long time I thought scientists would be better off if they ignored the history of the mind-body problem, but I now think that unless you understand something about the history, you will always be in the grip of historical categories. I discovered this when I was debating people in artificial intelligence and found that many of them were in the grip of Descartes, a philosopher many of them had not even read.

What we now think of as the natural sciences did not really begin with Ancient Greece. The Greeks had almost everything, and in particular they had the wonderful idea of a "theory." The invention of the idea of a theory--a systematic set of logically related propositions that attempt to explain the phenomena of some domain--was perhaps the greatest single achievement of Greek civilization. However, they did not have the institutionalized practice of systematic observation and experiment. That came only after the Renaissance, especially in the seventeenth century. When you combine systematic experiment and testability with the idea of a theory, you get the possibility of science as we think of it today. But one feature of the seventeenth century was a local accident, and it is still blocking our path: there was a very serious conflict between science and religion, and it seemed that science was a threat to religion. The apparent threat that science posed to orthodox Christianity was deflected in part by Descartes and Galileo. Descartes, in particular, argued that reality divides into two kinds, the mental and the physical, res cogitans and res extensa. Descartes made a useful division of the territory: Religion had the territory of the soul, and science could have material reality. But this gave people the mistaken conception that science could deal only with objective third-person phenomena; it could not deal with the inner qualitative subjective experiences that make up our conscious life. This was a perfectly harmless move in the seventeenth century because it kept the church authorities off the backs of the scientists. (It was only partly successful. Descartes, after all, had to leave Paris and go live in Holland, where there was more tolerance, and Galileo had to make his famous recantation to the church authorities of his heliocentric theory of the planetary system.)

However, this history has left us with a tradition and a tendency not to think of consciousness as an appropriate subject for the natural sciences, in the way that we think of disease, digestion, or tectonic plates as subjects of the natural sciences. I urge us to overcome this reluctance, and in order to overcome it we need to overcome the historical tradition that made it seem perfectly natural to avoid the topic of consciousness altogether in scientific investigation.

VIII. Summary Of The Argument To This Point

I am assuming that we have established the following: Consciousness is a biological phenomenon like any other. It consists of inner qualitative subjective states of perceiving, feeling and thinking. Its essential feature is unified, qualitative subjectivity. Conscious states are caused by neurobiological processes in the brain, and they are realized in the structure of the brain. To say this is analogous to saying that digestive processes are caused by chemical processes in the stomach and the rest of the digestive tract, and that these processes are realized in the stomach and the digestive tract. Consciousness differs from other biological phenomena in that it has a subjective or first person ontology. But ontological subjectivity does not prevent us from having epistemic objectivity. We can still have an objective science of consciousness. We abandon the traditional categories of dualism and materialism, for the same reason we abandon the categories of phlogiston and vital spirits: They have no application to the real world.

IX. The Scientific Study of Consciousness

How, then, should we proceed in a scientific investigation of the phenomena involved?

Seen from the outside it looks deceptively simple. There are three steps. First, one finds the neurobiological events that are correlated with consciousness (the NCC). Second, one tests to see that the correlation is a genuine causal relation. And third, one tries to develop a theory, ideally in the form of a set of laws, that would formalize the causal relationships.

These three steps are typical of the history of science. Think, for example, of the development of the germ theory of disease. First we find correlations between brute empirical phenomena. Then we test the correlations for causality by manipulating one variable and seeing how it affects the others. Then we develop a theory of the mechanisms involved and test the theory by further experiment. For example, Semmelweis in Vienna in the 1840s found that women obstetric patients in hospitals died more often from puerperal fever than did those who stayed at home. So he looked more closely and found that women examined by medical students who had just come from the autopsy room without washing their hands had an exceptionally high rate of puerperal fever. Here was an empirical correlation. When he made these young doctors wash their hands in chlorinated lime, the mortality rate went way down. He did not yet have the germ theory of disease, but he was moving in that direction. In the study of consciousness we appear to be in the early Semmelweis phase.

At the time of this writing we are still looking for the NCC. Suppose, for example, that we found, as Francis Crick once put forward as a tentative hypothesis, that the neurobiological correlate of consciousness was a set of neuron firings between the thalamus and layers 4 and 6 of the cortex, in the range of 40 Hz. That would be step one. And step two would be to manipulate the phenomena in question to see if you could show a causal relation. Ideally, we need to test whether the NCC in question is both necessary and sufficient for the existence of consciousness.

To establish necessity, we find out whether a subject who has the putative NCC removed thereby loses consciousness; and to establish sufficiency, we find out whether an otherwise unconscious subject can be brought to consciousness by inducing the putative NCC. Pure cases of causal sufficiency are rare in biology, and we usually have to understand the notion of sufficient conditions against a set of background presuppositions, that is, within a specific biological context. Thus our sufficient conditions for consciousness would presumably only operate in a subject who was alive, had his brain functioning at a certain level of activity, at a certain appropriate temperature, etc. But what we are trying to establish ideally is a proof that the element is not just correlated with consciousness, but that it is both causally necessary and sufficient, other things being equal, for the presence of consciousness.

Seen from the outsider's point of view, that looks like the ideal way to proceed. Why has it not yet been done? I do not know. It turns out, for example, that it is very hard to find an exact NCC, and the current investigative tools, most notably positron emission tomography scans, CAT scans, and functional magnetic resonance imaging techniques, have not yet identified the NCC. There are interesting differences between the scans of conscious subjects and of sleeping subjects in REM sleep, on the one hand, and slow-wave sleeping subjects on the other. But it is not easy to tell how much of the difference is related to consciousness. Lots of things are going on in both the conscious and the unconscious subjects' brains that have nothing to do with the production of consciousness. Given that a subject is already conscious, you can get parts of his or her brain to light up by getting him or her to perform various cognitive tasks such as perception or memory. But that does not give you the difference between being conscious in general and being totally unconscious. So, to establish this first step, we still appear to be at an early stage in the technology of brain research. In spite of all of the hype surrounding the development of imaging techniques, we still, as far as I know, have not found a way to image the NCC.

With all this in mind, let us turn to some actual efforts at solving the problem of consciousness.

X. The Standard Approach to Consciousness: The Building Block Model

Most theorists tacitly adopt the building block theory of consciousness. The idea is that any conscious field is made of its various parts: the visual experience of red, the taste of coffee, the feeling of the wind coming in through the window. It seems that if we could figure out what makes even one building block conscious, we would have the key to the whole structure. If we could, for example, crack visual consciousness, that would give us the key to all the other modalities. This view is explicit in the work of Crick & Koch (1998). Their idea is that if we could find the NCC for vision, then we could explain visual consciousness, and we would then know what to look for to find the NCC for hearing, and for the other modalities, and if we put all those together, we would have the whole conscious field.

The strongest and most original statement I know of the building block theory is by Bartels & Zeki (1998; Zeki & Bartels, 1998). They see the binding activity of the brain not as one that generates a unified conscious experience, but rather as one that brings together a whole lot of already conscious experiences. As they put it (Bartels & Zeki 1998: 2327), "[C]onsciousness is not a unitary faculty, but ... it consists of many micro-consciousnesses." Our field of consciousness is thus made up of a lot of building blocks of microconsciousnesses. "Activity at each stage or node of a processing-perceptual system has a conscious correlate. Binding cellular activity at different nodes is therefore not a process preceding or even facilitating conscious experience, but rather bringing different conscious experiences together" (Bartels & Zeki 1998: 2330).

There are at least three lines of research that are consistent with, and often used to support, the building block theory.

1. Blindsight

Blindsight is the name given by the psychologist Lawrence Weiskrantz to the phenomenon whereby certain patients with damage to V1 can report incidents occurring in their visual field even though they report no visual awareness of the stimulus. For example, in the case of DB, the earliest patient studied, if an X or an O were shown on a screen in that portion of DB's visual field where he was blind, the patient, when asked what he saw, would deny that he saw anything. But if asked to guess, he would guess correctly that it was an X or an O. His guesses were right nearly all the time. Furthermore, the subjects in these experiments are usually surprised at their results. When the experimenter asked DB in an interview after one experiment, "Did you know how well you had done?", DB answered, "No, I didn't, because I couldn't see anything. I couldn't see a darn thing." (Weiskrantz 1986: 24). This research has subsequently been carried on with a number of other patients, and blindsight has now also been experimentally induced in monkeys (Stoerig and Cowey, 1997).

Some researchers suppose that we might use blindsight as the key to understanding consciousness. The argument is the following: In the case of blindsight, we have a clear difference between conscious vision and unconscious information processing. It seems that if we could discover the physiological and anatomical difference between regular sight and blindsight, we might have the key to analyzing consciousness, because we would have a clear neurological distinction between the conscious and the unconscious cases.

2. Binocular Rivalry and Gestalt Switching

One exciting proposal for finding the NCC for vision is to study cases where the external stimulus is constant but where the internal subjective experience varies. Two examples of this are the gestalt switch, where the same figure, such as the Necker cube, is perceived in two different ways, and binocular rivalry, where different stimuli are presented to each eye but the visual experience at any instant is of one or the other stimulus, not both. In such cases the experimenter has a chance to isolate a specific NCC for the visual experience, independently of the neurological correlates of the retinal stimulus (Logothetis, 1998; Logothetis & Schall, 1989). The beauty of this research is that it seems to isolate a precise NCC for a precise conscious experience. Because the external stimulus is constant and there are (at least) two different conscious experiences A and B, it seems there must be some point in the neural pathways where one sequence of neural events causes experience A and another point where a second sequence causes experience B. Find those two points and you have found the precise NCCs for two different building blocks of the whole conscious field.

3. The Neural Correlates of Vision

Perhaps the most obvious way to look for the NCC is to track the neurobiological causes of a specific perceptual modality such as vision. In a recent article, Crick & Koch (1998) assume as a working hypothesis that only some specific types of neurons will manifest the NCC. They do not think that any of the NCCs of vision are in V1 (Crick & Koch, 1995). The reason for thinking that V1 does not contain the NCCs is that V1 does not connect to the frontal lobes in a way that would allow V1 to contribute directly to the essential information-processing aspect of visual perception. Their idea is that the function of visual consciousness is to provide visual information directly to the parts of the brain that organize voluntary motor output, including speech. Thus, because the information in V1 is recoded in subsequent visual areas and does not transmit directly to the frontal cortex, they believe that V1 does not correlate directly with visual consciousness.

XI. Doubts about the Building Block Theory

The building block theory may be right but it has some worrisome features. Most important, all the research done to identify the NCCs has been carried out with subjects who are already conscious, independently of the NCC in question. Going through the cases in order, the problem with the blindsight research as a method of discovering the NCC is that the patients in question only exhibit blindsight if they are already conscious. That is, it is only in the case of fully conscious patients that we can elicit the evidence of information processing that we get in the blindsight examples. So we cannot investigate consciousness in general by studying the difference between the blindsight patient and the normally sighted patient, because both patients are fully conscious. It might turn out that what we need in our theory of consciousness is an explanation of the conscious field that is essential to both blindsight and normal vision or, for that matter, to any other sensory modality.

Similar remarks apply to the binocular rivalry experiments. All this research is immensely valuable but it is not clear how it will give us an understanding of the exact differences between the conscious brain and the unconscious brain, because for both experiences in binocular rivalry the brain is fully conscious.

Similarly, Crick (1996) and Crick & Koch (1998) only investigated subjects who are already conscious. What one wants to know is, how is it possible for the subject to be conscious at all? Given that a subject is conscious, his consciousness will be modified by having a visual experience, but it does not follow that the consciousness is made up of various building blocks of which the visual experience is just one.

I wish to state my doubts precisely. There are (at least) two possible hypotheses.

1. The building block theory: The conscious field is made up of small components that combine to form the field. To find the causal NCC for any component is to find an element that is causally necessary and sufficient for that conscious experience. Hence to find even one is, in an important sense, to crack the problem of consciousness.

2. The unified field theory (explained in more detail below): Conscious experiences come in unified fields. In order to have a visual experience, a subject has to be conscious already and the experience is a modification of the field. Neither blindsight, binocular rivalry nor normal vision can give us a genuine causal NCC because only already conscious subjects can have these experiences.

It is important to emphasize that both hypotheses are rival empirical hypotheses to be settled by scientific research and not by philosophical argument. Why then do I prefer hypothesis 2 to hypothesis 1? The building block theory predicts that in a totally unconscious patient, if the patient meets certain minimal physiological conditions (he is alive, the brain is functioning normally, he has the right temperature, etc.), and if you could trigger the NCC for say the experience of red, then the unconscious subject would suddenly have a conscious experience of red and nothing else. One building block is as good as another. Research may prove me wrong, but on the basis of what little I know about the brain, I do not believe that is possible. Only a brain that is already over the threshold of consciousness, that already has a conscious field, can have a visual experience of red.

Furthermore on the multistage theory of Bartels & Zeki (1998, Zeki & Bartels 1998), the microconsciousnesses are all capable of a separate and independent existence. It is not clear to me what this means. I know what it is like for me to experience my current conscious field, but who experiences all the tiny microconsciousnesses? And what would it be like for each of them to exist separately?

XII. Basal Consciousness and a Unified Field Theory

There is another way to look at matters that implies another research approach. Imagine that you wake from a dreamless sleep in a completely dark room. So far you have no coherent stream of thought and almost no perceptual stimulus. Save for the pressure of your body on the bed and the sense of the covers on top of your body, you are receiving no outside sensory stimuli. All the same there must be a difference in your brain between the state of minimal wakefulness you are now in and the state of unconsciousness you were in before. That difference is the NCC I believe we should be looking for. This state of wakefulness is basal or background consciousness.

Now you turn on the light, get up, move about, etc. What happens? Do you create new conscious states? Well, in one sense you obviously do, because previously you were not consciously aware of visual stimuli and now you are. But do the visual experiences stand to the whole field of consciousness in the part-whole relation? Well, that is what nearly everybody thinks and what I used to think, but here is another way of looking at it. Think of the visual experience of the table not as an object in the conscious field the way the table is an object in the room, but think of the experience as a modification of the conscious field, as a new form that the unified field takes. As Llinas and his colleagues put it, consciousness is "modulated rather than generated by the senses" (1998: 1841).

I want to avoid the part-whole metaphor, but I also want to avoid the proscenium metaphor. We should not think of my new experiences as new actors on the stage of consciousness but as new bumps or forms or features in the unified field of consciousness. What is the difference? The proscenium metaphor gives us a constant background stage with various actors on it. I think that is wrong. There is just the unified conscious field, nothing else, and it takes different forms.

If this is the right way to look at things (and again this is a hypothesis on my part, nothing more) then we get a different sort of research project. There is no such thing as a separate visual consciousness, so looking for the NCC for vision is barking up the wrong tree. Only the already conscious subject can have visual experiences, so the introduction of visual experiences is not an introduction of consciousness but a modification of a preexisting consciousness.

The research program that is implicit in the hypothesis of unified field consciousness is that at some point we need to investigate the general condition of the conscious brain as opposed to the condition of the unconscious brain. We will not explain the general phenomenon of unified qualitative subjectivity by looking for specific local NCCs. The important question is not what the NCC for visual consciousness is, but how the visual system introduces visual experiences into an already unified conscious field, and how the brain creates that unified conscious field in the first place. The problem becomes more specific: What we are trying to find is which features of a system that is made up of a hundred billion discrete elements--neurons--connected by synapses can produce a conscious field of the sort that I have described. There is a perfectly ordinary sense in which consciousness is unified and holistic, but the brain is not in that way unified and holistic. So what we have to look for is some massive activity of the brain capable of producing a unified holistic conscious experience. For reasons that we now know from lesion studies, we are unlikely to find this as a global property of the brain, and we have very good reason to believe that activity in the thalamocortical system is probably the place to look for unified field consciousness. The working hypothesis would be that consciousness is in large part localized in the thalamocortical system, and that the various other systems feed information to the thalamocortical system that produces modifications corresponding to the various sensory modalities. To put it simply, I do not believe we will find visual consciousness in the visual system and auditory consciousness in the auditory system. We will find a single, unified conscious field containing visual, auditory, and other aspects.

Notice that if this hypothesis is right, it will solve the binding problem for consciousness automatically. The production of any state of consciousness at all by the brain is the production of a unified consciousness.

We are tempted to think of our conscious field as made up of the various components--visual, tactile, auditory, the stream of thought, etc. The approach whereby we think of big things as being made up of little things has proved so spectacularly successful in the rest of science that it is almost irresistible to us. Atomic theory, the cellular theory in biology, and the germ theory of disease are all examples. The urge to think of consciousness as likewise made of smaller building blocks is overwhelming. But I think it may be wrong for consciousness. Maybe we should think of consciousness holistically, and perhaps for consciousness we can make sense of the claim that "the whole is greater than the sum of the parts." Indeed, maybe it is wrong to think of consciousness as made up of parts at all. I want to suggest that if we think of consciousness holistically, then the aspects I have mentioned so far, especially our original combination of subjectivity, qualitativeness, and unity all into one feature, will seem less mysterious. Instead of thinking of my current state of consciousness as made up of various bits--the perception of the computer screen, the sound of the brook outside, the shadows cast by the evening sun falling on the wall--we should think of all of these as modifications, forms that the underlying basal conscious field takes after my peripheral nerve endings have been assaulted by the various external stimuli. The research implication of this is that we should look for consciousness as a feature of the brain that emerges from the activities of large masses of neurons and cannot be explained by the activities of individual neurons. I am, in sum, urging that we take the unified field approach seriously as an alternative to the more common building block approach.

XIII. Variations on the Unified Field Theory

The idea that one should investigate consciousness as a unified field is not new; it goes back at least as far as Kant's doctrine of the transcendental unity of apperception (Kant, 1787). In neurobiology I have not found any contemporary authors who draw a clear distinction between what I have been calling the building block theory and the unified field theory, but at least two lines of contemporary research are consistent with the approach urged here: the work of Llinas and his colleagues (Llinas, 1990; Llinas et al., 1998) and that of Tononi, Edelman and Sporns (Tononi & Edelman, 1998; Tononi, Edelman & Sporns, 1998; Tononi, Sporns & Edelman, 1992). On the view of Llinas and his colleagues (1998) we should not think of consciousness as produced by sensory inputs but rather as a functional state of large portions of the brain, primarily the thalamocortical system, and we should think of sensory inputs as serving to modulate a preexisting consciousness rather than creating consciousness anew. On their view consciousness is an "intrinsic" state of the brain, not a response to sensory stimulus inputs. Dreams are of special interest to them, because in a dream the brain is conscious but unable to perceive the external world through sensory inputs. They believe the NCC is synchronized oscillatory activity in the thalamocortical system (1998: 1845).

Tononi and Edelman have advanced what they call the dynamic core hypothesis (1998). They are struck by the fact that consciousness has two remarkable properties, the unity mentioned earlier and the extreme differentiation or complexity within any conscious field. This suggests to them that we should not look for consciousness in a specific sort of neuronal type, but rather in the activities of large neuronal populations. They seek the NCC for the unity of consciousness in the rapid integration that is achieved through the reentry mechanisms of the thalamocortical system. The idea they have is that in order to account for the combination of integration and differentiation in any conscious field, they have to identify large clusters of neurons that function together, that fire in a synchronized fashion. Furthermore this cluster, which they call a functional cluster, should also show a great deal of differentiation within its component elements in order to account for the different elements of consciousness. They think that synchronous firing among cortical regions between the cortex and the thalamus is an indirect indicator of this functional clustering. Then once such a functional cluster has been identified, they wish to investigate whether or not it contains different activity patterns of neuronal states within it. The combination of functional clustering together with differentiation they submit as the dynamic core hypothesis of consciousness. They believe a unified neural process of high complexity constitutes a dynamic core. They also believe the dynamic core is not spread over the brain but is primarily in the thalamocortical regions, especially those involved in perceptual categorization and containing reentry mechanisms of the sort that Edelman discussed in his earlier books (1989, 1992). In a new study, they and their colleagues (Srinivasan et al 1999) claim to find direct evidence of the role of reentry mapping in the NCC. 

Like the adherents of the building block theory, they seek such NCCs of consciousness as one can find in the studies of binocular rivalry.

As I understand this view, it seems to combine features of both the building block and the unified field approach.

XIV. Conclusion

In my view the most important problem in the biological sciences today is the problem of consciousness. I believe we are now at a point where we can address this problem as a biological problem like any other. For decades research has been impeded by two mistaken views: first, that consciousness is just a special sort of computer program, a special software in the hardware of the brain; and second, that consciousness is just a matter of information processing. The right sort of information processing--or on some views any sort of information processing--would be sufficient to guarantee consciousness. I have criticized these views at length elsewhere (Searle 1980, 1992, 1997) and do not repeat these criticisms here. But it is important to remind ourselves how profoundly anti-biological these views are. On these views brains do not really matter. We just happen to be implemented in brains, but any hardware that could carry the program or process the information would do just as well. I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness. Perhaps when we understand how brains do that, we can build conscious artifacts using some nonbiological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it.1

1I am indebted to many people for discussion of these issues. None of them is responsible for any of my mistakes. I especially wish to thank Samuel Barondes, Dale Berger, Francis Crick, Gerald Edelman, Susan Greenfield, Jennifer Hudin, John Kihlstrom, Jessica Samuels, Dagmar Searle, Wolf Singer, Barry Smith, and Gunther Stent.

Mind·X Discussion About This Article:

John Searle and consciousness ....
posted on 09/19/2001 12:33 AM by thought@clear.net.nz

Only that I am conscious and here ..... (sic) and my surname is Tearle and my mother's name is Searle ... where next ?
www.thought.co.nz

An explanation of how the unified field of consciousness supports strong AI
posted on 09/28/2001 1:26 PM by vitaminc@erols.com

Wow! That is the most lucid and concise account of consciousness that I have ever read!

But I have a question for Mr. Searle.

I believe that the unified field view of consciousness he explicates in his essay here is actually at odds with his most famous contribution to the philosophy of mind/consciousness... his Chinese Room. I will explain the contradiction between his two views shortly... My question is... after reading the following explanation of the contradictions between the "unified field of consciousness" and the Chinese Room... how do you resolve the situation? How do you integrate what are two (in my mind) mutually exclusive viewpoints?

Okay... first some background for those that don't know... Searle's Chinese Room is a thought experiment where a man is sealed in a room with a huge database that contains Chinese utterances and sensible possible responses to those utterances. The guy in the room knows NO Chinese - he can't read it or speak it. Slips of paper are shoved under the door and he compares the slip to the database and looks through the table until he sees an entry with an appropriate response. He then writes a new slip in Chinese, which of course he CANNOT understand. His response is based solely on database checking.

The whole system is able to speak Chinese coherently. But where is the understanding of the Chinese? Not in the guy who consults the tables; he knows no Chinese. Not in the tables themselves. It seems counter-intuitive to say the system understands Chinese. That implies that the whole system - the room and the database plus the non-Chinese man - actually is conscious. Obviously this is not the case.
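The lookup scheme just described can be sketched in a few lines of Python. This is only an illustrative toy, not anything from Searle's paper: the rule-book entries and the fallback reply are hypothetical placeholders standing in for the "huge database" of the thought experiment.

```python
# A minimal sketch of the Chinese Room's lookup scheme: the "room"
# maps each incoming slip of paper to a canned response by pure table
# lookup, with no understanding of the symbols involved.
# The table contents below are hypothetical placeholders.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气如何？": "今天天气很好。",  # "How is the weather?" -> "It is fine today."
}

def chinese_room(slip: str) -> str:
    """Return a response by matching the slip against the rule book.

    The operator (this function) manipulates symbols it does not
    understand; any "understanding" would have to reside in the
    system as a whole, which is exactly the point under dispute.
    """
    # Fallback reply: "Sorry, I don't understand."
    return RULE_BOOK.get(slip, "对不起，我不明白。")

print(chinese_room("你好吗？"))
```

However large the table grows, the operation remains a blind symbol match, which is the intuition the thought experiment trades on.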

Really the whole Chinese Room story is an analogy used to refute the idea of a computer running special AI software that makes it conscious. It is an argument against what is called "Strong AI", which asserts that computers CAN have cognitive states. According to Searle, the computer cannot be conscious or understand ANYTHING any more than his Chinese Room system as a whole is conscious or understands anything.

I have always been of the mind that an accurate simulation of consciousness would be in itself conscious. I will defend this view later in this posting. However, according to Searle, believers in an artificial CONSCIOUS mind emerging from an insanely fast processor and a sufficiently complex simulation program are WRONG.

Imagine a sufficiently complex simulation of consciousness - one able to model the emergent properties that the innate biological structure of the brain engenders and one that takes into account all that Searle has discussed in his article here, in particular, his "unified field of consciousness". If such a simulation is run on a sufficiently fast processor and the whole system is fed sufficiently rich perception information then that system would BEHAVE exactly as a conscious person would. It would seem to be conscious. Indeed wouldn't it be conscious? Searle says "NO!". He would also say that the "unified field of consciousness" could not be simulated.

I guess what I am getting at is... the imaginary system running this complex simulation seems to be functionally the same as Searle's Chinese Room... yet everything that Searle has laid out in the article above seems to me to lead to the conclusion that such a system would in fact be conscious! His own conception of a unified field of consciousness is inherently at odds with his Chinese Room thought experiment!

Searle's Chinese Room analogy implies that a conscious SIMULATION of a "unified field" of consciousness is impossible. According to Searle, the ONLY way to create a "unified field" consciousness is to design (or biologically create) some sort of hardware that is able to give rise to that emergent "unified field". But if a model can account for the causal mechanisms of that hardware, plus the emergent properties of a "unified field" of consciousness and the unified field's causal mechanisms, plus the interaction between the hardware and the emergent field, then isn't that model, when implemented as software on a sufficiently powerful computer, CONSCIOUS?

Searle still says "NO." He would point out that such a model is no more conscious than the Chinese Room.

At this point, I would like to differentiate between two views on the simulation of consciousness. I am not sure that anyone has ever made this particular distinction before; hopefully it will be of use to someone.

The first is the view that consciousness can be engendered by a single layer algorithm/process implemented as software on a digital computer. The single layer being the explication/expression of the complex activity of neurons in the brain that somehow researchers will discover given enough time. This is the paradigm that Searle refutes convincingly with the Chinese Room thought experiment.

The second view is that consciousness can be engendered by a dual layered algorithm/process implemented as software on a computer. The first layer being the mechanics of the machinery of the brain and the second being the mechanics of consciousness which would take into account the causal workings of the emergent properties of the brain. The emergent properties are what Searle calls "the unified field of consciousness". This dual layered view Searle condemns directly with his Chinese Room but at the same time, he seems to be indirectly championing it with his concept of a "unified field of consciousness." More on this shortly.
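
The dual-layered view can be caricatured in code. Below is a toy sketch, assuming (purely for illustration) that the "field" is a global summary of local state that feeds back into each unit's update; every name and the dynamics themselves are hypothetical:

```python
import random

# Toy sketch of the dual-layer view: layer 1 is local "neural" state,
# layer 2 is a global "field" summarising it, and the field feeds back
# into the local update. All names and dynamics are hypothetical.

def field_of(neurons):
    """Layer 2: a global summary of the local states."""
    return sum(neurons) / len(neurons)

def step(neurons, field):
    """Layer 1: each unit nudged toward the global field."""
    return [0.9 * n + 0.1 * field for n in neurons]

neurons = [random.random() for _ in range(100)]
for _ in range(50):
    field = field_of(neurons)       # emergent layer computed from substrate
    neurons = step(neurons, field)  # substrate updated under the field
```

Whatever the real dynamics turn out to be, the structure is the point: two coupled layers, local and global, updated together rather than the local layer alone.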

Searle is on the one hand saying that consciousness can finally be understood by researching two distinctly different levels of cognitive activity and the interaction between them -- the global "unified field" level and the microcosmic level of neural interactions. But on the other hand he is saying that once the full system (global/local, unified field/cell-level neural activity) is finally understood completely, an accurate simulation of that system on a binary computer is NOT conscious. It will always be a simulation and hence never conscious, a la his Chinese Room. But if one takes to heart his "unified field" view of consciousness then isn't the brain merely a piece of machinery that simulates consciousness just as much as a computer? In either case both computer and brain are merely instantiating the process that allows for consciousness.

The brain is using neurons and electricity to create a global state we call consciousness... but looked at from a different perspective the brain is using neurons and electricity to play out patterns, patterns in the "unified field of consciousness". If electricity could be replaced FUNCTIONALLY in the brain with some other manipulatable energy such as light rays then the brain still would be able to create consciousness. It is the patterns that create the consciousness. The consciousness is emergent from the patterns just as much as the patterns are emergent from the unified field which is, in turn, emergent from the physical matter of the brain. If a computer simulates the brain AND the unified field and is therefore able to RE-CREATE THE PATTERNS then a computer is no longer simulating consciousness! The computer is THEN conscious! This is because the patterns are not a simulation! They are the genuine article from which consciousness emerges! (If my use of the term "patterns" is troublesome then replace it with "relations" or "fluctuations in the unified field", the argument is still the same.)

Of course one could argue that a computer simulation of a pattern is not the same as a pattern but that seems extremely suspicious to me.

To simulate a pattern is to recreate it. For instance if I simulate a square using a computer program sending its output to a computer monitor then that simulation of a square IS a square. It is almost nonsensical to talk of a simulation of a pattern. It is counter-intuitive. Likewise it is nonsensical to talk about simulations of consciousness. If one simulates consciousness, then one has given rise to it. It is a category mistake as far as I am concerned to apply the term "simulation" to consciousness. Consciousness is not the sort of thing that can be simulated any more than one can simulate a square. The brain can be simulated, as can the "unified field", but not the consciousness itself.
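
The square example can be made literal. A program that "simulates" a square by emitting one has produced a real square; a hypothetical few-line illustration:

```python
# "Simulating" a square by drawing it: the output really is a square
# pattern, so for patterns, simulation and instantiation coincide.
def draw_square(n: int) -> list[str]:
    return ["#" * n for _ in range(n)]

for row in draw_square(4):
    print(row)  # prints a 4x4 square of '#'
```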

The problem is that heretofore computers have been used to simulate all sorts of things... buildings, storms, cars, drug wars, etc. so it is naturally tempting to throw consciousness into that realm.

So then is Searle's Chinese Room conscious, given this new perspective on consciousness? It is, after all, a holistic system giving rise to specific patterns internally and also to comprehensible Chinese utterances externally that seem to display an understanding of the Chinese language. Isn't Searle's Chinese Room simulating consciousness just like the hypothetical super-fast computer running an accurate "simulation" of consciousness?

The answer is NO!

For an accurate "simulation" of consciousness to occur a system (a Chinese Room or a computer) must implement the dual layer holistic algorithmic approach mentioned previously. The consequence of this fact is that the analogy of the Chinese room breaks down and ceases to have relevance when applied to this two layered approach.

The Chinese Room is really just an analogy for the single layer approach. As such it works as a critique of the single layered attempts at "simulations" of consciousness. Not so for the dual layered approach. The Chinese Room does not have sufficiently complex analogous components to be of use in critiquing it. The non-Chinese speaking room occupant and the giant Chinese database do not do justice to the complexity of the two layered approach implied by Searle's "unified field"; nothing in the Chinese Room even approximates the idea of the "unified field of consciousness". There is no analogous component to stand in for the "unified field of consciousness" so the Chinese Room has no application in regards to the two layered approach. I suppose the Chinese Room idea could be modified to include such an analogous component but I have no idea what such a component would be, especially since it is not clear at all right now how a "unified field of consciousness" would function in the first place. At any rate, I suspect such a modification would seriously damage the Chinese Room's strong appeal to our sense of intuition.

So my point is that Searle's two views are mutually exclusive. It seems, given what I have said above, that one of Searle's two cherished views must fall. Either the idea of a "unified field of consciousness" must be abandoned or the Chinese Room must be set aside when speaking of systems capable of "simulating" consciousness, systems that use the two-layered approach mentioned previously.

I personally think the thing to do is to accept the fact that the Chinese Room is a powerful allegory but it has its limits. It is really a metaphor after all, regardless of how powerfully it appeals to our intuitions.

Ultimately it seems that what is at issue here is: can a simulation have a subjective ontological experience? I would say yes... I think I can safely speak for Searle and say that he would say emphatically "NO!". I suppose there is no way to know absolutely for sure, just as one can never be sure one is not trapped in a solipsistic universe of one's own unconscious making or that, to quote "Row, Row, Row Your Boat", life is but a dream. But, contrary to what Searle has argued in the past, there is no longer a strong logical argument why such an "artificial" subjective ontological experience is not possible, in light of the failure of the Chinese Room argument to address the complexity of the "unified field of consciousness" - assuming of course that such a "unified field" exists.

But there I agree with Searle. I think a "unified field of consciousness" does exist and is what researchers should be investigating and trying to model. I have always thought of consciousness as surfing or riding electrically on the substrate of the brain and the "unified field of consciousness" sums up that personal belief perfectly. Which is the main reason why I was so impressed by this article of his put up on the KurzweilAI site.

The reason I bring up these simulation issues is that they have an impact not only on the actual eventual outcome of AI research but also on the ethical issues we will HAVE to grapple with should it get to the point where computers/robots/AI/programs etc. are behaving in a sufficiently complex way that we are forced to deal with their civil rights.

If a simulation IS conscious, that is a whole different ethical ballgame than if a simulation is not. The world will be a much simpler and easier place to navigate if simulations of consciousness are not in and of themselves conscious. But I fear they will be. Which makes the ethical waters very murky. I am afraid that when we get to the point where we grapple with this specific issue there will be wars over it. Civil as well as international. The camps formed by this philosophical discourse might eventually affect what country people decide to live in and the social fabric of their lives. The good news is that according to Moravec and Kurzweil we have at least 10 to 20 years to decide. If not much longer. In a way we will be lucky if AI research continues to progress at its current pace. If so, I personally won't ever have to worry about such murky issues in my lifetime, except for when I'm posting armchair philosophy to the Internet.

Charles Jamison
Baltimore, MD

Re: An explanation of how the unified field of consciousness supports strong AI
posted on 10/01/2001 7:30 AM by john.b.davey@btinternet.com


"He would also say that the "unified field of consciousness" could not be simulated. "

No he doesn't - he says it could not be REPRODUCED - there is a difference between a duck and a painting of a duck!

"The brain is using neurons and electricity to create a global state we call consciousness... but looked at from a different perspective the brain is using neurons and electricity to play out patterns, patterns in the "unified field of consciousness". If electricity could be replaced FUNCTIONALLY in the brain with some other manipulatable energy such as light rays then the brain still would be able to create consciousness. It is the patterns that create the consciousness. "

This is essentially where Searle (and I) would say you are completely wrong. Patterns are not intrinsic but are mentally inferred from observer-relative positions.

"Of course one could argue that a computer simulation of a pattern is not the same as a pattern but that seems extremely suspicious to me."

Two things: a pattern is not physical and so, in that sense, a simulation of a pattern is perfectly valid - BUT the realisation of consciousness is a REAL physical phenomenon that CAN BE VIEWED as having a pattern - the pattern itself is not intrinsic to the phenomenon - we can't reproduce atoms by drawing pictures of them, can we? Similarly, when we simulate storms on our PCs, our PCs don't actually get wet. So the real physical phenomenon of neural activity that yields consciousness has an observer-relative pattern of causality, but that is extrinsic to the phenomenon (and depends upon the model), and is not intrinsic to the PHYSICAL components of brains and the mechanisms which lead it to arise.

To put it another way - there is a difference between matter and the simulation of matter: if you don't get this you will always be confused.


"I suppose there is no way to know absolutely for sure,"

I think this is one of the ironies of the strong AI position - despite their so-called 'rationalism', they unite with dualists in rejecting the idea that science can fathom the mysteries of consciousness.

I think the Chinese Room explains the position perfectly and rejects the specific idea that syntax alone can create mental states - no amount of meddling with syntax schemes can change this fact. The Chinese Room is in fact a simple example of a wider argument: that minds originate in the physical realm and that models of physical things aren't the same as the things themselves.

So as such, Searle's approaches to this subject have been marvellously consistent examples of philosophy at its best: clear, precise and at the same time inventive.

A conscious Neuron?
posted on 06/28/2002 3:06 PM by jwayt


What is it about a SINGLE neuron that you think can be more conscious of its own actions than the person in the Chinese Room passing the messages, or an integrated circuit passing bits? Consciousness cannot be viewed on a microscopic level.

In fact, very little detail of this universe is meaningful at all. Biochemists don't care about atomic particles. Microbiologists don't trouble themselves with molecules. Botanists are interested in higher-level structures and organisms. We don't need to know anything about our brains to be able to think and be conscious.

Re: A conscious Neuron?
posted on 06/28/2002 4:27 PM by tomaz@techemail.com


> Biochemists don't care about atomic particles

Yes. The same biology may rest on many different physics.

And you don't feel the difference between C12 and C13 atoms inside your body.

- Thomas

Re: A conscious Neuron?
posted on 06/28/2002 4:43 PM by azb@llnl.gov


Precisely the reason I feel that Searle's "Chinese Room", although a useful thought experiment, is essentially self-defeating.

Honestly, imagine trying to make the Chinese Room "work" with a fellow who understands no Chinese. He must feed these slips of paper to an "oracle" that is capable of recognizing the import and nuance of the given message, and of selecting a CONVINCING corresponding reply. It seems to me that the argument "nothing in that room consciously understands Chinese" is entirely specious. Searle, effectively, wants to disassemble the "oracle" and say "see, it's just copper wires, no consciousness here".

But one can apply the same reduction to biological grey-matter: "See, just atomic elements. No consciousness here."

The best counter I recall reading to Searle's Chinese Room was "The Story of a Brain" in Hofstadter and Dennett's "The Mind's I".

To paraphrase, a scientist makes his colleagues promise to try to keep his brain alive after the rest of his body expires. They do so, keeping it in a glass vat of nutrient fluids, with billions of stimulators and receptors attached to the brain stem to keep the "mind" entertained, and process its state (and, we assume, its desires). All is well until a custodian accidentally knocks the vat over, and a shard of broken glass cleaves the brain in two at the corpus callosum.

The accident is discovered quickly, and the two "halves" are undamaged. Rather than attempt to surgically reconnect them, they decide to place billions of receptors at each severed side, and use radio transmission to allow the left and right brain to communicate. All is well for a time.

Eventually, many other research labs want a "piece" of the action, and the brain is further subdivided and distributed, maintaining the original connectivity by a careful process of radio receptors/transmitters. At one point, some labs are working with as few as a few dozen neural cells maintained alive in nutrient gel.

But one lab accidentally lets the gel dry up, and the cells whose function they have thoroughly mapped die. They decide to create a transistor model of the behavior of those few dozen cells, and allow the group to continue to interact with the rest of the "distributed brain". Eventually, the entire brain is thus replaced with non-biological analogs.

If (IF) one could see no loss in "perceptible" functionality in the overall "brain" (colleagues continue to converse normally with their departed friend), then how could one argue that "consciousness has left the building"?

Cheers!

____tony____

Re: A conscious Neuron?
posted on 06/28/2002 5:11 PM by azb@llnl.gov


There is a caveat to my previous posting.

It may well be that some ancillary "field-effect" contributed by proximity is at work, which would render the "distributed brain" less than able to maintain a "consciousness". This does not mean that consciousness cannot be entertained in a non-biological medium, but may imply that mapping neural pulses to transistor pulses cannot ignore proximity effects.

I am no expert in Chinese medicine (acupuncture/pressure etc.) but if you look at charts of the body that purport to map the flows of "chi" (or whatever it is they think they are affecting) you will notice that these flux-lines tend to run perpendicular to the body's main neural and arterial flows. This suggests that ancillary field effects are at work.

Neural paths are shielded electrically from the surrounding tissue by a layer of insulating tissue. This keeps the "electrical flow" from leaking away or being misdirected. But it would not insulate the surroundings from the consequent (and perpendicular) magnetic field effects.

Perhaps, attempts to reproduce the functionality of the brain that rely only upon mapping the "current flow", irrespective of component proximity, are missing an important ingredient.

Cheers!

____tony____

Another look at consciousness
posted on 06/29/2002 11:29 PM by grantc4@hotmail.com


The grand illusion

New Scientist vol 174 issue 2348 - 22 June 2002, page 26


That stream of experience we call consciousness is not what it seems, says Susan Blackmore. No wonder it's proving so hard to explain.

"THE last great mystery of science"; "the most baffling problem in the science of the mind"; this is how scientists talk about consciousness. But what if our conscious experience is all a grand illusion?

Like most people, I used to think of my conscious life as like a stream of experiences, passing through my mind, one after another. But now I'm starting to wonder: is consciousness really like this? Could this apparently innocent assumption be the reason we find consciousness so baffling?

Different strands of research on the senses over the past decade suggest that the brave cognitive scientists, psychologists and neuroscientists who dare to tackle the problem of consciousness are chasing after the wrong thing. If consciousness seems to be a continuous stream of rich and detailed sights, sounds, feelings and thoughts, then I suggest this is the illusion.

First we must be clear what is meant by the term "illusion". To say that consciousness is an illusion is not to say that it doesn't exist, but that it is not what it seems to be; more like a mirage or a visual illusion. And if consciousness is not what it seems, no wonder it's proving such a mystery.

For the proposal "It's all an illusion" even to be worth considering, the problem has to be serious. And it is. We can't even begin to explain consciousness. Take this magazine in front of your eyes. Right now, you are presumably having a conscious experience of seeing the paper, the words and the pictures. The way you see the page is unique to you, and no one else can know exactly what it is like for you. This is how consciousness is defined: it is your own subjective experience.

But how do you get from a real magazine composed of atoms and molecules to your experience of seeing it? Real, physical objects and private experiences are such completely different kinds of thing. How can one be related to the other? David Chalmers, of the University of Arizona in Tucson, calls it the "Hard Problem". How can the firing of brain cells produce subjective experience? It seems like magic; water into wine.

If you are not yet feeling perplexed (in which case I am not doing my job properly), consider another problem. It seems that most of what goes on in the brain is not conscious. For example, we can consciously hear a song on the car radio, while we are not necessarily conscious of all the things we do as we're driving. This leads us to make a fundamental distinction, contrasting conscious brain processes with unconscious ones. But no one can explain what the difference really is. Is there a special place in the brain where unconscious things are made conscious? Are some brain cells endowed with an extra magic something that makes what goes on in them subjective? That doesn't make sense. Yet most theories of consciousness assume that there must be such a difference, and then get stuck trying to explain or investigate it.

For example, in the currently popular "Global Workspace" theory, Bernard Baars of the Wright Institute in Berkeley, California, equates the contents of consciousness with the contents of working memory. But how does being "in" memory turn electrical impulses into personal experiences?

Another popular line of research is to search for the "neural correlates" of consciousness. Nobel Laureate Francis Crick wants to pin down the brain activity that corresponds to "the vivid picture of the world we see in front of our eyes". And Oxford pharmacologist Susan Greenfield is looking for "the particular physical state of the brain that always accompanies a subjective feeling" (New Scientist, 2 February, p 30).

These researchers are not alone in their search. But their attempts all founder on exactly the same mystery: how can some kinds of brain activity be "in" the conscious stream, while others are not? I can't see what this difference could possibly be.

Could the problem be so serious that we need to start again at the very beginning? Could it be that, after all, there is no stream of consciousness, no movie in the brain, no picture of the world we see in front of our eyes?
Could all this be just a grand illusion?

You might want to protest. You may be absolutely sure that you do have such a stream of conscious experiences. But perhaps you have noticed this intriguing little oddity. Imagine you are reading this magazine when suddenly you realise that the clock is striking. You hadn't noticed it before but, now that you have, you know that the clock has struck four times already, and you can go on counting. What is happening here? Were the first three "dongs" really unconscious and have now been pulled out of memory and put in the stream of consciousness? If so, were the contents of the stream changed retrospectively to seem as though you heard them at the time? Or what? You might think up some other elaborations to make sense of it but they are unlikely to be either simple or convincing.

A similar problem is apparent with listening to speech. You need to hear several syllables before the meaning of a sentence becomes unambiguous. So what was in the stream of consciousness after one syllable? Did it switch from gobbledegook to words halfway through? It doesn't feel like that, it feels as though you heard a meaningful sentence as it went along. But that is impossible.

The running tap of time

Consciousness also does funny things with time. A good example is the "cutaneous rabbit". If a person's arm is tapped rapidly, say five times at the wrist, then twice near the elbow, and finally three times on the upper arm, they report not a series of separate taps coming in groups, but a continuous series moving upwards, as though a little creature were running up their arm. We might ask how taps 2 to 4 came to be experienced some way up the forearm when the next tap in the series had not happened yet. How did the brain know where the next tap was going to fall?

You might try to explain it by saying that the stream of consciousness lags a little behind, just in case more taps are coming. Or perhaps, when the elbow tap comes, the brain runs back in time and changes the contents of consciousness. If so, what was really in consciousness when the third tap happened? The problem arises only if we think that things must always be either "in" or "out" of consciousness. Perhaps, if this apparently natural distinction is causing so much trouble, we should abandon it.

Even deeper troubles threaten our sense of conscious vision. You might be utterly convinced that right now you're seeing a vivid and detailed picture of the world in front of your eyes, and no one can tell you otherwise. Consider, then, a few experiments.

The most challenging are studies of "change blindness" (New Scientist, 18 November 2000, p 28). Imagine you are asked to look at the left-hand picture in the illustration below. Then at the exact moment you move your eyes (which you do several times a second) the picture is swapped for the one on the right. Would you notice the difference? Most people assume that they would. But they'd be wrong. When our eyes are still we detect changes easily, but when a change happens during an eye movement or a blink we are change blind.

Another way to reveal change blindness is to present the two pictures one after the other repeatedly on a computer screen with flashes of grey in between (for an example see http://nivea.psycho.univ-paris5.fr/ASSC). It can take people many minutes to detect even a large object that changes colour, or one that disappears altogether, even if it's right in the middle of the picture.
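
The flicker procedure just described reduces to a simple frame schedule. A sketch follows; the durations are assumed values typical of such experiments, not figures taken from this article:

```python
# Toy sketch of the flicker paradigm: picture A, grey blank, picture B,
# grey blank, cycled until the viewer reports the change.
def flicker_sequence(cycles: int) -> list[tuple[str, int]]:
    frames = []
    for _ in range(cycles):
        frames += [("A", 240), ("grey", 80), ("B", 240), ("grey", 80)]
    return frames  # (image, duration in ms) pairs

schedule = flicker_sequence(3)  # three A/B alternations
```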

What do these odd findings mean? At the very least they challenge the textbook description that vision is a process of building up representations in our heads of the world around us. The idea is that as we move our eyes about, we build up an ever better picture, and this picture is what we consciously see. But these experiments show that this way of thinking about vision has to be false. If we had such a picture in our heads we would surely notice that something had changed, yet we don't. We jump to the conclusion that we're seeing a continuous, detailed and rich picture. But this is an illusion.

Researchers differ in how far they think the illusion goes. Psychologists Daniel Simons of Harvard University and Daniel Levin of Kent State University, Ohio, suggest that during each visual fixation our brain builds a fleeting representation of the scene. It then extracts the gist and throws away all the details. This gives us the feeling of continuity and richness without too much overload.

Ronald Rensink of the University of British Columbia in Vancouver goes a little further and claims that we never form representations of the whole scene at all, not even during fixations. Instead we construct what he calls "virtual representations" of just the object we are paying attention to. Nothing else is represented in our heads, but we get the impression that everything is there because a new object can always be made "just in time" whenever we look.

Finally, our ordinary notions of seeing are more or less demolished by psychologists Kevin O'Regan of the CNRS, the French national research agency in Paris, and Alva Noë of the University of California, Santa Cruz, who first described vision as a grand illusion. They argue that we don't need internal representations at all because the world is always there to be referred to. According to their "sensorimotor theory of vision", seeing is not about building pictures of the world in our heads, it's about what you are doing. Seeing is a way of interacting with the world, a kind of action. What remains between eye movements is not a picture of the world, but the information needed for further exploration. The theory is dramatically different from existing theories of perception.

It's not clear who's right. Perhaps all these theories are off the mark. But there's no doubt about the basic phenomenon and its main implication. Searching for the neural correlates of the detailed picture in our heads is doomed because there is no such picture.

This leaves another problem. If we have no picture, how can we act on the things we see? This question may seem reasonable but it hides another false assumption: that we have to see consciously in order to act. We need only think of the tennis player who returns a serve before consciously seeing it, to realise that this is false, but the situation is odder than this. We probably have several separate visual systems that do their jobs somewhat independently, rather than a single one that produces a unified visual world.

David Milner of the University of St Andrews, and Melvyn Goodale of the University of Western Ontario, argue that there is one system for fast visuomotor control and a slower system for perceiving objects. Much of their evidence comes from patients with brain damage, such as DF, who has a condition known as visual form agnosia. She cannot recognise objects by sight, name simple line drawings, or recognise or copy letters, even though she produces letters correctly from dictation and can recognise objects by touch. She can also reach out and grasp everyday objects (objects that she cannot recognise) with remarkable accuracy. DF seems to have a visual system that guides her actions, but her perception system is damaged.

In a revealing experiment, DF was shown a slot set randomly at different angles (Trends in Neurosciences, vol 15, p 20, 1992). She could not consciously see the orientation of the slot, and could not draw it or adjust a line to the same angle. But when given a piece of card she could quickly and accurately line it up and post it straight through. Experiments with normal volunteers have shown similar kinds of dissociation, suggesting that we all have at least two separate vision systems.

Perhaps the most obvious conclusion is that the slow perceptual system is conscious and the fast action system is unconscious. But then
the old mystery is back. We would have to explain the difference between conscious and unconscious systems. Is there a magic ingredient in one?
Does neural information turn into subjective experiences just because it is processed more slowly?

Perhaps the solution is to admit that there is no stream of conscious experiences on which we act. Instead, at any time a whole lot of different
things are going on in our brain at once. None of these things is either "in" or "out" of consciousness. But every so often something
happens to create what seems to have been a unified conscious stream; an illusion of
richness and continuity.

It sounds bizarre, but try to catch yourself not being conscious. More than 100 years ago, psychologist William James likened introspective
analysis to "trying to turn up the gas quickly enough to see how the darkness looks". The modern equivalent is looking in the fridge to see whether the light is always on. However quickly you open the door, you can never catch it out.
The same is true of consciousness. Whenever you ask yourself, "Am I conscious now?" you always are.

But perhaps there is only something there when you ask. Maybe each time you probe, a retrospective story is concocted about what was in the stream of consciousness a moment before, together with a "self" who was apparently
experiencing it. Of course there was neither a conscious self nor a stream, but it now seems as though there was.

Perhaps a new story is concocted whenever you bother to look. When we ask ourselves about it, it would seem as though there's a stream of
consciousness going on. When we don't bother to ask, or to look, it doesn't, but then we don't notice so it doesn't matter. Admitting that
it's all an illusion does not solve the problem of consciousness but changes it completely. Instead of asking how neural impulses turn into
conscious experiences, we must ask how the grand illusion gets constructed. This will prove no easy task, but unlike solving the Hard Problem it may at least be possible.


Susan Blackmore

Susan Blackmore is a psychologist, writer and lecturer based in
Bristol


Re: Another look at consciousness
posted on 06/30/2002 2:42 AM by tomaz@techemail.com

If it's just an illusion, then I'd like to produce it for a zillion years.

But I don't think the word "illusion" solves anything. Nice try, but nothing more, madame professor.

- Thomas

Re: Another look at consciousness
posted on 06/30/2002 7:54 AM by azb0@earthlink.net

The author (Susan Blackmore) makes good points, but the choice of words ("Grand Illusion") to describe the sensation of consciousness is unfortunate and a distraction.

I was careful in my "flames" analogy to claim that it may be "continuity of consciousness" that is illusory, akin to thinking of a flame as "that continuously glowing thing", even though we understand that it is a self-renewing collection of individual and discrete chemical reactions, each giving off a "one-time-only" contribution to the glow.

I do believe that she is correct in asserting that there will not be found a definitive "wall" separating the mind's "conscious thoughts" from the "less-than-conscious" mental activities. Viewing consciousness as a matter of degree will probably be more constructive.

Cheers!

____tony____

Re: Another look at consciousness
posted on 06/30/2002 9:14 AM by tomaz@techemail.com

I, on the contrary, think that consciousness _is_ quite a well-definable process, despite the fact that I have no idea what it could possibly be.

Her best point is:

> but we get the impression that everything is there because a new object can always be made "just in time" whenever we look.

So it is with consciousness. We just make it when needed.

But to say that it is an illusion is the same as saying it's a kind of magic.

Who has the illusion?

- Thomas




Re: Another look at consciousness
posted on 08/22/2002 10:06 AM by prj@ruf.dk

Thomas
You say that you have no idea what consciousness could possibly be.

Please allow me to introduce such an idea.
I see consciousness as an activity in the brain where a loop of neurons forms a positive feedback loop.

We know from acoustics how a microphone amplifier system can enter into a positive feedback loop with terrifying consequences. It produces a very loud and unpleasant sound.

The network of interconnected neurons in the brain has the potential of doing the same thing.

If the conditions are right (memory plus sensory input plus mental state), a loop can start firing and maintain itself for a period of time.

This generator loop could be the explanation for what we feel as a conscious thought.
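Purely as an illustration of the kind of loop described here (my own toy sketch, not the TRANS model itself): a ring of binary threshold neurons, each exciting the next, where one brief external input starts activity that then sustains itself with no further input.

```python
# Toy sketch of a self-sustaining neural loop (illustrative only):
# a ring of binary threshold neurons in which each neuron fires iff
# its predecessor fired on the previous step.

N = 8  # hypothetical loop size

def step(state, external=frozenset()):
    # A neuron fires if its predecessor fired, or if it receives external input.
    return [1 if (state[(i - 1) % N] or i in external) else 0
            for i in range(N)]

state = [0] * N
state = step(state, external={0})   # one brief sensory "kick"
for _ in range(20):                 # the input is gone...
    state = step(state)
print(sum(state))                   # ...yet the loop is still firing: 1
```

Once the pulse is circulating, activity persists indefinitely without input, which is the "maintain itself for a period of time" property; in a real network, inhibition or fatigue would presumably extinguish it eventually.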

I have made an illustrated report about this idea. It can be downloaded from:
www.ruf.dk/trans1.doc

The idea was introduced for the first time at the Niels Bohr Institute in Copenhagen in Oct. 2001 and the initial reaction from brain specialists was excitement.

Palle R Jensen
prj@idea.dk
www.idea.dk
www.ruf.dk

Re: Another look at consciousness
posted on 08/23/2002 5:31 PM by tomaz@techemail.com

I like it. I like it very much.

- Thomas

Re: Another look at consciousness
posted on 08/23/2002 8:30 PM by azb@llnl.gov

Of course there are plenty of feedback-loops in the neural architecture. I never supposed that they formed a "line", or a collection of "trees" that all terminate "at the leaves".

Exercise of such loops is naturally involved in mental states, and consciousness in particular, I would suppose. The theory might read "louder self-reinforcing processing produces awareness".

Still seems a bit slippery. It suggests an analogy between "louder/stronger sounds gain our attention" and "louder/stronger thoughts gain our attention". But in the first instance we have a pre-existing "awareness" whose attention is directed toward the louder sound, while in the second instance we have (as it were) no pre-existing awareness whose attention is directed to the "louder thoughts"; rather, the awareness is surmised to BE the louder thoughts.

If this "explicates consciousness", then why, or why not, is the microphone-amplifier system conscious? Its feedback-loops are substantial. Why should this work for neural feedback, and not (say) acoustic feedback, or any other form of feedback?

I might consider three elements that contribute:

1. Connection/Relation Complexity (arrangement of parts). Our neurons are not simply "arranged in a line, or a great circle", etc. Very Great complexity.

2. Matter Properties (matter "matters"). Not every substrate that might mimic a "syntactic interpretation of the processing" leads to a "sensorially conscious" mind.

3. Feedback (really a consequence of 1 and 2 together. I think of this phenomenon as semi-stable resonances.)

Cheers! ____tony b____

TRANS = Thought by Repetitive Activation of Neural Sequence
posted on 08/24/2002 10:16 AM by prj@ruf.dk

The TRANS theory is not about amplifying thoughts.
It is about entering into a different state.

Normal stimulus-response activity is not necessarily conscious. Only if there exists an activated loop which has been running for some time (Libet) will the individual feel conscious.

If an individual is conscious about the same matter a lot of times, it will gradually become "hardwired" in the brain and become an automatic function which can be performed without consciousness.

I don't see the audio system as having consciousness because consciousness is something happening inside a human being.

Animals may have the potential for being conscious, but since they have no language, they are not able to use the loops the same way as humans can.

Loops in a human brain can "play around" with everything in the memory including the language. The cortex may work as a RAM memory where abstract theories can be tested and compared to reality. Animals without language have very limited possibilities to play around with anything.

Palle R Jensen

Re: Another look at consciousness
posted on 08/23/2002 7:33 PM by grantc4@hotmail.com

Has anyone ever noticed that dogs and cats don't respond (at least in my own experience) to the sounds and images of dogs and cats in photographs and TV shows? I read about an anthropologist who showed pictures of his family to a group of aborigines who had never seen photographs before. They were unable to decipher the patterns on the paper into images of people. All they saw were weird patterns that had no meaning.

I think consciousness is connected to being able to interpret what we see or hear or smell in terms that allow us to react to it. This is something we learn. It is also something other species don't learn in the same way that we do. They don't see pictures on TV or hear the animals on TV as something in their environment that they can react to. For them it's just noise (in the information sense). What do you think?

Re: Another look at consciousness
posted on 08/23/2002 8:31 PM by wclary5424@aol.com

Animals and photographs:
My cat nearly always responds to the sounds of animals on TV. He sometimes responds to pictures of animals on TV, but I don't know if he's just interested in the movement, and how much he understands of what he sees. When he was a kitten, he would challenge his reflection in the mirror, but he doesn't pay attention to mirrors any more. He also seems to like movies filled with explosions and car crashes... He never responds to pictures in magazines, I guess because they don't move, so he doesn't see them as real. I don't think that vision is as important to cats as it is to us--my cat is always interested in novel sounds and smells, though.
***

Primitive people and photographs:

I am an anthropologist, and have studied so-called primitive people for 20 years or more. People in primal cultures always recognize photographs for what they are. It is widely reported that some primitive peoples have had to learn how to watch a motion picture--some accounts indicate that at least some primitive people do not catch the central action in a movie scene the first time they watch a movie, but see small details--as if they are scanning the movie as a vigilant person would scan the horizon. However, all of these accounts indicate that after a matter of minutes, all primitive people figure out the syntax of a movie.
I have also read that the inhabitants of Tierra del Fuego did not see the ships of the first European explorers they encountered. I really doubt that.
What I do know is that different cultures process information differently, and have different cognitive maps. The Dagbani, who live in Ghana, have a completely different logical system from ours, for example. The idea of trying to find one underlying cause for any phenomenon in the world would be alien to them. They believe that everything in the world has at least two causes. This is a difficult concept for a Western mind to grasp hold of--and Greek logic is difficult for a Dagbani to understand. They look for complementary relationships among objects--so ecology is perfectly comprehensible to them, but Euclidean geometry is totally foreign. I sometimes like to imagine what a smart Dagbani would say if he were to talk to Niels Bohr.

BC

Re: Another look at consciousness
posted on 08/23/2002 8:43 PM by azb@llnl.gov

Grant,

My gf was playing a role-playing video game, and at some point the lead character gets to go "fishing". The graphics were quite good; the character stands at the edge of this pond, draws the fishing pole back, and casts the line across the water. The line behaves realistically (waves and dangles like a real line.) We had a cat that had a habit of sitting up in front of the TV and waiting for the character to cast the line, at which point she (the cat, not my gf ;) would leap at the screen and attempt to "catch" the string.

Another cat (male) will climb on my desk when I am at the home computer and try to "catch" the mouse-cursor-arrow under his paw, often becoming so frustrated that he will peek behind the screen to see if there is another way to "get at" that pesky cursor. He will also, at times, become interested if that "Crocodile Hunter" program is on (my young niece likes that program) and will attack the tail of a moving croc (oblivious to the represented "size" of the creature.)

Thoughts?

Cheers! ____tony b____

Re: Another look at consciousness
posted on 08/24/2002 12:26 AM by grantc4@hotmail.com

All the animals I know act as if the TV were just another piece of furniture. Dogs barking or cats stalking on the screen don't even cause their ears to perk up. Jungle movies, house cats meowing, none of it gets a reaction out of them.

My daughter's cats jump at a feather on the end of a string, but never at anything on the TV screen. It's as if they don't even recognize that the figure on the screen IS a dog or a cat.

Yet they seem just as aware of their environment as we are and in many ways, more aware. I had a dog once who could hear the can opener from a block or more away. A woman who left her dog at our house for a week had the dog jumping at the door and acting crazy while she was still a mile or more away. I don't know what the dog was reacting to, but somehow it was aware that "mama" was coming to get her. My kids, on the other hand, don't even know I'm home after the door slams. I have to shout, "Anybody home?" to get their attention.

I remember one time trying to tell the dog a cat was tromping across his territory. Normally, when the cat did that, he would bark and chase after it. This time the cat was behind him and jumped up on the fence when I pointed to the cat and said, "Look, a cat!" The dog merely sat down and stared at my finger. The cat, too, sat on the fence and stared at me and my pointing finger. Neither of them had the slightest idea what I was trying to do.

So if cats and dogs are conscious (and I think they are) they are conscious in an entirely different way than we are. I think our consciousness is a product of growing up in our particular cultures. Spirits are real to people who believe in them. People like Nash in the story of "A Beautiful Mind" actually see and hear people who are inventions of their own mind.

I think the difference between the way we perceive the world and the way animals perceive it is a matter of culture. There was an experiment I read about in which students were given a couple of parallel lines to look at. One was slightly longer than the other. The students were asked to look at the lines and judge which one was longer. When all the other students in the class said the short line was longer, the patsy (the only student who wasn't in on the conspiracy) marked on his paper that the short line was the longest. When asked about it, he said that's the way he saw it at the time. In other words, his perception of the lines was shaped in part by the words of the people around him.

I think it goes even deeper than this. I think a great deal of the world we perceive and react to is shaped by the information we receive from others. In fact, I think what we call consciousness is shaped by our interactions with others.

Take language as an example. In the first two years of life, children can hear and make the sounds of whatever language they are exposed to. Later in life, when they encounter a new language, they often can't hear those sounds of the new language that are not also in their first language. They have lost the ability to hear those sounds. So Mexicans who come across the border to the U.S. often say "choose" instead of "shoes" and Taiwanese trying to speak Mandarin say "seh" instead of "shr" because both Spanish and Taiwanese lack the sound "sh." They can be taught to hear it, but without instruction they usually won't. Even after being exposed to Mandarin for over 50 years, most Taiwanese still drop the "sh" when they speak it. This is in spite of most of the movies and TV in Taiwan being broadcast in the official language of Mandarin, and school being taught in that language. It's not as if they aren't constantly exposed to it on a daily basis.

In early childhood, the brain stops listening for sounds it doesn't need to hear in order to communicate. I know people can hear them on an unconscious level because they can be taught, but on a conscious level they lack meaning, and that's what it means to be conscious. We are conscious when we attach meaning to what we perceive. And we only attach meaning to the sights and sounds that affect us in the pursuit of our daily lives.

Anyway, that's my theory of consciousness. What's yours?

Grant

Re: Consciousness
posted on 06/18/2002 7:17 PM by nnason401@earthlink.net

Dr. Johnjoe McFadden appears to have discovered the biological root of consciousness. If his theory is true (and I believe it is), it suggests that present A.I. research is headed in the wrong direction. Rather than providing an inadequate summary here, I invite you all to read his revolutionary paper:

http://www.surrey.ac.uk/qe/PDFs/cemi_theory_paper.pdf

Re: Consciousness
posted on 06/18/2002 8:28 PM by tomaz@techemail.com

What does the EM field have to do with thinking?

It's just a byproduct. A single neuron can't read the entire brain's content using it.

I am very sceptical about those ideas. At least as I understand them for now.

- Thomas

Re: Consciousness
posted on 06/28/2002 6:30 PM by azb@llnl.gov

Thomas,

I just read McFadden's paper (the whole thing) and I would not be quite so harsh. Here is my synopsis:

McFadden argues (convincingly) that EM field effects, accompanying the more familiar synaptic charge transfers, play an important role in how patterns in the brain "synch-up" to produce the coherence we associate with continuity of thought, and even consciousness. At foundation, an entity that can take advantage of such effects can be far more "flow-efficient" than one that does not.

Granted, this does not really mean that such field effects are an a priori requirement for higher intellect and consciousness, but rather that this yields an efficient structure for such support (and might explain why it is the first structure that nature, at least locally, stumbled upon in giving rise to brains such as ours.)

Importantly, the paper does not argue at all that "brains must grow up in the natural biological fashion".

But the recognition that EM fields can induce changes in micro-potentials "at a distance", coupled with (usually) autonomous systems operating patterns "near threshold values", yields a nice explanation for how we "consciously" take control of our otherwise "unconscious" behaviors, when the need arises.

In summary: It's not that EM fields are a REQUIRED substrate for consciousness, but that they provide an efficiency advantage (one that may be difficult, but not impossible, to emulate in a classical signal-processing regime.)
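The "synch-up" idea can be illustrated with a standard toy model (this is my own sketch of generic phase-locking, not McFadden's CEMI theory): Kuramoto oscillators in which a weak global mean field, standing in for a shared EM field, pulls individually drifting units into coherence.

```python
import cmath
import math
import random

# Toy sketch: Kuramoto phase oscillators coupled through a global mean
# field. With coupling on, phases lock; with coupling off, they drift.

random.seed(1)
N, dt = 30, 0.05
omega = [random.gauss(1.0, 0.1) for _ in range(N)]  # natural frequencies

def coherence(theta):
    # Kuramoto order parameter |r|: 1 = perfect synchrony, ~0 = incoherent.
    return abs(sum(cmath.exp(1j * t) for t in theta) / len(theta))

def simulate(K, steps=2000):
    theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / N   # the shared "field"
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (omega[i] + K * r * math.sin(psi - t))
                 for i, t in enumerate(theta)]
    return coherence(theta)

print(simulate(K=2.0) > simulate(K=0.0))  # coupling produces coherence: True
```

The point is only structural: a shared field that every unit both shapes and feels is an efficient route to global coherence, which is the advantage the synopsis above attributes to EM effects.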

Cheers!

____tony____


Re: Consciousness
posted on 06/29/2002 3:39 AM by tomaz@techemail.com

tony!

I understand. What's bothering me the most in this picture is: how is it, then, that outside EM fields don't cause any interference? The Earth's, the Sun's, the monitor's ...?

I should hear those loud and clear - don't you think so?

Otherwise, I could accept the idea that a group of neurons induces another group directly with an EM field.

The outside noise is my single major objection.

- Thomas

Re: Consciousness
posted on 06/29/2002 4:41 AM by azb0@earthlink.net

Thomas,

Although I'm not an electronics expert, they did address that issue in the paper. They say that the cerebrospinal fluid surrounding the brain acts as a "Faraday cage" that significantly attenuates outside EM fluctuations.

They go further to explain that (1) any CONSTANT outside EM field would not have an appreciable effect, since only EM fluctuations induce currents, and (2) there IS evidence that powerful EM fluctuations from cell phones and AC fields can be shown to induce synaptic effects. They give lots of medical data, facts, and figures that seem to bear them out. They even explain how researchers are warned about the intensities to use for certain types of probes, to avoid inducing seizures in test subjects.

I have to wonder about this too. Seems like one could take a tape-eraser and accidentally "degauss" one's brain... haven't heard of that happening.

I'm no expert in medicine to validate the claims, of course.

Cheers!

____tony____

Re: Consciousness
posted on 06/29/2002 7:40 AM by tomaz@techemail.com

tony,

On the other hand, a monkey at the top of a tree can compensate for the wind. Why couldn't we do the same with the EMF ...

That's right. They might be right, after all. Especially since pigeons read their orientation directly from the Earth's magnetic field. Or so I've read.

Still, I don't see that a lot has changed. We should only take care that a group of artificial neurons can communicate with other groups via an (emulated) EM field.

- Thomas

Re: Consciousness
posted on 06/29/2002 6:00 PM by azb0@earthlink.net

Thomas,

Indeed, it need not even be an actual EM field, as long as it conveys the same "effects" (allows coordination of coherent standing waves, "phase-lock" sorts of effects, etc.)

I think they entertain a certain fallacy when, in the paper, they state that this EM thing "explains how the 'expected' continuity of consciousness can arise from the 'discrete' (a la digital) nature of neural firing."

The "continuity" of consciousness may be entirely illusory. Yet to some, this seems critically important.

A thought experiment:

Suppose we take two bunsen burners, gas on, both lit, each exhibiting a "continuous flame". We then extinguish one of them, and then re-light it using some kind of spark.

In what "real" sense is the first bunsen burner exhibiting its "original flame", and the second burner exhibiting a "new flame"? I think this is a false distinction, especially when we realize that each "flame" is the result of entirely different molecules individually undergoing chemical transitions that give off a discrete bit of radiated energy. Each flame is "completely different" from one moment to the next.

Yet we persist in thinking (a la "stream-of-consciousness", "spark-of-life-must-be-unbroken") that the first flame gets to maintain an "identity" while the second does not.

I think this fallacy is part of what makes the "EM" hypothesis seem more attractive to some. Speaking figuratively, it asks "how can I maintain the sense that I am the 'same person' if my brain were to wink out and be restarted?"

Cheers!

____tony____

Re: Consciousness
posted on 12/30/2002 4:24 PM by Rob Hoogers

I'm afraid Mr. Searle overlooks a few vital issues in his thought experiment.

This is part of a piece I wrote recently on an AI forum. The link for the full article is at the bottom.

(The article starts pointing out that function without understanding is not only possible, but also gives a simple example...)

.... The person in the room can safely continue his non-understanding of Chinese and perform anyway. But: is handling the messages just a matter of replacing symbols, or also of correcting the occasional error? Again: of course we can. We have to accept the fact that any managing of symbols by the person is a function that can lead to error, which can be noticed by the operator, and can be fixed. And here lies another trap:

This is exactly where understanding slowly creeps in. Not as a sudden assault of knowledge, but a gradual process of learning. Because error detection implies the capability to also detect OUTSIDE errors that creep in. Just comparing with old messages will give enough clues to spot the differences.

Now what that person decides to do, is a different matter. It can decide to pass the information unchanged, or change it to the content of previous messages.

It cannot know for sure whether it made the mistake itself, so it will probably opt for security and correct the error. In case it chose wrongly, the sender is sure to react sooner or later by sending correcting messages to this obvious error in translation.

Which only leads to more detailed consternation...

Thus a form of communications with the outside world becomes an established fact through internal necessity, with no knowledge of either implied or required.

And slowly, step by step, the function of passing messages mutates into one that has a limited censorship over what it passes on and what it does not. It still has no full understanding of what is going on outside, but it has definitely acquired a handle on things.

Eventually it will end up with a good working grasp of Chinese, and a very frustrated person outside, who curses himself for ever coming up with such a way of communications.

And that is exactly where the whole argument fails horribly. Simple functions evolve into more complex functions over time. Mind you, if you think I am just sabotaging Searle's experiment: sabotage comes from the French sabot, meaning wooden clog (shoe). The first workers forced to work at a pace dictated by a machine (the Jacquard looms, the very ones that inspired Babbage to create his first machines) found out fast enough that kicking one sneakily into the works of the loom would mean an instant break in the routine.

Frustrating the owner of the mill in exactly the same manner as the supervisor of the Chinese Room.

This is why looking at living organisms is more helpful than trying to limit it to a very shabby thought experiment, however well designed.

http://www.ai-forum.org/topic.asp?forum_id=3&topic_id=3797

Re: Consciousness
posted on 12/31/2002 7:12 AM by tony_b

Rob,

The "little man" in Searle's "Chinese Room", having no idea at all what the symbols "mean", simply applies a look-up table to perform translation (as if such a table were possible.)

The "little man" is merely a device to handle the table look-up without appeal to a very "advanced AI" in the first place. He is not intended (by Searle's argument) to be an independent thinker, or exhibit any choice or creative behaviors.

In the same way that this very sentence intentionally contaims two erors, the little man has no concept of an "error". All inputs are to be blindly translated according to the "table" (which must contain a rule matching ANY possible sequence of inputs, including some default output when no other rule is matched).

Indeed, the little man need not retain even one bit of "history" of what had been translated only moments earlier. He cannot recognize an "error".
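The stateless look-up behavior described here can be made concrete (a hypothetical toy table of my own, obviously nothing like a full Chinese rule book):

```python
# Hypothetical sketch of the "little man": a pure, stateless table lookup.
# Every input either matches a rule or falls through to a default; no
# history is kept, so no notion of "error" can even arise.

RULES = {
    "你好": "你好！",        # toy entries; the table "understands" nothing
    "你好吗？": "我很好。",
}
DEFAULT = "请再说一遍。"     # default output when no rule matches

def little_man(symbols: str) -> str:
    # No state, no memory of previous inputs, no judgment of correctness.
    return RULES.get(symbols, DEFAULT)

print(little_man("你好吗？"))   # → 我很好。
print(little_man("完全乱码"))   # → falls through to the default
```

Nothing in this construction retains history, so "correcting" a previous message is simply not an operation the operator can perform.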

The counterpoint to Searle's argument is to submit that the "system" understands Chinese, even though no single "part" is the understander. Much the way my brain understands English, yet no neuron can be shown to do so.

Indeed, your pocket calculator can perform "square-root of 7", and yet no single transistor, controlled register-latch, or memory cell can perform that operation. You might identify that portion of circuitry activated during the square-root operation, and submit that "it" knows how to perform that function, but it would be silly to decompose the circuit further in search of some smaller component that "understood" the operation.


Cheers! ____tony b____

Re: Consciousness
posted on 12/31/2002 7:45 AM by Rob Hoogers

Tony, interesting but untrue. Any "real" system needs error correction up to a certain degree. It is an integral part of optimisation, with the possibility of misinterpretation ensuing. And that is where the whole thing comes tumbling down. If you insist on separating ANY functionality from real-world necessities in such thought experiments, you will always end up with hopeless contradictions.

Show me one example of such symbol passing/manipulation in nature that does not have any form of error correction and I will gladly eat my words....

Rob

Re: Consciousness
posted on 12/31/2002 3:41 PM by Rob Hoogers

Even machines can't do without, even in these days...

In the monolingual project "German Vocabulary" (Quasthoff, 1998), a lexicon of full-form German words was automatically generated using machine-readable corpora, mainly newspapers and scientific journals. The project started in 1996. Reading about 250,000,000 words resulted in a word list containing approximately 3,700,000 entries. Between one and two percent of the entries contain errors, mainly due to spelling errors in the source text and misinterpretation of layout information. Because of the large number of entries there is a strong demand for algorithmic error-correction methods. The lexicon in its current state is available via the internet at http://www.wortschatz.uni-leipzig.de.

Re: Consciousness
posted on 01/02/2003 4:20 AM by john.b.davey@btinternet.com

"The counterpoint to Searle's argument is to submit that the "system" understands Chinese, even though no single "part" is the understander."
I think this is wrong. The whole point about the Chinese Room is that internally there is no comprehension of Chinese - that "understanding" is conferred on the system by the observer with whom it communicates. Internally, if I am a human Chinese speaker I do not need other observers to confirm or deny my understanding of Chinese. Another way of looking at the Chinese Room is to say that just because something looks like something, it doesn't mean that it IS that something - and just because something may act like it has internal mental states, it doesn't mean that it actually has them.

Re: Consciousness
posted on 01/02/2003 10:55 AM by Grant

>The "little man" in Searle's "Chinese Room", having no idea at all what the symbols "mean", simply applies a look-up table to perform translation (as if such a table were possible.)

Ezra Pound used this method to translate The Confucian Analects. Although they made interesting reading, they in no way resembled what Confucius said in the Analects. In the dictionary, a single word can have many meanings. The person who doesn't understand the meaning will often choose the wrong one to use in the translation. Metaphor is out the window because meaning bears little or no resemblance to the words being used (He was the Rock of Gibraltar to his wife and kids).

Language is simply not translatable in this way.

Grant

Re: Consciousness
posted on 10/29/2005 5:18 AM by anyguy

Function of consciousness:

Does it really exist?

What sort of evolutionary advantage can it have?

Especially having second-order thoughts about our feelings, or let's say the ability to describe them, seems more like a byproduct of a mind which is under selective pressure to develop the ability for abstraction and self-organised interrelations.

Re: Consciousness
posted on 10/29/2005 8:24 AM by anyguy

"But there was a feature of the seventeenth century, which was a local accident and which is still blocking our path. It is that in the seventeenth century there was a very serious conflict between science and religion, and it seemed that science was a threat to religion."

What is called here an accident is the history of materialism, or let's say of evolution. The conflict is not an accident but a deterministic occurrence when sociality is evolving to a new stage and power is transferred from the religious clique to artisan and productive forces. Therefore this is not an accident but a very essential phase of all human history.

Modernity, or capitalism, or the social production scheme based on creating surplus value, inevitably requires some power relations in order to regulate access to women and the distribution of the surplus.

Sociality is an evolutionary phenomenon of the human story, as with some other species. This means that social structures served some overall reproductive success. However, it does not mean such a process favours every individual.

Re: Consciousness
posted on 02/05/2006 9:25 AM by jpmaus

Consciousness consists of inner, qualitative, subjective states and processes of sentience or awareness.


This is called begging the question...

Re: Consciousness
posted on 02/05/2006 10:54 AM by eldras

CONSCIOUS:

the ability of a system to generate predictive models of itself and its world.



This definition is so successful that conscious systems are being built using it.
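One toy reading of that definition (my own illustration; not a claim about how any such system is actually built): a program that keeps a predictive model of a simple "world" and also a model of its own reliability.

```python
# Toy sketch: a system with (1) a world model, here a learned transition
# table, and (2) a self model, a running estimate of its own error rate.

world = [(i * 7) % 10 for i in range(100)]  # a deterministic toy "world"

model = {}        # world model: previous observation -> predicted next
self_model = 0.0  # self model: estimated probability of being wrong
errors = 0

prev = world[0]
for obs in world[1:]:
    predicted = model.get(prev, None)        # predict the next observation
    miss = predicted != obs
    errors += miss
    self_model += 0.1 * (miss - self_model)  # track own reliability
    model[prev] = obs                        # learn the transition
    prev = obs

print(errors)              # misses occur only on the first pass: 10
print(self_model < 0.01)   # it now predicts itself to be reliable: True
```

On that definition, the "conscious" part would be the self model: the system predicts not just the world but its own performance in it.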