Consciousness is a Big Suitcase
by Marvin Minsky

Is consciousness reducible to a set of mechanisms in the brain acting in concert? In this discussion with the Edge's John Brockman, Marvin Minsky peers into the suitcase of the mind.


Originally published February 27, 1998 at Edge. Published on KurzweilAI.net August 2, 2001.

MINSKY: My goal is making machines that can think, by understanding how people think. One reason we find this so hard to do is that our old ideas about psychology are mostly wrong. Most of the words we use to describe our minds (like "consciousness", "learning", or "memory") are suitcase-like jumbles of different ideas. Those old ideas were formed long ago, before 'computer science' appeared. It was not until the 1950s that we began to develop better ways to think about complex processes.

Computer science is not really about computers at all, but about ways to describe processes. As soon as computers appeared, describing such processes became an urgent need. Soon after that, we recognized that this was also what we'd need to describe the processes that might be involved in human thinking, reasoning, memory, pattern recognition, and so on.

JB: You say 1950, but wouldn't this be preceded by the ideas floating around the Macy Conferences in the '40s?

MINSKY: Yes, indeed. Those new ideas were already starting to grow before computers created a more urgent need. Before programming languages, mathematicians such as Emil Post, Kurt Gödel, Alonzo Church, and Alan Turing already had many related ideas. In the 1940s these ideas began to spread, and the Macy Conference publications were the first to reach more of the technical public. In the same period there were similar movements in psychology, as Sigmund Freud, Konrad Lorenz, Nikolaas Tinbergen, and Jean Piaget also tried to imagine advanced architectures for 'mental computation.' And in neurology there were my own early mentors, Nicholas Rashevsky, Warren McCulloch and Walter Pitts, Norbert Wiener, and their followers, and all those new ideas began to coalesce under the name 'cybernetics.' Unfortunately, that new domain was dominated mainly by continuous mathematics and feedback theory. This made cybernetics slow to evolve more symbolic computational viewpoints, and the new field of Artificial Intelligence headed off to develop distinctly different kinds of psychological models.

JB: Gregory Bateson once said to me that the cybernetic idea was the most important idea since Jesus Christ.

MINSKY: Well, surely it was extremely important in an evolutionary way. Cybernetics developed many ideas that were powerful enough to challenge the religious and vitalistic traditions that had for so long protected us from changing how we viewed ourselves. Those changes were so radical that they undermined cybernetics itself, so much so that the next generation of computational pioneers, the ones who aimed more purposefully toward Artificial Intelligence, set much of cybernetics aside.

Let's get back to those suitcase-words (like intuition or consciousness) that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can't yet explain. This in turn leads us to regard those words as though they named "things" with no structures to analyze. I think this is what leads so many of us to the dogma of dualism: the idea that 'subjective' matters lie in a realm that experimental science can never reach. Many philosophers, even today, hold the strange idea that there could be a machine that works and behaves just like a brain, yet does not experience consciousness. If that were the case, it would imply that subjective feelings do not result from the processes that occur inside brains. Therefore (so the argument goes) a feeling must be a nonphysical thing that has no causes or consequences. Surely, no such thing could ever be explained!

The first thing wrong with this "argument" is that it starts by assuming what it's trying to prove. Could there actually exist a machine that is physically just like a person, but has none of that person's feelings? "Surely so," some philosophers say. "Given that feelings cannot be physically detected, it is 'logically possible' that some people have none." I regret to say that almost every student confronted with this can find no good reason to dissent. "Yes," they agree. "Obviously that is logically possible. Although it seems implausible, there's no way it could be disproved."

The next thing wrong is the unsupported assumption that this is even "logically possible." To be sure of that, you'd need to have proved that no sound materialistic theory could correctly explain how a brain could produce the processes that we call "subjective experience." But again, that's just what we were trying to prove. What do those philosophers say when confronted by this argument? They usually answer with statements like this: "I just can't imagine how any theory could do that." That fallacy deserves a name, something like "incompetentium."

Another reason often claimed to show that consciousness can't be explained is that the sense of experience is 'irreducible.' "Experience is all or none. You either have it or you don't-and there can't be anything in between. It's an elemental attribute of mind-so it has no structure to analyze."

There are two quite different reasons why "something" might seem hard to explain. One is that it appears to be elementary and irreducible, as gravity seemed before Einstein found his new way to look at it. The opposite case is when the 'thing' is so much more complicated than you imagine that you just don't see any way to begin to describe it. This, I maintain, is why consciousness seems so mysterious. It is not that there's one basic and inexplicable essence there; it's precisely the opposite. Consciousness is an enormous suitcase that contains perhaps 40 or 50 different mechanisms that are involved in a huge network of intricate interactions. The brain, after all, is built by processes that involve the activities of several tens of thousands of genes. A human brain contains several hundred different sub-organs, each of which does somewhat different things. To assert that any function of such a large system is irreducible seems irresponsible, at least until you're in a position to claim that you understand that system. We certainly don't understand it all now. We probably need several hundred new ideas, and we can't learn much from those who give up. We'd do better to get back to work.

Why do so many philosophers insist that "subjective experience is irreducible"? Because, I suppose, like you and me, they can look at an object and "instantly know" what it is. When I look at you, I sense no intervening processes. I seem to "see" you instantly. The same for almost every word you say: I instantly seem to know what it means. When I touch your hand, you "feel it directly." It all seems so basic and immediate that there appears to be no room for analysis, and nothing left to explain. I think this is what leads those philosophers to believe that the connections between seeing and feeling must be inexplicable. Of course, we know from neurology that dozens of processes intervene between the retinal image and the structures that our brains then build to represent what we think we see. That idea of a separate world for 'subjective experience' is just an excuse for the shameful fact that we don't have adequate theories of how our brains work. This is partly because those brains evolved without developing good representations of those processes. Indeed, there are probably good evolutionary reasons why we did not evolve machinery for accurate "insights" about ourselves: our most powerful ways to solve problems involve highly serial processes, and if these had evolved to depend on correct representations of how they themselves work, our ancestors would have thought too slowly to survive.

JB: Let's talk about what you are calling "resourcefulness."

MINSKY: Our old ideas about our minds have led us all to think about the wrong problems. We shouldn't be so preoccupied with those old suitcase-ideas like consciousness and subjective experience. It seems to me that our first priority should be to understand what makes human thought so resourceful. That's what my new book, The Emotion Machine, is about.

If an animal has only one way to do something, then it will die if it gets into the wrong environment. But people rarely get totally stuck. We never crash the way computers do. If what you're trying to do doesn't work, then you find another way. If you're thinking about a telephone, you represent it inside your brain in perhaps a dozen different ways. I'll bet that some of those representational schemes are built into us genetically. For example, I suspect that we're born with generic ways to represent things geometrically, so that we can think of the telephone as a dumbbell-shaped thing. But we probably also have other brain-structures that represent those objects' functions instead of their shapes. This makes it easier to learn that you talk into one end of that dumbbell and listen at the other end. We also have ways to represent things in terms of the goals that they serve, which makes it easier to learn that a telephone is good for talking to somebody far away.
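
A minimal sketch of that multi-representation idea, in Python (purely illustrative: the three schemes, the dictionary layout, and the fallback rule are assumptions of this example, not anything Minsky specifies):

    # Illustrative sketch only: one object stored under several
    # independent representational schemes (all names invented).
    telephone = {
        # geometric scheme: a rough shape, perhaps innate
        "shape": "dumbbell",
        # functional scheme: what each part is for
        "functions": {"one end": "talk into it", "other end": "listen at it"},
        # goal scheme: what the whole object is good for
        "goals": ["talking to somebody far away"],
    }

    def describe(thing, scheme):
        """Try the requested scheme first; if the object has no entry
        under it, fall back to the other schemes instead of failing."""
        for key in [scheme] + [k for k in thing if k != scheme]:
            if key in thing:
                return f"{key}: {thing[key]}"
        return "no representation applies"

    print(describe(telephone, "goals"))   # answered by the goal scheme
    print(describe(telephone, "weight"))  # unknown scheme; falls back to shape

The fallback loop is the point of the sketch: a system that holds several partial representations rarely gets totally stuck, because when one scheme has nothing to say, another can take over.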

Continued at: http://www.edge.org/3rd_culture/minsky/minsky_p3.html

Copyright © 1998 by Edge Foundation, Inc.




 
 

Mind·X Discussion About This Article:

The Mind
posted on 12/26/2001 12:47 AM by gmcpherran@esystemsw.com


I think that computers will be able to match and exceed our abilities in the physical world of action and computation. However, a computer will not be able to ponder things such as itself, the origin of the universe, and its purpose for existence in the same way, and with the same legitimate wonder, that humans do. Computers can be programmed to do such things and even mimic outward emotions, but they will never have the same dependency on the answers to these questions that we have. That is, they will never truly experience and care; these aspects will put the emphasis on the "artificial" in AI. It is exciting to think of the new advances that AI offers, though, and I'm all for it. We just need to be careful not to idolize technology as the final answer to all human questions, needs, and goals.

Greg McPherran

Re: The Mind
posted on 12/26/2001 9:18 AM by tomaz@techemail.com


McPherran's barrier? What makes you so sure that it exists? Or even: what, except your gut feeling, puts the barrier exactly there?

- Thomas


Re: The Mind
posted on 12/28/2001 10:25 AM by gmcpherran@esystemsw.com


I think that we are more than just atoms and molecules. Otherwise, why would we care about preserving our lives?

Re: The Mind
posted on 12/28/2001 12:30 PM by tomaz@techemail.com


Some forms built of atoms and molecules don't wish to preserve or replicate themselves.

They are not very prevalent. We, who are, care a great deal about our replication.

Still, I don't think we should identify ourselves with atoms, but with the _information_ engraved into the molecules.

- Thomas

Re: The Mind
posted on 12/26/2001 9:51 AM by grantc4@hotmail.com


I suspect man and machine will be marching into the future hand in hand and machines will be asking these questions and seeking answers to them because that is what we will want them to do. I don't see our partner in this symbiotic relationship doing much of anything alone. After all, our purpose in creating and developing these tools was to help us answer these questions.

The next step will be for them to become partners in the great transformation of mankind from a bunch of individuals into an organism composed of both man and machine acting in unison. The computer is not evolving on its own. It is evolving in a Lamarckian way to help us do the work we decided needs doing. What we have designed them to do will be a fundamental part of their future development.

Re: The Mind
posted on 12/28/2001 2:18 PM by grantc4@hotmail.com


Have you checked out References on the Global Brain/Superorganism at:

http://pespmc1.vub.ac.be/GBRAINREF.html ?

Re: The Mind
posted on 01/04/2002 8:37 PM by s_paliwoda@hotmail.com


AI should exceed human abilities, all of them. "Never say never": if something seems impossible in your eyes, that doesn't mean other people won't come up with the right ideas to achieve it. Machines will probably ponder much deeper and more abstract notions, since we'll actually be those machines. Otherwise, humanity as we know it now will have no place in the future. Also, I'm pretty sure that nobody will be programming those "machines" the way it's done now, just as nobody programs current humans.
Slawek

Re: Consciousness is a Big Suitcase
posted on 01/07/2002 1:14 PM by john.b.davey@btinternet.com


Good old Marv! The biggest problem in world scientific history, solved in a few words. Anybody out there care to expand upon the reducibility of subjective experience? Anybody know how to describe the colour blue to a blind man?

Re: Consciousness is a Big Suitcase
posted on 01/12/2002 3:50 AM by jamdk37@hotmail.com


Marvin's basic idea that thought is resourceful is, to me, wrong. Thought is not a genius but a product, or end result, of inspiration and sudden insight. Show me the structure of inspiration, which is the fountainhead of intelligence and resourcefulness. Is there a spell checker at this site?

Re: Consciousness is a Big Suitcase
posted on 01/12/2002 4:51 PM by Blue Oyster Boy


I know of a character on Star Trek: The Next Generation, or something like that, by the name of Data. He could not taste ginger ale, but he could give you its molecular structure; he was an android, which can be confused with a human but has robot capabilities. In the show, the Borg are constantly looking for answers and seeking to assimilate any other entity in the universe. As I recall, they (singular) are very advanced and are still on that search for knowledge. Now I know that this is science fiction, but is it that far off from the truth? What will this Singularity bring to us besides a lush life?