Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0400.html

    How Can We Possibly Tell If It's Conscious?
by   Raymond Kurzweil

Abstract of talk to be delivered at the "Toward a Science of Consciousness" Conference, April 10, 2002. Sponsored by the Center for Consciousness Studies at the University of Arizona.


Who am I? What am I? Perhaps I am this stuff here, i.e., the ordered and chaotic collection of molecules that comprise my body and brain.

But there's a problem. The specific set of particles that comprises my body and brain is completely different from the atoms and molecules that comprised me only a short while ago (on the order of weeks). We know that most of our cells are turned over in a matter of weeks. Even those that persist longer (e.g., neurons) nonetheless exchange their component molecules in a matter of weeks.

So I am a completely different set of stuff than I was a month ago. All that persists is the pattern of organization of that stuff. The pattern changes also, but slowly and in a continuum from my past self. From this perspective I am rather like the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules (of water) change every millisecond, but the pattern persists for hours or even years.

So, perhaps we should say I am a pattern of matter and energy that persists in time.

But there is a problem here as well. We will ultimately be able to scan and copy this pattern in at least sufficient detail to replicate my body and brain to a sufficiently high degree of accuracy that the copy is indistinguishable from the original (i.e., the copy could pass a "Ray Kurzweil" Turing test). Human brain reverse engineering through neuromorphic modeling of both brain regions and individual neurons is further along than most people realize. I will discuss the underlying exponential trends in computation, communications, miniaturization, brain modeling, and other relevant fields to address the question of when we will encounter a nonbiological entity that can convincingly pass for a human, as well as the more difficult task of recreating the personality and intelligence of a specific person. I describe these scenarios in my essay "The Law of Accelerating Returns" (see http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html).

The copy, therefore, will share my pattern. One might counter that we may not get every detail correct. But if that is true, then such an attempt would not constitute a proper copy. As time goes on, our ability to create a neural and body copy will increase in resolution and accuracy at the same exponential pace that pertains to all information-based technologies. We ultimately will be able to capture and recreate my pattern of salient neural and physical details to any desired degree of accuracy.
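The exponential-improvement claim above can be made concrete with a back-of-the-envelope calculation. The doubling time and improvement factor below are hypothetical placeholders, not figures from the essay; the sketch only shows how a fixed doubling time turns even a huge required gain in scanning resolution into a modest number of years.

```python
import math

def years_until(target_ratio: float, doubling_time_years: float) -> float:
    """Years for a capability that doubles every `doubling_time_years`
    to improve by a factor of `target_ratio`."""
    return doubling_time_years * math.log2(target_ratio)

# Hypothetical example: a million-fold gain in scanning resolution,
# assuming a one-year doubling time.
print(round(years_until(1_000_000, 1.0), 1))  # about 19.9 years
```

Under these assumed numbers, a six-order-of-magnitude improvement takes roughly two decades, which is the general shape of the argument in "The Law of Accelerating Returns."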

Although the copy shares my pattern, it would be hard to say that the copy is me because I would (or could) still be here. You could even scan and copy me while I was sleeping. If you come to me in the morning and say, "Good news, Ray, we've successfully reinstantiated you into a more durable substrate, so we won't be needing your old body and brain anymore," I may beg to differ.

If you do the thought experiment, it's clear that the copy may look and act just like me, but it's nonetheless not me because I may not even know that he was created. Although he would have all my memories and recall having been me, from the point in time of his creation, Ray 2 would have his own unique experiences and his reality would begin to diverge from mine.

Now let's pursue this train of thought a bit further and you will see where the dilemma comes in. If we copy me, and then destroy the original, then that's the end of me because as we concluded above the copy is not me. Since the copy will do a convincing job of impersonating me, no one may know the difference, but it's nonetheless the end of me. However, this scenario is entirely equivalent to one in which I am replaced gradually. In the case of gradual replacement, there is no simultaneous old me and new me, but at the end of the gradual replacement process, you have the equivalent of the new me, and no old me. So gradual replacement also means the end of me.

However, as I pointed out at the beginning, it is indeed the case that I am being continually replaced. And, by the way, it's not so gradual, but a rather rapid process. As we concluded, all that persists is my pattern. But the thought experiment above shows that gradual replacement means the end of me even if my pattern is preserved. So am I constantly being replaced by someone else who just seems a lot like the me of a few moments earlier?

So, again, who am I? It's the ultimate ontological question. We often refer to this question as the issue of consciousness. I have consciously (no pun intended) phrased the issue entirely in the first person because that is the nature of the issue. It is not a third person question. So my question is not "Who is David Chalmers?" or "Who is Stuart Hameroff?" although David and Stuart may ask this question themselves.

When people speak of consciousness, they often slip into issues of behavioral and neurological correlates of consciousness (e.g., whether or not an entity can be self-reflective), but these are third-person (i.e., objective) issues, and do not represent what David Chalmers calls the "hard problem" of consciousness.

The answer to the question of whether or not an entity is conscious is apparent only to that entity itself. The difference between neurological correlates of consciousness (e.g., intelligent behavior) and the ontological reality of consciousness is the difference between objective (i.e., third-person) and subjective (i.e., first-person) reality. For this reason, we are unable to propose an objective consciousness detector that does not have philosophical assumptions built into it.

I do say that we (humans) will come to accept that nonbiological entities are conscious because ultimately they will have all the subtle cues that humans currently possess that we associate with emotional and other subjective experiences. But that's a political and psychological prediction, not an observation that we will be able to scientifically verify (not without making some philosophical assumptions, albeit subtle ones). We do assume that other humans are conscious, but this is an assumption, and not something we can objectively demonstrate.

I will acknowledge that David Chalmers and Stuart Hameroff do seem conscious, but I should not be too quick to accept this impression. Perhaps I am really living in a simulation, and they are part of the simulation. Or, perhaps it's only my memories of my interactions with them that exist, and the actual experiences never took place. Or maybe I am only now experiencing the sensation of recalling apparent memories, but neither the experience nor the memories really exist. Well, you see the problem.

Mind·X Discussion About This Article:

Re: How Can We Possibly Tell If It's Conscious?
posted on 03/23/2002 10:41 AM by nazgul@worldonline.dk


If we create an intelligent machine, and give it the ability to see, hear, and even feel the touch of a hand on its surface, will these things not instigate a form of consciousness in the machine? Our consciousness, the way I see it, is due to our ability to perceive ourselves and the world around us. I feel conscious simply because I feel. But how to prove that I am acting out of consciousness and not out of programming, I do not know, mainly because many of my actions are due to genetic programming. Anyway, I am looking forward to having this new entity amongst us.
Frost.

Re: How Can We Possibly Tell If It's Conscious?
posted on 03/24/2002 4:59 AM by tomaz@techemail.com


Consciousness MAY be (the latest wild guess from my (??) production line) the state of your memplex. When the memplex is sufficiently self-referenced, consciousness emerges.

This accords well with the old idea that we gained consciousness only several thousand years ago.

Having the concept of self may not be enough; consciousness arises from advancing it further. The information that came in with spoken language might be essential.

I don't know for sure.

- Thomas

Re: How Can We Possibly Tell If It's Conscious?
posted on 04/19/2002 1:12 AM by sequoiahughes@hotmail.com


I don't believe in consciousness except for my own. The rest of you are complex simulations designed to make my own existence appear worthwhile. Consciousness, like so many other things in life, has no black-and-white division. There is no consciousness or non-consciousness, simply shades in between one extreme and the other. A quark is not all that conscious, and neither is a virus. Bugs are kinda dumb, but so are severely mentally retarded people. Words like 'consciousness' and 'life' and 'intelligence' carry meaning only when used in general terms. In other words, one cannot define something as conscious just because it fits x criteria. Deciding whether or not someone or something is conscious is a matter of FAITH. As an atheist, I refuse to fall into cognitive paths dictated by religion, spirituality and the like. Therefore, you are all simulations of life. Sorry to break it to you this way.

Re: How Can We Possibly Tell If It's Conscious?
posted on 04/19/2002 1:34 AM by Citizen blue


< I don't believe in consciousness except for my own. The rest of you are complex simulations
designed to make my own existence appear worthwhile >

That's alright to believe that way; however, be aware that it is still based on a certain kind of faith principle. To say that things appear worthwhile could be giving us too much credit; then again, you could be referring to the stimulation, or simulation, that your environment affords. There is actually a theory (solipsism) that says that I, or you, am the only person or entity that exists. I find that once one falls into this existential category, much around us will not make sense, much like a Camus or Kafka novel. But alas, even anything we believe in is subject to opinion.

Re: How Can We Possibly Tell If It's Conscious?
posted on 04/19/2002 2:41 AM by tomaz@techemail.com


Do you admit that sometimes you might be elsewhere than in *this* body? You don't remember it, but it's possible. Isn't it?

- Thomas

ggg
posted on 03/17/2002 3:31 AM by philip@nelcorp.com


I think there must be some way to test for consciousness!

Re: ggg
posted on 03/17/2002 4:17 AM by tomaz@techemail.com


Me too!

- Thomas

Re: ggg
posted on 04/20/2002 2:05 AM by s_paliwoda@hotmail.com


How about the view of consciousness as an inevitable product of the increasing complexity of a computing machine? We could test that.

Slawek

Ask the Robot AI Mind
posted on 03/17/2002 11:48 AM by mentifex@scn.org


We can't tell if Ray 2 or any other Ray is conscious; we can only search for clues. Consciousness is as consciousness does. The Robot AI Mind at http://www.scn.org/~mentifex/jsaimind.html is well on its way to consciousness, because it knows the difference between "you" and "I" in conversations, that is, it has a concept of "self" as designated in the Web white paper, "Standards in Artificial Intelligence" (q.v.). The Robot AI Mind will not come into full consciousness until it finds robotic embodiment as http://www.scn.org/~mentifex/mind4th.html with implementation of the Volition and Motorium modules, so that the AI may become _physically_ aware of itself as distinct from the environment and from other entities. There is _not_ a Consciousness module in the Robot AI Mind, because consciousness is an emergent property, not a specific function.
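The "you"/"I" distinction the post cites as evidence of a self-concept can be illustrated with a toy sketch. This is hypothetical illustration code, not the actual Robot AI Mind implementation: it shows that flipping deictic pronouns when echoing a speaker is a purely mechanical transformation, which is worth keeping in mind when weighing such behavior as evidence of consciousness.

```python
# Hypothetical sketch: flip first- and second-person pronouns when a
# program answers a speaker. Not taken from the Robot AI Mind.
SWAP = {"i": "you", "me": "you", "my": "your",
        "you": "i", "your": "my"}

def reply_perspective(utterance: str) -> str:
    """Return the utterance with deictic pronouns mechanically flipped."""
    return " ".join(SWAP.get(word.lower(), word) for word in utterance.split())

print(reply_perspective("you know my name"))  # -> "i know your name"
```

A lookup table of five words suffices to "know the difference between 'you' and 'I'" in this narrow sense, which is the substance of the skeptical reply below.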

Re: Ask the Robot AI Mind
posted on 04/12/2002 3:57 PM by zorgalina@mindspring.com


Knowing that 'you' and 'I' are different is a language construct your machine can emulate.

You're still all talking about a simulacrum of something biological, it seems.

There is no need for a computer/non-animate being to be 'self'-conscious, to have an ego, or to have children or propagate anything other than other simulacra.