"Even if one thinks a mechanical device could under some condition, for example, see colors in a way we do"

So is a blind person non-conscious? And what is to say a conscious computer would not have feelings about colors? Given analog input, would not one color perhaps translate into a more desired state, which could be the equivalent of a favorite color?
"which is absurd (and at the very least: as of today purely hypothetical)"
What is absurd about it, other than your opinion? And remember that this is about a hypothetical computer 20 years from now, where the technology may be adequate.

"it still would not be able to tell the difference, as everything it claims is explainable in plain terms of the logic of its hardware and software and data, no other components involved."

But with a "learning" computer that builds neural pathways, it does not follow that it is explainable in plain terms of logic. Have you never heard of complex systems forming around simple principles? Just a couple of days ago I was reading about a neural system where, after it had learned to differentiate stuff (leave it at that, because I cannot remember the goal of the system), the programmers disassembled the network the computer had built. They then examined it using algebraic logic and found several pathways with nodes and levels that were apparent dead ends and not needed. After trimming those out, the system was no longer able to properly do its intended task, but putting those seemingly worthless pieces back in, it worked normally again. So I believe that it is possible to get a complex system that deviates from expected behavior from building simple logic connections, regardless of whether that logic is silicon or carbon based.
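(A minimal sketch of the kind of pruning experiment described above - hypothetical, since the original system isn't identified. It trains a tiny network, zeroes out the weights that look least important, and restores them; the XOR task, architecture and 50% pruning threshold are all illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 network trained on XOR with plain gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sig(X @ W1 + b1)                    # hidden activations
    out = sig(h @ W2 + b2)                  # network output
    d_out = (out - y) * out * (1 - out)     # output-layer gradient
    d_h = (d_out @ W2.T) * h * (1 - h)      # hidden-layer gradient
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

def accuracy():
    return np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5) == y)

print("trained:  ", accuracy())             # typically 1.0

# "Trim out" the weights that look least important (smallest magnitude).
mask = np.abs(W1) < np.quantile(np.abs(W1), 0.5)
saved = W1[mask].copy()
W1[mask] = 0.0
print("pruned:   ", accuracy())             # often degrades

W1[mask] = saved                            # put the "worthless" pieces back
print("restored: ", accuracy())             # works again
```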
"You could equally well give the benefit of doubt to a banana, whether it claims to be conscious or not. ;-)"
That makes no sense whatsoever. I give you the benefit of the doubt of being conscious because you were able to come to an independent conclusion and state it in a reasoned argument. However, I do not know that you are a human; I just reason that you are, because I have yet to hear of a computer that could reason as you did. Either way, I believe you to be conscious (congrats, in my opinion you have passed the Turing test...). However, I would not give the benefit of the doubt to a banana, which in my experience has never caused me to think it may be a thinking entity. If a banana were ever to object to me eating it in a meaningful way that I could pretend to understand, then perhaps I would give a second thought to its being conscious. Or if it told me specifically that it did not believe itself to be conscious and argued its case, I might believe it conscious anyway ;-)
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"What is absurd about it, other than your opinion? And remember that this is about a hypothetical computer 20 years from now, where the technology may be adequate."

It is absurd in the same way as assuming that an advanced flight simulator might cause a computer to lift off. Nobody can prove the opposite in either case, as our knowledge of gravity is not final either. You sound like subtillioN No. 2... same school, or same person?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"It is absurd in the same way as assuming that an advanced flight simulator might cause a computer to lift off."

Totally inept analogy... as usual.

"You sound like subtillioN No. 2... same school, or same person?"

Get a clue...
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Totally inept analogy... as usual."

Not inept... the idea that information processing in logic chips produces conscious seeing of colors is exactly as absurd (though theoretically not completely impossible either), if not more so.

"Get a clue..."

Same school at least.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Not inept... the idea that information processing in logic chips produces conscious seeing of colors is exactly as absurd (though theoretically not completely impossible either), if not more so."

Who is restricting it to logic chips? The post you were responding to said this: "remember that this is about a hypothetical computer 20 years from now where the technology may be adequate." You are still stuck on the simplest types of serial/logical digital computers, which completely lack any parallelism and neural network architecture. The computers that would be adequate would be similar to the computer that actually is the brain. EXPAND YOUR DEFINITIONS so you can communicate! Listen to those whom you are arguing against. Remember that the first computers were human beings, so a computer has already achieved consciousness. You simply cannot comprehend how.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Who is restricting it to logic chips?"

This discussion in general is about computers as we understand them today, except faster and bigger in the future. And that is an important topic discussed in many places, including by contemporary philosophers who have articles on this website, including Dennett, Searle and Chalmers, and the article this discussion is about. It is you who makes an exception there.

"EXPAND YOUR DEFINITIONS..."

No, that would be a different discussion. But we can have that different discussion in a separate thread. However, for the word "computer" to make sense, it would have to be something operating deterministically on _mathematical_ principles based on a _program_ (which can be self-modifying, though). And I can tell you already that my arguments would be the same, except I would have to re-formulate them in a more complicated way. Again, that would have to be a different thread. This discussion is about plain computers, just bigger and faster (and with more advanced software). Either way, humans are not computers in any sense in which the word "computer" makes sense, by the arguments that I have already mentioned.

"Remember that the first computers were human beings, so a computer has already achieved consciousness. You simply cannot comprehend how."

That does not follow. Smoke and mirrors. Humans are much more than what it takes to compute.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"This discussion in general is about computers as we understand them today, except faster and bigger in the future. And that is an important topic discussed in many places, including by contemporary philosophers who have articles on this website, including Dennett, Searle and Chalmers, and the article this discussion is about."

I don't know about Searle and Chalmers, but Dennett sees the brain as a type of computer, so his definition is already expanded.

"No, that would be a different discussion."

Yes, one in which effective communication actually happened.

"But we can have that different discussion in a separate thread. However, for the word 'computer' to make sense, it would have to be something operating deterministically on _mathematical_ principles based on a _program_ (which can be self-modifying, though)."

No. It simply must be capable of computing. Thus the human brain qualifies.

"And I can tell you already that my arguments would be the same, except I would have to re-formulate them in a more complicated way. Again, that would have to be a different thread. This discussion is about plain computers, just bigger and faster (and with more advanced software)."

You are making assumptions here. Software is not the issue. Serial, digital computers as we know them cannot function in the same way as the brain. They are completely different types of complexity.

"Either way, humans are not computers in any sense in which the word 'computer' makes sense, by the arguments that I have already mentioned."

The term was invented to refer to human beings who could compute. Don't you remember this discussion? Humans can compute, thus they can function as computers.

"That does not follow."

It does follow straightforwardly. Can humans compute? Then they can be considered computers, and they have actually functioned and continue to function as computers.

"Smoke and mirrors."

This has become your standard rhetorical device.

"Humans are much more than what it takes to compute."

Obviously, and yet they make poor numerical computers. They are entirely different types of computers. Computers as we know them, even if they become simply bigger and faster, are entirely the WRONG type of hardware to be conscious.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Wrong."

Right.

"I don't know about Searle and Chalmers, but Dennett sees the brain as a type of computer, so his definition is already expanded."

Dennett sees the brain as in principle functionally equivalent to computers, and also thinks that computers (digital computers) can be as conscious as humans. To the best of my knowledge, discussing Dennett does _not_ require an extended concept. It makes much more sense to come at it from the other side: anything a computer does cannot serve as an indication of consciousness, as anything a computer does can be explained without consciousness. In contrast, consciously seeing colors cannot be explained in computer terms. So human beings are not (or, if you want, more than) computers.

"No. It simply must be capable of computing. Thus the human brain qualifies."

Whatever or whoever consciously sees colors, whether the brain or the heart, it/she/he is not (or is more than) a computer.

"You are making assumptions here. Software is not the issue. Serial, digital computers as we know them cannot function in the same way as the brain. They are completely different types of complexity."

Explain that to Dennett and Chalmers, and I'll give you a bonus point if you can get them to express a committing agreement on that.

"The term was invented to refer to human beings who could compute. Don't you remember this discussion? Humans can compute, thus they can function as computers."

If you think that means that computers can be conscious, then I have another one for you: if we don't know where John is, and we don't know where Mary is, does that mean they must be in the same place? Besides, I remember everything.

"It does follow straightforwardly. Can humans compute? Then they can be considered computers, and they have actually functioned and continue to function as computers."

But not only computers, of course. So it does _not_ follow.

"This has become your standard rhetorical device."

Unfortunately, I find it necessary in order to explain my responses.

"Computers as we know them, even if they become simply bigger and faster, are entirely the WRONG type of hardware to be conscious."

See, then you would have to agree with me as far as this discussion goes, instead of creating smoke and mirrors.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
A computation is the process whereby an object changes shape/state proportionally as a result of an external signal/stimulus.
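(A minimal sketch of that definition, assuming nothing beyond the sentence above: an "object" whose state changes in proportion to each external signal. The class name and gain constant are invented for illustration.)

```python
# An "object" whose state changes proportionally to each external signal.
class Integrator:
    def __init__(self, gain=0.1):
        self.state = 0.0
        self.gain = gain                  # proportionality constant

    def stimulate(self, signal):
        self.state += self.gain * signal  # proportional state change
        return self.state

obj = Integrator()
for s in [1.0, 1.0, -0.5, 2.0]:
    print(obj.stimulate(s))  # ~0.1, 0.2, 0.15, 0.35 (up to float rounding)
```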
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Light waves, like waves in water, can be described by the distance between two successive peaks of the wave - a length known as the wavelength. Different wavelengths of light appear to our eyes as different colors. Shorter wavelengths appear blue or violet, and longer wavelengths appear red.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Light waves, like waves in water, can be described by the distance between two successive peaks of the wave - a length known as the wavelength. Different wavelengths of light appear to our eyes as different colors. Shorter wavelengths appear blue or violet, and longer wavelengths appear red."

Recognizing a color is obviously not what I mean by "seeing colors". Even today's computers can easily be programmed to say "I see red" when there is such a signal coming from a digital camera. But for them, a color is just a number, not what humans mean when we say we see a color consciously.

"What if I challenge your consciousness? This forum gives me the perfect opportunity to claim that you are not human, but a bot instead. Can you prove otherwise?"

If you think that I could be a bot, that doesn't mean that a bot could be a human. You are assuming that I am uneducated both scientifically and in terms of logical thinking. You are wrong.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Of course everything a computer does is because it is either directly or indirectly programmed to do something, even so-called 'learning' computers, unless they have a random number generator, which doesn't make them conscious either. The point is: there is no consciousness-detector, only logic chips. Even if one thinks a mechanical device could under some condition, for example, see colors in a way we do, which is absurd (and at the very least: as of today purely hypothetical): it still would not be able to tell the difference, as everything it claims is explainable in plain terms of the logic of its hardware and software and data, no other components involved."
And yet it is argued time and time again that everything a human, conscious mind does is directly or indirectly programmed. The important word there is "indirectly": the levels of indirectness observed in the human neural net far surpass the one or two levels we have created in computer models. But that does not mean that there is a difference or a missing outside component. When broken down to non-logical steps taken by both sides, all you are left with are random number generators and the question of their accuracy/randomness.

"If you think that I could be a bot, that doesn't mean that a bot could be a human. You are assuming that I am uneducated both scientifically and in terms of logical thinking. You are wrong."

And still you have yet to prove him false.

griffman
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"And yet it is argued time and time again that everything a human, conscious mind does is directly or indirectly programmed."

Yes, in fact the brain can be stimulated in such a way as to control its actions, and the mind will automatically think that those actions were voluntary and controlled by it. The subject will think, "I did that on purpose, by my own exercise of free will." Obviously the mind is a poor judge of where the control is coming from and seems to be simply an interface for providing coherence to the emergent programming of the deeper causal levels of the brain.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"First of all the term 'programmed' is quite ill-defined."

It wasn't me who used that term in the context of the mind, but either way, I think it is clear what griffman meant by it and I have no problem with it, so I simply took it along.

"you certainly have no proof for your assertions that consciousness is not causal"

I made no such claim here.

"while the proof mounts on the other side"

There is no scientific proof for even the existence of "qualia", conscious experience such as seeing colors. The fact that the brain has an effect on conscious experience is obvious, just as a computer has an effect on the computer monitor. It doesn't mean that digital logic can _create_ an LCD screen, which would be an absurd logic.

"Just start eating only LSD for a year"

To even think of doing that, you would have to be insane in the first place.

"It is the fuzziness and arbitrariness of the language itself that causes you to fail to resonate with any scientific/causal understanding of consciousness."

As of today, there simply is no scientific understanding of the fact that we consciously see colors. So any resonance with such a non-existing understanding would be self-delusional.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"There is no scientific proof for even the existence of 'qualia', conscious experience such as seeing colors."

Subjectivity is the basis for all objectivity. Do not even scientists experience subjective qualities? No one doubts that somehow those qualities exist. It is only what causes them and what they really are that is open for scientific and philosophical debate. There is no proof of subjective experience needed, because its existence is self-evident.

"The fact that the brain has an effect on conscious experience is obvious, just as a computer has an effect on the computer monitor. It doesn't mean that digital logic can _create_ an LCD screen, which would be an absurd logic."

Right, quite absurd indeed, and it has no relation to this discussion whatsoever, except perhaps in your mind.

"Just start eating only LSD for a year" / "To even think of doing that, you would have to be insane in the first place."

Your reaction proves my point.

"As of today, there simply is no scientific understanding of the fact that we consciously see colors."

Scientists understand that obvious fact; they just haven't yet definitively and univocally explained it.

"So any resonance with such a non-existing understanding would be self-delusional."

There is no description of consciousness that can ever be absolutely complete without being consciousness itself. The point is that there is some value to the always incomplete (abstract and generalized) descriptions of science, and the science of consciousness is no exception. Science is generalization, NOT absolute emulation.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Scientists understand that obvious fact; they just haven't yet definitively and univocally explained it."

Fortunately it is obvious, as scientists are still human beings.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
umm ok ... let's change the subject ;-)
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"And yet it is argued time and time again that everything a human, conscious mind does is directly or indirectly programmed. The important word there is 'indirectly': the levels of indirectness observed in the human neural net far surpass the one or two levels we have created in computer models. But that does not mean that there is a difference or a missing outside component. When broken down to non-logical steps taken by both sides, all you are left with are random number generators and the question of their accuracy/randomness."

If you are assuming the human mind is exclusively programmed, then you are lacking an account of humans consciously seeing colors, as that cannot be explained through programmed behavior (certainly not as of today). Materialism is not the same as science; it is one philosophy among others.

"And still you have yet to prove him false."

Why does it matter whether _I_ am a bot or a human? What matters is whether _you_, or any other reader, are aware of _your_ consciousness, your conscious experience, and can tell a difference between yourself and a pocket calculator. I am explaining to you how to do that, but a bot (or a book) might do that as well - the proof has to come from yourself.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"If you are assuming the human mind is exclusively programmed, then you are lacking an account of humans consciously seeing colors, as that cannot be explained through programmed behavior (certainly not as of today)."

First of all, the term "programmed" is quite ill-defined. Does it mean "causal", or simply that some external being set the whole thing up in a certain way to fulfill his specific set of goals? If it means "causal", as it seems to mean in this context, then you certainly have no proof for your assertions that consciousness is not causal, while the proof mounts on the other side and is readily available to any curious and conscious individual. Just start eating only LSD for a year and see what happens to your consciousness. I know that you are going to say that it is just the "contents" of consciousness that has been caused to malfunction by the external causal factors, but this is an arbitrary distinction and there is no room left for consciousness itself.

"Materialism is not the same as science; it is one philosophy among others."

Who is arguing for materialism here, which is equally as ill-defined as your other standard terms? What is materialism? That reductionism can reach a finite end in its reduction-by-division process at some indivisible quantum? That "matter" is ultimately reducible to a simple set of finite causes? That has never been my philosophical stance, but this common ground is meaningless to you because you consistently fail to define or understand your core terms. You can see no difference between causation, determinism and predeterminism, and you constantly use arguments based on predeterminism as if they were arguments against determinism, which you consistently fail to distinguish from causation. It is the fuzziness and arbitrariness of the language itself that causes you to fail to resonate with any scientific/causal understanding of consciousness.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
On the contrary, the way we see colors has everything to do with consciousness. The first thing your mind does when it detects a light wave is to classify it. The next thing it does is compare it with other input fitting that category. How you react is based on your past experience with that input and what you associate it with.

Red is often associated with blood and is therefore used in movies to elicit horror in the person who sees it. Black is likewise used this way, because we fear the dark based on what happens to us when we move around but can't see what we might bump into. For the Chinese, red is most often associated with joyous occasions. Until recently, the bride wore red instead of white on her wedding day. Science fiction movies horrify their audiences by having an alien character bleed green blood. Consciousness is concerned with how we react to what we see, hear, smell or feel.

The Chinese use the word "hei" to refer to black, darkness and evil. They're not alone, as those of you who give in to the "dark side of the force" will discover. Movies in both cultures dress good guys in white and bad guys in black. People with dark skin in China have been seen as inferior and fearful for a millennium or so. Women who work in the fields cover their hands, face and arms so they won't be darkened by the sun; otherwise they wouldn't be considered suitable for marriage to a higher-class man, such as a landlord.

So culture and experience help determine how we react to various colors under various circumstances. That's a big part of our conscious awareness of color. It's the same with animals, except that they react to smells and sounds more than they react to colors, since most of them are color blind. It's also why people ask each other, "What's your favorite color?" It affects compatibility in the minds of the people who worry about such things. It says something about what kind of person they expect you to be. The association of one thing with another is a large part of what we call consciousness.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
In my personal opinion you are confusing consciousness with perception. I agree with everything you say. I actually repeat those same color associations on a daily basis when advising people on UI design.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
What I was trying to say is that consciousness is how we react to what we perceive. What do we do with that data after we get it? To me, consciousness is the process of using our perceptions to plot our paths through the world we live in. Such paths are mental as well as physical. Consciousness is, in my mind, a lot like data mining. We sift through constant streams of incoming data/information to find nuggets of truth with which to make decisions about what to do.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
There is an inherent problem with splitting the consciousness from its content. It tacitly assumes that there is an internal viewer, a homunculus that watches the sensations happen, and thus it gets us no further than where we started.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"There is an inherent problem with splitting the consciousness from its content."

When you say that, you need to make clear what you mean by content. If you mean, for example, the colors we see consciously, then yes: the color is the consciousness itself, without anything else looking at it (except that we can also be aware of the fact that "there is color-seeing", as a reflection on what we are doing; and we can think about what we are doing, about what is happening). If however by content you mean that which we think about, the information, then they are of course not the same; the information is only part of it. (And that is the "content in the sense of information" that I spoke about when referring to Spinoza in our previous discussion.) When we think about a tree, there is a similarity between our thought and the actual tree. This similarity is abstract, and the idea of the tree consists of more than that which is similar between this idea and the tree. The idea of the tree and the tree have a somewhat similar form, as the idea is a partial model of the tree, but the idea is more than this similarity, more than this form that they have in common. Of course.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Okay, I think the biggest issue here is that we have a poor definition of the term "seeing colors" and how it relates to consciousness.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Okay, I think the biggest issue here is that we have a poor definition of the term 'seeing colors' and how it relates to consciousness."

This issue is easily resolved: when talking about consciousness, "color" refers to the color of the visual image that you see, so to speak, "in front of you". When you feel pain, it is that which hurts. When you are happy, it is the feeling that you experience, as opposed to the electro-chemical functioning of your body. You may assume that they are the same, but then you are pre-assuming that which you would have to prove. There is no analytical definition (to the best of my knowledge... ;-)). It is defined by the obvious conscious experience that all of us human beings have each moment of our lives. An undeniable fact of life.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
We must first solve the majority of ethical problems within the "Human Condition". This will directly affect the possibility that we will not pass on our inherent negative possibilities of human unethical action to our cyber-offspring. Even if our future may allow "cyber" to create "cyber", the Genesis will be of human hands. This indicates not only the wonderful beauty and awe of Humanity but also our ability to hate and be flawed. Of course, the true underlying thesis of my statement is transparent. It has been printed, written and filmed in all of the languages of the world since we could dream of such things.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Our brief time of consciousness, stimulated by less than half a dozen senses, is a puzzling phenomenon. Since our bio-software could not possibly produce our bio-hardware, the latter is the only instrument of our current state called consciousness. And that idea only applies to our species (this intra-species communication is very difficult, as we only have use of a commonly accepted language that weakly transfers from one conscious entity to others!). When Neanderthals briefly co-existed with modern sapiens, their state of consciousness could not be compared to ours. Perhaps the Neanderthal consciousness was identical to that of the current computer system.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
What is teaching but programming? As the Jesuits used to say: "Give me a child until the age of six and it will be mine thereafter."
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
...until that mind uses science or acquired intelligence to shatter slavery by mythology!
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
An intelligent machine would have a hard time getting the same rights as a person. A person creates itself from the two joined cells containing its parents' DNA. An intelligent machine (at least to start with) would be created with the material and financial resources of a person. How can this machine, which cost its creator many millions of dollars, just be defined by others to belong to itself? This would rob the machine's creator of the just benefits of their investment. If forfeiture of their machine was predicted by the creator of this intelligent machine, then the machine would not be created at all. An intelligent machine will not be created without all of the lesser creations that would come before it. So why wouldn't the creator just create the machine with enough intelligence to be useful, but not enough that it could ever be granted autonomy?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"It is easy to duplicate the idea of pain in any simple program even now, but it is also simple to make the program feel no pain whatsoever."

The "idea of pain" is not the same as pain. Consciousness is not merely the realm of ideas; that would be something more properly called "thought". The thought of happiness is not the feeling of happiness, and the "feeling" is within consciousness. The reason one speaks of something being "conscious" is exactly that, not (only) the rational reflection about feelings.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
If a person has pain from a broken body part, but that pain is totally not felt by the brain because of drugs, do you feel pain? Does the pain exist? If pain signals are sent to the brain but the brain doesn't receive them, then it is exactly the same as if they were not sent. We know other people are conscious like we feel we are because they have characteristic behaviors (just like ours) in response to various inputs that we recognize. Consciousness and feelings can only be measured by the way they are responded to. We have no capability to look inside anyone's head and identify a thing called emotions or consciousness. Even if we identified a physical part of the brain as holding emotions or consciousness, it would be a very human-centric conclusion to believe that only human mechanisms count.

If the kinds of inputs that go into our brain are sent to an intelligent machine and the machine responds in ways that seem like our own, then that machine must be conscious, if we can say that anyone except ourselves is conscious. The idea that you can define consciousness as some mechanism that can only be human is a tautology: I am, therefore I am. If I can describe all the consequences of my behavior when I get pain input, then I can program those behaviors based on the inputs I have experienced. This set of behaviors and inputs would have to be considered pain, because they look and act just like they would in a human. Pain is nothing outside of the experience of the sensory inputs and the behaviors associated with those inputs.
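(A toy sketch of that claim, with entirely hypothetical stimulus names and responses: "pain" as a programmed mapping from inputs to the behaviors associated with them, including the case where the signal is sent but not received.)

```python
# "Pain" as a programmed mapping from sensory input to behavior.
PAIN_RESPONSES = {
    "sharp": ["withdraw limb", "vocalize", "inspect injury"],
    "burn":  ["withdraw limb", "cool the area", "avoid the source"],
    "dull":  ["guard the area", "reduce movement"],
}

def react(stimulus: str, intensity: float) -> list:
    # Below threshold the signal is "sent but not received": no behavior,
    # as with the drugged patient above.
    if intensity < 0.3:
        return []
    return PAIN_RESPONSES.get(stimulus, ["orient toward stimulus"])

print(react("sharp", 0.9))   # full "pain behavior"
print(react("sharp", 0.1))   # signal suppressed: nothing happens
```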
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"The idea that you can define consciousness as some mechanism that can only be human is a tautology."

I am not defining it as something that can only be human. (Besides, I would not call it a mechanism.) Read carefully. I am not going to define it again; the need for a "subjective" definition must be obvious at this point. You make the pre-assumption that consciousness must be something that can be measured, but (obviously, at least as of today) it can't. The only way we know about consciousness is that each human being experiences his/her own consciousness. We know what kind of things computers are doing. The question is: could those things have anything to do with consciousness? We humans have a consciousness-detector, even if it unfortunately detects only our own consciousness (for example, the fact that we see colors consciously), and we know that computers don't have a consciousness-detector. So computers are not doing the same things that we do. They might use the same words, but they would do so for different reasons. That doesn't prove that computers are not conscious, but it shows that anything a computer does can be explained without consciousness, and without a consciousness-detector. When a computer says it is conscious, that statement is not based on a consciousness-detector.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"I am not going to define it again; the need for a 'subjective' definition must be obvious at this point."

You have become lopsided, blue. You think the only true definition of consciousness must be subjective, and you fail to understand that it is neither subjective nor objective. We need BOTH subjective AND objective definitions of ALL phenomena in order to approach the totality of understanding of the universe, which is neither subjective nor objective.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I read your article carefully, but your meanings are not necessarily all that clear. I understand the words subjective and objective, but I don't think that just talking about perspective is very helpful. You say that consciousness is not manifested by a mechanism? If not, then what, a soul? Scientists have found that all things they have tested so far have a physical presence in the brain. What proof is there that consciousness would be any different?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Would this person be considered a human? Conscious? I think they would. If it looks like a duck, walks like a duck, smells like a duck, I think we can consider it to be at least functionally a duck."

We would not be doing justice to the facts of consciousness if we treated it as a matter of guesswork, because it isn't. It is a verifiable fact of daily life. At least as of today, but I would say in principle, it is not verifiable objectively (from the third-person perspective), only subjectively (from the first-person perspective). The definition of consciousness must reflect this _fact_. Science needs to acknowledge non-objective _facts_ if it wants to remain committed to truth.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Science needs to acknowledge non-objective _facts_ if it wants to remain committed to truth."

Obviously, but who says it doesn't? Have you heard of psychology? Just because it needs to acknowledge subjective facts does not mean that objectivity is useless.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
_non-objective_
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
What facts do you suppose science needs to, or even could, address which are neither objective nor subjective? Note that these facts would be entirely invisible, and thus inherently outside the scope of science. By definition, science does not work that way. It bases its claims, supposedly, upon observation, whether subjective or objective.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
A person does not create itself. A child is conceived, born, fed, clothed, housed and educated at not inconsiderable expense to the parent(s). Do you consider this an investment? Does it pay off?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I would agree that on the surface there are similarities between creating an intelligent machine and having a child; however, the design of a child is not our design. In making a machine, you get to decide what is included and what is not. With a child, you get what the child's genes have ordained, not necessarily what you would like. Second, there seems to be no good reason (on the surface) why a parent would put huge amounts of their life and material resources into bringing up a child. If having half a copy of your genes carry on in the world after you are dead is not the reason to have children, then I would say there seems to be no logic in having them at all. Passing on your genetic makeup or personality profile would hardly be the reason to create an intelligent machine. It is likely that a corporation, and not an individual, will create that intelligent machine. The personality and priorities of the machine would be the work of many people, and so I can't see how a corporation could justify the huge expense of creation to get nothing in return, just because it could.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
The computer will be the slave of man and belong to man until man loses the skills he gave the computer. Then the slave will become the master.
Practical Considerations
I am not a lawyer, so I can't comment in that way on this issue. But I think there is a practical consideration here which I haven't seen in the member comments.
Re: Practical Considerations
Mechanized:
"We can argue all day about whether the machine is actually 'alive' or 'self-aware' or whatever term you like."

We can argue many days. ;-)

"It doesn't matter whether *you* believe the computer is 'self-aware' or 'alive' if the computer is *doing things in the real world* that are similar to what humans do."

Computers are not doing similar things. We know what kind of things computers are doing. They are executing program instructions; that's all. It is as simple as that. They don't have a consciousness-detector as a meaningful basis for a statement such as "I am conscious." But we humans do: we know that we are conscious, that we see colors, as a matter of fact. We do "have" a consciousness-detector. Consciousness-detectors exist. They are called awareness, consciousness, attention, reflection, many words. We are not machines. That is a fact.
Re: Practical Considerations
"Computers are not doing similar things. We know what kind of things computers are doing."

He said IF. This means expand your definition of the term "computer"! Can you not try to communicate? IF a computer (whatever someone happens to mean by this term, and it could be made out of biological proteins and possess neural networks, for all we know) acts alive and conscious in every sense of the term, then you have no clue whether or not it really is.

"They are executing program instructions; that's all."

That is like saying that all you are doing is saying words, that's all. You forgot about the sentences and the paragraphs and the entire meaning emergent from the complexity. This understanding requires intuition, which you seem to be quite lacking.

"We are not machines. That is a fact."

That is your arbitrary distinction. It is NOT a fact, because no one here agrees on any definition of "machine". c o m m u n i c a t e. This means trying to grasp the differences in arbitrary definitions! Not using those differences to your advantage to spread your mystery-mongering confusion.
Re: Practical Considerations
"He said IF. This means expand your definition of the term 'computer'!"

There is no if. We are discussing computers, not any "expanded" concept of computers. Computers execute program instructions, and that's it. All the output, including "sentences" and "paragraphs", is the result of executing program instructions, not the result of any consciousness-detector.
Re: Practical Considerations
"There is no if. We are discussing computers, not any 'expanded' concept of computers."

I'll repeat what he said: "It doesn't matter whether *you* believe the computer is 'self-aware' or 'alive' if [[did you catch that!]] the computer is *doing things in the real world* that are similar to what humans do." IF the computer is doing things in the real world that look as if it were conscious - this means reacting to new, unexpected situations, etc. - you fill in the rest. Try to pay attention and communicate effectively, instead of just furthering your mission to spread mystery and confusion to all corners of the globe.
Re: Practical Considerations
"IF the computer is doing things in the real world that look as if it were conscious - this means reacting to new, unexpected situations, etc. - you fill in the rest."

If a computer that acts EXACTLY as if it is conscious in the real world, with real surprises etc., does not fit into your definition of what a computer is, then expand your definition for the sake of efficient communication.
Re: Practical Considerations
The problem here is the word "computer", because computers can do more than simply compute. They do not have to function in a purely symbolic/syntactical role, nor do they have to be digital. They can emulate the analog nature of the human brain to any desired level of accuracy (or inaccuracy).
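(A small sketch of that last point, under simplifying assumptions of my own choosing, not the poster's: a digital program approximating a continuous, analog-style neuron model by numerical integration, where shrinking the step size dt moves the simulation toward the analog limit.)

```python
# A digital approximation of a continuous (analog) neuron model.
def simulate_lif(current=1.5, dt=0.001, t_end=0.1,
                 tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, integrated with Euler steps."""
    v, t, spike_times = 0.0, 0.0, []
    while t < t_end:
        v += dt * (current - v) / tau   # discretized analog dynamics
        if v >= v_thresh:               # threshold crossing: emit a spike
            spike_times.append(round(t, 4))
            v = v_reset
        t += dt
    return spike_times

print(simulate_lif(dt=0.001))  # smaller dt approaches the analog limit
```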
Re: Practical Considerations
"We can argue many days. ;-)"

Especially if you are using different definitions and talking past each other. What I see, blue, is that you have no concept of metaphysics, thus you have no system in which to understand anything; you have no clue what consciousness is (which you freely admit when you call it a mystery); you know very little about cognitive science, which you seem to think is a pseudo-science, etc., etc. You have this tremendous, permeating lack of understanding which you feel compelled to force upon others by using the looseness of the language to your advantage, for the purpose of confusing people so that they can share in your profoundly mysterious lack of understanding. Just a thought ;-)
Re: Practical Considerations
"I'll repeat what he said, [...]"

His statement contained a conditional, but my reply did not and does not require a conditional.

"[...] your mission to spread mystery and confusion to all corners of the globe. [...]"

Note the style.

"If a computer that acts EXACTLY as if it is conscious in the real world, with real surprises etc., does not fit into your definition of what a computer is, then expand your definition for the sake of efficient communication."

This statement would acknowledge that a computer of my definition might not be able to act exactly as if it is conscious. This would support my position, but you probably didn't mean that.

"Especially if you are using different definitions and talking past each other."

It is not my responsibility to define computers. That has already been done to an extent sufficient for this discussion. I would be willing to define the term "machine" as generally as anything operating in accordance with deterministic mathematical laws of physics, if that were necessary, but it is not necessary, and it would only complicate this discussion.

"You have this tremendous, permeating lack of understanding which you feel compelled to force upon others [...]"

Note the style.

"Just a thought ;-)"

Not quite. I find your style ridiculous and unacceptable.
Re: Practical Considerations
Oh no! You disapprove of my style?
Re: Practical Considerations
"His statement contained a conditional, but my reply did not and does not require a conditional."

Of course not. It requires an attempt to understand the meanings of his words as expressed with a conditional.

"It is not my responsibility to define computers. That has already been done to an extent sufficient for this discussion."

It is your responsibility to attempt to understand what is being communicated. This means altering your definitions when it is OBVIOUS that they are incompatible, or at the very least making it known that you are using a different definition.

"I would be willing to define the term 'machine' as generally as anything operating in accordance with deterministic mathematical laws of physics"

That is far too narrow a definition for me, and according to that definition a human would not be a machine. We already know of deterministic systems whose behavior cannot be described by any mathematical law. We have a whole field of study, called complexity theory, that deals with such systems.

"Not quite. I find your style ridiculous and unacceptable."

Who cares? Just try to communicate, instead of trying to capitalize on confusion in order to magnify it.
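(A standard illustration of the kind of system complexity theory studies - my example, not the poster's: the logistic map is fully deterministic, yet in its chaotic regime nearby starting points diverge exponentially, so no practical formula predicts its long-run behavior.)

```python
# The logistic map: one deterministic rule, chaotic long-run behavior.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.200000, 0.200001      # two almost-identical starting states
for _ in range(60):
    a, b = logistic(a), logistic(b)
print(a, b)                    # after 60 steps they bear no resemblance
```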
Re: Practical Considerations
subtillioN:
"Of course not. It requires an attempt to understand the meanings of his words as expressed with a conditional."

Since you insist: he said, "It doesn't matter whether *you* believe the computer is 'self-aware' or 'alive' if the computer is *doing things in the real world* that are similar to what humans do." The "if" relates to the fact that, as of today, computers are hardly, if at all, doing such things. In the context of this discussion, I have to assume that he assumes that in the future computers will do such things. There was no sign that he assumes this will require a re-definition of computers. So your objection is completely baseless.

"Oh no! You disapprove of my style?"

Yes. This is the second case in a short period of time where you are using insulting language (such as the word BS over in the other thread), and again it turns out that you are completely without the slightest basis for even a reasonable objection. You are simply beside the point with your insults and rhetorical tricks, smoke and mirrors. On reading Mechanized's last post, I think I understood him quite well. Your mention of something deterministic yet not describable mathematically is something we are not discussing here so far; that would have to be a quite different discussion.
Re: Practical Considerations
I'm not sure whether a re-definition is necessary, though this will probably resolve itself as "computers" continue to gain abilities once reserved to humans alone. However, I would argue (hypothetically) that if a "computer" actually achieved what we could define as human-level consciousness, it would at that point cease to be merely a computer.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
From the posts, it seems that the main discussion is centered on consciousness, with another centered on economics.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Perhaps the problem arises from the use of a noun instead of the use of a verb, as Erich Fromm points out for other issues of modern society."

Good point! Consciousness has become a sacred object. An idol absolutely untouchable by those who worship its divine mystery. They would rather keep the subjective mystery absolutely intact than allow the idol to become defiled through objective understanding.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Good point! Consciousness has become a sacred object. An idol absolutely untouchable by those who worship its divine mystery. They would rather keep the subjective mystery absolutely intact than allow the idol to become defiled through objective understanding."

Rather consciousness than the idea of a huge clockwork. ;-) When it comes to conscious experience, reductive understanding is no understanding at all.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Rather consciousness than the idea of a huge clockwork. ;-)"

Must you cling to outdated concepts? Haven't we had this discussion before? We are in the age of the fractal, not the clock.

"When it comes to conscious experience, reductive understanding is no understanding at all."

It is all-or-nothing with you. ALL forms of understanding have their limits. Some are better than others, but there is no use in abandoning any form of understanding. The key is to use them all in conjunction. ...Perhaps that is too obvious?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"It is all-or-nothing with you."

You are talking about yourself.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Blue: "When it comes to conscious experience, reductive understanding is no understanding at all."
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
_reductive_
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
"Perhaps the problem arises from the use of a noun instead of the use of a verb, as Erich Fromm points out for other issues of modern society."

Consciousness is neither a noun nor a verb. You are losing sight of consciousness each time you try to define it in objective terms. That doesn't mean that there is no consciousness, or being conscious. Consciousness is being conscious. Not a noun and not a verb, and not the same as thought.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I would like to help Subtillion if I can here.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Sorry, I became confused about the speakers here. I was addressing Subtillion when I should have been addressing Blue. Sorry about the confusion.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Mechanized:
"Sorry, I became confused about the speakers here. I was addressing Subtillion when I should have been addressing Blue. Sorry about the confusion."

How do you mean? (From the earlier post:)

"I would like to help Subtillion if I can here."

Where did you get the idea that subtillioN needs help? ;-) Or is this the point where you wanted to address me rather than subtillioN?

"You seem to be arguing that there is something ineffable and sublime about consciousness. I agree with that completely. It is a mysterious and complex thing."

No, you are falling victim to subtillioN's suggestions. Consciousness is a plain fact of daily life. I am saying that we don't (and also that we won't) have an objective definition for the fact that we are conscious. This is not mysterious to me: the reason is that consciousness is subjective ("subjective experience", "conscious experience") and I don't consider the subjective to be a subset of the objective. To repeat a statement I made earlier: you (not you personally) are losing sight of consciousness each time you try to define it objectively. (Since I am not trying to do that, I don't wind up in that situation.)

"*So far* we do not have computer programs that can exhibit the same level of mysterious and complex behaviors. But to say that we will *never* achieve that is like saying humans will never go to the moon."

Consciousness is not and cannot be defined based on external behavior. Consciousness is understood to include such things as consciously seeing colors, hearing sounds, feeling pain and feeling happiness. An actor can behave as if he were happy, but that does not mean the same as actually being happy. (Although it might be enjoyable in itself.)

"As to your assertion that we are not machines, I think this is because you believe that a machine must be made of metal and other inorganic parts, and that all machines are extremely finite and simple."

No...

"...They are, for the most part, deterministic. Likewise the physical processes that happen in your brain are entirely mechanical - given that electricity can also be said to be a mechanical process."

..._This_ would be my understanding of a machine: "physical processes that happen in your brain are entirely mechanical". I think we are working with the same definitions. SubtillioN's objection was that we are not, and that I would be to blame for that. However, if you say "for the most part", thinking of quantum uncertainty, you leave a door open that could allow almost anything, and I can't argue with something like that (and in fact wouldn't want to). I am arguing only about machines in the sense of something completely deterministic. I am saying we cannot be completely deterministic machines, since that would not allow us to make a judgment about, for example, the fact that we consciously see colors, and then verbally express this judgment. It would contradict the consciousness-detector that we "have".

"Why do you care whether somebody thinks a machine can be conscious or not? If it is because you are concerned that we somehow denigrate the value of human consciousness by making extravagant claims about the future of Artificial Intelligence, I think you can put your worries to bed."
Yes, that is a good part of the reason.
"It is a very complicated problem, and despite predictions from people like Hans Moravec, I think that we are going to find that the software improvements do not happen at nearly the same rate as the hardware improvements. Software development does not follow Moore's Law. So I think that in our lifetimes, we will only see the baby steps."
These would be practical problems. I am arguing against the conceptual, theoretical assumption that it could be possible that something deserving the name "computer" could _actually_ be conscious, based on it being a computer.
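As a rough sketch of the hardware side of the rate claim quoted above: under Moore's-Law-style growth, capability compounds quickly. The 18-month doubling period below is an illustrative assumption, not a figure from this thread:

# Hardware capability under exponential doubling.
# The 18-month doubling period is an assumption for illustration.
years = 20
doubling_period_years = 1.5
growth_factor = 2 ** (years / doubling_period_years)
print(f"growth over {years} years: {growth_factor:,.0f}x")  # ~10,321x

No comparable compounding is known for software productivity, which is the point of the quoted claim.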
I would like to be wrong about this, because I would like to see AI appear in my lifetime for a number of reasons. But I am not convinced it is going to happen in my lifetime.
I am not opposed to computer research as such; on the contrary. I think computers will be able to do a lot of things, including using language to a quite high degree. However, I am saying the concept of computers (in any sense comparable to the sense we are using) in principle cannot include consciousness.
In any case, if what you truly revere is the beauty, subtlety and sanctity of consciousness, then what do you care what hardware it runs on? If consciousness itself is what is really important, then what difference does it make whether that consciousness emerges from organic molecules or synthetic ones?
Actually, it does not make a difference to me. My argument would be that both assumptions are conceptually impossible.
Finally, I would suggest that your repeated flat assertions that consciousness on synthetic hardware is an eternal impossibility are not convincing anyone, and never can. None of us can say what the future holds. A lot of people who are a lot smarter than you or I are spending a lot of time and money trying to do this. Is your aim to get them all to stop? If so, why?
Well, I am not trying to stop anyone from building hardware. I am simply arguing that consciousness cannot be a deterministic mathematically describable physical process.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Consciousness is a plain fact of daily life. I am saying that we don't (and also that we won't) have an objective definition for the fact that we are conscious. This is not mysterious to me: the reason is that consciousness is subjective

Did you realize that Spinoza said the very same thing? The attributes are essentially subjectivity and objectivity. They are correlative but not inter-functional. You are correct, blue, that there will never be an objective description of subjectivity. There will simply be an abstract objective description of the functioning of the same mode that can be seen subjectively from the inside.

("subjective experience", "conscious experience") and I don't consider the subjective to be a subset of the objective.

Neither do I. They are the eternally necessitated methods of comprehension for any thinking thing.

To repeat a statement I made earlier: You (not you personally) are losing sight of consciousness each time you try to define it objectively.

Some people can keep track of both attributes at once.

(Since I am not trying to do that, I don't wind up in that situation.)

So you sacrifice objective descriptions so they won't distract you from the subjective experience?

Consciousness is not and cannot be defined based on external behavior.

The experience is not being defined from the outside; only the mode is being described. The subjective and objective views are complementary and are simply being correlated. Only intuition can bridge this gap.

I am arguing against the conceptual, theoretical assumption that it could be possible that something deserving the name "computer" could _actually_ be conscious, based on it being a computer.

So call it something else.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
SubtillioN:
There will simply be an abstract objective description of the functioning of the same mode that can be seen subjectively from the inside. Some people can keep track of both attributes at once.

The "consciousness-detector" (our ability to "detect" our own consciousness and make verbal, physical statements about it) contradicts, for example, Spinoza's E2P6 and E2P7 (which are essential).

So call it something else.

No, this discussion is about computers...funny, though...

Only intuition can bridge this gap.

Although computers might use so-called "fuzzy logic", computers do not and cannot have intuition of a kind that could bridge a gap like that.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
The "consciousness-detector" (our ability to "detect" our own consciousness and make verbal, physical statements about it), contradicts for example Spinoza's E2P6 and E2P7 (which are essential). There is no such thing as a "consciousness detector". SubtillioN: Only intuition can bridge this gap.
I was talking about human intuition for the correlation of subjective and objective descriptions. Oh well... |
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
There is no such thing as a "consciousness detector".

I guess the reason would be that it contradicts Spinoza.

I was talking about human intuition for the correlation of subjective and objective descriptions. Oh well...

You were talking about what?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I guess the reason would be that it contradicts Spinoza.

You guessed wrong.

You were talking about what?

I am not going to repeat myself. Re-read the post if you care to know.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I am not going to repeat myself. Re-read the post if you care to know.

I did, but how do you know there is subjectivity at all, if you don't have a consciousness-detector? What are you talking about then?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I did, but how do you know there is subjectivity at all, if you don't have a consciousness-detector? What are you talking about then?

Consciousness is not something that can be detected like magnetism, light, or radioactivity. It is a highly complex set of self-reflexive patterns and events. Consciousness models the world. Consciousness is part of the world. Therefore consciousness reacts to and models the world, including itself. To call this self-reflection a consciousness detector is to confuse the issue with pointless abstractions.

BTW, maybe you should start another thread dealing with Spinoza and consciousness detection, or explain here why you feel this contradicts his metaphysics. Note that Spinoza does not address the structure of consciousness, nor did he know the anatomy of the brain or cognitive science. He simply says that the attribute of thought encompasses all subjectivity (the intrinsic nature or experience) of ALL modes. This automatically includes the self-reflection of self-consciousness.
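The "consciousness reacts to and models the world including itself" point can at least be pictured structurally: a model that is part of the world it models ends up referring to itself. A toy sketch of bare self-reference, emphatically not a model of consciousness:

# A toy "world" that contains its own model, so the model
# indirectly refers to itself - bare self-reference, nothing more.
world = {"sky": "blue", "contents": []}
model = {"represents": world}      # the model represents the world...
world["contents"].append(model)    # ...and is itself part of that world
assert world["contents"][0]["represents"] is world  # the loop closes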
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
To call this self-reflection a consciousness detector is to confuse the issue with pointless abstractions.

It is not an abstraction, but the simple fact that we can tell that we are conscious and make a verbal statement about it. This already contradicts Spinoza's E2P6 and E2P7. It means that consciousness has an impact on physical events. Or in other words, that the subjective has an impact on the objective. I wonder how much longer the academic community will need to realize this. It means that computers don't have a consciousness-detector but we humans do.

BTW, maybe you should start another thread dealing with Spinoza and consciousness detection or explain here why you feel this contradicts his metaphysics.

No need for another thread, it is as simple as that.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
It means that consciousness has an impact on physical events. Or in other words, that the subjective has an impact on the objective.

This shows directly that you don't understand Spinoza. The subjective and objective are descriptions. Consciousness is part of the flux of causation. It does have an impact on physical events because it is one. It is as simple as that. You obviously don't understand Spinoza, and you blame this confusion on him.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
This shows directly that you don't understand Spinoza.
Or it shows that Spinoza doesn't understand consciousness, plus that you don't understand Spinoza. The distinction between conscious experience and physical measurements is obviously not a difference of verbal or mathematical description. Or you have a very strange notion of "description". It is a difference in reality that remains when descriptions are absent (probably in animals), and that's probably the part that you don't understand. It cannot be removed, ignored or deflated by just making the announcement "it is one". You are apparently not aware that there are facts that need to be understood, not just theories that need to be merged and not just definitions that need to be extended in order to avoid any "artificial distinctions". And your style, again, indicates that your disagreement is pre-determined.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Or it shows that Spinoza doesn't understand consciousness, plus that you don't understand Spinoza.

Spinoza does not attempt to explain consciousness. He simply provides a metaphysical scaffolding in which to place it. You interpret Spinoza so that by your own admission he makes no sense and somehow forgets to include in his metaphysical scaffolding the most obvious phenomenon in existence, namely consciousness. I interpret Spinoza so that the whole thing is easily resonant with all of objectively and subjectively observed reality. Take your pick, coherence or stupidity. Either Spinoza was a complete moron who failed to account for his own consciousness or you have interpreted him incorrectly. I have already pointed out your errors in the other Spinoza thread; see http://www.kurzweilai.net/mindx/frame.html?main=/mindx/show_thread.php?rootID%3D20593

The distinction between conscious experience and physical measurements is obviously not a difference of verbal or mathematical description.

I never said it was. Objectivity and subjectivity are simply different points of view from which descriptions follow. They originate from the same reality, which is a causal part of a causal universe. So yes, consciousness can cause things in this universe. This is as plain as day. We cause things all the time. No one is doubting this most obvious of facts.

Or you have a very strange notion of "description".

Indeed.

It is a difference in reality that remains when descriptions are absent (probably in animals), and that's probably the part that you don't understand.

I was talking about the descriptive aspect of the formative split between subjective and objective points of view.

It cannot be removed, ignored or deflated by just making the announcement "it is one".

"It is one" meant that consciousness is a physical event. Read it again in its correct context: Consciousness is part of the flux of causation. It does have an impact on physical events because it is one.

You are apparently not aware that there are facts that need to be understood, not just theories that need to be merged and not just definitions that need to be extended in order to avoid any "artificial distinctions".

Oh, is that it? You are the one urging people to believe that consciousness will never be objectively explained. You say directly: give up objectively trying to define consciousness, because it only gets in the way. Consciousness is a fact and it needs to be understood. We can't gain understanding by giving up one of our essential methods of understanding. Without objectivity we would still be in the stone age...where you are wrt consciousness.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
[...] Take your pick, coherence or stupidity. Either Spinoza was a complete moron who failed to account for his own consciousness or you have interpreted him incorrectly.

Or both. Get rid of this language!

I never said it was. Objectivity and subjectivity are simply different points of view from which descriptions follow.

I am simply responding to what you wrote in the previous message: "The subjective and objective are descriptions." They are not descriptions, and neither are they "simply different points of view from which descriptions follow" in the sense in which one would usually understand such a phrase: in the form you used here, one would assume you are talking about theoretical points of view; at least it is not clear what you would mean otherwise. You have repeatedly been reluctant to acknowledge that this is not only about descriptions, and even then only in vague terms. They are real facts. It is a fact that we see colors consciously (subjectively), and a fact that we make measurements physically (objectively). Those are real-life events. Either there is a _real_ difference between those two aspects, or there is not. If you say they are two sides of the same thing, then is there a _real_ difference between these two sides, or not? Take your pick, so that we have something real to discuss. If there is a real difference, then the laws of physics (as we know them) address only one side and are not causally closed, since the other side (conscious experience) has a modifying impact on the "physical" side ("consciousness-detector").

I was talking about the descriptive aspect of the formative split between subjective and objective points of view.

I have doubts that anyone understands clearly what this means. Is there a _real_ difference, or not? And if yes, what is this real difference? My point is that this difference makes a difference.

"It is one" meant that consciousness is a physical event. Read it again in its correct context.

What else should it have meant? Announcing that "consciousness is a physical event" does not resolve the distinction between subjective and objective, between conscious experience and physical measurement. The fact that we see colors is not part of a physical understanding of the brain.

Oh, is that it? You are the one urging people to believe that consciousness will never be objectively explained. You say directly: give up objectively trying to define consciousness, because it only gets in the way.

Certainly a scientific understanding based on acknowledging first-person experience and insight would be better than no understanding of conscious experience at all. The restriction of science to third-person physical knowledge is counterproductive. Furthermore, such a restricted science would be wrong even in third-person terms, as it follows that any purely third-person-based description would not be self-contained, causally closed. As I have repeatedly clarified more or less during this whole discussion, consciousness has been used as synonymous with conscious experience, the "subjective" side. It is obvious that computers can implement functions of information processing, language processing, recognition, planning, making choices (chess computers), etc. Those functions are possible without consciousness, even though in a human being they are (largely) conscious, and may happen differently.

Thus when "comparing" computers and humans, the adequate distinction is between "consciousness" and "information processing", even though the "conscious experience" of "consciousness" is that which is clearly different from "information processing". When the question is asked whether computers can be conscious, that means one is discussing consciousness in the sense of conscious experience: the question whether computers actually feel pain :-( and happiness :-D . This is also the question debated in the article that this discussion is about.

Consciousness is a fact and it needs to be understood. We can't gain understanding by giving up one of our essential methods of understanding. Without objectivity we would still be in the stone age...where you are wrt consciousness.

Where did you get the idea that I am talking about giving up anything? I am talking about gaining understanding of conscious experience and qualia, which are currently not understood, except in ways that I very much support. That will avoid confusion and thereby support any objective understanding that is actually possible (which is a lot ;-). Understanding your limits is a part of learning. Your accusation that I am arguing against objective understanding is smoke and mirrors. If the laws of physics (or any third-person descriptions) are not causally closed, then that is a difference objectively, and I want to know if that is so!
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I have a question for Blue: where do you believe this "consciousness-detector" you talk about physically resides?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Mechanized,
I have a question for Blue: where do you believe this "consciousness-detector" you talk about physically resides?
The consciousness-detector acts on the brain, of course, when we say "I am consciously seeing a color". Whether it is independent from the physical brain is a different question, and it also depends on what you mean by "physical". I am arguing that consciousness is not a mechanical process, and that the brain, _if_ seen as a mechanical process, cannot be completely deterministic, as it must allow the "consciousness-detector" and thereby consciousness to act on it. So depending on how you put it, the brain is either not a mechanical process, or this mechanical process is not self-contained, not causally closed, not deterministic, as consciousness must act on it. I would think the brain is not a completely mechanical process in the first place.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Blue: I am arguing that consciousness is not a mechanical process

If by that you mean that my consciousness is the product of something other than my brain, I'd be interested to hear where else you think it comes from.

Blue: I would think the brain is not a completely mechanical process in the first place.

What phenomena occur in the brain that are not mechanical? Is there something that occurs in the brain that does not obey the laws of physics?

Again, so I understand: do you believe that your "consciousness-detector" (or whatever you deem the important, defining aspect of human consciousness) is a physical process that exists in the physical brain (the network of brain cells), or do you believe that it does not exist in physical reality, but rather exists "somewhere else"?

I think reality will turn out to be weirder than we can currently imagine, and it would not surprise me if our minds turn out to "reside outside physical reality" or something along those lines. But since I have no way to prove or disprove that, I don't spend as much time thinking about that sort of thing as I do about lots of other things. There just isn't time in my life for that line of inquiry, because I can only speculate about it, and cannot establish one way or another whether it is true or not.

Regards, Mechanized
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Mechanized,
If by that you mean that my consciousness is the product of something other than my brain, I'd be interested to hear where else you think it comes from.

And where does the universe come from? I'd like to know the answer to both questions as well... however, I'm not arguing that the brain and consciousness come from different "places", just that (a) the deterministic mathematical description cannot address consciousness, and that (b) the mathematical description cannot be a causally closed (self-contained) concept, since the "consciousness-detector" allows us to make physical statements about consciousness. (Described in more detail on my homepage.)

What phenomena occur in the brain that are not mechanical? Is there something that occurs in the brain that does not obey the laws of physics?

This question is about point (a) above. The short answer is that even a complete third-person physical description of a computer (or machine) would not tell us whether the computer is consciously seeing colors the way we do, or not. Even if one believed that the conscious experience is completely correlated (although that point would be more than hard to make if it were to include the conscious "look" of colors), such a correlation could not be derived from third-person physical measurements alone, but would have to be established with the help of first-person knowledge of human beings. This would not be so if consciousness were a purely mechanical process. It simply does not fit into that category.

Again, so I understand: do you believe that your "consciousness-detector" (or whatever you deem the important, defining aspect of human consciousness) is a physical process that exists in the physical brain (the network of brain cells), or do you believe that it does not exist in physical reality, but rather exists "somewhere else"?

The "consciousness-detector" is related to point (b) above. It means that a human being can "translate" or "transform" the awareness of being conscious into the verbal, physical (and mathematically describable) statement: "I consciously see a color". As described on my homepage, there are various philosophical objections to both points (a) and (b), but I claim with confidence that they do not hold. Point (a) rules out materialism, and point (b) rules out epiphenomenalism and dual-aspect theories... although this is somewhat simplified, but I don't want to consume your limited time with a lot of details.

Thanks for your interest, even if limited.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Blue said:
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
What you're talking about here is a system in which the brain acts as hardware and consciousness is the result of running software. Without the hardware, the system can't operate and consciousness is what the software uses the hardware to produce. (I am speaking metaphorically here.) For example, the combination of software and hardware produce the screen you are looking at on your computer. The changing picture on that screen is neither the hardware nor the software, but it is impossible to produce without the interaction of both.
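The analogy can be put in code: the "picture" below is produced only while software runs on hardware, and is stored in neither the program text nor the machine at rest. A minimal sketch of the metaphor, with everything invented for illustration:

import time

def frame(t):
    # a transient "picture": it exists only while being rendered
    return "".join("*" if (i + t) % 4 == 0 else "." for i in range(24))

for t in range(3):
    print(frame(t))   # each frame is gone as soon as the loop moves on
    time.sleep(0.1)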
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I believe I agree with most of the substance of your comments in the last message, but I still do not see how it proves:
Blue: I am simply arguing that consciousness cannot be a deterministic mathematically describable physical process.
...in any case, as I have said, I am not really inclined to discuss the nature of consciousness, as that discussion does not seem productive to me in any concrete sense, so I'm gonna leave this thread to others. You are obviously very passionate about your opinions, and whether you or I or anyone else agree, I think that it is a good thing that you are trying to think about things and understand them, so best of luck to you! :-)

Regards, Mechanized
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I believe I agree with most of the substance of your comments in the last message, but I still do not see how it proves:

Even though you say your interest is of a lower priority, it is definitely welcome. Without attempting to involve you in a discussion, I will try to give a short and simple response to this and your other points, although I do not have time right now. Meanwhile, should you find a spare minute, you might have enough interest to take a look at a text of a few pages I wrote on my homepage at http://www.occean.com
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
One thing I was trying (but possibly failed) to get across with my analogy of the brain as computer is that we'll never be able to find mind or consciousness by examining the brain any more than we can find the picture on the screen of your computer by looking into the computer itself. It isn't in there. It only exists on the screen for the time it is projected there by the computing process.
Autoabdicate
"WE DEMAND RIGIDLY DEFINED AREAS OF UNCERTAINTY AND DOUBT!" -- Broomfondle
Re: Autoabdicate
"WE DEMAND RIGIDLY DEFINED AREAS OF UNCERTAINTY AND DOUBT!" -- Broomfondle You got them - they are called conscious experience, or qualia. ;-) |
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I believe that as humans we have consciousness, emotional experience and intelligence. I think that these three factors determine our sentience.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
Hi to all,
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
The reason I believe computers should not be given human-like rights is that they have storage capacity and we have the ability to transfer the data held within that storage. This means a computer can go without power for a long time and be copied from place to place without any harm done; if it is not as fragile as a human being, it doesn't need the same rights. Sure, the data can be destroyed, but who cares, as long as there's a backup. Right?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
PS - To clarify, I don't believe that something should be protected if it can be equivalently reproduced. We can reproduce the physical part of a person, and perhaps many of that person's attributes, through cloning, but we can't exactly reproduce any interaction with nature. A computer, however, would have the ability (given a very large storage space) to copy all the data from every encounter it had ever had onto another hard drive and pick up where it had left off.
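The copy-and-resume ability described above is already routine for program state. A minimal sketch using Python's pickle module; the file name and state contents are made up for illustration:

import pickle

state = {"memories": ["encounter 1", "encounter 2"], "step": 42}

# Snapshot: serialize the entire state to durable storage.
with open("backup.pkl", "wb") as f:
    pickle.dump(state, f)

# ...power loss, new hardware, years in storage...

# Restore: an equivalent copy picks up where the original left off.
with open("backup.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == state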
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I go to sleep at night. I wake up in the morning.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
We are, or were till today, the most intelligent life on planet Earth amongst other biological life (plants and animals). Is this enough of a reason for humans to think that they are superior to all other life? Is there, or can there be, something that's smarter than we are?
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I was once given this analogy:
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
The problem I see with this analogy is that the input I am receiving is not the Chinese books but the English instructions, and therefore, since I know English, I am able to comprehend the input and make decisions about what to do with it before I respond or give my output. Therefore this analogy only tells me that if a computer were placed in this same situation (without any external information), I could expect it to do a much more efficient job than me, with the same level of comprehension (essentially none at all).
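The rule-following setup under discussion (what is commonly known as Searle's Chinese Room) maps directly onto a lookup table: the program below produces Chinese-looking "answers" without comprehending anything. The symbol pairs are invented for illustration:

# The "room": instructions pair input symbols with output symbols.
# Nothing here understands Chinese; it only matches patterns.
RULEBOOK = {
    "你好吗": "我很好",    # invented pair: "how are you" -> "I am fine"
    "你是谁": "我是房间",  # invented pair: "who are you" -> "I am the room"
}

def room(symbols):
    # follow the rulebook; comprehension is not required
    return RULEBOOK.get(symbols, "……")

print(room("你好吗"))  # fluent-looking output, zero understanding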
To clarify, I don't believe that something should be protected if it can be equivalently reproduced. A computer, however, would have the ability (given a very large storage space) to copy all the data from every encounter it had ever had onto another hard drive and pick up where it had left off. - ZeusAlmighty

As ZeusAlmighty stated, it would be possible to replicate the computer's mind (software) to a certain point in time, and this can be further extended to its body (hardware). But this is akin to the ideas from the movie The 6th Day, where they were able not only to clone humans but to make copies of their memories up to and including the point at which the brain scan took place. Does that mean that if we were to have clones which have our same software (mind) and hardware (body) in the future, they (i.e. you, since they are essentially exactly the same as you in every way) won't be protected under human rights?

Some food for thought: there are many issues surrounding cloning and creating artificial intelligence which are quite similar, but there would definitely be a lot of benefits. To give a drastic example, if you were to die today and leave your family without your support, I am sure some of you reading this would want your clone to take your place if it had all your past experiences and knowledge, as well as your body in the same condition as at the point at which you died. Would you not want your clone in this situation to have the same human rights as you, even though it would think and feel exactly the same as you would? This same example can be extended to computers, and it is probably not as controversial because we replicate them all the time. If computers gain a form of consciousness and decide to replicate themselves, I wonder if, in comparison to clones, their duplicates would have more rights than human clones (provided they have any rights at all).

It is quite logical to assume that any sentient being that was aware of a threat to its existence would take measures to protect itself, provided that the being had a sense of self-worth AND that the measures it would take did not conflict with any other value system endowed to the being. - t1dude

Most likely computers would require many rights different from those which are currently expected to be respected for humans. We don't currently have any experience in such matters, but most definitely sometime in the future the rights of humans will change, and quite possibly the rights of conscious computers as well. As many movies, such as The Animatrix, have outlined, if there were to exist conscious computers and they weren't provided with the rights they felt they required, then they might fight to obtain them.

This whole discussion reminds me a lot of the movie Planet of the Apes, where George Taylor wasn't allowed to have any of the rights that apes had because he belonged to an inferior species. Anything he said and did was believed to be directly taught to him, and he was believed to be unable to actually comprehend ideas. There is currently no known limit to what technology will be capable of, and giving examples of how things work now won't make future technology completely impossible. Humans have been able to make many things from past science fiction into reality, and it's possible that computer consciousness could be one of those things.
Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
I don't have a problem with body/mind dichotomies, as they both move, and then you're talking dimensions, which is a branch of maths - that's cool.