GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
by Peter B. Lloyd

Why, exactly, do the rebels have to enter the Matrix via the phone system (which after all doesn't physically exist)? And what really happens when Neo takes the red pill (which also doesn't really exist)? And how does the Matrix know what fried chicken tastes like? Technologist and philosopher Peter Lloyd answers these questions and more.


To be published in Taking the Red Pill: Science, Philosophy and Religion in The Matrix (BenBella Books, April 2003). Published on KurzweilAI.net March 3, 2003.

As the essays throughout this book demonstrate, the Wachowski Brothers designed The Matrix to work at many levels. They carefully thought through the film's philosophical underpinnings, religious symbolism, and scientific speculations. But there are a few riddles in The Matrix, aspects of the film that seem nonsensical or defy the laws of science. These apparent glitches include:

• The Bioport—how can a socket in your head control your senses? How can it be inserted without killing you?

• The Red Pill—since the pill is virtual, how can it throw Neo out of the Matrix?

• The Power Plant—can people really be an energy source?

• Entering and Exiting the Matrix—why do the rebels need telephones to come and go?

• The Bugbot—what's the purpose of the bugbot?

• Perceptions in the Matrix—how do the machines know what fried chicken tastes like?

• Neo's Mastery of the Avatar—how can Neo fly?

• Consciousness and the Matrix—are the machines in the Matrix alive and conscious? Or are they only machines, intelligent but mindless?

This essay addresses these questions and shows how these seeming glitches can be resolved.

THE BIOPORT

Can the machines really create a virtual world through a bioport? And how does it work? The bioport is a way of giving the Matrix computers full access to the information channels of the brain. It is located at the back of the neck—probably between the occipital bone at the base of the skull, and the first neck vertebra. Wiring would best enter through the soft cartilage that cushions the skull on the spinal column, and pass up through the natural opening that lets the spinal cord into the skull. This avoids drilling through bone, and maintains the mechanical and biological integrity of the skull's protection. A baby fitted with a bioport can easily survive the operation.

The bioport terminates in a forest of electrodes spanning the volume of the brain. In a newborn, the sheathed mass of wire filaments is pushed into the head through the bioport. On reaching the skull cavity, the sheath would be released, and the filaments spread out like a dandelion, gently permeating the developing cortex. Nested sheaths would release a branching structure of filamentary electrodes. As each sheathed wire approaches the surface of the brain, it releases thousands of smaller electrodes. In the neonate, brain cells have few synaptic connections, so the slender electrodes can penetrate harmlessly.

With its electrodes distributed throughout the brain, the Matrix could deliver its sensory signals in either of two places: at the sensory portals or deep inside the brain's labyrinth. For example, vision could be driven by electrodes on the optic nerves where they enter the brain. Artificial signals would then pass into the visual cortex at the back of the brain, which would handle them as if they had come from the eyes. Correspondingly, outgoing motor nerves would also have electrodes at the boundary of brain and skull. This simple design mirrors the natural state of the brain most closely. It is not, however, the only possibility. Electrodes could alternatively be attached in the depths of the brain, beyond the first stages of the visual cortex. This would greatly simplify the data processing. In normal perception, most of the incoming information isn't processed; information you aren't paying attention to is filtered out. If the Matrix were to deliver information directly to the output axons from the sensory cortex—as opposed to the input to the cortex—then it would save itself the job of filling in all those details.

One scene tells us which method the Matrix uses. When Neo wakes and finds himself in a vat, he pulls out the oxygen and food tubes, drags himself out of the gelatinous fluid, and—perceives the world. The fact that he can see and hear proves that the visual and auditory cortices of his brain are working. This wouldn't be possible if the Matrix had put its sensory data into the deeper centers of his brain. For then his sensory cortex would have been bypassed: it would never have received any stimulation, and would have wasted away. In that case, Neo would wake from his vat and find himself blind and deaf, with no sense of smell or taste, no feeling of touch or heat in his skin, no awareness of whether he was vertical or horizontal, or where his arms or legs were. The Matrix must have input its visual data just where the optic nerve from the eyeball passes into the skull, rather than in the midst of the brain's vision processing. Likewise, Neo's ability to walk and use his arms shows that the motor cortex is also developed and functioning. Indeed, even the cerebellum, which controls balance, must be working. So, the Matrix must be capturing its motor signals from the brain's efferent nerves after they have finished with the last stage of cortical processing, but before the nerves pass out of the skull.  

The rebels use the bioport to load new skills into their colleagues' brains—writing directly into permanent memory. The Matrix itself never implants skills in this way; folks in the virtual world learn things in the usual manner by reading books and going to college. So, why did the architects of the Matrix build into the bioport this capability to download skills? It is actually a byproduct of how the bioport is installed. They could have attached electrodes to just the sensory and motor nerve fibers. That, though, is difficult: the installer must predict where each nerve fiber will be anchored, which is hard to do reliably, given the plasticity of the neonate brain; and it must navigate through the brain tissue to find these sites. A more robust and adaptable method is to lay a carpet of electrodes throughout the whole brain, and let the software locate the sensory and motor channels by monitoring the data flows on the lines.
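To make the idea concrete, here is a minimal sketch, in Python, of how such software-side channel discovery might work. Everything in it—the function names, the traffic-log format, the 80 percent threshold—is invented for illustration; the film specifies no algorithm.

```python
# Hypothetical sketch: classify bioport lines by observed traffic, not anatomy.
# A line that mostly carries data *into* the brain behaves like a sensory channel;
# one that mostly carries data *out* behaves like a motor channel; quiet lines are spare.

def classify_lines(traffic_log, threshold=0.8):
    """traffic_log maps line_id -> list of (direction, volume) samples,
    where direction is 'in' (toward the brain) or 'out' (from the brain)."""
    channels = {"sensory": [], "motor": [], "spare": []}
    for line_id, samples in traffic_log.items():
        total = sum(volume for _, volume in samples)
        if total == 0:
            channels["spare"].append(line_id)
            continue
        inbound = sum(volume for direction, volume in samples if direction == "in")
        ratio = inbound / total
        if ratio >= threshold:
            channels["sensory"].append(line_id)
        elif ratio <= 1 - threshold:
            channels["motor"].append(line_id)
        else:
            channels["spare"].append(line_id)   # ambiguous lines left unassigned
    return channels

# Example: line 0 looks sensory, line 1 looks motor, line 2 is idle spare capacity.
log = {0: [("in", 95), ("out", 2)], 1: [("out", 80), ("in", 5)], 2: []}
print(classify_lines(log))
```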

That spare capacity remains available for others to exploit, and the rebels use it to download kung-fu expertise into Neo's brain and to implant helicopter piloting skills into Trinity's. If the Matrix ever learned this technique, it could create havoc for the rebels, implanting impulses to serve its own ends.

THE RED PILL

Morpheus offers Neo the choice of his lifetime, in the form of the famous red and blue pills. But what can a virtual pill do to a real brain? We have seen that the Matrix interacts with the brain only at the sensory and motor nerve fibers. It does not affect the inner workings of the brain, where a real psychoactive chemical would have to act. A minor analgesic such as virtual aspirin could still work, because its effect lies outside the brain centers: it cancels out the pain inputs coming from the avatar software.

The blue pill is probably a placebo. Morpheus says only, "You take the blue pill and the story ends. You wake in your bed and you believe whatever you want to believe." We never know what, if anything, the blue one would do.

So, how does the active pill, the red one, work? Since virtual aspirin can work as a painkiller, the avatar's software module must be able to accept instructions to cancel out any given sensory input. Evidently, the red pill gives the avatar a blanket command to cancel all such input. It thereby obliterates Neo's perception of the virtual world, which the Matrix has been feeding to him throughout his life. Instead of sitting on a chair in a hotel room, Neo sees and feels for the first time that he is immersed in a fluid. This new perception filters through into the Matrix imagery he is still receiving. Neo touches a mirror, and finds it a viscous fluid that clings to his finger and then seeps along his arm, covering his chest and slithering down his throat. Such a blend of bodily perceptions and mental imagery is typical of what happens when you wake from a dream; external perceptions are distorted to fit the contents of the dream. Your dream of falling off a cliff might fade into falling out of bed. In the film, the liquefied mirror is seen only by Neo, not by the others in the room. His real bodily sensations are, for the first time, sweeping into his brain, which struggles to integrate them into the stable narrative he has lived in up to that moment.
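A minimal sketch of that reading, in Python: an avatar module that accepts per-channel cancellation orders, of which virtual aspirin would use one and the red pill would use all. The class, its channel names, and its commands are hypothetical, not anything specified in the film.

```python
# Hypothetical avatar-module sketch: sensory channels can be cancelled individually
# (virtual aspirin cancels "pain") or wholesale (the red pill cancels everything).

class AvatarModule:
    CHANNELS = ("vision", "hearing", "touch", "taste", "smell", "pain", "balance")

    def __init__(self):
        self.cancelled = set()

    def cancel(self, channel):
        self.cancelled.add(channel)           # e.g. what virtual aspirin would request

    def cancel_all(self):
        self.cancelled.update(self.CHANNELS)  # the red pill's blanket command

    def deliver(self, channel, signal):
        """Return the signal the brain actually receives on this channel."""
        return None if channel in self.cancelled else signal

avatar = AvatarModule()
avatar.cancel("pain")                 # aspirin: pain input suppressed, vision intact
print(avatar.deliver("pain", "sting"), avatar.deliver("vision", "hotel room"))
avatar.cancel_all()                   # red pill: nothing from the Matrix gets through,
print(avatar.deliver("vision", "hotel room"))   # so real bodily sensations take over
```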

Another route out of the Matrix, besides the red pill, would be meditation. The Buddhist practice of vipassana1 gives adepts penetrating insights into their own mental processes. It rolls back the barrier between conscious awareness and the subconscious. An adept of vipassana, living in the Matrix, would discover the interface between the Matrix's electrodes and the brain's wetware. The expert practitioner could override the Matrix's stream of imagery, and see reality. Morpheus mentions that someone did break free from the Matrix. Perhaps meditation was the key. To attain that expertise, however, would take years of effort. Leading other people to the truth would require a school of meditation, training new recruits for years to pursue what one individual claimed was the truth but everyone else dismissed as fantasy. No doubt this is what the Oracle is gently encouraging. But it is not surprising that the red pill was invented as a fast-track route.

Morpheus's team monitors Neo's progress. As he realizes that he is immersed in fluid, Neo panics, and his instinct to escape drowning compels him to drag the tubes out of his mouth. Like waking out of a dream, Neo finds the sensible world rushing in on him, and it is remarkable that his manual coordination has been so well preserved by the Matrix system. He grabs the tubes and yanks them out, using weak hands that had never before grasped anything.

When Neo's exit from the Matrix is detected, a robot inspects him and flushes him out of his pod. Too weak to swim, he must be pulled out of the wastewater pool without delay. How are the rebels to find him? In a power plant vast enough to house the human race, there would be thousands of effluent drains. As Morpheus mentions to Neo, "the pill you took is part of a trace program." Besides canceling Neo's sensory inputs, the red pill also puts a unique reference signal onto the Matrix network. When the Nebuchadnezzar's computer locates that signal, it can work out Neo's physical location and order the hovercraft to the appropriate chute. In the tense moment before that reference signal is located, the worried Morpheus says, "We're going to need the signal soon," and Trinity exclaims that Neo's heart is fibrillating as the panic threatens to bring on a heart attack. Apoc finds the reference signal just in time, before Neo's brain disengages from the Matrix network and the signal vanishes.
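As a sketch of the trace idea (the beacon format and the pod-to-chute lookup below are invented for illustration), the pill attaches a unique reference ID to Neo's connection, the rebels' computer scans network traffic for that ID, and the pod that emitted it is mapped to a drainage chute:

```python
# Hypothetical trace-program sketch: find the beacon the red pill planted,
# then map the emitting pod to the drain the hovercraft should meet.

import uuid

BEACON_ID = uuid.uuid4().hex          # unique reference signal issued with the pill

def find_beacon(network_traffic, beacon_id):
    """network_traffic: iterable of (pod_id, payload) packets seen on the Matrix network."""
    for pod_id, payload in network_traffic:
        if beacon_id in payload:
            return pod_id
    return None                        # signal not located yet -- keep listening

def drain_for_pod(pod_id):
    # Invented layout: pods are flushed to the chute serving their row of the tower.
    return f"chute-{pod_id // 1000}"

traffic = [(4120, "routine telemetry"), (7253, f"beacon:{BEACON_ID}")]
pod = find_beacon(traffic, BEACON_ID)
if pod is not None:
    print("Neo is in pod", pod, "-> send the ship to", drain_for_pod(pod))
```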

THE POWER PLANT

During the armchair scene, we have what is probably the most criticized element in The Matrix story line. Morpheus claims that the human race is imprisoned in a power station, where human bodies are used as a source of bioelectricity. This is engineering nonsense; it violates the fundamental law of energy conservation. The humans would have to be fed, and the laws of physics demand that the energy consumed as food must be greater than the energy generated by the human body. That Morpheus has misunderstood what is going on is underscored by his mention in the same speech of the machines' discovery of a new form of nuclear fusion. Evidently, the fusion is the real source of energy that the machines use. So what are humans doing in the power plant? Controlled fusion is a subtle and complex process, requiring constant monitoring and micromanaging. The human brain, on the other hand, is a superb parallel computer. Most likely, the machines are harnessing the spare brainpower of the human race as a colossal distributed processor for controlling the nuclear fusion reactions. (Sawyer comes to a similar conclusion elsewhere in this volume—Ed.)

ENTERING AND EXITING THE MATRIX

The virtual world of the Matrix is not bound by physical laws as we know them, but for the virtual world to be consistently realistic, the laws of physics must be followed where they can be observed by humans. Access into and out of a virtual world is a problem, because materializing and dematerializing violate the conservation of mass and energy. Furthermore, whatever was previously in the space occupied by the materializing body must be pushed out of the way; and would be pushed with explosive speed if the materialization is instantaneous. Conversely, on dematerialization, the surrounding air would rush in to the vacated space with equal implosive force. There are no such explosions and implosions in The Matrix, so how do the rebels do it?

In the Matrix computer, software modules represent the observable objects in the virtual world, and these modules interact by means of predefined messages. One such message issued by a virtual human body, or "avatar," is, "What do I see when I look in the direction V?" A module whose object lies on the line of sight along V will respond with a message specifying the color, luminosity, and texture that the human should see in that direction. If a rebel's avatar is to be visible to other people who are immersed in the Matrix world, the Nebuchadnezzar's computer must pick up those "What-do-I-see" requests and reply with its own "You-see-this" message.

A virtual human body does not send "What-do-I-see?" messages to all other modules in the Matrix, or else it would overload the network. Instead it refers to "registers" of modules, which record the virtual objects' shape, size, and position. Simple geometry then tells it which modules to target. For efficiency, each visible volume of space, such as a room of a building, has its own register.
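Here is a minimal sketch, in Python, of that message-passing scheme: an avatar consults a space's register and sends "What-do-I-see?" requests only to the modules listed there, and each module on the line of sight replies with a "You-see-this" description. The class names, the crude geometry test, and the reply format are simplifying assumptions, not anything the film spells out.

```python
# Hypothetical sketch of the "What-do-I-see?" exchange between an avatar and the
# object modules listed in the register for its current space.

class ObjectModule:
    def __init__(self, name, position, appearance):
        self.name, self.position, self.appearance = name, position, appearance

    def what_do_i_see(self, origin, direction):
        """Reply if this object lies (approximately) along the line of sight."""
        dx, dy = self.position[0] - origin[0], self.position[1] - origin[1]
        if (dx, dy) == (0, 0):
            return None
        # crude line-of-sight test: same direction, ignoring distance
        if (dx * direction[1] == dy * direction[0]
                and dx * direction[0] + dy * direction[1] > 0):
            return {"you_see": self.name, "appearance": self.appearance}
        return None

# One register per visible space, e.g. a hotel room.
room_register = [
    ObjectModule("chair", (1, 0), "red leather"),
    ObjectModule("mirror", (0, 2), "silvered glass"),
]

def look(origin, direction, register):
    replies = (m.what_do_i_see(origin, direction) for m in register)
    return [r for r in replies if r is not None]

print(look(origin=(0, 0), direction=(1, 0), register=room_register))  # sees the chair
```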

The key step in materializing a body in a given space is for its module to be inserted into that space's register. For dematerializing, it is deleted from the register. Once it is registered, anyone looking in that direction will see that module's virtual body. The Matrix cannot let a software module insert itself arbitrarily into a register, since that could violate the conservation of mass if it led to an object's materializing in an area that has a conscious observer.

Registers for unobserved spaces are not constrained in this way. If nobody is watching a room and its entrances, then a body can safely materialize in it without observably breaking the simulated laws of physics.

This does not mean that the laws of physics break down as soon as all observers leave a room. The table and chair do not start to float around against the law of gravity when nobody is looking. Rather, the Matrix simply does not bother to run the simulation for a room that nobody is looking at. In its register, it retains details of where each object is, but the room is no longer rendered as visual and tactual imagery. 

So, when the Nebuchadnezzar's computer wants to materialize a rebel, it must find some unobserved room, and insert the data module for the rebel's body into the register for that room. Subsequently, if someone else enters the room, he will see the rebel just like any other object in the room. And the rebel can walk out of the room into any other part of the Matrix world in the normal manner. This is how rebels materialize in the Matrix without causing explosions or breaching the integrity of the simulation. 
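A short sketch of that register discipline, with invented class and method names: insertion and deletion are allowed only while the space is unobserved, and anyone who enters afterward sees the new body like any other object.

```python
# Hypothetical sketch of register-based materialization: a body may be inserted
# into (or deleted from) a space's register only while nobody is observing that space.

class SpaceRegister:
    def __init__(self, name):
        self.name = name
        self.modules = set()      # objects currently registered in this space
        self.observers = set()    # avatars currently watching the space

    def observed(self):
        return bool(self.observers)

    def materialize(self, body):
        if self.observed():
            raise RuntimeError(f"{self.name} is observed; materializing would be seen")
        self.modules.add(body)    # from now on, anyone entering will see the body

    def dematerialize(self, body):
        if self.observed():
            raise RuntimeError(f"{self.name} is observed; use imperception instead")
        self.modules.discard(body)

back_room = SpaceRegister("unwatched back room")
back_room.materialize("rebel avatar")        # nobody watching: safe
back_room.observers.add("passer-by")         # someone walks in and sees the rebel normally
print(back_room.modules, "observed by", back_room.observers)
```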

When a rebel exits, the module that simulates her body is deleted from the register. This must happen only when the body is not being observed. There is, however, an intermediate state, "imperception," which effectively takes the body out of the virtual world even while the data module is still in the register. This is an emergency procedure that the Nebuchadnezzar's software uses for fast escapes. 

Although the Matrix software cannot insert or delete a module while its object is being observed, it does allow any module to change its appearance. The agents use this facility whenever they enter the world. An agent never materializes or dematerializes, but changes the appearance of another person's avatar to match the personal qualities of the agent.

To make a rebel imperceptible, the Nebuchadnezzar's computer changes the body's visible appearance to be transparent; and the body's mechanical resistance to that of the air. From an observer's perspective, the body has melted into air. From a software perspective, the data module is still on the register but simulating a body indistinguishable from thin air. Later, when the scene is no longer being observed by anybody, the module will be deleted. 
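And a companion sketch of imperception, under the same invented model: the module stays on the register, its simulated properties are set to those of empty air, and deletion is deferred until the register reports no observers.

```python
# Hypothetical imperception sketch: the module stays registered, but its simulated
# properties are set to those of empty air; deletion waits until nobody is looking.

def make_imperceptible(body_module):
    body_module["appearance"] = "transparent"
    body_module["mechanical_resistance"] = "air"   # objects it held now fall through
    body_module["pending_deletion"] = True

def sweep_register(register, observed):
    """Delete pending modules only once the space is unobserved."""
    if not observed:
        register[:] = [m for m in register if not m.get("pending_deletion")]

morpheus = {"name": "Morpheus's avatar", "appearance": "suit and shades"}
subway_register = [morpheus, {"name": "telephone handset"}]
make_imperceptible(morpheus)                      # to onlookers, he has melted into air
sweep_register(subway_register, observed=True)    # still watched: module stays
sweep_register(subway_register, observed=False)   # platform empties: now it is removed
print(subway_register)
```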

We see this happen only once, when Morpheus leaves the subway. Once the Nebuchadnezzar's computer has located his avatar, it sends an instruction to make it invisible. This does not affect the whole avatar at once: the module has to calibrate its appearance to exactly match its surroundings. The first part of the body to receive the instruction is the nervous tissue of the ear, and this at first glows bright white, before settling down to a state of transparency. The rest of the body follows. Its appearance oscillates around whatever is visible in the background, settling down to transparency: where Morpheus stood, we see the background shimmer momentarily. The solidity of the body then fades: moments after Morpheus's body has become invisible, the telephone handset that had rested in his hand drops, slowly at first, toward the ground. The observed sequence is consistent not with the sudden deletion of the body's module, but rather with its changing its appearance.

HARD LINES

Telephones play a key role in entering and leaving the Matrix. But the rebels do not travel through the telephone lines as energy pulses. There is no device at the end of the telephone for reconstructing a human body from data: all you would get is noise in the earpiece. Furthermore, the bandwidth of a telephone line is too narrow to ship an entire human being. Finally, nothing at all ever really travels along the lines in the Matrix world, as they are only virtual. 

Instead of being a conduit for transporting dematerialized rebels, the telephone line is a means of navigation. It pinpoints where a rebel is to enter or leave the Matrix.  

To enter the vast Matrix requires specifying where the avatar is to materialize. To get an avatar into the Matrix world, the rebels must use some strictly physical navigation. This is done with the telephone network, which has penetrated every corner of the inhabited world with electronic devices, each of which has a unique, electronically determined label. Without knowing anything of human society and its conventions, the physics modules of the Matrix can determine where any given telephone number terminates. 

How are the rebels to give a telephone number to the Matrix? They must dial it, but they cannot simply pick up a handset and make a call to a number inside the Matrix world, for any handset in the Nebuchadnezzar is connected to the real world telephone network, not the Matrix's virtual network. Inside the Matrix, a call must be placed subtly, without observably breaching the simulated laws of electromechanics. 

To see how this can be done, we need to know something of the infrastructure of the Matrix. Monolithic computer systems are unreliable, so the Matrix is instead an assemblage of independent modules, each having a unique "network address." For a module to communicate with another, it will put a data message on the network with the address of the intended destination. Neither module need know where the other one is inside the electronic hardware of the Matrix computer. They might be inches apart, or a mile away. 

This scheme is robust and flexible. There is no central hub, and individual modules can be plugged into, or taken out of, the network without disturbance. Conversely, the rebels can easily hack into it. Once they are linked into the network, their equipment can simply pretend to be another module. It can place data messages onto the system, which will be routed just like authentic messages, and be received and read by the addressed module. So, to initiate a telephone call, the crew will place a data message on the network, addressed to any module that simulates an aerial for receiving calls from cell phones. Some such node will pick up and read the counterfeit data message just as if the message had been sent by a bona fide source. On getting this message, the aerial module will carry out its role in handling a telephone call. 

The Nebuchadnezzar's operator maintains contact with rebels who are in the Matrix even while the hovercraft is moving, so they must use radioports onto the network. The rebels might have installed their own rogue radio receiver—mechanically securing it in some dark corner, and plugging its data cable into a spare socket of a router. More likely, the Matrix itself uses radio as part of its network infrastructure, and the rebels broadcast their counterfeit messages on the same frequency. 

Materializing or dematerializing, however, needs a network address, which is gotten as follows. When the Nebuchadnezzar makes a "phone call" into the Matrix, it places on the network a packet saying "Place this call for (212) 123-4567" or whatever the telephone number is, together with the Nebuchadnezzar's own network address as a return label, such as 9.54.296.42. When the call is picked up, the Matrix will return a data packet, addressed to the Nebuchadnezzar, saying "Message for 9.54.296.42: call connected to telephone (212) 123-4567." All the Nebuchadnezzar's computer has to do is listen out for its own address, and it will find attached to it the network address of the telephone equipment. 

As soon as the answering machine picks up the incoming call, the Nebuchadnezzar will get the network address of that destination. 
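The exchange might look something like the following sketch, in which the packet fields, the spoofed aerial address, and the endpoint address are all invented for illustration; only the general shape of the handshake is taken from the account above.

```python
# Hypothetical sketch of the hard-line handshake: the ship spoofs a "place call"
# packet, and the reply addressed back to it carries the network address of the
# telephone equipment -- the fix needed to insert or delete an avatar there.

SHIP_ADDRESS = "9.54.296.42"          # the ship's own (invented) network address

def place_call(network, phone_number):
    # counterfeit packet, indistinguishable from one sent by a genuine module
    network.append({"to": "any-cell-aerial", "reply_to": SHIP_ADDRESS,
                    "body": f"place call to {phone_number}"})

def matrix_switch(network):
    """Crude stand-in for the Matrix's telephony modules answering the call."""
    for packet in list(network):
        if packet["to"] == "any-cell-aerial":
            network.append({"to": packet["reply_to"],
                            "body": "call connected",
                            "endpoint_address": "7.12.004.88"})  # the hard line's module

def await_reply(network):
    for packet in network:
        if packet.get("to") == SHIP_ADDRESS:
            return packet["endpoint_address"]

net = []
place_call(net, "(212) 555-0143")
matrix_switch(net)
print("exit point is module", await_reply(net))   # now the avatar can be removed there
```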

Essentially the same job must be done when a rebel leaves the Matrix world. In order to disengage the rebel from his or her avatar, the Nebuchadnezzar's computer must again get a fix on the avatar's location within the virtual world. As before, it is not enough to locate the avatar's virtual body in terms that relate to human culture. It is no use to say that Neo is at 56th and Lexington. Rather, it needs a network address that the Matrix's operating system can follow. Of course, the Nebuchadnezzar gets it by calling a telephone in the Matrix world, which must be answered for the network address to be passed back to the Nebuchadnezzar. Once that has happened, the avatar's module can be deleted from the register for that location. 

Why don't the crew navigate their exits with the stylish cell phones that all the rebels carry? Why hunt for a land line (called a "hard line" in the film) under hot pursuit from the agents? The answer is that the cell phones are not part of the Matrix world and do not have network addresses known to the Matrix software. The cell phone is projected into the Matrix world by the Nebuchadnezzar's computer, along with the avatar's body and clothes—and the weapons that Neo and Trinity eventually bring in with them. The software that simulates the cell phones is running inside the Nebuchadnezzar's computer, not the Matrix's computer, so the rebels must find a land line—and land lines are somewhat scarce in an era when everyone has a cell phone.

THE BUGBOT  

Before Neo is taken to meet Morpheus, the agents insert a robotic bug into him. Trinity extricates the bugbot before it can do any harm. But what was the bugbot for? Given that it operates inside the human body, the bugbot should be as small as possible. Yet, it is clearly much bigger than the miniature radio beeper that would be needed for tracking Neo's whereabouts. Trinity says that Neo is "dangerous" to them before he is cleaned. We can infer that the bugbot is actually a munition, probably a Semtex device that will detonate when it hears Morpheus's voice, killing Neo, Morpheus, and everyone else in the room.

Just before it is implanted, the bugbot takes on the appearance of an animate creature, with claws writhing. Yet, after Trinity has jettisoned it out of the car window, it returns to an inert form. It is another illustration of the agents' limited use of the shapeshifting loophole in the Matrix software, which lets an object transform its properties under programmed commands.

PERCEPTIONS IN THE MATRIX  

At dinner on the Nebuchadnezzar, Mouse ponders how the Matrix decided how chicken meat should taste, and wonders whether the machines got it wrong because they are unable to experience tastes.

A nonconscious machine cannot experience color any more than taste. A computer can store information about colored light, such as a digitized photograph, but it does so without a glimmer of awareness of the conscious experience of color. The digitized picture will evoke conscious colors only when someone looks at it. All other sensations that you can be conscious of will elude the digital computer.  

The feel of silk, the texture of the crust of a piece of toast, feelings of nausea or giddiness: these are all unavailable to insentient machines. This being so, Mouse could have doubted whether the Matrix would know what anything should taste, smell, look, sound, or feel like. 

But the Matrix doesn't need to experience the perceptual qualities to get them right. As we have seen, the Matrix feeds its signals into the incoming nerves where they enter the brain, not into the deeper nerve centers. So when you eat (virtual) fried chicken inside the Matrix, the Matrix will activate nerves from the tongue and nose, and the brain will interpret them as taste sensations. What the Matrix puts in will be a copy of the train of electrical impulses that would actually be produced if you were eating meat. Because of the way that the Matrix has been wired into the brain, it has less freedom than Mouse assumed. Whilst the Matrix cannot know tastes itself, it can nonetheless know which chemosensory cells in a human's nose and mouth yield the requisite smell and taste.  
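A sketch of the point, with an invented encoding: all the Matrix needs is a lookup from the virtual stimulus to a recorded pattern of afferent impulses, which it replays into the bioport; nowhere does it need to experience the taste itself.

```python
# Hypothetical sketch: the Matrix replays recorded afferent firing patterns keyed
# by the virtual stimulus; it never needs to "know" what the taste is like.

# Invented table: stimulus -> pattern of impulses on gustatory/olfactory fibres.
AFFERENT_PATTERNS = {
    "fried chicken": {"umami_fibres": 0.8, "salt_fibres": 0.6, "roast_aroma": 0.9},
    "tasteless wheat gruel": {"umami_fibres": 0.1, "salt_fibres": 0.1, "roast_aroma": 0.0},
}

def stimulate_taste(stimulus, bioport):
    pattern = AFFERENT_PATTERNS[stimulus]
    for fibre, intensity in pattern.items():
        bioport.drive(fibre, intensity)   # inject impulses where the nerves enter the skull

class LoggingBioport:
    def drive(self, fibre, intensity):
        print(f"driving {fibre} at {intensity:.1f}")

stimulate_taste("fried chicken", LoggingBioport())   # the brain does the tasting
```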

NEO'S MASTERY OF THE AVATAR  

For purists of science-fiction plausibility, Neo's superhuman control over his avatar body is a troubling element in the film. The final triumphal scene, where Neo flies like Superman, has especially come under criticism. But is it completely at odds with what we have inferred about the Matrix? And how does Neo transcend his human limits? 

The Matrix interacts with the brain, but the brain in turn affects the body. When Neo is hurt in training, he finds blood in his mouth. He asks Morpheus, "If you are killed in the Matrix, you die here?" and gets the cryptic reply: "The body cannot live without the mind." But it cuts both ways; ultimately, Neo's avatar is killed inside the Matrix, causing the vital functions to cease in his real body. 

Mental states and beliefs can affect the body in several ways. In the placebo effect, the belief that a pill is a medicine can cure an illness; in hypnosis, imagining a flame on the wrist can induce blisters. In total virtuality, the mind accepts completely what is presented. If the Matrix signals that the avatar's body has died, then the mind will shut down the basic organs of the heart and lungs. Actual death will inevitably ensue, unless fast action is taken to get the heart pumping again. 

In the climactic scene, Agent Smith kills Neo's avatar within the Matrix. Neo's brain accepts this fate: it stops his heart and loses conscious awareness. His real brain, however, retains enough oxygenated blood to keep it functioning for approximately three minutes, after which it would begin to suffer irreversible damage and, a few minutes later, brain-death. During this time, the auditory cortex keeps on working and digests what Trinity says, albeit unconsciously. Trinity's message is comprehended by Neo's subconscious mind, and a deep realization that the Matrix world is illusory crystallizes in his mind. At an intellectual level, Neo already believed this, but now he knows it at the visceral level of the mind, the level that interfaces with his physiology. Empowered by the insight that his avatar's death is not his death, Neo regains control of his avatar —not only resurrecting it but attaining superhuman powers: the avatar can stop bullets, and fly into the air. 

Neo's new powers contrast with the rigid compliance with simulated physical laws that the Matrix otherwise enforces. They reveal that Neo has gained direct access to the software modules that simulate his avatar body. That raises two questions: why does the avatar software accept commands to transform itself, when normally it strictly follows a physical simulation? And how can Neo's brain issue such commands, which are obviously outside the scope of the normal muscular signals?

The software that simulates the avatar must have a special port, intended for use only by agents, which accepts commands to change the internal properties of the avatar's body. Agents use this facility to embody themselves in human avatars. Like all software, the avatar will obey such commands wherever they originate, provided that they are correctly formulated. We saw earlier how the Nebuchadnezzar's computer used this transformative power to make Morpheus disappear from the subway station. Now Neo's brain is directly using the same command port.

Commands to transform the body cannot travel on the wires that carry the regular muscular signals from the brain to the avatar module. So, they use some of the many other, seemingly redundant, data lines that terminate throughout the rest of the brain. That those lines are hooked up at all on the Matrix end is a spin-off from the Matrix architect's use of general-purpose interfaces.

When a newborn human baby is connected to the software module that runs its avatar, there is no way to predetermine which wires carry which data streams. So, at the Matrix end, each line is free to connect to any data port of the avatar module. Some data ports emit simulated signals from virtual eyes and other sense organs, and they will connect with the brain's sensory cortex; others will accept motor commands to carry out simulated contractions of virtual muscles, and they will link up with the motor cortex. In a feedback process that mirrors how the natural plasticity of the brain is molded to its function, useful connections are strengthened and the useless are weakened. As a baby grows into an infant, it gains feedback through using the simulated senses and muscles of the avatar, and therefore its brain builds up the normal strong connections to the conventional input and output channels. But it lacks the abstract concepts needed to use the special port that accepts transformation commands. So the brain's connection with those lines atrophies. Nevertheless, the hardware for that potential connection remains in place.

In Neo's kung fu training, his brain rediscovers the abandoned data lines, and he starts to issue rudimentary transformations, giving his avatar's muscles superhuman strength. Only with the deep insight that he gains from being woken after his avatar's death does he acquire the mental attitude needed to harness that transformative function fully.
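The feedback process can be caricatured in a few lines of Python; the weights, rates, and line names below are invented, and real neural plasticity is of course far richer, but the sketch shows why the transformation port ends up wired yet dormant.

```python
# Hypothetical sketch of the wiring-in process: every bioport line starts loosely
# connected to every avatar port; connections that carry useful feedback are
# strengthened, the rest decay -- so the transformation port stays wired but unused.

def train_connections(weights, usage, rate=0.1, decay=0.02):
    """weights[line][port] is connection strength; usage[line][port] is how often
    activity on that pairing produced useful feedback during infancy."""
    for line in weights:
        for port in weights[line]:
            weights[line][port] += rate * usage[line].get(port, 0)
            weights[line][port] *= (1 - decay)
    return weights

weights = {"line-17": {"vision-in": 0.5, "transform-port": 0.5}}
for _ in range(50):                       # years of infancy, compressed
    weights = train_connections(weights, {"line-17": {"vision-in": 1.0}})
print({k: round(v, 3) for k, v in weights["line-17"].items()})
# vision-in settles at a strong weight; transform-port decays toward zero,
# though the physical line -- Neo's back door -- is still there.
```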

The existence of the transformational back door into the avatar software is a security hole that the architects of the Matrix never imagined would be used by mere humans—but now it threatens the very existence of the Matrix, as Neo exploits the power it gives him.  

CONSCIOUSNESS AND THE MATRIX  

The last question I will address in this essay is a complex one, and one that continues to be explored and debated in scientific and philosophical circles. Can machines be conscious? In everyday life, the machines are so dumb that we can ignore this question, and so we do not have an established criterion for judging whether the intelligent machines of science fiction are conscious. How similar must a machine be to a human for it to be conscious?

Humans have a cluster of properties that always hang together: they have conscious perceptions and emotional feelings, they have opinions and beliefs, intuition and intelligence, they use language, and they are alive and warm-blooded, and have a biological brain. We do not, in everyday life, have to separate out those concepts and decide which ones are necessary and sufficient for sentience. The properties all come as a package. In contrast, the lower animals are like us but do not use language and are not as intelligent as we are. So, it is believed that the higher animals probably have basic conscious perceptions—such as colors and sounds, heat and cold—much as we do, but they lack the superstructure of thought. But what about machines that are intelligent and use language, but are not made of biological tissue? Could they be conscious?

To respond rationally to this emotive challenge, we need to be clear about the ideas that are involved. The commonest and most damaging conflation is that of "intelligence" and "consciousness." Alan Turing, in his celebrated paper that introduced the Turing Test, used the terms interchangeably—but mathematicians are notorious for playing fast and loose with their terms. Philosophers, whose trademark is the careful delineation of concepts, have always insisted on maintaining the distinction. Intelligence is the capacity to solve problems, while consciousness is the capacity for the subjective experience of qualities.

As we shall see, intelligence can be attained without consciousness.2  A digital computer can be programmed to perform intelligent tasks such as playing chess and understanding language by well-defined deterministic processes, without any need to introduce enigmatic conscious experiences into the software. On the other hand, a conscious being can have subjective experiences—such as seeing the color red, or feeling anger—without needing to use intelligence to solve any problems. An android could be vastly more intelligent than any human and still lack any glimmer of interior mental life. Conversely, a creature might be profoundly stupid and still have subjective experiences.

Agent Smith is an example of a machine that manifests humanlike behavior—words and gestures that, if you witnessed them in a human, you would immediately regard as showing conscious emotions and volitions. Indeed, it is the immediacy of the interpretation that is deceptive. When you see someone laugh with joy, or scream in pain, you do not knowingly infer the person's mental state from those outward signs. Rather, it is as if you see the emotions directly. Yet, we know from accomplished actors that these signs of emotion can be faked. Therefore, you are indeed making an inference, albeit an automatic one. It is the job of philosophy to scrutinize such automatic inferences. When you see another human being emoting, your inference is not based wholly on what you see, but also on background information (such as whether the person is acting on a stage). More fundamentally, you are relying on the reasonable assumption that the person's behavior arises from a biological brain just as yours does. Whenever those premises are undermined, you inevitably revise any inferences you have made from the emoting. If the emoting stops and people around you clap, you realize it was a piece of street theatre, and the person was only acting out those emotions. Or, if the person has a nasty car accident that breaks open his head, revealing electronic circuitry instead of a brain, you realize that it was only an android and you may conclude that it was only simulating emotions.

A key step in the inference is the premise that the emotion plays a role in the causal loop that produces the outward words and gestures. If, instead, we have established that the observed words and gestures are wholly explained in some other way, without involving those emotions—then the inference collapses. The exterior emoting behavior then ceases to count as evidence for an interior emotional experience. If we know that an actor's words and gestures are scripted, then we cease to regard them as evidence for an inward mental state. Likewise, if we know that the words and gestures of an android or avatar are programmed, then they too cease to support any inference of a mental state.  

In an android, or in a software simulation of a human such as an agent, words and gestures are produced by millions of lines of programmed software. The software advances from instruction to instruction in a deterministic manner. Some instructions move pieces of information around inside memory, others execute calculations, others send motor signals to actuators in the body. Each line of code references objective memory locations and ports in the physical hardware. It may do so symbolically, and it may do so via sophisticated data structures, for example, using the tag "vision-field" to reference the stabilized and edge-enhanced data from the eye cams. Nevertheless, nowhere in the software suite does the code break out of that objective environment and refer to the enigmatic contents of consciousness. Nor could the programmer ever do so, since she would need an objective, third-person pointer to the conscious experience—which, being a subjective, first-person thing, cannot be labeled with such a pointer. 

Everything that the android says and does is fully accounted for by its software. There is no explanatory gap left for machine consciousness to fill. When the android says, "I see colors and feel emotions just as humans do," we know that those words are produced by deterministic lines of software that function perfectly well without any involvement of consciousness. It is because of this that the android's emoting does not provide an iota of evidence for any interior mental life. All the outward signs are faked, and the programmer knows in comprehensive detail how they are faked.

This point is systematically ignored by the mathematicians and engineers who enthuse about artificial intelligence. You have to go next door, to the philosophy department, to find people who accord due importance to it. Even if, by some unknown means, the android possessed consciousness, it could never tell us about it. As we have seen, everything the android says is determined by the software. Even if, somewhere in the depths of its circuit boards, there was a ghostly glimmer of conscious awareness or volition, it could never influence what the android says and does. 

Could it be that the information in the computer just is the conscious experience? This argument is popular with information engineers, as it seems to allow them to gloss over the whole mind-body problem. It is flawed because information and conscious experience have different logical structures. Namely, information exists only as an artifact of interpretation; but experience does not stand in need of interpretation in order for you to be aware of it. If I give you a disk holding numerical data (21, 250, 11, 47; 22, 250, 15, 39; etc.), those numbers could mean anything. In one program, they are meteorological measurements—temperature, humidity, rainfall. In another, they are medical—pulse rate, blood pressure, body fat. The interpretation has no independent reality; the numbers have no inherent meaning by themselves. Conscious experience is fundamentally different. If you jam your thumb in a door, your sensation does not need first to be interpreted by you as pain. It immediately presents as pain. Nor can you reinterpret it as some other sensation, such as the scent of a rose. Conscious experiences have real, subjectively witnessed qualities that do not depend for their existence on being interpreted this way or that. They intrinsically involve some quality over and above mere information.
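The point can be made with a toy example: the very same records printed under two different, equally arbitrary interpretations. The conversion factors below are invented; that is exactly the point—nothing in the numbers selects either reading.

```python
# Sketch of the interpretation point: the same numbers are "weather data" in one
# program and "medical data" in another; the bytes carry no meaning of their own.

records = [(21, 250, 11, 47), (22, 250, 15, 39)]

def as_weather(rec):
    t, h, r, w = rec
    return f"{t} degC, humidity {h/10:.0f}%, rainfall {r} mm, wind {w} km/h"

def as_medical(rec):
    pulse, systolic, fat, age = rec
    return f"pulse {pulse*3} bpm, systolic {systolic/2:.0f} mmHg, body fat {fat}%, age {age}"

for rec in records:
    print(as_weather(rec), "|", as_medical(rec))
```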

Another popular argument is to appeal to "emergence." Higher-level systems are said to "emerge" from lower-level systems. The classic example is that of thermodynamic properties, such as heat and temperature, which emerge from the statistical behavior of ensembles of molecules. The concept of "temperature" just does not exist for an isolated molecule, although billions of those molecules collectively do have a temperature. In like manner, it has been suggested, consciousness emerges from the collective behavior of billions of neurons, which individually could never be conscious on their own. But emergent properties are, in fact, artifacts of how we describe the world, and have no objective existence outside of mathematical theories. An ensemble of molecules may be described in terms of either the trajectories of individual molecules or their aggregate properties, but the latter are invented by human observers for the sake of simplification. The external reality comprises only the molecules: the statistical properties, such as average kinetic energy, exist only in the mind of the physicist. Likewise, any dynamic features of the aggregate behavior of brain cells exist only in the models of the neuroscientists. The external reality comprises only the brain cells. Yet, as you know, when you jam your thumb in the door, the pain is real and present in the moment; it is not a theoretical construct of a brain scientist.

So there are good reasons for believing that machines are not conscious. But—wouldn't these arguments apply equally to brains? Surely a brain is just a bioelectrochemical machine? It obeys deterministic programs that are encoded in the genetic and neural wiring of the brain. Yet, if our argument that machines are not conscious applies equally to brains, then the argument must be flawed—since we know that our own brains are indeed conscious!

The answer is that there are certain processes in brain tissue that involve nondeterministic quantum-mechanical events. And, working through the chaotic dynamics of the brain, those minute phenomena can be amplified into overt behavior. The nondeterminism opens a gateway for consciousness to take effect in the workings of the brain.

As we saw earlier, you can report only the conscious experiences that are in the causal loop that gives rise to the speech acts. If you can report that you are in pain, then the pain sensation must exert a causal influence somewhere in the chain of neural events that governs what you say and write. A step that is physically nondeterministic provides a window of opportunity for consciousness to enter into that causal chain. Since we, as humans, know that we do express our conscious perceptions, we can infer that there must be some such nondeterminism somewhere in the brain. So far, quantum-mechanical events constitute the only known candidate for this. For example, Roger Penrose and Stuart Hameroff have formulated a detailed theory of how quantum actions in the microtubules of brain cells could play this role. The jury is still out on whether the microtubules really are the locus at which consciousness enters the chain of cause and effect.

A conventional, deterministic computer has no such gateway into consciousness. So androids and virtual avatars that are driven by computers of that kind cannot express conscious awareness, and their behavior can therefore never be evidence of consciousness. But if a machine were to be built that used quantum computation in the same way that the brain does, then there is no philosophical reason why that machine could not have the same gateway to consciousness that a living being does. This is not because the quantum module lets the machine carry out computations that a classical computer cannot do. Whatever the quantum computer can do, a classical one can also do, albeit more slowly. Rather, it is the specific implementation of the quantum computer that provides the bridge into conscious processes.

In The Matrix, there is no reason to think that the machines are equipped with the kind of quantum computation needed to access consciousness. Quantum computation is not mentioned in the film, and there is circumstantial evidence that the Matrix and its agents are devoid of conscious thought.

Therefore the agents—which are software modules within the Matrix—are intelligent but mindless automata. For the most part, the agents behave unimaginatively, and we might naively think that this corroborates their lack of awareness. Yet, Agent Smith exhibits initiative and seems, in his speech to Morpheus, to evince a conscious dislike of the human world. But is he genuinely conscious, or only mimicking humans? In fact, Smith gives himself away when he says about the human world, "It's the smell, if there is such a thing . . . I can taste your stink and every time I do, I fear that I've somehow been infected by it." Smith's own logical integrity obliges him to doubt the existence of that noncomputable quality that humans talk about: the conscious experience of smell. When Smith says, ". . . the smell, if there is such a thing," he is exhibiting the mark of the automaton. This is corroborated when he then tells Morpheus that he can "taste your stink," revealing that Smith simply does not understand the differentiation of senses in the human mind. For a computer, data are interchangeable, but for a human, tastes, smells, colors, sounds, and feels, are irreducibly different. This fact eludes Agent Smith. 

Smith is mimicking human behavior as a tactic to trick Morpheus into cooperation. As the interrogation is getting nowhere, Brown suggests, "Perhaps we are asking the wrong questions." So Smith pretends to talk like a human, to gain Morpheus's empathy. Needless to say, the tactic fails completely.  


1.  In the oldest form of Buddhism, Theravada, the two major forms of meditation are Vipassana (the Pali word for "insight") and its complement Samatha ("tranquility"). Vipassana consists in systematically attending to the individual elements that make up the contents of consciousness. It involves persistently turning away from the ceaselessly arising tide of chatter in the mind. Over time, the chatter subsides, and preconscious activity becomes more readily observed. Laboratory data support claims that long-term practitioners acquire a conscious awareness of brain microprocesses, possibly down to the cellular level. See Shinzen Young's works.

2. For an alternative perspective, see Kurzweil's essay in this volume. —Ed. 

© 2003 BenBella Books. Published on KurzweilAI.net with permission.

   
 

Mind·X Discussion About This Article:

Wow..
posted on 03/03/2003 7:25 AM by aikon_

Absolutely amazing! My philosophy teacher will enjoy this; we had to write a paper on The Matrix, and this is what I was aiming for. My only problem was that I did not have the knowledge base possessed by the author. Well done, bravo!

Agents are Conscious.
posted on 04/30/2003 9:17 AM by LaForge

I agree. However, the author ignored one critical factor when he declared the Agents non-conscious automatons.

At the very end, when Neo "killed" Agent Smith, the other two Agents responded with what looks like fear and abandoned the job at hand. On a purely logical level (i.e., software) they were aware of the progress of the Sentinels and would thus know that staying around for a few more minutes would let them get rid of this especially dangerous human.

The only rational reason for them turning tail and running away at that point is personal self-preservation, i.e., raw animal fear: the ultimate triumph of consciousness over intelligence.

Re: Agents are Conscious.
posted on 05/22/2003 2:49 PM by jortiz

Intelligent algorithms can be taught instinct-like behavior such as self-preservation. Often A.I. programs abandon solution approaches (in this case, hand-to-hand combat with Neo) when those approaches take them to a less favorable state (being destroyed). The expression of fear is no more than another human-like programmed behavior.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/03/2003 11:49 AM by grantcc

Mr. Lloyd doesn't really explain very much in his article. Take the red and blue pill for example. He talks about the action of a virtual pill on the brain, but he overlooks the fact that Neo is just as virtual as the pill is at the time he is taking it. It's as if you expect a pill taken in a dream to act on the body of the dreamer. Not a likely scenario. Through most of the movie, Neo is dreaming. The machines cause the dreams and Neo learns to control his own dreams (something humans can do already) instead of accepting the dreams created for him by the machines. We don't need physics to explain what is going on here, any more than we need physics to explain how we fly in our dreams.

Grant

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/03/2003 1:07 PM by Thomas Kristan

Agree. Matrix is ill devised. And we don't need it more, than a mechanized guillotine - for example.

Really stupid concept, having a biological body, to perform felling of a virtual world.

Producer failed to grasp the uploading - and assumed - that everybody else, is also as silly as they are.



- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/03/2003 2:05 PM by Mesopotamian

You are making your observations based on "real life". Of course today, a dream pill cannot interact with someone in a dream. In today's life, two people (or more) cannot interact with one another in dreams (yet) but The Matrix simply assumes the technology exists for that to happen. Therefore the red pill is a "program" that is exchanged between two people, much like sharing a program via an email, or handing someone a cdrom to take home and slap in the drive (putting the pill in the mouth). It just happens in real time.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/03/2003 3:53 PM by Thomas Kristan

Okay, I should draw it to you.

The whole concept of the Matrix, is to substitute the sensory input with a computer generated one. That is - of course possible.

Still, you have a biological body, which should has to exist, but is has also be disconnected from the experience window.

Instead, you may transfer _you_ completely into a virtual world, with no cadaver sleeping on a bed. Better to use those atoms wisely. There is never enough calculations.

The plot of The Matrix - is even sillier than it's science.

- Thomas




Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/03/2003 4:40 PM by Mesopotamian

thomas,

I am obviously missing something in your argument, or I'm just clearly not smart enough, or something is lost in interpretation.

I'm not seeing the point of the "biological felling virtual world" aspect of your statement. Did you ever see Tron?

In addition, what is your definition of the "Plot" of the Matrix? I see many potential plots of the movie, wondering which one you are disagreeing with.

Thanks



Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/03/2003 9:15 PM by Jeremy

Mesopotamian>In addition, what is your definition of the "Plot" of the Matrix? I see many potential plots of the movie, wondering which one you are disagreeing with.

Jeremy>I dont think English is Thomas' first language, he seems to have a lot of trouble making himself understood.
I think hes saying The Matrix isnt realistic. He bases this on the fact that human minds are "uploaded" into the matrix while the bodies are kept around.
Popular theory says it will be possible to upload the mind and do away with the body completely. And because the movie didnt portray this, its proof that the producers were ignorant such a thing could be done.
First, just because they failed to include that aspect doesnt prove they didnt know of it.
Second, uploading cant be done yet anyway. Uploading exists only in theory, so its open to everyones interpretation. You cant call the producers to account for being unrealistic about something that isnt real too begin with. So, the critique that calls The Matrix silly is in fact silly itself.
If Ive got Thomas all wrong, I humbly apologize.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 2:19 AM by Thomas Kristan

Okay. Fix the glitches The Matrix has - and there will be no Matrix any more.

Could be something much better, but surely something very different. Therefore - it has no deeper meaning. On the contrary! It's a distraction only.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 1:34 PM by Mesopotamian

Jeremy > Thanks, ok...i'm aware of that uploading theory...and I agree, saying something is silly, without knowing which theory is "actually" true is silly..

what is silly is that the wachowski's now have so much cash to play with, for technology, visual effects, AI, art, etc. etc. and all some people can do is rip them apart and call them stupid....

they still have a lot of story left to tell, 2 more movies (making them richer), a video game, (even richer), 9 animated shorts (even richer), and what else...

and we'll all stay tuned, pay the fare, and call them stupid..(or in my case, not stupid)

no planning..???

I think there is a lot of planning....

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 3:35 AM by Jeremy

Mesopotamian>I think there is a lot of planning....

Jeremy> I agree, I think its a well planned out tale. Just because everything isnt exactly square with science isnt important, after all, its a fantasy. You cant point to the fantasy elements of a fantasy movie and call it a plot hole.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 3:54 AM by Jeremy

BTW, dont take my above post as criticism of Peter Lloyd's piece. Figuring out explanations for the seemingly implausible bits of flicks is lotsa fun too.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 1:37 PM by Mesopotamian

Also, clearly..the goal of the Neo's and Morpheus's was to regain a "real" life..therefore to completely upload into the matrix would obviate that....

they WANTED to be real people..not simulations within the matrix...so they needed to remain biological...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 1:45 PM by Thomas Kristan

> they WANTED to be real people..not simulations within the matrix...so they needed to remain biological...

That's another why I don't like The Matrix.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 1:50 PM by Mesopotamian

AH..i see now...so your view is that the characters should have WANTED to remain virtual simulations instead of humans...that they should have been driven to overtake and rewrite the matrix in their own image, instead of annihilating it..

??



Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 2:15 PM by Thomas Kristan

The biology is too expensive. Every breath costs, as much as an eon of a happy, tremendously interesting uploaded life, surrounded with the most outstanding environment.

And it happens in the same amount of the "real time".

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 3:42 PM by subtillioN

The biology is too expensive. Every breath costs, as much as an eon of a happy, tremendously interesting uploaded life, surrounded with the most outstanding environment.

And it happens in the same amount of the "real time".


Are you suggesting that 'uploaded life' necessarily would have a different speed of action, and therefore a different measure of time, than that psycho-technologically achievable outside of a simulation?

Any mind whatsoever is limited by its 'hardware'. How could this limitation be avoided by entirely replacing the source of sensory input with a simulation?

With ubiquitous wireless networks between nano-scale upgraded brains you will have the ability to be constantly connected and yet still autonomous and not confined to any simulated reality. You would be able to move in and out of internal/simulated reality and external/REAL reality at will. I don't think there will be a dichotomy of uploaded vs. not-uploaded when the substrate of the mind itself is upgraded to the point that it essentially is a giga-computing mobile node in the global matrix.

Who would really want to forsake REAL mobility and external reality for total access to the matrix? There would be no point when you could have it all.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 4:03 AM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

Are you suggesting that 'uploaded life' necessarily would have a different speed of action, and therefore a different measure of time, than that psycho-technologically achievable outside of a simulation?



The lower(?) Moravec limit - 10^14 bit write/erase operations - is what defines a subjective second.

Having 10^20 ops - in any amount of real time - would mean a million subjective seconds, or about 12 subjective days with no sleep, for a not-too-much-altered human mind.

The optimal pace of time, for a very optimal mind architecture, is probably very high. Subjective aeons per real second - I guess.
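
A quick back-of-the-envelope check of that arithmetic (a minimal sketch in Python; the 10^14 ops-per-subjective-second figure is the only input taken from the post, the helper name is made up):

    # Assumption from the post: ~10^14 bit write/erase operations buy one subjective second.
    OPS_PER_SUBJECTIVE_SECOND = 1e14

    def subjective_time(total_ops):
        """Convert a raw operation budget into subjective seconds and days."""
        seconds = total_ops / OPS_PER_SUBJECTIVE_SECOND
        return seconds, seconds / (24 * 3600)

    secs, days = subjective_time(1e20)
    print(f"{secs:.0e} subjective seconds = about {days:.1f} subjective days")
    # prints: 1e+06 subjective seconds = about 11.6 subjective days (the "12 days" above)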

The outside real world - is the best we have - for the engine room.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 3:53 PM by Mesopotamian

[Top]
[Mind·X]
[Reply to this post]

Hi Thomas,

I'm really interested in your views here, and I'm trying to understand where your disagreement with the concept of the movie and your own personal beliefs coincide. It will help me build the foundation of my argument, but I do not want to trample on your personal beliefs.

Are you saying that you personally would rather jump inside a program than exist outside as a real/living/breathing person? Personally, I'd rather live a life in "Zion" than a life in the "Matrix", at whatever cost per breath. If that is the argument, then there is no point continuing.

The whole point of the movie (ignoring all aspects of the science for the moment, if we could) is that people want to be people, not body slaves to the computers, and that if there is a better way, then let's find it.

Do you agree/disagree with this?

I agree that the cost of a biological presence would be a factor in an upload, but that is not relevant in the quest to be a real human within the context of this movie/story.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 4:08 PM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

Have you ever seen the movie "Amazon Women on the Moon"?

It's very like The Matrix!

The main difference is that The Matrix was intended as a serious movie, but it's not much better than Amazon Women on the Moon - an intentional parody.

I am a program now, already. I'll be a program in the future. I'd like to change the substrate and (much) enhance the program - that's all I can (and want to) hope for. It's a LOT!

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 5:06 PM by Jeremy

[Top]
[Mind·X]
[Reply to this post]

Thomas>The main difference is that The Matrix was intended as a serious movie, but it's not much better than Amazon Women on the Moon - an intentional parody.

Jeremy>Wrong again. The Matrix was intended to be a comic book. A fantasy about a futuristic superhero. Never was it intended to be a serious commentary on anything.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 09/17/2003 10:18 AM by 1011001

[Top]
[Mind·X]
[Reply to this post]

The Matrix may not have been intended as a serious movie, but it has become a believable possibility of the future. We are all plugged into our computers via the monitor, our thoughts uploaded through typing. Time passes very quickly when you're online, and other important things get put aside when you're online and enjoying yourself. At the exponential rate technology is growing, it will not be long before we can plug into our computers.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 8:28 PM by Mesopotamian

[Top]
[Mind·X]
[Reply to this post]

whatever the roots of either movie, I don't think "Amazon.." spawned as much discussion and literary work by philosophers, scientists, and technologists throughout the world as The Matrix did... must be quite similar indeed...

Even if 80% of the world (or more) doesn't really understand all the implications of it...who on earth could even have pulled it off..

two unknown fellas are gonna change the way a lot of people view the world through mass media

you have to at least acknowledge they must have done something right...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 1:46 AM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

Good intentions - maybe.

But there is another reason why an inch-thick cable from the head to the mainframe is useless. You can have that much computing power stored in a grain-sized implant in your head, having your private world there, connecting with whomever you choose. But even then, why use unimproved bodies?

The captain of a starship probably doesn't have to punch cards for the mainframe in order to safely return home, when the crew discovers that Epsilon Eridani is not suitable for life.

- Thomas


Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 12:47 PM by Mesopotamian

[Top]
[Mind·X]
[Reply to this post]

Many spaceflight systems use seriously outdated technology, and many large companies use versions of software that are seriously outdated..

Why? A) Because, generally, it works; B) it is inefficient/expensive to upgrade; C) it is "GOOD" enough.

Why could this fact of life not be the case in this model..just because advanced technology is "available" doesn't mean it is usable in every application...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 2:38 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

He bases this on the fact that human minds are "uploaded" into the matrix while the bodies are kept around.


Four reasons why this doesn't happen.

(a) Morpheus's reason: The human bodies are needed to generate electricity. With respect, Morpheus is just plain wrong on this point.

(b) More likely, the human brains are being used as sophisticated parallel processors to run the power station.

(c) Philosophically, consciousness needs the brain.

(d) The real reason: The long-term plot of the trilogy is that the human race is to be liberated from the Matrix. They couldn't do that if they had lost their bodies.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 12:29 PM by Mesopotamian

[Top]
[Mind·X]
[Reply to this post]

D = Exactly my point!!

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/22/2003 12:07 AM by overbored

[Top]
[Mind·X]
[Reply to this post]

What about life support for the brain? Couldn't that possibly be the reason why the body is still there, so it can digest food, pump oxygen, etc., and thus support the normal operation of the brain? I'd imagine the machines would have a great deal of trouble emulating this support.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/22/2003 7:24 AM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

No. You don't need the brain either, if you are uploaded.

What does it mean? That the _data_ structure is rewritten "to the mainframe". That is what uploading is all about. You just leave your body & brains here.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/22/2003 12:09 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

No. You don't need the brain either, if you are uploaded.

What does it mean? That the _data_ structure is rewritten "to the mainframe". That is what uploading is all about. You just leave your body & brains here.


The "_data_ structure" IS the brain. The 'hardware' and 'software' of the brain/mind are completely integrated. You can't so easily seperate them. Uploading is more like the reinstantiation of the "brain" in a more efficient stable and powerfull medium. The common scenario is that the giga-computers of the future will take over the 'hardware' functionality of the brain through simulation of its neural patterns, but I see no reason that the brain can't simply be upgraded as a wireless giga-computing node in the global network. It would effectively be the same thing, but you wouldn't have to foresake your physical mobility.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/22/2003 1:03 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

...but I see no reason that the brain can't simply be upgraded as a wireless giga-computing node in the global network. It would effectively be the same thing, but you wouldn't have to forsake your physical mobility.


Actually it would be much more powerful because there would be no necessary emulation of the "brain" (or the hardware aspect of consciousness). It would be a much better scenario all around and I think it is far more likely than the standard uploading scenario because who is going to destroy their original brain once the neural patterns are instantiated in the giga-computer? I know I wouldn't! The only way I would become part of any global computer system is as a mobile node with my own physical identity, mobility and structural freedom.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/22/2003 2:53 PM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

I think the meat is not terribly efficient when it comes to computation. At least 10 orders of magnitude better results should be achieved with some better, nonbiological substrate.

The best use for the brains (and body) would be to transform them to this optimal substrate.

And to start to really enjoy what life has to offer.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/22/2003 11:14 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

I think the meat is not terribly efficient when it comes to computation. At least 10 orders of magnitude better results should be achieved with some better, nonbiological substrate.


I agree completely with your whole post.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/31/2005 10:11 AM by Spinning Cloud

[Top]
[Mind·X]
[Reply to this post]

Thomas wrote:
I think the meat is not terribly efficient when it comes to computation. At least 10 orders of magnitude better results should be achieved with some better, nonbiological substrate.

The best use for the brains (and body) would be to transform them to this optimal substrate.

And to start to really enjoy what life has to offer.


Point to the "nonbiological substrate" that could do better. It doesn't exist now and this entire discussion is based on a hypothetical platform that could simulate multiple humans better than each individual brain...and faster.

I'll be the first to admit that technology is advancing at unbelievable rates, BUT you can't convincingly make the basis of your argument a technology that isn't even in a prototype experimental stage yet.

As for "really enjoying" what life has to offer, I must ask what exactly you think life really offers this optimal-substrate brain over our own?

I strongly agree with subtillioN; I don't think there will be many upload takers... those interested in copying their thought pattern and killing their biological body. I sure wouldn't.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 02/28/2006 11:50 AM by ZodiacEnhanced

[Top]
[Mind·X]
[Reply to this post]

A copy is still always going to be just a copy. By copying yourself digitally you would be no more immortalized than I am by my twin brother. By uploading yourself into the matrix and discarding the human body (the hard copy) you would in effect just be killing yourself: making a duplicate (copying the program) and discarding the body. This fundamental point eludes many sci-fi writers and readers. When you are in the matrix, the hard copy interfaces with the software avatar because the avatar is just a copy or an emulation of the true self. You would no more have immortality through a human clone - simply a copy of yourself that is no more you than a photograph.

-ZodiacEnhanced-

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 2:06 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

It's as if you expect a pill taken in a dream to act on the body of the dreamer. Not a likely scenario.


The 'glitch' that I was addressing is precisely the fact that the virtual pill *does* have an effect on the physical body. After Thomas Anderson takes the red pill, Neo's physical body is electronically disengaged from the Matrix system. The knock-on effect of that is that he is flushed out of the power station. That's a pretty big physical effect for a non-existent pill to have.

As you rightly say, we would have expected the virtual pill to have a virtual effect just within the virtual world. So it raises the question: how does taking the {virtual} red pill do this? *That* is what I was formulating an answer to.

By the way, the phrasing of the question is significant. The question is not, 'How does the red pill do this?', but 'How does taking the red pill do this?' For, the red pill is a virtual construct, whereas taking it is an actual event in a virtual world. A subtle but important distinction.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/09/2003 6:24 AM by JoeFrat

[Top]
[Mind·X]
[Reply to this post]

The entire system is a physical system. E = mc^2; even virtual reality is reality. The pill was a symbol of a program that sent a signal to his body and disconnected it.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/09/2003 7:16 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

The pill was a symbol of a program that sent a signal to his body and disconnected it.


Of course. But the question is: Why does such a program exist in the Matrix software suite? It's not immediately obvious that the Matrix architect would have any good reason to include such a program in the Matrix software. After all, the Matrix builders do not want people to be unplugging themselves.

So, I'm speculating that the functionality could be accidental - an unintentional security hole.

My suggestion in the essay is that the basic functionality to suppress sensory inputs exists in order to simulate analgesics and amputations - e.g. a virtual painkiller would shut off sensory input corresponding to part of the body; and a virtual limb amputation would permanently shut down all sensory input and motor output in the limb. (If you lose a leg in the Matrix world, the docbots are not going to come around to your pod and cut your physical leg off!)

*That* (I'm suggesting) is what the Matrix architects designed the functionality for. But the rebels took advantage of that functionality by designing the red pill to shut down *all* of the I/O for the whole virtual body. So the brain logs out of the Matrix. (It's like closing down all of your browser windows. You effectively log out of the internet.)

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 3:10 PM by Matt Garrett

[Top]
[Mind·X]
[Reply to this post]

I enjoyed this article for what it is, mental gymnastics. But what I found even more amusing is the debate over whether Agents are running away in self preservation or the great red pill debate.

The real fact is that the Matrix (and its sequels) IS A MOVIE.

The kung fu is artfully violent, the popcorn is fresh, the sound is loud, and Carrie-Anne Moss is a babe.

And while it may be fun to poke fingers in the plot holes ... this is merely a story.

... or is it?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 10:54 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

I enjoyed this article for what it is, mental gymnastics. But what I found even more amusing is the debate over whether Agents are running away in self preservation or the great red pill debate.


I found that amusing as well, but so far I haven't contributed to that one.

Here is my take: The "red pill" is a data object in the matrix, in the computer performing the simulation. As long as nobody eats it, the data for the inside of the pill is not used and irrelevant. When Neo (Thomas Anderson) eats it, the data gets used.

The data is hacked to contain a zero at a place where a zero is an illegal value. The simulation performs a divide operation on it (as part of simulating the digestion process), and a divide-by-zero exception is triggered in the program thread for this part of the simulation.

This stops Thomas Anderson's sensory I/O process.

The matrix-computer-system will make a memory dump for the program thread that was executing the simulation of the digestion process. This memory dump contains information about the logical as well as physical network address, allowing the "Operator" who had hacked into the matrix to obtain this information and use it to find the non-virtual location of his "bathtub with panorama view" in some database.

The telephone signal then helps him to wake up (instead of dying). It needs to be an old-fashioned phone in order to carry the brutally strong signal necessary for waking up for the first time.
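
A toy sketch of that scenario in Python (everything here - the pill's data fields, the digest() routine, the pod record - is invented purely to illustrate the divide-by-zero idea):

    import traceback

    # The "red pill" as a hacked data object: a zero sits where the digestion
    # routine expects a nonzero value.
    red_pill = {"colour": "red", "dissolution_rate": 0}                         # the illegal zero
    pod_record = {"thread": "anderson-digestion", "pod": "tower-17/pod-4096"}   # made-up location

    def digest(item):
        # Some step of the simulated digestion divides by the rate...
        return 1.0 / item["dissolution_rate"]

    try:
        digest(red_pill)
    except ZeroDivisionError:
        # The exception handler dumps the failing thread's context; in this
        # scenario that dump is what leaks the pod's real-world location.
        print("memory dump for thread:", pod_record["thread"])
        print(traceback.format_exc())
        print("leaked location:", pod_record["pod"])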

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 11:19 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

The data is hacked to contain a zero at a place where a zero is an illegal value. The simulation performs a divide operation on it (as part of simulating the digestion process), and a divide-by-zero exception is triggered in the program thread for this part of the simulation.


It could be so. My (admittedly subjective) preference would be to interpret the red pill as exploiting built-in features of the Matrix rather than doing anything as kludgey as causing a division by zero.

We know that there must be occasions when part of a person's avatar's sensory input is shut down. E.g. sensory input from an arm would be shut down if the avatar has a (virtual) anaesthetic; sensory input from a leg would be permanently shut down if the leg is (virtually) amputated. So, the command set that is accepted by the avatar module must include something like "KILL/SENSORY <part> [UNTIL=time|FOREVER]" where <part> is part of the body, such as "ARM" or "LEG". So, in designing the red pill, they would get it to issue the command "KILL/SENSORY * FOREVER", meaning disable all sensory input permanently. Now, if *all* sensory input is closed down, then that avatar is no longer registered with any daemon that supplies sense data. From the Matrix's point of view, the person has died. So, logically, the physical body has to be flushed out so that the pod can be reused. (It's comparable to closing down all the windows of a Netscape session.)
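
Sketched in code, the idea looks something like this (the KILL/SENSORY semantics follow the paragraph above; the Avatar class, the channel names, the pod ID and the pod-flushing step are all invented for illustration):

    # Toy model of an avatar's sensory channels and the KILL/SENSORY command.
    class Avatar:
        PARTS = ("ARM", "LEG", "TORSO", "HEAD")

        def __init__(self, pod_id):
            self.pod_id = pod_id
            self.sensory = {part: True for part in self.PARTS}   # True = channel live

        def kill_sensory(self, part="*", forever=False):
            """Roughly KILL/SENSORY <part> [UNTIL=time|FOREVER]; UNTIL isn't modelled here."""
            targets = self.PARTS if part == "*" else (part,)
            for p in targets:
                self.sensory[p] = False
            if not any(self.sensory.values()):
                # No daemon feeds this avatar any sense data any more: as far as
                # the Matrix is concerned, the occupant has died, so flush the pod.
                self.flush_pod()

        def flush_pod(self):
            print(f"pod {self.pod_id}: occupant unregistered, flushing for reuse")

    neo = Avatar(pod_id="tower-17/pod-4096")
    neo.kill_sensory("ARM")                  # virtual anaesthetic: one channel goes dark
    neo.kill_sensory("*", forever=True)      # the red pill: KILL/SENSORY * FOREVER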

In other words, the basic premise of the Matrix already suffices to imply the possibility of a Red Pill. This, it seems to me, is a more elegant interpretation than a divide-by-zero kludge.

(Obviously, there's no 'true' answer here, as it's only a film. Unless, in Revolutions, someone explains how the Red Pill worked.)

This stops Thomas Anderson's sensory I/O process.


I'm not entirely clear why *that* would stop all of Mr Anderson's sensory processes. Wouldn't it just make the Red Pill vanish?

The matrix-computer-system will make a memory dump for the program thread that was executing the simulation of the digestion process.


Or maybe just make his stomach vanish?

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/17/2003 3:10 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

Peter: In other words, the basic premise of the Matrix already suffices to imply the possibility of a Red Pill. This, it seems to me, is a more elegant interpretation than a divide-by-zero kludge.


Of course. I just used this "kludge" as an example of how a computer system can be tricked into revealing information that would allow them to locate Neo's "real" body. The divide-by-zero exception is simply a way we can understand it today, but the matrix would, as you say, work in a more sophisticated way. Besides, it is a pun on the divide-and-conquer concept used in mechanical strategies.

An alternative explanation would be that both pills are placebos, and Morpheus uses them to make it clear to Neo/Thomas that he needs to enter into a binding agreement with him. It further manifests Neo's choice to understand the matrix, which I think was already apparent earlier, when he took his feeling that something was going on seriously enough to look up at the screen and see Trinity's message while he was taking a nap in front of his computer. As we know now from Matrix Reloaded, these choices are a problem for the matrix. Personally, I think it is less about choosing between two existing "choices" and more about the fact that from awareness we create new possibilities and do things that we wouldn't do otherwise (such as looking up at the screen). Instead of choosing between two existing choices, we see a third possibility - and seeing it is already action.

In looking at both pills, which differ only in their color, Neo/Thomas (who is increasingly aware) pays attention to colors, something that is outside of the matrix's quantitative/formal/structural/extensional concepts. Given Neo/Thomas's high alertness in this situation, this has an escalating effect and causes the matrix to misbehave, creating a disruption that causes Neo to see the mirror (subjective reality, self-reflection on consciousness) in a different way than the others (objective reality).

This then causes a fault in the matrix, which allows them to obtain error information in the way described in my previous scenario. Perhaps by comparing actual non-deterministic events against expected deterministic events (using a "diff"), or observing non-random quantum behavior where there would otherwise be random quantum behavior (hence the complicated technical equipment.)

This scenario would also better reflect that the whole process is one of waking up. Actually, I would now dismiss my previous scenario, having reconsidered it in light of your comments.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 11:00 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

Of course it's a movie. It's also a philosophical thought-experiment in a rather more entertaining format than is usual for Philosophy 101. What you do with thought-experiments is have fun pulling them apart. Sometimes something interesting falls out.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 11:20 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

What you do with thought-experiments is have fun pulling them apart. Sometimes something interesting falls out.


Will there be "Glitches in Matrix RELOADED..." ?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/17/2003 3:51 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

Will there be "Glitches in Matrix RELOADED..."?


Yes, the editor of Kurzweilai.net has very kindly offered to post it up on this site. Fortunately, I happened to be in the States on 5/15, so I got to see Reloaded a week before I would have done in England, and have therefore already started writing "Glitches Reloaded".

I was here for an interview on TechTV about the book "Taking the Red Pill". It went out live on Friday, and is repeated Monday 8 a.m. and 12 p.m.
http://www.techtv.com/screensavers/supergeek/story/0,24330,3430740,00.html

BTW there will also need to be a "Glitches in the Essay". One thing I realised wasn't right when I was watching "The Matrix" again this week: I wrote in the essay that, when the rebels download into the Matrix, the telephone needs to be answered in order to send a network address back to the Nebuchadnezzar. Not so! If it is an analogue line, then the ringing tone is actually generated in the called telephone (unlike a digital system), and *that* on its own is sufficient to get a network address back to the Nebuchadnezzar. This is apparent in the scene where Neo and the other rebels go back into the Matrix to see the Oracle. The phone is ringing, unanswered, while the rebels materialise.

Given that that is so, though, why does the phone need to be answered when rebels are leaving the Matrix? This must be just so that the individual who is to exit can be isolated. It would be no good if everyone within earshot of the ringing telephone got exited. (Especially if the person is still plugged in - they would die if they got accidentally exited.) So this is why a group of people can enter the Matrix together, but they have to exit individually.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/17/2003 10:59 AM by grantcc

[Top]
[Mind·X]
[Reply to this post]

The real fact is that the Matrix (and it's sequels) IS A MOVIE.


What brings this home to me more than anything else (speaking of glitches) is the fact that all of the major actors in the movie wear sunglasses. No matter how violent the action and how much each character in the Matrix gets kicked, punched or knocked into and through walls of brick and stone, he/she never loses their glasses. Is there some symbolic reason for that?
I remember in the old days of Gene Autry and Roy Rogers cowboy movies the hero would never lose his white hat in a fight. When you consider all the effort that went into making the kung fu and property destruction seem real, it's strange that the sunglasses would remain untouchable.

Grant

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/17/2003 3:50 AM by sensei

[Top]
[Mind·X]
[Reply to this post]

"It's as if you expect a pill taken in a dream to act on the body of the dreamer. Not a likely scenario."
...yet dreams can affect our waking consciousness...thus altering the neural patterns of the brain...we see reality in a new way...enlightened...the light shines on those dark corners of the interior or exterior which we once ignored...

http://www.enlightentainment.com

Consciousness as an emergent property
posted on 03/03/2003 4:13 PM by ErikStarck

[Top]
[Mind·X]
[Reply to this post]

Ants are a much better example of emergence than the different energy states of molecules. At least when trying to understand how consciousness might be an emergent property of certain patterns of neurons and their interactions.

Otherwise a fun article to read if you do it with the same level of seriousness as you watch the movie.

Re: Consciousness as an emergent property
posted on 03/05/2003 5:27 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

Ants are a much better example of emergence than the different energy states of molecules.


In some ways, an ant colony is closer to a brain than a thermodynamic soup is.

But the purpose of the analogy wasn't to suggest analogues for the brain, but to give an example of where a completely new concept emerges from a complex system.

The behaviour of an ant colony involves the same generic concept as the behaviour of an individual ant. But a thermodynamic soup involves new concepts such as temperature that just don't have any meaning at the microlevel of molecules.

Likewise people claim that the concept of consciousness emerges as a wholly new concept from brains.

As I said in the essay, this claim doesn't go through, because emergent concepts must always have an analytical definition at the lower level, otherwise they could never be empirically recognised. Even if temperature is *defined* phenomenologically, it still has to be cashed out in previously defined physical observables before it could ever be measured.

That constraint does not apply to consciousness. Conscious experiences such as the colour red are defined by private ostensive definition. (E.g. you look at something red and declare to yourself that that is what you mean by "red".) Therefore consciousness cannot be an emergent property.

Peter

Re: Consciousness as an emergent property
posted on 03/05/2003 6:32 AM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

Therefore consciousness cannot be an emergent property.



What then?

- Thomas

Re: Consciousness as an emergent property
posted on 03/05/2003 8:11 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

Then it must be fundamental. It is a basic building-block of reality, distinct from the physical building blocks.

What then? Then we should find out what the natural laws of consciousness are, as distinct from the (largely known) laws of the physical brain correlates. And then see how to exploit our knowledge of consciousness. What are its capabilities? If you look at Dean Radin's book, 'The Conscious Universe', you will see reasons to think that the informatic capabilities of consciousness could be quite interesting.

Peter

Re: Consciousness as an emergent property
posted on 03/05/2003 8:43 AM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

It is a basic building-block of reality, distinct from the physical building blocks.



I don't think so. It's a view, which belongs to the past.

Mind-body dualism never managed to explain why the *soul* sleeps when the body sleeps, for example.

You should reconsider this! Read clarkd's writing above!


- Thomas


Re: Consciousness as an emergent property
posted on 03/05/2003 1:23 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

If you look at Dean Radin's book, 'The Conscious Universe', you will see reasons to think that the informatic capabilities of consciousness could be quite interesting.


This all hinges on unverified interpretations of 'quantum weirdness', does it not? There are new models of physics which entirely erase the dualisms and paradoxes by explaining the physical mechanisms beneath the mathematical probabilities which physicists have come to identify with causality. The entire 'quantum weirdness' disappears, and therefore the arguments that rely on the fabricated abstract mechanisms within the black box of this evaporating mystery are superfluous.

The whole argument, no doubt, hinges on the interpretation of the collapse of the wave-function. When the mechanisms beneath the wave-nature of matter are explained, and the particle mechanisms and illusions are explained at the deeper causal level, this interpretation that consciousness causes a change in the state of the system merely by looking at it will become obsolete. The collapse is merely a collapse of the possibilities spawned within our mental image in the face of the unknown causality beneath the so-called 'wave-particle duality'. There is no critical interaction of the mind with the quantum level of reality. It is entirely causal and independent of consciousness altogether; therefore there is no support for solipsism at the very heart of nature.


ALL dualisms and paradoxes are harbingers that the theories that spawn them have deeper level faults. When the paradox shows up at the core, you can bet that BIG changes are in store for that theory!


subtillioN

Re: Consciousness as an emergent property
posted on 03/06/2003 6:05 PM by Dimitry V

[Top]
[Mind·X]
[Reply to this post]

When the paradox shows up at the core, you can bet that BIG changes are in store for that theory!


Paradox is the result of looking at something from the wrong "logical level". Both reductionism and generalization get you into trouble.

Re: Consciousness as an emergent property
posted on 03/06/2003 6:19 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

Paradox is the result of looking at something from the wrong "logical level". Both reductionism and generalization get you into trouble.


With the correct theory there is no "wrong logical level"; reality makes sense at ALL levels, and all scales of causation are visualizable and understandable. The Copenhagen interpretation was simply wrong to prematurely ossify and transform the quantitative theory into a qualitative dogma. It simply isn't a qualitative description of reality at the quantum level, and the quantum revolution is still fundamentally incomplete. They need to strip out the point-particle aspect and impart physical reality to the substance which the wave equations quantify.

Probability does not equal causality. It simply quantifies it at certain levels.


subtillioN

Re: Consciousness as an emergent property
posted on 03/06/2003 6:28 PM by Dimitry V

[Top]
[Mind·X]
[Reply to this post]

With the correct theory there is no "wrong logical level",


Yeah, but you have to find the right level first. It's a problem of problem formulation: do you look at the transistors in your CPU or at your source code when you debug your programs? It depends on what kind of programming you're doing (i.e., microcode vs. conventional software).

Re: Consciousness as an emergent property
posted on 03/06/2003 6:44 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

Yeah, but you have to find the right level, first. It's a problem of problem formulation, do you look at the transistors in your CPU or your source code when you debug your programs?


In the case of quantum mechanics, which is what I was talking about, you have to look to the core level, which has been extensively quantified yet is still qualitatively a mystery, and the attempts at understanding this mystery have resulted in paradox. Unraveling this mystery, then, is the key to building a proper paradox-free interpretation. To solve the mystery one must understand the 'mechanisms' which give the illusion of contradictory results. What PHYSICALLY accounts for the observed wave nature, and what accounts for the illusion of the particle aspect? The most realistic answer is that beneath the equations there is a compressible fluid. The theory has actually already been developed and the mystery has been solved. The world will soon find out the answer to this long-standing riddle.

Re: Consciousness as an emergent property
posted on 03/06/2003 6:54 PM by Dimitry V

[Top]
[Mind·X]
[Reply to this post]

the attempts at understanding this mystery have resulted in paradox. Unraveling this mystery then, is the key to building a proper paradox-free interpretation.


If you look at a problem from the wrong perspective (logical level, as in the Theory of Logical Types) you may see a paradox, or a solution that doesn't work in practice.

If you look at a problem with a paradox from the right perspective, there is no "paradox" to unravel. The solution is obvious.

I don't know about "Sorce Theory" that you like so much, it's possible that it solves the quantum duality paradox. If it does, then it should be able to provide an understandable metaphor for what's going on (i.e., "compressible liquid" or whatever).

Re: Consciousness as an emergent property
posted on 03/06/2003 7:36 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

I don't know about "Sorce Theory" that you like so much, it's possible that it solves the quantum duality paradox. If it does, then it should be able to provide an understandable metaphor for what's going on (i.e., "compressible liquid" or whatever).


It certainly does that. It explains the mechanism beneath all of the equations and thus it resolves the paradoxes, dualities, and the entire mystery. That was the whole point of my original post. There is no refuge for solipsism in the heart of nature. The Universe can exist with or without an 'observer'. That is how we 'observers' got here, after all. It is pure causality and evolution. Culture simply hasn't caught up to the bleeding edge of theory.

Re: Consciousness as an emergent property
posted on 03/07/2003 2:17 AM by Dimitry V

[Top]
[Mind·X]
[Reply to this post]

Bleeding edges could do themselves a favor by providing explanations that non-technical people can understand. :)

Re: Consciousness as an emergent property
posted on 03/07/2003 2:54 AM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

Bleeding edges could do themselves a favor by providing explanations that non-technical people can understand. :)


Working on the animations and the non-technical explanations as we speak ;)

They will be available at anpheon.org sooner rather than later.

It is all visualizable, so virtually anyone could understand it.

Re: Consciousness as an emergent property
posted on 03/20/2010 5:05 PM by nikhiljayaram

[Top]
[Mind·X]
[Reply to this post]

It's 7 years after this post and I still don't see anything much on anpheon.org

Re: Consciousness as an emergent property
posted on 03/05/2003 12:52 PM by Mesopotamian

[Top]
[Mind·X]
[Reply to this post]

Therefore consciousness cannot be an emergent property.


When does a newborn child become conscious of itself and its purpose? Sometimes never, sometimes very early... but still it emerges over time, does it not?

Re: Consciousness as an emergent property
posted on 03/05/2003 6:19 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

I was referring to the concept of consciousness having an emergent relationship to a concept of some lower-level physical system (brain cells). Not the chronological emergence of consciousness.

Peter

Re: Consciousness as an emergent property
posted on 03/05/2003 1:04 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

That constraint does not apply to consciousness. Conscious experiences such as the colour red are defined by private ostensive definition. (E.g. you look at something red and declare to yourself that that is what you mean by "red".) Therefore consciousness cannot be an emergent property.


Why does one 'declare' such a thing? This act of consciousness EMERGES from the unconscious actions of the brain.

Re: Consciousness as an emergent property
posted on 03/06/2003 2:34 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

By 'declare', I mean that one makes a silent announcement or note to oneself that henceforth the word "red" will refer to conscious sensations of that type.

If I go into a paint shop and they show a sample called "Twilight Red" then I can look at it and tell myself, "OK, now remember that *that* is what the paint manufacturer calls 'Twilight Red'." I probably won't use any words, but that is the intention of the mental act whereby I assign a meaning to the word.

Peter

Re: Consciousness as an emergent property
posted on 03/06/2003 6:00 PM by Dimitry V

[Top]
[Mind·X]
[Reply to this post]

By 'declare', I mean that one makes a silent announcement or note to oneself


It's usually not a conscious process. Just an associative link.

Re: Consciousness as an emergent property
posted on 03/06/2003 6:22 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

It makes no difference to the argument whether the process is deliberate or automatic. What matters is that, at some point, the sensation (e.g. red) is present in your conscious mind, and at that point it is associated with the term "red".

This demonstrative form of definition is being contrasted with the definition of, say, "proton". You do not acquire the meaning of "proton" by experiencing one in your conscious mind. Rather, you have an analytical definition in terms of mass, charge, spin, etc.

So language splits into two broad camps: the part that is ultimately grounded in terms defined by private ostensive definition within the conscious mind; and the part that exists as a closed linguistic system.

The former points to a reality outside the language (namely the reality of consciousness) while the latter does not.

Peter

Re: Consciousness as an emergent property
posted on 03/06/2003 6:39 PM by Dimitry V

[Top]
[Mind·X]
[Reply to this post]

When you see a lion, it is not the lion that causes fear, but the idea of the lion. Idea and physical reality are on two separate levels (Logical Levels). You have to transform reality into idea before it can cause other, higher-level ideas (i.e., fear). Any idea in your mind can be changed without changing anything in the physical world (apart from a little neurological reorganization within you). Dental hypnosis can disconnect the pain response from your teeth. Phobias can be collapsed. And so on.

What is important about "red" is not that it is "real" -- whatever that means. What is important is that people share the reality that red refers to some particular idea: Shared Reality.

You may as well point to it and call it "kolit" -- as long as you can convince others to call it the same thing, then the label has utility for everyone who calls it that.

You can also do stranger things: tell people that a blue object is red, and as long as they accept that (even though they have experience with both color labels), you can go through logical and emotive processes to explain something with your relabeling.

Re: Consciousness as an emergent property
posted on 03/06/2003 11:35 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

Peter, you said:

That constraint does not apply to consciousness. Conscious experiences such as the colour red are defined by private ostensive definition. (E.g. you look at something red and declare to yourself that that is what you mean by "red".) Therefore consciousness cannot be an emergent property.


So in the new light of this evolving discussion the argument is now this:

Because the derivation of consensus meaning through consciousness is largely associative and arbitrary therefore "consciousness cannot be an emergent property".

I see no way that this logically follows.

Have you ever read Daniel Dennett? There is some great discussion of how to properly understand HOW the mind can emerge from the semi-conscious and unconscious processes of the brain. There is a wealth of information in the cognitive sciences that Dennett has gleaned and used to explain these processes.

Consciousness is a term for a process that has no strict boundaries, yet we know where it definitely isn't and we know where it obviously is. The hard part is how to agree on the shades of gray in between. This illustrates the semantic limits of our all-or-nothing language when it is confronted with phenomena exhibiting fuzzy boundaries.

Re: Consciousness as an emergent property
posted on 03/07/2003 4:09 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

I am completely baffled as to how you could think that this gobbledygook could have anything to do with the argument that I put forward in my essay and which I have been repeating in this discussion:

So in the new light of this evolving discussion the argument is now this:

Because the derivation of consensus meaning through consciousness is largely associative and arbitrary therefore "consciousness cannot be an emergent property".


Let's call this bizarre argument P. The first part of P refers to "the derivation of consensus meaning". But consensus meaning is not mentioned in my argument and has no relevance to it. The argument is based on the meaning of terms to an individual. The argument would still hold even if there were only one person in the world, and therefore consensus about the meaning of terms is irrelevant to the argument.

A term such as "red" means (amongst other things) a particular conscious sensation. That semantic relationship exists in each individual. It is a relationship that relates such terms to an extra-linguistic meaning, without which the terms would be locked into a closed linguistic system.

Physics terms, on the other hand, are defined analytically and not by private ostensive definition and therefore do not bear semantic references beyond the closed formal system in which they are defined.

P also mentions definitions as being "largely associative and arbitrary". The precise form that the definition takes, whether it is a declaration or an undeclared association, is immaterial to the argument. What *is* material to the argument is that the referent (i.e. the colour sensation) is present in consciousness and is related to the term. As regards being "arbitrary", I presume you mean that the colour red could be called "ziggle", "blue", or anything else. This is true but totally irrelevant to the present discussion.

Finally, regarding Professor Dennett, I have attended some of his conference presentations and read some of his material, and IMHO he is a great showman but sadly he is locked into a perverse approach to consciousness that does not advance our understanding of the subject by very much. He is, in effect, refusing to address what David Chalmers has identified as the 'Hard Problem' of consciousness.

Peter

Re: Consciousness as an emergent property
posted on 03/07/2003 11:20 AM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

Likewise people claim that the concept of consciousness emerges as a wholly new concept from brains.


This claim is inverted! Consciousness is not a concept, but concepts are a part of consciousness. Consciousness is an emergent network phenomenon as seen from the inside, surrounded by the concepts and sensations that compose it. Therefore to apply arguments based on different types of concepts says nothing about how or if consciousness is an emergent phenomenon because consciousness is not bound by the boundaries and distinctions of concepts. Your argument merely points out semantic distinctions of the types of categories and mind/world correlations that consciousness produces and claims that these semantic boundaries impose physical limits on the thing which produces them.

If you find limits within an Operating System do these limits necessarily apply to the deeper level of the hardware?

As I said in the essay, this claim doesn't go through, because emergent concepts must always have an analytical definition at the lower level, otherwise they could never be empirically recognised.


So because we haven't yet discovered the highly complex empirical relationship between the network architecture of the brain and the consciousness which emerges from it, does that necessarily mean that it can't ever be discovered?

I don't think so.

Even if temperature is *defined* phenomenologically, it still has to be cashed out in previously defined physical observables before it could ever be measured.


Consciousness is MUCH more complex and an altogether different type of phenomenon than the simple physical analogues (i.e. temperature) that you draw. The empirical laws which govern the forms of consciousness are much more difficult to discover (somewhat like genetics, but yet still more complex). So to argue based on our current state of understanding of these empirical laws is to assume that they will never figure it out. This assumption is highly precarious and IMHO foolhardy. Consciousness is a difficult empirical/quantitative problem, but that doesn't mean it is impossible.

Your argument confuses what is difficult in practice with what is impossible in principle. It is therefore merely an assumption that this difficulty will turn out to be an impossibility. Progress is being made, however, so to convince anyone of the validity of your argument you have to have some reason to believe that progress will halt. What is this reason? Simply the fact that it is impossible (right now) to enter someone else's mind and experience the other consciousness firsthand? I think even this barrier will eventually get erased.

Re: Consciousness as an emergent property
posted on 03/07/2003 1:14 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

Consciousness is not a concept, but concepts are a part of consciousness.


In the bit of my previous post that you quote, I mentioned "the concept of consciousness". Obviously the concept of consciousness is a concept.

Consciousness is an emergent network phenomenon as seen from the inside, ...


This just seems completely arbitrary. You don't give any positive reasons for this belief.

Therefore to apply arguments based on different types of concepts says nothing about how or if consciousness is an emergent phenomenon because consciousness is not bound by the boundaries and distinctions of concepts.


I don't know what you're trying to say here. What sense of 'bound' are you using? I don't see what kind of boundaries you are claiming consciousness doesn't have.

At any given moment, a person's consciousness is bound by a whole bunch of things.

Your argument merely points out semantic distinctions of the types of categories and mind/world correlations that consciousness produces and claims that these semantic boundaries impose physical limits on the thing which produces them.


Again, I am struggling to discern any meaning in this. You are accusing me of claiming something about "physical limits", but I said nothing about physical limits.

Yes, I did discuss some semantic distinctions, and made some inferences from them. But I don't know where you got the "physical limits" from.

So because we haven't yet discovered the highly complex empirical relationship between the network architecture of the brain and the consciousness which emerges from it, does that necessarily mean that it can't ever be discovered?


Of course not. That is obviously an absurd non-sequitur. It has nothing whatsoever to do with what I said.

What I said was: mental terms have a type of definition that gives them a semantic reference to a real, extra-linguistic world; physical terms don't. Therefore physical terms denote formal constructs whereas mental terms denote real stuff that we can empirically observe. Therefore mental things can neither be reduced to, nor emerge from, physical things.

Consciousness is MUCH more complex and an altogether different type of phenomenon than the simple physical analogues (i.e. temperature) that you draw.


Yes, we all know it is more complex, but kindly explain why you think that that has any relevance whatsoever to the argument.

So to argue based on our current state of understanding of these empirical laws is to assume that they will never figure it out. This assumption is highly precarious and IMHO foolhardy.


This is not an assumption I made.

Your argument confuses what is difficult in practice with what is impossible in principle.


No, it makes no mention at all of what is difficult in practice. It is a basic logical argument about whether it is possible in principle for consciousness to emerge from, or to be reduced to, physical systems.

It is therefore merely an assumption that this difficulty will turn out to be an impossibility. Progress is being made, however, so to convince anyone of the validity of your argument you have to have some reason to believe that progress will halt. What is this reason?


With respect, the progress hasn't begun. There has never been any progress in reducing consciousness to physics.

Simply the fact that it is impossible (right now) to enter someone else's mind and experience the other consciousness firsthand? I think even this barrier will eventually get erased.


Basic logic doesn't change. The idea that any amount of technological advance will enable a reduction of consciousness to physics, or will render a person's conscious mind as a third-party observable is just playing with words.

Peter

Re: Consciousness as an emergent property
posted on 03/07/2003 1:33 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

What I said was: mental terms have a type of definition that gives them a semantic reference to a real, extra-linguistic world; physical terms don't. Therefore physical terms denote formal constructs whereas mental terms denote real stuff that we can empirically observe. Therefore mental things can neither be reduced to, nor emerge from, physical things.


This is just ridiculous. ALL terms are mental. Whether they denote experience with an external verifiable phenomenon or whether they denote an internal subjective experience makes no difference. All experience is a result of the activity of the brain. Some experience is self-referential and other experience is world-referential. You can't use these distinctions between types of terms (all of which are mental) to draw conclusions about the deeper-level mechanisms by which they arise. All experience is physical in the end. We merely invent mental terms to categorize and communicate those experiences.

Re: Consciousness as an emergent property
posted on 03/07/2003 1:55 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

All experience is a result of the activity of the brain.


This is begging the question. You're saying that the conscious mind is physical because the conscious mind is physical.

Peter

Re: Consciousness as an emergent property
posted on 03/07/2003 3:50 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

subtillioN: All experience is a result of the activity of the brain.

Peter: This is begging the question. You're saying that the conscious mind is physical because the conscious mind is physical.


That wasn't my argument. It was my conclusion. Your argument is simply that because some 'terms' are currently irreducible to empirically verifiable foundations, consciousness itself is irreducible.

My argument is that consciousness is not equivalent to a 'term'; therefore what applies to 'terms' does not necessarily apply to consciousness.

Re: Consciousness as an emergent property
posted on 03/07/2003 6:42 PM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

Your argument is simply that because some 'terms' are currently irreducible to empirically verifiable foundations, consciousness itself is irreducible.

My argument is that consciousness is not equivalent to a 'term'; therefore what applies to 'terms' does not necessarily apply to consciousness.


Anything that we say about consciousness is expressed in terms. (Anything we say about anything is expressed in terms.) Therefore conclusions about terms impact what we say about the things themselves.

My argument is not that 'some terms are currently irreducible to empirically verifiable foundations', but that physical terms are inherently irreducible to extra-linguistic reality. Their only *meaning* is within a closed formal system.

Peter

Re: Consciousness as an emergent property
posted on 03/08/2003 1:23 AM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

Anything that we say about consciousness is expressed in terms. (Anything we say about anything is expressed in terms.) Therefore conclusions about terms impact what we say about the things themselves.


The mechanism of the emergence of consciousness is not at the symbolic or representational level. You are simply arguing at the wrong level. The only effective level of argumentation against the mechanism of the emergence of consciousness would be at the network architecture level where the 'mechanism' actually exists.

Re: Consciousness as an emergent property
posted on 03/08/2003 2:39 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

Anything that we say about consciousness is expressed in terms. ... Therefore conclusions about terms impact what we say about the things themselves.


The mechanism of the emergence of consciousness is not at the symbolic or representational level. You are simply arguing at the wrong level. The only effective level of argumentation against the mechanism of the emergence of consciousness would be at the network architecture level where the 'mechanism' actually exists.


We seem to be arguing at cross-purposes. You are talking about *computation*, I am talking about *consciousness*.

You mention 'the emergence of consciousness' and immediately presuppose that it *does* 'emerge' and that there is a network mechanism to 'emerge' it. Where does that presupposition come from?

What is your reason for believing that consciousness does -- or even, in principle, can -- emerge from a physical system?

I gave what IMHO is a valid reason for saying that consciousness is the wrong *kind* of thing to emerge from any physical system. I am not saying that it emerges *this* way or *that* way. I am saying that it *cannot* emerge.

Nor am I concerned with what concepts are contained within any given person's consciousness. (I am wondering whether that is where the confusion lies.) But with the concept of consciousness itself.

Peter

Re: Consciousness as an emergent property
posted on 03/08/2003 3:35 AM by subtillioN

We seem to be arguing at cross-purposes. You are talking about *computation*, I am talking about *consciousness*.


There are limited parallels to be drawn. This is what I was doing for the purpose of understanding the difference between the representational level and the mechanism level.

You mention 'the emergence of consciousness' and immediately presuppose that it *does* 'emerge' and that there is a network mechanism to 'emerge' it. Where does that presupposition come from?


From a cursory understanding of the enormous complexity of the brain, and from the cognitive-science knowledge that a self-representational system is possible.

What is your reason for believing that consciousness does -- or even, in principle, can -- emerge from a physical system?


I have been tinkering with my own physical mechanisms of consciousness for quite a while now. When I alter the physical system, the mind correspondingly changes. It seems obvious that the immense complexity of the mind could produce the representational illusion of the real world.

What is the basis for your assumption that it is not possible? I am looking for a reason that pertains to the level at which the mechanism would exist (the network architecture level), not the symbolic level.

I gave what IMHO is a valid reason for saying that consciousness is the wrong *kind* of thing to emerge from any physical system. I am not saying that it emerges *this* way or *that* way. I am saying that it *cannot* emerge.


Your reasoning was irrelevant because it addressed the wrong level of explanation. It said nothing of the limitations on the mechanisms involved in the putative 'emergence' of consciousness.

Nor am I concerned with what concepts are contained within any given person's consciousness. (I am wondering whether that is where the confusion lies.) But with the concept of consciousness itself.


I think you are right. If you view consciousness as anything but an abstraction and representation of external reality then you are bound to get confused because there is no mechanism that we know of that could produce this alternate absolute solipsist 'conscious world'.

Re: Consciousness as an emergent property
posted on 03/08/2003 2:09 AM by subtillioN

but that physical terms are inherently irreducible to extra-linguistic reality. Their only *meaning* is within a closed formal system.


Terms are merely reducible to terms. It IS a closed representational system that simply correlates with reality. Why should a term be reducible to "extra-linguistic reality"? The mind is an abstraction of reality, not reality itself. How does this prove that consciousness is not an emergent phenomenon?

Re: Consciousness as an emergent property
posted on 03/08/2003 2:33 AM by blue_is_not_a_number


Terms are merely reducible to terms. It IS a closed representational system that simply correlates with reality. Why should a term be reducible to "extra-linguistic reality"? The mind is an abstraction of reality, not reality itself. How does this prove that consciousness is not an emergent phenomenon?


Here you are leaving out the important point: Although both the computer and the mind reference reality, and both the computer and the mind have the reality of processing information, the difference is that consciousness has another reality: that of seeing-color. Seeing-color is (also) a reality in itself, not just a formal reference, and seeing-color is unlike processing information, unlike adding 1 + 2. Even if someone is able to add numbers so fast that he starts seeing colors ;-), at that point he is not just adding numbers anymore, which is what a computer is supposed to do. Otherwise it's not a computer anymore. Voids the warranty, I mean, the definition. ;-)

Regards,
blue_is_not_a_number
http://www.occean.com

Re: Consciousness as an emergent property
posted on 03/08/2003 2:58 AM by subtillioN

the difference is that consciousness has another reality: that of seeing-color.


It is merely an illusion or representation of reality. Not at all the same status as a 'reality'.

Re: Consciousness as an emergent property
posted on 03/08/2003 3:03 AM by blue_is_not_a_number

blue_etc: the difference is that consciousness has another reality: that of seeing-color.

subtillioN: It is merely an illusion or representation of reality. Not at all the same status as a 'reality'.


The color is there in front of your own eyes, so to speak. Doesn't a color have a presence in your field of vision? Whether you call it an illusion or not, it's there. Even a movie projected onto a screen has a presence as light.

Blue_is_not_a_number

Re: Consciousness as an emergent property
posted on 03/08/2003 3:19 AM by subtillioN

The color is there in front of your own eyes, so to speak. Doesn't a color have a presence in your field of vision?


Do you think the primary colors exist outside of the mind? They don't. The spectrum is continuous. It is NOT formed from only three 'primary' colors. This is an example of the mind in action simplifying, abstracting and generalizing reality for the purpose of representing it.
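
For illustration, a minimal C++ sketch of the kind of simplification being described here (the function name is invented and the band boundaries are only rough conventional values): a continuous wavelength goes in, a small discrete vocabulary of labels comes out.

#include <iostream>
#include <string>

// Collapse a continuous wavelength (in nanometres) into a coarse colour label.
// The boundaries are approximate conventions, not physical constants.
std::string coarseColourLabel(double nm) {
    if (nm < 450.0) return "violet";
    if (nm < 495.0) return "blue";
    if (nm < 570.0) return "green";
    if (nm < 590.0) return "yellow";
    if (nm < 620.0) return "orange";
    return "red";
}

int main() {
    std::cout << coarseColourLabel(680.0) << "\n";  // prints "red"
}

The binning itself is trivial; the point under dispute in this thread is what, if anything, such a discrete labelling has to do with the experience of colour.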

Whether you call it an illusion or not, it's there. Even a movie projected onto a screen has a presence as light.


Illusions, like everything else, are real. They simply look like something that they are not, i.e. the mind looks like the world (so we assume), but it is not the world. It is just a representation of it.

Re: Consciousness as an emergent property
posted on 03/08/2003 3:27 AM by blue_is_not_a_number

Do you think the primary colors exist outside of the mind? They don't. The spectrum is continuous. It is NOT formed from only three 'primary' colors.


Colors don't exist outside of the mind at all, and that's the point: They exist in consciousness, they are a reality of consciousness.

This is an example of the mind in action simplifying, abstracting and generalizing reality for the purpose of representing it.


Here you are going back to the context of processing information, and you are right as far as that is concerned. Colors, though, are means of representation which have a reality in themselves.

Re: Consciousness as an emergent property
posted on 03/08/2003 3:41 AM by blue_is_not_a_number

Illusions, like everything else, are real. They simply look like something that they are not, i.e. the mind looks like the world (so we assume), but it is not the world. It is just a representation of it.


You are talking about the mind as if it were outside of the world. Where is it then? A color looks like a color, not like anything else. Or are you saying something else would look like a color? It doesn't matter, because then that something else is also the color. Because the color _is_ what it looks like. You can't understand that with a thinking rooted in classical physics, and that's also Dennett's limitation.

Re: Consciousness as an emergent property
posted on 03/08/2003 3:52 AM by subtillioN

You are talking about the mind as if it were outside of the world. Where is it then?


We are talking past each other then because that is the opposite of what I was trying to communicate.

A color looks like a color, not like anything else.

That is actually what I said, when I said something to the effect of "the primary colors are an invention of the mind to represent EM frequency".

You can't understand that with a thinking rooted in classical physics, and that's also Dennett's limitation.


What makes you assume it is impossible?

Re: Consciousness as an emergent property
posted on 03/08/2003 4:07 AM by blue_is_not_a_number

We are talking past each other then because that is the opposite of what I was trying to communicate.


I know. Read on.

That is actually what I said, when I said something to the effect of "the primary colors are an invention of the mind to represent EM frequency".


You are using only two categories: abstract reference and reality. You need a third one: The carrier of the reference. Well, actually that's what you call the mind. But the mind is not just an abstract container of abstract references, it has a reality besides its action of referencing: that of conscious sensation, such as seeing-color. The color is a conscious reality in addition to its activity of referencing.

What makes you assume it is impossible? [to understand color from a thinking rooted in classical physics]


Seeing-color is both a reality and an appearance in itself, at the same time (or without time). The appearance has an existence in itself. That doesn't happen in classical physics.

Re: Consciousness as an emergent property
posted on 03/08/2003 4:21 AM by subtillioN

You are using only two categories: abstract reference and reality. You need a third one: The carrier of the reference. Well, actually that's what you call the mind.


The mind is a collection of references. The brain would be the 'carrier'.

But the mind is not just an abstract container of abstract references, it has a reality besides its action of referencing:


The brain does the referencing through the interface of the mind.

The color is a conscious reality in addition to its activity of referencing.


The illusion of color is real, but color does not exist apart from the reality of this illusion.


What makes you assume it is impossible? [to understand color from a thinking rooted in classical physics]

Seeing-color is both a reality and an appearance in itself, at the same time (or without time). The appearance has an existence in itself. That doesn't happen in classical physics.


Classical physics? Where did that come from? You obviously don't know my stance on physics. Here is a quick off-the-cuff slogan: The quantum revolution was revolutionary enough.

Does this sound like I am rooted in classical physics?

How does this discussion even tie in with classical physics?

Re: Consciousness as an emergent property
posted on 03/08/2003 4:23 AM by subtillioN

The quantum revolution was revolutionary enough.


lol oops! I meant to say that it was NOT "revolutionary enough"! They left the very problem of the kinetic-atomic-theory in the core to give us the wave-particle duality.

Re: Consciousness as an emergent property
posted on 03/08/2003 4:43 AM by blue_is_not_a_number

Classical physics? Where did that come from? You obviously don't know my stance on physics. Here is a quick off-the-cuff slogan: The quantum revolution was revolutionary enough.


I meant to say that it was NOT "revolutionary enough"! They left the very problem of the kinetic-atomic-theory in the core to give us the wave-particle duality.


Excuse me for changing the order in which I reply...

As far as I understand from your previous posts, you are looking for an explanation below the quantum level in terms of, correct me if I'm wrong, classical physics. You are using the model of compressible fluid. Maybe this is why Peter was talking about Bell's theorem (at least that's why I thought of it): Quantum teleportation is difficult to explain with the model of compressible fluid, I guess. But I'm not a physicist, and maybe you are able to come up with something here.

The illusion of color is real, but color does not exist apart from the reality of this illusion.



Color is not an illusion, it is an appearance, and presence. 'Illusion' would be the right word only if we think something is there which isn't. But the color is there, albeit in consciousness, not as a measurable event. It can't be measured because it is not quantifiable. But to acknowledge something as real which can't be measured contradicts your definition of reality, so you won't acknowledge it. The "reality of an illusion" is too abstract for a color. The color is right there in your view, visible. The color is visibility itself, which again doesn't fit into concepts of mathematical physics.

Re: Consciousness as an emergent property
posted on 03/08/2003 4:54 AM by subtillioN

Color is not an illusion, it is an appearance, and presence. 'Illusion' would be the right word only if we think something is there which isn't.


You are right. I use the term here because it looks to us as if color were 'out there', but it's not, it's in the mind.

Re: Consciousness as an emergent property
posted on 03/08/2003 5:22 AM by blue_is_not_a_number

You are right. I use the term here because it looks to us as if color were 'out there', but it's not, it's in the mind


It is in consciousness. That doesn't mean that 'consciousness' and 'brain' are separate. But a mathematical description of the brain (and physics is mathematical description) would not be able to explain color (which is the same as the conscious experience of color) as a mathematical function, nor as a mathematical function of any other reality.

I'll be back soon...

Re: Consciousness as an emergent property
posted on 03/08/2003 11:58 AM by subtillioN

It is in consciousness. That doesn't mean that 'consciousness' and 'brain' are separate.


Not any more than a computer and its software are separate. In fact, much less so.

But a mathematical description of the brain (and physics is mathematical description) would not be able to explain color (which is the same as the conscious experience of color) as a mathematical function, nor as a mathematical function of any other reality.


Quantitative and qualitative descriptions of what a color is, how it is formed, how different individual representations of color really are, etc. CAN be formulated. They are just not as easy to formulate as some would hope.

Re: Consciousness as an emergent property
posted on 03/08/2003 5:10 AM by subtillioN

As far as I understand from your previous posts, you are looking for an explanation below the quantum level in terms of, correct me if I'm wrong, classical physics.


The only thing that is even close to 'classical' about my view is that it is purely causal and deterministic.

You are using the model of compressible fluid. Maybe this is why Peter was talking about Bell's theorem (at least that's why I thought of it): Quantum teleportation is difficult to explain with the model of compressible fluid, I guess. But I'm not a physicist, and maybe you are able to come up with something here.


'Quantum Teleportation' is simply a faulty (IMHO) interpretation of the results of the Aspect-type experiments. This is why there can never be any actual transfer of REAL data through this 'teleportation'. If there were a real transfer of REAL data then I would look at it more closely, but the 'teleported data' is simply the knowledge of the state of the system via the collapse of the mathematical wave-function, which is not actually transferred through the 'teleportation' but is merely acquired through statistical methods. This is also why the results are so unreliable. See http://users.aber.ac.uk/cat/ for more info.

There is a huge amount of sloppiness in the experiments, unchecked bias in the peer-review process, and leeway in the statistical processing. No peer-reviewed journal will ever accept anything that doesn't toe the party line. To attempt such a thing is professional suicide for a physicist.

Re: Consciousness as an emergent property
posted on 03/08/2003 2:56 AM by PeterLloyd

Terms are merely reducible to terms. It IS a closed representational system that simply correlates with reality.


That's a description of physics language only.

(Although the correlation with reality is outside the formalism of physics. Cf. the formalism of chess, which is a closed symbolic system that can be assigned bindings to many and various physical chess boards. Likewise the formalism of physics can be assigned a binding to structural patterns in the observed phenomenal world.)

Why should a term be reducible to "extra-linguistic reality"?


I don't know what you mean by 'why' in this context. I'm just making a factual observation. Conscious terms just *do* have an extra-linguistic reference to reality. That's just the way we use them.

The mind is an abstraction of reality, not reality itself.


You can disprove this in a number of ways. One way is to drop a brick on your foot. The resulting pain is very much a part of reality. It is emphatically not an abstraction.

If you are confused by the connection between the conscious pain and the physical brick hitting the physical foot, you might find it easier to consider experiences you have when dreaming: these are actual, real conscious experiences.

How does this prove that consciousness is not an emergent phenomenon?


Because consciousness is real, whereas physical substance is a notional construct -- strictly speaking, a fiction. You can't derive something real from a fiction.

This is like opening your cereal packet in the morning and expecting to find an imaginary number in it! The imaginary number (like all numbers) is a notional construct. So you would be making a category-mistake (in Gilbert Ryle's terms) to suppose you could encounter it in the flesh. Likewise it would be a category-mistake to suppose that real consciousness could emerge from a notional physical system.

Peter

Re: Consciousness as an emergent property
posted on 03/08/2003 3:15 AM by subtillioN

You can disprove this in a number of ways. One way is to drop a brick on your foot. The resulting pain is very much a part of reality. It is emphatically not an abstraction.


Abstractions are real, just like everything else. The point is that the pain is a highly simplistic motivational representation of what is happening to the foot. Representation is abstraction.

If you are confused by the connection between the conscious pain and the physical brick hitting the physical foot, you might find it easier to consider experiences you have when dreaming: these are actual, real conscious experiences.


You are confused by an assumption that representation is not abstraction. The dream world is NOT REAL. It is merely representation based on rearranged environment correlated memories. This is why virtually anything is possible. IT IS FANTASY, not fact.


Because consciousness is real, whereas physical substance is a notional construct -- strictly speaking, a fiction. You can't derive something real from a fiction.


WOW, a solipsist at heart! EVERYTHING IS REAL. Some things merely look like something which they are not. THAT is the nature of consciousness. A "benign user-illusion" (to quote Dennett).

Likewise it would be a category-mistake to suppose that real consciousness could emerge from a notional physical system.


This points to your problem. Just because we have to form a representation (or notion) of EVERYTHING (including symbols and concepts representing our own mind) to perceive and understand it, does not mean that the actual thing that our representation is correlated with really IS a "notional physical system". You are confusing your own user-illusions with reality itself!

Re: Consciousness as an emergent property
posted on 03/08/2003 3:49 AM by PeterLloyd

Abstractions are real, just like everything else.


Huh? So if I invent a set X={P,{a,b,c}} then P is real? So Mickey Mouse and Sherlock Holmes are real?

Not in the normal sense of the word 'real' they're not.

The point is that the pain is a highly simplistic motivational representation of what is happening to the foot.


So what? It's a conscious sensation, and that's all that matters.

If it's representation that's confusing you then gently press your eyeballs and get some non-representational phosphenes. They're real (otherwise how would you know they were there?).

You are confused by an assumption that representation is not abstraction.


Representation is not abstraction. A conscious sensation is non-abstract but it can still represent.

The dream world is NOT REAL. It is merely representation based on rearranged environment correlated memories. This is why virtually anything is possible. IT IS FANTASY, not fact.


The conscious experience of a dream is real. (Otherwise how would you know you were having a dream?) The world depicted by it is, of course, not real.

This is just like the waking world, in fact.

WOW, a solipsist at heart!


Huh? Where did solipsism come from? Nothing I said supports solipsism.

EVERYTHING IS REAL.


Sherlock Holmes isn't. The present king of France isn't. The fourth side of a triangle isn't.

This points to your problem. Just because we have to form a representation (or notion) of EVERYTHING (including symbols and concepts representing our own mind) to perceive and understand it, does not mean that the actual thing that our representation is correlated with really IS a "notional physical system". You are confusing your own user-illusions with reality itself!


What?

Peter

Re: Consciousness as an emergent property
posted on 03/08/2003 4:10 AM by subtillioN

subtillioN: Abstractions are real, just like everything else.

Peter: Huh? So if I invent a set X={P,{a,b,c}} then P is real? So Mickey Mouse and Sherlock Holmes are real?

Not in the normal sense of the word 'real' they're not.


There is no dualism between real and illusion. An illusion is simply not what it seems. This is the root of your confusion with what I am saying.

If it's representation that's confusing you then gently press your eyeballs and get some non-representational phosphenes. They're real (otherwise how would you know they were there?).


The images of those phosphenes ARE representations of the physical processes that are happening to the eyeball.


subtillioN: You are confused by an assumption that representation is not abstraction.

Peter: Representation is not abstraction. A conscious sensation is non-abstract but it can still represent.


There are at least two different senses of the word 'abstract'; you are using one and I am using another.


subtillioN: The dream world is NOT REAL. It is merely representation based on rearranged environment correlated memories. This is why virtually anything is possible. IT IS FANTASY, not fact.

Peter: The conscious experience of a dream is real. (Otherwise how would you know you were having a dream?) The world depicted by it is, of course, not real.


That was my point.


subtillioN: WOW, a solipsist at heart!

Peter: Huh? Where did solipsism come from? Nothing I said supports solipsism.


It came from this, you said:

"Because consciousness is real, whereas physical substance is a notional construct -- strictly speaking, a fiction. You can't derive something real from a fiction. "

Sounds like solipsism to me.



EVERYTHING IS REAL.

Sherlock Holmes isn't. The present king of France isn't. The fourth side of a triangle isn't.


If you know the nature of the illusion then you can see its reality. You are simply using language to point to things which cannot exist. These referents are not part of everything because they do not exist. The fourth side of a triangle never gets past the illogic of the statement. It points to nothing.



What?


ok I will rephrase it:

Just because we have to form a correlated 'notion' of physical reality in order to perceive and understand it, does not mean that the actual physical correlate itself IS a "notional physical system".

Re: Consciousness as an emergent property
posted on 03/08/2003 5:16 AM by PeterLloyd

"Because consciousness is real, whereas physical substance is a notional construct -- strictly speaking, a fiction. You can't derive something real from a fiction. "

Sounds like solipsism to me.


Solipsism is the theory that I am the only conscious being that exists. Personally, I think that that theory is massively implausible. It's much more plausible to suppose that every person on this planet is a conscious being and probably a lot of the animals are, too.

Solipsism does not follow from what I was saying.

Maybe you were thinking of mental monism?

ok I will rephrase it:

Just because we have to form a correlated 'notion' of physical reality in order to perceive and understand it, does not mean that the actual physical correlate itself IS a "notional physical system".


Thanks for the rephrase.

What you say is, in itself, true. Let p = "we have to form a correlated 'notion' of physical reality" and q = "the actual physical correlate itself IS a 'notional physical system'." I agree with you that p does not imply q. But I didn't say it did.

Let r = "The objects of physics such as protons etc do not occur as conscious experiences" and let s = "The terms of physics do not have a direct semantic reference to any extra-linguistic things". Then I am claiming that r implies s, and s implies q.

Let me use a computer analogy. In C++ you can have a pointer variable, say "X". In order to get X to point anywhere, you must assign to it the address of the thing you want it to point to. Now, anything that is not in memory (e.g. the Eiffel Tower) cannot have an address. Therefore you can't get a pointer to it. Any pointer that contains no address can still be used: you can put it into a structure, you can assign its value to another pointer -- but you can't de-reference it.
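
A minimal C++ sketch of the pointer behaviour just described (modern syntax; the names are invented for illustration):

#include <iostream>

struct Record {
    int* term;  // somewhere to store a pointer
};

int main() {
    int landmark = 42;        // something that is in memory, so it has an address
    int* X = &landmark;       // X points to it and can be de-referenced
    std::cout << *X << "\n";  // prints 42

    int* Y = nullptr;         // a pointer that carries no address
    Record r{Y};              // it can still be put into a structure...
    int* Z = r.term;          // ...and its value assigned to another pointer,
    (void)Z;                  // but de-referencing Z (*Z) would be undefined
                              // behaviour: there is nothing for it to refer to.
    return 0;
}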

Likewise, a term can refer to something only if it is in consciousness, so that you can fix what the referent is. Since physical things are not in consciousness, you cannot fix any reference to them. Therefore any terms that denote physical things do not really bear any reference to them, but have the role of fictions. You can use them within the closed linguistic system of physics, but you can't 'de-reference' them.

Peter

Re: Consciousness as an emergent property
posted on 03/08/2003 5:45 AM by subtillioN

Solipsism is the theory that I am the only conscious being that exists. ...

Solipsism does not follow from what I was saying.

Maybe you were thinking of mental monism?


Right, sorry for the mis-label! I am a substance monist; would this be your philosophical antithesis?


Let r = "The objects of physics such as protons etc do not occur as conscious experiences" and let s = "The terms of physics do not have a direct semantic reference to any extra-linguistic things". Then I am claiming that r implies s, and s implies q.


There are certain experimental results that show that such a thing as a 'proton' does exist. What exactly this 'proton' is, is hypothetical, but there is a certain amount of conscious experience that we do have with the 'proton'.

How do the limits on the spatial resolution of our sensations and the limited representational realm built from that sensation say anything about whether consciousness is emergent or not?

Likewise, a term can refer to something only if it is in consciousness, so that you can fix what the referent is. Since physical things are not in consciousness, you cannot fix any reference to them.


Physical things ARE in consciousness. There is simply a limit to the resolution to which we can perceive them.

Therefore any terms that denote physical things do not really bear any reference to them,


Be careful not to generalize all scales based on the limited realm of perception in question.

but have the role of fictions. You can use them within the closed linguistic system of physics, but you can't 'de-reference' them.


The mind itself is a fiction created by the brain to correlate with physical reality. I have seen nothing in your arguments which negates this interpretation.

Re: Consciousness as an emergent property
posted on 03/08/2003 2:13 AM by Dimitry V

OK, so I went backward through your postings and finally got to your main argument :)

But a thermodynamic soup involves new concepts such as temperature that just don't have any meaning at the microlevel of molecules.


It's a higher logical level. Just like you can never make your car reverse just by pressing either gas or brake pedals -- or any combination. And sometimes you can't express the "higher" functions (gear change) in terms of the "lower" (gas & brake pedals), or vice versa. But this may have less relevance for "emergent" properties.

Likewise people claim that the concept of consciousness emerges as a wholly new concept from brains.

As I said in the essay, this claim doesn't go through, because emergent concepts must always have an analytical definition at the lower level, otherwise they could never be empirically recognised. Even if temperature is *defined* phenomenologically, it still has to be cashed out in previously defined physical observables before it could ever be measured.


Well, as Searle might say, we shouldn't rewrite ontology in terms of epistemology. You shouldn't say that something is an "X", just because you can only write 1 letter words. And we can't say that consciousness has no lower level definition, just because the epistemology prevents us from examining it fully -- at this time.

But the Quantum-Duality stuff has absolutely no evidence to support it, it's pure speculation. It's almost like saying that since the clock in your PC cannot be absolutely precise (it varies in duration from cycle to cycle) -- that this somehow opens the door for consciousness in an asynchronous, let's say analogue computer.

I mean, people are looking for a magic spell to produce a doorway into consciousness. But reciting "quantumus consciencitus" or whatever, only makes you look silly.

That constraint does not apply to consciousness. Conscious experiences such as the colour red are defined by private ostensive definition. (E.g. you look at something red and declare to yourself that that is what you mean by "red".) Therefore consciousness cannot be an emergent property.


I'm not sure I understand what you are saying. A computer program can easily examine an image and tell you its predominant color (RGB). Without Consciousness, it could even bind it to a label, "red".

I don't know exactly what the transistors and the features of the CPU have to do in order to produce this, but I do happen to know the type of algorithm that could do such a thing. Likewise, I know that when someone recognizes something red, their neurology reorganizes slightly -- but I couldn't tell you exactly how the neurons do that, yet I know the higher level strategies that might be involved.
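
For what it's worth, a toy C++ sketch of the sort of algorithm alluded to here (an illustrative assumption, not a claim about any particular implementation): average the red, green and blue channels over the pixels and bind the largest to a label.

#include <array>
#include <iostream>
#include <string>
#include <vector>

struct Pixel { unsigned char r, g, b; };

// Return a label for whichever channel has the largest total over the image.
std::string predominantColour(const std::vector<Pixel>& image) {
    std::array<double, 3> sum{0.0, 0.0, 0.0};
    for (const Pixel& p : image) {
        sum[0] += p.r;
        sum[1] += p.g;
        sum[2] += p.b;
    }
    const std::array<std::string, 3> labels{"red", "green", "blue"};
    std::size_t best = 0;
    for (std::size_t i = 1; i < 3; ++i)
        if (sum[i] > sum[best]) best = i;
    return labels[best];
}

int main() {
    std::vector<Pixel> image{{200, 30, 20}, {180, 60, 40}, {220, 10, 15}};
    std::cout << predominantColour(image) << "\n";  // prints "red"
}

Whether that mechanical labelling has anything to do with the experience of red is, of course, exactly what is in dispute in this thread.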

As a hypnotist, I understand the methods people can use to bind objects to labels:
Your method was to say the label to yourself internally while looking at it -- for example.

And another person might make the object larger, in the mind's eye, and imagine the letters of the label imprinted or emblazoned on the object.

I emblazon people's names onto their foreheads, when I want to learn their name -- for example.

And I can teach other people to use a strategy of some sort, when they have difficulty doing some task that requires remembering labels. So, to me, there is nothing "ostensibly private" about how people do so.

Maybe we should just be talking about the subjective experience of something red, rather than the label one associates to them?

Re: Consciousness as an emergent property
posted on 03/08/2003 3:27 AM by PeterLloyd

It's a higher logical level. Just like you can never make your car reverse just by pressing either gas or brake pedals -- or any combination.


Yes, I understand the concept of emergence and see that it applies to some things, e.g. thermodynamics, or more pertinently the high-level data structures that I use in computer programming, which ultimately rest on electrical charges in the circuits. I am not opposed to the concept of emergence per se; I am saying specifically that consciousness cannot emerge from a physical system.
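
For concreteness, the standard kinetic-theory relation behind the temperature example (stated here only as a reference aside): for an ideal monatomic gas,

$\langle \tfrac{1}{2} m v^{2} \rangle = \tfrac{3}{2} k_{B} T$,

so the temperature $T$ is fixed by an average over the whole ensemble of molecules and has no meaning for a single molecule taken on its own -- which is what makes it the stock example of an emergent, higher-level quantity.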

Well, as Searle might say, we shouldn't rewrite ontology in terms of epistemology.


Much as I respect Searle, this time he's just plain wrong.

You have to ask: Why not? Where does the rule come from that you shouldn't rewrite ontology in terms of epistemology? Sure, in most circumstances you would not want to do that. But the mind-body problem is different from most other problems.

You shouldn't say that something is an "X", just because you can only write 1 letter words. And we can't say that consciousness has no lower level definition, just because the epistemology prevents us from examining it fully -- at this time.


I am as reductionistic as anybody, and I don't deny that conscious experiences can be reduced to a lower level. But the lower level also consists of conscious experiences. The Buddhists have done the most work in this area.

What I am denying is that consciousness can be reduced to a physical, non-conscious level.

But the Quantum-Duality stuff has absolutely no evidence to support it, it's pure speculation.


It has elementary logic to support it. (i) I can report my conscious experiences (as indeed we all can). (ii) Consciousness is nonphysical (see earlier arguments about nonreducibility to physics). (iii) Therefore reporting my conscious experience is an instance of a nonphysical process having an effect on a physical system. (iv) If the physical system were deterministic, this would be impossible. (v) Since it *does* happen (as an empirical fact), it must do so at nondeterministic events. (vi) The only nondeterministic events are quantum mechanical events.
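
Schematically (an editorial restatement with invented symbols, not Peter's own notation): write $A$ for 'consciousness affects the physical system' (steps i-iii) and $D$ for 'the physical system is deterministic'. Step (iv) asserts $D \Rightarrow \neg A$, so by modus tollens

$A, \; (D \Rightarrow \neg A) \;\vdash\; \neg D$,

and steps (v)-(vi) then locate the required nondeterminism in quantum-mechanical events.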

Which step in this argument do you disagree with?

(Aside: there are also the initial conditions, but I don't want to bring them in here as it would muddy the water.)

BTW This has nothing to do with 'quantum duality'. I think quantum duality is a useless idea. And it's irrelevant here anyway.

It's almost like saying that since the clock in your PC cannot be absolutely precise (it varies in duration from cycle to cycle) -- that this somehow opens the door for consciousness in an asynchronous, let's say analogue computer.


No, that's got nothing to do with what we're discussing.

I mean, people are looking for a magic spell to produce a doorway into consciousness. But reciting "quantumus consciencitus" or whatever, only makes you look silly.


Fine, so tell me which step in the argument (i) to (vi) you think is wrong and why you think it's wrong.

I'm not sure I understand what you are saying. A computer program can easily examine an image and tell you its predominant color (RGB). Without Consciousness, it could even bind it to a label, "red".


Yes, of course it can. So what? What relevance does that have to this discussion? We are talking about conscious experiences, not about labelling parts of the electromagnetic spectrum, which is what the computer is doing.

As a hypnotist, I understand the methods people can use to bind objects to labels:
Your method was to say the label to yourself internally while looking at it -- for example.


That was an example. You can do it by any method. The key point is that the referent is present in consciousness.

And I can teach other people to use a strategy of some sort, when they have difficulty doing some task that requires remembering labels. So, to me, there is nothing "ostensibly private" about how people do so.


It's private in so far as you cannot look into their minds. You rely on their verbal reports and body language to infer what is happening in their conscious minds.

BTW I did not say "ostensibly private", I said "private ostensive definition". The "ostensive" is qualifying the "definition". It means 'definition by pointing to something'.

Maybe we should just be talking about the subjective experience of something red, rather than the label one associates to them?


Fine, but where would that get you?

Peter

Re: Consciousness as an emergent property
posted on 03/08/2003 9:57 PM by Dimitry V

I am saying specifically that consciousness cannot emerge from a physical system.

I am as reductionistic as anybody, and I don't deny that conscious experiences can be reduced to a lower level. But the lower level also consists of conscious experiences. The Buddhists have done the most work in this area.

What I am denying is that consciousness can be reduced to a physical, non-conscious level.


I take it that by non-conscious, you don't mean unconscious? And I'm interested in what you think of Multiple Personality Disorder in light of some quantum mechanical path to consciousness? Perhaps you don't think anything of it, and that would be OK.

There's another term, co-conscious, where people have "parts" that function at various levels of consciousness. Some hypnotherapies and psychotherapies deal with "parts" of a person (for example, in Transactional Analysis, you have the Child, Parent, and Adult parts); these are literally unconscious processes that have been ingrained into the neurology of the person, ingrained responses. I'm sure you don't think about each letter you type as you write back?

It has elementary logic to support it. (i) I can report my conscious experiences (as indeed we all can). (ii) Consciousness is nonphysical (see earlier arguments about nonreducibility to physics). (iii) Therefore reporting my conscious experience is an instance of a nonphysical process having an effect on a physical system. (iv) If the physical system were deterministic, this would be impossible. (v) Since it *does* happen (as an empirical fact), it must do so at nondeterministic events. (vi) The only nondeterministic events are quantum mechanical events.

Which step in this argument do you disagree with?


I disagree with the assumption that "physical" and "nonphysical" form a partition. Which clashes first with (iii).

We are talking about conscious experiences, not about labelling parts of the electromagnetic spectrum


Which is why I suggested dropping the idea of labeling anything altogether.

The key point is that the referent is present in consciousness.


Present, or re-presented?

It's private in so far as you cannot look into their minds. You rely on their verbal reports and body language to infer what is happening in their conscious minds.


Yes, and you rely on minimal cues as much as possible to reduce the possibility of "cheating". And you can do things in an indirect fashion, such that they answer a question -- that they don't know they were asked.

Maybe we should just be talking about the subjective experience of something red, rather than the label one associates to them?

Fine, but where would that get you?


That would get back to primary experience, without adding the complication of labeling it.

Re: Consciousness as an emergent property
posted on 03/09/2003 12:40 PM by sushi101

What I am denying is that consciousness can be reduced to a physical, non-conscious level.


Of course it can't, but it arises _from_ matter; it is not separate from matter and it is not purely holistic.

It exists between time and matter, a causal feedback loop with an ability to do pattern recognition over time (states).

If consciousness were ever a product sold on the shelves in supermarkets, there would be a line that said "Just add time".

So let me ask you this, Peter: do you think it would be possible to be even more conscious (higher or different resolution?) than we are?

Re: Consciousness as an emergent property
posted on 03/10/2003 11:32 AM by PeterLloyd

Of course it can't (be reduced to physics), but it arises _from_ matter ...


That would be very odd, because the equations and statements in physics textbooks describe how *physical* states arise out of past physical states. I've never yet seen one that said how a conscious sensation would arise from a physical state. In fact, I don't quite see how a physics textbook could ever do so, because ... in order to do so, it would have to define some terms for denoting conscious sensations. And since we both agree that consciousness is not reducible to physics (right?), I am at a loss to see how anyone could ever, in principle, define conscious experiences in physical terms. Agreed? You can't define the sensation of red in terms of space and energy? (Well, if you *could* do so, then you could hand it to a blind person and she would be able to see again!) Yes, I know you can define the 'red' segment of the electromagnetic spectrum in physical terms, but that's not what we're talking about. We're talking about what you actually experience in your mind.

So, therefore, no laws of physics are ever going to be able to predict the occurrence of any conscious sensations. After all, you can't predict what you can't define. So, if conscious sensations ever did 'emerge', then it would be a miracle rather than an event caused under the nomological constraint of physical law. But we don't believe in miracles, do we? Um .. no, so it looks as if consciousness does not and cannot 'emerge' from matter. So, it must be a basic ingredient of reality.

So let me ask you this, Peter: do you think it would be possible to be even more conscious (higher or different resolution?) than we are?


I don't understand the question. Consciousness is multi-faceted and multidimensional. Which dimension, precisely, are you measuring? E.g. is a blind person with a heightened sense of hearing more or less conscious? The question does not make much sense unless you identify one single dimension at a time.

Peter

Re: Consciousness as an emergent property
posted on 03/10/2003 12:05 PM by subtillioN

That would be very odd, because the equations and statements in physics textbooks describe how *physical* states arise out of past physical states. I've never yet seen one that said how a conscious sensation would arise from a physical state. In fact, I don't quite see how a physics textbook could ever do so, because ... in order to do so, it would have to define some terms for denoting conscious sensations.


It is odd how you seem to look at every level but the proper one for finding the mechanisms of consciousness. First you were looking at the macro end, the symbolic/language level, and then you went to the micro end, the physics level. The only way to address the mechanisms of the emergence of consciousness is to address the level which is relevant to the mechanisms involved.

Consciousness is emergent above the network architecture level, not the physics level. It is a consequence of a specific type and high level of complexity of organization of networked units in specific architectures, which form functional modules that work together to represent the world and the self (i.e. self-awareness).

It's an amazing trick really that such a self-referential dynamic structure can produce consciousness, but it is entirely addressable at the network/modular level.

I think you should study cognitive science and then come back with your more relevant critiques of the mechanisms involved.

...just a thought

Re: Consciousness as an emergent property
posted on 03/11/2003 10:34 AM by PeterLloyd

It is odd how you seem to look at every level but the proper one for finding the mechanisms of consciousness. First you were looking at the macro end, the symbolic/language level, and then you went to the micro end, the physics level. The only way to address the mechanisms of the emergence of consciousness is to address the level which is relevant to the mechanisms involved.


I wasn't 'looking for the mechanisms of consciousness'. I was pointing out why consciousness cannot emerge from physics.

Your counter-argument to my points seems to be: let's talk about something else instead.

I say: consciousness cannot be emergent from physics because physical laws (at whatever level) do not have any reference to the contents of consciousness. Consciousness does not have a physical definition, therefore it cannot be referenced in a physical law, therefore it cannot emerge from physics.

You say: Let's talk about architecture instead.

No, let's focus on the basics first. Like: whether or not consciousness could, even in principle, ever emerge from physics. Only if we were to get an affirmative answer to that would it be appropriate to move on to asking about the details of how it could so emerge.

Peter

Re: Consciousness as an emergent property
posted on 03/11/2003 11:19 AM by subtillioN

I wasn't 'looking for the mechanisms of consciousness'. I was pointing out why consciousness cannot emerge from physics.


Your argument is irrelevant.

Your counter-argument to my points seems to be: let's talk about something else instead.


No it was “Let’s not waste time arguing irrelevant points.”

I say: consciousness cannot be emergent from physics because physical laws (at whatever level) do not have any reference to the contents of consciousness. Consciousness does not have a physical definition, therefore it cannot be referenced in a physical law, therefore it cannot emerge from physics.


So because consciousness is a private, subjective, internal phenomenon then this automatically means that it is not physical? How come then raw chemistry can alter it DRAMATICALLY? We do have physical models of consciousness, but they exist at the level that you are ignoring. This is why you will forever be ignorant of HOW consciousness works.

You say: Let's talk about architecture instead.


You want to talk about the physical mechanisms of consciousness. I am telling you that they exist at the network architectural level. If you argue at any other level then your arguments are irrelevant. They are purely symbolic and only work within your closed little dualistic logic system.

No, let's focus on the basics first. Like: whether or not consciousness could, even in principle, ever emerge from physics. Only if we were to get an affirmative answer to that would it be appropriate to move on to asking about the details of how it could so emerge.


We are proof of concept. If you don't understand how the mechanisms of consciousness work, and thus what consciousness actually is, then you can't even begin to look for it at any level.

Re: Consciousness as an emergent property
posted on 03/11/2003 12:07 PM by PeterLloyd

Your argument is irrelevant. ... Let’s not waste time arguing irrelevant points.


Very persuasive line of argument. Not.

Peter: consciousness cannot be emergent from physics because physical laws (at whatever level) do not have any reference to the contents of consciousness. Consciousness does not have a physical definition, therefore it cannot be referenced in a physical law, therefore it cannot emerge from physics.

subtillioN: So because consciousness is a private, subjective, internal phenomenon then this automatically means that it is not physical?


It's difficult to know how to respond when you quote an argument and then immediately paraphrase it as something quite different.

It's true that consciousness is a 'private, subjective' phenomenon. But that is not the reason I gave for its being irreducible to, and non-emergable from, physics. The reason is that the language in which physics (at *any* level) is expressed is in terms that ultimately have a definition rooted in analytical concepts such as mass and energy. The terms in which facts of consciousness are expressed are not amenable to that kind of definition. They have a different kind of definition, in which they gain meaning by being related (by declaration, association, or any other technique) with an immediate conscious experience.

So there are two disjoint sets of terms: the physical and the mental. *That* is why mental facts can never follow from physical facts.

The abstract levels of systems that you project onto the physical world do not change that basic, elementary logic. Each level is ultimately defined in terms of lower levels, otherwise it would be free-floating and meaningless. So all physical systems, no matter how abstract the concepts used to describe them, rest on basic physical concepts.

That whole physical edifice excludes consciousness. So consciousness cannot be an emergent property of physical systems.

How come then raw chemistry can alter it DRAMATICALLY?


It can't. Mental processes alter it and manifest themselves as conscious experiences that you abstractly model as chemical processes.

You want to talk about the physical mechanisms of consciousness. I am telling you that they exist at the network architectural level. If you argue at any other level then your arguments are irrelevant. They are purely symbolic and only work within your closed little dualistic logic system.


If any such mechanisms actually exist then they must be fleshed out in matter and energy. Agreed?

In which case they exclude consciousness for the reasons given above.

Peter

Re: Consciousness as an emergent property
posted on 03/11/2003 12:50 PM by subtillioN

Very persuasive line of argument. Not.


It was not a “line of argument”.

The reason is that the language in which physics (at *any* level) is expressed is in terms that ultimately have a definition rooted in analytical concepts such as mass and energy. The terms in which facts of consciousness are expressed are not amenable to that kind of definition.


That is your mistaken assumption. The network architecture level is a highly analytical level and this is where the understanding can be found.

They have a different kind of definition, in which they gain meaning by being related (by declaration, association, or any other technique) with an immediate conscious experience.


…only within the closed confusion of your dualistic/monistic contradictory logic system.

So there are two disjoint sets of terms: the physical and the mental. *That* is why mental facts can never follow from physical facts.


“Very persuasive line of argument. Not.” Reality is not dependent on our feeble language attempts to understand her.

The abstract levels of systems that you project onto the physical world do not change that basic, elementary logic. Each level is ultimately defined in terms of lower levels, otherwise it would be free-floating and meaningless. So all physical systems, no matter how abstract the concepts used to describe them, rest on basic physical concepts.


Wrong. Physical systems do not rest on concepts. It is our understanding of physical systems that is conceptual. This is the source of your confusion. You keep getting the conceptual mixed up with the actual.

That whole physical edifice excludes consciousness. So consciousness cannot be an emergent property of physical systems.


Wrong. It is entirely explainable via physical mechanisms. You are simply excluding the entire field of understanding.


subtillioN: How come then raw chemistry can alter it DRAMATICALLY?

Peter: It can't. Mental processes alter it and manifest themselves as conscious experiences that you abstractly model as chemical processes.


E q u i v o c a t i o n!

You are using a dualism to justify your monism and flipping back and forth between the two to suit your argument!

What a joke! So your argument is that since EVERYTHING is mental then psychotropics are just the mind affecting itself? How can you exclude chemistry from physical reality and then make the opposite categorical distinction between physics and the mind? Wouldn’t physics be purely mental as well? If so then why is there a dualism at all? You are using the differences of the language of physics and the language of consciousness to legitimize your dualism and then you are saying that there is no dualism when it comes to chemistry, it is all just mind. You can’t have both! If it is all mind then why would it make any difference to your argument if the mental could emerge from the ‘physical’? Wouldn’t it be the same as the mental emerging from the mental?

If everything is mental then wouldn’t it just make the mental world that much more coherent if there were not also a contradictory dualism within your monism? This dualism seems entirely unnecessary and superfluous to the crux of your argument. You are simply using this convenient equivocation to justify your argument that the mind is not physical. You fail to realize that if physical reality is also mental then this deeper level distinction between the 'physical' and the mental is not a distinction at all.

Mental monism and substance monism are inherently indistinguishable at the metaphysical level. Why is it necessary that there be a dualism in your monism? It seems like a fault in your model because you are using the mental which you conveniently label ‘physics’ temporarily to argue the necessity that ALL is mental. It seems like a counterproductive argument! If ALL is mental then there is no dualism in the first place to somehow justify monism.

This is equivocation. Your monism is flawed!

If any such mechanisms actually exist then they must be fleshed out in matter and energy. Agreed?

In which case they exclude consciousness for the reasons given above.


Those reasons are not valid. They rely on differences in language. Physics is NOT dependent on language.

Re: Consciousness as an emergent property
posted on 03/13/2003 3:10 PM by PeterLloyd

Physical systems do not rest on concepts. It is our understanding of physical systems that is conceptual. This is the source of your confusion. You keep getting the conceptual mixed up with the actual.


This is really the crux of it. You are failing to see that, in the strict sense, in fact they are one and the same.

As there is so much preconception invested in the *physical world*, it is sometimes easier to clarify the issues by reference to virtual reality systems. (Which was why The Matrix was of interest in the first place.)

Imagine that you are in a full-immersion virtual reality such as that depicted in The Matrix. You pick up a seemingly solid object such as a tennis ball and throw it up in the air and observe its parabolic trajectory. Now, you could write down the equation of motion of the tennis ball, that is to say, a mathematical model of the ball; and then claim that the tennis ball is something different from the mathematical model of the tennis ball. And your reason for saying this is that you can feel the roundness, the firmness, and the weight of the ball but you cannot do that with the abstract mathematical model. Now, from within the perspective of the Matrix world, that is true. But in the truer perspective that we get by stepping outside the Matrix world, we can see that the conscious experience of the 'tennis ball' was generated from a mathematical model in the computer.
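
For reference, the equation of motion being gestured at is just the standard projectile formula: with initial position $\mathbf{r}_0$, initial velocity $\mathbf{v}_0$ and gravitational acceleration $g$ acting along $-\hat{\mathbf{z}}$,

$\mathbf{r}(t) = \mathbf{r}_0 + \mathbf{v}_0 t - \tfrac{1}{2} g t^{2} \hat{\mathbf{z}}$,

whose graph is the parabola observed inside the simulation. Those few symbols are all the computer needs in order to generate the ball's trajectory.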

The key point here is that the notion of 'solid matter' is a projection of conscious sensations (of resistance to movement, etc) onto a mathematical model. The actual reality, the true 'solidity', lies in the conscious experience. The entity that this quality of solidity is falsely projected onto, i.e. the physical tennis ball, is only an abstract construct.

This is what the Vedanta calls 'maya' -- conflating an abstraction with the concrete attributes that are projected onto that abstraction.

The same logic applies to our everyday world. The true 'solidity' lies in our conscious experience. But it is falsely projected onto the mathematical model. So you end up with the bizarre notion of two different things: (a) the physical object and (b) the physical model. In fact, there is just one thing -- the physical model, (a) with mental properties projected onto it, or (b) without.

If so then why is there a dualism at all?


There isn't a dualism. The fundamental nature of reality is mental. This is monism. It is not, however, solipsism because (clearly) there is a force or agency outside us that governs natural phenomena. But it is ultimately mental in nature. I refer to it as the 'metamind'.

Chemical processes are driven by mental processes inside the metamind, which creates in your experiential field the observables of chemistry. (Just as, in The Matrix, the computer will generate the imagery that depicts the observable products of a chemical reaction.) In the case of psychotropic agents, they are also manifest in our experiential field via another route, such as hallucination.

It is again an example of maya to suppose that the chemistry can affect the mind. The metamind affects your mind, and it does so via two routes.

If everything is mental then wouldn’t it just make the mental world that much more coherent if there were not also a contradictory dualism within your monism? This dualism seems entirely unnecessary and superfluous to the crux of your argument.


If I had a dualism then you're right, it would be unnecessary and superfluous. But I don't. What makes you think I advocate dualism?

you are using the mental which you conveniently label ‘physics’ temporarily to argue the necessity that ALL is mental.


The laws of physics clearly form a successful model of the structural regularities that are manifest in our perceptual world. Therefore (since the manifest world is generated by the metamind), the metamind contains within it some logic that captures sufficient information to generate a (virtual) perceived world that fully complies with the physical laws.

Peter

Re: Consciousness as an emergent property
posted on 03/13/2003 4:41 PM by subtillioN


subtillioN: Physical systems do not rest on concepts. It is our understanding of physical systems that is conceptual. This is the source of your confusion. You keep getting the conceptual mixed up with the actual.

Peter: This is really the crux of it. You are failing to see that, in the strict sense, in fact they are one and the same.


Only from your mental monism pov. You have shown no reason why the substrate for the monism could not be physical rather than mental. I suspect that it is harder than you think.

Imagine that you are in a full-immersion virtual reality such as that depicted in The Matrix. You pick up a seemingly solid object such as a tennis ball, throw it up in the air, and observe its parabolic trajectory. Now, you could write down the equation of motion of the tennis ball, that is to say, a mathematical model of the ball; and then claim that the tennis ball is something different from the mathematical model of the tennis ball. And your reason for saying this is that you can feel the roundness, the firmness, and the weight of the ball but you cannot do that with the abstract mathematical model. Now, from within the perspective of the Matrix world, that is true. But in the truer perspective that we get by stepping outside the Matrix world, we can see that the conscious experience of the 'tennis ball' was generated from a mathematical model in the computer.


All you have shown is that the physical world can be simulated to a degree (which is basically what consciousness or perception is). There are ALWAYS substrate level limits to such simulations, however, that distinguish them from reality, if you can get the right perspective to see the limitations.

The key point here is that the notion of 'solid matter' is a projection of conscious sensations (of resistance to movement, etc) onto a mathematical model. The actual reality, the true 'solidity', lies in the conscious experience.


Wrong, the ‘true solidity’ lies in the actual solidity. Our perception of it is merely that, a perception. The mental map is NOT the physical territory in the case of the physical world.

The entity that this quality of solidity is falsely projected onto, i.e. the physical tennis ball, is only an abstract construct.



So it is all just a figment of our imagination, or the solipsism of the universe itself? (Which is what mental monism actually is: Universal Solipsism)

It was what the Vedanta calls 'maya' -- conflating an abstraction with the concrete attributes that are projected onto that abstraction.



The mind itself is an abstraction. This doesn’t mean that EVERYTHING else is one too.

The same logic applies to our everyday world. The true 'solidity' lies in our conscious experience. But it is falsely projected onto the mathematical model. So you end up with the bizarre notion of two different things: (a) the physical object and (b) the physical model. In fact, there is just one thing -- the physical model, (a) with mental properties projected onto it, or (b) without.


1. Consciousness is not a mathematical model.
2. The notion of the “two different things” is not so bizarre when it is realized that consciousness is merely a representation of reality. Your “physical model” is better termed “a mental model of a physical object” so as not to confuse the two. ;)
3. I agree with your conclusion that "In fact, there is just one thing -- the physical model, (a) with mental properties projected onto it, or (b) without.", but isn't this contradictory to your mental monism?

subtillioN: If so then why is there a dualism at all?

Peter: There isn't a dualism.


You have been trying to illustrate all along the fundamental incompatibility between the mental and the physical descriptions of reality. This is a dualism, but as I said in my last post, the purpose of this quasi-dualism was to push physical reality out of the picture. What you really end up doing is pushing objective reality itself out of the picture, and thus you end up with solipsism, which, on a universal scale, you are OK with.

The fundamental nature of reality is mental. This is monism. It is not, however, solipsism because (clearly) there is a force or agency outside us that governs natural phenomena. But it is ultimately mental in nature. I refer to it as the 'metamind'.


So what is the agency outside the metamind? ;) Mental monism is a kind of solipsism, as is substance monism to an extent. The only way around it is to assume no outside whatsoever, i.e. a universe of infinite extent.

Chemical processes are driven by mental processes inside the metamind


It is important for you to realize that your “metamind” and my “physical reality” are entirely indistinguishable. We simply have different words for the same ‘objective reality’. The fact is that the properties of ‘objective’ reality can influence ‘subjective reality’. There is an intimate link. This was in response to the upswing of your equivocated stance that physical and mental reality are fundamentally incompatible (? at the non-fundamental language level?) and thus not inter-derivable. It seems that you are on the down-swing of this stance now. So I will just try to debate your shifting position the best I can.

…the metamind, which creates in your experiential field the observables of chemistry.


Again my monism is entirely indistinguishable from your monism, so this is simply saying to me that physical reality provides the sensation that is observed as chemistry.

(Just as, in The Matrix, the computer will generate the imagery that depicts the observable products of a chemical reaction.) In the case of psychotropic agents, they are also manifest in our experiential field via another route, such as hallucination.


The Matrix is a simulation, not reality. The burden is on you to prove that objective reality itself is a simulation.

subtillioN: If everything is mental then wouldn’t it just make the mental world that much more coherent if there were not also a contradictory dualism within your monism? This dualism seems entirely unnecessary and superfluous to the crux of your argument.

Peter: If I had a dualism then you're right, it would be unnecessary and superfluous. But I don't. What makes you think I advocate dualism?


Well you had a dualism when it was convenient to your argument. ;) equiv… well you get the picture.

You said for instance: ”That whole physical edifice excludes consciousness. So consciousness cannot be emergent property from the physical systems.”

You also said: “So there are two disjoint sets of terms: the physical and the mental. *That* is why mental facts can never follow from physical facts.” In this case you took a HUGE leap from describing sets of terms, to conclusions about sets of facts. In any case my argument is this:

If there is no duality and it is all mental then your whole argument that “consciousness cannot be emergent property from the physical systems” is moot. There never was a ‘physical edifice’ to begin with to differentiate between the two. So ultimately your argument didn’t have a leg to stand on, because it rested (strangely enough) on this difference.

However, as I said, the whole point of your quasi-dualism was to exclude physical reality itself, but as I also said, what really happens is that it excludes OBJECTIVE reality, because that is where the crack (which you have driven your language-level temporary wedge through) can actually be found. For instance, is it not true that EVERYTHING outside of yourself consists of objective reality which can be defined in physical terms? Is it not impossible to confirm that my qualia are exactly the same as yours? Can you even prove that I actually have qualia, or am I simply programmed with the proper responses to your questions? This is the rift that you are actually attempting to exploit, because even the brain (which is all you can see of anyone else's mind) is part of the objective 'physical edifice'. Thus, driving that wedge and pushing out of the picture the entire physical edifice also pushes away objective reality itself. This results in solipsism.



subtillioN: you are using the mental which you conveniently label ‘physics’ temporarily to argue the necessity that ALL is mental.

Peter: The laws of physics clearly form a successful model of the structural regularities that are manifest in our perceptual world. Therefore (since the manifest world is generated by the metamind), the metamind contains within it some logic that captures sufficient information to generate a (virtual) perceived world that fully complies with the physical laws.


So what happened to the fundamental, incommensurate difference between physical and mental reality which prohibits the derivability of mental from physical reality? Oh yeah, it was just a language game!


Peter's monism/dualism equivocation
posted on 03/11/2003 3:29 PM by subtillioN


So there are two disjoint sets of terms: the physical and the mental. *That* is why mental facts can never follow from physical facts.


You failed to realize that this argument for dualism (wrong as it is) is also an argument against mental monism because if the two are incompatible and physics cannot derive consciousness then the reverse is true! Consciousness, likewise cannot derive physics!!!

If you truly understood mental monism you would know that it is not dependent on mind/body dualism. In fact dualism is in direct and obvious contradiction with monism of any kind.

monism/dualism the crucial fixation
posted on 03/12/2003 12:31 AM by subtillioN


Peter,
This dualism is the problem at the core of your argument. Mental monism, if it is a true monism with no internal contradictions or dualities, is entirely unassailable within the hermetically sealed chamber of its logic system, and yet at the same time it is entirely unverifiable. There is no need to assert the fundamental incompatibility between physical reality and the mind, because in any monism they are both fundamentally the same thing. I can see why you would do this however, to prove that the physical must be mental. Yet, to postulate a dualism within a monism ultimately serves the purpose of the exclusion of one or the other of these split halves. Because of cogito ergo sum, ultimately this split forms a rift between subjective and objective reality. The argument then is that either subjective or objective reality is an illusion, since they are fundamentally non-derivable one from the other. The problem is that it is entirely unprovable which one is the illusion. Since to admit that the objective world is an illusion is to admit solipsism, then our choice seems clear: The mind is what must be the illusion!

Since, however, your argument rests entirely on the language differences between the two domains, and from experience we know that physical and mental reality are fundamental to language itself, then the argument is ineffective in the first place. It is at the wrong level of functionality. This doesn’t prove the newly derived conclusion wrong, however. It merely shows the superficiality of the rationale of the argument.


subtillioN

Re: Peter's monism/dualism equivocation
posted on 03/13/2003 3:24 PM by PeterLloyd


Peter: So there are two disjoint sets of terms: the physical and the mental. *That* is why mental facts can never follow from physical facts.


subtillioN: If you truly understood mental monism you would know that it is not dependent on mind/body dualism. In fact dualism is in direct and obvious contradiction with monism of any kind.


Maybe you're confusing the distinction between the mental and physical language-games with the distinction between the mental and physical worlds.

The former is what I mentioned, and used in my argument. It is a very plain, empirical distinction: there are physical terms such as 'proton' and 'meson' and the language of physics that encompasses such terms; and there are mental terms such as 'red' and the language of mental phenomenology.

The second distinction is a different kettle of fish. For the mental world is a real, actually existing world, whereas the physical world is a fiction.

This is not dualism. It is monism.

Peter

Re: Peter's monism/dualism equivocation
posted on 03/13/2003 3:49 PM by subtillioN


It is a very plain, empirical distinction: there are physical terms such as 'proton' and 'meson' and the language of physics that encompasses such terms; and there are mental terms such as 'red' and the language of mental phenomenology.


Ok, so you have admitted its superficiality, which was my whole point all along. It doesn't deal with the fundamental level, yet you use it as if it proves that the fundamental level of physical reality cannot derive the fundamental level of mental reality. It doesn't; it is, as you say, a "language game".

Re: Peter's monism/dualism equivocation
posted on 03/16/2003 2:55 AM by PeterLloyd


What makes you think that physics is *not* a language-game?

What makes you think that the 'fundamental physical level' is anything but metaphysical hokum?

I hope you're not going to reply with a Johnsonian response, by thumping your computer screen and saying, "But I can *feel* the physical reality!" (Thereby conflating a conscious tactile perception with the supposed fundamental physical reality.)

I really would recommend that you read Berkeley's Dialogues, where precisely the same questions were posed and answered three hundred years ago.

Peter

Re: Peter's monism/dualism equivocation
posted on 03/16/2003 4:35 AM by subtillioN


What makes you think that physics is *not* a language-game?


Don't you get it yet? I have never said that PHYSICS isn't a language game. I said that PHYSICAL REALITY isn't. THERE IS A HUGE DIFFERENCE. Because of your inability to see this difference, this argument is going nowhere. You keep arguing against things that I don't say because you don't know the difference.

What makes you think that the 'fundamental physical level' is anything but metaphysical hokum?


I believe my senses, that is all. You think it is all an illusion, but you can't prove it.

I really would recommend that you read Berkeley's Dialogues, where precisely the same questions were posed and answered three hundred years ago.


I have been there and it is total solipsistic nonsense. Every philosophical child goes through a solipsism stage. You just happened to get stuck there.

It was fun for a while, but I am bored with this impasse.

Talk to ya later.

Re: Peter's monism/dualism equivocation
posted on 03/16/2003 9:21 AM by PeterLloyd


I have never said that PHYSICS isn't a language game. I said that PHYSICAL REALITY isn't. THERE IS A HUGE DIFFERENCE.


You've been stating this claim for some time, and now you're shouting it, but what you haven't done is provide any rational grounds for the claim.

You've been claiming the existence of this mysterious thing called 'physical reality', which is neither the physical model nor our conscious experience. So what is it? Where is it? What makes you think there is any such thing? Show us some of this 'physical reality'. At least show us some evidence for it. Or even just some rational argument for thinking that there might be such a thing as 'physical reality'.

And when you've done *that* there is the further task of saying why anyone should care about something so elusive and mysterious as 'physical reality'. Certainly the vast edifice of physics is not interested in such a thing. Physics has been getting on very successfully for three centuries with *physical models*, without needing to make any reference to 'physical reality'. Physics does a fine job with its formalisms and equations, and no physicist has ever felt it necessary to introduce a term for 'physical reality'. Whether you take F=MA or E=MC^2, the terms denote elements in the model. Show me one single equation in physics where there's a term denoting 'physical reality'. I don't believe there is one. So, apart from any other defect, the concept of 'physical reality' has no explanatory power whatsoever.

The sooner we clear the decks of such metaphysical fictions as 'physical reality', the sooner we can get a clearer view of the genuine *reality*, which is mental.

Peter

Re: Peter's monism/dualism equivocation
posted on 03/16/2003 12:47 PM by subtillioN


All you are doing is calling it ‘mental reality’. Your only reasoning is that no one can prove to you that it is ultimately ‘physical’. You can ALWAYS posit a deeper layer and then call it whatever you want. ‘Mental’ and ‘physical’ are merely labels. Every argument you have given to try and prove that it is ‘mental’ has failed because it is impossible to prove that which is beyond the sensory realm. I call it ‘physical’ you call it ‘mental’. Ultimately it makes no difference. The objective world still acts according to ‘physical law’.

Big deal…

Re: Peter's monism/dualism equivocation
posted on 03/17/2003 3:07 AM by PeterLloyd


You can ALWAYS posit a deeper layer and then call it whatever you want.


But that's just creating a story, concocting a fiction.

You're supposing that there is some 'unknown somewhat' (to use Berkeley's words) or 'thing in itself' (to use Kant's) that is forever outside the realm of both (a) direct conscious experience, and (b) the explanatory framework of science, which is a topic-neutral system of laws and equations.

What does it mean to say that this mysterious and unknowable metaphysical substance 'might be physical'? Surely all that that means is that it obeys the laws of physics. But that is just a topic-neutral statement of the observed regularities of the observed world. So, you're not actually saying anything at all.

You are claiming to be saying something about the nature of this supposed fundamental reality. But all you keep coming back to is that it complies with the laws of physics. OK, fine, we already know that. I thought you were trying to tell us something about its nature, or what its nature might be, or what its nature might be conceived to be. Instead we have nothing at all about its nature. We have no concept whatsoever of this supposed 'fundamental reality' that is denoted by physical terms. And the reason we do not and cannot is because they are formal constructs.

(I sometimes wonder whether discussions about chess have endless debates about what the fundamental nature of the Pawn or the King is. "Yes, I know the Pawn is defined by these movements, but what is it *really*? What is the fundamental nature of the Pawn?" What is it about physics that leads people to think that the formalism could ever refer to an occult fundamental reality?)

Peter




Re: Peter's monism/dualism equivocation
posted on 03/17/2003 10:36 AM by subtillioN


But that's just creating a story, concocting a fiction.


My point exactly

You're supposing that there is some 'unknown somewhat' (to use Berkeley's words) or 'thing in itself' (to use Kant's) that is forever outside the realm of both (a) direct conscious experience, and (b) the explanatory framework of science, which is a topic-neutral system of laws and equations.


I am not postulating anything below that which can be seen directly. I am actually saying that fundamental reality is exactly as it appears to us, physical. To posit any deeper, non-observable mental properties is to concoct a fiction.

Re: Consciousness as an emergent property
posted on 03/10/2003 6:51 PM by sushi101


Ahh Peter, but you seem to be trapped in the idea that it all needs to be predictable down to the very last detail in order to work, which simply isn't the case.

I can predict that, given the right circumstances and the correct structure, an airplane is going to fly from A to B, but I can't predict the exact details of how the atoms are going to hit it, its exact route, etc., and the beauty is that I don't need to.

It's called critical mass, and there are many things in this world we know how to catalyze and even predict the outcome of, but that doesn't mean that we know exactly all the detail in between.

So imagine that the first cells were extremely simple feedback systems that for some reason required an extreme amount of space. But slowly these feedback systems started to process information (matter) and made it a tiny bit more structured each time. Today you have the result of a billion trillion simple feedback systems, aware matter if you like, that feed off other simple feedback systems. Now the aware matter (you and me) has today reached a certain level that, for example, animals don't have. But they are also aware to some extent. So my question is whether you think it is possible to be more aware than humans are, just as humans are more aware than animals are?

And yes, Subtillion is right, what I say is that consciousness arises _from_ matter; it is not separate from matter and not purely holostic (out goes dualism)

Re: Consciousness as an emergent property
posted on 03/11/2003 3:45 AM by PeterLloyd


Now the aware matter (you and me) has today reached a certain level that, for example, animals don't have. But they are also aware to some extent. So my question is whether you think it is possible to be more aware than humans are, just as humans are more aware than animals are?


Please define your metric for ordering multi-faceted, multi-dimensional minds.

An eagle has higher-resolution visual consciousness than a human. A bat has a much more richly structured audio consciousness. A cheetah has faster motor control. A bloodhound has richer olfactory consciousness. A human is better at playing chess.

Which entity is 'more aware' depends on the choice of ranking.

And yes, Subtillion is right, what I say is that consciousness arises _from_ matter; it is not separate from matter


Seems like wishful thinking to me, for the reasons stated in my response to subtillioN.

... and not purely holostic (out goes dualism)


Even if I assume that 'holostic' is a typo for 'holistic' rather than 'holographic' (or even 'holograffiti'), I'm still not clear what this bit means.

Peter

Re: Consciousness as an emergent property
posted on 03/15/2003 2:31 AM by sushi101


Please define your metric for ordering multi-faceted, multi-dimensional minds.

A cell is a very simple feedback loop that is "aware" of (i.e. reacts to) certain other feedback loops.

A worm is "aware" of its cells' awareness and reacts as allowed by these cells.

A lion is aware of its senses and reacts to this awareness.

A human is aware of its awareness of these senses and reacts to it.

A higher awareness is something/someone that is aware of its awareness of being aware of its awareness, and reacts accordingly.

Awareness, and higher/lower, is measured by how much of any given environment a feedback system, simple or complex, is aware of (see the sketch below).
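
A minimal editorial sketch (not part of the original post) of this ladder of nested feedback systems, in which each level reacts only to the state of the level below it. The class name, thresholds and environment values are assumptions made up purely for illustration.

class FeedbackSystem:
    """A toy feedback loop that is 'aware' of (reacts to) one signal source."""

    def __init__(self, name, observed=None, threshold=0.5):
        self.name = name
        self.observed = observed      # the lower-level system this one observes, if any
        self.threshold = threshold
        self.state = 0.0

    def react(self, environment):
        # The lowest level reacts to the raw environment; higher levels
        # react to the state of whatever they observe.
        signal = environment if self.observed is None else self.observed.state
        self.state = 1.0 if signal > self.threshold else 0.0
        return self.state

if __name__ == "__main__":
    cell = FeedbackSystem("cell")
    worm = FeedbackSystem("worm", observed=cell)
    lion = FeedbackSystem("lion", observed=worm)
    human = FeedbackSystem("human", observed=lion)
    for env in (0.2, 0.8):
        for system in (cell, worm, lion, human):   # propagate "awareness" upward
            print(f"env={env}  {system.name}: {system.react(env)}")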

Even if I assume that 'holostic' is a typo for 'holistic' rather than 'holographic' (or even 'holograffiti'), I'm still not clear what this bit means.



The point of this is to explain to you that in order to describe awareness we can't ONLY look at matter; we have to take into account that _time_, in whatever form we choose to describe it, has a role in consciousness.

A feedback loop indicates more states than one; your object, so to speak (if you are to look at it in terms of OOP computer syntax), is only an object as long as it reacts to the inputs that it gets and the outputs that it gives.

So a working computer program is not just bits but also time, and time is the momentum that keeps the illusion going. Time is not a direct property of bits; it is separate from the matter, but through variations in state over time the illusion of a world is created.

In other words, consciousness arises over time through matter; it is neither pure matter nor separate from matter (not holistic)

Dualism does not exist in this view and that is for the better.

Seems like wishful thinking to me, for the reasons stated in my response to subtillioN.


I have no agenda with this discussion other than trying to look at it from an analytical point of view. I have no thesis other than the above, roughly stated. What on earth does that have to do with wishful thinking? I thought the point here is to try and get closer to what consciousness is. Since you don't understand my idea of matter and holistics forming consciousness, it indicates to me that you have only been reading up on one side of this discussion.



Re: Consciousness as an emergent property
posted on 03/15/2003 3:50 AM by PeterLloyd


So a working computer program is not just bits but also time, and time is the momentum that keeps the illusion going. Time is not a direct property of bits; it is separate from the matter, but through variations in state over time the illusion of a world is created.


Yes, of course time is needed to run software (or any process). But I don't see why you think that that has any impact on the general argument that physical systems cannot produce consciousness.

Please explain how the introduction of time changes the nature of how physical terms possess meaning.

As far as I can see, physical terms still have analytical definitions, irrespective of whether time is included or not. The definition still rests on basic abstractions such as mass, space, charge, physical time -- none of which are amenable to private ostensive definition because they do not occur in conscious awareness.

Peter


Re: Consciousness as an emergent property
posted on 03/15/2003 5:34 AM by subtillioN


As far as I can see, physical terms still have analytical definitions


terms schmerms

More language games? The roots of consciousness and physical reality are deeper than the sphere of language.

Re: Consciousness as an emergent property
posted on 03/15/2003 7:37 AM by PeterLloyd


The roots of consciousness and physical reality are deeper than the sphere of language.


Oh? What makes you think that?

Peter

Re: Consciousness as an emergent property
posted on 03/15/2003 12:52 PM by subtillioN


Oh? What makes you think that?


Because they exist pre-language. Every child has a brain before it has consciousness.

It is common knowledge that physical reality is not dependent on language. The burden of proof is on you to try and change common knowledge. So far there is no evidence that language is fundamental to physical reality or to the roots (i.e. mechanisms) of consciousness.

What makes you think that language is fundamental to reality?

Re: Consciousness as an emergent property
posted on 03/15/2003 2:01 PM by sushi101


and on top of that, one has to wonder how a deaf and blind person can be conscious if they can't either hear or see.

Re: Consciousness as an emergent property
posted on 03/15/2003 3:56 PM by grantcc


Although you can no longer ask Helen Keller, there are enough books, movies and plays available to get an idea about it.


Helen moved on to the Cambridge School for Young Ladies in 1896 and in the Autumn of 1900 entered Radcliffe College, becoming the first deafblind person to have ever enrolled at an institution of higher learning.

Life at Radcliffe was very difficult for Helen and Anne, and the huge amount of work involved led to deterioration in Anne's eyesight. During their time at the College Helen began to write about her life. She would write the story both in braille and on a normal typewriter. It was at this time that Helen and Anne met with John Albert Macy who was to help edit Helen's first book "The Story of My Life" which was published in 1903 and although it sold poorly at first it has since become a classic.

On 28 June 1904 Helen graduated from Radcliffe College, becoming the first deafblind person to earn a Bachelor of Arts degree.



Re: Consciousness as an emergent property
posted on 03/16/2003 3:07 AM by sushi101


Yes I know of Helen, but that just proves the point that language is a very, no VERY abstract term.

My point is that simple feedback systems actually _are_ storytellers (language). There really isn't any distinct difference other than the obvious complexity and the form. It's all input-output (feedback systems); consciousness (our kind of) arises from how well and how we store our processed information.

Re: Consciousness as an emergent property
posted on 03/15/2003 4:34 PM by grantcc


What makes you think that language is fundamental to reality?


It's not reality that language is fundamental to, it's our perception of reality, which is influenced by what others say about it. What we call a thing is what we perceive it to be. In Australia last month a group of people saw a fence post as a manifestation of the Virgin Mary. A man standing nearby who tried to tell them it was just a fence post was not believed. At the time, too many other people were seeing the Virgin.

Grant

Re: Consciousness as an emergent property
posted on 03/15/2003 4:43 PM by grantcc


January 31, 2003

Virgin Mary image appears on a fence post. Religious-types freak out.

Seems the folks down in Australia are getting a little of the old holy-image-appears-on-random-object-Christians-start-acting-like-total-whack-jobs fun of their own as of late. So says The Sydney Morning Herald:

Hundreds of believers flocked to the Coogee Beach headland yesterday to witness what they say is an apparition of the Virgin Mary.
Scores more hiked up the cliff path to touch, kiss and pray to the post which over the past few days has been transformed into something of a shrine, with pictures of the virgin, rosary beads and flowers piled around the white-washed fence.

Some wept, others sang, most prayed. As the sunlight reflected off a crook in the fence throughout the afternoon, hundreds claimed they could discern the shape of a veiled figure, and most agreed it was “Our Lady”.


Re: Consciousness as an emergent property
posted on 03/15/2003 4:53 PM by subtillioN


It's not reality that language is fundamental to, it's our perception of reality


Perception is fundamental to language because even animals have perception. Language can, however, influence perception.

Peter's argument relies on differences between language descriptions of objective and subjective reality. He claims that these language differences apply to fundamental reality itself. My claim is simply that what applies to language does not necessarily apply to physical reality, because physical reality is fundamental to language, not the inverse.

Re: Consciousness as an emergent property
posted on 03/15/2003 5:00 PM by grantcc


I think we just said the same thing.

Grant

Re: Consciousness as an emergent property
posted on 03/15/2003 5:55 PM by subtillioN


I think we just said the same thing.


I misunderstood what you said then I guess. sorry... There is now a redundant rendering of the same concept(s). oh well...

Re: Consciousness as an emergent property
posted on 03/15/2003 7:13 PM by PeterLloyd


subtillioN: The roots of consciousness and physical reality are deeper than the sphere of language.

Peter: Oh? What makes you think that?

subtillioN: Because they exist pre-language.


Conscious experience obviously exists pre-language. The constructs of physics do not.

The only way I can think of to make sense of what you're saying is to suppose that you're confusing the constructs of physics (elementary particles and all assemblages thereof) with patterns of conscious sensation. When I pick up a tennis ball, I encounter a regular pattern of visual and tactile sensations. Those sensations and that pattern obviously exist pre-language. But the abstract construct of the assemblage of atoms that we use as a model for that sensory pattern does not exist pre-language.

I guess you're going to say that, although the model of the physical ball does not exist pre-language, the 'actual' physical ball does. But if you say that then you've left the orbits of both (a) science, which talks only of the model, and (b) direct experience, which is only of the sensations.

You may claim that there is an 'actual' physical ball existing pre-language, but then you would have to say what this is supposed to mean. It's outside physics and it's outside direct conscious experience.

Every child has a brain before it has consciousness.


The physical organism has a physical brain before that organism has a mind. But that organism and brain exist only as constructs in the minds of observers, who (before the child is born and acquires a mind) are other people, such as the mother.

It is common knowledge that physical reality is not dependent on language.


It is not common knowledge but a common philosophical myth. You will find no support for this myth in physics, which necessarily rests on language for its existence. Nor will you find any support in everyday conscious experience, which does not include any direct experience of physical things.

What makes you think that language is fundamental to reality?


It's not fundamental to reality, it's fundamental to physics. Show me one equation of physics that can exist without language.

Peter

Re: Consciousness as an emergent property
posted on 03/15/2003 9:37 PM by subtillioN


Conscious experience obviously exists pre-language. The constructs of physics do not.


I didn’t say “The constructs of physics” exist pre-language. I said PHYSICAL REALITY does. Can we not extricate ourselves from our own words? We are getting nowhere.

The only way I can think of to make sense of what you're saying is to suppose that you're confusing the constructs of physics (elementary particles and all assemblages thereof) with patterns of conscious sensation. When I pick up a tennis ball, I encounter a regular pattern of visual and tactile sensations. Those sensations and that pattern obviously exist pre-language. But the abstract construct of the assemblage of atoms that we use as a model for that sensory pattern does not exist pre-language.


Just because we can't see the details of physical reality doesn't mean that they are mere language constructs. Do you believe that the entire world turns into a language construct just because you close your eyes? Stupid question… you probably do!

I guess you're going to say that, although the model of the physical ball does not exist pre-language, the 'actual' physical ball does. But if you say that then you've left the orbits of both (a) science, which talks only of the model, and (b) direct experience, which is only of the sensations.


Science IS a model which attempts to explain physical reality. It does contain internal criticisms and references, but its whole point is to talk about physical reality not about mere language constructs. We will leave that for semiotics.

You may claim that there is an 'actual' physical ball existing pre-language, but then you would have to say what this is supposed to mean. It's outside physics and it's outside direct conscious experience.


We directly see the ball so you are wrong: it is neither outside of physics nor consciousness. Once you understand that the act of seeing and consciousness itself is necessarily representational then you can understand the limits and abilities of perception. There is no ultimate, unbridgeable gap between perception and the real world. EVERYTHING we see is really there in some sense. We just have to represent it so that it makes sense to our minds.

The physical organism has a physical brain before that organism has a mind. But that organism and brain exist only as constructs in the minds of observers…


That is your assumption that the only reality ascribed to physical reality is that it is a ‘construct’, but we can observe it without language, so it is not a language construct. You claim it is a construct of the mind, but you can give no proof.


It is common knowledge that physical reality is not dependent on language.

It is not common knowledge but a common philosophical myth. You will find no support for this myth in physics, which necessarily rests on language for its existence.


No physicist would say that PHYSICAL REALITY is dependent on language. He may say that PHYSICS is, but I was not talking about physics. I was talking about physical reality. They are certainly not the same thing.

Nor will you find any support in everyday conscious experience, which does not include any direct experience of physical things.


I find all kinds of support because physical reality is what existed before language arrived upon the scene. I have never seen an object that is dependent in any way on language (except for objects such as books or something, for which the dependency is superficial).

What makes you think that language is fundamental to reality?

It's not fundamental to reality, it's fundamental to physics. Show me one equation of physics that can exist without language.


Physical reality is not physics. Physics is simply our model of physical reality. Can you not see the obvious difference?


Re: Consciousness as an emergent property
posted on 03/15/2003 7:37 AM by sushi101


But I don't see why you think that that has any impact on the general argument that physical systems cannot produce consciousness.


No you don't, and that is why you are wrong. It actually does have an impact; you just take it for granted.

When you create an application you don't create the system per se, you create tons of little feedback systems that are all "aware" of other feedback systems within the given environment.

Right now they are primarily separate from our world, but lots of new inventions are showing us that a bit-system actually can send and receive information to and from a bio-system.

Every "naive feedback system", be it a network, a soccer team or an ant colony, creates a system that is more than and different from its parts. What ensures its success is its awareness of its environment. We humans as a species are more and differently aware today than we were 5000 years ago. Not because our IQ has necessarily gone up, but because we have created "naive feedback systems" to help us reach higher and higher. It was once thought impossible to replace a heart, not just because of technological difficulties but also because the heart was thought to hold our soul. That is not to say that anything goes, but it is to say that things are far less mysterious yet far more complex than we normally seem to think.

The definition still rests on basic abstractions such as mass, space, charge, physical time -- none of which are amenable to private ostensive definition because they do not occur in conscious awareness.


Yet they are part of the environment that the "naive feedback system" exists within.

If you look at music, for instance, the reason why musicians seem to play with better feeling than computers is that humans are "better" at making errors. It is all the little errors that the guitarist makes that make up his style, not what he chooses to play. The errors are measured against the perfect beat, which computers come much closer to achieving, yet computer music so far can't replicate the human feel. Not because humans are better at playing against a beat, but actually because humans are worse. But in order for all this to make sense, in order for any music ever to get played, we need a continuum of time; there won't be any song, any composition, if there is no state afterwards. This need humans and computers share in order to perform. What they don't share is the ability to play a "perfect beat": computers will always be able to do that, but with time they will also be better at playing with errors (and partly already are).
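
As an editorial aside (not part of the original post), here is a minimal sketch of the idea that the "human feel" lives in small timing errors around a perfect grid: a metronomically exact beat is perturbed by a little random jitter. The 120 BPM tempo and the 10 ms jitter figure are arbitrary assumptions for illustration.

import random

def humanize_beat(num_beats=8, bpm=120.0, jitter_ms=10.0, seed=42):
    """Return (machine_time, human_time) pairs for each beat, in seconds."""
    random.seed(seed)
    interval = 60.0 / bpm            # seconds between perfectly spaced beats
    pairs = []
    for i in range(num_beats):
        machine = i * interval                                   # the computer's exact grid
        human = machine + random.gauss(0.0, jitter_ms / 1000.0)  # small human-style error
        pairs.append((machine, human))
    return pairs

if __name__ == "__main__":
    for machine, human in humanize_beat():
        print(f"grid {machine:5.3f}s   played {human:5.3f}s   error {human - machine:+.3f}s")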

So all the little details that we do, and that we think make us superior and give us better style, are errors, not choices. This to me indicates that the idea of a free-will system is largely due to the strong id, which helps us be aware of the others, and not due to some concept of localized existence. In reality (whatever that term means) we are a large system of "naive feedback loops" reacting to each other's inputs/outputs. The point is not to think of AI as something that is separate from us, as if we made a system and pushed a button. The point is to understand that whatever AI arises, it is going to be from us, not separate.

Re: Consciousness as an emergent property
posted on 03/15/2003 6:31 PM by PeterLloyd


This is all about functions, computations, and artificial intelligence. Fine, I agree with all that. But you said nothing at all about consciousness.

Peter

The conscious mind is like a cup of coffee
posted on 03/15/2003 10:45 PM by claireatcthisspace


The Cup Of Coffee Theory Of Consciousness and Just About Everything Else (even AI).


Key words: cup, coffee, everything, I am thirsty, put the kettle on.

Pre-bit: we realise that, at that bit just before the end of time, everything we understand in our universe boils down to a cup of coffee really.

Part 1)

When you make a cup of coffee you have:

1) Powdered Coffee (or beans if you want to be really kewl)
2) Cup
3) Water
4) Some heat
5) Maybe some milk
6) Do you take sugar?
7) Not a parasol too



When you put these together in a certain way you get a cup of coffee, but you might not be able to conceive of the cup of coffee beforehand (I was going to refer to this by using only its initials for "cup of coffee" but I decided not to) because you have not really grasped what the emergent property is, yet.

Part 2)

When finally making the cup of coffee it becomes apparent that in order for the cup of coffee to be just that, you need all those things on the list to gel together in a certain way just to get it.


Conclusion:

The cup of coffee produces a taste and experience that is the result of all the ingredients listed. We spend a lot of our time talking about the list and what it can do and how it weirdly directly relates to the cup of coffee, but we also know that the separate ingredients in the list are not the same as each other, like water isn't really coffee and understanding a particle's velocity isn't the same as understanding its position, and neither is the cup anything like the parasol, but for some strange fantastic metaphysical reason they all add up to this lovely cup of coffee or the particle. Consciousness is the cup of coffee, so it might be reasonable to simply say that we are only one construct of the cup of coffee that keeps it there in reality. Because we are like the other constructs/ingredients that are not alike, we cannot really conceive of the other ingredients unless we make the cup of coffee later in order to do so, but then again neither did the observer until it made the universe, but in this case the observer used gravy granules by accident.


Claire


Re: The conscious mind is like a cup of coffee
posted on 03/16/2003 2:41 AM by PeterLloyd


The Black & White Theory of Colour Printing

You may not know this but they take emergence very seriously in Mynyddislwyn. There is a printing press there, Dafydd ApLlwyd's Press, which has one of the finest monochrome presses in Wales. They produce astonishingly elaborate prints of engravings.

Ever since theories of emergence came into vogue, they have been convinced that if they get the right complexity into the black-and-white illustration, then colour will emerge.

Individuals who suggest that this is a fallacious line of argument, and that they will never get colour printing until they put colour ink into the printing process, have been sacked.

Their research program has been expanding in several directions. They have been printing lines of black ink with ever-decreasing fineness. You need a microscope to see some of their finest lines. So far, this has failed. The thin lines of black ink still look black.

They also tried printing on huge sheets of paper, 50 feet by 50 feet. That didn't work either.

Then they tried three-dimensional printing: with a thousand layers of acetates, each with an illustration printed in black ink onto the transparent medium. Still no colour.

This has not daunted them. They religiously uphold the Doctrine of True Emergentism: if you have a complex system S on the one hand, and a quality Q that does not exist in S, then all you have to do is arrange S in the right configuration, and Q will emerge.

Personally, I favour the theory that the coloured ink needs to be introduced as a basic ingredient of the printing process. Otherwise, all you get out is black-and-white illustrations.

Which means that I am not a True Believer in emergentism. Of course, I can see that there are cases of emergence elsewhere. Thermodynamics is my favourite example because it's so simple. But the funny thing is that in all these examples of emergence, the emerging quality always belongs to the same basic space of concepts. For example, in John Conway's cellular automaton 'The Game of Life', astonishingly sophisticated functions can emerge from random permutations. Yet, I am not aware that anyone has managed to get whiskey to emerge.
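
For readers unfamiliar with the example, here is a minimal editorial sketch (not part of the original post) of one update step of Conway's Game of Life; the glider pattern and the coordinate-set representation are choices made only for illustration.

from collections import Counter

def step(live_cells):
    """Apply Conway's rules to a set of (x, y) live-cell coordinates."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has 3 live neighbours,
    # or 2 live neighbours and is alive now.
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a pattern that 'walks' across the grid
    cells = glider
    for generation in range(4):
        print(f"generation {generation}: {sorted(cells)}")
        cells = step(cells)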

Peter

Re: The conscious mind is like a cup of coffee
posted on 03/16/2003 6:47 PM by claireatcthisspace


The emergent property of colour in printing with black and white might need one more contribution to its emergence that could have been overlooked. This property could be the cortical integration of the neural network system in the brain. Or this effect could be induced by drugs, without the integration, but still create emergence. The other way that could create an emergent effect is to see what animals or insects like flies can be aware of when looking at black and white illustrations of colour. The last possible emergent effect could be something to do with some humans' ability to perceive colour within black and white designs and illustrations and even text because they are synaesthetically aware, and if we could probe their particular brain physiology, we might see something that resembles the emergent effect that I referred to in the first example.


Claire

Re: Consciousness as an emergent property
posted on 03/10/2003 12:19 PM by subtillioN


sushi101 said this:

Of course it can't, but it arises _from_ matter; it is not separate from matter and it is not purely holistic.

It exists between time and matter, a causal feedback loop with an ability to do pattern recognition over time (states).


Notice that he didn't say that consciousness arises at the level of physics. He said that causal feedback loops are involved. This suggests directly to me that he is referring to the network architecture level.

He is simply saying that there is a direct (albeit highly complex) physical causal link between ALL levels. If you knew the proposed network architecture level explanations then you would know that this is possible.

I don't see what the difficulty is. Illusions happen all the time in nature. Why is it so confusing when we ourSELVES ARE this illusion?

Re: Consciousness as an emergent property
posted on 03/10/2003 12:49 PM by subtillioN


I don't see what the difficulty is. Illusions happen all the time in nature. Why is it so confusing when we ourSELVES ARE this illusion?


I answered my own rhetorical question. The entire problem is that consciousness is a simplistic abstraction of the world. This is why it is so difficult to form a conceptual understanding of the MUCH more complex mechanisms involved.

The mind is the very tip top of a HUGE pyramid of cause and effect. It is much too small to fit the entire causal pyramid within its grasp.

Re: Consciousness as an emergent property
posted on 03/11/2003 10:56 AM by PeterLloyd


Notice that he didn't say that consciousness arises at the level of physics. He said that causal feedback loops are involved. This suggests directly to me that he is referring to the network architecture level.

He is simply saying that there is a direct (albeit highly complex) physical causal link between ALL levels.


Entities and processes at the architecturally higher system levels don't exist out there in the world. They are artefacts of our description of the world, which are ultimately wholly reducible to the lower levels.

For example, you talk about feedback loops. A feedback loop is not a self-existing something floating around out there. It is an abstraction that describes how its components relate.

As a software designer, I deal with different architectural levels every day, but I know that anything that doesn't ultimately cash out in bytes is just vapour-ware.
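
A minimal editorial sketch (not part of the original post) of what "cashing out in bytes" can look like: a small high-level record is packed down to a concrete byte string and recovered from it. The record's fields and binary layout are arbitrary assumptions made for illustration.

import struct

# A notional high-level description of a feedback loop: (setpoint, gain, iterations).
loop_description = (20.0, 0.75, 3)

# Pack it into a fixed binary layout: two 8-byte floats and a 4-byte int.
raw = struct.pack("<ddi", *loop_description)
print(len(raw), "bytes:", raw.hex())

# Unpacking the same bytes recovers the higher-level description.
print(struct.unpack("<ddi", raw))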

Peter

Re: Consciousness as an emergent property
posted on 03/11/2003 11:28 AM by subtillioN


Entities and processes at the architecturally higher system levels don't exist out there in the world. They are artefacts of our description of the world, which are ultimately wholly reducible to the lower levels.


They do exist.

A feedback loop is not a self-existing something floating around out there.


Who said it was? Feedback loops exist and they don't necessarily "float around out there". If you are really claiming that they don't exist then you are laughably WRONG! We can physically point them out. Oh yeah, this entire world is an illusion of the mind anyway. Whatever.

As a software designer, I deal with different architectural levels every day, but I know that any thing that doesn't ultimately cash out in bytes is just vapour-ware.


What, like your pure logic closed system irrelevant arguments?

Re: Consciousness as an emergent property
posted on 03/17/2003 8:37 AM by PeterLloyd


Peter: Entities and processes at the architecturally higher system levels don't exist out there in the world. They are artefacts of our description of the world, which are ultimately wholly reducible to the lower levels.

subtillioN: They do exist.



The basis of a system may exist; but the abstractions that constitute the higher levels are artefacts of our description.

Consider the Top Part of my computer monitor (the top two inches) and the Bottom Part (the bottom two inches). You could say that the Top Part and Bottom Part do physically exist, but it would be naive to take that in anything other than a loose, figurative way of saying that the relevant parts of the screen physically exist. The 'Top Part' and the 'Bottom Part' are artefacts of my description. I could equally well have defined them to be three inches. Whatever. The point is that those two entities, those Parts, are not real, self-existent things out there in the world. Somebody else examining my computer monitor cannot discover them.

In just the same way, dynamic structures such as feedback loops are also not 'out there'. They are artefacts of describing the system in a certain way. Somebody else examining the same stuff could arrive at different conceptualisation.

Since these dynamic structures are just artefacts of how we are describing the system, it is ridiculous to suppose that something with real existence, such as a colour experience or a pain, could be reduced to, or emerge from, any such things.

Peter: A feedback loop is not a self-existing something floating around out there.

subtillioN: Who said it was?


It was implicit in your assertion that consciousness could emerge from dynamical systems. Real things such as conscious sensations cannot emerge from notional constructs. So, when you claim that consciousness emerges from the system architecture, it implies an assumption that the network architecture has some real existence of its own. But it doesn't.

We can physically point them out.



No you can't. You can only point out the substrate in which they are implemented.

Peter

Re: Consciousness as an emergent property
posted on 03/17/2003 10:39 AM by subtillioN


No you can't. You can only point out the substrate in which they are implemented.


You can make one physically and watch the entire process and observe it as it does its feedback thing. How much more explicit do we need to be? Do you think that we have to experience what it is like for the electricity to flow in this loop to be able to understand it?

Re: Consciousness as an emergent property
posted on 03/17/2003 1:04 PM by PeterLloyd


No, of course not. But you were claiming that system constructs could give rise to consciousness. That implies they would have to be real. They're not. They're notional artefacts of our description.

You talk about *seeing* the process. You don't *see* the process, you infer it.

Peter

Re: Consciousness as an emergent property
posted on 03/17/2003 2:00 PM by subtillioN


You talk about *seeing* the process. You don't *see* the process, you infer it.


You are looking for an absolute sensation. Such a sensation is impossible. Seeing is necessarily a representation of the actual sensory input data. This doesn't mean that everything that the sensation represents is imaginary. The data is real, but it has to be represented in order to be seen by the mind.

Re: Consciousness as an emergent property
posted on 05/09/2003 6:32 AM by JoeFrat

But I did not come up with the term red. I was taught what red was by a teacher.

Re: Consciousness as an emergent property
posted on 05/09/2003 7:35 AM by PeterLloyd

But I did not come up with the term red. I was taught what red was by a teacher.


Sure, but the teacher didn't surgically go into your brain and hardwire the word "red" in. She showed you some red things and told you the colour was called "red". You, in your mind, still had to do the work of associating the word with the colour experience.

Likewise, if you go into a paint shop today and look at some paint colours that you have perhaps never seen before, you learn what they're called by associating the name with the experience.

This is very different from learning the meaning of physical words such as "atom". You can't experience an atom and associate the word "atom" with it. You have to define the word "atom" *analytically* in relation to fundamental quantities such as mass and charge. And how do you define the fundamental quantities? You don't. They're undefined. So, the whole of physics is a closed system based on undefined terms.

*That* is why we know that physical things are different from mental things. The physical things are constructs, only the mental things are real.

Peter

Re: Consciousness as an emergent property
posted on 05/09/2003 12:47 PM by subtillioN

This is very different from learning the meaning of physical words such as "atom". You can't experience an atom and associate the word "atom" with it.


Have you never heard of a scanning tunneling microscope? Do you not know that atoms have been seen directly in the laboratory? We have actually written words and drawn pictures with single atoms. Do you even believe that the Earth is a sphere, because after all you have never actually been to space to see its actual shape? Can you simply discount that evidence because you yourself have not directly done so? If so, then your world must be extremely impoverished if you cannot gain from the collective experience of the whole of mankind.

In any case the learning process is exactly the same. We experience the models, theories and pictures and manipulations of atoms and associate the word ‘atom’ with the experience.

You have to define the word "atom" *analytically* in relation to fundamental quantities such as mass and charge. And how do you define the fundamental quantities? You don't. They're undefined. So, the whole of physics is a closed system based on undefined terms.


So the theory is a ‘closed’ and incomplete system. Theory is not reality, however, so what you say about the theory does not necessarily hold for physical reality.

*That* is why we know that physical things are different from mental things. The physical things are constructs, only the mental things are real.


I am amazed that you think this is a logical conclusion! It simply doesn’t follow. Basically you are saying that the only things that exist are the surfaces of things, because these are the only things that we can see directly with our eyes. So because we can’t see the interior of the Earth, does this mean that there can be no interior? Is your body not really composed of cells because you have never actually seen one? We have as much evidence for atoms as we do for cells. Is the moon merely a picture in the sky because that is all we can see?

Your logic is elementary and superficial because you continue to confuse theory and image with reality itself.

Re: Consciousness as an emergent property
posted on 05/09/2003 6:53 PM by PeterLloyd

Peter:

This is very different from learning the meaning of physical words such as "atom". You can't experience an atom and associate the word "atom" with it.


subtillioN:

Have you never heard of a scanning tunneling microscope? Do you not know that atoms have been seen directly in the laboratory?


Not so. People have seen images on screens, from which they infer the atoms. This is diametrically the opposite of the case with the colour red, which is right there, directly experienced in your conscious mind.

We have actually written words and drawn pictures with single atoms.


Again, these words and pictures at the atomic level are *inferred* from pictures on the screen (which in turn are inferred from conscious visual perceptions, which are the raw data).

Do you even believe that the Earth is a sphere, because after all you have never actually been to space to see its actual shape?


You are completely missing the point, yet again. The spheroid Earth, and the pictures and words drawn at the atomic level, are models. They are abstract models for our collective conscious experiences.

Can you simply discount that evidence because you yourself have not directly done so?


I didn't. You are attacking your own fantasy rather than anything I actually wrote.

We experience the models, theories and pictures and manipulations of atoms and associate the word ‘atom’ with the experience.


A model is an abstraction. Nobody ever experiences the model. They experience words and pictures to do with the model, but the model itself is abstract, and hence never appears in the conscious mind.

Theory is not reality however so what you say about the theory does not necessarily hold for physical reality.


This is a bogus distinction. It's like distinguishing Sherlock Holmes from the novels about Sherlock Holmes. They are conceptually distinct, but Sherlock Holmes has no existence independent of the words written about him. Likewise the physical world. It has no existence independent of what is said about it. (And don't give me the usual line about kicking a stone. That yields only a conscious experience of resistance to movement. It most certainly does not give you direct access to the physical world.)

Basically you are saying that the only thing that exists are the surfaces of things because these are the only things that we can see directly with our eyes. So because we can’t see the interior of the earth does this mean that there can be no interior?


This is so far removed from what I wrote, I am wondering whether this is a joke; or whether you are actually working in another language and these postings are machine-translated from English into your language and your responses translated back again. Either way, your responses cannot be taken seriously.

Peter

Re: Consciousness as an emergent property
posted on 05/09/2003 7:33 PM by claireatcthisspace

Now, that's not a very clever answer Peter.

Claire

Re: Consciousness as an emergent property
posted on 05/10/2003 1:16 AM by subtillioN

Not so. People have seen images on screens, from which they infer the atoms. This is diametrically the opposite of the case with the colour red, which is right there, directly experienced in your conscious mind.


Diametrically the opposite? Not even close! The color ‘red’ is just as much an ‘inference’ of the electromagnetic frequency that it represents as the image on the screen of the microscope is an ‘inference’ of the atom that it represents. There is as much doubt that an atom exists as there is doubt that electromagnetic frequency exists. There is the exact same doubt that anything in the objective world exists. Therefore to capitalize on this doubt selectively, as it pertains only to what you strangely call both “physical reality” and “Physics”, is to place an arbitrary and artificial boundary on experiential reality. It is faulty logic because you set up the criteria and you don’t follow them through to their logical ends.

Your whole absolute-duality-which-proves-your-absolute-monism argument (“physical things are different from mental things” which somehow proves that there are no such physical things) rests on the idea that there is an absolute distinction between a natural and an artificial sensation e.g. the eye vs. the scanning tunneling microscope. What are the grounds for making this distinction? None whatsoever. Every sensation involves a causal chain. What difference does it make whether or not portions of the chain are man-made?

Your argument places anthropomorphic blinders on the entire universe because everything that we can’t directly see with the natural sensory mechanism of the human eye is artificially segregated into some absolute imaginary theoretical category, which is ultimately a waste-bin for the disposal of all physical phenomena. The sole reason for this artificial duality is to shove physical reality aside to justify your mental monism. The fact is that you don't understand what a sensation actually is. You also don’t understand the ultimate result of dividing the world into a mental/physical duality based on the faulty notion of “absolute sensation”. You don’t understand that such dualistic reasoning ultimately creates its chasm exactly on the subjective/objective divide. This is simply because there is no absolute experiential sensory proof that anyone or anything else exists and absolute experiential proof is your sole criterion for your procrustean bisection of mental and physical reality. So to delete the physical half is to delete the objective world itself. This includes any other object including every other person and the objective world itself. Thus the end result of this bifurcated deletion is obviously solipsism.

Again, these words and pictures at the atomic level are *inferred* from pictures on the screen (which in turn are inferred from conscious visual perceptions, which are the raw data).


They are not inferred. They are sensed through our artificial sensory mechanisms. A sensory mechanism is a sensory mechanism regardless of its origins.

Do you really think that your eyes do not involve a causal chain and that their sensations are absolutely direct or non-abstract? Their directness is every bit as relative as the sensation of the atoms in the scanning probe microscope and the images they produce are every bit as abstract. There is no absolute distinction to be made.

You are completely missing the point, yet again. The spheroid Earth, and the pictures and words drawn at the atomic level, are models. They are abstract models for our collective conscious experiences.


They are models for physical reality which is experienced via the abstract representation of consciousness.

A model is an abstraction. Nobody ever experiences the model. They experience words and pictures to do with the model, but the model itself is abstract, and hence never appears in the conscious mind.


Do you really believe that it is impossible to experience abstractions? The mind itself is an abstraction. In fact all we ever really experience is our abstract sensory impressions of the physical world. That is what sensation necessarily is. It can never be some absolutely direct sensation of the thing sensed. Such a concept is nonsensical, because otherwise you could never know anything without being it by some other alias and thus the direct experience of being a non-conscious thing would be a non-conscious and non-sensorial non-experience. Therefore there is always a sensorial causal chain involved in any sensation. You simply can’t get around it.

Your absolute mental/physical duality is thus invalid. Therefore it can’t be used to justify your monism.

subtillioN: Theory is not reality however so what you say about the theory does not necessarily hold for physical reality.

Peter: This is a bogus distinction.


You are a hopeless mental-case if you think that theory is indistinguishable from reality. That is a serious error.

subtillioN: Basically you are saying that the only thing that exists are the surfaces of things because these are the only things that we can see directly with our eyes. So because we can’t see the interior of the earth does this mean that there can be no interior?

Peter: This is so far removed from what I wrote, I am wondering whether this is a joke; or whether you are actually working in another language and these postings are machine-translated from English into your language and your responses translated back again. Either way, your responses cannot be taken seriously.


If you don’t understand the counter-argument why take it seriously? =)

Re: Consciousness as an emergent property
posted on 05/13/2003 11:35 AM by Dimitry V

Sherlock Holmes has no existence independent of the words written about him. Likewise the physical world. It has no existence independent of what is said about it. (And don't give me the usual line about kicking a stone. That yields only a conscious experience of resistance to movement. It most certainly does not give you direct access to the physical world.)


And you have no existence independent of the words I see you write or written about you?

The test of reality is "USE". Can we use atoms to do things? Can we use rocks to do things? If you can use somethings in consistent ways, they're as "real" as anything can be.

Can you use a rock in a dream in the same way you can use a rock while awake? They can both be used, but not in the same ways; so they are actually different kinds of things -- one is a dream-rock, the other is an awake-rock.

The same goes for atoms. Can we use them to affect the physical world? Yes. So they are real, by the use test.

Re: Consciousness as an emergent property
posted on 05/13/2003 4:23 PM by PeterLloyd

And you have no existence independent of the words I see you write or written about you?


A conscious human mind obviously exists in so far as s/he experiences. So I exist independently of any words, as do you and other people.

The test of reality is "USE".


Not so. Astronomers use the celestial dome to describe the positions of stars, but the celestial dome does not have any independent existence. Geographers use the equator to describe the positions of land forms, but the equator has no independent existence. Physicists use the average temperature of a body to describe its thermal behaviour, but the average temperature has no independent existence. All these things are useful artefacts of our description of the world, but none of them have any independent existence.

Can we use atoms to do things? Can we use rocks to do things?


In precisely the same way that we can use any other construct.

If you can use some things in consistent ways, they're as "real" as anything can be.


Wrong. Usefulness is orthogonal to reality. Non-real things can be used. (See above.) Real things can be useless. (E.g. the conscious experience of pleasure in music or art.)

Can you use a rock in a dream in the same way you can use a rock while awake?


Within the dream, yes - of course! Is there some reason why you would doubt this?

They can both be used, but not in the same ways; ...


Within their respective worlds, they are used in the same ways.

... so they are actually different kinds of things -- one is a dream-rock, the other is an awake-rock.


No, they are the same *kind* of thing but in different contexts: namely the dream world and the waking world. They are both virtual worlds, but driven by different sources.

Peter

Re: Consciousness as an emergent property
posted on 05/13/2003 4:52 PM by subtillioN

A conscious human mind obviously exists in so far as s/he experiences. So I exist independently of any words, as do you and other people.


…and so physical reality exists independently of any words (i.e. theories) written about it.

Dimitry V: The test of reality is "USE".

Peter: Not so. Astronomers use the celestial dome to describe the positions of stars, but the celestial dome does not have any independent existence. Geographers use the equator to describe the positions of land forms, but the equator has no independent existence. Physicists use the average temperature of a body to describe its thermal behaviour, but the average temperature has no independent existence. All these things are useful artefacts of our description of the world, but none of them have any independent existence.


Note the key word “independent”. Nothing has an independent existence. This doesn’t mean that these things do not exist. They obviously do exist (as neural patterns referenced via pictorial and syntactic language) and lo and behold they are useful! That is why we invented them.

Dimitry V: Can we use atoms to do things? Can we use rocks to do things?

Peter: In precisely the same way that we can use any other construct.


'Construct' does not necessarily mean that it is mental, just that it has structure.

Dimitry V: If you can use some things in consistent ways, they're as "real" as anything can be.

Peter: Wrong. Usefulness is orthogonal to reality.


That is a bit abstruse. Care to clarify that statement? What is ‘reality’ to you and how can anything exist outside it or orthogonal to it?

Non-real things can be used. (See above.) Real things can be useless. (E.g. the conscious experience of pleasure in music or art.)


Non-real things do not exist. This is your dualism talking.

Dimitry V: Can you use a rock in a dream in the same way you can use a rock while awake?
Peter: Within the dream, yes - of course! Is there some reason why you would doubt this?


In my dreams things are constantly changing into other things and things often react in vastly different ways than they would in the waking state. That is simply because the mind is a representation. It is an abstraction and its representations of objects are free from the structural causality that forms their physical counterparts. Therefore the chances are great that the objects will not be usable in the same way at all, as is often the case.

Dimitry V:... so they are actually different kinds of things -- one is a dream-rock, the other is an awake-rock.

Peter: No, they are the same *kind* of thing but in different contexts: namely the dream world and the waking world. They are both virtual worlds, but driven by different sources.


What is the proof that objective reality is a virtual world? The conclusion is just as logical if you rewrite it thus: “They are both real worlds, but driven by different sources.” How can you justify your insistence on absolute virtuality? …and how can virtuality be independent of physicality?

Re: Consciousness as an emergent property
posted on 05/13/2003 6:37 PM by PeterLloyd

Peter: A conscious human mind obviously exists in so far as s/he experiences. So I exist independently of any words, as do you and other people.

Dimitri:…and so physical reality exists independently of any words (i.e. theories) written about it.


In the very paragraph that you quote, I state that "a conscious mind exists in so far as s/he experiences". Why do you then pretend that somehow the precise opposite also follows from the same premise? This reminds me so much of AI programs such as SHRDLU that syntactically throw back responses without engaging with the meaning at all. I am puzzled by the circularity with which you choose to argue. I hope you don't mind my asking, but are you an intelligent bot of some sort?

Why would anyone want to assert the independent existence of something that can never be experienced? The sum total of everything that any being ever at any time at any place in the universe has ever or will ever experience will be of necessity unaffected by the existence of something that is by definition unobservable. Debating whether physical substance exists or not really is the modern equivalent of debating how many angels can dance on the head of a pin.

Note the key word “independent”. Nothing has an independent existence. This doesn’t mean that these things do not exist.


Once again, you are picking up individual words without picking up the meaning of the sentence. "Independent" in this context means independent of the mind. The point is to differentiate things that exist in the mind, and hence are mind-dependent, such as colour experiences, from things that are supposed to exist independently of any conscious experiencer, such as physical substance. The latter are indistinguishable from fictions, since they are by definition unobservable.

Peter: Usefulness is orthogonal to reality.

Dimitri: That is a bit abstruse. Care to clarify that statement? What is ‘reality’ to you and how can anything exist outside it or orthogonal to it?


Reality is what actually exists. The clarification of the quoted sentence will be found in the sentences immediately following it in my posting. Just scroll up and you'll see it.

What is the proof that objective reality is a virtual world?


Just scroll upwards, and you will see the argument repeated half a dozen times in this thread.

Peter

Re: Consciousness as an emergent property
posted on 05/13/2003 7:36 PM by blue_is_not_a_number

Peter,

Just scroll upwards, and you will see the argument repeated half a dozen times in this thread.


repeating does not help here, I tried that... ;-)

Re: Consciousness as an emergent property
posted on 05/13/2003 10:49 PM by subtillioN

repeating does not help here, I tried that... ;-)


The point is to adapt and resonate (i.e. communicate), not to repeat.

Re: Consciousness as an emergent property
posted on 05/14/2003 12:29 AM by blue_is_not_a_number

SubtillioN,

The point is to adapt and resonate (i.e. communicate), not to repeat.


Are you referring to the other thread? I was just informed by the famous ELDRAS herself that BESS does _not_ stand for "Basic Echo Synchronizing System". It is something smarter. ;-)

Blue

Re: Consciousness as an emergent property
posted on 05/14/2003 12:36 AM by subtillioN

Are you referring to the other thread?


um... yeah... in fact I was. 8]

Re: Consciousness as an emergent property
posted on 05/13/2003 10:45 PM by subtillioN

In the very paragraph that you quote, I state that "a conscious mind exists in so far as s/he experiences". Why do you then pretend that somehow the precise opposite also follows from the same premise?



Did I claim the “precise opposite”, that "a conscious mind [doesn’t] exist… in so far as s/he experiences"? I don’t think so. Please clarify this confusion.

I am puzzled by the circularity with which you choose to argue. I hope you don't mind my asking, but are you an intelligent bot of some sort?



When trying to untangle your language knot, to deal with its tangled web of concepts, I necessarily have to force my hierarchy into the shape of the tangled structure of your logic so that we have some basis of communication. The knot is where the circularity actually exists. You have gotten used to it and can only see it when someone tries to untie it.

Otherwise please point out the circularity that you claim I am exhibiting and I will show you its non-circular hierarchical reference frame.

… and yes I am a bot… Thank you for asking. =)

Why would anyone want to assert the independent existence of something that can never be experienced?


I am claiming the exact opposite. There is no such thing as an independently existing anything. Mental reality is not independent of causality. If your monism was a true monism then you would understand the necessary coherence. Just because we can’t observe the vast amounts of causal connections on all levels doesn’t mean they are necessarily non-existent. To assume so is entirely unjustified and reveals your lack of understanding of the necessity of causality.

You are asserting that the mind is independent of causality i.e. physical reality. Correct? Or do you have another definition of causality? If so then how can you absolutely distinguish it from the common notion of causality which is the basis of physical theory? Causality gives us a means to understand the nature of change. Without it then change has no deeper meaning. It becomes a mystery. Does your mental monism even contain causality? Why, ultimately, does it make any difference whether we call causality ‘physical’ or ‘mental’?

The sum total of everything that any being ever at any time at any place in the universe has ever or will ever experience will be of necessity unaffected by the existence of something that is by definition unobservable.


How easy! We don’t have to *explain* experience because the entire world *is* experience! Experience is the root level! Brilliant escape pod! Ok, then how does experience propagate as sensation? What actually happens when experience changes? How can it assume its various forms as seen in the objective world? How does experience work?

Your logic is circular because the mind itself *is* experience. You are basically saying that experience is formed from more experience. How much more circular can you get? In fact it is a perfect circle, with experience leading only to and from experience. This neat little circle exists only at the scale of human perception, and every other scale beyond the human realm of perception is necessarily imaginary. That is the most primitive anthropocentric point of view I have ever seen.

The point that you are missing is that the mind level isn’t the level at which sensation actually takes place. It is the level where the sensation is represented as conscious experience. The mind is the higher level of a causal hierarchy that depends entirely on unobservable variables. Therefore experience is entirely based in non-experience. Otherwise how can you explain how experience generates itself?

The act of sensation is a causal process that causes changes in the neural substrate which is reflected emergently in the mind as conscious experience.

This non-experienced neural substrate is constantly affected by non-experienced things all the time. You can’t simply discount as non-existent everything that your senses were not built to detect or everything that is not in your field of view.

Reality is thus entirely independent of the mind, but the reverse isn't true.

Debating whether physical substance exists or not really is the modern equivalent of debating how many angels can dance on the head of a pin.



It would really be sad if we stopped wondering wouldn’t it? Are you not on the other side of this very debate that you are trying to debase?

I take it as a given that causality necessarily exists and I merely call it “physical substance”. If you want to call it “mental substance” it makes no difference provided it possesses the causality necessary to explain the existence of the objective world and its connection to the mind. Otherwise you have replaced a modicum of understanding with complete and total mystery.

The point is to differentiate things that exist in the mind, and hence are mind-dependent, such as colour experiences, from things that are supposed to exist independently of any conscious experiencer, such as physical substance. The latter are indistinguishable from fictions, since they are by definition unobservable.


They are only ‘unobservable’ by your absolute definition of sensation. I have gone through this over and over, and you have never made your case against my arguments. By assuming an absolute definition of sensation, the whole objective world is therefore ‘un-sensed’, because no sensation can be absolutely direct, and thus you are left with solipsism, because every other person is part of that ‘un-sensed’ objective world, and thus all other minds and the entire objective world are a figment of your imagination. If you claim that the objective world is a figment of some other being's imagination, then how is this any more provable than calling it physical? You still have to trust your non-absolute senses, which are merely relative representations formed from sensory chains of mental causality. What difference does it really make if you claim that the causality beneath objective reality is mental or physical? Either way it is ultimately entirely unprovable.

Just scroll upwards, and you will see the argument repeated half a dozen times in this thread.


Yes, and you will also see my counter-argument over and over that still remains unchallenged.


Re: Consciousness as an emergent property
posted on 05/14/2003 1:00 AM by blue_is_not_a_number

SubtillioN

Did I claim the “precise opposite”, that "a conscious mind [doesn’t] exist… in so far as s/he experiences"? I don’t think so. Please clarify this confusion.


First, it is probably worth noting that Peter was answering your message thinking it was from Dimitri...

Then I'll try to colorify this for you: "opposite" in my interpretarration meant "opposite view", not the logical opposite.

Blue

Re: Consciousness as an emergent property
posted on 05/14/2003 1:35 AM by subtillioN

First, it is probably worth noting that Peter was answering to your message thinking it was from Dimitri...


Good point. I hadn't noticed. That is a good reason for 'signing' all my future posts.

Then I'll try to colorify this for you: "opposite" in my interpretarration meant "opposite view", not the logical opposite.


Can you colorify a bit more on this distinction?
It seems a bit tricky.



subtillioN

Re: Consciousness as an emergent property
posted on 05/14/2003 3:15 PM by blue_is_not_a_number

SubtillioN,

Good point. I hadn't noticed. That is a good reason for 'signing' all my future posts.


The problem is that your messages are often so long that in the middle of them one can neither see the top nor the bottom without scrolling, so one easily forgets who wrote them. Maybe you could start each sentence by repeating your name.

Can you colorify a bit more on this distinction?
It seems a bit tricky.


Did you see "The Matrix" ? It explains how there can be a difference between humans and buildings, although inside the matrix what looks like a human can be "possessed" by an agent. Also note that the colors outside the matrix are different than the colors inside the matrix, but even inside the matrix the colors themselves are not "created" by the matrix, they exist only for the human observer. [Of course everything else is only information appearing in vision of the human observer as well.]


Re: Consciousness as an emergent property
posted on 05/14/2003 3:33 PM by subtillioN

Maybe you could start each sentence by repeating your name.


8]

..or maybe people could look at who they are actually responding to? I know it is a strange concept, but it shouldn't be that difficult to get used to.

Re: Consciousness as an emergent property
posted on 05/14/2003 3:45 PM by subtillioN

Did you see "The Matrix" ? It explains how there can be a difference between humans and buildings...


...sounds rather trivial really...


although inside the matrix what looks like a human can be "possessed" by an agent. Also note that the colors outside the matrix are different than the colors inside the matrix, but even inside the matrix the colors themselves are not "created" by the matrix, they exist only for the human observer. [Of course everything else is only information appearing in vision of the human observer as well.]


That's all well and good for an unrealistic sci-fi fantasy movie where 'possession' is commonplace, but how does the concept relate to the real world, where such a thing as 'possession' is an entirely different matter?

The brain is not an empty vehicle to be 'possessed', as the matrix would make it seem, and there is zero evidence that we are living in a simulated world.

Re: Consciousness as an emergent property
posted on 05/14/2003 5:33 PM by blue_is_not_a_number

...sounds rather trivial really...

That's all well and good for an unrealistic sci-fi fantasy movie where 'possession' is commonplace, but how does the concept relate to the real world, where such a thing as 'possession' is an entirely different matter?

The brain is not an empty vehicle to be 'possessed', as the matrix would make it seem, and there is zero evidence that we are living in a simulated world.


It is not about "possession". It is about colors. And sound, and so on, you know what I mean.

Re: Consciousness as an emergent property
posted on 05/14/2003 5:52 PM by subtillioN

It is not about "possession". It is about colors. And sound, and so on, you know what I mean.


The part which dealt with the difference between a human being and a building was about “possession”, right? The color part, I thought, was an “Also note”. Nevertheless both parts deal exclusively with the fantasy world of the matrix and the differences that only exist in that fictional world. I don’t see much in the way of an explanation of the phenomenon of color, and the relevance to our discussion of the non-fictional-world issues of consciousness is quite obscure.

Re: Consciousness as an emergent property
posted on 05/14/2003 11:18 AM by Dimitry V

Dimitry V: ... so they are actually different kinds of things -- one is a dream-rock, the other is an awake-rock.

Peter: No, they are the same *kind* of thing but in different contexts: namely the dream world and the waking world. They are both virtual worlds, but driven by different sources.


In the prototypical dream, a dream-rock can be used in many ways that an awake-rock cannot. Therefore, the two have different essentialistic properties. Therefore, they are different "kinds" of things.

Both have "USE", and both are real. But they are not the same kind of thing. Fundamentally, the only thing they have in common is human-visual appearance. This calls for an appreciation of shared vs. personal realities.

Re: Consciousness as an emergent property
posted on 05/14/2003 11:30 AM by Dimitry V

Dimitry V: The test of reality is "USE".

Peter: Not so. Astronomers use the celestial dome to describe the positions of stars, but the celestial dome does not have any independent existence. Geographers use the equator to describe the positions of land forms, but the equator has no independent existence. Physicists use the average temperature of a body to describe its thermal behaviour, but the average temperature has no independent existence. All these things are useful artefacts of our description of the world, but none of them have any independent existence.


I generally agree. However, the equator (and other such constructs) is part of a shared reality system. We can communicate the idea to others and they can use it to synchronize other ideas with our own. Insofar as they are useful in synchronizing ideas, they are real -- they have USE.

In the physical reality, we do not have to communicate ideas such as gravitation, solidity, or liquidity in order to synchronize our ideas with those of others. The universe has already done that for us. They are part of the pre-idea-exchanging-intelligence, shared reality. The specific ways in which those ideas become encoded in intelligent systems will vary, but the foundation is already there.

Materialization
posted on 03/03/2003 6:23 PM by kojikun

He says that you can't materialize when no one's watching, but that's totally not true. Countless times we see characters appear in and out of the matrix while other people watch this happen. The guy in the subway, for example. Or every time the characters watch each other enter and exit.

Another thing is, the entire matrix is a program, so logically nothing should be impossible, as it's just a simulation.

The article is good, but needs a lot of work. The movie is great as a gunfu movie, but lacks majorly in logic and rationality and, most importantly, PLANNING.

Re: Materialization
posted on 03/05/2003 2:16 AM by PeterLloyd

Countless times we see characters appear in and out of the matrix while other people watch this happen. The guy in the subway, for example. Or every time the characters watch each other enter and exit.


The agents don't materialise. They take over pre-existing human bodies, and change their appearance. E.g. in the subway, Agent Smith does not materialise a new body, but possesses the tramp's body.

There are several times when we see the Nebuchadnazzim disappear when they are about to *exit*, but we never see them appear when they *enter*.

So there you have two 'glitches': (a) Why do agents and Nebuchadnazzim use different methods for entering the Matrix? The former take over existing bodies, while the latter materialise new bodies. (b) Why do we see Nebuchadnazzim disappear but not appear? Those were the two questions I was addressing.

Another thing is, the entire matrix is a program, so logically nothing should be impossible, as it's just a simulation.


True, but any simulation will have fixed rules. Otherwise it's not a simulation but just a random mess. The Matrix world has clearly been designed to simulate the physical world, with minimal deviations from physical laws. Any deviations that do exist are 'glitches', and there ought to be a good explanation for their being there.

Peter

Re: Materialization
posted on 03/05/2003 12:33 PM by Mesopotamian

I'm not sure that the Agents actually leave the Matrix. They just transfer from one host to another, so I don't think that the way they enter a body should be considered a glitch. Neo also entered Agent Smith's body.

Re: Materialization
posted on 03/05/2003 6:15 PM by PeterLloyd

Well, the word 'glitch' was the editor's idea (Glenn Yeffeth) and I'm trying to stick to it. But I'd prefer to call them 'interesting features'.

You say that the Agents never 'leave' the Matrix. But they do sometimes cease to have any embodiment in any avatar. They're not always possessing somebody.

Neo's leap into Agent Smith's body was quite weird. It's very different from the Agents' method of possessing bodies. What do you think it was supposed to be? What was going on when Neo did that?

Peter

Re: Materialization
posted on 03/05/2003 7:01 PM by Mesopotamian

But I'd prefer to call them 'interesting features'.

you must be a software developer :-)

You say that the Agents never 'leave' the Matrix. But they do sometimes cease to have any embodiment in any avatar. They're not always possessing somebody.


As I go through the movie in my brain, I can't really recall any specific moment where they are not possessing somebody or, as you say, "cease to embody an avatar". In the movie, they are either an agent, or in the process of becoming an agent (possessing somebody) in another host. We do not get a sense of the transfer time between being an Agent and occupying a body, but based on the chase sequence, it is virtually instantaneous. I think they can only be permanent residents. Where else could they go if they are not in the Matrix? I'd hate to use "to be, or not to be" in this case... but it's fuzzy, I guess, what they are doing when not being an agent oppressing the Neos of the "world". Is there perhaps an Agent Bar where they go at the end of the work day and hang out as lines of code, drinking energizer drinks and not being avatars? Or do they become a sub-routine that ceases to run, temporarily becoming non-existent, like in the entrance/exit theory?

Neo's leap into Agent Smith's body was quite weird. It's very different from the Agents' method of possessing bodies. What do you think it was supposed to be? What was going on when Neo did that?


What was going on was that Neo just realized he could control his fate; he came back to life, and stopped bullets. Pretty amazing revelations. Within the Matrix, I think he became a "virus" and destroyed Agent Smith, and jumping inside was a way to symbolize the hack to do so.

Re: Materialization
posted on 03/06/2003 2:46 AM by PeterLloyd

To be sure, when the Agents go from one possessed person to another, it seems pretty instantaneous. And in the trailers for Matrix Reloaded, we see Agent Smith possessing multiple avatars at the same time. So clearly an agent doesn't need any quiescent period before reloads.

Nevertheless, there are many scenes in the film when the Agents are not on-screen. Maybe they are out and about doing something else, but there is nothing *in the film* to tell us that each agent must be permanently active in the Matrix world.

(In fact, it might create a societal problem if they were always in-Matrix, as it would mean that the real person whose avatar was possessed would disappear from normal life for extended times. Their friends and relatives would get to notice that something odd was going on. On the other hand, now I think of it, in our 'real' world, lots of people do just 'go missing'. Maybe they've been possessed by gnostic Archons.)

Seriously though, I think we have to stick with what the film tells us, and that indicates that the agents take possession of avatars for comparatively short times to perform specific jobs of seeking and destroying Nebuchadnazzim.

When an Agent is not in-Matrix, then the software module that constitutes the Agent is simply not running. It's like Microsoft Word sitting on your hard disk and not being executed.

Again this emphasises the big difference between Agents and people.

Talking of people: what happens to the real person whose avatar is possessed by an Agent? Do they just go unconscious? Or do they die? Or do they remain conscious of what is happening but cannot act?

As regards Neo diving into Agent Smith: like you, I am tempted to think that this was symbolic. But that would be disappointingly incongruous with the rest of the film, where what you see on-screen is what actually happens.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 12:49 PM by clarkd

Your arguments on consciousness don’t work for me at all.

The idea that a person feels pain immediately and an android would not is wrong. We have pain sensors in our fingers, and when those sensors are triggered, the wiring to our brain stimulates neurons that look after that body part. I see no difference in this scenario to what would happen in an android. The fact that the android could be programmed to interpret that signal as something other than pain in that part is irrelevant. The creator of people (god or natural selection) made a decision on what would happen upon stimulation of that pain sensor in our hand, out of an infinite number of possibilities. What I mean is that when people were created, the sensor information from the pain sensor in our finger could have been connected to the smell part of the brain instead, and then when that sensor was stimulated, you would smell something. This is possible but not very useful. In the same way, signals from the sensor in an android, even though they could be interpreted in many ways (they are just numbers after all), would probably be interpreted in an optimal manner and update the data state that would control reactions to pain from the hand.

You say that computers cannot have consciousness because everything they do or react to has been predetermined by programming. I have programmed for 25 years, and on all large programs the program seems to take on a life of its own. Programmers have to work very hard to limit the programs to do only things they intended, not things they didn't. This tendency for parts of a program to interact with other modules in ways that weren't expected or anticipated is something all programmers have to fight against all the time. The idea that only the randomness that seems to be built into our brains can produce consciousness is also wrong. Randomness can be built into programs easily, and programs that can randomly generate possible scenarios and then test them for "goodness" are being developed in many institutions around the world. Data mining and statistical techniques (multiple regressions) can also be used in a program to "learn" what variables are important in some situations and which are not. Neural nets are being used to solve many complex problems, but their biggest drawback is that even when they work, we don't know all the intermediate decisions that led to the conclusion. This is how humans seem to think. We bumble along with very inefficient algorithms that we don't even understand, and somehow we come up with something that seems to work, but we don't really know why. Computers, on the other hand, normally work very accurately for known reasons, but this is hardly a reason to say they can't become "conscious".
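
As a minimal illustration of the "randomly generate possible scenarios and then test them for goodness" idea -- a sketch of my own, with an invented scoring function, not a claim about any particular institution's software:

import random

# Generate-and-test: propose random candidate scenarios, score them,
# and keep the best one found so far.

def goodness(x):
    # A made-up scoring function: closer to 7 is "better".
    return -abs(x - 7)

best, best_score = None, float("-inf")
for _ in range(1000):
    candidate = random.uniform(0, 100)   # random scenario
    score = goodness(candidate)          # test it for "goodness"
    if score > best_score:
        best, best_score = candidate, score

print(best)  # typically very close to 7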

The idea that a computer program cannot feel emotions is also not true. I can make a program that takes inputs and puts them through an emotion algorithm to produce outputs that don’t just look like emotions but would, from the context of the program, be emotions. A simple example of this is the emotions that can be seen in the AIBO robotic dog. The program code that created these emotions is quite trivial and randomness was built in trivially.
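
To show how trivial such an emotion algorithm can be, here is a rough sketch of the general idea. It is an assumption-laden toy, not AIBO's actual code; the state variables, thresholds and behaviours are invented for illustration:

import random

# A toy "emotion" module: inputs nudge internal state variables,
# the state decays back toward neutral, and a little randomness is mixed in.

class EmotionModule:
    def __init__(self):
        self.state = {"happiness": 0.5, "fear": 0.0}

    def update(self, petted=False, loud_noise=False):
        if petted:
            self.state["happiness"] += 0.2
        if loud_noise:
            self.state["fear"] += 0.3
        for name in self.state:
            self.state[name] *= 0.9                          # decay toward neutral
            self.state[name] += random.uniform(-0.05, 0.05)  # built-in randomness
            self.state[name] = min(1.0, max(0.0, self.state[name]))

    def behaviour(self):
        if self.state["fear"] > 0.5:
            return "cower"
        return "wag tail" if self.state["happiness"] > 0.6 else "idle"

dog = EmotionModule()
dog.update(petted=True)
print(dog.behaviour())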

In conclusion, I believe that philosophy, even though I enjoy the debate and ideas, should stay far away from the discussion of consciousness in artificial intelligences. Philosophy uses and deals with concepts that are only relevant to its virtual or unreal world. The biological sciences, computer sciences and engineering worlds would be a better place to look for answers in this regard. I look forward to the day when there is a machine intelligence much smarter and more "conscious" than we are, and I see no unsolvable problem with this happening in my lifetime.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 1:47 PM by Thomas Kristan

Everybody read the above post - and learn!

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/04/2003 4:52 PM by Jeremy

I read it, and I agree with it, up until the end at least. I do not look forward to the day when we become second-rate intelligences. I wonder why anybody would.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 2:30 AM by PeterLloyd

The fact that the android could be programmed to interpret that signal as something other than pain in that part, is irrelevant. ... What I mean is that when people were created, the sensor information from the pain sensor in our finger could have been connected to the smell part of the brain instead and then when that sensor was stimulated, you would smell something.


Agreed, but: once the conscious sensation has come into existence, its qualities are not reliant on interpretation for their existence.

You are quite right that the electrochemical signals that pass between nerve cells are all the same, whether they are representing inputs from the finger or the nose. In that respect, the brain is like the computer.

The difference is that a human being has a further level of representation, namely conscious sensation. A finger-prick is qualitatively very different from a smell. You can't just reinterpret a finger-prick as a smell. This extra layer of representation, the conscious sensation, is what is missing in an electronic computer. Every data structure remains multiply interpretable. For example, an AI system might have a basic edge-detection module that detects edges in both the visual field and the tactual field. So, if you were saying that the machine's data processing is conscious, you would have a paradox, because that bit of processing could not be both a vision sensation and a touch sensation. The same could be true of the brain. In the conscious mind, on the other hand, the conscious sensations are distinct. So, there is something fundamentally different between conscious sensations on the one hand, and information processing (in a brain or a computer) on the other hand.
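
For what it's worth, the "multiply interpretable data structure" point can be made concrete with a small sketch of my own (not from the essay): one generic edge detector runs over an array of intensities, and nothing in the data or the computation records whether the samples are visual or tactual.

# One generic edge detector over a 1-D array of intensities.
# Nothing in the data or the code records whether the samples came from
# a camera line-scan or a row of touch sensors -- the interpretation
# lives entirely in the labels we attach outside the computation.

def detect_edges(samples, threshold=10):
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

camera_row = [0, 0, 0, 80, 80, 80, 0, 0]   # "visual" intensities
touch_row  = [0, 0, 0, 80, 80, 80, 0, 0]   # "tactile" pressures -- same numbers

print(detect_edges(camera_row))  # [3, 6]
print(detect_edges(touch_row))   # [3, 6] -- identical processing either way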

*That* is why, even though an AI system can reproduce the brain's information processing, that by itself does not guarantee that it will have any consciousness.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 8:44 AM by samynovitch

The difference is that a human being has a further level of representation, namely conscious sensation. A finger-prick is qualitatively very different from a smell. You can't just reinterpret a finger-prick as a smell. This extra layer of representation, the conscious sensation, is what is missing in an electronic computer. Every data structure remains multiply interpretable. For example, an AI system might have a basic edge-detection module that detects edges in both the visual field and the tactual field. So, if you were saying that the machine's data processing is conscious, you would have a paradox, because that bit of processing could not be both a vision sensation and a touch sensation. The same could be true of the brain. In the conscious mind, on the other hand, the conscious sensations are distinct. So, there is something fundamentally different between conscious sensations on the one hand, and information processing (in a brain or a computer) on the other hand.


I have some fundamental problems with your remarks:
1) If I build a parallel computer that mimics the brain completely, does the computer then have consciousness or not?

2) You're assuming that an artificial intelligence would be built in a certain way that would exclude it from conscious sensations. This is an argument from ignorance. Just because you cannot figure out how this would be possible doesn't mean somebody else couldn't do it.

Every data structure remains multiply interpretable


Just like a neuron. :-)

Keep in mind that the computers we have today are linear and limited. The future of processors lies in massive parallelization. The moment we have a computer system that is:

a) clockless,
b) parallel (with, for example, 1 million processors working simultaneously), and
c) equipped with petabytes of memory

is the moment we will have (without much trouble) artificial intelligence.

regards

Neok

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 9:32 AM by griffman

What if it's just a matter of the system not being complex enough? Again, parallelism. The pricking of a finger does send an impulse signal to the neurons in the brain. But, to my understanding, unlike a computer, the neuron that receives the signal, or the series of neurons, may be used for informational purposes other than receiving pain signals from the finger and passing them on. Can neurons arbitrarily fire off the different kinds of information they may contain? I don't think so, but I'm not a neuroscientist either. All or nothing, I say (to everyone or no one?).

So what is this extra level of representation? I fail to see it, and I think it has eluded definition every time anyone tries. I think that the conscious representation we are creating comes from the extra information stored and transmitted when the incoming signal cascades through the network. Along the lines of: if you prick your finger, you may suddenly remember the last time you pricked your finger. If that memory had any significance, it may spawn any number of other mental reactions. All just from the one signal traveling up your arm. The butterfly and the hurricane, of sorts.

The difference between a computer and a brain is more the fact that the brain is like a network and a computer (Pentium-processor type) is more like a single neuron. That just means we need a few billion Pentiums networked to approach the physical complexity of the human brain.

Also remember that our software has been programmed for a few million years (many millions of years if you use natural selection as a model), while computer software has only been written for the past 40-some-odd years, and it hasn't been tuned to the goal of physical survival. This is why today's computers are so versatile while we seem limited, yet they are still rudimentary when compared to our own complexity.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 9:55 AM by Thomas Kristan

that just means we need a few billion pentiums networked to approach the physical complexity of the human brain.


A Pentium could (in real time) simulate at least 1 million neurons. 10,000 Pentiums should be more than enough.

- Thomas
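
For a sense of scale, here is a rough back-of-envelope sketch of that estimate, assuming a very simple leaky integrate-and-fire neuron and a coarse time step (both of these are assumptions -- no model is specified above):

# Back-of-envelope for the "1 million neurons on one Pentium" claim,
# assuming a leaky integrate-and-fire update of roughly ten floating-point
# operations per neuron per step. Synaptic traffic, usually the hard part,
# is ignored here.

neurons = 1_000_000
steps_per_second = 100      # assume a coarse 10 ms simulation step
ops_per_update = 10         # rough guess for one neuron's state update

total_ops = neurons * steps_per_second * ops_per_update
print(f"{total_ops:,} state-update ops per simulated second")  # 1,000,000,000

# ~1e9 simple ops/s is roughly what a circa-2003 Pentium 4 can sustain, so the
# claim looks plausible only with coarse time steps and sparse connectivity.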

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 10:08 AM by samynovitch

Or one self-replicating nanoprocessor :-)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 10:16 AM by griffman

A Pentium could (in real time) simulate at least 1 million neurons. 10,000 Pentiums should be more than enough.


I don't think it can truly be more than one neuron when put into the context of use, as opposed to simulation. A neuron is made up of a central core (for processing of sorts) and input/output connections. The Pentium is a vastly more powerful core than any one single neuron, but the connection to other neurons is what matters in this relationship. The input/output connections of a million neurons would be far more than any possible connection between two machines at this time. It creates a bandwidth limitation.

If you know of a program that can simulate a million working neurons and run on a Pentium, I would LOVE to see it. And that is not sarcasm.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 10:35 AM by Thomas Kristan

Please, see this.


http://www.frc.ri.cmu.edu/~hpm/book97/ch3/retina.comment.html

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 10:06 AM by griffman

Back to the Matrix........

the head plug -
Back when the story was conceived/written/produced, the significance of wireless technology was not common knowledge, I believe. I don't think many people even in the science world could conceive of the possibility of what amounts to a cell phone embedded in the brain. It's actually an innovation that the movie predates, in my mind. Combined with the strong visual argument already given, that is a good reason for it to be there. Will it happen? No, but we know that now, not then.

"entire crops were lost. Your primitive ceribrum kept trying to "wake up""

This is why the utopia idea doesn't happen. It's a dialogue patch to an obvious glitch in the plot. Good reasoning, in my mind. Hopefully, when the possibility of utopia presents itself to us, we'll have figured out a less invasive way to connect with it.

"the pill you took was part of a trace program, it disrupts you input/output carrier signal so that we can pinpoint your location"

The pill is a kind of computer virus, and swallowing it is how it is activated. In a virtual space, any action can represent any function, within the limits of the programming.

I think the idea of the distributed system to run the power plant is spot on. The machines do not necessarily need the human bodies as much as they need the human brain. "The body cannot live without the mind." This is true since the physical processing is actually done within the physical body. Your mind is not "uploaded" into the matrix as much as the matrix construct is "downloaded" into your physical brain. It's the same idea behind MMORPGs: most of the world rendering and game calculations are done on the client machine. The server (the matrix in our case) only controls world location and client interactions. The mind is never separated from the body.
This opens a bit of a paradox when Cypher starts pulling the plug on his shipmates, although it may just be that the effect is too traumatic for the brain to handle and causes something like a stroke. The transfer process under normal conditions seems to be a bit of a strain in itself.
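
The MMORPG analogy above can be made concrete with a minimal client/server sketch. The classes and message shapes here are invented for illustration; no actual game or Matrix mechanism is being described:

# A toy MMORPG-style split: the "server" (the Matrix, in the analogy)
# only tracks authoritative world state and relays positions;
# each "client" (a plugged-in brain) does its own heavy rendering locally.

class Server:
    def __init__(self):
        self.positions = {}                  # authoritative world state only

    def update(self, player, pos):
        self.positions[player] = pos
        return dict(self.positions)          # broadcast back to the client

class Client:
    def __init__(self, name, server):
        self.name, self.server = name, server

    def tick(self, pos):
        world = self.server.update(self.name, pos)
        self.render(world)                   # the expensive work stays client-side

    def render(self, world):
        print(f"{self.name} renders {len(world)} avatars locally")

server = Server()
neo, trinity = Client("neo", server), Client("trinity", server)
neo.tick((0, 0))
trinity.tick((5, 2))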

So the machines need to go through the elaborate process of keeping human brains alive so that they can maintain their power sources. What I can't understand is that this seems to be the only goal of the machines: they exist and build and "farm" for no other purpose than to keep their power plant running, so that they may run the matrix program and build and farm. The irony that Morpheus spoke of, I guess.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 11:16 AM by clarkd


I still think you are getting two different things confused. When we talk about an artificial intelligence, we are the architect. We get to decide what connections make sense in our design. When we look at humans, these decisions have already been made. Once the decision is made on how connections run from input to output, those decisions are as set and as permanent as they are in a human, if we so choose.

You say “Every data structure remains multiply interpretable” but this doesn’t have to be the case. You don’t have to allow a single data structure to be interpreted in more than one way.

Consciousness cannot be defined as hinging on whether or not an AI can make more flexible data structures and procedures than humans. This flexible nature of AI comes from “us” being the designer, and this flexibility can be constrained any way we desire.

Your argument about “The difference is that a human being has a further level of representation, namely conscious sensation.” seems to be a tautology. You seem to be defining consciousness as being human rather than giving reasons why this is so. I believe that a computer program can represent any context state (the state of many interrelated variables) that we as humans could think of.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 12:10 PM by grantcc


The difference is that a human being has a further level of representation, namely conscious sensation. A finger-prick is qualitatively very different from a smell. You can't just reinterpret a finger-prick as a smell. This extra layer of representation, the conscious sensation, is what is missing in an electronic computer.


Wrong. When a person loses an arm in an accident, for example, the brain takes the section that used to process data from that part of the body and uses it for other purposes. Since the part of the brain that processes data from the face is adjacent to the part that processes data from the arm, a scratch on the cheek is sometimes interpreted by the brain as something happening to the arm.

When something happens to the body, the brain adjusts to deal with the new configuration. When sight is lost, for example, the area of the brain that we use for reading can be taken over by touch and used to learn to read Braille. There was an excellent program about this on PBS recently in which Alan Alda participated in an experiment that blocked a person's sight for a couple of weeks while they tried to learn Braille. During this experiment, an MRI machine was used to track the parts of the brain being used. Touch moved into the back of the brain, where sight had previously been processed, and after the blindfold was removed, sight moved back into that area. The brain is a dynamic system that adjusts to the environment in which it exists. This is one of the things a computer can't do these days but may be able to do at some time in the future when programming is done in DNA rather than metal and silicon.

Grant

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 12:30 PM by subtillioN


This is one of the things a computer can't do these days but may be able to do at some time in the future when programming is done in DNA rather than metal and silicon.


I don't think parallel DNA computing is the answer. What you need is an adaptable network architecture, and there are many highly adaptable architectures that could be used to build brain-surpassing, highly adaptive computers. One of these is wirelessly connected neurobots (adapted nanobots): the computer/brain could be composed of a networked swarm of these neurobots, for instance. Because the connections are wireless, software-based, and propagate at the speed of light, this would create a much more adaptable architecture; it would be able to change its hardware-coded functionality almost instantaneously. Give the neurobots the ability to fly and the AI would literally be a hive mind.
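
A toy way to picture the "software-defined wiring" idea (all names invented, no claim about real nanobot hardware): when the links are just data, the whole wiring diagram can be swapped in a single assignment, with no physical rerouting.

class Node:
    def __init__(self, name):
        self.name, self.inbox = name, []

    def send(self, msg, topology, nodes):
        # deliver msg along whatever links the current topology defines
        for target in topology.get(self.name, []):
            nodes[target].inbox.append((self.name, msg))

nodes = {n: Node(n) for n in ("a", "b", "c", "d")}

topology = {"a": ["b"], "b": ["c"], "c": ["d"]}   # a chain
nodes["a"].send("hello", topology, nodes)

topology = {"a": ["b", "c", "d"]}                 # instantly "rewired" into a hub
nodes["a"].send("hello again", topology, nodes)

for node in nodes.values():
    print(node.name, node.inbox)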

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/05/2003 6:09 PM by PeterLloyd


We seem to be talking at cross-purposes here. You're saying that any given part of the brain can have multiple interpretations. Fine. I agree. In fact, that's precisely what I said: the hardware and information processing of both computers and brains are multiply interpretable; it depends on what larger function they are a part of.

What I also said, and you've ignored, is that conscious experience is *not* multiply interpretable. A conscious experience is characterised by its qualitative presentation. It's not dependent on an interpretation or on playing a functional role.

*That* is why consciousness is not reducible to information processing in the brain (or an intelligent computer).

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/06/2003 4:22 AM by samynovitch


What I also said, and you've ignored, is that conscious experience is *not* multiply interpretable. A conscious experience is characterised by its qualitative presentation. It's not dependent on an interpretation or on playing a functional role.


If you are still maintaining that computers cannot become conscious, you have to prove that an intelligent computer system cannot maintain a qualitative presentation of a conscious experience.

If I read between the lines, I think you are suggesting that apart from the brain there is also a soul.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/06/2003 5:02 AM by PeterLloyd


If you are still maintaining that computers cannot become conscious, you have to prove that an intelligent computer system cannot maintain a qualitative presentation of a conscious experience.


I think it's the other way around, actually. If you want to claim that something has a property that is not open to public inspection, such as the property of consciousness, then you have to give a good reason for the inference that that property is there.

Otherwise, you just open the floodgates to a sea of crazy claims. For example, you could say that your chair is conscious and that I have to prove that your chair is not conscious!

If you want to claim that a computer is conscious by virtue of its computations then you have to give a reason for making this inference. As far as I can see, it's just a non-sequitur.

Sure, I can see the plausibility of the argument that a computer can be as *intelligent* as a human, or even more so, by virtue of how it computes and how fast it computes. But intelligence is not *consciousness*. Intelligence is the ability to solve problems, consciousness is the ability to have subjective qualitative experiences.

What you need to give is some reason to believe that a computer, by virtue of its computations, is not only intelligent, but also *conscious*.

But I don't think you can give any such reason. For the whole shebang of computing can be carried out without any reference to consciousness.

If I read between the lines, I think you are suggesting that apart from the brain there is also a soul.


I don't know what you mean by 'soul'.

I'm only talking about plain vanilla consciousness, which we have direct empirical observations of every waking moment of the day. If you want to talk about 'soul' then you need to define the term.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/06/2003 5:56 AM by samynovitch


I think it's the other way around, actually. If you want to claim that something has a property that is not open to public inspection, such as the property of consciousness, then you have to give a good reason for the inference that that property is there.


I can understand that this is not easy to demonstrate. But I need more than "evidence" before I exclude the possibility of computer consciousness.

Otherwise, you just open the floodgates to a sea of crazy claims. For example, you could say that your chair is conscious and that I have to prove that your chair is not conscious!

Agreed.

If you want to claim that a computer is conscious by virtue of its computations then you have to give a reason for making this inference. As far as I can see, it's just a non-sequitur.

The same applies to human thinking. I cannot "prove" that I am conscious. I can only demonstrate the results of the conscious processes in my brain.

Sure, I can see the plausibility of the argument that a computer can be as *intelligent* as a human, or even more so, by virtue of how it computes and how fast it computes. But intelligence is not *consciousness*. Intelligence is the ability to solve problems, consciousness is the ability to have subjective qualitative experiences.

Again I get the feeling that you are trying to make an "artificial" distinction between artificial intelligence & consciousness and normal intelligence & consciousness.


What you need to give is some reason to believe that a computer, by virtue of its computations, is not only intelligent, but also *conscious*.

Our brain is a massively parallel computer. What makes you think that our computations produce consciousness? Experiencing something is as much a function of the brain as thinking is. The whole brain is an advanced input/transformation/output machine.


But I don't think you can give any such reason. For the whole shebang of computing can be carried out without any reference to consciousness.

Two last things :

1) If I have to prove that computers can be conscious, you have to apply the same standard to humans and prove that humans can produce consciousness. (But I don't think you can give any such proof :-)

2) My remark about the soul is because I believe you attribute some magical stuff to the brain that cannot be achieved in artificial intelligence.

regards,Neok

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/06/2003 2:25 PM by PeterLloyd


I didn't say that computers 'can never be conscious'. What I said was that a computer cannot be conscious by virtue of the computations that it does.

In other words, I am specifically denying the 'strong AI' hypothesis.

Now, for all I know, my laptop and my chair might both be conscious. Since consciousness is a private thing, I just can't tell. What I *can* tell is that their behaviour has nothing to do with any putative consciousness that they might have. My chair has a very simple behaviour: when I kick it, it moves under the Newtonian laws of mechanics. My laptop has somewhat more complicated behaviour: but when there's no hardware fault, everything it does is determined by lines of software code. So, even if the laptop could somehow have consciousness, that consciousness is not going to affect what comes up on the screen.

A powerful intelligent computer with massive parallelism is in the same position. It *might* be conscious. So might the room it sits in. And the carpet. But you have absolutely no reason to believe that any of them is conscious.

Everything that the intelligent computer does is determined by its software. So there is no scope for consciousness to be in the causal loop that produces whatever the output is.

Someone said earlier that a program may be so complex that the programmer can't figure out what the program is going to do. That is irrelevant here. What matters is that the software *does* determine what the computer does. The computer's output is driven by software not by consciousness.

If I say, "I see the colour red", it may be because I have a conscious sensation of red. If a computer says it, it's because it was programmed to. There is no causal gap in the computer where consciousness can get a foothold.

Even if the computer says, "I am conscious and I experience red qualia", we *know* that it is lying because we know that it was programmed to say that.

Someone else mentioned random-number generation. This is also irrelevant here. If an intelligent computer uses a random-number generator to choose between saying "I see red" and "I see blue", that still doesn't let consciousness into the causal loop.

So, since any consciousness that there might be in the machine cannot influence the machine's output, it follows that the machine's output can never count as evidence for machine consciousness.

Even if a robot walks and talks just like a human, and even takes part in a philosophical debate arguing that it is conscious, we can know that it is faking because we know that its behaviour is governed by software, not by consciousness. And we know this because we can look at the source code.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 5:56 AM by samynovitch


You are thinking too much in terms of the state of computing today. We don't need a Pentium IV or UltraSPARC to achieve AI. What we need is millions of tiny processors that behave like neurons.

Those processors also have to be clockless, which means that they are not forced to work to a certain rhythm. If those two conditions are met, the resulting neural net will be nondeterministic. A given set of inputs could lead to an entirely different set of outputs from moment to moment.

I still want to know what makes our brain so special that we have consciousness. After all, a neuron is just firing away, given a certain set of inputs. You cannot prove, starting from one neuron, that we are conscious.

regards, Neok

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 1:48 PM by PeterLloyd


You are thinking too much in terms of the state of computing today.


I'm not thinking 'in terms of' any processor. I am discussing the basic concepts. The level of technology is completely irrelevant to the argument.

Those processors also have to be clockless, which means that they are not forced to work to a certain rhythm. If those two conditions are met, the resulting neural net will be nondeterministic.

Not so. Being clockless has no effect on whether they are deterministic or not.

A given set of inputs could lead to an entirely different set of outputs from moment to moment.


Yes, but that's not indeterminism.
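
One way to see that clockless and deterministic are independent properties: an event-driven simulation has no global clock at all, yet given the same input events it produces the same trace every run. The toy "unit" update and firing rules below are invented for illustration:

import heapq

def run(events):
    """events: list of (time, unit_id, value) tuples. Returns the processing trace."""
    queue = list(events)            # work on a copy, keep the inputs untouched
    heapq.heapify(queue)
    state, trace = {}, []
    while queue:
        t, unit, value = heapq.heappop(queue)   # next event in time order, no clock tick
        state[unit] = state.get(unit, 0) + value
        trace.append((t, unit, state[unit]))
        if state[unit] > 2:                     # toy "firing": schedule a later downstream event
            heapq.heappush(queue, (t + 1.5, unit + 1, 1))
            state[unit] = 0
    return trace

inputs = [(0.0, 0, 1), (0.7, 0, 1), (1.1, 0, 1)]
print(run(inputs) == run(inputs))   # True: no clock anywhere, still deterministic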

I still want to know what makes our brain so special that we have consciousness. After all, a neuron is just firing away, given a certain set of inputs. You cannot prove, starting from one neuron, that we are conscious.


Brains don't produce consciousness, they just have the capability to gateway into consciousness.

Consciousness is not part of the ontology of physics. Therefore any system that operates in a physically deterministic way has no causal gap into which consciousness can exert an influence. It can have no 'gateway' to conscious processes.

But a system in which quantum-mechanical effects are amplified to the level of macroscopic behaviour (via a chaotic system dynamic) has a causal gap where consciousness can operate. The chaotic system of a brain allows quantum-mechanical effects, such as neurotransmitters tunnelling across synapses, to yield overt effects. There is thus a gateway through which consciousness can influence the brain.

Obviously if you build a machine on the same principles you may likewise be able to create a gateway to conscious processes.

The important point where I disagree with strong AI is that consciousness does *not* emerge from the computational activity.

Intelligence and consciousness are totally different concepts. Intelligence is the capacity to solve problems, and with a powerful enough computer you can achieve artificial intelligence of human or superhuman power. Consciousness is the capacity to have subjective, qualitative sensations. You don't need intelligence to be conscious!

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 2:23 PM by samynovitch


Consciousness is not part of the ontology of physics. Therefore any system that operates in a physically deterministic way has no causal gap into which consciousness can exert an influence. It can have no 'gateway' to conscious processes.


But a system in which quantum-mechanical effects are amplified to the level of macroscopic behaviour (via a chaotic system dynamic) has a causal gap where consciousness can operate. The chaotic system of a brain allows quantum-mechanical effects, such as neurotransmitters tunnelling across synapses, to yield overt effects. There is thus a gateway through which consciousness can influence the brain.


Now we are getting somewhere. All I have to demonstrate, then, is that it is possible to introduce quantum effects into an artificial neural net. That is pretty easy to do:

The smaller a logic gate becomes, the more quantum effects come into play. Contemporary computers cancel those effects out because they work to a fixed rhythm: a calculation might take 10 ns or 11 ns, but a contemporary processor will always take four clock cycles to complete the operation.
A clockless nanoprocessor would communicate the result of an operation the moment it has finished. That moment is determined in part by quantum effects.
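
As an aside, you can watch this kind of run-to-run timing variation on an ordinary machine. The sketch below times the very same computation several times; on a desktop OS the jitter comes from caches, interrupts and scheduling rather than from quantum mechanics, so it only illustrates the kind of arrival-time variation being discussed, not its physical source:

import time

def op():
    return sum(i * i for i in range(10_000))   # the exact same work every run

timings = []
for _ in range(5):
    t0 = time.perf_counter_ns()
    op()
    timings.append(time.perf_counter_ns() - t0)

print("nanoseconds per identical run:", timings)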

That being said, I'm not sure it is possible to say that our brain produces consciousness because of quantum effects. Could you provide any links to that hypothesis?

Intelligence and consciousness are totally different concepts. Intelligence is the capacity to solve problems, and with a powerful enough computer you can achieve artificial intelligence of human or superhuman power. Consciousness is the capacity to have subjective, qualitative sensations.
If an artificial intelligence cannot think about itself and have conscious experiences, would it really be superhuman?


As I see it, there is only room for two possibilities in AI:

1) A conscious intelligent AI
2) No AI.

You don't need intelligence to be conscious!


My hypothesis is that you need consciousness to be intelligent.

regards,
Neok

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 6:16 PM by PeterLloyd


Now we are getting somewhere. All I have to demonstrate, then, is that it is possible to introduce quantum effects into an artificial neural net.


I think it would be rather optimistic to suppose that the mere presence of quantum-mechanical effects would suffice to gateway to consciousness.

Given that consciousness itself is a structured process with structured I/O (sound, vision, motor control, etc.), we should expect that the quantum-mechanical interface would need to comply with whatever protocol is used by the natural mind-brain interface. Maybe that's what the Penrose-Hameroff microtubules do.

That being said, I'm not sure it is possible to say that our brain produces consciousness because of quantum effects.


The brain doesn't 'produce' consciousness as the spleen produces bile (to use a Searlean analogy).

There are two interrelated processes: the brain activity and the conscious activity. The mental processes are non-physical (see the other argument in the essay, endlessly repeated in this discussion). Yet they impact the brain processes. Unless we are going to propose violations of physical laws (heaven forfend), we have to say that the conscious process exerts its influence only in quantum-mechanically nondeterministic events.

Could you provide any links to that hypothesis?


Well there was a much earlier and cruder version of this idea in the book by Popper & Eccles.

My hypothesis is that you need consciousness to be intelligent.


Unlikely but possible. But my point was the contrary: that you don't need intelligence to be conscious.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 9:48 AM by Thomas Kristan


What I said was that a computer cannot be conscious by virtue of the computations that it does.



You are wrong. What fluid do you estimate?

A very phlogiston-vitalistic view, I would say.

- Thomas






Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 12:57 PM by griffman


Even if a robot walks and talks just like a human, and even takes part in a philosophical debate arguing that it is conscious, we can know that it is faking because we know that its behaviour is governed by software, not by consciousness. And we know this because we can look at the source code.


That means the only difference between our consciousness and a machine's is that we can't see our own source code.

And that makes sense. Being conscious beings, all we see is the running application.

The mysteries of consciousness are explained through the application of the source code, just as the mysteries of the universe are being explained through its equations. We are merely in the process of reverse-engineering it.

It makes no difference to the argument whether the process is deliberate or automatic. What matters is that, at some point, the sensation (e.g. red) occurs in your conscious mind, and at that point it is associated with the term "red".


I can bet that you experienced the sensation of red before you were taught to associate the word with that sensation. Were you conscious back then? All you had were incoming sensations; there was no language to associate them with. What may be there is that you remember feeling that sensation before.

impulse recognition and associative linking.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 1:21 PM by PeterLloyd


Even if a robot walks and talks just like a human, and even takes part in a philosophical debate arguing that it is conscious, we can know that it is faking because we know that its behaviour is governed by software, not by consciousness. And we know this because we can look at the source code.

That means the only difference between our consciousness and a machine's is that we can't see our own source code.


No, that's not what I said. I said the key difference is that an android's behaviour is governed deterministically by the code. *That* is the difference between us and them. The fact that we can see the source code of the android is just a convenient way of verifying that it is a deterministic system.

Since it is a deterministic system, there is no gap in the causal chain where consciousness could have any influence.

I can bet that you experienced the sensation of red before you were taught to associate the word with that sensation. Were you conscious back then? All you had were incoming sensations; there was no language to associate them with. What may be there is that you remember feeling that sensation before.


Of course I was conscious before I acquired any language. We all were. What's the problem?

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 2:02 PM by Thomas Kristan


No, that's not what I said. I said the key difference is that an android's behaviour is governed deterministically by the code. *That* is the difference between us and them.



What if the android has a random generator inside its code?

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 2:38 PM by PeterLloyd


What if the android has a random generator inside its code?


So-called random generators in software are really pseudo-random generators. They are deterministic algorithms (or lookup tables) that churn out random-looking numbers. The execution of the software is still deterministic in relation to the combination of code plus generator state. So there is no gap for consciousness to intervene.
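
To make that concrete, here is a minimal linear congruential generator, one of the oldest pseudo-random schemes; the constants are standard textbook values and have nothing to do with the Matrix:

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random numbers, completely determined by the seed."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

print(lcg(42, 5))
print(lcg(42, 5) == lcg(42, 5))   # True: same seed, same "random" stream, every time

Same seed in, same stream out; that is the sense in which the software, random numbers included, stays deterministic.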

If the android could be interfaced to quantum-mechanically truly random processes, using whatever interface protocol the brain uses, then I see no philosophical reason why the android could not have a gateway into conscious processes. Working out the interface protocol is an engineering issue.

An android or brain doesn't 'have' consciousness, any more than a TV set 'has' a studio. It tunes in to a conceptually distinct process. And just as multiple TV sets can tune in to the same broadcast, if an android was configured identically to your brain, it would gateway into the same stream of consciousness that your brain gateways into. True telepresence.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 2:47 PM by samynovitch


An android or brain doesn't 'have' consciousness, any more than a TV set 'has' a studio. It tunes in to a conceptually distinct process. And just as multiple TV sets can tune in to the same broadcast, if an android was configured identically to your brain, it would gateway into the same stream of consciousness that your brain gateways into. True telepresence.


OK, now I think I understand your position fully. (It took me some time :-)
But I am sorry to say that I cannot subscribe to your view.

Otherwise, you just open the floodgates to a sea of crazy claims. For example, you could say that your chair is conscious and that I have to prove that your chair is not conscious!


This is a quote from you. Your hypothesis sounds as crazy to me as strong AI does to you.

regards,
Neok



Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 3:12 PM by Thomas Kristan


If the android could be interfaced to quantum-mechanically truly random processes, using whatever interface protocol the brain uses, then I see no philosophical reason why the android could not have a gateway into conscious processes.


So you say the difference between human and android is the 'real random generator'?

The base for the consciousness is a 'real random generator'?

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 6:32 PM by PeterLloyd


So you say the difference between human and android is the 'real random generator'?


As regards consciousness, that is the crucial difference.

The base for the consciousness is a 'real random generator'?


I wouldn't say 'base' because that implies that consciousness reduces to, or emerges from, the physical process of nondeterministic events.

Rather, the nondeterministic events provide an interface between the mental and physical.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 3:47 PM by subtillioN


If the android could be interfaced to quantum-mechanically truly random processes, using whatever interface protocol the brain uses, then I see no philosophical reason why the android could not have a gateway into conscious processes. Working out the interface protocol is an engineering issue.


Sorry, but even quantum mechanics is not truly random. The physicists just don't know it yet.

There is no refuge from determinism and consciousness is NOT dependent on randomness.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 5:36 PM by Mesopotamian


I have a few questions related to consciousness.

If it (consciousness) is not dependent on randomness, what is it dependent on? I could probably make a very, very, very long list. Are the dependencies infinite? I think so. What does it mean for the "randomness" theory if the dependencies are infinite?

Also, in one of the recently released Animatrix episodes, an android is on trial for murder for what it "felt" was self-defence. It killed its masters, who were going to junk it. If someone were developing a "servant" program, would you build in a self-defence program?

This carries over to the Terminator as well, where the young John Connor tells the Terminator to stop killing. In that case, the Terminator is always governed by external rules to a degree. I'm sure there are arguments about whether the Arnold Terminator was in fact true AI, or a dumb version of its creator.

However, my question is: is it possible that an AI can kill, even if you program it specifically NOT to kill? (Killing someone is but one example.)

If the answer is NO, then is it really AI?

I think true AI has to learn how to break rules (like a child) and become its own "entity" based on its own set of rules, and therefore independent, before it is truly conscious. I think it will eventually be possible, but someone will need to "show me". I don't profess to be the be-all here (especially amongst this crew), but as long as a line of code represents a rule to be followed, it is still a rule that is followed by the computer... it is not independent.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 5:59 PM by subtillioN


If it (consciousness) is not dependent on randomness, what is it dependent on?


In a word, "structure", more specifically, the functional, electro-chemical neural network architecture of brain.

I could probably make a very, very, very long list. Are the dependencies infinite? I think so. What does it mean for the "randomness" theory if the dependencies are infinite?


What do you mean, specifically, by “Are the dependencies infinite”? What is the “randomness” theory?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 6:21 PM by Mesopotamian


In a word, "structure", more specifically, the functional, electro-chemical neural network architecture of brain.


What do you mean specifically, by “Are the dependancies infinite”? What is the “"randomness" theory”?


sorry..that "conciousness is not dependant on randomness"..

SO..tying this together..my question I guess is..within the structure of the brain, is there a finite, or infinite list of dependancies that define consciousness...(if it is not dependant on randomness)...

if that remotely even made sense, I'll be happy.


Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:53 AM by subtillioN


SO... tying this together... my question, I guess, is: within the structure of the brain, is there a finite or infinite list of dependencies that define consciousness... (if it is not dependent on randomness)...


There is a finite regress of hierarchical mechanisms which form what we call consciousness. At the lower regions of this regress the mechanisms are in the categorically fuzzy regions between 'mechanism', 'awareness' and 'consciousness'. Consciousness emerges as a gradient from mechanism through awareness to consciousness. This morphological gradient can be labeled with arbitrary terms so that we can talk about it, but we often get confused by the terms themselves and cannot see how the terms can fade into one another as you go up and down the hierarchy.

Here is the hierarchy as I see it.

1. ‘Mechanism’ is found in the molecular machinery within the cell.
2. ‘Minimal awareness’ is found in the complex feedback mechanisms governing the interactions of the neuron with its somatic environment.
3. ‘Higher-level’ or ‘animal’ awareness and memory (the beginnings of symbolic representation) exist at the basic level of the network architecture and functional module regions.
4. ‘Consciousness’ (as we know it) arises when the memory system is complex enough to form a rich symbolic and self /environment-referential level.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 6:35 PM by PeterLloyd


Sorry, but even quantum mechanics is not truly random. The physicists just don't know it yet.


Not so. Look at Bell's theorem.

There is no refuge from determinism and consciousness is NOT dependent on randomness.


I didn't say consciousness is dependent on randomness. I said that physical *access* to
consciousness is dependent on physical nondeterminism.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:31 AM by subtillioN


Not so. Look at Bell's theorem.


Bell's Theorem is merely a mathematical curiosity. There is zero experimental proof that ANY process whatsoever is non-deterministic.

I didn't say consciousness is dependent on randomness. I said that physical *access* to
consciousness is dependent on physical nondeterminism.


So consciousness is trapped in some other 'dimension', but is tapped into by a hypothetical 'non-deterministic' process? How can a 'non-deterministic process' cause anything? Non-determinism = non-existence. If something does not have a definite cause then it simply is not caused and does not exist. There is no middle-ground fuzziness for causality which somehow lets consciousness in. Consciousness doesn't need indeterminacy to exist and neither does free-will.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 3:31 AM by PeterLloyd


Bell's Theorem is merely a mathematical curiosity. There is zero experimental proof that ANY process whatsoever is non-deterministic.


Read up on Alain Aspect's experiment.
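
For anyone who wants the content behind those two pointers, the CHSH form of Bell's inequality (notation mine, not from this thread) says that for any local, deterministic hidden-variable account, the correlations E between measurement settings a, a' and b, b' must satisfy

\[ \lvert E(a,b) - E(a,b') \rvert + \lvert E(a',b) + E(a',b') \rvert \le 2 \]

whereas quantum mechanics allows the left-hand side to reach \(2\sqrt{2}\) (about 2.83), and Aspect-type experiments measure values above 2. Whether that forces genuine randomness or only rules out local hidden variables is, of course, part of what is being argued here.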

So consciousness is trapped in some other 'dimension', but is tapped into by a hypothetical 'non-deterministic' process?


It's not in another 'dimension'. It's just nonphysical.

How can a 'non-deterministic process' cause anything?


Same way as deterministic process does.

Non-determinism = non-existence.


What makes you think that?

If something does not have a definite cause then it simply is not caused and does not exist.


That is a non-sequitur.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 3:47 AM by subtillioN


Read up on Alain Aspect's experiment.


I have. It is based on a faulty interpretation of quantum phenomena, centered on an incorrect interpretation of what constitutes a measurement. They suppose that consciousness actually changes physical reality simply by making a measurement. The collapse of the wave function is simply the collapse of the mental/mathematical possibilities of the state of the system. The collapse has nothing to do with non-mental 'reality'.


It's not in another 'dimension'. It's just nonphysical.


The nonphysical is an illusion.


subtillioN: How can a 'non-deterministic process' cause anything?

Peter: Same way as deterministic process does.


Care to describe exactly how? People have been saying this for decades, but no-one has ever been able to understand HOW an effect can be created without a cause.


subtillioN: Non-determinism = non-existence.

Peter: What makes you think that?


see above

Without understanding HOW an effect can happen(?) without a cause, what makes you think the reverse?


subtillioN: If something does not have a definite cause then it simply is not caused and does not exist.

Peter: That is a non-sequitur.


How so?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 6:24 PM by Dimitry V


So-called random generators in software are really pseudo-random generators. They are deterministic algorithms (or lookup tables) that churn out random-looking numbers.


The seed is usually taken from the timer, so it's not determined by the algorithm alone. In fact, which number you get from the timer for the random-number seed is relatively asynchronous, like the asynchronous gates you or someone else was talking about.

There are also physical devices that can be used to provide a "true" random number.
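
Both options can be shown with nothing but the standard library; the sketch below contrasts a timer-seeded pseudo-random stream with numbers drawn from the operating system's entropy pool (which on most platforms mixes in noise from physical devices and events):

import random, secrets, time

# 1) Pseudo-random, seeded from the timer: the algorithm is deterministic,
#    only the seed changes from run to run.
seed = time.time_ns()
a, b = random.Random(seed), random.Random(seed)
print("timer-seeded:", [a.randint(0, 9) for _ in range(5)])
print("same seed again:", [b.randint(0, 9) for _ in range(5)])   # identical stream

# 2) Drawn from the OS entropy source instead of any fixed seed.
print("os entropy:", [secrets.randbelow(10) for _ in range(5)])

Whether the second kind counts as "truly" random in the quantum-mechanical sense being asked for here is a separate question; most OS entropy pools mix event timing and, on newer hardware, dedicated noise generators.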

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 6:45 PM by PeterLloyd


The seed is usually taken from the timer, so it's not determined by the algorithm alone.


It's still determined.

There are also physical devices that can be used to provide a "true" random number.


It needs to be quantum-mechanically nondeterministic.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 11:04 PM by Dimitry V


It's still determined.


No, the timer is nondeterministic in itself for a number of reasons. System interrupts coming from other programs, the varying amount of time consumed by things such as mouse movement, access delays due to the hard drive having to spin down when it goes over a certain heat threshold, and so on. The overall function of modern computers is much less deterministic than most people think.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:20 AM by PeterLloyd


No, the timer is nondeterministic in itself for a number of reasons.


The key phrase here is "in itself". Yes, of course, the timing of the computer steps is nondeterministic *in itself*. But that in itself does not give consciousness any room to intervene. For, if the actual timing is determined by other physical things (such as mouse movements, as you say), then the computer's output is governed by cause and effect. I.e. given the state of the universe at time t, the laws of physics determine the computer's output at any later time t+(delta-t). Therefore, there is no gap into which consciousness can apply itself.
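
Stated compactly (the notation is mine, not Peter's): if the physical state at time t is S(t) and the dynamics F are deterministic, then

\[ S(t + \Delta t) = F\big(S(t), \Delta t\big) \]

so once S(t) is fixed, every later output is fixed too; a non-physical influence could only enter at points where F fails to be single-valued, i.e. where the physics itself is nondeterministic. That is the whole of the argument in this paragraph.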

In order for a non-physical consciousness to manifest itself in the physical world, it can only do so at points where the physical processes themselves are nondeterministic.

(Unless, as I have said, we want to countenance consciousness breaking physical laws, which I think is massively implausible.)

Peter


Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:36 AM by sushi101


Consciousness is non-physical; it arises as a product of a feedback system's ability to do pattern recognition over time. The individual brain cell is not aware in itself; it is in the symphony of brain cells that awareness happens.

We _are_ information, and our awareness grows with our ability to consume it (information).

IMO we are more aware than people were 10,000 years ago, and as far as I can remember I even think there is some evidence that our awareness is relatively young.

The conscious mind is an emergent thing; it is not physical, yet not holistic either. It is a soccer team in disguise.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:40 AM by subtillioN


Since it is a deterministic system, there is no gap in the causal chain where consciousness could have any influence.


Consciousness does not need a "gap in the causal chain" from which to escape. Your view of consciousness and causality is HIGHLY simplistic and old-fashioned. If you understood cognitive science you would see how the mind CAN arise from purely deterministic architectures.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 3:35 AM by PeterLloyd


If you understood cognitive science you would see how the mind CAN arise from purely deterministic architectures.


Cognitive science addresses the informatics of cognition. It says nothing about the nature of consciousness.

I don't doubt that cognitive processes can be implemented deterministically. But that has no direct relevance here.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 3:53 AM by sushi101


I do understand cognitive science; that is exactly why I say what I say.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 3:55 AM by sushi101


Oops, sorry, I thought it was addressed to me, sub.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 12:07 AM by blue_is_not_a_number


Hello Neok, Peter,

1) If I have to prove that computers can be conscious, you have to apply the same standard to humans and prove that humans can produce consciousness. (But I don't think you can give any such proof :-)


You can read my humble attempt at "Proving the Unprovable" at http://www.occean.com; it directly addresses your point in this context, but I don't want to waste bandwidth by pasting it in here. It also intends to clarify (as simply as possible) the relationship of 'physical reality' and 'consciousness'.

Greetings,
blue_is_not_a_number

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/08/2003 11:32 AM by thegabe2


Hi,

I guess I have to specify a few things before making any comments:
1. English is not my mother tongue.
2. I don't have nearly as much knowledge as several of the people who have posted in this forum, yet I think I grasp what they say, probably because of my interest in philosophy, physics, and informatics. Nonetheless, I will not be able to express myself in the same polished manner they have.

I have thoroughly enjoyed the discussion about consciousness and its definition.

I think that there is a way to "almost" have the monist view and the neural-architecture view meet: through the analysis of the thinking process.

In my super-humble view, it is imaginable to derive the _subjective_ experience of consciousness ("subjective" is a keyword that I think has a crucial role in the discussion, but has often been overlooked by the monist side; consciousness is _not_ objective, it is an experience, almost a thought, experienced only by the mind that experiences it) through the association, overlap, addition and so on of different concepts, all stored in a "place": the place where all the information and the thinking process and the memories are contained.

All right, I know I make no sense. I mean, I know what I want to say; I just suck at saying it. Please bear with me. I'm sure that if you do me the honour and favour of spending some time trying to figure out what I am saying, you'll find a way to understand me. Thank you.

So let's try to rephrase. Consciousness, in my view, is an experience. It's a "meta" experience, the experience of having an experience. Once this happens, the identification of self happens, and therefore "consciousness". "Cogito ergo sum" explains it perfectly. "I experience myself thinking, therefore I am" is a viable translation of Descartes' sentence.

This "meta-experiencing" can then be taken a step further, to experiencing oneself experiencing the experience, and so on; only to a certain degree, probably.

Now, how do we get to have the thought "I am experiencing"? Well, by associating experience with concepts such as "I", "separation", "identity", and so on, I guess. I don't know.

But I think that, just intuitively, we could imagine a number of associations, correlations, observations, etc., between "concepts" or "ideas" and data from experience, memory, etc., that could generate this "meta-awareness".

Well, we can then imagine that the brain is where, mechanically, these "thought operations for meta-awareness" take place.

I still wonder: did I convey the message?

So the problem is not whether there's a way to graph or mathematically model the experience of colour, but whether it is possible to obtain the _subjective_ experience of consciousness from the thinking processes.




Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 1:04 AM by Dimitry V


You can't just reinterpret a finger-prick as a smell.


Sure you can, it's called synesthesia; sensory overlap. It can be an important mental strategy for some people. One example I'm familiar with: During sex, some people have the strategy that feeling has to overlap into a specific color, a certain shade of blue for example, in order to induce an orgasm.

Synesthesia is also an important concept in hypnotic induction and hypnotherapy in general.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:28 AM by PeterLloyd


Sure you can, it's called synesthesia; sensory overlap.


No, synaesthesia occurs where a sensory input (say a sound wave) yields a conscious experience in a secondary modality as well as a primary modality. E.g. upon hearing a whistle, the synesthete might see the colour red as well as hearing the sound. But the red is still unalterably red, and the whistle sound is still unalterably a whistle sound. The synesthete gets two sensations where a normal person gets only one. But each sensation is non-reinterpretable.

Again this reinforces the logical difference between brain and mind. The electrochemical signals coming in from the cochlea can yield either a sound experience or a colour experience, depending on which sensory cortex interprets them. Once you've got your conscious experience, however, that scope for reinterpretation has gone.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:41 AM by Dimitry V


It's not always true that the original sensation stays in consciousness, sometimes it goes away faster than it can be sensed; so the conscious mind senses only the alternate sensation. And you can make yourself feel colors, see smells, etc. Sort of like LSD.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 2:53 AM by blue_is_not_a_number


It's not always true that the original sensation stays in consciousness, sometimes it goes away faster than it can be sensed; so the conscious mind senses only the alternate sensation. And you can make yourself feel colors, see smells, etc. Sort of like LSD.


That's because you have the freedom of a human being.

blue_is_not_a_number

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/08/2003 3:36 AM by PeterLloyd


Fine. But at any given time, a conscious sensation is non-reinterpretable.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/07/2003 12:17 PM by /:setAI


The ideas in the Matrix are not the meat; the story is just a Hollywoodized version of a thousand cyberpunk short stories from the mid-80s.

What makes the Matrix special is the AWESOME BULLET-TIME KUNG FU. YO! That is what I'm talking about. Whoa.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/09/2003 2:53 PM by Demiurgos


I just don't see the point of this essay. The point of the Matrix does not lie in Technical Technicalities, but in religious symbolism, mostly Gnosticism. Gnosticism is a terribly interesting religion whether or not you're a Matrix fan. I discovered that the Matrix revolves around Gnosticism when I read this essay: http://www.envoymagazine.com/backissues/4.5/coverstory.html Once you read this, you may wish to learn more about Gnosticism; I suggest you read "A Gnostic Catechism" at http://www.gnosis.org If you have some knowledge of Buddhism, it aids you in understanding Gnosticism. After you read, watch the Matrix. I guarantee you will enjoy it 100% more.
Walk in peace.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 3:41 PM by PeterLloyd


The point of the Matrix does not lie in Technical Technicalities, but in religious symbolism, mostly Gnosticism.


I agree. But the editor didn't want an essay on gnosticism, he wanted one on technology. So that's what I wrote.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/09/2003 7:33 PM by claireatcthisspace


"your wrong! the.."
"...no that wasnt what I said"
"is reality..."
"real"
"no your wrong"
"Your augments on consciousness don’t work for me at all"
"I agree with it, up until the end at least.."
"You are quite right that the electrochemical..."
"You can't just reinterpret "
"I have some fundamental problems with your remarks:"
" I fail to see it and I think it has eluded definition every time anyone tries"
"I don' t think it can truely be more than one nuron "
"I still think you are getting two different things confused..."
"Consciousness cannot be defined as.."
"Wrong. When a person loses his/her arm through an accident"
"I don't think parallel DNA computing is the answer.."
"We seem to be talking at cross-purposes here."
"Agreed."
"The important point where I disagree with strong AI is that "
"Now we are getting somewhere."
"The brain doesn't 'produce' consciousnes "
"You are wrong. What fluid do you estimate?"
"No, that's not what I said"
"consciousness is NOT dependent on randomness"
"Not so. Look at Bell's theorem"
"then it simply is not caused and does not exist"
"It's not in another 'dimension'. It's just nonphysical"
"Consciousness does not need a "gap in the causal chain"
"It says nothing about the nature of consciousness."
"It's not always true that the original sensation stays "
"I just don't see the point of this essay"


One way to solve problems is to [change] the way we see the problem, even if the problem stays the same.

A problem is: what is reality within the study of consciousness?

Change the way we see the problem.

Language and the words it creates all convey different meanings to each of us. On the other hand, it is assumed we all have an objective understanding of a word like [reality]; these assumptions might not really, really, REALLY!!! be objective, because we cannot fathom another's world view entirely. What we think is an objective concept like [reality] is in fact what we ourselves think it is, plus what we think others think it is too. This makes us think it becomes objective later, because we and they agree on it.

What we think others think might not constitute objectivity, though.

If we think [reality] is objective, it might be because what we think others think it is agrees with what we think it is too, and the concept becomes what I call [compounded]. The word [reality] is now a compound (think about chemistry), but the structure of what reality is, is the element. The structures are subjective and infinite (this needs more explanation another time). The compound is finite in the sense of what is termed objective and agreed.

When we create a new concept, or when we try to understand one, it is a struggle between the composition of the element and the compound. If we are to construct a world view and decide what consciousness is, we need to compound some thoughts very early on. The compound is a progressive way of moving forward, but it is not the only way. The finality of the compound enables us to move forward towards other compounds, but we need to assemble these new compounds by the proper use of new elements. The thinking behind the language is the element, and the thinking that opens and changes the compound is the introduction of a new element. One way we can introduce a new element into this is to be open-minded to new thinking about what could contribute to the concept behind the word [reality]. We should not ignore what we cannot agree upon and delete others' world views. What we should do is introduce and welcome new thinking from others [elements] in order to think about what else [reality] could actually be. One way to get ahead is to remove judgment. Judgment is a carrier for the compound (I agree, I don't agree...), but it can also restrict it, because when we only let through, in our discussions, what we feel is a correct statement to ourselves, it lessens the chances of a possible way forward becoming part of the element that could create a new compound later.

The new compound might make us see the problem in a different way......



Claire

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/11/2003 10:58 PM by DaVinci


1. What’s the difference between Morpheus and Agent Smith?

Suppose you are Neo and you have heard both Morpheus and Agent Smith make eloquent speeches about how they see the world, about their reasons, about their feelings. Based on that you have to answer a fundamental question, that is: Who is real? Who is conscious?
Morpheus? Agent Smith? Both? None?
All you can say is that you know that you are conscious and that’s all. You cannot say the same about other people. You see them, you smell them, you touch them, and you talk to them. They talk back to you and it seems they react as conscious beings would do. But how can you be sure? How can another person prove to you that he/she really exists as a being having emotions and volitions?
If, in a hardware or software simulation of a human being, words and gestures are produced by lines of programmed software, a similar process occurs when you dream. In your dreams there are other people with whom you interact, and they appear as conscious as persons in real life, but it's only your brain simulating them. So where is the difference? If you cannot tell a machine simulating consciousness from a really conscious machine, if you cannot tell a dreamed person from a real one, then how can you judge what consciousness is in the first place?

2. What’s the difference between Zion and The Matrix?

Suppose that you are Neo again. You are sound asleep and you are dreaming. While you dream, the dreamscape is what is real: maybe the sun is shining and you feel the summer breeze. It seems real. The images of persons that populate the dreamscape are like real persons. They interact with you as expected, they get angry if you annoy them, they even laugh at your jokes. Well, everything seems coherent. Maybe deep inside you feel that this reality is not solid enough, that there must be something else, but then you reject the whole idea as nonsense…
Suddenly you awake and perceive, with sadness or relief, that what you thought was reality was just another dream.
So now you are in the real world and must do the things real people do: brush your teeth, eat your cereal and crawl to your boring workplace, right? Not today. Today events will occur that will end with you taking a red pill and...
Suddenly you awake and perceive, in excruciating agony, that what you thought was reality was nothing more than a massive computer simulation.
Hey, but now it's for real! Now you know what The Matrix is: it's the world that has been pulled over your eyes to blind you from the truth. Exactly, so how can you tell when the blindfold has been removed?
You were fooled by your brain dreaming reality, then you were fooled by an electronic brain simulating reality. Now, how can you be sure that you are not being fooled by this coolly dressed man named Morpheus? Where does it all end?
If you accept that a particular reality is a dream or a simulation, what stops you from accepting that the world in which this reality is being simulated is not itself another simulation?
Could it not be that The Matrix is but an insignificant part of realities inside realities?
…you dream that a computer simulates that you dream a dream…

I hope I made myself understood, because English is not my first language. :)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 3:36 PM by PeterLloyd


At last! Somebody who understands what The Matrix is really about. Yes, the world that contains Morpheus and the Nebuchadnezzar is another virtual reality, a 'meta-Matrix'. There is, ultimately, no physical reality. There is only a mental reality. There is a meta-mind that governs the manifest world. The world we are in right now is a virtual reality generated inside our consciousness by the metamind.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 5:52 PM by Dimitry V


Hmm... Holodeck stories in ST: The Next Generation dealt with this first, don't you think?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 5:54 PM by Dimitry V


I've used a variation of this since 1994 or so, to demonstrate that omniscience is impossible in any reality system.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 6:11 PM by subtillioN


There is a meta-mind that governs the manifest world. The world we are in right now is a virtual reality generated inside our consciousness by the metamind.


If you like this then you would love Stanislaw Lem's "Futurological Congress". It contains many nested levels of illusion from which the protagonist is trying to escape.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 6:49 PM by Dimitry V


"The Matrix" and "The Vortex" in Dr. Who stories also relates to similar ideas. And Mark Twain wrote a story in the same vein, about a guy who has a Dream that seems like 30 years or something, then wakes up. And wonders if he's still dreaming...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 7:02 PM by subtillioN


There is a short story in Stanislaw Lem's "The Cyberiad", a story within a story, about a 'dream cabinet' that a user enters and gets stuck in a nested series of dreams. When he finally does manage to wake up, he thinks he is still in a dream, so he enters a new dream, thinking that is the way to wake up. Thus he gets stuck for eternity. It is a very bizarre story.

I have had a nested series of dreams myself. I kept thinking that I had awoken, only to find something that would betray the dream-state I was in, at which point I would wake again into another dream. This continued through about six nested dreams, until I was terrified that I would be stuck forever in a catatonic dream-state. I finally woke up screaming and terrified.

Weird.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/13/2003 10:48 PM by DaVinci


When I was a child I had sleep paralysis once in a while. In the middle of the night my mind would wake up but my body remained asleep. It was terrifying at first. You cannot move, not a single muscle of your body will respond, you see nothing, you hear nothing. It's only you being conscious. But then I learned that the whole experience was very interesting indeed, because I discovered that while in this state I could start lucid dreams! I could just start imagining any scene, any plot, and the dream would begin. It was a lot of fun. The strange thing was that although I always started a dream in control of it, eventually the dream would end up controlling me, like any normal dream. At a certain point I would always forget that I was dreaming, that I was the dream master.

There is, ultimately, no physical reality. There is only a mental reality. There is a meta-mind that governs the manifest world. The world we are in right now is a virtual reality generated inside our consciousness by the metamind.


That's exactly my point. When dreaming, experiences do not arise from physical stimuli. If all sensory input to a conscious being ceased, would subjective experience just fade away? Or would this conscious being create a virtual reality of its own and continue experiencing? Would it create an entire universe of its own, complete with physical laws, logic, and internal coherence? If this can happen, then it could be said that a given reality exists only because there is a consciousness willing to experience it.
In this way, the world has not been pulled over your eyes to blind you from the truth. You created the world. Indeed, there is not even a You, not in the normal sense. The notion of You as an entity separate from the "others" does not make sense in this context. In a dream there are no others, just your consciousness experiencing the dream.
There is only You.
But, like in a dream, you just don't remember.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/14/2003 12:09 AM by Dimitry V


When dreaming, experiences do not arise from physical stimuli.


If you're dreaming and someone says "telephone", for example, a telephone will appear in your dream; your mind will find a way to insert one semi-logically into the dream. So you are not detached from stimulus just because you don't notice it.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/14/2003 8:29 AM by DaVinci


you are not detached from stimulus just because you don't notice it.


But what if you could detach yourself from any source of external stimulus: no hearing, no sight, not even the stimuli from your own body like heartbeats or breath? Would you just fall into unconsciousness?
Suppose it were possible to build a conscious artificial intelligence inside a box. The only external input this electronic brain receives comes from a microphone connected to the box. The only output the device has is a speaker attached to the box.
You talk to the machine through the microphone, and the machine can reply to you through the speaker.
Now, disconnect the microphone. You have detached the machine from any external source of stimulus. You have a conscious mind trapped inside a box.
My question is: would this consciousness just fade away? Would the voice from the speaker eventually just disappear?
Or would it begin dreaming?
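
As a playful toy sketch of that box (a hypothetical BoxedAgent class, with no claim whatsoever about actual consciousness), one could model it as an agent that echoes its "microphone" input while connected and, once the input is cut, keeps producing output by recombining what it remembers:

import random

class BoxedAgent:
    def __init__(self):
        self.memory = []                      # everything it has ever "heard"

    def step(self, heard=None):
        if heard is not None:                 # microphone still connected
            self.memory.append(heard)
            return "echo: " + heard
        if not self.memory:                   # nothing to dream with
            return ""
        picked = random.sample(self.memory, k=min(3, len(self.memory)))
        return "dream: " + " ".join(picked)   # microphone disconnected

agent = BoxedAgent()
for phrase in ["hello", "red pill", "blue pill"]:
    print(agent.step(phrase))
for _ in range(3):
    print(agent.step())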

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/14/2003 10:11 AM by subtillioN


But what if you could detach yourself from any source of external stimulus: no hearing, no sight, not even the stimuli from your own body like heartbeats or breath? Would you just fall into unconsciousness?


This is called sensory deprivation, and many experiments have shown that you will go into intense and bizarre hallucinations.

The question is whether consciousness could form in the first place without sensory contact with objective reality. I think the answer is an obvious NO. Consciousness relies entirely on memory. Therefore, no memory, no consciousness.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/14/2003 12:12 PM by Thomas Kristan


I don't think so.

I think no sensory input or observing of memories is needed for consciousness to appear.

A pure consciousness is possible, I guess.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/14/2003 12:27 PM by subtillioN


I think no sensory input or observing of memories is needed for consciousness to appear.


Is that the extent of your argument?

Everything that we experience as consciousness is based on memory. That is why a newborn baby is FAR less conscious than an experienced child or a wise adult. You have to train your neurology on external reality (or some other input source) in order to form a representation of that source.

A pure consciousness is possible, I guess.



A pure consciousness with nothing to think with nor to think about? Nothing in, nothing out.

Memory is the medium for consciousness, like paint for the artist.

In a limited sense you are correct if you build the brain intact with 'artificial' memories, but the knowledge of the proper patterning of these memories and trained neurology comes from our experience of reality. Thus this argument still relies on memory to form consciousness.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/14/2003 5:31 PM by subtillioN


I think no sensory input or observing of memories is needed for consciousness to appear.


It is also a fact that neurons that do not receive stimuli die off. This is what ensures that the evolving networks that do form are useful and in contact with reality. In effect, stimuli are the fitness function of the evolution of the nodes in the network and of the modular networks themselves. Stimuli are what form and stabilize the evolving networks. Remember that even in the womb the brain is receiving external stimuli.
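
As a toy illustration of that pruning idea (purely a sketch in Python, with made-up decay, boost, and threshold parameters, not a biological model), connections that keep receiving stimulus persist while unstimulated ones decay below a survival threshold and disappear:

import random

def prune_by_stimulus(weights, stimulated, decay=0.8, boost=0.2, threshold=0.05):
    """weights: dict mapping (pre, post) edges to strengths;
    stimulated: the set of edges that received input this step."""
    survivors = {}
    for edge, strength in weights.items():
        strength = strength * decay + (boost if edge in stimulated else 0.0)
        if strength >= threshold:        # below threshold the connection "dies off"
            survivors[edge] = strength
    return survivors

# A small random network; only three edges keep receiving stimulus.
weights = {(i, j): random.random() for i in range(5) for j in range(5) if i != j}
for _ in range(20):
    weights = prune_by_stimulus(weights, stimulated={(0, 1), (1, 2), (2, 3)})
print(sorted(weights))   # essentially only the stimulated pathway survives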

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/16/2003 2:50 PM by Thomas Kristan


Remember that even in the womb the brain is receiving external stimuli.


But they don't _need_ to come from outside. They may be autogenerated, can't they?

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/16/2003 4:58 PM by subtillioN


But they don't _need_ to come from outside. They may be autogenerated, can't they?


Outside is relative in this case. A neuron must have external stimulation to direct its growth and maintain the connectivity that forms a higher-level functional aggregate. This stimulation cannot be auto-generated, because it must be directed to the sensory periphery of the cell to stimulate its axonal/dendritic growth. A larger network or module must likewise have connections external to itself to maintain its viability within the higher-level functionality. The externality in some of these cases can be internal to the brain, happening between modules and neurons, but the neurons and modules at the sensory periphery of the brain must receive input that is absolutely external to the brain to maintain their own viability within the organism. The external input forms patterns which define the internal connection architecture, with its own inter-module connections that define its own higher-level functionality. This is why the brain rapidly refunctionalizes in response to changes in input patterns. With a static sensory input the brain would functionalize to nothingness; there would be none of the pattern training required to form the pattern-recognition mechanisms of conscious sensation.

Throughout the evolution of the mind there has never been a cutoff from the external world. According to the laws of thermodynamics, closing off the system wouldn't allow it to evolve in the first place. Without some connection to the external world (a state that is virtually impossible to achieve anyway), the first level of pattern-recognition architecture would have no source on which to train its neurology. It would have no network weights nor stimulus-strengthened connections, which are the substrate for consciousness.

The development of the mind is an evolutionary process whose criterion for success is the algedonic feedback stimulus from external reality. It is not a simple process of enacting the 'genetic blueprints' of DNA to construct the sensory/memory machinery of intact, complete consciousness; the phrase 'genetic blueprints' is a misnomer. It is a quite different process altogether, highly nonlinear and highly dependent on environmental conditions. This is what gives biology and consciousness their high level of adaptability.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 4:12 AM by Thomas Kristan


I agree, this was the way consciousness evolved.

But it is also very clear to me that you can cut off all the exterior and plug in the Mandelbrot set.

Where the pleasure is defined by the coloration, from 1 to 1000, for example.
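
For concreteness, here is a minimal sketch of such a coloration, using the standard escape-time iteration count, capped at 1000, as the 'pleasure' value (an illustrative reading of the idea, nothing more):

def mandelbrot_color(c, max_iter=1000):
    """Return the escape-time 'coloration' of the point c, from 1 to max_iter."""
    z = 0j
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > 2.0:        # the point escapes; color it by how quickly
            return n
    return max_iter             # treated as inside the set

# Sample a small grid of the complex plane.
for im in range(2, -3, -1):
    print([mandelbrot_color(complex(re / 2.0, im / 2.0)) for re in range(-4, 3)])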

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 10:30 AM by subtillioN


But it is also very clear to me that you can cut off all the exterior and plug in the Mandelbrot set.


Yes, but the Mandelbrot set would be external to the neurons involved.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 10:52 AM by sushi101


I was about to say...

The Mandelbrot set IS external input at one point or another, but just like genes it might come in chunky bits.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 11:07 AM by DaVinci


According to the laws of thermodynamics, closing off the system wouldn't allow it to evolve in the first place


Am I wrong, or is this exactly what the theory of the Big Bang proposes?

But it is also very clear to me that you can cut off all the exterior and plug in the Mandelbrot set.


Yes, but the Mandelbrot set would be external to the neurons involved.



I was thinking of a situation where the neurons and even the Mandelbrot set would be a product of consciousness. All the laws of physics, all mathematics and logic, all matter and energy would be just a thought inside a mind.
But I realized that trying to prove this equals trying to prove the existence of God.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 11:13 AM by subtillioN


According to the laws of thermodynamics, closing off the system wouldn't allow it to evolve in the first place

Am I wrong, or is this exactly what the theory of the Big Bang proposes?


The Big Bungle Theory is deeply flawed. It is a total myth.

I was thinking of a situation where the neurons and even the Mandelbrot set would be a product of consciousness. All the laws of physics, all mathematics and logic, all matter and energy would be just a thought inside a mind.
But I realized that trying to prove this equals trying to prove the existence of God.


Right, it is impossible to prove, and it is a hypothesis without any observational evidence to explain.

What we know about consciousness is that it requires a 'physical' or non-mental structural substrate from which it can be an emergent property.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 1:58 PM by Thomas Kristan


Right, it is impossible to prove, and it is a hypothesis without any observational evidence to explain.


How is it, then, that you are so sure?! :-P

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 2:25 PM by subtillioN


How is it, then, that you are so sure?! :-P


What evidence is there? How do you think that it would ever be provable?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 2:59 PM by subtillioN


How is it, then, that you are so sure?! :-P


Sure of what exactly?

I merely take the world as I see it. Since there is no evidence that the physical world is constructed from a deeper layer of mental processes, and since every case of consciousness that we know of is emergent from a physical substrate, there is no reason to conclude that the fundamental level of reality is mental. We know of no process by which mind can arise without a physical substrate. The mental monism/dualism hypothesis is simply unnecessary and useless because it explains no phenomena.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/17/2003 3:09 PM by subtillioN


The mental monism/dualism hypothesis is simply unnecessary and useless because it explains no phenomena.


This, IMHO, is the very reason that Peter Lloyd and others are pushing so hard for the existence of psi phenomena. They assume that if there are mental phenomena that cannot be understood by physics, this will prove that the substrate is mental.

((This also explains the infatuation with so-called quantum indeterminacy as well.))

It is a tough sell, though, because the majority of scientists and the population in general are immune to the phenomena (whatever they may be). Perhaps if they couch it in spiritual/religious language they may have an easier time, and perhaps the number of believers will even be on their side. ;)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/18/2003 10:38 AM by sushi101


psi phenomena



I do "believe" in the phenomena; it is the explanation that I disagree with.

There will be a thought police someday, but their methods will not be in any way mystical.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/18/2003 11:07 AM by subtillioN


I do "believe" in the phenomena; it is the explanation that I disagree with.


Right, it is a matter of definitions.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 9:24 PM by blue_is_not_a_number


They assume that if there are mental phenomena that cannot be understood by physics, this will prove that the substrate is mental


The simple fact that we see colors is something that cannot be grasped with any mathematical description. We had that discussion just a few days ago. What is so difficult to understand about that? As long as we don't acknowledge this, all philosophies fall back onto materialism. Science may be able to do so for another millennium, but what's the point? Better to acknowledge the fact that consciousness does not fit inside a mathematical concept of reality now, and make progress on multiple approaches!

www.occean.com

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 10:10 PM by subtillioN


The simple fact that we see colors is something that cannot be grasped with any mathematical description.


Who's talking about mathematics? I am talking about non-linear neural network functionality.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 10:58 PM by blue_is_not_a_number


Who's talking about mathematics? I am talking about non-linear neural network functionality


Hmmm. Are you not talking about a "physical substrate", in the context of physics defined by mathematical formulas? Are you not talking about "implementing" consciousness in the sense that human beings are conscious?

And "non-linear neural network functionality" surely does not sound like you are claiming an exception to mathematical describability. And this means: No basis for seeing colors. Like trying to catch light in a bottle. A mismatch of concepts being used. It doesn't mean that bottles are useless, just that you can't catch light with them.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 11:21 PM by subtillioN


Hmmm. Are you not talking about a "physical substrate", ...


It depends on what you mean by "physical substrate".

...in the context of physics defined by mathematical formulas?


Physical reality is simply quantified by the empty equations of modern physics; it is not understood through them. There is a vast difference between a mathematical description and a qualitative understanding of what is physically happening.

Are you not talking about "implementing" consciousness in the sense that human beings are conscious?


I am simply talking about an understanding of the actual mechanisms of consciousness.

And "non-linear neural network functionality" surely does not sound like you are claiming an exception to mathematical describability. And this means: No basis for seeing colors.



A mathematical proof against the physical basis of describing the qualitative aspects of consciousness is simply at the wrong level of description. The only level that is relevant is the functional level. Since consciousness is not a simple mathematical phenomenon, mathematics is useless as a descriptive realm. Try disproving it (or perhaps just understanding it) at the neural network and functional aggregate module level.

Like trying to catch light in a bottle. A mismatch of concepts being used. It doesn't mean that bottles are useless, just that you can't catch light with them.


Have you not heard of lasers and fiber optics? All you need is a bottle with a one-way mirror for a surface.

The mismatch is between finite, simple, linear mathematics (even at its most complex it is elementary compared to reality) and non-linear, infinitely complex reality itself.


Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 10:41 PM by subtillioN


As long as we don't acknowledge this, all philosophies fall back onto materialism.


Are you proposing a dualism? Do you suppose that there is a mind/body split?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 11:18 PM by blue_is_not_a_number


Are you proposing a dualism? Do you suppose that there is a mind/body split?


No. As a real-life example, take the visual 3D image that we see consciously.

We can describe mathematically the shapes which we see, and identify the colors by name. However, we can't describe what the color looks like. In this regard, we have to rely on the assumption that when you use the name "blue", it means the same to you as it means to me, without being able to specify what exactly we see. However, both characteristics, those that we can describe mathematically and those that we can't, are there in the same conscious 3D experience. There are not two 3D images, just one.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 11:26 PM by subtillioN


We can describe mathematically the shapes which we see, and identify the colors by name. However, we can't describe what the color looks like.


This simply illustrates a problem with language itself.

As soon as we can directly communicate and compare the actual neural patterns which physically create the conscious experience of 'blue', then we will be able to know the contents of other people's minds.

This will be similar to trading software over a peer-to-peer network, only much more interesting, if you can imagine.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 11:47 PM by blue_is_not_a_number


I think your last two messages result in the same point, so I'll combine my reply into one message:

Physical reality is simply quantified by the empty equations of modern physics; it is not understood through them. There is a vast difference between a mathematical description and a qualitative understanding of what is physically happening.


and

This simply illustrates a problem with language itself.


Exactly. Now what we need to understand is that this is not a trivial problem. As long as we limit our research into the basics of reality to mathematical descriptions, we will see it through a filter. Qualitative understanding of our life cannot be built 'on-top-of' mathematical descriptions. The tricky point is to understand all the implications this has!

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:02 AM by subtillioN


As long as we limit our research into the basics of reality to mathematical descriptions, we will see it through a filter.


You are ABSOLUTELY CORRECT, but why would we ultimately limit ourselves to mathematical descriptions of reality? Physics has simply gotten stuck in the empty quantitative mode. Soon the new qualitative theory will be introduced, and once again the qualitative mode will swing back into action at the core level.

see http://www.anpheon.org for introductory details


Qualitative understanding of our life cannot be built 'on-top-of' mathematical descriptions. The tricky point is to understand all the implications this has!


I completely agree, that is why I have been exposing the new qualitative theory. Under this theory ALL of physics is understandable qualitatively.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:21 AM by blue_is_not_a_number


Physics has simply gotten stuck in the empty quantitative mode. Soon the new qualitative theory will be introduced and once again the qualitative mode will swing back into action at the core level.


I completely agree, that is why I have been exposing the new qualitative theory. Under this theory ALL of physics is understandable qualitatively.


Hmm. Maybe what you mean by "qualitative theory" is what I would call an 'ontological explanation', an understanding of what something is, resulting in an understanding of why it behaves the way it does.

And maybe it is not what I mean by "qualitative". Perhaps you can clarify: how would a qualitative theory possibly address the characteristic of how a color looks in conscious experience?


Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:42 AM by subtillioN


Hmm. Maybe what you mean by "qualitative theory" is what I would call an 'ontological explanation', an understanding of what something is, resulting in an understanding of why it behaves the way it does.


Exactly, this is also termed a 'metaphysical' explanation. The point is that it is an actual understanding of what is physically happening. You can visualize every single relevant aspect of the causal process. It IS understandable at the core level despite what the quantum theorists claim.

How would a qualitative theory possibly address the characteristic of how a color looks in conscious experience?


The problem with the attempt of consciousness to understand the deeper-level mechanisms of consciousness is that the mechanisms are orders of magnitude more complex than consciousness itself. Consciousness is like the very tip-top of a causal hierarchical pyramid which reaches its simplistic fine apex with the linearity of language itself. The tip is simply FAR too small to hold the base within its abstract, representational grasp. This is expressed in the central paradox of A.I., which states "Processes simple enough to be understood are not complex enough to behave intelligently." Think about it: how many continuous simultaneous chains of cause and effect can you keep track of at once? The simultaneous processes of the mechanisms of consciousness number in the billions. We can merely visualize a minuscule fraction of these processes.

It is quite difficult to get a grasp on what it should 'feel' or 'look' like to be these processes in action.

Much progress has been made in cognitive science, however; see the "Is the consciousness a program!" thread for details and links.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:56 AM by blue_is_not_a_number


The problem with the attempt of consciousness to understand the deeper-level mechanisms of consciousness is that the mechanisms are orders of magnitude more complex than consciousness itself. Consciousness is like the very tip-top of a causal hierarchical pyramid which reaches its simplistic fine apex with the linearity of language itself. The tip is simply FAR too small to hold the base within its abstract, representational grasp. This is expressed in the central paradox of A.I., which states "Processes simple enough to be understood are not complex enough to behave intelligently." Think about it: how many continuous simultaneous chains of cause and effect can you keep track of at once? The simultaneous processes of the mechanisms of consciousness number in the billions. We can merely visualize a minuscule fraction of these processes.


I understand the argument of complexity. As others and I have discussed previously on this forum, several months ago, complexity cannot achieve everything. A computer program can be as complex as can be conceived; it still cannot create, for example, gravity or light, unless you attach additional devices to the computer. For lack of a better word, I call this a qualitative limitation.

Now I am saying that any process which can be described mathematically has the qualitative limitation of not being able to create consciousness, unless there is also something else going on. This means that the mathematical formulas of physics are not causally closed, not self-contained. I hope I was now able to convey the crucial point.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 1:04 AM by subtillioN


This means that the mathematical formulas of physics are not causally closed, not self-contained.


I agree that no mathematical nor language description of reality can be *complete* because physical reality is infinitely divisible and mathematics is only indefinitely divisible, but this doesn't stop us from understanding visually what is happening to a close enough approximation.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 1:25 AM by blue_is_not_a_number


The brain itself is HUGELY complex. Far more complex than a simple mathematical chaos set of algorithms. Is this what you mean by 'complexity'?


For a computer program, 'complexity' would be the number of methods and their interdependency; for a neural network, the number of neurons, the number of connections between them, and the logical complexity of the structure of their connections, all of which can be described mathematically.
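
As a minimal sketch of the kinds of quantities meant here (a made-up toy adjacency list, with simple graph density standing in for the 'logical complexity of the structure'):

# A made-up toy network as an adjacency list of directed connections.
network = {
    "n1": ["n2", "n3"],
    "n2": ["n3"],
    "n3": ["n1"],
    "n4": [],                # an isolated node with no outgoing connections
}

num_neurons = len(network)
num_connections = sum(len(targets) for targets in network.values())
# One crude structural measure: the fraction of possible directed connections used.
density = num_connections / (num_neurons * (num_neurons - 1))

print(num_neurons, num_connections, round(density, 3))    # 4 4 0.333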

I agree that no mathematical nor language description of reality can be *complete* because physical reality is infinitely divisible and mathematics is only indefinitely divisible, but this doesn't stop us from understanding visually what is happening to a close enough approximation.


I am astonished that you agree, while I don't understand your reasoning based on 'divisibility'. What is visual understanding? Something related to having an 'insight'?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 1:48 AM by subtillioN


For a computer program, 'complexity' would be the number of methods and their interdependency; for a neural network, the number of neurons, the number of connections between them, and the logical complexity of the structure of their connections, all of which can be described mathematically.


There is a great discrepancy between description and action. Do you think a linear linguistic/mathematical description of the brain would act the same as a living, physical, simultaneous, HIGHLY parallel and non-linearly connected brain in sensory contact with reality?

I think not.

Even so, do you know of any description or simulation that is even close to modeling the complexity of all the relevant electro-chemical processes in the brain? I think they are still orders of magnitude too simple.

I am astonished that you agree, while I don't understand your reasoning based on 'divisibility'. What is visual understanding? Something related to having an 'insight'?


It is simply visualizing what is happening at the causal level. This is the most complete type of basic-level physical understanding available.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 2:25 AM by blue_is_not_a_number


There is a great discrepancy between description and action. Do you think a linear linguistic/mathematical description of the brain would act the same as a living, physical, simultaneous, HIGHLY parallel and non-linearly connected brain in sensory contact with reality?

I think not.


Neither do I. The word that gets my attention here is "living".

Even so, do you know of any description or simulation that is even close to modeling the complexity of all the relevant electro-chemical processes in the brain? I think they are still orders of magnitude too simple.


They are orders of magnitude too simple, and also they lack the ability to address anything like seeing colors. As far as complexity is concerned, they don't go far enough, and as far as seeing colors is concerned, they are not in the same context.

It is simply visualizing what is happening at the causal level. This is the most complete type of basic-level physical understanding available.


Alright, then how does one address what I call 'conscious-how', that is, the characteristic of how we see a color consciously, its 'look', or, when hearing consciously, the 'sound'? If we are saying that it can't be addressed linguistically/mathematically, would you also say that something is actually happening in addition to that which more or less directly corresponds to the mathematical description?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 3:25 AM by subtillioN


The word that gets my attention here is "living".


I thought it would. However, there is no fundamental distinction between ‘living’ and ‘dead’ matter. The word ‘living’ simply denotes a certain level and type of complexity.

They are orders of magnitude too simple, and also they lack the ability to address anything like seeing colors.


The problem is two-fold: the complexity reverse-engineering problem and the complexity communication problem. Even if they did know what type of complexity and architecture was responsible for the experience of color, to verbally explain it to you or me wouldn't transmit the full complexity of the understanding in its full, causal, active, simultaneous glory. It would be to force the massively parallel leviathan through the linear eye of the needle, so to speak.

Once we are complex enough in our understanding capabilities we will be able to understand and communicate these things much better, but then we will have MUCH more complex thoughts which are that much harder for us to understand. Consciousness is always the top level of the causal pyramid of cognition and thus its own mechanisms will always be out of the realm of its comprehensive understanding.

Alright, then how does one address what I call 'conscious-how', that is, the characteristic of how we see a color consciously, its 'look', or, when hearing consciously, the 'sound'? If we are saying that it can't be addressed linguistically/mathematically, would you also say that something is actually happening in addition to that which more or less directly corresponds to the mathematical description?


Yes, quite a lot is happening actually (VAST understatement). I don’t know what exactly, but I know that the complexity of the processes in the brain must be sufficient to produce our conscious experiences. I think there is a lot of knowledge actually in existence about these phenomena. Try reading Daniel Dennett’s “Consciousness Explained” for example. It will give you a good feel for what is going on with the mechanisms of consciousness.

...going to sleep for a while...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:14 PM by blue_is_not_a_number


Yes, quite a lot is happening actually (VAST understatement). I don’t know what exactly, but I know that the complexity of the processes in the brain must be sufficient to produce our conscious experiences.


This expresses an a priori assumption rather than knowledge.

I think there is a lot of knowledge actually in existence about these phenomena. Try reading Daniel Dennett’s “Consciousness Explained” for example. It will give you a good feel for what is going on with the mechanisms of consciousness.


I tried reading it. I think Daniel Dennett completely misses the point when it comes to qualia. He only talks about the intellectual aspects of consciousness, looking through the conceptual filter of information processing. If you have read my text at www.occean.com, in the terms defined there, Daniel Dennett only talks about 'conscious-size', not about 'conscious-how'. At the beginning of some of his other texts he tries to capture the nature of qualia ('conscious-how'), but each time I'm aware of, he soon switches his topic back to conscious information processing ('conscious-size'), probably without noticing himself that this is a change of topic, or perhaps under the impression that it would mean coming back to realities. (As if qualia were somehow not real.)

I'll be happy to discuss those parts of “Consciousness Explained” which relate to qualia.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:31 PM by subtillioN


This expresses an a priori assumption rather than knowledge.


The only other recourse is to assume a spirit or a mind/body duality. I think that, with the obviously huge amount of non-understood architecture in the brain, resorting to the non-explanation of a spirit or mind/body duality is entirely unnecessary.

How do you account for consciousness? What do you think its mechanisms are? Are you holding out for a spirit of some kind?

He only talks about the intellectual aspects of consciousness, looking through the conceptual filter of information processing.


Not true. He talks about how the illusions and self-monitoring processes in the brain can create the experience of consciousness. He also talks about how the internal dialog of consciousness could have been developed.

It has been a long time since I have even looked at the book. I will have to dig some of the data up and we can talk about it in detail, when I get around tuit.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:46 PM by blue_is_not_a_number


It has been a long time since I have even looked at the book. I will have to dig some of the data up and we can talk about it in detail, when I get around tuit.


Perhaps it will be interesting to do that. Don't forget that I'm not talking about things like playing chess or pattern recognition. I'm talking about those aspects of consciousness called qualia in philosophy.

I'll come back later, then I will also address your other points.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:54 PM by subtillioN


I'm talking about those aspects of consciousness called qualia in philosophy.


So you are interested in strategies for coming to grips with HOW the mechanism of the brain can be experienced as specific items of consciousness? You are concerned with how experience itself can be physically achieved?

These points I will address.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 1:07 AM by subtillioN


I understand the argument of complexity. As others and I have discussed previously on this forum, several months ago, complexity cannot achieve everything.


What do you mean by this? The brain itself is HUGELY complex. Far more complex than a simple mathematical chaos set of algorithms. Is this what you mean by 'complexity'?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/20/2003 12:36 PM by blue_is_not_a_number


The problem is two-fold: the complexity reverse-engineering problem and the complexity communication problem. Even if they did know what type of complexity and architecture was responsible for the experience of color, to verbally explain it to you or me wouldn't transmit the full complexity of the understanding in its full, causal, active, simultaneous glory. It would be to force the massively parallel leviathan through the linear eye of the needle, so to speak.

Once we are complex enough in our understanding capabilities we will be able to understand and communicate these things much better, but then we will have MUCH more complex thoughts which are that much harder for us to understand. Consciousness is always the top level of the causal pyramid of cognition and thus its own mechanisms will always be out of the realm of its comprehensive understanding.


It doesn't seem to me that your response reflects the full depth of the so-called "hard" problem of consciousness: it is not a problem of communication, of explaining something to someone, or simply of _quantitative_ complexity of reverse-engineering. Rather, there is a problem of using adequate concepts necessary to understand something in the first place. The point is that any concept based on mathematical models of reality is inherently incapable of addressing all the qualities of consciousness, and these qualities are not independent of each other, otherwise we wouldn't be speaking about them. This is a substantial challenge, since we currently do not accept any other models as relating to what we think of as the only basis of reality: physics.

What do you mean by this? The brain itself is HUGELY complex. Far more complex than a simple mathematical chaos set of algorithms. Is this what you mean by 'complexity'?


In many cases, complexity is just quantitative complexity. If we mean another kind of complexity, we need to specify that. In some sense, that is what I'm trying to do: being specific about the kind of approach needed, or at least being specific about which approach will not suffice. (The latter probably being the first step towards the former.)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/25/2003 12:00 AM by subtillioN


It doesn't seem to me that your response reflects the full depth of the so-called "hard" problem of consciousness: it is not a problem of communication, of explaining something to someone, or simply of _quantitative_ complexity of reverse-engineering.


Verbal linear communication is only part of the consciousness problem. The other problem is the huge difference in complexity between the network architecture of the brain and the representational capabilities of consciousness. Consciousness is simply not up to the task of representing the MASSIVELY parallel activity of the brain.

How many constant, interacting chains of causality can you keep track of or imagine simultaneously? ...several billion?

case in point...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/25/2003 12:47 AM by blue_is_not_a_number


Verbal linear communication is only part of the consciousness problem. The other problem is the huge difference in complexity between the network architecture of the brain and the representational capabilities of consciousness.


I agree. In regard to the problem of verbal communication, I'd like to add/clarify that this is also a problem of available forms of representation for formulating theories.

In regard to the problem of the complexity of network architecture, this is of course a challenge for neurophysiologists. I'm not sure what you mean by "representational capabilities of consciousness" in this context.

Consciousness is simply not up to the task of representing the MASSIVELY parallel activity of the brain.


As above, I'm not sure what exactly you are aiming at. Are you talking about how consciousness 'functions', or about how to do research in this area?

...otherwise, I'm (re-)reading Daniel Dennett's "Consciousness Explained", which you have recommended, so that we can discuss it (unless you already have passages or quotes that you would like to point at). At this point, it seems to me that this book is largely written as a response, or in the context of, simplistic (dogmatic) dualistic positions and to epiphenomenalism and variations thereof. It doesn't seem to directly address the kind of reasoning which I have outlined on my homepage, one which concludes that the mathematical description of reality is not causally closed, yet isn't dualistic (as I have explained here). So it will take a little time to find a good way to address the book "Consciousness Explained" from my perspective. Since I am also busy otherwise, this will take a few more days. A further complication is that Dennett often argues from a pragmatic point of view of doing research under given circumstances, rather than in terms of looking for the basic truth about consciousness. In a sense, he wants to start somewhere. I don't think I would take issue with that, however he is not only arguing in favor of his favored approach, but also vehemently arguing against any approach focussing more on qualia. Here I respectfully yet certainly disagree. I will come back to this in a few days.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/25/2003 2:06 AM by subtillioN


As above, I'm not sure what exactly you are aiming at.


Simply that the imagination is limited. It is not powerful enough to represent completely the mechanisms of consciousness.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/25/2003 4:06 PM by subtillioN


At this point, it seems to me that this book is largely written as a response, or in the context of, simplistic (dogmatic) dualistic positions and to epiphenomenalism and variations thereof.


He is trying to clear the air of the common confusions.

It doesn't seem to directly address the kind of reasoning which I have outlined on my homepage, one which concludes that the mathematical description of reality is not causally closed, yet isn't dualistic (as I have explained here).


Consciousness simply isn't a mathematical phenomenon. It can be quantified, to an extent, but it is not fundamentally mathematical.

A further complication is that Dennett often argues from a pragmatic point of view of doing research under given circumstances, rather than in terms of looking for the basic truth about consciousness.


What do you mean by 'basic truth'?

In a sense, he wants to start somewhere. I don't think I would take issue with that, however he is not only arguing in favor of his favored approach, but also vehemently arguing against any approach focussing more on qualia.


The problem with qualia is that they evolved to represent external reality. Qualia can be quite illusory, and they are generally at too high a level to base conclusive arguments upon.

Here I respectfully yet certainly disagree. I will come back to this in a few days.


I think his main focus was to create 'intuition pumps' to help the imagination grasp the vast complexity of how the physical brain could be experienced from within as consciousness.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/30/2003 2:31 AM by blue_is_not_a_number


He is trying to clear the air of the common confusions.


That's what he (Daniel Dennett) is trying to do. But in many cases he replaces confusion with "denial" (his own word) or avoidance, instead of clarity.

Consciousness simply isn't a mathematical phenomenon. It can be quantified, to an extent, but it is not fundamentally mathematical.


It is not so easy to be completely clear on this. Some would argue that physical reality in general is not mathematical, only describable with mathematics. That is, only the description is mathematical, in the first place. In contrast, are you agreeing that consciousness is a phenomenon that does not have a complete mathematical description, not even in principle? I'm asking because then you would have a different position than Daniel Dennett, who argues that consciousness could be simulated on a computer, actually he thinks that a computer could be conscious, not just simulate consciousness (Strong AI).

What do you mean by 'basic truth'?


With "looking for the basic truth" I mean to acknowledge and examine all the qualities of consciousness, so as to understand what will be necessary to understand it completely, instead of selecting and researching only those aspects that can be studied in a quantifiable manner, so to say from the outside, and simply declaring all other qualities to be insignificant, redundant or 'deniable'. I think 'deniable' is a good way to describe how D. Dennett thinks about qualia. (He doesn't _really_ deny the undeniable. ;-) )

The problem with qualia is that they evolved to represent external reality. Qualia can be quite illusory, and they are generally at too high a level to base conclusive arguments upon.


Qualia are very clear when they represent external reality, but of course there are also qualia relating to, for example, visualization (and dreams); and we don't just think, we are also aware of our thoughts (more or less), and of feelings; there is even a certain sense of 'meaning', in various forms, one of which might be the so-called "common sense". ;-)

I agree that the 'messages' conveyed by qualia, the interpretation of qualia, can be illusory, but the events of qualia in themselves are simply facts. The illusions are in the thoughts resulting from perception, not in the qualia. For example, when one sees a color, the object one assumes to be the cause of seeing the color might be a different one than one thinks, or just a reflection. Still, the event of seeing the color, in itself, remains a simple fact. We are just not used to thinking about qualia in themselves; we are used to immediately jumping to the question of 'what-we-see', instead of remaining with the fact 'that-we-see', or 'how-we-see'.

When you say they are at too high a level for conclusive arguments, one could say that the science of physics started at a very high level as well. It is not that long ago that the idea of solid matter turned out to be an illusion, that in fact matter is mostly empty space. Only quite recently in our history was it shown that the attempt to build a flying machine was not an illusory idea, as was previously stated by scientists (in spite of their knowing about birds).

I think his main focus was to create 'intuition pumps' to help the imagination grasp the vast complexity of how the physical brain could be experienced from within as consciousness


He was also looking for short-cuts in the explanation of consciousness, not noticing that his short-cuts were going around the 'goal'. He was arriving somewhere else, and for practical purposes went ahead to redefine his point of arrival as 'consciousness'. In the terms I use, he explains only 'conscious-size' (our awareness of quantifiable information), actually not even the awareness of it, only the fact that we have (and process) information. And he thinks he can short-cut around (or go over) 'conscious-how' (qualia). As a result, although he describes qualia very well, he thinks that from a scientific perspective one should leave it at that. If the idea of short-cutting 'conscious-how' were a valid assumption, then we wouldn't be able to talk about 'conscious-how' in any meaningful way. Wittgenstein or not, the simple statement that we see in color is a meaningful statement of fact, not at all involving a pre-assumption of arbitrary metaphysics. Add to that the statement that 'how' a color looks (or the difference between 'blue' and 'green') cannot be described mathematically, and you have more "proof" than you need.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/01/2003 5:13 PM by subtillioN


It is not so easy to be completely clear on this. Some would argue that physical reality in general is not mathematical, only describable with mathematics. That is, only the description is mathematical, in the first place.


I fall into this category, but would add that there are many types of descriptions of physical reality. Mathematics is only one method of description.

In contrast, are you agreeing that consciousness is a phenomenon that does not have a complete mathematical description, not even in principle?


Yes, because mathematics is not infinitely precise. It is an indefinite procedure for quantifying the continuum. As such, it is inherently incomplete.

I'm asking because then you would have a different position than Daniel Dennett, who argues that consciousness could be simulated on a computer, actually he thinks that a computer could be conscious, not just simulate consciousness (Strong AI).


Simulation is not dependent on mathematical completeness, just on mathematical effectiveness.

subtillioN: What do you mean by 'basic truth'?

blue_is_not_a_number: With "looking for the basic truth" I mean to acknowledge and examine all the qualities of consciousness, so as to understand what will be necessary to understand it completely, instead of selecting and researching only those aspects that can be studied in a quantifiable manner, so to say from the outside, and simply declaring all other qualities to be insignificant, redundant or 'deniable'. I think 'deniable' is a good way to describe how D. Dennett thinks about qualia. (He doesn't _really_ deny the undeniable. ;-) )



I agree with your explanation of "looking for the basic truth", but I am a bit perplexed as to why you think Dennett is denying the existence of qualia. Just what qualia does he deny and in what way specifically does he deny it? Can you give examples and perhaps quotes or page numbers?

Qualia are very clear when they represent external reality, …


Clear to our consciousness, yes, but as representations generated for making evolutionarily relevant distinctions they are sometimes inaccurate and quite misleading.

I agree that the 'messages' conveyed by qualia, the interpretation of qualia, can be illusory, but the events of qualia in themselves are simply facts.


Right, as such they are undeniable. Dennett is trying to point to the proper level of explanation which is underneath the phenomena of qualia.

The amalgam of qualia is what consciousness actually is; therefore, in a sense, it really is qualia that we are trying to physically explain. That is why we can't use them in the explanation: the logic would be circular. That is what Dennett is trying to avoid.

The illusions are in the thoughts resulting from perception, not in the qualia.


The qualia are illusions in the sense that they are representations which only appear as they do to make the contrasts and distinctions necessary for survival. They are also illusions in that they appear to exist in the objective (i.e. external) world, but they actually don't. Take the experience of color, for instance. When we look at an object, we can see that it is red. It looks to us as if the object actually were red! We all know that the electromagnetic spectrum isn't really divided up into three primary colors that neatly form a Moebius color wheel linking the higher limits with the lower limits of frequency perception and linking all of these colors to emotional responses. Such a color wheel is a subjective illusion of qualia. It does not exist in objective reality.
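
As an illustrative sketch of that point (with made-up Gaussian sensitivity curves, not real cone data), an entire continuous spectrum is collapsed into just three response numbers, which is roughly why physically different spectra can end up represented as the same 'color':

import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def cone_responses(spectrum):
    """spectrum: a list of (wavelength_nm, intensity) samples."""
    peaks = {"S": 440.0, "M": 540.0, "L": 570.0}      # rough peak positions only
    return {name: sum(i * gaussian(w, mu, 40.0) for w, i in spectrum)
            for name, mu in peaks.items()}

# A whole flat spectrum across the visible range collapses to three numbers.
flat_spectrum = [(w, 1.0) for w in range(400, 701, 10)]
print(cone_responses(flat_spectrum))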

For example, when one sees a color, the object one assumes to be the cause of seeing the color might be a different one than one thinks, or just a reflection. Still, the event of seeing the color, in itself, remains a simple fact.


Dennett was not denying that we actually DO subjectively interpret electromagnetic frequency as color.

We are just not used to thinking about qualia in themselves; we are used to immediately jumping to the question of 'what-we-see', instead of remaining with the fact 'that-we-see', or 'how-we-see'.


What are “qualia in themselves”? This seems to suppose that they are self-caused, or that they are independent of deeper level causal mechanisms. This is not what you had in mind, is it? Even qualia have deeper levels of causal mechanisms. This is the whole point of Dennett’s book.

When you say they are at too high a level for conclusive arguments, one could say that the science of physics started at a very high level as well.


True, but this “higher level” in physics is used to explain an even higher level. In the case of qualia, however, the higher level is often used to deny the explanations at the lower levels. It simply doesn’t work that way in hierarchical reality, so the arguments are moot.

It is not that long ago that the idea of solid matter turned out to be an illusion, that in fact matter is mostly empty space.


And it was even more recent that the idea of 'empty space' turned out to be erroneous, as wave energies were found to be flowing at ALL levels, especially the 'fundamental' quantum levels. The wave energies compose and travel between all of the myriad members of the zoo of the fluidly interconvertible so-called 'fundamental particles'. Thus, it turns out that 'empty space' is more aptly understood as a material, compressible, wave-transmitting fluid.

subtillioN: I think his main focus was to create 'intuition pumps' to help the imagination grasp the vast complexity of how the physical brain could be experienced from within as consciousness

blue_is_not_a_number: He was also looking for short-cuts in the explanation of consciousness, not noticing that his short-cuts were going around the 'goal'.


Short-cuts are absolutely necessary for understanding the vast complexity of the mechanisms of consciousness. Could you explain how he missed the goal of producing effective heuristic bridges for understanding consciousness?

He was arriving somewhere else, and for practical purposes went ahead to redefine his point of arrival as 'consciousness'.


Could you elaborate?

In the terms I use, he explains only 'conscious-size' (our awareness of quantifiable information), actually not even the awareness of it, only the fact that we have (and process) information. And he thinks he can short-cut around (or go over) 'conscious-how' (qualia).


I think the problem is in the association of ‘conscious-how' with ‘qualia’. The ‘conscious-how' level is the network architecture level. This problem is similar to trying to describe how a computer is built from the patterns displayed on its screen. It is true that there is no denying that the display patterns exist, but they don’t necessarily give definitive or foundational mechanistic clues or explanations as to how the computer is constructed.

As a result, although he describes qualia very well, he thinks that from a scientific perspective one should leave it at that. If the idea of short-cutting 'conscious-how' were a valid assumption, then we wouldn't be able to talk about 'conscious-how' in any meaningful way.


He is simply trying to show that ‘qualia’ is not the proper level for a description of ‘conscious-how'. That is what he is ‘denying’.

Wittgenstein or not, the simple statement that we see in color is a meaningful statement of fact, not at all involving a pre-assumption of arbitrary metaphysics.


The problem is with the verb ‘see’. It implies the sensory apprehension of objective reality. In this sense, it is true that we do ‘see’ electromagnetic frequencies to a limited extent, but this objective continuum of frequencies is simplified and only ‘representationally observable’ as three ‘primary’ colors. The colors themselves are not ‘seen’ because they do not exist in objective (external) reality. Color is what the mind uses to represent frequency. As a representation, it is a feature of subjective, not objective, reality. Both objective and subjective reality, however, are subsets of physical reality. The difference is one of perspective.

Add to that the statement that 'how' a color looks (or the difference between 'blue' and 'green') cannot be described mathematically, and you have more "proof" than you need.


Proof for what? No pure mathematical description of reality is enough to give anyone an understanding of what is physically happening anyway.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/03/2003 2:57 AM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

A few centuries ago, students of some professions were required to leave their hometown to learn and explore the world before finishing the final steps of their education. Only after several years were they allowed to return to their hometown and start an independent existence. This was for several reasons ("over-determination") and a regular part of their education.

Regarding Dennett's theories, I'm certainly still in the learning stage and not the expert who could present you with all the best quotes and references. I'll do my best, though.

Simulation is not dependent on mathematical completeness, just on mathematical effectiveness.


I take it that you don't have a definite opinion on Strong AI. Dennett, however, argues clearly in its favor. Although he, as usual, tries to avoid making final judgments, I see him doing nothing else than defending Strong AI. However, I personally doubt that he would be willing to replace the part (or system function) of his brain that feels happiness with a computer module, even if that had twice the computing speed (as Ray Kurzweil seems to be suggesting will be possible in the not so distant future). I imagine he would counter this argument by saying that there is no separable part of the brain that would be responsible for 'feeling' or 'happiness', and I would answer that that is not my point.

I agree with your explanation of "looking for the basic truth", but I am a bit perplexed as to why you think Dennett is denying the existence of qualia. Just what qualia does he deny and in what way specifically does he deny it? Can you give examples and perhaps quotes or page numbers?


Yes. All page numbers refer to the "first paperback edition" (in case that matters) of "Consciousness Explained". Pay special attention to his use of the word "seems". One should read these quotes in their context, but I can't quote the whole book here.

On page 372: ('_' refers to _italics_, emphasized printed words)

[Dennett] Philosophers have adopted various names for the things in the beholder (or properties of the beholder) that have been supposed to provide a safe home for the colors and the rest of the properties that have been banished from the "external" world by the triumphs of physics: "raw feels," "sensa," "phenomenal qualities," "intrinsic properties of conscious experiences," "the qualitative content of mental states," and, of course, "qualia," the term I will use. There are subtle differences in how these terms are defined, but I'm going to ride roughshod over them. In the previous chapter I seemed to be denying that there are _any_ such properties, and for once what seems so _is_ so. I _am_ denying that there are any such properties. But (here comes that theme again) I agree wholeheartedly that there seem to be qualia.


I guess the word "conscious-how" won't keep him from riding roughshod, either.

Although the next quote, on page 373, does not use the word "deny", it is illustrative and it also clarifies what I referred to as "proof" in my last message:

[Dennett] Don't our internal states _also_ have some special "intrinsic" properties, the subjective, private, ineffable, properties that constitute _the_way_things_look_to_us_ (sound to us, smell to us, etc.) ? Those additional properties would be the qualia, and before looking at the arguments philosophers have devised in an attempt to _prove_ that there are these additional properties in the first place, by finding alternative explanations for the phenomena that seem to demand them. Then the systematic flaws in the attempted proofs will be readily visible.


The third quote, on page 375, is the clearest one I found so far. (And it includes the word "denying"):

[Dennett] The CADBLIND Mark I _certainly_ doesn't have any qualia (that is the way I expect lovers of qualia to jump at this point), so it does indeed follow from my comparison that I am claiming that we don't have qualia either. The _sort_ of difference that people imagine there to be between any machine and any human experiencer (recall the wine-tasting machine we imagined in chapter 2) is one I am firmly denying: There is no such sort of difference. There just seems to be.


Are you still perplexed? The CADBLIND Mark I is a machine (hypothetical, I think) that can compare colors in the sense of numerical representations of the data coming from a camera (and/or CAD system). Do I need to repeat that the difference between blue and green is not at all the same as the difference between 163 and 172?
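
(Purely as a hypothetical illustration of that point, and nothing taken from Dennett's book: a machine that "compares colors" only as numbers would amount to little more than the following Python sketch, where the hue codes 163 and 172 are arbitrary.)

    # Hypothetical sketch: a machine that "compares colors" only as numbers.
    # The hue codes below are arbitrary illustrations, not real color standards.
    def compare_hues(hue_a, hue_b, tolerance=5):
        """Report whether two numerically coded hues count as 'the same color'."""
        return abs(hue_a - hue_b) <= tolerance

    blue_code, green_code = 163, 172
    print(compare_hues(blue_code, green_code))  # False -- they differ by 9
    print(abs(blue_code - green_code))          # 9: a difference of magnitude only

To such a machine, the difference between the two hues is nothing but the number 9.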

Sometimes I wish this were just meant as a provocation, a challenge, but it doesn't seem to be. So far my attempts to figure out the nature of what he means with "seems" have been fruitless. Perhaps he would say that when I am asking the question what the nature of "seems" is, that then I am already, again, a victim of this illusion in the first place.

We all know that colors "seem" to be out there, they "seem" to be a property of the objects we see. Dennett seems to extend this "seems" to include that they are not "in" there either (because the brain is 'physical' as well). And since there is no other place they could be, so my interpretation of Dennett, they can't be anywhere at all, and therefore the "seems" is all there is to them. According to Dennett. Which, of course, answers neither the nature of the "seems" nor the nature of "qualia". It seems he is more a philosopher than a scientist, if I may say so.

You wrote:

Right, as such they are undeniable. Dennett is trying to point to the proper level of explanation which is underneath the phenomena of qualia.

The amalgam of qualia is what consciousness actually is, therefore, in a sense it really is qualia that we are trying to physically explain. That is why we can't use them in the explanation because the logic would be circular. That is what Dennett is trying to avoid.


Good observation. Now what if qualia can't be explained with mathematical physics (as in the line of reasoning I explained on www.occean.com) ?

Then, in trying to avoid a circular logic problem, he has a chicken-and-egg problem instead.

[I left out a few of your passages since I think they are mostly answered above.]

What are “qualia in themselves”? This seems to suppose that they are self-caused, or that they are independent of deeper level causal mechanisms. This is not what you had in mind, is it? Even qualia have deeper levels of causal mechanisms. This is the whole point of Dennett’s book.


Qualia related to sensory perception refer to "external" reality. When I am talking about qualia in themselves, I simply mean to talk about qualia independent of their reference function. As an example, a scientific book refers (hopefully) to reality, yet when talking about the 'book in itself', I would be talking about the book (the thing of paper and ink) rather than about its message.

I think that Dennett is not at all talking about the causal mechanisms of qualia, he is (only) talking about sensory perception and how the brain processes information. This is interleaved with refutations of (old-fashioned versions of) dualism and epiphenomenalism. With perhaps a few exceptions, he refers to conscious experience only by discussing which information will be referenced consciously. I understand that is also the only thing he considers worthwhile doing.

True, but this “higher level” in physics is used to explain an even higher level. In the case of qualia, however, the higher level is often used to deny the explanations at the lower levels. It simply doesn’t work that way in hierarchical reality, so the arguments are moot.


Moot? Which? Why? The assumption that qualia are based hierarchically on top of (mathematical) physical reality usually leads to epiphenomenalism, which can be pre-assumed in the context of this discussion. I personally consider epiphenomenalism to be inherently self-contradictory. (Which might be another reason Dennett argues for materialism. He might like to be a little epiphenomenalistic, but sees the logical contradictions and so prefers to deny qualia as real.)

[Again, I left out a few of your passages since I think they are mostly answered above.]

I think the problem is in the association of ‘conscious-how' with ‘qualia’. The ‘conscious-how' level is the network architecture level. This problem is similar to trying to describe how a computer is built from the patterns displayed on its screen. It is true that there is no denying that the display patterns exist, but they don’t necessarily give definitive or foundational mechanistic clues or explanations as to how the computer is constructed.


Nope. "Conscious-how" is just another word for qualia which I use to indicate its relationship to 'conscious-size' and 'measurable-size'. The "how" does not refer to "how" it works internally. "How" refers to "how" we see (answer: in color) and "how" we hear (answer: with sounds), in the sense of seeing and hearing consciously.

Qualia are not the display patterns, qualia are, in this metaphor, the lighted surface of the display. The display patterns are the same as the data patterns. The display patterns are what I call 'conscious-size' and the data patterns are what I call 'measurable-size'. With the help of the arguments I explain on my homepage, one could say that conscious vision is a non-mathematical (since colors, as we see them, can't be described mathematically) display of mathematical data. If one defines physics as exclusively describable mathematically (which I think would be a mistake), then the 3D image we see consciously would be a non-physical display of physical data.

A further mistake, I think, would be to assume that within the process of conscious vision, there would be an observer in addition to the 3D image which we see. IF Dennett were willing to follow my thoughts up to this point (which he probably wouldn't), at this point we might agree. The observer in the middle of the 3D image would be a situation where I see the use of the word "seems" to be adequate. The conscious 3D image is already the whole conscious vision. Our consciousness is not in the middle of this 3D image, rather this 3D image _is_ visual consciousness.

[Again, I left out a few of your passages as I think I have included the response above.]

Proof for what? No pure mathematical description of reality is enough to give anyone an understanding of what is physically happening anyway.


Proof that qualia are not a mathematically describable physical function, but at the same time are a relevant (significant) causal factor within physical reality. (At least that is the argument.)

Although the description of a scientific experiment and its measurements is in large parts given verbally rather than mathematically, the general assumption is that the outcome of an experiment, once the physics equations have been established, can be described as a mathematical function of the initial conditions. In terms of a computer simulation, the mathematical laws of physics are the program, the initial conditions are the input data, and the outcome of the experiment is the output data. Understanding what is going on is assumed to be a separate question. How do you see that?
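
(As a minimal, hypothetical sketch of that picture in Python: the free-fall law stands in for "the program", the drop height for the input data, and the computed fall time for the output data.)

    # Minimal hypothetical sketch: laws of physics as the program,
    # initial conditions as input data, experimental outcome as output data.
    def simulate_free_fall(height_m, dt=0.001, g=9.81):
        """Step a falling object until it hits the ground; return the fall time."""
        h, v, t = height_m, 0.0, 0.0
        while h > 0.0:
            v += g * dt   # the "law": constant downward acceleration
            h -= v * dt
            t += dt
        return t

    print(simulate_free_fall(10.0))  # roughly 1.43 seconds for a 10 m drop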

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/03/2003 3:15 AM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

In the above message, the following sentence was missing a "NOT".
This is what I meant to say:

The assumption that qualia are based hierarchically on top of (mathematical) physical reality usually leads to epiphenomenalism, which can NOT be pre-assumed in the context of this discussion.

(Meaning this is something you would have to show, not something we can start with, especially since I disagree in the exclusively "mathematical" sense of physics.)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/03/2003 3:37 AM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

The middle of a sentence in the second quote from Dennett, page 373, was missing. Here is the correct version, with the previously missing text in UPPERCASE (sorry):

[Dennett] Don't our internal states _also_ have some special "intrinsic" properties, the subjective, private, ineffable, properties that constitute _the_way_things_look_to_us_ (sound to us, smell to us, etc.) ? Those additional properties would be the qualia, and before looking at the arguments philosophers have devised in an attempt to _prove_ that there are these additional properties, WE WILL TRY TO REMOVE THE MOTIVATION FOR BELIEVING IN THESE PROPERTIES in the first place, by finding alternative explanations for the phenomena that seem to demand them. Then the systematic flaws in the attempted proofs will be readily visible.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/05/2003 6:24 AM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

subtillioN: Simulation is not dependent on mathematical completeness, just on mathematical effectiveness.

blue_is_not_a_number: I take it that you don't have a definite opinion on Strong AI.


I simply believe that a linear, serial, digital computer is certainly not the optimum medium for attempting to construct the mechanisms of consciousness. Computers, as we know them, are completely different machines from minds.

I also believe that it is entirely possible for us to construct a conscious artificial entity from a substrate other than biological proteins. In fact these alternate substrates can be much more complex, stable and efficient (by many orders of magnitude).

On page 372: ('_' refers to _italics_, emphasized printed words)
[Dennett] Philosophers have adopted various names for the things in the beholder (or properties of the beholder) that have been supposed to provide a safe home for the colors and the rest of the properties that have been banished from the "external" world by the triumphs of physics: "raw feels," "sensa," "phenomenal qualities," "intrinsic properties of conscious experiences," "the qualitative content of mental states," and, of course, "qualia," the term I will use.


I want to point out Dennett’s use of the term “external” in this quote. This says to me that he is stating that qualia, such as color, are not properties found in objective reality.

Are you saying that they ARE found in external reality? Or do you agree that qualia are merely the way that the human brain represents objective physical properties in the mind?

Although the next quote, on page 373, does not use the word "deny", it is illustrative and it also clarifies what I referred to as "proof" in my last message:
[Dennett] Don't our internal states _also_ have some special "intrinsic" properties, the subjective, private, ineffable, properties that constitute _the_way_things_look_to_us_ (sound to us, smell to us, etc.) ? Those additional properties would be the qualia, and before looking at the arguments philosophers have devised in an attempt to _prove_ that there are these additional properties in the first place, by finding alternative explanations for the phenomena that seem to demand them. Then the systematic flaws in the attempted proofs will be readily visible.


Again I think he is saying that these ‘qualia’ are mere representations of reality.

The third quote, on page 375, is the clearest one I found so far. (And it includes the word "denying"):
[Dennett] The CADBLIND Mark I _certainly_ doesn't have any qualia (that is the way I expect lovers of qualia to jump at this point), so it does indeed follow from my comparison that I am claiming that we don't have qualia either. The _sort_ of difference that people imagine there to be between any machine and any human experiencer (recall the wine-tasting machine we imagined in chapter 2) is one I am firmly denying: There is no such sort of difference. There just seems to be.


Dennett is denying that there is some absolute, uncrossable void between ‘artificial’ and ‘natural’ minds. The problem is that philosophers often use ‘qualia’ as some sort of ‘élan vital’ that absolutely distinguishes the mind of man from anything understood as mechanical. It is this abuse of the term that Dennett rejects.

Are you still perplexed?


I think I figured out the confusion. It is with the term ‘qualia’. Dennett rejects the common absolutist abuse of the term.

The CADBLIND Mark I is a machine (hypothetical, I think) that can compare colors in the sense of numerical representations of the data coming from a camera (and/or CAD system). Do I need to repeat that the difference between blue and green is not at all the same as the difference between 163 and 172?


Of course it is not the same. The point is that there is a huge number of possible representational forms or ‘qualia’ for the sensation of color. These forms depend on the physical, network-architectural, cognitive substrate for the sensation and representation of the frequencies of light.

Sometimes I wish this were just meant as a provocation, a challenge, but it doesn't seem to be. So far my attempts to figure out the nature of what he means with "seems" have been fruitless. Perhaps he would say that when I am asking the question what the nature of "seems" is, that then I am already, again, a victim of this illusion in the first place.


I think “seems” denotes how something is represented (as qualia) as opposed to what it actually is.

We all know that colors "seem" to be out there, they "seem" to be a property of the objects we see. Dennett seems to extend this "seems" to include that they are not "in" there either (because the brain is 'physical' as well).


In a sense, he is correct. When we open up a brain we find no repository of the color that we see as ‘qualia’. Color is simply an illusion that the brain uses to represent electromagnetic frequency, much like a pure tone.

And since there is no other place they could be, so my interpretation of Dennett, they can't be anywhere at all, and therefore the "seems" is all there is to them.


They exist purely as a representation in the mind and as such they are an illusion.

According to Dennett. Which, of course, answers neither the nature of the "seems" nor the nature of "qualia". It seems he is more a philosopher than a scientist, if I may say so.


This is true, but this ultimately says nothing as to the correctness of his pov.

You wrote:
>Right, as such they are undeniable. Dennett is trying to point to the proper level of explanation which is underneath the phenomena of qualia. <

>The amalgam of qualia is what consciousness actually is, therefore, in a sense it really is qualia that we are trying to physically explain. That is why we can't use them in the explanation because the logic would be circular. That is what Dennett is trying to avoid. <

Good observation. Now what if qualia can't be explained with mathematical physics (as in the line of reasoning I explained on www.occean.com) ?



This is from your article:

It is the impossibility of an equation such as

B = M * E * S * S / theta

where B is Blue (the color), m is mass, e is energy, s is space and theta is a global, unified, everlasting constant. Such an equation is impossible because the color blue is not a quantity.


From my brief scan of your article, it seems that you were looking at a pure physics explanation of color. The only way to describe how the ‘illusion’ of color is produced in the mind is to explain it at the neural, functional-aggregate level. Mathematics or physics is simply FAR too low a level to describe such highly complex mechanisms. There is a VAST gap of missing functional complexity here.

You are right that ‘blue is not a number’, but neither is anything else ultimately. ;) Even numbers are not really numbers! They are semantic illusions! Mere associative constructs! Sometimes the illusion is caused by the interpretation of lines drawn on a paper and sometimes it is displayed on a screen, but there is no such ultimate thing as a pure number.

There are real quantifiable relations in nature, however, but the quantification is always ultimately approximate.

I think that Dennett is not at all talking about the causal mechanisms of qualia, he is (only) talking about sensory perception and how the brain processes information.


What he is saying is that qualia are illusions caused by deeper cognitive mechanisms that he doesn’t explain.


This is interleaved with refutations of (old-fashioned versions of) dualism and epiphenomenalism.


Just out of curiosity, what is your preferred explanation of the brain/mind relationship?

With perhaps a few exceptions, he refers to conscious experience only by discussing which information will be referenced consciously. I understand that is also the only thing he considers worthwhile doing.


I am sure he enjoys other things as well. I do know that he is a sculptor, for instance. ;) He deeply respects the efforts of cognitive scientists to explain the actual mechanisms of the mind and he consistently refers to that burgeoning field for the actual physical explanations.

The assumption that qualia are based hierarchically on top of (mathematical) physical reality usually leads to epiphenomenalism, which can be pre-assumed in the context of this discussion. I personally consider epiphenomenalism to be inherently self-contradictory. (Which might be another reason Dennett argues for materialism. He might like to be a little epiphenomenalistic, but sees the logical contradictions and so prefers to deny qualia as real.)


Let’s talk more about epiphenomenalism. What is your definition of it and how is it “inherently self-contradictory”?

Qualia are not the display patterns, qualia are, in this metaphor, the lighted surface of the display.


This implies that they are mechanistic, but I see your point. [To continue with this line of reasoning I will have to adapt the metaphor because, unlike the brain, the screen actually DOES emit the three primary frequencies (RGB). In light of this metaphorical incongruence, please bear with the strange descriptions to follow.] Ok, let’s say that the display can’t physically represent objects as they actually are (the mind can’t display raw vibratory frequency; it must simplify it as three primary, non-vibratory 'colors'). This display, let’s say, must represent all objects as boxes outlining the rough boundaries of an object (bounding-boxes). Having known only bounding-boxes, from a lifetime of ‘sensory’ experience with them we take it on unquestioned and unconscious faith that objects really are bounding-boxes. But, upon further reflection, it is clear that the bounding-boxes exist neither in external nor internal reality. They are merely patterns on the screen; mere illusions to help us with the task of conscious sensation. Dennett would be saying, in this (strange) case, that the bounding-boxes are not physically real: they are simply an illusion for the function of representing the external objects. The ‘bounding boxes’ are the qualia that Dennett would be denying as existing in external or internal reality, and you can see, I am sure, that in this sense, he is correct.
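
(A minimal sketch of that bounding-box idea, purely hypothetical and only to make the metaphor concrete: the 'display' reduces an arbitrary shape to the box that encloses it.)

    # Hypothetical sketch of the bounding-box metaphor: the "display" can only
    # show the rough box around an object, never the object as it actually is.
    def bounding_box(points):
        """Reduce a 2D shape, given as (x, y) points, to its axis-aligned box."""
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        return (min(xs), min(ys)), (max(xs), max(ys))

    triangle = [(0, 0), (4, 0), (2, 3)]
    print(bounding_box(triangle))  # ((0, 0), (4, 3)) -- the box, not the triangle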

A further mistake, I think, would be to assume that within the process of conscious vision, there would be an observer in addition to the 3D image which we see. IF Dennett were willing to follow my thoughts up to this point (which he probably wouldn't), at this point we might agree. The observer in the middle of the 3D image would be a situation where I see the use of the word "seems" to be adequate. The conscious 3D image is already the whole conscious vision. Our consciousness is not in the middle of this 3D image, rather this 3D image _is_ visual consciousness.


I think the discrepancy between your view and Dennett’s view lies in the misunderstanding of just what he is denying as ‘qualia’. He is simply denying that color, as we see it, physically exists in external reality. The color is real, but it is also an illusion because objects and the brain do not have the color we see on or in them. Color is simply a representation caused by the network-architecture of the brain in response to em frequency.

Proof that qualia are not a mathematically describable physical function, but at the same time are a relevant (significant) causal factor within physical reality. (At least that is the argument.)


I agree with you that “qualia are not a mathematically describable physical function”, but I think (and Dennett would probably agree) that “qualia are not a…physical function” at all, in the sense that the pure and simple laws of physics cannot describe them.

Although the description of a scientific experiment and its measurements is in large parts given verbally rather than mathematically, the general assumption is that the outcome of an experiment, once the physics equations have been established, can be described as a mathematical function of the initial conditions.


Only in highly simplified situations is this ever achievable, and even then it is always an approximation to some degree. The brain, however, is HIGHLY complex and definitively non-linear; thus the initial conditions are unattainable, because they can never be defined to the requisite level of detail.


Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/05/2003 4:45 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

Dennett is denying that there is some absolute, uncrossable void between ‘artificial’ and ‘natural’ minds. The problem is that philosophers often use ‘qualia’ as some sort of ‘élan vital’ that absolutely distinguishes the mind of man from anything understood as mechanical. It is this abuse of the term that Dennett rejects.


I think I figured out the confusion. It is with the term ‘qualia’. Dennett rejects the common absolutist abuse of the term.


No, we simply have a different understanding of what qualia are. "Qualia" is not a term defined in an a-priori mechanical context. It is a term used by philosophers to refer to conscious experience, primarily by those philosophers who absolutely oppose a plain mechanical view of consciousness. This is therefore the proper use of the term, and if anyone abuses it, then it is Dennett. It would be ridiculous to say that any use of language in a decidedly non-mechanical understanding would be abusive, whether this is meant in an absolute sense or not. Language is not reserved for mechanistic, mathematical-physical purposes, not even reserved for scientific purposes. _That_ would be absolutistic.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/05/2003 5:02 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

No, we simply have a different understanding of what qualia are. "Qualia" is not a term defined in an a-priori mechanical context. It is a term used by philosophers to refer to conscious experience, primarily by those philosophers who absolutely oppose a plain mechanical view of consciousness.


Right, that is pretty much what I said. The point that Dennett is making is that qualia are illusions and that these illusions have deeper causal mechanisms. Do you deny that there are deeper causal mechanisms for qualia? If you do then this will preclude you from ever understanding HOW and WHAT qualia actually are.

This is therefore the proper use of the term, and if anyone abuses it, then it is Dennett.


How does he abuse it?

It would be ridiculous to say that any use of language in a decidedly non-mechanical understanding would be abusive, whether this is meant in an absolute sense or not.


I am simply saying that the abuse is when it is used to fabricate a non-bridgeable gap between the brain and the mind. This is dualistic and is therefore incompatible with a coherent singular reality.

Language is not reserved for mechanistic, mathematical-physical purposes, not even reserved for scientific purposes. _That_ would be absolutistic.


Right, language is not reserved for descriptions of physics. However, everything describable and indescribable by language (including qualia) has roots in physical, causal reality; otherwise it would be uncaused and therefore non-existent.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/05/2003 6:39 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

I am simply saying that the abuse is when it is used to fabricate a non-bridgeable gap between the brain and the mind. This is dualistic and is therefore incompatible with a coherent singular reality.


Some believe in dualism. For example, the philosopher David Chalmers wrote that "natural dualism" is what he calls his position (although he always remains very flexible). He is doing his best to conceive a "bridge" for the "gap", and I think "qualia" is exactly the term he should use to express his views. Your idea of a "coherent singular reality", and what is compatible or incompatible with it, may be a very limited idea. In fact, just to repeat myself another time, I think that your idea of a coherent singular reality is incompatible with the reality of consciousness, and the consciousness I'm talking about is not an illusion.

However, I will not go into these arguments another time unless I see some sign that there is a minimal understanding of what I have argued over and over.

Greetings.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/05/2003 7:00 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

However, I will not go into these arguments another time unless I see some sign that there is a minimal understanding of what I have argued over and over.


Your arguments are simply that the simple laws of mathematical physics cannot explain 'qualia'. I agree with that. (wrong level of explanation)

I know your arguments for what DOESN'T constitute an explanation of qualia, but I still wonder about your arguments, intuitions or ideas about what DOES explain them.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/05/2003 11:13 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

Your arguments are simply that the simple laws of mathematical physics cannot explain 'qualia'. I agree with that. (wrong level of explanation)


No, that would be rather trivial. To talk about the "wrong level" is your own idea, your interpretation of what I said according to your own terms.

Linear or non-linear, simple or complex, low-level or high-level, quantum or classical, atomic or molecular or planetary or galactic or universal, relativistic or absolute, special or general, local or non-local, ordered or chaotic, 4-dimensional or 11-dimensional, one world or many worlds, ontological or epistemological, Goedel or Omega or consistently complete:

today's laws of physics are mathematical all the way through.

I know your arguments for what DOESN'T constitute an explanation of qualia, [...]


I really can't confirm that you do. The non-quantifiable character of, for example, what the color blue looks like to someone is unlike, for example, the quantifiable shape of an object in our vision. Both are on the same level of the conscious 3D image, but one is quantifiable and the other is not. That also doesn't have anything to do with verbal ontological explanations.

[...]but I still wonder about your arguments, intuitions or ideas about what DOES explain them


Are you asking me for a theory of everything? Have physicists really explained the existence of matter (big bang or not)? I think so far we (we humans) have just examined the facts in detail and in general, and we can do, and have already done, the same in regard to consciousness. The distinction between 'measurable-size', 'conscious-size' and 'conscious-how' is, I think, one of the distinctions that can be made, and it illustrates to some extent which aspects of conscious perception are directly related to informational brain processes, and which are at most in a non-mathematical way related to (possibly) non-mathematical aspects of brain processes. (Conceptually speaking, it doesn't mean there are two separate things, as in classical dualism.)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/06/2003 3:26 AM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

To talk about the "wrong level" is your own idea, your interpretation of what I said according to your own terms.


There is a correct hierarchical level of explanation for everything and the physics level is simply the wrong level to explain consciousness or qualia. That is why you failed to derive a mathematical physics explanation. Perhaps you were looking to NOT be able to explain it? I don’t know. It just seems odd that you would even try to explain it through basic mathematical physics.

The point of my criticism that you missed is that there is a VAST amount of functional organismic complexity that the physics level does not touch.

Linear or non-linear, simple or complex, low-level or high-level, quantum or classical, atomic or molecular or planetary or galactic or universal, relativistic or absolute, special or general, local or non-local, ordered or chaotic, 4-dimensional or 11-dimensional, one world or many worlds, ontological or epistemological, Goedel or Omega or consistently complete:


All of which are simple physics abstractions. Physics is not the proper level for explaining consciousness.

Try the neural network functional architectural level. Consciousness is an engineering problem, not a theoretical physics problem, nor is it even ultimately a philosophy problem.

today's laws of physics are mathematical all the way through.


This is irrelevant. Today’s laws of physics are not the proper level to explain consciousness.

I really can't confirm that you do. The non-quantifiable character of, for example, what the color blue looks like to someone is unlike, for example, the quantifiable shape of an object in our vision. Both are on the same level of the conscious 3D image, but one is quantifiable and the other is not. That also doesn't have anything to do with verbal ontological explanations.


How is the CONSCIOUS REPRESENTATION of a 3D shape more quantifiable than the experience of the color ‘blue’? Just because I can represent a 3D shape mathematically in a computer and plot out some of its non-existent points, does not mean that my conscious experience of the object is the same type of process. As for the color ‘blue’ I could measure the exact frequency of the light and give you an equation that would quantify and replicate that exact waveform and it would still get you no closer to understanding how the conscious experience of the color is produced. What we can do mathematically with the sensed data of objective reality tells us nothing about the nature or mechanisms (mathematical or not) of the conscious sensations themselves.
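
(For instance, as a hypothetical sketch, with the frequency only an approximate figure for blue light: the waveform can be written down completely in numbers and still say nothing about the experienced color.)

    # Hypothetical sketch: a complete numerical description of "blue" light as a
    # waveform. The frequency is only an approximate figure for blue light.
    import math

    FREQ_BLUE_HZ = 6.4e14  # roughly the frequency of blue light

    def field_amplitude(t, amplitude=1.0, freq=FREQ_BLUE_HZ):
        """E(t) = A * sin(2*pi*f*t): the waveform, fully specified by numbers."""
        return amplitude * math.sin(2 * math.pi * freq * t)

    print(field_amplitude(1.0e-15))  # the field one femtosecond in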

Qualia are only quantifiable as neurological, architectural, electrochemical, structural descriptions (which you have completely ignored) and even so the math is fundamentally incomplete.

subtillioN: [...]but I still wonder about your arguments, intuitions or ideas about what DOES explain them

blue is not a number : Are you asking me for a theory of everything?


I already have one thanks, and it places the explanation of consciousness at the neuro-biological level of functional hierarchy not at the mathematical physics level.

Why do you insist on bringing consciousness down to the physics level? All it takes to explain consciousness is reverse engineering the functional architecture of the human brain. As soon as the cognitive functionality is understood, the mechanisms of consciousness (even the experience of the color ‘blue’) will be understood.

Do you need a theory of everything to understand any other biological mechanisms?

The brain is ultimately no different in that respect. It is just quite a bit more complex in its electro-chemical activity.

Have physicists really explained the existence of matter (big bang or not)?


Physicists have no clue what fundamental matter is (see anpheon.org for a real clue), but this doesn’t stop us from understanding what higher level consciousness is or at least how it is produced (any more than it stops us from understanding how an automobile works.) The crucial difference is the VAST amount of complexity of the human brain.

I think so far we (we humans) have just examined the facts in detail and in general, and we can do, and have already done, the same in regard to consciousness.


True.

The distinction between 'measurable-size', 'conscious-size' and 'conscious-how' is, I think, one of the distinctions that can be made, and it illustrates to some extent which aspects of conscious perception are directly related to informational brain processes, and which are at most in a non-mathematical way related to (possibly) non-mathematical aspects of brain processes.


ALL aspects of brain processing are fundamentally non-mathematical. They all work through the same basic analog and fundamentally continuous mechanisms. It is simply the different functional architectures between the brain modules that produce the different functionality and thus the different conscious sensations. There is not some mixture of mathematical and the non-mathematical processes at work in the brain. You have taken aspects of objective reality, such as the calculation of an objective 3D shape, and assumed that if it can be quantified in objective reality then the representation in the mind must be mathematical as well. Wrong. It is not that simple. The brain doesn’t use mathematics as the basis for any conscious representation or for anything else. Mathematics is a higher level symbolic language function. Ultimately it has nothing to do with the mechanisms of consciousness.

(Conceptually speaking, it doesn't mean there are two separate things, as in classical dualism.)


If you are requiring a GUTs and TOE-nails level explanation of consciousness then you must be assuming that consciousness is a function of fundamental physical reality and not a product of biological engineering. Are you holding out for a quantum theory of consciousness?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/06/2003 3:09 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

There is a correct hierarchical level of explanation for everything and the physics level is simply the wrong level to explain consciousness or qualia. That is why you failed to derive a mathematical physics explanation. Perhaps you were looking to NOT be able to explain it? I don’t know. It just seems odd that you would even try to explain it through basic mathematical physics.

The point of my criticism that you missed is that there is a VAST amount of functional organismic complexity that the physics level does not touch.


All of which are simple physics abstractions. Physics is not the proper level for explaining consciousness.

Try the neural network functional architectural level. Consciousness is an engineering problem, not a theoretical physics problem, nor is it even ultimately a philosophy problem.


Well, I need to point out that you are not only not understanding my arguments, you also don't seem to be aware of basic concepts of modern science.

I am not talking about the _BASIC_ physical level, I'm talking about the whole of what we call _physical_reality_: it is, in all its levels, including the neural network level, understood as quantifiable, mathematically describable. Am I getting closer?

There is no conceptual "power" to the neural network level, as it is understood as a physical reality, other than those "powers" given to it by the laws of physics. I assume you don't have some kind of 'spiritual' concept of neuronal networks. All their properties are said to be quantifiable, and that is my point. I'm not making the argument that it can't be the neuronal network which constitutes consciousness (and I'm not saying the opposite either). The argument I'm making is: it cannot be the mathematical properties of neuronal networks which could constitute consciousness. And the mathematical properties are the only ones reflected on by today's (mainstream) neuroscience. Higher level theories such as Chaos Theory and system theories don't change that. They are all defined within the mathematical framework.

The theory of neuronal networks is mathematical as well, as it is understood to be based on physical reality, which in turn is understood to be completely mathematical.

It surprises me that I have to point out basic concepts of science, but that seems to be what I have to do.

Maybe you would understand me better if I told you about property dualism, but since I'm not a property dualist, that would lead to other misunderstandings.

How is the CONSCIOUS REPRESENTATION of a 3D shape more quantifiable than the experience of the color ‘blue’? Just because I can represent a 3D shape mathematically in a computer and plot out some of its non-existent points, does not mean that my conscious experience of the object is the same type of process.


Now that is a really interesting observation. I'm getting hopeful again. Basically, I agree. To illustrate why this is not a contradiction, I used the metaphor that 'conscious-size' (the quantifiable content of consciousness) is like waves on the ocean of 'conscious-how'. To quote from my homepage:

[occean.com] For example, in our visual 3D image, we recognize objects based on where we detect the edges of two differently colored areas, the edge itself being a mathematical line rather than something that exists.


In other words, the 3D shapes we see consciously are an abstraction, like the data patterns you see on a screen as display patterns. The picture is an illusion, but the light is real. Nevertheless, the fact that making this abstraction is possible shows that the "CONSCIOUS REPRESENTATION" has quantifiable properties, as this abstraction is exactly such a quantification.
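
(To make that concrete with a hypothetical sketch: in a row of brightness values, the "edge" is only the place where neighbouring values jump, an abstraction over the data rather than a thing in its own right.)

    # Hypothetical sketch: the "edge" between two differently colored areas is
    # just where adjacent values differ sharply -- an abstraction over the data.
    def find_edges(row, threshold=50):
        """Return positions where neighbouring pixel values jump past a threshold."""
        return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) > threshold]

    scanline = [20, 22, 21, 200, 203, 201]  # a dark area followed by a bright area
    print(find_edges(scanline))             # [3] -- the boundary between the areas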

In our discussion of Dennett, you repeatedly said that in some sense qualia are an illusion. I don't recall Dennett having said that (unless you take "seems" to mean "illusionary"), so I would be interested in a quote and page number. Anyway, the "representational" aspects that you are referring to are all related to what I call "conscious-size", the quantifiable aspect of vision, for example. In contrast, "conscious-how", the qualia themselves, are on a different page, conceptually. They are not an abstraction, they are real. Like the light coming from a movie screen: the movie is an illusion (as far as the screen is concerned), the light is not. Some time ago I used the analogy that they are like an additional dimension to reality, if that clarifies more than it confuses.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/06/2003 4:25 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

I am not talking about the _BASIC_ physical level, I'm talking about the whole of what we call _physical_reality_


Physical theory is what we “call” physical reality. It is NOT actually physical reality. It is a theory. In your article you used basic physics math to ‘prove’ that qualia are not mathematically derivable. You did not use the actual properties of the hierarchical level of physical reality that is relevant to the functioning of consciousness. You used the basic level of physical laws (which only approximate the functioning of basic physical reality). The human mind, however, is anything BUT basic.

: it is, in all its levels, including the neural network level, understood as quantifiable, mathematically describable. Am I getting closer?


It is quantifiable, yes, but reality itself is not mathematical, and you used in your ‘proof’ only the basic physics level. This level doesn’t have the descriptive power to accomplish the task you assigned to it.

There is no conceptual "power" to the neural network level, as it is understood as a physical reality, other than those "powers" given to it by the laws of physics.


The powers given to the neural network level come from physical reality itself regardless of how we model the lower levels.

I assume you don't have some kind of 'spiritual' concept of neuronal networks.


I don’t have a “spiritual concept” period.

All their properties are said to be quantifiable, and that is my point.


Anything is quantifiable to an extent. That doesn’t mean that they are restricted by mathematics. Your arguments that the basic level of mathematical physics cannot derive qualia have no bearing on whether neural networks can produce them. The basic laws of physics are FAR from a complete description of reality and the neural network models don’t even use them. If they did their models would be FAR too cumbersome to get anywhere. There is simply no need to use basic physics in the higher level models.

I'm not making the argument that it can't be the neuronal network which constitutes consciousness (and I'm not saying the opposite either). The argument I'm making is: it cannot be the mathematical properties of neuronal networks which could constitute consciousness.


I agree with your argument because neural networks are NOT mathematical, despite the fact that they can be approximated by mathematical models.
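
(For what it is worth, here is a minimal hypothetical sketch of what such a mathematical approximation looks like: a single model "neuron" as a weighted sum passed through a nonlinearity, a numerical stand-in for an electrochemical process.)

    # Hypothetical sketch of a mathematical approximation of a neuron:
    # a weighted sum of inputs passed through a sigmoid nonlinearity.
    import math

    def model_neuron(inputs, weights, bias):
        """Approximate a neuron's firing rate as sigmoid(w . x + b)."""
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    print(model_neuron([0.5, 0.1], [1.2, -0.7], bias=0.3))  # about 0.70 -- a number, not a cell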

And the mathematical properties are the only ones reflected on by today's (mainstream) neuroscience. Higher level theories such as Chaos Theory and system theories don't change that. They are all defined within the mathematical framework.


No. The models are inherently qualitative, with mathematics used for the sole purpose of adding precision to the models and solidifying the descriptions. The descriptions of the neural circuits are mere models of the actual circuits and the mathematics only serves to give precision and solid memory to the results. It helps to use a computer to visualize the complex processes involved because the brain simply can’t represent the complexity of the network architecture within consciousness. Just because we use mathematics to help with our qualitative models does not mean that the models are incorrect (incomplete, yes inevitably, but not necessarily incorrect.)

The theory of neuronal networks is mathematical as well, as it is understood to be based on physical reality, which in turn is understood to be completely mathematical.


Modern Physics is devoid of qualitative understanding at the root quantum levels, but this doesn’t preclude a qualitative understanding at the higher levels insofar as they are not directly dependent on the lower level physics. The models of neural networks DON’T use basic level physics. In fact the scale of the models is many magnitudes larger than the basic physics level.

subtillioN: How is the CONSCIOUS REPRESENTATION of a 3D shape more quantifiable than the experience of the color ‘blue’? Just because I can represent a 3D shape mathematically in a computer and plot out some of its non-existent points, does not mean that my conscious experience of the object is the same type of process.

blue is not a number: Now that is a really interesting observation. I'm getting hopeful again. Basically, I agree.


That’s good because if you didn’t agree I would have to conclude that you don’t know much about the visual processing centers in the brain.

To illustrate why this is not a contradiction, I used the metaphor that 'conscious-size' (the quantifiable content of consciousness) is like waves on the ocean of 'conscious-how'. To quote from my homepage:
[occean.com] For example, in our visual 3D image, we recognize objects based on where we detect the edges of two differently colored areas, the edge itself being a mathematical line rather than something that exists.


The edges in the visual processes of the brain are NOT mathematical. They are effects of the neurological processing of the analog signals in the visual centers. There is no math involved in the actual processes.

In other words, the 3D shapes we see consciously are an abstraction, like the data patterns you see on a screen as display patterns. The picture is an illusion, but the light is real. Nevertheless, the fact that making this abstraction is possible shows that the "CONSCIOUS REPRESENTATION" has quantifiable properties, as this abstraction is exactly such a quantification.


WRONG. The abstraction of conscious representation is not “exactly such a quantification”. It is completely devoid of mathematics. You are confusing our objective mathematical models with the functionality of the neurological representations.

In our discussion of Dennett, you repeatedly said that in some sense qualia are an illusion. I don't recall Dennett having said that (unless you take "seems" to mean "illusionary")


Seems:
1. To give the impression of being; appear: The child seems healthy, but the doctor is concerned.
2. To appear to one's own opinion or mind: I can't seem to get the story straight.
3. To appear to be true, probable, or evident: It seems you object to the plan. It seems like rain. He seems to have worked in sales for several years.
4. To appear to exist: There seems no reason to postpone it.

The possibility of being an illusion is the logical way to understand the term. Seems, simply means that the thing sensed could be an illusion.


so I would be interested in a quote and page number.


This is simply the way that I interpreted his meaning extant in the quotes that you presented. Otherwise Dennett would be denying the obviously undeniable, which he is certainly not. Qualia, like ALL illusions, are real. They simply look like something that they are not, e.g. it looks to me as if the sky is blue, but it is not. The blue only exists as a conscious representation in my mind.

Anyway, the "representational" aspects that you are referring to, are all related to what I call "conscious-size", the quantifiable aspect of vision, for example.


Again, it is important to note that just because you can make an objective mathematical model of something does not mean that it is fundamentally mathematical, and as I pointed out earlier, electromagnetic frequency is just as objectively quantifiable as the 3-dimensional shape of some object that exists in my field of view in objective reality. The quantifiability of objective reality has no bearing on the nature of the subjective sensory representations of that objective reality.

In contrast, "conscious-how", the qualia itself, are on a different page, conceptually.


This is because you assume that some sensations are quantifiable and some are not. This is your artificial duality; inherently-mathematical vs. inherently-non-mathematical. ALL human sensory phenomena, however, are derived in the brain via non-mathematical neural network functionality. There simply is no mathematical duality in the brain.

They are not an abstraction, they are real.


I detect an either/or assumption here ;). ALL abstractions (like everything else) are real. Sensory abstractions are REAL representations of REAL objective phenomena. The ‘un-reality’ is used to explain the fact that the objects that seem to us to have certain qualities, such as color, really do not. Color is a subjective (REAL) phenomenon and electromagnetic frequency is an objective (REAL) phenomenon. Everything is real, we just have to figure out the nature of the reality, i.e. subjective or objective, representational or represented. Does the phenomenon represent something else, or is it that something which is represented?

Like the light coming from a movie screen: the movie is an illusion (as far as the screen is concerned), the light is not. Some time ago I used the analogy that they are like an additional dimension to reality, if that clarifies more than it confuses.


Right, but I would leave out the dimensional part. It carries a lot of nonsensical baggage.

Illusions merely look like something that they are not. There are no real objects moving around behind the movie screen just like there are no real colors emitting from real objects. The color is mere interpretation in the mind, and in that sense (and only that sense) it is an illusion.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/06/2003 7:12 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

What you write is either already obvious to me, and already part of the argument, or unrelated to the points on which we disagree.

I think we first have to get on the same page concerning the relationships of mathematics, models of reality, and reality in itself.

I invite you to read this as a whole, rather than in small parts.

Nevertheless, I'll try another 'preview' of my argument, so that you know in which context I am speaking: The argument is that since colors (how they look) have no mathematical description, therefore consciousness (and/or the referencing in the mind) can't have an _exclusive_ mathematical description, therefore whatever constitutes consciousness (possibly the neuronal network) can't have an exclusively mathematical description either.

And here I'm not talking about the difference between approximation and precision, or any more practical (engineering related) difference.

But now to mathematics, models and reality.

First, nobody talks about reality itself being mathematical (certainly no philosopher that I know of). If an expression like that is used, then only as a short way of talking about "that-which-is-referenced-by-the-mathematical-model" or "reality-according-to-the-mathematical-model".

Mathematics is the language of most models. Verbally given models are often only given in addition as a means of understanding, or in those areas which have not yet been explored enough for a precise mathematical description.

With the historical development of physics, more and more other sciences (such as biology) are assumed to be based on physics, and the assumption is made that at least in principle it will be possible to quantify all higher level behavior and derive it from lower level physical theories. This implies that the higher level theories will be in principle mathematical as well, even if they will often be presented verbally or visually for easier understanding.

Now this development has reached the philosophy of consciousness. Neuroscience has developed to the point where it explores the so-called NCCs, the neural correlates of consciousness. And this goes together with theories in computer science which assume that computers will be able to develop consciousness, except perhaps for practical difficulties in engineering.

The assumption is that brain processes are neuronal processes, the neuronal processes are (for example) electro-chemical processes, and that electro-chemical processes are physical processes.

The second assumption is that physical processes are well described mathematically.

Combining these assumptions means that it is assumed that brain processes have, at least in principle (except for practical difficulties), a mathematical (even if complex) description which describes them well.

The point being argued is that brain processes defined (or understood) in this way cannot constitute consciousness. Again, the argument is not about brain processes in general, only about brain processes which are understood or defined to be well-described mathematically.

This has always been the background of my discussion, and I hope you can now step back and reconsider your understanding of my arguments.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/07/2003 4:02 AM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

What you write is either already obvious to me, and already part of the argument, or unrelated to the points on which we disagree.


That is a convenient way of not addressing my counter-arguments. In the spirit of proper discussion, I have consistently attempted to address each point that you have made and I have then made my counter arguments. If you disagree with my counter-arguments then, to continue this discussion, you must detail your disagreements so that we can move forward towards a mutual understanding, otherwise it is just you dismissing my arguments and never figuring out why you don’t understand them. Without mutual feedback we will keep repeating our isolated arguments over and over and we will never get anywhere.

The argument is that since colors (how they look) have no mathematical description, consciousness (and/or the referencing in the mind) can't have an _exclusive_ mathematical description, and therefore whatever constitutes consciousness (possibly the neuronal network) can't have an exclusively mathematical description either.


I agree with your conclusions, but not with your argument. There are many ways of describing any physical phenomenon. Mathematics is never the _exclusive description_, and if it appears to be so then there is no real understanding of the real physical phenomenon involved.

Your attempt to show that color is ‘non-mathematical in nature’ is feeble at best. No-one in cognitive science would try and quantify the conscious representation of color by pure basic-level physics. It would be (and is) a complete waste of time.

Just because you have inevitably failed to show that ‘color’ is quantifiable, doesn’t mean that it ISN’T ultimately quantifiable (within the limits of quantifiability of course). If I fail to understand how an automobile works because I have focused on trying to describe how the molecular structure of the rubber of the tires cannot be a perpetual source of some indefinable ‘motive force’ (qualia), does this mean that there is no possible rational explanation for the functionality of the car? You simply can’t prove that 'color' is non-mathematically describable, especially when you are looking at the quantifiability of the basic physics of the 3D objects across the room instead of looking at the structure of the brain itself. All you can prove is that YOU can’t prove that it is quantifiable.



_______________________



These are the counter-arguments that must be addressed by you to effectively continue this discussion.

Definitions:

Objective components of sensorial reality: These are the physical components sensed by the brain through its perceptual mechanisms. These include the actual shapes of objects, the frequencies of light, etc.

Sensory mechanisms: The components of the brain that actually produce the sensations in the mind as ‘qualia’.

1) To make the distinction between those sensations that are ‘mathematical’ and those that are not, your arguments use the quantification of the OBJECTIVE components of sensorial reality, such as the 3D mathematical representation of the ACTUAL objective shape of an object. You fail to realize that quantifying objective reality is not at all the same as quantifying the physical sensory mechanisms that produce our conscious sensations. There is no 3D representation of any object whatsoever in the mind. There is only the differential overlap of the stereo images and the subsequent associative inference (feeling) of distance per distinguishable portion of the visual field.

2) If your argument centers on whether or not the objective components of sensorial reality are quantifiable (if it didn’t then you would have looked to the ‘color-producing’ mechanisms in the brain, for your ‘proof’), then you must also realize that electromagnetic frequency, as the objective component of the sensation of color, is also quite easily quantifiable. This shows an arbitrary choice on your part to exclude portions of the set of objective components of sensorial reality that would be contrary to your pre-intended conclusions. This is equivocation to justify a pre-defined conclusion and it renders your argument inaccurate and incomplete.

3) The central, crucial point to your argument is the artificial, arbitrary and incomplete analytical break-down of subjective reality into those sensations that can be mathematized (through the quantification of objective reality!?) and those that can’t (the relevant objective-reality-quantifying and frequency-defining equations are simply ignored by your argument). This mathematical duality is entirely irrelevant because you have only addressed the quantification of the OBJECTIVE components of subjective experience. This says nothing about HOW the subjective sensorial mechanisms of the brain actually work because you simply have not addressed the functionality of the relevant mechanisms.

4) The only way to ‘prove’ that the sensation of ‘color’ is non-quantifiable is to address the portion of reality that actually produces the sensation of color. The sensation of color is not produced by the object sensed all the way across the room or across the universe. It is produced by the visual centers inside the brain. You have simply assumed that if the objective component to the sensation can be quantified, then the mechanisms of the sensation itself can also be quantified. This simply does not follow because the thing sensed is not equivalent to the mechanisms which produce the sensation.

5) The sensation of shape is not a 3D mathematical representation, just as the experience of color is not a wave-equation. They are both simply non-mathematical neural responses to the effects of the patterns of light falling on the retina. No math involved whatsoever. This is why your argument, which centers on the duality of the quantifiability (or not) of sensation, is moot at the root :). The brain does not use math!

6) Whether the root level of theoretical physics has a mathematical description or not has no bearing on physical reality at any hierarchical level because mathematical theory is simply that… a THEORY. If I have a purely qualitative, linguistic description of fundamental physical reality (which I do) does this necessarily mean that reality is inherently linguistic? Obviously not, but what it does mean is that the mathematical descriptions of basic physics are entirely superfluous to a qualitative understanding of the processes of physical reality.

Which constitutes a better description of reality?

1. A book full of pure mathematical equations quantifying every aspect of physical reality, but with no semantic linkages to explain how the mathematics fits with reality?

or…

2. A book full of descriptive words which can form a highly detailed visual representation in the mind of everything that is physically happening at ALL relevant levels to produce the mechanisms of everything observable in physical reality?

With the mathematical description there is no way of knowing what the equations actually represent. The book would be a completely useless mass of numbers and symbols. The qualitative description, however, would be quite easy to use to generate the equations that fit reality, and very soon an understanding that is both qualitative and quantitative would be finalized. Nature herself, however, is neither linguistic, visual, nor mathematical. She is pure causality, not representation.

This is why I say that ‘proofs’ that rely purely on mathematics are superfluous and inconclusive because nature is not mathematical.

All aspects of consciousness are formed from the same types of neural processes. Even though mathematics can be used to quantify the neural architecture, mathematics ultimately has nothing to do with the actual physical processes.

Here is a quote from your article:

Believing that mathematically-describable physical reality is causally closed would imply that consciousness

a) either doesn't really exist or, contrary to appearances, is not anything somehow special (Materialism, Daniel Dennett),


First of all, you have misrepresented Dennett’s position. He actually acknowledges that consciousness exists. He simply says that qualia are mere representations, and as such they are (((real))) illusions. Only a fool would deny that consciousness exists (cogito ergo sum) because the act of ‘denying’ is itself an act of consciousness.

Secondly, I personally don’t believe that “mathematically-describable physical reality is causally closed” because “mathematically-describable physical reality” is simply not physical reality; it is not even causal nor is it ‘closed’; it is merely logical and logic is relatively quite limited. Physical theory is a mere representation of physical reality and it is FAR from complete. Therefore any attempt to use the basic simplistic laws to describe anything as complex as consciousness is doomed from the start.

First, nobody talks about reality itself being mathematical (certainly no philosopher that I know of).


Where have you been? The commonly accepted Berkeley-Copenhagen Quantum philosophy says that ultimate ‘physical’ reality is probabilistic in nature. This is how it justifies the idea that matter doesn't exist until it is perceived. Esse est percipi.

From
http://www.newtonphysics.on.ca/HEISENBERG/Chapter1.html#Section6 :

>>
The positivism of Descartes is pushed to an extreme degree by Berkeley, Hume and others, forming a new thinking called Modern Philosophy. Similar arguments are given by Kant, Hagel and many others.

We must notice that this modern philosophy is astonishingly identical to modern physics as suggested by the Copenhagen interpretation of Bohr, Heisenberg and Pauli. In modern physics, matter is not considered to have its own independent existence before it is detected, just as in the case of modern philosophy of Descartes and Berkeley. For Heisenberg and for Bohr, just as for Descartes and Berkeley, Existence is nothing more than perception. (Esse est percipi.)

There is a striking proof of the direct influence of Berkeley's philosophy on the Copenhagen interpretation. That proof is in Heisenberg's book. Heisenberg writes clearly that he agrees with Berkeley's philosophy. Let us recall Heisenberg's statement in his own words:

“The next step was taken by Berkeley. If actually all our knowledge is derived from perception, there is no meaning in the statement that the things really exist; because if the perception is given it cannot possibly make any difference whether the things exist or do not exist. Therefore, to be perceived is identical with existence."
<<

Probability is an exclusively mathematical concept. Therefore the quantum claims amount to saying that physical reality itself is not physical at all, but purely mathematical and therefore purely subjective. This notion runs rampant in new philosophies which directly equate the fundamental level of physical reality with mathematics, as a computer program, cellular automata, fractals, probabilities or whatever.

Mathematics is the language of most models. Verbally given models are often only given in addition as a means of understanding, or in those areas which have not yet been explored enough for a precise mathematical description.



The point of theoretical models is human understanding. Mathematics can only solidify and concretize any model. Without the common language description of the model to give semantic tie-ins with physical reality, the mathematics is absolutely meaningless.

The point being argued is that brain processes defined (or understood) in this way cannot constitute consciousness. Again, the argument is not about brain processes in general, only about brain process which are understood or defined to be well-described mathematically.



Brain processes are entirely independent of our models of them. It makes no difference whether we model them out of macaroni or mathematics; the processes are not made out of, nor tied to, either.

This has always been the background of my discussion, and I hope you can now step back and reconsider your understanding of my arguments.


You side-step my arguments by saying “What you write is either already obvious to me, and already part of the argument, or unrelated to the points on which we disagree”, but it is clearly none of the above. If my arguments were “obvious” to you or “already part of the argument” then you would never have attempted to explain the experience of ‘color’ as a consequence of basic-level physics. To say that the counter-arguments are “unrelated to the points on which we disagree” is to completely miss the nature of a discussion. The counter-arguments ARE “the points on which we disagree”. They MUST be addressed or they will go unchallenged.

My arguments render your main point superfluous and meaningless because mathematics is simply not critical to any qualitative understanding of ANY physical phenomenon whatsoever. If you simply want to brush my arguments under the rug and ignore them that is ok with me, but if you do so then I will not counter again and the discussion will be concluded with my counter-arguments un-reconciled.


Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/08/2003 12:48 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

[...]The counter-arguments ARE “the points on which we disagree”. They MUST be addressed or they will go unchallenged.[...]


Having read your last message carefully and completely, I came to the conclusion that I can't spare enough time to go into such a detailed discussion. There are even parts where theories are being discussed as if they were mine, when they are in fact those that I disagree with. Probably I had tried to describe those in order to straighten out some other misunderstanding. At that point, the cat chases its own tail.

I think if I wanted to develop a line of reasoning that is custom-tailored for you, it would use 'meaning' instead of 'color', and that would be a project for several months.

I'd like to conclude this discussion with two quotes from an article by Daniel Dennett, available on this website, "Originally published May 1997" :

The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a _sort_ of robot ourselves.


It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe -- or in any event want to believe -- that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons.


It will be left as an 'exercise for the reader' to figure out whether I agree or disagree with these statements. The real question, though, is whether they have a scientific basis.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/08/2003 1:19 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

Having read your last message carefully and completely, I came to the conclusion that I can't spare enough time to go into such a detailed discussion.


Fair enough, and thank you for the intense discussion.

I'd like to conclude this discussion with two quotes from an article by Daniel Dennett, available on this website, "Originally published May 1997" :


The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a _sort_ of robot ourselves.


It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe -- or in any event want to believe -- that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons.

It will be left as an 'exercise for the reader' to figure out whether I agree or disagree with these statements. The real question, though, is whether they have a scientific basis.


For what it's worth, I agree with Dennett's first quote, knowing what he means when he says "we are a _sort_ of robot ourselves". I happen to disagree with Dennett's frugality in his second quote, however, because I would bet that a conscious 'machine' WILL be constructed. This will not be a computer or a machine as we know them, but a highly complex machine that could be considered more an 'artificial organism' by its sheer complexity and subtle detail.

Re: GLITCHES IN THE MATRIX . . . CONSCIOUSNESS
posted on 04/23/2003 10:15 AM by anemone

[Top]
[Mind·X]
[Reply to this post]

We all know that we are conscious? And that is the difference between us and machines?
Can we ever build a machine that is as conscious as we are?
Maybe we got the question wrong in the first place. Maybe what we take to be our "conscious" thoughts, perceptions, emotions... are just part of a very, very, very complex program running in our brains. In that way, we humans would be not much more conscious than our machines, just much more sophisticated if you will, but not fundamentally different.
Who can prove that humans think and act on free will and not only as a result of internal and external stimuli?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/07/2003 12:26 PM by blue_is_not_a_number

[Top]
[Mind·X]
[Reply to this post]

That is a convenient way of not addressing my counter-arguments. In the spirit of proper discussion, I have consistently attempted to address each point that you have made and I have then made my counter arguments.


I disagree. Our understandings of basic concepts are too different, and we need to address that first. Even in your last message, your interpretation of the _meaning_ of why I use the term "exclusively mathematical description" simply yet completely misses the point. Your texts mostly do not address my arguments, and I see no need to constantly reflect on random general statements which I see as unrelated to the specific lines of reasoning which I put forward each time.

As far as your last message is concerned, I don't have enough time right now to read it all, but it seems to me that, with one exception, it is another set of misunderstandings and unrelated topics. But if you insist, I will go into each of your points, just at a later time. Now, I would like to mention the exception that I saw when quickly passing over your last message:

Your attempt to show that color is ‘non-mathematical in nature’ is feeble at best.


Congratulations, you found the 'weak' point in this reasoning: I don't think that the non-mathematical nature of what a color looks like can be proven (at this point). This requires the insight of human observers, so that it can be taken (cautiously) as "input", in a sense as evidence, rather than as something that is to be proven. However, there are strong arguments to _support_ this understanding, and it is valid to develop a line of reasoning based on it, just as even physics is developed on a priori assumptions and hypotheses. Personally, I am confident that this is a valid 'assumption'; I don't even take it as an assumption, personally. Do you?

I'll come back to your message in more detail later.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/30/2003 8:07 PM by mindxx

[Top]
[Mind·X]
[Reply to this post]

brief:

SuB


Maths is ONLY an abstraction.


Probability isn't an exclusively mathematical thing.



Maths is ONE abstraction that it is expressed in.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/25/2003 1:40 PM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

Yes, but the Mandelbrot set would be external to the neurons involved.


If you define it so. But clearly - you are "inside the Mandelbrot set". With no "outside" in this case.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/25/2003 3:40 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

If you define it so. But clearly - you are "inside the Mandelbrot set". With no "outside" in this case.


Right. It quickly gets mired in the semantic ambiguity of the definitions.

It is also possible that a nano-computer within each neuron could generate the simulated environmental stimuli. So it is to a degree a problem of the arbitrary definition of a precise line bisecting the gradient boundaries of the organism.

My main point is that the evolution of consciousness at the species and the individual level could not have taken place without the original organismic homeostatic distinction between internal and external or subjective and objective phenomena.

Now that this evolution HAS taken place, we can reveal the fuzziness of these boundaries through re-engineering of these brain/environment interaction mechanisms.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/26/2003 1:39 AM by Thomas Kristan

[Top]
[Mind·X]
[Reply to this post]

My main point is that the evolution of consciousness at the species and the individual level could not have taken place without the original organismic homeostatic distinction between internal and external or subjective and objective phenomena.


It looks like we agree now.

- Thomas

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/26/2003 12:42 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

It looks like we agree now.


that is good...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/14/2003 10:08 AM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

If you're dreaming and someone says "telephone", for example, a telephone will appear in your dream, your mind will find a way to insert one semi-logically into the dream. So, you are not detached from stimulus, just because you don't notice it.


In fact you are still completely connected to physical stimulus, but the stimulus is largely internal.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 6:01 AM by PeterLloyd

[Top]
[Mind·X]
[Reply to this post]

The notion of You as an entity separated from the “others” does not make sense in this context. In a dream there are no others, just your consciousness experiencing the dream.


Exactly! This is the central insight of the Vedanta. Our everyday world is a dream; there is a single mind (called Brahman in the Vedanta) that is the dreamer, and each person is Brahman dreaming that it is a limited person. The function of the Vedantic meditative techniques is to drill down into the personal mind and discover that its core (the Atman) is identical with the Brahman that is the mind governing all external phenomena.

I prefer to use the term 'metamind' to avoid the cultural associations of either Berkeley's God or the Hindu theology, but if you read Berkeley's dialogues or Shankara's commentary on the Vedanta, you'll see that they're expressing the same concept.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/19/2003 4:09 PM by coenraad

[Top]
[Mind·X]
[Reply to this post]

AND HOW to fix them? Change your interpretation to fit!- such as:

I have another theory of why humans were harnessed as a "power source". The computer-in-charge obviously evolved from some common-sense software. The human race's ultimate fate is thus a product of this computerised common-sense knowledge. Maybe it evolved in such a way that it could not indiscriminately kill human beings, due to the underlying moral common sense present. Killing them through real-world methods seems more acceptable... or "moral" for that matter... anyone catch my drift?


As for the pill issue - it is a metaphor for the drug culture. The action of choosing which pill is simply a "ritual" or action to illustrate a choice.

(a note on one of the downward comments: I have taken LSD in a dream and trust me, it was one of my coolest and most vivid dreams... and I dream a lot!)

As to the location pinpointing - surely it would be simple to program a certain "pill" into the system and then to find a person taking this hypothetical pill... and locate all his personal data such as his physical location...

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/29/2003 3:32 PM by Eigen666

[Top]
[Mind·X]
[Reply to this post]

I agree; as a physics major I can still watch the Matrix and derive a certain amount of enjoyment.
What I found most interesting, and what I don't think has been discussed, is the amount of dialogue the movie has created outside of our little clique. People with no science background are genuinely interested in the Matrix.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 03/27/2003 12:40 AM by agricult0r

[Top]
[Mind·X]
[Reply to this post]

Listen my fellow freaks...

As much as I love the Matrix myself; let's not forget this is science fiction we are talking about. Science FICTION as in Fantasy...of course there are things that don't make "sense." Besides, who said these guys were Physics majors in college?

Don't get me started on Star Wars and Star Trek!
Nite! ;-)

agricult0r

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/20/2003 6:56 PM by epimeth

[Top]
[Mind·X]
[Reply to this post]

Wow. I like speculating on the sci in sci-fi as much as the next guy, but this might very well be the stupidest thing I have ever read.

The red and blue pills are fictional devices usually referred to as METAPHORS. Morpheus is putting the choice to wake up in terms Neo can understand.

The whole thing about the fusion and the human minds controlling it...please. If that is what the filmmakers had wanted to say, they would have said it. They simply screwed up and confused the issue. NOT uncommon in big budget screenplays.

Finally, of COURSE Neo would eventually learn to fly. THAT IS THE POINT!!!! Every one of the heroes in the Matrix has overcome the rules of that world. Or am I the only guy who can't jump from building to building and run around the walls of a room?

What's next? "Glitches in the New Testament... The Science behind turning water into wine... perhaps there was a certain mineral in the water of that region that made it prone to wine conversion in the first place..."

Tasting a Strong Smell - Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/21/2003 11:04 AM by daschelt

[Top]
[Mind·X]
[Reply to this post]

When Agent Smith says he can "taste your stink", he may not be confused about taste and smell.

I've driven by pig farms where the manure is spread on the fields. It's a powerful stink, and it gets everywhere: in the car with the windows rolled up, and on your tongue. A pig farm is a stink you can taste.

I think Agent Smith used the phrase "taste your stink" to convey the strength of his "experience".

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/21/2003 10:27 PM by lockdog

[Top]
[Mind·X]
[Reply to this post]

The Matrix is one of the better films produced in my lifetime. The cinematography is outstanding and groundbreaking. The Wachowski brothers raised the bar with this production. Unlike pictures that attempt to make a social statement, The Matrix is pure entertainment mixed with a little bit of mind bending.

Therefore, I can appreciate the discussion you are trying to have about the Matrix but you people have either way too much time on your hands or you take yourselves much too seriously.

I'd like to see you apply the same type of critical thought to any number of films that have fantastical aspects. I would hope that would help illustrate how utterly ridiculous this exercise is. People as smart as you should be trying to benefit humanity rather than displaying your willingness to waste an enormous amount of energy on such a trivial pursuit.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/21/2003 11:43 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

I'd like to see you apply the same type of critical thought to any number of films that have fantastical aspects. I would hope that would help illustrate how utterly ridiculous this exercise is. People as smart as you should be trying to benefit humanity rather than displaying your willingness to waste an enormous amount of energy on such a trivial pursuit.


Shame on us for exchanging ideas and sharpening our minds like that!

locating morpheus
posted on 04/23/2003 12:58 PM by prohiro

[Top]
[Mind·X]
[Reply to this post]

sorry if this appears somewhere else in here. anyhow...
1 - how do neo, trinity, and tank locate morpheus' avatar once he's been captured by the agents? i assume they'd take every step to hide him. and why put him in a room with a window, as opposed to a room with thick walls or something?

2 - neo takes "guns, lots of them" from the construct into the matrix. since he seems to know where morpheus' avatar is located, why doesn't he bring a helicopter in from the construct and skip the pain of the lobby shootout and so forth? don't get me wrong, i love the lobby scene, but could it have been avoided?

3 - did anyone else catch what was on the TV behind the kid with the spoon at the oracle's place? man, that's some weird shit.

tim

Re: locating morpheus
posted on 04/23/2003 6:07 PM by Nevada Project

[Top]
[Mind·X]
[Reply to this post]

I also would like to have more information on the scene where he gets all of the guns and the lobby scene. Those are very good points. I also would like to see if anyone has anything to say on the Oracle. Since this was a Matrix program, how did she "know" things? Just wondering; any comments on Neo giving his friend a disk at the beginning of the movie?

Re: locating morpheus
posted on 04/24/2003 1:35 AM by eeu94157

[Top]
[Mind·X]
[Reply to this post]

why they do not put morpheus in a thick walled cell...
i) over-confidence of the agents that no one would dare attack them.
ii) Agent Smith wanted to make Morpheus see the Matrix all through the torture, so that Morpheus would realize that the time of humans is over, and resistance is futile, and that he should give up.

Why did Neo not get a helicopter from the Nebuchadnezzar's comp?
I would guess that for the Nebuchadnezzar's comp, it would be much less computationally intensive to simulate a gun, or a set of guns, than to simulate a whole helicopter. Remember that the comps on the Nebuchadnezzar are so slow that they watch data about the Matrix fly by, and not the rendered scenes of the Matrix.

Re: locating morpheus
posted on 04/24/2003 12:51 PM by prohiro

[Top]
[Mind·X]
[Reply to this post]

fair enough, but i still wonder how they knew where morpheus' avatar was located.

also - why would cypher ever believe that the machines would re-insert him into the matrix? i guess the possibility exists that they would, but more likely they'd take the info they were after and just kill cypher. i suppose he was either insane, or close to it by that point, from the suckiness of living in the "real world."

and - lloyd states that the bug they put into neo is more likely an explosive than a tracking device. well, why can't it be both? it seems more likely to me that when they have neo in the car, he is dangerous to them because at any point an agent could take over his avatar. that is why switch keeps a gun pointed at his head.

t

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/24/2003 1:28 AM by eeu94157

[Top]
[Mind·X]
[Reply to this post]

At one place you mention that the cellphones that the rebels use inside the Matrix are not from the Matrix, but from the Nebuchadnezzar's computer. That is why they cannot use cell phones for getting into and out of the Matrix. But then Neo, when he snatches a cell phone from a bystander in the Matrix, can dial back to the Nebuchadnezzar. The cell phone is definitely from the Matrix in this case, and Neo does not use this Matrix cellphone to get out of the Matrix, because then Tank would have led him just to an empty room, and not a room which had been raided earlier by the agents. Explain!

I think that there are only just a few rooms in the world which can send and receive the rebels. They just want the rooms to be empty so that no one panics and sends back strong signals to the matrix telling agents that something unconstitutional is taking place.

Consciousness and Nondeterminism
posted on 04/24/2003 10:13 AM by j3ss

[Top]
[Mind·X]
[Reply to this post]

I realize I've come late to this party, but it seems that some people are still paying attention so I'll bite. I enjoyed the essay but had a real 'second cat' moment when I got far enough into the last section to realize what Mr. Lloyd was on about. The world I had been living in, that of a humorous and interesting response to various plot elements of a popular motion picture, had had some walls rearranged and suddenly was a world of monomaniacal rant in service of a particular pseudoscientific flavor of logical positivism. It was with no great surprise that I discovered that the vast majority of the following comments were addressed to the author's strange, pop-sci certainties about the phenomenon of consciousness.

If you've read the whole thread I don't have to tell you that many of its points are moot. Mr. Lloyd can spin a fairly intricate yarn, and various posters have gotten entangled in whichever thread they found most infuriating.

Do you want to study consciousness scientifically? Do you think it can be so studied? Lloyd has some good old Vienna positivism to throw at you (not solipsism, barely, because he says "we know that our own brains are indeed conscious", instead of "I know that about my brain"). As Mach could have told you, consciousness can't be discussed in terms of physical entities, because we have direct experience of consciousness and of little else. The reason, I think, that philosophy can cloud the issue as much as it has done is that the best definition of "consciousness" it has produced is "consciousness is experienced by a subject". And a subject would be? This circularity doesn't require science for refutation; mere analysis would do. If you have trouble imagining consciousness as a phenomenon that can be studied, you might want to read _The_Feeling_of_What_Happens_, by Antonio Damasio, for some actual thought about what consciousness is and where it comes from. By all means stay away from Daniel Dennett; he is a distraction.

Do you want to sensibly point out that biochemistry has more to say about neurons than does physics? Now Lloyd can accuse you of relying on "emergent" properties that don't obtain because after all consciousness is an atomic Platonic ideal (that's not what you heard? maybe you missed it because it was dressed up as empiricism: see previous paragraph). By the way, he misses the point that Santa Fe Institute folks make when they talk about "emergence", perhaps because he's not thinking of a helpful example. Think ontogeny, not temperature. Are higher mammals and their qualities in "the same basic space of concepts" as a fertilized egg? Or perhaps "they have no objective existence outside of mathematical theories"? No on both counts, I think. Lloyd's criticism of emergence would apply equally to any abstraction. For instance, one couldn't say "I=V/R" about an electrical circuit, because that equation doesn't take into account the fact that each electron represents a specific quantum of negative charge (a quick sketch below makes the point). I have to say, though, I really liked the story about the Welsh print shop.
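To put a number on that I=V/R aside, here is a minimal sketch in Python (the 5 V and 1 kOhm figures are made up purely for illustration): the macroscopic abstraction and the electron-counting bookkeeping agree without anyone having to model an individual electron.

# Minimal sketch: Ohm's law as an abstraction that holds without modeling
# any individual electron. All values are illustrative only.
E_CHARGE = 1.602176634e-19  # elementary charge, in coulombs

def current_from_ohm(voltage, resistance):
    # Macroscopic abstraction: I = V / R
    return voltage / resistance

def electrons_per_second(current):
    # Microscopic bookkeeping: how many charge quanta per second carry that current
    return current / E_CHARGE

V, R = 5.0, 1000.0          # 5 V across 1 kOhm (made-up circuit)
I = current_from_ohm(V, R)  # 0.005 A
print(f"I = {I} A, carried by about {electrons_per_second(I):.3g} electrons per second")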

Do you observe that most processes in nature and society have a nondeterministic aspect that has nothing to do with quantum mechanics? Well then you're relying on pseudorandomness, and that's bad because... it's bad. Can't classical computers simulate quantum ones? Well, a quantum computer has a "specific implementation". (what?)

As all have seen, there are any number of loose ends to chase. I'll be selective.

The key mistake, which allows all the others to exist in the same fevered mind without producing insanity, is believing that there is some sort of privileged randomness of which all other randomness is a pale imitation. I'll quote:

Yet, if our argument that machines are not conscious can also apply equally to brains, then the argument must be flawed— since we know that our own brains are indeed conscious!

The answer is that there are certain processes in brain tissue that involve nondeterministic quantum-mechanical events. And, working through the chaotic dynamics of the brain, those minute phenomena can be amplified into overt behavior. The nondeterminism opens a gateway for consciousness to take effect in the workings of the brain.

Notice there is never any support for this firm statement of belief ("the answer is that") that this source of randomness is capable of something of which other sources are not. I hope that Mr. Lloyd never got a passing grade in a college probability course, because he completely missed the point. The fact that two particular events occur as results of different physical processes is of no import to any statement that one can make based on their probability distributions. If you want to form a hypothesis that a particular sequence of numbers has constituents which are values generated by a pseudorandom function with a given seed rather than samples of a uniform variable, that's valid. But that doesn't say anything, for instance, about the suitability of the sequence for driving a Monte Carlo simulation of a neuron, although of course one could construct an inappropriate sequence. Similarly, any sequence of actions that seems to "have a cycle" might betray that the actor is really just a simple algorithm. Of course it might indicate that the actor is following the algorithm just to befuddle his psychiatrist.
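To make the probability point concrete, here is a minimal sketch in Python (a toy "neuron" with a made-up firing probability; none of this comes from Lloyd's essay): drive the same Monte Carlo model once with a seeded pseudorandom generator and once with the operating system's entropy pool, and the firing-rate estimates are statistically indistinguishable.

# Toy Monte Carlo "neuron": it fires whenever a uniform draw falls below its
# firing probability. The physical origin of the random numbers is invisible
# to any statistic computed downstream.
import random

def firing_rate(rng, p_fire=0.2, trials=100_000):
    # Estimate how often the toy neuron fires under a given random source.
    fires = sum(1 for _ in range(trials) if rng.random() < p_fire)
    return fires / trials

pseudo = random.Random(42)       # deterministic, seeded pseudorandom generator
entropy = random.SystemRandom()  # draws from the OS entropy pool
print("pseudorandom source:", firing_rate(pseudo))
print("entropy source:     ", firing_rate(entropy))
# Both print roughly 0.2; the distribution, not the source, does the work.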

Some don't think much of a simulation of a neuron, or of a collection of neurons. Insofar as the simulation may or may not correspond to reality, I agree with them. I certainly don't think we understand as much about the computational ability of a biological brain as some people do. But to say that a simulation is a priori impossible, because then we would know the code, is to deny the whole program of science. Science looks for simple rules that explain important aspects of a complex world. It has found a number of them, which it may revise pending new information. For some time it has accepted that its present rules will never lead to a perfect prediction of all future events. It long ago threw out the "clockwork" model of the universe. This whole fixation with finding a gap in the code is analogous to a theological discussion about free will. I.e., it makes me gag.

There is a fundamental misunderstanding of the word "nondeterminism" here. It means randomness, nothing more. It comes as readily from the flipping of a coin as from the entanglement of protons. Those who are uncomfortable with the idea of uncertainty popping up "out of nowhere" just love to gussy up the concept with this new-fangled quantum stuff, as if only this super-powered randomness is up to the task of making the banal decisions that govern an organism's actions in a mundane world. Everything else they could predict, if only they made enough assumptions. As an aside, perhaps the concept they're after, which is not nondeterminism, would be addressed by Kolmogorov complexity theory: read one of Gregory Chaitin's books for details.
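Since Kolmogorov complexity itself is uncomputable, compressed size is a common rough stand-in. Here is a minimal sketch in Python (the byte strings are my own toy data) of the distinction being gestured at: a sequence with a short description compresses dramatically, while bytes from the entropy pool do not.

# Rough illustration: compressed size as a crude proxy for descriptive
# complexity. The cyclic sequence has a tiny program behind it; OS entropy,
# as far as anyone knows, does not.
import os
import zlib

cyclic = b"0123456789" * 1000  # generated by a one-line algorithm
noisy = os.urandom(10_000)     # no known short description

for name, data in (("cyclic", cyclic), ("noisy", noisy)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compresses to {ratio:.1%} of its original size")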

It seems to me that this way of thinking about subjectivity leads eventually to the surely flawed notion that any quantum process or statistic thereof is conscious. Don't misunderstand me; I'm not one of these "information engineers" glossing over the mind-body problem. I've just noticed that a theory of consciousness that fixates on quantum nondeterminism to the exclusion of all other characterizations leads to considering a quantum noise source as the essence of consciousness. That was surely not Mr. Lloyd's intention. It seems distastefully information-theoretic. Noise is not a subject.

To come at this from a different direction, are the actions of a "simple" multicellular organism such as a worm or an ant conscious? Since they can be characterized entirely by ethology and even simulated, I suspect Mr. Lloyd will say they are not. But the brain of an ant has the same access to quantum nondeterminism as that of a human, whatever hand-waving mechanism is proposed for that access.

So, my point is that there is no "special nondeterminism", and even if there was it would be a lousy concept on which to base any explanation of consciousness. At any rate, the Matrix doesn't address this issue nor should it. While the consciousness of the machines is not exactly human, they do have points of view about facts and values, and their actions are imaginative responses to those points of view.

later,
Jess

Sure reminds me a lot of Treknology...
posted on 04/24/2003 4:40 PM by kschang

[Top]
[Mind·X]
[Reply to this post]

The whole point of rationalizing "The Matrix" is much like rationalizing "Treknology" (Star Trek technology).

While I like a lot of the explanations, some of them contradict each other.

For example, the part about the dematerialization of Morpheus after the escape has problems. First the author says the rebels' data is "deleted" when no one is observing it (i.e. an empty room). However, Morpheus was NOT unobserved. Even if observation by Neo and Trinity doesn't count, the wino in the corner observed Morpheus "dematerialize". Indeed, that's the whole setup for Agent Smith to track down Neo and Trinity.

There are other problems as well. The idea that cell phones are actually projected into the Matrix by the Neb is FLAT OUT WRONG. The fact that Cypher used his cellphone as a beacon to lead the agents and SWATs to the building is proof enough of that. Those cellphones CAN call out. Then later, Neo was able to snatch someone else's cellphone to contact Tank in the real world.

I can keep going... But won't.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/25/2003 8:11 PM by LionMage

[Top]
[Mind·X]
[Reply to this post]

OK, here's my impressions... First of all, the author of this article (Peter B. Lloyd) gives me the impression of being a philosophy snob -- he repeatedly makes the assertion that some of the fundamental questions can't be asked or answered except from a philosophical standpoint. Nice try, but that form of elitism flies in the face of true scientific inquiry. I guess his attitude kind of rankles me.

But let me ignore my visceral emotional response to his style and get to the meat of his message. I agree with most of this guy's suppositions, except to note that the machines would not be so wasteful as to not use bioelectric/biothermal energy to their advantage. Yes, I agree that the cold fusion power source would be the primary source for the machines' power; however, the machines just wouldn't throw away any energy source, even an inefficient one such as human body heat. No, they'd use that to offset the cost of keeping farms of humans plugged into the Matrix.

I also personally believe that the humans are doing more than acting as a parallel processing compute farm to monitor and adjust the fusion reaction(s) going on in their fusion power plants. It seems to me that the machines are probably harnessing human brain meat for redundant storage and computation in other areas as well. (This is suggested strongly in many articles posted on the Matrix web site, as well as the Matrix web comics published on that site.)

OK, so what was my biggest gripe with Lloyd's article? His discussion of consciousness versus intelligence. He talks about experience being the hallmark of consciousness. He gives a rather ridiculous example of jamming one's thumb in a door and the resultant pain as being indicative of the immediacy of experience. He talks about one set of experiences not being confused with other kinds of experience, and implies that this is somehow the hallmark of a conscious mind.

Let's perform a thought experiment. Say I do microsurgery on someone's thumb, and interchange the neurons that connect to pain receptors with those that connect to, say, receptors for temperature. (There are actually several different kinds of receptors that sense conditions such as heat and cold, but let's pick the ones responsible for the sensation of cold for our thought experiment.) Now, after the surgery is done and the thumb is healed, we jam the subject's thumb in a door, hard. Do they experience pain? No, they experience the sensation of cold, even though their other senses register pressure (yes, pressure sensors are different from pain receptors in human skin) and they might see blood if you injured the thumb badly enough. So we can see that the whole language of experience versus interpretation of data is fallacious. The "interpretation" is done by the neurons that sort different sensory inputs and feed them to the appropriate parts of the brain/spinal cord.
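For what it's worth, the thought experiment boils down to a few lines of Python (a deliberately cartoonish sketch with made-up channel names): the "experience" reported depends only on which central pathway a receptor's signal is routed to, so rewiring the routing table swaps the percept without changing the stimulus at the thumb.

# Cartoon version of the receptor-swap surgery: the percept is determined by
# the routing, not by what actually happened to the thumb.
normal_wiring = {"pain_receptor": "pain", "cold_receptor": "cold"}
swapped_wiring = {"pain_receptor": "cold", "cold_receptor": "pain"}

def percept(stimulated_receptor, wiring):
    # Return the sensation the brain registers when a given receptor fires.
    return wiring[stimulated_receptor]

print(percept("pain_receptor", normal_wiring))   # thumb in door -> 'pain'
print(percept("pain_receptor", swapped_wiring))  # same injury   -> 'cold'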

Let's consider another phenomenon that's very real: synesthesia. This is a disorder where some types of sensory information are misinterpreted by the human brain as other kinds of sensory information. A sufferer can "see" smells and "hear" colors. These effects can also be induced by certain psychotropic drugs. So clearly, it *is* possible to confuse the human brain and cause one type of sensory "experience" to be perceived as another. So much for Lloyd's argument about the AIs of the Matrix not understanding that experiences are not interchangeable. They *are*.

Lloyd then paints himself into a corner and is forced to concede that his arguments against conventional computers being conscious could just as easily apply to the human brain, if the brain is viewed as a neurochemical computer. So how does he get out of this sticky wicket? He performs a hand-waving exercise which would make any MIT undergrad proud: he appeals to quantum mechanics! Yes, this is the same dubious argument used by Penrose and others... Since he asserts that quantum mechanical events occur inside the human brain, and that quantum mechanical systems are inherently non-deterministic, these quantum mechanical events introduce a doorway through which consciousness can enter the human brain.

First off, to say that quantum mechanics implies non-determinism is a vast over-simplification of the situation. There are some interpretations of quantum mechanics that suggest that determinism still plays a role in the underlying workings of quantum events, but you'll never see those workings (at least, not directly) because the act of observation affects the process you're observing. Quantum mechanics isn't some magic wand you can use to confer special properties to the human brain. After all, quantum level phenomena occur in conventional computer hardware all the time -- usually, a cosmic ray or other particle causes a bit to flip somewhere in a computer's RAM or in the CPU. Most such events result in a system crash or in erratic behavior. Lloyd suggests that a quantum computer, properly constructed, could make a computer that is just as capable of consciousness as a human brain is, although he is careful to point out that in this case, the quantum computation mechanism should be one that isn't simply designed to perform calculations that a conventional computer can also perform, just less efficiently. But he doesn't suggest what the *right* form of quantum "input" into the system should be. I'm left thinking that what we're winding up here is a vague suggestion that the quantum input into the system is some form of stochastic perturbation of the normal flow of logical data processing and pattern matching. Perhaps it's even as simple as, "If there are several possible ways to solve a problem or interpret a given piece of information or a given sensory input/experience, then create a wave function representing the superposition of all possible states and collapse it to pick a result." Or put even more simply, you have a glorified random number generator which truly *is* random, and you use it to choose among several possible states. Is that what consciousness is?
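If that reading is right, the "glorified random number generator" picture fits in a few lines of Python (my own toy framing, with made-up candidate states, not anything from Lloyd's text): collapse several candidate interpretations to one by drawing from a genuinely unpredictable source. The selection logic would be identical with a seeded pseudorandom generator, which is exactly what makes the picture so unsatisfying as a theory of consciousness.

# Toy "collapse": pick one of several candidate interpretations using the
# OS's cryptographic entropy source. Swapping in a seeded PRNG would change
# nothing about the logic of the choice.
import secrets

candidates = ["interpret as threat", "interpret as joke", "ignore it"]

def collapse(options):
    # Draw one option from a genuinely unpredictable source.
    return secrets.choice(options)

print(collapse(candidates))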

Well, some studies suggest that neurons can perform some very complex operations internally -- suggesting that they act as more than mere simple switching stations, turning on when a certain input threshold is met. I can see how, if some mechanism inside a neuron (such as the microtubules, which have interesting quantum mechanical properties) can perform sophisticated and even "non-deterministic" computations, the power of the human brain might be vastly greater than previously estimated (by one or two orders of magnitude). It's even possible you could have some kind of quantum mechanical recursive process going on inside each and every neuron in your brain... Maybe one quantum process creates a superposition of wave functions to represent its input data, and another quantum process monitors the first process, acting as the "observer," and then a third process monitors the previous process... ad infinitum...

The core question here is, is consciousness computable? I think we don't really have an adequate answer to that, except to say that the human brain somehow computes (in the broad sense of the term) and seems to be conscious. So if the human brain can do it, other media should be able to do the same thing -- maybe even better.

Even with all this, I submit that less than 1% of the time the human brain operates does anything resembling consciousness have an effect on the operation of the brain. That is to say, over 99% of the time, we're basically acting like deterministic machines. I think consciousness is pretty overrated, actually; certainly, it's a commodity more prized by some humans than others. We all have an amazing capacity and potential for conscious thought and introspection, but I think most people don't use that capacity or potential to its fullest, and certainly don't use it most of the time. Therefore, I'm not even sure why Lloyd bothers with his hand-waving exercise in discussing the consciousness of the machines of the Matrix; certainly, Agent Smith would pass any version of the Turing Test you'd care to administer. I suspect Lloyd's exercise was the result of some human hubris, the need to somehow differentiate the humans plugged into the Matrix from the machines who have enslaved them. "Well, we might be slaves to the machines, but we're superior to them in at least one way!"

Finally, I think Lloyd reads *way* too much into Hugo Weaving's delivery of Agent Smith's diatribe against humanity, during his interrogation of Morpheus. I personally have said things similar to Agent Smith's "taste the stink" line (again, synesthesia or simply bad mixing of metaphors, or merely an acknowledgement that the senses of taste and smell are tightly linked together physiologically in the human body, since they both involve chemoreceptors). Am I not conscious because I am confusing two realms of experience? Give me a break.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 04/30/2003 8:01 PM by mindxx

[Top]
[Mind·X]
[Reply to this post]

I think we need to ditch the ad hominems.


Otherwise good minds will leave this board & there are quite a few here.




My issue is that the singularity is near for a reason not associated with Moore's law, and so any questions or predictions AT ALL are untenable.



I calculate 24 months tops, and can back this up if anyone wishes.

Ad Hominems?
posted on 05/01/2003 12:00 AM by LionMage

[Top]
[Mind·X]
[Reply to this post]

What does anything you wrote have to do with the article, or anything I wrote about the article? All you've done is make a vague accusation that I've engaged in ad hominem attacks (which is patently false), rather than make any comments whatsoever about what I've written.

Please don't waste my time with off-topic posts or replies.

Wrong about the red pill...
posted on 05/01/2003 9:47 AM by neworderdance

[Top]
[Mind·X]
[Reply to this post]

I'm not gonna read through every message to see if somebody said this already, but as everyone who's seen this movie a million times (mainly for entertainment value) knows, the red pill is a "tracking device" used to figure out where his actual body is outside of the Matrix. The crew then uses their 'machines' that you see filling the room to sever his connection from the Matrix. The physical and software reasoning for this concept should be obvious.

Re: Wrong about the red pill...
posted on 05/01/2003 12:30 PM by grantcc

[Top]
[Mind·X]
[Reply to this post]

In the context of the story, Neo is locked away in a vault in the Matrix. The pills are just illusions -- part of a dream created in his mind by the Matrix. He doesn't really take anything at all. That, to my mind, is one of the problems with the story. It keeps confusing what happens in the dream with what happens to the dreamer. It's only after Neo has been released from the vault that what happens to him actually affects his body. Before that, all experience is generated by the Matrix as a form of virtual reality.

The question in my mind is how do the writers justify the rebels' ability to break into the shared illusion and change what is going on in billions of minds from outside the Matrix? Neo and the others are acting as a hack into the system or a sort of virus that disrupts the system. But it would seem to me to make more sense to attack it from outside than from inside.

Destroy the computer that is generating the illusion and you shut down the system and set everyone free at once. Of course, they'd probably all die from lack of experience with the real world, but the audience would feel good about the idea of slaves being freed and the hope of humanity surviving without machine dominance. On the other hand, the story wouldn't be nearly as much fun to watch.

Grant

conscience
posted on 05/08/2003 11:35 AM by thegabe2


Hi,

I guess I have to specify a few things before making any comments:
1. English is not my mother tongue.
2. I don't have nearly as much knowledge as several of the people who have posted in this forum, yet I think I grasp what they say, probably because of my interest in philosophy, physics, and informatics. Nonetheless, I will not be able to express myself in the same polished manner they have.

I have thoroughly enjoyed the discussion about conscience and its definition.

I think that there is a way to "almost" have the monist view and the neural-architecture view meet: through the analysis of the thinking process.

In my super-humble view, it is imaginable to derive the _subjective_ (a keyword that I think has a crucial role in the discussion, but is often overlooked by the monist side) experience of conscience (conscience is _not_ objective; it is an experience - almost a thought - experienced only by the mind that experiences it) through the association, overlap, addition and so on of different concepts, all stored in a "place": the place where all the information, the thinking process and the memories are contained.

Alright, I know I make no sense. I mean, I know what I want to say; I just suck at saying it. Please bear with me. I'm sure that if you'll do me the honour and favour of spending time trying to figure out what I am saying, you'll find a way to understand me... thank you.

So let's try to rephrase. Conscience, in my view, is an experience. It's a "meta" experience, the experience of having an experience. Once this happens, the identification of self happens, and therefore "conscience". "Cogito ergo sum" explains it perfectly: "I experience myself thinking, therefore I am" is a viable translation of Descartes' sentence.

This "meta-experiencing" can then be taken a step further, to experiencing oneself experiencing the experience, and so on - though probably only to a certain degree.

Now: how do we get to have the thought "I am experiencing"? Well, by associating experience with concepts such as "I", "separation", "identity", and so on, I guess. I don't know.

But I think that, just intuitively, we could imagine a number of associations, correlations, observations, etc. between "concepts" or "ideas" and data from experience, memory, etc. that could generate this "meta-awareness".

We can then imagine that the brain is where these "thought operations for meta-awareness" mechanically take place.

I still wonder... did I convey the message?

So the problem is not whether there's a way to graph or mathematically model the experience of colour, but whether it is possible to obtain the _subjective_ experience of conscience from the thinking processes.




Re: conscience
posted on 05/08/2003 7:20 PM by thegabe2


I am sorry... substitute "consciousness" for "conscience" wherever the latter appears.

Thank you.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/08/2003 8:21 PM by feor_1300


Hmm, interesting, to say the least. I didn't have the patience to read all of the other posts in this discussion, so I apologize if anyone's brought these points up already, but I see only two problems with your whole essay.

1) Cell Phones: The argument you make, about the rebels' cell phones not running on the Matrix's servers and so not being usable to enter and exit, makes sense; however, it fails to explain why a cell phone within the Matrix could not be used for the entry/exit procedures. Since a non-rebel cell phone does exist on the Matrix's servers, there should be no reason, under your explanation, why it couldn't be used.

2) Consciousness and the Machines: The fact that you use the machines' lack of belief in a smell doesn't prove that the machines lack consciousness. An Agent, which would be nothing more than a program, would never have had a means of physically experiencing smell, and so would not understand its concept. All that is really needed for consciousness, according to your argument, is a degree of uncontrolled randomness. This is actually possible with computers today: random fluctuations of electromagnetic fields could easily add in the random factor needed, even barring quantum computing. A large portion of modern-day processor design goes into preventing this kind of random fluctuation; I wonder what would happen if we allowed it, allowed the computer to begin accessing lines of code in an unordered manner.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/09/2003 3:57 AM by PeterLloyd


Re cell phones: Yes, this is a glitch in my essay. From what I wrote in the essay, it would follow that the rebels could exit the Matrix by using a regular ('in-Matrix') person's cell phone. Jack Johnson emailed me about this a few weeks ago, and pointed out that there is a scene towards the end of the film where Neo snatches someone's cell phone, but does not (and hence presumably cannot) use it to exit the Matrix. He suggested that this was because the Matrix uses a virtual network (or 'logical' network, to avoid the confusing double use of 'virtual') for handling cell phone communications. I think that this is basically right. The rationale would be as follows: if we suppose that the virtual world is (for reasons of performance) divided up into subsets corresponding to rooms and other spaces of visibility, and each such subset has its own subnet of network addresses, then it would make sense for a cell phone to have a fixed 'logical' network address, which is translated into a temporary local 'physical' address (inside the appropriate subnet) by an address server. When a call is made on a cell phone, only the 'logical' address is transmitted. But the Nebuchadnezzar needs the 'physical' address to upload the rebel.

(Of course, I am using the word 'physical' in the computer-science sense. This 'physicality' is relative to the virtual world of the Matrix. The alternative to the terminology 'physical' v. 'logical' would be 'virtual' v. 'virtual-squared'.)

Fixed-line 'phones, on the other hand, because they don't move between different spaces, would transmit their local 'physical' address.
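
For concreteness, here is a minimal Python sketch of the kind of address server this rationale implies. Everything in it (the class and method names, the sample number, the subnet labels) is invented for illustration; nothing like it is evidenced in the film.

    # Hypothetical sketch of the 'logical' vs 'physical' phone addressing
    # described above; all names and numbers here are invented.
    class AddressServer:
        def __init__(self):
            self.mapping = {}      # logical address -> (subnet, local 'physical' address)
            self.next_local = {}   # subnet -> next free local address

        def register(self, logical_addr, subnet):
            # Called whenever a cell phone moves into a new room-subnet.
            local = self.next_local.get(subnet, 0)
            self.next_local[subnet] = local + 1
            self.mapping[logical_addr] = (subnet, local)

        def resolve(self, logical_addr):
            # What the Nebuchadnezzar would need: the temporary 'physical' address.
            return self.mapping.get(logical_addr)

    server = AddressServer()
    server.register(logical_addr="555-0100", subnet="hotel-lobby")

    # A call from the cell phone transmits only "555-0100"; uploading a rebel
    # would additionally need server.resolve("555-0100"), i.e. ("hotel-lobby", 0),
    # which the rebels cannot obtain. A fixed-line phone, by contrast, is bound
    # to its subnet and can transmit its 'physical' address directly.
    print(server.resolve("555-0100"))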

One thing that still puzzles me is why, in the red pill scene, they dramatically use a rotary-dial telephone. Some people have suggested from this that the rebels must use analogue phones rather than digital phones. Well, the most that that scene would indicate is the use of pulse dialling rather than tone (DTMF) dialling. As the name suggests, DTMF (= dual-tone multi-frequency) dialling is carried on an analogue line. So, these phones are probably analogue anyway, irrespective of whether the dialling is by pulses or tones. (They *might* be digital, e.g. using VOIP = voice over internet protocol, but that is not evidenced in the film.) So, there is an unexplained 'glitch' (as Glenn Yeffeth would say) of *why* they use pulse *dialling*.

The only thing I can think of is to avoid detection. If the Matrix is looking out for a particular number (the Nebuchadnezzar's number) as a series of tones, then using pulses with slightly varying intervals between successive pulses might make the call less easily detectable. This is not a very satisfying explanation, though.

(OK, the phone *could* convert the rotary dialling to tones, but then why would the Wachowski brothers have gone to the trouble of using a rotary-dial phone?)
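
To make the pulse-versus-DTMF contrast concrete, here is a small Python sketch. The DTMF frequency pairs are the standard ones; the 'jittered pulses' function is only the speculative avoid-detection idea above, not anything the film establishes.

    # Sketch of the two dialling schemes. DTMF pairs are the standard ones;
    # the jitter is only the speculative avoid-detection idea, nothing more.
    import random

    DTMF = {  # digit -> (low Hz, high Hz)
        '1': (697, 1209), '2': (697, 1336), '3': (697, 1477),
        '4': (770, 1209), '5': (770, 1336), '6': (770, 1477),
        '7': (852, 1209), '8': (852, 1336), '9': (852, 1477),
        '*': (941, 1209), '0': (941, 1336), '#': (941, 1477),
    }

    def dtmf_dial(number):
        # Tone dialling: each digit is a fixed, easily recognised pair of tones.
        return [DTMF[d] for d in number]

    def pulse_dial(number, jitter=0.01):
        # Pulse dialling: digit d becomes d make/break pulses (0 -> 10 pulses),
        # here with slightly varied gaps between successive pulses.
        trains = []
        for d in number:
            count = 10 if d == '0' else int(d)
            trains.append([round(0.1 + random.uniform(-jitter, jitter), 3)
                           for _ in range(count)])
        return trains

    print(dtmf_dial("911"))
    print(pulse_dial("911"))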

This is one of the many remaining 'glitches'. We'll have a new batch when "Reloaded" comes out.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/09/2003 5:36 AM by PeterLloyd


re: Conciousness and the Machines:

<<The fact that you use the machines' lack of belief in a smell doesn't prove that the machines lack consciousness.>>

There's no suggestion that the machines don't *believe* in smell, only that they can't experience it for themselves. In any case, this is a minor corroborative detail. The main argument is philosophical (see below). But some people point to Agent Smith's speech to Morpheus as evidence for machine consciousness; and my comments on this speech in the essay were intended to show that that speech does not unambiguously point to machine consciousness.

<<An Agent, which would be nothing more than a program, would never have had a means of physically experiencing smell, and so would not understand its concept.>>

That would follow only if you accepted the anti-physicalist premise (i.e. that consciousness is *not* just the same thing as the physical electrochemical signal processing in the brain). And that premise is precisely what I am arguing for.

(I assume that by 'means of physically experiencing smell', you are not referring to the nose and olfactory sense organs. For, a human can lose his nose and all the olfactory tissue, but still experience smell just by having the relevant afferent nerve fibres stimulated.)

<<All that is really needed for consciousness, according to your argument, is a degree of uncontrolled randomness.>>

No. I said it was a *necessary* condition, I didn't say it was a *sufficient* condition. In fact it would be massively implausible for it to be a sufficient condition. (Consider this analogy: an aerial is a necessary requirement for picking up a television broadcast. But it is not sufficient. You also need a working television set.)

Also, there is an ambiguity in your expression, "needed for consciousness". What I said was that physical nondeterminism is a necessary condition for the *expression of consciousness in the physical world*. It might not be a necessary condition for the *existence* of consciousness. Consciousness might be able to exist and operate outside the physical world.

<<This is actually possible with computers today: random fluctuations of electromagnetic fields could easily add in the random factor needed,...>>

If you just mean EM fields that happen to be external to the processor, i.e. environmental noise, then ... no, sorry, that's not nondeterministic. A computer programmer may casually call them 'random', meaning that they are random in relation to his software. But they are not *physically* nondeterministic. The state of any such field will be determined by the state of the universe at the preceding moment in time.
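
A toy illustration of 'random relative to the software' versus genuine nondeterminism: a seeded pseudorandom generator looks like noise to the program that consumes it, yet rerunning it with the same seed reproduces the sequence exactly. (Python sketch; the seed value is arbitrary.)

    # 'Random relative to the software' vs reproducible determinism:
    # the same seed always yields the same 'noise'.
    import random

    def noisy_run(seed):
        rng = random.Random(seed)
        return [rng.random() for _ in range(3)]

    print(noisy_run(42))
    print(noisy_run(42))   # identical output: nothing nondeterministic happened

Sampling an external EM field only pushes the 'seed' out into the environment; on the premise above, the field's state is still fixed by the preceding state of the universe.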

The point of the 'gateway to consciousness' is that (a) consciousness is not physical, but (b) consciousness has physical effects, and (c) we would prefer an explanation in which physical laws are not violated.

Two candidate mechanisms are: (a) quantum mechanical events, (b) unobserved initial conditions.

<<I wonder what would happen if we allowed it, allowed the computer to begin accessing lines of code in an unordered manner.>>

What would happen if you kept a bunch of dismembered human brain cells alive in a petri dish? Each brain cell might (conceivably) act as a gateway to consciousness (via its microtubules, according to Penrose & Hameroff), but in a fragmentary and dislocated way that would not yield a coherent conscious mind. Likewise, if you introduced QM noise into your computer, you might tap into conscious noise, but not an organised mind. To tap into a mind, you would have to organise the QM events in a manner compliant with whatever interface protocol the brain uses when it communicates with the conscious mind.

BTW What's interesting about this is that the gateway to a particular conscious mind is not spatially restricted, because the conscious mind itself is non-spatial. So, if you could figure out the correct interface to a given person's mind, you could build an android with the relevant interface and have full-on telepresence in the android. At first it would be confusing for the person to have two bodies, but people could be trained to get used to it. (Needless to say, nobody's going to fund that work, because it runs up against the orthodox faith in physicalism...)

This cannot violate the relativistic impossibility of instantaneous transmission of information, so multiple gateways to a single consciousness would have to be subject to time-space constraints.

Peter Lloyd

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 7:47 AM by jinhangookin


First off, Peterlloyd, I want to congratulate you on a great essay. It's well put together and you've answered many questions that I've had.

There's one thing that I can't shake, though. (I skipped a big section in the posts, so forgive me if I'm repeating.)

As someone stated earlier, when Agent Smith gets destroyed by Neo, the other agents run in fear of meeting the same fate. The foremost thought in their minds is self-preservation, a sign of consciousness.

Also, when Morpheus is explaining to Neo the whole existence of the Matrix, he says "mankind was united in celebration as we gave birth to A.I." and then, "a single CONSCIOUSNESS that spawned a whole race of machines." This is most likely the robot in the Animatrix (Second Renaissance Part I) who killed his owners because he didn't want to be destroyed. Essentially, he simply "did not want to die," also a sign of consciousness. Just because a being cannot see color or smell scents, it doesn't necessarily mean it's devoid of all consciousness, does it? Agent Smith might've been confusing some senses, because he might not be able to experience them. But self-preservation and fear of being eliminated are a good argument for consciousness, I think.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 2:58 PM by blue_is_not_a_number


But self-preservation and fear of being eliminated are a good argument for consciousness, I think.


Chess programs already do that in some sense. Does it mean they are conscious? (My answer: No.)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 6:06 PM by jinhangookin


Incorrect, chess programs do not work like that. Chess programs only work off the mistakes you make. In essence, a computer chess program plays in a certain way until you make a mistake, then it works off that mistake so that it's only a matter of time before it opens a hole in your defences. So, of course depending on the difficulty level, you basically need to play a perfect game.

How this works and actual raw fear of dying are completely different things. A chess program doesn't close up (run away) when you're inevitably going to defeat it.

To quote laforge on the second post:

"At the very end when Neo "killed" Agent Smith, The other 2 Agents responded with what looks like fear and abandon the job at hand. On a purely logical level (i.e., Software) they were aware of the progress of the Centennials and would thus know that staying around for a few more minutes would let them get rid of this especially dangerous human."

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 10:10 PM by blue_is_not_a_number


Incorrect, chess programs do not work like that. Chess programs only work off the mistakes you make. In essence, a computer chess program plays in a certain way until you make a mistake, then it works off that mistake so that it's only a matter of time before it opens a hole in your defences. So, of course depending on the difficulty level, you basically need to play a perfect game.


At the time I was interested in this subject, chess programs used the so-called minimax algorithm to selectively follow possible combinations up to a limited depth, and then perform a static evaluation, of course with the highest negative value for the "elimination" of one's own king, and the highest positive value for the "elimination" of the opposing king. I think that is all that is needed to explain the behavior of the agents.
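
For what it's worth, here is a minimal minimax sketch (Python, over a hand-built toy game tree, not a real chess engine) showing how 'evasive' play falls out of the arithmetic: the leaf values are static evaluations, with the most negative value standing for the loss of one's own king.

    # Minimal minimax over a hand-built game tree (not a chess engine).
    # Leaves are static evaluations; -10000 stands for 'own king eliminated'.
    def minimax(node, maximizing):
        if isinstance(node, int):          # leaf: static evaluation
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # First branch loses the king after the opponent's best reply (-10000);
    # the second branch merely loses a little material (-3).
    tree = [
        [-10000, 5],
        [-3, -1],
    ]
    print(minimax(tree, maximizing=True))  # -3: the program 'runs away'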

How this works and actual raw fear of dying are completely different things. A chess program doesn't close up (run away) when you're inevitably going to defeat it.


It is only the _feeling_ of fear that cannot be "felt" by a computer. If the only move to avoid checkmate is to move the king backwards, a chess program will do so in a matter of milliseconds.

To quote laforge on the second post:

"At the very end when Neo "killed" Agent Smith, The other 2 Agents responded with what looks like fear and abandon the job at hand. On a purely logical level (i.e., Software) they were aware of the progress of the Centennials and would thus know that staying around for a few more minutes would let them get rid of this especially dangerous human."


I wouldn't see it that way. The agents had their usual poker face, and were just looking at each other more in a sense of "better get out of here". What Neo did was simply unexpected and unexplainable to them (as Agent Smith reveals in "Matrix Reloaded"), but they most likely had to assume that he "terminated" Agent Smith. Once Neo knew he could do that (believing it was a good thing to do), he might have done it with both remaining agents in 10 or 20 seconds.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/16/2003 10:31 PM by PeterLloyd


when Agent Smith gets destroyed by Neo, the other agents run in fear of meeting the same fate. The foremost thought in their minds is self-preservation, a sign of consciousness.


If you were programming a virtual entity such as an Agent, surely you would program in precisely this kind of evasive action? So, how does their running away count as evidence for their having conscious experiences such as fear?

Also, when Morpheus is explaining to Neo the whole existence of the Matrix, he says "mankind was united in celebration as we gave birth to A.I." and then, "a single CONSCIOUSNESS that spawned a whole race of machines."


That's the consciousness of the people who built the machines, surely?

This is most likely the robot in the Animatrix (Second Renaissance Part I) who killed his owners because he didn't want to be destroyed.


I've not seen that Animatrix episode. I've seen only the one that accompanies the film Dreamcatcher. I didn't know the others were available.

I shouldn't really comment on something that I haven't seen, but ... it sounds like this robot is most likely performing some self-preservation function that was programmed in.

Note that self-preservation need not be programmed in by the original programmers. It could evolve. If you have machines that can design and build machines - i.e. reproduce - then a function of self-preservation is likely to arise. (I.e. machines not programmed for self-preservation will 'die' out, while those that are so programmed will thrive and multiply.)
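
A toy simulation of that point, with entirely made-up survival rates: machines that happen to carry a self-preservation routine survive hazards more often and get copied, so the trait spreads without anyone designing it in.

    # Toy simulation (made-up parameters): self-preservation spreading by
    # selection among self-reproducing machines, without being designed in.
    import random

    random.seed(0)
    POP = 1000
    population = [{'self_preservation': random.random() < 0.1} for _ in range(POP)]

    for generation in range(20):
        # machines with the routine evade most hazards; the rest often 'die'
        survivors = [m for m in population
                     if random.random() < (0.9 if m['self_preservation'] else 0.5)]
        # survivors build faithful copies of themselves, back up to a fixed size
        population = [dict(random.choice(survivors)) for _ in range(POP)]

    share = sum(m['self_preservation'] for m in population) / POP
    print(f"fraction with self-preservation after 20 generations: {share:.2f}")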

Essentially, he simply "did not want to die," also a sign of consciousness.


Not really. The verb 'want to' is often used in technical environments without attributing consciousness. E.g. my laptop has just been sending out SMTP packets, which unfortunately bounced back because this wireless system does not have an SMTP server. So, we could say that my laptop 'wants to' send some SMTP packets. But my laptop is not conscious.
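
For example, the little retry loop below 'wants to' deliver its message in exactly this attenuated sense (the server name is hypothetical; with no SMTP server reachable, it will simply fail and give up):

    # The loop 'wants to' deliver its message only in this attenuated,
    # non-conscious sense: it retries a (hypothetical) server and gives up.
    import smtplib
    import time

    def try_to_send(message, server="smtp.example.net", attempts=3):
        for _ in range(attempts):
            try:
                with smtplib.SMTP(server, timeout=5) as smtp:
                    smtp.sendmail("laptop@example.net", ["peter@example.net"], message)
                    return True
            except (OSError, smtplib.SMTPException):
                time.sleep(1)   # no server reachable; the packets 'bounce'
        return False

    print(try_to_send("Subject: hello\r\n\r\ntest"))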

Just because a being cannot see color or smell scents, it doesn't necessarily mean it's devoid of all consciousness, does it?


No, of course not.

Agent Smith might've been confusing some senses, because he might not be able to experience them.


Indeed. But my point was that there is circumstantial evidence in the film that Agent Smith is not conscious. This particular point is suggestive rather than definitive.

The basic point is, rather, this: a deterministic software system cannot express consciousness; only a system that incorporates physical non-determinism (e.g. quantum mechanics) can express consciousness; but there is nothing in the film to indicate that the machines are anything other than conventional deterministic systems. Since an interpretation of a film must be limited to what is evidenced by what we see on the screen, we should infer that the machines are not conscious.

But self-preservation and fear of being eliminated are a good argument for consciousness, I think.


Not at all. They are precisely what a programmer would code in as a deterministic function.

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/18/2003 4:23 AM by jinhangookin


Hahaha... I can clearly see, in every aspect, that I'm outbrained in the skill of debate. By the way, if you want to see the Animatrix episodes (I think that four of them have been officially released), they are on the Matrix website:

whatisthematrix.warnerbros.com/rl_cmp/animatrix_ht ml.html

The others will be on the dvd... if you can find the unreleased ones online, you should really watch them... they are pretty interesting.

BTW, that robot I was talking about? In the Animatrix series (Second Renaissance Part I), it is quoted as saying that it simply "does not want to die." As for your computer that sends SMTP packets, it has been ordered to do so. That robot was not programmed to kill its owners in order to survive. It followed the urge to kill them, probably outside the boundaries of its original programming. (Regardless, you'll probably prove me wrong anyway, but I'd like to hear your response.)

I have a question regarding how the rebels exit the Matrix. If you have not seen Matrix: Reloaded, don't read on; although it's not a big one, it's a SPOILER.

SPOILER SPOILER SPOILER SPOILER SPOILER



Toward the beginning of the movie, two rebels are seen trying to exit the Matrix. They get to a hard-wired phone, it rings, and the first one successfully gets out. The second one puts the phone back on the hook and waits for the call. At this point, right as the phone starts to ring, Agent Smith enters the scene, turns the second rebel into a copy of himself, picks up the phone, and disappears. Later on, we find out that Agent Smith has occupied the mind of the second rebel in real life and is trying to kill off Neo just as he is about to board the Nebuchadnezzar from Zion.

So my question is, if leaving the Matrix is just supposed to delete your register, how does Smith's psyche enter a body outside the Matrix? Could the Matrix be existing within another matrix that all the rebels think is the real world? That's the best explanation I can come up with. This, coupled with the fact that Neo can willingly stop the Sentinels in the "real world", is good evidence of the existence of another matrix. Any other theories, anyone?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/18/2003 4:26 AM by jinhangookin


Whoops, within that link, there isn't supposed to be that space between the "ht" and the "ml".

whatisthematrix.warnerbros.com/rl_cmp/animatrix_ht ml.html

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/19/2003 1:44 AM by PeterLloyd


That robot was not programmed to kill its owners in order to survive. It followed the urge to kill them, probably outside the boundaries of its original programming.


I would guess that the robot was programmed with some general rule about self-preservation, and inferred that to do this it had to do the killing. There need not be any 'urge'. But I should shut up about this until I've seen the relevant Animatrix episode.

--

Regarding Agent Smith being partly uploaded to the rebel Bane in what we had previously thought was the real world ... yes, I completely agree with your inference that the Matrix is inside a Meta-Matrix. (Neo's knocking out the Sentinels is the definitive proof.)

So: Bane has an avatar in the Meta-Matrix, which spawns a second avatar in the Matrix. There is some shared memory between them. (We know about this from Matrix I, e.g. Neo bleeds during one of his training sessions.) Agent Smith loads a loathing of Neo into that shared memory. Smith is not fully loaded into Bane's avatar in the Meta-Matrix, as Bane still looks like Bane.
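
Purely as a sketch of that reading (every name and structure below is invented for illustration), the two avatars can be pictured as holding references to one shared memory segment:

    # Speculative sketch of the Bane/Smith shared-memory reading; every
    # name and structure here is invented purely for illustration.
    shared_memory = {"pain": 0, "loathing_of_neo": 0.0}   # seen by both avatars

    class Avatar:
        def __init__(self, name, world, shared):
            self.name = name
            self.world = world
            self.shared = shared          # a reference, not a copy
            self.appearance = name        # Bane still looks like Bane

    matrix_bane = Avatar("Bane", "Matrix", shared_memory)
    meta_bane = Avatar("Bane", "Meta-Matrix", shared_memory)

    # Smith writes into the shared segment from inside the Matrix ...
    matrix_bane.shared["loathing_of_neo"] = 1.0

    # ... and the meta-avatar sees it, though its appearance is untouched.
    print(meta_bane.shared["loathing_of_neo"], meta_bane.appearance)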

Peter

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/19/2003 4:03 PM by blue_is_not_a_number


Peter to jinhangookin:

Regarding Agent Smith being partly uploaded to the rebel Bane in what we had previously thought was the real world ... yes, I completely agree with your inference that the Matrix is inside a Meta-Matrix. (Neo's knocking out the Sentinels is the definitive proof.)


Certainly "Matrix Revolutions" must bring more substantial surprises than "Matrix Reloaded", although learning that the _role_ of the "One" and the "Oracle" itself (though not everything they do) are part of the matrix' design is not peanuts either.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/20/2003 5:51 AM by hsuone


Thank you for your great article! It brought in some interesting theories which fit the story of the Matrix very well, but it left me with a few unanswered questions too...

First off, in your article you state that rebels can be exited from the Matrix using this "emergency exit" technique of transforming avatars to an invisible form. Then you say that: "From a software perspective, the data module is still on the register but simulating a body indistinguishable from thin air. Later, when the scene is no longer being observed by anybody, the module will be deleted."
BUT, in the movie, in the subway scene, Trinity is removed from the Matrix this way while other people are in the station (the drunk guy, who is shocked by what he sees - Trinity vanishing into thin air).

So, *if Trinity remains in the Matrix's registers, how can she already be unplugged on the Nebuchadnezzar?*

Another question that has been troubling me is: *Why must the rebels enter/exit the Matrix one by one?*

When the Nebuchadnezzar makes a "telephone call" to the Matrix, and the answering machine answers, giving the network address for the rebels to use, why can't they just upload everyone to the Matrix at the same time? The address is, after all, the same for everyone in the same location. Also, why do they have to leave the Matrix one by one? In the subway scene, Trinity gets out first, leaving Neo to fight with Agent Smith, which could have been avoided if they had just both been deleted from the Matrix.


Sorry for my English, hopefully you understood :)

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/21/2003 9:07 PM by mal1ce


I'm sure that I'm out of my depth here, but I had a question about the machines becoming conscious. If they never did achieve consciousness then why did they revolt against the humans in the first place? You stated:


Everything that the android says and does is fully accounted for by its software.

If this is the case, I don't see why the programmer(s) of the machines would have designed the downfall of mankind into the androids' software. In fact, it would seem that they would have done quite the opposite and designed in safeguards that prevented the androids from harming humans in any way. Given this line of reasoning, it seems to me that at some point one of the androids had to make the leap from its given program to self-awareness/consciousness in order to have the free will to wish for something more or different, causing it to turn on its owner. Where is my thinking flawed in this?
Thanks

Agent Smith Consciousness and Red Pill
posted on 08/07/2003 12:55 AM by tsosczb


Someone tried to ask this but got a crap response.

Agent Smith does not understand the human senses because he only knows them from inside the Matrix. So he wonders if they really exist outside of the Matrix because he knows it is a computer system, and knows that another reality exists outside of it (the real world).

In other words... The Matrix generates sensory responses, and Smith knows that. So he wonders if senses can exist without the input from the Matrix. I think this constitutes a sort of consciousness, and don't think that Smith was only talking to Morpheus this way as a tactic.


Also, why can't the avatars take the red pills to leave the Matrix instead of using the phones?

Re: Agent Smith Consciousness and Red Pill
posted on 09/03/2003 9:22 PM by Z10n101


I wish I could post URLs. We have found a trailer from the first Matrix on the main Matrix site. But this trailer is hidden. Why, you ask? Well, there is a shot 3 seconds into it with the Architect's screens looking at Neo when he is waking up and Trinity kisses him. If anyone wants it, PM me if you can on these forums.

This is solid proof that Zion is not the "real world".

Re: Agent Smith Consciousness and Red Pill
posted on 09/04/2003 10:55 AM by tharsaile



I wish I could post URLs


You can post URLs:

http://froogle.google.com/froogle?q=%22red+pill%22

See? Yes, I'd like to see that screenshot from the first Matrix. Very interesting.

Re: Agent Smith Consciousness and Red Pill
posted on 09/04/2003 1:57 PM by Z10n101


Oh wow, lol, OK guys, here we go: this is a trailer from the first Matrix. But one problem... it was hidden and not listed on the site. We are pretty sure it's because it contains a cut scene of the Architect watching Neo... in the real world!!!

Freeze the video at the 3-second mark.

Video: http://whatisthematrix.warnerbros.com/av/matrix6.zip

This is what you're looking for:

http://hem.passagen.se/fazaa/matrix6.bmp

I know, it blew my mind too.

Re: Agent Smith Consciousness and Red Pill
posted on 09/04/2003 2:05 PM by Z10n101


Sorry, forgot to link the zip.
Here's the video:
http://whatisthematrix.warnerbros.com/av/matrix6.zip

Re: Architect Screens in Matrix 1
posted on 09/04/2003 5:42 PM by tharsaile


Ah-HA! Connection's too slow to download all those megabytes, but I saw the picture. Now I'm going to have to re-rent the first one. A matrix within a matrix, hmm...

You know what's really mind-blowing, though? There's a brief blip on the screen during Terminator 1 where Arnold is visually scanning for a mesomorph with the right clothes, the readout saying Democrat.. Democrat.. Republican!

Kidding.

Re: Architect Screens in Matrix 1
posted on 09/04/2003 5:58 PM by Z10n101


Yeah, lol, Arnold wants to give everyone fantastic jobs... no need to re-rent it just for that. It was edited out of the Matrix because it gave too much away. It's only in the hidden trailer.

The whole movie was not real
posted on 12/13/2003 9:10 AM by albator84


Hi everyone, I've stumbled across your discussions... very interesting! A lot of good points were mentioned and debated. I'd like to add a little bit more to the discussion, and forgive me if I go against anyone's views.

I put it to you that the whole "Matrix" was not real. There was no real world in the movie. Everything was like a computer simulation, including Neo. It explains why he was able to perform impossible feats such as flying or possessing superhuman powers.

If you could imagine yourself watching a computer video game (an extremely complex one) instead of the movie, then everything makes sense. There are no inconsistencies left, because everything is possible in a virtual world.

Re: The whole movie was not real
posted on 12/13/2003 11:09 AM by Tomaz_(Thomas)_Kristan


An interesting point of view. That way, the whole thing makes perfect sense - the only way. But then every single movie ever recorded gets its perfect logical justification this way. An even more interesting point.

Re: The whole movie was not real
posted on 12/14/2003 8:39 AM by sushi101


Agreed; the real problem really is "turtles all the way down".

Because if that is required for Neo to be conscious or self-aware, then the same rules very likely apply to us too; thus we live in a simulation, and so forth.

So where does it end?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/31/2005 3:43 AM by Anand.V.V.N


Mind-blowing stuff!!!! I was wondering if this could really be done?

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/31/2005 3:53 AM by colorSpace


Mind-blowing stuff!!!! I was wondering if this could really be done?


What for example?

BTW, there is also a second article, "Glitches Reloaded" or something like that.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/31/2005 4:37 AM by Anand.V.V.N


Build a system like the Matrix - and of course, now that we know the GLITCHES, we can avoid them.

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/31/2005 2:17 PM by Nanoships


Wiring would best enter through the soft cartilage that cushions the skull on the spinal column, and pass up through the natural opening that lets the spinal cord into the skull. This avoids drilling through bone, and maintains the mechanical and biological integrity of the skull's protection. A baby fitted with a bioport can easily survive the operation.


This procedure says nothing about how the wiring would happen. Just sticking a cable into the brain isn't going to work. A better approach would use nanotechnology and a wireless connection. An even better approach would modify an embryo's genes to build a port or wireless connection.


Neo's ability to walk and use his arms shows that the motor cortex is also developed and functioning. Indeed, even the cerebellum, which controls balance, must be working. So, the Matrix must be capturing its motor signals from the brain's efferent nerves after they have finished with the last stage of cortical processing, but before the nerves pass out of the skull.


This is also ridiculous because Neo's bones and muscles would waste away from lack of use. This is a big problem for astronauts in space for extended periods and for those who are permanently confined to a bed.


That spare capacity remains available for others to exploit, and the rebels use it to download kung-fu expertise into Neo's brain and to implant helicopter piloting skills into Trinity's. If the Matrix ever learned this technique, it could create havoc for the rebels, implanting impulses to serve its own ends.


The ability to learn the downloaded skills requires that the brain store the information in at least short-term memory. Learned skills are unique to each and every person and require emotional qualia. Considering the rate at which the brain can store information biochemically, learning how to fly a helicopter without any previous experience of flying helicopters is going to take longer than the seconds it took Trinity in the movie. The better approach to this problem would be an AI that can take over Trinity's avatar and fly the helicopter.

"Most likely, the machines are harnessing the spare brainpower of the human race as a colossal distributed processor for controlling the nuclear fusion reactions."

This is really ridiculous: the machines seem quite capable of managing complex processes such as human societies and their virtual realities; managing nuclear fusion reactions would be child's play by comparison.


To enter the vast Matrix requires specifying where the avatar is to materialize. To get an avatar into the Matrix world, the rebels must use some strictly physical navigation. This is done with the telephone network, which has penetrated every corner of the inhabited world with electronic devices, each of which has a unique, electronically determined label. Without knowing anything of human society and its conventions, the physics modules of the Matrix can determine where any given telephone number terminates.


There is no need for the telephone terminal points: since the bioport is a local device, getting a person out of the Matrix simply means turning off the inputs to the bioport. Gaining access to the Matrix would be far better done by taking the inputs of the bioport and rendering the visual, audio, or any other information to video screens, speakers, indicators, etc. This way, coordinates within the Matrix could be obtained and the movie's heroes could enter wherever they pleased.
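
Something like the following sketch is what I have in mind (all names are invented, and of course nothing in the film works this way):

    # Hedged sketch of the alternative described above; all names invented.
    class Bioport:
        def __init__(self):
            self.feeding = True           # Matrix sensory stream switched on

        def disconnect(self):
            # 'Exit': simply stop driving the senses from the Matrix.
            self.feeding = False

        def tap(self, frame):
            # 'Entry prep': mirror a sensory frame to external displays so the
            # crew can read off in-Matrix coordinates before jacking in.
            return {"video": frame.get("visual"), "audio": frame.get("audio"),
                    "coords": frame.get("position")}

    port = Bioport()
    print(port.tap({"visual": "street scene", "audio": "traffic",
                    "position": (40.7, -74.0)}))
    port.disconnect()
    print(port.feeding)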

Mental states and beliefs can affect the body in several ways. In the placebo effect, the belief that a pill is a medicine can cure an illness; in hypnosis, imagining a flame on the wrist can induce blisters. In total virtuality, the mind accepts completely what is presented. If the Matrix signals that the avatar's body has died, then the mind will shut down the basic organs of the heart and lungs. Actual death will inevitably ensue, unless fast action is taken to get the heart pumping again.


Hypnosis? Do you know that hypnosis is not considered scientifically sound? In any case, haven't you had a dream where you died? Most such dream deaths involve drowning, or falling from a building or other high place and hitting the ground. I have had such dreams and I am still alive...


The Matrix is a fun movie; the basic premise of a virtual reality so real it is indistinguishable from reality is based on science that is predicted to be possible. Hollywood is not known for its scientific authenticity or accuracy; it uses mostly technobabble. The sad fact as to why Hollywood is this way has more to do with the marketing rule of "never alienate your audience". The machines' motive of using humans for energy is really weak, but it's what most laymen can understand as far as motives go. If the Matrix had used the theme of human society eventually migrating into the Matrix due to overpopulation, dwindling resources, pollution and war, and a reliance on machines that eventually evolved, it would become much more complex. In such a scenario the dogma of cultural complacency, the denial of the virtual reality, would be in conflict with the freedom of the human spirit. After all, once you figure out it's not real, you can do whatever you want. Doing whatever you want is usually a problem in most cultures...

Frank

Re: GLITCHES IN THE MATRIX . . . AND HOW TO FIX THEM
posted on 05/31/2005 3:12 PM by colorSpace


After all, once you figure out it's not real, you can do whatever you want. Doing whatever you want is usually a problem in most cultures...


As long as other human beings are involved, even if through a virtual connection, you still have responsibility. Anyway, there is not much of a point in following every erratic movement of your own mind, even though we have many pointlessly restricting rules in most cultures.