Live Forever--Uploading The Human Brain...Closer Than You Think
Ray Kurzweil ponders the issues of identity and consciousness in an age when we can make digital copies of ourselves.
Originally published February 2, 2000 at PsychologyToday.com. Published on KurzweilAI.net April 7, 2001.
Thought to Implant 4: OnNet, please.
Hundreds of shimmering thumbnail images mist into view, spread fairly evenly across the entire field of pseudovision.
Thought: Zoom upper left, higher, into Winston's image.
Transmit: It's Nellie. Let's connect and chat over croissants. Rue des Enfants, Paris in the spring, our favorite table, yes?
Four-second pause.
Background thought: Damn it. What's taking him so long?
Receive: I'm here, ma chère, I'm here! Let's do it!
The thumbnail field mists away, and a café scene swirls into place. Scent of honeysuckle. Pâté. Wine. Light breeze. Nellie is seated at a quaint table with a plain white tablecloth. An image of Winston looking 20 and buff mists in across from her. Message thumbnails occasionally blink against the sky.
Winston: It's so good to see you again, ma chère! It's been months! And what a gorgeous choice of bodies! The eyes are a dead giveaway, though. You always pick those raspberry eyes. Très bold, Nellita. So what's the occasion? Part of me is in the middle of a business meeting in Chicago, so I can't dally.
Nellie: Why do you always put on that muscleman body, Winston? You know how much I like your real one. Winston morphs into a man in his early 50s, still overly muscular.
Winston: (laughing) My real body? How droll! No one but my neurotechnician has seen it for years! Believe me, that's not what you want. I can do much better! He fans rapidly through a thousand images, and Nellie grimaces.
Nellie: Damn it! You're just one of Winston's MI's! Where is the real Winston? I know I used the right connection!
Winston: Nellie, I'm sorry to have to tell you this. There was a transporter accident a few weeks ago in Evanston, and well, I'm lucky they got to me in time for the full upload. I'm all of Winston that's left. The body's gone.
When Nellie contacts her friend Winston through the Internet connection in her brain, he is already, biologically speaking, dead. It is his electronic mind double, a virtual reality twin, that greets Nellie in their virtual Parisian café. What's surprising here is not so much the notion that human minds may someday live on inside computers after their bodies have expired. It's the fact that this vignette is closer at hand than most people realize. Within 30 years, the minds in those computers may just be our own.
The history of technology has shown over and over that as one mode of technology exhausts its potential, a new more sophisticated paradigm emerges to keep us moving at an exponential pace. Between 1910 and 1950, computer technology doubled in power every three years; between 1950 and 1966, it doubled every two years; and it has recently been doubling every year.
By the year 2020, your $1,000 personal computer will have the processing power of the human brain--20 million billion calculations per second (100 billion neurons times 1,000 connections per neuron times 200 calculations per second per connection). By 2030, it will take a village of human brains to match a $1,000 computer. By 2050, $1,000 worth of computing will equal the processing power of all human brains on earth.
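The arithmetic behind these figures is easy to check. The short Python sketch below reproduces the 20-million-billion estimate and then, assuming a purely illustrative year-2000 baseline of about a billion calculations per second for $1,000 of hardware (my assumption, not a figure from the article), counts how many annual doublings would close the gap:

import math

# Reproduce the brain-capacity estimate quoted above.
neurons = 100e9            # 100 billion neurons
connections = 1_000        # connections per neuron
calcs_per_second = 200     # calculations per second per connection

brain_cps = neurons * connections * calcs_per_second
print(f"{brain_cps:.0e} calculations per second")   # 2e+16, i.e. 20 million billion

# Assumed baseline (not from the article): $1,000 of year-2000 hardware ~ 1e9 calc/sec.
baseline_cps = 1e9
doublings = math.log2(brain_cps / baseline_cps)
print(f"~{doublings:.0f} annual doublings to reach brain parity")   # ~24

At one doubling per year, roughly two dozen doublings span the gap, which is why the projection lands around 2020 rather than centuries away.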
Of course, achieving the processing power of the human brain is necessary but not sufficient for creating human level intelligence in a machine. But by 2030, we'll have the means to scan the human brain and re-create its design electronically.
Most people don't realize the revolutionary impact of that. The development of computers that match and vastly exceed the capabilities of the human brain will be no less important than the evolution of human intelligence itself some thousands of generations ago. Current predictions overlook the imminence of a world in which machines become more like humans--programmed with replicated brain synapses that re-create the ability to respond appropriately to human emotion--and humans become more like machines--our biological bodies and brains enhanced with billions of "nanobots," swarms of microscopic robots transporting us in and out of virtual reality. We have already started down this road: Human and machine have already begun to meld.
It starts with uploading, or scanning the brain into a computer. One scenario is invasive: One very thin slice at a time, scientists input a brain of choice--having been frozen just slightly before it was going to die--at an extremely high speed. This way, they can easily see every neuron, every connection and every neurotransmitter concentration represented in each synapse-thin layer.
Seven years ago, a condemned killer allowed his brain and body to be scanned in this way, and you can access all 10 billion bytes of him on the Internet. You can see for yourself every bone, muscle and section of gray matter in his body. But the scan is not yet at a high enough resolution to re-create the interneuronal connections, synapses and neurotransmitter concentrations that are the key to capturing the individuality within a human brain.
Our scanning machines today can clearly capture neural features as long as the scanner is very close to the source. Within 30 years, however, we will be able to send billions of nanobots--blood cell-size scanning machines--through every capillary of the brain to create a complete noninvasive scan of every neural feature. A shot full of nanobots will someday allow the most subtle details of our knowledge, skills and personalities to be copied into a file and stored in a computer.
We can touch and feel this technology today. We just can't make the nanobots small enough, not yet anyway. But miniaturization is another one of those accelerating technology trends. We're currently shrinking the size of technology by a factor of 5.6 per linear dimension per decade, so it is conservative to say that this scenario will be feasible in a few decades. The nanobots will capture the locations, interconnections and contents of all the nerve cell bodies, axons, dendrites, presynaptic vesicles, neurotransmitter concentrations and other relevant neural components. Using high-speed wireless communication, the nanobots will then communicate with each other and with other computers that are compiling the brain-scan database.
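To see why "a few decades" is a plausible reading of that shrink rate, one can plug the 5.6x-per-decade factor into a rough calculation. The sketch below assumes, purely for illustration, that today's smallest comparable machines are on the order of a millimeter and that a nanobot must be roughly the size of a red blood cell (about 7 microns); both endpoints are my assumptions, not figures from the article:

import math

shrink_per_decade = 5.6      # linear shrink factor per decade, as cited above
current_size_m = 1e-3        # assumed ~1 mm for today's micro-machines (illustrative)
target_size_m = 7e-6         # roughly the diameter of a red blood cell

decades = math.log(current_size_m / target_size_m) / math.log(shrink_per_decade)
print(f"Shrink factor needed: {current_size_m / target_size_m:.0f}x")   # ~143x
print(f"Decades at 5.6x per decade: {decades:.1f}")                     # ~2.9

Under those assumptions, about three decades of continued miniaturization would be enough.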
If this seems daunting, another scanning project, that of the human genome, was also considered ambitious when it was first introduced 12 years ago. At the time, skeptics said the task would take thousands of years, given current scanning capabilities. But the project is finishing on time nevertheless because the speed with which we can sequence DNA has grown exponentially.
Brain scanning is a prerequisite to Winston and Nellie's virtual life--and apparent immortality.
In 2029, we will swallow or inject billions of nanobots into our veins to enter a three-dimensional cyberspace--a virtual reality environment. Already, neural implants are used to counteract tremors from Parkinson's disease as well as multiple sclerosis. I have a deaf friend who can now hear what I'm saying because of his cochlear implant. Under development is a retinal implant that will perform a similar function for blind people, basically replacing certain visual processing circuits of the brain. Recently, scientists from Emory University placed a chip in the brain of a paralyzed stroke victim who can now begin to communicate and control his environment directly from his brain.
But while a surgically introduced neural implant can be placed in only one or at most a few locations, nanobots can take up billions or trillions of positions throughout the brain. We already have electronic devices called neuron transistors that, noninvasively, allow communication between electronics and biological neurons. Using this technology, developed at Germany's Max Planck Institute of Biochemistry, scientists were recently able to control from their computer the movements of a living leech.
By taking up positions next to specific neurons, the nanobots will be able to detect and control their activity. For virtual reality applications, the nanobots will take up positions next to every nerve fiber coming from all five of our senses. When we want to enter a specific virtual environment, the nanobots will suppress the signals coming from our real senses and replace them with new, virtual ones. We can then cause our virtual body to move, speak and otherwise interact in the virtual environment. The nanobots would prevent our real bodies from moving; instead, we would have a virtual body in a virtual environment, which need not be the same as our real body.
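The routing idea in that paragraph--suppress the real sensory stream, substitute a virtual one, switch back on demand--can be pictured with a purely illustrative toy model. Nothing below corresponds to any real neural interface; the class and its fields are invented for the sketch:

from dataclasses import dataclass, field

@dataclass
class SensoryRouter:
    # Toy stand-in for the nanobot layer described above: when vr_mode is on,
    # real sensory signals are suppressed and a virtual feed is delivered instead.
    vr_mode: bool = False
    virtual_feed: dict = field(default_factory=dict)

    def deliver(self, sense: str, real_signal: str) -> str:
        if self.vr_mode:
            return self.virtual_feed.get(sense, "<suppressed>")
        return real_signal

router = SensoryRouter()
print(router.deliver("vision", "office wall"))         # -> office wall
router.vr_mode = True
router.virtual_feed = {"vision": "Parisian cafe", "smell": "honeysuckle"}
print(router.deliver("vision", "office wall"))         # -> Parisian cafe
print(router.deliver("hearing", "street noise"))       # -> <suppressed>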
Like the experiences Winston and Nellie enjoyed, this technology will enable us to have virtual interactions with other people--or simulated people--without requiring any equipment not already in our heads. And virtual reality will not be as crude as what you experience in today's arcade games. It will be as detailed and subtle as real life. So instead of just phoning a friend, you can meet in a virtual Italian bistro or stroll down a virtual tropical beach, and it will all seem real. People will be able to share any type of experience--business, social, romantic or sexual--regardless of physical proximity.
The trip to virtual reality will be readily reversible since, with your thoughts alone, you will be able to shut the nanobots off, or even direct them to leave your body. Nanobots are programmable, in that they can provide virtual reality one minute and a variety of brain extensions the next. They can change their configuration, and even alter their software.
While the combination of human-level intelligence in a machine and a computer's inherent superiority in the speed, accuracy and sharing ability of its memory will be formidable--this is not an alien invasion. It is emerging from within our human-machine civilization.
But will virtual life and its promise of immortality obviate the fear of death? Once we upload our knowledge, memories and insights into a computer, will we have acquired eternal life? First we must determine what human life is. What is consciousness anyway? If my thoughts, knowledge, experience, skills and memories achieve eternal life without me, what does that mean for me?
Consciousness--a seemingly basic tenet of "living"--is perplexing and reflects issues that have been debated since the Platonic dialogues. We assume, for instance, that other humans are conscious, but when we consider the possibility that nonhuman animals may be conscious, our understanding of consciousness is called into question.
The issue of consciousness will become even more contentious in the 21st century because nonbiological entities--read: machines--will be able to convince most of us that they are conscious. They will master all the subtle cues that we now use to determine that humans are conscious. And they will get mad if we refute their claims.
Consider this: If we scan me, for example, and record the exact state, level and position of my every neurotransmitter, synapse, neural connection and other relevant details, and then reinstantiate this massive database into a neural computer, then who is the real me? If you ask the machine, it will vehemently claim to be the original Ray. Since it will have all of my memories, it will say, "I grew up in Queens, New York, went to college at MIT, stayed in the Boston area, sold a few artificial intelligence companies, walked into a scanner there and woke up in the machine here. Hey, this technology really works."
But there are strong arguments that this is really a different person. For one thing, old biological Ray (that's me) still exists. I'll still be here in my carbon, cell-based brain. Alas, I (the old biological Ray) will have to sit back and watch the new Ray succeed in endeavors that I could only dream of.
But New Ray will have some strong claims as well. He will say that while he is not absolutely identical to Old Ray, neither is the current version of Old Ray, since the particles making up my biological brain and body are constantly changing. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), while the actual material content changes constantly and very quickly.
Viewed in this way, my identity is rather like the pattern that water makes when rushing around a rock in a stream. The pattern remains relatively unchanged for hours, even years, while the actual material constituting the pattern--the water--is replaced in milliseconds.
This idea is consistent with the philosophical notion that we should not associate our fundamental identity with a set of particles, but rather with the pattern of matter and energy that we represent. In other words, if we change our definition of consciousness to value patterns over particles, then New Ray may have an equal claim to be the continuation of Old Ray.
One could scan my brain and reinstantiate the new Ray while I was sleeping, and I would not necessarily even know about it. If you then came to me, and said, "Good news, Ray, we've successfully reinstantiated your mind file so we won't be needing your old body and brain anymore," I may quickly realize the philosophical flaw in the argument that New Ray is a continuation of my consciousness. I may wish New Ray well, and realize that he shares my pattern, but I would nonetheless conclude that he is not me, because I'm still here.
Wherever you wind up on this debate, it is worth noting that data do not necessarily last forever. The longevity of information depends on its relevance, utility and accessibility. If you've ever tried to retrieve information from an obsolete form of data storage in an old obscure format (e.g., a reel of magnetic tape from a 1970s minicomputer), you understand the challenge of keeping software viable. But if we are diligent in maintaining our mind file, keeping current backups and porting to the latest formats and mediums, then at least a crucial aspect of who we are will attain a longevity independent of our bodies.
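What "being diligent in maintaining our mind file" would mean in practice is ordinary data stewardship: verified backups and periodic migration to current formats. The sketch below only illustrates that routine with an invented file layout; nothing here is a real mind-file format, and the file names are hypothetical:

import hashlib
import json
import shutil
from pathlib import Path

ARCHIVE = Path("mindfile_v1.json")        # hypothetical current copy
BACKUP = Path("mindfile_v1.bak.json")     # hypothetical backup copy

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def refresh_backup() -> None:
    # Keep a current backup and verify it matches the original bit for bit.
    shutil.copy2(ARCHIVE, BACKUP)
    assert checksum(ARCHIVE) == checksum(BACKUP), "backup does not match original"

def port_to_latest_format() -> Path:
    # "Porting to the latest format": rewrite the same data in a newer container
    # (here just JSON to JSON with a version field, purely for illustration).
    data = json.loads(ARCHIVE.read_text())
    data["format_version"] = 2
    new_path = Path("mindfile_v2.json")
    new_path.write_text(json.dumps(data))
    return new_path

# Demo with a stand-in file so the sketch runs end to end.
ARCHIVE.write_text(json.dumps({"owner": "example", "format_version": 1}))
refresh_backup()
print(port_to_latest_format())            # -> mindfile_v2.json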
What does this super technological intelligence mean for the future? There will certainly be grave dangers associated with 21st century technologies. Consider unrestrained nanobot replication. The technology requires billions or trillions of nanobots in order to be useful, and the most cost-effective way to reach such levels is through self-replication, essentially the same approach used in the biological world, by bacteria, for example. So in the same way that biological self-replication gone awry (i.e., cancer) results in biological destruction, a defect in the mechanism curtailing nanobot self-replication would endanger all physical entities, biological or otherwise.
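The scale argument here is simple doubling arithmetic: starting from a single self-replicating nanobot, about forty doublings reach a trillion, and forty more unchecked doublings multiply the population by another factor of a trillion, which is why a reliable shutoff matters. A quick check:

import math

# Doubling arithmetic behind the paragraph above (illustrative only).
target_population = 1e12
doublings = math.ceil(math.log2(target_population))
print(doublings)                        # 40 doublings from one nanobot to a trillion
print(f"{2.0 ** (2 * doublings):.1e}")  # ~1.2e+24: the same 40 doublings again, unchecked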
Other salient questions are: Who is controlling the nanobots? Who else might the nanobots be talking to?
Organizations, including governments, extremist groups or even a clever individual, could put trillions of undetectable nanobots in the water or food supply of an entire population. These "spy" nanobots could then monitor, influence and even control our thoughts and actions. In addition, authorized nanobots could be influenced by software viruses and other hacking techniques. Just as technology poses dangers today, there will be a panoply of risks in the decades ahead.
On a personal level, I am an optimist, and I expect that the creative and constructive applications of this technology will persevere, as I believe they do today. But there will be a valuable and increasingly vocal role for a concerned movement of Luddites--those anti-technologists inspired by early-19th-century weavers who in protest destroyed machinery that was threatening their livelihood.
Still, I regard the freeing of the human mind from its severe physical limitations as a necessary next step in evolution. Evolution, in my view, is the purpose of life, meaning that the purpose of life-and of our lives-is to evolve.
What does it mean to evolve? Evolution moves toward greater complexity, elegance, intelligence, beauty, creativity and love. And God has been called all these things, only without any limitation, infinite. While evolution never reaches an infinite level, it advances exponentially, certainly moving in that direction. Technological evolution, therefore, moves us inexorably closer to becoming like God. And the freeing of our thinking from the severe limitations of our biological form may be regarded as an essential spiritual quest.
By the close of the next century, nonbiological intelligence will be ubiquitous. There will be few humans without some form of artificial intelligence, which is growing at a double exponential rate, whereas biological intelligence is basically at a standstill. Nonbiological thinking will be trillions of trillions of times more powerful than that of its biological progenitors, although it will be still of human origin.
Ultimately, however, the earth's technology-creating species will merge with its own computational technology. After all, what is the difference between a human brain enhanced a trillion-fold by nanobot-based implants, and a computer whose design is based on high-resolution scans of the human brain, and then extended a trillion-fold?
This may be the ominous, existential question that our own children, certainly our grandchildren, will face. But at this point, there's no turning back. And there's no slowing down.
Reprinted with permission from Psychology Today. http://www.psychologytoday.com/
Mind·X Discussion About This Article:
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
I've been thinking about mind uploading for some time, and it's obvious to me that it's humanity's destiny. At least for most humans, and at least in this century. I say most since no matter how desirable the benefits of uploading, I don't think that everybody will sign up. Perhaps because some will consider it not moral, or some won't do it simply out of fear. I must admit, though, that when I first heard about it some 4 years ago it didn't sound like such a good idea. However, after examining the details, and realizing that consciousness is a pattern independent of the matter that makes it up, and that human senses are simply a function of electrical inputs provided by our bodies, it's clear that if there are ways to understand and replicate that biological "technology" there should be ways to improve upon it. Improvement would mean a great enhancement to human lives in ways that are difficult to imagine now. Along with everything Mr. Kurzweil already wrote, I would add the creation of new senses, designing new emotions, and multiparallel presence in virtual reality. Also, borrowing Mr. David Pearce's "paradise engineering" ideas, I think it's logical to believe that those new emotions should create unimaginable well-being for the uploaded minds experiencing them.
What humanity will do with that prospect is really interesting. Perhaps voluntarily abandoning the pursuit of progress/happiness? What is the incentive for progress if we don't need to be any happier? Of course that point is probably very far in the future. Maybe that's when humanity will choose to stop advancing further so it won't slide into the abyss of the singularity's chaos. That's all speculation.
This is all great, but there is still something that disturbs me about uploading, namely the whole idea of preserving the "soul". Mr. Kurzweil mentions a thought experiment where his mind is downloaded during his sleep and reinstantiated in a nonbiological entity. There are two Rays, each claiming to be the real one. Now, here's the problem. What if the first Ray gets killed in a way that he can't rely on the copy of his mind file? The copy still exists, but not the first Ray. What I mean is that while the consciousness and "soul" of the second Ray is still there, the "soul" or "uniqueness" of the first Ray is dead. Therefore the question: how can a mind be uploaded in the first place with preserved uniqueness, so that the person before and after uploading is the same instance of itself? Otherwise it would mean that when the first, original person died, no matter how many copies were made of his/her consciousness, that first person's "soul" would be dead. I must say I still haven't come across a solution to this problem, so I'll try to propose one.
Let's take the real biological person, and a nonbiological brain ready to be uploaded. If a simple copy is made, then there are two instances of the same person, and each one is unique and separate, although almost the same. When one of them is deleted, one of them is dead. To prevent that from happening, I think the solution lies in the way the biological brain is uploaded. What if throughout the uploading process only one copy of the mind is preserved, i.e. whenever an imaginary "unit" of the biological mind gets transferred to the nonbiological host/brain, one "unit" gets deleted in the biological host/brain? The process would be fully reversible, but at any given time only one fully functional mind would exist between the two entities. The complete transfer would finish after all the biological "units" had been deleted. Each subsequent transfer done this way would preserve the consciousness as well as the uniqueness of each person... I would hope.
Another, maybe simpler, approach might be to just replace each neuron one at a time in the original biological brain with a digital equivalent, and go from there, but this would not be uploading, would it?
No matter how uploading is achieved, this is a fascinating subject. The more discussion the better.
S.Paliwoda
Re: It is I ...
As I mentioned before, there might be one person with two instances, but still, it would mean the existence of two different people. As far as I can understand, you talk about the bandwidth of one mind doing separate activities (what I personally call "parallel presence"), which I guess would require several instances of the same mind, and if that's what you're talking about then I agree 100%. I hope I got your point right.
However, I don't think the previous poster and I had those kinds of instances in mind. It's just like the thought experiment with Ray Kurzweil and his copy. Two instances of the same person means that there have to be two exactly identical people (at least at the beginning) with two different souls. Let's say you're the real Ray, and you get killed. No matter how many copies there were of you, you're dead, and it was you, not the copy, who perhaps felt the agony.
What I was after was the very transfer from a biological to a nonbiological state in a way that would preserve the soul. A transfer of the soul without the need to make any copies/instances of a person, which I believe would be the creation of another, similar person, and not the actual transfer of the original one. I think that "soul transfer" is possible. What I forgot to add is that the biological mind would probably have to be transformed to a nonbiological state first, to make sure the "unity" of the mind during the transfer could be preserved.
Happy New Year!
S.Paliwoda
Re: It is I ...
[The memories are not that important when it comes to self.]
You mean two people, each with different memories (including different experiences and personalities), could be the same?
[Do we check our memory against the self and then decide whether or not it's me or somebody else? We don't, of course.]
Of course we do. Subconsciously. How else would a person know his/her name or where their home is?
[What does it mean "separate memories". We can remember something very similar. But most of the time we do not actually remembering much.]
Yes, but we tend to remember our personalities pretty well. Even after many years.
[God knows how many "strobos of me" currently run around.]
One - though I hope uploading will happen within decades.
[> you create a separate entity which can't be you
I think it simply is. It can't be something else. It's just another instance of self running on a different machine(s).]
It is just like you, but not you. If your original copy (uploaded from the biological brain) gets shut down, you're dead. It doesn't matter how many copies there are, or how many machines run other copies. Let's say it's a week after the creation of another copy of yourself, so that the original and the newly created copy have been running on two separate machines for that past week. Let's say that those copies could share no new memories with each other. Now, during that week those two copies made completely different new memories. What if one of them has to be shut down? From the point of view of your original copy, the only thing that changed from the week before was the creation of another copy (you may not even have been aware of that), but you still continually exist as the original, so that created copy is nothing more to you than another person. A perfect twin, but still, you don't feel its pain. That's why it's a different person/entity.
Slawek
Re: It is I ...
Like I said in an earlier post, I've been actively thinking about this subject for a couple of years, and I must say that even not long ago I shared your view, more or less (although I still don't understand your analogy to strobe lights).
When I went deeper into thinking about uploading, though, I found a problem in it (at least in the way people currently think about it) which goes deeper too. It's a solvable problem, but still, I was amazed at the fact that nobody so far has raised this issue. I tried to make my point here as clearly as I could, but I guess it's tricky to understand. This whole thing is NOT about just consciousness, but something more subtle. The best description of it I was able to come up with was the "uniqueness of the soul". I guess it takes more time for this to really sink in.
Slawek
Re: It is I ...
What if, in the far future, two differing sets of neural networks decided that they want to merge, so as to be closer together? Would this constitute a whole new being? Would it be like having a child? Could the conflicting data (there surely would be some) be rectified, or would this create a schizophrenic entity? Would it be like two companies merging, maybe taking the best qualities? In other words, how close could two neural networks get to each other without becoming one being, or entity? Or would it be logically obvious that they would have to?
Re: It is I ...
I think mysticism has nothing to tell us about the "soul".
Even QM doesn't.
SELF can run on almost anything, but if it runs on meat, as we see it here on Earth, it almost certainly has nothing to do directly with QM. No QM is necessary for thinking or anything related to it.
It could be in the future, but for now it's the same as saying that your car uses quantum mechanics. Sure it does; deep down everything is quantum. But to say that the door opening is a quantum phenomenon is just silly.
Equally, searching for some quantum answers in the brain is a waste of time.
It's a question of cells and neurons, how they are connected, and how they communicate and store information.
Inter-cell operation.
No 'Aura' will preserve you if there is no hardware. On the other hand, if the exact hardware reappears after a googol of years, you wouldn't notice the gap.
- Thomas
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
Ultimately, the situation is that if we upload our mind, the uploaded mind is not our mind. We might want to do that just to have a central location of all knowledge and thought (uploading to a central server); however, to clone ourselves or to upload our minds so that we can continue living... this is folly, simply because that uploaded instance is not "us", it is "it", the new uploaded instance. Even if it is kept under a constant link, to be redownloaded in case of failure, it will cease to be us, because that is again a new instance. The question is, do we care? The answer is yes, because we are still biological animals with an instinct of "self". Further, we have a respect for, and place a high value on, "self". So in the end this is a rejection of "self" and will thus be rejected. Just because something is possible doesn't mean the populace will think it should be done.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
Hello Keeper
I totally agree with /:setAI.
Hans Moravec describes a thought experiment where, say, a single ion transfer in your brain is temporarily replaced with a different mechanism (say a wireless link - remember, it's a thought experiment so technical feasibility is not an issue here) that is exactly functionally equivalent.
You are prompted by the "upload surgeon" to think/act in such a way that the ion transfer/wireless message transfer takes place. (Say, the transfer occurs as part of the massively complex process that occurs whenever you see a photograph of your mother.)
You introspect and see for yourself whether your "inner life" disappears with the wireless replacement. If it does not you are replaced functional component by functional component (always checking, of course, to see if you are being swallowed by a blank hole of nothingness).
Finally, you have been gradually replaced, just as if you had eaten and breathed your way to a new you.
It's just a thought experiment, so not proof, but can you really imagine inner presence disappearing just because one mechanical message medium is replaced by another?
Infinite dimensional experiential cube and why you have no free will
I've seen your thread above and I want to give my contribution. I wasn't this sick before; it happened after I thought about consciousness for extended periods of time. I advise you not to, or to just liquidate the problem like Thomas does; that keeps you sane. Ok, let's do it:
There is something unique about consciousness.
You can make a copy of yourself and you'd STILL see with only one pair of eyes.
Making a backup won't save you from death, killing your copy won't be like committing suicide.
Making a copy of yourself and then committing suicide will not cause a "soul transfer" (by "copy" I mean a copy of any kind, including mind uploading).
Let us call a "process" a pattern in time and space which can be interpreted as self-aware, with this interpretation unlimited in its potential complexity. Our brain squirts neurotransmitters; this is a pattern of electricity and molecules evolving in time. It's got its inputs and outputs and it can be decoded to reveal a "self-aware" being. It's a kind of process.
If you want to think about subjectivity, at some point you need to realize that you ARE this process (your mind and body). You are not "represented" by this process, codified by it, embodied by it, running on it, but simply ARE it. You are also not "ANY process identical to this", but ONE and only one process (this one). You may be used to a third-person point of view, the scientific one, which tells you that identical objects by definition are identical, but this notion is not correct if you talk about the subjective.
Your experience "log" from time 0 until death is the really unique feature about you, that nobody -
even a perfect copy - can ever share fully. You are not an entity floating about in
a universe full of colours faces trees free to choose his path (although we are designed to feel this
way); rather, the path chosen, the final combination of inputs and outputs, creates, or IS, your
subjectivity, your first person awareness, what you really mean when you say "YOU". When you think
of yourself, you should not picture a grey brain
inside a biped body walking around in manhattan. Rather, you should have in mind some kind of
1-dimensional shape that is contained in a boundless, infinite-dimension experiential cube.
Bear with me as I illustrate this point. Each point in this infinite-dimensional space would contain a precise "subjective state", the superimposition of all qualia of a certain strength or magnitude at one instant. A one-dimensional line, or curve, would then be a linear sequence of such states, which is what we call "life".
Note that the infinite dimensionality of this experiential cube allows for ALL forms of awareness and all subjectivities to fit in; all possible qualia are included. Similarly, all lives and sequences of experiential states can be traced in this cube.
I have pondered for ages about the conundrums of subjective consciousness, and they can all be solved provided that we accept this new model.
Q: Where was I before being born?
A: This question implies the existence of space and time at a more fundamental level than our subjectivity. The experiential cube is timeless and spaceless. Time is something we perceive, but the individual life lines are not inside any kind of "time" or spatial scheme. Therefore, when you were born, your experience of space and time began; outside of that experience, space and time make as much sense as the sound of the color red.
Q: What happens if I teleport myself? Do I live or do I die?
A: We're assuming teleportation is perfect down to the smallest details, and that the original and destination bodies use different atoms and never share one instant of simultaneous, but conflicting, experience. In this case teleportation should be successful. One must look at the problem this way: does the source person + destination person conform to our definition of PROCESS? Can we decode the person1+person2 compound, using an arbitrarily complex interpretation, into a single self-aware pattern in space and time? If so, the person's subjectivity has been preserved. And in this case, p1+p2 have formed a single strand in the experiential cube, just like your age-10-to-age-15 person and your age-15-to-present-age person have created a single line and maintained a subjective experience. The trick would seem to be to make sure that this interpretation can always be made. Evidently, if at one time the original is thinking "hey, what's going on, is this finished yet?" and the copy is thinking "oh, ok, here I am", that's not a smooth transition, and the two segments could not create a unified interpretation of process.
Q: What happens if I make a perfect copy of myself and then die in an accident after 10 minutes?
A: Well, you're not 10 minutes dead but dead for good, according to what I'm suggesting.
Q: So how do I live forever, or get rich, or be happy?
A: You can't. You are not an observer in a simulation; you are an experiential line which has been pre-defined. If it were different, you would not be you, for you ARE it. So whether you die at 10 or 100 or 1000 is pre-set, and there's nothing you can do about it. Same for all other things in life. You are now only watching a movie in which you are the star. You ARE the movie. What if you got rich? Well, that would be another movie! Or maybe you WILL get rich because it's already in this movie; you just need to fast-forward a bit.
Q: Are you proposing that everything rotates around us in this universe? That subjective experience is more basic than space and time? Haven't we learned that the more we discover about the universe, the less important we and our existence become?
A: Our bodies are a byproduct of the laws of physics and play a very small role indeed. However, our subjective experience is made of a different stuff. I suggest that an integral part of the universe is this multidimensional experiential state, which coexists with the regular laws of physics. I don't claim to have a complete explanation for their interaction, or which comes first. If you look at physics alone, our subjective experience has no need to exist and indeed, it can't be measured, compared or detected; just as light can't be touched or sounds seen.
Q: Oh come on, let's just scan the brain, reverse engineer it, upload ourselves into a suitable substrate and kill the original.
A: Ok, go on, my friend, I'm sure 72 virgins are waiting for you on the other side.
Q: ...But I mean gradually.
A: In that case, probably it's ok :)
Re: Infinite dimensional experiential cube and why you have no free will
Consciousness more fundamental than space and time?
But it falls out through a physical process as simple as sleeping? Or a nearby (A-)bomb explosion?
I see your attempt as honest and clever, however - but still wrong.
What you are basically saying is: your path is you, my path is me.
Okay... just lie down and die when your time comes. Then wait a long time for some quantum fluctuation to bring you back. The waiting time will be zero for you, don't worry. And for the next one also.
Will it? If you say no, then you have to know that you are already going through a sequence of small quantum fluctuations all the time.
What I am saying (another instance of you) is:
If your memories are replaced by those of Cezar, you will feel the same as you feel now. And the late Cezar will feel as you feel now. You are just casting, one role after another - and in parallel as well.
You were Cezar once upon a time; you are Thomas, several thousand miles away, now.
That is what I say.
Q: How is it, then, seeing through 4 or more eyes?
A: Just like oscillating there and back very fast. In each destination you can even be sure that you are unique and inside that destination forever. It's apparent, since your memories convince you so.
I don't say you are actually oscillating between Manhattan and the Alps. You don't have to. It's just the same feeling as if you were, leaving memories behind each time.
- Thomas Kristan
Re: Infinite dimensional experiential cube and why you have no free will
>Consciousness more fundamental than space and time?
yep!
>But falling out by as simple physical process as >sleeping? Or by a near (A) bomb explosion?
fall out? where? mine never falls out ;)
I have no recollection of any moment in which I was not conscious.
>Will it? If you say no, then you have to know,
>that you are going through the sequence of small
>quantum fluctuations all the time already.
what's your point?
>You were Cezar once upon the time, you are
>Thomas several thousand miles away - now.
Are you suggesting that there is only one conscious entity around, and that he plays different roles one turn at a time? If that is so, the end result is still to create individual subjective consciousness strands isn't it?
>Q: - How is then seeing through 4 or more eyes?
>A: - Just like oscillating there and back very
>fast. In each destination you can even be sure,
>that you are unique and inside that destination
>forever. It apparent, since memories convince
>you so.
I don't know how this is related to my previous message but it sounds interesting. However, two separate consciousness strands would still appear.
>I don't say you are actually oscillating between
>Manhattan and Alps. You don't have to. Is just
>the same feeling as you were. Leaving memories
>behind each time.
I don't see that our arguments fit together or are in contrast. But would you expand this to all forms of life? So you are now an ant, now a person, now a squid on Tau Ceti, and so on? My definition of "process" goes beyond what we normally interpret as conscious. Anything with the right "decoding" could be considered conscious, and for conscious beings unobserved and left undecoded, life still goes on. So there is an infinite number of processes, or experiential lines, which you would expect from an infinite experiential space etc.
How would you compute all of these one by one?
-mirai
Re: Infinite dimensional experiential cube and why you have no free will
>That the concept of "no more life after death"
>doesn't deal quantum fluctuations. On a large
>time scale - much larger than the current age of
>the Universe - it's a question of time, when you
>will be physicaly regenerated.
I don't see this as a problem: as long as you can look at the first person and the second and interpret the sequence as being a unique process (see previous messages for details on the conditions), then you'd have continuous subjective experience.
>And you are nothing but _this_ process!
Yes, you are a process, that was my claim: not only a process of a certain kind, but _this_ process right here. And saying "you ARE a process" is not an easy statement to make; on its own it makes no sense. Why this one and not that other one? What's special about this process?
>> would you expand this to all forms of life?
>No. To some apes maybe. Maybe.
So you think hurting a dog does not cause subjective sensations of pain? I am convinced that even without superior logical and linguistic capabilities, a being can be fully aware of basic autonomic stimuli such as heat, cold, pain, etcetera. Ok, monkeys are not as smart as us, but neither are children and some handicapped persons or other people with neural diseases, and you would not classify them as unconscious, I think.
>If PC's OS contained a consciousness subprogram -
There can be no such thing, as you can read in other essays on this site. But for the sake of discussion let's read on.
>the last PC's second will be it's last
>second of life. Not a particular PC break down.
Sure. That does not prove or even "go with" any of the claims you've made, however.
>We are NOT hardware (PC's) - but a software
>running on them. In many instances.
The software itself can run perfectly well without "YOU" really existing.
Just like a computer does not know that spreadsheet files exist. I also thought like this, because I came from a PC/programming background, like you perhaps. You must consider the copy paradoxes and the fact that no amount of processing whatsoever NECESSARILY causes any subjective experience. You will come to see that software ALONE is insufficient. It seems to me that lots of people who give easy answers to consciousness haven't understood what the problem is.
Re: Infinite dimensional experiential cube and why you have no free will
I'll try to ignore your sarcasm.
> I see your argument as being shallow as your understanding of the matter.
Which tells us something about your own depth.
(From now on, I'll try harder to avoid sarcasm myself.)
> By now everyone agrees that no physical or supernatural structure itself is cause for awareness
Who is this everyone?
> heart is spelled with only one H
Thanks. Did you mean 'H'? Speaking of typos ...
> A "linear" program, like one you can write in C++, cannot be conscious, however
No, it can _produce_ consciousness. By simulating enough atom _reactions_ with each other. It would require a lot of computing - but that is no reason why it can't be done. If your entire body consisted of small C++ programs instead of cells, you wouldn't notice the difference. You just get a signal, with no signature of where it came from - from a program, a cell or a random event.
> Everyone agrees on this, but if you believe otherwise go ahead and write a C++ program that passes the turing test
Since you know better than me - you do it first! If it is mandatory for me only ... well ...
> Get a big hard drive, because you're going to need lots of code
What now? You are saying that I need a big hard drive for something you said earlier can't be done at all.
> Infinite, to be exact.
That big? Not even children are afraid of infinity anymore. Let alone an arrogant smart-ass like myself.
> If someone has valid criticism on the new perspective I have written about
You are saying that the unique consciousness is somehow glued to every deed of a chap. If he took plums instead of cherries just once in a life, it would be a different one. Are all his future acts then calculable just from the fact that this is "I"? Is environmental information - like the air temperature - also already included? It has to be. I can't choose any other than that and that shirt.
I don't like that.
> As for you Thomas, I think you're at the very first phase of thinking about subjectivity, good studies.
Your course I will pass, I think.
- Thomas
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
This is something that always bothered me about uploading... the uploaded copy would wake up, say "Hey, it worked!", and adamantly claim to be me. Meanwhile, I, the original, would say, "No, I'm still over here..." If the upload procedure destroyed my brain in the process, the original me would be dead, while the copy thinks everything went just fine.
Hans Moravec has an interesting possible solution to this problem, outlined here:
http://yudkowsky.net/singularity.html#upload
In a nutshell, this process moves--rather than copies--the mind bit by bit. In theory, it wouldn't even interrupt one's train of thought, so, subjectively, you would still be you, since there was complete continuity throughout the process.
--Ariston
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
I have what I believe is the answer to this:
"This is all great but there is still something that disturbs me about uploading, namely the whole idea of preserving the "soul". Mr.Kurzweil mentions a thought experiment where his mind is downloaded during his sleep and reinstanciated in nonbiological entity. There are two Rays, each claiming to be a real one. Now, here's the problem. What if the first Ray gets killed in a way that he can't rely on the copy of his mind file? The copy still exists but not the first Ray. What I mean is that while the conciousness and "soul" of the second Ray is still there the "soul" or "uniqueness" of the first Ray is dead. Therefore the question. How can a mind be uploaded in the first place with preserved uniqueness so that the person before and after uploading is the same instance of itself? Otherwise it would mean that that when first original person would die, no matter how many copies were made of his/her conciousness that first person's "soul" would be dead. I must say I still haven't come across the solution to this problem so I'll try to propose it."
My answer:
This problem is one of so-called fuzzy logic, where the general idea is that identity shifts from moment to moment as our mind changes through time. Obviously, if we upload a copy from a living person, the upload (copy) would ideally need to be the same person. Remember that we are who we are because of three "inputs", which I consider the trilogy-elements of the soul for that reason: (1) brain, (2) body, (3) environment. The brain is technically a part of the body, but I distinguish it because that is where consciousness itself is, and it's unique amongst all the parts of the body for the role it plays in manifesting consciousness. Also, the environment contains other consciousnesses! There are two solutions, as far as I can tell. One would be to feed each mind - the original (in this case biological) and the copy (upload) - the exact same sensory stimulus, whether from the same body or two bodies, so long as it is drawn from the same perceived environment. This way, if the function of the body is *identical*, the brain will feel, see, hear, taste and smell the same world and will act the same way, in conjunction with its self-affective reactions as well. I would argue that a better alternative is to avoid the necessity of syncing doppelgangers - besides the fact that having a soul-clone is disturbing for most people to begin with (!) - and instead just STOP the activity of the brain. This is cryonics. If you can stop the biological brain and restart its function within a computer from the same point, while refusing revival of the biological template, the resultant emulation should be a continuation in time, relative to the consciousness, of the same stream of believed personal identity. So yay ~
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
Hi,
I'll try to make some analogy with biological cells.
Consider your 'self' a cell.
Uploading produces another cell - by copying.
Connecting the new cell with the old one produces a double-cellular organism.
Killing one of the two cells is killing half of the organism - it may result in some irreparable damage. (If you lose half of your brain, you will be significantly impaired. And if the dead half of your brain is replaced, you may again function properly, though you may never retrieve some memories and skills.)
But, after producing a billion of cells and connecting them:
1. A loss of one cell would no longer be significant (because of a very high degree of redundancy).
2. Given proper organization, we'd have a step from amoeba to monkey - the point is that it will make a qualitative, not just quantitative difference.
And that's the Singularity.
In my opinion, the question of copying vs. moving is irrelevant. When the technology comes, uploading will be used for creating such multi-cellular organisms and not for saving/loading/multiplying primitive, however fast, human instances.
Artyom Shmakov
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
In fact, there IS a very strong motivation for the integration into a 'superbeing'. And it has nothing to do with any altruism. Moreover, it is one of the basic human instincts - the desire for survival.
I do not want to die! Neither at 60, nor at 60 million. If I upload, my copy survives and I die. You can kill me right after uploading to achieve the illusion of 'transcendence', but my copy gets this illusion, not me (yes, I'm sure!).
Integration into something bigger looks weird because it is something completely, entirely new - no man has ever tried it and we _can't_imagine_the_outcome_. But it is the most natural path as far as I see it. For me, it is better to change into something different than to end my existence forever. And I do not think my desire for eternal life is very uncommon (or unhealthy) among our species or among any living beings.
I don't think we can make any assumptions about the time after the Singularity. But, extrapolating from the past, it is likely that life will become more meaningful. What is the meaning of the life of an animal? We humans are more advanced from that point of view (yet only very slightly). But WHAT will happen we CAN'T know; even an uploaded mathematician won't get much closer.
Artyom Shmakov
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
Well, if a social superorganism can be formed, it will be - somebody, for some reason, will try it, if only as a joke. But if such an entity has some survival advantage over other entities, it will persist and grow, and that will be no joke.
The superorganism might be very efficient, with no need for many of the rituals and organisations, able to use resources to enhance its survival rather than to fit the disparate needs of its parts, as a conventional human society does.
Rafal
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
[Well, if a social superorganism can be formed, it will - somebody for some reasons will try it, if only as a joke] - No question about it, but I don't think the idea will be popular.
[The superorganism might be very efficient ....] - as efficient as any individual could be.
[.... with no need for many of the rituals, organisations, able to use resources to enhance its survival, rather than to fit the disparate needs of its parts, like a conventional human society does.] - An implied assumption is that the future society living in a simulation won't exactly be our current conventional one, and therefore the resources and disparate needs of an individual won't be a problem. They can all be simulated (except for the computer memory and processing power, of course).
The other important thing to consider is the very process of merging. Would it be the creation of a person with two minds, or just the acquisition of the memories and skills of another person who becomes extinct? As for the first definition - this is not really merging, and as for the second - it would be murder, where simple downloadable knowledge would do. As you can tell, in order to say anything about superorganisms, we need a clear definition of what merging is and does.
Slawek
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
[Well, if a social superorganism can be formed, it will - somebody for some reasons will try it, if only as a joke] - No question about it, but I don't think the idea will be popular.
### You need only one...
-----
[The superorganism might be very efficient ....] - as efficient as any individual could be.
### Oh, much more so. There are most likely limits on the amount of information that can be processed by a singular sentience (a limit on IQ, if you like). Societies can overcome this limit by joining the abilities of many individuals but only if the cooperation is proceeding smoothly. The SO could run much smoother than, let's say, the United Nations, CIA, Walmart or other conventional organizations.
-----
therefore, the resources, and disparate needs of an individual won't be a problem. They can all be simulated (except the computer memory and processing power of course).
### Processing costs money (or energy, or whatever is important).
------
say anything about superorganisms, we need a clear definition of what merging is and does.
#### The SO could be initiated by designing a sentience wholly devoted to the goal of forming the SO. It would then split, or spawn, producing copies of itself as fast as possible. Copies would differentiate slightly, to accommodate various tasks within the SO, yet retain the lack of self-oriented goals (like individual survival, except in as far as necessary to serve the SO). That's just one of the possible ways.
Rafal
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
In Antonio Damasio's books about the mind he gives an example of people with spinal injuries whose lack of feedback from the body changes their abilities to perceive, comprehend, etc. If one is downloaded, then unless a programme deceiving oneself into believing one still has a metabolising body is downloaded with the mind, one's "downloaded mind" will begin to evolve in ways that may have radical emergent properties. Greg Egan (science fiction writer) has explored the concept of minds living in computer-generated mindscapes, but he still kept them basically human in feel. It seems that downloading minds would be more a way to evolve superior programmes, and might even require crippling one's downloaded mind in order to make it dedicated to a particular purpose. "Slave-minds". Would you enslave your own mind?
If we start to view our own consciousness as toys or profitable programmes, we are getting into a very weird future - reductionism in extremis. A great breakthrough in downloading minds would be in being able to develop new perceptions that were so powerful that old ways of rationalising the same old wars, territorialism, greed, violence, blah blah blah, would become undeniably ridiculous.
Our modes of thinking and prioritising have stayed the same for centuries. The witch doctor is still at large, but now he sits in front of the television scratching himself. Developing different ways of thinking - attempts like Edward De Bono's work, studies on fuzzy logic, attempting to think in ways other than mutually exclusive logical steps - isn't sufficient. The change in mind set would have to be carried along within the culture as a popular phenomenon.
Using developments in artificial intelligence to develop the breadth of our own capacities for creating a deeply fulfilling and sustainable society would surely be the greatest advancement we could make.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
There are quite a few problems if what Ray Kurzweil thinks comes true in the future. First, if Ray Kurzweil is right that we can scan human brains and re-create their design electronically by 2030, then the next step we will take is to make human clones, or transfer human memories into another body to extend lifespan. If we can extend the human lifespan, then first, this will definitely increase the over-population problem dramatically (''human population estimates ranging from 0.6 to 1.2 billion'' (i)). Currently we already have over 6 billion people, and if we can extend the human lifespan, the population will definitely increase rapidly. Also, if we can transfer human memories into another body, it will lead to another problem: who is eligible for transferring memories into another body, and who is not? Obviously, people who are rich and famous would get access to this technology before people who are poor and unknown. Eventually, the rich and famous will have an 'eternal life'. However, limiting it to the rich and famous does not make it a good idea at all. For example, I suppose no one would be happy if we had had this technology back in the '30s and '40s and had been able to give 'eternal life' to Adolf Hitler. He was a famous person in the '30s and '40s, but it would definitely have been a disaster if he had had 'eternal life'.
Another problem is who would control the use of this technology. In our current information-technology age we already face many problems with computers, such as spyware and viruses. If we are able to scan human brains into computers, it is troubling to imagine the 'data' in those computers being infected by viruses. It would also be easy for the 'programmers' to 'edit' the 'data' during the transfer to another body. This means the 'programmers' could easily censor information they do not want, and they could easily exert 'mind control' over people by 'adding' information to the 'transferable data'.
In addition, this technology would leave no privacy at all. The 'programmers' could keep track of every single detail inside your 'memories', and I believe no one would like having no privacy at all.
To be more optimistic, scanning human brains and transferring human memories to another body might remain science fiction, because we still lack knowledge about the human brain. Also, even if we can scan and copy a human brain into a computer, does that mean we have also copied the person 'spiritually'? Human memory and human spirit might be two different things: making a copy of human memory might not mean making a copy of the human spirit, and transferring human memory might not transfer all the information a human contains into another body. It might also turn out that scanning human brains is simply impossible for computers to do, or that we run out of the resources computers would need to perform such a task.
On the pessimistic side, if computers can truly scan, copy and transfer human brains (human minds), then we will have one of the following results:
1. Computers might take control of human beings at some point.
2. A small number of people will mind-control the whole human population.
Therefore, to prevent such things from happening, governments and organizations should take responsibility for controlling research in these areas and the use of this technology.
i. http://www.ecofuture.org/pop/rpts/mccluney_maxpop.html, EcoFuture (TM) Population and Sustainability - How Many People Should the Earth Support?
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
There has been an experiment with rats, in which they connected a wire into his brain. In his pleasure centre to be exact. The rat just had to push a button in his cage to go into extacy. The resulting behaviour was pushing this button all day long. No more eating, no more drinking. Death.
I don't think the purpose of life is evolution. I think coping with entropy is the purpose of life and evolution is the only way to survive entropy. Complexity is not necessary, otherwise man would be the only species alive. Our complexity is a coincidence.
With this complexity of ours something strange has happened; we don't adapt anymore to our environment, we adapt our environment.
Every living creature has pleasure centres he wants to activate, and by doing so, he follows a program to survive. Only he only reaches pleasure now and then because he is not succesfull all the time. That's why he is still evolving; failures.
So trying to reach pleasure is what's needed in life, not reaching pleasure, let alone having nonstop pleasure. Because then there is no more need to fight entropy, there is no more need to live, no more need to evolve.
So if we reach the point of uploading our brain, the biological evolution has come to an end because we don't need bodies anymore. Evolution will take place outside biology. If there is any evolution left, because there is no need for evolution if you are in continous extacy. There is no need for life, no need for consciousness, no need for fysical abalities to cope with entropy.
Maybe we will become God at that point; all the living souls without a body uploaded and interconnected in total extacy and all-knowing...
Cheers.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
'There has been an experiment with rats, in which they connected a wire into his brain. In his pleasure centre to be exact. The rat just had to push a button in his cage to go into extacy. The resulting behaviour was pushing this button all day long. No more eating, no more drinking. Death.'
Unlike rats, some of us have some concept of moderation, and find satisfaction in considerably more variety than a rat does in the (excuse the pun) singular entertainment of a button's press of food.
'I don't think the purpose of life is evolution. I think coping with entropy is the purpose of life and evolution is the only way to survive entropy. Complexity is not necessary, otherwise man would be the only species alive. Our complexity is a coincidence.'
The purpose of life is, like most things, relative to the individual's given bias. Entropy being a fancy way of saying 'chaos by unpredictability', when you say our purpose is to conquer it you are implying that our satisfaction comes from stifling out the unsatisfactory, which, stemming *typically* from need, is why evolution has traditionally come to be viewed as the development of means by which to meet those critical ends. It is true that complexity isn't strictly necessary for this, but that complexity is NOT a coincidence. Having evolved to the point where we can fend for ourselves quite well, thanks, as Kurzweil says, to the opposable thumb and the development of complex language mechanisms, we can not only survive with our tools but play with them and forge stories. If you like, evolution serves another purpose besides stifling entropy: it allows us to create it, by making games.
'With this complexity of ours something strange has happened; we don't adapt anymore to our environment, we adapt our environment.'
Again, this doesn't address the issue of what more we 'adapt' our environment for, if not simply to survive.
'Every living creature has pleasure centres he wants to activate, and by doing so, he follows a program to survive. Only he only reaches pleasure now and then because he is not succesfull all the time. That's why he is still evolving; failures.'
You're presuming your premise, that pleasure is strictly for survival ~ humans are a bit more special than that. I can appreciate your saying that pleasure needs to be refused on occasion to be appreciated and thus, relative to perception, present, but as I said, moderation handles this nicely (and tends to develop in those who actually get what they want and need, and who find satisfaction in alternative, usually less material or sensory pursuits, as in the artistic or intellectual, etc.). Another convenient mechanism a lot of people don't seem to consider is that we forget. Putting down a book I cherish and love for years on end, on picking it up again I have not only a fresh perspective but a novel appreciation. Maybe our weaknesses hold unique advantages in their own right, and are not failures exclusively (thank God).
'So trying to reach pleasure is what's needed in life, not reaching pleasure, let alone having nonstop pleasure. Because then there is no more need to fight entropy, there is no more need to live, no more need to evolve.'
Non-stop pleasure? Well, how do you define 'pleasure'? It's a vague word, because there are kinds of pleasure that go beyond sensory stimulus. I can be continually pleasured by sampling a variety of activities and challenges while still remaining focused, in that those distinct enterprises serve a common, unifying theme. It is an axiom, almost to the point of cliché throughout history, that technology creates more problems than it solves, and that is much to the delight of us scientists. Although it would seem we're making the world more complex to the *demise* of the world (or so say our dear neighboring Luddites), we are in fact making the world a better place, toward the end of addressing more fundamental problems not yet addressed, problems which perhaps would not have been relevant otherwise but are necessary growing pains in the process of evolution. We fight these problems, but not the entropy itself. Entropy is what we feed, and it comes back to give us purpose, full circle.
'So if we reach the point of uploading our brain, the biological evolution has come to an end because we don't need bodies anymore. Evolution will take place outside biology. If there is any evolution left, because there is no need for evolution if you are in continous extacy. There is no need for life, no need for consciousness, no need for fysical abalities to cope with entropy.'
In a sense, biological evolution, I would guess, will not come to an end, because when we speak of evolution we're talking not just of the body in the sense of genetics, but also of the mind that it is influenced by. Just as we are shaped by our environment, we are shaped by that environment depending upon how it's perceived, and that perception is dependent by proxy on the body. An upload will be capable of altering either or both of his/her central and peripheral nervous systems, and of relaying their signaling mechanisms either to the functional equivalent of the biological body or to some foreign construct. If the latter is the case, as may well eventually be, the mind will shift in response to that new medium; but if the former is the case, even partially, we will continue to evolve according to the biological programming's predetermined influence in conjunction with the environment, though both can be variable in direct proportion to one another, as in our actual reality. It really depends on how you define evolution, but I think it has a larger meaning than may be inferred from your argument in its attempt to convince us of biology's alleged nullification of evolution's role. If you were to ask what my 'need to evolve' would be, I have a plethora of answers, foremost of which is to master the piano, a very close second being to master Go, if not Chess (but I'll get around to that and many other things along my long way).
'Maybe we will become God at that point; all the living souls without a body uploaded and interconnected in total extacy and all-knowing...'
Wow. You used the word 'God'. That's a very much regurgitated can of worms I scarcely dare to dip into, but I've commented on the rest of what you said, and given you misspelled continuous, physical, abilities, and ecstasy, I figure I might as well throw up a final wall to bang your memes up against. God is the singular form of 'god', which, being without capitalization, implies there is some multiplicity of gods composing the whole of the world's fundamental rules. Offhand, it should seem intuitively obvious that there can be one god (a.k.a. 'God'), since any system is inherently coherent, and logic begets causality, which roots by reductionism toward the source. I won't say it's necessarily the 'why' in every sense of that word, but it should be sufficient, given our acknowledged ignorance (unless you're the kind of idiot who dives into blind faith instead of theory, as in science and philosophy), that it is the 'how'. As some knowledgeable person said above on this page, physics begets chemistry, which begets biology, which begets technology. Personally I'd argue that below the level of physics is a more fundamental one of mathematics, and that its paradoxical nature is the reason physics is so tantalizingly mysterious for all the questions it leaves unanswered. The mere fact that there are so often answers like 'there is infinity', despite the intuitive command in our heads that this isn't possible, should lend some plausibility to the multiverse theory, but I'm ranting on a tangent, sorry.
The point is, you can encapsulate 'God' to mean the collective force of universal law, or Truth (not in the sense of mere facts per se, but the reason things become what they are, or in other words the reason for change; Tao Te Ching ring a bell, anyone?). Science prides itself on holding the most rational method for attaining exactly that: Truth! But any scientist will tell you that there are very few widely acknowledged laws, and even these are disputed, the most prominent example being gravity *shudders*. In other words, almost everything, if not everything, is a theory. Even if all our logic is coherent with the presently observed world, it is unwise to be confident in our beliefs because, and I'm roughly paraphrasing this from some now long-dead philosopher, heh, we can't know if we don't know, or what it is we don't know if that's so [hey, I even made it rhyme! :O yay].
Moreover, having full knowledge of Truth or Universal Law or whatever you want to call it would not make us God itself, except in the sense that everything already is a manifestation of God, in that as its creation we are its children, if you want to put it into Biblical terms; but then again the division between objects is arbitrary, and you might as well say we're a collective organism, or, for that matter, in macroscopic analogy to the brain, that we're a conscious entity like the brain: the so-called noosphere. Therefore, taking that likewise up to scale, God itself is conscious through all matter, and It is growing with and by us. But we'd be demi-gods at most in the future, and then again aren't we just that now, in a sense? Those of the past would have thought what we do now is magic, for the incredible power it would have seemed relative to their own weakness and ignorance (and for their mysticism). In a sense, science is the most feasible attempt at understanding, and channeling, the power of God, as if we were Its very instruments (but nothing else, even if in full consciousness of 'It'; notice I omit gender? Ha! I've eliminated sexism AND feminism with one stroke!! Bwahaha!), Its will nothing less or more than what, simply, is. 'I am what I am. That is all that I am.' Puts a new twist on the nature of good and evil, doesn't it? Bias gives us meaning, shadow from the light. To answer your final, sarcastic comment on how perfect everything will be: the boredom argument is hardly new (I'm already bored with it, actually); some say that's why it's likely we're being run as a simulation right now. Are you entertaining yet? Hurry up and get complex so we can make this divine play interesting. Whether it's a comedy or a tragedy is up to you. x_+ hehe <3
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
How do you decide from which level of structure (e.g., atomic, molecular, macromolecular, DNA, RNA, cell membrane, proteome, cell organelle, synapses, etc.) your upload will choose its substance?
I think you'll need, in fact, not to choose but to take everything.
Which seems all but impossible, even with your nanobots.
However, maybe what constitutes memory on a specific level (e.g., visual images, acoustic images, linguistic structures) will become a source for this upload, once it has been appropriately identified (at present, I can't know what molecules represent in my brain the computer screen I'm gazing at).
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
With regard to the question of at what level it is appropriate to attempt a read of the brain for uploading to be successful, I agree you need to take it all into account without exception. What we don't seem to agree on is that endeavor's feasibility, or even the specific interpretation we attribute to the meaning of "all". What I mean is that you can encapsulate everything without explicit reference (though you would of course, by necessity, have implicit reference) to the more abstract aspects of the brain we call by words like "neuron" and "synapse". In other words, we need [brace yourself] an *atomic*-level reading of the brain.
I also doubt this can feasibly be done with nanobots, not only because of the limits of indirect reading, as by wireless scanning from the bloodstream or even from further within the tissue (itself raising the danger that the specimen is irrevocably contaminated beyond any possibility of recovering its initial state), but more tellingly because the brain itself - in the scenario wherein Kurzweil proposes such a method - is active, processing in real time per its panoply <Yes Mr. Special K., I'm borrowing your favorite word ;P>, which it seems to me would make it rather difficult to read accurately, besides the doppelganger issue of having an original you to kill off o_O; the fuzzy-logic issue of the discontinuity of consciousness this would raise seems telling, for it implies we would be making a mind clone valid only for the instant of the copy, but not after they experience a different environment and/or a different perceptual function (bodily constitution as medium between mind and environment).
The short and long of it is that cryonics seems to me to be the most philosophically appropriate route to uploading, but I don't agree with the majority, who seem to think the best solution would be nano-repair. Nanotechnology, while powerful, will not hold such direct relevance to mind uploading from the standpoint I imagine of waking from cryosleep, since the purpose isn't just to wake up fine but to do so in a computer, and not in your biological body. By then we could get rid of a lot of the diseases and more subtle vulnerabilities of the flesh, but not the flesh itself (at least not in the way typically proposed). It will take not nanobots but more advanced forms of technology we already have available here and now, such as the cryo-ultramicrotome, cryo transmission/scanning electron microscopy/spectroscopy, cryo confocal laser scanning microscopy, annular dark-field imaging, cryo immunohisto(-cyto-)chemical staining of serially sectioned tissue, neural nets (but calibrated to the quasi-atomic and molecular levels, not just an arbitrary cellular level), and a host of algorithms derived from in-vitro wet-lab experiments we're barely beginning to uncover from efforts such as the Blue Brain Project, to name one prime example.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
'You may not think the brain thinks. I call that somewhat counter-intuitive, although its a logical corollary of your position. A neuron individually may not be capable of thought, but a network of neurons (as physicial and chemical entities) clearly DOES think in the context of a healthy brain.'
I think the same, but unfortunately you misunderstood my argument. Your original case asked us to wonder if the Sun (which is ONE component in your sundial computer) is 'thinking'. I then showed that it is an error to reduce a system to its component parts, argue that each part is incapable of exhibiting a certain behaviour, and then conclude that such behaviour cannot exist in a suitably organized system.
Put it this way: A hydrogen atom is not water. An oxygen atom is not water. But combine two atoms of hydrogen and one of oxygen and what do you get?
'Your only conclusion must be, I suppose, that my AM/PM calculator is thinking'.
That is not the case. There is little reason to suppose this system is building an internal model of a dynamic environment consisting of self/body/place, predicting the behaviour of the real world (as far as its senses can probe), and adjusting its model until both coincide to a degree that is useful to the system's survival.
Frankly, you cannot take such a simple computer as your AM/PM contraption, make the reasonable point that it is not capable of 'thought', and then conclude that ANY computational system, no matter HOW closely it performs the types of information-processing and world-modelling capabilities of a brain, must be similarly incapable of cognition.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
'For example, you can not well simulate an another person's mind, let alone your mind that you must know perfectly'.
Actually, neuroscientists using functional brain imaging have provided experimental proof that an individual does NOT know their own mind perfectly; that the brain knows things that you are subjectively unaware of.
The ideas of simulation and of computer brain augmentation exclude each other, as the simulating computer would have to simulate the brain computer.
3. Anything that you simulate must be less computationally expensive than the simulator capacity'.
The collective brain power of the human race is several orders of magnitude LESS than the organised computational capacity of the local resources available to a type II intelligence.
'If your simulation runs a simulation in its turn, and that simulation runs a simulation in its turn, and so forth - pretty soon the next simulation would not be available'.
Well, the 'computer' available to type II civilizations has an operational lifetime of tens of billions of years, and a computational speed sufficient to emulate the entire history of thought at the Type 0 level (us) in a few microseconds. Each type of civilization differs from the next lower type by a factor of 10 billion. Perhaps, quite literally, God knows what computational resources are available to Type IV.
Maybe we are real, living in a universe that is gradually approaching a state where life cannot be supported, in which case awareness will no longer exist. Or, we are a simulation, running on a computer that will run it to completion, whereupon the simulation ends and we no longer exist.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
'Ours will disappear long before the sort of energy cv I is supposed to "harness" is there'.
Of course.
'Civilization I is sure impossibility, nothing saying about Civilization II'.
The transition from type 0 to type I is the most IMPROBABLE leap, because it seems to necessitate the abandoning of tribal and territorial habits in favour of the establishment of a single planetary civilization. The state of the world today makes one despair of this ever happening.
Then again, a Type 0 civilization derives its energy from fossil fuels which create global environmental problems that require truly global collaboration if they are to be tackled. There is nothing quite like a common enemy to make tribes forget their differences and start collaborating.
Fossil fuels are also in finite supply, as is all the matter locked up in our planet. This means the continuing survival of civilization depends upon learning how to manipulate matter as efficiently as possible, a process that logically leads us to nanotechnology. Nanotechnology has already begun to make advances in tapping solar energy, and since we are still very much in the primitive stages of nano, we are at a similarly primitive stage in learning how to harness solar energy.
What is so important about solar energy? Well, type I is sometimes mistakenly described as 'the means to control hurricanes, earthquakes... all the currently wild and untamed forces of the Earth!', but this is not the case. In actual fact, a type I civilization is defined by the ability to harness the total amount of solar energy striking the Earth: 10^16 watts. All that talk of making volcanoes explode on demand is just wild speculation about what you might do with all that power.
(BTW, I do not necessarily believe you can successfully harvest the ENTIRE 10^16 watts.)
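To put the figures quoted in this thread side by side, here is a minimal sketch in Python. It is purely illustrative: it uses only the numbers stated above (the 10^16 W Type I figure and the factor of 10 billion between types), not measured values, and the variable names are my own.

    # Illustrative ladder built only from the figures quoted in this thread
    # (assumptions for discussion, not measured data).
    TYPE_I_WATTS = 1e16   # solar power striking the Earth, as quoted above
    STEP = 1e10           # "each type differs from the next lower type by a factor of 10 billion"

    ladder = {
        "Type I": TYPE_I_WATTS,
        "Type II": TYPE_I_WATTS * STEP,          # roughly a star's worth of power
        "Type III": TYPE_I_WATTS * STEP * STEP,  # roughly a galaxy's worth of power
    }
    for label, watts in ladder.items():
        print(f"{label}: ~{watts:.0e} W")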
'I wonder if Martians were having dreams like yours before they went into oblivion so far that even the sculptures that they have left, do not convince anybody they existed'.
Well, Mount Rushmore hardly compares to the simulacra conjured up by light and shadow falling on an outcrop of rock that was whimsically described as 'the face on Mars', or to the effects of erosion sculpting rocks to look 'sort of' like tools, such as you have presented.
But you have stumbled upon an important point, which is that for the purposes of THIS discussion the Kardashev scale is not the most appropriate one to use. We should be discussing the Sagan scale, which runs from A to Z and refers to the amount of information a civilization can store and process.
Type A has language but no writing; such a civilization processes 10^6 bits of information. Written language is type C, and it achieves 10^9 bits. All the books in all the libraries of the world today give us 10^13 bits, and the information on the internet (which is not just words but pictures and sound too) approaches 10^15 bits. 10^15 bits places us at Type H.
The point of all this is: how many bits of information does it take to encode a brain, its body, and the surrounding environment? Moravec estimates it at 10^18 bits. So we seem to be short by 3 orders of magnitude. However, a desktop rod-logic nanocomputer processes 10^18 instructions per second (is that the same thing as bits?), and our matrioshka brain blueprint, with its rod-logic nanocomputing systems using up all the carbon etc. in all the planets, processes 10^47 bits, which exceeds the amount required to encode the entire population of the Earth AND the Earth itself by 19 orders of magnitude.
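As a quick sanity check on those orders of magnitude, here is a small Python sketch. Moravec's 10^18-bit estimate and the 10^47-bit matrioshka-brain figure are the numbers quoted in the post above; the round 10^10 population count is my own assumption for illustration.

    import math

    # Order-of-magnitude check on the figures quoted above (illustrative only).
    bits_per_person_env = 1e18   # Moravec's estimate: one brain, body and surroundings
    population = 1e10            # assumed round figure for Earth's population
    bits_everyone = bits_per_person_env * population   # ~1e28 bits for everyone plus their environments
    matrioshka_bits = 1e47       # capacity quoted above for the rod-logic matrioshka brain

    surplus = math.log10(matrioshka_bits) - math.log10(bits_everyone)
    print(f"Encoding everyone: ~1e{math.log10(bits_everyone):.0f} bits")
    print(f"Spare capacity: ~{surplus:.0f} orders of magnitude")   # ~19, matching the post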
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
'What process ? Where does this process reside if not in the brain ? And isn't ths process the whole point ? What on earth makes you think this process doesn't go on in the brain ?'
The process I am referring to is the process of information being fed to the brain, which it uses to construct the mind's model of reality. The point here is this: is it REALLY raining, or is the brain merely being fed the signals it would receive if it WAS raining? The 'Godlike' intelligences feeding the signals to the brain are in a position to know it is not 'really' raining, just as a person standing over you during your REM period of sleep knows you are not REALLY dancing in the rain with a pink gibbon. But so long as they do not wake you, from your subjective point of view you ARE dancing in the rain.
So, yes, I DO think this process goes on in the brain. But I am NOT absolutely sure that what I perceive to be 'real' actually corresponds to something that really really really DOES exist 'out there' beyond the only reality I have access to...the virtual reality model constructed by my brain.
'Isn't the "sensation of becoming wet" EXACTLY what brain processes produce ? And if this is what brain processes produce, then how can you conceivably say the brain carries on in ignorance of the rain ?'
No. The 'sensation of becoming wet' is what the MIND produces, and the mind is an emergent property of the BRAIN, an appropriate set of sensors, and the environment. The BRAIN knows nothing of 'rain' because it is totally isolated from the real world by being encased in the skull. ALL it is in contact with is the information coming in through the input axons.
Again, if an artificial brain could be fed signals equivalent to the information it would receive if it WAS raining, then the SYSTEM (the brain, its sensors, and the information) would build a subjective model in which a MIND believes it is raining, even though observers OUTSIDE of the system believe differently.
'The brain is not in ignorance of rain : if it were, so would we'.
This is wrong. See above.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
'I think you think we are in agreement but I think we're not. I dont believe that mental states are caused by computational states but by the brain which is a physical object which has the causal powers to create them.'
People wonder how far into the future you would have to travel to meet computers that can think. I do not know the answer to that, but I DO know that you only have to travel a few decades into the PAST to encounter computers that are capable of thinking.
That is because, in the past, the term 'computer' was applied to people whose job it was to perform calculations. Then we built machines whose job it was to perform calculations but these too were quite different to the computers we use today, at least in appearance.
Now, we have the emerging field of 'neuromorphic modelling', in which functional brain scanning is used to study the way the brain processes information, and then to build 'functionally equivalent' hardware/software. And we know they function equivalently to the biological versions because we have connected artificial neurons with 'real' neurons in a hybrid biological-nonbiological network and gotten the same type of results as an all-biological net. Also, rats have had a portion of their hippocampus removed, and a chip neuromorphically modelled on that portion was implanted in their brains. It restored function with 90% accuracy.
So, we have neuromorphic models of some types of neurons and some brain regions. Therefore, it is not entirely beyond the realms of possibility that we could reverse engineer EVERY kind of neuron, model EVERY brain region (and maybe subsequent generations of chips will progress from 90% to 100% accuracy) and construct a COMPLETE artificial brain. This neuromorphically modelled brain processes information in a way that is functionally equivalent to a human's brain. Does that mean this 'computer' can 'think'?
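For readers who want to see what a 'functionally equivalent' stand-in for a biological component looks like in the simplest possible case, here is a toy Python sketch of a leaky integrate-and-fire neuron. This is my own illustrative example of the general idea only; it is not the model behind the hippocampus chip or any of the experiments described above, and the function name and parameters are hypothetical.

    # Toy leaky integrate-and-fire neuron: a minimal illustration of replacing a
    # biological component with a functional stand-in (not the actual neuromorphic chip).
    def lif_spike_times(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Return the spike times produced by a leaky integrate-and-fire neuron."""
        v = v_rest
        spikes = []
        for step, current in enumerate(input_current):
            # membrane potential leaks toward rest while integrating the input
            v += dt * (-(v - v_rest) / tau + current)
            if v >= v_thresh:           # threshold crossed: emit a spike, then reset
                spikes.append(step * dt)
                v = v_reset
        return spikes

    # A steady stimulus produces a regular spike train, crudely mimicking tonic firing.
    print(lif_spike_times([0.08] * 200))

Real neuromorphic models are of course far richer (conductances, synapses, plasticity), but the logic of substituting function for substrate is the same.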
'I personally believe that subjective mental states are ultimately physical phenomena, but just cant be reduced to particulate activity.'
Yes, we have a lot of arguments at MindX about what causes the phenomenon we call 'mind'. Some think the brain does. Some argue in favour of the body. Some point to the environment as the cause. I think the truth is that 'mind' does NOT exist in any of these candidates ALONE but is an emergent pattern that results between a flow of information exchanged amongst the ensemble. You CANNOT separate 'mind' from brain/body/environment and still retain it, just as you cannot separate hydrogen and oxygen atoms and still have 'water'. But that does not mean to say 'water' does not deserve to be thought of as a thing in and of itself. After all, it definitely has distinctive characteristics that cannot be reduced to the properties of hydrogen and oxygen which make it up. I think we can say the same thing of 'mind'. It does not exist separate to brain/body/environment but properties like qualia (which cannot be reduced to brain or body or environment) make it a thing in and of itself.
The interesting question, though, is this: given that the brain builds a MODEL of the self/body/environment emergent pattern we call 'mind' from the information flowing in on the input axons, and it is this MODEL of reality (including our sense of physicality and subjectivity) that is the only reality we actually KNOW, what happens if INFORMATION equivalent to the information a brain would receive if it were in an actual environment were sent into its input axons, and we could read the outputs and adjust the simulation accordingly? In a very crude sense, we have performed this experiment already, because people have had the brain region responsible for recognising faces stimulated with electric current, and the people reported seeing faces....
'This has led some AI enthusisasts to postlate the existence of a 'brain within a brain', which i think is desperate.'
Yeah you can see infinite regress happening here. So a brain needs a brain within it to function as a brain? Why, that must mean that the brain in my brain needs a brain in ITS brain that has a brain in ITS brain...
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
"This neuromorphically modelled brain processes information in a way that is functionally equivilient to a human's brain. Does that mean this 'computer' can 'think'? "
No. It runs a program that implements a representation of a brain. A representation of a thinking brain is different to an actual thinking brain in the same way that a duck is different to a painting of a duck.
I think you focus too much on perception and the senses. The 'model of reality' we have is not restricted to the senses; for one thing, we know our senses to be imperfect.
We have a 'model of reality' based largely on ideas, one of which is that, in general, the senses see things as they are. When they don't, we are aware of it.
But it makes no sense to say that we don't see a table, only an appearance of a table, for example. We see a table.
If we didn't believe this, we might as well go home and pretend we're the only people in the world.
In your example of the brain being triggered artificially by an external input, the correct answer is that the patient doesn't see anything: he hallucinates.
We nonetheless have some agreement - although I think the mind emerges from neuronal and physiological behaviour, not 'information processing'.
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
'It runs a program that implements a representation of a brain. A representation of a thinking brain is different to an actual thinking brain in the same way that a duck is different to a painting of a duck'.
A completely misleading analogy.
What I am describing is a working replica of a brain, that can perform the capabilities of the original. We have already demonstrated that we can build artificial neurons that process information like the 'real' thing, and by hooking both up in a biological-nonbiological network we have shown that 'real' neurons are quite happy to work with artificial ones as if they were 'one of them'. We have also demonstrated that we can reverse engineer a brain region and replicate its functionality. Again, this was verified by removing the portion of the brain responsible, and wiring in circuitry designed to do the job of the original.
So what I am talking about is not merely a 'program'. It is a physical device that does what a brain does. Your duck analogy is like asking 'can a picture of a stomach help digest a pizza?', to which the answer is obviously no. But so what? That has very little to do with what I am describing. The PROPER question to ask is 'can an artificial stomach, designed to perform the functions of a 'real' stomach, help digest a pizza?'
'In your example of the brain being triggered artificially by an external input, the correct answer is that the patient doesn't see anything: he hallucinates'.
But is the patient AWARE whether the information his mind is interpreting into a model of reality corresponds to a 'real' place? Put it this way: while you are dreaming, are you AWARE that your physical embodiment in the environment is purely the result of activity in the brain and has no correspondence to the 'real' environment in which you exist?
The answer is no. While you are asleep and dreaming, it IS your reality. So, for as long as the brain is being fed information equivalent to what it would receive if it were in the ACTUAL environment, and so long as the mind's states were monitored in order to update the information to correspond with what the model of reality predicts, how can there be any subjective awareness of the 'fact' that it is hallucination?
'But it makes no sense to say that we don't see a table, only an appearance of a table, for example. We see a table.'
Again, this misses the point. The brain is not in direct contact with the world. All it 'knows' (actually it does not 'know' anything but never mind) is the information it receives from the senses. The information it has at any given moment is used to predict what the next input of information will be. If the information corresponds to that which the senses would send if there really was a table, then as far as you are concerned there IS a table.
This is more than mere guessing. Actual experiments have been carried out in which patients have had certain areas of their brains stimulated with electric current, resulting in their experiencing hallucinations corresponding to the area being stimulated. So, for example, if the area being stimulated is the brain region responsible for recognising faces, the patient reports that s/he can see faces.
Now, before you jump on the fact that the patient 'knows it is an hallucination' that is only because the brain is receiving a lot of information from the environment, most of which does NOT agree with what that particular region is 'reporting'. But what if we could stimulate EVERY brain region that WOULD be stimulated if there really WAS a person standing in the room? How would the mind then be able to tell the difference between hallucination and reality?
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
I'm a year behind and playing catch-up. I agree with the concept that mind is the emergent property that comes from brain, sensory input and environment. After all, that's how awareness emerged during evolution (if one is allowed to accept that "theory" as true!). This leads to an issue that I haven't seen much discussion of: once we upload a mind to a computer, how do we provide input that feels real? I believe we underestimate the processing that goes on in the nerve endings in fingers, arms, eyes, ears, etc. The feeling of rain on one's skin takes in a number of separate but self-consistent inputs, including touch: impact, running drops tickling our scalp; temperature sensing, including the cooling effect of evaporation as well as the temperature difference between the air and the water, plus the sensation of the water in contact with the skin warming as a result of that contact, etc. Not to mention the sound of rain falling and the sight of drops in the air, hitting the sidewalk and darkening it, and the smell of all the things that rain brings, from stirred-up dust to ozone. We discuss uploading as a hard problem in itself, which it is, but also hard, and perhaps of a different order of difficulty, is simulating those inputs. Imagine if you were the uploaded mind, residing in a computer somewhere. Right now we're able to supply vision and sound with cameras and microphones, but smell, taste and touch and the various other subtle inputs, such as proprioception--the sense of how our body is positioned and moving, etc.--are beyond us. Any comments on this problem?
Re: Syntax vs. Semantics
As you already feel, the concept of SEMANTICS is a PSYCHOLOGICAL ONE, not a mathematical/informatical/cybernetical one ;-))
Let me tell you THE REAL SITUATION, as a mathematician.
Mathematically, it DOESN'T MATTER whether you denote a symbol with B or C, nor whether the Java functions are f1, f2, f3... or Init, Check_Parameters, Print_Results... But it is very PSYCHOLOGICALLY IMPORTANT to humans to use semantic descriptors and comments, as our memory is very poor and after a while all of us FORGET that f2 was to stand for Check_Parameters.
So, can some "abstract symbols" have a kind of IMPLICIT SEMANTICS, i.e. can the SYNTAX ITSELF YIELD THE SEMANTICS? Mathematics is overflowing with such examples, and below I will use one.
Assume that we have the right to CREATE A SET with ANY elements we already have. At first we have no elements, and thus we can only create the empty set ∅. Using it as an element, we can create a ONE-ELEMENT SET {∅}.
Furthermore, let's repeat this procedure: with each further step, create the set of ALL PREVIOUSLY DEFINED SETS. So we get these sets as results of that procedure (the first two are included at the beginning):
∅
{∅}
{∅,{∅}}
{∅,{∅},{∅,{∅}}}
{∅,{∅},{∅,{∅}},{∅,{∅},{∅,{∅}}}} etc.
This is a SYNTACTICALLY CREATED MATHEMATICAL STRUCTURE. Has it some kind of IMPLICIT SEMANTICS WITHIN ITSELF? YESSS!
First of all, using the concept of a SUBSET, we can always order the sets obtained. Also, we can always define the concept of the IMMEDIATE SUCCESSOR of a particular set. Denote it with the operator #; so we have these equations:
∅# = {∅}
{∅}# = {∅,{∅}}
{∅,{∅}}# = {∅,{∅},{∅,{∅}}} etc.
Replacing the left-side sets with ∅ and an appropriate number of #s, we eventually get:
∅# = {∅}
∅## = {∅,{∅}}
∅### = {∅,{∅},{∅,{∅}}} etc.
What is this structure? Is it known under another "semantic" notation? Of course it is: these are the NATURAL NUMBERS, denoted within the decimal system as follows:
0 = ∅
1 = ∅# = {∅}
2 = ∅## = {∅,{∅}}
3 = ∅### = {∅,{∅},{∅,{∅}}}
Then, the relation IS A SUBSET OF has the semantic meaning IS SMALLER THAN, while the SUCCESSOR operator is the OPERATION +1.
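The same construction can be carried out mechanically. The following small Python sketch (my own illustration, with hypothetical function names) builds the first few of these sets out of frozensets and checks that the purely syntactic "is a proper subset of" relation behaves exactly like "is smaller than":

    # Von Neumann-style construction of the naturals from the empty set (illustrative).
    def successor(n: frozenset) -> frozenset:
        # The successor is the set of all previously defined sets: n together with {n}.
        return n | frozenset({n})

    zero = frozenset()          # plays the role of the empty set, i.e. 0
    numbers = [zero]
    for _ in range(4):          # build the sets standing for 1, 2, 3, 4
        numbers.append(successor(numbers[-1]))

    # The syntax carries its own semantics: the set standing for n has exactly n elements,
    # and proper-subset ordering coincides with the usual "smaller than".
    for i, n in enumerate(numbers):
        print(i, len(n) == i)
    print(numbers[1] < numbers[3])   # '<' on frozensets means proper subset: True, i.e. 1 < 3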
Re: Live Forever--Uploading The Human Brain...Closer Than You Think
All these syntax vs. semantics arguments seem, to me, to be no different than the old chestnut, "If a tree falls in the forest, and there is no one there to hear it, does it make a sound?"
In a world of AIs, where humanity is extinct, do equations like F=ma and E=mc^2 have any meaning if there are no human scientists around to understand them? Are humans needed to create "real meaning," despite the presence of engineering marvels built by AIs, marvels that no human can even imagine today, but which nonetheless can only be built by understanding the meaning of such equations as F=ma and E=mc^2?
I think the notion that some HUMAN has to be around to make something "mean" something is ridiculous, just like the notion that someone needs to hear something for it to make a sound.