Dialogue between Ray Kurzweil, Eric Drexler, and Robert Bradbury
by K. Eric Drexler, Ray Kurzweil, and Robert Bradbury

Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0533.html

What would it take to achieve successful cryonics reanimation of a fully functioning human brain, with memories intact? A conversation at the recent Alcor Conference on Extreme Life Extension between Ray Kurzweil and Eric Drexler sparked an email discussion of this question. They agreed that despite the challenges, the brain's functions and memories can be represented surprisingly compactly, suggesting that successful reanimation of the brain may be achievable.


E-mail dialogue on November 23, 2002. Published on KurzweilAI.net Dec. 3, 2002. Comments by Robert Bradbury added Jan. 15, 2003.

Ray Kurzweil: Eric, I greatly enjoyed our brief opportunity to share ideas (difficulty of adding bits to quantum computing, cryonics reanimation, etc.). Also, it was exciting to hear your insightful perspective on the field you founded, now that it's gone—from what was regarded in the mainstream anyway as beyond-the-fringe speculation—to, well, mainstream science and engineering.

I had a few questions and/or comments (depending on whether I'm understanding what you said correctly). Your lecture had a very high idea density, so I may have misheard some details.

With regard to cryonics reanimation, I fully agree with you that preserving structure (i.e., information) is the key requirement, that it is not necessary to preserve cellular functionality. I have every confidence that nanobots will be able to go in and fix every cell, indeed every little machine in every cell. The key is to preserve the information. And I'll also grant that we could lose some of the information; after all, we lose some information every day of our lives anyway. But the primary information needs to be preserved. So we need to ask, what are the types of information required?

One is to identify the neurons, including their types. This is the easiest requirement. Unless the cryonics process has made a complete mess of things, the cells should be identifiable. By the time reanimation is feasible, we will fully understand the types of neurons and be able to readily identify them from the slightest clues. These neurons (or their equivalents) could then all be reconstructed.

The second requirement is the interconnections. This morphology is one key aspect of our knowledge and experience. We know that the brain is continually adding and pruning connections; it's a primary aspect of its learning and self-organizing principle of operation. The interconnections are much finer than the neurons themselves (for example, with current brain imaging techniques, we can typically see the neurons but we do not yet clearly see the interneuronal connections). Again, I believe it's likely that this can be preserved, provided that the vitrification has been done quickly enough. It would not be necessary that the connections be functional or even fully evident, as long as it can be inferred where they were. And it would be okay if some fraction were not identifiable.

It's the third requirement that concerns me: the neurotransmitter concentrations, which are contained in structures that are finer yet than the interneuronal connections. These are, in my view, also critical aspects of the brain's learning process. We see the analogue of the neurotransmitter concentrations in the simplified neural net models that I use routinely in my pattern recognition work. The learning of the net is reflected in the connection weights as well as the connection topology (some neural net methods allow for self-organization of the topology, some do not, but all provide for self-organization of the weights). Without the weights, the net has no competence.
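To make this concrete, here is a minimal sketch in Python (plain NumPy; the hand-picked weights and the XOR task are purely illustrative, not anything from the dialogue): the same fixed two-layer topology computes XOR with one set of weights, and nothing at all once the weights are zeroed out.

    import numpy as np

    def net(x, W1, b1, W2, b2):
        # Fixed topology: 2 inputs -> 2 hidden units -> 1 output.
        h = np.maximum(0, x @ W1 + b1)        # hidden layer (ReLU)
        return (h @ W2 + b2 > 0).astype(int)  # thresholded output

    # Hand-picked weights under which this topology computes XOR.
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([0.0, -1.0])
    W2 = np.array([[1.0], [-2.0]])
    b2 = np.array([-0.5])

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print(net(X, W1, b1, W2, b2).ravel())                  # [0 1 1 0]: XOR, competent
    print(net(X, 0 * W1, 0 * b1, 0 * W2, 0 * b2).ravel())  # [0 0 0 0]: same topology, no competence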

If the very-fine-resolution neurotransmitter concentrations are not identifiable, the downside is not equivalent to merely an amnesia patient who has lost his memory of his name, profession, family members, etc. Our learning, reflected as it is in both interneuronal connection topology and neurotransmitter concentration patterns, underlies knowledge that is far broader than these routine forms of memory, including our "knowledge" of language, how to think, how to recognize objects, how to eat, how to walk and perform all of our skills, etc. Loss of this information would result in a brain with no competence at all. It would be worse than a newborn's brain, which is at least designed to begin reorganizing itself. A brain with the connections intact but none of the neurotransmitter concentrations would have no competence of any kind and a connection pattern that would be too specific to relearn all of these skills and basic knowledge.

It's not clear whether the current vitrification-preservation process maintains this vital type of information. We could readily conduct an experiment to find out. We could vitrify the brain of a mouse and then do a destructive scan while still vitrified to see if the neurotransmitter concentrations are still evident. We could also confirm that the connections are evident as well.

The type of long-term memory that an amnesia patient has lost is just one type of knowledge in the brain. At the deepest level, the brain's self-organizing paradigm underlies our knowledge and all competency that we have gained since our fetal days (even prior to birth).

As a second issue, you said something about it being sufficient to just have preserved the big toe or the nose to reconstruct the brain. I'm not sure what you meant by that. Clearly none of the brain structure is revealed by body parts outside the brain. The only conceivable way one could restore a brain from the toe would be from the genome, which one can discover from any cell. And indeed, one could grow a brain from the genome. This would be, however, a fetal brain, which is a genetic clone of the original person, equivalent to an identical twin (displaced in time). One could even provide a learning and maturing experience for this brain in which the usual 20-odd years were sped up to 20 days or less, but this would still be just a biological clone, not the original person.

Finally, you said (if I heard you correctly) that the amount of information in the brain (presumably needed for reanimation) is about 1 gigabyte. My own estimates are quite different. It is true that genetic information is very low, although as I discussed above, genetic information is not at all sufficient to recreate a person. The genome has about 0.8 gigabytes of information. There is massive redundancy, however. For example, the sequence "Alu" is repeated 300,000 times. If one compresses the genome using standard data compression to remove redundancy, estimates are that one can achieve about 30-to-1 lossless compression, which brings us down to about 25 megabytes. About half of that pertains to the brain, or about 12 megabytes. That's the initial design plan.
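Spelling that arithmetic out (a quick sketch in Python; the figures are the estimates quoted above, not measurements):

    genome_bytes   = 0.8e9   # ~0.8 GB of raw genome data
    compression    = 30      # ~30:1 lossless compression of the redundancy
    brain_fraction = 0.5     # roughly half the genome pertains to the brain

    compressed = genome_bytes / compression
    print(f"compressed genome: ~{compressed / 1e6:.0f} MB")                   # ~27 MB ('about 25')
    print(f"brain design plan: ~{compressed * brain_fraction / 1e6:.0f} MB")  # ~13 MB ('about 12')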

If we consider the amount of information in a mature human brain, however, we have about 10^11 neurons with an average fan-out of 10^3 connections each, for an estimated total of 10^14 connections. For each connection, we need to specify (i) the neurons that this connection is connected to, (ii) some information about its pathway, as the pathway affects analog aspects of its electrochemical information processing, and (iii) the neurotransmitter concentrations in associated synapses. If we estimate about 10^2 bytes of information to encode these details (which may be low), we have 10^16 bytes, considerably more than the 10^9 bytes that you mentioned.
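The same bookkeeping for the mature brain, using the order-of-magnitude assumptions from the paragraph above:

    neurons        = 1e11  # ~10^11 neurons
    fanout         = 1e3   # ~10^3 connections per neuron
    bytes_per_conn = 1e2   # ~10^2 bytes: endpoints, pathway, neurotransmitter state

    connections = neurons * fanout              # 10^14 connections
    total_bytes = connections * bytes_per_conn  # 10^16 bytes
    print(f"connections: {connections:.0e}, total: {total_bytes:.0e} bytes")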

One might ask: How do we get from the 10^7 bytes that specify the brain in the genome to 10^16 bytes in the mature brain? This is not hard to understand, since we do this type of meaningful data expansion routinely in our self-organizing software paradigms. For example, a genetic algorithm can be efficiently coded, but in turn creates data far greater in size than itself using a stochastic process, which in turn self-organizes in response to a complex environment (the problem space). The result of this process is meaningful information far greater than the original program. We know that this is exactly how the creation of the brain works. The genome specifies initially semi-random interneuronal connection wiring patterns in specific regions of the brain (random within certain constraints and rules), and these patterns (along with the neurotransmitter-concentration levels) then undergo their own internal evolutionary process to self-organize to reflect the interactions of that person with their experiences and environment. That is how we get from 10^7 bytes of brain specification in the genome to 10^16 bytes of information in a mature brain. I think 10^9 bytes is a significant underestimate of the amount of information required to reanimate a mature human brain.
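A toy illustration of this expansion, under loudly invented assumptions (the "genome" parameters, the wiring rule, and the crude Hebbian update below are illustrative stand-ins, not a model of real neurogenesis): a compact specification, stochastic wiring, and environmental interaction together produce a structure holding far more state than the specification itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny "genome": a few dozen bytes of specification.
    genome = {"n_units": 400, "p_connect": 0.05, "learn_rate": 0.1}

    # 1. Semi-random wiring within genome-specified constraints.
    W = (rng.random((genome["n_units"], genome["n_units"]))
         < genome["p_connect"]).astype(float)

    # 2. "Experience": repeated environmental inputs reshape the weights
    #    (a crude Hebbian update standing in for self-organization).
    for _ in range(200):
        x = rng.random(genome["n_units"])  # an environmental stimulus
        y = W @ x                          # the network's response
        W += genome["learn_rate"] * np.outer(y, x) / genome["n_units"]
        W *= 0.999                         # mild decay/pruning

    print(f"specification: ~{len(repr(genome))} bytes; mature structure: {W.nbytes:,} bytes")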

I'd be interested in your own reflections on these thoughts, with my best wishes.

Eric Drexler: Ray--Thanks for your comments and questions. Our thinking seems closely parallel on most points.

Regarding neurotransmitters, I think it is best to focus not on the molecules themselves and their concentrations, but rather on the machinery that synthesizes, transports, releases, senses, and recycles them. The state of this machinery must closely track long-term functional changes (i.e., long-term memory, or LTM), and much of this machinery is an integral part of synaptic structure.

Regarding my toe-based reconstruction scenario [creating a brain from a bit of tissue containing intact DNA-Ed.], this is indeed no better than genetically based reconstruction together with loading of more-or-less default skills and memories—corresponding to a peculiar but profound state of amnesia. My point was merely that even this worst-case outcome is still what modern medicine would label a success: the patient walks out the door in good health. (Note that neurosurgeons seldom ask whether the patient who walks out is "the same patient" as the one who walked in.) Most of us wouldn't look forward to such an outcome, of course, and we expect much better when suspension occurs under good conditions.

Information-theoretic content of long-term memory

Regarding the information content of the brain, both the input and output data sets for reconstruction must indeed be vastly larger than a gigabyte, for the reasons you outline. The lower number [10^9] corresponds to an estimate of the information-theoretic content of human long-term memory found (according to Marvin Minsky) by researchers at Bell Labs. They tried various methods to get information into and out of human LTM, and couldn't find learning rates above a few bits per second. Integrated over a lifespan, this yields the above number. If this is so, it suggests that information storage in the brain is indeed massively redundant, perhaps for powerful function-enabling reasons. (Identifying redundancy this way, of course, gives no hint of how to construct a compression and decompression algorithm.)
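Spelled out, with assumed values for the "few bits per second" rate and for waking hours (only the order of magnitude matters):

    bits_per_second = 5                        # "a few bits per second"
    waking_seconds  = 70 * 365.25 * 16 * 3600  # ~70 years at 16 waking hours/day

    total_bytes = bits_per_second * waking_seconds / 8
    print(f"~{total_bytes:.1e} bytes")         # ~9.2e+08, on the order of 10^9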

Best wishes, with thanks for all you've done.

P.S. A Google search yields a discussion of the Bell Labs result by, yes, Ralph Merkle.

Ray Kurzweil: Okay, I think we're converging on some commonality.

On the neurotransmitter concentration level issue, you wrote: "Regarding neurotransmitters, I think it is best to focus not on the molecules themselves and their concentrations, but rather on the machinery that synthesizes, transports, releases, senses, and recycles them. The state of this machinery must closely track long-term functional changes (i.e., LTM), and much of this machinery is an integral part of synaptic structure."

I would compare the "machinery" to any other memory machinery. If we have the design for a bit of memory in a DRAM system, then we basically know the mechanics for the other bits. It is true that in the brain there are hundreds of different mechanisms that we could call memory, but each of these mechanisms is repeated many millions of times. This machinery, however, is not something we would need to infer from the preserved brain of a suspended patient. By the time reanimation is feasible, we will have long since reverse-engineered these basic mechanisms of the human brain, and thus would know them all. What we do need specifically for a particular patient is the state of that person's memory (again, memory referring to all skills). The state of my memory is not the same as that of someone else; that is the whole point of preserving my brain.

And that state is contained in at least two forms: the interneuronal connection patterns (which we know are part of how the brain retains knowledge, and not a fixed structure) and the neurotransmitter concentration levels in the approximately 10^14 synapses.

My concern is that this memory state information (particularly the neurotransmitter concentration levels) may not be retained by current methods. However, this is testable right now. We don't have to wait 40 to 50 years to find this out. I think it should be a high priority to do this experiment on a mouse brain as I suggested above (for animal lovers, we could use a sick mouse).

You appear to be alluding to a somewhat different approach, which is to extract the "LTM," which is likely to be a far more compact structure than the thousands of trillions of bytes represented by the connection and neurotransmitter patterns (CNP). As I discuss below, I agree that the LTM is far more compact. However, we are not extracting an efficient LTM during cryopreservation, so the only way to obtain it during reanimation would be to retain its inefficient representation in the CNP.

You bring up some interesting and important issues when you wrote, "Regarding my toe-based reconstruction scenario, this is indeed no better than genetically based reconstruction together with loading of more-or-less default skills and memories—corresponding to a peculiar but profound state of amnesia. My point was merely that even this worst-case outcome is still what modern medicine would label a success: the patient walks out the door in good health."

I agree that this would be feasible by the time reanimation is feasible. The means for "loading" these "default skills and memories" is likely to be along the lines that I described above: using "a learning and maturing experience for this brain in which the usual 20-odd years were sped up to 20 days or less." Since the human brain as currently designed does not allow for explicit "loading" of memories and skills, these attributes need to be gained from experience using the brain's self-organizing approach. Thus we would have to use this type of experience-based approach. Nevertheless, the result you describe could be achieved. We could even include in these "loaded" (or learned) "skills and memories" the memory of having been the original person who was cryonically suspended, including having made the decision to be suspended, having become ill, and so on.

False reanimation

And this process would indeed appear to be a successful reanimation. The doctors would point to the "reanimated" patient as the proof of the pudding. Interviews of this patient would reveal that he was very happy with the process, delighted that he made the decision to be cryonically suspended, grateful to Alcor and the doctors for their successful reanimation of him, and so on.

But this would be a false reanimation. This is clearly not the same person who was suspended. His "memories" of having made the decision to be suspended four or five decades earlier would be false memories. Given the technology available at that time, it would be feasible to create entirely new humans from a genetic code and an experience/learning loading program (which simulates the learning in a much higher-speed substrate to create a design for the new person), so creating a new person would not be unusual. All this process has accomplished, then, is to create an entirely new person who happens to share the genetic code of the person who was originally suspended. It's not the same person.

One might ask, "Who cares?" Well, no one would care except the originally suspended person. And he, after all, is not around to care. But as we look to cryonic suspension as a means of providing a "second chance," we should care now about this potential scenario.

It brings up an issue which I have been concerned with, which is "false" reanimations.

Now one could even raise this issue (of a false reanimation) if the reanimated person does have the exact CNP of the original. One could take the philosophical position that this is still a different person. An argument for that is that once this technology is feasible, you could scan my CNP (perhaps while I'm sleeping) and create a CNP-identical copy of me. If you then come to me in the morning and say, "Good news, Ray, we successfully created your exact CNP copy, so we won't be needing your old body and brain anymore," I may beg to differ. I would wish the new Ray well, but feel that he's a different person. After all, I would still be here.

So even if I'm not still here, by the force of this thought experiment, he's still a different person. As you and I discussed at the reception, if we are using the preserved person as a data repository, then it would be feasible to create more than one "reanimated" person. If they can't all be the original person, then perhaps none of them are.

However, you might say that this argument is a subtle philosophical one, and that, after all, our actual particles are changing all the time anyway. But the scenario you described of creating a new person with the same genetic code, but with a very different CNP created through a learning simulation, is not just a matter of a subtle philosophical argument. This is clearly a different person. We have examples of this today in the case of identical twins. No one would say to an identical twin, "we don't need you anymore because, after all, we still have your twin."

I would regard this scenario of a "false" reanimation as one of the potential failure modes of cryonics.

Reverse-engineering the brain

Finally, on the issue of the LTM (long-term memory), I think this is a good point and an interesting perspective. I agree that an efficient implementation of the knowledge in a human brain (and I am referring here to knowledge in the broadest sense, as not just classical long-term memory but all of our skills and competencies) would be far more compact than the 10^16 bytes I have estimated for its actual implementation.

As we come to understand biological mechanisms in a variety of domains, we find that we can redesign them (as we reverse engineer their functionality) with about 10^6-fold greater efficiency. Although biological evolution was remarkable in its ingenuity, it did get stuck in particular paradigms.

It's actually not permanently stuck, in that its method of getting unstuck is to have one of its products, Homo sapiens, discover and redesign these mechanisms.

We can point to several good examples of this comparison of our human engineered mechanisms to biological ones. One good example is Rob Freitas' design for robotic blood cells, which are many orders of magnitude more efficient than their biological counterparts.

Another example is the reverse engineering of the human auditory system by Lloyd Watts and his colleagues. They have found that implementing in software the algorithms obtained from the reverse engineering of specific brain regions requires about a factor of 10^6 less computation than the theoretical potential of the brain regions being emulated.

Another good example is the extraordinarily slow computing speed of the interneuronal connections, which have about a 5 millisecond reset time. Today's conventional electronic circuits are already 100 million (10^8) times faster. Three-dimensional molecular circuits (e.g., nanotube-based circuitry) would be at least 10^9 times faster. Thus if we built a human brain equivalent with the same number of simulated neurons and connections (not just simulating the human brain with a smaller number of units that are operating at higher speeds), the resulting nanotube-based brain would operate at least 10^9 times faster than its biological counterpart.
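In code, with electronic switching times assumed so as to be consistent with the ratios just quoted (the two switching times are assumptions, not measurements):

    neuron_reset_s = 5e-3   # ~5 ms interneuronal reset time
    electronic_s   = 5e-11  # assumed ~50 ps conventional gate switching
    nanotube_s     = 5e-12  # assumed ~5 ps for 3D molecular circuitry

    print(f"electronic: {neuron_reset_s / electronic_s:.0e}x faster")  # 1e+08
    print(f"nanotube:   {neuron_reset_s / nanotube_s:.0e}x faster")    # 1e+09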

Some of the inefficiency of the encoding of information in the human brain has a positive utility in that memory appears to have some holographic properties (meaningful information being distributed through a region), and this helps protect the information. It explains the usually gradual (as opposed to catastrophic) degradation of human memory and skill. But most of the inefficiency is not useful holographic encoding, just the inherent inefficiency of biological mechanisms. My own estimate of this factor is around 10^6, which would reduce the LTM from my estimate of 10^16 bytes for the actual implementation to around 10^10 bytes for an efficient representation, but that is close enough to your and Minsky's estimate of 10^9.
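The discount arithmetic, using the estimates above:

    actual_bytes      = 1e16  # the implementation estimate from earlier
    efficiency_factor = 1e6   # the assumed biology-to-engineering redesign gain
    print(f"efficient representation: ~{actual_bytes / efficiency_factor:.0e} bytes")  # 1e+10, near 10^9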

However, as you point out, we don't know the compression/decompression algorithm, and are not in any event preserving this efficient representation of the LTM with the suspended patients. So we do need to preserve the inefficient representation.

With deep appreciation for your own contributions.

Eric Drexler: With respect to inferring memory state, the neurotransmitter-handling machinery in a synapse differs profoundly from the circuit structure in a DRAM cell. Memory cells in a chip are all functionally identical, each able to store and report different data from millisecond to millisecond; synapses in a brain are structurally diverse, and their differences encode relatively stable information. Charge stored in a DRAM cell varies without changes in its stable structure; long-term neurotransmitter levels in a synapse vary as a result of changes in its stable structure. The quantities of different enzymes, transport molecules, and so forth, determine the neurotransmitter properties relevant to LTM, hence neurotransmitter levels per se needn't be preserved.

My discussion of the apparent information-theoretic size of human LTM wasn't intended to suggest that such a compressed representation can or should be extracted from the detailed data describing brain structures. I expect that any restoration process will work with these far larger and more detailed data sets, without any great degree of intermediate compression. Nonetheless, the apparently huge gap between the essential mental information to be preserved and the vastly more detailed structural information is reassuring—and suggests that false reanimation, while possible, shouldn't be expected when suspension occurs under good conditions. (Current medical practice has analogous problems of false life-saving, but these don't define the field.)

Ray Kurzweil: I'd like to thank you for an engaging dialogue. I think we've converged to a pretty close common vision of these future scenarios. Your point is well taken that human memory (for all of its purposes), to the extent that it involves the neurotransmitters, is likely to be redundantly encoded. I agree that differences in the levels of certain molecules are likely to be also reflected in other differences, including structural differences. Most biological mechanisms that we do understand tend to have redundant information storage (although not all; some single-bit changes in the DNA can be catastrophic). I would point out, however, that we don't yet understand the synaptic structures sufficiently to be fully confident that the differences in neurotransmitter levels that we need (for reanimation) are all redundantly indicated by structural changes. However, all of this can be tested with today's technology, and I would suggest that this would be worthwhile.

I also agree that "the apparently huge gap between the essential mental information to be preserved and the vastly more detailed structural information is reassuring." This is one example in which the inefficiency of biology is helpful.

Eric Drexler: Thank you, Ray. I agree that we've found good agreement, and I also enjoyed the interchange.


Additional comments on Jan. 15, 2003 by Robert Bradbury

Robert Bradbury: First, it is reasonable to assume that within this decade we will know the precise crystal structure of all human proteins for which structure determination is feasible, using either X-ray, NMR, or computational (e.g., Blue Gene) methods. That should be almost all human proteins. Second, it seems likely that we will have both the experimental (yeast two-hybrid) and computational (Blue Gene and extensions thereof, and/or distributed protein modeling via @Home) means to determine how interacting proteins typically do so. So we will have the ability to completely understand what happens at synapses and, to some extent, model that computationally.

Now, Ray placed an emphasis on neurotransmitter "concentration" that Eric seemed to downplay. I tend to lean in Eric's direction here. I don't think the molecular concentration of specific neurotransmitters within a synapse is particularly critical for reanimating a brain. I do think the concentrations of the macroscale elements necessary for neurotransmitter release will need to be known. That is, one needs to be able to count mitochondria and synaptic vesicle size and type (contents), as well as the post-synaptic neurotransmitter receptors and the pre-synaptic reuptake receptors. It is the numbers of these "machines of transmission" that determine the Hebbian "weight" for each synapse, which is a point I think Ray was trying to make.

Furthermore, if there is some diffusion of neurotransmitters out of individual synapses, the location and density of nearby synapses may be important (see Rusakov & Kullmann below). Now, the counting of and determination of the location of these "macroscale" effectors of synapse activity is a much easier task than measuring the concentration of every neurotransmitter molecule in the synaptic cleft.

The neurotransmitter concentration may determine the instantaneous activity of the synapse, but I do not believe it holds the "weight" that Ray felt was important. That seems to be contained much more in the energy resources, enzymatic manufacturing capacity, and vesicle/receptor concentrations, which vary over much longer time periods. (The proteins have to be manufactured near the neuronal nucleus and be transported, relatively slowly, down to the terminal positions in the axons and dendrites.)
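As a hedged sketch of this view, one could model a synapse's weight as a function of countable machinery rather than of instantaneous transmitter concentration. The field names and the functional form below are invented for illustration; the real mapping is exactly what would have to be reverse-engineered.

    from dataclasses import dataclass

    @dataclass
    class Synapse:
        vesicles: int      # synaptic vesicles (release capacity)
        receptors: int     # post-synaptic neurotransmitter receptors
        reuptake: int      # pre-synaptic reuptake transporters
        mitochondria: int  # local energy supply

    def weight(s: Synapse) -> float:
        """Effective 'Hebbian weight' from slow-changing structure alone:
        release capacity times detection capacity, scaled by available
        energy and damped by reuptake. Illustrative, not empirical."""
        energy = min(1.0, s.mitochondria / 10)
        return s.vesicles * s.receptors * energy / (1 + s.reuptake)

    strong = Synapse(vesicles=200, receptors=500, reuptake=20, mitochondria=30)
    weak   = Synapse(vesicles=40,  receptors=80,  reuptake=60, mitochondria=5)
    print(f"{weight(strong):.0f} vs {weight(weak):.0f}")  # structure alone separates them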

One can alter neurotransmitter concentrations and probably pulse-transmission probabilities at least within some range without disrupting the network terribly (risking false reanimation). SSRIs [Selective Serotonin Reuptake Inhibitors] and drugs used to treat Parkinson's, such as L-dopa, are examples of drugs that may alter these aspects of interneuron communications. Of more concern to me is whether or not there will be hurdles in attempting a "cold" brain restart. One can compare this to the difficulties of restarting the brain of someone in a coma and/or someone who has drowned.

The structure of the brain may be largely preserved but one just may not be able to get it running again. This implies there is some state information contained within the normal level of background activity. We haven't figured out yet how to "shock" the brain back into a functional pattern of activity.

Ray also mentioned vitrification. I know this is a hot topic within the cryonics community because of Greg Fahy's efforts. But you have to realize that Greg is trying to get us to the point where we can preserve organs entirely without nanotech capabilities. I think vitrification is a red herring. Why? Because we will know the structure of just about everything in the brain under 50 nm in size. Once frozen, those structures do not change their form or location significantly.

So I would argue that you could take a frozen head, drop it on the floor so that it shatters into millions or billions of pieces, and, as long as it remains frozen, still successfully reassemble it (or scan it into an upload). In its disassembled state it is certainly one very large 3D jigsaw puzzle, but it can only be reassembled one correct way. Provided you have sufficient scanning and computational capacity, it shouldn't be too difficult to figure out how to put it back together.

You have to keep in mind that all of the synapses have proteins binding the pre-synaptic side to the post-synaptic side (e.g., molecular velcro). The positions of those proteins on the synaptic surfaces are not specified at the genetic level and it seems unlikely that their locations would shift significantly during the freezing process (such that their number and approximate location could not be reconstructed).

As a result, each synapse should have a "molecular fingerprint" as to which pre-side goes with which post-side. So even if the freezing process pulls the synapse apart, it should be possible to reconstruct who the partners are. One needs to sit and study some freeze-fracture electron micrographs before this begins to become a clear idea for consideration.
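A toy version of that matching step, in which random vectors stand in for the adhesion-protein "fingerprints" on the two faces of each severed synapse (illustrative of the re-pairing computation only, not of the biology):

    import numpy as np

    rng = np.random.default_rng(42)
    n, d = 300, 32                              # severed synapses, fingerprint length
    pre  = rng.random((n, d))                   # pre-side fingerprints
    post = pre + rng.normal(0.0, 0.01, (n, d))  # matching post-sides, slightly noisy

    order    = rng.permutation(n)               # the shattered, shuffled pile
    shuffled = post[order]

    # Re-pair each pre-face with its nearest post-face (brute-force distances).
    dists   = ((pre[:, None, :] - shuffled[None, :, :]) ** 2).sum(axis=-1)
    nearest = dists.argmin(axis=1)
    print("correctly re-paired:", (order[nearest] == np.arange(n)).mean())  # ~1.0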

So I think the essential components are the network configuration itself, the macroscale machinery architecture of the synapses and something that was not mentioned, the "transcriptional state of the nuclei of the neurons" (and perhaps glial cells), i.e., which genes are turned on/off. This may not be crucial for an instantaneous brain "reboot" but might be essential for having it function for more than brief periods (hours to days).

References

A good (relatively short but detailed) description of synapses and synaptic activity is Ch. 5: Synaptic Activity, from the State University of New York at Albany.

Also see:

Understanding Neurological Functions through the Behavior of Molecules, Dr. Ryoji Yano

Three-Dimensional Structure of Synapses in the Brain and on the Web, J. C. Fiala, 2002 World Congress on Computational Intelligence, May 12-17, 2002

Assessing Accurate Sizes of Synaptic Vesicles in Nerve Terminals, Seongjai Kim, Harold L. Atwood & Robin L. Cooper

Extrasynaptic Glutamate Diffusion in the Hippocampus: Ultrastructural Constraints, Uptake, and Receptor Activation, Dimitri A. Rusakov & Dimitry M. Kullmann, The Journal of Neuroscience 18(9):3158-3170 (1 May 1998).

Ray Kurzweil: Robert, thanks for your interesting and thoughtful comments. I essentially agree with what you're saying, although we don't yet understand the mechanisms behind the "Hebbian weight" or other vital state information needed for a non-false reanimation. It would be good if this state information were fully represented by mitochondria and synaptic vesicle size and type (contents), post-synaptic neurotransmitter receptors, and pre-synaptic reuptake receptors, i.e., by the number of these relatively large (compared to molecules) "machines of transmission."

Given that we have not yet reverse-engineered these mechanisms, I suppose it would be difficult to do a definitive experiment now to make sure we are preserving the requisite information.

I agree with your confidence that we will have reverse-engineered these mechanisms within the next one to two decades. I also agree that we need only preserve the information, and that reanimation technology will take full advantage of the knowledge of how these mechanisms work. Therefore the mechanisms don't need to be preserved in working order so long as the information is there. I agree that Fahy's concerns apply primarily to revitalization without such detailed nanotech repair and reconstruction.

Of course, as I pointed out in the debate with Eric, such a complete reconstruction may essentially amount to creating a new brain/person, with the cryonically preserved brain/body serving only as a blueprint, in which case it would be just as easy to create more than one reanimated person. Eric responded to this notion by saying that the first one is the reanimated person and subsequent ones are just copies, because, after all, at that time we could make copies of anyone anyway.

With regard to your jigsaw puzzle, that may be a difficult puzzle to put together, although I suppose we'll have the computational horsepower to do it.

   
 

   
Mind·X Discussion About This Article:

subjective time in an upgraded brain
posted on 12/04/2002 2:16 PM by subtillion


Ray said
"Another good example is the extraordinarily slow computing speed of the interneuronal connections, which have about a 5 millisecond reset time. Today's conventional electronic circuits are already 100 million (108) times faster. Three-dimensional molecular circuits (e.g., nanotube-based circuitry) would be at least 109 times faster. Thus if we built a human brain equivalent with the same number of simulated neurons and connections (not just simulating the human brain with a smaller number of units that are operating at higher speeds), the resulting nanotube-based brain would operate at least 109 times faster than its biological counterpart. "

This is an open question:

What does this drastic increase in the speed of thought do to the subjective experience of time?


subtillioN

Re: subjective time in an upgraded brain
posted on 12/04/2002 3:28 PM by henrik w


I guess that one would feel time passing more slowly than usual, enhancing reflexes and such.

I believe a fly reacts faster to its surroundings because its interconnected network of "neurons" (?) is much less complicated than the human brain's, and if our "computing speed" were to increase, we would have a greater chance of catching flies :)

henrik

Re: subjective time in an upgraded brain
posted on 12/04/2002 5:06 PM by spurk


That's a really interesting question; I'm inclined to think you'd "speed up" and the world would "slow down" (subjectively). But our time sense does some weird things...

When you have little to think about or occupy yourself with, time seems to crawl by, and when you are occupied it seems to go by faster. This would seem to support the idea that time awareness is somehow related to your brain's "CPU utilization."

But some drugs mess with your time sense, without necessarily affecting other aspects of your mind.

strange stuff

spurk
http://users.rcn.com/standley/AI/AI.htm

Re: subjective time in an upgraded brain
posted on 12/04/2002 5:38 PM by Dimitry V


I was actually just thinking about sense of time.

I think that it may be more about anchoring to external or internal events. For example, if you say that you will do X tomorrow, you put an event into a mental space labelled "tomorrow". You know how tomorrow feels and where you represent it. However, when the next day comes, you think about that "tomorrow" mental space and it still feels like X should be done tomorrow. The way to get out of the loop is to link event X to some other external stimulus that will occur on the next day. So, "after I brush my teeth, I will do X" could be the appropriate linking anchor.

I think you can see how this might work for longer periods of time. So, what about shorter periods? This could be measured by the change in current mental space.

So, what do we know?

When you have things to think about and do physically that are pleasant, time seems to go by quickly.

When you have things to think about, but don't like the subject, time can go by slowly.

When you have nothing to think about but you feel pleasure, time seems to go by quickly.

When you have nothing to think about and you are bored (uncomfortable), time seems to go by slowly.

So, it seems that pleasure and discomfort govern time sense more than the content of mental space, for short time period sense. But why?

Re: subjective time in an upgraded brain
posted on 12/04/2002 6:59 PM by spurk


I think boredom is probably a built-in behavior regulation system. My exposure to medical literature regarding disorders like OCD has led me to this belief, as have some general observations about human behavior. A defining human characteristic is our curiosity and desire for new experiences; children display these behaviors the most. Something I have noticed as well is that children in America often complain of boredom. It's not that children have little to do, it's that the more stimulating the environment, the quicker they habituate to new stimuli. Or at least that's how it seems to me.

IMO, boredom is an evolutionary adaptation that serves to keep us productive, thereby contributing to the survival of human culture and society. In many animals such consideration would be meaningless, but we are among the most social of animals.

I'm beginning to wonder if boredom and slowed time perception are really connected in any way other than the fact that similar environmental conditions trigger both phenomena.

spurk
http://users.rcn.com/standley/AI/AI.htm

Re: subjective time in an upgraded brain
posted on 12/07/2002 4:09 PM by JamesS


I would wager that the perception of time per se would not change significantly (after the initial period of adjustment). My gut feeling is that the brain would compensate for an increase in the number of compute cycles in the same way that it compensates when the visual field is reversed. (Remember the experiment in which subjects wore glasses that made everything look upside down. After an adjustment period, upside down just seemed normal.) I think the adjustment would require various cues, such as watching the clock tick, timing the heartbeat, watching objects fall, moving one's limbs, etc. But after that adjustment, I don't think you would get the feeling, for example, that apples seem to fall more slowly than they used to.

The major difference one would notice is the decrease in time it takes to perform mental tasks. For example, it took me several minutes to compose this response. I think I will know I have been "accelerated" when I can compose faster than I can type.

James

Re: subjective time in an upgraded brain
posted on 12/08/2002 6:20 AM by Thomas Kristan


> I don't think you would get the feeling, for example, that apples seem to fall more slowly than they used to.

But I want that. I want a year, until the apple falls.

Of course, I'll not wait idly for it to happen. Instead, I will visit some virtual places in the meantime, where everything will run at "my speed".

As a side effect, this virtuality will be far more realistic than the "natural reality" is.

So, I will not care much what had really happened with that apple. Just use it to further speed my world's pace!

- Thomas

Re: subjective time in an upgraded brain
posted on 12/05/2003 3:40 AM by ventoux


Nothing.
A subjective opinion would be formulated at a sped-up rate. Relative to a brain like mine working now, it would be thinking faster, but for this faster brain my best guess is that it would feel no different as it 'thought'.

Re: Dialogue between Ray Kurzweil and Eric Drexler
posted on 12/04/2002 5:22 PM by spurk


I dunno why, but somehow cryonics seems like a long shot to me. I don't doubt it's possible to reanimate someone, but:

I have a sneaking suspicion that if you were to revive a cryo-preserved person, it wouldn't be "you".

A person's consciousness seems to be extremely localized, any claims of paranormal effects aside. Some of you have probably read Greg Egan's 'Diaspora'. When the characters spawn copies of themselves, which 'self' are they after the split?

I have the feeling that continuity of brain function is key here. That's why Moravec's procedure fascinates me so much. Also, it's the only brain-uploading proposal that's ever felt 'right' to me. You lose awareness when you fall asleep, but your brain doesn't shut down.

People wake up from comas, but I've never heard of someone recovering from brain death. Anyone have a different take on this subject?

spurk
http://users.rcn.com/standley/AI/AI.htm

Soul not copyable by definition?
posted on 12/04/2002 7:08 PM by Jay the Truthseeker


You wrote:

"I have a sneaking suspicion that if you were to revive a cryo-preserved person, it wouldn't be "you"."

In order to clarify what I'm trying to say here, I hereby define 'soul' as that which makes us who we truly are. The 'soul' is the only thing that would make the difference between me and another person who is a perfect copy of me in both body and brain content.

I guess the only conclusion would be that the soul is, by definition, not copyable. (After all, we humans are constrained to being conscious of the input of only one brain.)

This means we should also be careful when it comes to soul-transitions. You might say a transition consists of: copy, delete, insert (somewhere else).

Notice the 'delete'-thingy in there...

This might not only occur in an attempt to transfer ones soul/consciousness to a new body, but also when reviving someone else.

I sure do hope cryogenics will not be necessary for me. Oh well... According to Ray, we 24-year-olds will have no problem at all staying alive until The Singularity, so I guess I'm safe. ;)

Re: Soul not copyable by definition?
posted on 12/04/2002 8:44 PM by spurk


"This means we should also be careful when it comes to soul-transitions. You might say a transition exists out of: copy, delete, insert (somewhere else).

Notice the 'delete'-thingy in there..."

I think you're right: when cryo-preserved people are reanimated, I think it will not be like waking up, but rather a new 'soul' will be spawned, with your memories but still not 'you'. If you've ever remembered something but don't 'remember' remembering it (Jesus, English is horrible for talking about philosophical ideas), that's what I think it would feel like: you'd remember everything about 'your' life and have all 'your' skills and knowledge, but they wouldn't feel like they were 'yours'.

None of this means I think that the 'soul' is anything mystical. Rather, I think it is an emergent phenomenon that occurs in complex systems, a case of the whole being greater than the sum of its parts. It's also why I think Hans Moravec's procedure is the only feasible way to achieve 'uploading', since it slowly integrates artificial neurons with your brain's network, then slowly replaces the biological components. Moravec's version hinges on the availability of nanotech; I think it could be done today. Here's a link where I explain this further: http://users.rcn.com/standley/AI/immortality.htm

I think Roger Penrose is onto something when he argues that AI is impossible using traditional computing methods

Re: Soul not copyable by definition?
posted on 12/04/2002 10:48 PM by tony_b


Regarding the "reanimated you":

Why do we even suppose that we are REALLY the same "we" (soul if you like) upon waking from a night's sleep? Is it anything more than the "sensation of sameness"?

Yesterday, before you went to bed, perhaps you wondered "Will I wake up tomorrow morning?". Suppose you fall asleep, and indeed, you (THAT "you") never re-awake. Instead, a "new person awakes" in your body, who simply finds everything quite familiar and uneventful due to the inherited memory and state (curtains have the same color as "remembered", etc.)

The above is a metaphor. I rather believe that "I" exist only for a moment, and am "new" each new moment. The "me" that began this post never "lived" to know how it ended. New "soul" every moment, as the "self-sense" only exists in the present moment.

The sense of "continuity of being" is exactly that; a sensation.

Many folk look forward to the singularity as a kind of immortality vehicle, wherein their wildest dreams are as if real, etc. In this regard, I posed the following:

Suppose I have a drug that (I claim) bestows both immortality and infinite bliss. When taken, you are ramped into increasing circles of wondrous ecstasy and delight, so delighted and enraptured that you do not care a whit that you are slowly losing consciousness, become entirely unconscious, and then peacefully die.

Why, AT BASIS, is this not "immortality and infinite bliss"?

Being that Y-O-U only exist in the present ... immortality is only the EXPECTATION of endless tomorrows, NOT the exercise of those tomorrows.

Taking my "drug", the you that went happily unconscious never knows that there are no more tomorrows, so can have no regrets. The you-of-tomorrow never gets to exist, so (clearly) can have no regrets.

Hence, the sense that my drug "cheats you of immortality" is only a sensation that comes from knowing how it really acts. Not knowing this, it is (subjectively) identical to immortality.

Cheers! ____tony b____

Re: Soul not copyable by definition?
posted on 12/05/2002 5:15 AM by Jay the Axeslayer


I understand what you mean completely. But when I defined 'soul', I should have also written that it is something which processes information in linear time, meaning that it *is* the same continuously.

Nobody knows for sure whether we actually have a soul or not. But I do know this: I'll go to sleep and rest easily, but you won't catch me stepping into a teleporter or having myself frozen very quickly.

I think I'll stick around to see what's up with this whole soul-business before I attempt any of the two (teleport, cryogenics) above. I suppose we'll figure out what's up when we hit the area of (advanced) AI (iow: when we are building brains ourselves).

Re: Soul not copyable by definition?
posted on 12/05/2002 10:02 PM by tony_b


Jay,

> "when I defined 'soul', I should have also written that it is something which processes information in linear time, meaning that it *is* the same continuously."

Ok. That brings up an interesting point of definition.

Consider a computer, running a software operating system (OS), with that OS itself running an application (App), and that App is "continuously" processing data (say, weather readings or Shakespeare or something).

Which of [machine, OS, Application] is a thing that "is the same continuously"?

It is relatively easy to say "the machine is always the same" (hardware-wise), even though the particular state of its controlled register latches (transistor-supported "bits") are always changing.

It gets harder to say that the OS is unchanging. Sure, its static code-base (upon "load") is usually the same (unless it has performed an automatic upgrade via the net), but much of how the OS is behaving depends upon large allocated "soft structures", whose size, form and content can change radically during operation. These structures, and their contents, affect what the software "is" to a considerable extent at any moment.

Likewise, the "App" can possibly self-alter even more radically (especially one that is designed to "learn").

Where would "soul" (a thing "being the same continuously") reside in such a picture, metaphorically speaking?

Cheers! ____tony b____


Re: Soul not copyable by definition?
posted on 12/06/2002 3:31 AM by Jay the Axeslayer


I have no idea where the soul would kick in in this metaphor.

But then again, I don't really want to compare ourselves to the CPUs of nowadays. I see the CPUs we now have as nothing more than complex and advanced versions of Charles Babbage's adding machine.

The important thing for me is that there is a possibility that the true you will be lost when screwing around with teleportation and cryogenics and stuff.

I'd rather have a superior AI figure out what's up with the soul, before I undertake any of those things.

Just trying to be careful.

Cheers mate!

Re: Soul not copyable by definition?
posted on 12/05/2002 9:41 AM by Robert Fentress


You wrote:
"Why do we even suppose that we are REALLY the same "we" (soul if you like) upon waking from a night's sleep? It it anything more than the "sensation of sameness"? "

I think this is the key. I find interesting the disagreement Ray Kurzweil and Eric Drexler seem to be having about whether neurotransmitter levels or just the more solid-state long-term memory are necessary for "real" reanimation. I think in a fundamental way neither is. It seems to me there must be a mechanism in the brain which is responsible for giving me my "sense of self." While this has some relationship to my long-term memory, I don't think a consistent LTR is a necessity. For instance, certain psychoactive drugs can give one a radically different conception of "self" that is not bound to any sense of a common history. I think a lot more research needs to be done on what brain mechanisms are responsible for this. In some sense Buddha and Hume must be right about the self being illusory. Maybe the real issue is to technologically change our sense of self rather than attempt to preserve something which may or may not be "it."

Rob

Re: Soul not copyable by definition?
posted on 12/05/2002 9:43 AM by Robert Fentress


"I don't think a consistent LTR is a necessity"

Whoops! I meant LTM (long term memory).

Re: Soul not copyable by definition?
posted on 12/05/2002 11:57 AM by Adam C


Haven't seen Memento?

Re: Soul not copyable by definition?
posted on 12/05/2002 9:41 PM by tony_b


Rob,

> "In some sense Buddha and Hume must be right about the self being illusory. Maybe the real issue is to technologically change our sense of self rather than attempt to preserve something which may or may not be "it.""

I tend to agree that "self" is a sensation of some sort, as opposed to a fundamentally extant "object". I think memory has much to do with the "sense of continuing self". True, psychedelics can lead to different self-like states, some of which might be divorced from our ordinary LTM, but to that degree, they are generally not "selves" that are much concerned about existential continuity.

I'm not saying that "sense of BEING a being" is all about LTM, but rather that "sense of being the SAME being" probably correlates to depths of memory.

Cheers! ____tony b____

Re: Soul not copyable by definition?
posted on 12/06/2002 6:35 AM by Robert Fentress


Excellent point! That thought occurred to me (at least I think it was me [wink]) in a nebulous form even as I was writing my post, but I had a hard time expressing it. Still, I can't help but feel that the paradoxes evident in our "self" talk point to some fundamental flaw in the language we are using to describe our "sense of self" and "continuity of self." For instance, I became convinced of the fact that my self could be recreated synthetically by the thought experiment (Moravec?) where you gradually replace each neuron using nanotechnology, grafting the synthetic onto the organic as you go. This seemed conclusive since I wasn't able to say at what point the synthetic creation ceased to be me. However, the "Star Trek transporter" experiment where a duplicate you is created seems to muddy the waters significantly for me.

Rob

Re: Soul not copyable by definition?
posted on 12/06/2002 9:27 AM by Grant


Here's another thought experiment. Take a peach and replace each molecule of the peach with a molecule of silicon. When the last molecule has been replaced, take a bite of the peach. How does it taste?

When you replace a biological cell that is as complex as a computer with a substance of any kind, and that substance does not duplicate the entire function of the cell, from reproduction of self to creating the proteins and other chemicals that govern the emotional state of the body and mind, how are you going to get the same result?

Each cell in the body is a computerized factory. It turns out products and communicates with other cells on many levels from electrical to chemical. If what you create in its place does not do this, the result will not be the same.

Grant

Re: Soul not copyable by definition?
posted on 12/06/2002 10:28 AM by Robert Fentress


Good point. Obviously, a synthetic recreation of a neuron would not be materially identical to an organic one. However, I think we are working on the assumption that the important functionality of a neuron is its information bearing and processing capability and that this can be replicated in a variety of physical media. Do you not think this is the case?

I think it is likely. Look at the advances being made in artificial vision and hearing. While these are on the periphery of the nervous system and the current technologies are not perfect, I don't see why, in principle, they couldn't produce an experience that is phenomenologically identical to the "real" sense organs. If these aspects of our nervous system can be recreated synthetically, why not others?

Rob

Re: Soul not copyable by definition?
posted on 12/06/2002 12:52 PM by Grant


About as likely as a delicious silicon peach. I don't think you can duplicate the form and function of flesh in metal or silicon. In addition, most of the emotional content of the chemical aspect of communication within the body is for the benefit of the body and it's the body that feels this in the form of emotions. These are, in turn, a major element of any experience we have. What is a mind that doesn't feel fear in the pit of the stomach, or the adrenalin rush of shock from contact with the unknown? It may be a very clear mind, but it won't be the mind you left behind. In other words, it won't be you.

Cheers,

Grant

Re: Soul not copyable by definition?
posted on 12/06/2002 1:26 PM by Dimitry V


>"It may be a very clear mind, but it won't be the mind you left behind. In other words, it won't be you."

Yeah, you would have to be re-embodied somehow. And I'm not sure that would even be possible, since all of the experiences you bring to the new body would be relative to the old body. The new body would have to have similar feedback systems, otherwise we might have "phantom body" pain.

Re: Soul not copyable by definition?
posted on 12/06/2002 2:24 PM by Robert Fentress


You wrote:

"In addition, most of the emotional content of the chemical aspect of communication within the body is for the benefit of the body and it's the body that feels this in the form of emotions. "

We don't "feel" anything in our body, it only seems that way. Feeling is a funtion of our nervous system. I would guess any input to our nervous system could be simulated in such a way as to make us feel our bodies in much the same way we do now. Still, even if I couldn't feel my body in exactly the same way that I do now, I don't think that would be a determining factor in whether I had "continuity of self." I would just have slightly different sensations and would rapidly adjust to them as long as they were functionallly similar.

Rob

Re: Soul not copyable by definition?
posted on 12/06/2002 4:42 PM by alberjohns


>"I don't think you can duplicate the form and >function of flesh in metal or silicon."

If we accept that the brain is a material system that obeys the laws of physics just as all other material systems must, then we must also admit that its function can, in principle, be duplicated. The same is also obviously true for the body.

> In addition, most of the emotional content of the chemical aspect of communication within the body is for the benefit of the body and it's the body that feels this in the form of emotions.

All body operations can be duplicated or simulated so that the end result will be the same. Thus, we cannot logically conclude that the body is necessary for emotions or anything else. At most, we can only conclude that something resembling a body is necessary. Perhaps merely the perception of a body will be all that is needed. It seems that a virtual reality body would do quite nicely.

Restoring the Total Mind
posted on 01/06/2003 4:08 PM by David S


What about all the information stored throughout the rest of the body? Does our definition of 'brain' include the entire neuron network or just the neurons above the neck?

Could the very positions of muscle cells in relationship to each other and the neurons that control them contain vital 'information' from which we derive our sense of self?

Just some interesting (to me) questions from a complete neophyte layman.

Re: Soul not copyable by definition?
posted on 12/20/2002 11:00 PM by James Jaeger

>Good point. Obviously, a synthetic recreation of a neuron would not be materially identical to an organic one. However, I think we are working on the assumption that the important functionality of a neuron is its information bearing and processing capability and that this can be replicated in a variety of physical media. Do you not think this is the case?

I think you're right -- this would be the case.

Grant's thought experiment on the peach would be different from a thought experiment on the brain where each neuron was replaced one by one. In short, the sensation of taste and the functionality of computation are vastly different things, I would think. No doubt Grant's "peach" would not TASTE like a peach at all, but a brain in which all its elements had been converted to silicon would probably ACT like a brain in that it would compute (i.e., recognize differences, similarities, and identities). Maybe the computation would have a different "flavor" or "feel" -- sort of like the difference between a song on analog vinyl and digital CD: same song, just a different feel to it.

James Jaeger

Re: Soul not copyable by definition?
posted on 12/10/2003 3:44 AM by anachronistic

You said:
"True, psychedelics can lead to different self-like states, some of which might be divorced from our ordinary LTM, but to that degree, they are generally not "selves" that are much concerned about existential continuity."

----------------

I just wanted to add that psychedelics can also alter the perception of time. Time is not always perceived as being linear.

I'm not sure how relevant this is to Singularity, but while on the topic of time, souls, and selves...

Organic brains can experience a feeling of timelessness, and eternity with no boundaries.
And it is possible to perceive and experience more than one thing at a "time."

Though, in "my" experience the perception of "self" was absent during that time. It was more like it was replaced by "everything."

That sounds crazy. And it probably is :)

Re: Soul not copyable by definition?
posted on 12/05/2002 12:38 PM by Thomas

> Being that Y-O-U only exist in the present

How many possible Y-O-Us exist?

What do you mean by "present"? This Planck time?

- Thomas

Re: Soul not copyable by definition?
posted on 12/05/2002 1:44 PM by spurk

"The above is a metaphor. I rather believe that "I" exist only for a moment, and am "new" each new moment. The "me" that began this post never "lived" to know how it ended. New "soul" every moment, as the "self-sense" only exists in the present moment."

This idea makes a lot of sense.

But there is evidence to support the opposite claim as well.

Chaos and complexity theory (depending on your interpretation) suggest that there is a continuous 'soul'; i.e., a persistent, higher-order-of-complexity, dynamic pattern (or patterns) that arises out of the system-wide activity of a given complex system.

Also, there is experimental evidence of quantum effects present in the brain. Whether these effects (quantum coherence, IIRC) are a byproduct of, or integral to, brain function is currently unknown. However, their existence does suggest that some aspects of conscious intellect cannot be modelled with standard serial computation. I don't mean this to imply that strong AI is impossible, rather that it MAY be impossible to achieve relying solely on deterministic, non-quantum processes...

Of course I could be wrong

Re: Soul not copyable by definition?
posted on 12/05/2002 9:22 PM by tony_b

Spurk,

I feel as you do, that (perhaps) a "real" sentience requires access to the non-deterministic QM "substrate", and that our brains (well, "brains" in general) do this to some extent.

My point about "existing only for the moment" is different, and reflects the notion that our SUBJECTIVE sense of having continuity is only something that we "sense" each moment. Thus, it effectively finesses the whole concept of "soul continuing". Sure, just as our bodies (approximately) continue from moment to moment, one can say that our consciousness or "beinghood" does as well. But such a view is effectively a conceptual one. Even if I were to claim "I continue", my "self" of a week ago is no longer "feeling" anything, no longer "making decisions", etc. Thus, it is only a conceptual view that continuity really means anything. What is important is the chain of memory.

In principle, then, assuming I could "copy" myself into a dozen clones (including mental state), and then kill my "original self", the question of whether any of the clones is "me" becomes clear. All are "me", and none are "me".

All are me, in that they each would subjectively "feel", and be convinced that they were the "original me", yet each would go off experiencing separate phenomena, as separate individuals.

Cheers! ____tony b____

Re: Soul not copyable by definition?
posted on 12/06/2002 2:06 AM by spurk

"assuming I could "copy" myself into a dozen clones (including mental state), and then kill my "original self", the question of whether any of the clones is "me" becomes clear. All are "me", and none are "me".

All are me, in that they each would subjectively "feel", and be convinced that they were the "original me", yet each would go off experiencing separate phenomena, as separate individuals."

This is nutty sounding, but my 'gut' tells me that, if you 'booted up' the clones before killing yourself, 'you' would suddenly become a kind of hive mind (probably because of quantum effects like entanglement, or as-yet-undiscovered quantum phenomena). When your original self was killed, you probably wouldn't even really notice it...
just an idea, not to be taken too seriously :)

spurk
http://users.rcn.com/standley/AI/AI.htm

Re: Soul not copyable by definition?
posted on 12/06/2002 4:29 AM by tony_b

Spurk,

Beings who will experience ... "being" ... in an envisioned "uploaded" state may well be able to experience a hive-mind sort of thing (with a commensurate loss of a degree of self-hood, I must imagine).

But short of this, I cannot justify a hive-mind simply as an outgrowth of duplication. At a fundamental level, we are all massively "quantum entangled", yet we do not (apparently) experience each other's experiences. There may be some evidence that identical twins "share something" a bit deeper than the rest of us, but not to the degree that the death of one means they continue to experience "living" through the twin. (At least, I don't suppose so.)

I cannot help but feel that, short of a "soul in the religious sense" (which would defy the notion that conscious minds are artifacts of the underlying and "unthinking" physical world), there is really no extant "me-ness" that acts like a fluid that can be poured from one vessel into another, always unbroken/contiguous. Beyond my physically unique separateness, what makes me "me" is akin to the software I entertain (not really the best metaphor, but it captures the fact that what I entertain in my mind can be influenced by, and can integrate, outside information in ways that might thereafter "change" how I act, even though I have not physically "rewired" my neural arrangement).

I know this is a poor analogy, but there are a million systems out there running IDENTICAL versions of MS Windows ... yet even that perfect identicality does not lead to the kind of coherent QM entanglement that would allow my OS to reflect, consciously or not, the behaviors of its "clones" elsewhere.

For me, everything becomes clear and simple when I consider the "me" I experience in real-time to be a thing that only exists for the present moment, a "soul-per-second" so to speak.

I know this sounds weird, but to try and "feel" this idea more thoroughly, I suggest trying this. Close your eyes for a moment, and imagine yourself to be a "soul" that has existed in total unconsciousness for eternity; then open your eyes for a half-second and close them again to become thereafter unconscious for eternity. Imagine that tiny "blip" of experience is all you ever get. That seems incredible, but you must also imagine that during that half-second you inherit all the "state" accumulated by all the other "souls" that inhabited that body before you, so you feel no disorientation during that brief experience and, just as quickly, never get to anticipate your "death" an instant later.

Then try to imagine that your current and usual sense of "consciously existing" is the result of such a weird reality.

Sort of funny, I now imagine Isaac Newton saying "I think, therefore I am", where in reality, each word was uttered by a different "being".

Of course, I am using "sequence of eternal souls" metaphorically. Such "souls" would hardly warrant the name, since they would not effectively "exist" except for that sliver of time.

Cheers! ____tony b____

Re: Soul not copyable by definition?
posted on 01/30/2003 5:56 AM by Ted

Actually, it was Descartes who said "I think, therefore I am."

I suppose Newton could have said it too. But it's Descartes who gets the credit for writing it down.

Don't mind me... just editing for continuity :)

-Ted www.thisweekinscience.com

Re: Soul not copyable by definition?
posted on 01/07/2004 2:12 PM by jontait

These are good points, Tony. No one has brought up time dilation yet, so I thought I would. Consider the standard Planet Of The Apes-style "twin paradox". One twin takes off in a rocket ship at a large fraction of the speed of light. Upon her return from the 5-year trip, the twin that remained on Earth has aged 60 years.

So we have two people, in the same universe, who experienced two vastly different subjective AND objective time spans. Going back to what Tony has said about "blips" or snapshots of awareness, it would seem that the space-bound twin has experienced fewer of these snapshots. I find it fascinating to consider this in the context of cellular automata, such as Conway's Game of Life.
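
For anyone who wants to check the numbers: using the usual special-relativity time-dilation formula, and taking the 5-year and 60-year figures at face value, the required speed works out to roughly 99.65% of light speed:

    \gamma = \frac{\Delta t_{\mathrm{Earth}}}{\Delta t_{\mathrm{ship}}} = \frac{60}{5} = 12,
    \qquad
    v = c\,\sqrt{1 - 1/\gamma^{2}} = c\,\sqrt{1 - 1/144} \approx 0.9965\,c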

A pattern in the Life lattice ages deterministically every clock cycle. So any two identical, isolated pattern structures will still be identical at the end of X number of clock cycles.
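
To make the determinism point concrete, here is a minimal Game of Life sketch in Python; the glider pattern and the 20-cycle run are illustrative choices of mine, nothing more:

    # Conway's Game of Life on an unbounded grid: live cells stored as a set.
    # Because the update rule is deterministic, two identical, isolated
    # patterns remain identical after any number of clock cycles.
    from collections import Counter

    def step(cells):
        """Advance one generation of Life."""
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell lives next cycle if it has 3 neighbors (birth or survival),
        # or 2 neighbors and is currently alive (survival).
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    a, b = set(glider), set(glider)   # two identical "patterns"

    for _ in range(20):               # age both patterns 20 clock cycles
        a, b = step(a), step(b)

    assert a == b                     # identical start + deterministic rule
    print("still identical after 20 cycles:", a == b)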

We know that atomic clocks age the same, or differently, depending on the gravitational field they are exposed to. So, by extension, the matter composing a human brain will age the same, or differently, depending on the gravitational field it is exposed to, affecting both its objective and subjective time span. Could the gravitational field have something to do with an aging/freezing effect like what could be done in a cellular automaton experiment?
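
For reference, the atomic-clock effect mentioned here is given by the Schwarzschild time-dilation relation (the twin-paradox effect above is the separate, velocity-based term):

    \Delta\tau = \Delta t \, \sqrt{1 - \frac{2GM}{r c^{2}}}

where \Delta\tau is the proper time on a clock at radius r from a mass M, and \Delta t is the time elapsed far from the mass; the deeper the gravitational well, the less proper time passes.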

This seems like the most plausible explanation to me, and if this is the case, then the implications are that the "soul" is the information pattern that defines the physical existence of the human body. Information is obviously copyable, but because we live in a chaotic universe, any copies would instantly and permanently diverge as soon as one of them "ages" a clock cycle.

Furthermore, if you can capture even an abstraction of the relevant information describing the relationships between all the parts that make up the whole nervous system, then you would have a real copy by the definition I described above. A digital version of this information could be aged at whatever clock cycles we can achieve in computers, as Thomas Kristan so frequently points out.

Re: Soul not copyable by definition?
posted on 12/06/2002 2:51 PM by /:setAI

some points and brief thoughts

a/ you are not you- you are a shifting collage of simpler selves-

b/ your "soul" is the ever-changing pattern of your mind/body- make a copy and it IS you- the soul HAS become a hive consciousness- kill the "original" would then be just like removal of redundant brain-tissue- the information is represented elswhere- thus isn't needed- subjective experience of the old mythic single self for each body wouldn't really apply any longer-

c/ I plan on having at least a half-dozen copy-clones- all connected for full information sharing/parallel computation/ collective-supermind configurations- I wouldn't have identical body units- some male- some female- some big brutes- some lithe and agile- and maybe some hybrid/digital versions all hooked up-

d/ humans have some silly and untrue notions about Self and Time- both of which don't have any real existence other than as useful models for our minds to function- like color- what is the obsession with "immortality"? you want immortality in TIME? because that is not possible: Time as we think of it- a forward-moving causality- doesn't exist- you ALREADY exist- therefore YOU DO- you cannot say I exist now but won't "later" because there really is no later- just a different region of probabilistic phase-space- You- your patterns and all of your variations throughout all possible universes- EXIST- Time is not a factor- when you die you just see one boundary of that pattern- it's not infinite so it's OK to have boundaries- it's just ONE boundary- in ONE direction [forward in illusory Time]- the information is not LOST at death- the survivors just MOVE to a new region of the universe that does not include the dead's pattern- but that pattern STILL EXISTS in that original spacetime- just like the dinosaurs still exist- a properly evolved technology could easily "travel" to or extract and move any pattern that exists anywhen- you only have to find the paths that get you there

Re: Soul not copyable by definition?
posted on 12/06/2002 4:09 PM by spurk

/:setAI -

I totally agree with you on a) and b)...

regarding c), that's a really cool idea; I never really thought about what form(s) I'd like to take if the singularity does indeed happen (I personally give it a .9-.95 probability within 40 years). I think the first thing I'd try would be a distributed intelligence of a semi-biological nature: say 10e4 organisms inhabiting all of earth's major ecosystems, each organism a 'cyborged' variant of an existing macroscopic species. Future forms are a fun concept to play around with...

d) fascinating ideas about the nature of time and space. What theories are they based on? String theory? Loop quantum gravity? Something else?

spurk
http://users.rcn.com/standley/AI/AI.htm

Re: Soul not copyable by definition?
posted on 12/06/2002 9:51 PM by Robert Fentress

You wrote:
"d/ humans have some silly and untrue notions about Self and Time- "

It seems to me that our perception of time is tied to how our brain works as well, which has some pretty interesting connotations. What happens when we die? Isn't it possible, since our perception of time probably ceases at our death, that we might be trapped in whatever our last thought was for all eternity? Or maybe, if the universe contracts, we would reexperience everything again as the universe played itself out in reverse. Just speculative B.S., but I think it points to some paradoxical consequences of the way we use language to talk about self and time. I can't help thinking we just don't have the right vocabulary to express all the true statements about self and time without internal contradictions.

Rob

Re: Dialogue between Ray Kurzweil and Eric Drexler
posted on 12/10/2002 10:22 AM by Nth

I saw a girl who suffered a heart attack and died on the way to the hospital.

She had no respiration, no heartbeat, and no electrical activity in the brain, but the medic crews managed to revive her after 19 seconds.

My theory is that her brain didn't manage to decay enough, so when oxygen was pumped through it, it reactivated.

So that was one case; my big brother was another, similar one.

Re: Dialogue between Ray Kurzweil and Eric Drexler
posted on 12/10/2002 11:05 PM by tony_b

Nth,

> "no electrical activity in the brain"

I don't know of any ambulance services that do an EEG for brain waves on the way to the hospital, so how can anyone "know" she had no brain waves (electrical activity in the brain)?

I would agree: there is no such thing as an extant "life force" that ever leaves, or is even ever present in, "living" things. So as long as the supporting structure has not decayed greatly, an effective re-animation can be accomplished.

Cheers! ____tony b____

Re: Dialogue between Ray Kurzweil and Eric Drexler
posted on 01/07/2004 12:28 AM by TwinBeam

Think of a soul
as a river that flows
a stream that babbles
- a conscious brook.

Against the stones
of sensation it
curls awareness
a wake awake.

Winter sets in
thought cools
the stream dies
- or sleeps?

A frozen dream
of spring
and life
again.

Mind Uploading cryonically preserved brains
posted on 12/05/2002 9:37 PM by Alberjohns

It seems to me that mind uploading of cryonics patients will become technologically feasible long before nanotechnological reanimation. I reach this conclusion because uploading a human mind simply requires a complete scan of the neural network of a brain, which can be done with an electron microscope with not much more than today's technology.
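
For a sense of scale, here is a back-of-the-envelope sketch of the raw data such a scan might produce; the brain volume, voxel size, and bytes-per-voxel figures below are assumptions for illustration only:

    # Rough estimate of raw data from a full structural brain scan.
    # All three inputs are illustrative assumptions, not established values.
    brain_volume_m3 = 1.4e-3   # ~1.4 liters, a typical human brain
    voxel_edge_m = 10e-9       # assume 10 nm voxels, fine enough for synapses
    bytes_per_voxel = 1        # assume 1 byte of labeled structure per voxel

    voxels = brain_volume_m3 / voxel_edge_m ** 3
    raw_bytes = voxels * bytes_per_voxel
    print(f"{voxels:.1e} voxels, ~{raw_bytes / 1e21:.1f} zettabytes raw")
    # -> 1.4e+21 voxels, ~1.4 zettabytes raw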

Nanotechnological reanimation, on the other hand, will require technology that is orders of magnitude greater than what is currently available.

In addition to this, it seems to me that uploading is preferable to nanotechnological reanimation. There are several reasons for this.

1) Uploading requires less advanced technology.

2) It will be far easier to repair a severely damaged brain if it is in software form. If the repair is not successful the first time, the original scan information will still exist, and so further attempts will be possible.

3) A biological substrate for the mind does not have the long-term potential of uploaded minds.

Re: Dialogue between Ray Kurzweil and Eric Drexler
posted on 01/17/2003 10:47 AM by Robin Hanson

This was a very thoughtful discussion. My thanks to the participants and whoever else arranged for it.

The main issue discussed was how much info near synapses is required to reconstruct the "weight" of that synapse (and I guess any local state regarding how that weight is updated). The immediate reason for asking this question is to help estimate whether current cryonics techniques preserve this info.

I agree that this is an important question, that it currently remains unanswered, and that we should try to answer it soon via the experiment Ray suggests.

There is a related question, however, that I am even more interested in: estimating when scanning technology will have the spatial/chemical resolution and cost required to create an upload from a living, or ideally frozen/vitrified, brain.

I wish someone would create a technology-level vs. time graph, analogous to the Moore's law graphs, showing how scanning resolution/cost is improving with time, with target lines corresponding to the optimistic and pessimistic estimates of the resolution required to create uploads.

With such a graph, we could estimate when the scanning component of the technology required for uploads will be ready. The other two technologies required are, of course, understanding brains well enough to know how to simulate one given the scanning info, and having enough raw computational ability at a reasonable cost to actually do a full brain simulation.
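
A skeleton of that graph is easy to sketch, for example in Python with matplotlib; every data point and both target lines below are placeholders, to be replaced with real survey data:

    # Skeleton "Moore's law for brain scanning" plot. All numbers are
    # placeholders; the real exercise is to fill in surveyed data.
    import matplotlib.pyplot as plt

    years = [1990, 1995, 2000, 2005, 2010, 2015, 2020]
    resolution_nm = [10000, 3000, 1000, 300, 100, 30, 10]  # hypothetical trend

    plt.semilogy(years, resolution_nm, "o-", label="achievable resolution (placeholder)")
    plt.axhline(100, linestyle="--", label="optimistic requirement (assumed 100 nm)")
    plt.axhline(5, linestyle=":", label="pessimistic requirement (assumed 5 nm)")
    plt.xlabel("year")
    plt.ylabel("scanning resolution at fixed cost (nm)")
    plt.legend()
    plt.show()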

Re: Dialogue between Ray Kurzweil and Eric Drexler
posted on 01/24/2003 12:59 PM by ELDRAS

Arg!

Look, human brains (in fact, any animal brains) don't just work like electrical wiring.

They involve interactions of chemical slops and peptides, with ebbs and flows and rhythms too.

Those chemicals partly come from the body.

For example, the synapse, the area between the neurons, harbours chemicals such as dopamine, which flows between the neurons.

Any theory of the human brain, any deconstruction of it, MUST involve a complete modelling, and not just neurons + axons + dendrites.

Ta Salutant!

ELDRAS

Re: Dialogue between Ray Kurzweil and Eric Drexler
posted on 01/24/2003 1:29 PM by thomas

I don't think so. What matters is HOW it is working, not WHAT it is made of.

- Thomas

Brave New World - 2100? 2200? 3000?
posted on 11/25/2003 7:01 PM by vo1now

I've caught some thoughts right now:
Throughout the history of human civilization, we've been proclaiming certain "objectively true" facts that have turned out to be rather subjective and relative, if not to say "false". For example, it was considered for a long time that our earth is flat; that the sun moves around the earth; that there's some mystical substance in all "unburned" things which is then turned into fire and heat (experience later showed that no such "substance" goes out, but rather oxygen comes in). The same kind of myth is the so-called "computrone", a physical carrier of the computing process. And the idea of a "certain eternal soul of thin matter, directed from outside this world" looks the same to me. Why does it live on? Because we haven't had a chance for practical experience. Once we do mind uploading, and the reanimated person says he/she is the same and recognizes him/herself, this question will soon disappear.
The trouble is that we are still hardly able to grasp the relativity of many things that may seem foolish to the casual person: that we are just a series of our "just-in-time" representations. (The word "are" is not quite correct; it implies that all of past, present, and future time already exists and is determined. I can't agree with such a fatalistic view, and I argue against it: we objectively can't determine the position of an electron in an atom (in other words, even the present is undetermined). So how can the future be determined if it depends on an undetermined present?)
But, on the other side, even the "simple" computer systems of today work on several levels; for example, the seven layers of networking: physical, data link, network, and so on. The fact that data from a website is transferred by radio waves, by twisted pair, or by phone cable doesn't affect the website's content. The networking system protects itself from external errors, minimizing the chance of critical changes through checksum mechanisms, stop bits, and so on.
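
The checksum idea in miniature, as a Python sketch (a simple additive checksum, just one of many error-detection schemes):

    # A one-byte additive checksum: the receiver recomputes it and rejects
    # the message on mismatch, so low-level transmission noise can't
    # silently corrupt the higher-level content.
    def checksum(data: bytes) -> int:
        return sum(data) % 256

    message = b"the website content"
    sent_check = checksum(message)

    corrupted = bytes([message[0] ^ 0x01]) + message[1:]  # flip one bit in transit

    print(checksum(message) == sent_check)    # True: clean copy accepted
    print(checksum(corrupted) == sent_check)  # False: corruption detected
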
Our brains must also be well enough protected from the environment not to "lose self". Put a magnet near your head: it will cause some changes in the magnetic field, but the fact that something started moving at a low level won't make you lose yourself, unless the field is so strong it can kill you. With or without that magnet, you would be you, but in alternate representations.
Till today we have looked at our life as a linear process, once started and at some point finished. Some people tried to move forward and backward, and created the idea of eternal life. But it is still linear, based on one separate, uncopyable soul. Now it's time to move from a linear to an alternate personality, with no time limits but also with no copy limits. I can hardly imagine the challenge this move presents to us; all of our society must be rebuilt if we don't want total chaos. From the position of our Western philosophy (black is black, white is white; black is NOT white, white is NOT black) we must jump to the Eastern one (black becomes white, white becomes black; or, black contains white, white contains black, as the famous Yin-Yang symbol shows).
But maybe some of our future copies will want to reintegrate (don't laugh; imagine a dozen mind clones loving one woman, and how she has to feel), saving all common and all desired separate experience into one brain, while others will want to separate into two souls (for the same reason: unshared love), dynamically hacking their own brains to clear "inconvenient" experience and emotions and to strengthen everything "convenient". This doesn't seem any more strange, alien, or blasphemous to us than a description of photography or video recording would have seemed to people of the 12th-century Islamic world (original Islam doesn't allow anybody except God to make images of living creatures, especially people).
As a conclusion, I want to say that I don't see any barrier to mind uploading in the future. But we must start to prepare ourselves for the future "alter-life" right now. We must realize that you'll have the same right as your clone to say that you're the original, and the same instruments to prove it (past experience and knowledge). And if we don't want to see ourselves pushed out of life, we must convert our current society into something different: into a world where no IDs will be forced on us, where good will will overcome external force, and direct connections between people will dissolve our damned hierarchic state power system. A brave new world. Just a prologue to the next stage, The Resurrection: carrying out God's will by human hands. No matter when: in 2100, 2200, 3000...

Re: Brave New World - 2100? 2200? 3000?
posted on 11/26/2003 6:01 PM by johnsolo

This misses the basic point of the singularity: that it will be possible to bioengineer life that outperforms its previous version. Why not a melding of nanotech and biotech?