Dialogue between Ray Kurzweil, Eric Drexler, and Robert Bradbury
 
Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0533.html
 
 
 
What would it take to achieve successful cryonics reanimation of a fully functioning human brain, with memories intact? A conversation at the recent Alcor Conference on Extreme Life Extension between Ray Kurzweil and Eric Drexler sparked an email discussion of this question. They agreed that despite the challenges, the brain's functions and memories can be represented surprisingly compactly, suggesting that successful reanimation of the brain may be achievable. 
 
 E-mail dialogue on November 
            23, 2002. Published on KurzweilAI.net Dec. 3, 2002. Comments by Robert 
            Bradbury added Jan. 15, 2003.
Ray Kurzweil: Eric, I greatly enjoyed our brief opportunity 
              to share ideas (difficulty of adding bits to quantum computing, 
              cryonics reanimation, etc.). Also, it was exciting to hear your 
              insightful perspective on the field you founded, now that it 
              has gone from what was regarded (in the mainstream, anyway) as 
              beyond-the-fringe speculation to, well, mainstream science and 
              engineering.  
I had a few questions and/or comments (depending on whether I'm 
              understanding what you said correctly). Your lecture had a very 
              high idea density, so I may have misheard some details. 
With regard to cryonics reanimation, I fully agree with you that 
              preserving structure (i.e., information) is the key requirement, 
              that it is not necessary to preserve cellular functionality. I have 
              every confidence that nanobots will be able to go in and fix every 
              cell, indeed every little machine in every cell. The key is to preserve 
              the information. And I'll also grant that we could lose some of 
              the information; after all, we lose some information every day of 
              our lives anyway. But the primary information needs to be preserved. 
              So we need to ask, what are the types of information required?  
One is to identify the neuron cells, including their type. This 
              is the easiest requirement. Unless the cryonics process has made 
              a complete mess of things, the cells should be identifiable. By 
              the time reanimation is feasible, we will fully understand the types 
              of neurons and be able to readily identify them from the slightest 
              clues. These neurons (or their equivalents) could then all be reconstructed. 
             
The second requirement is the interconnections. This morphology 
              is one key aspect of our knowledge and experience. We know that 
              the brain is continually adding and pruning connections; it's a 
              primary aspect of its learning and self-organizing principle of 
              operation. The interconnections are much finer than the neurons 
              themselves (for example, with current brain imaging techniques, 
              we can typically see the neurons but we do not yet clearly see the 
              interneuronal connections). Again, I believe it's likely that this 
              can be preserved, provided that the vitrification has been done 
              quickly enough. It would not be necessary that the connections be 
              functional or even fully evident, as long as it can be inferred 
              where they were. And it would be okay if some fraction were not 
              identifiable.  
It's the third requirement that concerns me: the neurotransmitter 
              concentrations, which are contained in structures that are finer 
              yet than the interneuronal connections. These are, in my view, also 
              critical aspects of the brain's learning process. We see the analogue 
              of the neurotransmitter concentrations in the simplified neural 
              net models that I use routinely in my pattern recognition work. 
              The learning of the net is reflected in the connection weights as 
              well as the connection topology (some neural net methods allow for 
              self-organization of the topology, some do not, but all provide 
              for self-organization of the weights). Without the weights, the 
              net has no competence. 
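To make the weight/topology distinction concrete, here is a minimal sketch (my own toy example, not code from Ray's pattern-recognition work): a single-layer perceptron learns AND entirely in its weights; discard the weights while keeping the topology and the net's competence vanishes.

```python
# Toy perceptron: the topology (two inputs -> one output) is fixed;
# everything the net "knows" ends up in the weights w and bias b.

def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND
w, b = train_perceptron(data)
trained = [predict(w, b, x1, x2) for (x1, x2), _ in data]  # [0, 0, 0, 1]

# Same topology, weights erased: the net no longer computes anything useful.
blank = [predict([0.0, 0.0], 0.0, x1, x2) for (x1, x2), _ in data]  # [0, 0, 0, 0]
```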
If the very-fine-resolution neurotransmitter concentrations are 
              not identifiable, the downside is not equivalent to merely an amnesia 
              patient who has lost his memory of his name, profession, family 
              members, etc. Our learning, reflected as it is in both interneuronal 
              connection topology and neurotransmitter concentration patterns, 
              underlies knowledge that is far broader than these routine forms 
              of memory, including our "knowledge" of language, how 
              to think, how to recognize objects, how to eat, how to walk and 
              perform all of our skills, etc. Loss of this information would result 
              in a brain with no competence at all. It would be worse than a newborn's 
              brain, which is at least designed to begin reorganizing itself. 
              A brain with the connections intact but none of the neurotransmitter 
              concentrations would have no competence of any kind and a connection 
              pattern that would be too specific to relearn all of these skills 
              and basic knowledge.  
It's not clear whether the current vitrification-preservation process 
              maintains this vital type of information. We could readily conduct 
              an experiment to find out. We could vitrify the brain of a mouse 
              and then do a destructive scan while still vitrified to see if the 
              neurotransmitter concentrations are still evident. We could also 
              confirm that the connections are evident as well.  
The type of long-term memory that an amnesia patient has lost is 
              just one type of knowledge in the brain. At the deepest level, the 
              brain's self-organizing paradigm underlies our knowledge and all 
              competency that we have gained since our fetal days (even prior 
              to birth).  
As a second issue, you said something about it being sufficient 
              to just have preserved the big toe or the nose to reconstruct the 
              brain. I'm not sure what you meant by that. Clearly none of the 
              brain structure is revealed by body parts outside the brain. The 
              only conceivable way one could restore a brain from the toe would 
              be from the genome, which one can discover from any cell. And indeed, 
              one could grow a brain from the genome. This would be, however, 
              a fetal brain, which is a genetic clone of the original person, 
              equivalent to an identical twin (displaced in time). One could even 
              provide a learning and maturing experience for this brain in which 
              the usual 20 odd years were sped up to 20 days or less, but this 
              would still be just a biological clone, not the original person. 
             
Finally, you said (if I heard you correctly) that the amount of 
              information in the brain (presumably needed for reanimation) is 
              about 1 gigabyte. My own estimates are quite different. It is true 
              that genetic information is very low, although as I discussed above, 
              genetic information is not at all sufficient to recreate a person. 
              The genome has about 0.8 gigabytes of information. There is massive 
              redundancy, however. For example, the "Alu" sequence is 
              repeated 300,000 times. If one compresses the genome using standard 
              data compression to remove redundancy, estimates are that one can 
              achieve about 30 to 1 lossless compression, which brings us down 
              to about 25 megabytes. About half of that comprises the brain, or 
              about 12 megabytes. That's the initial design plan.  
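The arithmetic above can be checked directly (the 30:1 compression ratio and the half-for-the-brain split are the text's estimates, not measured values; straight division gives roughly 27 MB and 13 MB, in line with the "about 25" and "about 12" quoted):

```python
genome_bytes = 0.8e9              # ~0.8 gigabytes of raw genome information
compressed = genome_bytes / 30    # assume ~30:1 lossless compression
brain_share = compressed / 2      # assume ~half the genome specifies the brain

print(f"compressed: ~{compressed / 1e6:.0f} MB, brain share: ~{brain_share / 1e6:.0f} MB")
```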
If we consider the amount of information in a mature human brain, 
              however, we have about 10^11 neurons with an average fan-out 
              of 10^3 connections, for an estimated total of 10^14 
              connections. For each connection, we need to specify (i) the neurons 
              that this connection is connected to, (ii) some information about 
              its pathway as the pathway affects analog aspects of its electrochemical 
              information processing, and (iii) the neurotransmitter concentrations 
              in associated synapses. If we estimate about 10^2 bytes 
              of information to encode these details (which may be low), we have 
              10^16 bytes, considerably more than the 10^9 bytes 
              that you mentioned.  
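The estimate works out as follows (all three factors are the order-of-magnitude figures from the paragraph above):

```python
neurons = 1e11              # ~10^11 neurons
fanout = 1e3                # ~10^3 connections per neuron, on average
bytes_per_connection = 1e2  # ~10^2 bytes: endpoints, pathway, synapse state

connections = neurons * fanout                    # 10^14 connections
total_bytes = connections * bytes_per_connection  # 10^16 bytes
print(f"{total_bytes:.0e} bytes")
```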
One might ask: How do we get from the 10^7 bytes that specify 
              the brain in the genome to 10^16 bytes in the mature brain? 
              This is not hard to understand, since we do this type of meaningful 
              data expansion routinely in our self-organizing software paradigms. 
              For example, a genetic algorithm can be efficiently coded, but in 
              turn creates data far greater in size than itself using a stochastic 
              process, which in turn self-organizes in response to a complex environment 
              (the problem space). The result of this process is meaningful information 
              far greater than the original program. We know that this is exactly 
              how the creation of the brain works. The genome specifies initially 
              semi-random interneuronal connection wiring patterns in specific 
              regions of the brain (random within certain constraints and rules), 
              and these patterns (along with the neurotransmitter-concentration 
              levels) then undergo their own internal evolutionary process to 
              self-organize to reflect the interactions of that person with their 
              experiences and environment. That is how we get from 10^7 
              bytes of brain specification in the genome to 10^16 bytes 
              of information in a mature brain. I think 10^9 bytes is 
              a significant underestimate of the amount of information required 
              to reanimate a mature human brain. 
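As a sketch of that expansion (my own toy example; the parameter names and sizes are invented for illustration), a "genome" of a few dozen bytes, a seed plus wiring constraints, deterministically expands into a connection list orders of magnitude larger than itself, which a learning process could then reshape:

```python
import random

# A compact "genome": a seed plus constraints on the wiring.
genome = {"seed": 42, "neurons": 1_000, "fanout": 100}

rng = random.Random(genome["seed"])
# Semi-random wiring: each neuron gets `fanout` targets, random within
# the constraint that targets lie inside the population.
wiring = [
    (pre, rng.randrange(genome["neurons"]))
    for pre in range(genome["neurons"])
    for _ in range(genome["fanout"])
]
# 100,000 connections generated from a specification of ~tens of bytes;
# experience-driven self-organization would then rewrite this pattern.
```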
I'd be interested in your own reflections on these thoughts, with 
              my best wishes. 
Eric Drexler: Ray--Thanks for your comments and questions. 
              Our thinking seems closely parallel on most points. 
Regarding neurotransmitters, I think it is best to focus not on 
              the molecules themselves and their concentrations, but rather on 
              the machinery that synthesizes, transports, releases, senses, and 
              recycles them. The state of this machinery must closely track long-term 
              functional changes (i.e., long-term memory or LTM), and much of this 
              machinery is an integral part of synaptic structure. 
Regarding my toe-based reconstruction scenario [creating a brain 
              from a bit of tissue containing intact DNA-Ed.], this is indeed 
              no better than genetically based reconstruction together with loading 
              of more-or-less default skills and memories—corresponding to 
              a peculiar but profound state of amnesia. My point was merely that 
              even this worst-case outcome is still what modern medicine would 
              label a success: the patient walks out the door in good health. 
              (Note that neurosurgeons seldom ask whether the patient who walks 
              out is "the same patient" as the one who walked in.) Most 
              of us wouldn't look forward to such an outcome, of course, and we 
              expect much better when suspension occurs under good conditions. 
Regarding the information content of the brain, both the input 
              and output data sets for reconstruction must indeed be vastly larger 
              than a gigabyte, for the reasons you outline. The lower number [10^9] 
              corresponds to an estimate of the information-theoretic content 
              of human long term memory found (according to Marvin Minsky) by 
              researchers at Bell Labs. They tried various methods to get information 
              into and out of human LTM, and couldn't find learning rates above 
              a few bits per second. Integrated over a lifespan, this 
              yields the above number. If this is so, it suggests that information 
              storage in the brain is indeed massively redundant, perhaps for 
              powerful function-enabling reasons. (Identifying redundancy this 
              way, of course, gives no hint of how to construct a compression 
              and decompression algorithm.) 
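The Bell Labs figure can be reconstructed roughly as follows (the learning rate, waking hours, and lifespan here are my assumptions, chosen to match "a few bits per second"; they are not figures from the dialogue):

```python
bits_per_second = 5                 # "a few bits per second" into LTM
waking_seconds_per_day = 16 * 3600  # assume ~16 waking hours per day
days = 70 * 365                     # assume a ~70-year lifespan

total_bits = bits_per_second * waking_seconds_per_day * days
total_bytes = total_bits / 8
# ~9 x 10^8 bytes: on the order of the 10^9 figure quoted above.
```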
Best wishes, with thanks for all you've done. 
P.S. A Google search yields a discussion 
              of the Bell Labs result by, yes, Ralph Merkle. 
 
Ray Kurzweil: Okay, I think we're converging on some commonality. 
             
On the neurotransmitter concentration level issue, you wrote: "Regarding 
              neurotransmitters, I think it is best to focus not on the molecules 
              themselves and their concentrations, but rather on the machinery 
              that synthesizes, transports, releases, senses, and recycles them. 
              The state of this machinery must closely track long-term functional 
              changes (i.e., LTM), and much of this machinery is an integral part 
              of synaptic structure." 
I would compare the "machinery" to any other memory machinery. 
              If we have the design for a bit of memory in a DRAM system, then 
              we basically know the mechanics for the other bits. It is true that 
              in the brain there are hundreds of different mechanisms that we 
              could call memory, but each of these mechanisms is repeated many 
              millions of times. This machinery, however, is not something we 
              would need to infer from the preserved brain of a suspended patient. 
              By the time reanimation is feasible, we will have long since reverse-engineered 
              these basic mechanisms of the human brain, and thus would know them 
              all. What we do need specifically for a particular patient is the 
              state of that person's memory (again, memory referring to all skills). 
              The state of my memory is not the same as that of someone else, 
              so that is the whole point of preserving my brain.  
And that state is contained in at least two forms: the interneuronal 
              connection patterns (which we know is part of how the brain retains 
              knowledge and is not a fixed structure) and the neurotransmitter 
              concentration levels in the approximately 10^14 synapses. 
             
My concern is that this memory state information (particularly 
              the neurotransmitter concentration levels) may not be retained by 
              current methods. However, this is testable right now. We don't have 
              to wait 40 to 50 years to find this out. I think it should be a 
              high priority to do this experiment on a mouse brain as I suggested 
              above (for animal lovers, we could use a sick mouse). 
You appear to be alluding to a somewhat different approach, which 
              is to extract the "LTM," which is likely to be a far more 
              compact structure than the thousands of trillions of bytes represented 
              by the connection and neurotransmitter patterns (CNP). As I discuss 
              below, I agree that the LTM is far more compact. However, we are 
              not extracting an efficient LTM during cryopreservation, so the 
              only way to obtain it during reanimation would be to retain 
              its inefficient representation in the CNP. 
You bring up some interesting and important issues when you wrote, 
              "Regarding my toe-based reconstruction scenario, this is indeed 
              no better than genetically-based reconstruction together with loading 
              of more-or-less default skills and memories—corresponding to 
              a peculiar but profound state of amnesia. My point was merely that 
              even this worst-case outcome is still what modern medicine would 
              label a success: the patient walks out the door in good health." 
             
I agree that this would be feasible by the time reanimation is 
              feasible. The means for "loading" these "default 
              skills and memories" is likely to be along the lines that I 
              described above, to use "a learning and maturing experience 
              for this brain in which the usual 20 odd years were sped up to 20 
              days or less." Since the human brain as currently designed 
              does not allow for explicit "loading" of memories and 
              skills, these attributes need to be gained from experience using 
              the brain's self-organizing approach. Thus we would have to use 
              this type of experience-based approach. Nevertheless, the result 
              you describe could be achieved. We could even include in these "loaded" 
              (or learned) "skills and memories," the memory of having 
              been the original person who was cryonically suspended, including 
              having made the decision to be suspended, having become ill, and 
              so on.  
False reanimation
And this process would indeed appear to be a successful reanimation. 
              The doctors would point to the "reanimated" patient as the 
              proof of the pudding. Interviews of this patient would reveal that 
              he was very happy with the process, delighted that he made the decision 
              to be cryonically suspended, grateful to Alcor and the doctors for 
              their successful reanimation of him, and so on.  
But this would be a false reanimation. This is clearly not the 
              same person that was suspended. His "memories" of having 
              made the decision to be suspended four or five decades earlier would 
              be false memories. Given the technology available at this time, 
              it would be feasible to create entirely new humans from a genetic 
              code and an experience / learning loading program (which simulates 
              the learning in a much higher speed substrate to create a design 
              for the new person). So creating a new person would not be unusual. 
              So all this process has accomplished is to create an entirely new 
              person who happens to share the genetic code with the person who 
              was originally suspended. It's not the same person.  
One might ask, "Who cares?" Well, no one would care except 
              for the originally suspended person. And he, after all, is not around 
              to care. But as we look to cryonic suspension as a means towards 
              providing a "second chance," we should care now about 
              this potential scenario. 
This brings up an issue that has concerned me: 
              "false" reanimations.  
Now one could even raise this issue (of a false reanimation) if 
              the reanimated person does have the exact CNP of the original. One 
              could take the philosophical position that this is still a different 
              person. An argument for that is that once this technology is feasible, 
              you could scan my CNP (perhaps while I'm sleeping) and create a 
              CNP-identical copy of me. If you then come to me in the morning 
              and say "good news, Ray, we successfully created your precise 
              CNP-exact copy, we won't be needing your old body and brain anymore," 
              I may beg to differ. I would wish the new Ray well, but feel that 
              he's a different person. After all, I would still be here.  
So even if I'm not still here, by the force of this thought experiment, 
              he's still a different person. As you and I discussed at the reception, 
              if we are using the preserved person as a data repository, then 
              it would be feasible to create more than one "reanimated" 
              person. If they can't all be the original person, then perhaps none 
              of them are.  
However, you might say that this argument is a subtle philosophical 
              one, and that, after all, our actual particles are changing all 
              the time anyway. But the scenario you described of creating a new 
              person with the same genetic code, but with a very different CNP 
              created through a learning simulation, is not just a matter of a 
              subtle philosophical argument. This is clearly a different person. 
              We have examples of this today in the case of identical twins. No 
              one would say to an identical twin, "we don't need you anymore 
              because, after all, we still have your twin." 
I would regard this scenario of a "false" reanimation 
              as one of the potential failure modes of cryonics. 
Finally, on the issue of the LTM (long term memory), I think this 
              is a good point and an interesting perspective. I agree that an 
              efficient implementation of the knowledge in a human brain (and 
              I am referring here to knowledge in the broadest sense as not just 
              classical long term memory, but all of our skills and competencies) 
              would be far more compact than the 10^16 bytes I have 
              estimated for its actual implementation.  
As we understand biological mechanisms in a variety of domains, 
              we find that we can redesign them (as we reverse engineer their 
              functionality) with about 10^6 times greater efficiency. Although 
              biological evolution was remarkable in its ingenuity, it did get 
              stuck in particular paradigms.  
It's actually not permanently stuck in that its method of getting 
              unstuck is to have one of its products, Homo sapiens, discover and 
              redesign these mechanisms.  
We can point to several good examples of this comparison of our 
              human engineered mechanisms to biological ones. One good example 
              is Rob Freitas' design for robotic blood cells, which are many orders 
              of magnitude more efficient than their biological counterparts. 
             
Another example is the reverse engineering of the human auditory 
              system by Lloyd Watts and his colleagues. They have found that implementing 
              the algorithms in software from the reverse engineering of specific 
              brain regions requires about a factor of 10^6 less computation 
              than the theoretical potential of the brain regions being emulated. 
             
Another good example is the extraordinarily slow computing speed 
              of the interneuronal connections, which have about a 5 millisecond 
              reset time. Today's conventional electronic circuits are already 
              100 million (10^8) times faster. Three-dimensional molecular 
              circuits (e.g., nanotube-based circuitry) would be at least 10^9 
              times faster. Thus if we built a human brain equivalent with the 
              same number of simulated neurons and connections (not just simulating 
              the human brain with a smaller number of units that are operating 
              at higher speeds), the resulting nanotube-based brain would operate 
              at least 10^9 times faster than its biological counterpart. 
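The speed comparison reduces to simple arithmetic (the reset time and speedup factors are the ones quoted in the paragraph above):

```python
reset_time = 5e-3             # ~5 ms interneuronal reset time
neural_rate = 1 / reset_time  # ~200 transactions per second

electronic_factor = 1e8       # conventional circuits, per the text
nanotube_factor = 1e9         # 3-D molecular circuits, per the text

# Implied effective rates if the same wiring ran on faster substrates:
electronic_rate = neural_rate * electronic_factor  # 2e10 per second
nanotube_rate = neural_rate * nanotube_factor      # 2e11 per second
```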
             
Some of the inefficiency of the encoding of information in the 
              human brain has a positive utility in that memory appears to have 
              some holographic properties (meaningful information being distributed 
              through a region), and this helps protect the information. It explains 
              the usually gradual (as opposed to catastrophic) degradation of 
              human memory and skill. But most of the inefficiency is not useful 
              holographic encoding, but just this inherent inefficiency of biological 
              mechanisms. My own estimate of this factor is around 10^6, 
              which would reduce the LTM from my estimate of 10^16 for 
              the actual implementation to around 10^10 for an efficient 
              representation, but that is close enough to your and Minsky's estimate 
              of 10^9.  
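Numerically, the two estimates line up as follows (the 10^6 inefficiency factor is Ray's assumption from the preceding paragraphs):

```python
actual_implementation = 1e16  # estimated bytes in the brain's own encoding
inefficiency_factor = 1e6     # assumed redundancy/inefficiency of biology

efficient_ltm = actual_implementation / inefficiency_factor  # 1e10 bytes
minsky_estimate = 1e9
ratio = efficient_ltm / minsky_estimate  # within one order of magnitude
```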
However, as you point out, we don't know the compression/decompression 
              algorithm, and are not in any event preserving this efficient representation 
              of the LTM with the suspended patients. So we do need to preserve 
              the inefficient representation.  
With deep appreciation for your own contributions. 
 
Eric Drexler: With respect to inferring memory state, the 
              neurotransmitter-handling machinery in a synapse differs profoundly 
              from the circuit structure in a DRAM cell. Memory cells in a chip 
              are all functionally identical, each able to store and report different 
              data from millisecond to millisecond; synapses in a brain are structurally 
              diverse, and their differences encode relatively stable information. 
              Charge stored in a DRAM cell varies without changes in its stable 
              structure; long-term neurotransmitter levels in a synapse vary as 
              a result of changes in its stable structure. The quantities of different 
              enzymes, transport molecules, and so forth, determine the neurotransmitter 
              properties relevant to LTM, hence neurotransmitter levels per se 
              needn't be preserved. 
My discussion of the apparent information-theoretic size of human 
              LTM wasn't intended to suggest that such a compressed representation 
              can or should be extracted from the detailed data describing brain
structures. I expect that any restoration process will work with 
              these far larger and more detailed data sets, without any great 
              degree of intermediate compression. Nonetheless, the apparently 
              huge gap between the essential mental information to be preserved 
              and the vastly more detailed structural information is reassuring—and 
              suggests that false reanimation, while possible, shouldn't 
              be expected when suspension occurs under good conditions. (Current 
              medical practice has analogous problems of false life-saving, but 
              these don't define the field.) 
Ray Kurzweil: I'd like to thank you for an engaging dialogue. 
              I think we've converged to a pretty close common vision of these 
              future scenarios. Your point is well taken that human memory (for 
              all of its purposes), to the extent that it involves the neurotransmitters, 
              is likely to be redundantly encoded. I agree that differences in 
              the levels of certain molecules are likely to be also reflected 
              in other differences, including structural differences. Most biological 
              mechanisms that we do understand tend to have redundant information 
              storage (although not all; some single-bit changes in the DNA can 
              be catastrophic). I would point out, however, that we don't yet 
              understand the synaptic structures sufficiently to be fully confident 
              that the differences in neurotransmitter levels that we need (for 
              reanimation) are all redundantly indicated by structural changes. 
              However, all of this can be tested with today's technology, and 
              I would suggest that this would be worthwhile.  
I also agree that "the apparently huge gap between the essential 
              mental information to be preserved and the vastly more detailed 
              structural information is reassuring." This is one example 
              in which the inefficiency of biology is helpful. 
 
Eric Drexler: Thank you, Ray. I agree that we've found good 
              agreement, and I also enjoyed the interchange. 
 
Additional comments on Jan. 15, 2003 by Robert Bradbury 
Robert Bradbury: First, it is reasonable to assume that 
              within this decade we will know the precise crystal structure of 
              every human protein for which structure determination is feasible, 
              using either X-ray, NMR, or computational (e.g., Blue Gene) methods. 
              That should be almost all human proteins. Second, it seems likely 
              that we will have both experimental (yeast two-hybrid) and computational 
              (Blue Gene and extensions thereof, and/or distributed protein modeling 
              via @Home) methods to determine how proteins that interact typically do 
              so. So we will have the ability to completely understand what happens 
              at synapses and to some extent model that computationally. 
Now, Ray placed an emphasis on neurotransmitter "concentration" 
              that Eric seemed to downplay. I tend to lean in Eric's direction 
              here. I don't think the molecular concentration of specific neurotransmitters 
              within a synapse is particularly critical for reanimating a brain. 
              I do think the concentrations of the macroscale elements 
              necessary for neurotransmitter release will need to be known. That 
              is, one needs to be able to count mitochondria and synaptic vesicle 
              size and type (contents) as well as the post-synaptic neurotransmitter 
              receptors and the pre-synaptic reuptake receptors. It is the numbers 
              of these "machines of transmission" that determines the 
              Hebbian "weight" for each synapse, which is a point I 
              think Ray was trying to make.  
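A toy sketch of Bradbury's framing (entirely my own construction; the fields and the weight formula are invented stand-ins, not a biophysical model): the synapse's effective Hebbian weight is derived from counts of its stable machinery, so instantaneous transmitter concentration never needs to be stored.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    vesicles: int                # synaptic vesicles (release capacity)
    postsynaptic_receptors: int  # detection capacity
    reuptake_transporters: int   # clearance capacity
    mitochondria: int            # local energy supply

def effective_weight(s: Synapse) -> float:
    """Toy proxy for a Hebbian weight built only from countable,
    slowly changing machinery; units are arbitrary."""
    release = s.vesicles * min(1.0, s.mitochondria / 10)
    return release * s.postsynaptic_receptors / (1 + s.reuptake_transporters)

strong = Synapse(vesicles=200, postsynaptic_receptors=80,
                 reuptake_transporters=10, mitochondria=12)
weak = Synapse(vesicles=40, postsynaptic_receptors=20,
               reuptake_transporters=30, mitochondria=5)
# The counts alone order the synapses by strength.
```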
Furthermore, if there is some diffusion of neurotransmitters out 
              of individual synapses, the location and density of nearby synapses 
              may be important (see Rusakov & Kullmann 
              below). Now, the counting of and determination of the location of 
              these "macroscale" effectors of synapse activity is a 
              much easier task than measuring the concentration of every neurotransmitter
molecule in the synaptic cleft.  
The neurotransmitter concentration may determine the instantaneous 
              activity of the synapse, but I do not believe it holds the "weight" 
              that Ray felt was important. That seems to be contained much more 
              in the energy resources, enzymatic manufacturing capacity, and vesicle/receptor 
              concentrations, which vary over much longer time periods. (The proteins 
              have to be manufactured near the neuronal nucleus and be transported, 
              relatively slowly, down to the terminal positions in the axons and 
              dendrites.) 
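Bradbury's claim that the weight lives in these slowly varying machine counts, rather than in instantaneous transmitter concentrations, can be caricatured in a toy model. Everything below (the function, its coefficients, the example counts) is invented purely for illustration; it sketches the shape of the idea, not real synaptic physiology:

```python
# Toy sketch: treat the synaptic "weight" as a function of slowly varying
# counts of transmission machinery, ignoring the fast-fluctuating
# neurotransmitter concentration in the cleft. All coefficients are invented.
def synaptic_weight(vesicles, post_receptors, reuptake_receptors, mitochondria):
    # Release capacity is limited by vesicle count and available energy.
    release_capacity = vesicles * min(1.0, mitochondria / 10.0)
    # More reuptake receptors clear transmitter faster, weakening the signal.
    clearing = 1.0 + 0.05 * reuptake_receptors
    return release_capacity * post_receptors / clearing

w = synaptic_weight(vesicles=200, post_receptors=50,
                    reuptake_receptors=20, mitochondria=8)
print(w)  # 4000.0
```

The point of the sketch is that a scanner would only need to recover a handful of counts per synapse, not a molecular snapshot of the cleft.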
One can alter neurotransmitter concentrations and probably pulse-transmission 
              probabilities at least within some range without disrupting the 
              network terribly (risking false reanimation). SSRIs [Selective Serotonin 
              Reuptake Inhibitors] and drugs used to treat Parkinson's, such as 
              L-dopa, are examples of drugs that may alter these aspects of interneuron 
              communications. Of more concern to me is whether or not there will 
              be hurdles in attempting a "cold" brain restart. One can 
              compare this to the difficulties of restarting the brain of someone 
              in a coma and/or someone who has drowned.  
 
The structure of the brain may be largely preserved but one just 
              may not be able to get it running again. This implies there is some 
              state information contained within the normal level of background 
              activity. We haven't figured out yet how to "shock" the 
              brain back into a functional pattern of activity. 
Ray also mentioned vitrification. I know this is a hot topic within 
              the cryonics community because of Greg Fahy's efforts. But you have 
              to realize that Greg is trying to get us to the point where we can 
              preserve organs entirely without nanotech capabilities. I think 
              vitrification is a red herring. Why? Because we will know the structure 
              of just about everything in the brain under 50 nm in size. Once 
frozen, those structures do not change their shape or location 
significantly.  
So I would argue that you could take a frozen head, drop it on 
              the floor so it shatters into millions or billions of pieces and 
              as long as it remains frozen, still successfully reassemble it (or 
              scan it into an upload). In its disassembled state it is certainly 
              one very large 3D jigsaw puzzle, but it can only be reassembled 
              one correct way. Provided you have sufficient scanning and computational 
              capacity, it shouldn't be too difficult to figure out how to put 
              it back together.  
You have to keep in mind that all of the synapses have proteins 
              binding the pre-synaptic side to the post-synaptic side (e.g., molecular 
              velcro). The positions of those proteins on the synaptic surfaces 
              are not specified at the genetic level and it seems unlikely that 
              their locations would shift significantly during the freezing process 
              (such that their number and approximate location could not be reconstructed). 
             
As a result, each synapse should have a "molecular fingerprint" 
              as to which pre-side goes with which post-side. So even if the freezing 
              process pulls the synapse apart, it should be possible to reconstruct 
              who the partners are. One needs to sit and study some freeze-fracture 
              electron micrographs before this begins to become a clear idea for 
              consideration. 
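The reassembly argument can be sketched computationally: if each fractured pre-synaptic face carries a distinctive protein fingerprint, finding its post-synaptic partner reduces to nearest-neighbor matching. A minimal sketch, with made-up bit-vector fingerprints standing in for protein position maps (all names and data here are hypothetical):

```python
# Pair pre- and post-synaptic fracture faces by fingerprint similarity.
# Fingerprints are toy bit vectors; real ones would encode the number and
# approximate positions of the "molecular velcro" proteins on each face.
def hamming(a, b):
    """Number of positions where two fingerprints disagree."""
    return sum(x != y for x, y in zip(a, b))

pre_faces  = {"pre_A": (1, 0, 1, 1, 0, 0),
              "pre_B": (0, 1, 0, 0, 1, 1),
              "pre_C": (1, 1, 1, 0, 0, 1)}
# Each post face is a slightly noisy copy of its true partner's fingerprint.
post_faces = {"post_A": (1, 0, 1, 1, 0, 1),
              "post_B": (0, 1, 0, 0, 1, 0),
              "post_C": (1, 1, 1, 0, 1, 1)}

pairs = {pre: min(post_faces, key=lambda p: hamming(fp, post_faces[p]))
         for pre, fp in pre_faces.items()}
print(pairs)  # {'pre_A': 'post_A', 'pre_B': 'post_B', 'pre_C': 'post_C'}
```

At real scale this becomes an enormous assignment problem, but the fingerprints make it vastly more constrained than a blank jigsaw.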
So I think the essential components are the network configuration 
              itself, the macroscale machinery architecture of the synapses and 
              something that was not mentioned, the "transcriptional state 
              of the nuclei of the neurons" (and perhaps glial cells), i.e., 
              which genes are turned on/off. This may not be crucial for an instantaneous 
              brain "reboot" but might be essential for having it function 
              for more than brief periods (hours to days). 
References 
A good (relatively short but detailed) description of synapses 
              and synaptic activity is Ch.5: 
              Synaptic Activity from State University of New York at Albany. 
 Also see: 
Understanding 
              Neurological Functions through the Behavior of Molecules, Dr. 
              Ryoji Yano  
Three-Dimensional 
              Structure of Synapses in the Brain and on the Web, J. C. Fiala, 
              2002 World Congress on Computational Intelligence, May 12-17, 2002 
             
Assessing 
              Accurate Sizes of Synaptic Vesicles in Nerve Terminals, Seongjai 
              Kim, Harold L. Atwood & Robin L. Cooper  
Extrasynaptic Glutamate Diffusion in the Hippocampus: Ultrastructural 
Constraints, Uptake, and Receptor Activation, Dimitri A. Rusakov & Dimitry 
M. Kullmann, The Journal of Neuroscience 18(9):3158-3170 (1 May 1998). 
Ray Kurzweil: Robert, thanks for your interesting and thoughtful 
              comments. I essentially agree with what you're saying, although we 
              don't yet understand the mechanisms behind the "Hebbian weight" 
              or other vital state information needed for a non-false reanimation. 
              It would be good if this state information were fully represented 
              by mitochondria and synaptic vesicle size and type (contents), post-synaptic 
              neurotransmitter receptors and pre-synaptic reuptake receptors, 
              i.e., by the number of these relatively large (compared to molecules) 
              "machines of transmission." 
Given that we have not yet reverse-engineered these mechanisms, 
              I suppose it would be difficult to do a definitive experiment now 
              to make sure we are preserving the requisite information. 
I agree with your confidence that we will have reverse-engineered 
              these mechanisms within the next one to two decades. I also agree 
              that we need only preserve the information, and that reanimation 
              technology will take full advantage of the knowledge of how these 
              mechanisms work. Therefore the mechanisms don't need to be preserved 
              in working order so long as the information is there. I agree that 
              Fahy's concerns apply primarily to revitalization without such detailed 
              nanotech repair and reconstruction. 
Of course, as I pointed out in the debate with Eric, such a complete 
              reconstruction may essentially amount to creating a new brain/person 
              with the cryonically preserved brain/body serving only as a blueprint, 
              in which case it would be just as easy to create more than one reanimated 
              person. Eric responded to this notion by saying that the first one 
              is the reanimated person and subsequent ones are just copies because 
              after all, at that time, we could make copies of anyone anyway. 
With regard to your jigsaw puzzle, that may be a difficult puzzle 
              to put together, although I suppose we'll have the computational 
              horsepower to do it. 
 Mind·X Discussion About This Article: 
Re: subjective time in an upgraded brain
I was actually just thinking about sense of time.
 
 
I think that it may be more about anchoring to external or internal events. For example: if you say that you will do X tomorrow, you put an event into a mental space labelled "tomorrow". You know how tomorrow feels and where you represent it. However, when the next day comes, you think about that "tomorrow" mental space and it still feels like X should be done tomorrow. The way to get out of the loop is to link event X to some other external stimulus that will occur on the next day. So, "after I brush my teeth, I will do X" could be the appropriate linking anchor.
 
 
I think you can see how this might work for longer periods of time. So, what about shorter periods? This could be measured by the change in current mental space. 
 
 
So, what do we know? 
 
 
When you have things to think about and do physically that are pleasant, time seems to go by quickly.
 
 
When you have things to think about, but don't like the subject, time can go by slowly.
 
 
When you have nothing to think about but you feel pleasure, time seems to go by quickly.
 
 
When you have nothing to think about and you are bored (uncomfortable), time seems to go by slowly.
 
 
So, it seems that pleasure and discomfort govern time sense more than the content of mental space, at least for short periods. But why?
 
Re: subjective time in an upgraded brain
I would wager that the perception of time per se would not change significantly (after the initial period of adjustment).  My gut feeling is that the brain would compensate for an increase in the number of compute cycles in the same way that it compensates when the visual field is reversed.  (Remember the experiment in which subjects wore glasses that made everything look upside down.  After an adjustment period, upside down just seemed normal.) I think the adjustment would require various cues, such as watching the clock tick, timing the heartbeat, watching objects fall, moving one's limbs, etc.  But after that adjustment, I don't think you would get the feeling, for example, that apples seem to fall more slowly than they used to.
 
 
The major difference one would notice is the decrease in time it takes to perform mental tasks.  For example, it took me several minutes to compose this response.  I think I will know I have been "accelerated" when I can compose faster than I can type.
 
 
James
 
Re: subjective time in an upgraded brain
>  I don't think you would get the feeling, for example, that apples seem to fall more slowly than they used to. 
 
 
But I want that. I want a year, until the apple falls.
 
 
Of course, I won't wait idle to see it happen. Instead, I will visit some virtual places in the meantime, where everything will have "my speed".
 
 
As a side effect, this virtuality will be far more realistic than the "natural reality" is.
 
 
So, I will not care much what really happened with that apple. I'll just use it to further speed my world's pace!
 
 
- Thomas
 
Re: Soul not copyable by definition?
"This means we should also be careful when it comes to soul-transitions. You might say a transition exists out of: copy, delete, insert (somewhere else). 
 
 
Notice the 'delete'-thingy in there..."
 
 
I think you're right: when cryo-preserved people are reanimated, I think it will not be like waking up; rather, a new 'soul' will be spawned, with your memories but still not 'you'.  If you ever remembered something but don't 'remember' remembering it (Jesus, English is horrible for talking about philosophical ideas), that's what I think it would feel like: you'd remember everything about 'your' life and have all 'your' skills and knowledge, but they wouldn't feel like they were 'yours'.
 
 
None of this means I think that the 'soul' is anything mystical.  Rather, I think it is an emergent phenomenon that occurs in complex systems, a case of the whole being greater than the sum of its parts.  It's also why I think Hans Moravec's procedure is the only feasible way to achieve 'uploading', since it slowly integrates artificial neurons with your brain's network, then slowly replaces the biological components.  Moravec's version hinges on the availability of nanotech; I think it could be done today. Here's a link where I explain this further: http://users.rcn.com/standley/AI/immortality.htm 
 
 
I think Roger Penrose is onto something when he argues that AI is impossible using traditional computing methods.
 
Re: Soul not copyable by definition?
Regarding the "reanimated you":
 
 
Why do we even suppose that we are REALLY the same "we" (soul if you like) upon waking from a night's sleep?  Is it anything more than the "sensation of sameness"?
 
 
Yesterday, before you went to bed, perhaps you wondered "Will I wake up tomorrow morning?".  Suppose you fall asleep, and indeed, you (THAT "you") never re-awake.  Instead, a "new person awakes" in your body, who simply finds everything quite familiar and uneventful due to the inherited memory and state (curtains have the same color as "remembered", etc.)
 
 
The above is a metaphor.  I rather believe that "I" exist only for a moment, and am "new" each new moment.  The "me" that began this post never "lived" to know how it ended.  New "soul" every moment, as the "self-sense" only exists in the present moment.
 
 
The sense of "continuity of being" is exactly that;  a sensation.
 
 
Many folk look forward to the singularity as a kind of immortality vehicle, wherein their wildest dreams are as if real, etc.  In this regard, I posed the following:
 
 
Suppose I have a drug that (I claim) bestows both immortality and infinite bliss.  When taken, you are ramped into increasing circles of wondrous ecstasy and delight, so delightful and enraptured that you do not care a whit that you are slowly losing consciousness, become entirely unconscious, and then peacefully die.
 
 
Why, AT BASIS, is this not "immortality and infinite bliss"?
 
 
Being that Y-O-U only exist in the present ... immortality is only the EXPECTATION of endless tomorrows, NOT the exercise of those tomorrows.
 
 
Taking my "drug", the you that went happily unconscious never knows that there are no more tomorrows, so can have no regrets.  The you-of-tomorrow never gets to exist, so (clearly) can have no regrets.
 
 
Hence, the sense that my drug "cheats you of immortality" is only a sensation that comes from knowing how it really acts.  Not knowing this, it is (subjectively) identical to immortality.
 
 
Cheers! ____tony b____
 
Re: Soul not copyable by definition?
Jay,
 
 
> "when I defined 'soul', I should have also written that it is something which processes information in linear time, meaning that it *is* the same continuously."
 
 
Ok.  That brings up an interesting point of definition.
 
 
Consider a computer, running a software operating system (OS), with that OS itself running an application (App), and that App is "continuously" processing data (say, weather readings or Shakespeare or something.)
 
 
Which of [machine, OS, Application] is a thing that "is the same continuously"?
 
 
It is relatively easy to say "the machine is always the same" (hardware-wise), even though the particular state of its controlled register latches (transistor-supported "bits") is always changing.
 
 
It gets harder to say that the OS is unchanging.  Sure, its static code-base (upon "load") is usually the same (unless it has performed an automatic upgrade via the net), but much of how the OS is behaving depends upon large allocated "soft structures", whose size, form and content can change radically during operation.  These structures, and their contents, affect what the software "is" to a considerable extent at any moment.
 
 
Likewise, the "App" can possibly self-alter even more radically (especially one that is designed to "learn").
 
 
Where would "soul" (a thing "being the same continuously") reside in such a picture, metaphorically speaking?
 
 
Cheers! ____tony b____
 
 
 
Re: Soul not copyable by definition?
>Good point. Obviously, a synthetic recreation of a neuron would not be materially identical to an organic one. However, I think we are working on the assumption that the important functionality of a neuron is its information bearing and processing capability and that this can be replicated in a variety of physical media. Do you not think this is the case? 
 
 
I think you're right -- this would be the case.
 
 
Grant's thought experiment on the peach would be different than a thought experiment on the brain where each neuron was replaced one by one.  In short, the sensation of taste and the functionality of computation are vastly different things I would think.  No doubt Grant's "peach" would not TASTE like a peach at all, but a brain in which all its elements had been converted to silicon, would probably ACT like a brain in that it would compute (i.e., recognize differences, similarities, and identities).  Maybe the computation would have a different "flavor" or "feel" -- sort of like the difference between a song on analog vinyl and CD digital, same song, just a different feel to it. 
 
 
James Jaeger
 
 
Re: Soul not copyable by definition?
Spurk,
 
 
I feel as you do, that (perhaps) a "real" sentience requires access to the non-deterministic QM "substrate", and that our brains (well, "brains" in general) do this to some extent.
 
 
My point about "existing only for the moment" is different, and reflects the notion that our SUBJECTIVE sense of having continuity is only something that we "sense" each moment.  Thus, it effectively finesses the whole concept of "soul continuing".  Sure, just as our bodies (approximately) continue from moment to moment, one can say that our consciousness or "beinghood" does as well.  But such a view is effectively a conceptual one.  Even if I were to claim "I continue", my "self" of a week ago is no longer "feeling" anything, no longer "making decisions", etc.  Thus, it is only a conceptual view that continuity really means anything.  What is important is the chain of memory.
 
 
In principle, then, assuming I could "copy" myself into a dozen clones (including mental state), and then kill my "original self", the question of whether any of the clones is "me" becomes clear.  All are "me", and none are "me".
 
 
All are me, in that they each would subjectively "feel", and be convinced that they were the "original me", yet each would go off experiencing separate phenomena, as separate individuals.
 
 
Cheers!  ____tony b____
 
Re: Soul not copyable by definition?
"assuming I could "copy" myself into a dozen clones (including mental state), and then kill my "original self", the question of whether any of the clones is "me" becomes clear. All are "me", and none are "me". 
 
 
All are me, in that they each would subjectively "feel", and be convinced that they were the "original me", yet each would go off experiencing separate phenomena, as separate individuals."
 
 
This is nutty sounding, but my 'gut' tells me that, if you 'booted up' the clones before killing yourself, 'you' would suddenly become a kind of hive mind (probably because of quantum effects like entanglement, or as-yet-undiscovered quantum phenomena).  When your original self was killed, you probably wouldn't even really notice it...
 
just an idea, not to be taken too seriously :)
 
 
spurk
 
http://users.rcn.com/standley/AI/AI.htm
 
 
 
 
 
Re: Soul not copyable by definition?
Spurk,
 
 
Beings who will experience ... "being" ... in an envisioned "uploaded" state may well be able to experience a hive-mind sort of thing (with a commensurate loss of a degree of self-hood, I must imagine).
 
 
But short of this, I cannot justify a hive-mind simply as an outgrowth of duplication.  At a fundamental level, we are all massively "quantum entangled", yet do not (apparently) experience each other's experiences.  There may be some evidence that identical twins "share something" a bit deeper than the rest of us, but not to the degree that the death of one means they continue to experience "living" through the twin. (At least, I don't suppose so.)
 
 
I cannot help but feel that, short of a "soul in the religious sense" (which would defy the notion that conscious minds are artifacts of the underlying and "unthinking physical world") there is really no extant "me-ness" that acts like a fluid that can be poured from one vessel into another, always unbroken/contiguous.  Beyond my physically unique separateness, what makes me "me" is akin to the software I entertain (not really the best metaphor, but represents that what I entertain in my mind can be influenced by, and integrate outside information, in ways that might thereafter "change" how I act, even though I have not physically "rewired" my neural arrangement.)
 
 
I know this is a poor analogy, but there are a million systems out there running IDENTICAL versions of MS Windows ... yet even that perfect identicality does not lead to the kind of coherent QM entanglement that would allow my OS to reflect, consciously or not, the behaviors of its "clones" elsewhere.
 
 
  For me, everything becomes clear and simple when I consider the "me" I experience in real-time to be a thing that only exists for the present moment, a "soul-per-second" so to speak.
 
 
I know this sounds weird, but to try and "feel" this idea more thoroughly, I suggest trying this.  Close your eyes for a moment, and imagine yourself to be a "soul" that has existed in total unconsciousness for eternity, then open your eyes for a half-second and close them again to become thereafter unconscious for eternity.  Imagine that tiny "blip" of experience is all that you ever get.  That seems incredible, but you must also imagine that during that half-second, you inherit all "state" that was accumulated by all the other "souls" that inhabited that body before you, so you feel no disorientation during that brief experience, and just as quickly, never get to anticipate your "death" an instant later.
 
 
Then try to imagine that your current and usual sense of "consciously existing" is the result of such a weird reality.
 
 
Sort of funny: I now imagine Descartes saying "I think, therefore I am", where in reality, each word was uttered by a different "being".
 
 
Of course, I am using "sequence of eternal souls" metaphorically.  Such "souls" would rather not warrant the name, since they would not effectively "exist" except for that sliver of time.
 
 
Cheers!  ____tony b____
 
Re: Soul not copyable by definition?
These are good points Tony.  No-one has brought up time-dilation yet, so I thought I would.  Consider the standard Planet Of The Apes style "twin paradox".  One twin takes off in a rocket ship at a large fraction of the speed of light.  Upon her return from the 5 year trip, the twin that remained on Earth has aged 60 years.
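The numbers in this scenario pin down the ship's speed: 5 years of proper time against 60 Earth years means a Lorentz factor of 12. A quick check (the 5 and 60 are the figures above; the rest is just special relativity):

```python
import math

traveler_years = 5.0   # proper time experienced aboard the ship
earth_years = 60.0     # time elapsed on Earth

gamma = earth_years / traveler_years      # Lorentz factor: 12.0
beta = math.sqrt(1.0 - 1.0 / gamma**2)    # v/c, from gamma = 1/sqrt(1 - beta^2)
print(f"v = {beta:.4f} c")  # v = 0.9965 c
```

So the thought experiment quietly assumes a ship cruising at about 99.65% of light speed.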
 
 
So we have 2 people, in the same universe, that subjectively experienced two vastly different subjective AND objective time spans.  Going back to what Tony has said about "blips" or snapshots of awareness, it would seem that the space-bound twin has experienced fewer of these snapshots.  I find it fascinating to consider this in the context of cellular automata, such as Conway's Game of Life.  
 
 
A pattern in the Life lattice ages deterministically every clock cycle.  So any two identical, isolated pattern structures will still be identical at the end of X number of clock cycles.  
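This determinism is easy to demonstrate directly. A minimal Life implementation (a standard construction, not taken from the article): run two copies of the same glider pattern and they remain identical at every clock cycle:

```python
from collections import Counter

def step(cells):
    """Advance one Game of Life generation; cells is a set of live (x, y) cells."""
    neighbor_counts = Counter((x + dx, y + dy)
                              for (x, y) in cells
                              for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                              if (dx, dy) != (0, 0))
    # A cell is live next tick if it has 3 neighbors, or 2 and is already live.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
a, b = set(glider), set(glider)       # two isolated, identical patterns
for _ in range(100):
    a, b = step(a), step(b)
print(a == b)  # True: identical patterns age identically
```

There is no source of divergence: the rule is a pure function of the current state, which is exactly the property the poster is leaning on.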
 
 
We know that atomic clocks age the same, or differently, depending on the gravitational field they are exposed to.  So by extension, the matter composing a human brain will age the same, or differently depending on the gravitational field they are exposed to, affecting both its objective and subjective time-span.  Could the gravitational field have something to do with an aging/freezing effect like what could be done in a cellular automata experiment?
 
 
This seems like the most plausible explanation to me, and if this is the case, then the implications are that the "soul" is the information pattern that defines the physical existence of the human body.  Information is obviously copyable, but because we live in a chaotic universe, any copies would instantly and permanently diverge as soon as one of them "ages" a clock cycle.  
 
 
Furthermore, if you can capture even an abstraction of the relevant information describing the relationships between all the parts that make up the whole nervous system, then you would have a real copy by the definition I described above.  A digital version of this information could be aged at whatever clock cycles we can achieve in computers, as Thomas Kristan so frequently points out.
 
Re: Dialogue between Ray Kurzweil and Eric Drexler
This was a very thoughtful discussion.  My thanks to the participants and whoever else arranged for it. 
 
 
The main issue discussed was how much info near synapses is required to reconstruct the "weight" of that synapse (and I guess any local state regarding how that weight is updated).  The immediate reason for asking this question is to help estimate whether current cryonics techniques preserve this info. 
 
 
I agree that this is an important question, that it currently remains unanswered, and that we should try to answer it soon via the experiment Ray suggests. 
 
 
There is a related question, however, that I am even more interested in.  This question is estimating when scanning technology will have the spatial/chemical resolution and cost required to create an upload from a living, or ideally frozen/vitrified, brain.   
 
 
I wish someone would create a technology level vs. time graph, analogous to the Moore's law graphs, showing how scanning resolution/cost is improving with time, and with target lines corresponding to the optimistic and pessimistic estimates of the resolution required to create uploads.  
 
 
With such a graph, we could estimate when the scanning component of the technology required for uploads will be ready.  The other two technologies required are, of course, understanding brains well enough to know how to simulate one given scanning info, and the ability to have enough raw computational ability at a reasonable cost to actually do a full brain simulation.
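The proposed graph amounts to extrapolating an exponential trend until it crosses a target line, so even a back-of-envelope version is easy to sketch. All three numbers below (starting resolution, halving time, and the 50 nm threshold Bradbury mentions earlier) are illustrative assumptions, not measured data:

```python
import math

# Hypothetical trend line: affordable whole-brain scan resolution halves
# every few years. These parameters are assumptions for illustration only.
start_year = 2003
start_resolution_nm = 1000.0   # assumed affordable resolution at the start
halving_time_years = 3.0       # assumed rate of improvement
target_nm = 50.0               # "everything under 50 nm" threshold

doublings_needed = math.log2(start_resolution_nm / target_nm)
crossover = start_year + halving_time_years * doublings_needed
print(f"~{crossover:.0f}")  # ~2016 under these assumptions
```

The value of the real graph would be in replacing the invented trend line with measured data points; the extrapolation itself is one line of arithmetic.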
 
Brave New World - 2100? 2200? 3000?
I've caught some thoughts right now:
 
Throughout the history of human civilization we have proclaimed certain "objectively true" facts that turned out to be rather subjective and relative, if not outright false. For example, it was long held that the earth is flat; that the sun moves around the earth; that there is some mystical substance in all "unburned" things which is then turned into fire and heat (whereas experiment showed that no "substance" goes out, but oxygen comes in). The so-called "computrone", a physical carrier of the computing process, is the same kind of myth. And the idea of a "certain eternal soul of thin matter, directed from beyond this world" looks the same to me. Why does it live on? Because we haven't had a chance for practical experience. When we do mind uploading, and the reanimated person says he or she is the same, and realizes him- or herself, this question will soon disappear.
 
The point is that we are able, as never before, to grasp the relativity of things in ways that can seem foolish to a casual person: that we are just a series of our "just-in-time" representations. (The word "are" is not quite correct; it would imply that all of past, present and future already exist and are determined. I can't agree with such a fatalistic view, and I argue against it: we objectively can't determine the position of an electron in an atom; in other words, even the present is undetermined. So how can the future be determined, if it depends on an undetermined present?)
 
But, on the other hand, even in the "simple" computer systems of today there are several levels; for example, the seven layers of networking: physical, data link, network, and so on. The fact that data from a website are transferred either by radio waves, or by twisted pair, or by phone cable, doesn't affect the website's content. The networking system protects itself from external errors, minimizing the chance of critical changes by means of checksums, stop bits and so on.
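The checksum point is concrete: a flipped bit changes the checksum, so corruption in transit is detected rather than silently absorbed. A minimal sketch (a toy additive checksum for illustration; real protocols use CRCs and similar):

```python
def checksum(data: bytes) -> int:
    """Toy additive checksum modulo 256 (illustration, not a real protocol)."""
    return sum(data) % 256

packet = b"website content"
sent = checksum(packet)
corrupted = bytes([packet[0] ^ 1]) + packet[1:]   # one bit flipped in transit
print(checksum(corrupted) != sent)  # True: the corruption is detected
```

The receiver recomputes the checksum and asks for a retransmission on mismatch, which is the self-protection the analogy appeals to.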
 
Our brains also must be protected enough from the environment not to "lose self". Put a magnet near your head: it will cause some changes in the magnetic field, but the fact that something started to move at a low level won't make you lose yourself, unless the field is so strong that it can kill you. With or without that magnet, you would be you, but in alternate representations.
 
Until today we looked at our life as a linear process, once started and in some time finished. Some people tried to move forward and backward, and created the idea of eternal life. But still linear, based on one separate, uncopyable soul. Now it's time to move from linear to alternate personality, with no time limits but also with no copy limits. I can hardly imagine what a challenge this move presents to us; all our society must be rebuilt if we don't want total chaos. From our Western philosophy's position (black is black, white is white; black is NOT white, white is NOT black) we must jump to an Eastern one (black becomes white, white becomes black; or, black contains white, white contains black, as the famous Yin-Yang symbol shows).
 
But maybe some of our future copies will want to reintegrate (don't laugh; imagine a dozen mind clones loving one woman, and how is she supposed to feel?) by saving all common and all desired separate experience into one brain, and others to separate into two souls (for the same reason: unshared love), dynamically editing their own brains to clear out "inconvenient" experience and emotions and to strengthen everything "convenient". This doesn't seem any more strange, alien or blasphemous to us than a description of photography or video recording would have seemed to people of the 12th-century Islamic world (original Islam doesn't allow anybody except God to make images of living creatures, especially people).
 
As a conclusion, I want to say that I don't see any barrier to mind uploading in the future. But we must start to prepare ourselves for the future "alter-life" right now. We must realize that you will have the same right to say that you are the original as your clone does, and the same instruments to prove it (past experience and knowledge). And, so as not to see ourselves pushed out of life, we must convert our current society into something different. Into a world where no one is forced to use IDs. Where good will overcomes external force, and direct connections between people destroy our damned hierarchic state power system. A brave new world. Just a prologue to the next stage: The Resurrection. Making God's will by human hands. No matter when: in 2100, 2200, 3000...
 