The Alcor Conference on Extreme Life Extension
On November 15-17, 2002, leaders in life extension and cryonics came together to explore how the emerging technologies of biotechnology, nanotechnology, and cryonics will enable humans to halt and ultimately reverse aging and disease and live indefinitely.
Published on KurzweilAI.net, Nov. 22, 2002. Additional reporting by Sarah Black.
The idea that death is inevitable, which I call the "death
meme," is a powerful and pervasive belief held by all humans,
with the exception of a small but growing group of life extensionists.
The thought leaders of this movement gathered together this past
weekend in Los Angeles to participate in the fifth annual
Alcor Conference on Extreme Life Extension and share
ideas on pushing back the end of life. Bringing together longevity
experts, biotechnology pioneers, and futurists, the conference explored
how the emerging technologies of biotechnology, nanotechnology,
and cryonics will enable humans to halt and ultimately reverse aging
and disease and live indefinitely.
I had the opportunity to participate in this illuminating and stimulating
conference and I report herein on the highlights.
Robert Freitas is a Research Scientist at Zyvex, a nanotechnology
company, and in my view the world's leading pioneer in nanomedicine.
He is the author of a book by the same name and the inventor of
a number of brilliant conceptual designs for medical nanorobots.
In his first major presentation of his pioneering conceptual designs,
Freitas began his lecture by lamenting that "natural death
is the greatest human catastrophe." The tragedy of medically
preventable natural deaths "imposes terrible costs on humanity,
including the destruction of vast quantities of human knowledge
and human capital." He predicted that "future medical
technologies, especially nanomedicine, may permit us first to arrest,
and later to reverse, the biological effects of aging and most of
the current causes of natural death."
Freitas presented his design for "respirocytes," nanoengineered
replacements for red blood cells. Although they are much smaller
than biological red blood cells, an analysis of their functionality
demonstrates that augmenting one's blood supply with these high-pressure
devices would enable a person to sit at the bottom of a
pool for four hours, or to perform an Olympic sprint for 12 minutes,
without taking a breath. Freitas presented a more complex blueprint
for robotic "microbivores," white blood cell replacements
that would be hundreds of times faster than normal white blood cells.
By downloading appropriately updated software from the Internet,
these devices would be quickly effective against any type of pathogen,
including bacteria, viruses, fungi, and cancer cells. Freitas also
presented a new concept of a "chromosome replacement robot,"
which would be programmed to enter a cell nucleus and perform repairs
and modifications to a person's DNA to reverse DNA transcription
errors and reprogram defective genetic information. Trillions of
such robots could be programmed to enter every cell in the body.
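Freitas's endurance claim is easy to sanity-check. Here is a minimal back-of-envelope sketch in Python, using the ideal-gas observation that gas compressed to P atmospheres packs P liters (1-atm equivalent) into each liter of storage; the pressure, storage volume, and consumption figures are illustrative assumptions, not specifications from Freitas's design:

```python
# Back-of-envelope check on the respirocyte endurance claim.
# Every number here is an illustrative assumption, not a published
# specification from Freitas's respirocyte design.

PRESSURE_ATM = 1000.0        # assumed O2 storage pressure inside the devices
STORAGE_L = 0.06             # assumed aggregate onboard gas volume of a full dose
RESTING_O2_L_PER_MIN = 0.25  # typical resting human O2 consumption at ~1 atm

# At the same temperature, compressing gas to P atm packs P liters
# (at 1 atm equivalent) into each liter of storage (ideal gas law).
o2_at_1_atm = PRESSURE_ATM * STORAGE_L

minutes = o2_at_1_atm / RESTING_O2_L_PER_MIN
print(f"Deliverable O2: {o2_at_1_atm:.0f} L at 1 atm")
print(f"Breath-free endurance at rest: {minutes / 60:.1f} hours")
```

Under these assumed figures the stored oxygen lasts about four hours at rest, which is at least consistent with the pool-bottom claim.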
How we will get to this kind of technology was the subject of my
[Ray Kurzweil] presentation on the law of accelerating returns
at the conference. Communication bandwidths, the shrinking size
of technology, our knowledge of the human brain, and human knowledge
in general are all accelerating. Three-dimensional molecular computing
will provide the hardware for human-level "strong" AI
well before 2030. The more important software insights will be gained
in part from the reverse-engineering of the human brain, a process
well under way. The ongoing acceleration of price-performance of
computation, communication, and miniaturization will provide the
technologies to create nanobots that can instrument (place sensors
in) billions of neurons and interneuronal connections, greatly facilitating
the development of detailed models of how human intelligence works.
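The arithmetic behind such projections is simple compounding. A minimal sketch, assuming for illustration a fixed one-year doubling time for price-performance (actual doubling times vary by technology):

```python
# Minimal sketch of the arithmetic behind the law of accelerating returns:
# a fixed doubling time compounds into enormous multipliers within decades.
# The one-year doubling period is an illustrative assumption.

DOUBLING_TIME_YEARS = 1.0  # assumed price-performance doubling time

def growth_factor(years: float, doubling_time: float = DOUBLING_TIME_YEARS) -> float:
    """Multiplier on price-performance after `years` of steady doubling."""
    return 2.0 ** (years / doubling_time)

for horizon in (10, 20, 28):  # 2030 is ~28 years after this 2002 article
    print(f"{horizon:>2} years -> {growth_factor(horizon):,.0f}x")
```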
Once nonbiological intelligence matches the range and subtlety
of human intelligence, it will necessarily soar past it because
of the continuing acceleration of information-based technologies,
as well as the ability of machines to instantly share their knowledge.
Intelligent nanorobots will be deeply integrated in the environment,
our bodies and our brains, providing vastly extended longevity,
full-immersion virtual reality incorporating all of the senses,
experience "beaming," and enhanced human intelligence.
The implication will be an intimate merger between the technology-creating
species and the evolutionary process it spawned.
Aubrey de Grey, a researcher at the University of Cambridge,
began his talk by citing the fact that 100,000 people die of age-related
causes each day, and then quoted Bertrand Russell's statement that
"some of us think this is rather a pity." (Albeit Russell
was talking about nuclear war rather than aging.) de Grey described
a program he has devised to approach the goal of extreme life extension
"with a hard-headed, engineering frame of mind." He described
his goal as "engineered negligible senescence," referring
to the term "negligible senescence" that Tuck Finch introduced
in 1990, defined as "the absence of a statistically detectable
inverse correlation between age and remaining life expectancy."
Human society takes the existence of this inverse correlation for
granted, but de Grey explained why he feels we have the knowledge
close at hand to flatten out this curve. His program for engineered
negligible senescence
"focuses mainly on those subtle changes, the ones that accumulate
throughout life and only snowball into pathology rather late. That's
why 'engineered negligible senescence' is an accurate term for my
goal—I aim to eliminate those subtle changes, so allowing the cell/organ/body
to use its existing homeostatic prowess to maintain us in a physically
un-deteriorating state indefinitely."
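Finch's criterion has a crisp quantitative reading. The toy model below, with illustrative (not fitted) hazard rates, shows how remaining life expectancy stays flat in age under a constant hazard, which is negligible senescence, but falls steadily under a Gompertz-style hazard like ours:

```python
# Toy illustration of Finch's "negligible senescence" criterion: under a
# constant hazard, remaining life expectancy shows no inverse correlation
# with age; under a Gompertz hazard it falls with age. Hazard parameters
# are illustrative, not fitted to real mortality data.
import math

def remaining_life_expectancy(hazard, age, horizon=1000):
    """Expected further years of life given a per-year hazard function."""
    survival, expectancy = 1.0, 0.0
    for t in range(horizon):
        survival *= math.exp(-hazard(age + t))  # survive one more year
        expectancy += survival
    return expectancy

constant = lambda x: 0.02                           # negligible senescence
gompertz = lambda x: 0.0001 * math.exp(0.085 * x)   # mortality doubles ~every 8 yrs

for age in (20, 40, 60, 80):
    print(f"age {age}: flat-hazard e(x) = {remaining_life_expectancy(constant, age):5.1f}, "
          f"Gompertz e(x) = {remaining_life_expectancy(gompertz, age):5.1f}")
```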
De Grey argued persuasively for the feasibility of this goal and
described a multi-faceted program to address each known area of
aging, including his area of specialty in mitochondrial mutations
and lysosomal aggregates. He proposed an "Institute of Biomedical
Gerontology," with a budget of $100 million, to promote, coordinate,
and fund the focused development of these rejuvenation biotechnologies.
Christine Peterson, cofounder and President of the Foresight
Institute, provided guidelines on how the lay person can evaluate
the often conflicting advice and information on health and life
extension. Christine pointed out that as knowledge becomes increasingly
specialized, no one person can be an expert in every treatment intervention,
so "we are all lay persons" even if we have expertise
in some particular aspect of health treatment. She pointed out the
destructive implications of the benign sounding creed of the medical
profession, "first of all, do no harm." Because of the
extremely cautious, risk-averse orientation that this principle
fosters, treatments desperately needed by millions of people are
tragically suppressed or delayed.
Max More, President of the Extropy Institute, and the Futures
specialist at ManyWorlds, Inc., presented what he called a "strategic
scenario analysis for your second life." More described his
own culture shock at having moved from England to Southern California,
which led him to consider the extreme adjustment challenge for people
(possibly himself) in the future being reanimated from cryonic suspension.
More pointed out that "to maximize our chances of a psychologically
successful revival, we have the responsibility to prepare ahead
of time." Using the discipline of scenario thinking from his
consulting work, More engaged in a series of thought experiments
that he would encourage people to engage in who have made the decision
to be cryonically suspended should they happen to die.
Michael West, President and CEO of Advanced Cell Technology,
Inc. and a pioneer of therapeutic cloning, presented a compelling
history of the science of cellular aging. He emphasized the remarkable
stability of the immortal germ line cells, which link all cell-based
life on Earth. He described the role of the telomeres, a repeating
code at the end of each DNA strand, which are made shorter each
time a cell divides, thereby placing a limit on the number of times
a cell can replicate (the "Hayflick limit"). Once these
DNA "beads" run out, a cell becomes programmed for cell
death. The immortal germ line cells avoid this destruction through
the use of a single enzyme called telomerase, which rebuilds the
telomere chain after each cell division. This single enzyme makes
the germ line cells immortal, and indeed these cells have survived
from the beginning of life on Earth billions of years ago.
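The mechanism West described reduces to a simple counter. A toy sketch, with illustrative repeat counts rather than measured telomere lengths, reproduces a Hayflick-style division limit and shows how telomerase removes it:

```python
# Toy model of the mechanism West described: telomeres shorten with each
# division until the Hayflick limit; telomerase restores the lost repeats.
# Lengths and loss-per-division below are illustrative, not measured values.

TELOMERE_START = 10_000  # assumed initial telomere length, base pairs
LOSS_PER_DIVISION = 100  # assumed base pairs lost each division
CRITICAL_LENGTH = 5_000  # assumed length that triggers senescence

def divisions_until_senescence(telomerase_active: bool, cap: int = 10_000) -> int:
    length, divisions = TELOMERE_START, 0
    while length > CRITICAL_LENGTH and divisions < cap:
        divisions += 1
        length -= LOSS_PER_DIVISION
        if telomerase_active:
            length += LOSS_PER_DIVISION  # telomerase rebuilds the lost repeats
    return divisions

print("Somatic cell (no telomerase):", divisions_until_senescence(False), "divisions")
print("Germ line (telomerase active):", divisions_until_senescence(True),
      "divisions (hits the demo cap, i.e. effectively unlimited)")
```

With these illustrative numbers the somatic cell senesces after 50 divisions, roughly the observed Hayflick limit, while the telomerase-positive line never does.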
This insight opens up the possibility of future gene therapies
that would return cells to their youthful, telomerase-extended state.
Animal experiments have shown telomerase to be relatively benign,
although some experiments have resulted in increased cancer rates.
There are also challenges in transferring telomerase into the cell
nuclei, although the gene therapy technology required is making
solid progress. West expressed confidence that new techniques would
provide the ability to transfer the telomerase into the nuclei,
and to overcome the cancer issue. Telomerase gene therapy holds
the promise of indefinitely rejuvenating human somatic (non-germ
line) cells, i.e., all human cells.
West addressed the ethical controversies surrounding stem cell
therapies. He pointed out a number of inconsistencies in the ethical
position of those who oppose stem cell therapies. For example, an
embryo can divide in two, within the first two weeks after conception
and prior to implantation in the mother's womb, to create identical
twins. This demonstrates that a unique human life is not defined
by a fertilized egg cell, but only by an implanted embryo. Stem
cell therapies use embryonic cells prior to this individuation process.
West pointed out the dramatic health benefits that stem cell therapies
promise, including the ability to create new cells and organs to
treat a wide variety of diseases such as Parkinson's disease and
heart disease. West also described promising new methodologies in
the field of "human somatic cell engineering" to create
new tissues with a patient's own DNA by modifying one type of cell
(such as a skin cell) directly into another (such as a pancreatic
islet cell or a heart cell) without the use of embryonic stem cells.
Greg Fahy, Chief Scientific Officer of 21st Century Medicine,
formerly director of an organ cryopreservation program at the American
Red Cross and a similar program for the Naval Medical Research Institute,
described prospects for preserving organs for long periods of time.
He pointed out how we now have "the ability to perfuse whole
kidneys with cryoprotectants at concentrations that formerly were
uniformly fatal, but which currently produce little or no injury."
The immediate goal of Fahy's research is to preserve transplant
organs for substantially longer periods of time than is currently
feasible. Fahy pointed out that by combining these techniques with
the therapeutic cloning technologies being developed by Michael
West and his colleagues, it will be possible in the future for people
to keep a supply of replacements for all of their organs, to be
immediately available in emergencies. He painted a picture "of
the future when organs are grown, stored, and transported as easily
as blood is today."
To suggest a way to make it to that future, I [Ray Kurzweil]
had the opportunity to present a set of ideas to apply our current
knowledge to life extension. My earlier presentation focused on
the nature of human life in the 21st century, whereas this presentation
described how we could live to see (and enjoy!) the century ahead.
These ideas are drawn from an upcoming book, A Short Guide to a
Long Life, which I am coauthoring with Terry Grossman, M.D., a leading
longevity expert.
These ideas should be thought of as "a bridge to a bridge
to a bridge," in that they provide the means to remain healthy
and vital until the full flowering of the biotechnology revolution
within 20 years, which in turn will bring us to the nanotechnology-AI
(artificial intelligence) revolution ten years after that. The latter
revolution will radically redefine our concept of human mortality.
I pointed out that the leading causes of death (heart disease,
cancer, stroke, diabetes, kidney disease, liver disease) do not
appear out of the blue. They are the end result of processes that
are decades in the making. You can understand where you are personally
in the progression of these processes and end (and reverse) the
lethal march towards these diseases. The program that Dr. Grossman
and I have devised allows you to assess how longstanding imbalances
in your metabolic processes can be corrected before you "fall
off the cliff." This information is not "plug and play,"
but the knowledge is available and can be applied through a comprehensive
and concerted effort.
The nutritional program that Dr. Grossman and I recommend provides
the best of the two contemporary poles of nutritional thinking.
The Atkins philosophy has correctly identified the dangers of a
high-glycemic-index diet as causing imbalances in the sugar and
insulin cycle, but does not focus on the equally important rebalancing
of omega-3 and omega-6 fats, and cutting down on the pro-inflammatory
fats in animal products. Conversely, the low-fat philosophy of Ornish
and Pritikin has not placed sufficient attention on cutting down
on high-glycemic-index starches. Our program recommends a moderately
low level of carbohydrates, dramatic reductions in high-glycemic-index
carbohydrates, as well as moderately low levels of fat, with an
emphasis on the anti-inflammatory omega-3 fats found in nuts, fish,
and flaxseed.
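For readers wanting to quantify "high-glycemic-index," the standard measure is glycemic load: GL = GI x grams of carbohydrate per serving / 100. A small sketch with rough illustrative GI and serving values, not clinical data:

```python
# Glycemic load (GL = GI * carbohydrate grams per serving / 100) is the
# standard way to quantify a food's impact on the sugar/insulin cycle.
# The GI and serving figures below are rough illustrative values only.

def glycemic_load(gi: float, carb_grams: float) -> float:
    return gi * carb_grams / 100.0

foods = {
    # name: (approximate glycemic index, carb grams per serving), illustrative
    "white bread (2 slices)": (70, 30),
    "lentils (1 cup)":        (30, 40),
    "flaxseed (1 oz)":        (1, 0.5),
}
for name, (gi, carbs) in foods.items():
    print(f"{name:24s} GL = {glycemic_load(gi, carbs):4.1f}")
```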
A study of nurses showed that those who ate at least a handful
of nuts (one ounce) each day had 75% less heart disease than
those who did not. Our program also includes aggressive
supplementation to obtain optimal lipid levels, reduce inflammation,
correct potential problems with the methylation (folic acid) cycle,
attain and maintain an optimal weight, and maintain glucose and
insulin levels in a healthy balance.
In a rare lecture, Eric Drexler, author of Engines of Creation,
the seminal book that introduced the field two decades ago, and
widely regarded as the father of nanotechnology, reflected on the
state of the nanotechnology field and its prospects. Drexler pointed
out that the term "nanotechnology" has broadened from his original
conception, the precise positional control of chemical reactions,
to any technology that deals with measurements of less than 100
nanometers. Drexler pointed to biology as an existence
proof of the feasibility of molecular machines. Our human-designed
machines, Drexler pointed out, will not be restricted to the limitations
of biology. He said that although the field was initially controversial,
no sound criticism of his original ideas has emerged. Drexler dramatically
stated, "I therefore declare victory by default."
Drexler cited the powerful analogy relating atoms to bits and nanotechnology
to software. We can write a piece of software to perform a certain
manipulation on several numbers. We can then use logic and loops
to perform that same manipulation billions or trillions of times,
even though we only have to write the software once. Similarly,
we can set up nanotechnology systems to perform the same nanoscale
mechanical manipulations billions or trillions of times and in billions
or trillions of locations.
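The analogy is easy to make concrete in software. A trivial sketch: the manipulation is defined exactly once, and loops repeat it a million times at no additional design cost, just as a nanofactory would repeat one mechanical step at billions of sites:

```python
# The atoms/bits analogy in miniature: write the manipulation once, then
# let a loop apply it an arbitrary number of times. Purely illustrative.

def manipulate(x: int) -> int:
    """The 'operation' we define exactly once."""
    return 3 * x + 1

# Logic and loops repeat it as many times as we like at no extra design cost.
results = [manipulate(x) for x in range(1_000_000)]
print(results[:5], "...", f"{len(results):,} applications of one definition")
```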
Drexler described the broad applicability of nanotechnology to revolutionize
many areas of human endeavor. We will be able to build supercomputers
that are one-thousandth the size of a human cell. We will be
able to create electricity-generating solar panels at almost no
cost. We will be able to build extremely inexpensive spacecraft
out of diamond fiber. "The idea that our human world is limited
to the Earth is going to be obsolete very soon, as soon as these
technologies become available," Drexler pointed out. Indeed,
all manufacturing will be revolutionized. Nanotechnology-based manufacturing
will make feasible the ability to create any customized product
we can define at extremely low cost from inexpensive raw materials
and software.
With regard to our health, nanotechnology will be able to reconstruct
and rebuild just about everything in our bodies. Nanoscale machines
will enter all of our cells and proofread our DNA, patch the mitochondria,
destroy pathogens, remove waste materials, and rebuild our bodies
and brains in ways unimaginable today. Drexler defined this goal
as "permanent health."
Drexler expressed optimism for the prospects of successful reanimation
of cryonically preserved people. Nanorobots will be able to assess,
analyze, and investigate the state of the preserved cells, tissues,
and fluids; perform microscopic and nanoscopic repairs on every
cell and connection; and remove cryopreservatives. He chided other
cryonics supporters for making the "pessimistic argument"
that although cryonics had only a small chance of working, this
chance was better than the alternative, which provided no chance
for a second life. Based on our growing knowledge and confidence
in nanotechnology and emerging scenarios for applying these technologies
to the reanimation task, Drexler argued that we should be expressing
a valid optimism about the prospects for a healthy second life after
suspension.
Drexler was asked what he thought of the prospects for optical
and quantum computing. He replied that optical computers will remain
bulkier than programmable molecular computers and thus are likely
to remain special purpose devices. As for quantum computing, there
are designs for possible room-temperature quantum computers with
dozens of qubits, but the prospects for quantum computing are still
not clear.
Drexler was pessimistic on the prospects for picotechnology (technology
on a scale 1000 times smaller than nanotechnology). He explained
that one would need the conditions of a neutron star to make this
feasible, and even then there are theoretical problems getting subatomic
particles to perform useful functions such as computation.
I would point out that nanotechnology also appeared unlikely until
Drexler came along and showed how we could build machines that go
beyond the nanomachines of nature. A future Drexler is likely to
provide the conceptual designs to build machines that go beyond
the picomachines of atomic nuclei and atoms.
I have that penciled in for 2072.
Mind·X Discussion About This Article:
Re: The Alcor Conference on Extreme Life Extension
""How can you build anything useful out of indeterminacy? ""
"you figure it out through trial and error ;)"
That is true in the non-quantum world, but the process of figuring anything out is actually the indeterminate becoming determinate. According to Heisenberg and current orthodoxy, however, the quantum realm is FUNDAMENTALLY indeterminate, not just practically indeterminate, but actually nondeterminable. Not only do we not know what nature is doing at the quantum scale, but (according to the theory) we CAN'T know, because nature itself doesn't know (?!). I don't buy into that of course, but the point is that in order to do theoretical exploratory engineering (or any engineering for that matter) you have to have determinacy. You have to have knowledge and knowable structures; otherwise you really have nothing to build with or upon. Picotechnology is not yet even in the theoretical exploratory engineering phase. We can't even begin exploring without some prior knowledge, and gaining this knowledge has been fundamentally outlawed (not that it makes any real physical difference) by the Protectors of the Sanctity of the Theory (Bohr, Heisenberg, etc.).
Evolution certainly uses determinate molecular structures with which to evolve. The 'indeterminacy' does not even come into play until you reach the quantum scale, with which we have no experience, so (according to orthodoxy) every real-world metaphor you try to use has no correlation with the pico realm.
However, I don't buy into orthodoxy so, to me the main roadblock for picotech is the root of the "indeterminacy" itself -- the fluid nature of the pico realm.
"picotech is only the beginning I think...constructed of stabalized probabilistic fluctuations/quantum wormholes/stringstuff/etc-''
You can't physically build structures out of mathematical probabilities, let alone imaginary entities such as wormholes, black/white holes, superstrings, and big bangs, which are merely mathematical consequences and epicycles of a fundamentally wrong and incomplete theory. Why would you believe in such far-out nonsensical extrapolations when the basic laws being extrapolated are so obviously flawed? They can't even get the quantum laws to fit with relativity theory, and they can't get either of them to fit with common sense! The current model still doesn't even have a clue of what gravity is! (Warped space? But space itself, so they say, is empty: X * 0 = 0 != gravity.) If they fill space (as they are beginning to do) with quantum probability (whatever that physically represents), then there still is the question of HOW mass physically warps space. They haven't a clue so far. Yet black-hole theory and wormholes etc. are a direct mathematical extrapolation of the semantically empty and incomplete standard theory of gravity.
"at that range even picotech will be like giant lumbering planets!"
I appreciate the optimism and the scope because I too am very much a far-reaching optimist. I am merely coming from an alternate basis, the foundation of an alternate physics system that takes into account the experimental evidence of the liquid nature of the subatomic realm (and consequently answers many of the "unanswerables" of the current theory). It is my view of the structure (or lack thereof) of the quantum realm which makes me question the possibility of achieving anything pico-mechanically useful.
"'we know how to find/cause little attractors- or relatively stable ordered structures in chaos already- "
I am not talking about tweaking the VERY DETERMINATE computer simulations and iterations of simple equations that don't necessarily have a direct correlation with nature. The chaos you speak of is NOT indeterminate. Do you think it is possible to make a robotic arm purely out of liquid? What about making one from a matrix of mathematical probabilities that in principle can never be seen, directly manipulated, or humanly understood?
I have yet to see any evidence that there is anything solid enough at the pico scale to be even remotely mechanically useful. The particle aspect of the wave/particle duality is merely the fact that these phenomena are somewhat localized and coherent and travel in not-so-very-straight lines (all of which can be said about wave/fluid phenomena).
subtillion
Re: Beyond Picotechnology
I have heard some folk speak of technology "beyond nanotech", such as femtotech, attotech, etc.
Let me pull Occam out of a hat for just a moment, and survey the known universe.
At the very largest scales (galaxies and galactic clusters) one sees several categories of distinct forms: spirals, barred spirals, ellipticals, etc. But all in all, not that many significantly varied forms. This is because at that scale the force of gravity holds sway, and all other forces tend to vanish to insignificance (at least as far as determining large-scale structure).
As we come down to the scale of stars, we see a bit more variety; main sequence, red and blue giants and dwarfs, neutron star and black-hole remnants of supernovae. Some of this variety is due to the fact that the EM force is beginning to show its muscle, although still dominated by gravity.
At the "planet size" scale of things, we see a much greater variety of forms. Given the number of possible chemical compositions and ratios, distance of planet to sun, mass of planet, spin, etc., I would not doubt that there are thousands of distinct "planetary climates/atmospheres" throughout a given galaxy.
At the scale of You and Me, we see an explosion of forms; millions upon millions of species and chemical compositions. This is the scale where the major forces (grav, EM, strong and weak nuclear) interact with perhaps the greatest degree of "balance", allowing an incredible complexity of "stable and semi-stable" forms to emerge and persist.
However ... as we scale even smaller, we see only about 100 stable elemental forms, and we are unlikely to see many more.
Down at the nuclear realm, things are stricter still: largely protons, neutrons and electrons, over and over.
In the blast of a supernova shockwave, or in the supercompression of a black-hole accretion disk, matter is torn asunder so deeply and thoroughly, it momentarily becomes a scrambled quark/lepton soup. Yet moments later, as this stuff spews out into open space, the excess energies are released as photons and a few other short-lived exotics, and the bulk of it quickly settles down into ... protons, neutrons, and electrons, in the familiar "elementary" nucleonic arrangements. WHY?
Because at that scale, the strong nuclear force totally dominates the other forces.
IBM researchers used a huge machine to move atoms one at a time, in order to spell "IBM" on a molecular substrate. Perhaps with nanomachines, this task will be made much easier.
But if you want to form nucleons into "new patterns", like forcing a cesium nucleus to form a figure-8, you will likely need to set up a resonance of enormous and continuous energy, because the stuff just DOES NOT WANT to take that form, and will snap back the moment you turn off the juice.
The idea that you might spell your name out in top quarks, let alone build the quark analog of a Swiss watch, is EXCEEDINGLY questionable. The stuff will not obey you.
Also, machines "run forward" by exploiting entropy, taking something fuel-like from a high-energy state to a low one (second law of thermodynamics) in order to produce "order" in the formation of some structure. It has been shown that as one approaches the QM scales, this "law" (naturally) breaks down (it IS a law of aggregates, a statistical phenomenon). What this means is that, at and below the nanoscale, it becomes increasingly difficult to make something that preferentially runs "forward" (and thus does "work") as opposed to running forward and backward. This inescapable QM effect places limitations on what can be done at and below the nano-level.
Cheers! ____tony b____
Re: Beyond Picotechnology
>"you need to clear your mind of these particular monkey-metaphores- <
Oooh, harsh words. Calm down a bit, setAI. This is supposed to be a discussion, not an outlet for your abusive tendencies.
>"you don't BUILD "machines" out of the ???/probabalistic/wave/ether that exists at Planck levels- you provide the conditions which ALLOWS Planck structures/ perhaps even Planck-"organisms" to EMERGE- you don't try to push things around and stick them together- "
So without pushing and pulling things (I would say material) around at that level, HOW do you propose to set the necessary conditions for these Planck objects to emerge?
Once again you are incoherently mixing your teenage-embryonic metaphors. ;) The only instance of emergent self-assembling structures we are familiar with is the self-organizing actions of complex molecular systems. These systems are built on tangible things called ATOMS. There is no such stable tangible object yet discovered at the Planck scale. ALL so-called "sub-atomic particles" are unstable and as far as we know, impossible to construct anything with. Quantum Theory itself asserts that the Quantum scale is fundamentally indeterminate, thus random, probabilistic and fuzzy.
You are right in your assumption that tangible "things" can emerge from the structure of quantum-scale matter. Atoms themselves are such things. However, though I have never said it was impossible, show me an example of how YOU, as a representative human being, can take something that exists as (according to theory) mere random, fuzzy, quantum probabilities (or, if we assume that they are wrong, a real physical fluid) and construct (thus manipulating what QM theory says is fundamentally un-manipulatable) the conditions necessary for human-designed self-assembling structures to emerge. I am merely pointing to the inherent difficulties.
Otherwise you can argue until you are even bluer in the face that it IS possible, but your arguments will remain merely baseless assertions with no evidence to back them up and lots of evidence to suggest otherwise.
>"machines are temporary and limited concepts- they are what happens when child gods are not yet sophisticated enough to create LIVING organisms for whatever needs/desires"<
I dig your child-gods metaphor, but machines are NOT concepts. ;) They are ACTUAL objects. ALL living organisms are built on machines. It is the level of complexity of those machines which is the arbitrary threshold between machinery and life.
>>"We don't see any dimensions. We make them up so that we can fit the data into our accounting system. Dimensions are not real and neither are probabilities, wavefunctions, zero dimensional point particles, etc." <<
>"absolutely correct- but you forget the main point: Dimensions/Probability/etc are illusory- but they are MORE REAL and CLOSER to "the Truth" than the fantasies/ontologies of our senses and "epirical" sciences based off of observation-"<
My point is simply that the empirical observations directly indicate that quantum reality is FLUID, hence the observed wave-nature of matter. I am not suggesting that we abandon scientific reasoning or mathematics, but merely that we recognize them for what they are. These things, abstractions that they are, are absolutely necessary for technology to exist and evolve, but it helps to know what the abstractions are ACTUALLY talking about.
>"once again I think you aren't aware of a fundemental Truth: the most abstract/bizarre mathematical absurdities/abstractions are CLOSER to "reality" and are better models/frameworks/metaphores of the true forms of Existence- than our so-called empirical/reductionist disciplines- "<
That is NOT a fundamental truth. That is simply your assertion that you like the standard model better than YOUR LIMITED IMAGE of the alternate model that I am proposing. You can like the standard model all you want, but that has never stopped the progress of science in the past. You will eventually see (if you live long enough) that Physics WILL shift to a fluid-dynamics driven, complexity-science based, simulational, emergent, physical model, based on a continuous and compressible fluid substrate. Mark my words.
BTW the "abstract/bizarre mathematical absurdities/abstractions" of current physics are a DIRECT result of the "so-called empirical/reductionist disciplines".
subtillioN
Theory of Elementary Waves: Critique
The Theory of Elementary Waves (TEW herein) is an interesting attempt to make sense of the quantum confusion of modern physics, but if the goal of the theory is to get rid of the weirdness and anti-causality of Quantum Physics then there are two HUGE fundamental problems with the theory.
Note: This theory focuses on the double-slit experiment that is supposed to show the wave/particle duality of the electron.
1. TEW posits waves as the fundamental constituents of matter. This poses a serious problem because, as we all know, a wave MUST have a medium in which to travel; after all, a wave is a traveling disturbance of a medium. The substance of a wave therefore IS the medium. You cannot separate the two. A wave cannot be an independent object. The idea of disembodied waves is one of the main non-causal problems of physics that needs to be explained; therefore positing a deeper layer of disembodied (therefore non-causal) waves does not get us any closer to a causal explanation of reality. ((Perhaps the REAL fundamental nature of matter is not waves, but a fluid in which the wave-nature of matter and energy would finally find a causal explanation for its existence.))
2. TEW also posits a sort of backward causation (one that doesn't work).
It says:
"The waves travel from the detector towards the source, through the slit, interfere and induce the emission of particles at the source. The particle then follows the path of the waves back to the detector. The intensity pattern has already been determined by the dynamics of the waves prior to the particles even reaching the detector."
Hmmm...
Let's visualize this process: the disembodied elementary waves are emitted from the photon detectors. They radiate outward from the detector array as a smooth planar wave-front straight through the intervening space (in which the interference would normally take place). They then strike the double-slit barrier. The waves passing through the slits form new wave-fronts at each slit, which spread toward the emission source, interfering constructively and destructively on the WRONG SIDE of the barrier. The interference pattern would not happen in the space between the slits and the detectors, as it would if the wave were traveling forward, and the interfering light cones would not form in the right direction either. The interference patterns would be spread in the direction of the source of the emission, where they would do no good. I don't think this was the result TEW was trying to achieve!
Furthermore, what determines when the EWs are emitted? Do they just wait around until they psychically sense that someone or something is going to turn on a light?
A simpler answer to the supposed wave/particle duality of the photon is this:
Suppose that the 'Zero Point Field' is not just an abstract field of mathematical probability, but that it actually physically exists, and that it is the fluid medium enabling the observed wave-nature of ALL matter. This fluid is the medium in which light-waves are a pressure disturbance.
If this is so, then when we sufficiently reduce the intensity of the continuous light waves in the double-slit experiment, why does only one photon-detector in the array go off at a time?... and why is the timing of this event sporadic and unpredictable? Doesn't this show that light is particulate?
First of all we have to understand what happens when a 'photon' is detected.
A photon-detection event is very complicated and requires a quantum-reaction between the incoming light-wave and a receiving atom. Remember that according to experimental evidence and current theory, all matter has a wave-nature. Therefore the reception of a 'photon' by an atom should be complicated by the harmonic interaction between the internal wave-nature of each. When the harmonics between the electron shell structure of the atom and the light-wave are just right, the reaction will take place. Therefore the quantum-reaction should be MUCH more complicated than a mere particle-particle collision. This is exactly what we see! The firing of a 'photon' detector is chaotic and unpredictable. You can never quite be sure when a quantum-reaction will take place. The general rate of the occurrence of the quantum-reaction is determined by the intensity of the light waves, so when the light intensity is low enough that the quantum-reaction is not happening successively then the complexity becomes readily apparent and the detector fires sporadically.
Note 1: The quantum-reaction measurement-event actually PRODUCES a 'photon', which simply is the amount of light-wave pressure required to elicit a quantum reaction. A 'photon' is therefore a product of the measuring apparatus. It is a quantitative measure, somewhat like a gallon or a spoonful.
Note 2: Just having the right amount of light intensity is not enough by itself to elicit the response. It also depends on the complex wave-nature (harmonic) interactions between the light-wave and the atom. This accounts for the sporadic nature of the detection event.
So how does this explain the double-slit experiment?
When the wave-front of a light-wave enters the two slits, the slits act as sources for two new wave-fronts. The new wave-fronts spread hemispherically and interfere constructively and destructively as expected. The waves spread through space, striking an array of 'photon' detectors in the familiar light and dark interference patterns. When the intensity of the wave-fronts is high enough, the quantum-reactions of the individual detectors within the light bands of the interference pattern occur fast enough that very many reactions happen at the same time. The photon detectors light up simultaneously, forming a dot pattern which roughly maps out the intensity of the continuous interference pattern of the light-waves, very much like the way a TV screen shows a pixelated image of the sky. When we reduce the intensity of the light, the quantum-reaction rates decrease accordingly, and fewer and fewer reaction events occur until finally we see only one event at a time. Though still present, the interference pattern becomes invisible and all we see is one quantum-reaction at a time at seemingly random places. When we accumulate the single quantum-reactions over time and add them together, the dot-pattern image of the interference pattern shows up again.
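This accumulation picture, whatever one thinks of the proposed mechanism behind it, is easy to simulate numerically. The sketch below uses the standard two-slit fringe intensity and samples sporadic single detections at rates proportional to it; the geometry values are illustrative:

```python
# Numerical sketch of the picture described above: a continuous two-slit
# interference intensity, sampled as sporadic single detection events whose
# accumulated counts reproduce the fringe pattern. Geometry values are
# illustrative; the fringe formula is the standard two-slit result.
import math, random

WAVELENGTH = 500e-9   # m, assumed
SLIT_SEP = 50e-6      # m, assumed slit separation
SCREEN_DIST = 1.0     # m, assumed slit-to-screen distance
DETECTORS = 21        # detector positions across the screen, 1 mm pitch

positions = [(i - DETECTORS // 2) * 1e-3 for i in range(DETECTORS)]
# Two-slit fringe intensity: I(x) ~ cos^2(pi * d * x / (lambda * L))
intensity = [math.cos(math.pi * SLIT_SEP * x / (WAVELENGTH * SCREEN_DIST)) ** 2
             for x in positions]

# Low light: one sporadic detection at a time, at a rate set by the intensity.
counts = [0] * DETECTORS
for _ in range(10_000):
    i = random.choices(range(DETECTORS), weights=intensity)[0]
    counts[i] += 1

for x, c in zip(positions, counts):  # accumulated dots map out the fringes
    print(f"{x * 1e3:+5.1f} mm  {'#' * (c // 50)}")
```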
When you don't know the nature of the quantum-reaction which produces the photon-detection from the continuous wave-front, it appears that light is striking the detector array as single sporadic particles, yet somehow falling into the familiar wave-interference patterns when accumulated over time. This misunderstanding is due to the fact that the wave-nature of ALL matter and energy, including the photon and the electron shell system of the atom, was not accounted for in the theoretical understanding of the 'photon' detection event.
It is the complex and therefore sporadic nature of the quantum-reaction and the dependence of the rate of this reaction on the intensity of the light-waves that makes the continuous light-waves appear particulate when it is assumed that a single photon strikes a single firing detector.
This is outlined and explained in a theory that uses no premises or constructions which contradict basic causal experience: no backward time propagation, no "spooky action at a distance", no unexplainable dualities or paradoxes, and no empty mathematical probabilities miraculously rendered physically real.
Check out www.anpheon.org
subtillioN
Re: The Alcor Conference on Extreme Life Extension
neat stuff, life extension...
we could probably achieve immortality within 5 years if the best neurologists, cyberneticians, AI researchers, etc in the world worked on it together. They would need a research budget in the tens of billions of dollars, and to "immortalize" a single individual would cost millions of dollars and probably take a few years from beginning to end.
When I work out some details, I'll write up an abstract on the process and post a link to it on this forum, but the core concept of the process is really simple:
it's a less aggressive version of Hans Moravec's uploading schema. Unlike Moravec's, "my" process requires neither nanotechnology nor superintelligent AIs to model your brain and translate it to software in an afternoon sitting. Rather, it utilizes proven technology that we have in working (albeit crude) form today. Here are the technologies that are essential to the process.
1) We have already developed and demonstrated working applications of the tech required to input and extract data through direct brain-machine interfaces.
2) We have the technology to monitor and record all output and input of a single neuron in vivo. This has been done, and preliminary work with this technique has shown that accurately modelling a neuron's responses to a given input is not the daunting task many thought it would be.
3) Integration of neuron-mimicking ICs into an existing, functional network of living neurons has been accomplished in vitro.
So we have all we need to make you immortal. Here's how we do it:
You go to the hospital for brain surgery. In the operation, the doctors implant arrays of micro-electrodes into your brain. The first operation only interfaces with one brain function, e.g. control of your right arm. The electrodes connect to a machine that contains IC-based "neurons". Computers analyse the inputs and outputs of the monitored 'wet' neurons, and develop a rough model of their responses to their normal 'wet' synaptic inputs.
The computer then configures the 'hard' neurons to act according to this model. After this is finished, the electrodes begin sending input to the 'hard' neurons, and relaying their output back to the 'wet' ones. Over time, the 'hard' and 'wet' neurons will adapt to each other, forming a naturally-structured network in which the 'hard' neurons are indispensable components.
As the two types of neurons more tightly integrate, more 'hard' neurons are added to the network, and 'wet' neurons are gradually removed.
After perhaps 6 months, 90% of your right arm control is handled by that network's 'hard' components. The remaining 'wet' neurons are kept as a back-up in case of partial hardware failure. Since neurons die over time, the living 10% of the network is kept 'fresh' by the periodic addition of fetal neurons.
6 months have passed; you return to the hospital for another round of brain surgery. The same procedure carried out in your first operation is completed, except this time a different part of your brain is interfaced. This time part of your, say, visual cortex is interfaced and slowly migrated from wetware to hardware. As before, 10% of the network remains wetware. Eventually, if you choose, the wetware can be fully removed without any ill effect. This is why: if two highly interconnected regions are migrated, the interconnects migrate as well; presumably the most tightly coupled regions would be migrated simultaneously. Area A's neurons would connect indiscriminately to Area B's neurons, and vice versa. This means that the 'wet' 10 percent of Area A's interconnect neurons are connected to 10% of Area B's interconnect neurons. Area B's interconnects are 90% 'hard' as well, meaning that perhaps only 1-2% of the total A/B interconnects are wet/wet. As more of your brain is migrated, the 'wet' components become more superfluous, since the % of wet/wet links drops with each 'hard' neuron added.
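The interconnect arithmetic checks out: with indiscriminate pairing, the wet/wet fraction of links is just the product of the two areas' wet fractions. A minimal sketch (the percentages are the poster's own assumptions):

```python
# Checking the post's interconnect arithmetic: if connections between two
# migrated areas pair neurons indiscriminately, the fraction of links that
# are wet at both ends is the product of each area's wet fractions.
# The 10% residue figure is the poster's assumption.

def wet_wet_fraction(wet_a: float, wet_b: float) -> float:
    """Fraction of A<->B links with biological neurons at both ends."""
    return wet_a * wet_b

for wet in (0.10, 0.05, 0.01):  # remaining wet fraction as migration proceeds
    print(f"{wet:.0%} wet in each area -> {wet_wet_fraction(wet, wet):.2%} wet/wet links")
```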
This goes on and on for maybe 10 years; at the end of your decade-long transformation, at most 1% of your total brain function is handled in wetware. Given the almost "holographic" nature of a large neural network, this 1% consisting of 'wet' neurons could all be removed at once and you would not even realize it... the biological 1% of your brain is no different in significance than the 'hard' neurons were when they handled a mere 1% of brain function, 10 years ago.
Most people will probably keep their few 'wet' neurons, for sentimental reasons or a reminder of what they once were, or whatever. Many will not, but neither group could tell which they were if somehow they suffered amnesia (a dysfunction of memory reduced to nothing but a memory of a now-impossible affliction :) a nice irony, I think...
It's taken you 10 years and 30 million dollars, but you are now free. Hang out near Alpha Centauri 3 billion years from now to get a front-row view of our home system's star going nova. Help build and settle a new universe in the final days of our universe's heat death. Or get bored after a couple millennia and commit suicide. Whatever. It's up to you...
spurk
standley@rcn.com
http://users.rcn.com/standley/AI/AI.htm
Science-Fiction-Free Post(tm)
Re: The Alcor Conference on Extreme Life Extension
SOMEONE SAID:
"we could probably achieve immortality within 5 years if the best neurologists, cyberneticians, AI researchers, etc in the world worked on it together. They would need a research budget in the tens of billions of dollars, and to "immortalize" a single individual would cost millions of dollars and probably take a few years from beginning to end.
COME ON! ARE YOU SERIOUS? IMMORTALITY IN 5 YEARS? THERE ARE SOME DREAMERS IN THIS FORUM. THIS IS A MASSIVE TECH PROBLEM. PLUS, THE WORLD NEEDS DEATH. WHAT WOULD HAPPEN IF WE WERE IMMORTAL?
THE FOLLOWING:
OVERPOPULATION (A TRILLION PEOPLE; THE WORLD WOULD BE DESTROYED. EVEN WITH 5 BILLION IT IS HARD PRESSED).
QUALITY OF LIFE: EVEN IF TECH COULD HANDLE A VERY MASSIVE POP, WHAT ABOUT REAL ESTATE? WE WOULD ALL BE LIVING LIKE SARDINES, A PATHETIC MISERABLE EXISTENCE.
DIFFICULTY FOR THE YOUNG, SINCE THE OLDER TYPES WILL HANG ON TO POSITIONS, NOT GIVING THE YOUNG OPPORTUNITY.
A ZERO BIRTH RATE WOULD HAVE TO BE UNDERTAKEN; THEN YOU WOULD BE LEFT WITH THE SAME OLD TIRED LOT. IT IS THE YOUNG, THE NEW, THAT BRING IDEALS, VITALITY AND INNOVATION. SOCIETY WOULD STAGNATE.
GOING INTO SPACE (TO HANDLE POP)? GREAT QUALITY OF LIFE THERE, IN A TIN CAN.
IMMORTALITY WOULD BE LIKE A CANCER TO THE UNIVERSE. I COULD THINK OF NOTHING WORSE THAN AN ENDLESS GROWING HUMAN MASS, CONSUMING EVERYTHING LIKE LOCUSTS. HUMANS HAVE SHOWN SUCH STABILITY AND WISDOM IN THEIR HISTORY, PLUS CONCERN FOR THE ENVIRONMENT.
THE ONLY FORM OF IMMORTALITY THAT IS FEASIBLE IS ELECTRONIC.