Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0554.html

The Future of Life
by Ray Kurzweil

A coming era of personalized genetic medicine, breakthroughs that radically extend the human lifespan, nanomedicine, and the merger of our biological species with our own technology were among the future visions presented at TIME's "The Future of Life" conference.


Published on KurzweilAI.net March 31, 2003

In a celebration of the 50th anniversary of the discovery of DNA's structure, TIME magazine recently held a conference, "The Future of Life," that brought together the architects of the genomic revolution to chart the future of biotech and its ramifications for humanity.

Richard Dawkins, Simonyi Professor for the Public Understanding of Science, Oxford University, said we will have "complete genomic maps for many thousands of species by 2010." Shortly thereafter, "we should be able to put together the genome for the 'missing link' between humans and chimpanzees, or something very close, and actually bring the missing link back to life." He also speculated about bringing "Lucy" (a celebrated hominid fossil), or at least a close genetic clone of Lucy, to life. His goal: to kiss Lucy.

Craig Venter, President of The Center for the Advancement of Genomics and founder of Celera, the first company to sequence the human genome, provided evidence for an exponential improvement in DNA sequencing comparable to Moore's Law in electronics. Sequencing the human genome originally cost $3 to $5 billion. "In the last few years, we've gone from that down to less than $100 million and now, because we have the genetic code once or twice, we can re-sequence all the genes in somebody for around maybe $300,000 now."

The cost is falling by a factor of two to three each year and will, "within a decade, get us down to $1,000." This will open the door to fully personalized medicine, in which people will routinely scan their entire genome and keep it on a postage-stamp-sized memory device, along with their entire medical record. This will allow medicine to focus on preventing disease, rather than just treating it once symptoms appear.
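Venter's figures imply a simple compounding calculation. A minimal sketch, assuming a fixed annual cost-reduction factor of 2.5 (the midpoint of his "two to three") and his ~$300,000 per-person resequencing cost as the starting point:

```python
# Sketch of the cost trajectory described above, assuming a fixed annual
# cost-reduction factor; the factor and starting cost are taken from the
# figures in the text.
def years_to_reach(start_cost, target_cost, annual_factor=2.5):
    """Count the whole years until cost falls to or below the target."""
    years = 0
    cost = float(start_cost)
    while cost > target_cost:
        cost /= annual_factor
        years += 1
    return years

print(years_to_reach(300_000, 1_000))  # 7 -- comfortably "within a decade"
```

Even at the slower factor of two per year, the $1,000 mark arrives in nine years, still within the decade Venter projects.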

Venter described the opportunities for biological forms of energy production. Archaea, microorganisms that live in very harsh conditions, are very effective at converting CO2 to methane or hydrogen. A synthetic species could be devised from a synthetic chromosome that would, like these archaea, produce hydrogen. By modifying the genes for photosynthesis, this process could be made highly efficient.

He described ways of preventing synthetic microorganisms from evolving into inadvertently destructive strains.  He also described safeguards that can be engineered to discourage tampering by bioterrorists.

As we go from the genome to its expression in proteins (the "proteome"), complexity increases tenfold, he said. The human body's 100 trillion cells express about 300,000 distinct proteins, so the proteome project will be far more complex than the genome project.

During the session "Lifespan: How Long? How Fun?" I presented my ideas on the potential to use this and other new knowledge to radically extend the human lifespan.  The knowledge we have today may be regarded as a "bridge to a bridge to a bridge."  Most of the deaths in contemporary society are caused by degenerative processes—coronary artery disease, type II diabetes, stroke, and cancer—that can be slowed, halted, and even reversed.  This knowledge can keep us healthy until the full flowering of the biotechnology revolution, which is just now beginning to unfold.  That in turn can keep us going until we have the opportunity to literally rebuild our bodies and brains with nano-engineered methods that are far more powerful than those used by our biological systems.

There are approximately ten different processes that have been identified as underlying human aging.  In each case, we can identify emerging methods that can counteract these processes.  For example, human somatic-cell engineering will provide the means to replace our cells with telomere-extended versions, essentially rejuvenating our tissues with age-reversed versions of our cells and tissues.  As another example, the recent discovery of the "FIR" (Fat Insulin Receptor) gene provides the promise of a drug that will allow people to eat as much as they want yet remain slim, while gaining many of the health benefits of being slim (including those of caloric restriction).

This result has already been demonstrated in mice, and the FIR gene appears to be the same in mice and humans.  The initial process of creating vulnerable plaque in the coronary arteries has been identified, and specific enzymes that would block this process, and thereby effectively halt coronary artery disease, have been described.  As we rapidly increase our understanding of the information processes underlying each disease, we will have the opportunity to develop sharply focused medications that effectively block these long-term degenerative processes.  Scenarios are being developed to deal with each source of degenerative disease and each aging process.

With the advent of nanotechnology as applied to biology, we will gain the means for maintaining human health and vitality indefinitely.  As we reverse-engineer human biological processes, we are discovering that reengineering those processes can improve their effectiveness many thousandfold.  For example, a human macrophage can take hours to destroy a bacterium (I've actually watched this process with one of my own white blood cells).  Analysis of Robert Freitas' conceptual design for a nanoengineered robotic macrophage shows that it could be hundreds or thousands of times more effective than its biological counterpart.

A "respirocyte" robotic replacement for our red blood cells, also designed by Freitas, would be thousands of times more effective than its biological equivalent. With these respirocytes, we could sit at the bottom of a pool for four hours or do an Olympic sprint for 15 minutes without taking a breath.  Freitas' detailed analyses have shown the feasibility of a DNA repair robot that could reverse the progressive increase in genetic errors, another source of aging.  Ultimately, nanoengineered robots inside the human body, traveling through the bloodstream, have the potential to reverse all known disease and aging processes. 

I pointed out that we will make more progress over the next several decades than most observers expect, because of the common failure to take into consideration the exponential increase in the paradigm-shift rate (the rate of progress).  We're doubling the rate of progress every decade, so the next 30 years will be like 140 years of progress at today's rate.
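The "30 years = 140 years" arithmetic can be checked directly: if the rate of progress doubles each decade, the coming three decades deliver two, four, and eight decades' worth of progress at today's rate.

```python
# If the rate of progress doubles every decade, each coming decade is
# worth 2x, 4x, 8x... a decade of progress at today's rate.
decades_ahead = 3
equivalent_years = sum(10 * 2 ** (d + 1) for d in range(decades_ahead))
print(equivalent_years)  # 20 + 40 + 80 = 140
```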

Baroness Susan Greenfield, Director of the Royal Institution of Great Britain, expressed strong skepticism about these scenarios, stating that such technology has yet to be developed and that I was underestimating the complexity of these genetically based processes. However, I pointed out that there are only about 30 million bytes of useful information in the human genome.

Trillion-fold increase in hardware and software by 2030

Jaron Lanier, Chief Scientist, Advanced Network and Services, Inc., composer and visual artist, and the person who coined the term "virtual reality," expressed skepticism about our ability to handle the complexity of information processes in simulating biological and neurological processes.  He asserted that we are not making exponential progress in software—compared to the rapid exponential pace of hardware (which is doubling every year or so)—and that such progress will be needed to handle the complexity of biological systems.

He proposed a different way of organizing software to keep up with hardware's enormous growth in power: rather than engineering each module with rigid functions and interfaces, we should build each module to communicate through a pattern-recognition paradigm with other modules, pointing out that this is how biology works, allowing for softer edges to the overall competency of a very complex system.

However, Bill Joy, Chief Scientist and Corporate Executive Officer, Sun Microsystems, was even more "optimistic" than I was about the ability to advance the power of software, indicating that software quality was advancing at the same exponential rate (i.e., doubling every year) as hardware. By 2010, we will see a thousand-fold increase in the price-performance of hardware, as well as a thousand-fold increase in the effectiveness of algorithms, he believes. A cellular simulation that takes a year of computation today will be able to be done in eight hours in 2010. This will allow "realistic simulations of cellular processes."

This will continue, and by 2030 we will see another factor of one million in hardware as well as software (in comparison to 2010), for an overall improvement of one trillion. He offered several one-trillion-to-one ratios to give perspective on how profound this is. A speedup of one trillion to one would reduce the entire history of the universe to one week. It is the "ratio of the power of an atomic weapon to a match head" or the "ratio of Bill Gates' wealth to a nickel." These powers of computation and algorithmic sophistication will allow "modeling complex biological systems at the level of physics by 2030."
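Joy's figures compound straightforwardly. A back-of-envelope check (my arithmetic, not his exact derivation):

```python
# Thousand-fold gains in hardware and in software by 2010, then another
# million-fold in each by 2030 (relative to 2010), per the figures above.
hw_gain_2010 = sw_gain_2010 = 1_000

# A cellular simulation taking a year of computation today, sped up a
# thousand-fold by 2010 hardware alone:
hours_per_year = 365 * 24
sim_hours_2010 = hours_per_year / hw_gain_2010
print(sim_hours_2010)  # 8.76 -- roughly the "eight hours" figure

# From 2010 to 2030: a million-fold in hardware times a million-fold in
# software gives the overall trillion-fold improvement.
overall_2030_vs_2010 = 1_000_000 * 1_000_000
print(overall_2030_vs_2010 == 10**12)  # True
```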

Joy was very concerned, however, with the downsides of these very powerful technologies.  He acknowledged that substantial increases in human lifespan were likely, but he was concerned with the empowerment of destructive individuals such as terrorists with these enormously powerful technologies. 

We need to consider today the impacts that these very powerful technologies will have in the future, he added. Some of the answers we will like, such as far more powerful treatments for disease. Some of the answers we won't like, such as providing far more powerful weapons to terrorists.

Paul Saffo, the panel moderator, asked the panel and the audience how long they expected to live.  Relatively few people in the audience indicated an expectation to live past 120 years.  My response of "at least a thousand years" was definitely close to the 100th-percentile mark among this group.  Assuming we all live as long as we expect to live, I should win this argument by default.

Commenting on the complexity of life, Lanier expressed his long-term fascination and love for cephalopods (e.g., octopuses). He made the point that despite their separate line of evolution, "some structures evolved in a very similar manner to humans." Examples include their eyes and features of their brain, including a cerebellum. Other features evolved very differently. For example, they gave up their skeletal system. An octopus can squeeze its entire body through a small hole.

The Internet and multicellular life

Larry Smarr, Director, California Institute of Telecommunications and Information Technology, drew a comparison between the growth of the Internet and the original evolution of multicellular life. Evolution discovered that there were advantages to organizing what had been individual cells into networks of multicellular organisms, which greatly facilitated communication among cells to improve the survival of the cells. Shortly after multicellular life started, "nervous systems evolved to further improve intercellular communication."

Similarly, the Internet has hooked together what had been separate computers that can now share information over long distances, he pointed out. The growth of the Internet has many biological features and has been developing like a multicellular organism, including a nervous system.

He predicted that rather than designing systems as we largely do today, we will create systems that have the dynamic qualities of living systems. This was similar to a point made by Lanier.

Smarr said we are beginning to understand the coding of genes and how they express themselves in metabolic networks. We are a long way, however, from truly understanding the flows of information in complex biological systems, he said.

During the session "The Next Frontier," I had the opportunity to present my ideas on the merger of our biological species with our own technology.  I pointed out that there are already many cyborgs among us.  The FDA recently approved a computerized neural implant for Parkinson's Disease that replaces the biological neurons destroyed by that disease.  This surgically implanted device communicates with its neighboring biological neurons in the same way that the original biological neurons do in the patient's "ventral posterior nucleus." 

As another example, there are already four major conferences on "BioMEMS" (Biological Micro-Electro-Mechanical Systems) covering contemporary efforts to place tiny diagnostic and therapeutic machines in the human body and bloodstream.  One scientist has already cured type I diabetes in rats with a nanoengineered device that releases insulin and blocks antibodies.  A similar approach should work in humans.  With continuing advances in miniaturization and the ongoing acceleration of the power of computation and communication technologies, during the 2020s we will be able to develop "nanobots"—tiny yet intelligent devices the size of human blood cells.  They will be able to navigate through the bloodstream, combat pathogens, and reverse human disease and aging processes.

Most significantly, these nanobots will be able to directly interface non-invasively with our biological neurons to greatly expand human experience and intelligence.  By interfacing directly with our sensory system from inside the nervous system, nanobots will be able to provide full-immersion virtual reality.  By creating virtual interneuronal connections, nanobots can literally expand the 100 trillion limit on our interneuronal connections, which is where human thinking takes place. 

GM foods: safety concerns vs. benefits

Matt Ridley, author of Genome: The Autobiography of a Species in 23 Chapters, interviewed on stage by Philip Elmer-DeWitt, Senior Science Editor, TIME magazine, presented "The Case for Optimism." He pointed out how often the bad things predicted from bioscience fail to come to pass. Genetic engineering of microbes was thought to be dangerous, he said. Genetically modified (GM) plants were also thought to be bad for the environment, yet they have repeatedly proven good for it.

Ridley made the point that genetically modified foods could help a return to small family farms. Large factory farms were created in part to help control pests, which is easier to do in large fields. However, GM crops allow far less pesticide to be used. This also helps the environment, the opposite of what the anti-GM movement has feared.

He discussed the European position against GM foods. The Europeans feel there is nothing in it for consumers, only for big business, and in particular American big business, and there is a general distrust of big business. The opposition also reflects a European backlash against the intensification of corporate agriculture versus rural farms.

Nonetheless, the mostly European participants in a panel on "The Politics of Genetically Modified (GM) Food" were distressed with the largely anti-GM stance of the European community.  Marc Van Montagu cited the use of an image of a scorpion over a bowl of corn flakes to describe the fear-mongering of GM protestors.  He pointed out that for over 1,000 years, we've moved genes around.  Before GM technology, it was done randomly, whereas now we are able to do it more knowledgeably.  He also pointed out the potential and actual environmental benefits of GM technology.  For example, 60 million hectares of crops use the Bt gene, which requires only a small fraction of the insecticide required by non-GM crops.

Ingo Potrykus called European attitudes on GM "hysterical" and said they unduly focused on risks without considering benefits.  He cited the blocking of desperately needed food aid to Africa because of unfounded "health" concerns about GM foods.  He said that 24,000 people die of malnutrition each day and GM foods have the potential to ameliorate this critical problem.  He called for a more balanced approach to assessing risks and benefits.

Several speakers heralded the health benefits of "golden rice," a GM food that addresses the vitamin A deficiency that causes half a million children to go blind each year.

Brian Halweil was the sole panelist expressing skepticism about the benefits and safety of GM food.  He argued for different forms of sustainable agriculture.  He expressed deep concern about the unintended consequences of mixing genes of species that don't ordinarily mix, something that was not possible with pre-GM forms of gene mixing.  He emphasized that there were unanswered safety questions, citing a GM soybean crop that is more susceptible to certain fungal diseases.

Discussing the danger of a terrorist bioengineering a new pathogen, Ridley said it would be very difficult to create pathogens worse than those Mother Nature has already created.

Regarding human cloning, he said he was not against it in principle, but opposed it currently on safety grounds. The current cloning technology is not yet perfected and introduces genetic errors, which were evident in Dolly, who was recently euthanized, and in other cloned animals.

Disappearing species

Edward O. Wilson, Pulitzer Prize winner and Pellegrino University Research Professor and Honorary Curator in Entomology, Museum of Comparative Zoology, Harvard University, described how little we know about life on Earth, and how rapidly it is slipping away from us, in terms of the rapidly rising rate of species extinction.  He said that there are between 3.5 and 100 million species, with insects and bacteria comprising most of the unknown species.  He said if you pick up a handful of soil, you'll have five to ten thousand species in your hand, most of which are unknown.  In an acre of a field, there are about a quarter million species.

According to Wilson, we also know very little about the species that inhabit the human body.  There are 300 forms of bacteria in our mouth alone, and we're still counting. 

Interestingly, the entire genetics field depends on the heat-stable enzyme that makes the polymerase chain reaction (PCR) possible, which was discovered in a hot spring in Yellowstone National Park.

Wilson said that 99 percent of all the species that ever existed are already extinct.  The average lifespan for a species is about one million years.  Before humans came along, about one species per million went extinct each year.  The rate of species extinction is now at least a thousand times higher. 
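Wilson's figures can be combined into a rough estimate of extinctions per year (the species totals and rates are his; the arithmetic is mine):

```python
# Back-of-envelope check: a background rate of one extinction per
# million species per year, now elevated at least 1,000-fold, applied
# to Wilson's low and high species-count estimates.
background_per_species = 1 / 1_000_000   # extinctions per species per year
multiplier = 1_000

for total_species in (3_500_000, 100_000_000):
    then = total_species * background_per_species
    now = then * multiplier
    print(f"{total_species:>11,} species: ~{then:,.1f}/yr historically, ~{now:,.0f}/yr now")
```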

James Logan, Chief of Medical Informatics and Health Systems for NASA, described some credible scenarios for exploration of the solar system during the next several decades.  The future of the human civilization, according to Logan, will not be limited to the Earth.

© 2003 KurzweilAI.net

   
 

Mind·X Discussion About This Article:

Software..
posted on 03/31/2003 11:07 AM by larsholm


It is hard for me to see that software efficiency should increase exponentially. It looks to me rather the opposite: software efficiency is decreasing. The reason for this is that software engineering expenses are rising while computational power gets cheaper, causing software to be created fast and therefore to be of poorer quality.
(Or is this just the impression I'm getting from using software from Microsoft!? :)

Re: Software..
posted on 03/31/2003 4:45 PM by Nascent1


It is hard for me to see that software efficiency should increase exponentially.


It is simply a game of keeping one's job. If software programming were simplified, two things would happen: one, all software would become one; two, the singularity (inclusive of all software) would accelerate and displace all knowledge industries... So watch out!


The cascade towards the singularity could be triggered by one wrong step away from the common social goals, which I believe are rather stupid!


Nascent1

Re: The Future of Life
posted on 03/31/2003 12:16 PM by clarkd


"thousand-fold increase in the effectiveness of algorithms" (quote from Bill Joy for year 2010)

I do believe that software will enjoy a new renaissance over the next 7 years. "Build each module to communicate through a pattern-recognition paradigm with other modules" (a quote from Jaron Lanier, not Bill Joy) will not be the breakthrough method of connecting modules together. The current method of publishing interface specs with international committees will also not be the correct solution. I believe that routines will have to go the object-oriented route, with each object capable of explaining to other modules what it can do and how to access those features. I believe that documentation and a computer-readable method of one module querying the capabilities of other modules will break through many of the connection barriers we have today.
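As an illustration of such a capability query, here is a toy Python sketch using runtime introspection; the class names and the describe() protocol are invented for illustration, not an existing standard:

```python
# Modules that other code can query at runtime for their capabilities,
# instead of relying on interface specs published in advance.
import inspect

class SelfDescribingModule:
    def describe(self):
        """Report this module's callable features and their signatures."""
        return {
            name: str(inspect.signature(fn))
            for name, fn in inspect.getmembers(self, inspect.ismethod)
            if not name.startswith("_") and name != "describe"
        }

class Sorter(SelfDescribingModule):
    def sort(self, items: list) -> list:
        """Return the items in ascending order."""
        return sorted(items)

# Another module can discover what Sorter offers before calling it:
print(Sorter().describe())  # {'sort': '(items: list) -> list'}
```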

The biggest software breakthrough will be flexible and inexpensive database routines built into new object oriented languages. All programs of any reasonable size require an extensive database and languages today have such things only as expensive add-ons. The result is that less than optimal database manipulating routines are created by programmers over and over. The database and communications (local and over networks of all kinds) must be part of the language and not expensive add-ons. When (and not if) this occurs, the progress of even large scale program systems (made by many loosely coupled programmers from around the world) will happen at a pace not seen before.

The breakthrough will not come from any large software companies that exist today. In fact, a large software company has too much bloat and bureaucracy to accomplish what I am talking about, especially Microsoft.

In software, you haven't seen anything yet. The next 7 years will be a blockbuster for software. Just don't own any stock in Microsoft, Computer Associates, etc.

Re: The Future of Life
posted on 03/31/2003 2:52 PM by Biffj65


Software efficiency has been DECREASING at a substantial rate. If you wanted to use "singularity logic" on it, you would end up projecting that in 20 years we will have infinite computing power and 100% inefficient software.

Consider that "singularity": Windows-AI doing nothing useful faster than you can possibly imagine...

Object Oriented programming does NOT make algorithms more efficient! It makes it easier for high level language programmers to write code. The result is more layers of abstraction, increased intermediate generality, and higher instances of "tacked-on" functionality. This is simple: less engineering, less efficiency.

Writing software in assembler makes for much more efficient algorithms. Writing in high-level OOP languages makes for much more productive programmers.

Re: The Future of Life
posted on 03/31/2003 6:03 PM by clarkd


You are right that there is a difference between 'efficient algorithms' and efficiently writing programs, but your comment that 'Writing software in assembler makes for much more efficient algorithms' is incorrect. Most modern compilers produce routines that run as fast as or faster than those programmed in assembler, and assembler is almost impossible to write correctly and maintain for larger programs. Most compilers produce longer code, which is not the same as less efficient; memory today is large, and except for micro-controllers, program size (within reason) is irrelevant.

The levels of abstraction are exactly what I am talking about when I say that software will increase incredibly in productivity in the not too distant future. It is the ability to abstract your program ideas and not rewrite and worry about program tasks that can be automated by the compiler. “Great ideas are built on the shoulders of other great ideas.” We have to stop reinventing the wheel and starting over, not just individually but as a sea of programmers.

'"tacked-on" functionality' is exactly what you should be able to do with well designed code. This is not a fault but the ultimate goal. You can never know when you start a large project just where it will go so you must make an architecture that is extensible to unforeseen functionality.

Re: The Future of Life
posted on 03/31/2003 6:45 PM by Biffj65


You are right that there is a difference between 'efficient algorithms' and efficiently writing programs, but your comment that 'Writing software in assembler makes for much more efficient algorithms' is incorrect. Most modern compilers produce routines that run as fast as or faster than those programmed in assembler, and assembler is almost impossible to write correctly and maintain for larger programs.


No, compilers do NOT produce more efficient algorithms than can be coded by hand. Efficiency of an algorithm is a balance between functionality and resource use. Higher-level languages make the programmer's job easier; they do not produce more optimal code.

Most compilers produce longer code, which is not the same as less efficient; memory today is large, and except for micro-controllers, program size (within reason) is irrelevant.


That is specious. The code is longer because it is less efficient. It is less efficient because it is replacing high level directives with generalized blocks of machine language. Your argument is the reason why we have operating systems that have swallowed the huge increase in hardware performance that has occurred over the last decade.

The levels of abstraction are exactly what I am talking about when I say that software will increase incredibly in productivity in the not too distant future. It is the ability to abstract your program ideas and not rewrite and worry about program tasks that can be automated by the compiler.


You are talking about increasing programmer productivity at the expense of code efficiency. That is the same thing that I said, but you are confusing the two. Software reusability is a benefit to the programmer, not to the program.

'"tacked-on" functionality' is exactly what you should be able to do with well designed code.


You are obviously not a software engineer if you are pro "tacked-on" code! Good code should be extensible, not patched with baling wire. There is a difference.

This is not a fault but the ultimate goal. You can never know when you start a large project just where it will go so you must make an architecture that is extensible to unforeseen functionality.


If the future functionality is unforeseen, how can you design for it? You can't. So you keep tacking on functionality until the program becomes unwieldy. If you are lucky, you get to redesign the foundations to accommodate the new functionality that exceeds the original design. But more likely you don't; you just slap special-case band-aids on and make your delivery date.

Here is the reality: software development has become easy enough that you can drag graphical objects around on a screen and do 90% of the programming that way. The price is that the resulting program is orders of magnitude less efficient than it would be if it was engineered and implemented from the ground up on the machine. The latter is obviously not practical, being that the inefficient version takes 2 days, whereas the ground-up version takes 2 years. But hey, in another 18 months the difference in execution speed will not be as noticeable....

Re: The Future of Life
posted on 03/31/2003 7:20 PM by clarkd


I most certainly am a software engineer and I have made a database compiler that sold over 28,000 copies worldwide. I programmed most of that program in over 32,000 lines of C over a 6-year period, with many upgrades and revisions along the way. The program had about 400 lines of assembler to interface with the screen and some keyboard routines. The basic structure of the program stayed the same as the program grew in size and functionality (I started to create the program in 1982 and I have clients that still use a version of the program to this day). I also programmed a word processor in over 38,000 lines of Z80 assembler that ran at 2 megahertz. The program had many of the features of present-day word processors, including table of contents, math routines, horizontal scrolling and many other features. Even though the program ran in under 30K of memory on a 2-megahertz machine, I got speeds similar to Microsoft Word on 1-gigahertz machines. (No graphical interface however!) I have programmed extensively on microcomputers, both in applications and systems programming, for over 26 years, and I have a university degree with a course in software engineering that I completed in 1999.

We can have differing opinions without throwing stones at one another, don’t you think?

Re: The Future of Life
posted on 03/31/2003 8:08 PM by Biffj65


'"tacked-on" functionality' is exactly what you should be able to do with well designed code.

You are obviously not a software engineer if you are pro "tacked-on" code! Good code should be extensible, not patched with baling wire. There is a difference.


My apologies if that sounded like I was casting stones; that was not my intention. Given the history you describe of one of your products, you obviously had a good "extensible" design. The term "tacked-on" connotes a kludge, a futz, baling wire and duct tape. That never leads to long-term product success.

So what I SHOULD have said was: I would be shocked if any software engineer promoted "tacked-on" code. Especially one that appears to have been in the field as long as I have, even if degreed more recently.

Based upon your assertion I assumed you were not a software engineer.

As my assumption was erroneous, I apologize and suggest that either: A - we have a difference in definition for "tacked-on", or B - I strongly (but respectfully) disagree.

I do however still maintain my opinion that you are failing to separate developer efficiency from algorithmic efficiency. Since not all readers may be as versed in software development, that is likely to lead them to incorrect assumptions, such as that software is getting more efficient.

Re: The Future of Life
posted on 03/31/2003 9:02 PM by clarkd


I accept the apology but I would like to make a distinction between the “efficiency of an algorithm” and the “efficiency of the coding of that algorithm”. One algorithm might be more efficient than another because it does the same thing in a simpler manner or with fewer steps, but that has nothing to do with how it is programmed. The “efficiency of an algorithm” is more than just the speed with which it executes. If a piece of code runs and appears instantaneous, what do you get when you make that function execute twice as fast? One half instantaneous is still instantaneous. The efficiency of a piece of code has no meaning unless it is put in the context of how it is used in a program. How often is it run? Are there people waiting for the output? Is it being processed in an overnight, unattended run?
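The distinction can be illustrated with a short sketch (the example is mine, not from the thread): two routines in the same language, both coded straightforwardly, where one algorithm simply reaches the answer in far fewer steps, independent of how either is programmed.

```python
# Same task, two algorithms: the efficiency difference belongs to the
# algorithm, not to the language or the coding style.
def linear_search(items, target):
    """O(n): examine items one at a time."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(items, target):
    """O(log n): halve a sorted search space at each step."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1_000_000))
# Both return the same index; linear search takes ~1,000,000 comparisons
# here, binary search about 20.
print(linear_search(data, 765_432), binary_search(data, 765_432))
```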

The idea of creating more complex code in less calendar time is the most important determinant of whether programming can progress at an exponential rate. Only code that is time-critical, or run most of the time, needs the extensive optimization of low-level languages, and such code is rare.

I guess that means that I will respectfully disagree with your conclusion.

Re: The Future of Life
posted on 03/31/2003 11:13 PM by Biffj65


The efficiency of a piece of code has no meaning unless it is put in the context of how it is used in a program. How often is it run? Are there people waiting for the output? Is it being processed in an overnight, unattended run?


I agree! Correctness, code size, data size, and speed of execution are elements of efficiency in an algorithm. The efficiency of the "solution" as a whole would also include the development time, supportability, and reliability. The right solution will optimize the characteristics that are most important.

The ability to create more complex code in less calendar time is the most important determinant of whether programming can progress at an exponential rate. Only code that is time-critical, or executed most often, needs the extensive optimization of low-level languages, and such code is rare.


I also agree with the last statement about optimization. The thing that I must take issue with, though, is talk of programming progressing at an exponential rate. As I asserted in another post, I see a most definitely non-exponential rate of increase in programmer productivity. Clicking together pre-built functional units is faster than writing them from scratch, but it is nothing new, and it only takes you 80% of the way there.

Re: The Future of Life
posted on 03/31/2003 8:17 PM by AlberJohns


software development has become easy enough that you can drag graphical objects around on a screen and do 90% of the programming that way. The price is that the resulting program is orders of magnitude less efficient than it would be if it were engineered and implemented from the ground up on the machine. The latter is obviously not practical, given that the inefficient version takes 2 days, whereas the ground-up version takes 2 years. But hey, in another 18 months the difference in execution speed will not be as noticeable....

The greatest cost of creating software is programmer time. A program which may run twice as fast, but takes two years to create as opposed to two days, is not worth the extra cost of creating it.

In any case, highly optimized software created with assembly language is simply not as necessary as it once was when the available hardware was so primitive. Even so, today's high-level programming languages really aren't that bad, and can often come very close to hand-optimized assembly language.

The exponential growth in the rate at which we can produce complex software is very real. The two important factors here seem to be the complexity of the software and the speed at which it can be created.

Re: The Future of Life
posted on 03/31/2003 8:37 PM by Biffj65


The greatest cost of creating software is programmer time. A program which may run twice as fast, but takes two years to create as opposed to two days, is not worth the extra cost of creating it.


I agree!

In any case, highly optimized software created with assembly language is simply not as necessary as it once was when the available hardware was so primitive. Even so, today's high-level programming languages really aren't that bad, and can often come very close to hand-optimized assembly language.


You make my point, but inversely: we no longer write efficient code because the increase in hardware performance absorbs the inefficiency. The problem is that we stack inefficiency upon inefficiency. How much fundamentally "better" has the software actually become? Not much. How much bigger and slower is it? Orders of magnitude.

I cannot agree about compilers coming close to hand-coded efficiency, not when you end up with 400,000-byte "hello" programs... But I am not promoting a return to assembler. I am disputing an increase in efficiency other than developer productivity. And even that is not of the magnitude suggested.

The exponential growth in the rate at which we can produce complex software is very real. The two important factors here seem to be the complexity of the software and the speed at which it can be created.


Who says that the rate of complex software development has increased exponentially? That is not the case at all. There were application generators around in the early 80's (I wrote one of them). There have most definitely been improvements, but absolutely not exponential ones. At best there has been a two- or threefold productivity increase between VS.NET and the application frameworks available in the early 90's.

Re: The Future of Life
posted on 04/01/2003 5:11 AM by Shrike


[First, I am not a programmer. However, I am a network administrator and freelance consultant experienced with thousands of programs, and have served as project manager and designer on software development projects. (Yes, I have herded cats.)]

While this is an interesting discussion, I think that both of you are focusing too much on the programmer himself.

In a discussion of the future of software, I think there needs to be consideration of the -lack- of human programmers. There has been a lot of work done on self-programming programs, and we now see IBM and Sun discussing "self-healing" in server operating systems. A few years back at a Defcon I had some drinks with a woman who was part of a group which was working on "Biological Modeling of Intrusion Detection". (University of New Mexico IIRC)

Rather than focusing on the difference in efficiency between low-level and high-level languages, I think it more important to consider the architecture which would allow for software systems to inter-(face/connect/communicate) and also what situations would allow software to modify itself.

One interesting thing which I've read about is a researcher who built a machine out of flawed processors (even ones which were missing I/O pins!) and then wrote an operating system to run on it. This is very important when we consider that nano-assembled processors or quantum computers will be vastly complex compared to silicon processors, and the more complex they are, the greater the chance that there can be a failure in any given section of the processor. This also has implications for massive parallelization, since a massively parallel system may have multiple sections fail. With the right OS, the system will continue to operate, even with huge gaps in its underlying hardware.

This is not so very different from the internet today. As McNealy said, "The network is the computer", and he's right. The internet comprises a system which is extensible and decentralized (please, let's not even discuss the DNS root server situation). Information flows between various disparate sub-systems, such as from web servers (many of which are database driven) into search engine databases. There was no "grand design" which allowed that to happen, and it certainly isn't particularly efficient, but the structure of the internet allowed that sort of information flow to evolve.

There are literally thousands of different ways in which software makes use of the internet to move information from place to place. These were made possible by standards, which any program could make use of.

Such standards also form the basis for designing self-evolving software. It's not important what hardware or OS the software runs on, or what language it was written in, or even how efficient it is. What is important is that it conforms to standards which dictate how it may modify itself. For instance, such software must not modify itself in such a way as to remove its own ability to communicate with other software.

Optimization is something which software could do to itself if it had the right rules and tools. In the simplest form, it would need to know what hardware and OS it's running on and what libraries it links to, and to have access to the right compiler. If it doesn't have that knowledge, it might ask a central repository where to find the routines it needs to find out, which it would incorporate into itself; once it knows what it needs to know, it can then ask the repository where to find the right compiler.

This whole process need not even be built into the original software at all. The original software need never know that it can be optimized or that optimizing even exists, as long as it can be communicated with by another software which is aware of optimizing. That other software may know all there is to know about optimizing, debugging and profiling. If it just makes a habit of asking every other software it sees, "Have you been optimized? No? Well then here, load this routine and hand me back the output.", then within a short time just about all software (that it could talk to) would be optimized.
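The scheme described above can be sketched as a toy Python example. Everything here (the `Optimizer` agent, the `load_routine` interface, the "optimization" itself) is a hypothetical illustration of the idea, not any existing system:

```python
# Toy sketch: an "optimizer" agent that knows about optimizing, talking to
# programs that have never heard of it, through one tiny shared interface.

class Program:
    """Any software conforming to the (hypothetical) standard interface."""
    def __init__(self, name):
        self.name = name
        self.optimized = False

    def load_routine(self, routine):
        # "Here, load this routine and hand me back the output."
        return routine(self)

class Optimizer:
    """Knows all about optimizing; the programs it greets need not."""
    def optimize_routine(self, program):
        # Stand-in for real profiling/recompilation work.
        program.optimized = True
        return program.name + ": optimized"

    def greet(self, program):
        if not program.optimized:
            return program.load_routine(self.optimize_routine)
        return program.name + ": already optimized"

programs = [Program("editor"), Program("mailer")]
opt = Optimizer()
results = [opt.greet(p) for p in programs]
print(results)  # every program ends up optimized without knowing how
```

The point of the sketch is that the capability lives entirely in the other party; the program only needs to honor the shared `load_routine` convention.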

Obviously there are issues, of which security and trust are perhaps the biggest. Today, Microsoft wants to automatically push updates out to servers and workstations, without even asking the people who manage those machines, and naturally systems managers don't like that, since a broken update could take down their whole network and it's even possible that having their systems offline for a sufficient amount of time could put them out of business. Why would any thinking person give that sort of power to a convicted monopolist?

Microsoft would prefer (and are working hard to achieve) a closed system that they control completely, somewhat like that of a cable television distribution system. If they had such a system, it is entirely possible that they could build self-evolving software today. After all, their software would only need to trust other software from the same source. The basis for this is in place already, which anyone who has ever installed a driver which hasn't been "certified" by Microsoft, or anyone who has ever run across a web server with an SSL key that wasn't signed by someone that Microsoft has approved of, can see.

With total control over access to the system, they wouldn't have to worry too much about unauthorized routines getting loose in the system. Of course, there would always be the possibility of someone cracking the system, but it would be much harder. When was the last time you heard about a cable television network being hacked?

The security and trust problems are greatly increased on a decentralized system such as the internet, but these problems are certainly not unfathomable or unsolvable. There are governments which are aware of the control benefits to them of a closed system, and thus are working on passing legislation to modify the internet, or their little chunk of it, into a closed system.

While the detriment to civil liberties which a closed system presents should have any thinking person up in arms, the reality is that a closed system is a much safer breeding ground for self-evolving software, and thus may very well be the path which leads us to such developments.

Unless, of course, we get off our asses and start solving the problems now, while the internet is still a decentralized and open system.

Re: The Future of Life
posted on 04/01/2003 5:38 AM by Biffj65


The "-lack- of human programmers" is the real trick, isn't it? There is a world of difference between "self-healing" networks and programs that modify themselves. A lot of the unreasonable optimism about genetic algorithms and such comes from a lack of understanding. Unfortunately, in many people's minds evolution has become almost a religion in and of itself, one that can work miracles just like those religious people attribute to God.

Here is the GA idea in a nutshell: you have little pieces of code that randomly mutate. Beneficial mutations you keep, the others you cull. Hence you apply random mutation and natural selection to software.

The "believers" state that because computers will be so dang fast it MUST lead to AI, because it is like compressing millions of years into months or weeks or days, etc.

But how do you determine if a mutation is beneficial for an algorithm? Natural selection is all about survival. How does that apply to an algorithm? Well, it doesn't; you have to code some test to determine if the change is beneficial or not. That is not evolution, that is refinement. There is a big difference between the two.
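To make the nutshell description concrete, here is a minimal Python sketch: mutate at random, keep whatever a hand-coded test scores as at least as good. The target string and fitness function are arbitrary illustrations:

```python
import random

random.seed(0)   # fixed seed so the run is repeatable

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The hand-coded "test" the post talks about: count matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

# Start from a random string; mutate one character at a time.
candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(candidate) < len(TARGET):
    i = random.randrange(len(TARGET))
    mutant = candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]
    if fitness(mutant) >= fitness(candidate):   # keep beneficial mutations,
        candidate = mutant                      # cull the rest

print(candidate)   # HELLO WORLD
```

Notice that nothing here "survives" in any natural sense: `fitness()` is a target we wrote ourselves, which is exactly the refinement-versus-evolution distinction being argued.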

Re: The Future of Life
posted on 04/01/2003 7:07 AM by Shrike


The "-lack- of human programmers" is the real trick, isn't it? There is a world of difference between "self-healing" networks and programs that modify themselves.


Is there? Just how difficult is it for a word processing program to determine that no one has ever used the built-in "mail merge" feature and to turn it off?

Or conversely, if the user happened to go to the help docs and type "mail merge" and the program replied, "Function available, but not installed. Would you like only information about the function, or should I get and install the function?"

Certainly this is a simple example, but nevertheless illustrative of my point. The software need never have heard of a "mail merge", if it can talk to some other software that has. "Mail merge" need not even be in the help docs, if the program can reach out and find it.
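As a rough Python sketch of that mail-merge scenario (the registry, feature names, and interfaces are all invented for illustration):

```python
# A program that lacks a feature asks a (hypothetical) remote registry for
# the code and incorporates it into itself on first use.

REMOTE_REGISTRY = {
    # Stand-in for "some other software that has heard of mail merge".
    "mail_merge": lambda template, record: template.format(**record),
}

class WordProcessor:
    def __init__(self):
        self.features = {}   # starts out never having heard of "mail merge"

    def invoke(self, feature, *args):
        if feature not in self.features:
            code = REMOTE_REGISTRY.get(feature)   # reach out and find it
            if code is None:
                return "no such feature: " + feature
            self.features[feature] = code         # incorporate it into itself
        return self.features[feature](*args)

wp = WordProcessor()
letter = wp.invoke("mail_merge", "Dear {name},", {"name": "Ada"})
print(letter)   # Dear Ada,
```

After the call, the word processor carries the `mail_merge` code itself; in that limited sense it has modified itself, using nothing but a lookup rule.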

(The latest versions of Microsoft operating systems have a function similar to this; if you run a search for a file on your hard drive, and the file isn't found, the OS (I think it may actually be the indexing service which handles this) will forward that query to MS's central servers.)

If the software can reach out and find that information, and if it can get the code it needs to do a mail merge, and incorporate it into itself, then it has the ability to modify itself.

True, in its simplest form it is nothing but a collection of rules, but that's a good enough basis from which to build self-evolving software.

[Note: I made no mention of AI, which is a term I don't much like, since obviously if an intelligence truly exists, then it is real and thus -not- artificial. If I were a machine intelligence, I might be insulted by being constantly called "artificial".

I will also note that I was referring to self-healing servers, not networks, since the particulars of self-healing are different for networks than for servers.]

A lot of the unreasonable optimism about genetic algorithms and such comes from a lack of understanding. Unfortunately, in many people's minds evolution has become almost a religion in and of itself, one that can work miracles just like those religious people attribute to God.


Oh I don't dispute this, if you look at some of my other posts you will see that I don't buy into the bullshit, whether it be hoopla or dogma.

Here is the GA idea in a nutshell: you have little pieces of code that randomly mutate. Beneficial mutations you keep, the others you cull. Hence you apply random mutation and natural selection to software.

The "believers" state that because computers will be so dang fast it MUST lead to AI, because it is like compressing millions of years into months or weeks or days, etc.


I understand the genetic algorithm concept quite well. I did not base my original post on it at all. Constructing software which evolves (meaning, in this context, software that changes itself to conform to outside influences) does not require genetic algorithms.

But how do you determine if a mutation is beneficial for an algorithm?


As you say below, you code a test. And again, I'm not talking about genetic algorithms.

To a word processor "beneficial" is completely irrelevant. It just needs to respond to a situation, such as fulfilling the user's desire to do a mail merge.

It would be up to the user to determine if that was beneficial or not.

Natural selection is all about survival.


True, but I'm not talking about either one.

How does that apply to an algorithm? Well, it doesn't, you have to code some test to determine if the change is beneficial or not.


In my original example of software self-optimizing itself, "beneficial" would be relevant, and of course, the software would have to execute tests to make a determination. There would be no need for genetic algorithms for this to happen.

This is not to say that such algorithms wouldn't be a useful addition to the software...they might be quite useful, but we don't need them to get the work done.

That is not evolution, that is refinement. There is a big difference between the two.


Actually, there isn't.

At dictionary.com, according to the American Heritage Dictionary, the first definition of "evolution" is:
1. A gradual process in which something changes into a different and usually more complex or better form.

According to the same source, the definition of "refinement" is (obviously the first one is irrelevant):
1. The act of refining.
2. The result of refining; an improvement or elaboration.


Evolution, or "to evolve", requires neither natural selection nor the pressure to survive.

Building software using genetic algorithms may require natural selection or survival, but building self-evolving software does not. All that is required is that the software have the ability to modify itself. Even if all it does is refine itself, it is still self-evolving.

Re: The Future of Life
posted on 04/01/2003 7:20 AM by Biffj65


You never answer the question of how it can modify itself. If you code in all of the methods whereby it can modify itself, then you are really just turning on and off options. Maybe turning them on and off leads to installing additional DLLs, maybe not. That is not self-modifying software. That is a user selecting which features to install.

How do you propose you go from that to software that makes itself better?

Re: The Future of Life
posted on 04/01/2003 7:32 AM by Biffj65


Oh yes, and as to the difference between evolution and refinement: the context in which we are considering the terms is more important than what you find at dictionary.com. Sorry if I used terms you are not familiar with in the context of the paper we are discussing.

Evolution is a process whereby improvement is achieved through interaction with an "environment", typically meaning over multiple generations. Refinement is incremental movement towards a predefined objective.

The issue is whether or not algorithms can be designed to improve themselves without direct human feedback. Otherwise it is training, which is an entirely different subject, and is not germane to the paper we were responding to.

Re: The Future of Life
posted on 04/01/2003 10:28 AM by Shrike


Oh yes, and as to the difference between evolution and refinement: the context in which we are considering the terms is more important than what you find at dictionary.com. Sorry if I used terms you are not familiar with in the context of the paper we are discussing.


First, it was you who broke context by trying to apply genetic algorithms, natural selection and survival to a problem which requires only rulesets. This was clearly a knee-jerk reaction, rather than one which showed any thoughtful consideration of the statements I had made.

Second, trying to insult me won't work, since there is no possible way that I would allow myself to become upset over anything which was said on an internet message board.

This does not mean that I won't react to the attempted insult. I had been preparing an example to post in reply to your message which said "You never answer the question of how it can modify itself."

I was willing to overlook your inability to focus on what I was saying, as well as your ridiculous substitution of the irrelevant genetic algorithm diatribe in place of a thoughtful reply to what I had said. However, now that you have shown that you are incapable of carrying on a discussion without becoming personally insulting, I see no need to waste any effort continuing this discussion with you.

Re: The Future of Life
posted on 04/02/2003 12:35 AM by AlberJohns


Evolution is a process whereby improvement is achieved through interaction with an "environment", typically meaning over multiple generations. Refinement is incremental movement towards a predefined objective.


There is no real distinction between "evolution" and "refinement". They are essentially the same thing. Evolution is a process of refinement, or "adaptation", over time.

The issue is whether or not algorithms can be designed to improve themselves without direct human feedback. Otherwise it is training, which is an entirely different subject, and is not germane to the paper we were responding to.


A neural network can improve over time by a process of "training", and this would certainly qualify as a form of improvement. Beyond that, there are many ways of automatically generating programs and algorithms from a very high-level specification of requirements. These methods will only improve with time as we use them to discover even better methods.
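As a concrete (if tiny) illustration of improvement by training, here is a single perceptron learning logical AND from examples. The learning rule adjusts the weights only when a prediction is wrong, so the program gets better without anyone editing its code:

```python
# Minimal perceptron trained on the AND function. Integer weights keep
# the arithmetic exact; this is a textbook toy, not a serious network.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0, 0]   # weights, adjusted by training
b = 0        # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # a few passes over the training data
    for x, target in data:
        error = target - predict(x)
        w[0] += error * x[0]        # learn only from mistakes
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in data])   # [0, 0, 0, 1] -- AND learned
```

AND is linearly separable, so the perceptron rule is guaranteed to converge here; twenty passes is more than enough.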

Now, imagine what will happen if we incorporate these techniques into a self-improving AI and give it the goal of improving itself. You will get a system which will improve at an exponential rate, because the rate at which such a system improves will depend upon the quality and capabilities of the self-improving software. As it improves itself, the rate at which it will be able to generate further improvements will increase.
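The compounding argument fits in one line of arithmetic: if each round of improvement is proportional to current capability, capability grows geometrically. A toy model, with constants that are arbitrary illustrations rather than predictions:

```python
# Each "generation" the system improves itself in proportion to its
# current capability, so capability is multiplied by a constant factor.

capability = 1.0
rate = 0.5                       # fraction of capability turned into gains
history = []
for generation in range(10):
    history.append(capability)
    capability += rate * capability   # a better system improves faster

# Capability is multiplied by 1.5 every generation: exponential growth.
print(history)
```

Whether real self-improving software would behave this way is of course exactly what the thread is disputing; the model only shows what "the rate depends on the capability" implies mathematically.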

Re: The Future of Life
posted on 04/02/2003 3:34 PM by Shrike


Howdy,

Now, imagine what will happen if we incorporate these techniques into a self-improving AI and give it the goal of improving itself. You will get a system which will improve at an exponential rate, because the rate at which such a system improves will depend upon the quality and capabilities of the self-improving software. As it improves itself, the rate at which it will be able to generate further improvements will increase.



With my original post in this thread, I was trying to steer the conversation towards discussion of an idea mentioned in Kurzweil's original article, specifically in reference to this quote:


"He proposed a different way of organizing software to keep up with hardware's enormous growth in power: rather than engineering each module with rigid functions and interfaces, we should build each module to communicate through a pattern-recognition paradigm with other modules, pointing out that this is how biology works, allowing for softer edges to the overall competency of a very complex system."


While this could be applied to a machine intelligence if we had one, I'm interested in discussing the application of this concept on non-intelligent systems, such as we have today.

If anyone else is interested in the subject, then I would be very interested in exploring it. Any takers?

Re: The Future of Life
posted on 04/04/2003 1:20 PM by AlberJohns


He proposed a different way of organizing software to keep up with hardware's enormous growth in power: rather than engineering each module with rigid functions and interfaces, we should build each module to communicate through a pattern-recognition paradigm with other modules, pointing out that this is how biology works, allowing for softer edges to the overall competency of a very complex system.


I don't believe this sort of programming will be needed for things other than AI. For business information systems and such, our current "rigid" methods of programming seem to be quite good enough. I see the usefulness of pattern recognition as being primarily within the field of AI.

Of course, AI which uses such pattern recognition could then be used to quickly generate more traditional "rigid" forms of software that could then be used for more traditional tasks such as business information systems, gaming, research, etc. As I see it, pattern recognition would be needed or useful only to the extent that such software incorporates AI functionality or features.

Re: The Future of Life
posted on 04/16/2003 1:30 AM by blue_is_not_a_number


While this could be applied to a machine intelligence if we had one, I'm interested in discussing the application of this concept on non-intelligent systems, such as we have today.

If anyone else is interested in the subject, then I would be very interested in exploring it. Any takers?


You might try running your favorite search engine with the term "Linda programming language". Although the data matching in the so-called tuplespace is not really a sophisticated pattern recognition, 'Linda' otherwise comes close to the description of the idea Jaron Lanier seems to talk about in that article. (There are also versions to use 'Linda' with the C programming language). I haven't used the programming language myself, but found it interesting, and used the idea of a tuplespace (in a simplified form) in a software project more than ten years ago. (For synchronous as well as asynchronous communication between multiple threads.)
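For anyone curious, the tuplespace idea can be sketched in a few lines of Python. This is a single-threaded toy (real Linda operations like `in` and `rd` block until a match appears; this version just returns None, and `in` is renamed `inp` because `in` is a Python keyword), meant only to show the matching-based coordination style:

```python
# Toy tuplespace: processes coordinate by writing tuples ("out") and
# matching them with wildcards ("rd"/"inp") instead of rigid interfaces.

ANY = object()   # wildcard, like a formal parameter in Linda

class TupleSpace:
    def __init__(self):
        self.space = []

    def out(self, tup):
        self.space.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is ANY or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        # Read a matching tuple without removing it.
        for tup in self.space:
            if self._match(pattern, tup):
                return tup
        return None

    def inp(self, pattern):
        # Take (remove) a matching tuple, like Linda's "in".
        for tup in self.space:
            if self._match(pattern, tup):
                self.space.remove(tup)
                return tup
        return None

ts = TupleSpace()
ts.out(("task", 1, "sort"))
ts.out(("task", 2, "merge"))
print(ts.rd(("task", ANY, ANY)))   # ("task", 1, "sort"), left in place
print(ts.inp(("task", 2, ANY)))    # ("task", 2, "merge"), removed
print(len(ts.space))               # 1
```

The writers and readers never name each other, which is the "softer edges" quality the Lanier quote upthread is after: modules meet through pattern matching on shared data rather than fixed call interfaces.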

Greetings,
blue_is_not_a_number

Re: The Future of Life
posted on 04/16/2003 11:47 AM by grantcc


Hey Blue,

Did you notice that today yellow is the number of the terrorist threat level? Actually, anything can stand for anything else, symbolically. A drop of water can represent a number or anything else. So can a color. Remember the lights in the church tower - one if by land and two if by sea.

Grant

Re: The Future of Life
posted on 04/16/2003 3:59 PM by blue_is_not_a_number


Actually, anything can stand for anything else, symbolically. A drop of water can represent a number or anything else. So can a color.


Of course, that's well known. Many intelligent authors have written many intelligent things about that.

The interesting thing about colors is that they are the most visible example of consciousness. Though perhaps not the most important example. More important might be to be aware of your outlook on life.

With colors being "so" visible, one can wonder about their "implementation", since one cannot describe mathematically how a specific primary color looks. Although it is being used as a qualifier, it is not a numerical, alphabetical or logical qualifier. So it must be something else, even though it operates within quantifiable phenomena. Then, we can also be aware of this whole matter (or 'non-matter'), and so our awareness is making a non-mathematical reference. Isn't that weird? :-D

As for the symbolic usage, you are right, and why should we spend any time elaborating on that if it is already well understood?

Remember the lights in the church tower - one if by land and two if by sea.


(I don't know what you are referring to here.)

Re: The Future of Life
posted on 04/16/2003 6:04 PM by subtillioN


With colors being "so" visible, one can wonder about their "implementation", since one cannot describe mathematically how a specific primary color looks.


This is a current technical limitation, not a limitation in principle. It is a problem with the current state of language and knowledge, and it says nothing whatsoever about the mechanisms of the conscious apprehension of color.

Although it is being used as a qualifier, it is not a numerical, alphabetical or logical qualifier.


Symbolism is at the higher level of abstract consciousness. Color (as the experience of color itself and not the words used to reference color) is NOT a symbol. Color is a highly complex neurological reaction to the sensation of specific electromagnetic frequencies in the retina.

So it must be something else, even though it operates within quantifiable phenomena.


Right, color is not symbolic, so any argument based purely on semiotics (mathematics, language, signs etc.) is superfluous.

Then, we can also be aware of this whole matter (or 'noon-matter'), and so our awareness is making a non-mathematical reference. Isn't that weird? :-D


It's not weird at all. Consciousness is at root NON-mathematical. This does not mean it is not causal nor does it mean that mathematical models of the mechanisms involved cannot be constructed. It just means that mathematics is not central nor even present in the mechanisms of consciousness. The mathematics is a symbol system that rests on top of the non-mathematical mechanisms. This is why it takes so long to learn.

Re: The Future of Life
posted on 04/17/2003 2:48 AM by blue_is_not_a_number


This is a current technical limitation, not a limitation in principle. It is a problem with the current state of language and knowledge, and it says nothing whatsoever about the mechanisms of the conscious apprehension of color.


The keyword is that you talk about a current _technical_ limitation. In order to explain (or describe) conscious experience, it would be necessary to have an equation that says:

'human conscious experience' is-suggested-to-be-equal-to 'technical description of processes'

The limitation in principle is on the human side: It is the human conscious experience of how a primary color looks, which cannot be described mathematically, and which can therefore not be part of such an equation. This is what I called 'conscious-how', how we experience something consciously. However, the information which we are conscious of (that is what I call 'conscious-size') will at some point in the future, I assume, be specifiable with such an equation, with 'measurable-size' on the "technical" side of this equation.

Symbolism is at the higher level of abstract consciousness. Color (as the experience of color itself and not the words used to reference color) is NOT a symbol. Color is a highly complex neurological reaction to the sensation of specific electromagnetic frequencies in the retina.

Right, color is not symbolic, so any argument based purely on semiotics (mathematics, language, signs etc.) is superfluous.


As Grantcc pointed out, even a drop of water can be used as a symbol, and so, he said, can colors. As the drop of water can of course exist independently of its potential function as a symbol, so can colors. That is, in fact, what I was always referring to when I wrote about "qualia in themselves" in our earlier discussion.

None of my arguments rested on the symbolic function of colors. Instead, they rest on our inability to describe mathematically what a certain color looks like. When a specific color appears at a certain point in the visual, conscious 3D image, then this certainly indicates information ('conscious-size') about what we see, and what I said was that it does so without the color being a numeric, alphabetic, or logical representation (although this is not part of my main argument, it is still part of the whole picture I have, and relates to Grantcc's message).

Sometimes you seem to point at the possibility that how we react to certain visual information could be explained by neural functions alone, without reference to conscious experience. I don't know if that might be so in some cases, but certainly in at least some cases our response can _not_ be explained without reference to conscious experience: When we talk about our actual conscious experience. Then we can say that we are aware of consciously seeing a color. (And this we do in a very similar way to saying that we are aware of seeing an object which looks blue.) So the conscious experience is at least in some cases a required causal element of our actions, and I would assume at least in all our conscious actions.

It's not weird at all. Consciousness is at root NON-mathematical. This does not mean it is not causal nor does it mean that mathematical models of the mechanisms involved cannot be constructed. It just means that mathematics is not central nor even present in the mechanisms of consciousness. The mathematics is a symbol system that rests on top of the non-mathematical mechanisms. This is why it takes so long to learn.


Yes, it is weird. :-D When you say that consciousness is non-mathematical, you seem to simply mean that we don't think in mathematical terms (which is usually true). When I am talking about the fact that some elements of the 3D image we see can be described mathematically, and some not, then I'm always talking about the possibility of, as you call it, "constructing a mathematical model". When we see a triangle, then that triangle appears in the conscious image we see. As I said above, one can investigate the possibility that the action of saying "I see a triangle" can be explained by neural functions alone, yet it is still clear that the conscious 3D image which we see can in this respect (also) be described mathematically (although we usually don't). That is the 'conscious-size' aspect. Only how a color looks to a human being (for example, versus how a sound sounds) cannot be described mathematically. This is the 'conscious-how' aspect. [Not to be confused with the so-called dual-aspect theories, which refer to 'measurable-size' and 'conscious-size' as far as I remember]. (Just for completeness, I additionally think the fact that we see a conscious image at all, is something that cannot be explained mathematically-physically. Which is, however, implied, since without the colors looking like something we wouldn't see any conscious image.)

At that point, it is not possible to construct a mathematical model. Since neuroscientists, to the best of my knowledge, talk about neural processes precisely in the sense that a mathematical model can be constructed, such processes cannot, in this sense, be considered a model for the whole reality of consciousness.

The weird part is that our awareness is able to reference this 'conscious-how' (which will remain without mathematical model). It is so weird that Wittgenstein seems to have missed it. Sorry, Wittgenstein. ;-)
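Blue's 'conscious-size' / 'conscious-how' distinction can be put in concrete terms with a toy sketch (mine, not from the thread): *where* a color appears in the visual field, and *which* color it is, are plain numbers, e.g. pixel coordinates and RGB triples; nothing in those numbers says what red looks like.

```python
# A toy illustration (not from the thread) of 'conscious-size' vs.
# 'conscious-how': the position and encoding of a color in an image
# are plain numbers, but the numbers are silent about how red
# actually looks to a viewer.

# A 3-pixel "visual field": (x, y) -> (R, G, B)
visual_field = {
    (0, 0): (255, 0, 0),   # red
    (1, 0): (0, 0, 255),   # blue
    (2, 0): (0, 255, 0),   # green
}

# 'Conscious-size': fully mathematical questions about the image.
red_pixels = [p for p, rgb in visual_field.items() if rgb == (255, 0, 0)]
assert red_pixels == [(0, 0)]   # where red appears is describable

# 'Conscious-how': the triple (255, 0, 0) selects a color the way a
# symbol does; it does not describe what the color looks like.
# Swapping the labels for red and blue everywhere would change no
# mathematical relation in the data.
```

The swap at the end is the crux of Blue's argument: the numeric description is invariant under a relabeling of the colors, so it cannot capture how any one of them looks.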

Re: The Future of Life
posted on 04/17/2003 6:56 PM by subtillioN

[Top]
[Mind·X]
[Reply to this post]

The keyword is that you talk about a current _technical_ limitation. In order to explain (or describe) conscious experience, it would be necessary to have an equation that says:

'human conscious experience' is-suggested-to-be-equal-to 'technical description of processes'

The answer WILL NOT be “an equation”. It will be a HIGHLY complex causal model of the processes involved. This model will take a great deal of intuitive understanding of the complex relationships between the mechanisms and the illusions (qualia) that they produce. The understanding will not be (and never is) exclusively quantitative.

The limitation in principle is on the human side: It is the human conscious experience of how a primary color looks, which cannot be described mathematically, and which can therefore not be part of such an equation.


How many people do you know of that are looking for ‘an equation’ for consciousness? Such a thing is a complete myth: similar to the myth of an equation that will describe the entire universe and still "fit on a T-shirt". Linear mathematics is simply not near that efficient nor descriptive to produce such a result (though a T-shirt is relatively HUGE ;).

This is what I called 'conscious-how', how we experience something consciously.


Then ‘conscious how’ is a highly over-simplistic mythical idea and is thus the root of your failure to quantify how the mechanisms of the brain can produce the experience of color. You haven’t even looked at the brain mechanisms themselves. This is certainly THE place you should be looking.

As Grantcc pointed out, even a drop of water can be used as a symbol, and so, he said, can colors. As the drop of water can of course exist independently of its potential function as a symbol, so can colors. That is, in fact, what I was always referring to when I wrote about "qualia in themselves" in our earlier discussion.


Every linguistic/symbolic medium has an existence apart from the symbolism itself. This is its causal structure that enables the higher level syntactic modifiable structure which we use to elicit programmed semantic responses in each other.

None of my arguments rested on the symbolic function of colors. Instead, they rest on our inability to describe mathematically what a certain color looks like.


Not ‘semantic function’, which is entirely superfluous, but semantic (e.g. mathematical) descriptions which are multiple and independent of that which they describe.

When a specific color appears at a certain point in the visual, conscious 3D image, then this certainly indicates information ('conscious-size') about what we see, and what I said was that it does so without the color being a numeric, alphabetic, or logical representation …


And I have repeatedly said that EVERYTHING in the mind at the sub-symbolic, raw awareness level is entirely independent of symbolism. This includes EVERYTHING that we see, hear, smell, taste, touch, etc., including the “3D image” (which is not really even 3D). The only thing in the mind that does USE symbolic representation is the very highest level of the linguistic ‘stream of thought’, i.e. ‘the inner voice’, but even those mechanisms are entirely independent of symbolic representation. They merely PRODUCE symbolic functionality: they are NOT produced BY them.

Sometimes you seem to point at the possibility that how we react to certain visual information could be explained by neural functions alone, without reference to conscious experience.


“Without reference to conscious experience” ? My point is that conscious experience CAN be explained by the highly complex functionality of the neural architecture.

I don't know if that might be so in some cases, but certainly in at least some cases our response can _not_ be explained without reference to conscious experience: When we talk about our actual conscious experience.


I don’t really know what you are trying to say by this because any explanation which rests on that which it is trying to explain is circular logic. This can be entirely avoided because ‘qualia’ simply exists at a higher level of the causal hierarchy. There are modellable mechanisms which produce everything qualitative about consciousness.

Yes, it is weird. :-D When you say that consciousness is non-mathematical, you seem to simply mean that we don't think in mathematical terms (which is usually true).


No. There is no mathematics involved in ANY of the functioning of consciousness. Mathematics CAN exist at the symbolic level, but it is not part of the functioning of the mechanisms of consciousness.

When I am talking about the fact that some elements of the 3D image we see can be described mathematically, and some not, then I'm always talking about the possibility of, as you call it, "constructing a mathematical model".


The ‘image’ in the mind is NOT 3D (distance is simply inferred by the mismatch between the retinal projections), and every time you mention constructing a mathematical model of it you are talking of modeling the actual objects at the opposite end of the sensory chain. The modelization should take place at the subjective end not the objective end. Do you get my meaning? Modeling an object mathematically or symbolically tells us NOTHING about how the mind represents an object. The only way to understand how the brain works is to model the brain not the objects that it sees.

(Just for completeness, I additionally think the fact that we see a conscious image at all, is something that cannot be explained mathematically-physically.


You are equating mathematics with physical reality? Big mistake!

The weird part is that our awareness is able to reference this 'conscious-how' (which will remain without mathematical model).


Don’t be so sure of that. Sorry, but your basis for this assumption is deeply flawed.


subtillioN

Re: The Future of Life
posted on 04/18/2003 1:08 AM by blue_is_not_a_number


How many people do you know of that are looking for ‘an equation’ for consciousness? Such a thing is a complete myth: similar to the myth of an equation that will describe the entire universe and still "fit on a T-shirt". Linear mathematics is simply not near that efficient nor descriptive to produce such a result (though a T-shirt is relatively HUGE ;).


(Am I missing something about how 'non-linear' mathematics would do any better?)

Unless you want to make a point about the difference between an equation and an algorithm (in which case I wouldn't see how that could be significant in the context of this discussion), there are quite a few contemporary scientists and philosophers on this 'list'. I would ask: Who is not? First, I would add everyone to this 'list' who argues in favor of 'Strong AI', the theory that a computer of the future could be conscious. This would include Daniel Dennett and Ray Kurzweil, and to a certain extent David Chalmers (who as always remains very flexible). Without going into the details of John Searle's views, in his discussion of 'free will' he writes that if he had to make a bet, he would be in favor of what he calls "Hypothesis 1". In my understanding, that adds him to the 'list', where he seems to be in good company. With Stephen Hawking it is not so clear to me anymore (in respect to the universe rather than consciousness specifically). He is lately talking about Goedel's theorem (which is mathematical itself). He used to say something like "The universe isn't mathematical, however it is well describable mathematically".

You are equating mathematics with physical reality? Big mistake!


This is a central example of how you are misunderstanding my whole argument. Again, I'll try to clarify:

1. I am _not_ saying that physical reality is mathematical.
2. I am _not_ saying that anyone else is saying that.
3. I _am_ saying that many contemporary scientists and philosophers make the following assumptions, either as working assumptions or as a definite world view:
a) All physical processes can be well described mathematically, at least in principle.
b) Physical reality is causally closed, meaning all physical events are caused by other physical events (if caused by anything at all, as some quantum physicists might want to add).

These two assumptions imply that any higher-level theories (chemical, biological, neural) do not add additional causal factors, but are simply the result of 'zooming-out' to a larger picture, and ultimately only summarize a more detailed 'zoomed-in' low-level physical description.

4. Not only do I _not_ agree with these assumptions, I decidedly disagree with them in the context of consciousness. I am actually trying to _disprove_ them in the context of consciousness. I am exploring how far the arguments put forward on my homepage go in this direction. In my personal view, pretty far.
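The 'zooming-out' picture Blue describes (and disputes for consciousness) can be made concrete with a standard toy example, my own and not from the thread: in Conway's Game of Life, the only 'physics' is the local cell-update rule, yet a higher-level regularity ("a glider moves one cell diagonally every four steps") holds without adding any causal factor beyond that rule.

```python
# A toy illustration of 'zooming out': in Conway's Game of Life the
# only 'physics' is the local update rule below, yet the higher-level
# summary "a glider moves one cell diagonally every four steps" holds
# without any added causal factor.
from collections import Counter

def step(cells):
    """One generation of Life; cells is a set of live (x, y) pairs."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# The 'zoomed-out' law: same shape, shifted one cell diagonally.
assert state == {(x + 1, y + 1) for x, y in glider}
```

Whether anything analogous holds for consciousness is exactly what is contested in this thread; the sketch only shows what the 'zooming-out' claim means for an uncontroversial low-level system.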

Since you misunderstand me on this subject, I can easily see how you misunderstand me on other points, and will leave it at that for now.

Don’t be so sure of that. Sorry, but your basis for this assumption is deeply flawed.


Aha. Thanks for letting me know. ;-D

Re: The Future of Life
posted on 04/18/2003 3:33 AM by subtillioN


(Am I missing something about how 'non-linear' mathematics would do any better?) Unless you want to make a point about the difference between an equation and an algorithm ...


There is quite a HUGE gap in complexity between an ‘equation’ or even an ‘algorithm’ and the vast, causal, electro-chemical, network-architecture model necessary for an understanding of conscious processes. The understanding of the mechanisms is much more aptly termed a physical model than an ‘equation’ or an ‘algorithm’.

Mathematics is merely one possible medium for a 4D (at least) model of the highly complex causal mechanisms involved. An equation or an algorithm simply won’t cut it.

The actual brain (or an actual physical model thereof) is a much more accurate model. ;) All we have to do is tweak the brain to know that the 'physical mechanisms' (whateverthatmeans) of the brain are directly responsible for the experience of color. Does this mean that the brain is ultimately just a bunch of randomly moving billiard ball atoms in time-reversible kinetic-atomic mathematically-defined collision? Absolutely NOT! The standard ancient-Greek atom-in-the-void model is highly antiquated and the mathematical wave-equations patched into the semantic-vacuum of the spaces between ALL the so-called ‘ultimate particles’, still can’t remotely explain what is physically happening at that level. Until the root substrate-problem is addressed, Quantum Theory can only give us paradox and complementarity, which is quite far-off from a physical/causal understanding of nature.

The question really comes down to, “what is ‘physical’ reality?” Is it inanimate, lifeless, inert, or is it ultimately absolutely mathematically-definable? I certainly don’t think so. In fact, as Quantum Theory and Einstein’s e=mc^2 shows, the basic reality of so-called ‘inanimate’, ‘atomic’ nature is highly fluid-dynamic with wave-energy moving at the minimum speed of light itself. There is nothing in reality that is devoid of this light-speed fluid complexity. This is the exact opposite of the inanimate mathematical billiard-ball over-simplified materialism espoused by modern physics that both you and I reject. Even the seemingly 'simple atom' possesses a continuity of infinite and animate complexity. That infinite complexity (as Ilya Prigogine has hinted at) is that which our crude time-neutered mathematical quasi-non-linear logic is eternally groping towards.

[[ Whether we call reality by the terms: ‘physical’, ‘material’, ‘ethereal’, ‘living’, ‘spiritual’, or ‘whatever’, ALL have their correct modes of reference. Reality is all of these things and more, far more complex than any one-word description.]]

subtillioN: You are equating mathematics with physical reality? Big mistake!
Blue: This is a central example of how you are misunderstanding my whole argument. Again, I'll try to clarify:


Here is what you said, “(Just for completeness, I additionally think the fact that we see a conscious image at all, is something that cannot be explained mathematically-physically.”

[note that I didn't say that you believe that reality itself was mathematical or even physical]

This seems to imply that a mathematical description is equivalent to a physical description. I claim that there is a huge difference between the two. You can have a quite accurate mathematical model of the quantities and ratios involved in physical processes while still having no clue whatsoever as to what is physically happening: quantum mechanics is the epitome of just such a situation: the Ptolemaic Earth-centric model of the solar system is another great example pulled from the vast archive of the history of physical science.

1. I am _not_ saying that physical reality is mathematical.


No, but you do seem to be saying that reality is not ‘physical’ and that a mathematical description is equivalent to a physical description.

3. I _am_ saying that many contemporary scientists and philosophers make the following assumptions, either as working assumptions or as a definite world view:
a) All physical processes can be well described mathematically, at least in principle.
b) Physical reality is causally closed, meaning all physical events are caused by other physical events (if caused by anything at all, as some quantum physicists might want to add).

4. Not only do I _not_ agree with these assumptions, I decidedly disagree with them in the context of consciousness. I am actually trying to _disprove_ them in the context of consciousness. I am exploring how far the arguments put forward on my homepage go in this direction. In my personal view, pretty far.


I would place ultimate qualifications on assumption (a), but there is no metaphysical, logically consistent or meaningful path around assumption (b). Every effect must have a cause in order for it to exist. Quantum mechanics is simply ignorant of the causes of its probabilistic wave-functions because the Physicists continue to deny the reality of the substrate for which the wave-nature of all matter and energy was originally evolved. They still believe the nonsense that the nothingness of ‘empty space’ can propagate waves. It is no wonder that they also believe in ‘randomness’ and ‘probabilities’ independent from ‘physical reality’. This is not too far from your intuited belief in the split between cause and effect.

I am still curious as to what you would put in place of causality or ‘physical reality’. Are you trying to open up a non-mathematical sanctuary for the spirit or God or are you trying to open a trap-door in the machine to allow the ghost to creep back in?

Your proof against these assumptions (by falsely claiming to address the mechanisms of consciousness) is superfluous and misleading because you are barking up the wrong tree [the contents-of-objective-reality-tree (3D descriptions of objects across the room) which is not even in the same metaphorical forest as the mechanisms-of-subjective-consciousness-tree which is on the other side of the subjective-objective quasi-continental-divide.]

Since you misunderstand me on this subject, I can easily see how you misunderstand me on other points, and will leave it at that for now.


I think you misunderstand my misunderstanding! ;)

I would be quite impressed if you attempted to show that the ACTUAL mechanisms of the BRAIN were non-describable via mathematics, which I think is true to an extent, and I wouldn’t be surprised if the scientists had quite a tough time replicating many of the subtle electro-chemical patterns via their relatively crude semantic-void mathematical models.

Since mathematics does not place limits on ‘physical reality’, however, this situation says nothing about whether ‘physical reality’ is ‘causally closed’ (which I think is a misnomer and dysphemism, because causality is anything but closed: it is infinite and continuous and it actually allows every possibility to exist, thus it is entirely open. Causality is the foundation of existence itself, and any escape from it would not be ‘freedom’ or ‘openness’. It would be nothingness).

The situation which you attempt to show in your ‘proof’ merely points to the limits of mathematics when modeling the infinite complexity of physical reality. Zeno’s paradoxes have been around for thousands of years, and these limitations of mathematics in the face of the continuum are quite well known. This doesn’t stop people from consistently assuming the inverse: that the paradoxes place limits on the continuity of physical reality itself. But then again, most people don’t know that a paradox is simply an example of faulty reasoning and that nature herself cannot function via paradox. Nature is FAR too complex to be that big of a bungler!
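For reference on the Zeno point: the dichotomy paradox concerns the series 1/2 + 1/4 + 1/8 + …, and a small sketch in exact rational arithmetic (my own illustration, not from the thread) shows the two halves of the dispute at once. Every finite partial sum falls short of 1, which is the sense in which finite descriptions never exhaust the continuum, yet the limit is exactly 1.

```python
# Zeno's dichotomy as a series: each finite partial sum of
# 1/2 + 1/4 + 1/8 + ... falls short of 1, yet the limit is exactly 1.
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 51):
    partial += Fraction(1, 2**n)

# Any finite stage is still short of 1 ...
assert partial < 1
# ... by exactly the remaining (uncrossed) interval, 1/2**50.
assert 1 - partial == Fraction(1, 2**50)
```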



Aha. Thanks for letting me know. ;-D


You are welcome ;)

Re: The Future of Life
posted on 04/18/2003 4:09 AM by blue_is_not_a_number


I would place ultimate qualifications on assumption (a), but there is no metaphysical, logically consistent or meaningful path around assumption (b).


If this actually means that you don't take (a) as a given, then that's already good enough for me. In that case I don't care about the distinction between physical and non-physical, as this distinction is then meaningless. I claim victory! ;-D

Re: The Future of Life
posted on 04/18/2003 10:35 AM by subtillioN


I claim victory! ;-D


excellent! Well done! ;-D

Re: The Future of Life
posted on 04/18/2003 10:54 AM by subtillioN


If this actually means that you don't take (a) as a given, then that's already good enough for me.


You were aware that I had claimed that "mathematics is ultimately limited" from the beginning, right?

In that case I don't care about the distinction between physical and non-physical, as this distinction is then meaningless.


I agree. The point was the clarification of just what 'physical' or 'material' means.

I claim victory! ;-D


Victory for you was the whole point. But without addressing my point that you examined the wrong side of the subjective/objective divide (the objective contents of consciousness "across the room" instead of the sensory/conscious mechanisms of the brain), "victory" for you means you must have abandoned your original erroneous claim to have proven that the experience of color is non-mathematizable. You should also be aware that these inevitable limits of mathematics (wherever they may lie) tell us nothing necessarily about the nature of physical reality; they simply reveal the limits of artificial mathematics.

Re: The Future of Life
posted on 04/18/2003 11:47 PM by blue_is_not_a_number


You were aware that I had claimed that "mathematics is ultimately limited" from the beginning, right?


Those limitations may fall inside or outside the requirements necessary for describing (physical) reality. Let me double-check: Are you saying there might be (aspects of) physical processes which are in principle not describable with mathematics?

Re: The Future of Life
posted on 04/19/2003 2:31 AM by subtillioN


Those limitations may fall inside or outside the requirements necessary for describing (physical) reality. Let me double-check: Are you saying there might be (aspects of) physical processes which are in principle not describable with mathematics?


‘Continuity’ is the major (and perhaps the only) aspect of physical reality that is *in principle* not ABSOLUTELY describable by mathematics. All processes, objects, structures, things etc. possess this continuity aspect. [Continuity is not equivalent to homogeneity, however, so the ‘quantum’ is not ruled out by this metaphysical point of view. In fact it is quite aptly explained by it.] Mathematics is a finite (or indefinite) product/process from the finite mind, therefore it is and will eternally remain indefinitely far from describing *absolutely* this continuity aspect. We finite human beings can never deal functionally with the absolute and the infinite, therefore these ultimate limitations are perhaps technically irrelevant even though metaphysically they are meaningful and crucial for maintaining realistically integrated categorical distinctions.

The mathematical limitations stem directly from the limitations of the mind, thus they will continue to recede as we grow more and more technically and mentally advanced. As mathematics morphs from a serial, linear and symbolic language into a parallel, non-linear and causal/simulational language (such as DNA or the brain/mind: i.e. cognitively integrated into A.I. and the manufacturing-prototyping-testing evolutionary processes of the future technology) the boundaries which define these limitations will likely transform entirely and the mathematics of the future may acquire many of the properties of life itself, with its evolved, highly adaptive and intelligent tricks for ‘intuitively’ modeling complex phenomena.

Mathematics is only unlimited in the sense that it can never reach its limits of precision. This is simply because continuous reality is ‘resolution independent’ (if I may borrow a phrase from CG terminology). There are no ultimate limits of precision.
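The precision point has a familiar concrete face in computing (my example, not subtillioN's): any finite representation samples the continuum at some resolution, so most numbers, even one as simple as 1/10, are only approximated.

```python
# Finite descriptions vs. the continuum: binary floating point can
# represent only finitely many numbers, so even 0.1 is an approximation.
from fractions import Fraction

# The float literal 0.1 is really the nearest representable binary
# fraction, not one tenth:
assert Fraction(0.1) != Fraction(1, 10)

# The familiar symptom of accumulated rounding:
assert 0.1 + 0.2 != 0.3

# Exact rational arithmetic fixes this for rationals, but no finite
# scheme enumerates all the reals -- there are uncountably many.
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```

This illustrates the gap between a representation and the continuum it samples; it does not by itself settle the metaphysical question debated in the thread.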


subtillioN

Re: The Future of Life
posted on 04/19/2003 3:07 AM by blue_is_not_a_number


‘Continuity’ is the major (and perhaps the only) aspect of physical reality that is *in principle* not ABSOLUTELY describable by mathematics. All processes, objects, structures, things etc. possess this continuity aspect.

[...]
...and the mathematics of the future may acquire many of the properties of life itself, with its evolved, highly adaptive and intelligent tricks for ‘intuitively’ modeling complex phenomena.

[...]There are no ultimate limits of precision.


SubtillioN, excuse the personal question, but may I ask how old you are?

Greetings,
Blue


Re: The Future of Life
posted on 04/19/2003 3:13 AM by subtillioN


why do you ask?

Re: The Future of Life
posted on 04/19/2003 3:58 AM by blue_is_not_a_number


I am asking because it seems to be difficult to tell.

Re: The Future of Life
posted on 04/19/2003 3:04 AM by subtillioN


Those limitations may fall inside or outside the requirements necessary for describing (physical) reality.


There are two related and integrated kinds of descriptive limitations: precision and complexity. I think the limitations on complexity are certainly the most debilitating.

As far as precision goes, however…

Due to the fact that an absolutely precise description is physically impossible (without being an exact replica and thus possessing no distinction as a description whatsoever), it seems that the limits of mathematical description will perhaps always remain just on the edge of "the requirements necessary for describing (physical) reality". This is because descriptions are a product of the mind and their limitations are based on the limitations of the mind (and its augmentations). Therefore as the mind grows to encompass the new problem sets emerging within its expanding horizons, so do the limits of mathematics expand because mathematics is an integral effect and now a contributing cause of that cognitive expansion.

If there ever is a finite and realistically solvable problem set that needs extra precision or complexity, we will build the functionality into the description. It is just a matter of time.

Re: The Future of Life
posted on 04/17/2003 7:54 PM by grantcc


Remember the lights in the church tower - one if by land and two if by sea.

(I don't know what you are referring to here.)


Does "The British are coming!" ring a bell? Or the name "Paul Revere"?

Grant

Re: The Future of Life
posted on 04/18/2003 12:08 AM by blue_is_not_a_number


Does "The British are coming!" ring a bell? Or the name "Paul Revere?"


No. Should it? ;-) [Since I don't know what you are referring to, I hope the smiley isn't out of place ;-)]

Greetings,
Blue

Re: The Future of Life
posted on 04/18/2003 2:43 AM by grantcc


I thought everyone was familiar with the midnight ride of Paul Revere:

JMU Editor's Note: Fighting in the American Revolution began with the famous "Shot heard round the world" at Lexington, Massachusetts, on April 19th, 1775. The fighting continued that day with the British defeat at the bridge at Concord and their retreat to Boston. Paul Revere, a Boston silversmith, was one of the men who warned the militia of the British sortie from Boston. His ride and the subsequent fighting were immortalized in the following poem by Henry Wadsworth Longfellow. Longfellow wrote the poem in 1861, many years after the event, and in an attempt to arouse patriotic feelings. He did not seek historical accuracy.


Paul Revere's Ride
Henry Wadsworth Longfellow

Listen, my children, and you shall hear
Of the midnight ride of Paul Revere,
On the eighteenth of April, in Seventy-five;
Hardly a man is now alive

Who remembers that famous day and year.
He said to his friend, "If the British march
By land or sea from the town to-night,
Hang a lantern aloft in the belfry arch

Of the North Church tower as a signal light,--
One, if by land, and two, if by sea;
And I on the opposite shore will be,
Ready to ride and spread the alarm

Through every Middlesex village and farm,
For the country folk to be up and to arm."
Then he said, "Good-night!" and with muffled oar
Silently rowed to the Charlestown shore,

Just as the moon rose over the bay,
Where swinging wide at her moorings lay
The Somerset, British man-of-war;
A phantom ship, with each mast and spar

Across the moon like a prison bar,
And a huge black hulk, that was magnified
By its own reflection in the tide.
Meanwhile, his friend, through alley and street

Wanders and watches with eager ears,
Till in the silence around him he hears
The muster of men at the barrack door,
The sound of arms, and the tramp of feet,

And the measured tread of the grenadiers,
Marching down to their boats on the shore.
Then he climbed the tower of the Old North Church,
By the wooden stairs, with stealthy tread,

To the belfry-chamber overhead,
And startled the pigeons from their perch
On the sombre rafters, that round him made
Masses and moving shapes of shade,--

By the trembling ladder, steep and tall,
To the highest window in the wall,
Where he paused to listen and look down
A moment on the roofs of the town,

And the moonlight flowing over all.
Beneath, in the churchyard, lay the dead,
In their night encampment on the hill,
Wrapped in silence so deep and still

That he could hear, like a sentinel's tread,
The watchful night-wind, as it went
Creeping along from tent to tent,
And seeming to whisper, "All is well!"

A moment only he feels the spell
Of the place and the hour, and the secret dread
Of the lonely belfry and the dead;
For suddenly all his thoughts are bent

On a shadowy something far away,
Where the river widens to meet the bay,--
A line of black that bends and floats
On the rising tide, like a bridge of boats.

Meanwhile, impatient to mount and ride,
Booted and spurred, with a heavy stride
On the opposite shore walked Paul Revere.
Now he patted his horse's side,
Now gazed at the landscape far and near,

Then, impetuous, stamped the earth,
And turned and tightened his saddle-girth;
But mostly he watched with eager search
The belfry tower of the Old North Church,

As it rose above the graves on the hill,
Lonely and spectral and sombre and still.
And lo! as he looks, on the belfry's height
A glimmer, and then a gleam of light!

He springs to the saddle, the bridle he turns,
But lingers and gazes, till full on his sight
A second lamp in the belfry burns!
A hurry of hoofs in a village street,

A shape in the moonlight, a bulk in the dark,
And beneath, from the pebbles, in passing, a spark
Struck out by a steed flying fearless and fleet:
That was all! And yet, through the gloom and the light,
The fate of a nation was riding that night;

And the spark struck out by that steed, in his flight,
Kindled the land into flame with its heat.
He has left the village and mounted the steep,
And beneath him, tranquil and broad and deep,

Is the Mystic, meeting the ocean tides;
And under the alders that skirt its edge,
Now soft on the sand, now loud on the ledge,
Is heard the tramp of his steed as he rides.

It was twelve by the village clock,
When he crossed the bridge into Medford town.
He heard the crowing of the cock,
And the barking of the farmer's dog,

And felt the damp of the river fog,
That rises after the sun goes down.
It was one by the village clock,
When he galloped into Lexington.

He saw the gilded weathercock
Swim in the moonlight as he passed,
And the meeting-house windows, blank and bare,
Gaze at him with a spectral glare,

As if they already stood aghast
At the bloody work they would look upon.
It was two by the village clock,
When he came to the bridge in Concord town.

He heard the bleating of the flock,
And the twitter of birds among the trees,
And felt the breath of the morning breeze
Blowing over the meadows brown.

And one was safe and asleep in his bed
Who at the bridge would be first to fall,
Who that day would be lying dead,
Pierced by a British musket-ball.

You know the rest. In the books you have read
How the British Regulars fired and fled ---
How the farmers gave them ball for ball,
From behind each fence and farm-yard wall,

Chasing the red-coats down the lane,
Then crossing the fields to emerge again
Under the trees at the turn of the road,
And only pausing to fire and load.

So through the night rode Paul Revere;
And so through the night went his cry of alarm
To every Middlesex village and farm,--
A cry of defiance and not of fear,

A voice in the darkness, a knock at the door,
And a word that shall echo forevermore!
For, borne on the night-wind of the Past,
Through all our history, to the last,

In the hour of darkness and peril and need,
The people will waken and listen to hear
The hurrying hoof-beats of that steed,
And the midnight message of Paul Revere.

Re: The Future of Life
posted on 04/18/2003 11:42 PM by blue_is_not_a_number

[...]
Paul Revere's Ride
Henry Wadsworth Longfellow
[...]


Thanks for the historical picture. What is 'meaning', basically?

Re: The Future of Life
posted on 04/19/2003 1:42 AM by grantcc

Thanks for the historical picture. What is 'meaning', basically?


Understanding a relationship between one thing and another. For example, the ability to understand the relationship between the color red and the need to stop your car or the color green and the right to proceed through an intersection. In this context, red "means" stop and green "means" go. In the poem, the lights in the church tower "meant" the British were coming by sea.
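The point that a symbol merely stands in for a meaning the parties agreed on beforehand can be sketched as a lookup table (my illustration, not from the thread): the lantern code from the poem is the prior agreement, and the lit lamps only select one entry from it.

```python
# The agreed-upon code: the symbol (number of lamps) selects a meaning
# that was fixed in advance by sender and receiver.
lantern_code = {
    1: "the British advance by land",
    2: "the British advance by sea",
}

signal = 2                   # "A second lamp in the belfry burns!"
print(lantern_code[signal])  # -> the British advance by sea
```

Without the shared table, the signal itself carries no meaning at all; two lamps are just two lamps.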

Grant

Re: The Future of Life
posted on 04/19/2003 2:17 AM by blue_is_not_a_number

Understanding a relationship between one thing and another. For example, the ability to understand the relationship between[...]In this context, [...]


In your examples, the symbols were used for a purpose. Understanding them requires knowing this purpose. The 'sender' and the 'receiver' have an agreement about the purpose of their communication. They specified two distinct meanings, and there were two symbols to 'select' which meaning applied in each case. So symbols only 'select' a meaning, they do not define it. What then defines a meaning?

Re: The Future of Life
posted on 04/19/2003 11:32 AM by grantcc

So symbols only 'select' a meaning, they do not define it. What then defines a meaning?


The meaning is defined by the people who set up the relationship. In the poem, it was Paul Revere and the people who were watching for the British in the church tower. For traffic lights, it's the people who design and install the system for regulating traffic and the people who agree to abide by that system. For language, it's the people who write, speak, hear and read words in a specific culture.

When a baby cries, the parent has to learn what that cry means. In some cases that cry means the baby is hungry and in other cases it means the baby is uncomfortable. What the cry means to the baby and what it means to the parent are often two different things. Eventually, the two converge and communication takes place. But the baby and the parent define the meaning between themselves through their actions and reactions.

Take the expression of pain, for example. Some people will tell you it's a reflexive action we have no control over. But an English speaker usually says "ouch," while a Japanese person says "itai," a Chinese person says "aiyo" and a Filipino says "apo." If it were not an agreed-upon expression developed by members of a specific culture, it would be the same worldwide, just like a baby's cry.

Grant

Re: The Future of Life
posted on 04/19/2003 12:18 PM by blue_is_not_a_number

[...]Eventually, the two converge and communication takes place. But the baby and the parent define the meaning between themseves through their actions and reactions.[...]


My thought was going along the same lines. Learning meaning is (at least often) based on experience. What more could we say? It is a new topic for me.

Re: The Future of Life
posted on 01/31/2005 3:57 PM by trait70426

one
i like the internet as a decentralised system.
i hope it could be a model of a possible anarchy
period
two
reverse engineering looks like an infinite process
three

Dawkins not a scientist
posted on 04/15/2003 3:18 PM by Clifford

Just what exactly has Professor of Public Enlightenment Richard Dawkins actually DONE?

Noam Chomsky said Dawkins's heroic contribution of Meme Theory did about as much to advance Linguistics as E.O. Wilson's theory of biology-based art did to advance the singing career of Eddie Murphy.

Dawkins isn't a scientist- he's a WRITER. If I want STORIES, I'll read frickin' Harry Potter.

And if Dawkins and Francis Crick are really serious about saving the universe from religion, they ought to have the balls to tear up a picture of the Pope on "Saturday Night Live".

Re: Dawkins not a scientist
posted on 04/15/2003 4:55 PM by sushi101

Dawkins is a biologist.

Re: Dawkins not a scientist
posted on 04/15/2003 5:05 PM by Thomas Kristan

FYI - Dawkins is a great biologist. IMHO second to Darwin. His theory is the right one.

Chimpsky ... has no clue. That's for sure.

- Thomas

Re: Dawkins not a scientist
posted on 04/16/2003 7:08 PM by AlberJohns

Dawkins isn't a scientist- he's a WRITER. If I want STORIES, I'll read frickin' Harry Potter.

Dawkins is both a scientist AND a writer. Carl Sagan was also a member of both categories. Your statement seems to imply that you have little respect for writers. Good writers are VERY important for communicating science and its implications to the public.

Without people like Dawkins, Sagan, and Kurzweil, the public would be almost entirely isolated from the culture of science. That would be bad because the public would remain ignorant, and science would die because of lack of public support. NEVER underestimate the power of a good writer. The pen is mightier than the sword.

Re: Dawkins not a scientist
posted on 04/16/2003 10:04 PM by claireatcthisspace

well said



Claire

Re: Dawkins not a scientist
posted on 04/16/2003 11:38 PM by Clifford

I LOVE writers-especially the ones who do popular science. I'm just saying Dawkins hasn't solved any problems- although he might be fun to listen to around the campfire.

Re: Dawkins not a scientist
posted on 04/17/2003 1:49 AM by Thomas Kristan

Richard Dawkins showed that the gene (a molecule) has to be seen as the fundamental unit of biological evolution, and not the species, as it was seen before him.

Dawkins invented memes.

Those two discoveries are highly important.

- Thomas

Re: Dawkins not a scientist
posted on 04/17/2003 5:02 AM by sushi101

Dawkins' meme theory and especially one book, "The Selfish Gene," are very important to science, particularly in relation to emerging areas such as AI and neuroscience.

I always seem to think that this Wittgenstein quote says it all.

“What a Copernicus or a Darwin really achieved was not the discovery of a true theory but a fertile point of view.”

Re: Dawkins not a scientist
posted on 04/17/2003 11:15 PM by Clifford

"AI and neuroscience"

Yeah, that's what his literary agent says, too, but when I read "The Meme Machine" by his disciple Susan Blackmore, I wondered if I hadn't spent two days eating Cheetos.

Where's the beef? Ivory Tower pontiffs are always spewing Wisdom. Is there ever going to be an APPLICATION?

I'm waiting for one of these Professors to descend from the mountain with a CONSCIOUSNESS DETECTOR. Until then, it's just poetry and vapor.

Re: Dawkins not a scientist
posted on 04/18/2003 3:12 AM by sushi101

So what was the application of Darwin?

You are confusing a number of things here.

We all know that Quantum Mechanics is a very important discovery. What we also know is that there are plenty of books out on quantum healing, quantum yoga, quantum mindreading and all kinds of other mumbo jumbo based on pure speculation.
The Meme Machine is not "The Selfish Gene."

What I think you fail to understand Clifford is that knowledge is good for different things.

Darwin didn't give us an application; he gave us an understanding that we have then been able to build theories upon.

No one is forcing you to read their stuff, but nonetheless, for some of us it is important knowledge that we apply to understanding fields like semantics, which in turn helps us create better AI systems.

Re: Dawkins not a scientist
posted on 04/18/2003 3:13 AM by sushi101

>CONSCIOUSNESS DETECTOR

Is well underway! I will find the link.

Re: Dawkins not a scientist
posted on 04/18/2003 11:28 PM by blue_is_not_a_number

I'm waiting for one of these Professors to descend from the mountain with a CONSCIOUSNESS DETECTOR. Until then, it's just poetry and vapor.


It is the possibility or impossibility of such a thing which is exactly one of the crucial questions. Personally, I think such a device is not just impossible, but a misconception in the first place. One cannot even ask the question in a meaningful way. Aside from that, poetry might be seen as a subjective term, and there you are already entering the troublesome area of circular logic... ;-)

Re: Dawkins not a scientist
posted on 04/19/2003 3:13 PM by Clifford

In "Intelligent Machines", Ray K. said, "Quantum mechanics means it's possible to build a CONSCIOUSNESS DETECTOR" -- But he hasn't said anything more about that in the past 10 years, and Penrose and Hameroff have disappeared into the cheap-labor restaurants of Tucson.

I want something that can tell me if an ant feels pain when you rip its legs off - and from there, something that can PROVE death is equivalent to a dreamless sleep like being under anesthesia.

(What if Frank Tipler's evil twin resurrects me in Hell?)

Re: Dawkins not a scientist
posted on 04/20/2003 4:24 PM by blue_is_not_a_number

In "Intelligent Machines", Ray K. said, "Quantum mechanics means it's possible to build a CONSCIOUSNESS DETECTOR" -- But he hasn't said anything more about that in the past 10 years, and Penrose and Hameroff have disappeared into the cheap-labor restaurants of Tucson.


Probably because it sounds more like a MECHANICS DETECTOR.

Re: Dawkins not a scientist
posted on 01/31/2005 4:10 PM by trait70426

they are now doing experiments on humans

check out

www.mindcontrolforums.com

notice the plural

these people have no idea what is happening to them.

they are ignorant

Re: Dawkins not a scientist
posted on 04/19/2003 5:34 PM by subtillioN

It is the possibility or impossibility of such a thing which is exactly one of the crucial questions. Personally, I think such a device is not just impossible, but a misconception in the first place.


I agree. Consciousness is not a force like magnetism or gravity that can be detected by some simple physical mechanism. The only way to 'detect' consciousness is to understand the mechanisms necessary for its creation. When you recognize the mechanisms in action, then you must infer that consciousness is the result. It is not really so mysterious.

Re: Dawkins not a scientist
posted on 04/20/2003 4:32 PM by blue_is_not_a_number

The only way to 'detect' consciousness is to understand the mechanisms necessary for its creation.


Who says consciousness is created by a mechanism? A mechanism like a clockwork? A very complex compressible liquid clockwork? How can a mechanism create something which is not a mechanism? What is consciousness if it is not a mechanism? Very mysterious.

Re: Dawkins not a scientist
posted on 04/20/2003 4:40 PM by blue_is_not_a_number

How can a mechanism create something which is not a mechanism? What is consciousness if it is not a mechanism?


Spelling correction: This should read: How can a mechanism create someONE WHO is not a mechanism? WHO is consciousness if HE/SHE is not a mechanism?

Re: Dawkins not a scientist
posted on 04/20/2003 5:06 PM by subtillioN

Who says consciousness is created by a mechanism? A mechanism like a clockwork?


Get the clockwork idea of ‘mechanism’ out of your head. ‘Mechanism’ simply means the causal structuring which produces an effect.

Who said ‘a mechanism’ anyway? I said THE mechanisms. The 'mechanisms' of consciousness are extremely complex. Not at all like clockwork.

How can a mechanism create someONE WHO is not a mechanism?


How can an effect be produced without a cause? And who says consciousness is NOT a mechanism?

WHO is consciousness if HE/SHE is not a mechanism?


Does consciousness have a function? Certainly! Its original function was the furthering of evolution through the survival of the fittest. It is now deciding its own function, far beyond its initial evolutionary function.

Consciousness IS produced by, and in turn IS, a ‘mechanism’ (in the “causal structure that produces an effect” meaning of the term and not in the “clockwork” sense.)

Very mysterious.


You can hang on to your mysteries engendered by your clockwork-model-of-mechanism, but to me the relationship between consciousness and causality is quite simple and understandable given the inevitable limits of the mind to comprehend its own substrate. The substrate of consciousness is necessarily orders of magnitude beyond the comprehensive understanding of the higher level abstraction of symbolic LINEAR stream-of-thought consciousness itself. Once you understand the necessity of this relationship of the sub-structural to the meta-symbolic then it really is not so mysterious.


The mystery question
posted on 04/20/2003 6:45 PM by blue_is_not_a_number

Consciousness IS produced by, and in turn IS, a ‘mechanism’ (in the “causal structure that produces an effect” meaning of the term and not in the “clockwork” sense.)


What is the mystery question? The question is whether the fact that we are conscious (as evidenced by the fact that we see colors) can be seen as an effect of neural-electrochemical-physical facts to be determined by neuroscience.

The question is not whether neuroscience is interesting or not, or whether neuroscience can explain how we process information (which may be seen as an important part of consciousness, but is not the whole story).

The question is also not whether the cause-and-effect principle applies to consciousness in general. Maybe besides physics in the contemporary sense and consciousness there is a third category, unseen to us, and both physics in the contemporary sense and consciousness are an effect of this third category, but then that depends on the definition of the word 'physics'.

The question is only whether consciousness can be seen as an effect of physics, or as part of physical reality in the sense in which physical reality is understood today. And sorry to repeat, it is understood today as described by mathematical formulas. How is that different from a clockwork, basically?

Simply making the a priori assumption that the cause-and-effect principle must also apply to consciousness just does not solve any mystery. It may put your mind at peace, but I have been around too long to fall into that trap.

Re: The mystery question
posted on 04/20/2003 11:20 PM by subtillioN

The question is whether the fact that we are conscious (as evidenced by the fact that we see colors) can be seen as an effect of neural-electrochemical-physical facts to be determined by neuroscience.


It can't be seen thus if you haven't learned the ‘state of the art’ of the field of cognitive science. Perhaps THAT is why it seems so mysterious to you. Anything can be mysterious if you don't understand the causal relationships behind it. It's like a magic show… not so mysterious once you get a feel for the tricks!

The question is not whether neuroscience is interesting or not, or whether neuroscience can explain how we process information (which may be seen as an important part of consciousness, but is not the whole story).


Processing information is exactly what consciousness is and does.

The question is also not whether the cause-and-effect principle applies to consciousness in general.


How do you propose the “cause-and-effect principle” applies to consciousness if not through the ‘mechanisms’ of the complex dynamics of the neural architecture? What causal level besides the neural architecture level do you think has enough dynamic complexity to produce consciousness and how do you suppose it works?

Maybe besides physics in the contemporary sense and consciousness there is a third category, unseen to us, and both physics in the contemporary sense and consciousness are an effect of this third category, but then that depends on the definition of the word 'physics'.


The “third category” is complexity itself. It is not something simple like the laws of physics. It is the properties of the higher magnitudes of complexity that the top level symbolic abstraction of consciousness can only glimpse through intuition and generalization.

I think you are looking for something simpler than that which is required for such an effect.

The question is only whether consciousness can be seen as an effect of physics or as part of physical reality in the sense in which physical reality is understood today. And sorry to repeat, it is understood today as described by mathematical formulas. How is that different than a clockwork, basically?


To think of it as ‘clockwork’ predisposes you to misunderstanding. Are clocks non-linear? Are they unpredictable? Do they evolve, adapt, and solve problems?

NO

Are there examples of deterministic systems that DO possess all of these qualities?

YES

For example:

Artificial Neural Networks learn and adapt and are very efficient despite their minuscule component complexity in comparison to the modules of the brain. Maybe you should use a more apt metaphor so you don’t unconsciously get steered into the infertile territory of clockwork physics?
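The claim that a deterministic system can "learn and adapt" can be illustrated with the smallest artificial neural network there is (my sketch, not anything from the thread): a single perceptron trained on the AND function with the classic error-correction rule. The data, weights, and update rule here are illustrative choices.

```python
def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

# training data for logical AND: ((inputs), target)
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]  # weights (integers keep the arithmetic exact)
b = 0       # bias

for _ in range(20):  # a few passes over the data
    for (x1, x2), target in samples:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out        # error-correction: this is the "adaptation"
        w[0] += err * x1
        w[1] += err * x2
        b += err

# after training, the perceptron reproduces AND exactly
for (x1, x2), target in samples:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
```

Every step is deterministic cause and effect, yet the system ends up solving a problem it was not explicitly programmed for; that is the sense in which such "mechanisms" differ from clockwork.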

Hmmmn… how about “network architecture”?...
or even “fractal” or "chaos" or "non-linear dynamics" would be a much better metaphor though still much too simple.

Metaphors are perhaps more important than you realize. If you use an inappropriate one you end up with an intuited mystery! It's quite a subtle and divisive effect.

Simply making the a-priori assumption that the cause-and-effect principle must also apply to consciousness just does not solve any mystery. It may put your mind at peace, but I have been around to long to fall into that trap.


If I were simply applying the “cause-and-effect principle” to consciousness then, yes, like you, I would have no clue as to how to quantify the color blue!

(I wouldn’t even attempt it, however, because quantification is not my preferred mode of understanding)

Re: The mystery question
posted on 04/21/2003 1:39 AM by blue_is_not_a_number

Processing information is exactly what consciousness is and does.


Here we go. Denial in perfection. A computer processes information. Did I miss a kind of information that could not be written down in a book, nor stored on a computer? So: Does a computer see colors? Or hear sounds? Or both? There would be no basis to say (and verify) that, so that is not a meaningful scientific approach.

Are there examples of deterministic systems that DO possess all of these qualities? YES


You mentioned these qualities: "complex dynamics", "neural architecture", "complexity", "non-linear", "unpredictable", "evolve", "adapt", "solve", "deterministic", "learn", "fractal", "chaos", "non-linear dynamics", "cause-and-effect".

None of these qualities brought the clockwork any closer to consciousness, except maybe in terms of triggering wishful thinking. For example, the term "complexity" without further substantiation is just a magical placeholder for nothing.

If I were simply applying the “cause-and-effect principle” to consciousness then, yes, like you, I would have no clue as to how to quantify the color blue!

(I wouldn’t even attempt it, however, because quantification is not my preferred mode of understanding)


Good that you don't attempt it, because you would be wasting an awful lot of time. Quantification, however, is not (primarily) a mode of understanding, it is (among other things) a basic scientific principle of exploration, analysis _and_ verification. Which world are you living in? The world of "Sorce Theory"? Sorry, that is something I really don't have a clue about, and that said I don't see how the metaphor of compressible fluids would add to the understanding of how come we are able to consciously see colors.

Especially if you say consciousness is 'processing information', it would seem to me that quantification were mandatory. Unless you have some kind of transcendental notion of information.

Re: The mystery question
posted on 04/21/2003 2:41 AM by subtillioN

A computer processes information. Did I miss a kind of information that could not be written down in a book, not stored on a computer?


Yes apparently so. The brain processes sensory information about the external environment, albeit in a COMPLETELY different way than does a computer.

So: Does a computer see colors? Or hear sounds? Or both? There would be no basis to say (and verify) that, so that is not a meaningful scientific approach.


Information is NOT dependent on binary, serial, digital computers. Information is simply a useful representational abstraction about some aspect of reality.

You mentioned these qualities: "complex dynamics", "neural architecture", "complexity", "non-linear", "unpredictable", "evolve", "adapt", "solve", "deterministic", "learn", "fractal", "chaos", "non-linear dynamics", "cause-and-effect".

None of these qualities brought the clockwork any closer to consciousness, except maybe in terms of triggering wishful thinking.


That is obviously because a clock doesn't possess those qualities. ;) It is more and more obvious to me that you do not understand what neural network architectures are capable of and the longer you stick to your clockwork metaphor the longer you won't get it.

My advice for you to dispel your mysteries (a euphemism for ignorance) is to abandon the ‘clockwork’ metaphor.

For example, the term "complexity" without further substantiation is just a magical placeholder for nothing.


Any term "without further substantiation is just a magical placeholder for nothing". For further substantiation, try reading up on the research in cognitive science. I don't have the time to tutor you on the vast subject.

Good that you don't attempt it, because you would be wasting an awful lot of time.


((as YOU have done?))

Which world are you living in? The world of "Sorce Theory"?


I am living in a world without internal inconsistencies, paradoxes or dualities: a world in which all the forces of nature are united and understood through a common causal substrate: a world in which the mind is a unified part of the process of universal action. Sorce Theory is merely a vehicle for this understanding.

Sorry, that is something I really don't have a clue about, and that said I don't see how the metaphor of compressible fluids would add to the understanding of how come we are able to consciously see colors.


A clue is necessary for an understanding.

Here’s one:
Try looking at the vast research in cognitive science instead of at the basic physics level of billiards or compressible fluids or 3D objects across the room.

Especially if you say consciousness is 'processing information', it would seem to me that quantification were mandatory. Unless you have some kind of transcendental notion of information.


My notion of ‘information’ is ‘platform independent’. It runs on ALL intelligent systems across the spectrum from amoeba to computer to human to zebra. It is a much more useful concept that way. ;) If you need a concept that is tied to computers or language use the word ‘data’.


subtillioN

Re: The mystery question
posted on 04/21/2003 3:42 AM by blue_is_not_a_number

I am living in a world without internal inconsistencies, paradoxes or dualities: a world in which all the forces of nature are united and understood through a common causal substrate: a world in which the mind is a unified part of the process of universal action. Sorce Theory is merely a vehicle for this understanding.


Oh, sorry for the disturbance. Thanks for the discussion!
:-)

Re: The mystery question
posted on 04/21/2003 4:33 AM by subtillioN

Oh, sorry for the disturbance. Thanks for the discussion!
:-)


Since you have never addressed the proper level of the cognitive 'mechanisms' of the brain you presented no real disturbance, just some confusion that needed to be cleared up. ;-)

...and you are welcome.

:-)

Re: The mystery question
posted on 04/21/2003 4:00 PM by subtillioN

Did you realize that it was you who first used ‘information’ in reference to the actions of the brain? Did you realize that this usage of the term does not refer to information that is stored in a computer or written down?


On second thought, it is now apparent that the "equivocation" on your part, that seems so clear to me, is entirely unintentional. The problem here is that you are assuming that there are two types of 'information' in the brain. There is the non-quantifiable 'information', (which you hadn't acknowledged as ‘information’) such as the experience of the color 'blue' and then there is the quantifiable ‘information’, such as the experience of an object across the room (which happened to fit your prior definition of ‘information’).

This duality of the subjective contents of consciousness (based on objective criteria) is what appears to me as equivocation. To you, however, it follows directly from your definitions.

This assumption of the duality of the quantifiability of subjective content, however, is entirely baseless, again because the argument doesn't address the proper level of 'mechanism' where the duality (if it really existed) would actually apply. Your argument deals exclusively with the objective contents of subjective experience and not the causal mechanisms or processes of consciousness itself.

ALL the information in the brain is represented in the same basic way. It doesn't make a difference if you are looking at a depthless figure of an equation on a white void featureless space in front of your eyes or at a "3D object" across the room (as if there were an absolute objective distinction between 2D, 3D or any D) or even if you are looking at the clear blue sky. ALL sensory and memory 'information' is stored and instantiated via active patterns in the evolving structure of the neural network architecture of the brain. There is no absolute duality in the substructure of the content of the brain, mind or consciousness. Thus there is no grounds for the duality you have postulated and thus the duality is actually an unintentional equivocation.


subtillioN

Re: The mystery question
posted on 04/22/2003 2:13 PM by blue_is_not_a_number

On second thought, it is now apparent that the "equivocation" on your part, that seems so clear to me, is entirely unintentional. The problem here is that you are assuming that there are two types of 'information' in the brain.


It seems you are seeking something like a post-discussion clarification:

Information:
I have used the term 'information' only in the quantifiable sense (in the sense of information that can or could be stored on a computer, aside from perhaps practical difficulties). I consider this the common usage of the word in the context of this subject and would think that anyone who uses it differently would need to point that out clearly and explicitly (unless maybe it were completely obvious from the context).
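For what it is worth, 'information in the quantifiable sense' has a standard formalization: Shannon entropy, the average number of bits needed per symbol of a message. A minimal illustration (my example, added for concreteness, not part of the original post):

```python
from collections import Counter
from math import log2

def entropy_bits(message):
    """Shannon entropy of a message's symbol distribution, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    # H = sum over symbols of -p * log2(p)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(entropy_bits("aaaa"))  # 0.0 -- no uncertainty, no information
print(entropy_bits("abab"))  # 1.0 -- one bit per symbol
```

This is exactly the kind of information a computer can store and transmit; whether it exhausts what consciousness involves is, of course, the question under dispute in this thread.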

Duality:
There is no duality in this context, just the fact that human reality (which of course must include consciousness) does not fit into contemporary concepts based on mathematics and quantification. Some phenomena fit better, others hardly at all, but the impression of duality arises from taking contemporary concepts as absolute. Nature simply has variety, whereas the line drawn by mathematics is abstract.

It has to do, for example, with the fact that it is inconceivable that it will ever be possible to tell or verify whether a computer might be seeing colors. No advancement in technology will change that as long as the computer is defined mathematically. (Unless the computer's physical implementation would start doing something other than just behaving according to the definition of a computer, which to the best of my knowledge nobody is suggesting. And even for that purely theoretical case a similar, only generalized, reasoning would apply.) The computer is used here for illustration; this is not an argument about practical limitations of technology. To illustrate this further, something of the kind of a human being 'inside' the computer would have to tell us about the presence or absence of consciousness, and if that were conceivable, we would have to rewrite history in a similar way (aside from the fact that this human being would simply tell us that the computer is in fact not seeing colors).

It happened to classical physics, and it will happen, I suggest, to physics as based on mathematical descriptions (though surely in a very different way), and consequently to neuroscience and cognitive science. The question in consequence is how long this will take to be acknowledged scientifically, and how many errors we will make due to ignorance of these facts about consciousness.

The above is not an attempt of a conclusive argument, that has been done before. (See also http://www.occean.com). It is just an (additional) illustration of some of the arguments for clarification of what has been written here before and in the previous discussion. This is also not meant to restart the discussion, only a response to that you seemed to have felt the need for clarification.

The crucial point:
How a primary color looks ('conscious-how','qualia') is not information, it is rather something like the 'display medium'. Although our attention usually focusses on the information which we are conscious of, we are also (potentially) aware of 'how' we are conscious of that information.

Got it? :-)

Re: The mystery question
posted on 04/23/2003 5:32 PM by subtillioN

It seems you are seeking something like a post-discussion clarification:


Clarification is clarification. I seek it in all its forms.

Information:
I have used the term 'information' only in the quantifiable sense (in the sense of information that can or could be stored on a computer, aside from perhaps practical difficulties). I consider this the common usage of the word in the context of this subject and would think that anyone who uses it differently would need to point that out clearly and explicitly (unless maybe it were completely obvious from the context).


Researchers in the field of cognitive science generally believe that EVERY process and aspect of consciousness is quantifiable to a sufficient degree. Thus they use the term ‘information’ freely in that domain. It is only the people who don't understand the mechanisms that assume that consciousness is not 'mechanical'. They think it is 'pure magic!' (i.e. not quantifiable) and thus it is not ‘information’.

We who have taken a glimpse behind the curtain to see the sleight-of-hand at work, know that the show is not really 'magic' and that there are real understandable (and quantifiable) tricks being adeptly used by the magician (brain) to cause the illusions.
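For concreteness, the quantifiable sense of 'information' both sides invoke here is standardly formalized as Shannon information, which measures content in bits. A minimal sketch (purely illustrative, not part of either poster's argument):

```python
import math

def shannon_entropy(probabilities):
    # H = -sum(p * log2(p)) over the outcome probabilities:
    # the standard measure of information content, in bits
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly 1 bit of information per toss;
# a heavily biased coin carries less, because its outcome is
# more predictable.
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits
```

This is exactly the sense in which information "can or could be stored on a computer": anything with a well-defined outcome distribution has a definite bit count. Whether qualia admit such a distribution is, of course, the very point under dispute.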

Duality:
There is no duality in this context, just the fact that human reality (which of course must include consciousness) does not fit into contemporary concepts based on mathematics and quantification.


Some of us can see the reality beyond mathematics. Consciousness fits quite easily into the causal picture with no intrinsic absolute mysteries whatsoever, but you have to get a feel for the mechanisms behind the 'magic' to dispel these mysteries and apply causality at the proper level. This seems to be impossible for a large number of people, hopefully (for your sake) you are not one of them. ..good luck!

Hint: you have to actually understand the state-of-the-art of the field. This will give you a good feel for the power of the highly complex neural network and modular level 'mechanisms'.

Some phenomena fit better, others hardly at all, but the impression of duality arises from taking contemporary concepts as absolute. Nature simply has variety, whereas the line drawn by mathematics is abstract. It has to do, for example, with the fact that it is inconceivable that it will ever be possible to tell or verify whether a computer might be seeing colors.



It really comes down to what it means to 'see color'. There are many different ways to define the phrase. Some require consciousness (which the computer obviously does not possess) and others simply require rudimentary 'sensory' information processing (which a computer can certainly handle).

No advancement in technology will change that as long as the computer is defined mathematically.


Computers ultimately don't have to be defined mathematically at all (for an analogy, see neural networks that can compute) and Artificial Intelligence is certainly not restricted to the substrate of binary, digital, serial computers. The original 'computers' were not mathematical at all; they were human. That is why they were so inefficient at mathematical computation, and why they had to be replaced.
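To make the neural-network analogy concrete, here is a minimal sketch (illustrative only, with hand-picked weights) of a single artificial neuron: the logical-AND behavior is never written down as a symbolic rule anywhere; it falls out of the connection strengths and threshold alone.

```python
# A minimal artificial neuron: computation emerges from weighted
# connections and a threshold, not from explicit symbolic rules.
def neuron(inputs, weights, bias):
    # weighted sum of inputs, then a hard threshold ("fires" or not)
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# These particular weights make the neuron behave as logical AND:
# nothing in the code "knows" what AND is; the behavior is implicit
# in the connection strengths.
and_weights, and_bias = [1.0, 1.0], -1.5

print(neuron([1, 1], and_weights, and_bias))  # both inputs active -> 1
print(neuron([1, 0], and_weights, and_bias))  # one input active  -> 0
```

Real biological networks are vastly more complex, but the point stands: the substrate computes without anyone having defined it in terms of mathematics first; the mathematics is our description of it.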

The computer is used here for illustration; this is not an argument about practical limitations of technology. To illustrate this further: something like a human being 'inside' the computer would have to exist who could tell us about the presence or absence of consciousness, and if that were conceivable, we would have to rewrite history in a similar way (aside from the fact that this human being would simply tell us that the computer is in fact not seeing colors).


When you understand the mechanisms and effects of consciousness, all you need to know is that they are present and active in the correct way in the organism (artificial or not); then you can ask the entity itself whether or not it is conscious. Based on how it answers you and what kind of relationship you can build with it, you may have enough information to make a judgment.

Consciousness is much too complex for a simple "consciousness detector".
You simply have to "know it when you see it". [If consciousness were simply a 'display medium', however, this might not be the case]

How can you prove to me that YOU are conscious? You can't, but I know that you show the intelligence that is integral to consciousness. Assuming that you are human, I also know that you have the necessary mechanisms to produce consciousness. That is close enough for me to make the necessary assumption that you probably ARE conscious.

The crucial point:
How a primary color looks ('conscious-how', 'qualia') is not information; it is rather something like the 'display medium'.


This is the critical assumption that is clouding your vision. Consciousness is not its own substrate ‘display medium’, though perhaps it could be viewed as a 'display' effect produced by a substrate ‘medium’. Consciousness is merely an effect, which can feedback into its own cause (thus changing itself in subtle and complex ways) and the immense complexity of the brain is the cause i.e. the 'medium' of this effect.

Consciousness is the evolved representational interface between the brain and the world. The ‘soul’, like ‘life’ itself, is a surface phenomenon (though this ‘surface’ is a metaphor for the complexity of the causal substrate upon which the phenomenon is dependent and emergent i.e. it does not have to look like a ‘surface’ to qualify as one, according to my definition).

Although our attention usually focuses on the information which we are conscious of, we are also (potentially) aware of 'how' we are conscious of that information.


The interface of consciousness is highly abstract and has the ability to represent anything that catches its attention, including portions of itself (such as color, sound, shape, smell, etc.). Qualia are simply how portions of this abstract interface appear as represented from inside the interface. Normally we don't even pay attention to it and we simply function as 'intended', as if the interface were not even there. The interface can also be used to vicariously observe its own substrate through the anatomical dissection of another human brain. It can thus formulate more detailed (though still HIGHLY simplified and abstract) self-representations in this round-about way.

It is the inevitable and vast simplifications of consciousness itself which make you represent it as a ‘medium’. What I am saying is that the ‘medium’ has a deeper causal level which is the network architecture of the brain.

The interface of consciousness is composed of sensory representations which we call ‘qualia’. Therefore to say that consciousness itself is the medium of ‘qualia’ is to propose circular logic. In my view, it amounts to saying that the consciousness (the medium) produces the components out of which consciousness is an amalgam, thus consciousness is self-caused. This ultimately explains nothing and furthermore we know that it is false because consciousness is easily manipulated by altering the chemistry, electricity or organization of the network architecture. Stimulate or remove one critical neuron and it can cause a whole cascade of dramatic changes in consciousness. Thus consciousness is not the medium of itself; the network architecture is its medium.

When you view consciousness as a medium separate from the causal mechanisms of the brain you then slip into dualism (if not substance then property dualism). For what is this medium composed of? How does it interact with the brain? At what level does the interaction take place?

When you deny the neural architecture level as the level of causal explanation then all of these new questions arise at the deeper level. This deeper level, however, simply possesses no observed architectural complexity sufficient to explain the intelligence or representational aspects of consciousness. This point of view thus complicates things immensely and this is exactly what Dennett is trying to steer people clear of.

If there is a dualism involved in consciousness then it is a "perspective dualism". It is the difference between what the active brain looks like from inside the interface of consciousness and what it looks like from the outside. Consciousness, like everything else, has many properties. The properties of consciousness that are observable at any one point in time are determined by where you are looking at it from. Since we can't step outside our own consciousness, because we ARE that consciousness, in order to see the full spectrum of the properties we must observe the outwardly viewable properties of consciousness vicariously. The dualism is an illusion of perspective. It is not intrinsic to consciousness itself. Consciousness does not actually possess two separate sets of properties. It can simply be seen from two vastly different vantage points. This is why the 'dualism' always falls along the subjective/objective quasi-divide.

Got it? :-)


I understand what you are trying to say, yes.


subtillioN

Re: The mystery question
posted on 04/23/2003 11:32 PM by blue_is_not_a_number


I understand what you are trying to say, yes.


No, sorry, you do not. I don't know who you are talking to, but it is not me.

Re: The mystery question
posted on 04/24/2003 12:58 AM by subtillioN


No, sorry, you do not. I don't know who you are talking to, but it is not me.


Lol...ok =)

Re: The mystery question
posted on 04/24/2003 1:57 AM by blue_is_not_a_number


Lol...ok =)


Well, I wasn't trying to be funny. The good part is it means I don't see an actual disagreement with the arguments themselves. For example:

Me:

The crucial point:
How a primary color looks ('conscious-how', 'qualia') is not information; it is rather something like the 'display medium'. Although our attention usually focuses on the information which we are conscious of, we are also (potentially) aware of 'how' we are conscious of that information.


You, in your response to the first sentence (as you respond to each sentence separately):

This is the critical assumption that is clouding your vision. Consciousness is not its own substrate 'display medium', though perhaps it could be viewed as a 'display' effect produced by a substrate 'medium'. Consciousness is merely an effect, which can feedback into its own cause (thus changing itself in subtle and complex ways) and the immense complexity of the brain is the cause i.e. the 'medium' of this effect.

Consciousness is the evolved representational interface between the brain and the world. The 'soul', like 'life' itself, is a surface phenomenon (though this 'surface' is a metaphor for the complexity of the causal substrate upon which the phenomenon is dependent and emergent i.e. it does not have to look like a 'surface' to qualify as one, according to my definition).

Re: The mystery question
posted on 04/24/2003 2:21 AM by subtillioN


The good part is it means I don't see an actual disagreement with the arguments themselves.


excellent

The main point in that passage was in specifying at what level the causality of consciousness resides. I claimed that it is at the neural network level. I supposed you were thinking that consciousness is caused at a deeper level and that somehow this causal level is beyond mathematical approximation.

Is this correct?

It seems to me that the only way for causality to be beyond mathematical approximation is if the scale of action is beyond scientific observation i.e. micro/macro-perceptual. If we can observe it then we can approximate it. Do you believe that the causal level of consciousness is sub-quantum?

Re: The mystery question
posted on 04/24/2003 3:28 AM by blue_is_not_a_number


The main point in that passage was in specifying at what level the causality of consciousness resides. I claimed that it is at the neural network level. I supposed you were thinking that consciousness is caused at a deeper level and that somehow this causal level is beyond mathematical approximation.


For easier reference, here the passage which you are referring to:

blue_is_not_a_number: How a primary color looks ('conscious-how', 'qualia') is not information; it is rather something like the 'display medium'. Although our attention usually focuses on the information which we are conscious of, we are also (potentially) aware of 'how' we are conscious of that information.


The main point was to clarify this misunderstanding in the preceding message:

SubtillionN: The problem here is that you are assuming that there are two types of 'information' in the brain.


This passage was meant to clarify that I am not talking about two different types of information. It said that the 'how-a-primary-color-looks', which is what I call 'conscious-how', is not information, rather it is 'how-information-is-displayed' in consciousness. This is independent of what causes or constitutes consciousness (this display). So I am making a distinction between two cases:

a) We are conscious of (visual) information. (By seeing it in color)
b) We are aware of 'how' we are conscious of information. (By being aware of 'how' a specific color looks to us.)

This means I am not talking about two types (a duality) of information. Instead I am talking about information and qualia.

Of course, this was only a minor clarification within a larger context. But perhaps you can see in this example how my attempt to clarify one misunderstanding leads to the next one.

As a side note, I am not trying to specify the causality of consciousness at all. I don't have to. We don't know what causes energy/matter either (if it is caused by anything). I am only arguing that consciousness cannot be caused by a process which is well described mathematically. But that is already a different question than the one I discussed above, in that passage.

Re: The mystery question
posted on 04/24/2003 4:59 PM by subtillioN



This passage was meant to clarify that I am not talking about two different types of information. It said that the 'how-a-primary-color-looks', which is what I call 'conscious-how', is not information, rather it is 'how-information-is-displayed' in consciousness.


Information is nothing but the organization of matter into semantically useful structures. Are qualia semantically useful?

You have yet to prove that consciousness is not an effect whose causation can be understood, represented and quantified, thus you have not proven that it too (like everything else) can be represented as information.

This is independent of what causes or constitutes consciousness (this display).


Since when has a ‘display’ been devoid of the properties of information? Every example of a display that we know of has many layers of information. It has the information specifying how the display works, what it is made out of and it also has the information that is actually displayed. You can’t separate the information displayed from the display mechanism. They are causally inseparable.

So I am making a distinction between two cases:

a) We are conscious of (visual) information. (By seeing it in color)
b) We are aware of 'how' we are conscious of information. (By being aware of 'how' a specific color looks to us.)

This means I am not talking about two types (a duality) of information. Instead I am talking about information and qualia.


You may think you are not, but you are. You are saying that the display (qualia) is independent of the display mechanism (The brain). This is a physical impossibility and it is nonsensical. That is exactly why consciousness is a mystery to you.

Of course, this was only a minor clarification within a larger context. But perhaps you can see in this example how my attempt to clarify one misunderstanding leads to the next one.


Until they are all cleared up this is inevitable.

As a side note, I am not trying to specify the causality of consciousness at all. I don't have to.


Without having any clue whatsoever as to the causality of consciousness and without even trying to, how can you be so sure that the causality cannot be quantified? You are entirely unaware that the process of understanding and quantifying those very mechanisms is well underway.

By claiming that the cause of consciousness needs no specification you are essentially saying that consciousness has no cause and thus it is its own cause.

So we have basic matter which is its own cause and now we have consciousness also. Do you not see that this is essentially the Cartesian mind/matter duality?

We don't know what causes energy/matter either (if it is caused by anything).


Physics has no clue what causes atomic matter and energy. Energy simply means “the ability to do work”. It has no clue what physically does this work. Sorce Theory, however, shows that energy (and everything else) is inherently explainable as a consequence and effect of the organization and motion of basic matter.

I am only arguing that consciousness cannot be caused by a process which is well described mathematically.


You don’t understand and you don’t want to understand the causality beneath consciousness, yet at the same time you can claim to know whether or not it is quantifiable?

Again, without a clue as to the actual mechanisms of consciousness, your argument has fallen flat. It is entirely baseless, as admitted by your own statement that you do not know the causality beneath consciousness.

Re: The mystery question
posted on 04/24/2003 5:23 PM by blue_is_not_a_number


You don't understand and you don't want to understand the causality beneath consciousness, yet at the same time you can claim to know whether or not it is quantifiable?


Well, that sounds as if you are beginning to see what I'm talking about.

Again, without a clue as to the actual mechanisms of consciousness, your argument has fallen flat. It is entirely baseless, as admitted by your own statement that you do not know the causality beneath consciousness.


Just as flat as you are saying physics is without Source Theory. Excuse me for not seeing a single indication that Source Theory will save us from this flatness.

Maybe I'll write more later. (And maybe not.)

Re: The mystery question
posted on 04/24/2003 6:06 PM by subtillioN


Excuse me for not seeing a single indication that Source Theory will save us from this flatness.


Not knowing the theory automatically excuses you.

Sorce theory (not Source, but Sorce, as in Single fORCE) is a unified model which coherently explains the causality beneath all aspects of basic physical reality. Cognitive science explains the basic mechanisms of consciousness. Together they form a coherent and consistent hierarchical model of the causality of consciousness. From this vantage point the mechanisms of consciousness are not fundamentally mysterious, just VERY complex and difficult to grasp.

Re: The mystery question
posted on 04/24/2003 6:11 PM by subtillioN


Just as flat as you are saying physics is without Source Theory. Excuse me for not seeing a single indication that Source Theory will save us from this flatness.


Very good observation. Standard mathematical physics is empty of understanding. Perhaps you had intuited this in your criticisms of the power of mathematics to explain consciousness. Sorce Theory gives physics the causal understanding beneath the empty mathematics. It deletes the root-level confusions and the semantic emptiness. This is exactly how it "will save us from this flatness".

Re: The mystery question
posted on 04/21/2003 11:55 AM by subtillioN


blue is not a number: The question is not whether neuroscience is interesting or not, or whether neuroscience can explain how we process information (which may be seen as an important part of consciousness, but is not the whole story).

subtillioN: Processing information is exactly what consciousness is and does.

blue is not a number: Here we go. Denial in perfection. A computer processes information. Did I miss a kind of information that could not be written down in a book, not stored on a computer?


Did you realize that it was you who first used ‘information’ in reference to the actions of the brain? Did you realize that this usage of the term does not refer to information that is stored in a computer or written down?

Just a little observation to help you watch out for this kind of equivocation in the future.

;)

Re: The mystery question
posted on 04/22/2003 2:24 PM by blue_is_not_a_number


See the other message I wrote a few minutes ago.

In order to be clickable, the URL should have been written without a following parenthesis:
http://www.occean.com

Re: Origins of Life
posted on 12/10/2003 7:54 AM by SynapticSymphony


Have any of you very fine people heard of the great Mr. Wilhelm Reich and the theories of Orgone energy and its effect on the origins of life?

Re: Origins of Life
posted on 12/11/2003 2:13 PM by /:setAI


that Oranur experiment stuff was spooky- I always wanted to build a Cloudbuster Cannon- just to be safe-

Re: The Future of Life
posted on 01/31/2005 4:01 PM by billmerit


>He also speculated about bringing "Lucy" (a celebrated hominid fossil), or at least a close genetic clone of Lucy, to life. His goal: to kiss Lucy.<

That would be a tough way to find a mate.