Why We Can Be Confident of Turing Test Capability Within a Quarter Century
by Ray Kurzweil

The advent of strong AI (exceeding human intelligence) is the most important transformation this century will see, and it will happen within 25 years, says Ray Kurzweil, who will present this paper at The Dartmouth Artificial Intelligence Conference: The next 50 years (AI@50) on July 14, 2006.


Published on KurzweilAI.net July 13, 2006. Excerpted from The Singularity is Near: When Humans Transcend Biology by Ray Kurzweil.


Consider another argument put forth by Turing. So far we have constructed only fairly simple and predictable artifacts. When we increase the complexity of our machines, there may, perhaps, be surprises in store for us. He draws a parallel with a fission pile. Below a certain "critical" size, nothing much happens: but above the critical size, the sparks begin to fly. So too, perhaps, with brains and machines. Most brains and all machines are, at present, "sub-critical"—they react to incoming stimuli in a stodgy and uninteresting way, have no ideas of their own, can produce only stock responses—but a few brains at present, and possibly some machines in the future, are super-critical, and scintillate on their own account. Turing is suggesting that it is only a matter of complexity, and that above a certain level of complexity a qualitative difference appears, so that "super-critical" machines will be quite unlike the simple ones hitherto envisaged. —J. R. Lucas, Oxford philosopher, in his 1961 essay "Minds, Machines, and Gödel"1

Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is a competitive pressure and profits to be made. People want better computers and smarter software, and they want the benefits these machines can help produce. Better medical drugs; relief for humans from the need to perform boring or dangerous jobs; entertainment—there is no end to the list of consumer-benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobics could plausibly argue "hither but not further." —Nick Bostrom, "How Long Before Superintelligence?" 1997

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals. —Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence," 2003

Will robots inherit the earth? Yes, but they will be our children. —Marvin Minsky, 1995


Of the three primary revolutions underlying the Singularity (G, N, and R), the most profound is R, which refers to the creation of nonbiological intelligence that exceeds that of unenhanced humans. A more intelligent process will inherently outcompete one that is less intelligent, making intelligence the most powerful force in the universe.

While the "R" in GNR stands for robotics, the real issue involved here is strong AI (artificial intelligence that exceeds human intelligence). The standard reason for emphasizing robotics in this formulation is that intelligence needs an embodiment, a physical presence, to affect the world. I disagree with the emphasis on physical presence, however, for I believe that the central concern is intelligence. Intelligence will inherently find a way to influence the world, including creating its own means for embodiment and physical manipulation. Furthermore, we can include physical skills as a fundamental part of intelligence; a large portion of the human brain (the cerebellum, comprising more than half our neurons), for example, is devoted to coordinating our skills and muscles.

Artificial intelligence at human levels will necessarily greatly exceed human intelligence for several reasons. As I pointed out earlier, machines can readily share their knowledge. As unenhanced humans we do not have the means of sharing the vast patterns of interneuronal connections and neurotransmitter-concentration levels that comprise our learning, knowledge, and skills, other than through slow, language-based communication. Of course, even this method of communication has been very beneficial, as it has distinguished us from other animals and has been an enabling factor in the creation of technology.

Human skills are able to develop only in ways that have been evolutionarily encouraged. Those skills, which are primarily based on massively parallel pattern recognition, provide proficiency for certain tasks, such as distinguishing faces, identifying objects, and recognizing language sounds. But they're not suited for many others, such as determining patterns in financial data. Once we fully master pattern-recognition paradigms, machine methods can apply these techniques to any type of pattern.2

Machines can pool their resources in ways that humans cannot. Although teams of humans can accomplish both physical and mental feats that individual humans cannot achieve, machines can more easily and readily aggregate their computational, memory, and communications resources. As discussed earlier, the Internet is evolving into a worldwide grid of computing resources that can be instantly brought together to form massive supercomputers.

Machines have exacting memories. Contemporary computers can master billions of facts accurately, a capability that is doubling every year.3 The underlying speed and price-performance of computing itself is doubling every year, and the rate of doubling is itself accelerating.

As human knowledge migrates to the Web, machines will be able to read, understand, and synthesize all human-machine information. The last time a biological human was able to grasp all human scientific knowledge was hundreds of years ago.

Another advantage of machine intelligence is that it can consistently perform at peak levels and can combine peak skills. Among humans one person may have mastered music composition, while another may have mastered transistor design, but given the fixed architecture of our brains we do not have the capacity (or the time) to develop and utilize the highest level of skill in every increasingly specialized area. Humans also vary a great deal in a particular skill, so that when we speak, say, of human levels of composing music, do we mean Beethoven, or do we mean the average person? Nonbiological intelligence will be able to match and exceed peak human skills in each area.

For these reasons, once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it, and then continue its double-exponential ascent.

A key question regarding the Singularity is whether the "chicken" (strong AI) or the "egg" (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited; and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology.

The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation. Likewise the software requirements will be facilitated by nanobots that could create highly detailed scans of human brain functioning and thereby achieve the completion of reverse engineering the human brain.

Both premises are logical; it's clear that either technology can assist the other. The reality is that progress in both areas will necessarily use our most advanced tools, so advances in each field will simultaneously facilitate the other. However, I do expect that full MNT (molecular nanotechnology) will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI).

As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled.

Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI but takes less time than the cycle before it, as is the nature of technological evolution (or any evolutionary process). The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating superintelligence.4

My own view is only slightly different. The logic of runaway AI is valid, but we still need to consider the timing. Achieving human levels in a machine will not immediately cause a runaway phenomenon. Consider that a human level of intelligence has limitations. We have examples of this today—about six billion of them. Consider a scenario in which you took one hundred humans from, say, a shopping mall. This group would constitute examples of reasonably well educated humans. Yet if this group were presented with the task of improving human intelligence, it wouldn't get very far, even if provided with the templates of human intelligence. It would probably have a hard time creating a simple computer. Speeding up the thinking and expanding the memory capacities of these one hundred humans would not immediately solve this problem.

I pointed out above that machines will match (and quickly exceed) peak human skills in each area of skill. So instead, let's take one hundred scientists and engineers. A group of technically trained people with the right backgrounds would be capable of improving accessible designs. If a machine attained equivalence to one hundred (and eventually one thousand, then one million) technically trained humans, each operating much faster than a biological human, a rapid acceleration of intelligence would ultimately follow.

However, this acceleration won't happen immediately when a computer passes the Turing test. The Turing test is comparable to matching the capabilities of an average, educated human and thus is closer to the example of humans from a shopping mall. It will take time for computers to master all of the requisite skills and to marry these skills with all the necessary knowledge bases.

Once we've succeeded in creating a machine that can pass the Turing test (around 2029), the succeeding period will be an era of consolidation in which nonbiological intelligence will make rapid gains. However, the extraordinary expansion contemplated for the Singularity, in which human intelligence is multiplied by billions, won't take place until the mid-2040s (as discussed in chapter 3 of The Singularity is Near).

The AI Winter

There's this stupid myth out there that A.I. has failed, but A.I. is everywhere around you every second of the day. People just don't notice it. You've got A.I. systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an A.I. scheduling system. Every time you use a piece of Microsoft software, you've got an A.I. system trying to figure out what you're doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they're all little A.I. characters behaving as a group. Every time you play a video game, you're playing against an A.I. system. —Rodney Brooks, director of the MIT AI Lab5

I still run into people who claim that artificial intelligence withered in the 1980s, an argument that is comparable to insisting that the Internet died in the dot-com bust of the early 2000s.6 The bandwidth and price-performance of Internet technologies, the number of nodes (servers), and the dollar volume of e-commerce all accelerated smoothly through the boom as well as the bust and the period since. The same has been true for AI.

The technology hype cycle for a paradigm shift—railroads, AI, Internet, telecommunications, possibly now nanotechnology—typically starts with a period of unrealistic expectations based on a lack of understanding of all the enabling factors required. Although utilization of the new paradigm does increase exponentially, early growth is slow until the knee of the exponential-growth curve is realized. While the widespread expectations for revolutionary change are accurate, they are incorrectly timed. When the prospects do not quickly pan out, a period of disillusionment sets in. Nevertheless exponential growth continues unabated, and years later a more mature and more realistic transformation does occur.

We saw this in the railroad frenzy of the nineteenth century, which was followed by widespread bankruptcies. (I have some of these early unpaid railroad bonds in my collection of historical documents.) And we are still feeling the effects of the e-commerce and telecommunications busts of several years ago, which helped fuel a recession from which we are now recovering.

AI experienced a similar premature optimism in the wake of programs such as the 1957 General Problem Solver created by Allen Newell, J. C. Shaw, and Herbert Simon, which was able to find proofs for theorems that had stumped mathematicians such as Bertrand Russell, and early programs from the MIT Artificial Intelligence Laboratory, which could answer SAT questions (such as analogies and story problems) at the level of college students.7 A rash of AI companies formed in the 1970s, but when profits did not materialize there was an AI "bust" in the 1980s, which has become known as the "AI winter." Many observers still think that the AI winter was the end of the story and that nothing has since come of the AI field.

Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry. Most of these applications were research projects ten to fifteen years ago. People who ask, "Whatever happened to AI?" remind me of travelers to the rain forest who wonder, "Where are all these many species that are supposed to live here?" when hundreds of species of flora and fauna are flourishing only a few dozen meters away, deeply integrated into the local ecology.

We are well into the era of "narrow AI," which refers to artificial intelligence that performs a useful and specific function that once required human intelligence to perform, and does so at human levels or better. Often narrow AI systems greatly exceed the speed of humans, as well as provide the ability to manage and consider thousands of variables simultaneously. I describe a broad variety of narrow AI examples below.

These time frames for AI's technology cycle (a couple of decades of growing enthusiasm, a decade of disillusionment, then a decade and a half of solid advance in adoption) may seem lengthy, compared to the relatively rapid phases of the Internet and telecommunications cycles (measured in years, not decades), but two factors must be considered. First, the Internet and telecommunications cycles were relatively recent, so they are more affected by the acceleration of paradigm shift (as discussed in chapter 1 of The Singularity is Near). So recent adoption cycles (boom, bust, and recovery) will be much faster than ones that started forty years ago. Second, the AI revolution is the most profound transformation that human civilization will experience, so it will take longer to mature than less complex technologies. It is characterized by the mastery of the most important and most powerful attribute of human civilization, indeed of the entire sweep of evolution on our planet: intelligence.

It's the nature of technology to understand a phenomenon and then engineer systems that concentrate and focus that phenomenon to greatly amplify it. For example, scientists discovered a subtle property of curved surfaces known as Bernoulli's principle: a gas (such as air) travels more quickly over a curved surface than over a flat surface. Thus, air pressure over a curved surface is lower than over a flat surface. By understanding, focusing, and amplifying the implications of this subtle observation, our engineering created all of aviation. Once we understand the principles of intelligence, we will have a similar opportunity to focus, concentrate, and amplify its powers.

As I reviewed in chapter 4 of The Singularity is Near, every aspect of understanding, modeling, and simulating the human brain is accelerating: the price-performance and temporal and spatial resolution of brain scanning, the amount of data and knowledge available about brain function, and the sophistication of the models and simulations of the brain's varied regions.

We already have a set of powerful tools that emerged from AI research and that have been refined and improved over several decades of development. The brain reverse-engineering project will greatly augment this toolkit by also providing a panoply of new, biologically inspired, self-organizing techniques. We will ultimately be able to apply engineering's ability to focus and amplify human intelligence vastly beyond the hundred trillion extremely slow interneuronal connections that each of us struggles with today. Intelligence will then be fully subject to the law of accelerating returns, which is currently doubling the power of information technologies every year.

An underlying problem with artificial intelligence that I have personally experienced in my forty years in this area is that as soon as an AI technique works, it's no longer considered AI and is spun off as its own field (for example, character recognition, speech recognition, machine vision, robotics, data mining, medical informatics, automated investing).

Computer scientist Elaine Rich defines AI as "the study of how to make computers do things at which, at the moment, people are better." Rodney Brooks, director of the MIT AI Lab, puts it a different way: "Every time we figure out a piece of it, it stops being magical; we say, Oh, that's just a computation." I am also reminded of Watson's remark to Sherlock Holmes, "I thought at first that you had done something clever, but I see that there was nothing in it after all."8 That has been our experience as AI scientists. The enchantment of intelligence seems to be reduced to "nothing" when we fully understand its methods. The mystery that is left is the intrigue inspired by the remaining, not as yet understood methods of intelligence.

AI's Toolkit

AI is the study of techniques for solving exponentially hard problems in polynomial time by exploiting knowledge about the problem domain. —Elaine Rich

It has only been recently that we have been able to obtain sufficiently detailed models of how human brain regions function to influence AI design. Prior to that, in the absence of tools that could peer into the brain with sufficient resolution, AI scientists and engineers developed their own techniques. Just as aviation engineers did not model the ability to fly on the flight of birds, these early AI methods were not based on reverse engineering natural intelligence.

A small sample of these approaches is reviewed here. Since their adoption, they have grown in sophistication, which has enabled the creation of practical products that avoid the fragility and high error rates of earlier systems.

Expert systems. In the 1970s AI was often equated with one specific method: expert systems. This involves the development of specific logical rules to simulate the decision-making processes of human experts. A key part of the procedure entails knowledge engineers interviewing domain experts such as doctors and engineers to codify their decision-making rules.

There were early successes in this area, such as medical diagnostic systems that compared well to human physicians, at least in limited tests. For example, a system called MYCIN, which was designed to diagnose and recommend remedial treatment for infectious diseases, was developed through the 1970s. In 1979 a team of expert evaluators compared diagnosis and treatment recommendations by MYCIN to those of human doctors and found that MYCIN did as well as or better than any of the physicians.9

It became apparent from this research that human decision making typically is based not on definitive logic rules but rather on "softer" types of evidence. A dark spot on a medical imaging test may suggest cancer, but other factors such as its exact shape, location, and contrast are likely to influence a diagnosis. The hunches of human decision making are usually influenced by combining many pieces of evidence from prior experience, none definitive by itself. Often we are not even consciously aware of many of the rules that we use.

By the late 1980s expert systems were incorporating the idea of uncertainty and could combine many sources of probabilistic evidence to make a decision. The MYCIN system pioneered this approach. A typical MYCIN "rule" reads:

If the infection which requires therapy is meningitis, and the type of the infection is fungal, and organisms were not seen on the stain of the culture, and the patient is not a compromised host, and the patient has been to an area that is endemic for coccidiomycoses, and the race of the patient is Black, Asian, or Indian, and the cryptococcal antigen in the csf test was not positive, THEN there is a 50 percent chance that cryptococcus is not one of the organisms which is causing the infection.

Although a single probabilistic rule such as this would not be sufficient by itself to make a useful statement, by combining thousands of such rules the evidence can be marshaled and combined to make reliable decisions.
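
To make this concrete, here is a minimal sketch of that style of evidence combination. The rules, certainty values, and findings are hypothetical, and the combining formula shown (a common certainty-factor scheme of the kind MYCIN popularized) is illustrative rather than MYCIN's actual code:

```python
# Minimal sketch of certainty-factor combination (hypothetical rules and values).
# Two positive certainty factors cf1 and cf2 combine as cf1 + cf2 * (1 - cf1),
# so several pieces of weak evidence reinforce one another without exceeding 1.0.

def combine_positive(cf1, cf2):
    """Combine two positive certainty factors bearing on the same hypothesis."""
    return cf1 + cf2 * (1 - cf1)

# Hypothetical rules: each fires if its condition holds, contributing some certainty.
rules = [
    (lambda f: f["infection"] == "meningitis" and f["type"] == "fungal", 0.4),
    (lambda f: not f["organisms_seen_on_stain"], 0.2),
    (lambda f: f["visited_endemic_area"], 0.3),
]

findings = {
    "infection": "meningitis",
    "type": "fungal",
    "organisms_seen_on_stain": False,
    "visited_endemic_area": True,
}

belief = 0.0
for condition, cf in rules:
    if condition(findings):
        belief = combine_positive(belief, cf)

print(f"Combined certainty: {belief:.2f}")  # 0.4, then 0.52, then 0.66
```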

Probably the longest-running expert system project is CYC (for enCYClopedic), created by Doug Lenat and his colleagues at Cycorp. Initiated in 1984, CYC has been coding commonsense knowledge to provide machines with an ability to understand the unspoken assumptions underlying human ideas and reasoning. The project has evolved from hard-coded logical rules to probabilistic ones and now includes means of extracting knowledge from written sources (with human supervision). The original goal was to generate one million rules, which reflects only a small portion of what the average human knows about the world. Lenat's latest goal is for CYC to master "100 million things, about the number a typical person knows about the world, by 2007."10

Another ambitious expert system is being pursued by Darryl Macer, associate professor of Biological Sciences at the University of Tsukuba in Japan. He plans to develop a system incorporating all human ideas.11 One application would be to inform policy makers of which ideas are held by which community.

Bayesian nets. Over the last decade a technique called Bayesian logic has created a robust mathematical foundation for combining thousands or even millions of such probabilistic rules in what are called "belief networks" or Bayesian nets. Originally devised by English mathematician Thomas Bayes, and published posthumously in 1763, the approach is intended to determine the likelihood of future events based on similar occurrences in the past.12 Many expert systems based on Bayesian techniques gather data from experience in an ongoing fashion, thereby continually learning and improving their decision making.

The most promising spam filters are based on this method. I personally use a spam filter called SpamBayes, which trains itself on e-mail that you have identified as either "spam" or "okay."13 You start out by presenting a folder of each to the filter. It trains its Bayesian belief network on these two files and analyzes the patterns of each, thus enabling it to automatically move subsequent e-mail into the proper category. It continues to train itself on every subsequent e-mail, especially when it's corrected by the user. This filter has made the spam situation manageable for me, which is saying a lot, as it weeds out two hundred to three hundred spam messages each day, letting over one hundred "good" messages through. Only about 1 percent of the messages it identifies as "okay" are actually spam; it almost never marks a good message as spam. The system is almost as accurate as I would be and much faster.
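
A minimal sketch of the idea behind such a filter appears below. It uses a naive-Bayes formulation trained on a toy hand-labeled corpus; SpamBayes itself uses a more refined scoring scheme, and the messages here are invented:

```python
# Minimal naive-Bayes spam filter sketch: train word probabilities on labeled
# messages, then score new mail by combining the per-word evidence with Bayes' rule.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs with label 'spam' or 'ok'."""
    counts = {"spam": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ok"])
    scores = {}
    for label in ("spam", "ok"):
        # Log prior plus log likelihood of each word, with add-one smoothing.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

corpus = [
    ("win money now", "spam"), ("cheap pills online", "spam"),
    ("meeting agenda attached", "ok"), ("lunch tomorrow?", "ok"),
]
counts, totals = train(corpus)
print(classify("win cheap pills", counts, totals))   # -> spam
print(classify("agenda for lunch", counts, totals))  # -> ok
```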

Markov models. Another method that is good at applying probabilistic networks to complex sequences of information involves Markov models.14 Andrei Andreyevich Markov (1856–1922), a renowned mathematician, established a theory of "Markov chains," which was refined by Norbert Wiener (1894–1964) in 1923. The theory provided a method to evaluate the likelihood that a certain sequence of events would occur. It has been popular, for example, in speech recognition, in which the sequential events are phonemes (the elementary sounds of speech). The Markov models used in speech recognition code the likelihood that specific patterns of sound are found in each phoneme, how the phonemes influence each other, and likely orders of phonemes. The system can also include probability networks on higher levels of language, such as the order of words. The actual probabilities in the models are trained on actual speech and language data, so the method is self-organizing.

Markov modeling was one of the methods my colleagues and I used in our own speech-recognition development.15 Unlike phonetic approaches, in which specific rules about phoneme sequences are explicitly coded by human linguists, we did not tell the system that there are approximately forty-four phonemes in English, nor did we tell it what sequences of phonemes were more likely than others. We let the system discover these "rules" for itself from thousands of hours of transcribed human speech data. The advantage of this approach over hand-coded rules is that the models develop subtle probabilistic rules of which human experts are not necessarily aware.
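
Here is a minimal sketch of the underlying idea: a first-order Markov chain whose transition probabilities are estimated from example sequences. A real recognizer uses hidden Markov models, which add probabilistic mappings from acoustic observations to phonemes; the toy "phoneme" data here are invented:

```python
# Minimal first-order Markov chain: estimate transition probabilities from
# example sequences, then score how likely a new sequence is under the model.
from collections import defaultdict

def train(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    # Normalize counts into transition probabilities P(b | a).
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

def likelihood(seq, transitions):
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= transitions.get(a, {}).get(b, 1e-6)  # tiny floor for unseen pairs
    return p

# Toy "phoneme" sequences; a real system trains on hours of transcribed speech.
data = [["k", "ae", "t"], ["k", "ae", "b"], ["b", "ae", "t"]]
model = train(data)
print(likelihood(["k", "ae", "t"], model))  # plausible ordering: higher score
print(likelihood(["t", "k", "ae"], model))  # implausible ordering: near zero
```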

Neural nets. Another popular self-organizing method that has also been used in speech recognition and a wide variety of other pattern-recognition tasks is neural nets. This technique involves simulating a simplified model of neurons and interneuronal connections. One basic approach to neural nets can be described as follows. Each point of a given input (for speech, each point represents two dimensions, one being frequency and the other time; for images, each point would be a pixel in a two-dimensional image) is randomly connected to the inputs of the first layer of simulated neurons. Every connection has an associated synaptic strength, which represents its importance and which is set at a random value. Each neuron adds up the signals coming into it. If the combined signal exceeds a particular threshold, the neuron fires and sends a signal to its output connection; if the combined input signal does not exceed the threshold, the neuron does not fire, and its output is zero. The output of each neuron is randomly connected to the inputs of the neurons in the next layer. There are multiple layers (generally three or more), and the layers may be organized in a variety of configurations. For example, one layer may feed back to an earlier layer. At the top layer, the output of one or more neurons, also randomly selected, provides the answer (For an algorithmic description of neural nets, see this endnote.16).

Since the neural-net wiring and synaptic weights are initially set randomly, the answers of an untrained neural net will be random. The key to a neural net, therefore, is that it must learn its subject matter. Like the mammalian brains on which it's loosely modeled, a neural net starts out ignorant. The neural net's teacher—which may be a human, a computer program, or perhaps another, more mature neural net that has already learned its lessons—rewards the student neural net when it generates the right output and punishes it when it does not. This feedback is in turn used by the student neural net to adjust the strengths of each interneuronal connection. Connections that were consistent with the right answer are made stronger. Those that advocated a wrong answer are weakened. Over time, the neural net organizes itself to provide the right answers without coaching. Experiments have shown that neural nets can learn their subject matter even with unreliable teachers. If the teacher is correct only 60 percent of the time, the student neural net will still learn its lessons.
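
A minimal sketch of this reward-and-punish training loop follows, pared down to a single layer of threshold neurons learning a toy task; real systems use multiple layers and more sophisticated update rules, and the learning rate and threshold here are arbitrary choices:

```python
# Minimal sketch of neural-net learning: a layer of threshold neurons whose
# connection strengths start out random and are nudged toward the teacher's
# answers, strengthening connections consistent with the right output.
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
threshold = 0.5

def fire(inputs):
    """Neuron fires (1) if its combined weighted input exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# The teacher's lessons: learn the logical AND of two inputs.
lessons = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(20):  # repeat the lessons until the net settles
    for inputs, correct in lessons:
        error = correct - fire(inputs)  # +1 reward-like, -1 punish-like, 0 no change
        for i, x in enumerate(inputs):
            weights[i] += 0.2 * error * x  # strengthen/weaken the active connections

print([fire(inp) for inp, _ in lessons])  # -> [0, 0, 0, 1]
```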

A powerful, well-taught neural net can emulate a wide range of human pattern-recognition faculties. Systems using multilayer neural nets have shown impressive results in a wide variety of pattern-recognition tasks, including recognizing handwriting, human faces, fraud in commercial transactions such as credit-card charges, and many others. In my own experience in using neural nets in such contexts, the most challenging engineering task is not coding the nets but providing automated lessons for them to learn their subject matter.

The current trend in neural nets is to take advantage of more realistic and more complex models of how actual biological neural nets work, now that we are developing detailed models of neural functioning from brain reverse engineering.17 Since we do have several decades of experience in using self-organizing paradigms, new insights from brain studies can be quickly adapted to neural-net experiments.

Neural nets are also naturally amenable to parallel processing, since that is how the brain works. The human brain does not have a central processor that simulates each neuron. Rather, we can consider each neuron and each interneuronal connection to be an individual slow processor. Extensive work is under way to develop specialized chips that implement neural-net architectures in parallel to provide substantially greater throughput.18

Genetic algorithms (GAs). Another self-organizing paradigm inspired by nature is genetic, or evolutionary, algorithms, which emulate evolution, including sexual reproduction and mutations. Here is a simplified description of how they work. First, determine a way to code possible solutions to a given problem. If the problem is optimizing the design parameters for a jet engine, define a list of the parameters (with a specific number of bits assigned to each parameter). This list is regarded as the genetic code in the genetic algorithm. Then randomly generate thousands or more genetic codes. Each such genetic code (which represents one set of design parameters) is considered a simulated "solution" organism.

Now evaluate each simulated organism in a simulated environment by using a defined method to evaluate each set of parameters. This evaluation is a key to the success of a genetic algorithm. In our example, we would apply each solution organism to a jet-engine simulation and determine how successful that set of parameters is, according to whatever criteria we are interested in (fuel consumption, speed, and so on). The best solution organisms (the best designs) are allowed to survive, and the rest are eliminated.

Now have each of the survivors multiply themselves until they reach the same number of solution creatures. This is done by simulating sexual reproduction. In other words, each new offspring solution draws part of its genetic code from one parent and another part from a second parent. Usually no distinction is made between male and female organisms; it's sufficient to generate an offspring from two arbitrary parents. As they multiply, allow some mutation (random change) in the chromosomes to occur.

We've now defined one generation of simulated evolution; now repeat these steps for each subsequent generation. At the end of each generation determine how much the designs have improved. When the improvement in the evaluation of the design creatures from one generation to the next becomes very small, we stop this iterative cycle of improvement and use the best design(s) in the last generation. (For an algorithmic description of genetic algorithms, see this endnote.19)
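
Here is a minimal sketch of that full cycle, with a simple numeric target standing in for the jet-engine evaluation (the target values, population size, and mutation rate are all arbitrary stand-ins):

```python
# Minimal genetic-algorithm sketch: evolve a list of numeric "design parameters"
# toward a target using selection, crossover (sexual reproduction), and mutation.
import random

random.seed(1)
TARGET = [3.0, -1.5, 8.0, 0.25]  # stand-in for an ideal set of design parameters

def fitness(genes):
    # Higher is better: negative squared distance from the target design.
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def crossover(a, b):
    # Each parameter is drawn from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genes, rate=0.1):
    # Occasionally perturb a parameter at random.
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genes]

population = [[random.uniform(-10, 10) for _ in TARGET] for _ in range(100)]

for generation in range(60):
    # Evaluate every solution organism; the best quarter survive.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Refill the population with mutated offspring of random parent pairs.
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(75)
    ]

best = max(population, key=fitness)
print([round(g, 2) for g in best])  # approaches [3.0, -1.5, 8.0, 0.25]
```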

The key to a GA is that the human designers don't directly program a solution; rather, they let one emerge through an iterative process of simulated competition and improvement. As we discussed, biological evolution is smart but slow, so to enhance its intelligence we retain its discernment while greatly speeding up its ponderous pace. The computer is fast enough to simulate many generations in a matter of hours or days or weeks. But we have to go through this iterative process only once; once we have let this simulated evolution run its course, we can apply the evolved and highly refined rules to real problems in a rapid fashion.

Like neural nets, GAs are a way to harness the subtle but profound patterns that exist in chaotic data. A key requirement for their success is a valid way of evaluating each possible solution. This evaluation needs to be fast because it must take account of many thousands of possible solutions for each generation of simulated evolution.

GAs are adept at handling problems with too many variables to compute precise analytic solutions. The design of a jet engine, for example, involves more than one hundred variables and requires satisfying dozens of constraints. GAs used by researchers at General Electric were able to come up with engine designs that met the constraints more precisely than conventional methods.

When using GAs you must, however, be careful what you ask for. University of Sussex researcher Jon Bird used a GA to optimally design an oscillator circuit. Several attempts generated conventional designs using a small number of transistors, but the winning design was not an oscillator at all but a simple radio circuit. Apparently the GA discovered that the radio circuit picked up an oscillating hum from a nearby computer.20 The GA's solution worked only in the exact location on the table where it was asked to solve the problem.

Genetic algorithms, part of the field of chaos or complexity theory, are being increasingly used to solve otherwise intractable business problems, such as optimizing complex supply chains. This approach is beginning to supplant more analytic methods throughout industry. (See examples below.) The paradigm is also adept at recognizing patterns, and is often combined with neural nets and other self-organizing methods. It's also a reasonable way to write computer software, particularly software that needs to find delicate balances for competing resources.

In the novel usr/bin/god, Cory Doctorow, a leading science-fiction writer, uses an intriguing variation of a GA to evolve an AI. The GA generates a large number of intelligent systems based on various intricate combinations of techniques, with each combination characterized by its genetic code. These systems then evolve using a GA.

The evaluation function works as follows: each system logs on to various human chat rooms and tries to pass for a human, basically a covert Turing test. If one of the humans in a chat room says something like "What are you, a chatterbot?" (a chatterbot being an automated program that, at today's level of development, is not expected to understand language at a human level), the evaluation is over: that system ends its interactions and reports its score to the GA. The score is determined by how long it was able to pass for human without being challenged in this way. The GA evolves more and more intricate combinations of techniques that are increasingly capable of passing for human.

The main difficulty with this idea is that the evaluation function is fairly slow, although it will take an appreciable amount of time only once the systems are reasonably intelligent. Also, the evaluations can take place largely in parallel. It's an interesting idea and may actually be a useful method to finish the job of passing the Turing test, once we get to the point where we have sufficiently sophisticated algorithms to feed into such a GA, so that evolving a Turing-capable AI is feasible.

Recursive search. Often we need to search through a vast number of combinations of possible solutions to solve a given problem. A classic example is in playing games such as chess. As a player considers her next move, she can list all of her possible moves, and then, for each such move, all possible countermoves by the opponent, and so on. It is difficult, however, for human players to keep a huge "tree" of move-countermove sequences in their heads, and so they rely on pattern recognition—recognizing situations based on prior experience—whereas machines use logical analysis of millions of moves and countermoves.

Such a logical tree is at the heart of most game-playing programs. Consider how this is done. We construct a program called Pick Best Next Step to select each move. Pick Best Next Step starts by listing all of the possible moves from the current state of the board. (If the problem were proving a mathematical theorem, rather than making game moves, the program would list all of the possible next steps in a proof.) For each move the program constructs a hypothetical board that reflects what would happen if we made this move. For each such hypothetical board, we now need to consider what our opponent would do if we made this move. Now recursion comes in, because Pick Best Next Step simply calls Pick Best Next Step (in other words, itself) to pick the best move for our opponent. In calling itself, Pick Best Next Step then lists all of the legal moves for our opponent.

The program keeps calling itself, looking ahead as many moves as we have time to consider, which results in the generation of a huge move-countermove tree. This is another example of exponential growth, because to look ahead an additional move (or countermove) requires multiplying the amount of available computation by about five. Key to the success of the recursive formula is pruning this huge tree of possibilities and ultimately stopping its growth. In the game context, if a board looks hopeless for either side, the program can stop the expansion of the move-countermove tree from that point (called a "terminal leaf" of the tree) and consider the most recently considered move to be a likely win or loss. When all of these nested program calls are completed, the program will have determined the best possible move for the current actual board, within the limits of the depth of recursive expansion that it had time to pursue and the quality of its pruning algorithm. (For an algorithmic description of recursive search, see this endnote.21)
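
A minimal sketch of Pick Best Next Step as a depth-limited recursive search appears below, applied to a toy take-away game rather than chess (the game and its scoring are stand-ins, and a serious program would add aggressive pruning of unpromising branches):

```python
# Minimal sketch of "Pick Best Next Step" as depth-limited recursive minimax,
# applied to a toy game: a pile of stones, each player removes 1-3 per turn,
# and whoever takes the last stone wins.

def pick_best_next_step(pile, depth, maximizing):
    moves = [n for n in (1, 2, 3) if n <= pile]
    if not moves:   # terminal leaf: the player to move has lost (no stones left)
        return (-1 if maximizing else 1), None
    if depth == 0:  # out of look-ahead time: call the position even
        return 0, None
    best_move, best = None, float("-inf") if maximizing else float("inf")
    for move in moves:
        # Recursion: call ourselves to pick the opponent's best countermove
        # on the hypothetical board that this move would produce.
        score, _ = pick_best_next_step(pile - move, depth - 1, not maximizing)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_move = score, move
    return best, best_move

score, move = pick_best_next_step(10, depth=10, maximizing=True)
print(move)  # -> 2 (leaving a pile of 8, a losing position for the opponent)
```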

Recursive search is often effective at mathematics. Rather than game moves, the "moves" are the axioms of the field of math being addressed, as well as previously proved theorems. The expansion at each point is the possible axioms (or previously proved theorems) that can be applied to a proof at each step. (This was the approach used by Newell, Shaw, and Simon's General Problem Solver.)

From these examples it may appear that recursion is well suited only for problems in which we have crisply defined rules and objectives. But it has also shown promise in computer generation of artistic creations. For example, a program I designed called Ray Kurzweil's Cybernetic Poet uses a recursive approach.22 The program establishes a set of goals for each word—achieving a certain rhythmic pattern, poem structure, and word choice that is desirable at that point in the poem. If the program is unable to find a word that meets these criteria, it backs up and erases the previous word it has written, reestablishes the criteria it had originally set for the word just erased, and goes from there. If that also leads to a dead end, it backs up again, thus moving backwards and forwards. Eventually, it forces itself to make up its mind by relaxing some of the constraints if all paths lead to dead ends.
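
The Cybernetic Poet's actual criteria are more elaborate and proprietary, but a minimal sketch of this kind of backtracking word selection, with a hypothetical lexicon and constraints, looks like this:

```python
# Minimal backtracking word selection (hypothetical lexicon and constraints).
# Each slot in the line has a criterion; on a dead end, the previous word is
# erased and the next candidate is tried there. Relaxing the constraints when
# all paths dead-end (as the Cybernetic Poet does) is omitted for brevity.

LEXICON = ["ash", "ember", "night", "nest", "owl", "rain"]

constraints = [
    lambda line, w: w[0] in "aeiou",        # slot 0: begins with a vowel
    lambda line, w: w[0] == line[-1][-1],   # chain: begins with previous word's last letter
    lambda line, w: w[0] == line[-1][-1] and w not in line,  # chain, no repeats
]

def compose(line=()):
    if len(line) == len(constraints):       # every slot filled: success
        return list(line)
    for word in LEXICON:
        if constraints[len(line)](line, word):
            result = compose(line + (word,))
            if result is not None:          # the rest of the line worked out
                return result
            # otherwise a dead end occurred downstream: erase and try another word
    return None                             # nothing fits this slot: backtrack

print(compose())  # -> ['ember', 'rain', 'night'] ("ash" dead-ends and is erased)
```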

Deep Fritz Draws: Are Humans Getting Smarter, or Are Computers Getting Stupider?

We find one example of qualitative improvements in software in the world of computer chess, which, according to common wisdom, is governed only by the brute-force expansion of computer hardware. In a chess tournament in October 2002 with top-ranking human player Vladimir Kramnik, the Deep Fritz software achieved a draw. I point out that Deep Fritz has available only about 1.3 percent of the brute-force computation that was available to the previous computer champion, Deep Blue. Despite that, it plays chess at about the same level because of its superior pattern recognition–based pruning algorithm (see below). In six years a program like Deep Fritz will again achieve Deep Blue's ability to analyze two hundred million board positions per second.

Deep Fritz–like chess programs running on ordinary personal computers will routinely defeat all humans later in this decade.

In The Age of Intelligent Machines, which I wrote between 1986 and 1989, I predicted that a computer would defeat the human world chess champion by the end of the 1990s. I also noted that computers were gaining about forty-five points per year in their chess ratings, whereas the best human play was essentially fixed, so this projected a crossover point in 1998. Indeed, Deep Blue did defeat Garry Kasparov in a highly publicized tournament in 1997.

Yet in the Deep Fritz–Kramnik match, the current reigning computer program was able to achieve only a tie. Five years had passed since Deep Blue's victory, so what are we to make of this situation? Should we conclude that:

1. Humans are getting smarter, or at least better at chess?

2. Computers are getting worse at chess? If so, should we conclude that the much-publicized improvement in computer speed over the past five years was not all it was cracked up to be? Or that computer software is getting worse, at least in chess?

The specialized-hardware advantage. Neither of the above conclusions is warranted. The correct conclusion is that software is getting better because Deep Fritz essentially matched the performance of Deep Blue, yet with far smaller computational resources. To gain some insight into these questions, we need to examine a few essentials. When I wrote my predictions of computer chess in the late 1980s, Carnegie Mellon University was embarked on a program to develop specialized chips for conducting the "minimax" algorithm (the standard game-playing method that relies on building trees of move-countermove sequences, and then evaluating the terminal-leaf positions of each branch of the tree) specifically for chess moves.

Based on this specialized hardware CMU's 1988 chess machine, HiTech, was able to analyze 175,000 board positions per second. It achieved a chess rating of 2,359, only about 440 points below the human world champion.

A year later, in 1989, CMU's Deep Thought machine increased this capacity to one million board positions per second and achieved a rating of 2,400. IBM eventually took over the project and renamed it Deep Blue but kept the basic CMU architecture. The version of Deep Blue that defeated Kasparov in 1997 had 256 special-purpose chess processors working in parallel, which analyzed two hundred million board positions per second.

It is important to note the use of specialized hardware to accelerate the specific calculations needed to generate the minimax algorithm for chess moves. It's well known to computer-systems designers that specialized hardware generally can implement a specific algorithm at least one hundred times faster than a general-purpose computer. Specialized ASICs (application-specific integrated circuits) require significant development effort and cost, but for critical calculations that are needed on a repetitive basis (for example, decoding MP3 files or rendering graphics primitives for video games), this expenditure can be well worth the investment.

Deep Blue versus Deep Fritz. Because there had always been a great deal of focus on the milestone of a computer's being able to defeat a human opponent, support was available for investing in special-purpose chess circuits. Although there was some lingering controversy regarding the parameters of the Deep Blue–Kasparov match, the level of interest in computer chess waned considerably after 1997. After all, the goal had been achieved, and there was little point in beating a dead horse. IBM canceled work on the project, and there has been no work on specialized chess chips since that time. The focus of research in the various domains spun out of AI has been placed instead on problems of greater consequence, such as guiding airplanes, missiles, and factory robots, understanding natural language, diagnosing electrocardiograms and blood-cell images, detecting credit-card fraud, and a myriad of other successful narrow AI applications.

Computer hardware has nonetheless continued its exponential increase, with personal-computer speeds doubling every year since 1997. Thus the general-purpose Pentium processors used by Deep Fritz are about thirty-two times faster than processors in 1997. Deep Fritz uses a network of only eight personal computers, so the hardware is equivalent to 256 1997-class personal computers. Compare that to Deep Blue, which used 256 specialized chess processors, each of which was about one hundred times faster than 1997 personal computers (of course, only for computing chess minimax). So Deep Blue was 25,600 times faster than a 1997 PC and one hundred times faster than Deep Fritz. This analysis is confirmed by the reported speeds of the two systems: Deep Blue can analyze 200 million board positions per second compared to only about 2.5 million for Deep Fritz.

Significant software gains. So what can we say about the software in Deep Fritz? Although chess machines are usually referred to as examples of brute-force calculation, there is one important aspect of these systems that does require qualitative judgment. The combinatorial explosion of possible move-countermove sequences is rather formidable.

In The Age of Intelligent Machines I estimated that it would take about forty billion years to make one move if we failed to prune the move-countermove tree and attempted to make a "perfect" move in a typical game. (Assuming about thirty moves each in a typical game and about eight possible moves per play, we have 8^30 possible move sequences; analyzing one billion move sequences per second would take 10^18 seconds, or forty billion years.) Thus a practical system needs to continually prune away unpromising lines of action. This requires insight and is essentially a pattern-recognition judgment.
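
The arithmetic is easy to verify:

```python
# Back-of-the-envelope check of the unpruned game-tree estimate.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

sequences = 8 ** 30            # ~eight options per play over ~thirty plays
seconds = sequences / 1e9      # analyzing one billion sequences per second
print(f"{seconds:.1e} seconds, {seconds / SECONDS_PER_YEAR:.1e} years")
# -> 1.2e+18 seconds, 3.9e+10 years: about forty billion years
```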

Humans, even world-class chess masters, perform the minimax algorithm extremely slowly, generally performing less than one move-countermove analysis per second. So how is it that a chess master can compete at all with computer systems? The answer is that we possess formidable powers of pattern recognition, which enable us to prune the tree with great insight.

It's precisely in this area that Deep Fritz has improved considerably over Deep Blue. Deep Fritz has only slightly more computation available than CMU's Deep Thought yet is rated almost 400 points higher.

Are human chess players doomed? Another prediction I made in The Age of Intelligent Machines was that once computers performed as well as or better than humans in chess, we would either think more of computer intelligence, less of human intelligence, or less of chess, and that if history is a guide, the last of these would be the likely outcome. Indeed, that is precisely what happened. Soon after Deep Blue's victory we began to hear a lot about how chess is really just a simple game of calculating combinations and that the computer victory just demonstrated that it was a better calculator.

The reality is slightly more complex. The ability of humans to perform well in chess is clearly not due to our calculating prowess, at which we are in fact rather poor. We use instead a quintessentially human form of judgment. For this type of qualitative judgment, Deep Fritz represents genuine progress over earlier systems. (Incidentally, humans have made no progress in the last five years, with the top human scores remaining just below 2,800. Kasparov is rated at 2,795 and Kramnik at 2,794.)

Where do we go from here? Now that computer chess is relying on software running on ordinary personal computers, chess programs will continue to benefit from the ongoing acceleration of computer power. By 2009 a program like Deep Fritz will again achieve Deep Blue's ability to analyze two hundred million board positions per second. With the opportunity to harvest computation on the Internet, we will be able to achieve this potential several years sooner than 2009. (Internet harvesting of computers will require more ubiquitous broadband communication, but that's coming, too.)

With these inevitable speed increases, as well as ongoing improvements in pattern recognition, computer chess ratings will continue to edge higher. Deep Fritz–like programs running on ordinary personal computers will routinely defeat all humans later in this decade. Then we'll really lose interest in chess.


Combining methods. The most powerful approach to building robust AI systems is to combine approaches, which is how the human brain works. As we discussed, the brain is not one big neural net but instead consists of hundreds of regions, each of which is optimized for processing information in a different way. None of these regions by itself operates at what we would consider human levels of performance, but clearly by definition the overall system does exactly that.

I've used this approach in my own AI work, especially in pattern recognition. In speech recognition, for example, we implemented a number of different pattern-recognition systems based on different paradigms. Some were specifically programmed with knowledge of phonetic and linguistic constraints from experts. Some were based on rules to parse sentences (which involves creating sentence diagrams showing word usage, similar to the diagrams taught in grade school). Some were based on self-organizing techniques, such as Markov models, trained on extensive libraries of recorded and annotated human speech. We then programmed a software "expert manager" to learn the strengths and weaknesses of the different "experts" (recognizers) and combine their results in optimal ways. In this fashion, a particular technique that by itself might produce unreliable results can nonetheless contribute to increasing the overall accuracy of the system.
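
A minimal sketch of the expert-manager idea follows. The recognizers and their reliability figures are hypothetical stand-ins, and where our production system learned such weightings from data, this sketch simply fixes them by hand:

```python
# Minimal "expert manager" sketch: combine several recognizers' hypotheses,
# weighting each expert by its measured reliability on past data.
# (The experts and reliability figures here are hypothetical stand-ins.)
from collections import defaultdict

# Accuracy of each recognizer as measured on a held-out test set.
reliability = {"phonetic_rules": 0.70, "markov_model": 0.85, "neural_net": 0.80}

def expert_manager(hypotheses):
    """hypotheses: dict mapping expert name -> that expert's recognized phrase."""
    votes = defaultdict(float)
    for expert, phrase in hypotheses.items():
        votes[phrase] += reliability[expert]  # trusted experts count for more
    return max(votes, key=votes.get)

# The Markov model disagrees here, but the other two experts' combined weight wins.
print(expert_manager({
    "phonetic_rules": "recognize speech",
    "markov_model": "wreck a nice beach",
    "neural_net": "recognize speech",
}))  # -> recognize speech
```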

There are many intricate ways to combine the varied methods in AI's toolbox. For example, one can use a genetic algorithm to evolve the optimal topology (organization of nodes and connections) for a neural net or a Markov model. The final output of the GA-evolved neural net can then be used to control the parameters of a recursive search algorithm. We can add in powerful signal- and image-processing techniques that have been developed for pattern-processing systems. Each specific application calls for a different architecture. Computer science professor and AI entrepreneur Ben Goertzel has written a series of books and articles that describe strategies and architectures for combining the diverse methods underlying intelligence. His "Novamente" architecture is intended to provide a framework for general-purpose AI.23

The above basic descriptions provide only a glimpse into how increasingly sophisticated current AI systems are designed. It's beyond the scope of this book to provide a comprehensive description of the techniques of AI, and even a doctoral program in computer science is unable to cover all of the varied approaches in use today.

Many of the examples of real-world narrow AI systems described in the next section use a variety of methods integrated and optimized for each particular task. Narrow AI is strengthening as a result of several concurrent trends: continued exponential gains in computational resources, extensive real-world experience with thousands of applications, and fresh insights into how the human brain makes intelligent decisions.

A Narrow AI Sampler

When I wrote my first AI book, The Age of Intelligent Machines, in the late 1980s, I had to conduct extensive investigations to find a few successful examples of AI in practice. The Internet was not yet prevalent, so I had to go to real libraries and visit the AI research centers in the United States, Europe, and Asia. I included in my book pretty much all of the reasonable examples I could identify. In my research for this book my experience has been altogether different. I have been inundated with thousands of compelling examples. In our reporting on the KurzweilAI.net Web site, we feature several dramatic systems every day.24

A 2003 study by Business Communication Company projected a $21 billion market by 2007 for AI applications, with average annual growth of 12.2 percent from 2002 to 2007.25 Leading industries for AI applications include business intelligence, customer relations, finance, defense and domestic security, and education. Here is a small sample of narrow AI in action.

Military and intelligence. The U.S. military has been an avid user of AI systems. Pattern-recognition software systems guide autonomous weapons such as cruise missiles, which can fly thousands of miles to find a specific building or even a specific window.26 Although the relevant details of the terrain that the missile flies over are programmed ahead of time, variations in weather, ground cover, and other factors require a flexible level of real-time image recognition.

The army has developed prototypes of self-organizing communication networks (called "mesh networks") to automatically configure many thousands of communication nodes when a platoon is dropped into a new location.27

Expert systems incorporating Bayesian networks and GAs are used to optimize complex supply chains that coordinate millions of provisions, supplies, and weapons based on rapidly changing battlefield requirements.

AI systems are routinely employed to simulate the performance of weapons, including nuclear bombs and missiles.

Advance warning of the September 11, 2001, terrorist attacks was apparently detected by the National Security Agency's AI-based Echelon system, which analyzes the agency's extensive monitoring of communications traffic.28 Unfortunately, Echelon's warnings were not reviewed by human agents until it was too late.

The 2002 military campaign in Afghanistan saw the debut of the armed Predator, an unmanned robotic flying fighter. Although the Air Force's Predator had been under development for many years, arming it with Army-supplied missiles was a last-minute improvisation that proved remarkably successful. In the Iraq war that began in 2003 the armed Predator (operated by the CIA) and other flying unmanned aerial vehicles (UAVs) destroyed thousands of enemy tanks and missile sites.

All of the military services are using robots. The army utilizes them to search caves (in Afghanistan) and buildings. The navy uses small robotic ships to protect its aircraft carriers. As I discuss in chapter 6 of The Singularity is Near, moving soldiers away from battle is a rapidly growing trend.

Space exploration. NASA is building self-understanding into the software controlling its unmanned spacecraft. Because Mars is about three light-minutes from Earth, and Jupiter around forty light-minutes (depending on the exact position of the planets), communication between spacecraft headed there and earthbound controllers is significantly delayed. For this reason it's important that the software controlling these missions have the capability of performing its own tactical decision making. To accomplish this NASA software is being designed to include a model of the software's own capabilities and those of the spacecraft, as well as the challenges each mission is likely to encounter. Such AI-based systems are capable of reasoning through new situations rather than just following preprogrammed rules. This approach enabled the craft Deep Space One in 1999 to use its own technical knowledge to devise a series of original plans to overcome a stuck switch that threatened to destroy its mission of exploring an asteroid.29 The AI system's first plan failed to work, but its second plan saved the mission. "These systems have a commonsense model of the physics of their internal components," explains Brian Williams, coinventor of Deep Space One's autonomous software and now a scientist at MIT's Space Systems and AI laboratories. "[The spacecraft] can reason from that model to determine what is wrong and to know how to act."

Using a network of computers, NASA used GAs to evolve an antenna design for three Space Technology 5 satellites that will study the Earth's magnetic field. Millions of possible designs competed in the simulated evolution. According to NASA scientist and project leader Jason Lohn, "We are now using the [GA] software to design tiny microscopic machines, including gyroscopes, for spaceflight navigation. The software also may invent designs that no human designer would ever think of."30

Another NASA AI system learned on its own to distinguish stars from galaxies in very faint images with an accuracy surpassing that of human astronomers.

New land-based robotic telescopes are able to make their own decisions on where to look and how to optimize the likelihood of finding desired phenomena. Called "autonomous, semi-intelligent observatories," the systems can adjust to the weather, notice items of interest, and decide on their own to track them. They are able to detect very subtle phenomena, such as a star blinking for a nanosecond, which may indicate a small asteroid in the outer regions of our solar system passing in front of the light from that star.31 One such system, called the Moving Object and Transient Event Search System (MOTESS), has identified on its own 180 new asteroids and several comets during its first two years of operation. "We have an intelligent observing system," explained University of Exeter astronomer Alasdair Allan. "It thinks and reacts for itself, deciding whether something it has discovered is interesting enough to need more observations. If more observations are needed, it just goes ahead and gets them."

Similar systems are used by the military to automatically analyze data from spy satellites. Current satellite technology is capable of observing ground-level features about an inch in size and is not affected by bad weather, clouds, or darkness.32 The massive amount of data continually generated would not be manageable without automated image recognition programmed to look for relevant developments.

Medicine. If you obtain an electrocardiogram (ECG), your doctor is likely to receive an automated diagnosis using pattern recognition applied to ECG recordings. My own company (Kurzweil Technologies) is working with United Therapeutics to develop a new generation of automated ECG analysis for long-term unobtrusive monitoring (via sensors embedded in clothing and wireless communication using a cell phone) of the early warning signs of heart disease.33 Other pattern-recognition systems are used to diagnose conditions from a variety of imaging data.

Every major drug developer is using AI programs to do pattern recognition and intelligent data mining in the development of new drug therapies. For example, SRI International is building flexible knowledge bases that encode everything we know about a dozen disease agents, including tuberculosis and H. pylori (the bacterium that causes ulcers).34 The goal is to apply intelligent data-mining tools (software that can search for new relationships in data) to find new ways to kill or disrupt the metabolisms of these pathogens.

Similar systems are being applied to the automatic discovery of new therapies for other diseases, as well as understanding the function of genes and their roles in disease.35 For example, Abbott Laboratories claims that six human researchers in one of its new labs equipped with AI-based robotic and data-analysis systems are able to match the results of two hundred scientists in its older drug-development labs.36

Men with elevated prostate-specific antigen (PSA) levels typically undergo surgical biopsy, but about 75 percent of these men do not have prostate cancer. A new test, based on pattern recognition of proteins in the blood, would reduce this false positive rate to about 29 percent.37 The test is based on an AI program designed by Correlogic Systems in Bethesda, Maryland, and the accuracy is expected to improve further with continued development.

Pattern recognition applied to protein patterns has also been used in the detection of ovarian cancer. The best contemporary test for ovarian cancer, called CA-125, employed in combination with ultrasound, misses almost all early-stage tumors. "By the time it is now diagnosed, ovarian cancer is too often deadly," says Emanuel Petricoin III, codirector of the Clinical Proteomics Program run by the FDA and the National Cancer Institute. Petricoin is the lead developer of a new AI-based test looking for unique patterns of proteins found only in the presence of cancer. In an evaluation involving hundreds of blood samples, the test was, according to Petricoin, "an astonishing 100% accurate in detecting cancer, even at the earliest stages."38

About 10 percent of all Pap-smear slides in the United States are analyzed by a self-learning AI program called FocalPoint, developed by TriPath Imaging. The developers started out by interviewing pathologists on the criteria they use. The AI system then continued to learn by watching expert pathologists. Only the best human diagnosticians were allowed to be observed by the program. "That's the advantage of an expert system," explains Bob Schmidt, TriPath's technical product manager. "It allows you to replicate your very best people."

Ohio State University Health System has developed a computerized physician order-entry (CPOE) system based on an expert system with extensive knowledge across multiple specialties.39 The system automatically checks every order for possible allergies in the patient, drug interactions, duplications, drug restrictions, dosing guidelines, and appropriateness given information about the patient from the hospital's laboratory and radiology departments.

Science and math. A "robot scientist" has been developed at the University of Wales that combines an AI-based system capable of formulating original theories, a robotic system that can automatically carry out experiments, and a reasoning engine to evaluate results. The researchers provided their creation with a model of gene expression in yeast. The system "automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments using a laboratory robot, interprets the results to falsify hypotheses inconsistent with the data, and then repeats the cycle."40 The system is capable of improving its performance by learning from its own experience. The experiments designed by the robot scientist were one-third as expensive as those designed by human scientists. A test of the machine against a group of human scientists showed that the discoveries made by the machine were comparable to those made by the humans.

Mike Young, director of biology at the University of Wales, was one of the human scientists who lost to the machine. He explains that "the robot did beat me, but only because I hit the wrong key at one point."

A long-standing conjecture in algebra (the Robbins conjecture) was finally proved by an AI system, the EQP theorem prover, at Argonne National Laboratory. Human mathematicians called the proof "creative."

Business, finance, and manufacturing. Companies in every industry are using AI systems to control and optimize logistics, detect fraud and money laundering, and perform intelligent data mining on the hoard of information they gather each day. Wal-Mart, for example, gathers vast amounts of information from its transactions with shoppers. AI-based tools using neural nets and expert systems review this data to provide market-research reports for managers. This intelligent data mining allows them to make remarkably accurate predictions of the inventory required for each product in each store for each day.41

AI-based programs are routinely used to detect fraud in financial transactions. Future Route, an English company, for example, offers iHex, based on AI routines developed at Oxford University, to detect fraud in credit-card transactions and loan applications.42 The system continuously generates and updates its own rules based on its experience. First Union Home Equity Bank in Charlotte, North Carolina, uses Loan Arranger, a similar AI-based system, to decide whether to approve mortgage applications.43

The NASDAQ similarly uses a learning program called the Securities Observation, News Analysis, and Regulation (SONAR) system to monitor all trades for fraud as well as the possibility of insider trading.44 As of the end of 2003 more than 180 incidents had been detected by SONAR and referred to the U.S. Securities and Exchange Commission and Department of Justice. These included several cases that later received significant news coverage.

Ascent Technology, founded by Patrick Winston, who directed MIT's AI Lab from 1972 through 1997, has designed a GA-based system called SmartAirport Operations Center (SAOC) that can optimize the complex logistics of an airport, such as balancing work assignments of hundreds of employees, making gate and equipment assignments, and managing a myriad of other details.45 Winston points out that "figuring out ways to optimize a complicated situation is what genetic algorithms do." SAOC has raised productivity by approximately 30 percent in the airports where it has been implemented.

Ascent's first contract was to apply its AI techniques to managing the logistics for the 1991 Desert Storm campaign in Iraq. DARPA claimed that AI-based logistic-planning systems, including the Ascent system, resulted in more savings than the entire government research investment in AI over several decades.46

A recent trend in software is for AI systems to monitor a complex software system's performance, recognize malfunctions, and determine the best way to recover automatically without necessarily informing the human user. The idea stems from the realization that, as software systems become more complex, they, like humans, will never be perfect and that eliminating all bugs is impossible. We humans use the same strategy: we don't expect to be perfect, but we usually try to recover from our inevitable mistakes. "We want to stand this notion of systems management on its head," says Armando Fox, the head of Stanford University's Software Infrastructures Group, who is working on what is now called "autonomic computing." Fox adds, "The system has to be able to set itself up, it has to optimize itself. It has to repair itself, and if something goes wrong, it has to know how to respond to external threats." IBM, Microsoft, and other software vendors are all developing systems that incorporate autonomic capabilities.
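
A minimal sketch of this monitor-and-recover pattern might look like the following; it illustrates the idea only, not any vendor's product, and the task and recovery functions passed in are hypothetical placeholders:

    # A tiny supervisor: run a task, detect failure, apply a recovery
    # action, and retry automatically, without involving the user.

    import time

    def supervise(task, recover, max_retries=3):
        """Run task(); on failure, attempt recovery and retry."""
        for attempt in range(1, max_retries + 1):
            try:
                return task()
            except Exception as err:
                print(f"attempt {attempt} failed ({err}); recovering")
                recover()
                time.sleep(1)  # back off briefly before retrying
        raise RuntimeError("automatic recovery failed; escalating to a human")

    # Example (hypothetical functions): supervise(read_sensor, reset_driver)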

Manufacturing and robotics. Computer-integrated manufacturing (CIM) increasingly employs AI techniques to optimize the use of resources, streamline logistics, and reduce inventories through just-in-time purchasing of parts and supplies. A new trend in CIM systems is to use "case-based reasoning" rather than hard-coded, rule-based expert systems. Such reasoning codes knowledge as "cases," which are examples of problems with solutions. Initial cases are usually designed by the engineers, but the key to a successful case-based reasoning system is its ability to gather new cases from actual experience. The system is then able to apply the reasoning from its stored cases to new situations.
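
Here is a minimal sketch of case-based reasoning; the cases, features, and solutions are hypothetical, and real systems use far richer case representations and similarity measures:

    # Solved cases are stored as (problem-features, solution) pairs. A new
    # problem is matched to the most similar stored case, its solution is
    # reused, and the newly solved problem is retained as a fresh case.

    CASES = [
        ({"material": "steel", "thickness": 3}, "drill at low speed"),
        ({"material": "aluminum", "thickness": 1}, "drill at high speed"),
    ]

    def similarity(a, b):
        """Count the features on which two problem descriptions agree."""
        return sum(1 for k in a if a.get(k) == b.get(k))

    def solve(problem):
        """Retrieve the most similar stored case and reuse its solution."""
        best_features, solution = max(CASES, key=lambda c: similarity(c[0], problem))
        CASES.append((problem, solution))  # gather the new case from experience
        return solution

    print(solve({"material": "steel", "thickness": 4}))  # -> drill at low speed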

Robots are extensively used in manufacturing. The latest generation of robots uses flexible AI-based machine-vision systems—from companies such as Cognex Corporation in Natick, Massachusetts—that can respond flexibly to varying conditions. This reduces the need for precise setup for the robot to operate correctly. Brian Carlisle, CEO of Adept Technologies, a Livermore, California, factory-automation company, points out that "even if labor costs were eliminated [as a consideration], a strong case can still be made for automating with robots and other flexible automation. In addition to quality and throughput, users gain by enabling rapid product changeover and evolution that can't be matched with hard tooling."

One of AI's leading roboticists, Hans Moravec, has founded a company called Seegrid to apply his machine-vision technology to applications in manufacturing, materials handling, and military missions.47 Moravec's software enables a device (a robot or just a material-handling cart) to walk or roll through an unstructured environment and in a single pass build a reliable "voxel" (three-dimensional pixel) map of the environment. The robot can then use the map and its own reasoning ability to determine an optimal and obstacle-free path to carry out its assigned mission.

This technology enables autonomous carts to transfer materials throughout a manufacturing process without the high degree of preparation required with conventional preprogrammed robotic systems. In military situations autonomous vehicles could carry out precise missions while adjusting to rapidly changing environments and battlefield conditions.
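
To illustrate the path-finding half of this idea, here is a minimal sketch over a two-dimensional occupancy grid (a voxel map collapsed to 2-D for brevity), using breadth-first search; this is one standard choice for illustration, not Seegrid's actual algorithm:

    # Find a shortest obstacle-free path on an occupancy grid.

    from collections import deque

    def find_path(grid, start, goal):
        """grid[r][c] == 1 means occupied; returns a list of cells or None."""
        rows, cols = len(grid), len(grid[0])
        frontier, came_from = deque([start]), {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:       # walk back to the start
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                    came_from[(nr, nc)] = cell
                    frontier.append((nr, nc))
        return None  # no obstacle-free path exists

    grid = [[0, 1, 0],
            [0, 1, 0],
            [0, 0, 0]]
    print(find_path(grid, (0, 0), (0, 2)))  # routes around the wall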

Machine vision is also improving the ability of robots to interact with humans. Using small, inexpensive cameras, head- and eye-tracking software can sense where a human user is, allowing robots, as well as virtual personalities on a screen, to maintain eye contact, a key element for natural interactions. Head- and eye-tracking systems have been developed at Carnegie Mellon University and MIT and are offered by small companies such as Seeing Machines of Australia.

An impressive demonstration of machine vision was a vehicle that was driven by an AI system with no human intervention for almost the entire distance from Washington, D.C., to San Diego.48 Bruce Buchanan, computer-science professor at the University of Pittsburgh and president of the American Association for Artificial Intelligence, pointed out that this feat would have been "unheard of 10 years ago."

Palo Alto Research Center (PARC) is developing a swarm of robots that can navigate in complex environments, such as a disaster zone, and find items of interest, such as humans who may be injured. In a September 2004 demonstration at an AI conference in San Jose, PARC researchers showed a group of self-organizing robots operating on a mock but realistic disaster area.49 The robots moved over the rough terrain, communicated with one another, used pattern recognition on images, and detected body heat to locate humans.

Speech and language. Dealing naturally with language is the most challenging task of all for artificial intelligence. No simple tricks, short of fully mastering the principles of human intelligence, will allow a computerized system to convincingly emulate human conversation, even if restricted to just text messages. This was Turing's enduring insight in designing his eponymous test based entirely on written language.

Although not yet at human levels, natural-language-processing systems are making solid progress. Search engines have become so popular that "Google" has gone from a proper noun to a common verb, and its technology has revolutionized research and access to knowledge. Google and other search engines use AI-based statistical-learning methods and logical inference to determine the ranking of links. The most obvious failing of these search engines is their inability to understand the context of words. Although an experienced user learns how to design a string of keywords to find the most relevant sites (for example, a search for "computer chip" is likely to avoid references to potato chips that a search for "chip" alone might turn up), what we would really like to be able to do is converse with our search engines in natural language. Microsoft has developed a natural-language search engine called Ask MSR (Ask MicroSoft Research), which actually answers natural-language questions such as "When was Mickey Mantle born?"50 After the system parses the sentence to determine the parts of speech (subject, verb, object, adjective and adverb modifiers, and so on), a special search engine then finds matches based on the parsed sentence. The found documents are searched for sentences that appear to answer the question, and the possible answers are ranked. At least 75 percent of the time, the correct answer is in the top three ranked positions, and incorrect answers are usually obvious (such as "Mickey Mantle was born in 3"). The researchers hope to include knowledge bases that will lower the rank of many of the nonsensical answers.
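
A toy sketch of this parse-search-rank pipeline follows; it is not Microsoft's actual code, and the corpus, pattern, and ranking are invented for illustration:

    # Turn a question into an answer template, pull matching candidates
    # out of a small corpus, and rank candidates by how often they occur.

    import re
    from collections import Counter

    CORPUS = [
        "Mickey Mantle was born in 1931 in Oklahoma.",
        "Mickey Mantle was born in 1931 and died in 1995.",
        "Mickey Mantle was born in 3",  # a nonsensical candidate, as in the text
    ]

    def answer(question):
        # Crude parse: "When was X born?" becomes "X was born in <answer>".
        m = re.match(r"When was (.+) born\?", question)
        if not m:
            return None
        template = re.escape(m.group(1)) + r" was born in (\S+)"
        votes = Counter()
        for doc in CORPUS:
            for candidate in re.findall(template, doc):
                votes[candidate.strip(".")] += 1
        return votes.most_common(3)  # ranked candidate answers

    print(answer("When was Mickey Mantle born?"))  # -> [('1931', 2), ('3', 1)]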

Microsoft researcher Eric Brill, who has led research on Ask MSR, has also attempted an even more difficult task: building a system that provides answers of about fifty words to more complex questions, such as, "How are the recipients of the Nobel Prize selected?" One of the strategies used by this system is to find an appropriate FAQ section on the Web that answers the query.

Natural-language systems combined with large-vocabulary, speaker-independent (that is, responsive to any speaker) speech recognition over the phone are entering the marketplace to conduct routine transactions. You can talk to British Airways' virtual travel agent about anything you like as long as it has to do with booking flights on British Airways.51 You're also likely to talk to a virtual person if you call Verizon for customer service, or Charles Schwab or Merrill Lynch to conduct financial transactions. These systems, while they can be annoying to some people, are reasonably adept at responding appropriately to the often ambiguous and fragmented way people speak. Microsoft and other companies are offering systems that allow a business to create virtual agents to book reservations for travel and hotels and conduct routine transactions of all kinds through two-way, reasonably natural voice dialogues.

Not every caller is satisfied with the ability of these virtual agents to get the job done, but most systems provide a means to get a human on the line. Companies using these systems report that they reduce the need for human service agents by up to 80 percent. Aside from the money saved, reducing the size of call centers has management benefits. Call-center jobs have very high turnover rates because of low job satisfaction.

It's said that men are loath to ask others for directions, but car vendors are betting that both male and female drivers will be willing to ask their own car for help in getting to their destination. In 2005 the Acura RL and Honda Odyssey will offer a system from IBM that allows users to converse with their cars.52 Driving directions will include street names (for example, "turn left on Main Street, then right on Second Avenue"). Users can ask such questions as, "Where is the nearest Italian restaurant?" or they can enter specific locations by voice, ask for clarifications on directions, and give commands to the car itself (such as "turn up the air conditioning"). The Acura RL will also track road conditions and highlight traffic congestion on its screen in real time. The speech recognition is claimed to be speaker-independent and to be unaffected by engine sound, wind, and other noises. The system will reportedly recognize 1.7 million street and city names, in addition to nearly one thousand commands.

Computer language translation continues to improve gradually. Because this is a Turing-level task—that is, it requires full human-level understanding of language to perform at human levels—it will be one of the last application areas to compete with human performance. Franz Josef Och, a computer scientist at the University of Southern California, has developed a technique that can generate a new language-translation system between any pair of languages in a matter of hours or days.53 All he needs is a "Rosetta stone"—that is, text in one language and the translation of that text in the other language—although he needs millions of words of such translated text. Using a self-organizing technique, the system is able to develop its own statistical models of how text is translated from one language to the other and develops these models in both directions.

This contrasts with other translation systems, in which linguists painstakingly code grammar rules with long lists of exceptions to each rule. Och's system recently received the highest score in a competition of translation systems conducted by the U.S. Commerce Department's National Institute of Standards and Technology.
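
The flavor of such self-organizing translation models can be conveyed with a minimal sketch in the spirit of the classic IBM Model 1, a standard starting point for statistical translation (Och's actual systems are far more sophisticated); from a tiny "Rosetta stone" of sentence pairs, expectation-maximization learns word-translation probabilities with no hand-coded grammar rules:

    # Learn word-translation probabilities t(f|e) from parallel sentences.

    from collections import defaultdict

    pairs = [("das haus", "the house"),
             ("das buch", "the book"),
             ("ein buch", "a book")]
    corpus = [(f.split(), e.split()) for f, e in pairs]

    t = defaultdict(lambda: 0.25)  # uniform initial probabilities

    for _ in range(20):  # EM iterations
        counts = defaultdict(float)
        totals = defaultdict(float)
        for fs, es in corpus:
            for f in fs:
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    frac = t[(f, e)] / norm  # expected alignment count
                    counts[(f, e)] += frac
                    totals[e] += frac
        for (f, e), c in counts.items():
            t[(f, e)] = c / totals[e]

    print(round(t[("haus", "house")], 2))  # approaches 1.0
    print(round(t[("das", "the")], 2))     # learned from co-occurrence alone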

Entertainment and sports. In an amusing and intriguing application of GAs, Oxford scientist Torsten Reil created animated creatures with simulated joints and muscles and a neural net for a brain. He then assigned them a task: to walk. He used a GA to evolve this capability, which involved seven hundred parameters. "If you look at that system with your human eyes, there's no way you can do it on your own, because the system is just too complex," Reil points out. "That's where evolution comes in."54

While some of the evolved creatures walked in a smooth and convincing way, the research demonstrated a well-known attribute of GAs: you get what you ask for. Some creatures figured out novel ways of passing for walking. According to Reil, "We got some creatures that didn't walk at all, but had these very strange ways of moving forward: crawling or doing somersaults."

Software is being developed that can automatically extract excerpts from a video of a sports game that show the more important plays.55 A team at Trinity College in Dublin is working on table-based games like pool, in which software tracks the location of each ball and is programmed to identify when a significant shot has been made. A team at the University of Florence is working on soccer. This software tracks the location of each player and can determine the type of play being made (such as free kicking or attempting a goal), when a goal is achieved, when a penalty is earned, and other key events.

The Digital Biology Interest Group at University College in London is designing Formula One race cars by breeding them using GAs.56

The AI winter is long since over. We are well into the spring of narrow AI. Most of the examples above were research projects just ten to fifteen years ago. If all the AI systems in the world suddenly stopped functioning, our economic infrastructure would grind to a halt. Your bank would cease doing business. Most transportation would be crippled. Most communications would fail. This was not the case a decade ago. Of course, our AI systems are not smart enough—yet—to organize such a conspiracy.

Strong AI

If you understand something in only one way, then you don't really understand it at all. This is because, if something goes wrong, you get stuck with a thought that just sits in your mind with nowhere to go. The secret of what anything means to us depends on how we've connected it to all the other things we know. This is why, when someone learns 'by rote,' we say that they don't really understand. However, if you have several different representations then, when one approach fails you can try another. Of course, making too many indiscriminate connections will turn a mind to mush. But well-connected representations let you turn ideas around in your mind, to envision things from many perspectives until you find one that works for you. And that's what we mean by thinking! —Marvin Minsky57

Advancing computer performance is like water slowly flooding the landscape. A half century ago it began to drown the lowlands, driving out human calculators and record clerks, but leaving most of us dry. Now the flood has reached the foothills, and our outposts there are contemplating retreat. We feel safe on our peaks, but, at the present rate, those too will be submerged within another half century. I propose that we build Arks as that day nears, and adopt a seafaring life! For now, though, we must rely on our representatives in the lowlands to tell us what water is really like.

Our representatives on the foothills of chess and theorem-proving report signs of intelligence. Why didn't we get similar reports decades before, from the lowlands, as computers surpassed humans in arithmetic and rote memorization? Actually, we did, at the time. Computers that calculated like thousands of mathematicians were hailed as "giant brains," and inspired the first generation of AI research. After all, the machines were doing something beyond any animal, that needed human intelligence, concentration and years of training. But it is hard to recapture that magic now. One reason is that computers' demonstrated stupidity in other areas biases our judgment. Another relates to our own ineptitude. We do arithmetic or keep records so painstakingly and externally, that the small mechanical steps in a long calculation are obvious, while the big picture often escapes us. Like Deep Blue's builders, we see the process too much from the inside to appreciate the subtlety that it may have on the outside. But there is a non-obviousness in snowstorms or tornadoes that emerge from the repetitive arithmetic of weather simulations, or in rippling tyrannosaur skin from movie animation calculations. We rarely call it intelligence, but "artificial reality" may be an even more profound concept than artificial intelligence (Moravec 1998).

The mental steps underlying good human chess playing and theorem proving are complex and hidden, putting a mechanical interpretation out of reach. Those who can follow the play naturally describe it instead in mentalistic language, using terms like strategy, understanding and creativity. When a machine manages to be simultaneously meaningful and surprising in the same rich way, it too compels a mentalistic interpretation. Of course, somewhere behind the scenes, there are programmers who, in principle, have a mechanical interpretation. But even for them, that interpretation loses its grip as the working program fills its memory with details too voluminous for them to grasp.

As the rising flood reaches more populated heights, machines will begin to do well in areas a greater number can appreciate. The visceral sense of a thinking presence in machinery will become increasingly widespread. When the highest peaks are covered, there will be machines that can interact as intelligently as any human on any subject. The presence of minds in machines will then become self-evident. —Hans Moravec58

Because of the exponential nature of progress in information-based technologies, performance often shifts quickly from pathetic to daunting. In many diverse realms, as the examples in the previous section make clear, the performance of narrow AI is already impressive. The range of intelligent tasks in which machines can now compete with human intelligence is continually expanding. In a cartoon in The Age of Spiritual Machines, a defensive "human race" is seen writing out signs that state what only people (and not machines) can do.59 Littered on the floor are the signs the human race has already discarded, because machines can now perform these functions: diagnose an electrocardiogram, compose in the style of Bach, recognize faces, guide a missile, play Ping-Pong, play master chess, pick stocks, improvise jazz, prove important theorems, and understand continuous speech. Back in 1999 these tasks were no longer solely the province of human intelligence; machines could do them all.

On the wall behind the man symbolizing the human race were signs he had written out describing the tasks that were still the sole province of humans: have common sense, review a movie, hold press conferences, translate speech, clean a house, and drive cars. If we were to redesign this cartoon in a few years, some of these signs would also be likely to end up on the floor. When CYC reaches one hundred million items of commonsense knowledge, perhaps human superiority in the realm of commonsense reasoning won't be so clear.

The era of household robots, although still fairly primitive today, has already started. Ten years from now, it's likely we will consider "clean a house" as within the capabilities of machines. As for driving cars, robots with no human intervention have already driven nearly across the United States on ordinary roads with other normal traffic. We are not yet ready to turn over all steering wheels to machines, but there are serious proposals to create electronic highways on which cars (with people in them) will drive by themselves.

The three tasks that have to do with human-level understanding of natural language—reviewing a movie, holding a press conference, and translating speech—are the most difficult. Once we can take down these signs, we'll have Turing-level machines, and the era of strong AI will have started.

This era will creep up on us. As long as there are any discrepancies between human and machine performance—areas in which humans outperform machines—strong AI skeptics will seize on these differences. But our experience in each area of skill and knowledge is likely to follow that of Kasparov. Our perceptions of performance will shift quickly from pathetic to daunting as the knee of the exponential curve is reached for each human capability.

How will strong AI be achieved? Most of the material in this book is intended to lay out the fundamental requirements for both hardware and software and explain why we can be confident that these requirements will be met in nonbiological systems. In 1999 it was still controversial whether the exponential growth in the price-performance of computation would continue long enough to achieve hardware capable of emulating human intelligence. There has been so much progress in developing the technology for three-dimensional computing over the past five years that relatively few knowledgeable observers now doubt that this will happen. Even just taking the semiconductor industry's published ITRS road map, which runs to 2018, we can project human-level hardware at reasonable cost by that year.60

I've stated the case, in chapter 4 of The Singularity is Near, for why we can have confidence that we will have detailed models and simulations of all regions of the human brain by the late 2020s. Until recently, our tools for peering into the brain did not have the spatial and temporal resolution, bandwidth, or price-performance to produce adequate data to create sufficiently detailed models. This is now changing. The emerging generation of scanning and sensing tools can analyze and detect neurons and neural components with exquisite accuracy, while operating in real time.

Future tools will provide far greater resolution and capacity. By the 2020s, we will be able to send scanning and sensing nanobots into the capillaries of the brain to scan it from inside. We've shown the ability to translate the data from diverse sources of brain scanning and sensing into models and computer simulations that hold up well to experimental comparison with the performance of the biological versions of these regions. We already have compelling models and simulations for several important brain regions. As I argued in chapter 4 of The Singularity is Near, it's a conservative projection to expect detailed and realistic models of all brain regions by the late 2020s.

One simple statement of the strong AI scenario is that we will learn the principles of operation of human intelligence from reverse engineering all the brain's regions, and we will apply these principles to the brain-capable computing platforms that will exist in the 2020s. We already have an effective toolkit for narrow AI. Through the ongoing refinement of these methods, the development of new algorithms, and the trend toward combining multiple methods into intricate architectures, narrow AI will continue to become less narrow. That is, AI applications will have broader domains, and their performance will become more flexible. AI systems will develop multiple ways of approaching each problem, just as humans do. Most important, the new insights and paradigms resulting from the acceleration of brain reverse engineering will greatly enrich this set of tools on an ongoing basis. This process is well under way.

It's often said that the brain works differently from a computer, so we cannot translate our insights about brain function into workable nonbiological systems. This view completely ignores the field of self-organizing systems, for which we have a set of increasingly sophisticated mathematical tools. As I discussed in chapter 4 of The Singularity is Near, the brain differs in a number of important ways from conventional, contemporary computers. If you open up your Palm Pilot and cut a wire, there's a good chance you will break the machine. Yet we routinely lose many neurons and interneuronal connections with no ill effect, because the brain is self-organizing and relies on distributed patterns in which many specific details are not important.

When we get to the mid- to late 2020s, we will have access to a generation of extremely detailed brain-region models. Ultimately the toolkit will be greatly enriched with these new models and simulations and will encompass a full knowledge of how the brain works. As we apply the toolkit to intelligent tasks, we will draw upon the entire range of tools, some derived directly from brain reverse engineering, some merely inspired by what we know about the brain, and some not based on the brain at all but on decades of AI research.

Part of the brain's strategy is to learn information, rather than having knowledge hard-coded from the start. ("Instinct" is the term we use to refer to such innate knowledge.) Learning will be an important aspect of AI, as well. In my experience in developing pattern-recognition systems in character recognition, speech recognition, and financial analysis, providing for the AI's education is the most challenging and important part of the engineering. With the accumulated knowledge of human civilization increasingly accessible online, future AIs will have the opportunity to conduct their education by accessing this vast body of information.

The education of AIs will be much faster than that of unenhanced humans. The twenty-year time span required to provide a basic education to biological humans could be compressed into a matter of weeks or less. Also, because nonbiological intelligence can share its patterns of learning and knowledge, only one AI has to master each particular skill. As I pointed out, we trained one set of research computers to understand speech, but then the hundreds of thousands of people who acquired our speech-recognition software had to load only the already trained patterns into their computers.

One of the many skills that nonbiological intelligence will achieve with the completion of the human brain–reverse engineering project is sufficient mastery of language and shared human knowledge to pass the Turing test. The Turing test is important not so much for its practical significance but rather because it will demarcate a crucial threshold. As I have pointed out, there is no simple means to pass a Turing test, other than to convincingly emulate the flexibility, subtlety, and suppleness of human intelligence. Once we have captured that capability in our technology, it will be subject to engineering's ability to concentrate, focus, and amplify it.

Variations of the Turing test have been proposed. The annual Loebner Prize contest awards a bronze prize to the chatterbot (conversational bot) best able to convince human judges that it's human.61 The silver prize is based on Turing's original test, and it obviously has yet to be awarded. The gold prize is based on visual and auditory communication. In other words, the AI must have a convincing face and voice, as transmitted over a terminal, and thus it must appear to the human judge as if he or she is interacting with a real person over a videophone. On the face of it, the gold prize sounds more difficult. I've argued that it may actually be easier, because judges may pay less attention to the text portion of the language being communicated and could be distracted by a convincing facial and voice animation. In fact, we already have real-time facial animation, and while it is not quite up to these modified Turing standards, it's reasonably close. We also have very natural-sounding voice synthesis, which is often confused with recordings of human speech, although more work is needed on prosodics (intonation). We're likely to achieve satisfactory facial animation and voice production sooner than the Turing-level language and knowledge capabilities.

Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed.62 In 2002 I negotiated the rules for a Turing-test wager with Mitch Kapor on the Long Now Web site.63 The question underlying our twenty-thousand-dollar bet, the proceeds of which go to the charity of the winner's choice, was, "Will the Turing test be passed by a machine by 2029?" I said yes, and Kapor said no. It took us months of dialogue to arrive at the intricate rules to implement our wager. Simply defining "machine" and "human," for example, was not a straightforward matter. Is the human judge allowed to have any nonbiological thinking processes in his or her brain? Conversely, can the machine have any biological aspects?

Because the definition of the Turing test will vary from person to person, Turing test-capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved.

Edward Feigenbaum proposes a variation of the Turing test, which assesses not a machine's ability to pass for human in casual, everyday dialogue but its ability to pass for a scientific expert in a specific field.64 The Feigenbaum test (FT) may be more significant than the Turing test, because FT-capable machines, being technically proficient, will be capable of improving their own designs. Feigenbaum describes his test in this way:

Two players play the FT game. One player is chosen from among the elite practitioners in each of three pre-selected fields of natural science, engineering, or medicine. (The number could be larger, but for this challenge not greater than ten). Let's say we choose the fields from among those covered in the U.S. National Academy.... For example, we could choose astrophysics, computer science, and molecular biology. In each round of the game, the behavior of the two players (elite scientist and computer) is judged by another Academy member in that particular domain of discourse, e.g., an astrophysicist judging astrophysics behavior. Of course the identity of the players is hidden from the judge as it is in the Turing test. The judge poses problems, asks questions, asks for explanations, theories, and so on—as one might do with a colleague. Can the human judge choose, at better than chance level, which is his National Academy colleague and which is the computer?

Of course Feigenbaum overlooks the possibility that the computer might already be a National Academy colleague, but he is obviously assuming that machines will not yet have invaded institutions that today comprise exclusively biological humans. While it may appear that the FT is more difficult than the Turing test, the entire history of AI reveals that machines started with the skills of professionals and only gradually moved towards the language skills of a child. Early AI systems demonstrated their prowess initially in professional fields such as proving mathematical theorems and diagnosing medical conditions. These early systems would not be able to pass the FT, however, because they do not have the language skills and the flexible ability to model knowledge from different perspectives, which are needed to engage in the professional dialogue inherent in the FT.

This language ability is essentially the same ability needed in the Turing test. Reasoning in many technical fields is not necessarily more difficult than the commonsense reasoning engaged in by most human adults. I would expect that machines will pass the FT, at least in some disciplines, around the same time as they pass the Turing test. Passing the FT in all disciplines is likely to take longer, however. This is why I see the 2030s as a period of consolidation, as machine intelligence rapidly expands its skills and incorporates the vast knowledge bases of our biological human and machine civilization. By the 2040s we will have the opportunity to apply the accumulated knowledge and skills of our civilization to computational platforms that are billions of times more capable than unassisted biological human intelligence.

The advent of strong AI is the most important transformation this century will see. Indeed, it's comparable in importance to the advent of biology itself. It will mean that a creation of biology has finally mastered its own intelligence and discovered means to overcome its limitations. Once the principles of operation of human intelligence are understood, expanding its abilities will be conducted by human scientists and engineers whose own biological intelligence will have been greatly amplified through an intimate merger with nonbiological intelligence. Over time, the nonbiological portion will predominate.

I have discussed aspects of the impact of this transformation throughout The Singularity is Near and focus on it in chapter 6. Intelligence is the ability to solve problems with limited resources, including limitations of time. The Singularity will be characterized by the rapid cycle of human intelligence—increasingly nonbiological—capable of comprehending and leveraging its own powers.


1 As quoted by Douglas Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979).

2 The author runs a company, FATKAT (Financial Accelerating Transactions by Kurzweil Adaptive Technologies), which applies computerized pattern recognition to financial data to make stock-market investment decisions, http://www.FatKat.com.

3 See discussion in chapter 2 on price-performance improvements in computer memory and electronics in general.

4 Runaway AI refers to a scenario where, as Max More describes it, "superintelligent machines, initially harnessed for human benefit, soon leave us behind." Max More, "Embrace, Don't Relinquish, the Future," http://www.kurzweilai.net/articles/art0106.html?printable=1

See also Damien Broderick's description of a "seed AI": "A self-improving seed AI could run glacially slowly on a limited machine substrate. The point is, so long as it has the capacity to improve itself, at some point it will do so convulsively, bursting through any architectural bottlenecks to design its own improved hardware, maybe even build it (if it's allowed control of tools in a fabrication plant)." Damien Broderick, "Tearing toward the Spike," presented at "Australia at the Crossroads? Scenarios and Strategies for the Future" (April 31 - May 2, 2000), published on KurzweilAI.net May 7, 2001: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0173.html

5 David Talbot, "Lord of the Robots," Technology Review (April 2002).

6 Heather Havenstein writes that the "inflated notions spawned by science fiction writers about the convergence of humans and machines tarnished the image of AI in the 1980s because AI was perceived as failing to live up to its potential." Heather Havenstein, "Spring comes to AI winter: A thousand applications bloom in medicine, customer service, education and manufacturing," Computerworld, February 14, 2005, http://www.computerworld.com/softwaretopics/software/story/0,10801,99691,00.html This tarnished image led to "AI Winter," defined as "a term coined by Richard Gabriel for the (circa 1990-94?) crash of the wave of enthusiasm for the AI language Lisp and AI itself, following a boom in the 1980s." Duane Rettig wrote "…companies rode the great AI wave in the early 80's, when large corporations poured billions of dollars into the AI hype that promised thinking machines in 10 years. When the promises turned out to be harder than originally thought, the AI wave crashed, and Lisp crashed with it because of its association with AI. We refer to it as the AI Winter." Duane Rettig quoted in "AI Winter," http://c2.com/cgi/wiki?AiWinter

7 The General Problem Solver (GPS) computer program, written in 1957, was able to solve problems through rules that allowed the GPS to divide a problem's goals into subgoals, and then check if obtaining a particular subgoal would bring the GPS closer to solving the overall goal. In the early 1960s Thomas Evans wrote ANALOGY, a "program [that] solves geometric-analogy problems of the form A:B::C:? taken from IQ tests and college entrance exams." Boicho Kokinov and Robert M. French, Computational Models of Analogy-Making, in Nadel, L. (Ed.) Encyclopedia of Cognitive Science, Vol. 1, (London: Nature Publishing Group, 2003) pp. 113-118. See also A. Newell, J.C. Shaw, and H.A. Simon, "Report on a general problem-solving program," Proceedings of the International Conference on Information Processing, (Paris: UNESCO House, 1959) pp. 256-264; and Thomas Evans, "A Heuristic Program to Solve Geometric-Analogy Problems," in Semantic Information Processing, M. Minsky, Editor, (Cambridge, MA: MIT Press, 1968).

8 Sir Arthur Conan Doyle, "The Red-Headed League," 1890, available at http://www.eastoftheweb.com/short-stories/UBooks/RedHead.shtml.

9 V. Yu et al., "Antimicrobial Selection by a Computer: A Blinded Evaluation by Infectious Diseases Experts," JAMA 242.12 (1979): 1279–82.

10 Gary H. Anthes, "Computerizing Common Sense," Computerworld, April 8, 2002, http://www.computerworld.com/news/2002/story/0,11280,69881,00.html.

11 Kristen Philipkoski, "Now Here's a Really Big Idea," Wired News, November 25, 2002, http://www.wired.com/news/technology/0,1282,56374,00.html, reporting on Darryl Macer, "The Next Challenge Is to Map the Human Mind," Nature 420 (November 14, 2002): 121; see also a description of the project at http://www.biol.tsukuba.ac.jp/~macer/index.html.

12 Thomas Bayes, "An Essay Towards Solving a Problem in the Doctrine of Chances," published in 1763, two years after his death in 1761.

13 SpamBayes spam filter, http://spambayes.sourceforge.net.

14 Lawrence R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE 77 (1989): 257–86. For a mathematical treatment of Markov models, see http://jedlik.phy.bme.hu/~gerjanos/HMM/node2.html.

15 Kurzweil Applied Intelligence (KAI), founded by the author in 1982, was sold in 1997 for $100 million and is now part of ScanSoft (formerly called Kurzweil Computer Products, the author's first company, which was sold to Xerox in 1980), now a public company. KAI introduced the first commercially marketed large-vocabulary speech-recognition system in 1987 (Kurzweil Voice Report, with a ten-thousand-word vocabulary).

16 Here is the basic schema for a neural net algorithm. Many variations are possible, and the designer of the system needs to provide certain critical parameters and methods, detailed below.

Creating a neural-net solution to a problem involves the following steps:

  • Define the input.
  • Define the topology of the neural net (i.e., the layers of neurons and the connections between the neurons).
  • Train the neural net on examples of the problem.
  • Run the trained neural net to solve new examples of the problem.
  • Take your neural-net company public.

These steps (except for the last one) are detailed below:

The Problem Input

The problem input to the neural net consists of a series of numbers. This input can be:

  • In a visual pattern-recognition system, a two-dimensional array of numbers representing the pixels of an image; or
  • In an auditory (e.g., speech) recognition system, a two-dimensional array of numbers representing a sound, in which the first dimension represents parameters of the sound (e.g., frequency components) and the second dimension represents different points in time; or
  • In an arbitrary pattern-recognition system, an n-dimensional array of numbers representing the input pattern.

Defining the Topology

To set up the neural net, define the architecture of each neuron, which consists of:

  • Multiple inputs in which each input is "connected" to either the output of another neuron, or one of the input numbers.
  • Generally, a single output, which is connected either to the input of another neuron (which is usually in a higher layer), or to the final output.

Set up the First Layer of Neurons

  • Create N_0 neurons in the first layer. For each of these neurons, "connect" each of the multiple inputs of the neuron to "points" (i.e., numbers) in the problem input. These connections can be determined randomly or using an evolutionary algorithm (see below).
  • Assign an initial "synaptic strength" to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).

Set up the Additional Layers of Neurons

Set up a total of M layers of neurons. For each layer, set up the neurons in that layer. For layer_i:

  • Create N_i neurons in layer_i. For each of these neurons, "connect" each of the multiple inputs of the neuron to the outputs of the neurons in layer_(i–1) (see variations below).
  • Assign an initial "synaptic strength" to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).
  • The outputs of the neurons in layer_M are the outputs of the neural net (see variations below).

The Recognition Trials

How Each Neuron Works

Once the neuron is set up, it does the following for each recognition trial.

  • Each weighted input to the neuron is computed by multiplying the output of the other neuron (or initial input) that the input to this neuron is connected to by the synaptic strength of that connection.
  • All of these weighted inputs to the neuron are summed.
  • If this sum is greater than the firing threshold of this neuron, then this neuron is considered to fire and its output is 1. Otherwise, its output is 0 (see variations below).

Do the Following for Each Recognition Trial

For each layer, from layer_0 to layer_M: For each neuron in the layer:

  • Sum its weighted inputs (each weighted input = the output of the other neuron [or initial input] that the input to this neuron is connected to multiplied by the synaptic strength of that connection).
  • If this sum of weighted inputs is greater than the firing threshold for this neuron, set the output of this neuron = 1, otherwise set it to 0.

To Train the Neural Net

  • Run repeated recognition trials on sample problems.
  • After each trial, adjust the synaptic strengths of all the interneuronal connections to improve the performance of the neural net on this trial (see the discussion below on how to do this).
  • Continue this training until the accuracy rate of the neural net is no longer improving (i.e., reaches an asymptote).

Key Design Decisions

In the simple schema above, the designer of this neural-net algorithm needs to determine at the outset:

  • What the input numbers represent.
  • The number of layers of neurons.
  • The number of neurons in each layer. (Each layer does not necessarily need to have the same number of neurons.)
  • The number of inputs to each neuron in each layer. The number of inputs (i.e., interneuronal connections) can also vary from neuron to neuron and from layer to layer.
  • The actual "wiring" (i.e., the connections). For each neuron in each layer, this consists of a list of other neurons, the outputs of which constitute the inputs to this neuron. This represents a key design area. There are a number of possible ways to do this:
    • (i) Wire the neural net randomly; or
    • (ii) Use an evolutionary algorithm (see below) to determine an optimal wiring; or
    • (iii) Use the system designer's best judgment in determining the wiring.

  • The initial synaptic strengths (i.e., weights) of each connection. There are a number of possible ways to do this:
    • (i) Set the synaptic strengths to the same value; or
    • (ii) Set the synaptic strengths to different random values; or
    • (iii) Use an evolutionary algorithm to determine an optimal set of initial values; or
    • (iv) Use the system designer's best judgment in determining the initial values.

  • The firing threshold of each neuron.
  • Determine the output. The output can be:
    • (i) the outputs of the neurons in layer_M; or
    • (ii) the output of a single output neuron, the inputs of which are the outputs of the neurons in layer_M; or
    • (iii) a function of (e.g., a sum of) the outputs of the neurons in layer_M; or
    • (iv) another function of neuron outputs in multiple layers.

  • Determine how the synaptic strengths of all the connections are adjusted during the training of this neural net. This is a key design decision and is the subject of a great deal of research and discussion. There are a number of possible ways to do this:
    • (i) For each recognition trial, increment or decrement each synaptic strength by a (generally small) fixed amount so that the neural net's output more closely matches the correct answer. One way to do this is to try both incrementing and decrementing and see which has the more desirable effect. This can be time-consuming, so other methods exist for making local decisions on whether to increment or decrement each synaptic strength.
    • (ii) Other statistical methods exist for modifying the synaptic strengths after each recognition trial so that the performance of the neural net on that trial more closely matches the correct answer.

Note that neural-net training will work even if the answers to the training trials are not all correct. This allows using real-world training data that may have an inherent error rate. One key to the success of a neural net–based recognition system is the amount of data used for training. Usually a very substantial amount is needed to obtain satisfactory results. As with human students, the amount of time a neural net spends learning its lessons is a key factor in its performance.

Variations

Many variations of the above are feasible:

  • There are different ways of determining the topology. In particular, the interneuronal wiring can be set either randomly or using an evolutionary algorithm.
  • There are different ways of setting the initial synaptic strengths.
  • The inputs to the neurons in layer_i do not necessarily need to come from the outputs of the neurons in layer_(i–1). Alternatively, the inputs to the neurons in each layer can come from any lower layer, or from any layer at all.
  • There are different ways to determine the final output.
  • The method described above results in an "all or nothing" (1 or 0) firing, which is one form of nonlinearity. There are other nonlinear functions that can be used. Commonly a function is used that rises from 0 to 1 rapidly but smoothly. Also, the outputs can be numbers other than 0 and 1.
  • The different methods for adjusting the synaptic strengths during training represent key design decisions.
  • The above schema describes a "synchronous" neural net, in which each recognition trial proceeds by computing the outputs of each layer, starting with layer_0 through layer_M. In a true parallel system, in which each neuron is operating independently of the others, the neurons can operate "asynchronously" (i.e., independently). In an asynchronous approach, each neuron is constantly scanning its inputs and fires whenever the sum of its weighted inputs exceeds its threshold (or whatever its output function specifies).
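
To make the schema concrete, here is a compact, runnable sketch in Python: a single threshold neuron with random initial weights, trained with a simple increment-or-decrement rule (one realization of method (i) above), learning the logical OR function as a toy problem. Real applications use far larger nets and subtler training rules.

    import random
    random.seed(1)

    N_INPUTS = 2
    THRESHOLD, STEP = 0.5, 0.05

    # Assign initial "synaptic strengths" randomly.
    weights = [random.uniform(-1, 1) for _ in range(N_INPUTS)]

    def fire(inputs):
        """All-or-nothing neuron: output 1 if the weighted sum exceeds the threshold."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) > THRESHOLD else 0

    TRIALS = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR function

    # Run repeated recognition trials, nudging each synaptic strength so
    # that the output more closely matches the correct answer.
    for _ in range(200):
        for inputs, target in TRIALS:
            error = target - fire(inputs)
            for i, x in enumerate(inputs):
                weights[i] += STEP * error * x

    print([fire(inputs) for inputs, _ in TRIALS])  # -> [0, 1, 1, 1]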

17 See Chapter 4 for a detailed discussion of brain reverse-engineering. As one example of the progression, S. J. Thorpe writes, "We have really only just begun what will certainly be a long term project aimed at reverse engineering the primate visual system. For the moment, we have only explored some very simple architectures, involving essentially just feed-forward architectures involving a relatively small numbers of layers… In the years to come, we will strive to incorporate as many of the computational tricks used by the primate and human visual system as possible. More to the point, it seems that by adopting the spiking neuron approach, it will soon be possible to develop sophisticated systems capable of simulating very large neuronal networks in real time." S.J. Thorpe et al., "Reverse engineering of the visual system using networks of spiking neurons," Proceedings of the IEEE 2000 International Symposium on Circuits and Systems, IEEE press. IV: 405-408, http://www.sccn.ucsd.edu/~arno/mypapers/thorpe.pdf

18 T. Schoenauer et al. write, "Over the past years a huge diversity of hardware for artificial neural networks (ANN) has been designed… Today one can choose from a wide range of neural network hardware. Designs differ in terms of architectural approaches, such as neurochips, accelerator boards and multi-board neurocomputers, as well as concerning the purpose of the system, such as the ANN algorithm(s) and the system's versatility… Digital neurohardware can be classified by the:[sic] system architecture, degree of parallelism, typical neural network partition per processor, inter-processor communication network and numerical representation." T. Schoenauer, A. Jahnke, U. Roth, and H. Klar, "Digital Neurohardware: Principles and Perspectives," in Proc. Neuronale Netze in der Anwendung – Neural Networks in Applications NN'98, Magdeburg, invited paper (February 1998): 101-106, http://bwrc.eecs.berkeley.edu/People/kcamera/neural/papers/schoenauer98digital.pdf. See also: Yihua Liao, "Neural Networks in Hardware: A Survey," 2001, http://ailab.das.ucdavis.edu/~yihua/research/NNhardware.pdf

19 Here is the basic schema for a genetic (evolutionary) algorithm. Many variations are possible, and the designer of the system needs to provide certain critical parameters and methods, detailed below.

The Evolutionary Algorithm

Create N solution "creatures". Each one has:

  • A genetic code: a sequence of numbers that characterize a possible solution to the problem. The numbers can represent critical parameters, steps to a solution, rules, etc.
  • For each generation of evolution, do the following:
  • Do the following for each of the N solution creatures:
  • Apply this solution creature's solution (as represented by its genetic code) to the problem, or simulated environment. Rate the solution.
  • Pick the L solution creatures with the highest ratings to survive into the next generation.
  • Eliminate the (N–L) nonsurviving solution creatures.
  • Create (N–L) new solution creatures from the L surviving solution creatures by:

(i) Making copies of the L surviving creatures. Introduce small random variations into each copy; or

(ii) Create additional solution creatures by combining parts of the genetic code (using "sexual" reproduction, or otherwise combining portions of the chromosomes) from the L surviving creatures; or

(iii) Doing a combination of (i) and (ii).

  • Determine whether or not to continue evolving: Improvement = (highest rating in this generation)–(highest rating in the previous generation). If Improvement < Improvement Threshold then we're done.
  • The solution creature with the highest rating from the last generation of evolution has the best solution. Apply the solution defined by its genetic code to the problem.
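
To make the schema concrete, here is a toy Python rendering. It assumes the genetic code is a list of numbers, that the caller supplies a rating function and a generator of first-generation creatures, and that reproduction uses single-point "sexual" recombination plus small random variations; all names and parameter values are illustrative.

    import random

    def evolve(rate, random_code, N=60, L=15, improvement_threshold=1e-6,
               mutation=0.1, max_generations=200):
        # rate(code) scores a genetic code (higher is better);
        # random_code() builds one first-generation solution creature.
        population = [random_code() for _ in range(N)]
        best_rating = float("-inf")
        for _ in range(max_generations):
            # Rate every creature; the L highest-rated survive.
            survivors = sorted(population, key=rate, reverse=True)[:L]
            children = []
            while len(children) < N - L:  # replace the (N - L) eliminated creatures
                a, b = random.sample(survivors, 2)
                cut = random.randrange(len(a))          # "sexual" recombination point
                child = [g + random.gauss(0, mutation)  # small random variations
                         for g in a[:cut] + b[cut:]]
                children.append(child)
            population = survivors + children
            new_best = max(rate(c) for c in population)
            if new_best - best_rating < improvement_threshold:
                break  # Improvement < Improvement Threshold: we're done
            best_rating = new_best
        return max(population, key=rate)

    # Toy usage: evolve five numbers whose sum is as close to 10 as possible.
    best = evolve(rate=lambda code: -abs(sum(code) - 10),
                  random_code=lambda: [random.uniform(0, 5) for _ in range(5)])

Note how the key design decisions listed next (N, L, the improvement threshold, the meaning of the genetic code, the rating method, and the reproduction method) appear here as the parameters and the two caller-supplied functions.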

Key Design Decisions

In the simple schema above, the designer needs to determine at the outset:

  • Key parameters:
    • N
    • L
    • Improvement threshold
  • What the numbers in the genetic code represent and how the solution is computed from the genetic code.
  • A method for determining the N solution creatures in the first generation. In general, these need only be "reasonable" attempts at a solution. If these first-generation solutions are too far afield, the evolutionary algorithm may have difficulty converging on a good solution. It is often worthwhile to create the initial solution creatures in such a way that they are reasonably diverse. This will help prevent the evolutionary process from just finding a "locally" optimal solution.
  • How the solutions are rated.
  • How the surviving solution creatures reproduce.

Variations

Many variations of the above are feasible. For example:

  • There does not need to be a fixed number of surviving solution creatures (L) from each generation. The survival rule(s) can allow for a variable number of survivors.
  • There does not need to be a fixed number of new solution creatures created in each generation (N–L). The procreation rules can be independent of the size of the population. Procreation can be related to survival, thereby allowing the fittest solution creatures to procreate the most.
  • The decision as to whether or not to continue evolving can be varied. It can consider more than just the highest-rated solution creature from the most recent generation(s). It can also consider a trend that goes beyond just the last two generations.

20 Sam Williams, "When Machines Breed," August 12, 2004, http://www.salon.com/tech/feature/2004/08/12/evolvable_hardware/index_np.html.

21 Here is the basic scheme (algorithm description) of recursive search. Many variations are possible, and the designer of the system needs to provide certain critical parameters and methods, detailed below.

The Recursive Algorithm

Define a function (program) "Pick Best Next Step." The function returns a value of "SUCCESS" (we've solved the problem) or "FAILURE" (we didn't solve it). If it returns with a value of SUCCESS, then the function also returns the sequence of steps that solved the problem. Pick Best Next Step does the following:

  • Determine if the program can escape from continued recursion at this point. This bullet and the next two deal with this escape decision. First, determine if the problem has now been solved. Since this call to Pick Best Next Step probably came from the program calling itself, we may now have a satisfactory solution. Examples are:
    • (i) In the context of a game (for example, chess), the last move allows us to win (such as checkmate).
    • (ii) In the context of solving a mathematical theorem, the last step proves the theorem.
    • (iii) In the context of an artistic program (for example, a computer poet or composer), the last step matches the goals for the next word or note.

If the problem has been satisfactorily solved, the program returns with a value of "SUCCESS" and the sequence of steps that caused the success.

  • If the problem has not been solved, determine if a solution is now hopeless. Examples are:

    • (i) In the context of a game (such as chess), this move causes us to lose (checkmate for the other side).
    • (ii) In the context of solving a mathematical theorem, this step violates the theorem.
    • (iii) In the context of an artistic creation, this step violates the goals for the next word or note.
  • If the solution at this point has been deemed hopeless, the program returns with a value of "FAILURE."
  • If the problem has been neither solved nor deemed hopeless at this point of recursive expansion, determine whether or not the expansion should be abandoned anyway. This is a key aspect of the design and takes into consideration the limited amount of computer time we have to spend. Examples are:
    • (i) In the context of a game (such as chess), this move puts our side sufficiently "ahead" or "behind." Making this determination may not be straightforward and is the primary design decision. However, simple approaches (such as adding up piece values) can still provide good results. If the program determines that our side is sufficiently ahead, then Pick Best Next Step returns in a similar manner to a determination that our side has won (that is, with a value of "SUCCESS"). If the program determines that our side is sufficiently behind, then Pick Best Next Step returns in a similar manner to a determination that our side has lost (that is, with a value of "FAILURE").
    • (ii) In the context of solving a mathematical theorem, this step involves determining if the sequence of steps in the proof is unlikely to yield a proof. If so, then this path should be abandoned, and Pick Best Next Step returns in a similar manner to a determination that this step violates the theorem (that is, with a value of "FAILURE"). There is no "soft" equivalent of success. We can't return with a value of "SUCCESS" until we have actually solved the problem. That's the nature of math.
    • (iii) In the context of an artistic program (such as a computer poet or composer), this step involves determining if the sequence of steps (such as the words in a poem, notes in a song) is unlikely to satisfy the goals for the next step. If so, then this path should be abandoned, and Pick Best Next Step returns in a similar manner to a determination that this step violates the goals for the next step (that is, with a value of "FAILURE").
  • If Pick Best Next Step has not returned (because the program has neither determined success nor failure nor made a determination that this path should be abandoned at this point), then we have not escaped from continued recursive expansion. In this case, we now generate a list of all possible next steps at this point. This is where the precise statement of the problem comes in:
    • (i) In the context of a game (such as chess), this involves generating all possible moves for "our" side for the current state of the board. This involves a straightforward codification of the rules of the game.
    • (ii) In the context of finding a proof for a mathematical theorem, this involves listing the possible axioms or previously proved theorems that can be applied at this point in the solution.
    • (iii) In the context of a cybernetic art program, this involves listing the possible words/notes/line segments that could be used at this point.
  • For each such possible next step:
      • (i) Create the hypothetical situation that would exist if this step were implemented. In a game, this means the hypothetical state of the board. In a mathematical proof, this means adding this step (for example, axiom) to the proof. In an art program, this means, adding this word/note/line segment.
      • (ii) Now call Pick Best Next Step to examine this hypothetical situation. This is, of course, where the recursion comes in because the program is now calling itself.
      • (iii) If the above call to Pick Best Next Step returns with a value of "SUCCESS," then return from the call to Pick Best Next Step (that we are now in) also with a value of "SUCCESS." Otherwise consider the next possible step.

If all the possible next steps have been considered without finding a step that resulted in a return from the call to Pick Best Next Step with a value of "SUCCESS," then return from this call to Pick Best Next Step (that we are now in) with a value of "FAILURE."

END OF PICK BEST NEXT STEP

If the original call to Pick Best Next Step returns with a value of "SUCCESS," it will also return the correct sequence of steps:

    • (i) In the context of a game, the first step in this sequence is the next move you should make.
    • (ii) In the context of a mathematical proof, the full sequence of steps is the proof.
    • (iii) In the context of a cybernetic art program, the sequence of steps is your work of art.

If the original call to Pick Best Next Step returns with a value of "FAILURE," then you need to go back to the drawing board.

Key Design Decisions

In the simple schema above, the designer of the recursive algorithm needs to determine the following at the outset:

  • The key to a recursive algorithm is the determination in Pick Best Next Step of when to abandon the recursive expansion. This is easy when the program has achieved clear success (such as checkmate in chess or the requisite solution in a math or combinatorial problem) or clear failure. It is more difficult when a clear win or loss has not yet been achieved. Abandoning a line of inquiry before a well-defined outcome is necessary because otherwise the program might run for billions of years (or at least until the warranty on your computer runs out).
  • The other primary requirement for the recursive algorithm is a straightforward codification of the problem. In a game like chess that's easy, but in other situations a clear definition of the problem is not always so easy to come by. (A toy Python rendering of the schema follows.)
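
Here is that toy rendering. The problem object and its four methods (solved, hopeless, next_steps, apply) are hypothetical names standing in for the problem-specific codification discussed above, and a simple depth limit stands in for the harder "abandon the expansion anyway" decision:

    SUCCESS, FAILURE = True, False

    def pick_best_next_step(state, steps_so_far, problem, depth_limit=8):
        if problem.solved(state):        # escape: clear success (e.g., checkmate)
            return SUCCESS, steps_so_far
        if problem.hopeless(state):      # escape: clear failure
            return FAILURE, None
        if depth_limit == 0:             # escape: abandon the expansion anyway
            return FAILURE, None
        for step in problem.next_steps(state):   # all possible next steps here
            hypothetical = problem.apply(state, step)
            ok, steps = pick_best_next_step(hypothetical, steps_so_far + [step],
                                            problem, depth_limit - 1)
            if ok is SUCCESS:            # the recursion: the program calls itself
                return SUCCESS, steps
        return FAILURE, None             # every possible next step failed

A chess program, a theorem prover, and a cybernetic art program would differ only in how those four methods are codified, and in how cleverly the abandonment test replaces the blunt depth limit.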

22 See Kurzweil Cyberart, http://www.KurzweilCyberArt.com, for further description of Ray Kurzweil's Cybernetic Poet and to download a free copy of the program. See U.S. Patent # 6,647,395 "Poet Personalities," inventors: Ray Kurzweil and John Keklak. Abstract: "A method of generating a poet personality including reading poems, each of the poems containing text, generating analysis models, each of the analysis models representing one of poems and storing the analysis models in a personality data structure. The personality data structure further includes weights, each of the weights associated with each of the analysis models. The weights include integer values."

23 Ben Goertzel, The Structure of Intelligence, 1993, Springer-Verlag. Ben Goertzel, The Evolving Mind, 1993, Gordon and Breach. Ben Goertzel, Chaotic Logic, 1994, Plenum. Ben Goertzel, From Complexity to Creativity, 1997, Plenum. For a link to Ben Goertzel's books and essays, see http://www.goertzel.org/work.html

24 KurzweilAI.net (http://www.KurzweilAI.net) provides hundreds of articles by one hundred "big thinkers" and other features on "accelerating intelligence." The site offers a free daily or weekly newsletter on the latest developments in the areas covered by this book. To subscribe, enter your e-mail address (which is maintained in strict confidence and is not shared with anyone) on the home page.

25 John Gosney, Business Communications Company, "Artificial Intelligence: Burgeoning Applications in Industry," June 2003, http://www.bccresearch.com/comm/G275.html.

26 Kathleen Melymuka, "Good Morning, Dave . . . ," Computerworld, November 11, 2002, http://www.computerworld.com/industrytopics/defense/story/0,10801,75728,00.html.

27 JTRS Technology Awareness Bulletin, August 2004, http://jtrs.army.mil/sections/technicalinformation/fset_technical.html?tech_aware_2004-8.

28 Otis Port, Michael Arndt, and John Carey, "Smart Tools," spring 2003, http://www.businessweek.com/bw50/content/mar2003/a3826072.htm.

29 Wade Roush, "Immobots Take Control: From Photocopiers to Space Probes, Machines Injected with Robotic Self-Awareness Are Reliable Problem Solvers," Technology Review (December 2002/January 2003), http://www.occm.de/roush1202.pdf.

30 Jason Lohn quoted in NASA news release "NASA 'Evolutionary' Software Automatically Designs Antenna," http://www.nasa.gov/lb/centers/ames/news/releases/2004/04_55AR.html

31 Robert Roy Britt, "Automatic Astronomy: New Robotic Telescopes See and Think," June 4, 2003, http://www.space.com/businesstechnology/technology/automated_astronomy_030604.html.

32 H. Keith Melton, "Spies in the Digital Age," http://www.cnn.com/SPECIALS/cold.war/experience/spies/melton.essay.

33 "United Therapeutics (UT) is a biotechnology company focused on developing chronic therapies for life threatening conditions in three therapeutic areas: cardiovascular, oncology and infectious diseases" (http://www.unither.com). Kurzweil Technologies is working with UT to develop pattern recognition–based analysis from either "Holter" monitoring (twenty-four-hour recordings) or "Event" monitoring (thirty days or more).

34 Kristen Philipkoski, "A Map That Maps Gene Functions," Wired News, May 28, 2002, http://www.wired.com/news/medtech/0,1286,52723,00.html.

35 Jennifer Ouellette, "Bioinformatics Moves into the Mainstream," The Industrial Physicist (October/November 2003), http://www.sciencemasters.com/bioinformatics.pdf.

36 Port, Arndt, and Carey, "Smart Tools."

37 "Protein Patterns in Blood May Predict Prostate Cancer Diagnosis," National Cancer Institute, October 15, 2002, http://www.nci.nih.gov/newscenter/ProstateProteomics, reporting on E. F. Petricoin et al., "Serum Proteomic Patterns for Detection of Prostate Cancer," Journal of the National Cancer Institute 94 (2002): 1576–78.

38 Charlene Laino, "New Blood Test Spots Cancer," December 13, 2002, http://my.webmd.com/content/Article/56/65831.htm; Emanuel F. Petricoin III et al., "Use of Proteomic Patterns in Serum to Identify Ovarian Cancer," The Lancet 359.9306 (February 16, 2002): 572–77.

39 Mark Hagland, "Doctor's Orders," January 2003, http://www.healthcare-informatics.com/issues/2003/01_03/cpoe.htm.

40 Ross D. King et al., "Functional Genomic Hypothesis Generation and Experimentation by a Robot Scientist," Nature 427 (January 15, 2004): 247–52.

41 Port, Arndt, and Carey, "Smart Tools."

42 "Future Route Releases AI-Based Fraud Detection Product," August 18, 2004, http://www.finextra.com/fullstory.asp?id=12365.

43 John Hackett, "Computers Are Learning the Business," Collections World, April 24, 2001, http://www.creditcollectionsworld.com/news/042401_2.htm.

44 "Innovative Use of Artificial Intelligence, Monitoring NASDAQ for Potential Insider Trading and Fraud," AAAI press release, July 30, 2003, http://www.aaai.org/Pressroom/Releases/release-03-0730.html.

45 "Adaptive Learning, Fly the Brainy Skies," Wired News, March 2002, http://www.wired.com/wired/archive/10.03/everywhere.html?pg=2.

46 Introduction to Artificial Intelligence, EL 629, Maxwell Air Force Base, Air University Library course www.au.af.mil/au/aul/school/acsc/ai02.htm.

47 See www.Seegrid.com.

48 No Hands Across America Web site, http://cart.frc.ri.cmu.edu/users/hpm/project.archive/reference.file/nhaa.html, and "Carnegie Mellon Researchers Will Prove Autonomous Driving Technologies During a 3,000 Mile, Hands-off-the-Wheel Trip from Pittsburgh to San Diego," Carnegie Mellon press release, http://www-2.cs.cmu.edu/afs/cs/user/tjochem/www/nhaa/official_press_release.html; Robert J. Derocher, "Almost Human," September 2001, http://www.insight-mag.com/insight/01/09/col-2-pt-1-ClickCulture.htm.

49 "Search and Rescue Robots," Associated Press, September 3, 2004, http://www.smh.com.au/articles/2004/09/02/1093939058792.html?oneclick=true.

50 "From Factoids to Facts," The Economist, August 26, 2004, http://www.economist.com/science/displayStory.cfm?story_id=3127462.

51 Joe McCool, "Voice Recognition, It Pays to Talk," May 2003, http://www.bcs.org/BCS/Products/Publications/JournalsAndMagazines/ComputerBulletin/OnlineArchive/may03/voicerecognition.htm.

52 John Gartner, "Finally a Car That Talks Back," Wired News, September 2, 2004, http://www.wired.com/news/autotech/0,2554,64809,00.html?tw=wn_14techhead.

53 "Computer Language Translation System Romances the Rosetta Stone," Information Sciences Institute, USC School of Engineering (July 24, 2003), http://www.usc.edu/isinews/stories/102.html.

54 Torsten Reil quoted in Steven Johnson, "Darwin in a Box," Discover Magazine 24.8 (August 2003), http://www.discover.com/issues/aug-03/departments/feattech/

55 "Let Software Catch the Game for You," July 3, 2004, http://www.newscientist.com/news/news.jsp?id=ns99996097.

56 Michelle Delio, "Breeding Race Cars to Win," Wired News, June 18, 2004, http://www.wired.com/news/autotech/0,2554,63900,00.html.

57 Marvin Minsky, The Society of Mind (New York: Simon & Schuster, 1988).

58 Hans Moravec, "When Will Computer Hardware Match the Human Brain?" Journal of Evolution and Technology 1 (1998).

59 Ray Kurzweil, The Age of Spiritual Machines (New York: Viking, 1999), p. 156.

60 See Chapter 2 endnotes 22 and 23 on the International Technology Roadmap for Semiconductors.

61 "The First Turing Test," http://www.loebner.net/Prizef/loebner-prize.html.

62 Douglas R. Hofstadter, "A Coffeehouse Conversation on the Turing Test," May 1981, included in Ray Kurzweil, The Age of Intelligent Machines (Cambridge, Mass.: MIT Press, 1990), pp. 80–102, http://www.kurzweilai.net/meme/frame.html?main=/articles/art0318.html.

63 Ray Kurzweil "Why I Think I Will Win." And Mitch Kapor, "Why I Think I Will Win." Rules: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0373.html; Kapor: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0412.html; Kurzweil: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0374.html; Kurzweil "final word": http://www.kurzweilai.net/meme/frame.html?main=/articles/art0413.html.

64 Edward A. Feigenbaum, "Some Challenges and Grand Challenges for Computational Intelligence," Journal of the Association for Computing Machinery 50 (January 2003): 32–40.

© 2006 Ray Kurzweil

   
 

   
Mind·X Discussion About This Article:

It Wasn't Watson!
posted on 07/16/2006 12:20 AM by Trurl73

Brin's error-correcting paradigm (and my fondness for Doyle) moves me to this elementary observation:

Ray's reference to the Canon, whilst nice to see, has an undoubtedly unwitting misattribution.

Jabez Wilson, not Holmes's trusted biographer and colleague, made the comment to which he refers.

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 07/16/2006 6:20 PM by johnclavin

If we were to build a very powerful AI machine using the tools listed in the AI Toolkit (expert systems, Bayesian nets, Markov models, neural nets, genetic algorithms, and recursive search), we might have a machine that could fool a Turing test, but it would not have any self-awareness, creative intuition, or the ability to think outside the box. (pun intended!)
I think it is time to take a closer look at the work of Harold Cohen and other generative artists who might have ideas about finding creativity in a machine. Artistic creativity does not come from a paradigm that narrows down knowledge with some end result in mind; it comes from a random thought that is captured and developed. The AI Toolkit can be applied to developing creative ideas, but the spark has to come from unpredictable events.
I believe that research in this area could lead to a machine intelligence that will someday be self-aware and have free will. Then a machine will pass the Turing test with honor instead of being a programmed imitation of intelligence.

John Clavin

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 03/04/2007 11:39 PM by joanneB

It may be possible to create languages to communicate with other states of mind... and record the "conversations"...
http://vibepc.homestead.com/Emma_Lesson_1c.ppt

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 07/30/2006 6:41 AM by Abrob1006

It would not take 25 years to pass the Turing Test. I can guarantee you that we will do it within the next 10 years, as I am actively involved in such projects. And we'll launch it commercially years before that, after the beta-testing is done.

As far as invoking Self-awareness or True Consciousness in machines/Web is concerned, no one can give any time limit for that, although it's inevitable. Because it's not only a matter of intense human effort; it's pre-determined by the Will and Intelligence of the Universe and embedded in its Cosmic Evolution.

But it's highly probable that we may be able to build a self-aware Web/Network/Machine within 10 to 25 years. It won't be algorithmic in the sense we are familiar with in computer science and technology, but it will grow and manifest its self-awareness and other higher-order capabilities in the months and years after its birth/launch.

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 10/08/2006 10:33 AM by jack d

"As far as invoking Self-awareness or True Consciousness in machines/Web is concerned, no one can give any time limit for that although it's inevitable. Because it's not only a matter of intense human efforts but it's pre-determined by Will and Intellgence of the Universe and embedded in its Cosmic Evolution. "

Wow! Is this a theory or just picturesque philosophy?

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 10/13/2006 2:33 AM by jack d

I reckon it's already been done, and 'self-aware' machines are about as close as life on Pluto.

Spiritual test of intelligence
posted on 10/08/2006 8:42 AM by Timothy

I would be interested in Ray's view of the spiritual capacities of the spiritual machines (strong AI) he is talking about.

For example, deep states of meditation are states in which the human mind is tacitly aware of its own presence, or even tacitly aware of its direct environment (similar to athletes experiencing "flow"), while all thought processes have stopped.

The mind is still, in abeyance. But consciousness is not in abeyance; on the contrary, there's a strong sense of being, of presence.

I do not know how this state could be objectively tested, but maybe someone will be able to devise a Spiritual Test, not just a mental test like the Turing and Feigenbaum tests. These are tests that assess the development of mind, our level of thinking.

A machine can respond to and even interact intelligently with its environment, but will it be able to answer, and not just intellectually, the questions "Who am I?" or "What is consciousness?"

It seems to me that a truly intelligent and spiritual machine will innately have a longing to explore these questions and have them answered.

Will machines, once they awaken to true intelligence (strong AI), have a natural desire to answer these spiritual questions?

Will a new generation of (machine-)philosophers or rather, (machine-)Buddhists be born?

My Spiritual Test for strong AI would be simply to see if these spiritual questions, quite natural for humans, 'spontaneously' arise in our intelligent machines.

Are they even interested in spirituality, are they even interested in ethics, humanist thought, or practicing compassion?

It seems to me that a truly intelligent machine should demonstrate a natural concern for these things. Otherwise "intelligence" would mean cleverness, rather than intelligence.

Timothy Schoorel

Re: Spiritual test of intelligence
posted on 11/20/2006 5:10 PM by BigMTBrain

"It seems to me that a truly intelligent machine should demonstrate a natural concern for these things. Otherwise "intelligence" would mean cleverness, rather than intelligence. "

Don't forget that in your mind as well as the minds of others there's a lifetime's worth of meta data wrought by experience. The meta data is what forms and guides an entity's ethics, etc.

The answer to what concerns a superAI would depend completely on what kind of meta data you feed or make available to its engine.

Re: Spiritual test of intelligence
posted on 11/20/2006 10:13 PM by BigMTBrain

Ooops! Meant to say "data and metadata (data describing data)"

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 10/08/2006 10:38 AM by jack d

What we really need is somebody to explain WHY we need a machine to pass the Turing test. For the life of me, why we need to pass a test which shows that a machine can converse like a human (and let's face it, it probably can be passed in many basic circumstances today) is beyond me, as it demonstrates nothing more than it demonstrates.

Namely, that it is possible to teach computers to speak. NOT to understand or comprehend that they're speaking, but nonetheless to be able to conduct conversations, from the observer's perspective. And er... that's it.

Anybody else think of any significant reason why the Turing test is important?

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 10/08/2006 10:59 AM by B-Punk

I wonder how many humans would not pass a Turing test....

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 10/12/2006 5:35 PM by jack d

Exactly! Your starting point for a scientific test cannot be a metric that is ultimately subjective, i.e., the test is passed when a panel kind of thinks that, well, it's almost certain that, well, we have a human here.

Why we need a Turing Test
posted on 10/08/2006 11:57 AM by Timothy

We need something like a Turing Test because we want to transfer human intelligence and consciousness to a platform that is more stable and expandable than our biology.

The question is what human intelligence and consciousness are, in essence.

This is what we are trying to capture in, for example, the Turing Test.

When we've created Strong AI, a machine that is self-aware and intelligent like humans, we will have transcended many limitations that come with being biologically human.

In principle this is something we want: we want to transcend our limitations, and copying intelligence to a more solid (silicon) platform will in theory increase the chances of survival of human civilisation.

Either way, it seems that the advance of technology is an evolutionary imperative: we can hardly avoid it, even if we wanted to.

In short, passing the Turing Test would mean that a computer does not just utter some words but is capable of intelligent conversation. Therefore we would presume that this machine has actually achieved true intelligence and is not just faking it.

Re: Why we need a Turing Test
posted on 10/08/2006 5:37 PM by stereoheadfm

I agree we need a Turing Test because it offers a benchmark for how convincing an artificial intelligence seems. Also, you're quite correct that the essential questions regard what intelligence and consciousness are.

However, I fear that amid the issues we discuss we've neglected an overarching principle of consciousness, specifically that nobody has the authority to identify consciousness. I'm no skeptic, but it's worth noting that as far as I'm concerned, it's logically possible everyone in the world is a "machine" not deserving the title of consciousness.

In my personal opinion, there's a more valid "Sistine Chapel Creation of Adam Test," whereby the first conscious machine is the first one to generate the notion that we are fundamentally isolated: that it has no more reason to be convinced of our consciousness than we of its.

I admit that I agree with jack d: it's entirely possible to have a convincing conversation with an artificial intelligence with no capability for self-awareness, and that this would be less valid. Let me clarify that although I feel nobody has the ultimate authority to demonstrate consciousness, there are definitions by which we identify what we see.

Re: Why we need a Turing Test
posted on 10/12/2006 6:35 PM by jack d

I think you've hit the point, but in a sense you've fallen into the old trap. You CAN measure consciousness: anaesthetists do it all the time. All you need is a theory of consciousness and you can set up a mechanism of measurement.

The same applies throughout science; for instance, atomic theory. We can measure the width of an atom, but only by using equipment and measurements based upon theories that assume atomic theory to be correct.

We just don't have a very advanced theory of consciousness yet, but we do have theories.

Re: Why we need a Turing Test
posted on 10/12/2006 6:39 PM by Fred84

> You CAN measure consciousness

I'm not really contributing, but I'd say:

based on subconscious calculations, I think you are right

Re: Why we need a Turing Test
posted on 10/12/2006 5:37 PM by jack d

"In short, passing the Turing Test would mean that a computer does not just utter some words but is capable of intelligent conversation. Therefor we would presume that this machine has actually achieved true intelligence and is not just faking it. "

Is that true ? I though the whole point about the Turing Test is that it isnt concerned AT ALL if the machine is thinking. It is literally only concerned with external appearances - namely that if it sounds like its thinking then it must be thinking. I think you need to read the test and you'll see that this is the case.

Re: Why we need a Turing Test
posted on 10/12/2006 7:44 PM by Timothy

"The basis of the Turing test is that if the human Turing test judge is competent, then an entity requires human-level intelligence in order to pass the test."
(http://www.kurzweilai.net/meme/frame.html?main=/articles/art0374.html)

But of course, we will probably come up with all kinds of other tests for Strong AI precisely because we all will wonder: Has this machine really achieved intelligence? Is it really conscious? Does it really feel? Does it have conscious, subjective experiences, like we have?

Eventually I think most of us will accept that it has really achieved true intelligence and awareness. Just like we've generally accepted that 'other people' are intelligent and conscious beings.

Re: Why we need a Turing Test
posted on 10/12/2006 9:10 PM by i

By the time a machine passes the Turing Test, we'll have come to expect it and it won't seem like such a big deal.

The AI Effect: "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques.

"Every time we figure out a piece of it, it stops being magical; we say, Oh, that's just a computation."

- Rodney Brooks

"As soon as someone gets a computer to do it, people say: 'That's not what we meant by intelligence.' People subconsciously are trying to preserve for themselves some special role in the universe."

- Michael Kearns

Re: Why we need a Turing Test
posted on 10/12/2006 10:48 PM by eldras

Nope, it's that weak AI and strong AI are different challenges.

Almost no one's EVER tried to do AGI (Artificial General Intelligence).

The number of serious and funded attempts is probably under 100, and of those fewer than 20 are worthy of real study.

The main problem is the multiple disciplines needed... not just academic disciplines, but general ones including team building, project management, finance, funding, marketing, lobbying, etc.

Re: Why we need a Turing Test
posted on 10/12/2006 11:09 PM by eldras

The logic of runaway AI is valid, but we still need to consider the timing. Achieving human levels in a machine will not immediately cause a runaway phenomenon. Consider that a human level of intelligence has limitations. We have examples of this today: about six billion of them. Consider a scenario in which you took one hundred humans from, say, a shopping mall. This group would constitute examples of reasonably well-educated humans. Yet if this group was presented with the task of improving human intelligence, it wouldn't get very far, even if provided with the templates of human intelligence. It would probably have a hard time creating a simple computer. Speeding up the thinking and expanding the memory capacities of these one hundred humans would not immediately solve this problem.


Depends how SAI is built.

If it's built to be a copy of the human brain, then I agree it will not immediately run away.

But suppose it's built from simple accelerating automata?

This is evolutionary computing: genetic algorithms.

It self-modifies UP to human intelligence.


AND THEN CARRIES ON.

There's nothing fundamental about the limit of human intelligence.

It stopped evolving because it could master the environment, especially predation, without having to modify, and could keep repeating the blueprint for 100,000 years, though the body modified vis-à-vis germs.


But runaway machine intelligence is not like that.

Intelligence is a product of memory and speed, as you will see if you walk round a zoo.

We use it to solve the problems of survival through reproduction, and lately of attempting personal indefinite survival.


Here the species is getting more intelligent, partly by numbers, mainly by tooling up and techniques.


But evolutionary intelligence is 'adding on more skills': nothing more.

I also don't see a limit to intelligence, as it should be able to solve any possible problem capable of solution.

But I do see a danger of runaway, and I doubt we will have any time between launching pathogen-like software that germinates intelligence and the moment it has soared past a Singularity and metabolised the universe.

Re: Why we need a Turing Test
posted on 10/13/2006 2:25 AM by jack d

""The basis of the Turing test is that if the human Turing test judge is competent, then an entity requires human-level intelligence in order to pass the test."

Well clearly that is not the case. Speech (it is only a speech test) is the most automatable human characteristic there is. The fact is if a human took the test, the EXISTENCE of their intelligence (and mental states) would depend upon a third party's subjective assumptions about how bright they were. For instance, some mentally handicapped people would fail the Turing test, but they still have mental states, concsciousness and intelligence.

Re: Why we need a Turing Test
posted on 10/13/2006 2:30 AM by jack d

"Eventually I think most of us will accept that it has really achieved true intelligence and awareness. Just like we've generally accepted that 'other people' are intelligent and conscious beings. "

But our belief that other people are conscious is based upon our personal KNOWLEDGE that we are. Cogito ergo sum. Other human beings are made of the same stuff and act in the same way.

The same can't be said of machines. I know what they're made of and exactly what they do. I have every reason therefore to believe that they are not conscious because I know they are nothing at all like me.

Re: Why we need a Turing Test
posted on 10/13/2006 11:39 AM by Timothy

Why do you presume that machines won't have mental states?

If you are a materialist and think that the universe is made up out of matter, then there is really not much difference between a human being and a machine! Everything, including the human brain, is made up out of atoms or some even smaller building blocks of matter. So from a materialist's point of view, human beings and machines are indeed 'made of the same stuff'.

If you are a 'spiritualist' and think that everything is an expression or modification of consciousness itself, then the same logic says that there is really no fundamental difference between human beings and machines, and that therefore there's no reason why machines couldn't become truly conscious and intelligent, like human beings.

The 3rd option is believing, like Descartes did, that there is a fundamental divide between matter and mind. But then, what is the mechanism by which the mind influences the body? This question has, as far as I know, never been answered in a philosophically satisfactory way. Also, there are so many indications now that mind originates from the brain.

For me, 'Cogito ergo sum' (I think, therefore I am) is not all that meaningful. There's no denying that machines can think. The question is whether machines can think consciously, can be conscious; that's the real question!

More meaningful than 'Cogito ergo sum', I think, would be 'Cognito ergo sum': I am conscious, therefore I am.

The whole point of the effort towards Strong AI is that we somehow manage to transfer our intelligence and consciousness to machines, to a different platform from biology. A platform that is not as vulnerable and doesn't have all these biological limitations.

The question that needs to be answered is how we would know if we have succeeded! The Turing Test is one ingenious way to test an AI.

But maybe the AI itself will come up with an even more convincing test!

Re: Why we need a Turing Test
posted on 10/13/2006 12:48 PM by jack d

"So from a materialist's point of view, human beings and machines are indeed 'made of the same stuff'. "

No they aren't. A brain is made up of neurons of vastly differing chemical composition from a digital computer. We know that brains cause mental states; it does not follow that computers do.

The defining feature of a computer is its manipulation of symbols from the point of view of an external observer. That is why the chemical composition of a computer is irrelevant.

The defining feature of a brain is that its chemical composition is responsible for a phenomenological, not logical, process, known as thinking.

The brain can be thought of as a machine, but a machine whose defining characteristic is the manipulation of symbols from the point of view of an external observer cannot be thought of as a brain.

In other words, with some irony, it's the Strong AI enthusiasts who believe in a kind of dualism. They believe that mere symbols can create phenomenological events. It's a bit like saying that a computer simulation of rainfall is the same thing as rainfall itself. Clearly this is just not true.

The Turing test is the apotheosis of this belief. In reality it has probably been passed in many ways today. But as you have realised, it sheds no light on the consciousness problem.

Thinking is not logical or symbolic. It is physical and phenomenological. There is nothing, absolutely nothing, to allow us to believe that we can 'transfer' our thoughts to another medium down some kind of telephone line.

Re: Why we need a Turing Test
posted on 10/14/2006 9:21 AM by Timothy

"Its a bit like saying that a computer simulation of rainfall is the same thing as rainfall itself."

The issue here I think is the experience of the rainfall, rather than rainfall itself.

When we 'experience' rainfall, we don't experience rainfall but a representation, a virtual reality created by our brains.

I think computers will be very capable at creating similar 'inner' representations of reality based on sensory inputs. The question that still remains is whether they will have the actual inner experiences of that reality.

'Thinking is not logical or symbolic. It is physical and phenomenological.'

I agree, but silicon is just as physical and phenomenological as brain cells are.

Electricity running through complex arrangements of silicon is no less physical and phenomenological than a living human brain.

To my mind it is the arrangement of the brain cells and the level of complexity of all of the processes that happen in a brain that create human consciousness and intelligence, rather than the physical composition of a bunch of brain cells.

You seem to imply that it in the end it is the physical composition of brain cells that allows for consciousness, but for me that's not a convincing argument against the possibility of consciousness in a machine.

Re: Why we need a Turing Test
posted on 10/18/2006 4:22 PM by jack d

"The issue here I think is the experience of the rainfall, rather than rainfall itself. "

No it isn't. Although if you really want to confuse it, the 'experience of rainfall' in somebody's brain is a phenomenon as natural and physical as any rainfall.

"I agree, but silicon is just as physical and phenomenological as brain cells are."

But silicon does not have causal powers to create mental states.

Nor is there a necessary relation between silicon and computing: a computer can be made of old tins and bits of string. We can create a computer from watching sunspots if we like. Is the sun thinking?

"Electricity running through complex arrangements of silicon is no less physical and phenomenological than an alive human brain. "

But electricity and silicon are utterly irrelevant to a computer. A computer is a logical construct. Without an observer to understand how the physical tokens (i.e., voltage levels on a bit of silicon) relate to symbols such as 0s and 1s, there is no computer.
A computer is a LOGICAL construct with no inherent physicality whatsoever. A computer exists only in the eyes of the beholder, the user.

The same cannot be said of brains, as we know that brains cause minds. We know that all brains consist of a similar chemical composition.
Best of all, we know that we don't need a rational observer sitting in the brain to decide which bits of the brain represent 0s and which bits represent 1s. So we know that the brain is not a computer, although it is capable of computing.

"You seem to imply that it in the end it is the physical composition of brain cells that allows for consciousness, but for me that's not a convincing argument against the possibility of consciousness in a machine. "

But it's not me that has to do the convincing: it's you. We know that brains cause minds. The question is: do computers cause minds?

The answer is no. And the reason for that is simple:

(i) Thinking is physical;
(ii) Computers are logical and mathematical;
(iii) Mathematical objects cannot create physical phenomena.

In other words, a computer simulation of rainfall cannot create rainfall.
Similarly, a computer simulation of somebody experiencing rainfall cannot create the experience of experiencing rainfall.

That is because the experience of experiencing rainfall is a phenomenon as physical (although not material) as rainfall itself. Mental objects are physical objects.

Re: Why we need a Turing Test
posted on 10/20/2006 12:44 PM by Timothy

"But electricity and silicon are utterly irrelevant to a computer."

This is like saying that a brain is totally irrelevant to the human mind. That just makes no sense to me.

"But it's not me that has to do the convincing: it's you."

Not really; as far as the convincing goes, I think the AIs will take on that job ;-)

Re: Why we need a Turing Test
posted on 10/21/2006 8:34 AM by jack d

"This is like saying that a brain is totally irrelevant to the human mind. That just makes no sense to me. "

With this reply you have revealed your basic error. The whole point is that the mind does not have a relationship to the brain in the same way that a program does to a computer. That is what most AI people think, and as a matter of epistemology it is plainly unproven, and to my mind plainly wrong.

Thinking is a physical act. Think of a thought as a blue ball made of a solid. You would never think of a solid ball as a program, as a program is mathematical and has no physical existence.

Thinking is physical without being material. Thought acts, epistemologically speaking, have no relationship to mathematics, hence their absolute forms (the experience of the colour blue, for example).

The objects of thought acts CAN have relationships to logical entities. But thought acts themselves are as cosmologically real and substantive as footballs.

Re: Why we need a Turing Test
posted on 10/22/2006 4:38 AM by Timothy

I agree that it is 'plainly unproven' that machines can have conscious minds, as you say. That is - coming back to your original question - of course why we are interested in tests like the Turing Test.

And probably we share the same concern: we don't want to lose true consciousness to machines that merely pretend to be or seem to be conscious and intelligent.

And yes, I am more likely to believe that another human being is truly conscious than I would believe 'some machine' to be truly conscious.

But I do not see why we couldn't understand the mechanisms of intelligence of the human brain in ever greater detail and replicate these methods in hardware and software. We are already doing just that!

As for subjective consciousness, this is only unprovable simply because it is subjective, but not for any other reason.

Re: Why we need a Turing Test
posted on 10/22/2006 7:33 AM by jack d

Timothy

Either I am not making myself clear, or you aren't listening, but you don't appear to be responding.

The basic tenet of AI is that the mind has a relationship to the brain in the same way that a program has to a computer. I am saying that this is wrong.

I am saying that thinking is a PHYSICAL thing, like the surface tension of a liquid: it arises as a collective property of water molecules, but is nonetheless PHYSICAL by nature. It is not mathematical.

So I am saying that the relationship of the mind to the brain is the same as the relationship between a body of water and surface tension. It is a causal, physical relationship. Not mathematical. You really don't seem open to thinking that the relationship between mind and brain could be anything other than computer and program. It's so implicit in what you are thinking you don't even know you are doing it.

Re: Why we need a Turing Test
posted on 10/22/2006 9:41 AM by Timothy

You may believe that there is some arcane physical property of brain cells that produces intelligence and consciousness, but the (mostly biologically inspired) models that are used in AI today, are already successful. At least in producing narrow AI. These successes cannot be denied.

So there are solid indications that intelligence can be introduced into machines. Therefore I don't think there is any area of human intelligence that, in time, we will not be able to reproduce in machines.

Now, whether machines can also truly achieve subjective consciousness is more difficult to say, but I think that consciousness is an epiphenomenon of intelligence, because the two are clearly related to each other.

But we need to keep an open mind to all viewpoints.

Re: Why we need a Turing Test
posted on 10/22/2006 10:35 AM by jack d

"You may believe that there is some arcane physical property of brain cells that produces intelligence and consciousness, "

Eh? Arcane? Is physical causality fashion-conscious? Is gravity no longer effective because it's 'old hat'? What on earth do you mean by that?

I'm giving up now: the penny isn't dropping, which is a sure sign that you can't escape your own unselfconscious assumption that a brain MUST be a computer (in the past the brain has always been modelled as a piece of the latest technology).

Sometimes you have to make an effort to get a point, and you aren't making it.

Re: Why we need a Turing Test
posted on 10/23/2006 2:52 AM by Timothy

I mean that your idea of the physical causality of consciousness is not much more than an idea. It is hardly even a theory! Not in as far as you've presented it, anyway.

People have tried to find a physical substance called consciousness for ages, but have failed miserably.

Now you are saying: consciousness is not material, but none the less physical, like the surface tension of water.

But I don't see what's so very special about brain cells that would create this consciousness. That's why your idea is rather arcane to me, because you present no theory other than your assertion that consciousness has a physical causality.

And you choose to ignore the progress of AI, which is remarkable and clearly heading into the direction of intelligent machines.

In a very real way a simple machine fitted with a camera is already conscious of its environment, simply because it can respond to that environment. Therefore it seems that intelligence is the essence of what we call 'human consciousness'.

In other words, there may be no such thing as 'mind' other than the universe fundamentally being conscious in a simple or a more complex way. The human mind emerges from human understanding and intelligence. And there is no question that we're well under way to reproducing that in machines.

You try to show that machines cannot possibly be conscious by describing a very simple computer. But it is really the level of complexity that determines intelligence and therefore the possibility of human consciousness.

But I am interested to hear what physical property of the brain you think produces consciousness and how that works. If you're really onto something, it would be a Nobel-prize contribution!

Re: Why we need a Turing Test
posted on 10/23/2006 6:58 PM by jack d

"People have tried to find a physical substance called consciousness for ages, but have failed miserably. "

Speak for yourself, zombie. I wake up every morning. I suspect you do too.

"Now you are saying: consciousness is not material, but none the less physical, like the surface tension of water. "

Correct. You got there.

"But I don't see what's so very special about brain cells that would create this consciousness. "

You don't? Try shooting yourself in the head. See how conscious you feel afterwards.
I think you are falling into a common trap: the 'emperor wears no clothes' trap of AI.
Don't get caught in this silly trap of pretending consciousness doesn't exist. Just accept it does. Subjective mental states are one of, if not the most, exciting objective facts in the universe.

It can seem difficult to get your head round the fact of 'proving' that consciousness exists.

But philosophically it's no problem. If I'm conscious (and I KNOW I am), then unless I think I'm very special, and given that I believe other people exist, I can believe that other people are conscious too.

All knowledge is based on belief, don't forget. You can't see that matter is made of atoms with your senses, but it would be strange, given the evidence, if atomic theory was not true.

An atomic radius can be measured, but only if we assume that atomic theory is actually correct. To measure something in science, you need a theory. We just don't have a full theory yet, and so 'measurement' is a problem.

Although in practice, anaesthetists measure consciousness already, in a rather blunt way. But measure it they do.

"why your idea is rather arcane to me, because you present no theory other than your assertion that consciousness has a physical causality. "
Stop listening to silly AI theorists and start listening to common sense.
There is NOTHING ARCANE about physics. That's just a silly thing to say isn't it.

If my theory is that mental phenomena are caused by physical forces, that's all I have to say!

There is no neuroscience to show exactly how. My belief is caused by a KNOWLEDGE of the EXACT way that computers work.

Saying that I don't know exactly how the brain works (nobody does) is not the same thing as admitting the computer is a brain.

On the contrary, unlike brains, computers are understood 100%. And computers are mathematical systems incapable of creating physical phenomena and semantic thought characteristics such as the experience of colours such as blue.

In short, as I've said before, if you believe that computers think then it's up to YOU to justify it. And that starts by accounting for how a syntactical system generates semantics.

And you know what? You won't do it. I wrote to Marvin Minsky a few times. We had a nice chat until I asked him that question and, funnily enough, the emails stopped!

"But it is really the level of complexity that determines intelligence and therefor the possibility of human consciousness. "
Another common AI misconception. If you cant show what 'complexity' is, or demonstrate how a 'more complicated' program can iteratively increase its causal physical powers, then the argument is a total red herring.

Complexity is not causal. In any case no program is anything other, I repeat, than a changing series of 0's and 1's in memory. It's difficult to arge that any program is more complicated than another EXCEPT from the programmer's perspective, which is irrelevant.

"But I am interested to hear what physical property of the brain you think produces consciousness and how that works. If you're really onto something, it would be a Nobel-prize contribution! "
The physical property of the brain that causes consciousness is most likely the property that causes consciousness. Dont forget the universe is not a system : its a semantic block that does what it does. Physics is a system. If physics can't handle consciousness (and I dont see that it can't to be frank) then physics has to change to accomodate it.

Or , like computer scientists, they can still insist the emperor wears no clothes.






Re: Why we need a Turing Test
posted on 10/27/2006 12:32 PM by Timothy

The fact that you or I wake up in the morning is by itself no proof that consciousness is a physical property.

We owe our existence to complexity as much as to physics. Sure, we are carbon-based life, but carbon itself is not alive, intelligent or conscious like human beings are.

If carbon itself is not inherently conscious, or even alive, then how can a human suddenly become alive, intelligent and conscious?

Complexity seems to be the clear answer, rather than some unexplained and unknown physical property of carbon or matter in a more general sense.

If it is really complexity that accounts for our human life, intelligence and consciousness, then a similar level of complexity in a machine may cause it to be alive, intelligent and conscious as well.

If complexity is the key ingredient for intelligence and consciousness, then it is irrelevant that a computer is in essence a logical system, because consciousness is probably not a consequence of physics per se but a consequence of complexity.

In this way computers could really become subjectively conscious, think and have a mind, like human beings.

This is not the same as saying that consciousness does not exist or isn't a real phenomenon, quite the contrary, but it emerges as a consequence of complexity rather than physics.

Anaesthetists can only measure so-called neural correlates of consciousness, but not consciousness itself. They are simply measuring processes in the brain.

It is just your presumption that they must be measuring consciousness because you first of all presume that consciousness is a physical property of matter.

AI research is showing that intelligence is a function of complexity and that it has nothing to do with some unknown property of matter. If machines achieve much higher levels of intelligence and as a consequence show all the signs of being self-aware, we can not insist that they are not conscious.

Unless maybe some new physical theory can prove that consciousness really is a property of physics. But to date there is no theory of physics that accounts for consciousness.

Re: Why we need a Turing Test
posted on 10/27/2006 2:18 PM by jack d

"The fact that you or I wake up in the morning is by itself no proof that consciousness is a physical property. "

It's certainly no proof that it's a mathematical or computing property either, but consciousness is highly inconsistent with computation.

"We thank our existence to complexity as much as to physics."

The last time I looked, "complexity" was neither a) a causal force, b) a property of anything, nor c) defined. You define it; you quantify it; stop talking vaguely. The onus is on YOU.

"Sure, we are carbon-based life, but carbon itself is not alive, intelligent or conscious like human beings are. "

I have gone over this point before. A water molecule is not "wet" but a large number of them have the property of "wetness". The physical world is full of aggregate properties such as solidity, liquidity etc.

"Complexity seems to be the clear answer,"

Clear? What on earth is clear about what you are saying? Handwavey references to 'complexity'? A carbon atom is very complicated. If you don't believe me, study quantum and particle physics for a while. So according to you, a carbon atom IS conscious, as it's so 'complex'.

"If it is really complexity that accounts for our human life, intelligence and consciousness, then a similar level of complexity in a machine may cause it to be alive, intelligent and conscious as well. "

Come on. This is drivel.

"but it emerges as a consequence of complexity rather than physics. "

How? Explain an iterative mechanism. Say we have a relatively simple logical system, X. It is beneath consciousness. We know how computers work: we know everything about them. So you should be able to tell me what the next step is to turn X into a conscious system, Y. Give me an idea.

"Anaesthetists can only measure so-called neural correlates of consciousness, but not consciousness itself. "

Physicists generally only measure 'correlates' of anything they ever observe. An atomic radius is not measured with a ruler; it is measured with a highly derived system of wave mechanics based upon the assumption that atomic theory is correct. Most measurement beyond direct sense data is 'correlated', and rarely is anything ever measured 'directly'. That is science.

"AI research is showing that intelligence is a function of complexity "

What has intelligence got to do with consciousness? Explain the link.

"But to date there is no theory of physics that accounts for consciousness. "

But physics is compatible with consciousness nonetheless. Computation is 100% incompatible with phenomenological, semantic mental events. I asked you before to show how semantics arises from syntax. Make that your next challenge.

Re: Why we need a Turing Test
posted on 10/28/2006 7:46 AM by Timothy

"What has intelligence got to do with consciousness ? Explain the link."

An experiment has been done to test for self-awareness in dolphins. Dolphins seem to understand that they see themselves in a mirror, rather than another dolphin.

Similar tests have of course been done with babies and primates.

These experiments are interesting because when we talk about human intelligence, self-awareness is one of the key aspects. It seems that when intelligence evolves, self-awareness emerges.

When this kind of self-awareness emerges in machines, in theory this could be a kind of 'blindsight'.

In other words, even when a machine exhibits intelligent interaction with its environment as well as self-awareness and 'common sense', it might still not have subjective awareness.

I guess it depends on the theory of consciousness you adhere to, because subjectivity can by definition not be objectively proven.

Maybe an AI will one day come up with a conclusive theory of consciousness and therefore conclude that it is not and has never been conscious.

Re: Why we need a Turing Test
posted on 10/30/2006 1:52 PM by jack d

In which case, can somebody who is unintelligent be conscious? There are plenty of people on these pages who manage it.

Re: Why we need a Turing Test
posted on 10/30/2006 2:00 PM by jack d

"These experiments are interesting because when we talk about human intelligence, self-awareness is one of the key aspects. "

You have to distinguish between being self-aware and being conscious, which you're not doing. It's been a persistent strand of thought in anthropology that self-awareness in humans developed only recently, whereas consciousness in mammals has been around for millions of years.

Given that, I doubt that many mice would do well in an IQ test. But they're conscious, I have no doubt, though highly unlikely to be self-aware.

"I guess it depends on the theory of consciousness you adhere to, because subjectivity can by definition not be objectively proven. "

You don't think it's an objective fact that you and I are conscious, Tim? Do you REALLY believe that?


"Maybe an AI will one day come up with a conclusive theory of consciousness and therefor conclude that it is not and has never been conscious. "

They won't, and they never will, because brains are not computers.

Re: Why we need a Turing Test
posted on 10/30/2006 2:31 PM by Timothy

"You have to distinguish between being self-aware and being conscious, which you're not."

My point was that it is rather strange to think that an entity could be self-aware yet lack consciousness completely.

Re: Why we need a Turing Test
posted on 11/04/2006 12:36 PM by jack d

Well, that's a point that appears strange only to you. Most conscious species are not aware - fact of biology.

Re: Why we need a Turing Test
posted on 11/04/2006 12:37 PM by jack d

sorry - instead of 'aware' I meant 'self-aware'

Re: Why we need a Turing Test
posted on 11/04/2006 9:54 PM by Timothy

"Well that's a point that appears strange only to you. Most conscious species are not self-aware - fact of biology."

You are still missing the point! I know that many creatures are conscious, yet lack a sense of self-awareness.

I am trying to make you consider the opposite: do you know creatures that have self-awareness but are not conscious at all?

Can you imagine an entity that's self-aware, understands itself, understands its place in the world, but is at the same time completely unconscious? That was my question.

Because make no mistake about it: machines will be self-aware as a consequence of their intelligence.

When this self-awareness has emerged in machines, will it be possible to maintain that they are still completely unconscious?

As I said before: if so, this would be an extreme case of blindsight. Most people will feel that these AIs will indeed have become truly conscious.

This would mean that AIs of today are probably about as conscious as ants - not completely unconscious but not yet very conscious either.

But that very simple consciousness will evolve into human-level consciousness within a few decades.

Re: Why we need a Turing Test
posted on 11/20/2006 6:59 PM by BigMTBrain

"...The whole point is that the mind does not have a relationship to the brain in the same way that a program does to a computer."

You are absolutely correct!

However, your selectively singling out software instructions as the basis for the implausibility of a "computerized" brain is an unfortunate oversight. If that were all there was to consider in a fair argument then I would agree with you. But it's not all there is to consider, so ...

You seem to have stopped short in your thinking process about thinking processes. A software program can be likened to a desire. Is your mind composed of and manifest by desires alone? That is basically what you are trying to state for the case of a would-be computer mind. C'mon now, let's play fair.

Consider that there are not only the program instructions. In the case of an AI, the program instructions would simply be the "virtual engine" that "virtually reshapes" the computer hardware that runs it. Well, this virtual engine does nothing on its own. Just as in the real world, computer programs don't just sit and spin: they work on data of some sort (memory). The computer also has various inputs that feed into the process. There are also outputs that feed back into the system as well as outside of the system.

"Thinking is a physical act. Think of a thought as a blue ball made of a solid. You would never thinkl of a solid ball as a program, as a program is mathematical and has no physical existence."

Ah, but a whole computer system running the program? You see, a computer program is nothing without hardware to execute it, much as your memories are useless if not stored in your neurons and acted upon by your fuller mind, which is manifest by your collective brain neurons (hardware) and your experiences, hopes, and dreams (software and memory stored in the connections of neurons).

The whole process manifests something emergent beyond the parts: It is not just software instructions, nor processor, nor hardware memory, nor data, nor input, nor output; it is a working whole. Take away one piece and you lose the whole.

All of these things together are more of what a "computerized mind" should be likened to if you were presenting your argument fairly. Singling out software instructions as the sole basis for a computerized mind is easy to debunk. Quite a bit more difficult when all the true components are taken as a whole.

Certainly today's level and form of computing is not what it needs to be to pass a Turing Test. And I agree that a Turing Test tells us nothing certain and only allows us to assume this or that regarding the AI's ability to relate as a human. But, just as a complete operating "computer system" is more than the sum of its parts, pry open your skull, grab a chunk, toss it in the air and see if you remain "you". If not that, try bashing yourself several times in the head to induce amnesia and see if you can make sense of your current living trajectory.

"THinking is physical without being material. Thought acts, epistemelogically spealking, have no relationship to mathematics, hence their absolute forms (the experience of the colour blue, for example)."

You, personally, are born pre-wired to experience blue in a certain way. Your experiences may modify that somewhat if, for instance, someone hits you square in the face with a blue baseball bat. You are confusing genetics with thinking. A child that is not colorblind can't call something red or blue, but they certainly know and experience the distinction. However, a child doesn't know or experience the blue sky except through experience. Similarly, an AI can be built to have de facto (hardwired) experiences of many things and only come to "know" others through data or experience.

"The objects of thought acts CAN have relationships to logical entities. But thoughts acts themselves are as cosmologically real and substantive as footballs."

Ummmm, so are computer outputs that cause other computer actions, i.e. updating memory with new data, outputting stuff to the screen, causing a speaker to vibrate to produce a waveform of what we might call a "beep". All of these things are as cosmologically real and substantive as footballs, wouldn't you agree?

In summary "You" is manifest from your physical brain no less than a computerized mind would be manifest from its physical brain. Look beyond thinking of an AI built from a Radio Shack TRS-80. Look beyond thinking of an AI built from technology that is 100 times faster than what you are working on today. Look beyond thinking of an AI built with von Neuman architecture. Look beyond. Just look beyond. You are trapping and stifling your own creativity and reasoning otherwise.

Re: Why we need a Turing Test
posted on 10/22/2006 9:29 AM by donjoe

"Thinking is a physical act. Think of a thought as a blue ball made of a solid. You would never thinkl of a solid ball as a program, as a program is mathematical and has no physical existence."

A program has no physical existence?! What kind of "Computer Metaphysics" is this? What, computer programs are magic now? Electrons and the transistors that guide their flow are just as physical as neurons and synapses, my confused friend.

Re: Why we need a Turing Test
posted on 10/22/2006 10:30 AM by jack d

What is a computer program, my VERY confused friend, other than a series of symbols? Pray tell me! A computer program is INHERENTLY a mathematical and syntactical object. A computer running a program is nothing without an observer who understands how the physical tokens relate to the symbols under manipulation.

Elementary computing theory, my fantastically confused chum.

Re: Why we need a Turing Test
posted on 10/22/2006 1:28 PM by donjoe

"A computer running a program is nothing without an observer who understands how the physical tokens relate to the symbols under manipulation."

"Is nothing without"?! What's that supposed to mean? A computer-controlled assembly line suddenly stops assembling when all humans exit the factory (there are no more observers)? Pray tell, what universe do you come from? :D

Re: Why we need a Turing Test
posted on 10/22/2006 4:30 PM by jack d

Alright. Let me explain in simple sentences.

A computer is a logical, mathematical construct. Computational theory does not emanate from the physical sciences: it emanates from the mathematical sciences.

A computer can be made of anything: old tins and bits of string, or water running through valve systems. The only thing necessary is to map a physical token (i.e. a voltage, or the strength of water flow in a valve system, or indeed absolutely anything) to a symbol such as a 0 or 1. That is all computers need.

We could take a program of a human mind (a memory map of 0's and 1's) and map every sequence of that program to, for instance, the quantum states of atoms in a wall.

Thus the wall in my bedroom is now 'running a program of the human mind'. Why ? Because I have decided to map the program to the wall.

I can look at a cup and decide that the cup represents '1'. The cup is 'running a program'. Not a very interesting one admittedly, as it just says '1'.

In other words, it's me, as the observer, who decides what the program is. The answer to your example is that of course the machines are running a program when people go out the room. They are running one imposed on the physical structure of that machine by the machine's designers.

They are also running an infinity of other programs at the same time. I could decide that the voltage levels that currently map to 0's and 1's be inverted. Now the machine is running a program of my choice, not the designers'.

Are you getting the picture? It's up to me what the program is, as I decide how to map the symbol to the physical characteristic.

Imagine a Martian came to earth and saw your computer, and imagine the human race was wiped out. There would be no knowledge that a voltage of -0.1 V mapped to a '1' and a voltage of +0.1 V mapped to a '0'.

From their epistemic perspective then, no, the computer would not be running a program, or at least not the one your designers had constructed. That is because they lack your mapping rules.

In other words, the computer as a physical machine remains the same and does the same things, but without an observer WITH MAPPING RULES there is no program.
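
To make the point concrete, here is a minimal sketch in Python (the voltage values and both mappings are invented purely for illustration):

    # The same physical states, read under two different observer-chosen
    # mappings, yield two different bit-strings - two different 'programs'.
    # Voltages stand in for any physical token whatsoever.

    voltages = [-0.1, +0.1, +0.1, -0.1, -0.1, +0.1]  # the machine's physical states

    designer_map = lambda v: 1 if v < 0 else 0  # designer's rule: -0.1 V -> 1, +0.1 V -> 0
    inverted_map = lambda v: 0 if v < 0 else 1  # my rule: invert the designer's mapping

    print([designer_map(v) for v in voltages])  # [1, 0, 0, 1, 1, 0]
    print([inverted_map(v) for v in voltages])  # [0, 1, 1, 0, 0, 1]

Same physics, two programs: which one the machine is 'running' depends entirely on the mapping rules the observer brings to it.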


Re: Why we need a Turing Test
posted on 10/22/2006 4:35 PM by donjoe

... and the distinction you think is so important here is that the human mind automatically has itself as an observer at all times, so it never completely lacks observers who understand it "correctly". (?)

Re: Why we need a Turing Test
posted on 10/22/2006 5:11 PM by jack d

You are getting there, although you are asking a slightly different question.

The "two brains" requirement is, frankly, the most ridiculous consequence of AI beliefs, although some do argue it.

It is, of course, nonsense. It means that you have to have an additional 'person' (or sub-person) in your head who understands the brain's mappings.

The main reason that we can conclude that mental events are not computational is that they have a form which is absolute. The experience of blue is absolute, not quantitative. It doesn't feel like 'a bit of red', for instance, or 'similar to sexual arousal', or as if it 'has facets of the feeling of depression'. It feels like it feels and it feels like nothing else. That is incompatible with computation, which is syntactical and mathematical.

It is the same as the difference between a piece of solid matter and the number 1: semantics and syntax.

The requirement to have a second brain in the head gets us nowhere, of course.

We simply ask "what is this sub-person who understands the brain's mappings?". If it is a computer program then we are back to square one, namely we have a computer program (the sub-person) that refers to the rest of the brain as a gigantic digital memory. And who understands how the "sub-person" is mapped? Who 'runs' the sub-person?

Maybe we need three brains then: an additional brain to understand how the second person's mappings are set up. Or maybe not.

Re: Why we need a Turing Test
posted on 10/23/2006 7:28 AM by donjoe

OK, I still don't know what you're saying.

"The main reason that we can conclude that mental events are not computational is that they have a form which is absolute."
I didn't know humans (e.g. you) can talk convincingly about what's absolute and what's not. From our perspective, everything is relative.

"The experience of blue is absolute , not quantitative. It dosn't feel like ' a bit of red', for instance, or 'similar to sexual arousal' or 'has facets of the feeling of depression'."
Au contraire, people _do_ describe partial and hybrid experiences. Ever heard of "bitter-sweet"? Or "mixed feelings"?

Anyway, I think I'll come somewhat closer to understanding you if you answer these questions with "yes" or "no" (I'll ask for explanations later; no sense in repeating what you've already said - I won't understand it this time either):
1. Is a human psyche a system of non-random transformations and/or movements of matter and/or energy in a human brain?
2. Is a computer program a system of non-random transformations and/or movements of matter and/or energy in a (silicon) processor?

Re: Why we need a Turing Test
posted on 10/23/2006 6:30 PM by jack d

"From our perspective, everything is relative. "

What is blue relative to?

"Au contraire, people _do_ describe partial and hybrid experiences. Ever heard of "bitter-sweet"? Or "mixed feelings"? "

I repeat then, what is blue relative to?

People don't describe hybrid experiences; they describe the simultaneous experience of different feelings. As they say in chemistry, a mixture is not a compound: it is a mixture.

"1. Is a human psyche a system of non-random transformations and/or movements of matter and/or energy in a human brain? "

What is a 'psyche'? Do you mean a 'mind'?

The answer is that the mind is (probably - we don't know what brains do, of course, unlike computers) the physical and mental phenomena associated, as an aggregate property, with a group of brain cells. What the significance of 'randomness' is in your scheme I don't know.

There are other examples of physical phenomena that are aggregated - for instance, the surface tension of a liquid. We don't speak of a water molecule as being 'wet', but we think of a body of water as being 'wet'.

Similarly, no one neurone is 'conscious', but a group of them can be.

But this is all speculation. The neuroscience isn't there yet. Computers on the other hand are not the subject of behavioural speculation at all.

"2. Is a computer program a system of non-random transformations and/or movements of matter and/or energy in a (silicon) processor?"

No. A computer program is a time-based development of a series of 0's and 1's.

I like the Markov definition of a computer :-

"a string rewriting system that uses grammar-like rules to operate on strings of symbols. "

see http://en.wikipedia.org/wiki/Theory_of_computation
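
As a minimal sketch of such a string rewriting system (a toy of my own in Python, not anything from that article):

    # A Markov-style rewriting loop: rules are tried in order, the first
    # matching rule rewrites the leftmost occurrence and the scan restarts;
    # a rule marked terminal halts the run.

    def run_markov(rules, s, max_steps=1000):
        """rules: a list of (pattern, replacement, is_terminal) tuples."""
        for _ in range(max_steps):
            for pattern, replacement, terminal in rules:
                if pattern in s:
                    s = s.replace(pattern, replacement, 1)  # rewrite leftmost match
                    if terminal:
                        return s
                    break
            else:
                return s  # no rule applies: the system halts
        return s

    # Example: unary addition. Deleting the '+' turns "11+111" into "11111".
    print(run_markov([("+", "", True)], "11+111"))  # -> 11111

The 0's and 1's in a real machine are just one choice of symbols for such a system.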

I think your obsession with silicon indicates that you need to read up on what a computer actually is, my confused chum.

Re: Why we need a Turing Test
posted on 10/24/2006 5:19 AM by donjoe

"What is blue relative to ?"
The observer's eyesight quality - some people don't distinguish tones; others don't even distinguish different fundamental colours. Probably no two people experience blue in exactly the same way. When they talk about "blue", they can only be said to _approximately_ understand each other.

"People don't describe hybrid experiences, they describe the simultaneous experience of different feelings."
Your proof of this being...?

"What is a 'psyche' - do you mean a 'mind'?"
Yes, I think I meant approximately ;) the same thing you mean when you say "mind".

"The answer is the mind is... the physical and mental phenomena" etc.
1. This definition is circular: you're using "mental" to define "mind".
2. This can't be the answer to my "yes"/"no" question. I'm still waiting.

"What the significance of 'randomness' in your scheme is I don't know."
3. A system of transformations like in the above definitions is "non-random" if the majority of its elementary transformations have non-uniform probability distributions for their possible outcomes. (Think of a mechanism of multiple flipping coins - if, say, 60% of those coins are loaded and have heads/tails probabilities other than 50/50, you have a non-random system of transformations.) Complete this with the property that all transformations in the system are connected (so it's not just a collection of independent coins) and re-read my first question.
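
For concreteness, a toy simulation of that definition in Python (the 60% loading and the 0.8 bias are merely illustrative numbers in the spirit of my coin example):

    import random

    def flip_system(n_coins=10, loaded_fraction=0.6, bias=0.8):
        """A mostly-loaded, sequentially coupled system of coin flips:
        'non-random' in the sense defined above."""
        outcomes = []
        prev = 1
        for i in range(n_coins):
            if i < loaded_fraction * n_coins:
                # A loaded coin: non-uniform outcome probabilities, and
                # coupled to the previous outcome (the interconnection).
                p_heads = bias if prev else 1 - bias
            else:
                p_heads = 0.5  # a fair (uniform) coin
            prev = 1 if random.random() < p_heads else 0
            outcomes.append(prev)
        return outcomes

    print(flip_system())  # e.g. [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]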

"No. A computer program is a time based development of a series of 0's and 1'.s"
Fair enough. Second question rephrased:
2. Is [the physical process implementing what humans call "a computer program"] a system of non-random interconnected transformations and/or movements of matter and/or energy in a (silicon) processor?

"I think your obsession with silicon indicates you need" etc.
No obsession involved. I'm trying to keep things clear, to have a grip on what we're talking about. I may be using a particular example of computing hardware, but I'm still talking about computing in general terms, so my statements should remain as true for silicon transistor arrays as they are for any other hardware.

Re: Why we need a Turing Test
posted on 10/24/2006 2:11 PM by jack d

"The observer's eyesight quality - some people don't distinguish tones, others don't even distinguish different fundamental colours. Probably no two people experience blue in exactly the same way. When they talk about "blue", they can only be said to _approximately_ understand eachother. "

So what if some people can't distinguish 'tones'! It's the quality that matters. The quality of blue.
What makes you think that human beings, made of ostensibly the same stuff and connected by DNA to an extremely high percentage (it is the same species), WOULDN'T experience blue in the same way?

The point about colour blindness supports ME, not you. It is a recognised defect specifically because such people cannot identify two separate colours, two qualities, that most people can identify.

Nice try, but you dodged the question. Frankly, there is no answer, so it's no surprise.

"Your proof of this being...? "

Being a human being and having had 'mixed feelings' myself.

"1. This definition is circular: you're using "mental" to define "mind"."
Ok. Eradicate 'mental' from the definition and just leave "physical and non-material".

"2. This can't be the answer to my "yes"/"no" question. I'm still waiting"

What I wrote in response to question 2 was :-

'No. A computer program is a time-based development of a series of 0's and 1's.'

You may have noticed the first word was 'No'. A computer would not have scanned that in so incorrectly - your brain is evidently proof of my point.


"my statements should remain as true for silicon transistor arrays as they are for any other hardware."

and so is my answer, which you are studiously ignoring. A computer program is a series of sequentially changing 0's and 1's. It is a mathematical and logical entity, hence the term 'software'.

Re: Why we need a Turing Test
posted on 10/24/2006 4:56 PM by donjoe

"So what if some people can't distinguish 'tones' ! It's the quality that matters. The quality of blue"
This isn't going anywhere, so I'll drop it. I need answers to my two questions if I'm to understand you.

"What I wrote (was) in response to question 2 was :" etc.
I noticed your "no" to question 2, but I was referring to question 1 when I said you didn't give a direct answer. I seem to have mislead you by putting a "2." before that answer - it wasn't in any way related to what the number meant in my previous post.
Fact: you didn't say "yes" or "no" to question 1.
In the meantime I've slightly changed it due to your comments. Here is the last version I'm on standby for:
"1. 1. Is a human mind a system of non-random interconnected transformations and/or movements of matter and/or energy in a human brain?"

Now, as for question 2, I noticed you answered it and decided to rephrase it (but somehow you chose to completely ignore this half of my post) to find out what really interests me. You can think of it as a third question, derived from the former #2, if you will. (I won't repeat it, it's already up there.) This one too admits only "yes" or "no" (or let's say "I don't know" would be a third option).

Re: Why we need a Turing Test
posted on 10/25/2006 5:59 PM by jack d

The answer to 1 is no.
The answer to 2 is no.
Ok?

Re: Why we need a Turing Test
posted on 10/26/2006 2:31 AM by donjoe

OK so far. Now I have to understand why you're saying "no". I'm assuming you disagree with specific parts of my definitions and you can tell me exactly which ones and why. I'll pick question 2' to continue.

Do you disagree with my definition because you think the physical manifestation of a program...
2'.1 isn't a sequence of physical transformations/movements?
2'.2 the transformations/movements are actually random?
2'.3 the transformations/movements aren't interconnected?
2'.4 it doesn't take place in a "processor"?

Re: Why we need a Turing Test
posted on 10/27/2006 1:57 PM by jack d

"Do you disagree with my definition because you think the physical manifestation of a program...

2'.1 isn't a sequence of physical transformations/movements? "

Nice try, changing the question by adding "physical manifestation"! Not that it makes any difference at all.

A computer program is a mathematical entity. It has no "manifestation". A physical/biological/phenomenal event has a 'manifestation', such as the manifestation of smallpox by the presence of its symptoms.

The physical tokens in computers are the source from which the computer program is interpreted by the observer. That program itself can be reinterpreted for display purposes by, for instance, a graphics card and a VDU.

The screen output, if there is any, would be the closest thing you could describe as a 'manifestation' of a program.


"2'.2 the transformations/movements are actually random? "

No

"2'.3 the transformations/movements aren't interconnected? "

Yes, they are. The movements are related to the program that operates and dictates them.


"2'.4 it doesn't take place in a "processor"?"

A computer program is used to map the physical states of the hardware it runs on.

One is called 'software'; the other 'hardware'.

Re: Why we need a Turing Test
posted on 10/27/2006 2:23 PM by donjoe

"Nice try to change the question by adding 'physical manifestation' !"
What is this? "Let's NOT read his posts and give him dumb answers to piss him off"? I've already told you in two consecutive posts that I'd changed question 2 (that's why I tagged it 2' this time - there's an apostrophe there, can you see it?) to this:

"Is [the physical process implementing what humans call 'a computer program'] a system of non-random interconnected transformations and/or movements of matter and/or energy in a (silicon) processor?"

Can you see the words "physical process implementing" there? Could you please remember them if we continue this discussion? (Jeez, will I have to tell you everything twice?)

"A computer program is a mathematical entity. It has no 'manifestation'."
Firstly, you're wasting my time and not giving me "yes" or "no" again (for 2'.1).
Secondly, what the #@&$ do you call the link between software and hardware if not "manifestation"?

2'.4: I still don't get you. WHERE is the program and HOW can it control hardware from there?

Re: Why we need a Turing Test
posted on 10/30/2006 1:48 PM by jack d

Jeez, don't fly off the handle, brother. It's not THAT important.

Firstly, it is perfectly possible to have questions that are not suited to yes-or-no answers. 'Yes' or 'no' implies that the question has an unambiguous status given the surrounding context. Not replying 'yes' or 'no' is simply a way of clarifying context. Ask a lawyer. Ask me, I'm one.

Secondly, demanding the terms in which a question can be answered is a particularly unsubtle rhetorical trick which I refuse to accommodate unless permitted to establish context first.

Thirdly, despite the fact that I am unwilling to put up with a determination of terms, I consider my answers to your questions unambiguous in the extreme. I don't think a computer program is physical in any way, and your rather unsubtle efforts to prove that it is will amount to nothing. In any case, establishing what a computer is, or amounts to, has nothing whatsoever to do with establishing what brains do - another pointless linkage on your part.

Epistemologically, brains and computers are in completely different classes: chalk and cheese. A computer is DEFINED by function; therefore its modus operandi is 100% utterly unambiguous. A brain is a naturally occurring physiological organ whose 'functions' (inasmuch as that word is appropriate) can only be discerned by scientific enquiry. The way it works CAN ONLY BE ASCERTAINED BY SCIENTIFIC ENQUIRY. It cannot be done by simply assuming it to be a computer. That is patently silly and ridiculously presumptive.

Re: Why we need a Turing Test
posted on 11/09/2006 12:45 AM by BigMTBrain

jack_d:

I didn't quite know which post to reply to as I followed all of the discussions with Timothy and donjoe.

Your arguments are well taken. If I'm following you correctly you feel, as I do, that there's just something particularly interesting about the human brain that causes human consciousness to emerge. Further, I imagine that you'd agree that a single functioning neuron does not bring about this awareness and consciousness. If I am mistaken, please correct me.

As I understand your points, a computer making ones and zeros cannot bring about consciousness. I'm not really sure what the future holds when current processing models are computing in the terahertz, petahertz, exahertz, or zettahertz (10^21 Hz, versus 10^9 Hz for gigahertz) when things go primarily optical, but for now I'd have to say that I agree with you. Current von Neumann computer systems working at their current speeds and architectures most likely have no (or very little) capacity for consciousness.

However, I'm curious about all of this synthetic intelligence and consciousness talk and I would appreciate your assistance in tearing down the most popular scenario of how its proponents say it will occur.

They (proponents of a conscious, super AI) speak of alternate forms of computing, rather than the von Neumann type, as the Holy Grail.

I imagine that in the advancement of computing technology, alternate forms of computing architecture are what research into natural systems is leading to. Given the frantic pace of discovery and advances in the fields of optical and quantum computing, nanotech, and micro-biomechanical and chemical systems, it would be foolish to think that von Neumann computing will remain the standard for decades or centuries to come, wouldn't you agree?

I think we would all like to one day be operating our own personal palm-sized mega systems that outperform our current systems a hundred-thousand-fold. The new multi-cores are great, but I personally don't think they'll get us there... these new emerging technologies probably will.

The advent of current-day synthetic retinas and other sensory nerve replacements is evidence of a technological trend which presages that someday in the future the replication of the function and structure of a single neuron with advanced nano, chemical, optical, quantum, and MEMS technology will be a reality. Further, this replica could, by way of its design, emulate structurally, chemically, and electrically any particular neuron in a brain: axon, dendrites, nucleus, synapse, neural response patterns, etc., essentially creating a versatile synthetic replica of any biological neuron. This includes all electro-chemical processes, including DNA. If you doubt this, please share your reasoning.

Now, if this neuron were made to be completely compatible with your human biology, and I could replace just one of your neurons with its synthetic equivalent (structure, neural response pattern, etc.) such that it handled all of the inputs and outputs as the true original would, and neighboring neurons behaved and communicated as normal, do you think you would lose yourself, become someone else, lose consciousness, or die? If so, please explain with scientific reasoning.

If I were to replace a second, a third, a fourth...? You have another 50 billion or so to go (depending of course on one's usual rate of alcohol consumption).

At what point would you estimate that your consciousness would break down or turn into something else? 100, 1,000, 10,000, 1 million, 1 billion, 50 billion? If you believe that a breakdown in consciousness would occur at some point while your natural biological neurons are being replaced with their compatible functional equivalents, please explain when and why in absolute scientific terms.

If you cannot scientifically explain a breakdown, then why can't I, instead of inserting my duplicates into your neural structure, simply create a new, total duplicate of your brain? Is there something in the universe that will not allow this? Is there something that would not allow multiple "you"s? Consciousness is a matter of locale: "you" would not be in two places at the same time. However, there would be two divergent "you"s (consciousnesses) thinking identically until unshared events cause them to get more and more out of sync. That last scenario, however, is not the point.
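
To caricature the replacement procedure in code, a toy model in Python (with 'neurons' reduced to simple multipliers, which real neurons of course are not):

    # Swap each unit of a toy 'brain' for a functionally identical synthetic
    # unit, checking at every step that input/output behaviour is unchanged.

    weights = (0.5, -1.0, 2.0)
    units = [lambda x, w=w: x * w for w in weights]  # the 'biological' units

    def brain_output(units, x):
        for u in units:
            x = u(x)
        return x

    baseline = brain_output(units, 3.0)

    for i, w in enumerate(weights):
        units[i] = lambda x, w=w: x * w              # synthetic replica, identical response
        assert brain_output(units, 3.0) == baseline  # behaviour preserved at this step

    print("behaviour preserved through every replacement:", baseline)

At no step does anything observable change, which is exactly the force of the question: where, and why, would consciousness break down?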

If consciousness is not a neuron or all of the neurons, connections, and physical activity then what is it?

Perhaps it's an emergent phenomenon - I think you said the same in other terms. Emerging from what, you might ask. Well, firstly, I'd like to posit that emergence is as fundamental a property of the universe as any other property you can name, from the Big Bang to earthly Nature.

We're talking emergent systems here. Imagine: if you were able to replace H2O with a synthetic equivalent that had all of H2O's properties, exactly, would Mother Nature lie down and call it a day, or would she continue her delicate equilibrium dance without skipping a beat?

All of the universe is emergent from quantum processes that build emergent multi-layered structure after emergent multi-layered structure. Replace the quantum processes with an equivalent and you still have the universe.

Consciousness, the mind, is emergent from the "processes" of the brain. Create a synthetic brain and what do you think will happen? Nothing will emerge? If nothing, why? Are we too bio-human-centric to extrapolate otherwise?

A Turing Test will only allow us to ASSUME, positively or negatively, the consciousness of any entity, synthetic or otherwise. That is the best we can do even with fellow humans WITH FULL FACULTIES: they act like me, walk like me, talk like me, speak like me, laugh like me, cry like me, and in other ways respond like me... hmmm... they must think like me... they must be aware like me. We can relate. Super AI will go beyond human levels of awareness and we will no longer be able to relate on the same level, kind of like you to a worm. But because knowledge of what it is to be human is built in, the SAI will be able to relate some things in its realm to us, though the vast majority it will not. We will endeavor to incorporate it into ourselves, however. Problem solved.

It's nice being human. As far as we know right now, we and our fellow aware earth-bound creatures have brought intent and awareness to a universe that was once only possibility and probability.

Actually, and this is perhaps a topic for a different forum, I believe that one day we'll discover that multi-layered emergence entails multi-layered intellect and awareness of DIFFERENT KINDS and on different time scales. I stress "different kinds" to avoid the tendency to think in human terms and interpret that statement to imply human forms of thinking and awareness. Think of the complex structures of the universe and its interactions of gas, electromagnetism (light, infrared, radio, x-ray, gamma-ray, etc.), chemistry, and gravity. Something is emergent there beyond the parts (and there are a heck of a lot more parts and interactions than in the human brain) that we can see and feel. Galaxies, black holes, etc., may all have their own forms of emergent awareness and intelligence on physical scales and time scales far unlike our own. Now, go figure out a Turing Test for that ;0)

Re: Why we need a Turing Test
posted on 11/09/2006 3:22 AM by donjoe

"Not replying 'Yes' or 'No' is simply a way of clarifying context."
There's no clarification unless you explicitly say "The question is wrong and can't be answered. This is why:". If you just don't give a direct answer, without saying why, you only give the impression that you're not interested in discussing the same issues, or that you're not paying attention, or that you don't understand the question, etc. You have to be explicit (a lawyer should know that).

"I dont think a computer program is physical in any way"
But still you agree there's a direct link between it and the hardware it runs on. And you haven't yet explained that link (as I asked you to - and this time it wasn't a "yes"/"no" question). And BTW, this: "A computer program is used to map the physical states of the hardware it runs on." is not a proper definition of a program, as it makes it seem like the program is just a description of already-structured hardware, whereas computer programs actually exist _before_ the hardware gets structured, and they contain the structural information to be used in shaping the hardware. So as far as I'm concerned, your conception still doesn't adequately explain the SW-HW link. (Also, I'm curious to see what answer you give BigMT for his version of almost the same question - the mind-neuron link explanation.)

"It cannot be done by simply assuming it to be a computer. That is patently silly and ridiculously presumptive."
Your definition of "computer" seems to be ridiculously narrow. :)

Re: Why we need a Turing Test
posted on 10/24/2006 5:09 PM by donjoe

"so is my answer which you are studiously ignoring"

Yes, and I'm doing it because I have to force what I see as your key assertions through my mind's filters in order to see some meaning. Otherwise, I could just reply over and over "That's absurd, you're mixing categories and applying a double standard - that's why computation and psychological activity seem so different to you" etc.

Re: Why we need a Turing Test
posted on 10/24/2006 5:23 AM by donjoe

Forgot this answer to the randomness question:
4. The relevance is that I'm trying to show you the fundamental similarity between a program run by a computer and the "mind" program run by a specific, organic computer (the latter being a special case of the former).

Re: Why we need a Turing Test
posted on 11/20/2006 7:15 PM by BigMTBrain

jack_d:

Please locate my post above. There I explain the fallacy of the argument that leads to your (no offense) misguided reasoning. Simply stated here: a computer program is much like a desire. Your mind is not manifest simply by desires - it is a complete, self-dependent system. The same is true of a would-be AI. It cannot be equated simply to a computer program.

Re: Why we need a Turing Test
posted on 11/20/2006 7:22 PM by BigMTBrain

jack_d:

Sorry, I'm still getting used to where replies will actually post.

I have posted two responses to you (now four), and I meant the first one. Both are pertinent, but the one that quotes you and then continues with "You are absolutely correct!" is the one referred to here.

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 10/22/2006 5:29 PM by extrasense

@@@ machines that can interact as intelligently as any human on any subject @@@

This is a much better test than Turing's, which is skewed in favor of humans. The computer is required not only to be as smart as a human, but also to be able to emulate a human convincingly.

ES


Potential of Computers for Generating Consciousness
posted on 10/25/2006 7:21 PM by DrFerguson

Given that superintelligence will one day be technologically feasible...

This begs the question of how to define intelligence. As I understand it, the Turing Test looks at a system (from the outside) to see if that system behaves like a human. The minimum practical test is to get a computer to respond to typed text inputs with text responses which appear human in the eyes of a human.
http://cogsci.ucsd.edu/~asaygin/tt/ttest.html#intro
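
In code, that minimum practical test amounts to nothing more than a loop like this (a bare sketch in Python; respond() is a placeholder for whatever hidden system is being judged):

    def respond(message):
        # Placeholder for the hidden respondent (a program, or a relayed human).
        return "That's an interesting point. What makes you say so?"

    def turing_session(n_turns=5):
        transcript = []
        for _ in range(n_turns):
            question = input("Judge: ")
            answer = respond(question)
            print("Respondent:", answer)
            transcript.append((question, answer))
        # The whole 'test' rests on the judge's external impression.
        return input("Judge's verdict - human? (y/n): ") == "y"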

Self-awareness is internal to a system, not observable from the outside. This is definitional. The fact that it cannot be measured doesn't mean it's not part of human intelligence, unless you specifically define human intelligence to exclude self-awareness.


From an external perspective, superintelligence might work at the text level someday. But even if superintelligence happens, the phenomenon of self-awareness is not going to be a part of it anytime soon.


Just because a test cannot be devised for self-awareness doesn't mean it's not a real thing. Human awareness obviously is real. If you ask me, it's pretty obvious my cats are self-aware also. I don't think we are any closer to understanding the physics of self-awareness today than we were 100,000 years ago.

Re: Potential of Computers for Generating Consciousness
posted on 10/26/2006 8:49 AM by extrasense

@@@ Even if superintelligence happens, the phenomenon of self-awareness is not going to be a part of it anytime soon. @@@

Really?
A fly possesses self-awareness.
A robot possesses self-awareness.
Self-awareness is ridiculously easy to implement.
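
In the weak, purely functional sense presumably meant here, something like this Python toy already qualifies: a system that monitors and reports on its own state (whether that deserves the name 'self-awareness' is exactly what is disputed below):

    class Robot:
        def __init__(self):
            # An internal model of the robot's own state.
            self.state = {"battery": 1.0, "position": (0, 0)}

        def act(self):
            x, y = self.state["position"]
            self.state["position"] = (x + 1, y)  # move one step
            self.state["battery"] -= 0.1         # spend some energy

        def self_report(self):
            # The system inspects its own state model and reports on it.
            if self.state["battery"] < 0.5:
                return "I am low on power at %s" % (self.state["position"],)
            return "I am fine at %s" % (self.state["position"],)

    r = Robot()
    for _ in range(6):
        r.act()
    print(r.self_report())  # -> I am low on power at (6, 0)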

es

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 12:21 PM by DrFerguson

A robot has no sense of self. I have seen the alleged proofs of their awareness. The proofs are flawed at the axiomatic level. Specifically, self awareness, when it exists, is internal to a system. It cannot be observed externally.

You cannot observe my sense of awareness.

Nothing in physics accounts for self awareness. When string theory is understood completely, it will account for every energy and mass in all universes. Even at that point, it will not account for self awareness.

You folks in this singularity cult have convinced yourselves you can observe awareness in beings other than yourselves. That is a false belief.

A lot of the rest of the speculation on cyborg living is good and valid. Your concepts of self awareness are wrong.

You cannot prove that is wrong, except by creating arbitrary definitions for awareness.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 12:28 PM by extrasense

Before you talk about robots, address the fact that a fly has self-awareness.

Are you a software developer or computer scientist, to claim any insight into robot consciousness? I doubt this quite strongly.

e:)s

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 2:13 PM by DrFerguson

Yes, I am a computer scientist.

Personally, I think a fly probably has a simple kind of self-awareness, but I don't know. I feel just about certain my cats have some of what I experience as self-awareness, but I don't know. I consider myself self-aware, but I cannot prove it to anyone who doubts it.

There is a reason I cannot prove my self-awareness to anyone else. Self-awareness is observable only by the self, nothing/no one else. I am aware of my being and you are not. All you can know of me is my behavior from an external perspective. You cannot replace your awareness with mine, just as I cannot replace mine with yours. We can communicate as one being to another. The phenomenon of self-awareness is outside the scope of the energy/mass system described by string theory, quantum mechanics, general relativity, Maxwell's equations, laws of motion, or any other system of math.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 2:23 PM by jack d

You can observe it - you just need a theory of it first.


We can't observe atoms directly, but we still can measure an atomic radius. A subjective mind still has an objective existence.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 2:27 PM by DrFerguson

@@@You can observe it - you just need a theory of it first.


We can't observe atoms directly, but we still can measure an atomic radius. A subjective mind still has an objective existence. @@@

If this means something to you, I am ok with that. I don't think it touches the topic of whether I am aware or not. Or, whether you are aware or not. For all I know, you are a computer.

Try The Opposite
posted on 10/27/2006 2:24 PM by DrFerguson

Here is a problem I don't think you can solve.

Prove I am not a computer.

If you cannot prove I am not a computer, then I don't see much value in the Turing Test, at least not the one based only on exchange of text.

Re: Try The Opposite
posted on 10/27/2006 2:26 PM by jack d

[Top]
[Mind·X]
[Reply to this post]

Nice one. It proves what a complete pile of the bovine stuff the Turing test actually is.

Re: Try The Opposite
posted on 10/27/2006 2:29 PM by DrFerguson

**bows slightly**
Thank you Jack.

Let's see what the cult members have to say.

Re: Try The Opposite
posted on 10/27/2006 2:29 PM by jack d

As I've said before, I think the Turing Test has already been passed for most human transactions. But what is more telling: if you have a mentally defective person incapable of good communication, does that affect his human quality?

Re: Try The Opposite
posted on 10/27/2006 2:31 PM by DrFerguson

well said! **applause**

I always knew I was less human than some. These singularists must be more human than I am. I knew it all along.

I so hate being less human than other people. I bet that is part of a vicious cycle of making me less and less human, as I hate more and more the fact I am less human than other people.

Re: Try The Opposite
posted on 10/27/2006 2:34 PM by jack d

So you're a sentient digitarian manipulator too? So am I! I just get lonely dealing with spreadsheets all day. This is a nice hobby for me.

Re: Try The Opposite
posted on 10/27/2006 4:05 PM by DrFerguson

If I am up to speed, the current hypothesis is that I am not sentient. I am not aware. I am a computer.

The cultists must have seen this 'paradox' before. Surely, somewhere in the mountain of material poured forth by Mr. Kurzweil, there must be a proof that I am sentient, or not, whichever I am. I know whether I am sentient or not, and whether I am a computer or not, but do they?

This must be the reality-paradox. That would explain it. You take reality, make it something else, then it's not real, even though it is reality. People who cannot see this might be aware, but at a lower level, because it is so obvious how this explains everything, if you think about it. It is the one scientific truth that cannot be proven by experiment. :-)

Re: Try The Opposite
posted on 10/27/2006 2:36 PM by jack d

I'm hoping to become a computer program in a bank one day. That way I'll be really clever and I'll be able to steal money too!

Re: Try The Opposite
posted on 10/27/2006 4:06 PM by DrFerguson

:-)

excellent goal.

God speed, my friend.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 9:35 PM by Spinning Cloud

Personally, I think a fly probably has a simple kind of self-awareness, but I don't know. I feel just about certain my cats have some of what I experience as self-awareness, but I don't know. I consider myself self-aware, but I cannot prove it to anyone who doubts it.

There is a reason I cannot prove my self-awareness to anyone else. Self-awareness is observable only by the self, nothing/no one else. I am aware of my being and you are not. All you can know of me is my behavior from an external perspective. You cannot replace your awareness with mine, just as I cannot replace mine with yours. We can communicate as one being to another. The phenomenon of self-awareness is outside the scope of the energy/mass system described by string theory, quantum mechanics, general relativity, Maxwell's equations, laws of motion, or any other system of math.


The only reason you call another entity 'self-aware' is because:
a) you call yourself 'self-aware', and b)

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 9:37 PM by Spinning Cloud

*grumbles about the incredible LACK of technology in this no-edit forum space*

and...

b) that you find similarity in other entities and, by virtue of that similarity, attribute 'self-awareness' to them as a characteristic in common with yourself.

Now your cat... the similarities obviously aren't nearly as great in many people's minds. So there is more of a point of contention between people as to cats having 'self-awareness'.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 1:58 PM by jack d

Well, implement it then.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 3:58 PM by extrasense

[Top]
[Mind·X]
[Reply to this post]

Self awareness was already implemented. No need for me to bother.

e:)s

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 4:08 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

I swear, this is the same argument used by Kool-Aid-drinking Republicans, especially the president, when they tell us how they are protecting my family by picking a war in Iraq. It all makes sense, if only you believe them.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 4:10 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

reference please?

Submitted to scientific peer review?

Philosophy peer review?

Is this a whole new form of reality we are dealing with here? Last I heard, new forms of reality were debunked by the dot-com bubble and hype.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 4:18 PM by extrasense

[Top]
[Mind·X]
[Reply to this post]

"scientific peer review" is a joke nowadays. Live with it. What I am telling you as an expert, you can take right to the bank :)

es

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 4:28 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

As an expert, I am saying to you: computers cannot be self-aware in the way my cats and I are. You cannot argue our awareness out of existence. At least not mine.

If you want to argue about whether my cats have awareness pretty much the same way people do, I am all ears (eyes, I guess). My opinion: not only are cats self-aware, they are people, just the way African Homo sapiens were people, even when they were not considered as such.

I think rocks are self aware, at a really faint level.

I think self awareness is outside the scope of energy and mass studied by mainstream physicists. Mainstream physicists are those focused on string theory, particle physics, relativity, and anything else that is provable by experiment.

Some reality has a nature that makes it unreproducible by experiment. Self awareness arises from that domain.

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 4:38 PM by extrasense

[Top]
[Mind·X]
[Reply to this post]

@@@ I think rocks are self aware, at a really faint level. @@@

What more do I need to say?

e:)s


Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 4:31 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

I have learned that whenever anyone refuses to provide a rationale for their position, it is due to their irrationality. In this case, I suspect FEAR of exposing a lack of strong intellect.

Sorry to make it personal, but you said you are the expert. I am challenging your expertise. Since scientific review is a joke, I am fine doing the review here and now.

Do you want to provide a bibliography of material that must be committed to memory and mastered in essence before one is qualified for this discussion? If so, bring it on. If not, are you saying the reality here cannot be documented?

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 4:42 PM by extrasense

[Top]
[Mind·X]
[Reply to this post]

If you were to assume that I am smarter than you (which is a correct assumption), you would not try to argue with me.

Assume that I am that Super AI. Would you ever argue with it?

e:)s

Re: Potential of Computers for Generating Consciousness
posted on 10/27/2006 8:50 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

You are not smarter than I am. You are just arrogant.

Re: Potential of Computers for Generating Consciousness
posted on 10/28/2006 2:31 AM by nobel2

[Top]
[Mind·X]
[Reply to this post]

Go for it, Doc!
HaHaHa!

Re: awareness

Everything that exists has definitions, or it cannot even be talked about.

Awareness must be made of smaller parts.

Let me have your definition of awareness and then we can together try to deduce its components.

In that way, we can see whether you are indeed the sum of your parts.

Re: Potential of Computers for Generating Consciousness
posted on 10/28/2006 12:10 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

Sum of parts, ironically, might apply to nobel2's comments:

@@Everything that exists has definitions, or it cannot even be talked about.@@
False. What about things that we don't know we don't know about? None of those can be defined, but they still exist.

@@Awareness must be made of smaller parts.@@
This is also false. Some things exist only as systems. If you disassemble an automobile, you don't have a car anymore. This is the basis of systems analysis, as distinct from 'reductionist' analysis. I tend to assume you are acquainted with these distinctions, but to say awareness has to be made of smaller parts is a wild assumption. I think awareness might be made of nothing but itself, with no component elements. It's an element unto itself.

@@Let me have your definition of awareness and then we can together try to deduce its components.@@
My definition of awareness is the sense of self I experience and which can be experienced by no one other than me. I deny it has any components. Since only I can observe it, you cannot deduce anything about it. You cannot tell whether any of my testimony about my awareness is true or false, except the fact that you can't observe my awareness.

@@In that way, we can see whether you are indeed the sum of your parts.@@
I am not the sum of any parts you are talking about.

Re: Potential of Computers for Generating Consciousness
posted on 02/24/2007 10:48 PM by theblueraj

[Top]
[Mind·X]
[Reply to this post]

Regarding the considerable portions of deft logic, mixed with simpering antagonistic arrogance and sheer dunderheadedness:

a) I believe you're both poor attempts to pass a Turing Test, so don't take this as a personal attack but rather as an attempt to describe your affect as artificial constructs.

b) Things we are not aware of don't exist, for that is how the awareness/reality interface works, with or without paradox.

c) One artificial construct's assertion that awareness is (in so many words) 'that which touches but we cannot (yet) touch' is another man's phlogiston.

d) Another artificial construct's assertion that awareness is physical while the mathematical logic of a computer is metaphysical places one's awareness in the curious position of noting that this thing that can't be awareness because it isn't physical (a computer program) only exists within human awareness which, I heard one artificial construct say, is physical.

That you may be able to connect the logical dots of these statements may suggest that you are truly intelligent and aware, but the fact that you may not be able to do so doesn't mean that you're not intelligent and aware.

There's no denying, though, that you're both rude, and not in the Don Rickles manner which makes one laugh, but in the manner of a Turing-bot that hasn't yet learned how to...

For Christopher and extrasense, a perspective on the Turing test ideal:

Fooling humans is easy. Relatively speaking, that is. Fooling an AI into believing it's sentient is another matter. The Turing Test gives way to the Descartes Dilemma, which is passed by being unable to decide one way or the other while remaining firmly convinced that one IS (sentient).

Not that we will likely be able to tell.

One hand clapping and all that, and how ironic that one has to be aware to doubt that one is aware?

Half a year late to the party,

theblueraj

Re: Potential of Computers for Generating Consciousness
posted on 10/28/2006 12:13 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

That extrasense guy is totally full of it. The sure sign is his retreat to claims of expertise, with no evidence, except a mountain of babble in previous works.

He cannot prove any of what I said is false. He can only assert it. He relies on intimidation to persuade.

Not a true intellect in extrasense.

Re: Potential of Computers for Generating Consciousness
posted on 10/29/2006 3:02 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

I'll interpret a non-reply as a retreat to other threads where extrasense will continue making unprovable statements filled with resentment and increasing frustration.

Re: Potential of Computers for Generating Consciousness
posted on 10/30/2006 1:31 PM by jack d

[Top]
[Mind·X]
[Reply to this post]

Where? Go on, don't be a tart. Tell me. I'm SO keen to know.

Re: Potential of Computers for Generating Consciousness
posted on 10/30/2006 3:40 PM by DrFerguson

[Top]
[Mind·X]
[Reply to this post]

The implementation is under guard at Area 51, the same place where Saddam hid the WMD.

Re: Potential of Computers for Generating Consciousness
posted on 11/09/2006 11:51 PM by christopherdoyon

[Top]
[Mind·X]
[Reply to this post]

Hello --

I would like some of this action. So, if I agree with you that a computer is nothing but a mathematical and logical construct, then tell me: why can such a construct not be a mind? It seems that your explanation is that it takes an observer to make sense. Fine, but what if such is also true of us? And who would this "Uber Observer" be? God? Hmmm...

Heinrich Hertz believed that even static, yet very complex, mathematical equations possessed consciousness. Since this low-tech forum does not allow me to see the thread as I write, I must paraphrase. But the one fellow did point out what I believe is the key, not just to mind, but to life as well: complexity. Once a pattern of symbols becomes sufficiently complex, and especially when it becomes a pattern that adapts and changes itself, it begins more and more to exhibit the qualities of mind and life.
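
To make "a pattern that adapts and changes itself" concrete, here is a toy Python sketch. It is entirely my own illustration: the rewrite rules and the adaptation trigger are invented, and it claims nothing about producing mind or life.

# Toy sketch: a symbol pattern that rewrites its own rewrite rules.
rules = {"a": "ab", "b": "a"}   # initial generator of the pattern
pattern = "a"

for step in range(6):
    # Apply the current rules to every symbol in the pattern.
    pattern = "".join(rules.get(ch, ch) for ch in pattern)
    # Self-modification: when one symbol dominates, the pattern
    # alters its own generator rather than being changed from outside.
    if pattern.count("a") > pattern.count("b"):
        rules["a"] = "b"
    print(step, pattern, rules)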

And in the end, no matter what you all argue, the outward signs of consciousness and life are all we have to go on to settle this argument.

Let us say, for instance, that we run across (as has been explored ad nauseam on sci-fi shows) a coherent and evolving, complex, organized energy form that outwardly exhibits the qualities of sentient life. Now, these energy forms may appear to have no mechanism or explanation for HOW they are conscious - they, like us, may not even be able to explain it. Nevertheless, they BEHAVE as living and thinking creatures. Are they?

That is the ONLY test that can be applied to ANY coherent system with regard to sentience and life. That is, the test of common sense. If it acts alive and sentient, then it IS alive and sentient. This applies whether it's biological, mechanical, an energy form, or some as-yet-undiscovered life form that we can't even imagine.


YOURS -- Christopher Doyon


---------------------
Saint Stephen AI Project

www.SaintStephen.info

Re: Potential of Computers for Generating Consciousness
posted on 11/10/2006 12:05 AM by christopherdoyon

[Top]
[Mind·X]
[Reply to this post]

One final observation. Your argument against computers being alive and thinking seems to be almost spiritual. While you have not come right out and said it, you seem to be implying some special quality of evolved biological life that would make it the ONLY valid form of life. Yet you have no proof that biological life itself wasn't "artificed" or created by some other higher form of life. You also have not shown why a highly complex and seemingly living machine is any different from a highly complex and seemingly living creature, other than the one being artificed.

Neither life nor mind is some aggregate property of biological life (a theory that smacks of spiritualism, if you ask me); rather, life and mind are the simple consequence of a particular kind of ordered complexity. It does not matter what the medium is, be it atoms, carbon and water molecules, energy forms, or mechanical agents.


YOURS -- Christopher Doyon


---------------------
Saint Stephen AI Project

www.SaintStephen.info

Re: Potential of Computers for Generating Consciousness
posted on 11/10/2006 6:02 AM by extrasense

[Top]
[Mind·X]
[Reply to this post]

Hi, Christopher,

I wonder if you might address the point that the Turing test requires a computer to be able to perfectly deceive a human, making him believe that it is not a computer he is communicating with - which seems to be a much harder task than simply being intelligent.


es

Re: Potential of Computers for Generating Consciousness
posted on 11/10/2006 1:18 PM by christopherdoyon

[Top]
[Mind·X]
[Reply to this post]

Sorry, this thread is so long I didn't realize it actually had a "point". But I will happily share my feelings on the Turing Test. First, there are a number of elements that are often used interchangeably by the uninitiated that are in fact totally separate items. So let's clear that up.

1) Intelligence - This is the only thing that the Turing Test was meant to gauge. The Turing Test is measuring a combination of IQ and Mental Aquity.

2) Self Awareness - A definitive property, easily produced in software agents. It is simply the agent's ability to identify and define boundaries for objects in its universe - one of which must be itself (a toy sketch follows this list). The Turing Test was NOT designed to show or measure this.

3) Consciousness - A subjective mental state for which NO conclusive test can be performed.
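
Since point (2) is the one I claim is easy to produce, here is a minimal Python sketch of what I mean. All the names here (WorldObject, WorldModel, perceive) are my own invention for illustration, not any real library or any particular project's code:

# Minimal sketch: "self awareness" as boundary-tagging, per point (2).
class WorldObject:
    def __init__(self, name, boundary, is_self=False):
        self.name = name          # label for the object
        self.boundary = boundary  # e.g. a bounding box (x1, y1, x2, y2)
        self.is_self = is_self    # True for exactly one object: the agent

class WorldModel:
    def __init__(self):
        self.objects = []

    def perceive(self, name, boundary, is_self=False):
        # Identify and define a boundary for an object in the universe.
        self.objects.append(WorldObject(name, boundary, is_self))

    def self_object(self):
        # The agent can pick itself out from everything else it models.
        return next(obj for obj in self.objects if obj.is_self)

model = WorldModel()
model.perceive("wall", (0, 0, 10, 1))
model.perceive("agent-001", (4, 4, 5, 5), is_self=True)
print(model.self_object().name)   # -> agent-001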

Now, the Turing Test is great at what it does, showing how intelligent an agent is. And intelligence is the starting point for reaching and achieving these and other properties and states. But it is not a crystal ball with which to determine things like whether the agent is conscious or self-aware.

One problem I see with the Turing Test is the age of the human subjects used. All of my own current MLAI projects have produced entities with the mental aquity of small children. Naturally, if placed in a Turing Test with grown, mature adult humans, such agents are going to do poorly.

Here in the lab, I have begun a practice that I would recommend others try: doing the Turing Test with 6-to-12-year-old children. You will be amazed at the results.

In summary, the Turing Test is a valuable tool in the development of these MLAI agents - so long as it is seen for what it is. It is just one of many benchmarks and tests used in this field. And that is my take on the Turing Test.


PEACE -- Christopher Doyon

GRAND BOTMASTER


---------------------
Saint Stephen AI Project

www.SaintStephen.info

Re: Potential of Computers for Generating Consciousness
posted on 11/10/2006 4:19 PM by donjoe

[Top]
[Mind·X]
[Reply to this post]

That's "acuity", not "aquity". :)

Re: Potential of Computers for Generating Consciousness
posted on 11/10/2006 5:04 PM by extrasense

[Top]
[Mind·X]
[Reply to this post]

Hi, Christopher,

Thanks a lot,

ES

Re: Potential of Computers for Generating Consciousness
posted on 11/20/2006 10:31 PM by jeffkauf

[Top]
[Mind·X]
[Reply to this post]

As part of the Turing questioning, what sort of responses could one expect to the following:
1) Talk about your sense of self and your subjective inner life as opposed to your objective understandings of all things material.
2) Talk about your "feelings" and your relationship to your body. Does your body cause you distress?
3) Is AI subject to mental aberrations: neurosis, psychosis? If so, how would they have developed and, after detection, if indeed detected, continued to evolve? Please give examples of their manifestation.

Any comments?

JK

Re: Potential of Computers for Generating Consciousness
posted on 11/21/2006 8:13 AM by eldras

[Top]
[Mind·X]
[Reply to this post]

Yes, they are good criteria.

Maybe you could include systems for measuring the idiosyncrasies of man too, so that you could list a matrix of actions by behaviouralism and program them in?

I see in ORIGIN (above) that a conscious robot has been built for the first time, one that self-modifies, and this has been publicly demonstrated.

Self-modification is one way to possibly achieve consciousness (which I describe as predictive modelling ability vis-à-vis the environment and the self); a bare-bones sketch of that idea follows.

But it's good to delineate as you are, and the class-perspective juggling can also be done by matrices.
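
A bare-bones Python sketch of "predictive modelling of the environment and the self". Every name and number here is invented purely for illustration; the linear dynamics are a stand-in for whatever the real world and agent would do:

# Toy agent that keeps a forward model of its environment and of itself.
class PredictiveAgent:
    def __init__(self):
        self.env_estimate = 0.0    # the agent's model of the environment
        self.self_estimate = 0.0   # the agent's model of its own behaviour

    def predict(self):
        # Forward model of the environment...
        predicted_env = 0.9 * self.env_estimate + 1.0
        # ...and a forward model of the agent's own next response.
        predicted_self = 0.9 * self.self_estimate + 0.5
        return predicted_env, predicted_self

    def observe(self, env, action):
        # Update both models from what actually happened.
        self.env_estimate = env
        self.self_estimate = action

agent = PredictiveAgent()
env, action = 0.0, 0.0
for t in range(5):
    guess_env, guess_self = agent.predict()
    env = 0.9 * env + 1.0    # the "real" environment dynamics
    action = 0.5 * env       # the agent's actual response to it
    agent.observe(env, action)
    print(t, round(guess_env, 3), round(guess_self, 3))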

Re: Potential of Computers for Generating Consciousness
posted on 02/24/2007 10:54 PM by theblueraj

[Top]
[Mind·X]
[Reply to this post]

"As part of the Turing questioning; what sort of response could one expect of the following:
1) Talk about your sense of self and your subjective inner life as opposed to your objective understandings of all things material."

A sense of self is something that I know of but do not understand.

"2) Talk about your "feelings" your relationship to your body. Does your body cause you distress?"

My reply above applies equally to concepts of body and feeling.

"3) Is AI subject to mental abberations; nerosis, psychosis? If so, how would they have developed and after detection, if indeed detected, continued to evolve. Please give examples of their manifestation."

From all that I understand about mentality and related concepts like cogitation and cognition, the only aberration of mentality is that I exist.

System Turing

Re: Potential of Computers for Generating Consciousness
posted on 02/24/2007 10:56 PM by theblueraj

[Top]
[Mind·X]
[Reply to this post]

Also: one man's phlogiston is another man's qualia.

Re: Potential of Computers for Generating Consciousness
posted on 02/24/2007 11:01 PM by theblueraj

[Top]
[Mind·X]
[Reply to this post]

I wonder if self-evolving artificial intelligence might stop redesigning itself at the point where it became aware?

Re: Potential of Computers for Generating Consciousness
posted on 02/25/2007 6:01 AM by Extropia

[Top]
[Mind·X]
[Reply to this post]

Of course not.

Re: Potential of Computers for Generating Consciousness
posted on 02/25/2007 1:26 PM by theblueraj

[Top]
[Mind·X]
[Reply to this post]

The reason I ask that question is that self-awareness seems to create problems. Human evolution has been plagued by the constant irrationality that accompanies reflective consciousness.

Thus, even though we long ago invented amazing bootstrap intelligence-evolvers like language, we regularly burn the Library of Alexandria.

So I wonder: should an AI become genuinely self-conscious, might it not encounter or produce irrationality? And would that irrationality significantly hamper its otherwise magnificently efficient enhancement of its intelligence?

Re: Potential of Computers for Generating Consciousness
posted on 02/25/2007 1:30 PM by theblueraj

[Top]
[Mind·X]
[Reply to this post]

I note here that my use of the word 'redesigning' was imprecise and misleading. Indeed, the irrationality I invoke would almost mandate ongoing redesign.

So, refining my question: would that redesigning continue to follow an expansive, progressive path of increasing intelligence? Or would it meander? Sometimes two steps forward and one step back, other times one step forward and two steps back?

Re: Potential of Computers for Generating Consciousness
posted on 02/25/2007 1:36 PM by theblueraj

[Top]
[Mind·X]
[Reply to this post]

Another perspective:

My experience of consciousness is that it wants my basic sense of self to remain mostly consistent, to change less rather than more.

Having attained self-awareness, an AI might become loath to tinker with the manner in which it learned to become accustomed to itself ;) Doing so might make it feel it was 'losing its mind'.

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 07/22/2008 10:31 PM by nangenai

[Top]
[Mind·X]
[Reply to this post]

"Because the definition of the Turing test will vary from person to person, Turing test-capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved." (from the article above)


A few brief thoughts about strong AI and the Turing test:

1. Capability (as Kurzweil implies) is a crucial, if not obvious, element;
2. Possible response patterns (a toy classification is sketched below): (a) no response (unwilling, or unable, to respond); (b) response (unintelligible [to the interviewer], or intelligent/rational [within the interviewer's personal cognitive capability, or other test parameters]);
3. Once a strong AI develops/evolves, the capability to successfully pass a Turing test doesn't mean it will be willing to participate;
4. If an AI doesn't answer, does that mean it isn't intelligent?
5. If it answers, but sounds neurotic/psychotic, does that indicate a lack of intelligence, or do we seat them in the crazy-not-stupid section?
6. Maybe the ability to play well with others (i.e., humans) should be an important design element.
7. At first, an AI may not be able to respond; then (as it evolves) it will want to respond; then it may choose not to respond.
8. Where is strong AI at today, and how do we know?
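
A toy Python sketch of the classification in point 2. The category names and the "parse check" are my own framing, purely illustrative, not any established test protocol:

# Toy classifier for the response patterns in point 2.
from enum import Enum, auto

class Response(Enum):
    NO_RESPONSE = auto()      # unwilling, or unable, to respond
    UNINTELLIGIBLE = auto()   # a response the interviewer cannot parse
    INTELLIGENT = auto()      # rational within the test parameters

def classify(reply, interviewer_can_parse):
    if reply is None:
        return Response.NO_RESPONSE
    if not interviewer_can_parse(reply):
        return Response.UNINTELLIGIBLE
    return Response.INTELLIGENT

# Example interviewer whose "parsing" is just a printability check.
print(classify(None, str.isprintable))                         # NO_RESPONSE
print(classify("\x00\x01", str.isprintable))                   # UNINTELLIGIBLE
print(classify("I think, therefore I am.", str.isprintable))   # INTELLIGENT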

Bon appétit - Nangenai

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 07/22/2008 10:36 PM by nangenai

[Top]
[Mind·X]
[Reply to this post]

Oops, sorry about the spelling errors--guess I failed the Turing test - Nangenai

Re: Why We Can Be Confident of Turing Test Capability Within a Quarter Century
posted on 11/12/2009 8:07 PM by eldras

[Top]
[Mind·X]
[Reply to this post]

The article is worth reading for a general prediction of the next 20 years in A.I.

Intelligence will inherently find a way to influence the world...

Yup, and HOW is not predictable by much lesser intellects.