Can a Machine Think?
by Clinton W. Kelly

There are three ways to create an AI: model the mind, model the brain, and artificial life. Which one will work?


Originally published March 22, 2001 at iMP Magazine. Published on KurzweilAI.net June 26, 2001.

Why do we think computers may have the "right stuff"? The reasons are among the most significant philosophical concepts of the late 20th century.

In one variant or another, the question "Can a machine think?" has occupied the attention of philosophers and others for centuries, stimulated from time to time by the emergence of ingenious mechanisms which suggested at least the possibility of an affirmative answer. In our own times, we have seen the creation of autonomous machines--robots, for example, that can perform tasks without constant human supervision. Does this mean that the device thinks? Thinking about what it means for a machine to think means thinking, as well, about ourselves. Indeed, what does it mean to think? Does thinking define humanity? Do animals think?

Logic and Reasoning: Machines Which (Who?) Play Chess

One of these ingenious "thinking" mechanisms caught the public's attention in 1809. The Emperor Napoleon I, a chess player certainly of prominence and reportedly of ability, lost a chess game in 15 moves to a seemingly thinking machine: a clockwork automaton known as "the Turk." Reports of the time say that Napoleon "angrily stalked from the room."

The Turk, named for one of its component parts, a mannequin dressed in elegant Turkish attire, had been built for the Austrian Empress Maria Theresa in 1769 by the Baron von Kempelen, a Hungarian engineer. The machine consisted of the mannequin, whose mechanical arm moved the chess pieces, and a cabinet at which the mannequin sat. Inside the cabinet was a shining, brass clockwork mechanism. This mechanism was supposed to be responsible for deciding the automaton's moves and for positioning the mannequin's arm. The cabinet was opened for inspection before each game to convince the audience that the mechanism was all that it contained.

The Turk did not win every game, but it won enough of them to establish a reputation as a player of rank. Its success gave rise to much speculation about how a cabinet full of gears could successfully compete with well-known human chess masters. How could this clockwork mechanism think? In fact, it couldn't. The Turk was a magician's illusion: the cabinet was cleverly designed to conceal a human chess player.

Now, fast-forward some 188 years to 11 May 1997. Another prominent chess player, grandmaster and world champion Garry Kasparov, has lost the sixth game and the match to a machine, but this automaton was no illusion. A creation of silicon and software (not brass gears), of special-purpose computer hardware and clever instructions encoding the insights of skilled players, IBM's Deep Blue was the first machine to defeat a reigning world champion. Within the limits imposed by the game, this machine seemed to think. Observers commented on its style of play almost as if they regarded it as human.

For many people, the outcome of this chess match between human and machine signaled the beginning of a major change in the way we view ourselves and our place in the universe. Chess has long been regarded as the most cerebral of games. The ability to play chess well has always been considered a hallmark of human logic and reasoning. Now we have a machine that solves a problem we have always solved with the highest human intelligence. Suddenly, a whole range of activities which had been the sole province of humans appears to be open to computers. If a computer can unseat the world's best chess player, some wondered, "How long will it be before I lose my seat down at the office?"

So Deep Blue plays a mean game of chess. What else can it do? Essentially nothing; but there are a very large number (tens of thousands?) of less well-known computer programs from the field of artificial intelligence (AI) whose performance in their own areas is as impressive as Deep Blue's. Each of these, like Deep Blue, captures in some limited way elements of intelligent behavior. There are programs that: play chess, backgammon, Go and bridge; solve word problems; detect credit card fraud; pick stocks; troubleshoot machinery; find information on the World Wide Web; target advertising; screen loan applicants; monitor compliance of Bosnian combatants with arms restrictions; predict chemical reactions; make medical diagnoses; reduce emissions from an electric generating plant; schedule thousands of telephone repair personnel daily; screen Pap smears; simulate production from an oil reservoir; conduct logistics planning for Operation Desert Storm; control a process to make soup; detect violations of the Nuclear Test Ban Treaty; read handwriting; write poems; compose music; paint pictures; prove mathematical theorems; design jet engines and computer systems; help explore Mars; and (in 1995) drive a car autonomously from Washington, D.C. to San Diego, California.

It would appear that some of those "seats down at the office" are indeed now occupied by computers. Yet somehow, none of these AI applications, impressive and useful as they are, would be called "intelligent" in the general, human sense. Most of them are predicated on highly controlled situations where there are "rules" and it is challenging, but possible, to anticipate contingencies. Thus, these systems do beautifully in their niches, but our common sense concept of intelligence requires something more.

How Do We Know If We Have It?

But how do we recognize and measure intelligence? This is a remarkably difficult issue. Leaving to one side debates over IQ and other formal metrics, common sense notions of intelligence are based on observations of behavior with the belief that the more complex the behavior, in some sense, the more intelligent the animal.

The most complex human behavior is the use of language--which is not to be confused with communication. Animals communicate. Studies of animal intelligence, with data on such animals as mantis shrimp, lobsters, horseshoe crabs, octopuses, and sea anemones (as well as the more traditional studies of rats, mice, birds and, of course, primates), show that they also exhibit other complex behaviors. Thus, whatever "intelligence" is, it exists to some degree across a wide range of organisms. But there is a clear, qualitative and major difference between the ability of humans and that of animals to communicate: animals do not possess anything remotely like human language skills.

The notion of language and intelligence has preoccupied many engaged in research in computing, beginning with pioneers like the British mathematician Alan M. Turing. Turing, a cryptographer during World War II, devised a test for intelligence in 1950 that focuses on behavior that requires language skills.

The essence of the Turing test is a conversation, via teletype, on any topic whatsoever, between a person, a computer and a judge (a human). The judge's goal is to decide which of the respondents is human. If a computer, said Turing, could answer so as to convince the judge that it is a person (not a computer), then for all practical purposes the computer could be said to "think". More recently, the test has been thought of as perhaps using speech instead of a teletype machine, and perhaps including some requirements for image understanding. No AI program has come close to passing the Turing test.

Since Turing devised the test, objections to it have been voiced in several quarters. The philosopher John Searle of the University of California, Berkeley, has raised one of the most interesting and contentious philosophical arguments against the "sufficiency" of the Turing test, and indeed against artificial intelligence itself. That is, is the Turing test enough? Will passing it tell us whether a computer truly thinks? Searle asks us to imagine that he sits in a room with a slot in the door through which come slips of paper with questions written in Chinese characters. Searle does not understand Chinese, but he has in the room with him a code book of instructions in English which tells him how to develop answers. He follows the instructions, prepares the answer in Chinese, and pushes it through the slot. The answer makes sense to the Chinese speakers outside. To these outside observers, the "room" appears to understand Chinese. Searle, however, was just following formal rules and was completely ignorant of the meaning of either the questions or the answers. He concludes, since he knows he does not understand Chinese, that mere symbol manipulation, although producing the appearance of intelligence to the outside observer, cannot produce understanding or awareness in the mechanism doing the manipulation, in this instance Searle himself.

What Searle is saying is that if he does not understand Chinese solely on the basis of running a computer program for understanding Chinese (the instructions in the code book), then neither does any digital computer. Digital computers, says Searle, "merely manipulate formal symbols according to rules in the program." He continues: "What goes for Chinese goes for other forms of cognition as well. Just manipulating the symbols is not by itself enough to guarantee consciousness, cognition, perception, understanding, thinking and so forth." Searle's argument implies that language is more than the symbol manipulation implicitly assumed by the Turing test, and this raises the bar, so to speak, for what constitutes "thought."

Searle also links intelligence and consciousness, suggesting that conscious intentionality is the essence of intelligent behavior. Marvin Minsky of MIT, one of the founders of AI, believes that consciousness, specifically emotion, is critical for setting and changing goals--clearly an important part of intelligent behavior. Others have no difficulty in separating intelligence from consciousness. One author says "it's a lot easier to imagine the possibility of an intelligent computer than it is to imagine the possibility of a conscious computer or a computer with free will."

There are also arguments against the "necessity" of the Turing test, based on observations of animal behavior. These have led ethologists and others to believe that there are non-verbal cognitive behaviors which are also indicative of intelligence. For example, some animals can form concepts, categorize things, form a cognitive map of the landscape, learn, adapt to new situations, remember and act upon cause and effect (i.e., use what worked before), and develop problem-solving strategies. These researchers argue that although "Elephants don't play chess" and "Dogs don't argue syllogisms," both exhibit intelligent behavior. Apes such as chimpanzees and orangutans prepare and transport tools ahead of time; pigeons can make relative numerosity judgments; magpies practice deception, as do vervet monkeys and chimpanzees; and bees can create mental maps and do spatial reasoning.

All these observations of animal behavior suggest that intelligence is complex, multi-faceted and very hard to define. The Turing test captures one kind of intelligence; other tests are required to capture the non-verbal behaviors characteristic of intelligent animals which perceive, move around, reason, and survive in the real world. Thus, to devise a test, or series of tests, for what constitutes a "thinking machine" means developing criteria for such properties as perception, contingent or context-dependent decisions, and problem-solving. This need not be at a high cognitive level like responding to chess moves; rather, a fairly primitive "thinking machine" would have to recognize an obstacle, back up and then move around it. This sequence of moves turns out to be very challenging to design.

Why Do We Believe We Can Construct A "Thinking Machine?"--What A Computer Can And Cannot Do

We now know that we can make computers excel on limited problems such as recognizing speech (if it is grammatical, carefully pronounced and the context is restricted), scheduling a factory, recognizing a particular object in a scene, designing a jet engine, or even performing a complex medical diagnosis. But we are very far from creating a computer which can pass an unrestricted Turing test--even leaving aside Searle's objections to the test. As one writer put it, "computers have mastered intellectual tasks, such as chess and integral calculus, but they have yet to attain the skills of a lobster in dealing with the real world." Given the gap between these niche capabilities and the requirements of the unrestricted Turing test, why do we think we can create an intelligent machine--a machine which can pass the unrestricted Turing test? Why do we think computers may have the "right stuff"? The reasons are among the most significant philosophical concepts of the late 20th century.

The philosophy which dominated thinking about the mind for almost three centuries is called Cartesian dualism: the position, first set forth by the French mathematician and philosopher René Descartes in the early 1600s, that there are two kinds of substance in the world, immaterial "mind stuff" and material substance. If we held this belief today, there would be little reason to suppose we could make much progress creating intelligence using a computer. Today, most philosophers instead argue that the mind (and intelligence) is an emergent property of material processes at the micro-level. That is to say, intelligence, which includes thinking, arises from the brain's biochemistry, which is shaped by heredity and environment. This fundamental insight about biology and mind suggests that if we simulate the brain at the right level of detail, mind and intelligence may also emerge from the simulation. The open issue is how far down in the structure we have to go. Can we get mind and intelligence by simulating brain processes at the higher "psychological" level--how we read, for example--or do we require lower-level neurophysiological detail?

What we know of the human brain suggests it is almost unimaginably complex. It contains approximately 100 billion neurons of different types, densely interconnected, with each neuron linked to tens of thousands of others. By contrast, a snail has about 100,000 neurons, a bee about 600,000, and a laboratory rat about 65 million. But a brain is much more than neurons. The neurons, and other types of cells, are immersed in a complicated chemistry which they influence and which affects them, and which develops over time. The correct way to think of the brain is as a complex, dynamic, nonlinear chemical system, not just as a network of neurons. Why do we think, even in principle (leaving aside for the moment the formidable practical issues), that we can simulate this? The answer requires examining the theoretical limits of computation--limits on what a computer can do.

The concept of computation was first formalized by Turing in 1935, well before there were electronic computers. Turing's goal was to formalize some intuitive concepts of methods for mathematical reasoning. To do this, he employed a mechanical metaphor which is called the Turing machine. It consists of an infinite tape, a sensing head for reading and writing symbols on the tape, and a control box with a finite number of internal states. In the control box is a table (the software program) which the machine uses to determine what action to take. For each possible state of the control box, and for each possible symbol being read by the sensing head, the table has an entry which tells the machine what symbol to print on the tape, in which direction to move the sensing head along the tape, and which state to enter next. So imagine the head scooting back and forth along the tape, reading and writing symbols. Thought of this way, the Turing machine is simply a device for transforming one string of symbols into another string according to a predetermined set of rules: the table in the control box. The value of the Turing machine is not as an actual device for doing computation but as a way of clarifying the operations masked inside real computers. Your personal computer (as well as the largest supercomputer) is, at its core, a Turing machine.
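
Though abstract, the control-table mechanics just described fit in a few lines of code. The following is a minimal simulator, a sketch only; its two-entry table is a made-up example that appends a 1 to a string of 1s, computing n -> n + 1 in unary notation.

    # Minimal Turing machine: a tape, a read/write head, and a control table
    # mapping (state, symbol read) -> (symbol to write, head move, next state).
    from collections import defaultdict

    def run_turing_machine(table, tape, state="start", blank="_", max_steps=10000):
        cells = defaultdict(lambda: blank, enumerate(tape))  # position -> symbol
        head = 0
        while state != "halt" and max_steps > 0:
            write, move, state = table[(state, cells[head])]
            cells[head] = write
            head += 1 if move == "R" else -1
            max_steps -= 1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    table = {
        ("start", "1"): ("1", "R", "start"),  # scan right across the 1s
        ("start", "_"): ("1", "R", "halt"),   # at the first blank, write a 1 and halt
    }

    print(run_turing_machine(table, "111"))   # prints 1111: three plus one, in unary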

The simplicity of the Turing machine makes it possible to establish theoretical limits on the ultimate problem-solving capabilities of real computers. One such result, called the Church-Turing thesis, is that if anything can be computed at all, it can be computed by a Turing machine. No physical process is known to exist that can be used to build a device computationally more powerful than a Turing machine. The Church-Turing thesis is called a thesis, not a theorem, because it is not amenable to proof. Nevertheless, most mathematicians believe it; no evidence to the contrary has turned up.

There are things which a Turing machine cannot do. There are numbers which are uncomputable--numbers which a Turing machine cannot generate even by executing an infinite number of steps. Most people believe that uncomputability is not important in real-world processes. Thus, if we describe real-world processes (like the functioning of a neuron in a brain) by the appropriate equations, those equations can be solved, or computed, by a Turing machine given sufficient time. So if you accept that the brain is a physical process, that there is no mysterious "mind stuff," then in principle it can be simulated by a Turing machine. To restate the Church-Turing thesis: the brain is a physical process, physical processes are computable, and all computable processes can be computed by a Turing machine (or any digital computer).

If you wish to simulate such features of intelligence/consciousness as playing chess or doing symbolic integration, then a relatively coarse, psychological-level simulation will likely suffice. If you wish to have creativity, emotional responses, an aesthetic sense, or even self-awareness, then a very fine-grained, neurophysiological-level simulation will likely be required, and the end may only be realized when we have totally duplicated a living brain, either as a simulation or in some, perhaps organic, type of "hardware".

Not everyone agrees that simulation is the answer. Searle argues that a simulation of a process is not that process. A simulation of an airplane does not fly, a simulation of the digestive process does not digest. Searle believes that consciousness emerges as a result of natural processes, but that simulation and computation cannot themselves create consciousness.

AI is based on faith that there are significant features of intelligence which can "be floated on top of entirely different sorts of substrates than those of organic brains." This is a consequence of the Church-Turing thesis. The computing hardware doesn't matter; silicon, Tinkertoys (MIT students built a Tic-Tac-Toe playing computer out of Tinkertoys), or living neurons are all computationally equivalent. But Searle claims the "hardware" does make a difference, and to achieve intelligence or consciousness we will have to replicate some of the organic processes themselves. Some counter by saying that a simulation of information processing is information processing. A simulation of two plus two still comes out four. Intelligence comes about through information processing, the argument goes, so a simulation of information processing can yield intelligence. That is, the simulation captures causality. It is difficult to confirm or refute Searle's position on philosophical grounds. Ultimately, it and other positions will be decided empirically.

Another class of objections is raised by the British mathematician and physicist Roger Penrose. He bases these on his conviction that mathematicians can solve problems which, by a theorem proven by the mathematician Kurt Gödel, can have no guaranteed algorithmic solution. By "algorithm," Penrose means a process that guarantees a solution, although not necessarily an efficient one (as opposed to a "heuristic," a process learned from experience that may give the right answer, or one that is close enough, but is not guaranteed to). He therefore concludes that humans use non-algorithmic, or uncomputable, processes to solve these problems. Penrose argues at length and persuasively, but the argument may be flawed. Deep Blue uses a process which, although it does not guarantee a solution to the chess problem (a win), still wins an impressive number of games. Like Deep Blue, humans often seem to use heuristics to attempt solutions. Like the process underlying Deep Blue, these don't guarantee a solution, but they often produce one and--unlike a chess game--are well suited to adapting to changing conditions. Gödel's theorem speaks of algorithms which always guarantee a solution. The answer may be that humans succeed on these Gödel problems not through the use of uncomputable (non-Turing) processes but through the use of heuristics. This doesn't rule out the possibility that humans may also use uncomputable processes, but it seems to make it less likely. On the other hand, Penrose may be correct, in which case intelligence through computation will be unachievable.

Means to an End

Approaches to realizing the potential in the Church-Turing thesis fall into three categories: Symbolic ("Model the Mind"), Connectionist or Artificial Neural Systems ("Model the Brain"), and a relatively new body of practice grouped under the heading of Artificial Life ("Model Evolution").

Symbolic AI systems are designed and programmed "top down," rather than trained or evolved. They tend to be propositional, using a list of rules and facts to simulate a general psychological theory of some aspect of intelligence, or to simulate the application of knowledge in some specific area of expertise; there are systems that simulate "reasoning," systems that simulate "knowing," and systems which do both. A common approach used in symbolic AI is the production system. It generally has three parts: a list of rules of the form IF-THEN, called production rules or productions; a control mechanism used to decide when and how to apply a given rule; and a working memory, a "blackboard" where the results of rule activations or "firings" are posted. An IF-THEN rule representing facts might look like the following: IF an animal has pointed teeth, and IF it has claws, and IF it has forward eyes, THEN the animal is likely a carnivore. A rule is "fired" when its IF-clauses are satisfied, and the results of the firing, the "THENs," are posted on the blackboard. These results may be taken up by the IF-clauses of other rules, causing them to fire in turn. A typical production system will have thousands of rules. Many of the niche application examples cited earlier are based on production systems. Deep Blue incorporates production rules to evaluate the strategic worth of chess positions.
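
The three-part loop just described is compact enough to sketch in code. Below is a toy forward-chaining production system; the first rule is the carnivore rule from the paragraph above, and the second is an invented follow-on rule added purely for illustration.

    # Toy production system: rules, a control loop, and a working memory
    # ("blackboard") where the results of rule firings are posted.
    rules = [
        # (IF-clauses, THEN-conclusion)
        ({"has pointed teeth", "has claws", "has forward eyes"}, "is a carnivore"),
        ({"is a carnivore", "has tawny color", "has dark spots"}, "is a cheetah"),
    ]

    def forward_chain(facts):
        working_memory = set(facts)
        fired = True
        while fired:                       # control mechanism: cycle until quiescent
            fired = False
            for conditions, conclusion in rules:
                if conditions <= working_memory and conclusion not in working_memory:
                    working_memory.add(conclusion)   # post the THEN on the blackboard
                    fired = True
        return working_memory

    facts = {"has pointed teeth", "has claws", "has forward eyes",
             "has tawny color", "has dark spots"}
    print(forward_chain(facts))  # now also contains "is a carnivore" and "is a cheetah"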

Production systems and other methods used in symbolic AI have been much less successful in more general problem solving; among the reasons is a lack of commonsense knowledge which, for example, would lead a system doing medical diagnoses to prescribe smallpox treatment for a car with rust spots. The number of "facts" which make up the body of commonsense knowledge is immense; probably millions of rules to represent enough knowledge so that new concepts could be "explained" in terms of previous rules and so that the system could "bootstrap" itself. A system exhibiting intelligence would also likely require additional millions of rules, describing reasoning processes. Aside from the fact that we don't understand these processes, the effort required to write such a program and to get it to work reliably poses a practical problem we don't know how to solve. A possible way around this problem is to create systems which can learn. While symbolic systems which learn can be constructed, they tend not to scale well; the more rules they learn, the slower they run. It is tempting to believe that the doubling of computational power every 18 months will solve this problem, but experience to date suggests that is unlikely. Symbolic approaches may also have a fundamental flaw; critics argue that rules and other symbolic means are only rough approximations of sub-symbolic processes underlying intelligence, and that to obtain intelligence these processes must be included.

Problems with scaling-up symbolic systems and concern about what they might have left out gave rise to the connectionist or artificial neural system approach, based on studies of the brain's architecture. The most salient characteristic of the brain is the dense interconnection among the neurons. Perhaps, the "hardware" does matter to some degree, and if many simple processors representing neurons were densely interconnected, brain-like behavior might result without writing millions of lines of code. Intelligence might spontaneously emerge from the interactions of many simple processors.

To investigate the connectionist or artificial neural system hypothesis, mathematical models loosely approximating some of the features of animal nervous systems have been constructed; these are largely limited to between a few hundred and a few thousand "neurons," interconnected by links with variable weights or strengths. The neurons are generally modeled as a simple thresholding function: if the weighted sum of inputs to a neuron exceeds some set threshold value, the neuron fires and outputs a signal which goes to all those neurons to which it is connected. It is hard to see how such a simple process can give rise to complex behavior, but some remarkable performance has been obtained; one neuron can't do much, but networks of neurons can do a lot.
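
The thresholding model amounts to a weighted sum and a comparison. In the sketch below, the weights and threshold are arbitrary examples, chosen so that a single two-input unit behaves like a logical AND gate.

    # One model neuron: fire (output 1) when the weighted input sum
    # exceeds a fixed threshold; otherwise stay silent (output 0).
    def neuron(inputs, weights, threshold):
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        return 1 if weighted_sum > threshold else 0

    weights, threshold = [1.0, 1.0], 1.5
    for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(pattern, "->", neuron(pattern, weights, threshold))
    # Only (1, 1) drives the sum (2.0) past 1.5, so only it fires: a tiny AND gate.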

The key point about artificial neural systems is that they are trained, not programmed; they learn. Machines that learn are absolutely crucial to obtaining intelligent behavior. It is impossible to program in everything a machine must know to pass the Turing test or do much else. Deep Blue does not learn. If the size of the chess board were changed, another row or column added, or some of the chess pieces given additional moves, Deep Blue would be lost, but a human would learn to cope. Learning in a neural net takes place by changing the pattern of weights which determines its response. Information is likewise stored in patterns of weights distributed across the net. Artificial neural systems excel at pattern recognition, whether the pattern is visual or exists in more abstract data. They do less well, thus far, in solving problems requiring explicit rational or logical thought, where symbolic systems excel. The remarkable thing about artificial neural systems is that so much performance has been obtained out of (relative to the brain) ridiculously simple systems.
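
The text does not single out any particular weight-changing procedure, but the classical perceptron rule is one simple possibility that shows the flavor of learning as weight adjustment: nudge each weight in proportion to the error on each example. A minimal sketch, here training a threshold unit to compute logical OR:

    # Perceptron learning: adjust weights and bias whenever the unit's
    # output disagrees with the target. (One simple scheme among many.)
    def predict(x, w, b):
        return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

    def train(samples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, target in samples:
                error = target - predict(x, w, b)          # +1, 0, or -1
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error                            # nudge toward the target
        return w, b

    samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
    w, b = train(samples)
    print([predict(x, w, b) for x, _ in samples])          # -> [0, 1, 1, 1]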

Mathematicians describe complex systems as ones which are composed of very large numbers of interacting parts. The brain certainly qualifies. A characteristic of complex systems is that they have emergent properties; properties which occur suddenly when a certain level of complexity is reached, and whose emergence could not have been predicted from knowledge of the parts and the interactions. Simulating at most a few thousand very simple neurons is well below the level of complexity at which intelligence might kick in. A single neuron doesn't think and isn't conscious, yet the brain does and is.

To explore the potential of artificial neural systems, and to see if intelligence will emerge at some level of complexity, will require the capability to simulate very large numbers of neurons and their interconnections. But numbers alone are not sufficient; it is necessary to model the neuron itself in more detail--to move beyond the simple caricature of thresholding and capture much more of the complexity inherent in the functioning of the biological neuron, and to acknowledge that there are many different kinds of neurons.

Constructing a machine which might think using this type of connectionist approach now seems to be a hardware as well as a software problem; today's computers can't handle the required calculations in a reasonable time.

What is the size of the simulation problem? The human brain has about one hundred billion neurons; by some reckonings it processes information at a rate of about a million billion bits/sec (10^15) to ten billion billion bits/sec (10^19). It is much slower than a computer at the "component" level, but more than compensates with massive parallelism. The best simulations run at about 10 to 100 billion (10^10 to 10^11) bits/sec. The difference in performance is somewhere between a factor of ten thousand (10^4) and a factor of a billion (10^9). If computer performance continues to double every 18 months, this difference will be erased somewhere between the years 2020 and 2040, without making any far-fetched assumptions about technology. That certainly will not automatically result in machine-based human intelligence, but it does suggest that if we understand enough about brain functioning, the computer capacity will exist in one form or another to exploit that understanding. But what if we don't have that level of understanding?
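
The timetable is simple doubling arithmetic. A back-of-the-envelope check, assuming (as the text does) a gap of 10^4 to 10^9, a doubling every 18 months, and a starting point around the article's writing in 2001:

    # Years needed to close a performance gap that halves every 18 months.
    from math import log2

    def year_gap_closes(gap, start_year=2001, doubling_years=1.5):
        return start_year + doubling_years * log2(gap)

    print(round(year_gap_closes(1e4)))   # ~2021: about 13 doublings
    print(round(year_gap_closes(1e9)))   # ~2046: about 30 doublings

These bounds roughly bracket the 2020-2040 window given above; the exact endpoints depend on the assumed starting capacity and date.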

How, if we don't understand something, can we replicate it? The answer leads us to the third approach to AI, derived from the relatively new discipline of artificial life. Artificial life started with the mathematician John von Neumann's work on self-reproducing automata and focuses on the creation of life-like systems using the simulation and emulation of biological and ethological processes. Two of these are key to creating an artificial nervous system: directed evolution and self-organization.

Directed evolution is a way to artificially speed up the process of evolution, and to direct the process toward an explicit goal; in this case, intelligence. One way to use it is to evolve the software we need to simulate a nervous system. We put into a simulation code representing the functions of all the things we believe may be important to intelligence. We randomly create some variants of these, run the simulations, pick some winners, let them "breed" (i.e., exchange some code), create offspring, throw in some mutations, and repeat the process. While this sounds simpler than it is, it captures the essential ingredients of the process. It has been used to create solutions to some very hard problems, but never on this scale.
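
The vary-score-select-breed-mutate loop just described is a genetic algorithm, and it can be sketched in miniature. In the toy below the "genome" is a bit string and the fitness target (all ones) is a stand-in for the real, far harder goal; every constant is an arbitrary illustration.

    # A miniature directed-evolution loop: score, pick winners, breed
    # (exchange some code via crossover), mutate, repeat.
    import random

    GENES, POP, GENERATIONS, MUTATION = 32, 40, 60, 0.02

    def fitness(genome):
        return sum(genome)                 # toy goal: as many 1s as possible

    def breed(a, b):
        cut = random.randrange(1, GENES)   # exchange code at a random point
        child = a[:cut] + b[cut:]
        return [g ^ 1 if random.random() < MUTATION else g  # throw in mutations
                for g in child]

    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        winners = population[: POP // 2]   # pick some winners
        population = winners + [breed(*random.sample(winners, 2))
                                for _ in range(POP - len(winners))]

    print(max(fitness(g) for g in population))  # climbs toward 32 over the generations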

On the surface, this is what mathematicians would call an "intractable problem": you not only have to simulate a nervous system, you have to simulate a large family of candidate systems, and do it perhaps thousands of times. We have now far outstripped the projected capabilities of computer technology. A possible solution may lie in using directed evolution on hardware and evolving a nervous system directly.

Imitating a nervous system in hardware is an enormous challenge. It may be that silicon itself, the stuff of chips, is unsuitable and that proteins or some other bio-material might be preferable. Two points seem clear. First, whatever the material, the system is so complex, containing perhaps millions of neurons, that it will likely have to be constructed by harnessing the capacity of some materials or components to self-organize or self-assemble into higher-order systems using instructions implicit in the components. Having somehow to connect up these neurons individually, as is done in semiconductor fabrication, seems impossible, particularly when you realize that the architecture of the interconnections is itself shaped by learning; you can't really specify all those connections in advance. Second, the self-organization would be pushed in a particular direction favoring intelligence, using directed evolution.

Two brief examples suggest possibilities: first, a number of researchers have used self-organization to grow rat hippocampal neurons in patterns to form simple "circuits". Second, researchers at the University of Sussex and others have used directed evolution to evolve novel digital circuits. This is done by taking advantage of a type of integrated circuit called a field-programmable gate array in which connections among components on the chip are under software control. Very recently, other researchers, using the same technology, have started to evolve artificial neural systems in hardware. Whether these approaches can scale to the levels required is unknown.
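
A crude software caricature may make the field-programmable idea concrete: an FPGA cell is, roughly, a lookup table whose configuration bits are set by software, so "evolving a circuit" can be imitated by mutating those bits and keeping the improvements. Everything below (a single cell, a parity target, plain hill-climbing) is an illustration, not a model of the Sussex experiments.

    # "Evolve" the 16 configuration bits of a 4-input lookup table until it
    # implements 4-bit parity: mutate one bit, keep the change if the circuit
    # then matches the target on at least as many input cases as before.
    import random

    INPUTS = [(a, b, c, d) for a in (0, 1) for b in (0, 1)
              for c in (0, 1) for d in (0, 1)]
    target = [a ^ b ^ c ^ d for a, b, c, d in INPUTS]      # parity truth table

    def lut_output(config, bits):
        index = int("".join(map(str, bits)), 2)            # which config bit to read
        return config[index]

    def fitness(config):
        return sum(lut_output(config, x) == t for x, t in zip(INPUTS, target))

    config = [random.randint(0, 1) for _ in range(16)]
    while fitness(config) < len(INPUTS):
        trial = list(config)
        trial[random.randrange(16)] ^= 1                   # mutate one config bit
        if fitness(trial) >= fitness(config):
            config = trial
    print(config)                                          # now equals the parity table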

Can A Machine Think?

There are good reasons to believe a sufficiently complex machine could one day pass the unrestricted Turing test. Whether or not that constitutes sufficient or necessary proof of intelligence or consciousness will be the subject of continuing philosophical debate. Machines have been created (e.g., Deep Blue) which outperform humans in many niche areas. Some make extensive use of the particular strengths of computers, such as rapid search and large memory, while others try to simulate human problem-solving or some of the machinery of the brain. Some of Deep Blue's predecessors tried to reproduce the methods of human grandmasters, i.e., recognizing key configurations of pieces on the board. Deep Blue relies more on massive and rapid searches of possible sequences of moves.

While it is not possible to predict which of the three major approaches to artificial intelligence might be the basis for an intelligent machine (perhaps all of them will be incorporated to some degree), it seems a safe bet that a major component will resemble the functioning of a human brain at the level of individual neurons. If we can simulate the functioning of the brain at a deep level, the resulting network would literally be a tabula rasa, a blank mind. It would show neither intelligence nor consciousness unless it were subjected to experiences similar to those of a human brain. It must be able to investigate the environment around it, interact with that environment, and learn common sense and all the other things which contribute to intelligence, since you cannot directly program in intelligence.

In the nearer term, the pursuit of machine intelligence will continue to yield large benefits; we will be able to talk to our computers to dictate e-mail or documents, to command intelligent software agents to find information for us, and generally to interact naturally with a variety of increasingly complicated devices using spoken language. We can imagine computers that learn to do what you want them to do, and software and systems that can fix themselves or develop new capabilities.

There is no way to predict the impact of machines having true intelligence; the only way to find out is to try to construct them. We may fail, but as we try we will learn much that is valuable about ourselves, and the brain that makes us human.

Originally published March 22, 2001, iMP Magazine. © 2001 Clinton W. Kelly III. All rights reserved.

Mind·X Discussion About This Article:

Behold! Machines Who Think (Already)
posted on 03/20/2002 10:14 PM by mentifex@scn.org


The big question now is, "Can the AI genie be put back in the bottle?" Machines are already thinking with http://www.scn.org/~mentifex/jsaimind.html -- the Robot AI Mind in JavaScript -- but the AI has a very low IQ and the thinking is of a minimalist nature. But real, True AI is now loose on the 'Net, available also at http://mind.sourceforge.net/mind4th.html in Win32Forth and in ports to Visual Basic and JavaScript. Does anyone know of any other real AI projects floating around on the 'Net, with freely available source code to inspect and run?

Re: Behold! Machines Who Think (Already)
posted on 04/05/2003 1:07 AM by Robert


The machine has yet to declare, as Descartes did, "I think, therefore I am." We humans have not progressed beyond the point at which our programs can only deceive the limited in intelligence as to their ability to mimic life. The best current test of self-aware intelligence is the Turing Test.

Re: Behold! Machines Who Think (Already)
posted on 04/23/2003 1:28 AM by gdorring


I think that your point concerning the "ability to mimic life" is an important one, one that should be more enthusiastically explicated. As of right now, all that computers have been able to do is "simulate" intelligence; there has not been, and will never be, a computer that can actually "replicate" intelligence. The reason for this is a linguistic distinction between syntactics and semantics. Briefly, syntactics refers to the grammatical rules of language. Syntax is the governing system that orders our words and makes them into coherent, intelligible statements. Computers operate at a syntactic level; that is, there is a system of rules that dictates the arrangement of binary code. The ones and zeros in themselves do not represent anything, but when they are arranged in specific orders, i.e. when they give an output to a corresponding input, they become meaningful to the interpretive user. Semantics, on the other hand, refers to the meaning of symbols. Humans operate on a semantic level where symbols such as the word C-A-T actually refer to a physical cat in the real world. This "aboutness" of the symbol is referred to as intentionality. So if syntactics refers to the dyadic relationship between ones and zeros, then semantics refers to the triadic relationship between perceiver, symbol and object. A computer will never be able to understand that the ones and zeros that manifest themselves as the word C-A-T on the display screen actually refer to a real physical cat in the world. Computers will absolutely never break through the syntactic (dyadic) bulwark! Computers will never think in the true sense of the term! Strong AI is an impossibility.

Also, just a note about your claim that the Turing Test is the best test of self-aware intelligence: You should really read Alan Turing's "Computing Machinery and Intelligence." You will be surprised to find that the Turing Test is fallible and only shows that a computer can simulate intelligence. (This also renders your comments in the last post inconsistent, sorry.) You should also read John Searle's "Can Computers Think?" It is here that he explicates the fundamental problem with the Turing Test by use of his Gedankenexperiment, also known as the Chinese Room Experiment. This ties directly into the syntactic/semantic distinction. If you want me to go on about this, let me know.

PS Have you ever noticed that Descartes' notorious Cogito argument ("I think, therefore I exist") isn't even logically correct. In reference to Aristotelian logic, there is a missing premise to complete the modus ponens argument!! haha

Re: Behold! Machines Who Think (Already)
posted on 04/23/2003 2:59 AM by Robert


Syntactics, semantics: the rules of the language we speak and read. They differ from country to country, from region to region. Why do so many have difficulty with the most widespread language in the world, the language of technology, the English language? The English language has the capability to evolve to include the most complex of ideas and concepts using the simplest alphabet created to date, e.g. the word pneumonoultramicroscopicsilicovolcanoconiosis is a specific word in English that other, less sophisticated languages would be completely inadequate to deal with. Therefore: language is important to the expression of thought but does not preclude the thought necessary to sentience.

mind
posted on 03/20/2002 5:55 PM by heart@ta.ck


The mind is what the brain does; the brain is the hardware, so just port the mind to other hardware. Understand what the different functions of the brain are, and port them. I know it's not simple, but it's doable.

A-life is a dead end in the pursuit of AI.
Random evolution != intelligence

Re: mind
posted on 03/20/2002 6:15 PM by elenduil@uomail.com


Isn't our own intelligence the result of evolution? (Which is, in a sense, random.)

Re: mind
posted on 04/05/2003 12:54 AM by Robert


Evolution is random only in its search for all viable options! That which increases survival is tried in infinite variation, to find a better survival rate.

Re: mind
posted on 04/23/2003 2:29 AM by Robert


Simple replication of brain matter, in my opinion, will not result in a mind or intelligence.
Sensory inputs and embedded programming need to be present in the structure. Not instinct, as we believe occurs in lower lifeforms, but a structure of testing and analysis: a real-world, direct test of sensation and perception to coordinate what we do with what affects us. E.g., I cry; an entity responds and hugs or feeds or cleans until what bothered me or made me uncomfortable goes away.

Re: Can a Machine Think?
posted on 10/18/2002 12:55 PM by entell


I think it is quite useless to try to replicate the brain in silicon (or whatever) to make intelligent machines. I believe the brain is simply a tool for intelligence to present itself on, much like software running on PCs. The PC can do and be many things based on what software it runs. The software is what makes the PC what it is. The hardware is just the platform for the software to run itself on. If researchers are trying to replicate the brain to get to intelligence, they will be very sorry when they assemble the artificial brain. I can't believe that the brain would be the source of intelligence. If so, there is a very interesting question to be asked: I don't understand why no one asks about the difference between a dead person and a living one. Or even better, a normal living person and a person in a coma. The brain is alive and ticking, yet there is no consciousness. Therefore consciousness cannot possibly be a product of the brain. Does everyone agree? Any comments on this?

Re: Can a Machine Think?
posted on 10/19/2002 1:45 AM by ELDRAS

[Top]
[Mind·X]
[Reply to this post]

from eldras:

Of course a machine thinks.

Pne needs to be dextrous at definitions.

It took me 20 years, so I suppose most here have done a lot too.

The article suggests three ways to attempt a thinking machine.


I say (and have used)...all three are important.


John Searle's Chinese room is a complete botch-up. He's a friend of a friend; I've never jarred with him, and his students adore him, but his idea that it's impossible for a machine to think is utter tosh.

It revolves around the nothion of understanding and is drawn from his Jewish religious beliefs, i.e., like Einstein, one can't get away from the programming that there is a soul, if one's had it!



Well of course we're evolved machines.

that's one model of what we are anyhow.

machines do think already.


Let's define thinking for a bit if we attack this debate in earnest.


What is thought?


Questions & Answers.


Let's keep it simple; if we don't, then epistemological tautological anthropomorphic etymologies might conjugate too many oxymoronic nomenclatures. OK?


Thought is questions and answers.

Can machines have questions and answers?

Yes. Look on the internet.

Try any chat bot.

Use a calculator.


The issue of causation, i.e. how they got them, is another issue.

Machines think.

Anyone who asserts that machines don't think is guilty of disfocus.


Machines think, but not as much as men presently.



HOW DO MEN THINK?

That's pretty easy. I know I understand how the brain works.


men have questions & answers.


They have them as reflexes to evolved goals. These goals are retrospectively apparent.


But not more than that.

happy to argue the opposite.

I don't see AI as more than a card game, which I for one have solved.


If anybody else has architected for strong AI and stopped, like me, on seeing the dangers of a launch, I'd love a jar.



Ta Salutant!



ELDRAS

Re: Can a Machine Think?
posted on 04/05/2003 3:17 AM by Robert


You say "Pne needs to be dextrous at definitions"?
And it took you 20 years?
I have read your missive and am wonderous that you ,in your infinite wisdom and, limited vocabulary, have been able to declare "of course a machine thinks."
Which machines have you conversed with lately?
Have you spoken of philosophy with your car, or perhaps discussed cosmology or even metaphysics with your toaster?
What "nothion of understanding" can be derived from anyones heretidy?
What is, please forgive my ignorance, a "nothion" anyway?
Perhaps, if you understood the English language better, I would not be confused by your statements.
Your concepts and opinions would be better appreciated by all, with a better command of the language that is spoken and written by the majority of the populations of the world.

I think thyey don't
posted on 03/31/2003 7:55 AM by Reem****


I am going to make a report about this issue, but in my opinion, machines just follow orders; they don't think. They are just programmed to do what we want, not what they want, so we (humans) think but machines do not. I think that the small ant has the ability to feel and distinguish more than any robot. Emotions are part of intelligence, in my point of view, and robots have nothing to do with emotions. If we programmed a robot to say "Today is hot" at a given temperature, it would, but it can't really feel that; and if the temperature rose further but we had programmed it to say it is cold, it would say that.

Re: I think thyey don't
posted on 04/05/2003 3:29 AM by Robert


Of course you are completely correct, at this time.
Science, and the practitioners of science, believe that if we can create an artificial "Human" without the traditional preoccupations of sex and power, our society can greatly benefit from such a creation. Also, there is that in our ego which would feel a great satisfaction in doing what only God has been able to do before: create a new kind of life!

Re: I think thyey don't
posted on 04/23/2003 3:05 AM by Robert


Our current technology might be capable of sentient thought, but our lack of understanding of our own thought processes inhibits our ability to invest this capability in our machines.

Re: Can a Machine Think?
posted on 04/05/2003 1:15 AM by Robert


Nothing is useless! Things are either possible or not, or probable or not. The investigation into the possible or probable cannot be useless, as it either proves or disproves our understanding of the universe.

Re: Can a Machine Think?
posted on 04/23/2003 3:18 AM by Robert


Why not? The ability and desire to create is uppermost in most of humanity. Would it not be the most glorious thing to create a new life!

Re: Can a Machine Think?
posted on 10/19/2002 6:03 PM by Claire


No.


Claire

Re: Can a Machine Think?
posted on 10/19/2002 6:51 PM by eldras


Could you pracy what you just said?

Re: Can a Machine Think?
posted on 04/05/2003 2:25 AM by Robert


You make a simple statement of negation to the concept of a machine thinking. I am amazed that so few adult humans can manage it, in even the most rudimentary form!

Re: Can a Machine Think?
posted on 04/05/2003 12:47 AM by Robert


Redact: Can humans think? Mostly they don't; they react.

Re: Can a Machine Think?
posted on 02/13/2004 4:31 AM by upwing


I have heard it claimed that Turing did not believe in the Turing Test and that Pascal did not believe in Pascal's Wager. Does anyone reading this know if either one of these claims is true or false? (References with page numbers please.)

Re: Can a Machine Think?
posted on 02/14/2004 8:13 AM by Tomaz_(Thomas)_Kristan


Nobody knows. But it doesn't matter a bit!

Re: Can a Machine Think?
posted on 02/19/2004 8:08 AM by upwing


"Nobody knows."
This utterance is often used to refer to a large (not small) number of people having been consulted. (Indeed, some would say that either nobody knows or somebody knows.) But, anyway, thanks for your time.

"But it doesn't matter a bit!"
I should think that the views or arguments of a Turing or a Thomas here might constitute at least a bit (of information), perhaps even a "wit" (of wisdom).

-------------------------

I have heard it claimed that Turing did not believe in the Turing Test and that Pascal did not believe in Pascal's Wager. Does anyone reading this know if either one of these claims is true or false? (References with page numbers please.)

Re: Can a Machine Think?
posted on 02/19/2004 9:43 AM by Tomaz_(Thomas)_Kristan


It would matter if they had left any argument against it. Since neither of them did, the probability that they had one is small--smaller than the probability that it's just an urban legend, like the one about Darwin allegedly rejecting his work on his deathbed.