The Age of Intelligent Machines, Chapter Three: Mathematical Roots
by Raymond Kurzweil

From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.


In the world of formal mathematics, it is just as bad to be almost right as it is to be absolutely wrong. In a sense, that's just what mathematics is. But that's not good psychology.
Marvin Minsky, The Society of Mind
A mathematician is a machine for turning coffee into theorems.
Paul Erdős

The AI field was founded by mathematicians: John McCarthy, Alan Turing (1912-1954), Norbert Wiener (1894-1964), students of Alonzo Church, Claude Shannon, Marvin Minsky, and others. LISP, the primary language for academic research in artificial intelligence, was adapted from a mathematical notation designed by Stephen Kleene and Barkley Rosser, both students of Church.1

Mathematics has often been viewed as the ultimate formalization of our thinking process, at least of the rational side of it. As I noted in the last chapter (and as was noted in several of the contributed articles at the end of the last chapter), the relationship of logic and the analytic process underlying mathematics to cognition has been debated through the ages by philosophers, many of whom were also mathematicians. The actual deployment of mathematical techniques to emulate at least certain aspects of human thought was not feasible until the electronic computer became available after World War II. However, the foundations of computation theory, along with the set theory on which computation theory is based, were established long before the potential of the electron to revolutionize applied mathematics was realized.2

Mathematics has often been described as a branch of philosophy, the branch most concerned with logic.3 It has only been in this century that the fields of mathematics and philosophy have split into largely distinct disciplines with few major figures doing important work in both areas. Bertrand Russell, having been a pivotal figure in the establishment of both modern set theory and logical positivism, was perhaps the last.

Russell's Paradox

In the early part of this century Bertrand Russell, a young and as yet relatively unknown mathematician and philosopher, became increasingly occupied with a certain type of paradox and attempts to understand its implications. The resolution of the paradox had important implications for the subsequent development of the theory of computation. The following story illustrates Russell's class of paradoxes:4

A judge is sentencing a man for a crime that he finds reprehensible and for which he wishes to mete out the most severe sentence he can think of. So he tells the convicted man not only that he is sentenced to die but also that because his crime was so offensive, the sentence is to be carried out in a unique way. "The sentence is to be carried out quickly," the judge says. "It must be carried out no later than next Saturday. Furthermore, I want the sentence to be carried out in such a way that on the morning of your execution, you will not know for certain that you are going to be executed on that day. When we come for you, it will be a surprise."

When the judge finished describing his unusual sentence, the condemned man seemed surprisingly pleased and replied, "Well, that's great, judge, I am greatly relieved."

To this the judge said, "I don't understand, how can you be relieved? I have condemned you to be executed, I have asked that the sentence be carried out soon, but you will be unable to prepare yourself because on the morning that your sentence is to be carried out, you will not know for certain that you will die that day."

The convicted man said, "Well, your honor, in order for your sentence to be carried out, I could not be executed on Saturday."

"Why is that?" asked the judge.

"Because since the sentence must be carried out by Saturday, if we actually get to Saturday, I will know for certain that I am to be executed on that day, and thus it would not be a surprise."

"I suppose you are right," replied the judge. "You cannot be executed on Saturday. I still do not see why you are relieved."

"Well," said the prisoner, "if we have definitely ruled out Saturday, then I cannot be executed on Friday either."

"Why is that?" asked the judge.

"We have agreed that I definitely cannot be executed on Saturday. Therefore, Friday is the last day I can be executed. Thus, if Friday rolls around, I will definitely know that I am to be executed on that day, and therefore it would not be a surprise. So I cannot be executed on Friday."

"I see," said the judge.

"Thus, the last day I can be executed would be Thursday. But if Thursday rolls around, I would know I had to be executed on that day, and thus it would not be a surprise. So Thursday is out. By the same reasoning we can eliminate Wednesday, Tuesday, Monday, and today."

The judge scratched his head as the confident prisoner was led back to his prison cell.

There is an epilogue to the story. On Thursday the prisoner was taken to be executed. And he was very surprised. So the judge's orders were successfully carried out.

If we analyze the paradox contained in the above story, we see that the conditions that the judge has set up result in a conclusion that none of the days meets, because, as the prisoner so adroitly points out, each one of them in turn would not be a surprise. But the conclusion itself changes the situation, and now surprise is possible again. This brings us back to the original situation in which the prisoner could (in theory) demonstrate that each day in turn would be impossible, and so on. The judge applies Alexander's solution to this Gordian knot.

A simpler example and the one that Russell actually struggled with is the following question about sets: Consider set A, which is defined to contain all sets that are not members of themselves. Does set A contain itself? As we consider this famous problem, our first realization is that there are only two possible answers: yes and no. We can therefore exhaustively consider all of the possible answers (this is not the case for many problems in mathematics). Let us try "yes." If the answer is yes, then set A does contain itself. But if set A contains itself, then according to its defining condition set A would not belong to set A, and thus it does not belong to itself. Since the assumption that A contains itself led to a contradiction, it must have been wrong. If the answer is "no," then set A does not contain itself. But again according to the defining condition, if set A does not belong to itself, then it would belong to set A. As with the story about the prisoner, we have contradictory propositions that imply one another. The assumption of no yields yes, which yields no, and so on.

This type of paradox may seem amusing, but to Russell it threatened the very foundations of mathematics.5 The definition of set A appears to be a perfectly reasonable one, and the question of whether set A belongs to itself also appears perfectly reasonable. Yet it cannot be answered. Without a resolution to this paradox the basic theory of mathematics was in question.

To solve the problem, Russell invented a concept of a logical transformation as an operation that requires the equivalent of a quantum of time. Russell designed a set of logical operations in which a particular problem would be expressed as a "program" of operations to follow.6 We then turn the program on and let it run. Each logical inference or other transformation is implemented in turn, and when the process is completed, we get our answer. If we apply this theoretical machine to the problem of set A, the logical operations are "executed" in turn. At a certain point the answer will be yes, but the program keeps running, and at a later point the answer becomes no. The program runs in an infinite loop, constantly alternating between yes and no.
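
The oscillation is easy to see if we simulate such a logic machine. The following is a minimal sketch, not Russell's actual formalism; the language, the function name, and the cycle count are purely illustrative:

```python
# Russell's "unstable" set A: A contains x if and only if x does not
# contain x. Applying the defining condition to A itself flips the
# answer on every cycle of the "program."

def a_contains_a(previous_answer: bool) -> bool:
    # Defining condition applied to A itself: A is in A iff A is not in A.
    return not previous_answer

answer = False
for cycle in range(6):               # run a few cycles of the logic machine
    answer = a_contains_a(answer)
    print(f"cycle {cycle}: does A contain itself? {answer}")
# The answer alternates True, False, True, ... forever: an infinite loop.
```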

Russell then provides narrow and broad definitions of a set. In the narrow sense, a set has a definition that allows the construction of a program that can determine whether a given entity is a member of the set in a finite amount of time. According to this definition, set A (whose program produces an infinite loop) is not a true set, so the paradox is eliminated.7

In the broad sense, the program defining the logical rules of set membership need not come to a halt in a finite amount of time; it just needs to come to an answer in a finite amount of time, and it is allowed to change that answer as the program continues to run. According to this definition, set A is a proper set. The question of whether set A belongs to itself will be yes at one point in "time" and no at another point, and the program will alternate between the two. Thus, logical inferences are not implemented instantly, but rather one at a time, with an orderly change of state between each. In our case, the answer is never yes and no at the same time. In the broad definition, set A is a particular type of set that is "unstable," just as an electronic circuit can be unstable. Nonetheless, the contradiction is eliminated.

Russell does not explicitly refer to time in his theory of types (of sets). He provides procedures for allowable transformations on propositions that can be considered meaningful within a logical system. This contrasts with the transformations generated by the logical system itself, which are used to determine the truth or falsity of propositions. Thus, according to Russell, certain propositions are neither true nor false and cannot be addressed by the axioms. In our discussion above, a proposition concerning an "unstable set" would not be meaningful. The theory is interesting in that we have one set of transformations generated by the axioms of a logical system determining truth or falsity and another set of transformations generated by the metarules of Russell's theory of types determining meaningfulness. Russell's transformations are algorithmic in nature, and the issues raised are similar to certain issues in computation theory that received attention after Turing devised his Turing machine. Though Russell did not explicitly link the theory of types to computation theory (otherwise, we might be referring to a Russell machine rather than a Turing machine as a primary model of computation), Russell's theory of types clearly provided a foundation for Turing's later work.

An unstoppable proposition: Russell's Paradox

The lecture on logic delivered by the prisoner changed the situation. He had shown quite logically why it was not possible for him to be executed under the judge's instructions. The judge then realized that the prisoner's belief that he could not be executed made it possible once again to execute him. Before the prisoner could formulate another lecture on logic (that is, before the "program" simulating this situation could alternate again to "impossible to execute"), the judge quickly implemented his sentence.

Principia Mathematica

Russell expanded his theory to lay a new foundation for logic and the theory of sets in his first major work in mathematics, The Principles of Mathematics, published in 1903. He subsequently felt that all of mathematics should be recast in terms of his new theory of sets, since the concept of sets and their interactions is fundamental to all other mathematical disciplines. With the help of his friend and former tutor Alfred North Whitehead (1861-1947), he labored for nearly ten years to apply his new theory of sets and logic to all realms of mathematics. Russell reported that the effort nearly exhausted him, and even late in his life he felt that this had been the most intense work of his extremely prolific career.8 It was probably his most influential. As it was, Whitehead and Russell did not manage to complete their reexamination. They nonetheless published their work in three volumes in 1910, 1912, and 1913 under the title Principia Mathematica. The work was truly revolutionary and provided a new methodology for all mathematics that was to follow.

As significant as Principia was to mathematics in general, it was a pivotal development in terms of the foundations of the theory of computation that would be developed two decades later. Russell had created a theoretical model of a logic machine, which we now recognize as similar to a computer, particularly in its execution of logical operations in cycles.9 Indeed, Turing's subsequent theoretical model of a computer, the Turing Machine, has its roots directly in Russell's theoretical logic engine.10 Russell also created a concept of a logical programming language that is remarkably similar in many respects to one of the most recent programming languages, PROLOG, developed originally in France and now the basis for the Japanese Fifth Generation Computer project.11 Principia was also influential on efforts by Allen Newell, Herbert Simon, and J.C. Shaw to develop theorem-proving machines in the 1950s.12

Modern set theory, still based on Russell's Principia, provides a foundation for much of mathematics. It is interesting to note that modern set theory is in turn based on Russell's theoretical model of computation. Viewing things in this way, we could argue that mathematics is a branch of computation theory. What is particularly impressive about Russell's achievement is that there were no computers even contemplated at the time he developed his theory. Russell needed to invent a theoretical model of a computer and programming to address a flaw in the foundation of logic itself.

The Five Contributions of Turing

We must know, we shall know.
David Hilbert

Turing was perhaps the pivotal figure in the development of the computer and its underlying theory. Building on the work of Bertrand Russell and Charles Babbage, he created his own theoretical model of a computer and in the process established modern computation theory.13 He was also instrumental in the development of the first electronic computers, thus translating theory into reality. He developed specialized electronic computation engines to decode the German Enigma code, enabling the British to withstand the Nazi air force. He was also a major champion of the possibility of emulating human thought through computation.14 He wrote (with his friend David Champernowne) the first chess-playing program and devised the only widely accepted test of machine intelligence (discussed from a variety of perspectives in several of the contributed articles at the end of chapter 2).15

As a person, Turing was unconventional and extremely sensitive. He had a wide range of unusual interests ranging from the violin to morphogenesis (the differentiation of cells).16 There were public reports of his homosexuality, which greatly disturbed him, and he died at the age of 41, a suspected suicide.

The Enigma code

By 1940 Hitler had the mainland of Europe in his grasp, and England was preparing for an anticipated invasion. The British government organized its best mathematicians and electrical engineers, including Alan Turing, with the mission of cracking the German military code. It was recognized that with the German air force enjoying superiority in the skies, failure to accomplish this mission was likely to doom the nation. In order not to be distracted from their task, the group lived and worked in seclusion at Bletchley Park, in the tranquil countryside of Buckinghamshire.

The group was fortunate in having a working model of the German code machine Enigma, captured by the Polish Secret Service. Working with several hints gathered by British Intelligence, they were able to narrow the coding possibilities, but only slightly. Under Turing's leadership, their strategy was to build an electromagnetic computer that used telephone relays to do an exhaustive search of all possible codes that the Enigma machine could produce and to apply these codes to intercepted messages. The strategy was a challenging one because an (electromagnetic) computer had never been built before. They named the machine Robinson, after Heath Robinson, the popular British cartoonist who drew "Rube Goldberg" machines.17 The group's own Rube Goldberg succeeded brilliantly and provided the British with a transcription of nearly all significant Nazi messages.

The German military subsequently modified Enigma, adding two additional coding wheels, which greatly expanded the number of possible codes. To meet this new challenge, Turing and his fellow cryptanalysts set about building a substantially faster machine, called Colossus, with two thousand electronic vacuum tubes.18 Colossus and nine similar machines running in parallel did their job again and provided uninterrupted decoding of vital military intelligence to the Allied war effort.

Colossus was regarded by the Turing team as the world's first electronic digital computer, although unlike Harvard's relay-based Mark I, it was not programmable. Of course, it did not need to be: it had only one job to do.

Remarkably, the Germans relied on Enigma throughout the war. Refinements were added, but the world's first computers built by Alan Turing and his associates were able to keep up with the increasing complexity. Use of this vital information required supreme acts of discipline on the part of the British government. Cities that were to be bombed by Nazi aircraft were not forewarned, lest preparations arouse German suspicions that their code had been cracked. The information provided by the Robinson and Colossus machines was used only with the greatest discretion, but the cracking of Enigma was enough to enable the Royal Air Force to win the Battle of Britain.

Hilbert's twenty-third problem and the Turing machine

While many in England and elsewhere remain grateful to Turing for his contributions to the war effort, his greatest legacy is considered to be the establishment of the modern theory of computation. Yet his original goal was not the development of such a theory but rather to address one of the problems set down by his predecessor David Hilbert (1862-1943).

The works of Hilbert, a German mathematician, are still widely regarded as highly influential on the research goals of today's mathematicians. He is credited with consolidating the accomplishments of nineteenth-century mathematics in such works as The Foundations of Geometry, published in 1899.19 Perhaps of even greater significance, he set the agenda for twentieth-century mathematics as well with a list of the twenty-three most pressing unsolved problems, which he presented at the 1900 International Congress of Mathematicians in Paris. In his address he predicted that these problems would occupy the attention of the next century of mathematicians. Hilbert appears to have been correct. The problems have been solved slowly, and each solution has been regarded as a major event. Several that remain unsolved today are regarded by many mathematicians as the most important unsolved problems in mathematics.

Hilbert's twenty-third problem is whether or not an algorithm exists that can determine the truth or falsity of any logical proposition in a system of logic that is powerful enough to represent the natural numbers (numbers like 0, 1, 2, . . .). The statement of this problem was perhaps the first time that the concept of an algorithm was formally introduced into mathematics.

The question remained unanswered until 1937. In that year Alan Turing presented a paper entitled "On Computable Numbers, with an Application to the Entscheidungsproblem" (the Entscheidungsproblem is German for "decision problem").20 The paper presented his concept of a Turing machine, a theoretical model of a computer, which continues to form the basis of modern computational theory.

A Turing machine consists of two primary (theoretical) units: a "tape drive" and a "computation unit." The tape drive has a tape of infinite length on which there can be written (and subsequently read) any series of two symbols: 0 (zero) and 1 (one). The computation unit contains a program that consists of a sequence of commands made up from the list of operations below. Each "command" consists of two specified operations, one to be followed if the last symbol read by the machine was a 0 and one if it had just read a 1. Below are the Turing machine operations (a short simulator sketch follows the list):

  • Read tape
  • Move tape left
  • Move tape right
  • Write 0 on the tape
  • Write 1 on the tape
  • Jump to another command
  • Halt
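
To make the model concrete, here is a minimal simulator sketch in Python. It uses the conventional tabular formulation, in which each program entry maps a (state, symbol) pair to a write-move-next-state triple; this is equivalent to the command list above. The three-state example program is hypothetical:

```python
def run(program, max_steps=10_000):
    """Simulate a Turing machine. `program` maps (state, symbol) to
    (symbol_to_write, head_move, next_state); the state "H" halts."""
    tape = {}                               # sparse tape; blank cells read as 0
    head, state = 0, 0
    for _ in range(max_steps):
        symbol = tape.get(head, 0)          # read tape
        write, move, state = program[(state, symbol)]
        tape[head] = write                  # write 0 or write 1
        head += {"L": -1, "R": 1}[move]     # move tape left or right
        if state == "H":                    # halt
            return sum(tape.values())       # number of 1s left on the tape
    return None                             # step budget exhausted: may never halt

# Hypothetical 3-state program: write three 1s, moving right each time, then halt.
program = {
    (0, 0): (1, "R", 1),
    (1, 0): (1, "R", 2),
    (2, 0): (1, "R", "H"),
}
print(run(program))   # prints 3
```

The infinite tape is the only idealization; the sparse dictionary stands in for it by treating every unvisited cell as 0.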

The Turing machine has persisted as our primary theoretical model of computation because of its combination of simplicity and power.21 Its simplicity derives from its very short list of capabilities, listed above. As for its power, Turing was able to show that this extremely simple machine can compute anything that any machine can compute, no matter how complex. If a problem cannot be solved by a Turing machine, then it cannot be solved by any machine (and according to the Church-Turing thesis, not by a human being either).22

An unexpected discovery that Turing reports in his paper is the concept of unsolvable problems, that is, problems that are well defined with unique answers that can be shown to exist, but that we can also show can never be computed by a Turing machine. The fact that there are problems that cannot be solved by this particular theoretical machine may not seem particularly startling until one considers the other conclusion of Turing's paper, namely, that the Turing machine can model any machine. A machine is regarded as any process that follows fixed laws. According to Turing, if we regard the human brain as subject to natural law, then Turing's unsolvable problems cannot be solved by either machine or human thought, which leaves us with the perplexing situation of being able to define a problem, to prove that a unique answer exists, and yet know that the answer can never be known.23
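
Turing's argument for the existence of such problems can be sketched in a few lines of code. Suppose, hypothetically, that a function halts(f) correctly decides whether the call f(f) would halt; the construction below (all names illustrative) builds a program that does the opposite of whatever the decider predicts about it, so any candidate decider must be wrong somewhere:

```python
def halts(f) -> bool:
    """Stand-in for a claimed halting decider: would f(f) halt?
    Any fixed strategy suffices for the demonstration."""
    return True

def paradox(f):
    # Do the opposite of whatever the decider predicts about f(f).
    if halts(f):
        while True:            # predicted to halt, so loop forever
            pass
    return                     # predicted to loop, so halt immediately

# If halts(paradox) returns True, then paradox(paradox) loops forever;
# if it returns False, paradox(paradox) halts at once. Either way the
# decider misjudges paradox, so no correct halting decider can exist.
print(halts(paradox))          # True here, yet paradox(paradox) never halts
```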

The busy beaver

One of the most interesting of the unsolvable problems, the busy beaver problem, was discovered by Tibor Rado.24 It may be stated as follows. Each Turing machine has a certain number of states that its internal program can be in; this corresponds to the number of steps in its internal program. There are a number of different 4-state Turing machines that are possible, a certain number of 5-state machines possible, and so on. Given a positive integer n, we construct all the Turing machines that have n states. The number of such machines will always be finite. Next, we eliminate those n-state Turing machines that get into an infinite loop (that is, never halt). Finally, we select the machine (one that halts) that writes the largest number of 1s on its tape. The number of 1s that this Turing machine writes is called the busy beaver of n. Rado showed that there is no algorithm, that is, no Turing machine, that can compute this function for all n. The crux of the problem is sorting out those n-state Turing machines that get into infinite loops. If we program a Turing machine to generate and simulate all possible n-state Turing machines, this simulator itself goes into an infinite loop when it attempts to simulate one of the n-state Turing machines that gets into an infinite loop.

The busy beaver function can be computed for some values of n, and interestingly, it is also an unsolvable problem to separate those values of n for which we can determine the busy beaver of n from those for which we cannot. Aside from its interest as an example of an unsolvable problem, the busy beaver function is also interesting in that it can be considered to be itself an intelligent function. More precisely stated, it is a function that requires increasing intelligence to compute for increasing arguments. As we increase n, the complexity of the processes needed to compute the busy beaver of n increases.
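
For very small n we can at least compute lower bounds by brute force: enumerate every n-state machine, run each within a step budget, and keep the best halting score. The step budget is exactly where unsolvability bites, since a machine that exceeds it might still halt later; for n = 1 and n = 2 the budget below happens to suffice, and the results match the known values. A sketch, reusing the simulator idea from the previous section:

```python
from itertools import product

def ones_written(program, max_steps):
    """Return the number of 1s if the machine halts within max_steps,
    else None. Deciding which machines never halt is the unsolvable part."""
    tape, head, state = {}, 0, 0
    for _ in range(max_steps):
        write, move, state = program[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == "H":
            return sum(tape.values())
    return None                              # gave up: may loop forever

def busy_beaver_bound(n, max_steps=200):
    """Brute-force every n-state, 2-symbol machine; the best halting
    score is a lower bound on the busy beaver of n."""
    entries = [(w, m, s) for w in (0, 1) for m in (-1, 1)
               for s in list(range(n)) + ["H"]]        # all possible transitions
    keys = [(q, sym) for q in range(n) for sym in (0, 1)]
    best = 0
    for choice in product(entries, repeat=len(keys)):  # every n-state machine
        score = ones_written(dict(zip(keys, choice)), max_steps)
        if score is not None:
            best = max(best, score)
    return best

print(busy_beaver_bound(1))   # 1
print(busy_beaver_bound(2))   # 4 (takes a few seconds: 12^4 machines)
```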

With n = 6, we can deal with addition, and the busy beaver of 6 equals 35. In other words, addition is the most complex operation that a Turing machine with only 6 steps in its program is capable of performing. A 6-state Turing machine is not capable, for example, of multiplication. At 7 the busy beaver learns to multiply, and the busy beaver of 7 equals 22,961. At 8 it can exponentiate, and the number of 1s that our eighth busy beaver writes on its tape is approximately 10^43. By the time we get to 10, we are dealing with a process more complex than exponentiation, and to represent the busy beaver of 10 we need an exotic notation in which we have a stack of exponents whose height is determined by another stack of exponents, whose height is determined by another stack of exponents, and so on. For the twelfth busy beaver we need an even more exotic notation. It is likely that human intelligence (in terms of the complexity of mathematical operations that can be understood) is surpassed well before the busy beaver gets to 100.

The Busy Beaver: an intelligent function?

Turing showed that there are as many unsolvable problems as solvable ones, the number of each being the lowest order of infinity, the so-called countable infinity (that is, the number of integers). Turing also showed that the problem of determining the truth or falsity of any logical proposition in an arbitrary system of logic powerful enough to represent the natural numbers was an unsolvable problem. The answer, therefore, to Hilbert's twenty-third problem posed 37 years earlier is no; no algorithm exists that can determine the truth or falsity of any logical proposition in a system of logic that is powerful enough to represent the natural numbers.

The second and third answers to Hilbert's question

Around the same time Alonzo Church, an American mathematician and philosopher, published Church's theorem, which examined Hilbert's question in the context of arithmetic. Church independently discovered the same answer as Turing.25

Also working independently, a young Austrian mathematician, Kurt Gödel (1906-1978), sought to reexamine an issue that was not entirely settled by Whitehead and Russell's Principia Mathematica.26 Whitehead and Russell had sought to determine axioms that could serve as the basis for all of mathematics, but they were unable to prove conclusively that an axiomatic system that can generate the natural numbers (theirs or any other) would not give rise to contradictions. It was assumed that such a proof would be found sooner or later, but Gödel stunned the mathematical world by proving that within such a system there inevitably exist propositions that can be neither proved nor disproved. Some have interpreted Gödel's theorem to imply that such uncertain propositions are simply indeterminate, neither true nor false. This misses the depth of Gödel's insight, however. Such propositions, according to Gödel, are not indeterminate; they are definitely either true or false. It is just that we can never determine which. Another implication is that in such axiomatic systems it is not certain that the axioms will not result in contradictions. Gödel's incompleteness theorem has been called the most important in all mathematics, and its implications are still being debated.27 One of the implications is that the answer to Hilbert's twenty-third problem is again no.

Taken together, the work of Turing, Church, and Gödel, all published in the 1930s, represented the first formal proofs that there are definite limits to what logic, mathematics, and computation can do. These discoveries strongly contradict Wittgenstein's statement in the Tractatus that "if a question can be framed, it can be answered" (6.5).

Hope and order versus distress and perplexity

As a final comment on Turing's, Church's, and Gödel's perplexing insights into the nature of logic, it is interesting to note the stark contrast between the mood of intellectual and cultural life in Europe and the United States at the turn of the century and that of several decades later.28 Music had shifted from the romantic style of Brahms (1833-1897) and the early Mahler (1860-1911) to the atonality of Schoenberg (1874-1951). Art and poetry had made the same switch from romantic styles to the cubism and expressionism of Picasso (1881-1973) and the minimalism of Pound (1885-1972), Eliot (1888-1965), and Williams (1883-1963). It is not unusual for changes in attitude and world view to be reflected across the arts, but it is interesting to note that the shift was reflected in science and mathematics as well. In physics, mechanics had gone from a fully refined and consistent Newtonian model to a paradoxical quantum model. The Heisenberg uncertainty principle, the most puzzling aspect of quantum mechanics and one of its essential features, concludes that there are profound limits to what human beings can know. In addition, the principle of duality, which had existed previously only in metaphysical doctrine, was now firmly established in the apparently contradictory wave-particle nature of light. Perhaps most disturbing, mathematics itself had gone from its turn-of-the-century emphasis on comprehensive formalisms covering all of mathematics to the conclusion in the mid-1930s that logic had inherent and irremovable contradictions and that problems existed that could never be solved.

Turing's test

Having established a theory of computation and having played a major role in the implementation of that theory, Turing turned his interest to speculation on the ultimate power of this new technology. He was an enthusiast for the potential of machine intelligence and believed that it was feasible, although he appears to have had a reasonably realistic sense of how long such a development would take.

In a paper entitled "Computing Machinery and Intelligence," published in the journal Mind in 1950, Turing describes a means for determining whether or not a machine is intelligent: the Turing test. In the test, a human interrogator converses through terminals with both a computer and another human, without knowing which is which; if the interrogator cannot reliably tell them apart, the computer passes. It should be noted that a computer "passing" the Turing test is an indication that it is intelligent, but the converse does not necessarily hold: failure to pass the test does not necessarily indicate a lack of intelligence. Some observers ascribe a high level of intelligence to certain species of animals such as dolphins and whales, but these animals are obviously in no position to pass the Turing test (they have no fingers, for one thing).

To date no computer has come close to passing the full test, which requires the computer to imitate human performance across the whole range of conversation. Narrower versions of the test have been proposed, however. For example, a computer chess program was recently able to "pass" a narrow version of the Turing test in that observers (again, observing through terminals) were unable to distinguish its play from that of a skilled human chess player. Another variation, one involving the ability of a computer to compose stanzas of poetry, is provided in "A (Kind of) Turing Test" in chapter 9. Computers are now beginning to imitate human performance within certain well-defined domains. As Dan Dennett said in his article at the end of chapter 2, such narrow formulations of the Turing test fall far short of the original. I discuss the prospect of a computer passing the original Turing test in chapter 10.

Turing expected that a computer would pass his test by the end of the century and remarked that by that time "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." Turing's prediction contrasted with other statements around the same time that were much more optimistic in terms of time frame. (In 1965 Herbert Simon predicted that by 1985 "machines will be capable of doing any work that a man can do."29) Turing was as optimistic as anyone with regard to the power of cybernetic technology.30 Yet he appears not to have underestimated (at least not as much as some other observers) the difficulty of the problems that remained to be solved.

The Church-Turing thesis

In addition to finding some profound limits to the powers of computation, Church and Turing also advanced, independently, an assertion that has become known as the Church-Turing thesis: if a problem that can be presented to a Turing machine is not solvable by one, then it is also not solvable by human thought. Others have restated this thesis to propose an essential equivalence between what a human can think or know and what is computable. The Church-Turing thesis can be viewed as a restatement in somewhat more precise terms of one of Wittgenstein's primary theses in the Tractatus.

I should point out that although the existence of Turing's unsolvable problems is a mathematical certainty, the Church-Turing thesis is not a mathematical proposition at all. It is a conjecture that, in various disguises, is at the heart of some of our most profound debates in the philosophy of mind.31

The Church-Turing thesis has both a negative and a positive side. The negative side is that problems that cannot be solved through any theoretical means of computation also cannot be solved by human thought. Accepting this thesis means that there are questions for which answers can be shown to exist but can never be found (and to date no human has ever solved an unsolvable problem).

The positive side is that if humans can solve a problem or engage in some intelligent activity, then machines can ultimately be constructed to perform in the same way. This is a central thesis of the AI movement. Machines can be made to perform intelligent functions; intelligence is not the exclusive province of human thought. We can thus arrive at another possible definition of artificial intelligence: AI represents attempts to provide practical demonstrations of the Church-Turing thesis.

In its strongest formulation, the Church-Turing thesis addresses issues of determinism and free will. Free will, which we can consider to be purposeful activity that is neither determined nor random, would appear to contradict the Church-Turing thesis. Nonetheless, the truth of the thesis is ultimately a matter of personal belief, and examples of intelligent behavior by machines are likely to influence one's belief on at least the positive side of the question.

The skepticism of Ada Lovelace (companion and assistant to Charles Babbage, the nineteenth-century inventor of the Analytical Engine, a "mechanical" computer) regarding the possibility of intelligent machines was no doubt related to the limitations of the whirling gears and levers of the machines that Babbage proposed. Today it is possible to imagine building machines whose hardware rivals the complexity and capacity of the human brain. As our algorithms grow more sophisticated and machines at least appear to be more intelligent and more purposeful, discussions of the Church-Turing thesis will become more practical than the highly theoretical debate of Church's and Turing's time.
