The Paradigms and Paradoxes of Intelligence, Part 2: The Church-Turing Thesis
by Raymond Kurzweil

An exploration of the Church-Turing Thesis, originally written for "The Futurecast," a monthly column in the Library Journal.


Originally published August 1992. Published on KurzweilAI.net August 6, 2001.

In examining some of the early roots of computers and computation ("The Paradigms and Paradoxes of Intelligence. Part 1: Russell's Paradox," LJ June 15, p. 50-51), I noted that the theoretical foundations of computation theory were developed by Bertrand Russell to address a flaw in the concept of logic itself and not in an attempt to build a practical machine. But it was Hitler's anticipated invasion of England that brought the theory to life.

With the mainland of Europe in Hitler's grasp, the British government in 1940 organized its best mathematicians and electrical engineers under the leadership of Alan Turing, with the mission of cracking the German military code. It was recognized that, with the German air force enjoying superiority in the skies, failure to accomplish this mission was likely to doom the nation. In order not to be distracted from their task, the group lived at Bletchley Park, in the tranquil pastures of Buckinghamshire.

The result of the group's exhaustive efforts was a machine called Robinson, named after the British cartoonist W. Heath Robinson, who, like Rube Goldberg, drew elaborately impractical machines. Robinson was the world's first operational computer. Robinson and a later version called Colossus succeeded brilliantly and provided the British with a transcription of nearly all significant Nazi messages.

Remarkably, the Germans relied on Enigma (their enciphering machine) throughout the war. Refinements were added, but the world's first computers built by Turing and his associates were able to keep up with the increasing complexity. Use of this vital information required supreme acts of discipline on the part of the British government. Cities that were to be bombed by Nazi aircraft were not forewarned, lest preparations arouse German suspicions that their code had been cracked. The information provided by the Robinson and Colossus machines was used only with the greatest discretion, but the cracking of Enigma was enough to enable the Royal Air Force to win the Battle of Britain.

The father of invention

If necessity is the mother of invention, then war is surely its father. In the burning cities of World War II, the computer age was born. In at least four distinct ways, Alan Turing was a pivotal figure in a revolution that is still sweeping over us. In addition to having designed and built the world's first operational computer, he created the foundations of computational theory. Although it rested heavily on Russell's earlier work, Turing's theory carried those ideas out of pure mathematics and established an entirely new discipline. The Turing Machine, Turing's expression of the ideal computer, continues today as our primary theoretical model of a computer. As mentioned in the June 15 Futurecast, Turing used the Turing Machine to demonstrate the existence of unsolvable problems: mathematical propositions that provably have unique answers, answers that provably can never be found or known.
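To make the Turing Machine concrete: it is nothing more than a read-write head stepping along an unbounded tape under the control of a finite table of rules. The short Python sketch below is a minimal, purely illustrative simulator of such a machine; the tape encoding and the sample rule table are invented here for exposition, not drawn from Turing's papers.

BLANK = "_"

def run_turing_machine(table, tape, state="start", max_steps=10000):
    # The tape is sparse: a map from head position to symbol.
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, BLANK)
        if (state, symbol) not in table:   # no applicable rule: halt
            break
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += move                        # move is -1 (left) or +1 (right)
    else:
        raise RuntimeError("machine did not halt within max_steps")
    return "".join(cells[i] for i in sorted(cells)).strip(BLANK)

# Sample rule table: invert every bit of a binary string, then halt
# when the blank at the end of the input is reached.
INVERT = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_turing_machine(INVERT, "10110"))   # prints 01001

Everything a modern computer does can, in principle, be reduced to a rule table of this kind, which is why so simple a device remains the standard theoretical model.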

Turing also began the first serious work on artificial intelligence (AI). The notion of creating a new form of intelligence on Earth emerged, with an intense passion, simultaneously with the electronic hardware on which it was to be based. The similarity of computer logic to at least some aspects of our thinking process was not lost on Turing, or for that matter on any of the designers of the early machines. Turing's classic paper "Computing Machinery and Intelligence" (Mind, 1950) lays out an agenda that in fact occupied the next quarter century of AI research: game playing, natural language understanding and translation, theorem proving, and, of course, the cracking of codes.

In that same paper, Turing describes his now-famous Turing Test for determining whether a machine is intelligent. The test involves the ability of a computer to imitate a human's ability to engage in a typed dialog such that human "judges" are unable to distinguish its performance from that of real humans. The dialog takes place over terminal lines so that the judges cannot base their decision on visual observation of their dialog partners. The Turing Test continues today as our primary test for human-like intelligence in a machine. To date, computers have "passed" only narrow versions of the Turing Test in which the domain of dialog is restricted to a single topic (such as wine or psychological problems). Passing the unadulterated Turing Test, in which there are no restrictions on the scope of the dialog, has not yet been achieved by any machine. We will begin to see reports that a computer has passed Turing's Test early in the next century, but these reports will be premature until at least the year 2020.
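In outline, the test is a simple blind protocol, and a sketch may make its logic plain. The little Python program below is purely illustrative (no such standard harness exists, and the names run_session, ask, and the canned respondents are hypothetical): it hides a human and a machine behind anonymous labels and produces the transcript a judge would see.

import random

def run_session(ask, human_reply, machine_reply, rounds=3):
    # Hide the two respondents behind anonymous labels A and B.
    hidden = {"human": human_reply, "machine": machine_reply}
    labels = dict(zip(random.sample(["A", "B"], 2), hidden.items()))
    transcript = []
    for _ in range(rounds):
        question = ask(transcript)
        # Answer in label order so the ordering itself leaks nothing.
        for label, (_, reply) in sorted(labels.items()):
            transcript.append((label, question, reply(question)))
    return transcript, labels

# Toy demo: canned respondents and a judge who merely guesses.
ask = lambda transcript: "What did you have for breakfast?"
human = lambda q: "Toast, and not nearly enough coffee."
machine = lambda q: "I do not eat breakfast."

transcript, labels = run_session(ask, human, machine)
guess = random.choice(["A", "B"])   # a real judge would study the transcript
print("machine caught" if labels[guess][0] == "machine" else "machine escaped")

A machine is said to pass when, over many such sessions, the judges identify it no more reliably than chance.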

Perhaps Turing's most intriguing contribution was in the area of philosophy. His interest in philosophy follows a tradition in which mathematicians were philosophers and vice versa. Up until the 20th century, mathematics was considered a branch of philosophy, the branch most concerned with logic. It has only been in this century that the fields of mathematics and philosophy have split into largely distinct disciplines with few major figures doing important work in both areas. Bertrand Russell and Alan Turing were among the last.

Turing goes to Church

Thus by 1950, Turing had helped defeat the Nazis by inventing the world's first operational computer and had explored both the startling potential and the profound limitations of this invention. The potential had to do with Turing's belief that the computer would ultimately be capable of emulating and even surpassing human thought. The limitations were expressed in Turing's concept of unsolvable problems, which demonstrated profound limits to the powers of both computation and mathematics. Turing and the mathematician and philosopher Alonzo Church independently advanced an assertion that has become known as the Church-Turing thesis: if a problem that can be presented to a Turing Machine is not solvable by one, then it is also not solvable by human thought. Others have restated this thesis to propose an essential equivalence between what a human can think or know and what is computable. The Church-Turing thesis can be viewed as a restatement, in somewhat more precise terms, of one of the primary theses of the Tractatus Logico-Philosophicus, a seminal work written in 1921 by Ludwig Wittgenstein.

Although the existence of Turing's unsolvable problems is a mathematical certainty, the Church-Turing thesis is not a mathematical proposition at all. It is a conjecture that, in various disguises, is at the heart of some of our most profound debates in the philosophy of mind.

The Church-Turing thesis has both a negative and positive side. The negative side is that problems that cannot be solved through any theoretical means of computation also cannot be solved by human thought. Accepting this thesis means that there are questions for which answers can be shown to exist but which can never be found (and to date, no human has ever solved one of Turing's unsolvable problems, or at least has never proven that he or she has).
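The canonical example is the halting problem: no program can decide, for every possible program and input, whether that program eventually stops. The Python sketch below is an informal rendering of Turing's diagonal argument; halts() is a hypothetical oracle, and the whole point is that no correct body for it can ever be written.

# Suppose, for the sake of contradiction, that a correct
# halts(program, argument) existed: True if program(argument)
# eventually halts, False if it runs forever.

def halts(program, argument):
    raise NotImplementedError("the argument below shows none can exist")

def contrary(program):
    # Do the opposite of whatever halts() predicts the program does
    # when it is fed its own text as input.
    if halts(program, program):
        while True:        # predicted to halt, so loop forever
            pass
    return "done"          # predicted to loop, so halt at once

# Now consider contrary(contrary).
#   If halts(contrary, contrary) returns True, contrary(contrary) loops forever.
#   If it returns False, contrary(contrary) halts immediately.
# Either way the supposed oracle is wrong, so it cannot exist.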

The positive side is that if humans can solve a problem or engage in some intelligent activity, then machines can ultimately be constructed to perform in the same way. This is a central thesis of the AI movement. Machines can be made to perform intelligent functions; intelligence is not the exclusive province of human thought. Indeed, it has been suggested that the term "artificial intelligence" be defined as "attempts to provide practical demonstrations of the Church-Turing thesis."

The illusion of free will

In its strongest formulation, the Church-Turing thesis addresses issues of determinism and free will. Free will, which we can consider to be purposeful activity that is neither determined nor random, would appear to contradict the Church-Turing thesis. Free will has long been a philosophical mystery, a dividing wall between opposing camps of philosophers. One side, epitomized in recent decades by the logical positivism school that grew out of Wittgenstein's Tractatus, points out that the human mind is, after all, made up of matter and as such is subject to natural law, the same natural law to which machines are subject. Natural law, in the Newtonian sense, is fully determined. In the 20th-century world of quantum mechanics, natural law may also take on a random element. Yet in neither formulation is there room for purposeful activity that is neither determined nor random. Thus, according to this school of thought, free will is just an illusion. Despite our impression to the contrary, our decisions and our actions are to some extent automatically determined by our internal states and experiences and to some extent by the unpredictable quantum interactions of our constituent parts.

The reaction of many philosophers, particularly the existentialists, to the Church-Turing thesis and the entire logical positivist school was understandably hostile. Although the views and theories of the proponents of any major philosophical movement are diverse and complex, it is reasonable to view existentialism as a reaction, on both an intellectual and a cultural level, to the major drift of Western thought toward ever greater reliance on rational and analytic views. At its core, existentialism defines human reality as almost the reverse of the logical-positivist view. It considers the highly rational analyses of language by the logical positivist school, rooted as they are in the mathematics of Russell, Turing, and Church, to be either meaningless or trivial. It regards the spiritual and emotive life, which is meaningless in logical-positivist terms, as the seat of true meaning.

In existential terms, we are conscious, aware of our existence, experiencing our own emotions and thoughts, and freely making moral choices. How this can be explained in rational terms is a mystery, but it is the mystery that is the essence of human existence. Why I experience the experiences and feelings of the person who is now writing this month's Futurecast and not the experiences and feelings of some other person is impossible to answer. The logical positivists would say that it is impossible to even frame the question. But the existentialists would point out that the fact that the logical positivists are unable to even contemplate the mysteries of consciousness and free will just demonstrates the emptiness of their philosophy.

The world's first software engineer

The first person to write seriously about programming computers to emulate human thought was Ada Lovelace, the companion and assistant to Charles Babbage, the 19th-century inventor of the "Analytical Engine," a "mechanical" computer. Babbage's computer never worked, but its design was a remarkable foreshadowing of the design of modern computers over a century later. Although she never had an opportunity to run (and debug) her programs, Lovelace wrote programs for the Analytical Engine and is regarded as the world's first software engineer. She also wrote essays on the possibility of computers playing chess and composing music. She finally concluded that though the computations of the Analytical Engine could not properly be regarded as "thinking," they could nonetheless perform activities that would otherwise require the extensive application of human thought.

Her skepticism that a computer could truly "think" was no doubt related to the limitations of the machine of whirling gears and levers that Babbage had proposed. Today it is possible to imagine building machines whose hardware rivals the complexity and capacity of the human brain. As our algorithms grow more sophisticated and our machines at least appear more intelligent and more purposeful, discussions of the Church-Turing thesis will become more practical than the highly theoretical debate of Church and Turing's time.

The next two Futurecasts will examine two approaches to capturing intelligence in a computer: the recursive paradigm and the neural paradigm. We will then look at a very practical way in which human capabilities could be transferred to a machine, by literally transferring a human mind to a computer. While the technology to achieve this does not exist today, the capability is not that far off. Then the implications of the genteel philosophical debate highlighted here will take on a new urgency.

Reprinted with permission from Library Journal, August 1992. Copyright © 1992, Reed Elsevier, USA
