The Paradigms and Paradoxes of Intelligence, Part 1: Russell's Paradox
An exploration of Russell's Paradox, written for "The Futurecast," a monthly column in the Library Journal.
Originally published June 1992. Published on KurzweilAI.net August 6, 2001.
Over the past several months The Futurecast has examined some of the ramifications of this early phase of machine intelligence. For the next several months we will take a look at the nature of intelligence, what it is, and what makes it tick, and then examine the long-term feasibility of simulating the human mind and the practical and philosophical implications of doing so.
A judge is sentencing a man for a crime that he finds reprehensible and for which he wishes to mete out the most severe sentence. He tells the convicted man not only that he is sentenced to die, but that the sentence is to be carried out in a unique way. "The sentence is to be carried out no later than next Saturday. Furthermore, I want the sentence to be carried out in such a way that on the morning of your execution, you will not know for certain that you are going to be executed on that day. When we come for you, it will be a surprise."
When the judge finishes describing his unusual sentence, the condemned man seems surprisingly pleased and replies, "Well, that's great judge. I am greatly relieved." To this the judge says, "I don't understand. How can you be relieved?"
The man replies, "Well, your honor, in order for your sentence to be carried out, I could not be executed on Saturday." "Why is that?" asks the judge. "Since the sentence must be carried out by Saturday, if we actually get to Saturday, I will know for certain that I am to be executed on that day, and thus it would not be a surprise."
"I suppose you are right," responds the judge. "You cannot be executed on Saturday. I still do not see why you are relieved." "Well," says the prisoner, "if we have definitely ruled out Saturday, then I cannot be executed on Friday either."
"Why is that?'' asks the judge. "We have agreed that I definitely cannot be executed on Saturday. Therefore, Friday is the last day I can be executed. Thus, if Friday rolls around, I will definitely know that I am to be executed on that day, and therefore it would not be a surprise. So I cannot be executed on Friday." "I see," says the judge.
"Thus the last day I can be executed would be Thursday. But if Thursday rolls around, I would know I had to be executed on that day, and thus it would not be a surprise. So Thursday is out. By the same reasoning we can eliminate Wednesday, Tuesday, Monday, and today."
The judge scratches his head as the confident prisoner is led back to his cell. On Thursday, the prisoner is taken to be executed. He is very surprised. So the judge's orders are successfully carried out.
The above is my version of what has become known as Russell's paradox. In the early part of this century, Bertrand Russell, a young and as yet relatively unknown mathematician and philosopher, became increasingly occupied with a certain type of paradox and attempted to understand its implications.
If we analyze the paradox contained in the above story, we see that the conditions that the judge has set up result in a conclusion where none of the days meets the conditions, because, as the prisoner so adroitly points out, each one of them in turn would not be a surprise. But the conclusion itself changes the situation, and now surprise is possible again. This brings us back to the original situation in which the prisoner could (in theory) demonstrate that each day in turn would be impossible, and so on. The judge applies Alexander's solution to this Gordian knot.
A simple example, and the one that Russell actually struggled with, is the following question about sets. In mathematics, a set is a collection of objects (cats, children, people you like, as well as theoretical "things" such as squares, circles, lines, and points). A set is often defined in terms of a rule. For example, we could define a set to consist of all shapes that are circles. Russell's paradox concerned set A, defined as containing all sets that are not members of themselves. Russell then asked the question: Does set A contain itself?
As we consider this problem, our first realization is that there are only two possible answers: yes or no. If the answer is yes, then set A does contain itself. But if set A contains itself, then, according to its definition (it contains only those sets that are not members of themselves), it cannot be a member of itself. Since the assumption that A contains itself leads to a contradiction, it must be wrong.
If the answer is no, then set A does not contain itself. But again, according to the defining condition, if A does not belong to itself, then it must belong to set A. As with the story about the prisoner, we have contradictory propositions that imply one another: the assumption of no yields yes, which yields no, and so on.
Both yes and no
This may all seem amusing (or confusing), but to Russell, it threatened the very foundations of mathematics. The concept of a "set" is fundamental to mathematics, and the definition of set A appears to be a perfectly reasonable one. The question of whether or not set A belongs to itself also seems perfectly reasonable. Yet it appears that we cannot come up with a logically consistent answer. Without a resolution to this paradox, the basic premise of all of mathematics was in question.
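To make the circularity concrete, here is a minimal sketch in Python (representing a set as a membership-testing function, and the names evens and A, are my own illustrative choices, not anything from Russell): set A's rule asks about A itself, so evaluating the question simply chases its own tail.

```python
# A "set" is modeled here as a rule: a function that answers
# "is x a member of this set?" with True or False.

# An ordinary, well-behaved set: the even integers.
def evens(x):
    return isinstance(x, int) and x % 2 == 0

print(evens(4))   # True: 4 belongs to the set of even numbers

# Russell's set A: x belongs to A exactly when x is NOT a member of itself.
def A(x):
    return not x(x)

# Asking whether A belongs to itself chases its own tail:
# A(A) evaluates "not A(A)", which evaluates "not A(A)", and so on.
try:
    print(A(A))
except RecursionError:
    print("The question 'does A contain itself?' never settles.")
```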
To solve this problem, Russell invented the concept of a logical transformation: an operation that requires the equivalent of a quantum of time. Russell designed a set of logical operations in which a particular problem would be expressed as a "program" of operations to follow. We then turn the program on and let it run. Each logical inference is implemented in turn, and when the process is completed, we get our answer.
If we apply this theoretical machine to the problem of set A, the logical operations are "executed" in turn. At a certain point, the answer will be yes, but the program keeps running, and at a later point the answer becomes no.
Russell then provided narrow and broad definitions of a set. In the narrow sense, a set has a definition that allows the construction of a program that can determine whether a given entity is a member of the set in a finite amount of time. According to this definition, set A is not a true set, so the paradox is eliminated.
Eliminating the contradictions
In the broad sense, the program defining the logical rules of set membership need not come to a halt in a finite amount of time; it just needs to come to an answer in a finite amount of time, and it is allowed to change that answer as the program continues to run. According to this definition, set A is a proper set. The question of whether set A belongs to itself will be yes at one point in "time" and no at another point, and the program will alternate between the two.
Thus, logical inferences are not implemented instantly, but rather one at a time with an orderly change of state between each. In our case, the answer is never yes and no at the same time. In the broad definition of a set, set A is a valid set, but a particular type that is "unstable," just as an electronic circuit can be unstable. Nonetheless, the contradiction is eliminated.
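As a rough illustration of this "unstable set" idea, the following Python sketch (the loop, the function name, and the cycle count are assumptions of mine, meant only to mirror the column's description) treats each pass through a loop as one quantum of time during which a single inference flips the answer.

```python
# A toy version of the "unstable" set A running on a Russell-style
# logic machine. Each iteration of the loop stands for one quantum of
# time, during which a single logical inference is carried out.

def does_A_contain_itself(cycles):
    answer = False          # begin from the assumption "no"
    history = []
    for t in range(cycles):
        # The one inference per cycle: if the current answer is "no",
        # A satisfies its own defining rule, so the answer becomes "yes";
        # if it is "yes", the rule excludes A, so the answer becomes "no".
        answer = not answer
        history.append((t, "yes" if answer else "no"))
    return history

for t, verdict in does_A_contain_itself(6):
    print(f"cycle {t}: {verdict}")

# At every cycle the machine holds exactly one definite answer; it is
# never "yes and no at the same time," but the answer never stops changing.
```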
Similarly, we can build a "Russell" logic machine to simulate the logical inferences inherent in the story of the prisoner. The prisoner's lecture on logic would be simulated by a sequence of logical inferences: he has shown quite logically why it is not possible for him to be executed following the judge's instructions. The judge then realizes that the prisoner's belief that he cannot be executed makes it possible once again to execute him (because now he can be surprised). Before the prisoner can formulate yet another lecture on logic showing why he cannot be executed (that is, before the "program" simulating this situation can alternate again to "impossible to execute"), the judge quickly implements his sentence. He must implement it at a particular point in the program.
Russell expanded his theory to lay a new foundation for logic and the theory of sets in his first major work in mathematics, The Principles of Mathematics, published in 1903. He subsequently felt that all of mathematics should be recast in terms of his new theory of sets, since the concept of sets and their interactions is fundamental to all other mathematical disciplines.
With the help of his friend and former tutor Alfred North Whitehead (1861-1947), he labored for nearly ten years to apply his new theory of sets and logic to all realms of mathematics. Whitehead and Russell did not manage to complete their reexamination, but they nonetheless published their work in three volumes in 1910, 1912, and 1913 under the title Principia Mathematica. The work was truly revolutionary and provided a new methodology for all mathematics that was to follow.
As significant as Principia was to mathematics in general, it was a pivotal development in terms of the foundations of the theory of computation that would be developed two decades later. Russell had created a theoretical model of a logic machine that we now recognize as essentially a computer, particularly in its execution of logical operations in cycles of time.
Modern set theory, still based on Russell's Principia, provides a foundation for much of mathematics. It is interesting to note that modern set theory is in turn based on Russell's theoretical model of computation. Viewing things in this way, it would be correct to say that mathematics is a branch of computer theory. What is particularly impressive about Russell's achievement is that there were no computers even contemplated at the time he developed his theory. Russell needed to invent a theoretical model of a computer and programming to address a flaw in the foundation of logic itself.
A quarter of a century later, Alan Turing created the Turing machine, based largely on Russell's earlier work, which endures today as our primary theoretical model of a computer. Turing is also famous for his role in building some of the earliest operational computing machinery: the electromechanical code-breaking machines that Britain put into service in 1940.
The Turing machine has persisted as our primary theoretical model of computation because of its combination of simplicity and power. Its simplicity derives from its short list of capabilities (which in the interest of space I will not describe). As for its power, Turing was able to show that this extremely simple machine can compute anything that any machine can compute, no matter how complex. If a problem cannot be solved by a Turing machine, then it cannot be solved by any machine.
An unexpected discovery that Turing reports in his 1937 paper on the Turing machine (entitled "On Computable Numbers, with an Application to the Entscheidungsproblem") is the concept of unsolvable problems, i.e., problems that are well-defined with unique answers that can be shown to exist, but that can never be computed by a Turing machine. The fact that there are problems that cannot be solved by this particular theoretical machine may not seem particularly startling until one considers the other conclusion of Turing's paper, namely, that the Turing machine can model any machine. A machine is regarded as any process that follows fixed laws.
According to Turing, if we regard the human brain as subject to natural physical law (a reasonable position in the view of many observers), then Turing's unsolvable problems cannot be solved by either machine or human thought. We are left with the perplexing situation of being able to define a problem, to prove that a unique answer exists, and yet know that the answer can never be known. Turing went on to prove that there are as many unsolvable problems as solvable ones, the number of each being the lowest order of infinity, the so-called countable infinity (that is, the number of integers).
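The best-known example of such an unsolvable problem is the halting problem: deciding whether an arbitrary program will ever finish running. The sketch below is my paraphrase of the standard argument in Python, not Turing's own notation; the names halts and contrary are hypothetical stand-ins. If a halting decider existed, a deliberately contrary program would force it to be wrong, so no such decider can exist.

```python
# A sketch of the classic unsolvability argument, using the halting
# problem. Suppose, for the sake of argument, that a function
# halts(program, data) could always correctly decide whether running
# `program` on `data` eventually finishes. No algorithm can actually do
# this; the body below is only a placeholder for the supposed decider.

def halts(program, data):
    raise NotImplementedError("No algorithm can correctly fill this in.")

def contrary(program):
    # Do the opposite of whatever the supposed decider predicts about
    # this program being run on its own text.
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    else:
        return               # predicted to run forever, so halt at once

# If halts existed, consider contrary(contrary): whichever answer halts
# gives, contrary does the opposite, so halts must be wrong about it.
# The contradiction shows that no such decider can exist.
```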
Turing then teamed up with Alonzo Church, an American mathematician and philosopher. The result of this collaboration was the Church-Turing thesis, which postulates an essential equivalence between what can be computed by a Turing machine (that is, by a computer) and what can be accomplished by human thought.
Reprinted with permission from Library Journal, June 1992. Copyright © 1992, Reed Elsevier, USA