

ELEMENTARY THEORIES

These phenomena are a manifestation of the precarious balance between algorithmic precision and expressive power inherent in every formal language and its logic. The most popular, widely taught formalization is the first order predicate calculus, also called elementary logic, a formalization of reasoning in so-called first order predicate languages. That apparatus leads from "elementary axioms" to "elementary" theories.

The important requirement for any formalization is the existence of both a "mechanism" (algorithm) for deciding, given any well formed sentence (wfs) of the language concerned, whether or not that wfs is an axiom, and one for deciding of any given configuration of wfs's whether or not it is an instance of one of the rules. The resulting concept of a formal proof is decidable, i.e., there exists an algorithm which, when fed any finite sequence of wfs's, will come up with the "answer yes" (0) or the "answer no" (1) according as that sequence is a formal proof in the system or not. The resulting axiomatizable theory will in general only be effective in the sense that there exists an algorithmic procedure for listing all and only those wfs's that are theorems. That by no means guarantees a decision procedure for theoremhood. In fact, most common theories have been proven undecidable.

To start with, a familiar structure like N or R will serve as the so-called STANDARD MODEL or INTENDED INTERPRETATION for the elementary theory meant to describe it. Observe that the notion of a standard model presupposes some basic concept of mathematical reality and truth. Gödel talks of "inhaltliches Denken" (contentual thinking) in juxtaposition to "formales Denken" (formal thinking). His translators use the term 'contentual'. 'Intentional' might be just as good a choice. Of course one might dodge the need for a metaphysical position by using terms like "preverbal" or "informal". But that does not make the problem go away.
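The decidability requirement described above, and the merely effective listing of theorems it yields, can be illustrated by a toy sketch. All names here, the two-element axiom set, and the string-based modus ponens are hypothetical stand-ins, not any actual proof system; the point is only the shape of the argument: proof-checking terminates on every input, while the theorem enumerator never answers "no".

```python
from itertools import count, product

# Hypothetical toy system: AXIOMS, is_axiom and follows_by_rule stand in
# for the decidable checks the text requires (axiomhood, rule instances).
AXIOMS = {"A", "A->B"}

def is_axiom(wfs):
    return wfs in AXIOMS

def follows_by_rule(wfs, earlier):
    # toy modus ponens on strings: wfs follows if some earlier step p
    # and the earlier step "p->wfs" both occur before it
    return any(f"{p}->{wfs}" in earlier for p in earlier)

def is_proof(seq):
    # Decidable: every step must be an axiom or follow from earlier steps;
    # this check terminates on every finite sequence of wfs's.
    return all(is_axiom(s) or follows_by_rule(s, seq[:i])
               for i, s in enumerate(seq))

def enumerate_theorems(wfs_pool):
    # Effective, but no decision procedure: lists all and only theorems
    # by running is_proof over every finite sequence drawn from the pool.
    # For a non-theorem it simply never says anything.
    for n in count(1):
        for seq in product(wfs_pool, repeat=n):
            if is_proof(list(seq)):
                yield seq[-1]
```

In this toy system the sequence "A", "A->B", "B" is a formal proof, so "B" eventually appears in the enumeration; but no finite amount of waiting can certify that a given wfs will never appear.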
If we want to talk about standard models, if we want our theories to describe something, approximately and formally, what is it that we want them to describe? A question that would not disturb a Platonist like Gödel. The formalist's way out is to throw away the ladder once he has arrived at his construction and to concentrate on the questions of formal consistency and formal completeness, purely syntactic notions. A theory is formally consistent if and only if for no wfs A both A and not-A are theorems, and formally complete if and only if for every wfs A either A or not-A is a theorem. To an extreme formalist the existence of an abstract object coincides with the formal consistency of the properties describing it. If at all, he will draw his models from yet another theory, most likely some, necessarily incomplete, formalization or other of elementary set theory, presumed, but only presumed, to be consistent. An unsatisfactory strategy. We, however, are left with the conundrum of Mathematical Truth and the semantic notions that depend on a "meaning" attached to the theory. With respect to an interpretation of its language over a structure S a theory, formalized or not, is
sound (semantically consistent) if and only if only wfs's true in S are theorems,
semantically complete if and only if all wfs's true in S are theorems.

Granted a clear and distinct idea of the structures N and R, we talk of the sets of all wfs's that are true under the intended interpretations on N and on R as True Elementary Arithmetic, TN, and as the True Theory TR of the Reals. Consider an EXAMPLE. Leaving aside the question where N comes from, I should think that we all know what we mean by the wfs

(F^3) for ALL positive integers x, y and z: the sum of the cubes of x and y is not equal to the cube of z.

F is short for FERMAT. To explain it to a naive computer mind, we would say: "Make two lists as follows: in the left one, L, write down successively the results of adding the cubes of two positive integers, 1^3 + 1^3, 1^3 + 2^3, 2^3 + 2^3, 1^3 + 3^3, 2^3 + 3^3, 3^3 + 3^3, ..., and into the right one, R, put all the cubes 1^3 (1), 2^3 (8), 3^3 (27), 4^3 (64), 5^3 (125) and so on. Now run through both lists comparing the entries. (F^3) claims that you will never find the same number showing up both on the left and on the right." A computer can easily compile these lists in so orderly a fashion and run through them so systematically that, for each bound N, it will, after a computable number of steps, say f(N) of them, have calculated and compared all pairs of numbers in L and in R smaller than N. You will probably agree that this tedious explanation makes it sufficiently clear what we mean here by ALL. You may want to use nicer language, like talking about NEVER finding a matching pair. The purpose of symbolization, however, is not only orderliness but clarification. The dual to the so-called UNIVERSAL QUANTIFIER (for all) is the EXISTENTIAL QUANTIFIER. Just think for a moment, assuming (F^3) were false, how easy it would be for your patient computer to prove that.
It would only have to go on long enough until it found a COUNTEREXAMPLE, i.e., a positive integer that shows up in both lists, R and L. Having done so it would have proved
NOT(F^3) there EXIST positive integers x, y and z such that the sum of the cubes of x and y is equal to the cube of z.

In 1753, using clever transformations of the problem, Euler succeeded in proving the restriction (F^3) of Fermat's theorem to cubes. But Fermat's general Conjecture
(F) for ALL positive integers x, y, z and n, n greater than 2: the sum of the nth powers of x and y is not equal to the nth power of z

was proved conclusively only a few years ago, by means of techniques way beyond elementary arithmetic. It should be noted here that variable exponentiation is not part of the language of N but can be paraphrased in it. In the above procedure you will have to organize your left list according to an enumeration of triples (x,y,n) and the right one according to pairs (z,n). The capacity to visualize an ongoing sequence of calculations and comparisons leads to an understanding of what is meant by the truth of (F). Yet, in spite of many efforts, its proof had to wait till algebraic geometry and number theory had achieved the maturity necessary to allow its construction.

We have a pretty good understanding of what we mean when we claim that ALL integers, or all pairs or triples of them, have a certain property, provided that we understand the property itself. The most manageable kind of properties that integers may have are what we call recursive or computable. They are susceptible to a decision procedure, as illustrated by the example of checking, for fixed n and any given triple of integers x, y, z, whether or not the sum of the nth powers of the first two is equal to the nth power of the third. A property P of triples of numbers is called recursive if and only if its so-called characteristic function (its decision function), which takes on value 0 at the triple (m,n,q) if that triple has the property and value 1 otherwise, is computable by an algorithm like a Turing machine or, equivalently, is recursive. The amazing, often elusive, power of the universal quantifier, brought home by Gödel's incompleteness proof discussed in the next section, is again manifest in the intrinsic difficulties with which the conclusive proof of Fermat's theorem is fraught.
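The two-list procedure and the decision function just described can be sketched as follows; this is a hedged illustration under the text's conventions (0 for "yes", 1 for "no"), with the function names and the bound parameter chosen here for exposition, not taken from the text.

```python
def decision(x, y, z, n):
    # Decision function, in the text's convention: value 0 if the
    # triple satisfies x^n + y^n = z^n, value 1 otherwise. Each call
    # is a finite computation, so the property is recursive.
    return 0 if x**n + y**n == z**n else 1

def search_counterexample(n, bound):
    # The two-list search: the left list L holds the sums x^n + y^n,
    # the right list R the nth powers z^n, for arguments below the
    # bound; any common entry refutes the universal claim (F^n) by
    # exhibiting an existential witness.
    left = {x**n + y**n for x in range(1, bound) for y in range(1, bound)}
    right = {z**n for z in range(1, bound)}
    return sorted(left & right)
```

For n = 2 the search immediately turns up matches such as 3^2 + 4^2 = 5^2 = 25 (the Pythagorean triples), while for n = 3 it comes back empty at every bound, which is exactly what (F^3) asserts and what no finite run of the search can prove.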
ELEMENTARY ARITHMETIC

Based on a naive concept of Truth, every true theory of a definite structure is complete and consistent in both senses, a pretty useless observation. For a byproduct of Gödel's Incompleteness proof of 1931 [7] is the nonformalizability of elementary arithmetic, TN, and with it of many other theories.

EVERY SOUND AXIOMATIZATION OF ELEMENTARY ARITHMETIC IS INCOMPLETE.

The most natural candidate for axiomatizing TN goes back to Giuseppe Peano (1895) and consists of the recursive rules for addition, multiplication and the natural ordering on the set N of nonnegative integers built up from 0 by the successor operation that leads from n to n+1, together with the Principle P of Mathematical Induction, which postulates that every set of numbers containing 0 and closed under the successor operation exhausts all of N or, equivalently, that every property enjoyed by 0 and inherited by successors is universal. P is a principle that adults may consider a definition of the set N, while children will, in my experience, take it for granted. But, if you want to articulate it in the language of the first order predicate calculus, you run into trouble. As illustrated in example (F), elementary languages can quantify over individuals. But quantification over so-called HIGHER ORDER items like properties is beyond their scope. P is a typical sentence of second order logic.

PEANO ARITHMETIC, PA, is the first order approximation to second order arithmetic obtained by replacing P with the following schema of infinitely many axioms,
(PW) If 0 has the property expressed by the wff W and, whenever a number x has that property, then so does x+1, then all natural numbers have property W,

one for each wff (well formed formula) W of the elementary language of arithmetic. Reformulating (PW) in terms of proofs rather than truth sheds light on it and illustrates how one might want to go about replacing the basic concept of truth in mathematics by a primitive notion of proof. Writing W(x) for "x has property W", the Principle of Proof by Mathematical Induction reads
(PPW) Given 1) a proof of W(0), and 2) a method for turning any proof of W(x) into a proof of W(x+1), THEN there exists a proof of "for all x: W(x)".

Note how appealing this formulation is: given any number n, you only have to start with the proof given by 1) and then apply the method of 2) n times to obtain a proof of the sentence W(n). But there is a subtlety here. So understood, the principle only guarantees that, for every number n, a proof of W(n) can be found, a typical "for all ... there exists ..." claim. Its power lies, however, in the "there exists ... for all ..." form of the conclusion as exhibited above. These may sound like nitpicking distinctions, but they are of great proof-theoretic significance. For instance, from the consistency of PA, proved by means transcending PA, it follows: if g is the Gödel number of the Gödel sentence G, then:
for each natural number n, the sentence "n is not the Gödel number of a proof of the sentence with Gödel number g" is a theorem of PA. However G itself, namely the sentence
"for all x: x is not the Gšdel number of a proof of the sentence with Gšdel number g" is not a theorem of PA. G is the sentence that truthfully claims its own unprovability. Much deep work is required to establish this result rigorously. Occasionally proofs by mathematical induction are confused with arguments based on socalled 'inductive reasoning', a term used in philosophical discussions of logic and the sciences ÷ yet another reason for all these elaborations. Axiomatized but undecidable theories are a fortiori incomplete. In 1939 Tarski proved that
THERE IS A FINITELY AXIOMATIZABLE FRAGMENT OF PA ALL OF WHOSE CONSISTENT EXTENSIONS ARE UNDECIDABLE. [8]

And yet, if we accept an intentional concept of truth, we seem to obtain a complete theory from Peano's mere handful of axioms together with that one marvelous second order tool P on which so much of our mathematical thinking hinges. For:

ALL MODELS OF PEANO'S SECOND ORDER AXIOM SYSTEM ARE ISOMORPHIC.

Still, any attempt to formalize second order arithmetic is again doomed to founder on the cliffs identified by Gödel and Tarski. The juxtaposition of these claims is and ought to be baffling. In fact they bring home the discrepancy between the naive and the formalist concept of a model. From the naive point of view they mean that higher order logic cannot be completely formalized. Even so, completeness proofs for it are widely hailed, at the price of allowing all sorts of nonisomorphic models even for second order Peano Arithmetic. Enough of that for now.

Fermat's last theorem may well be beyond the scope of elementary Peano Arithmetic. In other words, (F) is presumably left undecided by PA. A few mathematically interesting theorems expressible in the language of PA with that property are already known. They are embeddable in stronger but still convincing first order theories, some elementary set theory or other. After all that, we are faced with the question where new axioms come from, in other words with THE PROBLEM OF THE NATURE OF MATHEMATICAL TRUTH. To declare "OK, as of October 27, 1995, the day that Wiles was awarded the Prix Fermat by the town of Toulouse, (F) shall be added to the list of axioms for elementary arithmetic" would seem quite inappropriate. We want more intuitively obvious first principles.
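The computational reading of the induction principle (PPW) discussed earlier in this section can be mimicked in a short sketch; the names and the toy "proof" encoding are hypothetical, chosen only to exhibit the "for all n there exists a proof of W(n)" reading as an actual computation, as opposed to the single proof of the universally quantified sentence.

```python
def proofs_by_induction(base_proof, step):
    # (PPW), read computationally: from a proof of W(0) and a method
    # turning any proof of W(x) into one of W(x+1), we obtain, for
    # EACH number n, a concrete proof of W(n) by n applications of
    # the step. What we do NOT obtain this way is the one proof of
    # "for all x: W(x)" that the principle itself supplies.
    def proof_of(n):
        p = base_proof
        for _ in range(n):
            p = step(p)
        return p
    return proof_of

# Toy instance (hypothetical encoding): a "proof" of the statement
# "1 + 2 + ... + x = x(x+1)/2" is represented by the checked pair
# (x, verified_sum).
base = (0, 0)

def step(p):
    x, s = p
    return (x + 1, s + x + 1)

proof_of = proofs_by_induction(base, step)
```

Here proof_of(n) yields, after n steps, the pair (n, n(n+1)/2); each individual instance is certified, yet no finite run certifies the universal claim, which is precisely the subtlety the text points out.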


