
NON-STANDARD MODELS

In amazing contrast to TN, the first order theory TR of the ordered field of the REALS has been successfully and completely formalized, starting with Euclid's axioms, improved by Hilbert just before the turn of the century, and completed, as well as proved complete, by Tarski about the time of the Second World War. My immediate reaction when I first heard of this feat was shock and distrust of those Berkeley logicians. "How could that be? The reals are so much more complicated than the integers. Aren't the natural numbers defined as the non-negative integral reals?" Well, the solution of that conundrum lies in the

LIMITATION OF EXPRESSIVE POWER INHERENT IN FORMAL LANGUAGES.

As a matter of fact, the natural numbers are not "elementarily definable" among the reals; there is no wff (well-formed formula) of the language of R that picks out the natural numbers among the reals.
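For concreteness, Tarski's elimination of quantifiers shows why: every wff φ(x) with one free variable carves out of the reals a set of a very simple shape (a standard fact, stated here in modern notation rather than the essay's),

  \[
  \{\, x \in \mathbb{R} \;:\; \mathbb{R} \models \varphi(x) \,\} \;=\; I_1 \cup I_2 \cup \dots \cup I_k,
  \]

where each I_i is a single point or an interval. An infinite set of isolated points, like the natural numbers, is never of this form.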

Moreover, in spite of its completeness, TR has non-isomorphic models! It has countable models, uncountable ones, Archimedean as well as non-Archimedean ones; some harbor hyperreals, others only standard reals... What is going on? First the chicken-or-egg question must be faced: what comes first, the model or the theory? Ever since the elaborations by Tarski in 1934 and by Mal'cev in 1936 of the results by Löwenheim of 1915 and by Skolem of 1920 (a brief exposition will follow below) we understand that first order chickens are prone to lay a medley of eggs, some "real" in the Platonic sense of being standard and others weird, artificial, substitutes, freaks, in short non-standard. The Ur-hen, the axiomatization, originated from a standard egg, the "intended interpretation", a natural mathematical construct like our everyday arithmetic of the positive integers, or, more sophisticated, the real number system of the 19th century. After the chicken has grown to maturity it starts laying models, and, roaming through the virtual reality of model theory instead of free ranging in Platonic realms, it comes up with non-standard eggs. The only constraint on those is consistency and the verification of the axioms, i.e., the genetic chicken code. These models are hatched within the confines of some entrenched formalization of set theory.

What really lies at the basis of non-standard objects like hyperreals is, again, the limitation inherent in first order languages. In the elementary language of real number theory we cannot distinguish between Archimedean and non-Archimedean orderings, and that opens the door to constructions that were scorned by my teachers, although they might use infinitesimals as a handy figure of speech, the way we still talk Platonically. We thought that Cauchy's and Weierstrass's arithmetization of analysis had done away with that alleged abuse of language, but now it is back in vogue and very useful too (see below).
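The obstruction can be displayed outright. The Archimedean property, "every element is exceeded by some finite sum 1 + 1 + ... + 1", would have to be written as an infinite disjunction (the rendering below is mine, not the essay's):

  \[
  \forall x \,\bigl( x < 1 \;\vee\; x < 1 + 1 \;\vee\; x < 1 + 1 + 1 \;\vee\; \cdots \bigr).
  \]

No single wff of the elementary language of ordered fields expresses this, and so, by the compactness argument given below, no consistent theory true of R can exclude non-Archimedean models.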

NON-STANDARD PHENOMENA are closely connected with the SEMANTIC COMPLETENESS OF ELEMENTARY LOGIC, first proved by Gödel in 1930 [9] and extended in many ways since, in particular by Henkin, who also dealt with formalizations of higher order logic. The underlying metatheorem rests on two facts, one inherent in the finitary nature of a formal deduction, the second involving non-constructive instructions for building a model:

  1. WHENEVER ALL FINITE SUBSETS OF A SET OF WFFS ARE CONSISTENT, THEN SO IS THE ENTIRE SET, and

  2. EVERY CONSISTENT SET OF WFFS HAS A MODEL.

By definition, Semantic Completeness of a formal calculus means EQUIVALENCE BETWEEN FORMAL DERIVABILITY AND SEMANTIC VALIDITY, where validity stands for truth under all interpretations, i.e. in all models.
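In the usual symbols (a standard rendering, added here for reference; ⊢ stands for formal derivability, ⊨ for semantic validity), completeness reads

  \[
  \Sigma \vdash \varphi \quad\Longleftrightarrow\quad \Sigma \models \varphi,
  \]

and facts 1 and 2 above are, respectively, the compactness and the model existence theorems:

  \[
  \text{every finite } \Delta \subseteq \Sigma \text{ consistent} \;\Longrightarrow\; \Sigma \text{ consistent}; \qquad \Sigma \text{ consistent} \;\Longrightarrow\; \Sigma \text{ has a model}.
  \]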

At first this looks like an amazing result, especially in view of currently rampant incompleteness. It is unfortunate that popular literature so often fails to make a clear distinction between the two concepts of semantic and of syntactic completeness (pp. 10, 11). Only the experienced reader will automatically know from the context which notion is at stake.

As a matter of fact, the completeness of first order logic is achieved at a price: the expressive poverty of the formal language. Completeness proofs for higher order logic are ensnared in the same kind of bargain. They are based on a concept of model that to the naive mind seems contrived. Elementary languages are incapable of distinguishing between arbitrarily large finiteness and infinity, and so are forced to tolerate the infinitely small. Consider the infinite set of wffs 0 < a < 1, a + a < 1, a + a + a < 1, ..., a + a + ... + a < 1, ... and let U be its union with TR, the set of all wffs that are true in the field R of the reals. Every finite subset V of U has a model: just take R and interpret a by 1/n, where n is the number of symbols occurring in that finite set V. By fact 1 the whole set U is then consistent, and so by fact 2 it, and with it the elementary theory of the reals, has a model which harbors an element satisfying all these inequalities, i.e., a non-Archimedean, non-standard, or hyperreal, a. It is positive and yet smaller than any fraction 1/n, n a positive integer.
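Spelled out (a paraphrase of the argument just given, in modern notation), the set in question is

  \[
  \Sigma \;=\; T_R \;\cup\; \{\, 0 < a < 1 \,\} \;\cup\; \Bigl\{\, \underbrace{a + a + \dots + a}_{n\ \text{times}} < 1 \;:\; n = 2, 3, \dots \Bigr\}.
  \]

Each finite subset mentions only finitely many of these inequalities, so interpreting a as a small enough standard fraction satisfies it inside R itself; by compactness (fact 1) Σ is consistent, and by model existence (fact 2) Σ has a model R* in which a denotes an infinitesimal: a positive element below 1/n for every positive integer n.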

Ruled by its logic, the language cannot prohibit such anomalies. But there is a silver lining to this shortcoming: because of the consistency of infinitesimals with TR, every truth about the reals that can be expressed in the elementary language of R holds for all reals, standard or not, and so, by Gödel's completeness theorem, it has a formal proof. And if the approach via infinitesimals is smoother, that is just great. One cannot help but marvel at the native instinct with which the seventeenth century mathematicians went about their work.
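In the modern terminology of non-standard analysis this silver lining is the transfer principle: the hyperreals form an elementary extension of the reals (the notation below is the usual one, not the essay's),

  \[
  \mathbb{R} \prec \mathbb{R}^{*}: \qquad \mathbb{R} \models \varphi \;\Longleftrightarrow\; \mathbb{R}^{*} \models \varphi \quad \text{for every sentence } \varphi \text{ of the elementary language of } R.
  \]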

Similarly, any first order theory of N, including TN, has models that contain infinitely large integers. The elementary theory of finite groups has infinite models, and so in fact does every first order theory that admits arbitrarily large finite models.
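The same compactness trick does the work; a sketch (my formulation, with c a fresh constant symbol) for TN:

  \[
  \Sigma \;=\; T_N \;\cup\; \Bigl\{\, c > \underbrace{1 + 1 + \dots + 1}_{n\ \text{times}} \;:\; n = 1, 2, 3, \dots \Bigr\}.
  \]

Every finite subset is satisfied in N by reading c as a large enough standard integer, so Σ has a model: a structure in which all of TN holds while c names an integer above every standard one. For the groups, adjoin instead the sentences "there exist at least n distinct elements" for each n.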

All this is meant to explain that these non-standard phenomena have no bearing on the question whether Platonism is an appropriate view of the origin of Mathematics. I am deliberately not using the word "correct". Whether Platonism is "true" seems an ill-posed question, luring us into vicious circles. How can we contemplate the truth of this, that or the other "ism" before we have a clear and distinct idea of what, if anything, we mean by the Truth of a theory?

The existence of non-standard models should NOT be confounded with the occurrence of incomplete concepts like that of a geometry or that of a set. In the case of hyperreals we are running into limitations of the formal language while dealing with complete theories; in the second case we are simply facing the fact that the intuitive concept, say of a geometry or a set, that we had in mind when setting up the formalization is not yet completely fathomed, in both senses of completeness. Of course the easiest reaction is to say, "that concept is out there, let us go look more closely and we shall eventually find its complete characterization". In this frame of mind Gödel is reputed to have been convinced that we shall eventually understand enough about sets to come up with new axioms that will decide the continuum hypothesis. But in other cases the expedient policy will allow a concept to bifurcate: sailors have no trouble with non-Euclidean geometries.

The big question is where our standard concepts come from: how do we all know what we mean by the Standard Reals? How can we distinguish between Archimedean orderings and non-Archimedean ones when we cannot make the distinction in first order language? Well, we can always resort to hand waving when words fail. We can indeed communicate about them beyond the confines of formalism. They are conceptions, constructions, structures, figments of our imagination, of the human mind that is our common heritage. Other creatures may have other ways of making sense of, and finding their way in, a Universe that we are sharing with them.

This century has seen the development of a powerful tool, that of formalization, in commerce and daily life as well as in the sciences and mathematics. But we must not forget that it is only a tool. An indiscriminate demand for foolproof rules and dogmatic adherence to universal policies must lead to impasses. The other night, watching a program about the American Civil Liberties Union, I was repeatedly reminded of Gödel's Theorem: every system is bound to encounter cases which it cannot decide, snags that confront its user with a choice between either running into a contradiction or jumping out of the system. That is when, with moral issues at stake, cases of precedent are decided by thoughtful judgment going back to first principles of ethics; in the sciences alternate hypotheses are formed, and in mathematics new axioms crop up.

Returning to my question, think of mathematics as a jungle in which we are trying to find our way. We scramble up trees for lookouts, we jump from one branch to another guided by a good sense of what to expect, until we are ready to span tightropes (proofs) between outposts (axioms) chosen judiciously. And when we stop to ask what guides us so remarkably well, the most convincing answer is that the whole jungle is of our own collective making, in the sense of being a selection out of a primeval soup of possibilities. Monkeys are making of their habitat something quite different from what a pedestrian experiences as a jungle.

To sum it all up, I see mathematical activity as a jumping ahead and then plodding along to chart a path by rational toil.

The process of plodding is being analyzed by proof theory, a prolific branch of metamathematics. Still riddled with questions is the jumping.

