MTO

Parsons, the whole story, at last

I have been blogging on and off for quite a while about Charles Parsons’s Mathematical Thought and Its Objects, latterly as we worked through the book in a reading group here. I’ve now had a chance to put together a revised (sometimes considerably revised) version of all those posts into a single document — over 50 pages, I’m afraid. You can download it here.

I’ve learnt quite a bit from the exercise. I’ll be very interested in any comments or reactions.

Parsons’s Mathematical Thought: Sec. 55, Set theory

The final(!) section of Parsons’s book is one of the briefest, though its official topic is among the biggest — the question of the justification of set-theoretic axioms. But, reasonably enough, Parsons just offers here some remarks on how the case of justifying set theory fits with his remarks in the preceding sections.

First, on “rational intuition” again. We can work ourselves into sufficient familiarity with ZFC for its axioms to come to seem intrinsically plausible — but such rational intuitions (given the questions that have been raised, by mathematicians and philosophers) “fall short of intrinsic evidence”. Which isn’t very helpful.

And what about Parsons’s modified holism? In the case of set theory, is there “a dialectical relation of axioms and their consequences such as our general discussion of Reason would suggest”? We might suppose not, given that (equivalents of) the standard axioms were already “essentially in place in Skolem’s address of 1922”. Nonetheless, Parsons suggests, we do find such a dialectical relation, historically in the reception of the axiom of choice, and perhaps now in continuing debates about large cardinal axioms, etc., where “the role of intrinsic plausibility” is much diminished, and having the right (or at least desirable) consequences is an essential part of their justification. But, Parsons concludes — the final sentence of his book — “apart from the purely mathematical difficulties, many problems of methodology and interpretation remain in this area”. Which is, to say the least, a rather disappointing note of anti-climax!

Afterword: Later this week, I’ll post (a link to) a single document wrapping up all the blog-posts here into a just slightly more polished whole, and then I must cut down 30K words to a short critical notice for Analysis Reviews. I feel I’ve learnt a lot from working through (occasionally, battling with) Parsons — but in the end I suppose my verdict has to be a bit lukewarm. I’m unconvinced about his key claims on structuralism, on intuition, on the impredicativity of the notion of number, in each case in part because, after 340 pages, I’m still not really clear enough what the claims amount to.

Parsons’s Mathematical Thought: Sec. 54, Arithmetic

How does arithmetic fit into the sort of picture of the role of reason and so-called “rational intuition” drawn in Secs. 52 and 53?

The bald claim that some basic principles of arithmetic are “self-evident” is, Parsons thinks, decidedly unhelpful. Rather, “in mathematical thought and practice, the axioms of arithmetic are embedded in a rather dense network … [which] serves to buttress [their] evident character … so that in that respect their evident character does not just come from their intrinsic plausibility.” Moreover, there is a subtle interplay between general principles and elementary arithmetical claims — a dialectic “between attitudes towards mathematical axioms and rules and methodological or philosophical attitudes having to do with constructivity, predicativity, feasibility, and the like”. Which, as Parsons notes, is all beginning to sound rather Quinean. How is his position distinctive?

Not by making any more play with talk of “rational intuition”, which made its temporary appearance in Sec. 53 just as a way of talking about what is intrinsically plausible: indeed, the idea that the axioms of arithmetic derive a special status from being grounded in rational intuition is said to be “in an important way misleading”. Where Parsons does depart from Quine — and it is no surprise to be told, at this stage in the book! — is in holding that some elementary arithmetic principles can be intuitively known in the Hilbertian sense he discussed in earlier chapters. And the main point he seems to want to make in this chapter is that, although as we move to more sophisticated areas of arithmetic which cannot directly be so grounded “the conceptual or rational element in arithmetical knowledge becomes much more prominent”, the web of arithmetic isn’t thereby totally severed from intuitive knowledge grounded in intuitions of stroke strings and the like. It is still the case that “an intuitive domain witnesses the possibility of the structure of numbers”.

Of course, how impressed we are by that claim will depend on how well we think Parsons defended his conception of intuitive knowledge in earlier chapters (and I’m not going to go over that ground again now, and nor indeed does Parsons). And what grounds the parts of arithmetic that don’t get rooted in Hilbertian intuition? To be sure, those more advanced parts can get tied to other bits of mathematics, notably set theory, so there is that much rational constraint. But that just shifts the question: what grounds those theories? (There are some remarks in the next chapter, but as we’ll see they are not very helpful.)

So where have we got to? Parsons’s picture of arithmetic retains a role for Hilbertian intuition. And unlike an “all-in” holism, he wants to emphasize the epistemic stratification of mathematics (though his remarks on that stratification really do little more than point to the phenomenon). But still, “our view does not differ toto caelo from holism”. And I’m left really pretty unclear what, in the end, the status of the whole web of arithmetical belief is supposed to be.

Parsons’s Mathematical Thought: Secs. 52-53, Reason, "rational intuition" and perception

Back to Parsons, to look at the final chapter of his book, called simply ‘Reason’. And after the particularly bumpy ride in the previous chapter, this one starts in a very gentle low-key way.

In Sec. 52, ‘Reason and “rational intuition”’, Parsons rehearses some features of our practice of supporting our claims by giving reasons (occasionally, he talks of ‘features of Reason’ with a capital ‘R’: but this seems just to be a Kantian verbal tic without particular significance). He mentions five. (a) Reasoning involves logical inference (and “because of their high degree of obviousness and apparently maximal generality, we do not seem to be able to give a justification of the most elementary logical principles that is not in some degree circular, in that inferences codified by logic will be used in the justification”). (b) In a given local argumentative context, “some statements … play the role of principles which are regarded as plausible (and possibly even evident) without themselves being the conclusion of arguments (or at least, their plausibility or evidence does not rest on the availability of such arguments).” (c) There is a drive towards systematization in our reason-giving — “manifested in a very particular way [in the case of mathematics], through the axiomatic method”. (d) Further, within a systematization, there is a to-and-fro dialectical process of reaching a reflective equilibrium, as we play off seemingly plausible local principles against more over-arching generalizing claims. (e) Relatedly, “In the end we have to decide, on the basis of the whole of our knowledge and the mutual connections of its parts whether to credit a given instance of apparent self-evidence or a given case of what appears to be perception”.

Now, that final Quinean anti-foundationalism is little more than baldly asserted. And how does Parsons want us to divide up principles of logical inference from other parts of a systematized body of knowledge? His remarks about treating the law of excluded middle “simply as an assumption of classical mathematics” suggest that he might want to restrict logic proper to some undisputed core — though he doesn’t tell us what that is. Still, quibbles apart, the drift of Parsons’s remarks here will strike most readers nowadays as unexceptionable.

Sec. 53, ‘Rational intuition and perception’, says a bit more to compare and contrast intuitions in the sense of statements found in a given context of reasoning to be intrinsically plausible — call these “rational intuitions” — and intuitions in the more Kantian sense that has occupied Parsons in earlier chapters. As he says, “intrinsic plausibility is not strongly analogous to perception [of objects]”, in the way that Kantian intuition is supposed to be. But perhaps analogies with perception remain. For one thing, there is the Gödelian view that intrinsic plausibility for some mathematical propositions involves something like perception of concepts. And there is perhaps another analogy too, suggested by George Bealer: reason is subject to illusions that, like perceptual illusions, persist even after they have been exposed. But Parsons only briefly floats those ideas here, and the section concludes with a different thought, namely that there is a kind of epistemic stratification to mathematics, with propositions at the lowest level seeming indisputably self-evident, and as we get more general and more abstract, self-evidence decreases. Which is anodyne indeed.

Parsons’s Mathematical Thought: Sec. 51, Predicativity and inductive definitions

The final section of Ch. 8 sits rather uneasily with what’s gone before. The preceding sections are about arithmetic and ordinary arithmetic induction, while this one briskly touches on issues arising from Feferman’s work on predicative analysis, and iterating reflection into the transfinite. It also considers whether there is a sense in which a rather different (and stronger) theory given by Paul Lorenzen some fifty years ago can also be called ‘predicative’. There is a page here reminding us of something of the historical genesis of the notion of predicativity: but there is nothing, I think, in this section which helps us get any clearer about the situation with arithmetic, the main concern of the chapter. So I’ll say no more about it.

Parsons’s Mathematical Thought: Sec. 50, Induction and impredicativity, continued

Suppose we help ourselves to the notion of a finite set, and say x is a number if (i) there is at least one finite set which contains x and which, if it contains Sy, contains y, and (ii) every such finite set contains 0. This definition isn’t impredicative in the strict Russellian sense (as Alexander George points out in his ‘The imprecision of impredicativity’). Nor is it overtly impredicative in the extended sense covering the Nelson/Dummett/Parsons cases. We might argue that it is still covertly impredicative in the latter sense, if we think that elucidating the very notion of a finite set — e.g. as one for which there is a natural which counts its members — must in turn involve quantification over naturals. But is that right? This is where Feferman and Hellman enter the story. For, as Parsons remarks, they aim to offer in their theory EFSC a grounding for arithmetic in a theory of finite sets that is predicatively acceptable and that also explains the relevant idea of finiteness in a way that does not presuppose the notion of natural number. Though now things get a bit murky (and I think it would take us too far afield to pursue the discussion any further here). But Parsons’s verdict is that

EFSC admits the existence of sets that are specified by quantification over all sets, and this assumption is used in proving the existence of an N-structure [i.e. a natural number structure]. For this reason, I don’t think that … EFSC can pass muster as strictly predicative.

This seems right, if I am following. It would seem, then, that Parsons would still endorse the view that no explanation of the property natural number is in sight that is not impredicative in a broad sense — where an explanation counts as impredicative in the broad sense if it is impredicative in Russell’s sense, or in the Parsons sense, or invokes concepts whose explanation is in turn impredicative in one of those senses. But the question remains: what exactly is the significance of that broad claim if I am right that even e.g. a constructivist needn’t always have a complaint about definitions which are impredicative in a non-Russellian way? It would have been good to have been told.
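For concreteness, here is one way of writing out the finite-set definition of number from the start of this post — my own sketchy formalization, not Feferman and Hellman’s official notation, with ‘Fin(A)’ (‘A is a finite set’) taken as primitive:

\[
N(x) \;\leftrightarrow\; \exists A\,\theta(A,x) \;\wedge\; \forall A\,\big(\theta(A,x) \rightarrow 0 \in A\big), \quad\text{where}\quad \theta(A,x) \;\equiv\; \mathrm{Fin}(A) \,\wedge\, x \in A \,\wedge\, \forall y\,(Sy \in A \rightarrow y \in A).
\]

The bound variable A here ranges over finite sets, not over numbers, which is why the definition isn’t Russellian-impredicative; the covert worry aired above is whether explaining ‘Fin’ must in turn quantify over the naturals.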

Back, though, to the question of induction. Dummett, to repeat, says that “the totality of natural numbers is characterised as one for which induction is valid with respect to any well-defined property” including ones whose definitions “may contain quantifiers whose variables range over the totality characterised”. Likewise Nelson. Now, as a gloss on what happens in various formalized systems of arithmetic, that is perhaps unexceptionable. But does the totality of natural numbers have to be so characterized? Return to what I called the simplest explanation of the notion of the natural numbers, which says that (i) zero is a natural number, (ii) if n is a natural number, so is Sn, and (iii) whatever is a natural number is so in virtue of clauses (i) and (ii). This explanation, Parsons argued, sustains induction for any well-defined property. But as we noted before, that argument leaves it wide open which are the well-defined properties. So it seems a further thought, going beyond what is given in the simplest explanation, to claim that any predicate involving first-order quantifications over the numbers is in fact well-defined. There are surely arithmeticians of finitist or constructivist inclinations, who fully understand the idea that the natural numbers are zero and its successors and nothing else, and understand (at least some) primitive recursive functions, but who resist the thought that we can understand predicates involving arbitrarily complex quantifications over the totality of numbers, since we are in general bereft of a way of determining in a finitistically/constructively acceptable way whether such a predicate applies to a given number. To put it in headline terms: it is a significant conceptual move to get from grasping PRA to grasping (first-order) PA — we might say that it involves moving from treating the numbers as a potential infinity to treating them as a completed infinity — and it wants an argument that someone who balks at the move has not grasped the property natural number.
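To put the contrast in bald symbols (my gloss, not Parsons’s): the induction schema is

\[
\big(\varphi(0) \,\wedge\, \forall n\,(\varphi(n) \rightarrow \varphi(Sn))\big) \rightarrow \forall n\,\varphi(n),
\]

and the question is which formulas φ may be instantiated. In full first-order PA, φ can contain arbitrarily nested quantifiers over the numbers; the finitist who stops at (something like) PRA accepts instances only for quantifier-free, e.g. primitive recursive, φ. The claim that needs an argument is that grasping the property natural number already commits one to the more generous policy.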

How much arithmetic can we get if we do balk at the extra move and restrict induction to those predicates we have the resources to grasp in virtue of grasping what it is to be a natural number (plus at least addition and multiplication, say)? Well, arguably we can get at least as far as IΔ0, and Parsons talks a bit about this at the end of the present section. He says, incidentally, that such a theory is ‘strictly predicative’ — but I take it that this is meant in a sense consistent with saying an explanation ‘from outside’ of what the theory is supposed to be about, i.e. the natural numbers, is necessarily impredicative in the broad sense. I won’t pursue the details of the compressed discussion of IΔ0 here.
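For the record (my summary of the textbook definition, not Parsons’s exposition): IΔ0 is first-order arithmetic with the induction schema restricted to Δ0 formulas, those in which every quantifier is bounded, i.e. of one of the forms

\[
\forall x\,(x \le t \rightarrow \ldots) \qquad\text{or}\qquad \exists x\,(x \le t \,\wedge\, \ldots)
\]

for some term t not containing x. Since bounded quantifications can in principle be checked by finite search, induction for such predicates is a natural stopping point for someone with the scruples just described.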

So where does all this get us? Crispin Wright has written

Ever since the concern first surfaced in the wake of the paradoxes, discussion of the issues surrounding impredicativity — when, and under what assumptions, are what specific forms of impredicative characterizations and explanations acceptable — has been signally tangled and inconclusive.

Indeed so! Given that tangled background, any discussion really ought to go more slowly and more explicitly than Parsons does. And I think we need to distinguish here grades of impredicativity in a way that Parsons doesn’t do. Agreed, in the broadest sense an explanation of the natural numbers is impredicative: but this doesn’t mean that finitists or constructivists need get upset. Induction over predicates involving arbitrarily embedded quantifications over the numbers involves another grade of impredicativity, this time something the finitist or constructivist will indeed refuse to countenance. (Perhaps I will return to these matters later: but for now, we must press on!)

Parsons’s Mathematical Thought: Sec. 50, Induction and impredicativity

Here’s the first half of an improved(?!?) discussion of this section: sorry about the delay!

Parsons now takes up another topic that he has written about influentially before, namely impredicativity. He describes his own earlier claim like this: “no explanation [of the predicate ‘is a natural number’] is in sight that is not impredicative”. That claim has been challenged by Feferman and Hellman in a couple of joint papers, and Parsons takes the present opportunity to respond. As the title of this section indicates, Parsons links claims about impredicativity to thoughts about the scope of induction: but as we’ll see, the link takes some teasing out.

What, though, does Parsons mean by impredicativity? Oddly, he doesn’t come out with a straight definition of the notion. Nor does he really explain why it might matter whether definitions of the natural numbers have to be impredicative. So before tackling his discussion, we’d better pause for some preliminary clarifications and reflections.

The usual sort of account of impredicativity, in the same vein as Russell’s original (or rather, as one of Russell’s originals), runs roughly like this: ‘a definition … is impredicative if it defines an object which is one of the values of a bound variable occurring in the defining expression’, i.e. an impredicative specification of an entity is one ‘involving quantification over all entities of the same kind as itself’. (Here, the first quotation is from Fraenkel, Bar-Hillel and Levy, Foundations of Set Theory, p. 38, one of a number of very similar Russellian definitions quoted by Alexander George in his ‘The imprecision of impredicativity’; the second, much more recent, quotation is from John Burgess’s Fixing Frege, p. 40.) Thus Weyl, famously, argued against the cogency of some standard constructions in classical analysis on the grounds of their impredicativity in this sense. (And because ACA0 bans impredicative specifications of sets of numbers, it provides one possible framework for developing those portions of analysis which should be acceptable to someone with Weyl’s scruples. Now, as Parsons in effect notes, a theory like ACA0 which lacks an impredicative comprehension principle is often described as being, unqualifiedly, a predicative theory of arithmetic: but that description takes it for granted that its first-order core — usually first-order Peano Arithmetic — isn’t impredicative in some other respect.)
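As a reminder of what the predicative restriction in ACA0 actually comes to (my statement of the familiar schema, not a quotation): the theory’s comprehension axioms are

\[
\exists X\,\forall n\,\big(n \in X \leftrightarrow \varphi(n)\big),
\]

but only for arithmetical φ — formulas which may contain number quantifiers and free set variables, but no bound set variables (and in which X itself is not free). So no set of numbers gets specified by quantifying over a totality of sets that includes the very set being defined.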

But why should we care about avoiding impredicative definitions for Xs? Why should such definitions lack cogency? Well, suppose we think that Xs are in some sense (however tenuous) ‘constructed by us’ and not determined to exist prior to our mathematical activity. Then, very plausibly, it is illegitimate to give a recipe for constructing a particular X which requires us to take as already given a totality of Xs which includes the very one that is now being constructed. So at least any definition which is to play the role of a recipe-for-construction had better not be impredicative. Given Weyl’s constructivism about sets, then, it is no surprise that he rejects impredicative definitions of sets. I’ll not pause to assess this line of thought any further here: but I take it that it is a familiar one. (By the way, I don’t want to imply that constructivist thoughts are the only ones that might make us suspicious of impredicative definitions: though as Ramsey and Gödel pointed out, it is far from clear why a gung-ho realist should eschew impredicative definitions.)

Now, on the Russellian understanding of the idea, a definition of the set of natural numbers will count as ‘impredicative’ if it quantifies over some totality of sets including the set of natural numbers. Modulated into property talk, we’d have: a definition of the property of being a natural number will count as impredicative if it quantifies over some totality of properties including the property of being a natural number. Some familiar definitions are indeed impredicative in this sense: take, for example, a Frege/Russell definition which says that x is a natural number iff x has all the hereditary properties of zero. Then, the quantification is over a totality which includes the property of being a natural number, and the definition is impredicative in a Russellian sense. But are all explanations we might give of what it is to be a natural number impredicative in the same way?
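To display the Russellian impredicativity explicitly (in my notation, not Parsons’s): the Frege/Russell definition is

\[
N(x) \;\leftrightarrow\; \forall F\,\big[\big(F0 \,\wedge\, \forall y\,(Fy \rightarrow F(Sy))\big) \rightarrow Fx\big],
\]

where the second-order variable F ranges over all properties — a totality which includes the very property N being defined. That is exactly what makes the definition impredicative in Russell’s sense.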

Take, for example, what I’ll call ‘the simplest explanation’: (i) zero is a natural number, (ii) if n is a natural number, so is Sn, and (iii) whatever is a natural number is so in virtue of clauses (i) and (ii) — and hence, almost immediately, (iv) the natural numbers are what we can do induction over. This characterization of the property of being a natural number, which Parsons gives in Sec. 47, does not explicitly involve a quantification over a class of properties including that of being a natural number. And though it might be claimed that an understanding of the extremal clause (iii) requires a grasp of second-order quantification, I’ve urged before that this view is contentious (and indeed the view doesn’t seem to be one that Parsons endorses — see again the discussion of his Sec. 47). So here we have, arguably, an explanation of the concept of number which isn’t impredicative in the Russellian sense. But does the quantification in (iii) make the explanation impredicative in some different, albeit closely related, sense?

Well here’s Edward Nelson, at the beginning of his book, Predicative Arithmetic. In induction we can use what he calls ‘inductive formulae’ which involve quantifiers over the numbers themselves. This, he supposes, entangles us with what he calls an ‘impredicative concept of number’:

A number is conceived to be an object satisfying every inductive formula; for a particular inductive formula, therefore, the bound variables are conceived to range over objects satisfying every inductive formula, including the one in question.

Dummett, quoted approvingly by Parsons, says much the same:

[T]he notion of ‘natural number’ … is impredicative. The totality of natural numbers is characterised as one for which induction is valid with respect to any well-defined property, … the impredicativity remains, since the definitions of the properties may contain quantifiers whose variables range over the totality characterised.

So the thought seems to be that any definition of the numbers is more or less directly going to characterize them as what we can do induction over, and that ‘a characterization of the natural numbers that includes induction as part of it will be impredicative’ (to quote Parsons’s gloss). But note, Dummett says that there is impredicativity here, not because the totality of natural numbers is being defined in terms of a quantification over some domain which has as a member the totality of natural numbers itself (which is what we’d expect on the Russellian definition), but because the totality is defined in terms of a quantification whose domain is (or includes) the same totality. To quote Parsons again:

Because the number concept is characterized as one for which induction holds for any well-defined predicate or property, there is impredicativity if those involving quantification over numbers are included, as they evidently are.

However, to repeat, that involves a non-Russellian notion of impredicativity. In fact it seems that Parsons would also say that an explanation of the concept P — whether or not couched as an explicit definition — is impredicative if it involves a quantification over the totality of things which fall under P. It is perhaps in this extended sense, then, that our ‘simplest explanation’ of the property of being a natural number might be said to be impredicative.

But now note that it isn’t at all obvious why we should worry about a property’s being impredicative if it is a non-Russellian case. Suppose, just for example, we want to be some kind of constructivist about the numbers: then how are our constructivist principles going to be offended by saying that the numbers are zero, its successors, and nothing else? Prescinding from worries about our limited capacities, the ‘simplest explanation’ of the numbers tells us, precisely, how each and every number can be ‘constructed’, at least in principle, and tells us not to worry about there being any ‘rogue cases’ which our construction rules can’t reach. What more can we sensibly want? We might add that, if we are swayed by the structuralist thought that in some sense we can only be given the natural numbers all together (whether by a general method of construction, or otherwise), then perhaps we ought to expect that any acceptable explanation of the property of being a natural number will — when properly articulated — involve us in talking of all the numbers, at least in that seemingly anodyne way that is involved in the extremal clause (iii) above.

These preliminary reflections, then, seem rather to diminish the interest of the claim that characterizations of the property natural number are inevitably impredicative, if that is meant in the Parsons sense. But be that as it may. Let’s next consider: is the claim actually true?

To be continued

Parsons’s Mathematical Thought: Sec. 49, Uniqueness and communication, continued

In sum, then, we might put things like this. Parsons has defended an ‘internalist’ argument — an argument from “within mathematics” — for the uniqueness of the numbers we are talking about in our arithmetic, whilst arguing against the need for (or perhaps indeed, the possibility of) an ‘externalist’ justification for our intuition of uniqueness.

Can we rest content with that? Some philosophers would say we can get more — and Parsons briefly discusses two, Hartry Field and Shaughan Lavine, though he gives fairly short shrift to both. Field has argued that we can appeal to a ‘cosmological hypothesis’ together with an assumption of the determinateness of our physical vocabulary to rule out non-standard models of our applicable arithmetic. Parsons reasonably enough worries: “If our powers of mathematical concept formation are not sufficient [to rule out nonstandard models], then why should our powers of physical concept formation do any better?” Lavine supposes that our arithmetic can be regimented as a “full schematic theory” which is in fact stronger than the sort of theory with open-ended induction that we’ve been considering, and for which a categoricity theorem can be proved. But Parsons finds some difficulty in locating a clear conception of exactly what counts as a full schematic theory — a difficulty on which, indeed, I’ve commented elsewhere on this blog.

In both cases, I think Parsons’s points are well taken: but his discussions of Field and Lavine are brief, and probably more needs to be said (though not here).

Parsons’s Mathematical Thought: Sec. 49, Uniqueness and communication

Parsons now takes another pass at the question whether the natural numbers form a unique structure. And this time, he offers something like the broadly Wittgensteinian line which we mooted above as a riposte to skeptical worries — though I’m not sure that I have grasped all the twists and turns of Parsons’s intricate discussion.

We’ll start by following Parsons in considering the following scenario. Michael uses a first-order language for arithmetic with primitives 0, S, N, and Kurt uses a similar language with primitives 0′, S’, N’. Each accepts the basic Peano axioms, and each also stands ready to accept any instances of the first-order induction schema for predicates formulable in his respective language (or in an extension of that language which he can come to understand). And we now ask: how could Michael determine that his ‘numbers’ are isomorphic to Kurt’s?

We’ll assume that Michael is a charitable interpreter, and so he thinks that what Kurt says about his numbers is in fact true. And we can imagine that Michael recursively defines a function f from his numbers to Kurt’s in the obvious way, putting f(0) = 0′, and f(Sn) = S’f(n) (of course, to do this, Michael has to add Kurt’s vocabulary to his own, while shelving detailed questions of interpretation — but suppose that’s been done). Then trivially, each f(n) is an N’ by Kurt’s explicit principles which Michael is charitably adopting. And Michael can also show that f is one-one using his own induction principle.
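In symbols (a sketch of the familiar argument, not a quotation from Parsons): the function is fixed by the recursion equations

\[
f(0) = 0', \qquad f(Sn) = S'f(n),
\]

and Michael proves one-one-ness by induction on n applied to the predicate ∀m(f(m) = f(n) → m = n), using his own induction principle together with Kurt’s axioms that 0′ is not a successor and that S′ is one-one — axioms which Michael accepts as true of Kurt’s numbers by the charity assumption.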

In sum, then, Michael can show that f is an injection from the Ns into the N’s, whatever exactly the latter are. But, at least prescinding from the considerations in the previous section, that so far leaves it open whether — from Michael’s point of view — Kurt’s numbers are non-standard (i.e. it doesn’t settle for Michael whether there are also Kurt-numbers which aren’t f-images of Michael-numbers). How could Michael rule that out? Well, he could show that f is onto, and hence prove it a bijection, if he could borrow Kurt’s induction principle — which he is charitably assuming is sound in Kurt’s use — applied to the predicate ∃m(Nm & fm = ξ). But now, asks Parsons, what entitles Michael to suppose that that is indeed one of the predicates Kurt stands prepared to apply induction to? Why presume, for a start, that Kurt can get to understand Michael’s predicate N so as to bring it under the induction principle?
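Spelled out (again my sketch, not Parsons’s wording), the borrowed induction would run like this on the predicate ∃m(Nm & fm = ξ):

\[
f(0) = 0', \qquad \xi = f(m) \;\Rightarrow\; S'\xi = S'f(m) = f(Sm),
\]

so, by Kurt’s induction, every Kurt-number would be an f-image and f would be onto. The issue, to repeat, is Michael’s entitlement to apply Kurt’s induction principle to that predicate in the first place.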

It would seem that, so long as Michael regards Kurt ‘from the outside’, trying to ‘radically interpret’ him as if an alien, then he has no obvious good reason to presume that. But on the other hand, that’s just not a natural way to regard a fellow human being. The natural presumption is that Kurt could learn to use N as Michael does, and so — since grasping meaning is grasping use — could come to understand that predicate, and likewise grasp Michael’s f, and hence come to understand the predicate ∃m(Nm & fm = ξ). Hence, taking for granted Kurt’s common humanity and his willingness to extend the use of induction to new predicates, Michael can then complete the argument that his and Kurt’s numbers are isomorphic. Parsons puts it like this. If Michael just takes Kurt as a fellow speaker who can come to share a language, then

We now have a situation that was lacking when we viewed Michael’s understanding of Kurt as a case of radical interpretation; namely, he will take his own number predicate as a well-defined predicate according to Kurt, and so he will allow himself to use it in induction on Kurt’s numbers. That will enable him to complete the proof that his own numbers are isomorphic to Kurt’s.

And note, the availability of the proof here “does not depend on any global agreement between them as to what counts as a well-defined predicate”, nor on Michael’s deploying a background set theory.

So far, then, so good. But how far does this take us? You might say: if Michael and Kurt in effect can come to belong to the same speech community, then indeed they might reasonably take each other to be talking of the same numbers (up to isomorphism) — but that doesn’t settle whether what they share is a grasp of a standard model. But again, that is to look at them together ‘from the outside’, as aliens. If we converse with them as fellow humans, and presume that they stand ready to use induction on our predicates which they can learn, then we can use the same argument as Michael to argue that they share our conception of the numbers. You might riposte that this still leaves it open whether we’ve all grasped a nonstandard model. But that is surely confused: as Dummett for one has stressed, in order to formulate the very idea of models of arithmetic — whether standard or nonstandard — we must already be making use of our notion of ‘natural number’ (or notions that swim in the same conceptual orbit like ‘finite’, or stronger notions like ‘set’). To put that notion into doubt is to saw off the branch we are sitting on in describing the models. Or as Parsons says, commenting on Dummett,

[I]n the end, we have to come down to mathematical language as used, and this cannot be made to depend on semantic reflection on that same language. We can see that two purported number sequences are isomorphic without strong set-theoretic premisses, but we cannot in the end get away from the fact that the result obtained is one “within mathematics” (in Wittgenstein’s phrase). We can avoid the dogmatic view about the uniqueness of the natural numbers by showing that the principles of arithmetic lead to the Uniqueness Thesis …

So, there is indeed basic agreement here with the Wittgensteinian observation that in the end there has to be understanding without further interpretation. But Parsons continues,

… but this does not protect the language of arithmetic from an interpretation completely from the outside, that takes quantifiers over numbers as ranging over a non-standard model. One might imagine a God who constructs such an interpretation, and with whom dialogue is impossible. But so far the interpretation is, in the Kantian phrase, “nothing to us”. If we came to understand it (which would be an essential extension of our own linguistic resources) we would recognize it as unintended, as we would have formulated a predicate for which, on the interpretation, induction fails.

Well, yes and no. True, if we come to understand someone as interpreting us as thinking of the natural numbers as outstripping zero and its successors, then we would indeed recognize him as getting us wrong — for we could then formulate a predicate ‘is-zero-or-one-of-its-successors’ for which induction would have to fail (according to the interpretation), contrary to our open-ended commitment to induction. And further dialogue will reveal the mistake to the interpreter who gets us wrong. However, contra Parsons, we surely don’t have to pretend to be able to make any sense of the idea of a God who constructs such an interpretation and ‘with whom dialogue is impossible’: Davidson and Dummett, for example, would both surely reject that idea.

But where exactly does all this leave us on the uniqueness question? To be continued …

Parsons’s Mathematical Thought: Sec. 48, The problem of the uniqueness of the number structure: Nonstandard models

“There is a strongly held intuition that the natural numbers are a unique structure.” Parsons now begins to discuss whether this intuition — using ‘intuition’, of course, in the common-or-garden non-Kantian sense! — is warranted. He sets aside until the long Sec. 49 issues arising from arguments of Dummett’s: here he makes some initial points on the uniqueness question, arising from the consideration of nonstandard models of arithmetic.

It’s worth commenting first, however, on a certain ‘disconnect’ between the previous section and this one. For recall, Parsons has just been discussing how we might introduce a predicate ‘N’ (‘… is a natural number’) governed by the rules (i) N0, and (ii) from Nx infer N(Sx), plus the extremal clause (iii) that nothing is a number that can’t be shown to be so by rules (i) and (ii). Together with the rules for the successor function, the extremal clause — interpreted as intended — ensures that the numbers will be unique up to isomorphism. Conversely, our naive intuition that the numbers form a unique structure is surely most naturally sustained by appeal to that very clause. The thought is that any structure for interpreting arithmetic as informally understood must take numbers to comprise a zero element, its successors (all different, by the successor rules), and nothing else. And of course the numbers in each structure will then have a natural isomorphism between them (which matches zeros with zeros, and n-th successors with n-th successors). So the obvious issue to take up at this point is: what does it take to grasp the intended content of the extremal clause? Prescinding from general worries about rule-following, is there any special problem about understanding that clause which might suggest that, after all, different arithmeticians who deploy that clause could still be talking of different, non-isomorphic, structures? However, obvious though these questions are given what has gone before, Parsons doesn’t raise them.

Given the ready availability of the informal argument just sketched, why should we doubt uniqueness? Ah, the skeptical response will go, regiment arithmetic however we like, there can still be rival interpretations (thanks to the Löwenheim/Skolem theorem). Even if we dress up the uniqueness argument — by putting our arithmetic into a set-theoretic setting and giving a formal treatment of the content of the extremal clause, and then running a full-dress version of the informal Dedekind categoricity theorem — that still can’t be used to settle the uniqueness question. For the requisite background set theory itself, presented in the usual first-order way, can itself have nonstandard models: and we can construct cases where the unique-up-to-isomorphism structure formed by ‘the natural numbers’ inside such a nonstandard model won’t be isomorphic to the ‘real’ natural numbers. And going second-order doesn’t help either: we can still have non-isomorphic “general models” of second-order theories, and the question still arises how we are to exclude those. In sum, the skeptical line runs, someone who starts off with worries about the uniqueness of the natural-number structure because of the possibilities of non-standard models of arithmetic, won’t be mollified by an argument that presupposes uniqueness elsewhere, e.g. in our background set theory.
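For the record, the full-dress result being alluded to is Dedekind’s categoricity theorem; in bald modern form (my statement of the textbook theorem, not Parsons’s): if (N, 0, S) and (N′, 0′, S′) are both models of the second-order Peano axioms — the successor axioms plus induction for every subset of the domain — then the function given by the recursion

\[
f(0) = 0', \qquad f(Sx) = S'f(x)
\]

is the unique isomorphism between them. The skeptic’s point, to repeat, is that both the proof and the intended reading of ‘every subset’ live inside a background theory whose own interpretation can be put into question.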

Now, that skeptical line of thought will, of course, be met with equally familiar responses (familiar, that is, from discussions of the philosophical significance of the existence of nonstandard models as assured us by the Löwenheim/Skolem theorem). For example, it will be countered that things go wrong at the outset. We can’t keep squinting sideways at our own language — the language in which we do arithmetic, express extremal clauses, and do informal set theory — and then pretend that more and more of it might be open to different interpretations. At some point, as Wittgenstein insisted, there has to be understanding without further interpretation (and at that point, assuming we are still able to do informal arithmetical reasoning at all, we’ll be able to run the informal argument for the uniqueness of the numbers).

How does Parsons stand with respect to this sort of dialectic? He outlines the skeptical take on the Dedekind argument at some length, explaining how to parlay a certain kind of nonstandard model of set theory into a nonstandard model of arithmetic. And his response isn’t the very general one just mooted; rather, he claims that the way the construction works “witnesses the fact the model is nonstandard” — and he means, in effect, that our grasp of the constructed model which provides a deviant interpretation of arithmetic piggy-backs on a prior grasp of the standard interpretation — so the idea that we might have deviantly cottoned on to the nonstandard model from the outset is undermined. Yet a bit later he says he is not going to attempt to directly answer skeptical arguments based on the L-S theorem. And he finishes the section by saying the theorem “seems still to cast doubt on whether we have really ‘captured’ the ‘standard’ model of arithmetic”. So I’m left puzzled.

Parsons does, however, touch on one interesting general point along the way, noting the difference between those cases where we get deviant interpretations that we can understand but which piggy-back on a prior understanding of the theory in question, and those cases where we know there are alternative models because of the countable elementary submodel version of the L-S theorem. Since the existence of such submodels is given to us by the axiom of choice, these resulting interpretations are, in a sense, unsurveyable by us, so — for a different reason — are also not available as alternative interpretations we might have cottoned on to from the outset. The point is worth further exploration, which it doesn’t receive here.
