“There is a strongly held intuition that the natural numbers are a unique structure.” Parsons now begins to discuss whether this intuition — using ‘intuition’, of course, in the common-or-garden non-Kantian sense! — is warranted. He sets aside until the long Sec. 49 issues arising from arguments of Dummett’s: here he makes some initial points on the uniqueness question, arising from the consideration of nonstandard models of arithmetic.

It’s worth commenting first, however, on a certain ‘disconnect’ between the previous section and this one. For recall, Parsons has just been discussing how we might introduce a predicate ‘N’ (‘… is a natural number’) governed by the rules (i) N0, and (ii) from Nx infer N(Sx), plus the extremal clause (iii) that nothing is a number that can’t be shown to be so by rules (i) and (ii). Together with the rules for the successor function, the extremal clause — interpreted as intended — ensures that the numbers are unique up to isomorphism. Correspondingly, our naive intuition that the numbers form a unique structure is surely most naturally sustained by appeal to that very clause. The thought is that any structure for interpreting arithmetic as informally understood must take the numbers to comprise a zero element, its successors (all different, by the successor rules), and nothing else. And of course the numbers in any two such structures will then have a natural isomorphism between them (which matches zeros with zeros, and n-th successors with n-th successors). So the obvious issue to take up at this point is: what does it take to grasp the intended content of the extremal clause? Prescinding from general worries about rule-following, is there any special problem about understanding that clause which might suggest that, after all, different arithmeticians who deploy it could still be talking of different, non-isomorphic, structures? However, obvious though these questions are given what has gone before, Parsons doesn’t raise them.
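The informal argument can be made a shade more explicit. Here is a sketch (my formulation, not Parsons’s own): suppose (N, 0, S) and (N′, 0′, S′) are two structures each comprising a zero element, its successors, and nothing else.

```latex
% Sketch of the recursion behind the informal uniqueness argument.
% Define f : N -> N' by:
\[
  f(0) = 0', \qquad f(Sx) = S'(f(x)).
\]
% The extremal clause for N ensures f is defined on all of N; the
% successor rules (zero is no successor, distinct elements have
% distinct successors) make f injective; and the extremal clause for
% N' makes f surjective. So f is the natural isomorphism, matching
% zeros with zeros and n-th successors with n-th successors.
```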

Given the ready availability of the informal argument just sketched, why should we doubt uniqueness? Ah, the skeptical response will go, regiment arithmetic however we like, there can still be rival interpretations (thanks to the Löwenheim/Skolem theorem). Even if we dress up the uniqueness argument — by putting our arithmetic into a set-theoretic setting, giving a formal treatment of the content of the extremal clause, and then running a full-dress version of the informal Dedekind categoricity theorem — that still can’t be used to settle the uniqueness question. For the requisite background set theory, presented in the usual first-order way, can itself have nonstandard models: and we can construct cases where the unique-up-to-isomorphism structure formed by ‘the natural numbers’ inside such a nonstandard model won’t be isomorphic to the ‘real’ natural numbers. And going second-order doesn’t help either: we can still have non-isomorphic “general models” of second-order theories, and the question still arises how we are to exclude those. In sum, the skeptical line runs, someone who starts off with worries about the uniqueness of the natural-number structure because of the possibility of nonstandard models of arithmetic won’t be mollified by an argument that presupposes uniqueness elsewhere, e.g. in our background set theory.
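For orientation, the axiom doing the work in the full-dress Dedekind theorem is second-order induction; in one standard formulation (my gloss, not a quotation from Parsons):

```latex
% Second-order induction: any property (set) X containing 0 and
% closed under successor holds of every number.
\[
  \forall X \bigl( X0 \land \forall x\,(Xx \rightarrow X(Sx))
      \rightarrow \forall x\, Xx \bigr)
\]
% In full models, X ranges over all subsets of the domain, and any
% two full models of second-order arithmetic are isomorphic. In
% Henkin-style 'general models', X ranges over only some designated
% subsets, the categoricity proof no longer goes through, and
% non-isomorphic models reappear.
```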

Now, that skeptical line of thought will, of course, be met with equally familiar responses (familiar, that is, from discussions of the philosophical significance of the nonstandard models whose existence the Löwenheim/Skolem theorem guarantees). For example, it will be countered that things go wrong at the outset. We can’t keep squinting sideways at our own language — the language in which we do arithmetic, express extremal clauses, and do informal set theory — and then pretend that more and more of it might be open to different interpretations. At some point, as Wittgenstein insisted, there has to be understanding without further interpretation (and at that point, assuming we are still able to do informal arithmetical reasoning at all, we’ll be able to run the informal argument for the uniqueness of the numbers).

How does Parsons stand with respect to this sort of dialectic? He outlines the skeptical take on the Dedekind argument at some length, explaining how to parlay a certain kind of nonstandard model of set theory into a nonstandard model of arithmetic. And his response isn’t the very general one just mooted: rather, he claims that the way the construction works “witnesses the fact the model is nonstandard” — meaning, in effect, that our grasp of the constructed model which provides a deviant interpretation of arithmetic piggy-backs on a prior grasp of the standard interpretation — so the idea that we might have deviantly cottoned on to the nonstandard model from the outset is undermined. Yet a bit later he says he is not going to attempt to directly answer skeptical arguments based on the L-S theorem. And he finishes the section by saying the theorem “seems still to cast doubt on whether we have really ‘captured’ the ‘standard’ model of arithmetic”. So I’m left puzzled.

Parsons does, however, touch on one interesting general point along the way, noting the difference between those cases where we get deviant interpretations that we can understand but which piggy-back on a prior understanding of the theory in question, and those cases where we know there are alternative models only because of the countable elementary submodel version of the L-S theorem. Since the existence of such submodels is delivered by the axiom of choice, the resulting interpretations are, in a sense, unsurveyable by us, and so — for a different reason — are also not available as alternative interpretations we might have cottoned on to from the outset. The point deserves more exploration than it receives here.
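The result in play here is the downward Löwenheim/Skolem theorem in its elementary submodel form; stated schematically (again my gloss, not Parsons’s text):

```latex
% Downward Loewenheim-Skolem (countable language): every structure M
% has a countable elementary substructure M_0, i.e.
\[
  M_0 \preceq M, \qquad |M_0| \le \aleph_0 .
\]
% The standard proof closes a countable set under Skolem functions,
% whose existence is secured by the axiom of choice -- which is why
% the submodel is delivered without any explicit description of it,
% and so is 'unsurveyable' in the sense relevant above.
```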