Mathematical Thought and Its Objects

Encore #10: Parsons on intuition

Just yesterday, Brian Leiter posted the results of one of his entertaining/instructive online polls, this time on the “Best Anglophone and German Kant scholars since 1945”. Not really my scene at all. Though I did, back in the day, really love Bennett’s Kant’s Analytic (as philosophy this is surely brilliant, whatever its standing as “scholarship”). I note that in comments after his post, Leiter expresses regret for not having listed Charles Parsons in his list of contributors to Kant scholarship to be voted on. Well, true enough, Parsons has battled over the years to try to make sense of/rescue something from Kantian thoughts about ‘intuition’ as grounding e.g. arithmetical knowledge. But with what success, I do wonder? I found the passages about intuition in Mathematical Thought and Its Objects rather baffling. Here I put together some thoughts from 2008 blog posts.

Is any of our arithmetical knowledge intuitive knowledge, grounded on intuitions of mathematical objects? Parsons writes, “It is hard to see what could make a cognitive relation to objects count as intuition if not some analogy with perception” (p. 144). But how is such an analogy to be developed?

Parsons tries to soften us up for the idea that we can have intuitions of abstracta (where these intuitions are somehow quasi-perceptual) by considering the putative case of perceptions – or are they intuitions? – of abstract types such as letters. The claim is that “the talk of perception of types is something normal and everyday” (p. 159).

But it is of course not enough to remark that we talk of e.g. seeing types: we need to argue that we can take our talk here as indeed reporting a (quasi)-perceptual relation to types. Well, here I am, looking at a squiggle on paper: I immediately see it as being, for example, a Greek letter phi. And we might well say: I see the letter phi written here. But, in this case, it might well be said, ‘perception of the type’ is surely a matter of perceiving the squiggle as a token of the type, i.e. perceiving the squiggle and taking it as a phi.

Now, it would be wrong to suppose that – at an experiential level – ‘seeing as’ just factors into a perception and the superadded exercise of a concept or of a recognitional ability. When the aspect changes, and I see the lines in a drawing as a picture of a duck rather than a rabbit, at some level the content of my conscious perception itself, the way it is articulated, changes. Still, in seeing the lines as a duck, it isn’t that there is more epistemic input than is given by sight (visual engagement with a humdrum object, the lines) together with the exercise of a concept or of a recognitional ability. Similarly, seeing the squiggle as a token of the Greek letter phi again doesn’t require me to have some epistemic source over and above ordinary sight and conceptual/recognitional abilities. There is no need, it seems, to postulate something further going on, i.e. quasi-perceptual ‘intuition’ of the type.

The deflationist idea, then, is that seeing the type phi instantiated on the page is a matter of seeing the written squiggle as a phi, and this involves bringing to bear the concept of an instance of phi. And, the suggestion continues, having such a concept is not to be explained in terms of a quasi-perceptual cognitive relation with an abstract object, the type. If anything it goes the other way about: ‘intuitive knowledge of types’ is to be explained in terms of our conceptual capacities, and is not a further epistemic source. (But note, the deflationist who resists the stronger idea of intuition as a distinctive epistemic source isn’t barred from taking Parsons’s permissive line on objects, and can still allow the introduction of talk via abstraction principles of abstract objects such as types. He needn’t have a nominalist horror of talk of abstracta.)

Let’s be clear here. It may well be that, as a matter of the workings of our cognitive psychology, we recognize a squiggle as a token phi by comparing it with some stored template. But that of course does not imply that we need be able, at the personal level, to bring the template to consciousness: and even if we were to have some quasi-perceptual access to the template itself, it wouldn’t follow that we have quasi-perceptual access to the type. Templates are mental representations, not the abstracta represented.

Parsons, however, explicitly rejects the sketched deflationary story about our intuition of types when he turns to consider the particular case of the perception of expressions from a very simple ‘language’, containing just one primitive symbol ‘|’ (call it ‘stroke’), which can be concatenated. The deflationary reading

does not accurately render our perceptual consciousness of strokes. It would make what I want to call intuition of a string an instance of seeing a certain inscription as of a type …. But in actual cases, the identification of the type will be firmer and more explicit than the identification of any physical inscription that is an instance of the type. That the inscriptions are real physical objects with definite physical properties plays no role in the mathematical treatment of the language, which is what concerns us. An illusory presentation of a string, provided it is sufficiently clear, will do as well to illustrate a mathematical notion as a real one. (p. 161)

There seem to be two points here, neither of which will really trouble the deflationist.

The first point is that the identification of a squiggle’s type may be “firmer and more explicit” than our determination of its physical properties as a token (which I suppose means that a somewhat blurry shape may still definitely be a letter phi). But so what? Suppose we have some discrete conceptual pigeon-holes, and have reason to take what we see as belonging in one pigeon-hole or another (as when we are reading Greek script, primed with the thought that what we are seeing will be a sequence of letters from forty-eight upper- and lower-case possibilities). Then fuzzy tokens can get sharply pigeon-holed. But there’s nothing here that the deflationist about seeing types can’t accommodate.

The second point is that, for certain illustrative purposes, illusory strings are as good as physical strings. But again, so what? Why shouldn’t seeing illusory strokes as a string be a matter of our tricked perceptual apparatus engaging with our conceptual and/or recognitional abilities? Again, there is certainly no need to postulate some further cognitive achievement, ‘intuition of a type’.

Oddly, Parsons himself, when wrestling with issues about vagueness, comes close to making these very points. For you might initially worry that intuitions which are founded in perceptions and imaginings will inherit the vagueness of those perceptions or imaginings – and how would that then square with the idea that mathematical intuition latches onto sharply delineated objects? But Parsons moves to block the worry, using the example of seeing letters again. His thought now seems to be the one above, that we have some discrete conceptual pigeon-holes, and in seeing squiggles as a phi or a psi (say), we are pigeon-holing them. And the fact that some squiggles might be borderline candidates for putting in this or that pigeon-hole doesn’t (so to speak) make the pigeon-holes less sharply delineated. Well, fair enough. But thinking in these terms surely does not sustain the idea that we need some basic notion of the intuition of the type phi to explain our pigeon-holing capacities.

So, I’m unpersuaded that we actually need (or indeed can make much sense of) any notion of the quasi-perceptual ‘intuition of types’ – and in particular, any notion of the intuition of types of stroke-strings – that resists a deflationary reading. But let’s suppose for a moment that we follow Parsons and think we can make sense of such a notion. Then what use does he want to make of the idea of intuiting strokes and stroke-strings?

Parsons writes

What is distinctive of intuitions of types [here, types of stroke-strings] is that the perceptions and imaginings that found them play a paradigmatic role. It is through this that intuition of a type can give rise to propositional knowledge about the type, an instance of intuition that. I will in these cases use the term ‘intuitive knowledge’. A simple case is singular propositions about types, such as that ||| is the successor of ||. We see this to be true on the basis of a single intuition, but of course in its implications for tokens it is a general proposition. (p. 162)

This passage raises a couple of issues worth commenting on.

One issue concerns the claim that there is a ‘single intuition’ here on the basis of which we see that ||| is the successor of ||. Well, I can think of a few cognitive situations which we might agree to describe as grounding quasi-perceptual knowledge that ||| is the successor of || (even if some of us would want to give a deflationary construal of the cases, one which doesn’t appeal to intuition of abstracta). For example,

  1. We perceive two stroke-strings
           ||
           |||
     and aligning the two, we judge one to be the successor of the other.
  2. We perceive a single sequence of three strokes ||| and flip to and fro between seeing it as a threesome and as a block of two followed by an extra stroke.
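Parsons’s one-symbol ‘language’ is simple enough to model directly. Here is a minimal sketch (the function names are my own illustrative choices, not Parsons’s), on which ‘||| is the successor of ||’ comes out as a plain fact about appending a stroke:

```python
# Toy model of the stroke 'language': an expression is a string of '|'
# characters; the successor of a string appends one further stroke.

def successor(s: str) -> str:
    """Extend a stroke-string by one stroke."""
    return s + "|"

def is_successor(t: str, s: str) -> bool:
    """t is the successor of s iff t is s with exactly one more stroke."""
    return t == successor(s)

# '||| is the successor of ||':
print(is_successor("|||", "||"))   # True

# And the extended string is always of a different type,
# since it contains one more stroke than the original:
print(successor("||") == "||")     # False
```

Of course, nothing in this toy model speaks to the epistemological question at issue; it just fixes what the mathematical claims about the language say.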

But, even going along with Parsons on intuition, neither of those cases seems aptly described as seeing something to be true on the basis of a single intuition. In the first case, don’t we have an intuition of ||| and a separate intuition of || plus a recognition of the relation between them? In the second case, don’t we have successive intuitions, and again a recognition of the relation between them? It seems that our knowledge that ||| is the successor of || is in either case grounded on intuitions, plural, plus a judgement about their relation. And now the suspicion is that it is the thoughts about the relations that really do the essential grounding of knowledge here (thoughts that could as well be engaging with perceived real tokens or with imagined tokens, rather than with putative Parsonian intuitions that, as it were, reach past the real or imagined inscriptions to the abstracta).

The other issue raised by the quoted passage concerns the way that the notion of ‘intuitive knowledge’ is introduced here, as the notion of propositional knowledge that arises in a very direct and non-inferential way from intuition(s) of the objects the knowledge is about: “an item of intuitive knowledge would be something that can be ‘seen’ to be true on the basis of intuiting objects that it is about” (p. 171). Such a notion looks very restrictive – on the face of it, there won’t be much intuitive knowledge to be had.

But Parsons later wants to extend the notion in two ways. First

Evidently, at least some simple, general propositions about strings can be seen to be true. I will argue that in at least some important cases of this kind, the correct description involves imagining arbitrary strings. Thus, that will be included in ‘intuiting objects that a proposition is about’. (p. 171)

But even if we now allow intuition of ‘arbitrary objects’, that still would seem to leave intuitive knowledge essentially non-inferential. However,

I do not wish to argue that the term ‘intuitive knowledge’ should not be used in that [restrictive] way. Our sense, following that of the Hilbert School, is a more extended one that allows that certain inferences preserve intuitive knowledge, so that there can actually be a developed body of mathematics that counts as intuitively known. This seems to me a more interesting conception, in addition to its historical significance. Once one has adopted this conception, one has to consider case by case what inferences preserve intuitive knowledge. (p. 172)

Two comments about this. Take the second proposed extension first. The obvious question to ask is: what will constrain our case-by-case considerations of which kinds of inference preserve intuitive knowledge? To repeat, the concept of intuitive knowledge was introduced by reference to an example of knowledge seemingly non-inferentially obtained. So how are we supposed to ‘carry on’, applying the concept now to inferential cases? It seems that nothing in our original way of introducing the concept tells us which such further applications are legitimate, and which aren’t. But there must be some constraints here if our case-by-case examinations are not just to involve arbitrary decisions. So what are these constraints? I struggle to find any clear explanation in Parsons.

And what about intuiting ‘arbitrary’ strings? How does this ground, for example, the knowledge that every string has a successor? Well, supposedly, (1) “If we imagine any [particular] string of strokes, it is immediately apparent that a new stroke can be added.” (p. 173) (2) But we can “leave inexplicit its articulation into single strokes” (p. 173), so we are imagining an arbitrary string, and it is evident that a new stroke can be added to this too. (3) “However, …it is clear that the kind of thought experiments I have been describing can be taken as intuitive verifications of such statements as that any string of strokes can be extended only if one carries them out on the basis of specific concepts, such as that of a string of strokes. If that were not so, they would not confer any generality.” (p. 174) (4) “Although intuition yields one essential element of the idea that there are, at least potentially, infinitely many strings …more is involved in the idea, in particular that the operation of adding an additional stroke can be indefinitely iterated. The sense, if any, in which intuition tells us that is not obvious.” (p. 176) But (5) “Once one has seen that every string can be extended, it is still another question whether the string resulting by adding another symbol is a different string from the original one. For this it must be of a different type, and it is not obvious why this must be the case. … Although it will follow from considerations advanced in Chapter 7 that it is intuitively known that every string can be extended by one of a different type, ideas connected with induction are needed to see it” (p. 178).

There’s a lot to be said about all that, though (4) and (5) already indicate that Parsons thinks that, by itself, ‘intuition’ of stroke-strings might not take us terribly far. But does it take us even as far as Parsons says? For surely it is not the case that imagining/intuiting adding a stroke to an inexplicitly articulated string, together with the exercise of the concept of a string of strokes, suffices to give us the idea that any string can be extended. For we can surely conceive of a particularist reasoner, who has the concept of a string, can bring various arrays (more or less explicitly articulated) under that concept, and given a string can recognize that this one can be extended – but who can’t advance to frame the thought that they can all be extended. The generalizing move surely requires a further thought, not given in intuition.

Indeed, we might now wonder quite what the notion of intuition is doing here at all. For note that (1) and (2) are claims about what is imaginable. But if we can get to general results about extensibility by imagining particular strings (or at any rate, imagining strings “leaving inexplicit their articulation into single strokes”, thus perhaps |||| with a blurry filling), bringing them under concepts, and then generalizing, why do we also need to think in terms of having cognitive access to something else which is intrinsically general, i.e. stroke-string types? It seems that Parsonian intuitions actually drop out of the picture. What gives them an essential role in the story?

Finally, note Parsons’s pointer forward to the claim that ideas “connected with induction” can still be involved in what is ‘intuitively known’. We might well wonder again as we did before: what integrity is left to the notion of intuitive knowledge once it is no longer tightly coupled with the idea of some quasi-perceptual source and allows inference, now even non-logical inference, to preserve intuitive knowledge? I can’t wrestle with this issue further here: but Parsons’s ensuing discussion of these matters left me puzzled and unpersuaded.

Again, as with the last post, that’s how things seemed to be more than seven years ago. If other readers have a better sense of what a Parsonian line on intuition might come to, comments are open!

Encore #9: Parsons on noneliminative structuralism

I could post a few more encores from my often rather rude blog posts about Murray and Rea’s Introduction to the Philosophy of Religion. But perhaps it would be better for our souls to turn to an altogether more serious book which I blogged about at length, Charles Parsons’ Mathematical Thought and Its Objects. I got a great deal from trying to think through my reactions to this dense book in 2008. But I often struggled to work out what was going on. Here, in summary, is where I got to in a series of posts about the book’s exploration of structuralism. I’m very sympathetic to structuralist ideas: but I found it difficult to pin down Parsons’s version.

In his first chapter, Parsons defends a thin, logical, conception of ‘objects’ on which “Speaking of objects just is using the linguistic devices of singular terms, predication, identity and quantification to make serious statements” (p. 10). His second chapter critically discusses eliminative structuralism. The third chapter presses objections against modal structuralism. But Parsons still finds himself wanting to say that “something close to the structuralist view is true” (p. 42), and he now moves on to characterize his own preferred noneliminative version. We’ll concentrate on the view as applied to arithmetic.

Parsons makes two key initial points. (1) Unlike the eliminative structuralist, the noneliminativist “take[s] the language of mathematics at face value” (p. 100). So arithmetic is indeed about numbers as objects. What characterizes the position as structuralist is that we don’t “take more as objectively determined about the objects about which it speaks than [the relevant mathematical] language itself specifies” (p. 100). (2) Then there is “the aspect of the structuralist view stressed by Bernays, that existence for mathematical objects is in the context of a background structure” (p. 101). Further, structures aren’t themselves objects, and “[the noneliminativist] structuralist account of a particular kind of mathematical object does not view statements about that kind of object as about structures at all”.

But note, thus far there’s nothing in (1) and (2) that the neo-Fregean Platonist (for example) need dissent from. The neo-Fregean can agree e.g. that numbers only have numerical intrinsic properties (pace Frege himself, even raising the Julius Caesar problem is a kind of mistake). Moreover, he can insist that individual numbers don’t come (so to speak) separately, one at a time, but come all together, forming an intrinsically ordered structure: in identifying the number 42 as such, we necessarily give its position in relation to other numbers.

So what more is Parsons saying about numbers that distinguishes his position from the neo-Fregean? Well, he in fact explicitly compares his favoured structuralism with the view that the natural numbers are sui generis in the sort of way that the neo-Fregean holds. He writes

One further step that the structuralist view takes is to reject the demand for any further story about what objects the natural numbers are [or are not]. (p. 101)

The picture seems to be that the neo-Fregean offers a “further story” at least in negatively insisting that numbers are sui generis, while the structuralist refuses to give such a story. As Parsons puts it elsewhere

If what the numbers are is determined only by the structure of numbers, it should not be part of the nature of numbers that none of them is identical to an object given independently.

But of course, neo-Fregeans like Hale and Wright won’t agree that their rejection of cross-type identities is somehow an optional extra: they offer arguments which — successfully or otherwise — aim to block the Julius Caesar problem and reveal certain questions about cross-type identifications as ruled out by our very grasp of the content of number talk. So from this neo-Fregean perspective, we can’t just wish into existence a coherent structuralist position that both (a) construes our arithmetical talk at face value, as referring to numbers as genuine objects, yet also (b) insists that the possibility of cross-type identifications is left open, because — so this neo-Fregean story goes — a properly worked out version of (a), together with reflection on the ways that genuine objects are identified under sortals, implies that we can’t allow (b).

Now, on the sui generis view about numbers, claims identifying numbers with sets will be ruled out as plain false. Or perhaps it is even worse, and such claims fail to make the grade for being either true or false (though it is, of course, notoriously difficult to sustain a stable, well-motivated, distinction between the neither-true-nor-false and the plain false — so let’s not dwell on this). Conversely, assuming that numbers are objects, if claims identifying them with sets and the like are false or worse, then numbers are sui generis. So it seems that if Parsons is going to say that numbers are objects but are not sui generis, he must allow space for saying that claims identifying numbers with sets (or if not sets, at least some other objects) are true. But then Parsons is faced with the familiar Benacerraf multiple-candidates problem (if not for sets, then presumably an analogous problem for other candidate objects, whatever they are: let’s keep things simple by running the argument in the familiar set-theoretic setting). How do we choose e.g. between saying that the finite von Neumann ordinals are the natural numbers and saying that the finite Zermelo ordinals are?
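Benacerraf’s point can be made vivid with a toy computation (an illustration of my own, not anything in Parsons): model hereditarily finite sets as Python frozensets, and the two classic encodings of the naturals agree at 0 and 1 but diverge at 2.

```python
# Benacerraf's multiple-candidates problem, concretely: two equally
# good set-theoretic encodings of the natural numbers.

EMPTY = frozenset()

def vn_succ(n: frozenset) -> frozenset:
    """von Neumann successor: S(n) = n ∪ {n}."""
    return n | frozenset({n})

def z_succ(n: frozenset) -> frozenset:
    """Zermelo successor: S(n) = {n}."""
    return frozenset({n})

def encode(succ, k: int) -> frozenset:
    """The set playing the role of the number k under the given successor."""
    n = EMPTY
    for _ in range(k):
        n = succ(n)
    return n

# Both encodings give 0 = ∅ and 1 = {∅}; but at 2 they diverge:
# von Neumann 2 = {∅, {∅}}, Zermelo 2 = {{∅}}.
print(encode(vn_succ, 2) == encode(z_succ, 2))   # False
```

Nothing arithmetical distinguishes the two candidates, which is just why plumping for either looks arbitrary.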

It seems arbitrary to plump for either choice. Rejecting both together (and other choices, on similar grounds) just takes us back to the sui generis view — or even to Benacerraf’s preferred view that numbers aren’t objects at all. So that, it seems, leaves just one position open to Parsons, namely to embrace both choices, and to avoid the apparently inevitable absurdity that \(\{\emptyset,\{\emptyset\}\}\) is identical to \(\{\{\emptyset\}\}\) (because both are identical to 2) by going contextual. It’s only in one context that ‘\(2 = \{\emptyset,\{\emptyset\}\}\)’ is true; and only in another that ‘\(2 = \{\{\emptyset\}\}\)’ is true.

And this does seem to be the line Parsons is inclined to take: “The view we have defended implies that [numbers] are not definite objects, in that the reference of terms such as ‘the natural number two’ is not invariant over all contexts” (p. 106). But how are we to understand that? Is it supposed to be rather like the case where, when Brummidge United is salient, ‘the goal keeper’ refers to Joe Doe, but when Smoketown City is salient, ‘the goal keeper’ refers to Richard Roe? So when the von Neumann ordinals are salient, ‘2’ refers to \(\{\emptyset,\{\emptyset\}\}\), and when the Zermelo ordinals are salient, ‘2’ refers to \(\{\{\emptyset\}\}\)? But then, to pursue the analogy, while ‘the goal keeper’ is indeed sometimes used to talk about now this particular role-filler and now that one, the designator is apparently also sometimes used more abstractly to talk about the role itself — as when we say that only the goal keeper is allowed to handle the ball. Likewise, even if we do grant that ‘2’ sometimes refers to role-fillers, it seems that sometimes it is used to talk more abstractly about the role — perhaps as when we say, when no particular \(\omega\)-sequence of sets is salient, that 2 is the successor of the successor of zero. Well, is this the way Parsons is inclined to go, i.e. towards a structuralism developed in terms of a metaphysics of roles and role-fillers?

Well, Parsons does explicitly talk of “the conclusion that natural numbers are in the end roles rather than objects with a definite identity” (p. 105). But why aren’t roles objects after all, in his official thin ‘logical’ sense of object? — for we can use “the linguistic devices of singular terms, predication, identity and quantification to make serious statements” about roles (and yes, we surely can make claims about identity and non-identity: the goal keeper is not the striker). True, roles are, as Parsons might say, “thin” or “impoverished” objects whose intrinsic properties are determined by their place in a structure. But note, Parsons’s official view about objects didn’t require any sort of ‘thickness’: indeed, he is “most concerned to reject the idea that we don’t have genuine reference to objects if the ‘objects’ are impoverished in the way in which elements of mathematical structures appear to be” (p. 107). And being merely ‘thin’ objects, roles themselves (e.g. numbers) can’t be the same things as ‘thick’ role-fillers. So now, after all, numbers qua number-roles do look to be sui generis entities with their own identity — objects, in the broad logical sense, which are not to be identified with any role-filler — in other words, just the kind of thing that Parsons seems not to want to be committed to.

The situation is further complicated when Parsons briefly discusses Dedekind abstraction, though similar issues arise. To explain: suppose we have a variety of ‘concrete’ structures, whether physically realized or realized in the universe of sets, that satisfy the conditions for being a simply infinite system. Then Dedekind’s idea is that we ‘abstract’ from these a further structure \(\langle N, 0, S\rangle\) which is — so to speak — a ‘bare’ simply infinite system without other inessential physical or set-theoretic features, and it is elements of this system which are the numbers themselves. (Erich H. Reck nicely puts it like this: “[W]hat is the system of natural numbers now? It is that simple infinity whose objects only have arithmetic properties, not any of the additional, ‘foreign’ properties objects in other simple infinities have.”) Since the bare structure is all that is generated by the Dedekind abstraction, “it conforms to the basic structuralist intuition in that the number terms introduced do not give us more than the structure” (p. 105), to borrow Parsons’s words. But, he continues,

This procedure gets its force from the use of a typed language. Thus, the question arises what is to prevent us from later, for some specific purpose, speaking of numbers in a first-order language and even affirming identities of numbers and objects given otherwise.

To which the answer surely is that, to repeat, on the Dedekind abstraction view, the ‘thin’ numbers determinately do not have intrinsic properties other than those given in the abstraction procedure which introduces them: so, by assumption, they are determinately distinct from any ‘thicker’ object with such further properties. Why not?
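Dedekind’s picture of many ‘concrete’ realizations of one bare structure can itself be made vivid with a toy sketch (again entirely my own illustration, not Parsons’s): two quite different simply infinite systems, related by the unique structure-preserving map between them.

```python
# Two 'concrete' simply infinite systems, in Dedekind's sense: each has
# a distinguished initial element and an injective successor operation
# that never returns to the start. Both realize the bare structure
# <N, 0, S>; their elements differ in their 'foreign' properties.

evens = (0, lambda n: n + 2)        # 0, 2, 4, 6, ...
strings = ("", lambda s: s + "|")   # "", "|", "||", ...

def nth(system, k: int):
    """The element playing the role of the number k in the given system."""
    zero, succ = system
    x = zero
    for _ in range(k):
        x = succ(x)
    return x

# The unique structure-preserving map sends the k-th element of one
# system to the k-th element of the other:
iso = {nth(evens, k): nth(strings, k) for k in range(5)}
print(iso[4])   # '||' — in each system, this element plays the role of 2
```

The Dedekind-abstractionist’s thought is then that the numbers themselves are what all such systems share, stripped of the inessential features of any particular realization.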

So now I’m puzzled. For Parsons, does ‘the natural number two’ (i) have a fixed reference to a sui-generis ‘thin’ role-object (or Dedekind abstraction, if that’s different), or (ii) have a contextually shifting reference to a role-filler, or (iii) both? Option (iii) is perhaps the most charitable reading. But it would have helped a lot if Parsons had much more explicitly related his position to an articulated metaphysics of role/role-filler structuralism. Elsewhere, he writes that “the metaphysical tradition is likely to be misleading as a source of ideas about the objects of modern mathematics”. Maybe that’s right. But then it is all the more important to be absolutely clear and explicit about what new view is being proposed. And here, I fear, Parsons’s writing falls short of that.

Or so I thought now over seven years ago. I haven’t re-read Parsons’s text since. I would be very interested to get any comments from readers who worked their way to some clearer understanding of his position. 

Parsons, the whole story, at last

I have been blogging on and off for quite a while about Charles Parsons’s Mathematical Thought and Its Objects, latterly as we worked through the book in a reading group here. I’ve now had a chance to put together a revised (sometimes considerably revised) version of all those posts into a single document — over 50 pages, I’m afraid. You can download it here.

I’ve learnt quite a bit from the exercise. I’ll be very interested in any comments or reactions.

Parsons’s Mathematical Thought: Sec 55, Set theory

The final(!) section of Parsons’s book is one of the briefest, and its official topic is about the biggest — the question of the justification of set-theoretic axioms. But, reasonably enough, Parsons just offers here some remarks on how the case of justifying set theory fits with his remarks in the preceding sections.

First, on “rational intuition” again. We can work ourselves into sufficient familiarity with ZFC for its axioms to come to seem intrinsically plausible — but such rational intuitions (given the questions that have been raised, by mathematicians and philosophers) “fall short of intrinsic evidence”. Which isn’t very helpful.

And what about Parsons’s modified holism? In the case of set theory, is there “a dialectical relation of axioms and their consequences such as our general discussion of Reason would suggest”? We might suppose not, given that (equivalents of) the standard axioms were already “essentially in place in Skolem’s address of 1922”. Nonetheless, Parsons suggests, we do find such a dialectical relation, historically in the reception of the axiom of choice, and perhaps now in continuing debates about large cardinal axioms, etc., where “the role of intrinsic plausibility” is much diminished, and having the right (or at least desirable) consequences is an essential part of their justification. But, Parsons concludes — the final sentence of his book — “apart from the purely mathematical difficulties, many problems of methodology and interpretation remain in this area”. Which is, to say the least, a rather disappointing note of anti-climax!

Afterword Later this week, I’ll post (a link to) a single document wrapping up all the blog-posts here into a just slightly more polished whole, and then I must cut down 30K words to a short critical notice for Analysis Reviews. I feel I’ve learnt a lot from working through (occasionally, battling with) Parsons — but in the end I suppose my verdict has to be a bit lukewarm. I’m unconvinced about his key claims on structuralism, on intuition, on the impredicativity of the notion of number, in each case in part because, after 340 pages, I’m still not really clear enough what the claims amount to.

Parsons’s Mathematical Objects: Sec. 54, Arithmetic

How does arithmetic fit into the sort of picture of the role of reason and so-called “rational intuition” drawn in Secs. 52 and 53?

The bald claim that some basic principles of arithmetic are “self-evident” is, Parsons thinks, decidedly unhelpful. Rather, “in mathematical thought and practice, the axioms of arithmetic are embedded in a rather dense network … [which] serves to buttress [their] evident character … so that in that respect their evident character does not just come from their intrinsic plausibility.” Moreover, there is a subtle interplay between general principles and elementary arithmetical claims — a dialectic “between attitudes towards mathematical axioms and rules and methodological or philosophical attitudes having to do with constructivity, predicativity, feasibility, and the like”. Which, as Parsons notes, is all beginning to sound rather Quinean. How is his position distinctive?

Not by making any more play with talk of “rational intuition”, which made its temporary appearance in Sec. 53 just as a way of talking about what is intrinsically plausible: indeed, the idea that the axioms of arithmetic derive a special status from being grounded in rational intuition is said to be “in an important way misleading”. Where Parsons does depart from Quine — and it is no surprise to be told, at this stage in the book! — is in holding that some elementary arithmetic principles can be intuitively known in the Hilbertian sense he discussed in earlier chapters. And the main point he seems to want to make in this chapter is that, although as we move to more sophisticated areas of arithmetic which cannot directly be so grounded “the conceptual or rational element in arithmetical knowledge becomes much more prominent”, the web of arithmetic isn’t thereby totally severed from intuitive knowledge grounded in intuitions of stroke strings and the like. It is still the case that “an intuitive domain witnesses the possibility of the structure of numbers”.

Of course, how impressed we are by that claim will depend on how well we think Parsons defended his conception of intuitive knowledge in earlier chapters (and I’m not going to go over that ground again now, and nor indeed does Parsons). And what grounds the parts of arithmetic that don’t get rooted in Hilbertian intuition? To be sure, those more advanced parts can get tied to other bits of mathematics, notably set theory, so there is that much rational constraint. But that just shifts the question: what grounds those theories? (There are some remarks in the next chapter, but as we’ll see they are not very helpful.)

So where have we got to? Parsons’s picture of arithmetic retains a role for Hilbertian intuition. And unlike an “all-in” holism, he wants to emphasize the epistemic stratification of mathematics (though his remarks on that stratification really do little more than point to the phenomenon). But still, “our view does not differ toto caelo from holism”. And I’m left really pretty unclear what, in the end, the status of the whole web of arithmetical belief is supposed to be.

Parsons’s Mathematical Objects: Secs 52-53, Reason, "rational intuition" and perception

Back to Parsons, to look at the final chapter of his book, called simply ‘Reason’. And after the particularly bumpy ride in the previous chapter, this one starts in a very gentle low-key way.

In Sec. 52, ‘Reason and “rational intuition”’, Parsons rehearses some features of our practice of supporting our claims by giving reasons (occasionally, he talks of ‘features of Reason’ with a capital ‘R’: but this seems just to be a Kantian verbal tic without particular significance). He mentions five. (a) Reasoning involves logical inference (and “because of their high degree of obviousness and apparently maximal generality, we do not seem to be able to give a justification of the most elementary logical principles that is not in some degree circular, in that inferences codified by logic will be used in the justification”). (b) In a given local argumentative context, “some statements … play the role of principles which are regarded as plausible (and possibly even evident) without themselves being the conclusion of arguments (or at least, their plausibility or evidence does not rest on the availability of such arguments).” (c) There is a drive towards systematization in our reason-giving — “manifested in a very particular way [in the case of mathematics], through the axiomatic method”. (d) Further, within a systematization, there is a to-and-fro dialectical process of reaching a reflective equilibrium, as we play off seemingly plausible local principles against more over-arching generalizing claims. (e) Relatedly, “In the end we have to decide, on the basis of the whole of our knowledge and the mutual connections of its parts whether to credit a given instance of apparent self-evidence or a given case of what appears to be perception”.

Now, that final Quinean anti-foundationalism is little more than baldly asserted. And how does Parsons want us to divide up principles of logical inference from other parts of a systematized body of knowledge? His remarks about treating the law of excluded middle “simply as an assumption of classical mathematics” suggest that he might want to restrict logic proper to some undisputed core — though he doesn’t tell us what that is. Still, quibbles apart, the drift of Parsons’s remarks here will strike most readers nowadays as unexceptionable.

Sec. 53, ‘Rational intuition and perception’, says a bit more to compare and contrast intuitions in the sense of statements found in a given context of reasoning to be intrinsically plausible — call these “rational intuitions” — and intuitions in the more Kantian sense that has occupied Parsons in earlier chapters. As he says, “intrinsic plausibility is not strongly analogous to perception [of objects]”, in the way that Kantian intuition is supposed to be. But perhaps analogies with perception remain. For one thing, there is the Gödelian view that intrinsic plausibility for some mathematical propositions involves something like perception of concepts. And there is perhaps another analogy too, suggested by George Bealer: reason is subject to illusions that, like perceptual illusions, persist even after they have been exposed. But Parsons only briefly floats those ideas here, and the section concludes with a different thought, namely that there is a kind of epistemic stratification to mathematics, with propositions at the lowest level seeming indisputably self-evident, and self-evidence decreasing as we get more general and more abstract. Which is anodyne indeed.

Parsons Mathematical Thought: Sec. 51, Predicativity and inductive definitions

The final section of Ch. 8 sits rather uneasily with what’s gone before. The preceding sections are about arithmetic and ordinary arithmetic induction, while this one briskly touches on issues arising from Feferman’s work on predicative analysis, and iterating reflection into the transfinite. It also considers whether there is a sense in which a rather different (and stronger) theory given by Paul Lorenzen some fifty years ago can also be called ‘predicative’. There is a page here reminding us of something of the historical genesis of the notion of predicativity: but there is nothing, I think, in this section which helps us get any clearer about the situation with arithmetic, the main concern of the chapter. So I’ll say no more about it.

Parsons Mathematical Thought: Sec. 50, Induction and impredicativity, continued

Suppose we help ourselves to the notion of a finite set, and say x is a number if (i) there is at least one finite set which contains x and which, if it contains Sy, contains y, and (ii) every such finite set contains 0. This definition isn’t impredicative in the strict Russellian sense (as Alexander George points out in his ‘The imprecision of impredicativity’). Nor is it overtly impredicative in the extended sense covering the Nelson/Dummett/Parsons cases. We might argue that it is still covertly impredicative in the latter sense, if we think that elucidating the very notion of a finite set — e.g. as one for which there is a natural which counts its members — must in turn involve quantification over naturals. But is that right? This is where Feferman and Hellman enter the story. For, as Parsons remarks, they aim to offer in their theory EFSC a grounding for arithmetic in a theory of finite sets that is predicatively acceptable and that also explains the relevant idea of finiteness in a way that does not presuppose the notion of natural number. Though now things get a bit murky (and I think it would take us too far afield to pursue the discussion any further here). But Parsons’s verdict is that

EFSC admits the existence of sets that are specified by quantification over all sets, and this assumption is used in proving the existence of an N-structure [i.e. a natural number structure]. For this reason, I don’t think that … EFSC can pass muster as strictly predicative.

This seems right, if I am following. It would seem, then, that Parsons would still endorse the view that no explanation of the property natural number is in sight that is not impredicative in a broad sense — where an explanation counts as impredicative in the broad sense if it is impredicative in Russell’s sense, or in the Parsons sense, or invokes concepts whose explanation is in turn impredicative in one of those senses. But the question remains: what exactly is the significance of that broad claim if I am right that even e.g. a constructivist needn’t always have a complaint about definitions which are impredicative in a non-Russellian way? It would have been good to have been told.
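For what it’s worth, the finite-set definition mooted at the start of this section can be put in symbols like this (a sketch in standard notation, with ‘Fin(F)’ for ‘F is a finite set’ left as primitive):

```latex
\mathrm{Num}(x) \;:\equiv\;
  \exists F\,\bigl[\mathrm{Fin}(F) \wedge x \in F \wedge \forall y\,(Sy \in F \rightarrow y \in F)\bigr]
  \;\wedge\;
  \forall F\,\bigl[\bigl(\mathrm{Fin}(F) \wedge x \in F \wedge \forall y\,(Sy \in F \rightarrow y \in F)\bigr) \rightarrow 0 \in F\bigr]
```

The bound variable F ranges over finite sets, not over numbers, which is why the definition escapes both Russellian impredicativity and the Nelson/Dummett/Parsons kind — unless, as just mooted, the elucidation of ‘Fin’ itself reintroduces quantification over the naturals.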

Back, though, to the question of induction. Dummett, to repeat, says that “the totality of natural numbers is characterised as one for which induction is valid with respect to any well-defined property” including ones whose definitions “may contain quantifiers whose variables range over the totality characterised”. Likewise Nelson. Now, as a gloss on what happens in various formalized systems of arithmetic, that is perhaps unexceptionable. But does the totality of natural numbers have to be so characterized? Return to what I called the simplest explanation of the notion of the natural numbers, which says that (i) zero is a natural number, (ii) if n is a natural number, so is Sn, and (iii) whatever is a natural number is so in virtue of clauses (i) and (ii). This explanation, Parsons argued, sustains induction for any well-defined property. But as we noted before, that argument leaves it wide open which are the well-defined properties. So it seems a further thought, going beyond what is given in the simplest explanation, to claim that any predicate involving first-order quantifications over the numbers is in fact well-defined. There are surely arithmeticians of finitist or constructivist inclinations who fully understand the idea that the natural numbers are zero and its successors and nothing else, and understand (at least some) primitive recursive functions, but who resist the thought that we can understand predicates involving arbitrarily complex quantifications over the totality of numbers, since we are in general bereft of a way of determining in a finitistically/constructively acceptable way whether such a predicate applies to a given number.
To put it in headline terms: it is a significant conceptual move to get from grasping PRA to grasping (first-order) PA — we might say that it involves moving from treating the numbers as a potential infinity to treating them as a completed infinity — and it wants an argument that someone who balks at the move has not grasped the property natural number.

How much arithmetic can we get if we do balk at the extra move and restrict induction to those predicates we have the resources to grasp in virtue of grasping what it is to be a natural number (plus at least addition and multiplication, say)? Well, arguably we can get at least as far as IΔ0, and Parsons talks a bit about this at the end of the present section. He says, incidentally, that such a theory is ‘strictly predicative’ — but I take it that this is meant in a sense consistent with saying that an explanation ‘from outside’ of what the theory is supposed to be about, i.e. the natural numbers, is necessarily impredicative in the broad sense. I won’t pursue the details of the compressed discussion of IΔ0 here.
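For reference, the restriction that defines IΔ0 (in the standard formulation, not Parsons’s own notation) is that the induction schema

```latex
\bigl[\varphi(0) \wedge \forall x\,\bigl(\varphi(x) \rightarrow \varphi(Sx)\bigr)\bigr] \rightarrow \forall x\,\varphi(x)
```

is admitted only for Δ0 formulas φ, i.e. ones in which every quantifier is bounded by a term — of the form ∀y ≤ t or ∃y ≤ t — so that whether φ applies to a given number can in principle be settled by a bounded search.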

So where does all this get us? Crispin Wright has written

Ever since the concern first surfaced in the wake of the paradoxes, discussion of the issues surrounding impredicativity — when, and under what assumptions, are what specific forms of impredicative characterizations and explanations acceptable — has been signally tangled and inconclusive.

Indeed so! Given that tangled background, any discussion really ought to go more slowly and more explicitly than Parsons does. And I think we need to distinguish here grades of impredicativity in a way that Parsons doesn’t do. Agree that in the broadest sense an explanation of the natural numbers is impredicative: but this doesn’t mean that finitists or constructivists need get upset. Induction over predicates involving arbitrarily embedded quantifications over the numbers involves another grade of impredicativity, this time something the finitist or constructivist will indeed refuse to countenance. (I perhaps will return to these matters later: but for now, we must press on!)

Parsons’s Mathematical Thought: Sec. 50, Induction and impredicativity

Here’s the first half of an improved(?!?) discussion of this section: sorry about the delay!

Parsons now takes up another topic that he has written about influentially before, namely impredicativity. He describes his own earlier claim like this: “no explanation [of the predicate `is a natural number’] is in sight that is not impredicative”. That claim has been challenged by Feferman and Hellman in a couple of joint papers, and Parsons takes the present opportunity to respond. As the title of this section indicates, Parsons links claims about impredicativity to thoughts about the scope of induction: but as we’ll see, the link takes some teasing out.

What, though, does Parsons mean by impredicativity? Oddly, he doesn’t come out with a straight definition of the notion. Nor does he really explain why it might matter whether definitions of the natural numbers have to be impredicative. So before tackling his discussion, we’d better pause for some preliminary clarifications and reflections.

The usual sort of account of impredicativity, in the same vein as Russell’s original (or rather, as one of Russell’s originals), runs roughly like this: ‘a definition … is impredicative if it defines an object which is one of the values of a bound variable occurring in the defining expression’, i.e. an impredicative specification of an entity is one ‘involving quantification over all entities of the same kind as itself’. (Here, the first quotation is from Fraenkel, Bar-Hillel and Levy, Foundations of Set Theory, p. 38, one of a number of very similar Russellian definitions quoted by Alexander George in his ‘The imprecision of impredicativity’; the second, much more recent, quotation is from John Burgess’s Fixing Frege, p. 40.) Thus Weyl, famously, argued against the cogency of some standard constructions in classical analysis on the grounds of their impredicativity in this sense. (And because ACA0 bans impredicative specifications of sets of numbers, it provides one possible framework for developing those portions of analysis which should be acceptable to someone with Weyl’s scruples. Now, as Parsons in effect notes, a theory like ACA0 which lacks an impredicative comprehension principle is often described as being, unqualifiedly, a predicative theory of arithmetic: but that description takes it for granted that its first-order core — usually first-order Peano Arithmetic — isn’t impredicative in some other respect.)
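To spell out the ban just mentioned: ACA0’s comprehension schema (in the usual formulation) licenses only arithmetically definable sets, i.e.

```latex
\exists X\,\forall n\,\bigl(n \in X \leftrightarrow \varphi(n)\bigr),
\qquad \text{where } \varphi \text{ contains no bound set variables (and } X \text{ is not free in } \varphi\text{)}
```

so a set of numbers may never be specified by quantifying over the totality of sets of numbers to which it itself belongs.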

But why should we care about avoiding impredicative definitions for Xs? Why should such definitions lack cogency? Well, suppose we think that Xs are in some sense (however tenuous) ‘constructed by us’ and not determined to exist prior to our mathematical activity. Then, very plausibly, it is illegitimate to give a recipe for constructing a particular X which requires us to take as already given a totality of Xs which includes the very one that is now being constructed. So at least any definition which is to play the role of a recipe-for-construction had better not be impredicative. Given Weyl’s constructivism about sets, then, it is no surprise that he rejects impredicative definitions of sets. I’ll not pause to assess this line of thought any further here: but I take it that it is a familiar one. (By the way, I don’t want to imply that constructivist thoughts are the only ones that might make us suspicious of impredicative definitions: though as Ramsey and Gödel pointed out, it is far from clear why a gung-ho realist should eschew impredicative definitions.)

Now, on the Russellian understanding of the idea, a definition of the set of natural numbers will count as ‘impredicative’ if it quantifies over some totality of sets including the set of natural numbers. Modulated into property talk, we’d have: a definition of the property of being a natural number will count as impredicative if it quantifies over some totality of properties including the property of being a natural number. Some familiar definitions are indeed impredicative in this sense: take, for example, a Frege/Russell definition which says that x is a natural number iff x has all the hereditary properties of zero. Then the quantification is over a totality which includes the property of being a natural number, and the definition is impredicative in a Russellian sense. But are all explanations we might give of what it is to be a natural number impredicative in the same way?
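For the record, the Frege/Russell definition just mentioned reads, in second-order notation:

```latex
\mathrm{Nat}(x) \;:\equiv\; \forall F\,\bigl[\bigl(F0 \wedge \forall y\,(Fy \rightarrow F(Sy))\bigr) \rightarrow Fx\bigr]
```

Since the property variable F ranges over all properties — the property of being a natural number among them — the definiens quantifies over a totality that includes the very property being defined: impredicativity of the strict Russellian kind.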

Take, for example, what I’ll call ‘the simplest explanation’: (i) zero is a natural number, (ii) if n is a natural number, so is Sn, and (iii) whatever is a natural number is so in virtue of clauses (i) and (ii) — and hence, almost immediately, (iv) the natural numbers are what we can do induction over. This characterization of the property of being a natural number which Parsons gives in Sec. 47 does not explicitly involve a quantification over a class of properties including that of a natural number. And though it might be claimed that an understanding of the extremal clause (iii) requires a grasp of second-order quantification, I’ve urged before that this view is contentious (and indeed the view doesn’t seem to be one that Parsons endorses — see again the discussion of his Sec. 47). So here we have, arguably, an explanation of the concept of number which isn’t impredicative in the Russellian sense. But does the quantification in (iii) make the explanation impredicative in some different, albeit closely related, sense?

Well here’s Edward Nelson, at the beginning of his book, Predicative Arithmetic. In induction we can use what he calls ‘inductive formulae’ which involve quantifiers over the numbers themselves. This, he supposes, entangles us with what he calls an ‘impredicative concept of number’:

A number is conceived to be an object satisfying every inductive formula; for a particular inductive formula, therefore, the bound variables are conceived to range over objects satisfying every inductive formula, including the one in question.

Dummett, quoted approvingly by Parsons, says much the same:

[T]he notion of ‘natural number’ … is impredicative. The totality of natural numbers is characterised as one for which induction is valid with respect to any well-defined property, … the impredicativity remains, since the definitions of the properties may contain quantifiers whose variables range over the totality characterised.

So the thought seems to be that any definition of the numbers is more or less directly going to characterize them as what we can do induction over, and that ‘a characterization of the natural numbers that includes induction as part of it will be impredicative’ (to quote Parsons’s gloss). But note, Dummett says that there is impredicativity here, not because the totality of natural numbers is being defined in terms of a quantification over some domain which has as a member the totality of natural numbers itself (which is what we’d expect on the Russellian definition), but because the totality is defined in terms of a quantification whose domain is (or includes) the same totality. To quote Parsons again:

Because the number concept is characterized as one for which induction holds for any well-defined predicate or property, there is impredicativity if those involving quantification over numbers are included, as they evidently are.
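Schematically, the point is this. First-order induction is the schema

```latex
\bigl[\varphi(0) \wedge \forall n\,\bigl(\varphi(n) \rightarrow \varphi(Sn)\bigr)\bigr] \rightarrow \forall n\,\varphi(n)
```

and the impredicativity Parsons has in mind arises because the admissible instances φ may themselves contain quantifiers ∀m, ∃m ranging over the numbers — the very totality that induction is supposed to characterize.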

However, to repeat, that involves a non-Russellian notion of impredicativity. In fact it seems that Parsons would also say that an explanation of the concept P — whether or not couched as an explicit definition — is impredicative if it involves a quantification over the totality of things which fall under P. It is perhaps in this extended sense, then, that our ‘simplest explanation’ of the property of being a natural number might be said to be impredicative.

But now note that it isn’t at all obvious why we should worry about a property’s being impredicative if it is a non-Russellian case. Suppose, just for example, we want to be some kind of constructivist about the numbers: then how are our constructivist principles going to be offended by saying that the numbers are zero, its successors, and nothing else? Prescinding from worries about our limited capacities, the ‘simplest explanation’ of the numbers tells us, precisely, how each and every number can be ‘constructed’, at least in principle, and tells us not to worry about there being any ‘rogue cases’ which our construction rules can’t reach. What more can we sensibly want? We might add that, if we are swayed by the structuralist thought that in some sense we can only be given the natural numbers all together (whether by a general method of construction, or otherwise), then perhaps we ought to expect that any acceptable explanation of the property of being a natural number will — when properly articulated — involve us in talking of all the numbers, at least in that seemingly anodyne way that is involved in the extremal clause (iii) above.

These preliminary reflections, then, seem rather to diminish the interest of the claim that characterizations of the property natural number are inevitably impredicative, if that is meant in the Parsons sense. But be that as it may. Let’s next consider: is the claim actually true?

To be continued

Parsons’s Mathematical Thought: Sec. 49, Uniqueness and communication, continued

In sum, then, we might put things like this. Parsons has defended an ‘internalist’ argument — an argument from “within mathematics” — for the uniqueness of the numbers we are talking about in our arithmetic, whilst arguing against the need for (or perhaps indeed, the possibility of) an ‘externalist’ justification for our intuition of uniqueness.

Can we rest content with that? Some philosophers would say we can get more — and Parsons briefly discusses two, Hartry Field and Shaughan Lavine, though he gives fairly short shrift to both. Field has argued that we can appeal to a ‘cosmological hypothesis’ together with an assumption of the determinateness of our physical vocabulary to rule out non-standard models of our applicable arithmetic. Parsons reasonably enough worries: “If our powers of mathematical concept formation are not sufficient [to rule out nonstandard models], then why should our powers of physical concept formation do any better?” Lavine supposes that our arithmetic can be regimented as a “full schematic theory” which is in fact stronger than the sort of theory with open-ended induction that we’ve been considering, and for which a categoricity theorem can be proved. But Parsons finds some difficulty in locating a clear conception of exactly what counts as a full schematic theory — a difficulty on which, indeed, I’ve commented elsewhere on this blog.

In both cases, I think Parsons’s points are well taken: but his discussions of Field and Lavine are brief, and more probably needs to be said (though not here).
