*Some final thoughts after the TMS meeting last week (again, mostly intended for local mathmos rather than the usual philosophical readers of this blog …).*

Consider again that rather unclear question ‘Does mathematics need a philosophy?’. Here’s another way of construing it:

Are mathematicians inevitably guided by some general conception of their enterprise — by some ‘philosophy’, if you like — which determines how they think mathematics should be pursued, and e.g. determines which modes of argument they accept as legitimate?

Both Imre Leader and Thomas Forster touched on this version of the question in very general terms. But to help us think about it some more, I suggest it is illuminating to add a bit of detail and revisit a genuine historical debate.

We need a bit of jargon first (which comes from Bertrand Russell). A definition is said to be *impredicative* if it defines an object *E* by means of a quantification over a domain of entities which includes *E* itself. An example: the standard definition of the infimum of a set *X* is impredicative. For we say that *y = inf(X)* if and only if *y* is a lower bound for *X*, and for any lower bound *z* of *X*, *z ≤ y*. And note that this definition quantifies over the lower bounds of *X*, one of which is the infimum itself (assuming there is one).
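Spelled out in quantifier notation (just a formalisation of the definition above), the circularity is easy to see:

```latex
% y = inf(X) iff y is a lower bound of X, and no lower bound of X exceeds y:
y = \inf(X) \iff
  (\forall x \in X)\, y \le x
  \;\wedge\;
  (\forall z)\bigl[(\forall x \in X)\, z \le x \rightarrow z \le y\bigr]
% The second conjunct quantifies over all the lower bounds z of X --
% a domain which (when inf(X) exists) includes y itself.
```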

Now Poincaré, for example, and Bertrand Russell following him, famously thought that impredicative definitions are actually as bad as more straightforwardly circular definitions. Such definitions, they suppose, offend against a principle banning viciously circular definitions. But are they right? Or are impredicative definitions harmless?

Well, local hero Frank Ramsey (and Kurt Gödel after him) equally famously noted that some impredicative definitions are surely entirely unproblematic. Ramsey’s example: picking out someone as the tallest man in the room (the person such that no one in the room is taller) is picking him out by means of a quantification over the people in the room who include that very man, the tallest man. And where on earth is the harm in that? Surely, there’s no harm at all! In this case, the men in the room are *there anyway*, independently of our picking any one of them out. So what’s to stop us identifying one of them by appealing to his special status in the plurality of them? There is nothing logically or ontologically weird or scary going on.
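Ramsey’s point can even be mimicked in a toy computation (a hypothetical illustration, not from the original discussion): we pick someone out by a condition that quantifies over a collection already containing that very person, and nothing goes wrong, because the collection is there independently of the picking-out.

```python
# A toy illustration of Ramsey's point: the people are "there anyway",
# and we pick one of them out by a condition that quantifies over all
# of them -- including the person being picked out.
people = {"Alice": 168, "Bob": 185, "Carol": 174}  # heights in cm

# "the person such that no one in the room is taller": the defining
# condition ranges over the whole room, tallest person included.
tallest = [p for p in people
           if all(people[q] <= people[p] for q in people)]

print(tallest)  # ['Bob']
```

The ‘impredicativity’ here is plainly harmless: the selection doesn’t create Bob, it merely singles him out from an independently existing plurality.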

Likewise, it would seem, in other contexts *where we take a realist stance*, and where we suppose that – in some sense – reality already supplies us with a fixed totality of the entities to quantify over. If the entities in question are ‘there anyway’, what harm can there be in picking out one of them by using a description that quantifies over some domain which includes that very thing?

Things are otherwise, however, if we are dealing with some domain with respect to which we take a less realist attitude. For example, there’s a line of thought which runs through Poincaré, the early Russell, the French analysts such as Borel, Baire, and Lebesgue, and then is particularly developed by Weyl in his *Das Kontinuum*: the thought is that mathematics should concern itself only with objects which can be defined. [This connects with something Thomas Forster said, when he rightly highlighted the distinctively modern conception of a function as any old pairing of inputs and outputs, whether we can define it or not — this is the ‘abstract nonsense’, as Thomas called it, that the tradition from Poincaré to Weyl and onwards was standing against.] In that tradition, to quote the later great constructivist mathematician Errett Bishop,

A set [for example] is not an entity which has an ideal existence. A set exists only when it has been defined.

On *this* line of thought, defining a set is – so to speak – defining it into existence. And from this point of view, impredicative definitions will indeed be problematic. For the definitist thought suggests a hierarchical picture. We define some things; we can then define more things in terms of those; and then define more things in terms of those; and so on. But what we can’t do is define something into existence by impredicatively invoking a whole domain of things already including the very thing we are trying to define into existence. That indeed would be going round in a vicious circle.

So the initial headline thought is this. If you are full-bloodedly realist — ‘Platonist’ — about some domain, if you think the entities in it are ‘there anyway’, then you’ll take it that impredicative definitions over that domain can be just fine. If you are some stripe of anti-realist or constructivist, you will probably have to see impredicative definitions as illegitimate.

Here then, we have a nice example where your philosophical Big Picture take on mathematics (‘We are exploring an abstract realm which is “there anyway”’ vs. ‘We are together constructing a mathematical universe’) does seem to make a difference to what mathematical devices you can, on reflection, take yourself legitimately to use. *Hence the fact that standard mathematics is up to its eyes in impredicative constructions rather suggests that, like it or not, it is committed to a kind of realist conception of what it is up to.* So yes, it seems that most mathematicians are implicitly caught up in some general realist conception of their enterprise, as Imre and Thomas in different ways came close to suggesting. In the terms of the previous instalment, we can’t, after all, so easily escape entanglement with some of the Big Picture issues by saying ‘not our problem’.

Return to the story I gestured at in the last instalment about what I called the Battle of the Isms. I rather cheated there by assuming that the game was taking mathematics uncritically as it is and seeing how it fits into the rest of our story of the world and of our cognitive grasp of the world. In other words, I took it for granted that the enterprise of trying to get an overview, trying to understand how mathematics fits together with other forms of enquiry, isn’t going to produce some nasty surprises and reveal that the mathematicians might somehow have been doing some of it wrong, and need to mend their ways! *But as we’ve just been noting, historically that isn’t how it was at all*. So while Logicism (which Imre mentioned) and Hilbert’s sophisticated version of Formalism were conservative Isms, which were supposed to give us ways of holding on to the idea that — despite its very peculiar status — classical mathematics is just fine as it is, these positions were up against some radically critical strands of thought. These included, famously, Brouwer’s *Intuitionism* as well as Weyl’s *Predicativism*. The critics argued that the classical maths of the late nineteenth century had over-reached itself in descending into ‘abstract nonsense’ (which was why we got a crisis in foundations when the set-theoretic and other paradoxes were discovered), and that to get out of the mess we need to stick to more constructivist/predicativist styles of reasoning, recognising that the world of mathematics is in some sense our construction (which you might think has something to do with how we can get to know about it).

Now, that’s more than a little crude, and we can’t follow those debates any further here. As a thumbnail history, though, what happened is that, as far as mathematical practice is concerned, the conservative classical realists won. Predicative analysis, for example, survives in a small back room of the mansion of mathematics, where its practitioners still like to show off how far you can get hopping on one leg, with an arm tied behind your back — as the lovers of abstract nonsense, as Thomas described himself, might put it. Though, by the way, it very importantly turns out that predicative analysis is all that science actually needs (so we don’t have, so to speak, external, practical reasons for going classical). But the victory of the classical realists wasn’t a conceptually well-motivated philosophical victory — there are such things, sometimes, but this certainly wasn’t one of them. The conceptual debates spluttered on and on, but the magisterial authority of Hilbert and others was enough to convince most mathematicians that they needn’t change their way of doing things. So they didn’t.

Yet it seems that we can imagine things having gone differently on some Twin Earth, where the internal culture (the philosophy, if you like) of mathematicians developed differently over a hundred years, so that low-commitment approaches were particularly prized, and the constructivists/predicativists got to occupy the main rooms of the mansion, dishing out the grants to their students, while the lovers of abstract nonsense were banished to the attics to play with their wild universe of sets in the Department of Recreational Mathematics. Or if we can’t imagine that, why not?

There’s a lot more to be said. But maybe, just maybe, it does behove mathematicians — before they scorn the philosophers — to reflect occasionally that it really *isn’t* quite so obvious that our mathematical practice is free from deep underlying philosophical presumptions (even in a broad, Big Picture sense).