At last week’s meeting of the Trinity Mathematical Society, Imre Leader and Thomas Forster gave introductory talks on “Does Mathematics need a Philosophy?” to a startlingly large audience, before a question-and-answer session. The topic is quite a big one, and the talks were very short. But here are just a few after-thoughts, in three instalments (primarily for mathmos such as the members of TMS, but others might be interested …).

Imre did very briskly sketch a couple of philosophical views about mathematics, which he called *platonism* and *formalism*. And he suggested that mathematicians tend to be platonist in their assumptions about what they are up to (in so far as they presume that they are exploring a determinate abstract mathematical universe, where there are objective *truths* to be discovered) but they turn formalist when writing up their proofs for public consumption.

Now, Imre characterized *formalism* as an account of the nature of mathematics along the lines of “it’s all juggling with meaningless symbols, a game of seeing what symbol strings you can ‘deduce’ from other strings according to given rules”. It is worth remarking that (at least as far as serious players in the history of philosophical reflection about mathematics go) this is something of a straw position: for example, the great David Hilbert is usually taken to be the paradigm formalist; but his position was a lot more nuanced than *that*. But ok, I’ve heard other mathematicians too describe the same naive kind of it’s-all-symbol-juggling line: so, for present purposes let’s understand ‘formalism’ as Imre did, though nothing really hangs on this. And the point I want to make is that it is a mistake to conflate endorsing *formalism* of any kind, naive or sophisticated, with something quite different, namely pursuing the project of *formalization*. Since I’ve also more than once heard others just make the same conflation, it’s worth pausing to pick it apart. (If some of the following sounds familiar to regular readers of this blog, it is because I’m shamelessly plagiarizing my earlier self.)

Start from the observation that, in presenting complex mathematical arguments, it helps to regiment our propositions into mathematical-English-plus-notation in ways which are expressly designed to be precise, free from obscurities, and where the logical structure of our claims is clear [think of the way we use the quantifier/variable notation — as in ‘∀ε ∃δ’ — to make the structure of statements of generality crystal clear]. Then we try to assemble our propositions into something approximating to a chain of formal deductions. Why? Because this enforces honesty: we have to keep a tally of the premisses we invoke, and of exactly what inferential moves we are using. And honesty is the best policy. Suppose we get from the given premisses to some target conclusion by inference steps each one of which is obviously valid (no suppressed premisses are smuggled in, and there are no suspect inferential moves). Then our honest toil buys us the right to confidence that our premisses really do entail the desired conclusion. Hooray!

True, even the most tough-minded mathematics texts are written in an informal mix of ordinary language and mathematical symbolism. Proofs are very rarely spelt out in every formal detail, and so their presentation still falls short of the logicians’ ideal of full formalization. But we will hope that nothing stands in the way of our more informally presented mathematical proofs being sharpened up into fully formalized ones. Indeed, we might hope and pray that they could ideally be set out in a strictly regimented formal language of the kind that logicians describe (and which computer proofs implement), with absolutely every tiny inferential move made totally explicit, so that everything could be mechanically checked as being in accord with some overtly acknowledged rules of inference, with the proofs ultimately starting from our stated axioms.
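To make that ideal concrete, here is a minimal sketch of what a fully formalized proof looks like in the Lean 4 proof assistant (the particular theorem and proof steps are my own illustrative choice, not anything from the talks): a tiny arithmetic fact proved by induction, with every inferential move recorded so that the checker can verify each one mechanically.

```lean
-- A fully explicit proof that 0 + n = n, by induction on n.
-- Every step is recorded, and the kernel checks each against
-- the formal rules -- nothing is taken on trust.
theorem zero_add' : ∀ n : Nat, 0 + n = n := by
  intro n
  induction n with
  | zero => rfl                            -- base case: 0 + 0 = 0 by computation
  | succ k ih =>                           -- inductive step, with ih : 0 + k = k
    show 0 + Nat.succ k = Nat.succ k
    calc 0 + Nat.succ k
        = Nat.succ (0 + k) := rfl          -- addition computes on the successor
      _ = Nat.succ k       := by rw [ih]   -- apply the induction hypothesis
```

Nothing in the informal proof changes; the point is only that each appeal to computation or to the induction hypothesis has become an explicit move that can be checked against overtly acknowledged rules.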

True again, the extra effort of laying out everything in complete detail will almost never be worth the cost in time and ink. In mathematical practice we use enough formalization to convince ourselves that our results don’t depend on illicit smuggled premisses or on dubious inference moves, and leave it at that — our motto is “sufficient unto the day is the rigour thereof”. Here are local heroes Whitehead and Russell making the point in *Principia*:

> Most mathematical investigation is concerned not with the analysis of the complete process of reasoning, but with the presentation of such an abstract of the proof as is sufficient to convince a properly instructed mind.

(A properly instructed mind being, like them, a Trinity mathmo.)

Let’s all agree, then: *formalization* (at least up to a point) is a Good Thing, because a proof sufficiently close to the formalized ideal is just the thing you need in order to check that your bright ideas really do fly and then to convince the properly instructed minds of your readers. [Well, being a sort-of-philosophical remark, you’ll be able to find some philosophers who seem to disagree, as is the way with that cantankerous bunch. But the dissenters are usually just making the point that producing formalizable proofs isn’t the be-all and end-all of mathematics — and we can happily agree with *that*. For a start, we often hanker after proofs that not only work but are in some way *explanatory*, whatever exactly that means.]

So Imre would have been dead right if he had said that mathematicians are typically (demi-semi)-*formalizing* when they check and write up their proofs. But in fact, having described formalism as the game-with-meaningless-symbols idea, he said that mathematicians turn *formalist* in their proofs. Yet that’s a quite different claim.

Anyone who is tempted to run them together should take a moment to recall that one of the earliest clear advocates of the virtues of formalization was Frege, the original arch anti-formalist. But we don’t need to wheel out the historical heavy guns. The key point to make here is a very simple one. Writing things in a regimented, partially or completely symbolic, language (so that you can better check what follows from what) doesn’t mean that you’ve stopped expressing propositions and started manipulating meaningless symbols. Hand-crafted, purpose-designed languages are still languages. The move from ‘two numbers have the same sum whichever way round you add them’ to e.g. ‘∀x ∀y (x + y = y + x)’ changes the medium but not the message. And the fact that you can and should temporarily *ignore* the meaning of non-logical predicates and functions while checking that a formally set-out proof obeys the logical rules [because the logical rules are formalized in syntactic terms!], doesn’t mean that non-logical predicates and functions don’t any longer *have* a meaning!
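The same point shows up in a proof assistant (another minimal Lean 4 sketch of my own, using the core library lemma `Nat.add_comm`): the regimented sentence remains a meaningful proposition about numbers, even though the machinery that validates its proof looks only at syntax.

```lean
-- The ordinary claim "two numbers have the same sum whichever way
-- round you add them", regimented as a formal sentence. It still
-- *says* that addition is commutative; the checker simply doesn't
-- need to consult that meaning to verify the proof term.
theorem sum_either_way : ∀ x y : Nat, x + y = y + x := by
  intro x y
  exact Nat.add_comm x y
```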

In sum then, the fact that (on their best public behaviour) mathematicians take at least some steps towards making their proofs formally kosher doesn’t mean that they are being (even temporary) formalists.

Which is another Good Thing, because outright naive formalism of the “it’s all meaningless symbols” variety is a pretty wildly implausible philosophy of mathematics. But that’s another story ….

*To be continued*

The Platonist / Formalist point (I think) I’ve seen before is a bit different. It’s the same in the Platonism part; but in the Formalism part, it doesn’t say mathematicians turn formalist when writing up their proofs, though it agrees the formalism is “for public consumption.” Instead, the idea seems to be that their public version of what mathematics is about, and of what mathematicians are doing, is proof, and not “exploring a determinate abstract mathematical universe, where there are objective truths to be discovered”.

I think one place where I’ve seen that version is in Davis and Hersh, *The Mathematical Experience*.

Yes, I think that is quite a common idea, that mathematicians tend to be platonists in their secret hearts but formalists when pressed for a story for public consumption. But that could just mean that mathematicians tend to get into a muddle! Imre Leader’s line is interesting because — albeit by sliding from thoughts about the virtues of formalisation to thoughts about formalism — it at least tries to give a rationale for the formalist turn.

What is that rationale? (I’m having trouble extracting it from your comments.)

Ok, ‘rationale’ is too generous … !

As far as I know, formalism construed as “juggling meaningless symbols” is just a straw man. Nobody ever held such a position, certainly not Hilbert (who knew better). The one who came closest to holding the extreme position, perhaps, was Curry.

Absolutely. Yes, I should certainly have said that, particularly given the intended readers. I’ll edit. Thanks!

Yes. I think there’s a conflation of formalism (the parodic version) with some version of finitism. And while it isn’t a particularly forgivable conflation, it’s understandable.


Of course, it is hard to know what is in the mind, precisely, of a mathematician, in relation to `what is the status of his philosophy' — and indeed perhaps the mathematician himself would not know — but let me explain what I meant by `formalism', and why it is different to what most mathematicians would call `being completely rigorous' or `being formal', and also why I do think that mathematicians are (or would like to be) formalists when doing their proofs.

First of all, of course I was using `meaningless strings of symbols' as a nice way of talking to an audience — what I mean precisely is: `writing a proof where the proof can be checked mechanically, in the sense that what we have to check about the purported proof to see that it is indeed a valid proof is some purely syntactic things, not things where the meaning of the terms is relevant'.

That’s what I would call being `formalist' when doing proofs. In contrast, `formalising' sounds to me like `being completely rigorous and precise'. Let me use those two terms like that, and say what the difference is, and give an example to explain in what sense mathematicians are aspiring to be formalists, even if they have no idea how to be formalists and only know how to do formalising.

Let me take as an example some simple number theory. Suppose we present the Peano axioms (the usual three of Peano — I am not talking first-order logic here). And suppose a lecturer says `we can define a function from N to N as follows: it is called `factorial', and is written n!, and is defined by: f(1) = 1 and f(n) = n·f(n−1) for n bigger than 1'.

A student might say `why is that defined?' And the careful lecturer would say `it is defined for all n, by induction on n'. If the student said `what is the property p(n) to which you are applying the third Peano axiom', the lecturer would say `p(n) is “n! is defined”'. All well and good and completely careful and rigorous etc. That’s what I would call `formalising'.

But now suppose the student said `wait, what is this word `defined'? Is it some primitive notion? What do you mean?' Then here are three possible answers…

A) Aha, good point. It looks like this is some weird primitive notion. But there is an amazing super-clever trick for making it precise: the notion of an `attempt'. (And goes on to explain what an attempt is.)

B) I myself don’t know how to get rid of that word `defined', but logicians have sorted it all out and I know it is OK. I mean, I know it can be turned into a statement (in a proof) in a way that can be mechanically checked.

C) Ah yes, that is a famous grey area, and people who work in Foundations have still not sorted out how to make sense of that kind of statement — the word `defined' is not really understood, in the sense that for 100 years people have worried about how to make that into a mechanically-checkable construct.

I claim that all mathematicians would want to answer A or B, and would be really unhappy if the answer were C (which it isn’t, of course). Very few mathematicians would know A (unless they were logicians!), but all would feel cosy with B, and if logicians told them C they would feel queasy and unhappy.

And that is why I claim that mathematicians, when doing proofs, are being formalist (as I’ve defined it above), and not just being formalisers (again, in the sense I describe above). This is an important distinction, as most mathematicians would not know or care exactly how to mechanise their statements, but I believe it is very important to them to know it can be done. The above example of `attempts' illustrates this very clearly.
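As it happens, answer B can be cashed out quite literally nowadays. Here is a sketch in Lean 4 (my own illustration, with the recursion started at 0 rather than 1 to match Lean's conventions): the proof assistant's checker certifies, purely mechanically, that the recursion terminates, which is exactly the sense in which the factorial is `defined for all n'.

```lean
-- The lecturer's factorial as a formal definition. Lean's checker
-- verifies syntactically that the recursion is on a structurally
-- smaller argument, so the function is total: no unanalysed
-- primitive notion of `defined' is left over.
def fact : Nat → Nat
  | 0     => 1
  | n + 1 => (n + 1) * fact n

#eval fact 5   -- 120
```

Here the mechanical check of structural recursion is doing the work that the `attempts' trick does by hand: the well-definedness claim is discharged by a syntactic check rather than by an appeal to intuition.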