Weir’s formalist account of arithmetic in headline form comes to this: the arithmetical claim *P* is correct just in case there is (or in practice could be) a concrete proof of *P*. (We’ll stick to considering the arithmetical case.)

Weir needs proofs to be *concrete*: treating proofs as abstract objects wouldn’t at all chime with his naturalist worries about abstracta. He needs to allow the practical *possibility* of a concrete proof to suffice for truth, or else — for all he knows — there could be random gaps even among the simple computable truths about small numbers, because no one has yet bothered to write down a computation. But the modality had better not be an extravagant one (“in some possible world, however remote, where our counterparts can e.g. handle proofs with more steps than particles in the actual world, …”), or again we could be taking on modal commitments that are inimical to Weir’s strong naturalism: so, as we said, we need some notion of *practical* possibility here.

Are these concrete proofs particular tokens? If so, you and I always give different proofs. That wouldn’t tally with our ordinary ways of talking of proofs (when we talk of Euclid’s proof of the infinitude of primes, we don’t mean just some ancient token; when we say there is only one known proof of *P* we don’t mean that it has only been written down once). But Weir wants to avoid proof-types (or he is back with the unwanted abstracta); so he makes play with a notion of a proof-*tipe*, where a tipe is a mereological sum of equiform tokens. Thus understood, a proof tipe is a scattered concrete thing, and you and I giving the same proof — to speak with the vulgar — is a matter of you and I producing tokens that are part of the same proof tipe. (Weir thinks that for his purposes he can get away with a lightweight mereological theory that doesn’t get tangled with issues about e.g. absolutely unrestricted principles of composition. Let’s give him that assumption. It will be the least of our worries.)

Now, concrete numerals and concrete proofs are few (even counting in practically possible ones as well). The obvious challenge to Weir’s position is that his formalism will therefore lead to some kind of finitist revisionism rather than conserving the arithmetic we know and love.

To press the point, take a concretely statable claim *P* that some very large number *n* is prime. Then it could well be that there is no practically possible concrete proof of either *P* or not-*P* in your favourite formal system *S* (the system whose concrete proofs, according to Weir, make for the correctness of arithmetical claims on your lips). Yet it is an elementary arithmetical claim that either *P* or not-*P*, a truth now seemingly without a formalist-approved truth-maker (given that Weir endorses a standard truth-functional account of the connectives). What to do?
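The tension can be laid out as a short derivation (the symbols here are purely illustrative, not Weir’s: $\mathrm{T}$ for correctness, $\Diamond_S$ for ‘there is a practically possible concrete $S$-proof of’):

```latex
\begin{align*}
(1)\quad & \mathrm{T}(P \lor \neg P) && \text{excluded middle, which Weir keeps}\\
(2)\quad & \mathrm{T}(P) \lor \mathrm{T}(\neg P) && \text{truth distributes over disjunction}\\
(3)\quad & \mathrm{T}(Q) \text{ iff } \Diamond_S Q && \text{the headline formalist claim}\\
(4)\quad & \Diamond_S P \lor \Diamond_S \neg P && \text{from (2) and (3)}
\end{align*}
```

And (4) contradicts the hypothesis that neither *P* nor not-*P* has a practically possible concrete *S*-proof.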

Well at this point I struggle. The idea seems to be this. Here is our practice of producing concrete formal proofs in the system *S*. Like other bits of the world, we can theorize about this, doing some applied mathematics *M* which — like other bits of applied mathematics — purports to talk about some idealized mathematical model, in this case a model of the real-world practice of proving things in *S*. And now, we *do* have a concrete *M*-proof that, in the idealized model, the model’s representative of the concrete claim that either *P* or not-*P* is formally decidable. So,

the EXISTENCE [remember: small caps indicate talk-in-the-metaphysics room] of concrete indeterminables [like P] should not inhibit reasoners from applying excluded middle to them so long as THERE IS a concrete proof that, in a legitimate idealization, the image of the indeterminable is decidable in the formal sense. (TTP, p. 205)

But hold on! Setting aside worries about ‘legitimate’, what makes an arithmetical claim correct, on your lips, is by hypothesis the practical availability of a concrete proof in a formal game *S* [maybe different speakers are working in different games with appropriate mappings between them, but let’s not worry about that now]. So by hypothesis, a concrete proof in a *different* bit of mathematics *M* — a bit of applied mathematics about an idealized model of our arithmetical practice — just isn’t the sort of thing that can make an arithmetical claim correct. If we allow moves external to *S* to make-true arithmetical claims, then *S* isn’t after all the formal system proofs within which provide correctness conditions for arithmetic.

So on the face of it, it looks as if Weir has simply cheated here by changing the rules of the formalist game midstream! And I’m evidently not alone in thinking this: John Burgess in his *Philosophia Mathematica* review finds himself baffled by the same passage. Weir himself recognizes that his readers might find the move here sticky. In one of his comments in the thread following my last blog post on his book, he writes

If we want to know whether a particular area is bivalent … we don’t ask whether for every token there is a token proof or disproof, we ask if there is a concrete proof of a formal negation-completeness result for the idealised theory. If so, we are allowed to lay down all instances of LEM for that sub-language as axioms … even if that means that some concretely provable theorems — t is prime or t is not prime, say — have disjuncts which are neither concretely provable nor refutable. I suspect this get-out-of-jail manoeuvre will prove one of the biggest sticking points in the reception of the book, but I’m still confident about its legitimacy!

Like Burgess, I can’t share Weir’s confidence.

Thanks Peter: well, my prediction as to where I’d meet resistance was unsurprisingly right, but it is worrying for me when the likes of John Burgess and yourself don’t even seem to see the argument, rather than seeing it but rejecting it. (Burgess seems to think a passage in the bottom half of p. 205 is the argument but it’s much more extended than that.)

A clarification. You write

‘what makes an arithmetical claim correct, … is the practical availability of a concrete proof in a formal game S … . So by hypothesis, a concrete proof in a different bit of mathematics M — a bit of applied mathematics about an idealized model of our arithmetical practice — just isn’t the sort of thing that can make an arithmetical claim correct.’

I agree entirely. That’s why, when moving from decidable fragments of arithmetic to full arithmetic, I reject (pp. 123-4), as a resolution of the problem Gödel’s 1st theorem causes, appeal to a proof of a Gödel sentence in a stronger system. This does not show the sentence is a truth of the original system, for what is proved true expresses a different proposition (the derivability relation in the extended system being different). Similarly, if your formal system of arithmetic really is PA then a concrete proof, in ZFC say, that an undecidable string such as Goodstein’s theorem is true in all models of a standard type is not a concrete proof of the PA sentence.

It’s different if you give a concrete proof in another, perhaps stronger, system that there is an abstract proof within the original system itself, the sort of case we get with Boolos’ curious inference. I would count that as a concrete proof of the proposition expressed by the sentence itself in the original system (cf. p. 203, fn. 14).

But neither of these cases is relevant to my problem case. I’m not claiming that for every concretely expressible primality claim P there will be a concrete proof, in some system or other, that there is a concrete proof in Delta-zero arithmetic of P, or else a concrete proof that there is a Delta-zero refutation of P. I’m saying that if one adds P v ¬P as an axiom in each such case then there will be a very short concrete proof of each LEM P v ¬P, and it is legitimate to do so because there is (or could be if we expanded the sketch) a proof of the negation-completeness of Delta-zero arithmetic.
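Schematically, the claimed licence runs like this (the symbols are illustrative, not from the book: $\vdash_{\Delta_0}$ for derivability in the Delta-zero system, $M$ for the metatheory):

```latex
\underbrace{M\text{-proof that for every } \Delta_0 \text{ sentence } \phi,\
\vdash_{\Delta_0}\phi \ \text{ or } \ \vdash_{\Delta_0}\neg\phi}_{\text{concrete negation-completeness proof}}
\;\Longrightarrow\;
\underbrace{\text{each } \phi \lor \neg\phi \text{ may be laid down as an axiom}}_{\text{whence a one-line concrete proof of each LEM instance}}
```

The point of the schema: the concrete proof that licenses the new axioms lives in $M$, while the resulting one-line proofs of $\phi \lor \neg\phi$ live in the object system itself.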

The problem I then sought to resolve is that given the other things I say, P v ¬P is true, truth distributes over disjunction so one or other is true, but truth = concrete provability is my ‘headline’ claim; yet I readily admitted that in some cases neither will be concretely provable.

I don’t think moving to supervaluationism or an inferentialist account of the connectives will help me. (Burgess is wrong to say I ‘simply ignore inferentialism’. True I don’t really tackle it but I do acknowledge it in a non-negligible footnote p. 78 fn14 citing work including my own on that approach.)

The reason is that the, as I see it, atypical cases where concrete truth and concrete proof come apart are, as it were, scattered unsystematically all over the language with no bound we can set in advance on length (not physical) of abstract type or complexity of the atypical cases. Our endemic and unavoidable use of abbreviation, in ways which cannot be constrained in advance, sees to that. So no systematic theory will match the jaggy, unsystematic, patchy mess which is the concrete corpus of linguistic reality.

So, to return to the problem for me thus posed: what intellectual crime am I, as a formalist, committing in affirming t is prime or not in cases where I agree there is no concrete proof or refutation? In general I think I should affirm a mathematical thesis of formal system S only when I think we could come up with a concrete proof in S (which might be a concrete proof in S* that an abstract proof in S exists). Ditto for refutations and affirming negations. But in the case at hand, I will be agnostic on ‘t is prime’ and agnostic on ‘t isn’t prime’. I do affirm ‘t is prime or not’, but there is a one-line concrete proof of this because I’ve added all Delta-zero instances of LEM, because it is true (there is a concrete proof in the formal metatheory) that there is an abstract proof or disproof of each.

So my intellectual crime lies not in affirming a thesis where, as a formalist, I shouldn’t, but in assuming a distributivity of truth where, critics will allege, I’ve no right to. I suppose the crime is brute inconsistency if I say neo-formalism is: for every true concrete utterance there is a concrete structure which we, with our actual powers, could verify is a proof, under the rules of the system in question.

So I don’t say that; I made the weaker claim that that is typically the case. The reason I believe it typically holds is not because I’ve performed an enumerative induction, scouring the world for examples. Similarly the reason I believe no concrete utterance of a standard system with brackets has unequal numbers of left and right brackets is not because I’ve gone around totting up confirming examples. It’s because I accept a rather simple concrete inductive proof re the formal idealisation and read that result back into the concrete tokens (discarding as ill-formed unbalanced concrete tokens on the basis of the formal result). On pp. 198ff I attack strict or ultra-finitism, claiming that though we can characterise wff and proof ‘tipes’ as non-abstract surrogates for types, with tokens as concrete parts, we can prove no interesting generalisations except by going to the formal idealisation and using the usual methods, recursive structure, proof by induction etc. I take it you aren’t going to object to that or side with the strict finitist.
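The bracket example can be put as a toy computation. The little grammar below is an illustrative stand-in for the formal idealisation (the names `wffs` and `balanced` are mine, not anything in the book): each grammar clause adds exactly one matched pair of brackets, so a one-line induction shows every generated wff is balanced, and we read that result back onto the concrete tokens rather than surveying them.

```python
def balanced(s: str) -> bool:
    """A string is balanced if brackets pair off and never close early."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # a ')' with no matching '('
                return False
    return depth == 0

def wffs(max_steps: int) -> set[str]:
    """Generate wffs by the recursive clauses: atoms are wffs; if A and B
    are wffs, so are (~A) and (A&B).  Each clause adds one matched pair
    of brackets -- that is the inductive step of the balance proof."""
    formulas = {'p', 'q'}
    for _ in range(max_steps):
        formulas |= {f'(~{a})' for a in formulas}
        formulas |= {f'({a}&{b})' for a in formulas for b in formulas}
    return formulas

# Spot-checking finitely many tokens only confirms what the induction
# on the grammar already guarantees for all of them.
assert all(balanced(w) for w in wffs(2))
assert not balanced(')p(')
```

The `assert` over `wffs(2)` plays the role of totting up confirming examples; the warrant for the general claim is the induction on the grammar, not the enumeration.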

If we can (concretely) prove negation-completeness for the abstract idealisation of a sub-language, then adding (despite the redundancy) LEM for every wff in that sublanguage is, I say, legitimate, even though it can then arise that we have a concrete proof of P v ¬P but we know there can be no concrete proof of P and none of ¬P. This doesn’t tell against the theory that, in slogan form, truth for concrete tokens is concrete provability, any more than, if by dint of an abbreviatory convention I introduce ‘¬(two to the power a trillion) 7 is prime’ as a concrete token of an abstract formula consisting of the abstract type of ‘7 is prime’ prefaced by 2 to the trillion negation signs, we should deny that the sentence has 2 to the (trillion−1) true constituents. There certainly won’t be that many concrete tokens. But we can establish no non-trivial generalities about concrete tipes except by proving things about the abstract systems, often inductively generated, and then working back.

So once again, what intellectual crime do I, as a formalist, commit if I affirm t is prime or not prime, affirm its truth, and also agree both disjuncts lack concrete proofs? The crime I am to be accused of isn’t even stateable in the concrete mereological language of proof tipes. The notions of wff and truth needed to state the problem, “one disjunct of the instance of LEM is true, but neither disjunct is concretely provable”, are only available in the formal theory of the language where we define wff and truth recursively. (To be a deflationist about truth in natural language does not mean one has to deny one can have formal explicit definitions of truth in formal languages.)

I’m entitled to the formal language, and to there being truths in it, because there are typical concrete proofs (or the proof sketches could in a reasonable sense be expanded to proofs) of the results I need in proof theory. But in the language in which the problem is stateable I can demonstrate the resolution: if we can prove that the abstract idealisation of concrete provability sustains negation-completeness then we are entitled to affirm LEM; if not, not. The second part here shows that the resolution is not a trite or hand-waving one. If I’m wrong about infinitary logics being a legitimate idealisation, then I’d have to conclude that LEM does not hold in general for full arithmetic. That is, in fact, what I say about ZFC towards the end of the book, except I allow that it may be acceptable, pro tem, to reason classically if one thinks a negation-complete (in infinitary logic) extension of current theory is feasible.

The picture then is this: all there is in mathematical reality are our concrete utterances, of theses, proof sketches, proofs plus some non-abstract, perhaps mereological, groupings of these into patterns realised in different tokens. Our grasp of mathematics consists entirely in our dispositions to utter tokens and respond to utterances in the appropriate way, in particular (in pure maths) showing the right sort of sensitivity to presentations of proof and refutation tokens. Among the tokens we grasp are those in formal metatheory and applied maths in general. The metatheory we adopt to explain how truth of compounds systematically depends on parts, ditto for proofs and so on, should determine and constrain what laws and principles we lay down as normative constraints. LEM is only justified, then, if negation-completeness is concretely provable of the formal idealisation of the theory. To say that truth for concrete utterances = (typically) concrete provability and that the exceptions are untroubling if the right metatheoretic relations hold between idealised truth and idealised provability, is not at all the same as saying truth for concrete utterances = abstract provability.

To object to my account, it seems to me, one has either to come down on the side of the strict finitist and deny that it is legitimate to establish results about our concrete corpus indirectly, by establishing results about the formal idealisation. (Not a good move, I say.) Or to say that even if truth in, say, decimal arithmetic could typically be constituted by the existence of concrete proofs, that is not true in proof theory, model theory etc. But why not? I don’t see why there should be a difference.

Many thanks for this, Alan. I was brutally short in my post, hoping to provoke just such a brisker explanation from you of your position here! I’ll muse and respond …

Many thanks again to Alan for his long post.

He writes “In general I think I should affirm a mathematical thesis of formal system S only when I think we could come up with a concrete proof in S (which might be a concrete proof in S* that an abstract proof in S exists)” [where S* is I assume supposed to be an appropriate metatheory about S].

Well, that must be loose talk, as a concrete proof in S* certainly isn’t (in general) a concrete proof in S. Perhaps the thought is rather this. Suppose I’m working with a formal system S. Then I am in a position actually to affirm P (‘concretely’, as it were) as a claim inside the system S if either I can construct (really construct, ‘concretely’) a proof within S that leads to P as conclusion or if in an appropriate mathematical metatheory about S I can construct (really construct, ‘concretely’) a proof that S |- P.

Here’s an example (maybe more interesting than the LEM case). Suppose I have a metatheory M in which I can prove that ACA_0 is conservative over PA. Then, because of speed-up, there may be a short concrete proof of P in ACA_0 where there is no practically possible proof of P in PA. But still, my concrete conservativeness proof plus my concrete derivation in ACA_0 add up to a concrete proof that PA |- P. And, the story goes, that’s enough to warrant me affirming P within PA.
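In schematic form (the notation is illustrative only), the warrant chains together two concrete proofs, while speed-up blocks the direct route:

```latex
\underbrace{M\text{-proof that }\mathrm{ACA}_0\text{ is conservative over }\mathrm{PA}}_{\text{concrete}}
\;+\;
\underbrace{\mathrm{ACA}_0\text{-derivation of }P}_{\text{concrete and short}}
\;\Longrightarrow\;
\underbrace{\text{concrete proof that }\mathrm{PA}\vdash P}_{\text{even when no practically possible PA-derivation of } P \text{ exists}}
```

So what gets concretely produced is a proof *that* PA proves P, never a PA-derivation of P itself.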

OK. But we might wonder whether Alan’s

formalistis entitled to that story. Recall, on Alan’s account, we have formal games of sentence-shuffling whose sentences are given correctness conditions in terms of their own derivability within the system in question. In particular, M is a formal game of sentence-shuffling: the correctness conditions for M-sentences are given in terms of derivability-in-M.So why, on this formalist story, do they tell us anything about PA?.Alan says e.g. “The metatheory we adopt to explain how truth of compounds systematically depends on parts, ditto for proofs and so on, should determine and constrain what laws and principles we lay down as normative constraints.” But I can’t make anything of that. In what sense does the metatheory M — whose claims, if they are “about” anything for the formalist, are about derivability-in-M — “explain” or “constrain” anything about PA? I remain baffled.

Actually, I’m not unsympathetic to an idea that we can express with Alan’s words “all there is in mathematical reality are our concrete utterances, of theses, proof sketches, proofs plus some non-abstract, perhaps mereological, groupings of these into patterns realised in different tokens”. For this could be read in a Wittgensteinian spirit: there’s the mathematical practice, and we shouldn’t be hankering for anything “out there” which underpins it or which it answers to or which makes it correct. Alan’s formalist, it might then be said, hasn’t sufficiently broken free of those hankerings: he realizes that there is nothing outside the practice to make this or that mathematical sentence correct, but still looks for correctness-making facts.

Thanks Peter that’s very helpful. The example of establishing P from PA axioms via a shortish proof in ACA_0 plus a proof of conservativeness is a good one. I think virtually everyone (who didn’t entertain doubts about the consistency of ACA_0 but not PA) would indeed take this ‘to warrant me affirming P within PA’. The question, as you say, is am I entitled to belong to that consensus? Since I’m not a platonist can I take this as a warrant for affirming P with the informational and metaphysical content I claim it has?

Well when you say

‘M is a formal game of sentence-shuffling: the correctness conditions for M-sentences are given in terms of derivability-in-M. So why, on this formalist story, do they tell us anything about PA?’

I am going to wheel in my informational content/metaphysical content distinction. I am not a term formalist. I don’t think PA, for example, is about syntax or proof theory. What makes its sentences true or false is indeed sentence shuffling which generates proofs, but none of that is transparently there in the sense, the informational content. In a deflationary reading of ‘about’, ‘7 is prime’ is about the number 7 and primeness. I’m happy with that; I certainly want to deny that ‘7 is prime’ has the same informational content as something like “the string blah blah …. is derivable in PA”. Similarly a formal proof theory M for PA is about PA in the same way that PA is about 7 and co, though what makes its sentences true or false again is (typically) concrete proofs.

Loose talk costs lives, but is it ok in the philosophy of proofs? In general I want to be pretty liberal about proof, but without lapsing into anarchy. As I said in the book and my exchange with Jeff, proofs needn’t be in formal languages, could be diagrammatic etc. And I don’t want to take too strict a stance on when proof sketches shade into gapless proofs, a matter more for the experts. I certainly want to individuate formal systems more broadly than by proof architecture: classical logic Hilbert axiomatic style is the same ‘system’ in one sense, a sense I want, as classical logic in natural deduction or classical logic by sequent calculus etc. So though there is a strict and well-defined sense in which natural deduction classical logic is a different system from sequent calculus classical logic, there are perfectly rigorous senses in which they are the same (this is the same ball park as Quine’s ‘theory’/‘theory formalisation’ distinction). Still, I shouldn’t go for something as gross as: a system is just a set of ordered pairs X:A over a language, with X sets of sentences (and leaving aside the individuation of language). As I said to Jeff, I don’t want outright assertion of P to count as a proof just when it so happens it’s a theorem, even though we could produce an axiomatisation with P as an axiom.

So I should tighten things up. At the cost of making the Hilbert architecture a different system, perhaps we can specify that two ‘architectures’ are realisations of the one system (assuming we have the language fixed) just when they have the same axioms, the derivability relation is extensionally the same and …. Here follow some further clauses which make them both proof systems in an intuitive sense. For example, that the rules are purely formal, make no appeal to meaning. Some (but not me) might want to add that proof is finitary and proofhood is decidable.

Now I argued in the book that by adding some abbreviatory conventions, derived rules etc. we won’t change the system, if the derivability relation (over the initial language) stays the same. Granted this conservativeness there is no change in the meaning of the sentences, even though concrete proofs might emerge that didn’t exist before. The simplest sort of case is where there are proof tokens which include parts like ‘… n applications of &E: P …’ I want to say these are tokens of proofs in the original system itself, for the above reasons and even though that’s not so in the most fine-grained sense of system. Moreover not a sketch but a proof in the no-gaps sense. This seems clear in a case where we could in a practical sense actually fill in the gaps, or get a theorem prover to do it in real time.

However, to grasp the proof token, to grasp the practice of the abbreviatory conventions, even though one might not need anything like a grasp of proof theory, one still might need e.g. a grasp of decimal arithmetic; yet the language of the initial system we started out from might contain no arithmetic at all. Well, why can’t I just say that not everyone will understand the proof, perhaps, but it’s still a concrete proof in the original system, albeit using a more powerful architecture?

At the far end of this spectrum, if such it is, is the case where a Wiles proves a sentence of PA using very advanced maths from loads of different areas, topology, differential geometry, set theory, whatever and, let’s suppose (we are not there with Fermat yet as I understand it), proves using all that high powered machinery that the sentence is true in all first order models of PA and points us in the direction of the completeness proof (which I think has now been formalised by one of the formal theorem provers such as Isabelle). Surely, even if it were actually a concrete, gap-free proof, this can’t count as a concrete PA proof of P, you will say? Ok, not in the narrow sense. But in a wider sense, yes. It’s concrete, it’s a proof, it’s a proof of a sentence of PA and it proves that that sentence is derivable in the formal system of PA (and remember again that for me there are truths of proof theory, model theory; I’m not a fictionalist).

Perhaps the issue here is partly terminological. Whether or not it’s a concrete proof in PA, I want to count it a truth-maker for PA sentences, for obvious reasons (then I am in harmony with those who say the proposition it expresses in PA has been established as true).

(What if I convince you that PA |- P using a wider system S* which turns out to be inconsistent? Then the ‘proof’ isn’t a real proof; another example that Cartesian certainty is not to be had. What if T |- P is provable from consistent S* and its negation from consistent S**? If the |- relation isn’t r.e., as when the logic is infinitary, S* and S** needn’t be too wild, I think. Mmm, tricky case; I think I’d have to say here that P means something different in T relative to S* than in T relative to S**. A regress perhaps looms, so I’ll make the usual move: it’s a virtuous one!)

None of the above, of course, addresses the question of sentences for which we have no reason to think we will have a concrete proof, in any system, that they are provable in a given system, nor that they are refutable in that system. If, nonetheless, we do have a concrete proof that such a sentence is either provable or refutable in the original system, that for me justifies changing the proof architecture by adding the instance of LEM as a derived rule or axiom.

Why? One more brief try: when explaining what our grasp of concrete mathematical tokens amounts to, this is a real property of real individuals we are trying to explain. We are only allowed to appeal to our linguistic practice with other concrete tokens: sub-formulae, proofs etc.; we must work with an ontology only of things which exist, for only they can explain how existing beings have the powers they have. When doing maths, including metamathematics and including making notational changes, adding derived rules, abbreviatory conventions etc., we can use any maths we think useful.

Your key claim, Alan, is this:

But that needs to be unpacked.

On your view, mathematical games of symbol shuffling aren’t robustly “about” anything — they don’t point outside themselves to an independent reality. In particular, concrete tokens in the PA game don’t point to numbers “out there”, abstract entities which are “self-sufficient objects” as Frege would put it. Sure, having firmly put Platonist myth behind us, you think we can if we like now go on to use talk of denotation and truth in a thin sense. OK. But this is essentially to be construed Sellars’ style. Thus, to say “sssssss0” in the PA game denotes 7 is essentially to say no more than that a “sssssss0” token plays a certain role in the formal game, the role played in another more homely dialect by “7”. In other words, talk of denotation here is an oblique way of describing the structure of the PA game (nothing more).

Likewise for any other pure mathematical game: concrete tokens don’t point outside the game (and not, in particular, to concrete tokens of another game!). We can, if we want to, use thin, “disquotational” talk of truth and denotation. But again this is just a way of indirectly describing the structure of the relevant game (nothing more). Right?

In what sense, then, can one pure mathematical game G give us info about another game? Not by being robustly about the other game — such a game G doesn’t point outside itself. Not in a thin sense — thin “aboutness” claims just give us info about the structure of G.

So what can it mean to say that a pure mathematical game like a “proof-theory” M is about another one, PA? To put it snappily but metaphorically: your “realism” about numbers (you want to say PA is true about numbers) is a thin internal realism. Likewise for other pure mathematical games you like to play: you will be a thin internal realist about what they are “about” (at least, if they are indeed about their “subject matter” in the way that PA is about 7). But then, on the face of it, it’s a bit of a mystery how plays in one such game could “determine and constrain” another.

Ah, but you do give us a toy model to think about:

But this points in a quite different direction, no? Here, the “formal idealization” belongs to some bit of applied mathematics, a theory which does indeed robustly refer, but to concrete tokenings out there in the world. This isn’t about our practice of playing the PA game in the thin way PA is about 7. And, like any scientific theory, we should treat this bit of applied maths with caution — there’s idealization and idealization! In so far as it is a theory about real-world concrete tokenings, and in so far as we care about real accuracy, want the TRUTH about tokenings, I’d say that a messy version “with friction and air-resistance” [i.e., in this case, one that accommodates the very limited nature of our actual concrete tokenings] has to trump a version that e.g. allows tokenings of unlimited length. Such a realistically hobbled “proof theory”-as-a-warts-and-all-bit-of-applied-maths won’t have the sort of results you want.

So which is it to be? Proof theory as pure maths, which isn’t robustly about anything? Or proof theory as yielding the applied maths of concrete tokenings, in which case best theory with friction and air resistance won’t give you what you want.

Peter: ‘So which is it to be? Proof theory as pure maths, which isn’t robustly about anything? Or proof theory as yielding the applied maths of concrete tokenings, in which case best theory with friction and air resistance won’t give you what you want.’

The latter, definitely: as I said earlier in these comments, ‘Among the tokens we grasp are those in formal metatheory and applied maths in general’, viewing the former as a sub-species of the latter. But in going on to emphasise that I am not a term formalist and that I take pure and applied maths to be true, I perhaps obscured the point that metatheory is for me applied maths.

Now, applied mathematical theories, I claimed, can be seen as formally true, taking all their axioms (empirical, bridge laws, pure) just as a formal theory. We want them though to be conservative over the empirical sub-language. If we can extract or generate an empirical language, and a plausible theory therein which is isomorphic to the mathematical theory, then we advance a realist interpretation; if the empirical parts are of low theoreticity, we’ll have to go instrumentalist.

As you say, for the applied theory which is formal proof theory or model theory, realism is out. In chapter 7, section 1.2, I set out the formal idealisation explicitly as an applied theory mapping from the small finite number of concrete types and yielding an ‘injective idealisation’ into (speaking formally) the infinite mathematical structures. I think then the crunch comes here:

“In so far as it is a theory about real-world concrete tokenings, and in so far as we care about real accuracy, want the TRUTH about tokenings, I’d say that a messy version ‘with friction and air-resistance’ [i.e., in this case, one that accommodates the very limited nature of our actual concrete tokenings] has to trump a version that e.g. allows tokenings of unlimited length.”

But is this even the case in physical theory: that a really accurate theory ‘trumps’ the frictionless one? Suppose I think that the real situation is a giant finite matrix of atomic spacetime regions, each with a fixed number of properties. I might still use an applied theory of continua, manifolds, multi-dimensional (perhaps infinite-dimensional) phase spaces etc., because I have no ability to defend, or perhaps even formulate, generalisations beyond the most trivial in the really accurate theory, and might only be able to make predictions, albeit not fully accurate ones, in the idealisation, etc.

The situation is partly similar with proof theory and model theory, I think, but the instrumental goals of these applied theories are also different from those of the physical sciences. I sketched a few suggested goals on pp. 201-2, 217, including in some cases increasing epistemic certainty and conceptual innovation, but most crucially the ability to prove results such as soundness, completeness, normalisation etc. I should perhaps have emphasised more that there is an element of normativity (cf. p. 194) absent in the physical case. There are certain concepts which guide our logical practice: well-formedness, truth, provability. These cannot even be specified at the level of concrete types, at the level of the real world. They have to be stated at the level of the applied idealisation, and challenges, such as 'does proof distribute over the connectives in the same way as truth?', perforce answered at that level too.

Thanks again Alan for your last (I'm only starting another thread to stop the ever-narrower columns going to vanishing point …!). What you say confirms my expectation of what you would officially say. A quick final stab, and I'll leave you the last word, as I really must get the wildly overdue review written!

A propos the "crunch". I would and do say the same about physics. A physical theory can have all kinds of virtues — ease of use for calculational/predictive purposes being high among them. But having such virtues may well be at the expense of descriptive accuracy. Classical fluid mechanics, to take a real-world version of your example, is just terrific: but it also just doesn't get it right about what fluids are. It gets some things right enough for certain purposes: but we have to know its core domain of application, and know what out-of-domain results are mere artefacts of over-idealization. (See my chaos book, which is all about how to distinguish artefacts from working parts of theories.)

Similarly here. Take an arithmetic game EP, quantifier-free, including exponentiation and a primitive "is prime" predicate. Then an idealizing applied meta-theory about our actual token-producing practice in playing the EP game may get it right enough for certain purposes.
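A toy sketch of the kind of thing in play here. The formation rules below are my own minimal reconstruction of an EP-style syntax, not anything from the book or the review: the point they illustrate is that each rule takes bracket-balanced inputs to bracket-balanced outputs, so balance is an invariant of every term built by correct plays.

```python
# Minimal EP-style term builders (hypothetical reconstruction, for illustration):
# numerals, binary operations (+, *, ^), and a primitive Prime() predicate.
# Every builder adds brackets only in matched pairs, so bracket-balance is
# preserved by each rule -- no correct play can yield an unbalanced string.

def balanced(s: str) -> bool:
    """Check that brackets in s are balanced and properly nested."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def num(n: int) -> str:
    """A numeral token."""
    return str(n)

def op(left: str, symbol: str, right: str) -> str:
    """A binary-operation rule: adds exactly one matched bracket pair."""
    return f"({left} {symbol} {right})"

def prime(term: str) -> str:
    """The primitive 'is prime' predicate: also adds one matched pair."""
    return f"Prime({term})"

# Build some terms by correct plays and confirm the invariant holds.
t = op(num(2), '^', op(num(2), '^', num(2)))   # "(2 ^ (2 ^ 2))"
claims = [t, prime(t), op(t, '=', num(16))]
assert all(balanced(c) for c in claims)
```

Nothing here depends on any length bound, which is why idealizing away from length constraints does no harm for results of this invariant-preservation shape.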

For example, to borrow one of your favoured examples, take the no-go result showing that we can't "prove" bracket-unbalanced equations by correct plays. Here, idealizing away from length-of-wff-we-can-handle constraints does no harm. So: we prove that even idealizing widely on our abilities, we still won't prove unbalanced wffs — so we can, removing the idealization, fairly conclude a fortiori (because we know re-imposing length constraints won't increase what we can derive) that we won't be able to prove an unbalanced wff of the short, practically inscribable, kind. So far so good.

But it would be different — wouldn't it? — to apply the idealizing theory e.g. to "justify" adding LEM to EP? Maybe an applied theory about our actual token-producing practice, which idealizes right away from constraints of length-of-wff-we-can-handle, shows that, according to the theory, we can prove one of P or not-P (for any P which says a number defined by a certain exponent tower is prime). But we know this applied theory radically idealises, going quite wrong about the limits of our actual practice. So — another crunch! — results such as the one about there being a non-concrete-because-too-long idealised-EP "proof" either of P or of not-P are just artefacts of our over-idealization in the applied theory. Why take such artefacts any more seriously than the artefacts of fluid dynamics?

Well, my main last word has to include 'thanks': a very big thank you to Peter for expending so much time and energy on my book and allowing me such opportunities to expand on, and try to defend, what I say there. What if all reviewers did this? How much better would the practice of reviewing be?!

On the substantive point, I think there is an assumption here which I would challenge. Well, maybe it's not an assumption you are making in your last (it's way below the surface if it is), but it's instructive to consider, I think. Let 'lengths' be the numbers we associate with abstract types, of a wff or a proof, when giving the usual recursive characterisation. Here I speak with the Platonist, as I claim I'm allowed to 'in formal mode'. The Assumption, then, is that there is a finite number k such that all and only concretely feasible wffs are less than or equal to k in length (ditto for proofs with respect to another number j). Here we prescind from vagueness, which I think we can probably do without too much harm.
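In symbols, the Assumption as just stated would be (a hedged formalisation of my own, with vagueness prescinded from as in the text):

```latex
\exists k\, \forall w\, \bigl(\mathrm{Feasible}(w) \leftrightarrow \mathrm{len}(w) \le k\bigr),
\qquad
\exists j\, \forall \pi\, \bigl(\mathrm{FeasibleProof}(\pi) \leftrightarrow \mathrm{len}(\pi) \le j\bigr),
```

where w ranges over wffs and π over proofs. Note the biconditional: the Assumption claims not just that feasible wffs are short, but that everything short enough is feasible.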

If that Assumption were true, it looks as if the right strategy for idealisation of proof is a LUBbite one, as in Least-Upper-Bound-ite. (No disrespect to Luddites, not all of whom, I believe, were indiscriminately opposed to technological innovation per se.) We look for upper bounds on the lengths of concrete, feasible proofs and try to get as low as we can, whilst still being able to give a systematic theoretical account.

I note firstly that if the LUBbite strategy is right, it doesn't seem to matter whether one is a platonist or not. Whatever the extent of the abstract realm, formal idealisation should not exceed the LUB; in particular, the logic we use must fit inside the bounds, and all else is to be discarded as 'artefact'. (We might study, as 'technical' aids, more powerful logics, but only by using the 'real' logic to do so; infinitary logic is often treated in this way.)

The second thing I note is how rebarbative the position is which the LUBbite strategy would lead to, granted the Assumption. We could just define wff and proof abstractly as normal, but add a rider that these are only the 'pre-wffs': all wffs proper are to be <= k in length, proofs <= j (admitting that this is a bit over-precise). So even polynomially bounded logic would be way too strong as a legit logic. In a polynomially bounded logic there is a polynomial f such that for each theorem of length x, the shortest proof is no longer than f(x). And that's very weak (especially if P is not equal to NP).
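For comparison, the polynomial-boundedness condition just mentioned, in symbols (the standard formulation, stated in my own notation):

```latex
\exists f \text{ polynomial}\;\; \forall A\,
\Bigl( \vdash A \;\Longrightarrow\;
\exists \pi\, \bigl(\pi \text{ proves } A \;\wedge\;
\mathrm{len}(\pi) \le f(\mathrm{len}(A))\bigr) \Bigr).
```

Even this modest condition allows proof length to grow without bound as theorem length grows, which is exactly what a fixed LUBbite cap of j on proofs rules out.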

Fortunately the Assumption, I think, is wrong; indeed, even removing the 'only' it is still false. For the philosophically interesting notion of 'proof system', proof systems, even for what we think of as a fixed theory, have an open-ended nature: in particular, we can always add new, more powerful notational conventions, abbreviations, derived rules etc. As a result, whatever the length of the longest wffs or proofs we can currently write down, it is not true that all wffs/proofs of that length or less or feasible. Nothing like it. I draw the conclusion (I don't say this follows from rejection of the Assumption, but it's the most plausible, coherent response, I think) that the LUBbite strategy is wholly misconceived and idealisation should not be constrained by size at all (not even by finite versus infinite). The most important thing is that the recursive procedures for generating more complex out of less complex correspond fairly closely to operations present in our actual, albeit teeny wee, practice.
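A back-of-envelope illustration of the open-endedness point (my own example, using a made-up tetration abbreviation, not notation from the book): once abbreviatory conventions are in play, a four-symbol token we can easily write down stands for an unabbreviated term whose length dwarfs any candidate bound k drawn from tokens actually inscribed.

```python
# Suppose we adopt the (hypothetical) convention that "2^^h" abbreviates the
# exponent tower 2^2^...^2 of height h. A feasibly inscribable four-symbol
# token then abbreviates a unary numeral S...S0 of astronomical length, so
# no single length bound k can capture "all and only" the feasible wffs.

def tower(height: int) -> int:
    """Value of the exponent tower 2^2^...^2 of the given height."""
    n = 1
    for _ in range(height):
        n = 2 ** n
    return n

def unary_length(n: int) -> int:
    """Symbol count of the unary numeral S...S0 denoting n."""
    return n + 1

abbreviated = "2^^5"                   # four symbols on the page
expanded = unary_length(tower(5))      # length of the written-out numeral

print(len(abbreviated))                # 4
print(expanded > 10 ** 19000)          # True: even *stating* this length
                                       # takes ~20,000 decimal digits
```

So whichever k we pick from our longest actual inscriptions, short feasible tokens abbreviate wffs far beyond k, and conversely nothing guarantees that every unabbreviated string under k ever gets (or could get) tokened.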

And it goes without saying, massive thanks to Alan for all his comments and elucidations and replies, which have helped me enormously and — to judge by the number of hits on the comments threads — have been much read by others too!

Last, last, omega last word! A typo I missed: 'that length or less or feasible' should of course be 'that length or less are feasible'. And in the Assumption, I meant to restrict to a given fixed 'proof system': so the assumption is that for any such system there is a bound k such that all and only its concretely feasible wffs are <= k in length, and a bound j such that all and only the concretely feasible proofs in that system are <= j.