KGFM 1: Macintyre on the impact of incompleteness on maths

I’m going to be reviewing the recently published collection Kurt Gödel and the Foundations of Mathematics edited by Baaz, Papadimitriou, Putnam, Scott and Harper, for Philosophia Mathematica. This looks to be a really pretty mixed bag, as is usual with volumes generated by blockbuster conferences: but there are some promising names among the contributors, and a quick initial browse suggests that some of the papers should be very well worth reading. So, as I go through the twenty-one papers over the coming few weeks, I will intermittently blog about them here.

First up is Angus Macintyre, writing on ‘The impact of Gödel’s Incompleteness Theorems on Mathematics’. His title is pretty much the same as that of a short and very readable piece by Feferman in the Notices of the AMS and his conclusion is also much the same: the impact is small. To be sure, “Some of the techniques that originated in Gödel’s early work (and in the work of his contemporaries) remain central in logic and occasionally in work connecting logic and the rest of mathematics.” But “[a]s far as incompleteness is concerned, its remote presence has little effect on current mathematics.” For example, “The long-known connections between Diophantine equations, or combinatorics, and consistency statements in set theory seem to have little to do with major structural issues in arithmetic” (p. 14). And similarly elsewhere in maths.

There’s a lot of reference to mathematical results, and nearly all of the detailed discussion is well beyond my comfort zone (or that of most readers of this blog, I’d guess: try, e.g., “Étale cohomology of schemes can be used to prove the basic facts of the coefficients of zeta functions of abelian varieties over finite fields”). So I can’t very usefully comment here.

Probably the most exciting and novel thing in this piece is the substantial appendix which aims to give an outline justification for Macintyre’s view that we have “good reasons for believing that the current proof(s) of FLT [Fermat’s Last Theorem] can be modified, without abandoning the grand lines of such proofs, to proofs in PA.”  But again, I’m frankly outside my competence here, and I can only refer enthusiasts (or skeptics) about this project to the paper for the details, which look rather impressive to me.

A decidedly tough read for the opening piece!

Thanks to Orlando May

I’m getting back down to work on the second edition of An Introduction to Gödel’s Theorems. One thing I plan to do is to put up some pages of exercises as I go along, which I’ve been meaning to do for ages, but which takes a surprising amount of time. Watch this space.

Meanwhile, one interim thing I’ve just done is put online a major update to the corrections page for the latest printing(s) of the first edition. The corrections are almost all due to Orlando May, who has evidently been reading the book with a quite preternaturally accurate eye. I’m most grateful.

If anyone else has anything to add, large or small, about how to improve the book next time around, then of course I’ll again be more than grateful!

TTP: At long last, a short review

Very, very belatedly — and apologies for this in particular to Alan Weir — I’ve gathered together some of the thoughts from previous blog posts into a short review of Truth Through Proof. This is absurdly compressed, even though Mind are allowing me twice the normal length; and I say almost nothing in the review that I haven’t already said here (and to which Alan has responded in comments, and comments on comments …). But for what it is worth, here’s a late draft.

Just what we needed: more logic books

I’ve just finished writing a short paper with the title “Santa’s singleton” (think about it …), of which I’ll post a version here in due course, once I’ve tried it out in a talk or two. Try to contain your excitement. Meanwhile, you might like to know about five logic books that have turned up on the new books shelves in the C.U.P. bookshop fairly recently. Or then again, if like me you have a dauntingly high pile of recent books you’ve been meaning to read which is already growing too fast, maybe you won’t like to know. But here’s the list anyway, in order of appearance …

  1. Bernard Linsky, The Evolution of Principia Mathematica (subtitled ‘Bertrand Russell’s Manuscripts and Notes for the Second Edition’, which gives you a good idea what it is about). OK, this is a scholarly effort aimed at specialists and very expensive: but I’ve sat in the shop and dipped at length and found parts of it very interesting. A definite ‘must’ to order for the library, and well worth looking at when it arrives. In fact, that goes for the rest too.
  2. Matthias Baaz et al., eds, Kurt Gödel and the Foundations of Mathematics: Horizons of Truth is one of those large mixed-bag collections of papers emerging from conferences. But it has a starry line-up of contributors, and — dipping again — some of the papers look good or very good. I’m going to be reviewing this for Philosophia Mathematica, so when I get to work I’ll perhaps blog about this collection here.
  3. Juliette Kennedy and Roman Kossak, eds, Set Theory, Arithmetic, and Foundations of Mathematics is a collection whose “inspiration is the work and interests of the logician Stanley Tennenbaum, and through Tennenbaum the work of Kurt Gödel”. It happens that Tim Button and I have just written a short paper forthcoming in Philosophia Mathematica on the philosophical significance (or lack of it) of Tennenbaum’s lovely theorem about non-standard models of arithmetic: so I’ve seized on this with real interest. Again, I’ll report back in the blog when I’ve had a chance to read more.
  4. Harold Simmons, An Introduction to Category Theory has the expository virtues you’d expect from the author. I’m not so sure that it is the One First Book On Category Theory that is perhaps still to be written. But having read a largish initial chunk in a couple of sittings, I can certainly recommend it.
  5. Sara Negri and Jan von Plato, Proof Analysis. From the blurb: “This book continues from where the authors’ previous book, Structural Proof Theory, ended. It presents an extension of the methods of analysis of proofs in pure logic to elementary axiomatic systems and to what is known as philosophical logic.” I very much liked the previous book (it worked well in a grad reading group, and we learnt a lot from it). The work on axiomatic theories is original, and this looks likely to be a very rewarding read.

So there they are. I’m sure most of us will knock them off in just a few days. Next!!

Beginning maths, beginning philosophy

Tim Gowers has begun what he plans to be a long series of posts on his blog giving advice for beginning Cambridge mathmos. But the posts should of course be of great interest for new undergraduate mathematicians anywhere. So spread the word.

And already some of his advice seems pretty applicable to beginning philosophers too. That shouldn’t be too surprising: in both cases, what we are trying to teach isn’t a body of facts so much as an understanding of argumentative moves. In both cases, students need to learn how to fillet out the key ideas (and tell what’s just joining-up-the-dots).

So I might be inspired to do what I have meant to do for a while and put together some how-to-get-started-thinking-philosophically and how-to-approach-your-work notes for new philosophers. It shouldn’t be made a mysterious business, whether it’s maths or philosophy. But meanwhile, read Gowers!

TTP, 14. Worries about excluded middle

Weir’s formalist account of arithmetic in headline form comes to this: the arithmetical claim P is correct just in case there is (or in practice could be) a concrete proof of P. (We’ll stick to considering the arithmetical case.)

Weir needs proofs to be concrete: treating proofs as abstract objects wouldn’t at all chime with his naturalist worries about abstracta. He needs to allow the practical possibility of a concrete proof to suffice for truth, or else — for all he knows — there could be random gaps even among the simple computable truths about small numbers because no one has yet bothered to write down a computation. But the modality had better not be an extravagant one (“in some possible world, however remote, where our counterparts can e.g. handle proofs with more steps than particles in the actual world, …”), or again we could be taking on modal commitments that are inimical to Weir’s strong naturalism: so as we said, we need some notion of practical possibility here.
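Putting the headline view into a schematic formula (my own rendering, not Weir’s notation, with $\Diamond_{p}$ standing for the practical possibility just mentioned):

```latex
\mathrm{Correct}(P) \;\leftrightarrow\; \Diamond_{p}\,\exists \pi\,
  \bigl(\mathrm{Concrete}(\pi)\wedge \mathrm{Proves}_{S}(\pi, P)\bigr)
```

where $S$ is the relevant formal system and $\pi$ ranges over concrete proofs. The two constraints of this section are visible in the formula: the predicate $\mathrm{Concrete}$ rules out abstract proof-objects, and the restricted modality $\Diamond_{p}$ rules out extravagantly remote possible provers.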

Are these concrete proofs particular tokens? If so, you and I always give different proofs. That wouldn’t tally with our ordinary ways of talking of proofs (when we talk of Euclid’s proof of the infinitude of primes, we don’t mean just some ancient token; when we say there is only one known proof of P we don’t mean that it has only been written down once). But Weir wants to avoid proof-types (or he is back with the unwanted abstracta); so he makes play with a notion of a proof-tipe, where a tipe is a mereological sum of equiform tokens. Thus understood, a proof tipe is a scattered concrete thing, and you and I giving the same proof — to speak with the vulgar — is a matter of you and I producing tokens that are part of the same proof tipe. (Weir thinks that for his purposes he can get away with a lightweight mereological theory that doesn’t get tangled with issues about e.g. unrestricted principles of composition. Let’s give him that assumption. It will be the least of our worries.)

Now, concrete numerals and concrete proofs are few (even counting in practically possible ones as well). The obvious challenge to Weir’s position is that his formalism will therefore lead to some kind of finitist revisionism rather than conserving the arithmetic we know and love.

To press the point, take a concretely statable claim P that some very large number n is prime. Then it could well be that there is no practically possible concrete proof of either P or not-P in your favourite formal system S (the system whose concrete proofs, according to Weir, make for the correctness of arithmetical claims on your lips). Yet it is an elementary arithmetical claim that either P or not-P, a truth now seemingly without a formalist-approved truth-maker (given that Weir endorses a standard truth-functional account of the connectives). What to do?

Well, at this point I struggle. The idea seems to be this. Here is our practice of producing concrete formal proofs in the system S. Like other bits of the world, we can theorize about this, doing some applied mathematics M which — like other bits of applied mathematics — purports to talk about some idealized mathematical model, in this case a model of the real-world practice of proving things in S. And now, we do have a concrete M-proof that, in the idealized model, the model’s representative of the concrete claim that either P or not-P is formally decidable. So,

the EXISTENCE [remember: small caps indicate talk-in-the-metaphysics room] of concrete indeterminables [like P] should not inhibit reasoners from applying excluded middle to them so long as THERE IS a concrete proof that, in a legitimate idealization, the image of the indeterminable is decidable in the formal sense. (TTP, p. 205)

But hold on! Setting aside worries about ‘legitimate’, what makes an arithmetical claim correct, on your lips, is by hypothesis the practical availability of a concrete proof in a formal game S [maybe different speakers are working in different games with appropriate mappings between them, but let’s not worry about that now]. So by hypothesis, a concrete proof in a different bit of mathematics M — a bit of applied mathematics about an idealized model of our arithmetical practice — just isn’t the sort of thing that can make an arithmetical claim correct. If we allow moves external to S to make-true arithmetical claims, then S isn’t after all the formal system within which proofs provide correctness conditions for arithmetic.

So on the face of it, it looks as if Weir has simply cheated here by changing the rules of the formalist game midstream! And I’m evidently not alone in thinking this: John Burgess in his Philosophia Mathematica review finds himself baffled by the same passage. Weir himself recognizes that his readers might find the move here sticky. In one of his comments in the thread following my last blog post on his book, he writes

If we want to know whether a particular area is bivalent … we don’t ask whether for every token there is a token proof or disproof, we ask if there is a concrete proof of a formal negation-completeness result for the idealised theory. If so, we are allowed to lay down all instances of LEM for that sub-language as axioms … even if that means that some concretely provable theorems — t is prime or t is not prime say — have disjuncts which are neither concretely provable nor refutable. I suspect this get out of jail manoeuvre will prove one of the biggest sticking points in the reception of the book, but I’m still confident about its legitimacy!

Like Burgess, I can’t share Weir’s confidence.

Log xy = Log x + Log y

In their (rich, original, ground-breaking) writings on plurals, Alex Oliver and Timothy Smiley more than once say that “mathematical practice” shows that addition is allowed to take plural terms as arguments. Thus, with a trivial change of variables, in their ‘A modest logic of plurals’ (JPL, 2006) they write

One might think that [‘+’] can only take singular terms as arguments, … Mathematical practice shows that this is quite wrong, as witness the right-hand side of this equation (Hardy, 1925, p. 408)

Log xy = Log x + Log y

bearing in mind that Log z is an infinitely many-valued function.

But this is surely a mis-step. In fact Hardy himself realises that — given that the Log terms are many-valued — he’d better explain the extended use of the equation notation here (that is the novelty). So he does. Hardy tells us that we are to read this as “Every value of either side is one of the values of the other side”. So already, the surface equation is frankly offered as syntactic sugar for something more structured. Fair enough: we do this sort of thing all the time. And how are we to read Hardy’s snappy explication? Obviously, if you want the gory details, he intends this — what else?

For any (particular!) a such that a is a value of Log xy, there is a b such that b is a value of Log x and a c such that c is a value of Log y, and a = b + c. And for any b such that b is a value of Log x and c such that c is a value of Log y, there is an a such that a is a value of Log xy and a = b + c.

(In a slogan: you do the selections from the many values before the addition.) Which reveals that, when the wraps are off, the addition functor here is straightforwardly applied to singular terms, just as you’d expect.
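Spelled out in symbols (treating each Log term as the set of its values, which is a gloss of mine rather than Hardy’s own notation), the two-way reading comes to:

```latex
\forall a\,\bigl(a \in \operatorname{Log} xy \rightarrow
    \exists b\,\exists c\,(b \in \operatorname{Log} x \wedge c \in \operatorname{Log} y \wedge a = b + c)\bigr)
\;\wedge\;
\forall b\,\forall c\,\bigl((b \in \operatorname{Log} x \wedge c \in \operatorname{Log} y) \rightarrow
    \exists a\,(a \in \operatorname{Log} xy \wedge a = b + c)\bigr)
```

And here ‘+’ flanks only the singular variables b and c, selected from the many values before the addition is performed.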

Writing Log xy = Log x + Log y is a handy extension of equation notation to plural terms produced by many-valued functions (though like Hardy, careful writers on complex analysis take it that it does need explaining). But once this sort of extension is explained, we don’t need to discern any further accompanying novelty in the use of addition. No?

Abortion, again

It has been depressing to see the issue of abortion being politicised again in the UK, when a hard-won, decent and humane settlement has been in place. And it is particularly depressing for a philosopher to see the usual dreadful arguments (on both sides, it has to be said) trotted out again. Here’s one way of thinking about the issue which hopefully sheds some moral light.

Consider the following “gradualist” view: As the human zygote/embryo/foetus slowly develops, its death slowly becomes a more serious matter. At the very beginning, its death is of little consequence; as time goes on, its death is a matter it becomes appropriate to be gradually more concerned about.

Now, this view seems to be the one that almost all of us in fact do take about the natural death of human zygotes/embryos/foetuses. After all, very few of us are worried by the fact that a very high proportion of conceptions quite spontaneously abort. We don’t campaign for medical research to reduce that rate (nor do opponents of abortion campaign for all women to take drugs to suppress natural early abortion). Compare: we do think it is a matter for moral concern that there are high levels of infant mortality in some countries, and campaign and give money to help reduce that rate.

Again, very few of us are scandalized if a woman who finds, from a test one week after conception, that she is pregnant by mistake is then mightily pleased when she discovers that the pregnancy has naturally terminated some days later (and even has a drink with a girl friend to celebrate her lucky escape). Compare: we would find it morally very inappropriate, in almost all circumstances, for a woman in comfortable circumstances to celebrate the death of an unwanted young baby.

Similarly for accidental death. Suppose a woman finds she is a week or two pregnant, goes horse riding, falls badly at a jump, and as a result spontaneously aborts. That might be regrettable, but we wouldn’t think she’d done something terrible by going riding and running the risk. Compare: we would be morally disapproving of someone badly risking the life of a newborn by carrying it while going in for some potentially very dangerous activity.

So: our very widely shared attitudes to the natural or accidental death of the products of conception do suggest that we do in fact regard them as of relatively lowly moral status at the beginning of their lives, and of greater moral standing as time passes. We are all (or nearly all) gradualists in these cases.

It is then quite consistent with such a view to take a similar line about unnatural deaths. For example, it would be consistent to think that using the morning-after pill is of no moral significance, while bringing about the death of an eight month foetus is getting on for as serious as killing a neonate, with a gradual increase in the seriousness of the killing in between.

Some, at any rate, of those of us who are pro (early) choice are moved by this sort of gradualist view. The line of thought in sum is: the killing of an early foetus has a moral weight commensurate with the moral significance of the natural or accidental death of an early foetus. And on a very widely shared view, that’s not very much significance. So from this point of view, early abortion is of not very much significance either. But abortion gradually becomes a more significant matter as time goes on.

You might disagree. But then it seems that you need either (a) an argument for departing from the very widely shared view about the moral significance of the natural or accidental miscarriage of the early products of conception, or (b) an argument for the view that while the natural death of a zygote a few days old is of little significance, its unnatural death is of major significance. Neither line is easy to argue. To put it mildly.

We can agree however that killing a neonate is, in general, a very bad thing. So the remaining question is how to scale the cases in between. That’s something that serious and thoughtful and decent people can disagree about to some extent. But note it is a disagreement about matters of degree (even if any legal arrangements have to draw artificial sharp lines). All-or-nothing views, on either side of the debate, have nothing to recommend them.


Music to sort books by

Who knew retirement could be so busy? Two jointly written papers just done; a draft book to comment on in detail; four book reviews to do; a growing pile of things I want to read (Huw Price’s collected papers arrived today). Not to mention the books I’m supposed to be working on.

Though strictly speaking, I’m not retired till the end of next month. But apart from wrapping up a couple of reports as chairman of examiners, job duties are all done. And I’m on track for getting out of my office in the next couple of weeks and making room in my study at home for what I want to move there. I guess I could now just pack up the remaining office books, perhaps four medium boxes worth of them, and stash them in the garage. But that would be a Bad Move. Best not to keep postponing decisions. The trouble is, however, that I keep getting sidetracked into actually dipping in and reading the things.

And there are a few box files of old lecture notes to dispose of. Gosh: did I really once know about all that stuff? For instance, 30-year-old intro moral philosophy lectures, anyone? Am I really going to type them up and put them on the web? Would enough people be much interested if I did? No, and no. So into the bin with them. Easier said than done of course. That’s part of me that’s gone. But needs must.

All that sorting out takes time, too. So, as I said, a busy time. But in a good way. And some of the sorting out at home has been accompanied in the most enjoyable way, listening to Haydn. At my retirement drinks party, I was presented inter alia with the boxed set of his complete symphonies (the Adam Fischer/Austro-Hungarian Haydn Orchestra recordings — a mere 33 CDs to explore). Which is just perfect while fossicking about trying to decide what books to keep, what to give away. So thanks again, everyone who contributed so generously. I started with Symphony no. 1 and am slowly working my way through. From the outset, a delight and very rewarding.

TTP, 13. Formalism and “pluralism”

In TTP 11, I emphasized that Weir’s position interweaves two separable strands. One strand I called “formalism about arithmetical correctness”: at a first approximation, what makes an arithmetical claim correct is something about what can be done in some formal game(s) played with uninterpreted tokens. The other strand proposes, as I put it, that the content of arithmetical claims is not “transparently representational”. So far, in these blog posts, I’ve been talking mostly about the way the second strand gets developed. It has been Hamlet with only brief appearances of the Prince of Denmark.

Weir’s story about content is intended to serve as a kind of protective wrapping around the formalist core (so he can say e.g. that although arithmetical claims have formalist correctness conditions we aren’t actually talking about syntactic whatnots when we make common-or-garden arithmetical claims — thus avoiding at least some incredulous stares). But there is no getting away from it: when the wraps are off, the story about those correctness conditions is indeed very starkly formalistic. What makes arithmetical claims correct, when they are correct, is facts about plays with concrete tokens in some rule-governed practice of token-shuffling (actual plays, or practically possible ones).

Well, here I am, making arithmetical claims. These are supposedly made true by facts about concrete moves in some formal practice. Which formal practice? Look again at the toy models I offered in TTP 10. There was (1) a game with an abacus, with facts about this making true tokens like ‘68 + 57 = 125’ (whose content is tied to such facts, but non-representationally). Then there was (2) something like school-room practice where we write down “long additions” etc. on our slates, with facts about what can be written down in this practice making true equiform tokens ‘68 + 57 = 125’ (which therefore have a different content from before). Then there was (3) a formal proof-system for quantifier-free arithmetic, and tokens such as ‘68 + 57 = 125’ are now tied to facts about what can be derived inside the formal system. So when I say ‘68 + 57 = 125’ which formal game is my utterance tied to (one of these or another)? What content does my utterance have? What’s to choose?

Weir’s response is: don’t choose.

The neo-formalist position is pluralist rather than relativist. The truth-value of ‘68 + 57 = 125’ is not relative to a formal system. Rather there is a plurality of systems, and the sentence expresses different senses in each, whilst it is made true (or false) in the context of a given one iff it is provable therein. (p. 108)

Which is the line he has to take (irrespective of the details of his story about non-representational content). Take the child brought up “bilingually” to play the abacus game and comment on that, and play the school-room game and comment on that. By mishap the child could come to believe 68 + 57 = 125 in the comment mode in one context and disbelieve in the other: so the contents had better be different.

OK: two children taught two different games and two different commenting practices will mean something different by equiform comments. Yet it would be mighty odd, wouldn’t it, to say that two real children taught real arithmetic by different methods mean something different by ‘68 + 57 = 125’? The natural thing to say, most of us will think, is that if the kids end up as practical arithmeticians counting the world in the same ways, using addition when they want to put together the counts of two different piles to give a count of their combination, then whether they get to ‘68 + 57 = 125’ by abacus, school-room sum, doing a formal proof (heaven help us!) or using a calculator, they mean the same. For the sense of what they say is essentially grounded in their applied practice, in how they use arithmetic in the world.

From this natural perspective, Weir’s story begins to look upside down. He first talks of unapplied formal games (as it might be with an abacus, or writing down sums as in the school room, or operating with an uninterpreted proof system), and assigns various contents to ‘68 + 57 = 125’ treated as comments on moves in those various as-yet-unapplied games. And only afterwards, with the various informational contents fixed by the liaisons to the uninterpreted formal games, does Weir talk about applying the ‘arithmetic’ to the world. But at least as far as arithmetic is concerned, this looks topsy turvy: it’s the embedding in the applied practice of counting and adding and multiplying that gives sense to arithmetical claims, properly so called. (Cf. Wittgenstein in the Big Typescript discussing what makes arithmetic different from a game.)

Later (p. 218) Weir modifies his pluralism. He says two speakers S and S* using e.g. ‘68 + 57 = 125’ “express the same sense” if there is an “admissible mapping” between their linguistic practices L and L*. The details don’t matter (which is good, because they are not clear). The trouble is that I don’t see what entitles Weir to play fast and loose with the notion of sense like this. He’s spent chapters earlier in the book trying to give us a story about sense or “informational content” that allows him to distinguish the thin sense of ‘68 + 57 = 125’ from the rich sense of a statement of the correctness conditions of the claim in terms of moves in a formal game. He can do this because he cuts sense fine. And his criterion of difference in sense is that you can believe 68 + 57 = 125 (as a comment on the game) without being able to frame the concepts necessary to state the metaphysical correctness conditions. But then, by the same Fregean criterion of difference, the bilingual L/L* speaker like the abacus user who also does sums on paper could still express two different senses by different uses of ‘68 + 57 = 125’. Weir seems, later, just to be changing the sense of “sense”.

So I think Weir is landed with the radical pluralism he cheerily embraced earlier, and his later attempt to soften it is a mis-step by his own lights. Many will count this as a strike against neo-formalism.

To be continued
