Avigad MLC — 4: Sequent calculi vs tableaux?

Jeremy Avigad’s book is turning out to be not quite what I was expecting. The pace and (often) the compression can make for surprisingly tough going. And there are few pauses for broader reflection on what’s going on. Take for example the section on Skolemization (which I’ll get to in the next post). This goes at pace, and is probably fairly testing if you’ve never seen this idea before. And the only comment we get about why this might all matter is at the end of the section, where we read:

The reduction of classical first-order logic to quantifier-free logic with Skolem functions is also mathematically and philosophically interesting. Hilbert viewed such functions (more precisely, epsilon terms, which are closely related) as representing the ideal elements that are added to finitistic reasoning to allow reasoning over infinite domains.

So that’s just one sentence expounding on the interest — which is hardly likely to be transparent to most readers!

Still, I’m cheerfully pressing on through the book, with enjoyment. As I put it before, I’m polishing off some rust — and indeed acquiring bits of new knowledge at other points. So, largely as an aide-memoire for when I get round to updating the Beginning Mathematical Logic Guide next year, I’ll keep on jotting down a few notes, and I will keep posting them here in case anyone else is interested.

MLC continues, then, with Chapter 6 on Cut Elimination. And the order of explanation is, I think, interestingly and attractively novel.

Yes, things begin in a familiar way. §6.1 introduces a standard sequent calculus for (minimal and) intuitionistic FOL without identity. §6.2 then, again in the usual way, gives us a sequent calculus for classical logic by adopting Gentzen’s device of allowing more than one wff to the right of the sequent sign. But then A notes that we can trade in two-sided sequents, which allow sets of wffs on both sides, for one-sided sequents where everything originally on the left gets pushed to the right of the sequent sign (being negated as it goes). These one-sided sequents (if that’s really the best label for them) are, as far as I can recall, not treated at all in Negri and von Plato’s lovely book on structural proof theory; and they are mentioned as something of an afterthought at the end of the relevant chapter on Gentzen systems in Troelstra and Schwichtenberg. But here in MLC they are promoted to centre stage.

So in §6.2 we are introduced to a calculus for classical FOL using such one-sided, disjunctively-read, sequents (we can drop the sequent sign as now redundant) — and it is taken that we are dealing with wffs in ‘negation normal form’, i.e. with conditionals eliminated and negation signs pushed as far as possible inside the scope of other logical operators so that they attach only to atomic wffs. This gives us a very lean calculus. There’s the rule that any \Gamma, A, \neg A with A atomic counts as an axiom. There’s just one rule each for \land, \lor, \forall, \exists. There is also a cut rule, which tells us that from \Gamma, A and \Gamma, {\sim}{A} we can infer \Gamma (here {\sim}{A} is notation for the result of putting the negation of A in negation normal form).
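The operation {\sim} — forming the negation of an NNF formula, again in NNF — is a simple structural recursion which dualizes the connectives and quantifiers and flips only the atoms. Here is a minimal sketch in Python, using a tuple encoding of wffs which is my own illustrative choice, not anything from MLC:

```python
# NNF formulas as nested tuples:
#   ('atom', p), ('natom', p)          -- atoms and negated atoms
#   ('and', A, B), ('or', A, B)
#   ('forall', x, A), ('exists', x, A)

def neg(f):
    """Return the NNF negation of an NNF formula:
    dualize each connective/quantifier, flip only the atoms."""
    tag = f[0]
    if tag == 'atom':
        return ('natom', f[1])
    if tag == 'natom':
        return ('atom', f[1])
    if tag == 'and':
        return ('or', neg(f[1]), neg(f[2]))
    if tag == 'or':
        return ('and', neg(f[1]), neg(f[2]))
    if tag == 'forall':
        return ('exists', f[1], neg(f[2]))
    if tag == 'exists':
        return ('forall', f[1], neg(f[2]))
    raise ValueError(f'unknown formula: {f!r}')
```

Note that the operation is literally involutive — neg(neg(f)) returns f itself — which is one small reason the one-sided calculus over NNF wffs is so tidy to work with.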

And Avigad now proves twice over that this cut rule is eliminable. So first in §6.3 we get a semantics-based proof that the calculus without cut is already sound and complete. Then in §6.4 we get a proof-theoretic argument that cuts can be eliminated one at a time, starting with cuts on the most complex formulas, with a perhaps exponential increase in the depth of the proof at each stage — you know the kind of thing! Two comments:

  1. The details of the semantic proof will strike many readers as familiar — closely related to the soundness and completeness proofs for a Smullyan-style tableaux system for FOL. And indeed, it’s an old idea that Gentzen-style proofs and certain kinds of tableaux can be thought of as essentially the same, though conventionally written in opposite up-down directions (see Ch XI of Smullyan’s 1968 classic First-Order Logic). In the present case, Avigad’s one-sided sequent calculus without cut is in effect a block tableau system for negation normal formulas where every wff is signed F. Given that those readers whose background comes from logic courses for philosophers will probably be familiar with tableaux (truth-trees), and indeed given the elegance of Smullyan systems, I think it is perhaps a pity that A misses the opportunity to spend a little time on the connections.
  2. A’s sparse one-sided calculus does make for a nicely minimal context in which to run a bare-bones proof-theoretic argument for the eliminability of the cut rule, where we have to look at a very small number of different cases in developing the proof instead of having to hack through the usual clutter. That’s a very nice device! I have to report though that, to my mind, A’s mode of presentation doesn’t really make the proof any more accessible than usual. In fact, once again the compression makes for quite hard going (even though I came to it knowing in principle what was supposed to be going on, I often had to re-read). Even just a few more examples along the way of cuts being moved would surely have helped.

To continue (and I’ll be briefer) §6.5 looks at proof-theoretic treatments of cut elimination for intuitionistic logic, and §6.6 adds axioms for identity into the sequent calculi and proves cut elimination again. §6.7 is called ‘Variations on Cut Elimination’ with a first look at what can happen with theories other than the theory of identity when presented in sequent form. Finally §6.8 returns to intuitionistic logic and (compare §6.5) this time gives a nice semantic argument for the eliminability of cut, going via a generalization of Kripke models.

All good stuff. But I hope it doesn’t sound too ungrateful to say that a student new to sequent calculi and cut-elimination proofs would still do best, I think, to read a few chapters of Negri and von Plato (for example) first, in order later to be able to get a lively appreciation of §6.4 and the following sections of MLC.

Avigad, MLC — 3: are domains sets?

The next two chapters of MLC are on the syntax and proof systems for FOL — in three flavours again, minimal, intuitionistic, and classical — and then on semantics and a smidgin of model theory. Again, things proceed at pace, and ideas come thick and fast.

One tiny stylistic improvement which could have helped the reader (ok, this reader) would have been to chunk up the sections into subsections — e.g. by simply marking the start of a new subsection/new subtheme by leaving a blank line and beginning the next paragraph with “(a)”, “(b)”, etc. A number of times, I found myself re-reading the start of a new para to check whether it was supposed to flow on from the previous thought. Yes, yes, this is of course a minor thing! But readers of dense texts like this need all the help they can get!

So in a bit more detail, how do Chapters 4 and 5 proceed? Broadly following the pattern of the two chapters on PL, in §4.1 we find a brisk presentation of FOL syntax (in the standard form, with no syntactic distinction made between variables-as-bound-by-quantifiers and variables-standing-freely). Officially, recall, wffs that result from relabelling bound variables are identified. But this seems to make little difference: I’m not sure what the gain is.

§4.2 presents axiomatic and ND proof systems for the quantifiers, adding to the systems for PL in the standard ways. §4.3 deals with identity/equality and says something about the “equational fragment” of FOL. §4.4 says more than usual about equational and quantifier-free subsystems of FOL, noting some (un)decidability results. §4.5 briefly touches on prenex normal form. §4.6 picks up the topic (dealt with in much more detail than usual) of translations between minimal, intuitionist, and classical logic. §4.7 is titled “Definite Descriptions” but isn’t, as you might expect, about how to add a description operator, a Russellian iota, but rather about how — when we can prove \forall x\exists! yA(x, y) — we can add a function symbol f such that f(x) = y holds when A(x, y), and all goes as we’d hope. Finally, §4.8 treats two topics: first, how to mock up sorted quantifiers in single-sorted FOL; and second, how to augment our logic to deal with partially defined terms. That last subsection is very brisk: if you are going to treat any varieties of free logic (and I’m all for that in a book at this level, with this breadth) there’s more worth saying.

Then, turning to semantics, §5.1 is the predictable story about full classical logic with identity, crisply done, with soundness and completeness theorems proved. §5.2 tells us more about equational and quantifier-free logics. §5.3 extends Kripke semantics to deal with quantified intuitionistic logic. We then get algebraic semantics for classical and intuitionistic logic in §5.4 (so, as before, A is casting his net more widely than usual — though the treatment of the intuitionistic case is indeed pretty compressed). The chapter finishes with a fast-moving 10 pages giving us two sections on model theory. §5.5 deals with some (un)definability results, and talks briefly about non-standard models of true arithmetic. §5.6 gives us the L-S theorems and some results about axiomatizability. So that’s a lot packed into this chapter. And at a sophisticated level too — it is telling that A’s note at the end of the chapter gives Peter Johnstone’s book on Stone Spaces as a “good reference” for one of the constructions!

What more is there to say? I’m enjoying polishing off some patches of rust! — but as is probably already clear, these initial chapters are pitched at what many student readers will surely find a pretty demanding level, unless they bring quite a bit to the party. That’s not a criticism, I’m just locating the book on the spectrum of texts. I’d have slowed down the exposition a bit at a number of points, but that (as I said in the last post) is a judgement call, and it would be unexciting to go into details here.

One general comment, though. I note that A does define a model for a FOL language as having a set for quantifiers to range over, and with a function (of the right arity) over that set as interpretation for each function symbol, and a relation (of the right arity) over that set as interpretation for each relation symbol. My attention might have flickered, but A seems happy to treat functions and relations as they come, not explicitly trading them in for set-theoretic surrogates (sets of ordered tuples). But then it is interesting to ask — if we treat functions and relations as they come, without going in for a set-theoretic story, then why not treat the quantifiers as they come, as running over some objects plural? That way we can interpret e.g. the first-order language of set theory (whose quantifiers run over more than set-many objects) without wriggling.

A does nicely downplay the unnecessary invocation of sets — though not consistently. E.g. he later falls into saying “the set of primitive recursive functions is the set of functions [defined thus and so]” when he could equally well have said, simply, “the primitive recursive functions are the functions [defined thus and so]”. I’d go for consistently avoiding unnecessary set talk from the off — thus making it much easier for the beginner at serious logic to see when set theory starts doing some real work for us. Three cheers for sets: but in their proper place!

Avigad, MLC — 2: And what is PL all about?

After the chapter of preliminaries, there are two chapters on propositional logic (substantial chapters too, some fifty-five large format pages between them, and they range much more widely than the usual sort of introductions to PL in math logic books).

The general approach of MLC foregrounds syntax and proof theory. So these two chapters start with §2.1 quickly reviewing the syntax of the language of PL (with \land, \lor, \to, \bot as basic — so negation has to be defined by treating \neg A as A \to \bot). §2.2 presents a Hilbert-style axiomatic deductive system for minimal logic, which is augmented to give systems for intuitionist and classical PL. §2.3 says more about the provability relations for the three logics (initially defined in terms of the existence of a derivation in the relevant Hilbert-style system). §2.4 then introduces natural deduction systems for the same three logics, and outlines proofs that we can redefine the same provability relations as before in terms of the availability of natural deductions. §2.5 notes some validities in the three logics and §2.6 is on normal forms in classical logic. §2.7 then considers translations between logics, e.g. the Gödel-Gentzen double-negation translation between intuitionist and classical logic.
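As a concrete gloss on the translations of §2.7: the Gödel–Gentzen double-negation translation can be stated as a simple recursion on syntax. Here is one standard propositional formulation, sketched in Python over a hypothetical tuple encoding — the encoding and function names are my own, and A’s exact clauses may differ in detail. Atoms get double-negated, \bot, \land and \to are handled compositionally, and \lor is traded for its De Morgan dual:

```python
# Propositional wffs as nested tuples: ('p', name), ('bot',),
# ('and', A, B), ('or', A, B), ('imp', A, B).
# As in MLC, negation is defined: not-A is A -> bot.

def neg(a):
    return ('imp', a, ('bot',))

def gg(f):
    """One common form of the Gödel–Gentzen translation: if f is
    classically provable, gg(f) is intuitionistically provable."""
    tag = f[0]
    if tag == 'p':
        return neg(neg(f))                      # atoms are double-negated
    if tag == 'bot':
        return f
    if tag == 'and':
        return ('and', gg(f[1]), gg(f[2]))
    if tag == 'imp':
        return ('imp', gg(f[1]), gg(f[2]))
    if tag == 'or':                             # disjunction via De Morgan
        return neg(('and', neg(gg(f[1])), neg(gg(f[2]))))
    raise ValueError(f'unknown formula: {f!r}')
```

So, for instance, an atom p goes to ((p \to \bot) \to \bot), i.e. \neg\neg p.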

As you’d expect, this is all technically just fine. Now, A says in his Preface that “readers who have had a prior introduction to logic will be able to navigate the material here more quickly and comfortably”. But I suspect that in fact some prior knowledge will be pretty essential if you are really going to get much out of the discussions here. To be sure, the point of the exercise isn’t (for example) to get the reader to be a whizz at knocking off complex Gentzen-style natural deduction proofs; but are there quite enough worked examples for the real newbie to get a good feel for the claimed naturalness of such proofs? Is a single illustration of a Fitch-style alternative helpful? I’m doubtful.

And actually, even if you do have some prior logical knowledge, I’m pretty sure that the chapter’s final §2.8 — which takes a very brisk look at other sorts of deductive system, and issues about decision procedures — goes too fast to be likely to carry the majority of readers along.

To continue, Chapter 3 is on semantics. We get the standard two-valued semantics for classical PL, along with soundness and completeness proofs, in §3.1. Then we get interpretations in Boolean algebras in §3.2. Next, §3.3 introduces Kripke semantics for intuitionistic (and minimal) logic — as I said, A is indeed casting his net more widely than usual in introducing PL. §3.4 gives algebraic and topological interpretations for intuitionistic logic. And the chapter ends with §3.5, ‘Variations’, introducing what A calls generalised Beth semantics.

Again I really do wonder about the compression and speed of some of the episodes in this chapter; certainly, those new to logic could find them very hard going. Filters and ultrafilters in Boolean algebras are dealt with at speed; some more examples of Kripke semantics at work might have helped to fix ideas; Heyting semantics is again dealt with at speed. And §3.5 will (surely?) be found challenging.

Still, I think that for someone coming to MLC who already does have enough logical background (perhaps half-baked, perhaps rather fragmentary) and who is mathematically adept enough, these chapters — perhaps initially minus their last sections — should bring a range of technical material into a nicely organised story in a helpful way, giving a good basis for pressing on through the book.

For me, the interest here in the early chapters of MLC is in thinking through what I might do differently and why. As just implied, without going into more detail here, I’d have gone rather more slowly at quite a few points (OK, you might think too slowly — these things are of course a judgment call). But perhaps as importantly, I’d have wanted to add some more explanatory/motivational chat.

For example, what’s so great about minimal logic? Why talk about it at all? We are told that minimal logic “has a slightly better computational interpretation” than intuitionistic logic. But so what? On the face of it, it doesn’t treat negation well, its Kripke semantics doesn’t take the absurdity constant seriously, and, lacking disjunctive syllogism, it isn’t a sane candidate for regimenting mathematical reasoning (platonistic or constructive). And if your beef with intuitionist logic is relevantist worries about its explosive character, then minimal logic is hardly any better (since from a contradiction we can derive any and every negated wff). So — a reader coming with a bit of logical background might wonder — why seriously bother with it? It would be worth saying something.

Another example. We are introduced to the language of propositional logic (with a fixed infinite supply of propositional letters). Glancing ahead, in Chapter 4 we meet a multiplicity of first-order languages (with their various proprietary and typically finite supplies of constants and/or relation symbols and/or function symbols). Why the asymmetry?

Connectedly but more basically, what actually are the propositional variables doing in a propositional language? Different elementary textbooks say different things, so A can’t assume that everyone is approaching the discussions here with the same background understanding of what PL is all about. A cheerfully says that the propositional variables “stand for” propositions. Which isn’t very helpful given that neither “stand for” nor “proposition” is at all clear! Now, of course, you can get on with the technicalities of PL (and FOL) while blissfully ignoring the question of what exactly the point of it all is, what exactly the relation is between the technical games being played and the real modes of mathematical argumentation you are, let’s hope, intending to throw light on when doing mathematical logic. But I still think that an author owes us some story about their understanding of these rather contentious issues, an understanding which must shape their overall approach. A notes in his Preface that we can use formal systems to understand patterns of mathematical inference (though he then oddly says that “the role of a proof system” for PL “is to derive tautologies”, when you would expect something like “is to establish tautological entailments”); and perhaps more will come out as the book progresses about how he conceives of this use. But I’d have liked to have seen rather more upfront.

Avigad, MLC — 1: What are formulas?

I noted before that Jeremy Avigad’s new book Mathematical Logic and Computation has already been published by CUP on the Cambridge Core system, and the hardback is due any day now. The headline news is that this looks to me the most interesting and worthwhile advanced-student-orientated book that has been published recently.

I’m inspired, then, to blog about some of the discussions in the book that interest me for one reason or another, either because I might be inclined to do things differently, or because they are on topics that I’m not very familiar with, or (who knows?) maybe for some other reason. I’m not planning a judicious systematic review, then: these will be scattered comments shaped by the contingencies of my own interests!

Chapter 1 of MLC is on “Fundamentals”, aiming to “develop a foundation for reasoning about syntax”. So we get the usual kinds of definitions of inductively defined sets, structural recursion, definitions of trees, etc. and applications of the abstract machinery to defining the terms and formulas of FOL languages, proving unique parsing, etc.

This is done in a quite hard-core way (particularly on trees), and I think you’d ideally need to have already done a mid-level logic course to really get the point of various definitions and constructions. But A [Avigad, the Author, of course!] notes that he is here underwriting patterns of reasoning that are intuitively clear enough, so the reader can at this point skim, returning later to nail down various details on a need-to-know basis.

But there is one stand-out decision that is worth pausing over. Take the two expressions \forall xFx and \forall yFy. The choice of bound variable is of course arbitrary. It seems we have two choices here:

  1. Just live with the arbitrariness. Allow such expressions as distinct formulas, but prove that formulas like these which can be turned into each other by the renaming of bound variables (formulas which are \alpha-equivalent, as they say) are always interderivable, and logically equivalent too.
  2. Say that formulas proper are what we get by quotienting expressions by \alpha-equivalence, and lift our first-shot definitions of e.g. wellformedness for expressions of FOL to become definitions of wellformedness for the more abstract formulas proper of FOL.

Now, as A says, there is in the end not much difference between these two options; but he plumps for the second option, and for a reason. The thought is this. If we work at expression level, we will need a story about allowable substitutions of terms for variables that blocks unwanted variable-capture. And A suggests there are three ways of doing this, none of which is entirely free from trouble.

  1. Distinguish free from bound occurrences of variables, define what it is for a term to be free for a variable, and only allow a term to be substituted when it is free to be substituted. Trouble: “involves inserting qualifications everywhere and checking that they are maintained.”
  2. Modify the definition of substitution so that bound variables first get renamed as needed — so that the result of substituting y + 1 for x in \exists y(y > x) is something like \exists z(z > y + 1). Trouble: “Even though we can fix a recipe for executing the renaming, the choice is somewhat arbitrary. Moreover, because of the renamings, statements we make about substitutions will generally hold only up to \alpha-equivalence, cluttering up our statements.”
  3. Maintain separate stocks of free and bound variables, so that the problem never arises. Trouble: “Requires us to rename a variable whenever we wish to apply a binder.”
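For concreteness, the second of these strategies — renaming a bound variable on the fly whenever a substitution threatens capture — might be sketched like this in Python. The tuple encoding and the fresh-variable scheme are my own illustrative choices, not anything from MLC:

```python
import itertools

# Terms and wffs as tuples: ('var', v), ('fn', f, t1, ...),
# ('atom', R, t1, ...), ('and', A, B), ('or', A, B), ('imp', A, B),
# ('forall', v, A), ('exists', v, A).

_fresh = itertools.count()   # supply of fresh variable names v0, v1, ...

def free_vars(t):
    """The free variables of a term or formula."""
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag in ('fn', 'atom'):
        return set().union(set(), *[free_vars(u) for u in t[2:]])
    if tag in ('and', 'or', 'imp'):
        return free_vars(t[1]) | free_vars(t[2])
    if tag in ('forall', 'exists'):
        return free_vars(t[2]) - {t[1]}
    raise ValueError(t)

def subst(f, x, s):
    """Substitute term s for free occurrences of variable x in f,
    renaming a bound variable whenever it would capture a free
    variable of s."""
    tag = f[0]
    if tag == 'var':
        return s if f[1] == x else f
    if tag in ('fn', 'atom'):
        return (tag, f[1]) + tuple(subst(t, x, s) for t in f[2:])
    if tag in ('and', 'or', 'imp'):
        return (tag, subst(f[1], x, s), subst(f[2], x, s))
    if tag in ('forall', 'exists'):
        v, body = f[1], f[2]
        if v == x:                      # x is bound here: nothing to do
            return f
        if v in free_vars(s):           # capture threatens: rename v first
            w = f'v{next(_fresh)}'
            body = subst(body, v, ('var', w))
            v = w
        return (tag, v, subst(body, x, s))
    raise ValueError(f)
```

Run on A’s own example — substituting a term with y free for x in \exists y(y > x) — the bound y duly gets renamed to a fresh variable, so the free y of the substituted term escapes capture. And, as A says, the fresh-name recipe here is fixed but somewhat arbitrary, which is exactly why claims about such substitutions tend to hold only up to \alpha-equivalence.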

I’m not quite sure how we weigh the complications of the first two options against the complications involved in going abstract and defining formulas proper by quotienting expressions by \alpha-equivalence. But be that as it may. The supposed trouble counting against the third option is, by my lights, no trouble at all. In fact A is arguably quite misdescribing what is going on in that case.

Taking the Gentzen line, we distinguish constants with their fixed interpretations, parameters or temporary names whose interpretation can vary, and bound variables which are undetachable parts of a quantifier-former we might represent ‘\forall x \ldots\ x\ldots \ x\ldots’. And when we quantify Fa to get \forall xFx we are not “renaming a variable” (a trivial syntactic change) but replacing the parameter which has one semantic role with an expression which is part of a composite expression with a quite different semantic role. There’s a good Fregean principle, use different bits of syntax to mark different semantic roles: and that’s what is happening here when we replace the ‘a’ by the ‘x’, and at the same time bind with the quantifier \forall x (all in one go, so to speak).

So it seems to me that this third option is markedly more attractive than A has it (it handles issues about substitution nicely, and meshes with the elegant story about semantics which has \forall xFx true on an interpretation when Fa is true however we extend that interpretation to give a referent to the temporary name a). The simplicity of this option compared with the quotienting alternative in fact has the deciding vote for me.

Book note: Topology, A Categorical Approach

Having recently been critical of not a few books here(!), let me mention a rather good one for a change. I’ve had on my desk for a while a copy of Topology: A Categorical Approach by Tai-Danae Bradley, Tyler Bryson and John Terilla (MIT 2020). But I have only just got round to reading it, making a first pass through with considerable enjoyment and enlightenment.

The cover says that the book “reintroduces basic point-set topology from a more modern, categorical perspective”, and that frank “reintroduces” rather matters: a reader who hasn’t already encountered at least some elementary topology would have a pretty hard time seeing what is going on. But actually I’d say more. A reader who is innocent of entry-level category theory will surely have quite a hard time too. For example, in the chapter of ‘Preliminaries’ we get from the definition of a category on p. 3 to the Yoneda Lemma on p. 12! To be sure, the usual definitions we need are laid out clearly enough in between; but I do suspect that no one for whom all these ideas are genuinely new is going to get much real understanding from so rushed an introduction.

But now take instead a reader who already knows a bit of topology and who has read Awodey’s Category Theory (for example). Then they should find this book very illuminating — both deepening their understanding of topology and rounding out their perhaps rather abstract view of category theory by providing a generous helping of illustrations of categorial ideas doing real work (particularly in the last three chapters). Moreover, this is all attractively written, very nicely organized, and (not least!) pleasingly short at under 150 pages before the end matter.

In short, then: warmly recommended. And all credit too to the authors and to MIT Press for making the book available open-access. So I need say no more here: take a look for yourself!

Avigad on Mathematical Logic and Computation

A heads up, as they say. Jeremy Avigad’s new book Mathematical Logic and Computation has now been published by CUP (or at least, an e-version is already available on the Cambridge Core system if you have access — with the hardback due soon). Here’s a link to the front matter of the book, which gives you the Table of Contents and the Preface. Between them, they give you a fair idea of the coverage of the book.

As you’d expect from this author, this book is very much worth having, an excellent addition to the literature, with more than enough divergences and side-steps from the well-trodden paths through the material to be consistently interesting. Having quickly read a few chapters, and dipped into a few more, I’d say that the treatments of topics, though very clear, are often rather on the challenging side (Avigad’s Carnegie Mellon gets very high-flying students in this area!). For example, the chapters on FOL would probably be best tackled by someone who has already done a course based on something like Enderton’s classic text. But that’s not a complaint, just an indication of the level of approach.

When the physical version becomes available — so much easier to navigate! — I’m going to enjoy settling down to a careful read through, and maybe will comment in more detail here. Meanwhile, this is most certainly a book to make sure your library gets.

The Annotated Gödel

Some years ago, Charles Petzold published his The Annotated Turing which, as its subtitle tells us, provides a guided tour through Alan Turing’s epoch-making 1936 paper. I was prompted at the time to wonder about putting together a similar book, with an English version of Gödel’s 1931 paper interspersed with explanatory comments and asides. But I thought I foresaw too many problems. For a start, not having any German, I’d have had to use one of the existing translations, which would lead to copyright issues, and presumably extra problems if I wanted to depart from the adopted translation by e.g. rendering Bew by Prov, etc. And then I felt it wouldn’t be at all easy to find a happy level at which to pitch the commentary.

Plan (A) would be to follow Petzold, who is very expansive and wide ranging about Turing’s life and times, and is aiming for a fairly wide readership too (his book is over 370 pages long). I wasn’t much tempted to try to emulate that.

Plan (B) would be to write for a much narrower audience, readers who are already familiar with some standard modern textbook treatment of Gödelian incompleteness and who want to find out how, by comparison, the original 1931 paper did things. You then wouldn’t need to spend time explaining e.g. the very ideas of primitive recursive functions or Gödel numberings, but could rapidly get down to noting the quirks in the original paper, giving a helping hand to the logically ept so that they can navigate through. However, the Introductory Note to the paper in the Collected Works pretty much does that job. OK, you could say a bit more (25 pages, perhaps, rather than 14). But actually the original paper is more than clear enough for that to be hardly necessary, if you have already tackled a good modern treatment and then read that Introductory Note for guidance in reading Gödel himself.

Plan (C) would take a middle course. Not ranging very widely, sticking close to Gödel’s text. But also not assuming much logical background or any prior acquaintance with the incompleteness theorems, so having to slow down to explain ideas of formal systems, primitive recursion and so on and so forth. But to be frank, I didn’t and don’t think Gödel’s original paper is the best peg on which to hang a first introduction to the incompleteness theorems. Better to write a book like GWT! So eventually I did just that, and dropped any thought of doing for Gödel something like Petzold’s job on Turing.

But now, someone has bravely taken on that project. Hal Prince, a retired software engineer, has written The Annotated Gödel, a sensibly-sized book of some hundred and eighty pages, self-published on Amazon. Prince has retranslated the incompleteness paper in a somewhat more relaxed style than the version in the Collected Works, interleaving commentary intended for those with relatively little prior exposure to logic. So he has adopted plan (C). And the thing to say immediately — before your heart sinks, thinking of the dire quality of some amateur writings on Gödel — is that the book does look entirely respectable!

Actually, I shouldn’t have put it quite like that, because I do have my reservations about the typographical look of the book. Portions of different lengths of the translation of the 1931 paper are set in pale grey panels, separated by episodes of commentary. And Prince has taken the decidedly odd decision not to allow the grey textboxes containing the translation to split themselves over pages. This means that an episode of commentary can often finish halfway down the page, leaving blank inches before the translation continues in a box at the top of the next page. And there are other typographical choices which also unfortunately make for a somewhat unprofessional look. That’s a real pity, and does give a quite misleading impression of the quality of the book.

Now, I haven’t read the book with a beady eye from cover to cover; but the translation of the prose seems quite acceptable to me. Sometimes Prince seems to stick a bit closer to Gödel’s original German than the version in the Works, sometimes it is the other way about. For example, in the first paragraph of Gödel’s §2, we have

G, undefinierten Grundbegriff: W, primitive notion: P, undefined basic notion,

while in the very next sentence we have

G, Die Grundzeichen: W, The primitive signs: P, The symbols.

But such differences are relatively minor.

Where P’s translation of G departs most is not in rendering the German prose but in handling symbolism. W just repeats on the Englished pages exactly the symbolism that is in the reprint of G on the opposite page. But where G and W both e.g. have “Bew(x)”, P has “isProv(x)”. There’s a double change then. First, P has rendered the original “Bew”, which abbreviated “beweisbare Formel”, to match his translation for the latter, i.e. “provable formula”. Perhaps a good move. But Prince has also included an “is” (to indicate that what we have here is an expression attributing a property, not a function expression). To my mind, this makes for a bit of unnecessary clutter here and elsewhere: you don’t need to be explicitly reminded on every use that e.g. “Prov” expresses a property, not a function.

Elsewhere the renditions of symbolism depart further. For example, G has “A_1\mbox{-}Ax(x)” for the wff which says that x is an instance of Axiom Schema II.1. P has “isAxIIPt1(x)”. And there’s a lot more of this sort of thing which makes for some very unwieldy symbolic expressions that I don’t find particularly readable.

There are other debatable symbolic choices too. P has “\Rightarrow” for the object language conditional, which is an unfortunate and unnecessary change. And P writes “x[v \leftarrow y]” for the result of substituting y for v where v is free in x. This may be a compsci notation, but to my untutored eyes it makes for a mess (and I’d say it is bad policy too to have arrows in different directions meaning such different things).
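For readers unused to the x[v \leftarrow y] notation, here is a toy Python sketch of what the substitution operation amounts to. This is purely my own illustration (the tuple representation of formulas is invented for the example, and has nothing to do with Prince’s or Gödel’s actual machinery); note too that it simply skips bound occurrences and makes no attempt at capture-avoidance, which a serious implementation would need.

```python
# Toy sketch of the substitution x[v <- y]: replace the free occurrences
# of variable v in x by y. Formulas/terms are represented as strings
# (variables, constants) or nested tuples like ("forall", "v", body) --
# an illustrative encoding only.

def subst(x, v, y):
    """Return x with free occurrences of variable v replaced by y."""
    if isinstance(x, str):                  # a variable or constant symbol
        return y if x == v else x
    op, *rest = x
    if op == "forall" and rest[0] == v:     # v is bound here: leave x alone
        return x
    return (op, *(subst(part, v, y) for part in rest))

# The free occurrence of "v" is replaced; the bound one is untouched:
example = ("imp", "v", ("forall", "v", "v"))
print(subst(example, "v", "c"))   # ("imp", "c", ("forall", "v", "v"))
```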

Other choices for rendering symbolism involve more significant departures from G’s original but are also arguably happier (let’s not pause to wonder what counts as faithful enough translation!). For example, there is a moment when G has the mysterious “p = 17 Gen q”: P writes instead “p = forall(\mathbf{x_1}, q)”. In G, 17 is the Gödel number for the variable x_1: P uses a convention of bolding a variable to give its Gödel number, which is tolerably neat.
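To see why “17” turns up there at all, recall Gödel’s coding: basic signs get fixed odd numbers (with “Π”, “(”, “)” coded 9, 11, 13), type-1 variables get the primes greater than 13 (so x_1 is 17), and a sequence of signs with codes n_1, …, n_k is coded as 2^{n_1}·3^{n_2}·…·p_k^{n_k}. Here is a toy Python sketch of that sequence coding; it is my own simplified illustration (in particular, it treats the generalized formula as a single placeholder code rather than splicing in its whole sign-sequence, as Gödel’s arithmetical definition of Gen really does).

```python
# Toy sketch of Gödel's 1931 sequence coding (simplified illustration).

def primes(n):
    """Return the first n primes: 2, 3, 5, ..."""
    ps = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):   # no earlier prime divides k
            ps.append(k)
        k += 1
    return ps

def code_sequence(sign_codes):
    """Code a sequence of sign-codes n1..nk as 2^n1 * 3^n2 * ... * pk^nk."""
    result = 1
    for p, n in zip(primes(len(sign_codes)), sign_codes):
        result *= p ** n
    return result

# Sign codes from Gödel's paper; x1, the first type-1 variable, is 17.
X1, PI, LPAREN, RPAREN = 17, 9, 11, 13

def gen(var_code, formula_placeholder):
    """Schematic '17 Gen q': code the sign-sequence  x1 Pi ( q ).
    Simplification: q stands in as one code, not its full sequence."""
    return code_sequence([var_code, PI, LPAREN, formula_placeholder, RPAREN])
```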

There’s more to be said, but I think your overall verdict on the translation element of Prince’s book might go either way. The prose is, as far as I can judge, handled well. The symbolism is tinkered with in a way which makes it potentially clearer on the small scale, but makes for some off-putting longwindedness when rendering long formulas. If you are going to depart from Gödel’s symbolism, I don’t think that P chooses the best options. But as they say, you pays your money and makes your choice.

But what about the bulk of the book, the commentary and explanations interspersed with the translation of Gödel’s original? My first impression is definitely positive (as I said, I haven’t yet done a close reading of the whole). We do get a lot of helpful framing of the kind e.g. “Gödel is next going to define … It is easier to understand these definitions if we think about what he needs and where he is going.” And Prince’s discussions as we go along do strike me as consistently sensible and accurate enough, and will indeed be helpful to those who bring the right amount to the party.

I put it like that because, although I think the book is intended for those with little background in logic, I really do wonder whether e.g. the twenty pages on the proof of Gödel’s key Theorem VI will gel with those who haven’t previously encountered an exposition of the main ideas in one of the standard textbooks. This is the very difficulty I foresaw in pursuing plan (C). Most readers without much background will be better off reading a modern textbook.

But, on the other hand, those who have already read GWT (to pick an example at random!), i.e. those who already know something of Gödelian incompleteness, should find this a useful companion if they want to delve into the original 1931 paper. Though some of the exposition will now probably be unnecessarily laboured for them, while they would have welcomed some more “compare and contrast” explanations bringing out more explicitly how Gödel’s original relates to standard modern presentations.

In short, then: if someone with a bit of background does want to study Gödel’s original paper, whereas previously I’d just say ‘read the paper together with its Introductory Note in the Collected Works’, I’d now add ‘and, while still doing that, and depending quite where you are coming from and where you stumble, you might very well find some or even all of the commentary in Hal Prince’s The Annotated Gödel a pretty helpful accompaniment’.

In praise of Truss …

Praise not for our new Prime Minister, about whom the less said the better, but for her admirable logician father John Truss.

By chance, I had occasion recently to dip into his 1997 book Foundations of Mathematical Analysis (OUP) which I didn’t know before and which is excellent. It is my sort of book, in that there is a lot of focus on conceptual motivation and Big Ideas, and a relatively light hand with detailed proofs. E.g. if you want just a very few pages on Gödelian incompleteness, his treatment in the first chapter is exemplary. Or jumping to the end of the book, there is e.g. a really helpful broad-brush section on the ingredients of Gödel’s and Cohen’s independence proofs in set theory, and a very good chapter on constructive analysis and choice principles. In between, we get a story that takes us from the naturals to the integers to the reals to metric spaces and more (e.g. a nice chapter on measure and Baire category). OK, this is a tale which is often told, but I think John Truss’s version is particularly insightful and good at bringing out the conceptual shape of the developing story. So, recommended!

OUP have disappointingly let the book go out of print. A Djvu file can, however, be found at your favourite file depository.

(Another) one to miss …

If any title in the Cambridge Elements series was especially designed to catch my interest, it would be Theoretical Computer Science for the Working Category Theorist. Not, of course, that I am in any sense a ‘working category theorist’, but I’m certainly interested, and know just a bit. And I know a smidgin too about computability theory. So I, for one, should be an ideal reader for an accessible short book which sets out to give more insights as to how the two are related. The introduction notes that there are a number of texts aimed at the computer scientist learning category theory (like the excellent Barr and Wells, Category Theory for Computing Science); but Noson Yanofsky promises something quite different, aimed at someone who already knows the basics of category theory and who wants to learn theoretical computer science.

It most certainly didn’t work for me. The author has a line about the central relevance of the notion of a monoidal category here, but makes it quite unnecessarily hard going to extract the key ideas. Indeed, overall, this just strikes me as very badly written. Careless too. What, for example, are we to make of slapdash remarks like “the German mathematician Georg Cantor showed that sets can be self-referential” or “Kurt Gödel showed that mathematical statements can refer to themselves”? (And on Gödel, I doubt that anyone who isn’t already familiar with the proof of the Gödelian incompleteness theorems from the unsolvability of the Halting problem is going to get much real understanding from the discussion on pp. 71–72.)

I could go into a lot more detail, but to be honest I found the book far too disappointing to want to put in the effort required to disentangle things enough to make for useful comments. Sorry to be so negative. And your mileage, of course, might vary …

[Also see Peter F’s comment.]

One True Logic

There’s a new book out by Owen Griffiths and Alex Paseau, One True Logic: A Monist Manifesto (OUP). As the title suggests, this argues against logical pluralism. Yes, of course, there are myriad logical systems which we can concoct and study as interesting mathematical objects. But what about the logic we actually use in reasoning about them and about other mathematical matters? Is there in fact one correct logic which tracks what really does follow from what? Our authors take a conservative line, in that they are anti-pluralist: there is indeed one true logic for in-earnest applications. They are unconservative in defending a highly infinitary logic in that role.

I’ve read the first few chapters with enjoyment and enlightenment. But I’m going to have to shelve the book for the moment, as it will be too distracting from other commitments to engage seriously with the rest of it for a while. One of the delights of somewhat senior years is finding it more difficult to think about more than one thing at a time. (“But what’s new?” murmurs Mrs Logic Matters from the wings.)

For a start, I must continue cracking on with the category theory project. I have now revised Chapters 1 to 15 of Beginning Category Theory. So here they are, in one long PDF which also includes the remaining unrevised chapters from the 2015/2018 Gentle Intro.

In this iteration there are quite a few minor changes to Chapters 1 to 13 (correcting typos, clarifying some phrasing, deleting an unnecessary section, adding a new theorem in §12.2, etc.), though there is nothing very significant there. I have also now revised Chapter 15, the first of the two general chapters on limits/colimits. This and the preceding chapter on equalisers/co-equalisers could surely do with more polishing and lightening-up in places. But as I’ve said previously, I’m including revised chapters when they are at least an improvement on what went before (I’m not waiting for final-draft perfection!).

If you are like me, you are looking for the more-than-occasional consoling distractions from the state of the wider world. Let me share one.

Of the great pianists I have had the chance to hear live over the years, the one I perhaps find the most emotionally engaging of all is Maria João Pires. Her unmannered directedness goes straight to the heart. So here she is, playing Schubert, Debussy and Beethoven, in a video recorded in Gstaad last August.
