Avigad MLC — 6: Arithmetics

Chs 1 to 7 of MLC, as we’ve seen, give us a high-level and often rather challenging introduction to core first-order logic with a quite strongly proof-theoretic flavour. Now moving on, the next three chapters are on arithmetics — Ch. 8 is on primitive recursion, Ch. 9 on PRA, and Ch. 10 on richer first-order arithmetics.

I won’t pause long over Ch. 8, as the basic facts about p.r. functions aren’t wonderfully exciting! Rather characteristically, Avigad dives straight into the general definition of p.r. functions, and then it’s shown that the familiar recursive definitions of addition, exponentiation, and lots more, fit the bill. We then see how to handle finite sets and sequences in a p.r. way. §8.4 discusses other recursion principles which keep us within the set of p.r. functions, and §8.5 discusses recursion along well-founded relations — these two sections are a bit tersely abstract, and it would surely have been good to have more by way of motivating examples. Finally §8.6 tells us about diagonalizing out of the p.r. functions to get a computable but not primitive recursive function, and says just a little about fast-growing functions. All this is technically fine, it goes without saying; though once again I suspect that many students will find this chapter rather more manageable if they have already had a preliminary warm-up with a gentler introduction to the topic.
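Just to convey the flavour for anyone who hasn’t met this material before (my illustration, not a quotation from MLC): the paradigm two-clause definitions are the school-room ones,

  add(x, 0) = x,   add(x, Sy) = S(add(x, y));
  mult(x, 0) = 0,   mult(x, Sy) = add(mult(x, y), x);
  exp(x, 0) = S0,   exp(x, Sy) = mult(exp(x, y), x);

each an instance of the general scheme f(\vec{x}, 0) = g(\vec{x}), f(\vec{x}, Sy) = h(\vec{x}, y, f(\vec{x}, y)), which defines a new function f by primitive recursion from previously available functions g and h.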


But while Ch. 8 might be said to be relatively routine (albeit quite forgivably so!), Ch. 9 is anything but. It is the most detailed and helpful treatment of Primitive Recursive Arithmetic that I know.

Avigad first presents an axiomatization of PRA in the context of a classical quantifier-free first-order logic. Hence

  1. The logic has the propositional connectives, axioms for equality, plus a substitution rule (wffs with variables are treated as if universally quantified, so from A(x) we can infer A(t) for any term).
  2. We then have a symbol for each p.r. function — and we can think of these as added iteratively, so as each new p.r. function is defined by composition or primitive recursion from existing functions, a symbol for the new function is introduced along with its appropriate defining quantifier-free equations in terms of already-defined functions.
  3. We also have a quantifier-free induction rule: from A(0) and A(x) \to A(Sx), infer A(t) for any term t. (A worked instance follows just after this list.)
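To illustrate the flavour of that induction rule (my toy example, set out in the obvious notation rather than necessarily Avigad’s): suppose the symbol add has been introduced with the defining equations add(x, 0) = x and add(x, Sy) = S(add(x, y)), and let A(x) be the quantifier-free wff add(0, x) = x. Then A(0) is an instance of the first defining equation, A(x) \to A(Sx) follows from the second equation together with the equality axioms, and the induction rule then delivers add(0, t) = t for any term t. That is how elementary arithmetical facts get recovered without ever quantifying over numbers.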

§§9.1–9.3 explore this version of PRA in some detail, deriving a lot of arithmetic, showing e.g. that PRA proves that if p is prime and p \mid xy, then either p \mid x or p \mid y, and noting along the way that we could have used an intuitionistic version of the logic without changing what’s provable.

Then the next two sections very usefully discuss two variant presentations of PRA. §9.4 enriches the language and the logic by allowing quantifiers, though induction is still just for quantifier-free formulas. It is proved that this is conservative over quantifier-free PRA for quantifier-free sentences. And there’s a stronger result: if full first-order PRA proves the \Pi_2 sentence \forall x \exists y A(x, y), then for some p.r. function symbol f, quantifier-free PRA proves A(x, f(x)) (and we can generalize to more complex \Pi_2 sentences).
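For instance (my example, not one from the book): first-order PRA proves Euclid’s theorem in the \Pi_2 form \forall x \exists y (x < y \land Prime(y)); so by this result there is a p.r. function symbol f (think of it as computing the least prime above x) such that quantifier-free PRA already proves x < f(x) \land Prime(f(x)).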

By contrast §9.5 weakens the language and the logic by removing the connectives, so all we are left with are equations, and we replace the induction rule by a rule which in effect says that functions satisfying the same p.r. definition are everywhere equal. This takes us back — as Avigad nicely notes — to the version of PRA presented in Goodstein’s (now unread?) 1957 classic  Recursive Number Theory, which old hands might recall is subtitled  ‘A Development of Recursive Arithmetic in a Logic-Free Equation Calculus’.

All this is done in an exemplary way, I think. Perhaps Avigad is conscious that in this chapter he is travelling over ground that is significantly less well-trodden in other textbooks, and so here he allows himself to be rather more expansive in his motivating explanations.


The following Ch. 10 is the longest in the book, some forty-two pages on ‘First-Order Arithmetic’. Or rather, the chapter is on arithmetics, plural — for as well as the expected treatment of first-order Peano Arithmetic, with nods to Heyting Arithmetic, there is also a perhaps surprising amount here about subsystems of classical PA.

In more detail, §10.1 briefly introduces PA and HA. You might expect next to get a section explaining how PA (with its rather minimal vocabulary) can in fact be seen as extending the PRA which we’ve just met (with all its built-in p.r. functions). But we have to wait until §10.4 to get the story about how to define p.r. functions using some version of the beta-function trick. In between, there are two longish sections on the arithmetical hierarchy of wffs, and on subsystems of PA with induction restricted to some level of the hierarchy. Then §10.5 shows e.g. how truth for \Sigma_n sentences can be defined in a \Sigma_n way, that I\Sigma_{n+1} (arithmetic with induction for \Sigma_{n + 1} wffs) can prove the consistency of I\Sigma_{n}, and also — again using truth-predicates — that I\Sigma_{n} is finitely axiomatizable.
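In case it helps to have §10.4’s trick on the table (my summary of the standard construction; Avigad’s details may differ): Gödel’s beta function can be taken to be \beta(a, b, i) = rem(a, 1 + (i+1)b), the remainder when a is divided by 1 + (i+1)b. The Chinese Remainder Theorem then guarantees that for any finite sequence n_0, n_1, \ldots, n_k there are numbers a and b such that \beta(a, b, i) = n_i for each i \le k. Since \beta is definable from addition and multiplication, PA’s spare vocabulary can after all talk about arbitrary finite sequences, and so can express ‘there is a sequence of values witnessing such-and-such a recursion’, which is how the p.r. functions get captured.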

There’s a minor glitch. In the proof of 10.3.5 there is a reference to the eighth axiom of Q — but Robinson arithmetic isn’t in fact introduced until Chapter 12. Otherwise, so far, so good. The material here is all stuff that is very good to know. Yes, the discussion is a bit dense in places; but it should all be pretty manageable because the ideas are, after all, straightforward enough.

However, the chapter ends with another ten pages whose role in the book I’m not so sure about. §10.6 proves three theorems using cut-elimination arguments, namely (1) that I\Sigma_{1} is conservative over PRA for \Pi_2 formulas; (2) Parikh’s Theorem; and (3) that so-called B\Sigma_{n+1} (arithmetic with the \Sigma_{n+1} collection scheme) is conservative over I\Sigma_{n} for \Pi_{n+2} wffs. What gives these results, in particular the third, enough interest to labour through them? They are, as far as I can see, never referred to again in later chapters of the book. And yet §10.7 then proves the same theorems again using model-theoretic arguments. I suppose that these pages give us samples of the kinds of conservation results we can achieve and some methods for proving them. But I’m not myself convinced they really deserve this kind of substantial treatment in a book at this level.

Super-infinite

Not, then, a maths post on the infinite, but rather a brief but very warm recommendation for Katherine Rundell’s biography of John Donne.

Having no young children around, we have a good rule which helps to keep Christmas relaxed — no exchanges of presents (I can really recommend that too!). But we might wander down to the bookshops a day or two before, and get ourselves a couple of new books we might particularly enjoy reading over the holidays. I went to Heffer’s planning to buy Irene Vallejo’s much praised Papyrus. However, browsing around, I found myself gripped by the first few pages of Super-Infinite: The Transformations of John Donne. So that’s what we bought. And I have now read it with great enjoyment over the last few days. I can certainly see why the book won the Baillie Gifford Prize for Non-Fiction.

Her pieces in the London Review of Books in recent years show that Katherine Rundell can write wonderful prose, of great zest and imaginative power. And she has been a prize-winning author of books for children. She brings her immense gifts for story-telling and her stylistic panache to her academic work as an English scholar, here recounting Donne’s quite extraordinary life and so illuminating his poetry.

Rundell’s title Super-infinite comes from one of Donne’s own new formations, and the paradox is impatient with the limitations of language and what it can be used to describe. The book stages an often thrilling meeting between Donne and Rundell, and its basis is this supple, flexible wordplay in which sentences jump between registers and tones, between this world and the next, between everything and something more. On a portrait of the young Donne, soulful and seductive: ‘He wore a hat big enough to sail a cat in, a big lace collar, an exquisite moustache.’ On his despairs and griefs, and the deaths of his family members and too many of his young children: ‘He was a man who walked so often in darkness that it became for him a daily commute.’ On the violence of his style: ‘He wanted to wear his wit like a knife in his shoe.’ Phrases such as these do the work which literary critics are sometimes afraid to do. They attest to love, and perform that love, and in so doing Rundell returns us to a Donne who is new yet old, nicely paradoxical, his own man and everyone’s.

I can’t put it better than that, from Daniel Swift’s review in the Spectator. I couldn’t put Super-Infinite down and was even left wanting more.

So now, having devoured the book so quickly, I suppose I will need to wander down to Heffer’s again, to pick up a copy of Papyrus.

A Christmas card

Another year, another Christmas card. Like last year, this is a picture to be found down the road at the Fitzwilliam Museum, a Nativity long attributed to Bastiano Mainardi, a member of Domenico Ghirlandaio’s workshop and his brother-in-law. But latterly, the Fitz has claimed to see in it the hand of the master himself. Though I do wonder!


Ghirlandaio has been in my mind recently, in part prompted by reading Paola Tinagli’s Women in Renaissance Art (which, among much else, has interesting things to say about the iconography of the great frescos in Santa Maria Novella). And I was prompted to read that book by the references that Maggie O’Farrell gives at the end of her wonderful novel The Marriage Portrait.

The portrait is of the young Lucrezia de’ Medici. Here she is:

Lucrezia is taking her seat at the long dining table, which is polished to a watery gleam and spread with dishes, inverted cups, a woven circlet of fir. Her husband is sitting down, not in his customary place at the opposite end but next to her, close enough that she could rest her head on his shoulder, should she wish; he is unfolding his napkin and straightening a knife and moving the candle towards them both when it comes to her with a peculiar clarity, as if some coloured glass has been put in front of her eyes, or perhaps removed from them, that he intends to kill her. She is sixteen years old, not quite a year into her marriage.

How can you not read on? By some way, my favourite novel of this last year. So that’s my warm recommendation of a book to treat yourself to this Christmastide!


Distracting fictions of one kind or another have been necessary more than ever, no? As Eliot says, “Humankind cannot bear very much reality.” To mention just one reality befalling the world, who would have predicted, twelve months ago, that we would be three hundred days into a simmering land war in Europe? Grim times.

But just for a few days let’s try not to dwell too much on that! May you and yours indeed have a very happy Christmas, and I earnestly hope your New Year is a peaceful one.

Why is it so difficult to introduce type theory to the rest of us?

One episode in the Beginning Mathematical Logic Study Guide which I must radically revise in the next edition is the final section on type theories, which was tacked on almost as an afterthought. But it will, indeed, take quite a bit of work to organize a better overview of what is a messy area, and devotees of varieties of type theory are not always the clearest of advocates to help us along.

Egbert Rijke has an Introduction to Homotopy Type Theory coming out soon with CUP. This is a textbook aimed at quite a wide readership: “it introduces the reader to Martin-Löf’s dependent type theory, to the central concepts of univalent mathematics, and shows the reader how to do mathematics from a univalent point of view. … The book is entirely self-contained, and in particular no prior familiarity with type theory or homotopy theory is assumed.” Which sounds promising. And a late pre-print can now be downloaded: here’s the link. This may well prove very useful to some readers of this blog, depending on your background. And all credit to Rijke for making his text freely available on the arXiv.

The first part of the book is, as announced, on Martin-Löf’s version of type theory. I’ve dived into this 98-page introduction hopefully. But I can’t, to be honest, say that I have wildly enjoyed the experience — Rijke makes it no easier than others for a conservatively-minded logician to happily find their way in. He acknowledges “Type theory [or at least, type theory of this stripe] can be confusing for people who are new to the subject,” which means that many of us could do with rather more explanatory chat than we get here. For a small but not insignificant example, right at the outset we are told without further ado that “The expression 𝑎 : 𝐴 is … not considered to be a proposition, i.e., something which one can assert about an arbitrary element and an arbitrary type, but it is considered to be a judgment, i.e., an assessment that is part of the construction of the element 𝑎 : 𝐴.” Is the distinction between a proposition and a judgement transparently clear to you? No? Me neither. (Amusingly, I asked ChatGPT to give a simple explanation of the difference between propositions and judgements in Martin-Löf, and it was a lot clearer. On its first attempt. Though it lost the plot a bit when the question was re-asked.)

Well, meaning is use and all that, and eventually the mists clear somewhat as the notions of proposition and judgement get used later in Rijke. More generally, if you have in fact already encountered a bit of type theory, his explanations will probably serve well for revision and consolidation. But we still await (OK, I still await) a really introductory text on dependent-type-theory-for-old-fashioned-logicians.

Off to Boston Spa

A small package of hardback copies of Gödel Without (Too Many) Tears has just arrived. One copy I’ll give to the Moore Library here (the university’s main library for the mathematical sciences); another will go to my long-ago college (once a Trinity mathmo, always a Trinity mathmo). A third will be sent off to Boston Spa, because you are legally required to deliver a copy of any book you publish to the British Library.

I suppose it is quite likely that this third copy will disappear into the “Additional Storage Building” there, never to be seen again. In his strangely rambling but sporadically gripping Bibliophobia, Brian Cummings sets the scene:

From high in the roof, the book robot swings down in an arc in a vertical plane, 10 m or more in a single movement, between stacks ranking 20 m high. It pauses, chooses a stack, then hovers, humming all the time, hunting for the book that it is programmed to find. The scale of the building is difficult for a human to take in. The void is 24 m high by 24 m wide by 64 m long. In any case, the room is not designed for humans. Oxygen levels are kept at 14–15 per cent, which is similar to trying to breathe at the top of a Himalayan mountain. Visitors watch from a special cage, advised to leave after fifteen minutes for their own safety. This library is not a human environment, for it is designed for habitation by books and robots alone.

This is a library for the twenty-first century. … Since space at the main site [of the British Library] at St Pancras in London is at a premium, it is proposed to house books 262 km away and transport them between the Reading Rooms in both locations. Eventually, it is planned that seven million items will be held at the Additional Storage Building in Boston Spa. All of this makes logical sense …

The robotic crane adds a volume to a pile that it is assorting in a plastic bucket. In time, it delivers this to an airlock at the end of the room. There are 140,000 bar-coded containers. It is only at this moment that human intervention takes over, as staff retrieve items from the airlock to send to Reading Rooms, a maximum of 48 hours from ordering to arrival at a London desk. Yet if a human librarian wanted to enter the vault to retrieve a book, using either gigantic ladders or high-wire trapeze artists, it would not be feasible. The books are no longer on fixed shelves ear-marked for their location. Only the crane knows where the books are. It never puts a book back as it found it, absorbing the used item back into the system in the order in which it comes, then remembering where it was. The books are engaged in an eternal game of musical buckets, finding new neighbours, and slotting in accordingly. Only humans need shelf marks any longer: the shelves have gone. The retrieval system is fully automated …

Logical, but the stuff of Borgesian nightmare, squirrelling away everything in a limitless library, including the unread and the unreadable, published in their tens of thousands a year. Libraries, you feel (well, I feel), should be places of more homely comfort — whether in a private room or two with friendly familiars on the walls, or in a congenial public space for browsing along the shelves and reading and working with others similarly occupied.

I’m sure no one would notice if GWT2 never arrived at Boston Spa. But duty calls.

Logicisms and Gödel’s Theorem

Russell famously announced “All mathematics deals exclusively with concepts definable in terms of a very small number of logical concepts and … all its propositions are deducible from a very small number of fundamental logical principles.” That wildly ambitious version of logicism is evidently sabotaged by Gödel’s Theorem which shows that, fix on a small number of logical principles and definitions as you will, you won’t even be able to derive from them all arithmetical truths, let alone the rest of mathematics. But how do things stand with latter-day, perhaps less ambitious, forms of neo-logicism?

Second-order logic plus Hume’s Principle gives us second-order Peano arithmetic, and True Arithmetic is a semantic consequence. Buy, for the sake of argument, that Hume’s Principle is in some sense as-good-as-analytic. But how does that help with the epistemological ambitions of a logicism once we see that Gödel’s Theorem shows that second-order semantic consequence is not axiomatizable? Fabian Pregel at Oxford has a very nice piece ‘Neo-Logicism and Gödelian Incompleteness’ coming out shortly in Mind, arguing first that the earlier Wright/Hale canonical writings on their neo-logicism unhappily vacillate, and then that Wright’s more extended discussion of the issue in his 2020 ‘Replies’ (in the volume of essays on his work edited by Alexander Miller) is also unsatisfactory.

I’ll leave Pregel to speak for himself, and just recommend you read his piece when you can. But I was prompted  to look again at Neil Tennant’s recent discussion of his deviant form of neo-logicism (which isn’t Pregel’s concern). Right at the outset of his The Logic of Number (OUP, 2022), Tennant is emphatic that (a) respecting what he calls the Gödel phenomena must be absolutely central in any sort of foundationalist story. But he still wants (b) to defend a version of logicism about the natural numbers (using introduction and elimination rules in a first-order context). So how does he square his ambition (b) with his vivid recognition that (a)?

Tennant  writes:

Logicism maintains that Logic (in some suitably general and powerful sense that will have to be defined) is capable of furnishing definitions of the primitive concepts of this main branch of mathematics. These definitions allow one to derive the mathematician’s ‘first principles’ of number theory as results within logic itself. The logicist is therefore purporting to uncover a deeper source of justification for these ‘first principles’ than just that they seem obvious or self-evident to mathematicians working in the branch of mathematics in question, … [p. 5]

So he is out to defend what Wright 2020 calls the “Core logicist thesis” that at least we can get to the mathematician’s familiar Peano Axioms starting from logic-plus-definitions.  And the methods of his version of logicism, Tennant says,

can be used to determine to what extent the truths in a particular branch of mathematics might be logical in their provenance. So it is more nuanced and discerning than logicism in its original and ambitious form, even when confined to number theory. [pp 5–6]

the formulation of mathematical theories in terms of introduction and elimination rules for the main logico-mathematical operators furnishe[s] a principled basis for drawing an analytic/synthetic distinction within those mathematical theories. [p. 13]

So Tennant is quite happy with an analytic/synthetic distinction being applied within the class of first-order arithmetical truths. In fact, that was already his view back in 1987, when his version of logicism in Anti-Realism and Logic dropped more or less stone-dead from the press (Neil has a terrible habit of trying to take on too much at once, as I’d say he does in his latest book — so that earlier book had something to annoy everyone, and I doubt that very many got to the final chapters!).

Whether Tennant should be happy about the idea of non-logical arithmetical truths concerning logical objects is another question, of course, but that’s the line.

OK: so the criterion of success, for Tennant, is that the logicist

accounts for what kind of things the natural numbers are, and thereby also enables one rigorously (and constructively, and relevantly) to derive as theorems the postulates of ‘pure’ Dedekind–Peano arithmetic, which the pure mathematician takes as first principles for the pure theory of the natural numbers. [p. 51]

NB ‘derive’ — it is syntactic provability as against semantic consequence more generally that is in question.

To be sure, we share Frege’s logicist aspiration to establish at least the natural numbers as logical objects, and to derive the Dedekind–Peano postulates that govern them from a deeper, purely logical foundation. Moreover, we claim to have succeeded where Frege himself had failed, for want of a consistent foundational logical theory. The natural numbers, though sui generis, are logical objects. They are recognizable and identifiable as such because of the role they play in our thoughts about objects that fall under sortal concepts. Our logico-genetic path to the natural numbers has proved to be fully logicist. And it does not take Hume’s Principle as its point of departure. [pp 54–55]

And a bit later — and here we interestingly link up with some remarks of Pregel’s which also touch on what has come to be called “Isaacson’s Thesis” — Tennant writes

Note that it will suffice, for the natural logicist to be able to claim substantial success in this project, to recover the axioms of Dedekind–Peano arithmetic. Indeed, if the natural logicist manages to succeed only on Dedekind–Peano arithmetic, this might offer the explanation sought by Isaacson [1987] of why it is that the axioms of Dedekind–Peano arithmetic are so very ‘natural’. As Isaacson puts it,

… Peano Arithmetic occupies an intrinsic, conceptually well-defined region of arithmetical truth. … it consists of those truths which can be perceived directly from the purely arithmetical content of a categorical conceptual analysis of the notion of natural number.

We have already acknowledged Gödelian incompleteness in arithmetic, and we fully recognize the logical and epistemological challenge posed by it. [p. 69]

So, for Tennant it is an open question how far logicist methods will take us into arithmetic. But that doesn’t impugn, he thinks, its success in giving us the mathematician’s ‘first principles’ for arithmetic. And indeed it might be conceptually important that the logicist method takes us just so far into arithmetic and no further, in the spirit of something like Isaacson’s ideas.

Pregel very reasonably asks the Wright-style neo-logicist

In particular, what account is the Neo-Logicist to offer of the analyticity status of the semantic consequences of HP that are not deductive consequences? Are they analytic as well, though for a different reason? Or synthetic? And how do we account for the fact that different possible choices of second-order deductive systems mean different formulas get categorised as ‘core’?

Tennant would bite the bullet for his version of logicism — they’re synthetic. And his logical framework is first order, so he doesn’t hit the second problem.

OK, that’s all mostly tangential to Pregel’s concerns with the canonical version of neo-logicism. But the point at which they both touch on Isaacson is interesting and suggestive — though since Tennant isn’t tangling with second-order logic, he avoids some of the worries Pregel rightly raises about whether we can deploy something like Isaacson’s Thesis in the canonical second-order framework.

Now, I hasten to add, all this isn’t to say that I outright endorse Tennant’s version of logicism. But I do find myself suspecting that his way with Gödel by limiting the ambitions of a logicism is a “best buy” for someone who wants to rescue something of substantive interest out of a latter-day version of logicism.

And now I’m kicking myself — I have just remembered that I meant to change a footnote about neo-logicism in Gödel Without (Too Many) Tears having read Neil when his book came out, and I forgot to do so before publishing the second edition last month. Bother!

Avigad MLC — 5: More on FOL

Following on from the very interesting Chapter 6 on cut-elimination, there’s one further chapter on FOL, Chapter 7 on “Properties of First-Order Logic”. There are sections on Herbrand’s Theorem, on the Disjunction Property for intuitionistic logic, on the Interpolation Lemma, on Indefinite Descriptions and on Skolemization. This nicely follows on from the previous chapter, as the proofs here mostly rely on the availability of cut-elimination. I’m not going to dwell on this chapter, though, which I think most readers will find quite hard going. Hard going in part because, apart from perhaps the interpolation lemma, it won’t this time be obvious from the off what the point of the various theorems is.

Take for example the section on Skolemization. This goes at pace. And the only comment we get about why this might all matter is at the end of the section, where we read:

The reduction of classical first-order logic to quantifier-free logic with Skolem functions is also mathematically and philosophically interesting. Hilbert viewed such functions (more precisely, epsilon terms, which are closely related) as representing the ideal elements that are added to finitistic reasoning to allow reasoning over infinite domains.

So that’s just one sentence expounding on the wider interest  — which is hardly likely to be transparent to most readers! It would have been good to hear more.
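For the uninitiated, the core idea is at least easily illustrated (my example, not the book’s): \forall x \exists y A(x, y) is satisfiable just in case \forall x A(x, f(x)) is, where f is a fresh ‘Skolem function’ symbol whose job is to pick out, for each x, some witnessing y. Iterating, every prenex sentence is equisatisfiable with a purely universal one, and the universal quantifiers can in turn be traded for free variables read schematically; that is the sense in which classical first-order logic gets reduced to quantifier-free logic with Skolem functions. Hilbert’s epsilon term \epsilon y A(x, y), read ‘some y such that A(x, y), if there is one’, plays the same witnessing role.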

Avigad’s sections in this chapter are of course technically just fine and crisply done, and can certainly be used as a good source to consolidate and develop your grip on their cluster of topics if you have already met the key ideas. But to be honest I wouldn’t recommend starting here.


OK: The chapters to this point take us some 190 pages into MLC — they form, if you like, a book within the book, on core FOL topics but with an unusually and distinctively proof-theoretic flavour. So this is well worth having. But to repeat what I’ve said more than once before, I suspect that a reader who is going to happily navigate and appreciate the treatments of topics here will typically need rather more background in logic than Avigad implies.

So when I get round to revising the Beginning Mathematical Logic Study Guide, these opening chapters will find their place in the suggested more advanced supplementary readings.

Meeting Leibniz on the white road

The non-fiction, non-logic book which I have most enjoyed this year is the one I have just finished, Edmund de Waal’s remarkable The White Road (from 2015). I was enticed by the jacket description — the author “sets out on a quest – a journey that begins in the dusty city of Jingdezhen in China and travels on to Venice, Versailles, Dublin, Dresden, the Appalachian Mountains of South Carolina and the hills of Cornwall to tell the history of porcelain. Along the way, he meets the witnesses to its creation; those who were inspired, made rich or heartsick by it, and the many whose livelihoods, minds and bodies were broken by this obsession. It spans a thousand years and reaches into some of the most tragic moments of recent times.” It is indeed a quite remarkable read, intriguing and then — when it touches on some of the human disasters of the last century — distressing and moving. As you would expect from The Hare with Amber Eyes or Letter to Camondo, it is quite wonderfully well written — de Waal’s often short, beautifully crafted, paragraphs as carefully arranged as those white porcelain pots of his in their vitrines. Which makes the later pages all the more telling.


I hadn’t expected to bump into Leibniz on the road that takes us to seventeenth-century Chinese porcelain and the attempts of the Jesuit missionaries to discern its secrets. But then I didn’t know that one of the few books Leibniz published in his lifetime was the Novissima Sinica — the very latest news from China! Unlike a Descartes who thinks that what is most fundamental is common to us all and is available through rational reflection, Leibniz is committed to the idea that the growth of knowledge will come from bringing together different perspectives, different ways of thinking, about our shared universe. So he has an extensive correspondence with the Jesuit mission, and (to quote de Waal) writes that

all this activity around China … is ‘un commerce de lumière’; enlightenment stretching both ways. This is a tremendous idea and a beautiful image, one of an equality of concerns, a correspondence of civilisations, of light.

As Leibniz writes to his friend Sophie, the wife of the elector of Hanover:

I will thus have a sign placed at my door with these words: bureau of address for China, because everyone knows that one only has to address me in order to learn some news. And if you wish to know about the great philosopher Confucius … or the drink of immortality which is the philosopher’s stone of that country, or some things which are a little more certain, you only have to order it.

He is one of the gatekeepers. If you want to know about Chinese mathematics, the I Ching as a coding of chance events, Chinese characters and their relationship to hieroglyphs, you go to him. Leibniz has been to visit Father Francesco Grimaldi in Rome, just back from the emperor Kangxi’s court, and written up copious notes on fireworks, glass and metal. I realise to my surprise that my hero, the father of rationalism, is anxious to keep ahead in this new, congested field of China Studies.

So Leibniz knows about the problem of porcelain. Later, he corresponds with the mathematician Ehrenfried Walter von Tschirnhaus, who has a sideline experimenting on how to create porcelain here in Europe, and almost gets there. And the extraordinary story continued, all of it new to me, as I followed The White Road.

Mastodon, maybe: Twitter, not so much

Twitter could be pretty informative and entertaining, and I’d occasionally tweet posts myself. But the Elon Musk takeover seems to be Very Bad News on a number of fronts. So, like many, I’m backing off for the time being, though I’ve not actually deleted my account yet — the Musk era might or might not last. Hence I’ve removed the Twitter feed from the footer that shows on Logic Matters pages (when using a computer or tablet browser).

Instead, I’ll post occasional tweet-sized music-related snippets in the same place — about new CDs, old CDs I’ve been re-listening to, YouTube video links, etc.

Again like many, I’ve joined a Mastodon server, and there’s now a link in the footer to @PeterSmith@mathstodon.xyz. I’m seeing more mathsy stuff and fewer cats and otters. Still exploring who to follow in the wider fediverse. But so far, I’m enjoying the atmosphere and the occasional distractions.

Avigad MLC — 4: Sequent calculi vs tableaux?

Jeremy Avigad’s book is turning out to be not quite what I was expecting. The pace and (often) compression can make for rather surprisingly tough going. There is a shortage of motivational chat, which could make the book a challenging read for its intended audience.

Still, I’m cheerfully pressing on through the book, with enjoyment. As I put it before, I’m polishing off some rust — and improving my mental map of the logical terrain. So, largely as an aide memoire for when I get round to updating the Beginning Mathematical Logic Guide next year, I’ll keep on jotting down a few notes, and I will keep posting them here in case anyone else is interested.


MLC continues, then, with Chapter 6 on Cut Elimination. And the order of explanation is, I think, interestingly and attractively novel.

Yes, things begin in a familiar way. §6.1 introduces a standard sequent calculus for (minimal and) intuitionistic FOL without identity. §6.2 then, again in the usual way, gives us a sequent calculus for classical logic by adopting Gentzen’s device of allowing more than one wff to the right of the sequent sign. But then A notes that we can trade in two-sided sequents, which allow sets of wffs on both sides, for one-sided sequents where everything originally on the left gets pushed to the right of the sequent sign (being negated as it goes). These one-sided sequents (if that’s really the best label for them) are, as far as I can recall, not treated at all in Negri and von Plato’s lovely book on structural proof theory; and they are mentioned as something of an afterthought at the end of the relevant chapter on Gentzen systems in Troelstra and Schwichtenberg. But here in MLC they are promoted to centre stage.

So in §6.2 we are introduced to a calculus for classical FOL using such one-sided, disjunctively read sequents (we can drop the sequent sign as now redundant) — and it is taken that we are dealing with wffs in ‘negation normal form’, i.e. with conditionals eliminated and negation signs pushed as far as possible inside the scope of other logical operators so that they attach only to atomic wffs. This gives us a very lean calculus. There’s the rule that any \Gamma, A, \neg A with A atomic counts as an axiom. There’s just one rule each for \land, \lor, \forall, \exists. There is also a cut rule, which tells us that from \Gamma, A and \Gamma, {\sim}{A} we can infer \Gamma (here {\sim}{A} is notation for the result of putting the negation of A in negation normal form).
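Spelled out (this is my reconstruction of the standard Tait-style presentation, which may differ in inessential details from Avigad’s own formulation), the rules run as follows:

  Axioms: any \Gamma, A, \neg A with A atomic;
  from \Gamma, A and \Gamma, B infer \Gamma, A \land B;
  from \Gamma, A, B infer \Gamma, A \lor B;
  from \Gamma, A(y), where y is not free in the conclusion, infer \Gamma, \forall x A(x);
  from \Gamma, A(t) infer \Gamma, \exists x A(x);
  and Cut: from \Gamma, A and \Gamma, {\sim}{A} infer \Gamma.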

And Avigad now proves twice over that this cut rule is eliminable. So first in §6.3 we get a semantics-based proof that the calculus without cut is already sound and complete. Then in §6.4 we get a proof-theoretic argument that cuts can be eliminated one at a time, starting with cuts on the most complex formulas, with a perhaps exponential increase in the depth of the proof at each stage — you know the kind of thing! Two comments:

  1. The details of the semantic proof will strike many readers as familiar — closely related to the soundness and completeness proofs for a Smullyan-style tableaux system for FOL. And indeed, it’s an old idea that Gentzen-style proofs and certain kinds of tableaux can be thought of as essentially the same, though conventionally written in opposite up-down directions (see Ch XI of Smullyan’s 1968 classic First-Order Logic). In the present case, Avigad’s one-sided sequent calculus without cut is in effect a block tableau system for negation normal formulas where every wff is signed F. Given that those readers whose background comes from logic courses for philosophers will probably be familiar with tableaux (truth-trees), and indeed given the elegance of Smullyan systems, I think it is perhaps a pity that A misses the opportunity to spend a little time on the connections.
  2. A’s sparse one-sided calculus does make for a nicely minimal context in which to run a bare-bones proof-theoretic argument for the eliminability of the cut rule, where we have to look at a very small number of different cases in developing the proof instead of having to hack through the usual clutter. That’s a very nice device! I have to report though that, to my mind, A’s mode of presentation doesn’t really make the proof any more accessible than usual. In fact, once again the compression makes for quite hard going (even though I came to it knowing in principle what was supposed to be going on, I often had to re-read). Even just a few more examples along the way of cuts being moved would surely have helped; I sketch the sort of thing I have in mind just below.
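For what it’s worth, here is my sketch of one such reduction step, using the rules as I set them out above (so a reconstruction, not a quotation from MLC). Suppose a cut is made on A \land B, whose NNF negation is {\sim}{A} \lor {\sim}{B}. The premiss \Gamma, A \land B will have come by the \land-rule from \Gamma, A and \Gamma, B, and the premiss \Gamma, {\sim}{A} \lor {\sim}{B} by the \lor-rule from \Gamma, {\sim}{A}, {\sim}{B}. Now first cut on A (against a suitably weakened copy of the derivation of \Gamma, A) to get \Gamma, {\sim}{B}, and then cut on B to get \Gamma. One cut on A \land B has been traded for two cuts on strictly simpler formulas; and iterating this sort of move, with due care over the quantifier cases and over how the proof grows, is what drives the elimination argument.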

To continue (and I’ll be briefer): §6.5 looks at proof-theoretic treatments of cut elimination for intuitionistic logic, and §6.6 adds axioms for identity into the sequent calculi and proves cut elimination again. §6.7 is called ‘Variations on Cut Elimination’, with a first look at what can happen with theories other than the theory of identity when presented in sequent form. Finally §6.8 returns to intuitionistic logic and (compare §6.5) this time gives a nice semantic argument for the eliminability of cut, going via a generalization of Kripke models.

All very good stuff, and I learnt from this. But I hope it doesn’t sound too ungrateful to say that a student new to sequent calculi and cut-elimination proofs would still do best to read a few chapters of Negri and von Plato (for example) first, in order later to be able to get a lively appreciation of §6.4 and the following sections of MLC.
