# Books

## An Introduction to Proof Theory, Ch. 5.

This chapter is on The Sequent Calculus. We meet Gentzen’s LJ and LK, with the antecedents and succedents taken to be sequences of formulas. It is mentioned that some alternative versions treat the sequent arrow as relating multisets, so the corresponding structural rules will be different; but it isn’t mentioned that alternative rules for the logical connectives are often proposed, making for some alternative sequent calculi with nicer formal features. The definite article in the chapter title could mislead.

And how are we to interpret the sequent arrow? Taking the simplest case with a single formula on each side, we are initially told that A ⇒ B “has the meaning of A ⊃ B” (p. 169). But then later we are told that “we might interpret such a sequent [as A ⇒ B] as the statement that B can be deduced from the assumption A” (p. 192, with letters changed). So which does it express? — a material conditional or a deducibility relation? I vote for saying that the sequent calculus is best regarded as a regimented theory of deducibility!
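For what it’s worth, the contrast can be set out starkly in symbols (my notation, not necessarily the book’s):

```latex
% Two candidate readings of a single-formula sequent A => B.
% Reading 1: the sequent expresses an object-language conditional,
%            saying the same as the formula A ⊃ B.
% Reading 2: the sequent records a metalinguistic deducibility claim,
%            namely that B is deducible from the assumption A.
A \Rightarrow B \;\;\text{read as}\;\; A \supset B
  \qquad\text{vs.}\qquad
A \Rightarrow B \;\;\text{read as}\;\; A \vdash B
```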

Now, at this point in the book, the conscientious reader of IPT will have just worked through over a hundred pages on natural deduction. So we might perhaps have expected to get next some linking sections explaining how we can see the sequent calculus as in a sense a development from natural deduction systems. I am thinking of the sort of discussion we find in the very helpful opening chapter ‘From Natural Deduction to Sequent Calculus’ of Jan von Plato and Sara Negri’s Structural Proof Theory (CUP, 2001). But as it is, IPT dives straight in, presenting LK. There is, by the way, zero discussion here or later of why the rules of LK have the particular shape they have (for example, so that the single-succedent version can serve as the intuitionist calculus).

IPT then immediately starts considering proof discovery by working back from the desired conclusion, with the attendant complications that arise from using sequences rather than sets or multisets. This messiness at the outset doesn’t seem well calculated to get a student to appreciate the beauty of the sequent calculus! I wouldn’t have started quite like this. However, the discussion here does give some plausibility to the claim that provable sequents should be provable without cut.

A proof of cut elimination will be the topic of the next chapter. The rest of this present chapter goes through some more sequent proofs, and proves a couple of lemmas (e.g. one about variable replacement preserving proof correctness). And then there are ten laborious pages showing how an intuitionist natural deduction NJ proof can be systematically transformed into an LJ sequent proof, and vice versa — just giving outline proofs would have done more to promote understanding.

The lack of much motivational chat at the beginning of the chapter, combined with these extended and less-than-thrilling proofs at the end, does to my mind make this a rather unbalanced menu for the beginner meeting the sequent calculus for the first time. At the moment, therefore, I suspect that many such readers will get more out of, and more enjoy, making a start on von Plato and Negri’s book. But does IPT’s Chapter 6 on cut elimination even up the score?

To be continued

## An Introduction to Proof Theory, Ch. 4

What are we to make of Chapter 4: Normal Deductions? This gives very detailed proofs of normalization, first for the ∧⊃¬∀ fragment of natural deduction, then for the intuitionist system, then for full classical natural deduction, with equally detailed proofs of the subformula property along the way. It all takes sixty-seven(!!) pages, often numbingly dense. You wouldn’t thank me for trying to summarize the different stages, though I think things go along fairly conventional lines. I’ll just raise the question: who will really appreciate this kind of very extended treatment?

It certainly is possible to introduce this material without mind-blowing tedium. For example, Jan von Plato’s very readable book Elements of Logical Reasoning (CUP 2013) gets across a rich helping of proof theoretic ideas at a reasonably introductory level with some zest, and has a rather nice balance between explanations of general motivations and proof-strategies on the one hand and proof-details on the other. An interested philosophy student with little background in logic who works through the book should come away with a decent initial sense of its topics, and will be able to appreciate e.g. Prawitz’s Natural Deduction. The more mathematical will then be in a position to tackle e.g. Troelstra and Schwichtenberg’s classic with some conceptual appreciation.

Mancosu, Galvan and Zach’s methodical coverage of normalization isn’t, I suppose, much harder than von Plato’s presentation (scattered sections of which appear at different stages of his book); it should therefore be accessible to those who have the patience to plod through the details of their Chapter 4 and who try not to lose sight of the wood for the trees. But I do wonder whether the slow grind here will really produce more understanding of the proof-ideas. I guess that I can recommend the chapter as a helpful supplement for someone who wants to chase up particular details (e.g. because they found some other, snappier, outline of normalization proofs hard to follow on some point). Is it, however, the best place for most readers to make a first start on the topic? A judgement call, perhaps, but I have to say that I really very much doubt it.

To be continued

## An Introduction to Proof Theory, Ch. 3

Chapter 3 of IPT is on Natural Deduction. The main proof-theoretic work, on normalization, is the topic of the following chapter, and so the topics covered here are relatively elementary ones.

In particular, the first four sections of the chapter simply introduce Gentzen-style proof systems for minimal, intuitionist, and classical logic. So, as with most of the previous chapter, the main question about these sections is: just how good is the exposition?

One initial point. The book, recall, is aimed particularly at philosophy students with only a minimal background in mathematics and logic. But if they are reading this book at all, then — a pound to a penny — they will have done some introductory logic course. And it is quite likely that in this course they will already have met natural deduction done Fitch-style (indenting subproofs in the way expounded in no fewer than thirty different logic texts aimed at philosophers). It seems strange, to say the least, not to mention Fitch-style natural deduction at all; wouldn’t it have helped many readers to do a compare-and-contrast, explaining why (for proof-theoretic work) Gentzen-style tree layouts are the custom?

An explicit comparison might, indeed, have helped in various places. For example, on p. 73 we are bluntly told that the inference from A to (B ⊃ A) is allowed by the conditional proof rule, i.e. vacuous discharge is allowed. No explanation is given to the neophyte of why this isn’t a cheat. A student familiar with conditional proof in a Fitch-style setting (where there is no requirement that the formula at the head of a subproof is essentially used in the derivation of the end-formula of the subproof) should have a head start in understanding what is going on in the Gentzen setting and why it might be thought unproblematic. Why not make the connection?
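For readers who want the Gentzen-style picture concretely, here is the one-step derivation at issue, typeset with the bussproofs package (my rendering; the bracket-and-index discharge notation is the usual convention, not a quotation from IPT):

```latex
\documentclass{article}
\usepackage{bussproofs}
\begin{document}
% Derivation of B ⊃ A from the assumption A. The ⊃I step
% discharges the assumption B "vacuously": B is recorded as
% discharged (index 1) even though it never actually occurs
% in the derivation above the inference line.
\begin{prooftree}
  \AxiomC{$A$}
  \RightLabel{$\supset$I$^{1}$ \ (vacuous discharge of $B$)}
  \UnaryInfC{$B \supset A$}
\end{prooftree}
\end{document}
```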

Next, it is worth saying that, while this chapter is about natural deduction proofs, there isn’t a single actual example of a proof set out! For the tree arrays which populate the chapter aren’t object language proofs, with formulas as assumptions and a formula as conclusion — they are all entirely schematic, with every featured expression a metalinguistic schema. OK, this is pretty common practice; and it wouldn’t be worth remarking on if it was very clearly explained that this is what is happening. But it isn’t (I suppose a mis-stated single sentence back on p. 20 in the previous chapter is somehow supposed to do the job). And worse, our authors sometimes themselves seem to forget that everything is schematic. For example, at the foot of p. 79 an attempted proof schema is laid out, ending with the schema 𝐴(𝑐) ⊃ ∀𝑥𝐴(𝑥). This is followed by the comment “The end-formula of this ‘deduction’ is not valid, so it should not have a proof”. Now, it is indeed formulas, not schemas, that have natural deduction proofs according to the definition of a proof on p. 69. OK, so what is “the end-formula” here? There is no unique such formula. Are we supposed to be generalising then about all formula-instances of the proof schema? But the end-formula of an instance of this attempted proof schema could perfectly well be valid. OK, yes of course, it is easy to tidy this up so it says what is meant, along with a number of related mishaps: but the student reader shouldn’t be left having to do the work.

A third point. Oddly to my mind, the authors have decided to say nothing here about the supposed harmony between the core introduction and elimination rules shared by classical and intuitionist logic (there is just the briefest of mentions in the Preface of this as a topic that the reader might want to follow up). Well, I can certainly understand not wanting to get embroiled in the ramifying debates about the philosophical significance of harmony. But the relevant formal feature of the rules (that demarcates “or” from “tonk”) is so very neat that it is surely worth saying something about it, even when formal issues are at the forefront of attention. More generally — though I do realize this now is a rather unfocused grumble — I’d say that the expository §§3.1–3.4 don’t really bring out the elegance of Gentzen-style natural deduction.
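The formal point about harmony is quickly illustrated by Prior’s notorious ‘tonk’, whose introduction rule is borrowed from ∨ and whose elimination rule is borrowed from ∧ (my illustration, again using the bussproofs package; it is not an example from the book):

```latex
% Prior's connective "tonk". The I-rule and E-rule are not in
% harmony: the E-rule extracts more than the I-rule puts in,
% so chaining the two rules derives an arbitrary B from an
% arbitrary A -- which is why harmony matters.
\begin{prooftree}
  \AxiomC{$A$}
  \RightLabel{tonk-I}
  \UnaryInfC{$A \mathbin{\mathrm{tonk}} B$}
  \RightLabel{tonk-E}
  \UnaryInfC{$B$}
\end{prooftree}
```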

Moving on to the rest of the chapter, §3.5 notes a couple of measures of the complexity of a deduction tree, and then the remaining two sections of the chapter look at a handful of results about deductions proved by induction on complexity. In particular, §3.7 shows that a formula is deducible in the axiomatic system for classical logic given in the previous chapter if and only if it is deducible in the classical natural deduction system described in this chapter. This bit of proof-theory is nicely done.

I’ve more quibbles that I’ll skip over. Overall, though, I would say that this chapter does fall somewhat short of my ideal of an introduction to Gentzen-style natural deduction. Still, students coming to it already knowing about Fitch-style proofs will surely be able to manage the transition and should get the hang of things pretty well. But I suspect that those who have never encountered a natural deduction system before might find it all significantly harder going.

To be continued

## An Introduction to Proof Theory, Ch. 2

IPT is particularly aimed at those “who have only a minimal background in mathematics and logic”. Well, philosophy students who haven’t done much logic may well not have encountered an old-school axiomatic presentation of logic before. So — perhaps reasonably enough — Chapter 2 is on Axiomatic Calculi. The chapter divides into two uneven parts, which I’ll take in turn.

In the early sections, we meet axiomatic versions of minimal, intuitionist and classical logic for propositional logic (presented using axiom schemas in the usual way). Next, after a pause to explain the idea of proofs by induction, there’s a proof of the deduction theorem, and an explanation of how to prove independence of axioms (more particularly, the independence of ex falso from minimal logic, and the independence of the double negation rule from intuitionist logic) using deviant “truth”-tables. Then we add axioms for predicate logic, and prove the deduction theorem again. So this part of the chapter is routine stuff. And really the only question is: how well does the exposition unfold?

By my lights, not very well. OK, perhaps the general state of the world is rather getting to me, and my mood is less than sunny. But I did find this disappointing stuff.

For a start, the treatment of schemas is a mess. It really won’t do to wobble between saying (p. 19) that axioms are formulas which are instances of schemas, and saying that such a formula is an instance of an axiom (p. 21). Or how about this: “an instance of an instance of an axiom is an axiom” (p. 21). This uses ‘axiom’ in two different senses (first meaning axiom schema and then meaning axiom), and it uses ‘instance’ in two different senses (first meaning instance of a schema, and then meaning a variant schema got by systematically replacing individual schematic letters by schemas for perhaps non-atomic wffs).

Or how about this? — We are told that a formula is a theorem just if it is the end-formula of a derivation (from no assumptions). We are immediately told, though, that “in general we will prove schematic theorems that go proxy for an infinite number of proofs of their instances.” (p. 20) You can guess what this is supposed to mean: but it isn’t what the sentence actually says. There’s more of the same sort of clumsiness.

When it comes to laying out proofs, things again get messy. The following, for example, is described as a derivation (p. 32):

This is followed by the comment “Note that lines 1 and 2 do not have a ⊢ in front of them. They are assumptions, not theorems. We indicate on the left of the turnstile which assumptions a line makes use of. For instance, line 4 is proved from lines 1 and 3, but line 3 is a theorem — it does not use any assumptions.” But first, with this sort of Lemmon-esque layout, shouldn’t the initial lines have turnstiles after all? Shouldn’t the first line commence, after the line number, with “(1) ⊢” — since the assumption that the line makes use of is itself? And more seriously, in so far as we can talk of line (9) as being derived from what’s gone before, it is the whole line read in effect as a sequent which is derived from line (8) read as a sequent — it’s not an individual formula derived from earlier ones by a rule of inference. So in fact this just isn’t a derivation any more in the original sense given in IPT in characterizing axiomatic derivations.

I could continue carping like this but it would get rather boring. However, a careful reader will find more to quibble about in these sections of the chapter, and — more seriously — a student coming to them fresh could far too easily be misled/confused.

The final long §2.15, I’m happy to say, is a rather different kettle of fish, standing somewhat apart from the rest of the chapter in both topic and level/quality of presentation. This section can be treated pretty much as a stand-alone introduction to a key fact about classical and intuitionist first-order Peano Arithmetic (with an axiomatic logic), namely that we can interpret the first in the second, thereby establishing that classical PA is consistent if the intuitionist version is. This is, I think, neatly done enough (though I suspect the discussion might need a slightly more sophisticated reader than what’s gone before).

Just one very minor comment. A student might wonder why what is here called the Gödel-Gentzen translation (atoms are left untouched) is different from what many other references call the Gödel-Gentzen translation (where atoms are double negated). A footnote of explanation might have been useful. But as I say, this twelve-page section seems much better.
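For orientation, here are the clauses of the negative translation as usually given, with the atomic clause marked — that clause is the only point of difference from IPT’s variant (my summary of the standard definition; in arithmetic, atomic sentences are decidable equations, which is what licenses leaving atoms untouched):

```latex
% The Gödel–Gentzen negative translation N, clause by clause.
P^{N} = \neg\neg P                    % standard clause; IPT instead sets P^N = P
(A \land B)^{N} = A^{N} \land B^{N}
(A \lor B)^{N} = \neg(\neg A^{N} \land \neg B^{N})
(A \supset B)^{N} = A^{N} \supset B^{N}
(\neg A)^{N} = \neg A^{N}
(\forall x\, A)^{N} = \forall x\, A^{N}
(\exists x\, A)^{N} = \neg\forall x\, \neg A^{N}
```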

To be continued

## An Introduction to Proof Theory, Ch. 1

It’s arrived! Ever since it was announced, I’ve been very much looking forward to seeing this new book by Paolo Mancosu, Sergio Galvan and Richard Zach. As they note in their preface, most proof theory books are written at a fairly demanding level. So there is certainly a gap in the market for a book that presents some basic proof theory taking up themes from Gentzen in a more widely accessible way, covering e.g. proof normalization, cut-elimination, and a proof of the consistency of arithmetic using ordinal induction. An Introduction to Proof Theory (OUP, newly published) aims to be that book.

Back in the day, when I’d finished my Gödel book, I had it in mind for a while to write a Gentzen book a bit like IPT (as I’ll refer to it) though in parts a couple of notches more technical. But when I got down to work, I quickly realized that my grip on the area was really quite embarrassingly shallow in places, and I lost all confidence. What I should have done was downsize my ambitions and tried instead to write a book more like this present one. So I have a particular personal interest in seeing how Mancosu, Galvan and Zach write up their project. I’m cheering them on!

Some brisk notes, then, as I read through …

Chapter 1: Introduction has three brisk sections, on ‘Hilbert’s consistency program’, ‘Gentzen’s proof theory’ and ‘Proof theory after Gentzen’.

The scene-setting here is done very cogently and reliably as far as it goes (just as you’d expect). However, on balance I do think that — given the intended readership — the first section in particular could have gone rather more slowly. Hilbert’s program really was a great idea, and a bit more could have been said to explore and illuminate its attractions. On the other hand, an expanded version of the third section would probably have sat more naturally as a short valedictory chapter at the end of the book.

But then, one thing I’ve learnt from writing my own introductory books is that you aren’t going to satisfy everyone — indeed, probably not even a majority of your hoped-for readers will be happy. Wherever you set the dial, many will complain that you take things at far too slow a pace, while others will complain that you lost them only a few chapters in. So, in particular, these questions of how much initial scene-setting to provide are very much a judgement call.

To be continued (and I’ll return to The Many and the One in due course).

## The Many and the One, Ch. 5 & Ch. 6

I confess that I have never been able to work up much enthusiasm for mereology. And Florio and Linnebo’s Chapter 5, in which they compare ‘Plurals and Mereology’, doesn’t come near to persuading me that there is anything of very serious interest here for logicians. I’m therefore quite cheerfully going to allow myself to ignore it here. So let’s move on to Chapter 6, ‘Plurals and Second-Order Logic’. The broad topic is a familiar one ever since Boolos’s classic papers of — ye gods! — almost forty years ago: though oddly enough F&L do not directly discuss Boolos’s arguments here.

In §6.1, F&L give a sketchy account of second-order logic, and then highlight its monadic fragment. Note, they officially treat the second-order quantifiers as ranging over Fregean concepts. And they perhaps really should have said more about this — for can the intended reader be relied on to have a secure grasp on Frege’s notion? Indeed, what is a Fregean concept?

The following point seems relevant to F&L’s project. According to Michael Dummett’s classic discussion (in his Frege, Philosophy of Language, Ch. 7), Fregean concepts are extensional items: while (for type reasons) we shouldn’t say that co-extensive concepts are identical, the relation which is analogous to identity is indeed being coextensive. So the concept expressions ‘… is a creature with a heart’ and ‘… is a creature with a kidney’ have the same Fregean concept as Bedeutung. I take it that Dummett’s account is still a standard one (the standard one?). For example, Michael Potter in his very lucid Introduction to the Cambridge Companion to Frege — while noting Frege’s reluctance to talk of identity in this context — writes (without further comment)

Concepts, for Frege, are extensional, so that, for instance, the predicates ‘x is a round square’ and ‘x is a golden mountain’ refer to the same concept (namely the empty one).

But now compare F&L. They write

Two coextensive concepts might be discerned by modal properties. Assume, for example, that being a creature with a heart and being a creature with a kidney are coextensive. Even so, these two [sic] concepts can be discerned by a modal property such as possibly being instantiated by something that lacks a heart.

Which seems to suggest that, contra Dummett and Potter’s Frege, co-extensive predicates can have distinct concepts as Bedeutungen. That’s why I really do want more elaboration from F&L of their story about the Fregean concepts which, according to them, are to feature in an account of the semantics of second-order quantification.

§6.2 describes how theories of plural logic and monadic second order logic can be interpreted in each other. And, analogously to §4.3, a question then arises: can we eliminate pluralities in favour of concepts, or vice versa?

So §6.3 discusses the possibility of using second-order language to eliminate first-order plural terms, as once suggested by Dummett. As F&L note, this suggestion has already come in for a lot of criticism in the literature; but they argue that there is some wriggle room for defenders of (something like) Dummett’s line to avoid the arguments of e.g. Oliver and Smiley and others. I’m not really convinced. For example, F&L suggest that a manoeuvre invoking events proposed by Higginbotham and Schein will help the cause — simply ignoring the extended critique of that manoeuvre already in Oliver and Smiley’s Plural Logic. In the end, though, F&L think that there is a more compelling argument against the elimination of pluralities in favour of concepts on the basis of their respective modal behaviour (but note, F&L are here seemingly relying on their departure from the standard Dummettian construal of Fregean concepts — or if not, we need to hear more).

§6.4 then looks at the possibility of an elimination going the other way, reducing second-order logic to a logic of plurality. But so far we have only been offered a way of interpreting monadic second order logic using plurals; the obvious first question is — how can we interpret full second-order logic with polyadic predicates, quantifying over polyadic concepts? Perhaps we can do the trick if we help ourselves to a pairing function for the first-order domain (so, for example, dyadic relations get traded in for monadic properties of pairs). F&L raise this familiar idea: but suggest — again very briefly — that there is another modal objection: “while a plurality of ordered pairs can model the extension of a dyadic relation, it cannot in general represent all of its intensional features.” Tell us more! We also get a promissory note forward to discussion of a different objection to eliminating second-order logic.
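The familiar pairing trick can be put schematically like this (my sketch, assuming a pairing function ⟨·,·⟩ on the first-order domain):

```latex
% Simulating a dyadic second-order quantifier with a monadic one,
% given a first-order pairing function ⟨·,·⟩: every occurrence
% of R(x,y) is traded in for X(⟨x,y⟩), so dyadic relations become
% monadic properties of pairs.
\exists R\, \varphi(\dots, R(x,y), \dots)
  \quad\rightsquigarrow\quad
\exists X\, \varphi(\dots, X(\langle x, y\rangle), \dots)
```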

There’s a short summary §6.5. But, to my mind, this is again a somewhat disappointing chapter. As it happens, my inclinations are with F&L’s conclusion that both plural logic and second order logic can earn their keep (without one being reduced to the other). But I do rather doubt that anyone who already takes a different line will find themselves compelled to change their minds by the arguments so far outlined here.

To be continued, but after a break.

## The Many and the One, Ch. 4

In the next part of their book, ‘Comparisons’, F&L discuss ‘Plurals and Set Theory’ (Chapter 4), ‘Plurals and Mereology’ (Chapter 5), and ‘Plurals and Second-order Logic’ (Chapter 6).

Here, in bald outline, is what happens in Chapter 4. §4.1 describes a ‘simple set theory’ framed in a two-sorted first-order language, with small-x quantifiers running over a domain of individuals and big-X quantifiers running over sets of those individuals. The two sorts are linked by an axiom scheme of set comprehension, (S-Comp): ∃X∀x(x ∈ X ↔ φ(x)). §4.2 notes the mutual interpretability of this theory with a certain simple plural logic. (We can’t simply replace big-X set variables by double-x plural variables, at least given the usual assumption that there is an empty set in the range of big-X variables but not an empty plurality in the range of double-x plural variables. But working around that wrinkle involves only minor tinkering.) §4.3 then asks whether this mutual interpretability means we should eliminate plurals in favour of sets or alternatively eliminate sets in favour of plurals. §4.4 suggests that we need plurals in elucidating the very notion of a set (so don’t eliminate plurals): the root idea is that “For every plurality of objects xx from [a given domain], we postulate their set {xx},” where postulation seems to be tantamount to defining into existence. We are promised more about definitions of this kind in Chapter 12.
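The ‘minor tinkering’ around the empty set can be made explicit (my formulation, using ≺ for ‘is one of’; the plural comprehension principle carries an existence proviso precisely because there is no empty plurality):

```latex
% Set comprehension in the two-sorted theory:
\exists X\, \forall x\, (x \in X \leftrightarrow \varphi(x))
% Its plural analogue, restricted to satisfiable conditions:
\exists x\, \varphi(x) \;\supset\;
  \exists xx\, \forall x\, (x \prec xx \leftrightarrow \varphi(x))
```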

§4.5 then notes that mathematical uses of sets crucially involve not just sets of individuals (numbers, perhaps) but sets of sets, sets of sets of sets, etc.; and, for a start, it is very unclear that these can be eliminated in favour of pluralities of pluralities. §4.6 then says more about the iterative conception of set, and §4.7 gives the axioms of ZFC. §4.8 jumps on to wonder whether we can use plurals in explicating the notion of proper classes. The chapter ends with §4.9 which raises a problem:

We have described two very attractive applications of plural logic: as a way of giving an account of sets, and as a way of obtaining proper classes “for free”. Regrettably, it looks like the two applications are incompatible. The first application suggests that any plurality forms a set. Consider any objects xx. Presumably, these are what Gödel calls “well-defined objects”. If so, it is permissible to apply the “set of” operation to xx, which yields the corresponding set {xx}. The second application, however, requires that there be pluralities corresponding to proper classes, which by definition are collections too big to form sets.

F&L again promise to return to deal with this apparent tension in their Chapter 12.

Does the chapter work? Well, although I said in my first post on the book that I wouldn’t fuss too much about this sort of thing, it is pretty difficult to know quite at whom this chapter is aimed. For example, §4.6 very briskly outlines the iterative conception of set, helping itself along the way to the idea that we take unions at levels indexed by limit ordinals (where ordinals are unexplained). But I wonder who is supposed to (a) already be familiar with the notion of a limit ordinal in §4.6, but (b) still need to have the axioms of ZFC given again in §4.7? And won’t the reader who actually needs §4.7 then need more explanation of the role of proper classes in set theory (and the difference between their appearance as virtual classes in e.g. Kunen, versus a more substantive appearance in NBG)?

And to go back to the beginning of the chapter, I would guess that someone with enough logical education to know about limit ordinals would also know enough to want to ask more about the principle S-Comp: does the comprehension principle apply to predicates φ(x) which themselves involve bound set variables? or involve free set variables as parameters? or neither? We are not told, and there is no hint that the issue might matter. There is also no hint at all that the kind of “simple set theory” with two sorts of quantifier might actually be of real interest, e.g. in reverse mathematics when considering subsystems of second-order arithmetic. This lack of development is typical and disappointing.

As it happens, I am in sympathy with F&L’s overall line that (i) plural logic is respectable and can earn its keep in certain important contexts, and (ii) set theory is just fine in its place too! But I can’t see that this arm-waving chapter really advances the case for either limb (and I could nag away more at some of the details). In so far as there are hints of novel argumentative moves, the work of elaborating them is left for much later. So I did find the level of discussion in this chapter frustratingly rather superficial: hopefully, F&L do better when they return to cash out those promissory notes.

To be continued.

## Vaguely distracted

I’ve been distracted from plurals — just for a day, I tell myself — by the arrival of Crispin Wright’s collected papers on vagueness (a 450 page book at a very decent price, by the way, and a must-read for anyone half-interested in the topic). Richard Heck contributes a forty page introduction, picking out some main themes: and on a quick first read this seems pretty insightful and really very helpful for (re)orientation. And then jumping to the end, I’ve been reading Wright’s most recent piece on “Intuitionism and the Sorites Paradox” (which was in the Oms/Zardini edited volume of essays on the Sorites). This is, needless to say, an impressive, challenging, imaginative essay. But I’m going to have to just mull over the ideas for now, and return to this, and to many of the other essays in the book, more seriously later. I’m sure, though, that my quarter-baked thoughts on vagueness won’t be fit for public consumption here! And for now it is back to plural logic (because getting a bit clearer about that is more directly relevant to other projects …).

## The Many and the One, Ch. 3/i

In Chapter 3, ‘The Refutation of Singularism?’, Florio and Linnebo get down to critical work. As the chapter’s title suggests, the topic is going to be various arguments that have been offered against singularist attempts to render plural discourse in the framework of standard logic. Can we really regiment sentences involving what appear to be plural terms denoting many things at once by using singular terms denoting just one thing — a set, or perhaps a mereological sum? F&L aim to show that “regimentation singularism is a more serious rival to regimentation pluralism than the [recent] literature suggests.”

What is the standard for assessing such formal regimentations? For F&L, as they say in §3.1, the key question is whether or not “singularist regimentations mischaracterize logical relations in the object language or mischaracterize the truth values of some sentences.” But that, presumably, can’t be quite the whole story. If, for example, the purported singularist regimentations turn out to be an unprincipled piecemeal jumble, with apparently logically similar sentences involving plural terms having to be regimented ad hoc, in significantly different ways, in order to preserve the singularist doctrine case by case, that will surely be a serious strike in favour of taking plurals at face value. Or so disputants in this debate have assumed, and F&L don’t give any reason for objecting.

An aside: Not that it matters, but F&L also claim in passing that

Regimentation can also serve the purpose of representing ontological commitments. The ontological commitments of statements of the object language are not always fully transparent. The translation might help clarify them. Following Donald Davidson, one might for instance regard certain kinds of predication as implicitly committed to events. As a result, one might be interested in a regimentation that, by quantifying explicitly over events, brings these commitments to light.

But careful! For Davidson, it is because we (supposedly) need to discern quantificational structure in regimenting action sentences to reflect their inferential properties that we need to recognize an ontology of events for the quantifiers to range over. So while, as F&L say, we want formal regimentation to track already acknowledged informal logical relations, with questions of ontological commitment (at least for a Quinean like Davidson) it goes the other way around — it only makes sense to read off ontological commitments after we have our regimentations (since “to be is to be the value of a variable”).

In §3.2, F&L move on to consider one class of anti-singularist consideration, what they call ‘substitution arguments’. Or rather they briefly consider one such argument, from a 2005 paper by Byeong-Uk Yi. A strange choice, by my lights, since the locus classicus for the presentation of such arguments is of course a 2001 paper by Oliver and Smiley, and then again in their 2013/2016 book Plural Logic. Their Chapter 3, ‘Changing the Subject’, in particular, is a tour-de-force relentlessly deploying such arguments. (F&L wrongly say that “changing the subject” is “[O&S’s] name for singularist attempts to eliminate plurals”. Not so. It is their punning name for one singularist strategy, the one which takes a plural-subject/predicate sentence and tries to regiment it as a singular-subject/predicate sentence. O&S’s following chapter discusses another, different, singularist strategy).

OK. Here’s a very quick reminder of the relevant sections of Plural Logic. In their §3.2, O&S argue for a uniform treatment of plural subjects, whether they are combined with a distributive or collective predicate. Thus, we shouldn’t (as Frege seems committed to do) carve ‘Tim and Alex met in the pub and had a pint’ into two sentences ‘Tim and Alex met in the pub’ [collective predicate, subject referring to some singular thing, the set {Tim, Alex} or mereological whole Tim + Alex] and ‘Tim and Alex had a pint’ [distributive predicate, so this in turn is to be carved into the conjunction of ‘Tim had a pint’ and ‘Alex had a pint’]. O&S give two compelling arguments for uniformity. In §3.3, they then argue against a naive version of “changing the subject” where we regiment a plural-subject/predicate sentence by changing to a singular subject (substitute singular for plural) while leaving the predicate unchanged. They give elaborated versions of the familiar sort of Boolos objection to doing that: it may be true that the cheerios were tasty, but it seems haywire to say the set of cheerios was tasty, etc., etc.
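To make the non-uniformity vivid (my schematic gloss, not O&S's own notation, writing '{Tim, Alex}' for whatever single surrogate object the singularist picks), the Fregean carving treats the two conjuncts quite differently:

```latex
% Collective conjunct: plural subject traded for a singular term,
% predicate applied to that one thing:
\mathrm{MetInThePub}(\{\text{Tim}, \text{Alex}\})
% Distributive conjunct: dissolved into a conjunction of
% singular predications:
{}\wedge \mathrm{HadAPint}(\text{Tim}) \wedge \mathrm{HadAPint}(\text{Alex})
```

One surface-similar plural subject, two quite different logical treatments: that is exactly the non-uniformity O&S's arguments tell against.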

So in §3.4, O&S discuss the strategy of changing the subject and the predicate in a way that preserves coherence and truth-values. And the first point they press is that initial attempts to do this just move the plural from subject to predicate — for example if we want to regiment the plural subject term in ‘Russell and Whitehead wrote Principia’ by using a singular subject term for a set, we could render that sentence by ‘{Russell, Whitehead} is such-that-its-members-wrote-Principia’. But there are two problems with this sort of regimentation. (1) There are uniformity worries: take the sentence ‘Russell and Whitehead wrote Principia, Wittgenstein didn’t’ (the property denied of Wittgenstein here is surely not the same property of being such that its members etc. etc.). And crucially (2) a singularist will need to get rid of the plural term buried in the complex predicate. And so O&S consider various strategies for various cases. They make some headway in giving more-or-less contorted singular renditions of a number of plural sentences; but they sum up as follows:

The most striking feature of the analyses is their diversity. Although there is a uniform first stage [along the lines of the Russell and Whitehead example] the further analysis required in order to eliminate the residual plurals varies widely from case to case. It appears that we are condemned to a piecemeal and promissory approach, hoping rather than knowing that a suitable analysis can be found for any plural sentence. Such untidiness is unattractive, to say the least.

I think we are supposed to read ‘unattractive’ as indeed a radical understatement!

Now back to Florio and Linnebo. As I said, they consider just one observation by Yi, namely that there are contexts where we can’t intersubstitute ‘Russell and Whitehead’ and ‘{Russell, Whitehead}’ salva veritate (without changing the predicate). And F&L in effect note that changing the predicate in an appropriate way will save the day for Yi’s particular examples — though they cheerfully allow different changes in a couple of different contexts. But how piecemeal do they want to be? What about Oliver and Smiley’s further examples? F&L just don’t say.

Snap verdict: F&L’s two-page jab gives no good reason to dissent from O&S’s extended trenchant arguments against singularism based on substitution considerations, broadly understood.

To be continued.

## The Many and the One, Ch. 2

Chapter 2, ‘Taking Plurals at Face Value’, continues at an introductory level.

Oddly, Florio and Linnebo give almost no examples of the full range of plural expressions which they think a formal logic of plurals might aim to regiment (compare, for example, the rich diet of examples given by Oliver and Smiley in §1.2 of their Plural Logic, ‘Plurals in Mathematics and Logic’). Rather F&L start by immediately sketching three singularist strategies for eliminating plurals, starting with the familiar option of trading in a plural term denoting many things for a singular term denoting the set of those things.

They will be returning to discuss these singularist strategies in detail later. But for now, in their §2.2, F&L introduce the rival idea that “plurals deserve to be understood in their own terms by allowing the use of plural expressions in our regimenting language”. §2.3 then announces “the” language of plural logic. But that’s evidently something of a misnomer. It is a plural formal language, but — for a start — it lacks any function expressions (and recall how central it is to O&S’s project to have a workable account of function expressions which take plural arguments).

F&L leave it open whether one should “require a rigid distinction between the types of argument place of predicates. An argument place that is open to a singular argument could be reserved exclusively for such arguments. A similar restriction could be imposed on argument places open to plural arguments.” But why should we want such selection restrictions? O&S remark very early on (their p. 2) that — bastard cases aside — “every simple English predicate that can take singular terms as arguments can take plural ones as well.” Are they wrong? And if not, why should we want a formal language to behave differently?

F&L seem to think that not having selection restrictions would depart from normal logical practice. They write:

In the philosophical and logical tradition, it is widely assumed that if an expression can be replaced by another expression salva congruitate in one context, then it can be so replaced in all contexts. This assumption of “strict typing” is true of the language of first-order logic, as well as of standard presentations of second-order logic.

But that’s not quite accurate. For example, in a standard syntax of the kind F&L seem to assume for singular first-order logic, a name can be substituted salva congruitate for a variable when that variable is free, but not when it is quantified. (As it happens, I think this is a strike against allowing free variables! — but F&L aren’t in a position to say that.) And anyway, there is a problem about such selection restrictions once we add descriptions and functional terms, or so Oliver and Smiley argue (Plural Logic, p. 218). If we allow ostensibly plural descriptions and multi-valued functions (and it would be odd if a plural logic didn’t) it won’t in general be decidable which resulting terms are indeed singular arguments and which are plural; so having singular/plural selection restrictions on argument places will make well-formedness undecidable. (If F&L don’t like that argument and/or have a different account of ‘singular’ vs ‘plural argument’, which they haven’t previously defined, then they need to tell us.)

Moving on, §2.4 presents what F&L call “The traditional theory of plural logic”. I’m not sure O&S, for example, would be too happy about that label for a rather diminished theory (still lacking function terms, for a start), but let that pass. This “traditional” theory is what you get by adding rules for the plural quantifiers which parallel the rules for the singular quantifiers, plus three other principles of which the important one for now is the unrestricted Comprehension principle: ∃xφ(x) → ∃xx∀x(x ≺ xx ↔ φ(x)) (if there are some φs, then there are some things such that an object is one of them iff it is φ).

Evidently unrestricted Comprehension gives us some big pluralities! Take φ(x) to be the predicate x = x, and we get that there are some things (i.e. all objects whatsoever) such that any object at all is one of them. F&L flag up that there may be trouble waiting here, “because there is no properly circumscribed lot of ‘all objects whatsoever’.” Indeed! This is going to be a theme they return to.
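Spelled out, the instance just described is simply the schema with x = x plugged in for φ(x):

```latex
% Instantiating Comprehension with \varphi(x) := (x = x):
\exists x\,(x = x)
  \;\rightarrow\;
\exists xx\,\forall x\,(x \prec xx \leftrightarrow x = x)
```

Given a non-empty domain the antecedent is trivially true, so we get the universal plurality: some things such that absolutely every object is one of them.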

§2.5 and §2.6 note that plural logic has been supposed to have considerable philosophical significance. On the one hand, it arguably is still pure logic and ontologically innocent: “plural variables do not range over a special domain but range in a special, plural way over the usual, first-order domain.”
And pressing this idea, perhaps (for example) we can sidestep some familiar issues if “quantification over proper classes might be eliminated in favor of plural quantification over sets”. On the other hand, a plural logic is expressively richer than standard first-order logic which only has singular quantification — it enables us, for example, to formulate categorical theories without non-standard interpretations. F&L signal scepticism, however, about these sorts of claims; again, we’ll hear more.

The chapter finishes with §2.7, promisingly titled ‘Our methodology’. One of the complaints (fairly or unfairly) about O&S’s book has been the lack of a clear and explicit methodology: what exactly are the rules of their regimentation game, which pushes them towards what some find to be a rather baroque story? Why insist (as they do) that our regimented language tracks ordinary language in allowing empty names while e.g. cheerfully going along with the material conditional with all its known shortcomings? (What exactly are the principles on which conventionally tidying the conditional is allowed, but not tidying away the empty names?) Disappointingly, despite its title, F&L’s very short section doesn’t do any better than O&S. “We aim to provide a representation of plural discourse that captures the logical features that are important in the given context of investigation.” Well, yes. But really, that settles nothing until the “context of investigation” is articulated.

To be continued.
