In presenting a formal theory of first-order logic for an introductory course — or, in my case, an introductory book — there are a *lot* of choices to be made, even once we’ve fixed on using a natural deduction proof system. (This paper by Pelletier reviews at length some of the choice points; and an overlapping paper by Pelletier and Hazen has a table telling us the assorted choices made by no fewer than 50 texts.)

I’m interested here, and in the next couple of posts, in a subset of issues concerning the *language(s)* we are going to be using:

- Do we talk of *one* language or *many* languages (different languages for different interesting first-order theories, and different temporary ad hoc languages for use when we are regimenting different arguments)?
- In explaining the semantics of quantifiers, do we give a Tarski-style account in terms of satisfaction, where we assign sequences of objects to sequences of *variables*? Or do we make use of some new kind of *supplementary names* for objects in the domain, so that e.g. ‘$latex \exists xFx$’ comes out true on an interpretation if ‘$latex Fa$’ is true for some extra name ‘*a*’ on some perhaps extended interpretation?
- What do we use as *parameters* in natural deduction? Names? Variables? A new, syntactically distinguished, sort of symbol?
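Before turning to the texts, the contrast in the second question can be made concrete with a toy sketch (entirely my own, tied to no particular book) of the two ways of evaluating ‘$latex \exists xFx$’ on a small interpretation:

```python
# Toy illustration: two ways of making 'Exists x Fx' true on an
# interpretation with domain {1, 2, 3} in which F is true of 2 and 3 only.

DOMAIN = {1, 2, 3}
F_EXTENSION = {2, 3}

def true_tarski():
    """Tarski-style clause: 'Exists x Fx' is true iff some assignment
    of an object to the variable 'x' satisfies 'Fx'."""
    def satisfies_Fx(assignment):
        return assignment['x'] in F_EXTENSION
    return any(satisfies_Fx({'x': d}) for d in DOMAIN)

def true_substitutional():
    """Supplementary-names clause: 'Exists x Fx' is true iff 'Fa' is true
    on some extension of the interpretation which assigns an object of
    the domain to the extra name 'a'."""
    def true_Fa(extended_interpretation):
        return extended_interpretation['a'] in F_EXTENSION
    return any(true_Fa({'a': d}) for d in DOMAIN)

print(true_tarski(), true_substitutional())  # True True
```

On this toy interpretation the two verdicts agree, as of course they must; it is this general equivalence (assuming every object in the domain can be picked out by an extra name) that makes the choice between the two accounts available at all.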

Here, the first issue is to some extent a matter of presentational style, but (as we will note in a subsequent post) the details of what you say can make a difference on more substantive points. The second issue is probably familiar enough and we needn’t delay over it just now. To elucidate the last issue – the only one of the three dealt with explicitly by Pelletier – consider a natural deduction proof to warrant the inference ‘Everyone loves pizza. Anyone who loves pizza loves ice-cream. So everyone loves ice-cream’. In skeletal form the proof goes like this, ending with a step of universal quantifier introduction (add to taste extra bells and whistles, such as Fitch-style indentation, and even rearrange into a tree):

- $latex \quad \forall xFx$
- $latex \forall x(Fx \to Gx)$
- $latex Fq$
- $latex (Fq \to Gq)$
- $latex Gq$
- $latex \forall xGx \quad\quad\textrm{(UI)}$

Here, I’ve used the intentionally unusual ‘*q*’. But what would you prefer to write in its place? Some would use e.g. ‘*x*’, picked from the list of official variables for the relevant language in play. Some would use e.g. ‘*a*’, picked from the list of official individual constants (names) for the relevant language. However, we might note that ‘*q*’ is neither being used as a variable tied to a prefixed quantifier, nor as a genuine constant naming a determinate thing, but in a third way. It is, as they say, being used as a parameter. So we might wonder whether we should somehow mark syntactically when a symbol is being used as a parameter (following the Fregean precept that important semantic differences should be perspicuously marked in our symbolism).
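For comparison, here is the same little argument mechanized in Lean 4 (a sketch of my own, not drawn from any of the texts discussed below); Lean’s `intro` step is precisely the point where a parameter enters, as a fresh local symbol distinct from both the bound variables and any genuine constants:

```lean
-- Read F as 'loves pizza', G as 'loves ice-cream'.
example (Person : Type) (F G : Person → Prop)
    (h1 : ∀ x, F x)            -- Everyone loves pizza
    (h2 : ∀ x, F x → G x) :    -- Anyone who loves pizza loves ice-cream
    ∀ x, G x := by             -- So everyone loves ice-cream
  intro q                      -- q enters as a fresh parameter
  exact h2 q (h1 q)            -- Gq from (Fq → Gq) and Fq
```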

It is interesting to see how different authors of student logic textbooks covering natural deduction handle our three choice-points. So let’s take a number of texts in turn, in the chronological order of the editions I can reach from my shelves. (I’ll return to mention some other books, including Suppes’s, in the next post – but for the moment I am considering just a handful of books, ones that I have learnt from over the years, and/or ones that might still be recommended to students at least as supplementary reading.)

*Richmond Thomason, Symbolic Logic* (Macmillan, 1970) is, looking back, a strange mixture of a book, extremely lucid in some respects, but making pretty heavy weather of some of the more technical presentations. The main reason I am mentioning it here is that Thomason is emphatic about distinguishing parameters from names and variables. But otherwise, this is a many-language, almost-Tarski book (almost, because it considers valuations of all-the-parameters-at-once, rather than assigning values to all-the-variables-at-once).

*Neil Tennant, Natural Logic* (Edinburgh UP, 1978/1990) talks of *the* language of first-order logic. In giving a model, distinguished names are assigned objects, but later “undistinguished names” are employed as names of arbitrary objects considered in the course of a proof — i.e. they are employed as parameters. Free variables feature not in proofs but in the syntactic story about how wffs are built up, and in the associated Tarski-style semantic story for wffs.

*Paul Teller, A Modern Formal Logic Primer* (Prentice-Hall, 1989) seems to be a one-language book. At least, formation rules for a single language are introduced for sentential logic in Vol. I, and Vol. II carries on in the same spirit. The semantics is initially given by assuming that every object has a name. Then, later on, that assumption is dropped, but the resulting semantics is of the non-Tarskian kind I’ve indicated. Names are used in natural deduction proofs using the (UI) rule as above, but are marked with a ‘hat’ in this role, so these parameters are syntactically marked. But names in their (admittedly slightly different) use as parameters in existential quantifier elimination (EE) arguments are *not* so marked.

*Jon Barwise and John Etchemendy, The Language of First-Order Logic* (CSLI, 2nd ed. 1991) talks initially of FOL, *the* language of first-order logic, but soon starts talking of first-order languages, plural. On semantics, there is initially a version of the idea that we extend a first-order language with additional names, and ‘$latex \exists x\varphi(x)$’ is true on interpretation *I* so long as ‘$latex \varphi(a)$’ is true on some extension of *I* which is just like *I* except in what it assigns to the new name ‘*a*’. But later, an official Tarskian story is given. Wffs with free variables are allowed, but don’t feature in proofs, where parameters are all constants. When used parametrically, constants are first introduced in a ‘box’ (boxing a constant *c* is “the formal analog of the English phrase ‘Let *c* denote an arbitrary object’”). But this syntactic marking of a parameter does not – unlike Teller’s ‘hats’ – persist.

*Merrie Bergmann, James Moor, Jack Nelson, The Logic Book* (McGraw-Hill, 3rd ed. 1998) introduces a single, catch-all language PL. Initially, just a very informal semantics is given, enough for most purposes. But when a formal account is eventually given, it is Tarskian. The authors allow free variables in wffs. But their natural deduction system is again presented as a system for deriving sentences from sentences, and in the course of proofs it is names (not free variables) which are used parametrically.

*R.L. Simpson, Essentials of Symbolic Logic* (Broadview, 1999/2008) is perhaps the most elementary of the books mentioned in this list – it talks of *the* language of symbolic logic, but we can’t say what style of formal semantics Simpson prefers, as none is given. The book is worth mentioning here, however, because of the particular clarity of its ND system and the way it handles the quantifier rules. It uses distinguished parameters in both (UI) and (EE) proofs.

*Dirk van Dalen, Logic and Structure* (Springer, 4th ed. 2004) allows for many languages of different signatures. The book’s semantics is non-Tarskian, and goes via adding extra constant symbols, one for every element of the domain: ‘$latex \exists x\varphi(x)$’ is then true on interpretation *I* so long as some ‘$latex \varphi(\overline{a})$’ is true (where ‘$latex \overline{a}$’ is one of the extra constants). The natural deduction system uses free variables in a parametric role.

*Ian Chiswell and Wilfrid Hodges, Mathematical Logic* (OUP, 2007) allows many languages with different signatures. The authors give a straight Tarskian semantics. Their natural deduction system allows wffs with free variables, and allows both variables and constants to be used parametrically (what matters, of course, is that the parametrically used terms don’t appear in premisses which the reasoning relies on, etc.).

*Nick Smith, Logic: The Laws of Truth* (Princeton, 2012) goes for one all-encompassing language of FOL, with unlimited numbers of predicates of every arity, etc. When we actually use the language, we’ll only be interested in a tiny fragment of it, and interpretations need only be partial. Semantically, Smith is non-Tarskian: ‘$latex \exists x\varphi(x)$’ is true on interpretation *I* so long as ‘$latex \varphi(a)$’ is true on some extension of *I* which is just like *I* except in what it assigns to the new name ‘*a*’. Smith’s main proof system is a tableau system; but he also describes a Fitch-style system and other ND systems. In these, names are used as parameters. (The one all-encompassing language of FOL gives us an unending supply.) Smith allows free variables in wffs in his syntax — but these wffs don’t get a starring role in his proof systems.

So in summary we have, I think, the following (correct me if I have mis-assigned anyone, and let me know about other comparable-level texts which cover natural deduction if they are interestingly different):

| Book | One/Many? | Tarski/non-Tarski? | Parameters? |
|---|---|---|---|
| Thomason | Many | Tarski | Distinguished |
| Tennant | One | Tarski | Names |
| Teller | One | Non-Tarski | Semi-distinguished |
| Barwise & Etchemendy | Many | Both | Names |
| Bergmann et al. | One | Tarski | Names |
| Simpson | One | — | Distinguished |
| van Dalen | Many | Non-Tarski | Variables |
| Chiswell and Hodges | Many | Tarski | Both |
| Smith | One | Non-Tarski | Names |

The obvious question is: is there a principled reason (or at least a reason of accessibility-to-students) for preferring one selection of options over any other, or is it a matter of taste which way we go?

*Revised to take account of first two comments. To be continued.*

Daniel Nagase
This is a bit old, but I’m going through Kit Fine’s papers right now and recalled that he has a book about this natural deduction practice, i.e. of using “arbitrary names”:

*Reasoning with Arbitrary Objects*. I haven’t read it in detail, but it does seem rather typical of Fine. So if you enjoy his writings (I do), this may be an interesting book to check out in this connection. (It seems that he incorporates some of the ideas from this book in his *Semantic Relationism*.)

David Makinson
One seldom-mentioned advantage of the substitutional account of satisfaction, as contrasted with the x-variant (Tarskian) one, is that it ties in nicely with the usual Henkin proof of completeness. That proof introduces constants in its construction of a canonical model, so that when the semantics is presented in the standard x-variant way, the argument in effect trades on the equivalence of that presentation with the substitutional one.

In my courses in recent years, I have been outlining both of these two semantic readings for the quantifier, noting their ultimate equivalence, and giving my own notations for (i) substitution of a constant a for free occurrences of a variable x in a formula alpha, and (ii) an x-variant of a given Tarskian valuation v, so as to make the parallel visually evident. Namely: alpha subscripted by x:=a for the former, and v subscripted by x=a for the latter. In the former, a is a constant of the object language, while in the latter it is a name, in our metalanguage, of an element of the domain. This abuse of language does not confuse them at all, nor does the omission of maniac devices like quotation marks and corners, which only obscure the essentials.

The class quickly gets the feel that the two approaches are at bottom the same. After a few obligatory manual computations of each kind, exhibiting not only the identity of results but also the parallels in the calculations, I let them choose whichever of the two paths they prefer. Most, but not all, of them seem to incline to the substitutional account. They also like to be able to exercise their individual taste in this way. These are philosophy students, and I suspect that maths majors with some set theory behind them might tend to the x-variant account; on the other hand, cs majors might, like the philosophers, prefer to be substitutional.

One would like to be able to articulate, at least for one’s own benefit if not that of the students, a common pattern of which both the substitutional and x-variant accounts are instances, but that does not seem to be easy. Perhaps a category theory buff could do it.

David Auerbach
I was going to mention Suppes (we always remember our first). I am fond of a separate style for parameters on, as you say, Fregean grounds, but I’m ambivalent about its pedagogic virtue. As to the first issue, I’m a many-languages person.

For the semantics issue I lean to the non-Tarskian version; for an elementary course it has less machinery and it comports well with the tableau method.

Peter Smith
I think we are in happy agreement then!

David Makinson
On the three issues raised, and particularly the last, it is useful to look at Francis J. Pelletier’s survey “A history of natural deduction and elementary logic textbooks”, in J. Woods & B. Brown (eds), Logical Consequence: Rival Approaches, Vol. 1, Oxford: Hermes Science Publications, 2001. It extends an earlier paper of Pelletier’s called “A brief history of natural deduction” in History and Philosophy of Logic 20 (1999): 1–31.

A later survey by Pelletier & Allen P. Hazen, called simply “Natural Deduction”, overlaps with the one of 2001, updating some of the material. As well as the discursive review, it contains an appendix entitled “50 elementary logic textbooks”, classifying in tabular form their accounts of natural deduction along various dimensions. I think that this survey appeared in one of Dov Gabbay’s Handbooks.

One old textbook worth looking at in this connection is Patrick Suppes’s 1957 *Introduction to Logic*. If my memory serves, Suppes introduced a third kind of individual term that he called “ambiguous names”, distinct from variables and constants, to do the job referred to in the third question, and went further by placing subscripts on them to indicate dependencies on other terms.

Rather different in style is an article by Daniel J. Velleman called “Variable declarations in natural deduction”, which appeared about ten years ago; I don’t have a note of where it came out, only a dusty paper version. As one would guess from the title, Velleman takes his cue from the way in which some computer programs “declare” variables, also seeing the procedure as implicit in Jaśkowski’s famous but little-read 1934 paper.

Peter Smith
Thanks for the reminder about the first Pelletier piece (which I’d looked at a while back) — though, his concerns being somewhat different, he doesn’t, I think, really speak to the first two of my questions. I’ll follow up his longer version (which, now you mention it, I might have seen but not read). I’ll get to Suppes in the next post(!). And many thanks for the pointer to the Velleman paper, which I don’t know at all; it sounds as if it should be very interesting.