In presenting a formal theory of first-order logic for an introductory course — or, in my case, an introductory book — there are a *lot* of choices to be made, even once we’ve fixed on using a natural deduction proof system. (This paper by Pelletier reviews at length some of the choice points; and an overlapping paper by Pelletier and Hazen has a table telling us the assorted choices made by no less than 50 texts.)

I’m interested here, and in the next couple of posts, in a subset of issues concerning the *language(s)* we are going to be using:

- Do we talk of *one* language or *many* languages (different languages for different interesting first-order theories, and different temporary ad hoc languages for use when we are regimenting different arguments)?
- In explaining the semantics of quantifiers, do we give a Tarski-style account in terms of satisfaction, where we assign sequences of objects to sequences of *variables*? Or do we make use of some new kind of *supplementary names* for objects in the domain, so that e.g. ‘∃xFx’ comes out true on an interpretation if ‘Fa’ is true for some extra name ‘*a*’ on some perhaps extended interpretation?
- What do we use as *parameters* in natural deduction? Names? Variables? A new, syntactically distinguished, sort of symbol?
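To make the second choice-point vivid, here is a toy illustration (mine, not from any of the texts discussed) of the two semantic styles for the existential quantifier, evaluated over a small finite model; the function names and the particular model are of course just my own choices:

```python
# A toy finite model: a three-element domain, and the extension of a
# one-place predicate 'F' (the set of objects satisfying F).
domain = {1, 2, 3}
F_extension = {2, 3}

# Tarski-style: 'ExFx' is satisfied iff some assignment of an object
# in the domain to the variable 'x' satisfies 'Fx'.
def true_tarski_style():
    return any(obj in F_extension for obj in domain)  # vary the value of 'x'

# Supplementary-names style: extend the language with a new name 'a';
# 'ExFx' is true iff 'Fa' is true on some extended interpretation
# which assigns some object of the domain to 'a'.
def true_via_new_name():
    def Fa_true(interpretation):           # interpretation: {'a': object}
        return interpretation['a'] in F_extension
    return any(Fa_true({'a': obj}) for obj in domain)

print(true_tarski_style(), true_via_new_name())  # prints: True True
```

On a finite model the two styles trivially agree; the interesting differences between them are in how the official story runs, not in the verdicts delivered.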

Here, the first issue is to some extent a matter of presentational style, but (as we will note in a subsequent post) the details of what you say can make a difference to more substantive points. The second issue is probably familiar enough and we needn’t delay over it just now. To elucidate the last issue – the only one of the three dealt with explicitly by Pelletier – consider a natural deduction proof to warrant the inference ‘Everyone loves pizza. Anyone who loves pizza loves ice-cream. So everyone loves ice-cream’. In skeletal form the proof goes like this, ending with a step of universal quantifier introduction (add to taste extra bells and whistles, such as Fitch-style indentation, and even rearrange into a tree):
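With ‘P’ for ‘loves pizza’ and ‘I’ for ‘loves ice-cream’ (my choice of abbreviations), a sketch along these lines:

```latex
% A skeletal linear proof, with 'q' playing the parametric role
\begin{array}{lll}
1. & \forall x\, Px            & \text{premiss} \\
2. & \forall x\,(Px \to Ix)    & \text{premiss} \\
3. & Pq                        & \text{from 1, universal instantiation} \\
4. & Pq \to Iq                 & \text{from 2, universal instantiation} \\
5. & Iq                        & \text{from 3, 4, modus ponens} \\
6. & \forall x\, Ix            & \text{from 5, universal quantifier introduction}
\end{array}
```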

Here, I’ve used the intentionally unusual ‘*q*’. But what would you prefer to write in its place? Some would use e.g. ‘*x*’, picked from the list of official variables for the relevant language in play. Some would use e.g. ‘*a*’, picked from the list of official individual constants (names) for the relevant language. However, we might note that ‘*q*’ is neither being used as a variable tied to a prefixed quantifier, nor used as a genuine constant naming a determinate thing, but in a third way. It is, as they say, being used as a parameter. So we might wonder whether we should somehow mark syntactically when a symbol is being used as a parameter (following the Fregean precept that important semantic differences should be perspicuously marked in our symbolism).

It is interesting to see how different authors of student-level logic textbooks that cover natural deduction handle our three choice-points. So let’s take a number of texts in turn, in the chronological order of the editions that I can reach from my shelves. (I’ll return to mention some other books, including that by Suppes, in the next post – but for the moment I am considering just a handful of books, ones that I have learnt from over the years, and/or ones that might still be recommended to students at least as supplementary reading.)

*Richmond Thomason, Symbolic Logic (Macmillan, 1970)* is, looking back, a strange mixture of a book, extremely lucid in some respects, but making pretty heavy weather of some of the more technical presentations. The main reason I am mentioning it here is that Thomason is emphatic about distinguishing parameters from names and variables. But otherwise, this is a many-language, almost-Tarski book (almost, because it considers valuations of all-the-parameters-at-once, rather than assigning values to all-the-variables-at-once).

*Neil Tennant, Natural Logic (Edinburgh UP, 1978/1990)* talks of *the* language of first-order logic. In giving a model, distinguished names are assigned objects, but later “undistinguished names” are employed as names of arbitrary objects considered in the course of a proof — i.e. they are employed as parameters. Free variables feature not in proofs but in the syntactic story about how wffs are built up, and in the associated Tarski-style semantic story for wffs.

*Paul Teller, A Modern Formal Logic Primer (Prentice-Hall, 1989)* seems to be a one-language book. At least, formation rules for a single language are introduced for sentential logic in Vol. I, and Vol. II carries on in the same spirit. The semantics is initially given by assuming that every object has a name. Then, later on, that assumption is dropped, but the resulting semantics is of the non-Tarskian kind I’ve indicated. Names are used in natural deduction proofs involving the (UI) rule as above, but are marked with a ‘hat’ in this role, so these parameters are syntactically marked. But names in their (admittedly slightly different) use as parameters in existential quantifier elimination (EE) arguments are *not* marked.

*Jon Barwise and John Etchemendy, The Language of First-Order Logic* (CSLI, 2nd ed. 1991) talks initially of FOL, *the* language of first-order logic, but soon starts talking of first-order languages, plural. On semantics, there is initially a version of the idea that we extend a first-order language with additional names, so that ‘∃xFx’ is true on interpretation *I* so long as ‘Fa’ is true on some extension of *I* which is just like *I* except in what it assigns to the new name ‘*a*’. But later, an official Tarskian story is given. Wffs with free variables are allowed, but don’t feature in proofs, where parameters are all constants. When parametric, a constant is first introduced in a ‘box’ (boxing a constant *c* is “the formal analog of the English phrase ‘Let *c* denote an arbitrary object’”). But this syntactic marking of a parameter does not – unlike Teller’s ‘hats’ – persist.

*Merrie Bergmann, James Moor, Jack Nelson, The Logic Book (McGraw-Hill, 3rd ed. 1998)* introduces a single, catch-all language PL. Initially, just a very informal semantics is given, enough for most purposes. But when a formal account is eventually given, it is Tarskian. The authors allow free variables in wffs. But their natural deduction system is again presented as a system for deriving sentences from sentences, and in the course of proofs it is names (not free variables) which are used parametrically.

*R.L. Simpson, Essentials of Symbolic Logic (Broadview, 1999/2008)* is perhaps the most elementary of the books mentioned in this list – it talks of *the* language of symbolic logic, but we can’t say what style of formal semantics Simpson prefers, as none is given. The book is worth mentioning here, however, because of the particular clarity of its ND system, and the way it handles the quantifier rules. It uses distinguished parameters in both (UI) and (EE) proofs.

*Dirk van Dalen, Logic and Structure (Springer, 4th ed. 2004)* allows for many languages, different languages with different signatures. The book’s semantics is non-Tarskian, and goes via adding extra constant symbols, one for every element of the domain: ‘∃xFx’ is then true on interpretation *I* so long as some ‘Fc’ is true (where ‘c’ is one of the extra constants). The natural deduction system uses free variables in a parametric role.
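Spelled out schematically (my paraphrase, with notation that may differ in detail from van Dalen’s own), the clause runs:

```latex
% Extend the language with a constant \bar{a} for each element a of
% the structure \mathfrak{A}; then the existential clause reads:
\mathfrak{A} \models \exists x\,\varphi(x)
  \quad\text{iff}\quad
  \mathfrak{A} \models \varphi(\bar{a})\ \text{for some}\ a \in |\mathfrak{A}|
```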

*Ian Chiswell and Wilfrid Hodges, Mathematical Logic* (OUP, 2007) allows many languages with different signatures. The authors give a straight Tarskian semantics. Their natural deduction system allows wffs with free variables, and allows both variables and constants to be used parametrically (what matters, of course, is that the parametrically used terms don’t appear in premisses which the reasoning relies on, etc.).

*Nick Smith, Logic: The Laws of Truth (Princeton, 2012)* goes for one all-encompassing language of FOL, with unlimited numbers of predicates of every arity, etc. When we actually use the language, we’ll only be interested in a tiny fragment of it, and interpretations need only be partial. Semantically, Smith is non-Tarskian: ‘∃xFx’ is true on interpretation *I* so long as ‘Fa’ is true on some extension of *I* which is just like *I* except in what it assigns to the new name ‘*a*’. Smith’s main proof system is a tableau system; but he also describes a Fitch-style system and other ND systems. In these, names are used as parameters. (The one all-encompassing language of FOL gives us an unending supply.) Smith allows free variables in wffs in his syntax — but these wffs don’t get a starring role in his proof systems.

So in summary we have, I think, the following (correct me if I have mis-assigned anyone, and let me know about other comparable-level texts which cover natural deduction if they are interestingly different):

| Book | One/Many? | Tarski/non-Tarski? | Parameters? |
| --- | --- | --- | --- |
| Thomason | Many | Tarski | Distinguished |
| Tennant | One | Tarski | Names |
| Teller | One | Non-Tarski | Semi-distinguished |
| Barwise & Etchemendy | Many | Both | Names |
| Bergmann et al. | One | Tarski | Names |
| Simpson | One | — | Distinguished |
| van Dalen | Many | Non-Tarski | Variables |
| Chiswell and Hodges | Many | Tarski | Both |
| Smith | One | Non-Tarski | Names |

The obvious question is: is there a principled reason (or at least a reason of accessibility-to-students) for preferring one selection of options over any other, or is it a matter of taste which way we go?

*Revised to take account of first two comments. To be continued.*