## Postcard from the Lake District

After a week in the Lakes, here’s a photo taken walking back from Howtown to Pooley Bridge above Ullswater. Beautiful. Worries about Trump, Brexit, and even Begriffsschrift seem happily remote.

## Begriffsschrift and absolutely unrestricted quantification

We owe to Frege in Begriffsschrift our modern practice of taking unrestricted quantification (in one sense) as basic. I mean, he taught us how to rephrase restricted quantifications by using unrestricted quantifiers plus connectives in the now familiar way, so that e.g. “Every F is G” is regimented as “Everything is such that, if it is F, then it is G”, and “Some F is G” is regimented as “Something is such that it is F and it is G” — with the quantifier prefix in each case now running over everything. And we think the gain in formal simplicity in working with unrestricted quantifiers outweighs the small departure from the logical forms of natural language (and quirks to do with existential import, etc.).
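Just as a toy illustration (mine, not anything in Frege, and in Python for concreteness): over a small finite domain the restricted claims and their Fregean regimentations coincide.

```python
# Toy illustration over a small finite domain, with F = "is even"
# and G = "squares to an even number".

domain = [1, 2, 3, 4, 5, 6]

def F(x): return x % 2 == 0
def G(x): return (x * x) % 2 == 0

# "Every F is G", quantifying just over the Fs ...
every_F_is_G = all(G(x) for x in domain if F(x))
# ... and its Fregean regimentation: quantify over everything,
# restricting with a conditional
regimented_every = all((not F(x)) or G(x) for x in domain)

# "Some F is G", restricted ...
some_F_is_G = any(G(x) for x in domain if F(x))
# ... and regimented with a conjunction
regimented_some = any(F(x) and G(x) for x in domain)

assert every_F_is_G == regimented_every
assert some_F_is_G == regimented_some
```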

But what counts as the “everything” which our unrestricted quantifiers run over? In informal discourse, we cheerfully let the current domain of quantification be set by context (“I’ve packed everything”,  “Everyone is now here, so we can begin”). And we are adepts at following conversational exchanges where the domain shifts as we go along.

In the context of using a formal first-order language, we require that the domain, i.e. what counts as “everything”, is fixed once and for all, up front: no shifts are then allowed, at least while that language with that interpretation is in force. All changes of what we want to generalize about are to be made by explicitly restricting the complex predicates our quantifiers apply to, as Frege taught us. The quantifiers themselves continue to range unrestrictedly over the whole domain.

What about Frege in Begriffsschrift, however? There’s nothing there explicit about domains. Is that because he thinks that the quantifiers are always to be taken as ranging, not over this or that domain, but over absolutely everything — over all objects that there are?

Some have taken this to be Frege’s view. In particular, when Dummett talks about Frege and unrestricted quantification in Frege: Philosophy of Language, he is firmly of the view that “When these individual variables are those of Frege’s symbolic language, then their domain is to be taken as simply the totality of all objects” (p. 529).

But it isn’t obvious to me that Frege is committed to an idea of absolutely general quantification, at least in Begriffsschrift. (Re)reading the appropriate bits of that book plus the two other contemporary pieces published in Bynum’s Conceptual Notation, and the two contemporary pieces in Posthumous Writings, there doesn’t seem to be a clear commitment to the view.

OK, Frege will write variations on: “$\forall x(Fx \to Gx)$” means that whatever you put in place of the “$x$”, “$(Fx \to Gx)$” is correct. But note that here he never gives daft instantiations of the variable, totally inappropriate to the e.g. arithmetic meaning of F and G.

This is not quite his example, but he does the equivalent of remarking that “$\forall x(x$ is even $\to x^2$ is even)” isn’t refuted by “$(1\ \textrm{is even} \to 1^2\ \textrm{is even})$” because (given the truth-table for “$\to$”), that comes out true. But he never, as he should if the quantifiers are truly absolutely unrestricted, considers instances such as “The Eiffel Tower is even $\to$ $(\textrm{The Eiffel Tower})^2$ is even” — which indeed is problematic, as the consequent looks like nonsense.

Similarly, in PW, p. 27, Frege cheerfully writes “The numbers … are subject to no conditions other than $\vdash n = n + 0$, etc.”. There’s not a flicker of concern here about instances — as they would be if the implicit quantifier here were truly universal — such as “$\vdash \textrm{Napoleon} = \textrm{Napoleon} + 0$”. Rather it seems clear that here Frege’s quantifiers are intended to be running over … numbers! (Later, in Grundgesetze, Frege does talk about extending addition to be defined over non-numbers: but  it is far from clear that the Frege of Begriffsschrift has  already bitten the bullet and committed himself to the view that every function is defined for the whole universe.)

Earlier in PW, p. 13, Frege talks about the concept script “supplementing the signs of mathematics with a formal element” to replace verbal language. And this connects with what has always struck me as one way of reading Begriffsschrift.

1.  Yes it is all purpose, in that the logical apparatus can be added to any suitable base language (the signs of mathematics, the signs of chemistry, etc. and as we get cleverer and unify more science, some more inclusive languages too). And then too we have the resources to do conceptual analysis using that apparatus (e.g. replace informal mathematical notions with precisely defined versions) — making it indeed a concept-script. But what the quantifiers, in any particular application, quantify over will depend on what the original language aimed to be about: for the original language of arithmetic or chemistry or whatever already had messy vernacular expressions of generality, which we are in the course of replacing by the concept script.
2. Yes, the quantifiers will then unrestrictedly quantify over all numbers, or all chemical whatnots, or …, whichever objects the base language from which we start aims to be about (or as we unify science, some more inclusive domain set by more inclusive language).
3. And yes, Frege’s explanation of the quantifiers — for the reasons Dummett spells out — requires us to have a realist conception of objects (from whichever domain) as objects which determinately satisfy or don’t satisfy a given predicate, even if we have no constructive way of identifying each particular object or of deciding which predicates they satisfy. Etc.

But the crucial Fregean ingredients (1) to (3) don’t add up to any kind of commitment to conceiving of the formalised quantifiers as absolutely unrestricted. He is, to be sure, inexplicit here — but it is not obvious to me that a charitable reading of Begriffsschrift at any rate has to have Frege as treating his quantifiers as absolutely unrestricted.

Posted in Logic | 9 Comments

## Jeanne Moreau, 23.i.1928 – 31.vii.2017

Posted in This and that | 1 Comment

## Which is the quantifier?

A note on another of those bits of really elementary logic you don’t (re)think about from one year to the next – except when you are (re)writing an introductory text! This time, the question is which is the quantifier, ‘$\forall$’ or ‘$\forall x$’, ‘$\exists$’ or ‘$\exists x$’? Really exciting, eh?

Those among textbook writers calling the quantifier-symbols ‘$\forall$’, ‘$\exists$’ by themselves the quantifiers include Barwise/Etchemendy (I’m not sure how rigorously they stick to this, though). Tennant also calls e.g. ‘$\forall$’ a quantifier, and refers to ‘$\forall x$’ as a quantifier prefix.

Those calling the quantifier-symbol-plus-variable the quantifier include Bergmann/Moor/Nelson, Chiswell/Hodges, Guttenplan, Jeffrey, Lemmon, Quine, Simpson, N. Smith, P. Smith, Teller, and Thomason. (Lemmon and Quine of course use the old notation ‘$(x)$’ for the universal quantifier.) Van Dalen starts by referring to ‘$\forall$’ as the quantifier, but slips later into referring to ‘$\forall x$’ as the quantifier.

It’s clear what the majority practice is. Why not just go with it?

Modern practice is to parse ‘$\forall x(Fx \to Gx)$’ as  ‘$\forall x$’ applied to ‘$(Fx \to Gx)$’. However Frege (I’m reading through Dummett’s eyes, but I think this is right, mutating the necessary mutanda to allow for differences in notation) parses this as the operator we might temporarily symbolize ‘$\forall x \ldots x \ldots x \ldots$’ applied to  ‘$(F\ldots \to G\ldots)$’.

To explain: Frege discerns in ‘$\forall x(Fx \to Gx)$’ the complex predicate ‘$(F\ldots \to G\ldots)$’ (what you get by starting from ‘$(Fn \to Gn)$’ and removing the name). Generalizing involves applying an operator to this complex predicate (it really is an ‘open sentence’ not in the sense of containing free variables but in containing gaps — it is unsaturated). Another way of putting it: for a Fregean, quantifying in is a single operation of taking something of syntactic category s/n, and forming a sentence by applying a single operation of category s/(s/n). This quantifying operator is expressed by filling-the-gaps-with-a-variable-and-prefixing-by-‘$\forall x$’ in one go, so to speak. The semantically significant widget here is thus ‘$\forall x \ldots x \ldots x \ldots$’. Yes, within that, ‘$\forall$’ is a semantically significant part (it tells us which kind of quantification is being done). But — the Fregean story will go — ‘$\forall x$’ is not a semantically significant unit.

(Suppose that instead of using ‘x’s and ‘y’s to tell us which quantifier gets tied to which slots in the two place predicate ‘$L\ldots,-$’ to give ‘$\forall x\exists yLxy$’ we had used ‘$\forall \exists L\ldots,-$’ with arrows tying each quantifier to its appropriate slot. Typographical convenience apart, that would serve just as well. But then no one would think that ‘$\forall$’ and half a broken arrow is a semantically significant unit!)
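To make the Fregean parsing vivid in anachronistic terms (a mere analogy of my own, exploiting Python's higher-order functions): the quantifier is an operator applied to a whole one-place predicate, and the letter ‘x’ corresponds to nothing separable. The binding is done all at once by the lambda.

```python
# The quantifier as an operator on a whole one-place predicate: a loose
# analogue of Frege's '∀x ...x...x...' applied to the gappy '(F... → G...)'.

domain = range(10)

def forall(pred):
    return all(pred(n) for n in domain)

def exists(pred):
    return any(pred(n) for n in domain)

def F(n): return n > 3   # "n exceeds 3"
def G(n): return n > 1   # "n exceeds 1"

# ∀x(Fx → Gx): the 'x' figures only inside the lambda presenting the
# complex predicate; there is no separable '∀x' unit to point to.
universal = forall(lambda n: (not F(n)) or G(n))
particular = exists(lambda n: F(n) and G(n))
```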

So, whether you think ‘$\forall x$’ is worthy of being called the universal quantifier is actually not such a trivial matter after all. For it should depend on your view as to whether ‘$\forall x$’ is a semantically significant unit, shouldn’t it? You might think that the true believing Fregean protests about this sort of thing too much. I’d disagree — but at any rate the underlying issue is surely not just to be waved away by unargued terminological fiat.

Posted in Logic | 4 Comments

## Ivana Gavrić, Chopin

I’ve very much admired Ivana Gavrić’s previous, rightly praised discs (I wrote about the first two here in an earlier post, and you can find more about them here). So I was really looking forward to hearing her play in the intimate surroundings of the Fitzwilliam Museum in Cambridge a few days ago; but she had to postpone the concert as she can’t fly at the moment on doctor’s orders.

So to make up for that, I bought a copy of her new Chopin CD, which I have been listening to repeatedly over the last few days.  Now, I should say that I’m not the greatest Chopin devotee, and indeed the only recording of his music that I have returned to at all frequently in recent years has been Maria João Pires’s utterly wonderful performances of the Nocturnes. So I’m hardly in a position to give a very nuanced response to this new disc. But I am loving it.

Ivana Gavrić plays groups of Chopin’s earlier Mazurkas, separated by a couple of Preludes, a Nocturne and the Berceuse, which makes for something like a concert programme (you should listen up to the Berceuse — which is quite hauntingly played, her left hand rocking the cradle in a way that somehow catches at the heart — and then take an interval!). Some of the Mazurkas are very familiar, but many are (as good as) new to me. The Gramophone reviewer wanted Gavrić to perhaps play with more abandon — but no, her unshowy, undeclamatory playing seems just entirely appropriate to the scale and atmosphere of the pieces, often tinged with melancholy as they are. She is across the room from a group of you, friends and family perhaps, rather than performing to a concert hall. And repeated listenings reveal the subtle gestures and changes in tone she uses to shape the dances; these are wonderfully thought-through performances. Listen to the opening Mazurka in C sharp minor, Op. 6 no. 2 – the whole CD is in fact available on Apple Music – and you will be captivated.

Posted in Music | 1 Comment

## Why mandatory reiteration in Fitch-style proofs?

Digressing from issues about the choice of language(s), another post about principles for selecting among natural deduction systems — another choice point, and (though I suppose relatively minor) one not considered by Pelletier and Hazen.

Take the most trite of examples. It’s agreed that if P then R. I need to convince someone that if P and Q then R (I’m dealing with a dullard here!). I patiently spell it out:

We are given if P then R. So just suppose that P and Q. Then of course P will be true, and hence (given what we agreed) R. So our supposition leads to R, OK? – hence as I said, if P and Q then R.

You want that set out more formally? Here goes, indenting sub-proofs Fitch-style!

$1.\quad (P \to R)\quad\quad\quad\quad \textrm{premiss}$
$2. \quad\quad{|}\quad (P \land Q)\quad\quad\textrm{supposition}$
$3. \quad\quad{|}\quad P\quad\quad\quad\quad\ \ 2\ \land\textrm{E}$
$4. \quad\quad{|}\quad R\quad\quad\quad\quad\ \ 1, 3\ \textrm{MP}$
$5. \quad ((P \land Q) \to R) \quad 2-4\ \textrm{CP}$

Job done!

Except that in many official versions of Fitch-style proofs, this derivation is illegitimate. We can’t appeal to the initial premiss at (1) in order to draw the modus ponens inference inside the subproof: rather, we have to first re-iterate the premiss in the subproof, and then appeal to this reiterated conditional in applying modus ponens.

This requirement for explicit reiteration seems to go back to Jaśkowski’s first method of laying out proofs; and it is endorsed by Fitch’s crucial Symbolic Logic (1952). Here are some other texts using the same graphical layout as introduced by Fitch, and also requiring inferences in a subproof to appeal only to what’s in that subproof, so that we have to use a reiteration rule in order to pull in anything we need from outside the subproof: Thomason (1970), Simpson (1987/1999), Teller (1989).

Now, such a reiteration rule has to say what you can reiterate. For example: you can reiterate items that occur earlier outside the current subproof, so long as they do not appear in subproofs that have been closed off (and then, but only then, these reiterated items can be cited within the current subproof). But it is less complicated and seems more natural not to require wffs actually to be rewritten. The rule can then simply be: you can cite “from within a subproof, items that occur earlier outside the current subproof, so long as they do not appear in subproofs that have been closed off” — which is exactly how Barwise and Etchemendy (1991) put it. Others not requiring reiteration include Hurley (1982), Klenk (1983) and Gamut (1991). (Bergmann, Moor and Nelson (1980) have a rule they call reiteration, but they don’t require applying a reiteration rule before you can cite earlier wffs from outside a subproof.)
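The Barwise-and-Etchemendy-style citation rule can in fact be captured in a line or two of code. Here is a minimal sketch (my own toy representation, assuming each proof line is tagged with the tuple of nested subproofs it sits inside; this is not any textbook's official machinery):

```python
def accessible(citing_path, cited_path):
    """An earlier line may be cited iff its subproof context is an
    initial segment of the citing line's context, i.e. iff it does
    not sit inside a subproof that has since been closed off."""
    return cited_path == citing_path[:len(cited_path)]

# The trite example: () marks the top level, (1,) the first subproof.
lines = [
    ("(P → R)",       ()),     # 1. premiss
    ("(P ∧ Q)",       (1,)),   # 2. supposition
    ("P",             (1,)),   # 3. ∧E from 2
    ("R",             (1,)),   # 4. MP from 1, 3
    ("((P ∧ Q) → R)", ()),     # 5. CP, citing subproof 2-4 as a whole
]

# Line 4 can cite the premiss at line 1 directly, no reiteration needed:
assert accessible(lines[3][1], lines[0][1])
# But once the subproof is closed, line 5 cannot cite line 3 itself:
assert not accessible(lines[4][1], lines[2][1])
```

On this rule the premiss is simply in scope within the subproof; a Fitch-style reiteration rule would instead demand a copy of line 1 inside the subproof before the MP step could be taken.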

So: why require that you must, Fitch-style, reiterate-into-subproofs before you can cite  e.g. an initial premiss? What do we gain by insisting that we repeat wffs within a subproof before using them in that subproof?

It might be said that, with the repetitions, the subproofs are proofs (if we treat the reiterated wffs as new premisses). But that’s not quite right as in a Fitch-style proof it is required that the premisses be listed up front, not entered ambulando. So we’d have to re-arrange subproofs with the new supposition and the reiterated wffs all up front: and no one insists that we do that.

Anyway: at the moment I’m not seeing any compelling advantage for going with Fitch/Thomason/Simpson/Teller rather than with Barwise and Etchemendy and many others. Going the second way keeps things snappier yet doesn’t seem any more likely to engender student mistakes, does it? But has anyone any arguments for sticking with a reiteration requirement?

Posted in IFL, Logic | 7 Comments

## The language(s) of first-order logic #3

Back to those choice-points noted in the first of these three posts: one formal language for FOL or many? Tarski-style or quasi-substitutional semantics? And should symbols used as parameters be syntactically marked off, as opposed to recycling names or variables for the role?

In the second post, I gave a Fregean reason for inclining to syntactically marking parameters (the thought being we should syntactically mark important differences in semantic role).

I certainly don’t want to go with van Dalen and use free variables for this role, for the very basic reason that I simply want to avoid using free variables in an elementary text. Back to Begriffsschrift! By my Fregean lights, a formal language quantifier should be thought of as ‘$(\forall x)\ldots x\ldots x\ldots$’, the whole being a slot-filler operating on a gappy predicate. The artifice of parsing a quantified wff as ‘$(\forall x)$’ followed by a complete wff which has a free variable ‘$x$’ when stand-alone just obscures Frege’s insight into the nature of quantification. (And before you object that Frege has italic letter variables that look like free variables, remember that they aren’t: the italic letter wffs are just introduced as abbreviations for corresponding (universally) quantified wffs where the quantifier is given maximal scope — so for Frege, the ‘free’ variable wffs are explained in terms of quantified wffs, rather than vice versa.)

It is at least less misleading to use name-expressions from our formal language in the parametric role — relying on the similarities between, so to speak, permanent names and temporary names. But now note that there can be a clash here with the line we take on the first choice-point. If we say that there is one big language of FOL with an indefinite supply of names (only some of which get a fixed interpretation) then fine, that leaves us plenty of names to use as parameters. But suppose we take the many-language line, and think in terms of a language as having a particular fixed number of names (maybe zero, as in the basic language of first-order set theory!). Then there may not be enough names to recruit for parametric use in arbitrarily complex arguments. Compare: at least we don’t run out of variables, which is why van Dalen and Chiswell/Hodges, who take the many-language line, have to recruit variables to use as parameters.

So I think that many-language logicians who want to use natural deduction with its parametric reasoning have very good reasons for introducing a distinguished class of symbols for parametric use. For on most many-language stories there is a fixed  number, maybe zero, of proper names in the language, so we won’t have enough parameters if we stick to names. While the Fregean precept about marking important differences counts strongly against re-using variables-for-quantifiers in a quite different use as temporary names.

(What about Barwise and Etchemendy who I have down in my first post as both taking a many-language view and as using names as parameters? Well, to be honest, I find their position not entirely clear, but I think they conceive of a language as having some distinguished names, with an intended interpretation, though perhaps zero as in the language of set theory, but also having a supply of further names that can be used ad hoc, e.g.  to name particular sets as on p. 16, such names also being available for use as parameters.)

But should we go for many languages or one all-purpose language? It is notable that in my list of texts in the first of these posts, the more mathematical authors go for many languages. That is surely no accident; when in more advanced logical work we regiment mathematical theories, it is very natural to think of these various theories as having their own languages, the language of set theory, the language of first-order arithmetic, the language of category theory, and so on. This conforms with how mathematicians tend to speak and think. So this gives us a serious reason to start as we mean to go on, thinking in terms of there being many first-order languages, with their distinct signatures and particular non-logical vocabularies. Another reason for taking the many-languages line from the outset is the sheer inelegance of the one all-purpose-language picture. We build up a language with infinite non-logical vocabularies of every possible arity; we then throw almost all the complexity away again, either by making interpretations partial or by in principle interpreting everything and then showing that almost all the interpretative work is redundant. It is difficult to find that very aesthetically pleasing.

OK. None of all that is decisive. None of the logic books which make different choices are  thereby bad books! Still, I think there are reasonably weighty reasons – rather more than mere considerations of taste – to go for many languages, and (as we’ve just seen, not unconnectedly) for giving a language a class of symbols to serve as parameters (‘arbitrary names’ or whatever your favourite label is) which is syntactically distinct from names (proper names, individual constants) and variables. So that’s what I’ll be doing in the newly added natural deduction chapters in the second edition of IFL.

To be continued

## The language(s) of first-order logic #2

One theme touched on the last post is whether the language for our preferred natural deduction system syntactically distinguishes symbols used as parameters in (UI) and/or (EE) proofs. Of the nine books I mentioned, two use distinct symbols for parameters, as against names (individual constants) and variables. And another book (Teller’s) syntactically marks the use of names as parameters in subproofs leading to an application of (UI).

‘Distinguishers’ are perhaps slightly over-represented in our sample: in Pelletier and Hazen’s table of 50 texts, only 11 are reported as syntactically distinguishing “arbitrary names”. In a way, this low proportion of distinguishers is a bit surprising, for two reasons. First, there’s the weight of authority: Gentzen himself gives distinguished symbols for bound variables and variables occurring free in the role of parameters. (True, Gentzen was not the only begetter of natural deduction, and Jaśkowski does not distinguish real variables from symbols used as parameters.) But second, and much more importantly, Gentzen’s symbolic distinction marks a real difference in the role of symbols: and where we can easily do so we should surely follow (as I put it before) the Fregean precept that important differences in semantic role should be perspicuously marked in our symbolism.

True, we can recycle ordinary names or ordinary variables to play the new role of parameters (or ‘temporary names’ or whatever you want to call them). This gives us economy of notation. But Frege would insist that the desire for economy in one’s primitive symbolism should be trumped by the desire for perspicuity (on marking another distinction, he talks of  “[his] endeavour to have every objective distinction reflected in symbolism”). I think we should aim, within reason, to be with Frege here.

An oddity to remark on. There are some who mark only one of the two special uses of symbols, in (UI) inferences and in (EE) inferences. Thus Teller, we noted, puts hats on what he calls ‘arbitrary occurrences’ of a name in the course of a (UI) proof; but he doesn’t syntactically distinguish the use of a name as a temporary name instantiating an existential quantifier in the course of an (EE) proof. Suppes in his much used Introduction to Logic (1957) does things the other way about. The same variables that appear bound appear free in his (UI) inferences; but instantiations of existential quantifiers are done using special symbols as what he calls ‘ambiguous names’. Of course, both proof systems work fine! – but I’m not sure why you would want to leave it to context to mark one distinction in the use of symbols and syntactically mark the other. (Lemmon, whose Beginning Logic (1965) taught generations of UK students natural deduction in a Suppes-style layout, uses distinguished symbols as what he calls ‘arbitrary names’ in both (UI) and (EE) inferences.)

To be continued.

Posted in Logic | 2 Comments

## The language(s) of first-order logic #1

In presenting a formal theory of first-order logic for an introductory course — or, in my case, an introductory book — there are a lot of choices to be made, even once we’ve fixed on using a natural deduction proof system. (This paper by Pelletier reviews at length some of the choice points; and an overlapping paper by Pelletier and Hazen has a table telling us the assorted choices made by no less than 50 texts.)

I’m interested here, and in the next couple of posts, about a subset of issues  concerning the language(s) we are going to be using:

• Do we talk of one language or many languages (different languages for different interesting first-order theories, and different temporary ad hoc languages for use when we are regimenting different arguments)?
• In explaining the semantics of quantifiers, do we give a Tarski-style account in terms of satisfaction where we assign sequences of objects to sequences of variables? Or do we make use of some new kind of supplementary names for objects in the domain, so that e.g. ‘$\exists xFx$’ comes out true on an interpretation if  ‘$Fa$’ is true for some extra name ‘a’ on some perhaps extended interpretation?
• What do we use as parameters in natural deduction? Names? Variables? A new, syntactically distinguished, sort of symbol?
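To fix ideas about the second choice-point, here is a toy version of the ‘supplementary names’ semantics for the existential quantifier, over a finite domain (my own illustrative coding, not any of the texts’ official definitions):

```python
# An interpretation maps names to objects. To evaluate '∃xFx' we ask
# whether 'Fa' comes out true on some extension of the interpretation
# which assigns the otherwise-unused name 'a' an object of the domain.

domain = {0, 1, 2, 3}

def exists_true(interpretation, fresh_name, F):
    for obj in domain:
        extended = dict(interpretation)
        extended[fresh_name] = obj       # extend I with the new name
        if F(extended[fresh_name]):      # is 'Fa' true on the extension?
            return True
    return False

is_positive = lambda obj: obj > 0
result = exists_true({}, "a", is_positive)  # True: assign 'a' e.g. 1
```

A Tarski-style account would instead quantify over variable-assignments; the point here is just that the extra-names route never needs the notion of satisfaction by an assignment.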

Here, the first issue is to some extent a matter of presentational style, but (as we will note in a subsequent post) the details of what you say can make a difference on more substantive points. The second issue is probably familiar enough and we needn’t delay over it just now. To elucidate the last issue – the only one of the three dealt with explicitly  by Pelletier – consider a natural deduction proof to warrant  the inference  ‘Everyone loves pizza. Anyone who loves pizza loves ice-cream. So everyone loves ice-cream’. In skeletal form the proof goes like this, ending with a step of universal quantifier introduction (add to taste extra bells and whistles, such as Fitch-style indentation, and even rearrange into a tree):

1. $\quad \forall xFx$
2. $\forall x(Fx \to Gx)$
3. $Fq$
4. $(Fq \to Gq)$
5. $Gq$
6. $\forall xGx \quad\quad\textrm{(UI)}$

Here, I’ve used the intentionally unusual ‘q’. But what would you prefer to write in its place? Some would use e.g. ‘x’, picked from the list of official variables for the relevant language in play.  Some would use e.g.  ‘a’, picked  from the list of official individual constants (names) for the relevant language. However, we might note that  ‘q’  is neither being used as a variable tied to a prefixed quantifier, nor used as a genuine constant naming a determinate thing, but in a third way. It is, as they say, being used as a parameter. So we might wonder whether we should somehow  mark syntactically when a symbol is being used as a parameter (following the Fregean precept that important semantic differences should be perspicuously marked in our symbolism).

It is interesting to see how different authors of logic textbooks for students which cover natural deduction handle our three choice-points. So let’s take a number of texts in turn, in the chronological order of editions that I can reach from my shelves. (I’ll return to mention some other books, including that by Suppes, in the next post – but for the moment I am considering just a handful of books, ones that I have learnt from over the years, and/or ones that might still be recommended to students at least as supplementary reading.)

Richmond Thomason, Symbolic Logic (Macmillan, 1970) is, looking back, a strange mixture of a book, extremely lucid in some respects, but making pretty heavy weather of some of the more technical presentations. The main reason I am mentioning it here is that Thomason is emphatic about distinguishing parameters from names and variables. But otherwise, this is a many-language, almost-Tarski book (almost, because it considers valuations of all-the-parameters-at-once, rather than assigning values to all-the-variables-at-once).

Neil Tennant, Natural Logic (Edinburgh UP, 1978/1990) talks of the language of first-order logic. In giving a model, distinguished names are assigned objects, but later “undistinguished names” are employed as names of arbitrary objects considered in the course of a proof — i.e. they are employed as parameters. Free variables feature not in proofs but in the syntactic story about how wffs are built up, and the associated Tarski-style semantic story for wffs.

Paul Teller, A Modern Formal Logic Primer (Prentice-Hall 1989) seems to be a one-language book. At least, formation rules for a single language are introduced for sentential logic in Vol I, and Vol II carries on in the same spirit. The semantics is initially given by assuming that every object has a name. Then, late on, that assumption is dropped, but the resulting semantics is of the non-Tarskian kind I’ve indicated. Names are used in natural deduction proofs using the (UI) rule  as above, but are marked with a ‘hat’ in this role, so these parameters  are syntactically marked. But names used in their (admittedly slightly different) use as parameters in existential quantifier elimination (EE) arguments are not marked.

Jon Barwise and John Etchemendy, The Language of First-Order Logic (CSLI, 2nd ed. 1991) talks initially of FOL, the language of first-order logic, but soon starts talking of first-order languages, plural. On semantics, there is initially a version of the idea that we extend a first-order language with additional names, and ‘$\exists x\varphi(x)$’ is true on interpretation I so long as ‘$\varphi(a)$’ is true on some extension of I which is just like I except in what it assigns to the new name ‘a’. But later, an official Tarskian story is given. Wffs with free variables are allowed, but don’t feature in proofs, where parameters are all constants. But when parametric, the constants are first introduced in a ‘box’ (boxing a constant c is “the formal analog of the English phrase ‘Let c denote an arbitrary object’”). But this syntactic marking of a parameter does not – unlike Teller’s ‘hats’ – persist.

Merrie Bergmann, James Moor, Jack Nelson, The Logic Book (McGraw-Hill, 3rd ed. 1998) introduces a single, catch-all language PL. Initially, just a very informal semantics is given, enough for most purposes. But when a formal account is eventually given,  it is Tarskian. The authors allow free variables in wffs. But their natural deduction system is again presented as a system for deriving sentences from sentences, and in the course of proofs it is names (not free variables) which are used parametrically.

R.L. Simpson, Essentials of Symbolic Logic (Broadview 1999/2008) is perhaps the most elementary of the books mentioned in this list – it talks of the language of symbolic logic but we can’t answer the question of what style of formal semantics Simpson prefers as none is given.  The book is worthy of mention here, however, because of the particular clarity of its ND system, and the way it handles the quantifier rules. It uses distinguished  parameters in both (UI) and (EE) proofs.

Dirk van Dalen, Logic and Structure (Springer, 4th ed. 2004) allows for many languages, different languages with different signatures. The book’s semantics is non-Tarskian, and goes via adding extra constant symbols, one for every element of the domain: then ‘$\exists x\varphi(x)$’ is true on interpretation I so long as some ‘$\varphi(\overline{a})$’ is true (where ‘$\overline{a}$’ is one of the extra constants). The natural deduction system uses free variables in a parametric role.

Ian Chiswell and Wilfrid Hodges, Mathematical Logic (OUP, 2007) allows many languages with different signatures. The authors give a straight Tarskian semantics. Their natural deduction system allows wffs with free variables, and allows both variables and constants to be used parametrically (what matters, of course, is that the parametrically used terms don’t appear in premisses which the reasoning relies on, etc.).

Nick Smith, Logic: The Laws of Truth (Princeton, 2012) goes for one all-encompassing language of FOL, with unlimited numbers of predicates of every arity etc. When we actually use the language, we’ll only be interested in a tiny fragment of the language, and interpretations need only be partial. Semantically, Smith is non-Tarskian: ‘$\exists x\varphi(x)$’ is true on interpretation I so long as ‘$\varphi(a)$’ is true on some extension of I which is just like I except in what it assigns to the new name ‘a’. Smith’s main proof system is a tableau system; but he also describes a Fitch-style system and other ND systems. In these cases, names are used as parameters. (The one all-encompassing language of FOL will give us an unending supply.) Smith allows free variables in wffs in his syntax — but these wffs don’t get a starring role in his proof systems.

So in summary we have, I think, the following (correct me if I have mis-assigned anyone, and let me know about other comparable-level texts which cover natural deduction if they are interestingly different):

| Book | One/Many? | Tarski/non-Tarski? | Parameters? |
| --- | --- | --- | --- |
| Thomason | Many | Tarski | Distinguished |
| Tennant | One | Tarski | Names |
| Teller | One | Non-Tarski | Semi-distinguished |
| Barwise & Etchemendy | Many | Both | Names |
| Bergmann, et al. | One | Tarski | Names |
| Simpson | One | — | Distinguished |
| van Dalen | Many | Non-Tarski | Variables |
| Chiswell and Hodges | Many | Tarski | Both |
| Smith | One | Non-Tarski | Names |
The obvious question is: is there a principled reason (or at least a reason of accessibility-to-students) for preferring one selection of options over any other, or is it a matter of taste which way we go?

Revised to take account of first two comments. To be continued.

Posted in Logic | 6 Comments

## Truth trees for propositional logic

I’ve revised (and of course revised again, and re-revised!) the chapters on truth-trees for propositional logic for the second edition of my intro logic text. What took four chapters and forty pages has now become two chapters and twenty-eight pages, though with two pages of exercises still to be added.

I previously went from T/F-signed trees to T-signed trees to unsigned trees in a messy way, and that’s gone (so we now have just one sort of tree, initially T-signed and then the Ts are dropped). I previously did trees first for the connectives and/or/not and then added the material conditional and biconditional in a separate chapter; now I just treat the first four connectives in one chapter and the biconditional is relegated to exercises. There are other tidyings, and I’ve dropped the stuff about saturated sets (destined to become an online extra).
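For anyone curious about what is under the hood, the core of the unsigned-tree test for the and/or/not fragment can be sketched in a few lines. This is just my bare-bones satisfiability checker, with formulas as nested tuples; it is nothing like the book's own presentation, only the underlying algorithm:

```python
# Formulas: a string is an atom; otherwise ('not', A), ('and', A, B)
# or ('or', A, B).

def satisfiable(branch):
    """Does the set of wffs on this branch extend to an open branch?"""
    for f in branch:
        if isinstance(f, str):
            continue                                 # literals wait
        rest = [g for g in branch if g is not f]
        op = f[0]
        if op == 'and':                              # stacking rule
            return satisfiable(rest + [f[1], f[2]])
        if op == 'or':                               # branching rule
            return (satisfiable(rest + [f[1]])
                    or satisfiable(rest + [f[2]]))
        g = f[1]                                     # op == 'not'
        if isinstance(g, str):
            continue                                 # negated literals wait
        if g[0] == 'not':                            # double negation
            return satisfiable(rest + [g[1]])
        if g[0] == 'and':                            # negated ∧: branch
            return (satisfiable(rest + [('not', g[1])])
                    or satisfiable(rest + [('not', g[2])]))
        return satisfiable(rest + [('not', g[1]),    # negated ∨: stack
                                   ('not', g[2])])
    # Only literals left: the branch is open iff no atom occurs
    # both plain and negated.
    atoms = {f for f in branch if isinstance(f, str)}
    negated = {f[1] for f in branch if isinstance(f, tuple)}
    return not (atoms & negated)

closed = satisfiable([('and', 'P', ('not', 'P'))])      # tree closes: False
still_open = satisfiable([('not', ('and', 'P', 'Q'))])  # stays open: True
```

And, of course, a wff is a tautology just in case its negation is unsatisfiable on this test.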

So I hope, in short, that the result reads more cleanly and (even) more accessibly. Here it is. All comments most gratefully received!

Posted in IFL | 1 Comment