The language(s) of first-order logic #2

One theme touched on in the last post is whether the language for our preferred natural deduction system syntactically distinguishes symbols used as parameters in (UI) and/or (EE) proofs. Of the nine books I mentioned, two use distinct symbols for parameters, as against names (individual constants) and variables. And another book (Teller’s) syntactically marks the use of names as parameters in subproofs leading to an application of (UI).

‘Distinguishers’ are perhaps slightly over-represented in our sample: in Pelletier and Hazen’s table of 50 texts, only 11 are reported as syntactically distinguishing “arbitrary names”. In a way, this low proportion of distinguishers is a bit surprising, for two reasons. First, there’s the weight of authority: Gentzen himself uses distinct symbols for bound variables and for variables occurring free in the role of parameters. (True, Gentzen was not the only begetter of natural deduction, and Jaśkowski does not distinguish real variables from symbols used as parameters.) But second, and much more importantly, Gentzen’s symbolic distinction marks a real difference in the role of symbols: and where we can easily do so we should surely follow (as I put it before) the Fregean precept that important differences in semantic role should be perspicuously marked in our symbolism.

True, we can recycle ordinary names or ordinary variables to play the new role of parameters (or ‘temporary names’ or whatever you want to call them). This gives us economy of notation. But Frege would insist that the desire for economy in one’s primitive symbolism should be trumped by the desire for perspicuity (on marking another distinction, he talks of “[his] endeavour to have every objective distinction reflected in symbolism”). I think we should aim, within reason, to be with Frege here.

An oddity to remark on. There are some who mark only one of the two special uses of symbols, in (UI) inferences and in (EE) inferences. Thus Teller, we noted, puts hats on what he calls ‘arbitrary occurrences’ of a name in the course of a (UI) proof; but he doesn’t syntactically distinguish the use of a name as a temporary name instantiating an existential quantifier in the course of an (EE) proof. Suppes in his much used Introduction to Logic (1957) does things the other way about. The same variables that appear bound appear free in his (UI) inferences; but instantiations of existential quantifiers are done using special symbols as what he calls ‘ambiguous names’. Of course, both proof systems work fine! – but I’m not sure why you would want to leave it to context to mark one distinction in the use of symbols and syntactically mark the other. (Lemmon, whose Beginning Logic (1965) taught generations of UK students natural deduction in a Suppes-style layout, uses distinguished symbols as what he calls ‘arbitrary names’ in both (UI) and (EE) inferences.)

To be continued.

2 thoughts on “The language(s) of first-order logic #2”

  1. Dear Peter, sorry for the late comment, but here is something people discussing the teaching of logic should know about eigenvariables in quantifier inferences:

    The way Gentzen does it, a distinct fresh variable in (UI) and (EE) is a must. This has been known since 1986, a gift from computer science to pure logic: height-preserving admissibility of alpha-conversion does not work if we don’t follow Gentzen here. The same thing in more mundane terms: you want to show that you can change bound variables without increasing the size of a derivation, and the Gentzen way is the only one.
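
    To spell out the rules in question (a rough paraphrase, not Gentzen’s own notation), the two quantifier inferences that carry an eigenvariable condition are:

    ∀I: from A(y), infer ∀x A(x)
    ∃E: from ∃x A(x), together with a derivation of C from the temporary assumption A(y), infer C, discharging A(y)

    where in both cases the eigenvariable y must not occur free in any assumption on which the conclusion still depends (nor, in the ∃E case, in ∃x A(x) or in C).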

  2. I’d like to make some comments on this issue of distinguishing arbitrary names from proper names in natural deduction. One relevant consideration would be the year level and background of the students. I have taught first-year elementary formal logic in several universities here in Australia. The students are enrolled mainly in arts (humanities) degrees. They tend to be somewhat ‘maths phobic’ and most will not continue with philosophy or logic. For that kind of audience you want to keep things as simple as possible (hence the usual choice of tableaux methods rather than natural deduction) and avoid unnecessary symbols and scary looking syntax.

    I have taught a second year level course using Lemmon’s ‘Beginning Logic’ as the textbook and, as you mentioned, Lemmon distinguishes arbitrary names and proper names in his natural deduction system. But he does it in a very simple way. Instead of distinguishing arbitrary names using some extra syntactic decoration (like a circumflex or box), he uses lower case letters from the beginning of the alphabet (a, b, c, …) for arbitrary names and lower case letters from the middle of the alphabet (m, n, o, …) for proper names. The rules of ∀-introduction and ∃-elimination require the use of arbitrary names; ∀-elimination and ∃-introduction can operate on both kinds of name.

    I want to tentatively suggest that for the usual type of ‘introduction to formal logic’ course, there are good reasons to dispense with the distinction. Consider two proofs in Lemmon’s system of the argument you mentioned in an earlier post, ‘Everyone loves pizza, everyone who loves pizza loves ice-cream, therefore everyone loves ice-cream’ (with Fx for ‘x loves pizza’ and Gx for ‘x loves ice-cream’):

    Proof #1

    1 (1) ∀x(Fx) Premise
    2 (2) ∀x(Fx → Gx) Premise
    1 (3) Fa 1, ∀E
    2 (4) Fa → Ga 2, ∀E
    1,2 (5) Ga 3, 4 MP
    1,2 (6) ∀x(Gx) 5, ∀I

    Proof #2

    1 (1) ∀x(Fx) Premise
    2 (2) ∀x(Fx → Gx) Premise
    1 (3) Fn 1, ∀E
    2 (4) Fn → Gn 2, ∀E
    1,2 (5) Gn 3, 4 MP
    1,2 (6) ∀x(Gx) 5, ∀I

    The first one is a correct proof in Lemmon’s system, but the second one is not: at the final step it attempts to use ∀I on a proper name rather than an arbitrary name. And yet the two proofs are both semantically perfectly valid and syntactically practically identical. In both cases, the final step satisfies the restriction on the use of ∀I which does the real work of making the use of the name ‘arbitrary’ – the name does not occur in any of the assumptions used to derive the line to which ∀I is applied.
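
    To make the restriction fully explicit (my paraphrase rather than Lemmon’s wording):

    ∀I: given a line A(t) resting on a pool of assumptions, infer ∀x(Ax), where x replaces t throughout, provided the name t does not occur in any of those assumptions – and, in Lemmon’s official version, provided t is an arbitrary name (a, b, c, …).

    Proof #2 satisfies the first, substantive proviso; it falls foul only of the second, purely notational one.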

    Lemmon motivates the distinction between arbitrary names and proper names in the following way:

    “Think of what Euclid does when he wishes to prove that all triangles have a certain property; he begins ‘let ABC be a triangle’ and proves that ABC has the property in question; then he concludes that all triangles have the property. What here is ‘ABC’? Certainly not the proper name of any triangle, for in that case the conclusion would not follow. For example, given that Krushchev is bald, it does not follow that everyone is bald. It is natural to view ‘ABC’ as the name of an arbitrarily selected triangle, a particular triangle certainly, but any one you care to pick. For if we can show that an arbitrarily selected triangle has F, then we can soundly draw the conclusion that all triangles have F.”

    The remark “given that Krushchev is bald, it does not follow that everyone is bald” is surely true, but not relevant. After all, if we could establish that Krushchev is bald from premises none of which use the name ‘Krushchev’, then the conclusion that everyone is bald *does* follow. In Lemmon’s system, the invalid argument he mentions here would be represented like this:

    1 (1) Bn Premise (‘Krushchev is bald’)
    1 (2) ∀x(Bx) From 1 by ∀I (‘Everyone is bald’)

    The second step is invalid because it does not satisfy the restriction on the use of ∀I – the name n occurs in an assumption on which line (1) depends (namely (1) itself). Furthermore, we cannot make the proof valid by replacing the proper name n with an arbitrary name a: the proof would still fail to satisfy the relevant restriction. Since the use of proper as opposed to arbitrary names makes no real difference to validity (provided the relevant restrictions are met), the principle of ‘keep it simple’ suggests the distinction should be dropped.

    I think that in this setting, the distinction between proper names and arbitrary names is more pragmatic than semantic; it has to do with the way the name is used in a proof. Lemmon does not give any formal semantics or proof of soundness or completeness for his system, but it would be entirely possible to do so without making any semantic distinction at all between proper names and arbitrary names. In both cases, the relevant semantic clause would be identical; the interpretation of a name (proper or arbitrary) is a particular element of the domain – any element you like. In this sense, all the names in the system are arbitrary. Since they are all semantically arbitrary, why make a syntactic distinction? In other words, perhaps the Fregean principle you mention — that important semantic differences should be clearly represented symbolically — does not apply quite so clearly here.
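
    To put the same point semantically (a standard observation, roughly stated): if every interpretation that makes a set of premises true also makes A(t) true, and the name t occurs neither in those premises nor in ∀x(Ax), then every interpretation that makes the premises true also makes ∀x(Ax) true – for the interpretation of t can be varied over the whole domain without disturbing the premises. Nothing in that argument cares whether t is classified as a ‘proper’ or an ‘arbitrary’ name.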

    That seems to me to be the typical situation in an introductory formal logic course, where you’re using a ‘general purpose’ logical language to represent arbitrary natural language arguments in symbolic form in order to test them for validity. For that you want an unlimited supply of predicate/relation symbols and names for use in representing arguments with no special semantic restrictions on their interpretations.

    That said, it might still be a good idea, to avoid possible confusion and mistakes, not to use the same symbol as both a proper name and an arbitrary name in the same proof. But the distinction need not be built into the rules of proof, as it is in Lemmon’s system. It might take the form of a useful practical convention, similar to the conventions we often tell students to follow when constructing tableaux – put off applying branching rules as long as possible, apply existential instantiation first, and so on. These conventions make no difference to the correctness of a proof, but they help keep tableaux shorter, simpler and less error-prone to construct. In the same way, one might suggest to students “When representing arguments to be tested, use letters from the middle of the alphabet (n, m, …) for any proper names used in the argument and then, when constructing a proof, use different letters (a, b, c, …) for any arbitrary names you want to introduce for the purpose of using ∀I or ∃E.” Again, provided the relevant restrictions are met, it would not make any difference to the correctness of a proof whether a ‘proper name’ or an ‘arbitrary name’ was used. But making the distinction might make proofs easier to understand and construct.

    Finally (as I now see you say yourself in post #3 on this topic), it’s quite different if you’re using a natural deduction system as the underlying logic for a special theory with its own special vocabulary and axioms (arithmetic, set theory, group theory, etc.). In that setting, it would be essential to have an unending supply of special constants for use in natural deduction proofs in addition to the proper names of the special vocabulary (if any), such as 0, ∅, e (for the identity element of a group) and so on. Using those symbols as arbitrary names would be especially confusing and even more error-prone. And note that in this setting there are, in effect, special semantic restrictions on the interpretations of the proper names – the interpretations have to satisfy any axioms involving the proper names. So semantically, the interpretation of a proper name is no longer arbitrary and the Fregean principle has more bite.

    But this is quite a different context from the one involved in a typical ‘introduction to formal logic’ course. For students who will not be moving on to a more advanced course, the distinction between arbitrary names and proper names is dispensable; for those who are going on to such a course, would it set them back substantially if their first introduction to logic did not make the distinction (or made it only by ‘convention’ as suggested above)? I don’t think any of this is conclusive (and I don’t think there’s anything importantly wrong about making the distinction); I just wanted to suggest some reasons against making the distinction in an elementary logic course.
