Empty domains #2

Here is a long quotation from Oliver and Smiley in their Plural Logic, motivating their adoption of a universally free logic (i.e. one allowing empty domains and empty terms).

If empty singular terms are outlawed, we are deprived of logic as a tool for assessing arguments involving hypothetical particular things. Scientists argued about the now-discredited planet Vulcan just as they argued about the now-accepted proto-continent Pangaea; intellectuals argued about the merits of Ern Malley, the non-existent poet of the great Australian literary hoax. Sometimes of course we may know that the particular thing exists, but at others we may wish to remain agnostic or we may even have good grounds to believe that it doesn’t exist. Indeed, our reasoning may be designed precisely to supply those grounds: ‘Let N be the greatest prime number. Then … contradiction. So there is no such thing as N’.

Now consider the effect of ruling out an empty domain. For many choices of domain —people, tables, chairs—the requirement that the domain be necessarily non-empty is frankly absurd. One may retreat to the idea that the non-emptiness of the domain is a background presupposition in any argument about the relevant kind of thing. But then we shall be deprived of logic as a tool for assessing arguments involving hypothetical kinds. We may indeed be certain that there is a human hand or two, but for other kinds this may be a matter of debate: think of the WIMPs and MACHOs of dark matter physics, or atoms of elements in the higher reaches of the Periodic Table. Sometimes we will have or will come to have good grounds to believe that the kind is empty, like the illusory canals on Mars or the sets of naive set theory, and again this may be the intended outcome of our reasoning: ‘Let’s talk about sets. They are governed by a (naive) comprehension principle … contradiction. So there are no such things.’

It may be replied that some domains are necessarily non-empty, say the domain of natural numbers. It follows that the absolutely unrestricted domain of things is necessarily non-empty too. But even if this necessity could be made out, and what’s more made out to be a matter of logical necessity, we would still not want the argument ‘∀xFx, so ∃xFx’ to be valid. As we see it, the topic neutrality of logic means that it ought to cover both unrestricted and restricted domains, so it ought not to validate a pattern of argument which certainly fails in some restricted cases, even if not in all.

I think there are at least two issues here which we might initially pause over. First, in just what sense ought logic to be topic neutral? And second (even if we accept some kind of topic neutrality as a constraint on what counts as logic), how far does a defensible form of neutrality require neutrality about e.g. all(?) questions of existence and/or whether our language actually hooks up to the world? But leave those issues to another post — spoiler alert: I’m not minded to go with Oliver and Smiley’s arguments for a universally free logic. For the moment, what interests me is that — whatever the force of the arguments — there is a straight parallelism between the considerations which Oliver and Smiley adduce for allowing empty names and for allowing empty domains. Certainly, I see no hint here (or in the surrounding text) that they conceive the cases as raising significantly different issues.

In the previous post, I noted that Wilfrid Hodges very briskly offers similar reasons of topic neutrality to argue for allowing empty domains, but goes on to present a logical system which doesn’t allow for empty names. And, having in mind similar considerations to Oliver and Smiley’s, I suggested that this half-way-house is not a comfortable resting place: if his (and their) reason for allowing empty domains is a good one, then we should equally allow (as Oliver and Smiley do) empty names. So the story went.

However, it was suggested that Hodges’s position is much more natural than I made out. Taking parts of two comments, Bruno Whittle puts it this way:

On a plausible way of thinking about things, to give the meaning of a name is to provide it with a referent. So to ban empty names is simply to restrict attention to those which have been given meanings. On the other hand, … I would say that to give the meaning of a quantifier—to ‘provide it with a domain’—is to specify zero or more objects for it to range over. To insist that one specify at least one seems barely more natural than insisting that one provide at least two (i.e. that one really does provide objects plural!)

But is it in general supposed to be a plausible way of thinking about singular terms that to give one a meaning is to provide it with a reference? Is this claim supposed to apply, e.g., to functional terms? If it doesn’t, then coping with ‘empty’ terms constructed from partial functions, for example, will still arguably need a free logic (even if we have banned empty unstructured names). While if it is still supposed to apply, then I think I need to know more about the notion of meaning in play here.

Set that aside, however. Let’s think some more about what is meant by “outlawing” or “banning” empty names and empty domains in logic — given that we can, it seems (or so Oliver and Smiley insist), find ourselves inadvertently using them, and reasoning with them. To be continued.

6 thoughts on “Empty domains #2”

  1. Here is a slightly different way of thinking about things, which I hope will be helpful — and which I hope too will make clearer why I think that allowing empty names and allowing empty domains are very different things.

    It is helpful, I think, to consider the following three notions of interpretation (i.e. for standard first-order languages).

    1. Total interpretations. These are the ones in standard logic texts (not written by Hodges!). They consist of the following items: a non-empty set (i.e. domain) $D$; a member of $D$ for each individual constant (of the language being interpreted); a (total) $n$-ary function from $D$ into $D$ for each $n$-ary function symbol; and a subset of $D^n$ for each $n$-ary predicate symbol.

    Then there are:

    2. Total* interpretations, which are just like total interpretations, except that the empty domain is permitted.

    On the other hand, there are also:

    3. Partial interpretations. These are like total interpretations, except that individual constants can be empty (i.e. lack denotation); function symbols can denote partial functions; and predicate symbols can be assigned ‘partial’ subsets of $D^n$, i.e. pairs $\langle S^+, S^-\rangle$ of disjoint subsets of $D^n$, where the idea is that the predicate symbol is true of the members of $S^+$, false of the members of $S^-$, and neither true nor false of any remaining members of $D^n$.

    As I see it, the various liberalizations involved in the move from total to partial interpretations are all essentially of a piece. That is, we should think of allowing empty names as akin to allowing function symbols to denote partial functions, and also to allowing predicate symbols to denote partial subsets of $D^n$.

    Here is a way to drive that point home. The objects that one uses to interpret non-logical symbols in total interpretations can all be very naturally represented as total functions. Thus, thinking (as is common) of individual constants as 0-ary function symbols, total interpretations can be thought of as assigning $n$-ary total functions from $D$ into $D$ to $n$-ary function symbols (for any $n \geq 0$). Similarly, these interpretations can be thought of as assigning $n$-ary total functions from $D$ into $\{t,f\}$ (i.e. the truth values) to $n$-ary predicate symbols.

    All of the liberalizations involved in going from total to partial interpretations amount simply to allowing these functions to be partial. For example, unary function symbols can instead be assigned unary partial functions from $D$ into $D$; individual constants can instead be assigned 0-ary partial functions from $D$ into $D$; unary predicate symbols can instead be assigned unary partial functions from $D$ into $\{t,f\}$; etc.
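    To make that functional picture concrete, here is a minimal sketch (my gloss, not the commenter's) in Haskell, using Maybe to model partiality; the illustrative signature (one individual constant, one unary function symbol, one unary predicate symbol) is just an assumption for the example.

    ```haskell
    -- Illustrative sketch only: a tiny signature with one constant, one unary
    -- function symbol and one unary predicate symbol, interpreted over a domain d.

    -- Total interpretation: every symbol gets a *total* function into the domain
    -- or into the truth values (here Bool plays the role of {t, f}).
    data TotalInterp d = TotalInterp
      { tConst :: d            -- a 0-ary total function is just an element of d
      , tFun   :: d -> d       -- unary total function on d
      , tPred  :: d -> Bool    -- unary total function from d into the truth values
      }

    -- Partial interpretation: the same shape of data, except each of those
    -- functions is now allowed to be partial, modelled with Maybe.
    data PartialInterp d = PartialInterp
      { pConst :: Maybe d          -- a possibly empty name
      , pFun   :: d -> Maybe d     -- a partial function on d
      , pPred  :: d -> Maybe Bool  -- may be neither true nor false of some elements
      }

    -- Every total interpretation counts as a (degenerate) partial one.
    totalAsPartial :: TotalInterp d -> PartialInterp d
    totalAsPartial i = PartialInterp
      { pConst = Just (tConst i)
      , pFun   = Just . tFun i
      , pPred  = Just . tPred i
      }
    ```

    On this encoding the only change is the Maybe; the domain parameter itself is untouched, which is the point taken up next.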

    In contrast, the move from total to total* interpretations is nothing like this. One is not going from things that can be naturally represented as total functions to things that can instead be thought of as corresponding partial ones.

    Indeed, we can of course think of specifying a domain as providing a function, since we can represent domains by their characteristic functions (as long as we are happy with functions that correspond to proper classes of ordered pairs). Thus, we can represent $D$ by the function that sends members of $D$ to $t$, and everything else to $f$ (say). But then the move from total to total* interpretations certainly does not amount to allowing these functions to be partial. (That is the liberalization for domains that would be akin to those embodied by partial interpretations.) Rather, in moving from total to total* interpretations we simply remove an apparently quite arbitrary restriction on which (total) functions domains can be specified by: i.e. the restriction that insists that these functions send at least one thing to $t$.
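    Continuing the same illustrative Haskell sketch (again my gloss rather than the commenter's), the contrast can be put like this: the empty domain is still a perfectly good total characteristic function, whereas the genuinely analogous liberalization would be a partial characteristic function.

    ```haskell
    -- Represent a domain over some universe u by its characteristic function.
    type Domain u = u -> Bool

    -- Moving from total to total* interpretations only drops the demand that the
    -- characteristic function send at least one thing to True; the function
    -- itself remains total.
    emptyDomain :: Domain u
    emptyDomain = const False

    -- The liberalization analogous to partial interpretations would instead be a
    -- *partial* characteristic function, which is a different move altogether.
    type PartialDomain u = u -> Maybe Bool
    ```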

    As I say, I hope that is a helpful way of thinking about things—and I hope too that it makes clearer a way in which allowing empty names does not seem to be like allowing empty domains.

  2. The first sentence after the long quote runs into some syntactic difficulties. Here’s my attempt to reconstruct it:

    I think there are at least two issues here … (this part is ok) … and … the question of (“of” added) whether a defensible form of neutrality (“covers” deleted) requires neutrality about whether what we are talking about exists/doesn’t exist.

      1. It’s clearer syntactically, but I think it’s now saying something very different. Before, it was about whether what we’re talking about exists; now it’s about whether our language actually hooks up to the world.

        Perhaps that seems the same thing, but I don’t think it is. For example, in mathematics we’re generally happy to quantify over real numbers and to say that individual real numbers exist. But if our talk of real numbers hooks up to the world, that doesn’t have to be because real numbers really do exist in the world. After all, our calculations and proofs work just as well if we’re only pretending that reals exist, or even if we think they exist but happen to be wrong.

        Looked at from the other direction, we can treat things as if they exist even if they don’t really.

        So that’s one reason why I don’t think it’s true that if empty names are disallowed, “we are deprived of logic as a tool for assessing arguments involving hypothetical particular things.” Treating a name as having a referent doesn’t mean it has to have an actually existing referent in the real world.

        (Another reason is that empty names in natural language don’t have to be treated as empty names in logic. At least I don’t think anyone has shown that they have to be.)

        Also, consider the example of using an assumption that N is the largest prime, then deriving a contradiction. That’s saying “let’s pretend there’s such a number and see what happens”; it’s temporarily treating it as if it exists. It’s not treating ‘N’ as an empty name that doesn’t refer to anything.

        1. I am very sympathetic to all this. (I have slightly changed the post again in response to the first part, into something more arm-waving towards a cluster of issues, which is what I meant — and I’m with you that the issues need to be separated).

          Your important thought here is that a lot of reasoning goes on in something like a “let’s pretend” mode (or “suppose for the sake of argument”, or “let’s explore where we get if …”). The devil, of course, is in getting the details of this right. But I’m inclined, like you, to think that, handled right, the logician can use this thought to counter suggestions that we have to take empty names seriously if we are not to be deprived of tools for assessing arguments involving perhaps non-existent particular things. However, I am also inclined to suspect that we can use the same thought to counter suggestions that we have to take empty domains seriously.

          1. I agree, up to a point. I don’t think real-world examples such as arguments involving hypothetical kinds mean we absolutely must have empty domains in logic, or else we “shall be deprived of logic as a tool for assessing (such arguments).”

            However, I can’t work out what the domain analogue of treating a variable as if it had a value would be. Treating a domain as if it were empty? But that’s in the opposite direction: it’s pretending a domain lacks something (members / elements / instances), while in the variable case, we pretend a variable has something (a value).
