*The following was posted as a comment to Empty domains #1 by Vej Kse — but I think it is particularly interesting and worth highlighting as a stand-alone post.*

There is an elegant and natural (from a user’s perspective) way to allow empty domains, which has been known in categorical logic and modern type theories (based on de Bruijn’s Automath and Martin-Löf’s type theory) for over 40 years.

It consists in considering formulae only relative to a given context (list) of variables (which must contain all free variables of the formula, and can be empty if the formula is closed). In many-sorted logic, each variable is assigned a sort in the context. In a way, “standard” first-order logic (the one taught in most logic textbooks) amounts to working in a context with a countable number of variables.

In natural deduction for such a system, we want to prove a formula φ in a variable context Γ from a list Ψ of formulae (a hypothesis context), where all the formulae involved are well-formed in the context Γ. Just as proving an implication φ ⇒ ψ in a given hypothesis context amounts to proving the consequent ψ from the context expanded with the antecedent φ (the variable context held fixed), proving a universal quantification ∀x.φ in a variable context amounts to proving φ in the context expanded with x (the hypothesis context held fixed).
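In sequent-style notation, the parallel between the two rules can be sketched like this (my own rendering, writing Γ | Ψ ⊢ φ for “φ is provable from hypotheses Ψ in variable context Γ”):

```latex
\[
% Implication introduction: extend the hypothesis context, Γ held fixed
\frac{\Gamma \mid \Psi,\, \varphi \;\vdash\; \psi}
     {\Gamma \mid \Psi \;\vdash\; \varphi \Rightarrow \psi}
\qquad
% Universal introduction: extend the variable context, Ψ held fixed
\frac{\Gamma,\, x \mid \Psi \;\vdash\; \varphi}
     {\Gamma \mid \Psi \;\vdash\; \forall x.\,\varphi}
\]
```

In each case the introduction rule moves one item out of a context and into the conclusion; the only difference is which of the two contexts it comes from.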

The main reason for using a variable context, in my view, is not that it allows empty domains, but that it models more faithfully how mathematicians reason. We do not start our informal proofs by assuming we are given a countable sequence of variables ranging over our domains of interest; when we write proofs, variable introduction parallels hypothesis introduction: “Let x be a real number …” vs. “Let’s assume the axiom of choice.”. Moreover, mathematicians remember the variables they introduced (stored in the “variable context” part of their memory) just as well as they remember the hypotheses they introduced (stored in the “hypothesis context” part of their memory). “Standard” first-order logic fails to model that memory of the variables introduced.

The standard proof in “standard” first-order logic of ∃x.P(x) from ∀x.P(x) is odd for mathematicians precisely because a variable appears out of nowhere, without ever being introduced:

1. Let’s assume ∀x.P(x);

2. Eliminate the ∀ by applying it at y, which gives us P(y).

3. Introduce the ∃, which gives us ∃x.P(x).

In correcting a student’s work I would write in red next to the second step: “what is this y? you never introduced it”. The proof should introduce the variable y before using it, which also means that it cannot just be forgotten at the end of the proof, even though it doesn’t appear in the conclusion (it’s still in the memory of the reader). To remove it from the context, a ∀y must be added, giving ∀y.∃x.P(x).
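Spelled out with explicit contexts (a sketch; Γ | Ψ ⊢ φ means φ is provable from hypotheses Ψ in variable context Γ), the corrected proof reads:

```
y | ∀x.P(x) ⊢ ∀x.P(x)       assumption, in variable context y
y | ∀x.P(x) ⊢ P(y)          ∀-elimination at y
y | ∀x.P(x) ⊢ ∃x.P(x)       ∃-introduction with witness y
  | ∀x.P(x) ⊢ ∀y.∃x.P(x)    ∀-introduction removes y from the context
```

The variable y is present in the context from the moment it is used until the final ∀-introduction discharges it; at no point does a term appear whose variables are not in the context.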

Advantages of working with variable contexts besides allowing empty domains and being more faithful to ordinary reasoning:

– Leaner semantics: No need to assign values to infinitely many unused variables; we only have to assign values to the variables in the finite context.

– Separation of concerns: The quantifier rules don’t have side-conditions any more; these become well-formedness conditions on the contexts (all variable names must be distinct, so that when a context is extended the new name must be fresh) and on the formulae (they must be well-formed in the contexts in which they are used, in particular contain only variable names present in those contexts).

– The structure of the quantifier introduction and elimination rules is more transparent: it takes the form of an adjunction, like for the other connectives.

– It repairs the symmetry between assumptions and variables, implication and universal quantification, conjunction and existential quantification (broken by their very different treatments in “standard” first-order logic), paving the way to the propositions-as-types (Curry-Howard) correspondence and modern type theories.

Disadvantages:

– Prenex normal forms depend on the inhabitedness assumption (and the law of excluded middle), so we can still use them in Peano arithmetic, for instance, just not in general first-order logic.

– The weight of tradition.

Here are a few books in which first-order logic is presented using variable contexts:

– Peter T. Johnstone: “Sketches of an elephant: A topos theory compendium” (Chapter D1);

– Bart Jacobs: “Categorical logic and type theory” (Chapter 4);

– Paul Taylor: “Practical foundations of mathematics” (Section 1.5);

– Richard Bornat: “Proof and disproof in formal logic: An introduction for programmers”.

The use of ‘variable contexts’ seems quite appealing. However, I’m a bit puzzled by many things in the post, so that I’m not sure I understand it well enough to trust the appeal that the idea seems to have.

I wonder if someone could explain some of what the post says.

Why is ‘standard’ always in quotes when applied to FOL?

Could the proof method described in the post be used with, or be adapted to be used with, ordinary FOL (I daren’t say ‘standard’), or does it require a move to categorical logic or type theory?

Since it is only ‘in a way’ that ‘ “standard” first-order logic … amounts to working in a context with a countable number of variables’ (note also the ‘amounts to’), isn’t the simplest solution to that problem (if it is a problem) simply not to see FOL in that way?

It seems to me that the question of empty domains is primarily a question of semantics, and that ordinary FOL doesn’t require any particular proof system, instead allowing a great variety of them. So why is the post (it seems) treating ‘ “standard” first-order logic’ as a proof system?

Also, where in the FOL semantics are values assigned to infinitely many unused variables? It didn’t seem like that was happening when I looked at some model theory texts the other day. In any case, when working in FOL, I’ve never thought I was assigning values to unused variables. So, again, why is FOL being seen in this ‘way’?

If the standard (no quotes) proof in “standard” first-order logic of ∃x.P(x) from ∀x.P(x) is as described, why are students taken to task for using it?

In any case, I don’t read that proof as bringing in a variable out of nowhere; I read it as introducing that variable, as if it said ‘let y be …’. That such language is omitted when it’s written formally doesn’t seem decisive to me.

Isn’t it enough to use Gentzen’s natural deduction to fix the problem with empty domains? Maybe I’m missing something, but it seems that’s all that is needed. It is natural to also consider types and what not, but it seems to me even with a single “type” or “sort” for individuals the problem does not reappear.

> In any case, I don’t read that proof as bringing in a variable out of nowhere; I read it as introducing that variable, as if it said ‘let y be …’. That such language is omitted when it’s written formally doesn’t seem decisive to me.

What goes in the ellipsis? With empty domains, and reasoning semantically, we could spell that out as “Assume there exists an element, which we call y”. And if you allow empty domains, that assumption rules them out.

But if you conclude ∃x.P(x), you are omitting that assumption again.

In natural deduction, you’re making the same assumption, but you need to use ∀-introduction to discharge the assumption, so in the end you record it in the conclusion.

By standard (the quotes being there for no good reason) first-order logic, I just meant first-order logic as it is presented in the overwhelming majority of logic textbooks, i.e. without variable contexts. More specifically, it is usually presented with three components:

1. A language: terms and formulae built from a countable number of variables, function and relation symbols, logical connectives.

2. A semantics: in a given model (with a non-empty domain), the formulae are given a truth value based on a variable assignment, mapping an object of the domain to each of the countable number of variables (only a finite number of them appearing in a given formula).

3. Proof systems, which should be sound and complete for the semantics.

Here are the relevant differences when considering first-order logic with a variable context:

1. In the language: terms and formulae are parametrised by a variable context (a finite list of variables, possibly empty), and terms/formulae in a given variable context can only use the variables declared in that context. (The terms and formulae of standard first-order logic can be recovered by using a variable context indexed by ℕ; this is the sense in which, “in a way” (things become messy when specifying the semantics of quantifiers), standard first-order logic as described above amounts to working in a countable variable context.)

2. In the semantics: in a given model (which may have an empty domain), the formulae in a given context are given a truth value based on a variable assignment, mapping an object of the domain to each variable of the context (none, if the context is empty (so that the formulae are necessarily closed)). For universal quantifiers, we consider each possible extension of the assignment to the extra variable.

3. In a natural deduction proof system: the variable context is remembered all along, a variable may be removed from the context only by introducing a universal quantifier.
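The contextual semantics of point 2 can be sketched executably (my own illustration, not from the post): a finite assignment covers exactly the variables in context, and a quantifier ranges over all extensions of the assignment to the new variable. Over an empty domain, ∀x.P(x) then comes out vacuously true while ∃x.P(x) is false.

```python
# Minimal Tarskian semantics over a finite variable context (a sketch).
# A formula is a nested tuple; the assignment maps only the context variables.

def holds(formula, domain, assignment):
    """Evaluate `formula` in a model with the given (possibly empty) domain,
    under a finite assignment for exactly the variables in context."""
    op = formula[0]
    if op == "pred":                      # ("pred", P, x): apply predicate P
        _, pred, var = formula
        return pred(assignment[var])
    if op == "forall":                    # ("forall", x, body)
        _, var, body = formula
        # Consider every extension of the assignment to the new variable.
        return all(holds(body, domain, {**assignment, var: d}) for d in domain)
    if op == "exists":                    # ("exists", x, body)
        _, var, body = formula
        return any(holds(body, domain, {**assignment, var: d}) for d in domain)
    raise ValueError(op)

P = lambda d: True                        # a predicate true of everything
univ = ("forall", "x", ("pred", P, "x"))  # ∀x.P(x)
exis = ("exists", "x", ("pred", P, "x"))  # ∃x.P(x)

print(holds(univ, [], {}))                # True: vacuously satisfied
print(holds(exis, [], {}))                # False: no witness exists
```

Note that the empty formula context needs no variable assignment at all: there is exactly one assignment for the empty context, the empty one.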

There is no need to use category theory or type theory to do any of that.

As for the emphasis on the proof system, it may be because this is the hard part: having a simple and elegant proof system for a first-order logic allowing empty domains. Also, in day-to-day reasoning, it’s the proof system that is used.

> mapping an object of the domain to each of the countable number of variables (only a finite number of them appearing in a given formula).

1. My logic textbook doesn’t necessarily use a countable number of variables — in fact, *the semantics* uses a vocabulary (what type theory calls a context), which can be finite or infinite, and an environment for this vocabulary — almost like you’d expect from type theory (though not exactly).

2. More importantly, the problems in #1 only appear if interpretations give a value only to the used variables. On the other hand, this question makes less (or no) difference without empty domains.

In more detail:

1. I’ve looked this up in “A first course in logic” (reviewed and panned here, but what I have available). There, interpretations (that is, variable assignments) are for a vocabulary (Defs. 2.12 and 2.13), which can well be finite. A vocabulary *can*, however, contain more variables than are used; since that book sticks to non-empty domains, adding variables to a vocabulary preserves truth.

2. Empty domains cause problems with transitivity because ∀x.P(x) ⊨ ∃x.P(x) does not hold. That’s because you can consider potential models (“structures”) with an empty domain that falsify it. Those models must map the free variables to elements of the domain; luckily, these formulae have an empty set of free variables. Instead, for P(y) ⊨ ∃x.P(x), potential models must map y to some domain element, hence empty models are ruled out.
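The same point can be checked in a type theory with contexts, for instance Lean (a sketch of my own): a proof of an existential needs a witness, and from ∀ alone there is none when the domain may be empty; the step only type-checks once an inhabitant is added to the context.

```lean
-- A proof of ∃ x, P x requires a witness. With only h : ∀ x, P x this is
-- impossible when α may be empty; adding a : α to the context fixes it.
example (α : Type) (P : α → Prop) (a : α) (h : ∀ x, P x) : ∃ x, P x :=
  ⟨a, h a⟩
```

Dropping the hypothesis `a : α` makes the example unprovable, exactly mirroring the failure of ∀x.P(x) ⊨ ∃x.P(x) over empty domains.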

The slogan on the back of a service truck that I saw today:

Everybody is somebody to us.

The most accessible presentation of this topic is in Aarne Ranta’s book Type-Theoretical Grammar (Oxford 1994). There, following a course I had given on “explicit predicate calculus,” he shows how predicate logic can be augmented by type declarations in the form of extra premisses in the quantifier rules for natural deduction, each quantifier being bound to a type. For example, existence introduction, with A a type of objects and the type declaration written a:A, takes the form (as on p. 31):

a:A     B(a)
———————————— E-I
(Ex:A)B(x)

Such bound quantifiers are very natural, say in geometry, where I have used them with the type of points Pt and the type of lines Ln. The existence of a line connecting two distinct points can be written as:

(forall x:Pt)(forall y:Pt)(Distinct(x,y) => (exists z:Ln)(Inc(x,z)&Inc(y,z)))

I think that once you’re using types, it’s too far from ordinary FOL.