We work in a natural deduction setting, and choose a Gentzen-style layout rather than a Fitch-style presentation (this choice is quite irrelevant to the point at issue).
The standard Gentzen-style disjunction elimination rule encodes the uncontroversially valid mode of reasoning, argument-by-cases:
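In tree form (set out here in bussproofs style, with the case assumptions discharged at the final step):

```latex
\begin{prooftree}
  \AxiomC{$A \lor B$}
  \AxiomC{$[A]^{1}$}
  \noLine \UnaryInfC{$\vdots$}
  \noLine \UnaryInfC{$C$}
  \AxiomC{$[B]^{1}$}
  \noLine \UnaryInfC{$\vdots$}
  \noLine \UnaryInfC{$C$}
  \RightLabel{($\lor$E, 1)}
  \TrinaryInfC{$C$}
\end{prooftree}
```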
Now, with this rule in play, how do we show A ∨ B, ¬A ⊢ B? Here’s the familiar line of proof:
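Assuming ¬A alongside A ∨ B: in the case A we hit absurdity and then, via EFQ, conclude B; in the case B we are home at once. In tree form (bussproofs style again):

```latex
\begin{prooftree}
  \AxiomC{$A \lor B$}
  \AxiomC{$[A]^{1}$}
  \AxiomC{$\neg A$}
  \BinaryInfC{$\bot$}
  \RightLabel{(EFQ)}
  \UnaryInfC{$B$}
  \AxiomC{$[B]^{1}$}
  \RightLabel{($\lor$E, 1)}
  \TrinaryInfC{$B$}
\end{prooftree}
```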
But familiarity shouldn’t blind us to the fact that there is something really rather odd about this proof, invoking as it does the explosion rule, ex falso quodlibet. After all, as it has been well put:
Suppose one is told that A ∨ B holds, along with certain other assumptions X, and one is required to prove that C follows from the combined assumptions X, A ∨ B. If one assumes A and discovers that it is inconsistent with X, one simply stops one’s investigation of that case, and turns to the case B. If C follows in the latter case, one concludes C as required. One does not go back to the conclusion of absurdity in the first case, and artificially dress it up with an application of the absurdity rule so as to make it also “yield” the conclusion C.
Surely that is an accurate account of how we ordinarily make deductions! In common-or-garden reasoning, drawing a conclusion from a disjunction by ruling out one disjunct surely doesn’t depend on jiggery-pokery with explosion. (Of course, things will look even odder if we don’t have explosion as a primitive rule but treat instances as having to be derived from other rules.)
Hence there seems to be much to be said — if we want our natural deduction system to encode very natural basic modes of reasoning! — for revising the disjunction elimination rule to allow us to, so to speak, simply eliminate a disjunct that leads to absurdity. So we want to say, in summary, that if both limbs of a disjunction lead to absurdity, then ouch, we are committed to absurdity; if one limb leads to absurdity and the other to C, we can immediately, without further trickery, infer C; if both limbs lead to C, then again we can derive C. So officially the rule becomes a generalized ∨-elimination in which each subproof from a disjunct may end either in C or in ⊥, where if both the subproofs end in ⊥ so does the whole proof, but if at least one subproof ends in C, then the whole proof ends in C. On a moment’s reflection, isn’t this, pace the tradition, the natural rule to pair with the disjunction introduction rules? (Or at least natural modulo worries about how to construe “⊥” — I think there is much to be said for taking this as an absurdity marker, on a par with the marker we use to close off branches containing contradictions on truth trees, rather than a wff that can be embedded in other wffs.)
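Set out schematically, the revised rule looks like this (writing C/⊥ for a subproof that may end either in C or in ⊥, with the proviso about the conclusion as just stated):

```latex
\begin{prooftree}
  \AxiomC{$A \lor B$}
  \AxiomC{$[A]^{1}$}
  \noLine \UnaryInfC{$\vdots$}
  \noLine \UnaryInfC{$C/\bot$}
  \AxiomC{$[B]^{1}$}
  \noLine \UnaryInfC{$\vdots$}
  \noLine \UnaryInfC{$C/\bot$}
  \RightLabel{(revised $\lor$E, 1)}
  \TrinaryInfC{$C/\bot$}
\end{prooftree}
```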
It might be objected that this rule offends against some principle of purity, to the effect that the basic, really basic, rules for one connective, like disjunction, should not involve (even indirectly) another connective, negation. But it is unclear what force this principle has, and in particular what force it should have in the selection of rules in an introductory treatment of natural deduction.
The revised disjunction rule isn’t a new proposal! — it has been on the market a long time. In the next post, credit will be given where credit is due. But just for the moment, I am interested in the proposal as a stand-alone recommendation, not as part of any larger revisionary project. The proposed variant rule is obviously as well-motivated as the standard rule, and it makes elementary proofs shorter and more natural. So what’s not to like?
I’m working on a C++ library for logic programming, partly based on type theory via the Curry-Howard correspondence. Conjunction seems simple enough, but modeling disjunction is technically hard. This seems odd to me because the truth tables of the two relations have the same information content, just complemented.
I spent a long time thinking about it and came to an odd conclusion: what if conjunction and disjunction are not (or should not be) primitive? Or what if conjunction and disjunction are, in some sense, “not different”?
In logic programming, a typical use of conjunction is to build a whole out of parts; we may as well use the Curry-Howard correspondence to model this situation directly and just say that a Product Type — that is, a class type that has data members X and Y — _is_ definitionally equal to a conjunction of X and Y.
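For instance (just a sketch of the idea, not actual library code), the correspondence can be made quite literal:

```cpp
// Under Curry-Howard, a value of And<X, Y> is a proof of "X and Y":
// it simply owns evidence for both conjuncts at once.
template <typename X, typename Y>
struct And {
    X left;   // evidence for X
    Y right;  // evidence for Y
};
```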
On the other hand a typical use-case of disjunction in logic programming is to iterate over some sequence, often the set of possible solutions to a sub-query. Although we need a Sum Type to store the current/individual value (i.e. the _range_), a Sum Type by itself doesn’t sufficiently capture the notion of the body of the sequence (i.e. the _domain_). We may as well use collection objects – sets, lists, arrays, etc., to represent the domain of a disjunction. A function with a switch statement and cases suffices as well.
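Roughly, and again only as a sketch:

```cpp
#include <variant>
#include <vector>

// A single case of "X or Y": a sum type holds evidence for exactly one
// disjunct at a time -- the current value, i.e. the "range".
template <typename X, typename Y>
using Or = std::variant<X, Y>;

// The whole space of cases being iterated over -- the "domain" -- is just as
// naturally represented by an ordinary collection of alternatives.
template <typename Case>
using Domain = std::vector<Case>;
```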
Now we have a problem. There is no difference, vis-à-vis ownership of data members, between a product type containing X and Y as data members and a type containing a list or array whose elements are X and Y. That is, the types which “obviously” represent conjunction and the types which “could” represent disjunctive domains are operationally indistinguishable (in C++). Not even a function with switch and case statements is unambiguously disjunctive; we may just as easily wish to code a conjunction with this technique (running the switch statement multiple times could build up a joint context by grounding one conjunct at a time without mutually excluding the other cases).
The point that this leads up to is that maybe the actual primitive is the list (or set, or domain of a type – take your pick), and the list has no internal disposition as to whether its elements are conjunctive or disjunctive. That can only be layered on top of it by its relationship with an outer context. If it is overlaid with a requirement that all the elements must exist at once, it’s conjunction; if it is overlaid with a requirement that at most one exists, then it’s disjunction.
A nice feature of this layering is that other operators such as min, max, butNot, etc. also have natural definitions similar to AND and OR.
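A toy illustration of the layering (treating the elements simply as boolean cases for the sake of the example):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// The list of cases has no disposition of its own; the outer context
// decides how it is read.
using Cases = std::vector<std::function<bool()>>;

// Requiring all of the cases to hold reads the list conjunctively...
bool allHold(const Cases& cases) {
    return std::all_of(cases.begin(), cases.end(),
                       [](const auto& c) { return c(); });
}

// ...while requiring only some case to hold reads the very same list
// disjunctively.
bool someHolds(const Cases& cases) {
    return std::any_of(cases.begin(), cases.end(),
                       [](const auto& c) { return c(); });
}
```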
I like your proposal for an alternate disjunction elimination rule because it seems implicitly to invoke this principle (omega consistency?) in defining how the rule’s outcome depends on notions of “all” and “none” over a population of operands.
For a quite different way of avoiding explosion (whether right, left, or symmetric) by controls on suitably formulated semantic decomposition trees, see http://ojs.victoria.ac.nz/ajl/issue/view/486
I like this proposal very much.
We need “argument” in order to be convinced to accept the explosion rule.
Few can accept it as a rule of inference without such an argument.
Thus it is rather more natural and desirable to have it as a derived rule, rather than a primitive rule.
My students learn in their first year that MTP (A ∨ B and ¬A gives B) and ex falso (falsity gives anything) are equivalent rules (as in the book Elements of Logical Reasoning). This should always be kept in mind when discussing ex falso; there is no place for any fuss here. It is also good to keep in mind how the tree-style rules are to be read. For example: you have some assumption A that gives you a nice conclusion C (e.g. Goldbach’s conjecture), then you find another assumption B that does the same. Now, instead of keeping these two things in mind, you summarize: A ∨ B gives me C. So, if there is more than one premiss in a rule, the premisses and their derivations are independent; they can come from anywhere. In ∨E, it is not that we first have A ∨ B and then the others; it can easily be the way explained above.
It seems to me that the new ∨-elimination rule is likely to have more power than the original.
For one, explosion (⊥-elimination) would be redundant as a primitive rule in your system: from P∧¬P we have P and ¬P. From P we have P∨Q. Then ¬P, P∨Q ⊢ Q is a corollary of the new ∨-elimination, hence Q.
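In tree form (just a sketch of that derivation, using the proposed rule):

```latex
\begin{prooftree}
  \AxiomC{$P \land \neg P$}
  \UnaryInfC{$P$}
  \UnaryInfC{$P \lor Q$}
  \AxiomC{$[P]^{1}$}
  \AxiomC{$P \land \neg P$}
  \UnaryInfC{$\neg P$}
  \BinaryInfC{$\bot$}
  \AxiomC{$[Q]^{1}$}
  \RightLabel{(revised $\lor$E, 1)}
  \TrinaryInfC{$Q$}
\end{prooftree}
```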
To match the enhanced expressiveness of this new rule, probably the ∧-introduction rule needs to be adapted accordingly, into something from which we may derive ⊤-introduction?
But explosion is already redundant as a primitive rule if we have e.g. the usual other negation and conjunction rules (in the presence of unrestricted transitivity). So adding this disjunction rule doesn’t increase deductive power. What it does do, the story goes, is allow us to derive results which intuitively don’t involve irrelevant reasoning without appealing to explosion.