On apparently not avoiding explosion after all

I’ve been drafting some notes on “Other logics” for Beginning Mathematical Logic, and am currently writing something about relevant logics. A seemingly obvious point occurred to me about a familiar semantic story for First Degree Entailment — one that has surely been made before, but (because I haven’t read enough, or because my eyes glazed over at some crucial point, or because my memory is playing up) I can’t recall seeing discussed. So I’m wondering what the fan of that sort of semantic story says in response. Here’s an excerpt from what I’ve drafted (which also includes stuff about disjunctive syllogism as that is relevant in the context from which this comes), raising the point in question:

Logicians are an ingenious bunch. And it isn’t difficult to cook up a formal system for a propositional language equipped with connectives written “\neg” and “\lor” for which analogues of disjunctive syllogism and explosion don’t generally hold.

For example, suppose we build a model which assigns every wff one of four values. Label the values T, B, N, F. And suppose that, given an assignment of such values to atomic wffs, we compute the values of complex wffs using the following tables:

A | \neg A
T |   F
B |   B
N |   N
F |   T

A \lor B | T  B  N  F
   T     | T  T  T  T
   B     | T  B  T  B
   N     | T  T  N  N
   F     | T  B  N  F

These tables are to be read in the obvious way. So, for example, if \mathsf{P} takes the value B, and \mathsf{Q} takes the value N, then \mathsf{\neg P} takes the value B and \mathsf{P \lor Q} takes the value T.

Suppose in addition that we define a quasi-entailment relation as follows: some premisses \Gamma entail^* a given conclusion C — in symbols \Gamma \vDash^* C — just if, on any valuation which makes each premiss either T or B, the conclusion is also either T or B.

Then, lo and behold, the analogue of disjunctive syllogism is not always a correct entailment^*: on the same suggested valuations, both \mathsf{P \lor Q} and \neg\mathsf{P} are either T or B, while \mathsf{Q} is N, so \mathsf{P \lor Q, \neg P \nvDash^* Q}. And we don’t always get explosion either, since both \mathsf{P} and \neg\mathsf{P} are B while \mathsf{Q} is N, so \mathsf{P, \neg P \nvDash^* Q}.
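To make this concrete, here is a minimal Python sketch (my own illustration, not part of the original discussion — the names `NEG`, `OR`, `entails` and so on are invented for the example): it encodes the four-valued tables, defines entailment^* by brute-force search over all valuations, and confirms that disjunctive syllogism and explosion both fail.

```python
from itertools import product

# The four values: T (true), B (both), N (neither), F (false).
# The designated values for entailment* are T and B.
DESIGNATED = {"T", "B"}

# Negation table: T -> F, B -> B, N -> N, F -> T.
NEG = {"T": "F", "B": "B", "N": "N", "F": "T"}

# Disjunction table, read OR[a][b] = value of (a or b).
OR = {
    "T": {"T": "T", "B": "T", "N": "T", "F": "T"},
    "B": {"T": "T", "B": "B", "N": "T", "F": "B"},
    "N": {"T": "T", "B": "T", "N": "N", "F": "N"},
    "F": {"T": "T", "B": "B", "N": "N", "F": "F"},
}

def entails(premisses, conclusion, atoms):
    """Gamma |=* C: every valuation making all premisses T-or-B
    also makes the conclusion T-or-B. Premisses and conclusion are
    functions from a valuation (dict: atom -> value) to a value."""
    for vals in product("TBNF", repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) in DESIGNATED for p in premisses) \
                and conclusion(v) not in DESIGNATED:
            return False  # found a countermodel
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
notP = lambda v: NEG[v["P"]]
PorQ = lambda v: OR[v["P"]][v["Q"]]

# Disjunctive syllogism fails: P or Q, not-P |/=* Q
print(entails([PorQ, notP], Q, ["P", "Q"]))   # False
# Explosion fails: P, not-P |/=* Q
print(entails([P, notP], Q, ["P", "Q"]))      # False
```

The countermodel the search finds is exactly the one in the text: P takes the value B and Q takes the value N.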

Which is all fine and good: but what is the logical significance of this construction? Can we give some semantic interpretation to the assignments of values, so that our tables really do have something to do with negation and disjunction, and so that entailment^* does become a genuine consequence relation?

Well, suppose — just suppose! — that propositions can not only be plain true or plain false but can also be both true and false at the same time, or neither true nor false. In a phrase, suppose there can be truth-value gluts and truth-value gaps.

Then there will indeed be four truth-related values a proposition can take — T (true), B (both true and false), N (neither), F (false). And, interpreting the values like that, the tables we have given arguably respect the meaning of `not’ and `or’. For example, if A is both true and false, the same should go for \neg A. While if A is both true and false, and B is neither, then A \lor B is true because its first disjunct is, but it isn’t also false as that would require both disjuncts to be false (or so we might argue). Moreover, the intuitive idea of entailment as truth-preservation is still reflected in the definition of entailment^*, which says that if premisses are all true (though maybe false as well), the conclusion is true (though maybe false as well).

But what on earth can we make of that supposition that some propositions are both true and false at the same time? This will seem simply absurd to most of us.

However, a vocal minority of philosophers do famously argue that while, to be sure, regular sentences can’t be both true and false, the likes of the paradoxical liar sentence “This sentence is false” can be. It is fair to say that few are persuaded by this line. However, I don’t want to get entangled in that debate here. For it isn’t clear that this extravagant idea actually helps very much. Suppose we do countenance the possibility that some special sentences have the deviant status of being both true and false (or being neither). Then we might reasonably propose to add to our formal logical apparatus an operator ‘!’ to signal that a sentence is not deviant in that way, governed by the following table:

A | !A
T |  T
B |  F
N |  F
F |  T
Why not? After all, we have use for such a sign, given that we are confident of many sentences in use that they are not deviant cases. But then note that \mathsf{!P, P, \neg P \vDash^* Q}. And similarly, if say \mathsf{P} and \mathsf{Q} are the atoms present in A, then \mathsf{!P, !Q}, A, \neg A \vDash^* C always holds. Yet this modified form of explosion — when built out of regular claims, a contradictory pair entails anything — is surely just as unwelcome as the original unrestricted form of explosion. (Parallel remarks apply to disjunctive syllogism. We still have, e.g., \mathsf{!P, \neg P,  P \lor Q \vDash^* Q}.)
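This too can be checked mechanically. Here is a self-contained Python sketch (again my own illustration, with invented names), which adds a table on which !A takes the value T when A is classically valued (T or F) and F when A is a glut or a gap, and confirms that the modified forms of explosion and disjunctive syllogism are correct entailments^*.

```python
from itertools import product

# Four-valued setup as before, now with the '!' operator.
DESIGNATED = {"T", "B"}
NEG = {"T": "F", "B": "B", "N": "N", "F": "T"}
OR = {
    "T": {"T": "T", "B": "T", "N": "T", "F": "T"},
    "B": {"T": "T", "B": "B", "N": "T", "F": "B"},
    "N": {"T": "T", "B": "T", "N": "N", "F": "N"},
    "F": {"T": "T", "B": "B", "N": "N", "F": "F"},
}
# '!A' says A is not deviant: T when A is classically valued
# (T or F), F when A is a glut (B) or a gap (N).
BANG = {"T": "T", "B": "F", "N": "F", "F": "T"}

def entails(premisses, conclusion, atoms):
    """Gamma |=* C: every valuation making all premisses T-or-B
    also makes the conclusion T-or-B."""
    for vals in product("TBNF", repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) in DESIGNATED for p in premisses) \
                and conclusion(v) not in DESIGNATED:
            return False
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
notP = lambda v: NEG[v["P"]]
bangP = lambda v: BANG[v["P"]]
PorQ = lambda v: OR[v["P"]][v["Q"]]

# Modified explosion holds: !P, P, not-P |=* Q
print(entails([bangP, P, notP], Q, ["P", "Q"]))    # True
# Modified disjunctive syllogism holds: !P, not-P, P or Q |=* Q
print(entails([bangP, notP, PorQ], Q, ["P", "Q"]))  # True
```

The first entailment holds vacuously: whenever !P is designated, P is T or F, so P and \neg P can never both be designated.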

So we haven’t really got anywhere, in particular if our concern is to give a satisfyingly non-explosive account of entailment.

Or so it seems! Comments?

5 thoughts on “On apparently not avoiding explosion after all”

  1. A nice point. In effect it is a way of reflecting, in the object-language, the view that the proposed antidote to explosion is well motivated only for contexts in which there is some independent reason for moving from two to four values. If the only such contexts are those in which there is semantic self-reference or infinite semantic descent, that leaves untouched all of ordinary mathematical reasoning and most deductive inference outside mathematics. Of course, more radical critics will suggest that there could be independent reason for needing four values even in contexts lacking any kind of self-reference, but that would need much more substantiation than it has received anywhere in the literature.

    A fact that is too exotic for the purposes of your text, but which may interest some readers of the blog, is that the move to four values for first-degree arrow formulae can be seen as a neat way of semantically encoding a suitable syntactic tweaking of their decomposition in truth-trees. The syntactic restriction is defined and studied in a paper accessible at https://drive.google.com/file/d/10KdwN29yKLz8XHWs-ALCPMQs1uk_tVMT/view, with the semantic encoding in Appendix 5. A formal caveat: The encoding works in the first-degree case only – a fact that helps explain why nobody has been able to make four-valued truth-tables work for higher degree arrow formulae without dragging in a lot of heavy auxiliary apparatus (e.g. possible worlds with three-place relations between them, satisfying poorly motivated constraints). And a personal philosophical opinion: while the syntactic tweaking of truth-tree decomposition is also well defined for formulae with unlimited iterations of arrow, it should be seen as a filter on good old classical consequence, that might possibly be convenient in some circumstances, rather than an attempt to usurp it.

  2. (I am commenting anonymously, since my comment is based on a paper that will be submitted for review soon.)

    In my paper I discuss a logical framework in which formulas may not have a defined truth-value (which is different from having a Neuter truth-value). Next to the notion of classical validity (there is no structure in which the premises are true and the conclusion false), there are also the notions of truth-preservation, non-falsehood-preservation, and strong validity (preservation of both truth and non-falsehood). In the natural deduction system I use \mathcal{D} as a “definedness” operator. In a footnote I discuss ex falso quodlibet. Here is the footnote:
    “As a consequence of their rejection of ex falso quodlibet, relevant logicians have to pay a price according to \citet[pp.~99-100]{Burgess2009-BURPL-3}: they have to give up disjunction introduction or disjunctive syllogism or the transitivity of semantic entailment. As was explained in section~??, once formulas do not need to have defined truth-values, classical validity is no longer transitive. In contrast, strong validity is transitive, but disjunction introduction is not strongly valid (i.c.\ it does not preserve truth), although one can use the definedness operator of section~?? to formulate a version of disjunction introduction (\phi, \mathcal{D}\psi \therefore \left( \phi \vee \psi\right)) that is strongly valid. Disjunctive syllogism is also not strongly valid (i.c.\ it does not preserve non-falsehood), but there is a version of it (\left( \phi \vee \psi \right), \neg\phi, \mathcal{D}\phi \therefore \psi) that is strongly valid. So, one can reconstruct the trilemma of \citet{Burgess2009-BURPL-3} in our framework.”

  3. This demonstrates that FDE is not expressively complete relative to the four-valued semantics. And LP is not expressively complete relative to its three-valued semantics. Likewise, it’s easy to add Boolean negation to the Routley-Meyer semantics for relevant logic, but doing so recovers explosion and disjunctive syllogism, and so defeats the point.

    Priest addresses a closely related issue here: https://www.jstor.org/stable/30226426
    I think his view there is that the objection can be resolved by using a non-classical logic for the metalanguage as well as the object language, in which case we do have expressive completeness relative to that semantics.

  4. My draft paper Making Sense of Relevant Semantics purports to address such issues. Briefly, the idea is to see “worlds” for relevant logic as abundant properties (or some comparable entities). But then in a straightforward way there are such worlds with both P and ~P. So relevant logic captures a sort of “property-based” entailment.
