First, apologies to Alan Weir and all his fans who are impatiently awaiting the next episode of my stalled discussion of his book. I *will* get back to it, but I have been distracted over the last couple of weeks by various events, both in good ways and in less enlivening ways.

On the debit side, it now seems definite that I’m going to get replaced on retirement by someone remote from anything to do with logic, even very broadly construed (that’s assuming I eventually get replaced at all). Difficult not to get a bit depressed by such developments, about which no doubt more anon. Let’s just say it’s a very great pity, given the flourishing of logicky enterprises here, that we aren’t able to back success.

On the plus side, one distraction has been starting a close-reading of Kaye’s classic *Models of Peano Arithmetic* for our math logic reading group (I know the book a bit, of course, but it is different when you have to give presentations to a seminar on what exactly is going on and why). Another distraction — more time consuming but still enjoyable — was the recent annual grad conference here in the philosophy of logic and mathematics. The format is that, apart from the keynote speakers, grads from various places give papers, and locals give responses. I was replying to a nice talk by Giacomo Turbanti on modality in Brandom’s semantics, which forced me to do a fair amount of reading up, because I wanted to give a scene-setting talk to help those who hadn’t come across Brandom’s project before. In retrospect, having had the chance to think a bit more, I realize what I said wasn’t spot on in certain respects, so I won’t post my talk here as I was planning. But here’s one general comment I stand by, and one techie query for anyone who knows about this stuff.

The comment is this. The familiar inferentialist approach to the logical operators takes as its setting a standard consequence relation, and then adds introduction rules for the operator O which tell us when we are canonically entitled to assert a sentence with O dominant. Then we are supposed to locate the harmonious elimination rule which enables us to get from a sentence with O dominant to wherever we could have got from its canonical grounds. Now, this kind of inferentialist approach to characterizing the logical operators by their introduction and harmonious elimination rules delivers intuitionistic logic but not full classical negation. Or so the usual story goes, as worked out in the hands of Prawitz, Dummett and Tennant. Brandom, though, claims that his brand of incoherence-based inferentialism does deliver classical logic. How does he pull a classically shaped rabbit out of an inferentialist hat?

*Part* of the story is that he in effect gives rejection rules rather than assertion rules for the connectives. What Brandom’s rule for negation (for example) does is, in effect, tell us when to reject a proposition with negation as its main operator, because adding it to some other stuff would lead to incoherence. *But why privilege rules of rejection over rules of assertion?* Why is this any better than privileging rules of assertion over rules of rejection? I would have thought that if — like Brandom — we see sapient enquirers as responding to challenges and developing arguments in response, we should at most be giving *equal* weight to the rejections and inconsistencies that prompt a reasoned response and to the assertions and deductions they elicit. But developing that line of thought would take us in the direction of Timothy Smiley and Ian Rumfitt’s bilateralism, which puts assertion and denial on a par. Then we *do* arguably get an inferentialist framework which is genuinely friendly to classical logic. My comment, then, is that I don’t see why Brandom doesn’t go down this route, given his starting point.

The techie question is this. Suppose we start out with a standard single-conclusion consequence relation S |- A between finite sets of propositions S and propositions A, where “standard” means we have reflexivity, i.e. {A} |- A, dilution on the left, and cut.

Now we can define the extensional property *Inc* of being a (Post-)inconsistent finite set of sentences by saying that finite S is in *Inc* just if S |- A for all A (of the relevant language). It is immediate that *Inc* satisfies the filter condition that Brandom calls *persistence*: if S is in *Inc* and S* is a superset of S, then S* is in *Inc*.
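For the concretely minded, here is a minimal brute-force sketch of this first construction. The toy language of literals over two atoms and the use of classical consequence are my own choices for illustration; nothing here is specific to Brandom’s apparatus.

```python
from itertools import chain, combinations

# Toy language: two atoms and their negations (an assumption made
# purely for illustration).
ATOMS = ("p", "q")
LANG = ATOMS + tuple("~" + a for a in ATOMS)

def satisfies(val, formula):
    """val maps atoms to truth values; formulas are just literals."""
    return not val[formula[1:]] if formula.startswith("~") else val[formula]

def valuations():
    for bits in range(2 ** len(ATOMS)):
        yield {a: bool(bits >> i & 1) for i, a in enumerate(ATOMS)}

def entails(S, A):
    """S |- A classically: every valuation satisfying S satisfies A."""
    return all(satisfies(v, A) for v in valuations()
               if all(satisfies(v, B) for B in S))

def subsets(universe):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(universe, n) for n in range(len(universe) + 1))]

# Inc = the Post-inconsistent sets: those which entail every sentence.
def in_inc(S):
    return all(entails(S, A) for A in LANG)

# Persistence: every superset of an Inc-member is an Inc-member.
persistent = all(in_inc(T)
                 for S in subsets(LANG) if in_inc(S)
                 for T in subsets(LANG) if S <= T)
print(persistent)  # True
```

So, for instance, {p, ~p} lands in *Inc* (it vacuously entails everything), and so does every superset of it, as persistence requires.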

We can also go the other way about. We can start (as indeed Brandom wants to) with the idea of an upwardly closed set of sets of sentences *Inc*, and define a relation S |= A to hold just in case, for every set of sentences D, if D + A is in *Inc*, then so is D + S. It is easy to check that, so defined, |= is a standard consequence relation.
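The check can even be mechanized on a small scale. In the sketch below I fix a tiny universe of sentences and stipulate an upwardly closed *Inc* by hand (a toy choice of mine, not Brandom’s own example), then brute-force verify that the defined |= satisfies reflexivity, dilution on the left, and cut.

```python
from itertools import chain, combinations

LANG = ("p", "q", "r")

def subsets(universe):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(universe, n) for n in range(len(universe) + 1))]

ALL = subsets(LANG)

# Stipulation: a set is incoherent just if it contains both p and q.
# This family is upwardly closed, as required.
INC = {S for S in ALL if {"p", "q"} <= S}

def models(S, A):
    """S |= A  iff  for every D, if D + A is in Inc then so is D + S."""
    return all(S | D in INC for D in ALL if D | {A} in INC)

# Reflexivity: {A} |= A.
refl = all(models(frozenset({A}), A) for A in LANG)
# Dilution on the left: S |= A and S a subset of S* imply S* |= A.
dil = all(models(T, A)
          for S in ALL for A in LANG if models(S, A)
          for T in ALL if S <= T)
# Cut: S |= A and S + A |= B imply S |= B.
cut = all(models(S, B)
          for S in ALL for A in LANG for B in LANG
          if models(S, A) and models(S | {A}, B))
print(refl, dil, cut)  # True True True
```

Reflexivity and cut hold for any *Inc* whatever; the upward closure of *Inc* is doing the work only in the dilution check.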

In the general case, however, if we start from a consequence relation |-, define *Inc* as suggested (using the idea of Post inconsistency), and then define |= from *Inc*, we don’t get back to where we started. We will have that S |- A entails S |= A, but not always vice versa. So here’s the techie query (and someone out there must know this!): *under what general conditions does the round trip take us in a closed circle*, so that S |- A iff S |= A? The relevant language having a classical negation will suffice, but what is the weakest condition? (Maybe Brandom himself tells us, and I should have been more patient hacking through his stuff: but the mode of presentation in the appendix to the fifth Locke Lecture is pretty much a paradigm of how not to present logical results in a helpful way.)
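One can at least exhibit the gap concretely. The consequence relation below (the smallest standard relation with a |- c and b |- c, on a three-sentence language without negation) is a toy example of my own, not drawn from Brandom; the round trip overshoots it, because c is never *needed* to make a set Post-inconsistent.

```python
from itertools import chain, combinations

LANG = ("a", "b", "c")

def entails(S, A):
    """Smallest standard consequence relation with a |- c and b |- c."""
    return A in S or (A == "c" and ("a" in S or "b" in S))

SUBS = [frozenset(s) for s in chain.from_iterable(
    combinations(LANG, n) for n in range(len(LANG) + 1))]

# Inc = the Post-inconsistent sets: those entailing every sentence.
INC = {S for S in SUBS if all(entails(S, A) for A in LANG)}

def models(S, A):
    """S |= A  iff  for every D, if D + A is in Inc then so is D + S."""
    return all(S | D in INC for D in SUBS if D | {A} in INC)

# S |- A always entails S |= A ...
print(all(models(S, A) for S in SUBS for A in LANG if entails(S, A)))
# ... but not conversely: the empty set |= c although it does not |- c.
print(models(frozenset(), "c"), entails(frozenset(), "c"))
```

Here *Inc* is just {{a, b}, {a, b, c}}, and any D which becomes incoherent on adding c already contains both a and b, hence is incoherent already; so the definition makes c a theorem of |= while it is no theorem of |-.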