In Sec. 7 of his paper, Lavine argues that there is a distinct way of expressing generality, using “schemes” to declare that ‘any instance [has a certain property], where “any” is to be sharply distinguished from “every”’ (compare Russell’s 1908 view). In fact, Lavine goes further, talking about the kind of generality involved here as ‘more primitive than quantificational generality’.
We are supposed to be softened up for this idea by the thought that distinctively schematic generalization is in fact already quite familiar to us:
When, early on in an introductory logic course, before a formal language has been introduced, one says that NOT(P AND NOT P) is valid, and gives natural-language examples, the letter ‘P’ is being used as a full schematic letter. The students are not supposed to take it as having any particular domain — there has as yet been no discussion of what the appropriate domain might be — and it is, in the setting described, usually the case that it is not ‘NOT(P AND NOT P)’ that is being described as valid, but the natural-language examples that are instances of it.1
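As an aside, the validity claim for the formal schema itself is just the familiar truth-table fact. For concreteness, here is a minimal sketch (the helper name `is_tautology` is my own, not Lavine’s) which checks that NOT(P AND NOT P) comes out true under every assignment of truth values to ‘P’:

```python
def is_tautology(formula):
    """Check a one-letter propositional formula against every
    assignment of a classical truth value to its letter 'P'.
    `formula` is a function from a bool (the value of P) to a bool."""
    return all(formula(p) for p in (True, False))

# The law of non-contradiction, NOT(P AND NOT P):
non_contradiction = lambda p: not (p and not p)

print(is_tautology(non_contradiction))  # True
```

Of course, this treats ‘P’ as ranging over the two classical truth values — which is exactly the kind of fixed domain that, on Lavine’s telling, the beginning student has not yet been given.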
Here, talk about a full schematic variable is to indicate that ‘what counts as an acceptable substitution instance is open ended and automatically expands as the language in use expands.’
But Lavine’s motivating example doesn’t impress. Sure, in an early lecture, I may say that any proposition of the form NOT(P AND NOT P) is logically true in virtue of the meanings of ‘NOT’ and ‘AND’. But to get anywhere, I of course have to gloss this a bit (for a start, the very idea of a ‘substitution instance’ of that form needs quite a bit of explanation, since plugging in a declarative English sentence won’t even yield a well-formed sentence). And, glossing such principles as non-contradiction and excluded middle, I for one certainly remark e.g. that we are setting aside issues about vagueness (‘it is kinda raining and not raining, you know’), and issues about weird cases (liar sentences), and issues about sentences with empty names, and I may sometimes mention more possible exceptions. But yes, I — like Lavine — will leave things in a sense pretty ‘open-ended’ at this stage. Does that mean, though, that I’m engaged in something other than ‘quantificational generality’? Does it mean that I haven’t at least gestured at some roughly delimited appropriate domain? Isn’t it rather that — as quite often — my quantifications are cheerfully a bit rough and ready?
‘Ah, but you are forgetting the key point that “what counts as an acceptable substitution instance … expands as the language in use expands”.’ But again, more needs to be said about the significance of this before we get a difference between schematic and quantificational generalizations. After all, what counts as an instance of ‘All the rabbits at the bottom of the garden are white’ changes as the population of rabbits expands. Does that make that claim not quantificational?
A general methodological point, famously emphasized by Kripke in his discussion of a supposed semantic ambiguity in the use of definite descriptions: we shouldn’t multiply semantic interpretations beyond necessity, when we can explain variations in usage by using general principles of discourse in a broadly Gricean way. We shouldn’t, in the present case, bifurcate interpretations of expressions of generality into schematic and genuinely quantificational cases if the apparent differences in usage here can be explained by the fact that we speak in ways which are only as precise and circumscribed as is needed for the various purposes at hand. And it seems that the ‘open-ended’ usage in the quoted motivating example can be treated as just a case of loose talk sufficient for rough introductory purposes.
So has Lavine some stronger arguments for insisting on a serious schematic/quantificational distinction here?
1. Quite beside the present point, of course, but surely it isn’t a great idea — when you are trying to drill into beginners the idea that truth is the dimension of assessment for propositions and validity is the dimension of assessment for inferences — to turn round and mess up a clean distinction by calling logically necessary propositions ‘valid’. I know quite a few logic books do this, but why follow them in this bad practice?