AbsGen

Absolute Generality 25: Indefinitely extensible concepts, "big" and "small"

The Shapiro/Wright paper is a high point in the Absolute Generality collection. For a start,

  1. They focus on Dummettian considerations. I’ve already urged here that considerations against the possibility of absolutely general quantification based on Skolemite worries, or on worries about “metaphysical realism”, or indeed on worries about “interpretations”, don’t seem compelling. It seems to me that the key interesting issues hereabouts do indeed arise from considerations about indefinite extensibility (pressed by Dummett, but having their roots, as Shapiro and Wright remind us, in remarks of Cantor’s and Russell’s).
  2. It is also the case that, unlike some of the others in the collection, this paper is written with fairly relentless clarity and explicitness (and though it isn’t free of technicalia, the details are kept on a tight rein).

Shapiro and Wright take up a hint in Russell, and (in Sec. 2) consider the following — at least as a first characterization of the scope of the indefinitely extensible:

If the concept P is indefinitely extensible, then there is a one-to-one function from all the ordinals into the Ps.

The argument is this. Suppose that there is a one-to-one function f from the ordinals smaller than some α into the Ps. Then the collection of Ps of the form f(β), where β < α, will be (on one generous but reasonable understanding) a "definite" totality of Ps. But recall that by Dummett's informal characterization, an

indefinitely extensible concept is one such that, if we can form a definite conception of a totality all of whose members fall under the concept, we can, by reference to that totality, characterize a larger totality all of whose members fall under it.

So, since by hypothesis P is indefinitely extensible, there must be, after all, a P which isn’t one of the f(β), where β < α. Choose one, and extend the function f by setting f(α) to have that value. This shows that for any ordinal α, if all the ordinals less than α can be injected into the Ps, then the ordinals less than or equal to α can be injected into the Ps. So, by a transfinite induction along the ordinals, all the ordinals can be injected into the Ps.
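The extension step can be packaged as a definition by transfinite recursion — a sketch of my own, not Shapiro and Wright's notation; note that the 'pick one' step needs a choice principle:

```latex
% Sketch: define an injection f from the ordinals into the Ps by recursion.
% At stage \alpha the values so far, \{f(\beta) : \beta < \alpha\}, form a
% 'definite' totality of Ps, so indefinite extensibility supplies a P outside it.
f(\alpha) \;=\; \text{some } x \text{ with } Px \ \text{ and } \ x \notin \{f(\beta) : \beta < \alpha\}
```

The transfinite induction then shows f is defined, and injective, on all the ordinals.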

Very neat. And though the argument does rest on quite powerful set-theoretic assumptions, it indeed seems rather telling. And by a similar argument,

If there is a one-to-one function from all the ordinals into the Ps, the concept P is indefinitely extensible.

So we get, plausibly, a biconditional connection between the concept P’s being indefinitely extensible and there being an injection of ordinals into the Ps — which makes the case of the ordinals the paradigm case of an indefinitely extensible totality.

Now, as Shapiro and Wright emphasize, this connection doesn’t yet give us an elucidatory account of the notion of indefinite extensibility (for why is the concept ordinal itself indefinitely extensible?): but — if we are right so far — at least we’ve got a sharp constraint on an acceptable account. But are we right?

The trouble is that the argued connection makes all genuinely indefinitely extensible totalities big, while some Dummettian examples of indefinitely extensible totalities are small. Take, for example, Dummett’s discussion in his paper on the philosophical significance of Gödel’s theorem. He says that arithmetical truth (of first-order arithmetic) is shown by the theorem to be an indefinitely extensible concept. But why? After all, there’s a perfectly good and determinate Tarskian definition of the set of arithmetical truths.

But suppose we think of a ‘definite’ totality — more narrowly than before — as one that can be given as recursively enumerable (which is perhaps a thought that chimes with other Dummettian ideas). Then start with some such ‘definite’ set of arithmetical truths A0, e.g. the theorems of PA. Gödelize to extend the theory to A1, and keep on going. Any particular theory that is still r.e. can be Gödelized. But note that this time there is evidently a limit on how far along the (full, classical) ordinals we can continue the process — for there are only countably many r.e. sets available to be Gödelized and uncountably many ordinals (even ‘small’ countable ordinals).
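The countability point rests on a Cantor-style diagonalization: any listing of sets indexed by numbers misses some set. Here is a toy finite sketch (entirely my own illustration — a real treatment of r.e. sets would need computability machinery), with sets represented by their characteristic functions on an initial segment:

```python
# Toy illustration of diagonalizing out of a countable listing of sets.
# Each "set" is given by its characteristic function on 0..N-1.
def diagonal(listing, n):
    """Characteristic function of the diagonal set:
    k is in the diagonal set iff k is NOT in the k-th listed set."""
    return [0 if listing[k][k] == 1 else 1 for k in range(n)]

N = 4
listing = [
    [1, 0, 0, 0],  # the set {0}
    [1, 1, 0, 0],  # the set {0, 1}
    [0, 0, 0, 0],  # the empty set
    [1, 1, 1, 1],  # the set {0, 1, 2, 3}
]
d = diagonal(listing, N)
# The diagonal set differs from the k-th listed set at position k, for every k.
assert all(d[k] != listing[k][k] for k in range(N))
print(d)  # [0, 0, 1, 0]
```

Gödelization is diagonalization of this flavour: out of any effectively listed theory we can define a truth it misses — but only countably many such listings exist to diagonalize out of.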

So what are we to make of this? Well, one line would be to cleave to the Russellian alignment of the indefinitely extensible with injectability-into-the-ordinals, AND simultaneously agree with Dummett that truth-of-arithmetic is indefinitely extensible, by not accepting the classical ordinals in all their glory. The more you restrict the ordinals you accept, the more indefinitely extensible concepts there will be for you. But what of those who are happy with oodles of ordinals? Then the moral seems to be this. There is a difference between saying that the concept P is such that, given any ‘definite’ totality of Ps, we can always find a P that isn’t in that totality (we can always diagonalize out of any given set of Ps), and saying that the totality is (so to speak) indefinitely indefinitely extensible. And that seems right and important.

But how can we develop these ideas of ‘definite’ totalities/indefinite extensibility? The story continues …

Absolute Generality 24: Parsons concluded

I’ve commented at length on the central, load-bearing, section of Parsons’s paper. The concluding five and a bit pages I found less engaging. There are some comments on a paper by Rayo and Williamson which I might take up when I get to thinking about Rayo’s related contribution here to Absolute Generality. Then there is an un-worked-out suggestion that we take ‘Everything is identical to itself’ as systematically ambiguous. And finally there are some remarks about how those who might worry about the possibility of absolutely general quantification can handle seemingly all-encompassing common-or-garden claims such as that there are no (absolutely no!) talking donkeys. The latter remarks chime with some suggestions of Hellman’s that I’ve already commented on sympathetically, so I won’t expand on them here, but just say that I agree with Parsons that common-or-garden claims about talking donkeys aren’t a serious obstacle to anti-absolutism.

So let’s move on. I’ll set aside Rayo’s technical excursus for now: so that brings us to another monster paper, this time by Shapiro and Wright …

Absolute Generality 23: Parsons on the Williamson argument again

Let’s start by presenting a Williamson-style argument in a slightly different way.

On an interpretative truth-theory for a language L, as we said, we’ll have a clause for a monadic L-predicate P along the lines of ‘for all o, P is true-of o iff Fo’. But we are now in the business of imagining running through various different possible interpretations for P, which will result in clauses in definitions of different true-of relations, i.e. different relations ‘… is true-of … on interpretation I’. Now, it might well on the one hand seem that we needn’t think of the different interpretations that are in play here as ‘objects’ (whatever exactly that means). But, on the other hand, we might reasonably suppose that the different true-of relations could at least be indexed by some suitably big collection of objects (some class of numbers perhaps, or more generously some sets, for example).

So the clause in a definition for an indexed true-of relation true-ofα will be given in the form ‘for all o, P is true-ofα o iff Fo’. But now, since the indexing objects are by hypothesis kosher objects, we can unproblematically define a property R which is had by an object o just in case o is an indexing object and not-(P is true-ofo o).

We can then ask: is there an index κ such that for all o, P is true-ofκ o iff Ro? The familiar argument shows that there can be no such index κ (assuming, that is, that κ falls into the range of the universal quantifier ‘for all o’). But what should we conclude from that?

Well, we could conclude that, after all, the universal quantifier somehow manages to miss including the object κ in its range. But that is hardly the most natural lesson to draw! Rather, the natural moral is a Tarskian hierarchical one, that given some truth-predicates true-ofκ, we can ‘diagonalize out’ and define another truth-predicate which is not one of them.
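The diagonal move can be made vivid in miniature. Here is a toy sketch of my own (nothing here is from Parsons or Williamson beyond the shape of the argument): a finite stock of index objects, with each indexed true-of relation given as a Python predicate:

```python
# Toy model: interpretations indexed by objects (here, strings).
# true_of[i] gives the extension of P under the interpretation indexed by i.
true_of = {
    "a": lambda o: o == "a",   # under index a, P is true of a alone
    "b": lambda o: True,       # under index b, P is true of everything
    "c": lambda o: False,      # under index c, P is true of nothing
}

# The diagonal property R: o is an index and P is NOT true-of_o of o itself.
def R(o):
    return o in true_of and not true_of[o](o)

# No index kappa has R as its true-of relation:
# any candidate must disagree with R at kappa itself.
for kappa in true_of:
    assert true_of[kappa](kappa) != R(kappa)
```

R is a perfectly good predicate of the indexing objects, yet it 'diagonalizes out' of the indexed family, just as the Tarskian moral says.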

Now, Parsons almost makes the point. But, what he actually says is that, if you resist the idea that the universal quantifier must fail to cover absolutely everything, then this “forces us to take the Tarskian view now about the predicate ‘P is true of x according to I’. That amounts again to saying that we have determinate quantification over absolutely all interpretations but do not have an equally general notion of truth under an interpretation.” (Which Parsons suggests is a troublesome line for the believer in absolutely general quantification to manage.) But in fact that doesn’t seem quite right. For there is, we are supposing, a determinate quantification hereabouts, but we are not required to think of it as being over ‘all interpretations’, so much as being over all the objects that index some initial bunch of interpretations. The claim, however, is that we can always diagonalize out and define a further interpretation.

And now the question arises why, in this setting when we are generalizing about Davidsonian interpretations, we can’t echo the line that Parsons took about one-off interpretations. He said, you’ll recall, that (in the case of unrelativized truth-theories) ‘true of’ had better not be in the language being interpreted on pain of paradox. So, as he put it, “the interpretation does require ‘ideology’ not present in the language interpreted, but it does not require an expansion of ontology”. Now we are going up a level and talking about different definitions of ‘true of’ on different possible interpretations. And again on pain of paradox there will be a ‘true of’ that isn’t already among those different definitions. But why can’t we say again, “this new interpretation does require ‘ideology’ not already present, but it does not require an expansion of ontology”?

So in the end, I’m not sure that Parsons has firmly put his finger on a problem for the defender of absolute quantification, or at any rate a problem that comes from ideas about ‘interpretation’.

Let me add just a quick footnote harking back to Linnebo’s paper which we skipped over. One thing I did note was that he takes the strongest response to the Williamson line of argument to be a type-theoretic one — but Linnebo goes on to discuss a simple theory of types, and in a way this seems now to be going off in the wrong direction. For what we have just seen, in the case of a hierarchy of true-of relations, is a ramification into levels, and it is that which is doing the paradox-avoiding. But I’ll try to return to this observation.

Absolute Generality 22: Parsons on varieties of Russell’s paradox

Parsons, however, doesn’t think that the principal problems about quantifying over everything arise from a supposed commitment to metaphysical realism but are “logical difficulties … [which] arise from considering how sentences or discourses containing quantifiers are interpreted. This apparently innocent talk of interpretation turns out to have considerable weight.” Why?

Here’s how I think the dialectic goes in the compressed but elegant Section 3 of Parsons’s paper (with some changes in notation):

  1. Quantifiers are standardly interpreted as ranging over some domain, predicates are interpreted by subsets of the domain etc. A domain is understood to be a set. In standard set theory, no set contains absolutely everything. (Going for a set + classes theory just shuffles the problem upstairs.) So quantifications aren’t over absolutely everything.
  2. But in fact, Parsons says, it isn’t specific issues about sets or classes that generate the type of difficulty we encounter here. For consider any style of semantic interpretation for one-place predicates that assigns the open wff F the entity E(F), and which tells us that ‘Fa’ is true just when o I E(F), where o is the denotation of a, and I is some appropriate relation. (So if E(F) is a property, I is the instantiation relation; if E(F) is an extension, I is set membership; and so on.) Now, suppose that the language in question can itself talk about the entity E(F) and the relation I, so that now — within the language itself — we have ‘Fa’ is true iff ‘a I E(F)’ is true. Now consider the one-place predicate ‘R’ defined so that ‘Rx’ iff ‘not-(x I x)’, and suppose the term a denotes E(R). Then, we’ll have ‘Ra’ is true iff ‘a I E(R)’ iff ‘a I a’ iff ‘not-Ra’. Contradiction. So either there just is no such object as E(R), in which case we have a problem about giving a familiar sort of semantics for the language: or it is not available in the domain of quantification to be picked out, and the language’s quantifiers don’t range over everything.
  3. But ahah! Maybe the trouble in (1) comes from the idea that semantic interpretation requires us to assign an entity to be the domain. Recall, e.g., Cartwright’s familiar animadversions against what he calls the All-in-One principle, the idea that a domain is another object, additional to the objects it contains. And maybe the trouble in (2) comes from the idea that semantic interpretation requires us to assign an entity to be the interpretation of a predicate. Recall, e.g., the possibility of a metaphysics-light Davidsonian style of interpretation where predicates are interpreted by translation. [Then the residue of the generalized Russell paradox, with E(F) being simply F, and R the ‘true of’ relation, is just a familiar sort of semantic paradox. This indeed will lead us to say that the ‘true of’ had better not be in the language being interpreted on pain of paradox. “So,” says Parsons neatly, “the interpretation does require ‘ideology’ not present in the language interpreted, but it does not require an expansion of ontology. So far so good for the idea that the domain of the variables includes absolutely everything.”]
  4. But what if one wants to generalize about Davidson-style interpretations (though, as Parsons notes, it is a moot question when we really need to)? Do we get back to the sort of contradiction that we met when considering the ontologically loaded notion of interpretation deployed in (1) and (2)?
  5. If we are going self-consciously to relativize interpretative truth-theories (in a way that Davidson doesn’t) preparatory to generalizing about them, then we’ll have clauses for a predicate P like this: ‘(for all o), P is true of o according to interpretation I iff Fo’. Now suppose that an interpretation can itself be an object which P can be true of. And put Ro iff not-(P is true of o according to o). Now consider an interpretation J such that (for all o) P is true of o according to interpretation J iff Ro iff not-(P is true of o according to o). Identify o with the interpretation J and we have a contradiction again. [Thus Williamson’s version of the Russell paradox argument.]
  6. One response is to continue to allow that J is an object but conclude that it can’t fall into the range of the quantifiers, so that the quantifiers can’t be running over absolutely everything. So we again get an argument against absolutely general quantification, even though we are no longer thinking of interpretations as themselves ontologically loaded and as assigning objects as domains to quantifiers or entities as interpretations to predicates.
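The generalized Russell argument of step 2 can be checked in miniature, reading I as set membership over a tiny universe of hereditarily finite sets (a toy sketch of my own, not anything in Parsons):

```python
# Toy check of the generalized Russell argument, with I read as set
# membership over a small universe of frozensets. No candidate e in the
# universe can serve as E(R), where Rx iff not-(x I x).
empty = frozenset()
one = frozenset({empty})
universe = [empty, one, frozenset({empty, one})]

def I(o, e):          # the 'appropriate relation': here, membership
    return o in e

def R(o):             # the Russell predicate: Rx iff not-(x I x)
    return not I(o, o)

# e would be E(R) if, for all o in the universe, o I e iff Ro.
# Every candidate fails, and always at o = e itself, where the
# requirement collapses into: e I e iff not-(e I e).
for e in universe:
    assert not all(I(o, e) == R(o) for o in universe)
    assert I(e, e) != R(e)  # failure pinpointed at o = e
```

The finite case only illustrates the shape of the argument, of course: here no set is self-membered, so R holds of everything while no extension contains itself as required.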

So far, so good! But, as I just said, that’s only one response to the Williamson argument. It isn’t the only possible one. Parsons mentions (at least) one other line of response at the end of his Section 3, though he concludes that “the friends of absolute quantification” face difficulties in the other direction(s) too. But why?

Well, here things get a bit murkier. I’ll need to think for a while more …!

Absolute Generality 21: Parsons on metaphysical realism

It is a pleasure, as always, to turn to a paper by Charles Parsons (a long time ago, his “Frege’s Theory of Number” was one of the papers that grabbed me when I very first started philosophy, and it helped get me really enthused by Frege’s project). Nice too, in a volume of overlong papers, that ‘The Problem of Absolute Universality’ sticks to a reasonable length.

In his first section, Parsons quickly reviews a few reasons for supposing that sometimes, at any rate, we aim to make claims that do involve absolutely general quantification — offering the usual sort of cases. For example, there are logical examples like ‘Everything is self-identical’ (which surely is indeed intended to be about everything). And there are more humdrum examples like ‘There are no unicorns’. (If we suppose ‘there are’ ranges only over some domain D, then the statement could be true even if there are unicorns outside D. Then, asks Parsons, “can we exclude this outcome short of admitting a D that is absolutely everything?”)

So what are the problems about taking such cases at face value? Parsons takes the main problems to be logical in character, and more about them in due course. But first, in his second section, he discusses “a problem of a more metaphysical character”.

Our problem is that for statements of an absolutely general kind to have a definite truth-value, it appears that there has to be a final answer to the question what objects there are, … That is metaphysical realism.

If we are suspicious about metaphysical realism, so understood, we should therefore be suspicious about quantifiers which purport to really capture everything, once and for all.

But why be suspicious about realism, so understood? Parsons mentions the possibility of going for a trope ontology rather than object/property ontology and ending up with a different catalogue of the fundamental constituents of the universe. But it isn’t at all clear why that possibility counts against quantifying over everything, as I said before in talking about a similar discussion in Hellman’s paper. For a start, note that Parsons says the problem is that there has to be a final answer to the question of what objects there are. But the tropist and the traditionalist (if we can call her that) needn’t disagree at all about the objects that there are. In particular, the tropist needn’t deny any of the objects that the traditionalist posits; it’s just that he has his own special story about what objects are (how they are constructed from tropes).

Perhaps Parsons mis-spoke and meant to say that absolutely general quantification involves fixing not the objects but the entities in some all-embracing cross-category sense. But I suppose that well-brought-up Fregeans might start getting unhappy about the coherence of that idea. And in any case, since the traditionalist can treat tropes as logical constructions, the tropist and the traditionalist don’t have different stories about what entities (broad sense) there are either, but rather different stories about how the entities are interrelated by metaphysical dependence relations (whatever exactly those are).

Now in fact Parsons himself raises the same concerns about the trope example. But he comments

The mere fact of the possibility of construction is not sufficient, since constructions that may be offered will not necessarily satisfy the metaphysical intuitions that drive the alternative framework. That’s a reason for thinking that even if this possibility gives us a way of talking about everything in the world that does not commit us to metaphysical realism, even making sense of it gets us into heavy-duty metaphysics.

But I’m not getting the force of that. For remember the dialectical situation. Someone purports to quantify over everything. The objector says “Ahah! Do you realize you are committed to metaphysical realism in a bad way?”. The proponent of unrestricted quantification says “Why so?”. The objector responds “You are committed to thinking the world carves up into entities in a unique way: and what about e.g. the choice between a traditionalist and trope ontology”. We’ve imagined a come-back: “You’ve not shown that that’s a substantive choice about what there is, rather than a choice about how we organize the world into basic entities and constructions out of them”. And now Parsons is offering the opponent the retort: “Hmmmm, even making sense of that gets us into heavy-duty metaphysics”. To which the original proponent might reasonably protest that it was the objector who started playing the heavy-duty metaphysics game, so he can hardly complain about that. Rather it is the objector who needs to say more about e.g. the trope example and why (i) on the one hand it is supposed to be a contentful and a genuinely different story about what there is (not just a different story about “dependence”), yet (ii) on the other hand there is some kind of free choice about whether to adopt it rather than the traditional story, i.e. there isn’t an objective fact of the matter about which is the right story, which explains why we shouldn’t be metaphysical realists.

So for the moment, until he hears rather more, the proponent of absolutely general quantification can reasonably suppose himself to live to fight another day!

Absolute Generality 20: Linnebo on sets, properties, etc.

And having said that I would write about Linnebo’s paper next, I find myself rather regretting that promise, and this will have to be a non-comment!

Linnebo begins by announcing that the “strongest argument against the coherence of unrestricted generalization” is Williamson’s variant Russell paradox about interpretations; and he then takes the most promising line of reply on the market to involve adopting a kind of hierarchical type theory. Then Linnebo locates what he thinks to be a problem with the usual kind of “type-theoretic defences”. So he changes tack, and offers a different, more revisionary response to Williamson’s argument, which depends on rethinking the very idea of an interpretation (so that now predicates get not extensions but rather properties as their semantic values). He then needs a whole framework for talking about properties as well as sets (which has, he says, some similarity with Fine’s deviant project in his 2005 paper “Class and membership”), and this gives us a new sort of hierarchy of semantic theories. As with the old-style type-theoretic defences, we can now use this new hierarchical apparatus to blunt Williamson’s argument.

Now, how exciting/illuminating you find all this — given our interest here is in absolute generality — will depend, in part, on whether you think that the Williamson variant on Russell’s paradox gets to the very heart of a certain kind of argument against the possibility of absolute general quantification, or alternatively think that it somewhat muddies the waters by dragging in tangential issues (e.g. in its talk of interpretations as objects, etc.). My hunch so far has been the latter, though of course I’m open to persuasion; and the very complexities of Linnebo’s excursus don’t do much to dislodge that hunch so far.

I might return to Linnebo later, since Rayo’s paper seems to take us back into similar territory. But at the moment, I really don’t think I have anything useful to say. So let’s for now rapidly move on to consider Parsons’s paper.

Absolute Generality 19: Lavine on McGee’s argument

There are still over twenty pages of Lavine’s paper remaining. Since, to be frank, Lavine doesn’t write with a light touch or Lewisian clarity, these are unnecessarily hard going. But having got this far, I suppose we might as well press on to the bitter end. And, as I’ve just indicated in the previous post in the series, I do have a bit of a vested interest in making better sense of his talk of schematic generalizations.

There are four Sections, 8 to 11, ahead of us. First, in Sec. 8, Lavine argues that even with schematic generalizations as he understands them in play, we still can’t get a good version of McGee’s argument that the quantifier rules suffice to determine that we are ultimately quantifying over a unique domain of absolutely everything, and so McGee’s attempt to respond to the Skolemite argument fails. I think I do agree that the rules, even if interpreted schematically, don’t fix a unique domain: but I’m still finding Lavine’s talk about schematic generalizations pretty murky, so I’m not sure whether that is right. Not that I particularly want to join Lavine in defending the Skolemite argument: but I am happy to agree that McGee’s way with the argument isn’t the way to go. So let’s not delay now over this.

In Sec. 9, Lavine discusses Williamson’s arguments in his 2003 paper ‘Everything’ and claims that everything Williamson wants to do with absolutely unrestricted quantification can be done with schematic generalizations. Is that right? Well, patience! For I guess I really ought now to pause here to (re)read Williamson’s paper, which I’ve been meaning to do anyway, and then return to Lavine’s discussion in the hope that, in setting his position up against Williamson, more light will be thrown on the notion of schematic generality in play. But Williamson’s paper is itself another fifty-page monster … So I think — just a little wearily — that maybe this is the point at which to take that needed holiday break from absolute generality and Absolute Generality.

Back to it, with renewed vigour let’s hope, in 2008!

Absolute Generality 18: More on schematic generality

In a subsection entitled ‘Schemes are not reducible to quantification’, Lavine writes

Schematic letters and quantifiable variables have different inferential roles. If n is a schematic letter then one can infer S0 ≠ 0 from Sn ≠ 0, but that is not so if n is a quantifiable variable — in that case the inference is valid only if n did not occur free in any of the premisses of the argument.

But, in so far as that is true, how does it establish the non-reducibility claim?

Of course, one familiar way of using schemes is e.g. as in Sec. 8.1 of my Gödel book, where I am describing a quantifier-free arithmetic I call Baby Arithmetic, and say “any sentence that you get from the scheme Sζ ≠ 0 by substituting a standard numeral for the place-holder ‘ζ’ is an axiom”. And to be sure, the role of the metalinguistic scheme Sζ ≠ 0 is different from that of the object language Sx ≠ 0. Still, it would be misleading to talk of inferring an instance like S0 ≠ 0 from the schema. And here the generality, signalled by ‘any’, can — at least pending further, independent, argument — be thought of as unproblematically quantificational (though not quantifying over numbers, of course). So this sort of apparently anodyne use of numerical schemes doesn’t make Lavine’s point, unless he can offer some additional considerations. So what does he have in mind?
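That metalinguistic use of the scheme can be mimicked quite literally — a throwaway sketch of mine, not anything from the book, generating axiom instances by substituting standard numerals for the place-holder:

```python
# Generate axiom instances of the scheme 'Sζ ≠ 0' by substituting standard
# numerals (0, S0, SS0, ...) for the placeholder ζ, Baby Arithmetic style.
def numeral(n):
    """The standard numeral for n: n occurrences of 'S' followed by '0'."""
    return "S" * n + "0"

def instances(scheme, n_max):
    """Instances of a scheme, substituting numerals for 'ζ' up to n_max."""
    return [scheme.replace("ζ", numeral(n)) for n in range(n_max + 1)]

print(instances("Sζ ≠ 0", 2))  # ['S0 ≠ 0', 'SS0 ≠ 0', 'SSS0 ≠ 0']
```

The generality here lives in the metalanguage's 'any numeral', which is exactly why talk of 'inferring' S0 ≠ 0 from the schema misdescribes what is going on.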

Lavine’s discussion is not wonderfully clear. But I think the important thought comes out here:

One who doubts that the natural numbers form an actually infinite class will not take the scheme φ(n) → φ(Sn) to have a well-circumscribed class of instances and hence will not be willing to infer φ(x) → φ(Sx) from it; for the latter formula involves a quantifiable variable with the actually infinite class of all numbers as its domain or the actually infinite class of all numerals included in its substitution class.

We seemingly get a related thought e.g. in Dummett’s paper ‘What is mathematics about?’, where he argues that understanding quantification over some class of abstract objects requires that we should ‘grasp’ the domain, that is, the totality of objects of that class — which seems to imply that if there is no totality to be grasped, then here there can be no universal quantification properly understood.

But do note two things about this. First, a generalization’s failing to have a well-circumscribed class of instances because we are talking in a rough and ready way, and haven’t bothered to be precise because we don’t need to be, and its failing because we can’t circumscribe the class because there is no relevant completed infinity (e.g. because of considerations about indefinite extensibility), are surely quite different cases. Lavine’s moving from an initial example of the first kind, when he talked about arm-waving generalizations we make in introductory logic lectures, to his later consideration of cases of the second kind, suggests an unwarranted slide. Second, I can see no reason at all to suppose that sophisticated schematic talk to avoid being committed to actual infinities is “more primitive” than quantificational generality. On the contrary.

Still, with those caveats, I guess I am sympathetic to Lavine’s core claim that there is room for issuing schematic generalizations which don’t commit us to a clear conception of a complete(able) domain. In fact, I’d better be sympathetic, because I actually use the same idea myself here (where I talk about ACA0’s quantifications over subsets of numbers, and argue that the core motivation for ACA0 in fact only warrants a weaker schematic version of the theory). So, even though I don’t think he really makes the case in his Sec. 7, I’m going to grant that there is something in Lavine’s idea here, and move on next to consider what he does with the idea in the rest of the paper.

Absolute Generality 17: Schematic generality

In Sec. 7 of his paper, Lavine argues that there is a distinct way of expressing generality, using “schemes” to declare that ‘any instance [has a certain property], where “any” is to be sharply distinguished from “every”‘ (compare Russell’s 1908 view). In fact, Lavine goes further, talking about the kind of generality involved here as ‘more primitive than quantificational generality’.

We are supposed to be softened up for this idea by the thought that in fact distinctively schematic generalization is actually quite familiar to us:

When, early on in an introductory logic course, before a formal language has been introduced, one says that NOT(P AND NOT P) is valid, and gives natural language examples, the letter ‘P’ is being used as a full schematic letter. The students are not supposed to take it as having any particular domain — there has as yet been no discussion of what the appropriate domain might be — and it is, in the setting described, usually the case that it is not ‘NOT(P AND NOT P)’ that is being described as valid, but the natural-language examples that are instances of it.1

Here, talk about a full schematic variable is to indicate that ‘what counts as an acceptable substitution instance is open ended and automatically expands as the language in use expands.’

But Lavine’s motivating example doesn’t impress. Sure, in an early lecture, I may say that any proposition of the form NOT(P AND NOT P) is logically true in virtue of the meanings of ‘NOT’ and ‘AND’. But to get anywhere, I of course have to gloss this a bit (for a start, the very idea of a ‘substitution instance’ of that form needs quite a bit of explanation, since plugging in a declarative English sentence won’t even yield a well-formed sentence). And, glossing such principles as non-contradiction and excluded middle, I for one certainly remark e.g. that we are setting aside issues about vagueness (‘it is kinda raining and not raining, you know’), and issues about weird cases (liar sentences), and issues about sentences with empty names, and I may sometimes mention more possible exceptions. But yes, I — like Lavine — will leave things in a sense pretty ‘open-ended’ at this stage. Does that mean, though, that I’m engaged in something other than ‘quantificational generality’? Does it mean that I haven’t at least gestured at some roughly delimited appropriate domain? Isn’t it rather that — as quite often — my quantifications are cheerfully a bit rough and ready?
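And on the cheerfully rough-and-ready quantificational reading — classical, two-valued, with the caveats about vagueness and liar sentences set aside — what is being claimed of the instances is just this (a trivial sketch of my own):

```python
# Under the classical two-valued reading, every substitution instance of
# NOT(P AND NOT P) comes out true: whatever truth-value the instance of P
# takes, the compound evaluates to True.
def instance_value(p: bool) -> bool:
    return not (p and not p)

# Check the claim over both possible truth-values of P.
assert all(instance_value(p) for p in (True, False))
```

Nothing in that evaluation seems to demand a generality 'more primitive' than the ordinary quantification over (admittedly roughly delimited) instances.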

‘Ah, but you are forgetting the key point that ‘what counts as an acceptable substitution instance is … expands as the language in use expands.’ But again, more needs to be said about the significance of this before we get a difference between schematic and quantificational generalizations. After all, what counts as an instance of ‘All the rabbits at the bottom of the garden are white’ changes as the population of rabbits expands. Does that make that claim not quantificational?

A general methodological point, famously emphasized by Kripke in his discussion of a supposed semantic ambiguity in the use of definite descriptions: we shouldn’t multiply semantic interpretations beyond necessity, when we can explain variations in usage by using general principles of discourse in a broadly Gricean way. We shouldn’t, in the present case, bifurcate interpretations of expressions of generality into the schematic and the genuinely quantificational cases if the apparent differences in usage here can be explained by the fact that we speak in ways which are only as precise and circumscribed as is needed for the various purposes at hand. And it seems that the ‘open-ended’ usage in the quoted motivating example can be treated as just a case of loose talk sufficient for rough introductory purposes.

So has Lavine some stronger arguments for insisting on a serious schematic/quantification distinction here?

1. Quite beside the present point, of course, but surely it isn’t a great idea — when you are trying to drill into beginners the idea that truth is the dimension of assessment for propositions and validity is the dimension of assessment for inferences — to turn round and mess up a clean distinction by calling logically necessary propositions ‘valid’. I know quite a few logic books do this, but why follow them in this bad practice?

Absolute Generality 16: Lavine on the problems, continued

(3) “The third objection to everything is technical and a bit difficult to state, and in addition it is relatively easily countered,” so Lavine is brief. I will be too. Start with the thought that there can be subject areas in which for every true (∃x)Fx — with the quantifier taken as restricted to such an area — there is a name c such that Fc. There is then an issue whether to treat those restricted quantifiers referentially or substitutionally, yet supposedly no fact of the matter can decide the issue. So then it is indeterminate whether to treat c as having a denotation which needs to be in the domain of an unrestricted “everything”. And so “everything” is indeterminate.
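The supposed underdetermination here can be made vivid with a toy sketch (my illustration, not Lavine’s): when every object in the restricted domain has a name, the referential and substitutional readings of the existential quantifier assign exactly the same truth-values, so truth-value data alone cannot decide between them. The names and predicates below are arbitrary examples.

```python
# Toy sketch: a "subject area" in which every object is named,
# so referential and substitutional quantification coincide.

domain = {0, 1, 2, 3}                                  # the restricted domain
names = {"zero": 0, "one": 1, "two": 2, "three": 3}    # every object has a name

def exists_referential(pred):
    """(Ex)Fx read referentially: some object in the domain satisfies F."""
    return any(pred(x) for x in domain)

def exists_substitutional(pred):
    """(Ex)Fx read substitutionally: some *named* instance Fc is true."""
    return any(pred(names[n]) for n in names)

is_even = lambda x: x % 2 == 0

# The two readings agree on every predicate over this domain -- the
# truth-values of sentences about the subject matter settle nothing:
for pred in (is_even, lambda x: x > 2, lambda x: x < 0):
    assert exists_referential(pred) == exists_substitutional(pred)
```

Lavine’s point, of course, is that agreement in truth-values is only underdetermining if truth-values are the *only* admissible evidence — which is just the Quinean doctrine he goes on to question.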

Lavine himself comments, “the argument … works only if the only data that can be used to distinguish substitutional from referential quantification are the truth values of sentences about the subject matter at issue”. And there is no conclusive reason to accept that Quinean doctrine. Relatedly: the argument only works if we can have no prior reason to suppose that c is operating as a name with a referent in Fc (prior to issues about quantifications involving F). And there is no good reason to accept that either — read Evans on The Varieties of Reference. So argument (3) looks a non-starter.

(4) This takes us to the fourth “objection to everything” that Lavine considers: the Skolemite argument again. Or, to use his label, the Hollywood objection. Why that label?

Hollywood routinely produces the appearance of large cities, huge crowds, entire alien worlds, and so forth, in movies … the trick is only to produce those portions of the cities, crowds, and worlds at which the camera points, and even to produce only those parts the camera can see — not barns, but barn façades. One can produce appearances indistinguishable from those of cities, crowds, and worlds using only a minuscule part of those cities, crowds, and worlds. Skolem, using pretty much the Hollywood technique, showed that … for every interpreted language with an infinite domain there is a small (countable) infinite substructure in which exactly the same sentences are true. Here, instead of just producing what the camera sees, one just keeps what the language “sees” or asserts to exist, one just takes out of the original structure one witness to every true existential sentence, etc.

That’s really a rather nice, memorable, analogy (one that will stick in the mind for lectures!). And the headline news is that Lavine aims to rebut the objections offered by McGee to the Skolemite argument against the determinacy of supposedly absolutely unrestricted quantification.

One of McGee’s arguments, as we noted, appeals to considerations about learnability. I didn’t follow the argument and it turns out that Lavine too is unsure what is supposed to be going on. He offers an interpretation and readily shows that on that interpretation McGee’s argument cuts little ice. I can’t do better on McGee’s behalf (not that I feel much inclined to try).

McGee’s other main argument, we noted, is that “[t]he recognition that the rules of logical inference need to be open-ended … frustrates Skolemite skepticism.” Lavine’s riposte is long and actually its thrust isn’t that easy to follow. But he seems, inter alia, to make two points that I did in my comments on McGee. First, talking about possible extensions of languages won’t help since we can Skolemize on languages that are already expanded to contain terms “for any object for which a term can be added, in any suitable modal sense of ‘can’” (though neither Lavine nor I am clear enough about those suitable modal senses — there is work to be done there). And second, Lavine agrees with McGee that the rules of inference for the quantifiers fix (given an appropriate background semantic framework) the semantic values of the quantifiers. But while fixing semantic values — fixing the function that maps the semantic values of quantified predicates to truth-values — tells us how domains feature in fixing the truth-values of quantified sentences, that just doesn’t tell us what the domain is. And Skolemite considerations aside, it doesn’t tell us whether or not the widest domain available in a given context (what then counts as “absolutely everything”) can vary with context as the anti-absolutist view would have it.
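That last point is worth making concrete with a toy sketch (mine, not Lavine’s; the predicate and domains are arbitrary examples). We can fix the semantic value of ‘every’ once and for all, as a function from predicate extensions to truth-values, while leaving the domain it operates over as a free parameter; fixing the function does nothing to fix the domain.

```python
# Toy sketch: the quantifier's semantic value is fixed as a function,
# but the domain is a parameter left entirely open.

def every(domain, pred):
    """The standard clause: 'every F' is true iff F holds of all of domain."""
    return all(pred(x) for x in domain)

is_small = lambda x: x < 100

# Same quantifier-function; different contexts supply different
# "widest" domains; the verdicts differ:
assert every(range(10), is_small) is True      # context 1: narrow domain
assert every(range(1000), is_small) is False   # context 2: wider domain
```

Nothing in the definition of `every` itself tells us which domain a given context supplies — which is just the gap the anti-absolutist exploits.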

So where does all this leave us, twenty pages into Lavine’s long paper? Pretty much where we were. Considerations of indefinite extensibility have been shelved for later treatment. And the Skolemite argument is still in play (though nothing has yet been said that really shakes me out of the view that — as I said before — issues about the Skolemite argument are in fact orthogonal to the interestingly distinctive issues, the special problems, about absolute generality). However, there is a lot more to come …
