2007

"The best thing out there"

Cue sound of blowing one’s own trumpet. The first “Customer Reviews” on Amazon USA for the Gödel book have just appeared. And there is a quite terrific one by Jon Cogburn of LSU at Baton Rouge. Very cheering indeed! (Though it is also another spur to get going with developing the online sets of exercises that I’ve been meaning to put together.)

Logical Options, 1

As I noted in my last post, I’m going to be working through Bell, DeVidi and Solomon’s Logical Options in a seminar with some second-year undergrads this coming term. To answer Richard Zach’s question, I might have used Ted Sider’s draft book as the main text if I’d known about it before plans were made in November — though Logical Options does still perhaps dovetail more neatly with our tree-based first-year course.

Because the seminar starts on the first teaching day of term, I can’t presume much reading. So we’ll have to make a slow start. Here though are the reading notes for the first session, in case anyone else is interested. (It goes without saying that corrections and/or suggestions are always welcome!)

Back to the grindstone

Hmmm, the next ten weeks or so are going to be busy.

For a start, I must find time to finish reading the papers in the Absolute Generality collection, and I’ll no doubt continue to try to say something about them here. Not that the topic thrills me anywhere near as much as the number of posts might suggest. But it does remain puzzling; I have promised to write a BSL review; and blogging about the papers is as good a way as any of making myself do the reading moderately carefully.

Then at the beginning of term there is a Graduate Conference on the Philosophy of Logic and Mathematics in the faculty which promises well, and I’m down to comment on the first paper on vagueness (a topic on which I’ve got pretty rusty).

During term, I’m now giving some seminars to second-year students on the Bell/DeVidi/Solomon text, Logical Options. That book isn’t at all ideal, on a closer look, but I can’t think of anything better to cover the sort of ground the students need to cover, though that will certainly involve writing some supplementary notes. I’ll link to the notes here as they get done.

And then there’s going to be all the homework for the grad seminar I’m running with Thomas Forster on model theory, working through the shorter Hodges.

Oh, and I’ve promised to talk to the Jowett Society in Oxford in February. Gulp.

Well, I’ve only myself to blame, given that those are all works beyond the call of duty. Still, it should be tolerable fun (on the logician’s rather stretched understanding of ‘fun’). Better make a start tomorrow, though …

Absolute Generality 19: Lavine on McGee’s argument

There are still over twenty pages of Lavine’s paper remaining. Since, to be frank, Lavine doesn’t write with a light touch or Lewisian clarity, these are unnecessarily hard going. But having got this far, I suppose we might as well press on to the bitter end. And, as I’ve just indicated in the previous post in the series, I do have a bit of a vested interest in making better sense of his talk of schematic generalizations.

There are four Sections, 8–11, ahead of us. First, in Sec. 8, Lavine argues that even with schematic generalizations as he understands them in play, we still can’t get a good version of McGee’s argument that the quantifier rules suffice to determine that we are ultimately quantifying over a unique domain of absolutely everything, and so McGee’s attempt to respond to the Skolemite argument fails. I think I do agree that the rules, even if interpreted schematically, don’t fix a unique domain: but I’m still finding Lavine’s talk about schematic generalizations pretty murky, so I’m not sure whether that is right. Not that I particularly want to join Lavine in defending the Skolemite argument: but I am happy to agree that McGee’s way with the argument isn’t the way to go. So let’s not delay now over this.

In Sec. 9, Lavine discusses Williamson’s arguments in his 2003 paper ‘Everything’ and claims that everything Williamson wants to do with absolutely unrestricted quantification can be done with schematic generalizations. Is that right? Well, patience! For I guess I really ought now to pause here to (re)read Williamson’s paper, which I’ve been meaning to do anyway, and then return to Lavine’s discussion in the hope that, in setting his position up against Williamson, more light will be thrown on the notion of schematic generality in play. But Williamson’s paper is itself another fifty-page monster … So I think — just a little wearily — that maybe this is the point at which to take that needed holiday break from absolute generality and Absolute Generality.

Back to it, with renewed vigour let’s hope, in 2008!

Simple things are best.

I did fiddle around a bit today trying to get a hack to work for splitting long posts into an initial para or two, with the rest to be revealed by hitting a “Read more” link (if you want to know how to do it in Blogger, see here). But in the end, I decided I didn’t like the result. It’s not as if even my longest posts are more than about half-a-dozen moderate-sized paragraphs (and it is a good discipline to keep it like that): so it is in any case easy enough to scan to the end of one post to jump on to the next. I’ll stick to the current simple format.

The Daughter recommended that I try OmniFocus ‘task management’ software which implements Getting Things Done type lists. Well, it’s not that I haven’t tasks to do, and the GTD idea really does work. But, having played about a bit with the beta version you can download, I reckon my life isn’t so cluttered that carrying on using NoteBook and iCal won’t work for me. Again, I think I’ll stick to the simpler thing.

Absolute Generality 18: More on schematic generality

In a subsection entitled ‘Schemes are not reducible to quantification’, Lavine writes

Schematic letters and quantifiable variables have different inferential roles. If n is a schematic letter then one can infer S0 ≠ 0 from Sn ≠ 0, but that is not so if n is a quantifiable variable — in that case the inference is valid only if n did not occur free in any of the premisses of the argument.

But, in so far as that is true, how does it establish the non-reducibility claim?

Of course, one familiar way of using schemes is e.g. as in Sec. 8.1 of my Gödel book, where I am describing a quantifier-free arithmetic I call Baby Arithmetic, and say “any sentence that you get from the scheme Sζ ≠ 0 by substituting a standard numeral for the place-holder ‘ζ’ is an axiom”. And to be sure, the role of the metalinguistic scheme Sζ ≠ 0 is different from that of the object-language Sx ≠ 0. Still, it would be misleading to talk of inferring an instance like S0 ≠ 0 from the schema. And here the generality, signalled by ‘any’, can — at least pending further, independent, argument — be thought of as unproblematically quantificational (though not quantifying over numbers of course). So this sort of apparently anodyne use of numerical schemes doesn’t make Lavine’s point, unless he can offer some additional considerations. So what does he have in mind?
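Just to have that anodyne picture explicitly on the page before turning to Lavine’s answer (this is my bare-bones summary, not a quotation from the book): the metalinguistic scheme is

Sζ ≠ 0

and its instances, the Baby Arithmetic axioms, are the object-language sentences

S0 ≠ 0, SS0 ≠ 0, SSS0 ≠ 0, …

The generality in ‘any sentence you get from the scheme is an axiom’ is naturally read as quantifying over numerals, and hence over sentences; which is how a quantifier-free theory gets infinitely many axioms without containing anything like the quantified ∀x Sx ≠ 0.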

Lavine’s discussion is not wonderfully clear. But I think the important thought comes out here:

One who doubts that the natural numbers form an actually infinite class will not take the scheme φ(n) → φ(Sn) to have a well-circumscribed class of instances and hence will not be willing to infer φ(x) → φ(Sx) from it; for the latter formula involves a quantifiable variable with the actually infinite class of all numbers as its domain or the actually infinite class of all numerals included in its substitution class.

We seemingly get a related thought e.g. in Dummett’s paper ‘What is mathematics about?’, where he argues that understanding quantification over some class of abstract objects requires that we should ‘grasp’ the domain, that is, the totality of objects of that class — which seems to imply that if there is no totality to be grasped, then here there can be no universal quantification properly understood.

But do note two things about this. First, a generalization’s failing to have a well-circumscribed class of instances because we are talking in a rough and ready way and haven’t bothered to be precise because we don’t need to be, and its failing because we can’t circumscribe the class because there is no relevant completed infinity (e.g. because of considerations about indefinite extensibility), are surely quite different cases. Lavine’s moving from an initial example of the first kind when he talked about arm-waving generalizations we make in introductory logic lectures to his later consideration of cases of the second kind suggests an unwarranted slide. Second, I can see no reason at all to suppose that sophisticated schematic talk to avoid being committed to actual infinities is “more primitive” than quantificational generality. On the contrary.

Still, with those caveats, I guess I am sympathetic to Lavine’s core claim that there is room for issuing schematic generalizations which don’t commit us to a clear conception of a complete(able) domain. In fact, I’d better be sympathetic, because I actually use the same idea myself here (where I talk about ACA0’s quantifications over subsets of numbers, and argue that the core motivation for ACA0 in fact only warrants a weaker schematic version of the theory). So, even though I don’t think he really makes the case in his Sec. 7, I’m going to grant that there is something in Lavine’s idea here, and move on next to consider what he does with the idea in the rest of the paper.
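For orientation, since I’ve just invoked it: the comprehension scheme distinctive of ACA0 (in the standard textbook formulation, nothing specific to the post linked above) is

∃X ∀n (n ∈ X ↔ φ(n)), for each arithmetical formula φ, i.e. each formula without second-order quantifiers (parameters allowed).

And the issue flagged above is, roughly, whether the motivation for accepting each such instance, one arithmetical φ at a time, really warrants quantification over a completed totality of all subsets of numbers.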

Three cheers for the Stanford Encyclopedia

Richard Zach, one of the subject editors, has noted on his blog that there’s a new entry on the Stanford Encyclopedia by Herb Enderton on Second-order and Higher-order Logic. The SEP is really developing quite terrifically, and it seems to me that the average standard of the entries on this freely accessible resource is distinctly better than e.g. on the expensive Routledge Encyclopedia. (I did write one entry for the latter, on C.D. Broad, still one of my favourite early 20th century philosophers: but I was given just half the space of his entry in the old Edwards encyclopedia — and that seems to be indicative of the shortcomings of the Routledge Encyclopedia: it covers too much too thinly.)

Absolute Generality 17: Schematic generality

In Sec. 7 of his paper, Lavine argues that there is a distinct way of expressing generality, using “schemes” to declare that ‘any instance [has a certain property], where “any” is to be sharply distinguished from “every”‘ (compare Russell’s 1908 view). In fact, Lavine goes further, talking about the kind of generality involved here as ‘more primitive than quantificational generality’.

We are supposed to be softened up for this idea by the thought that in fact distinctively schematic generalization is actually quite familiar to us:

When, early on in an introductory logic course, before a formal language has been introduced, one says that NOT(P AND NOT P) is valid, and gives natural language examples, the letter ‘P’ is being used as a full schematic letter. The students are not supposed to take it as having any particular domain — there has as yet been no discussion of what the appropriate domain might be — and it is, in the setting described, usually the case that it is not ‘NOT(P AND NOT P)’ that is being described as valid, but the natural-language examples that are instances of it. [1]

Here, talk of a full schematic variable is meant to indicate that ‘what counts as an acceptable substitution instance is open ended and automatically expands as the language in use expands.’

But Lavine’s motivating example doesn’t impress. Sure, in an early lecture, I may say that any proposition of the form NOT(P AND NOT P) is logically true in virtue of the meanings of ‘NOT’ and ‘AND’. But to get anywhere, I of course have to gloss this a bit (for a start, the very idea of a ‘substitution instance’ of that form needs quite a bit of explanation, since plugging in a declarative English sentence won’t even yield a well-formed sentence). And, glossing principles like non-contradiction and excluded middle, I for one certainly remark e.g. that we are setting aside issues about vagueness (‘it is kinda raining and not raining, you know’), and issues about weird cases (liar sentences), and issues about sentences with empty names, and I may sometimes mention more possible exceptions. But yes, I — like Lavine — will leave things in a sense pretty ‘open-ended’ at this stage. Does that mean, though, that I’m engaged in something other than ‘quantificational generality’? Does it mean that I haven’t at least gestured at some roughly delimited appropriate domain? Isn’t it rather that — as quite often — my quantifications are cheerfully a bit rough and ready?

‘Ah, but you are forgetting the key point that ‘what counts as an acceptable substitution instance is … expands as the language in use expands.’ But again, more needs to be said about the significance of this before we get a difference between schematic and quantificational generalizations. After all, what counts as an instance of ‘All the rabbits at the bottom of the garden are white’ changes as the population of rabbits expands. Does that make that claim not quantificational?

A general methodological point, famously emphasized by Kripke in his discussion of a supposed semantic ambiguity in the use of definite descriptions: we shouldn’t multiply semantic interpretations beyond necessity when we can explain variations in usage by appeal to general principles of discourse in a broadly Gricean way. We shouldn’t, in the present case, bifurcate interpretations of expressions of generality into the schematic and the genuinely quantificational if the apparent differences in usage can be explained by the fact that we speak in ways which are only as precise and circumscribed as is needed for the various purposes at hand. And it seems that the ‘open-ended’ usage in the quoted motivating example can be treated as just a case of loose talk sufficient for rough introductory purposes.

So has Lavine some stronger arguments for insisting on a serious schematic/quantification distinction here?

[1] Quite beside the present point, of course, but surely it isn’t a great idea — when you are trying to drill into beginners the idea that truth is the dimension of assessment for propositions and validity is the dimension of assessment for inferences — to turn round and mess up a clean distinction by calling logically necessary propositions ‘valid’. I know quite a few logic books do this, but why follow them in this bad practice?

Absolute Generality 16: Lavine on the problems, continued

(3) “The third objection to everything is technical and a bit difficult to state, and in addition it is relatively easily countered,” so Lavine is brief. I will be too. Start with the thought that there can be subject areas in which for every true (∃x)Fx — with the quantifier taken as restricted to such an area — there is a name c such that Fc. There is then an issue whether to treat those restricted quantifiers referentially or substitutionally, yet supposedly no fact of the matter can decide the issue. So then it is indeterminate whether to treat c as having a denotation which needs to be in the domain of an unrestricted “everything”. And so “everything” is indeterminate.
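To have the two readings in front of us (this is just the standard contrast, not Lavine’s own wording):

Referential: (∃x)Fx is true iff at least one object in the (restricted) domain satisfies F.
Substitutional: (∃x)Fx is true iff at least one closed term t of the language is such that Ft is true.

In the envisaged subject areas, where every true existential has a named witness, the two readings agree on all truth values; which is exactly why, if the truth values of sentences were the only admissible data, the choice between them would look undecidable.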

Lavine himself comments, “the argument … works only if the only data that can be used to distinguish substitutional from referential quantification are the truth values of sentences about the subject matter at issue”. And there is no conclusive reason to accept that Quinean doctrine. Relatedly: the argument only works if we can have no prior reason to suppose that c is operating as a name with a referent in Fc (prior to issues about quantifications involving F). And there is no good reason to accept that either — read Evans on The Varieties of Reference. So argument (3) looks a non-starter.

(4) Which takes us to the fourth “objection to everything” that Lavine considers, which is the Skolemite argument again. Or to use his label, the Hollywood objection. Why that label?

Hollywood routinely produces the appearance of large cities, huge crowds, entire alien worlds, and so forth, in movies … the trick is only to produce those portions of the cities, crowds, and worlds at which the camera points, and even to produce only those parts the camera can see — not barns, but barn façades. One can produce appearances indistinguishable from those of cities, crowds, and worlds using only a minuscule part of those cities, crowds, and worlds. Skolem, using pretty much the Hollywood technique, showed that … for every interpreted language with an infinite domain there is a small (countable) infinite substructure in which exactly the same sentences are true. Here, instead of just producing what the camera sees, one just keeps what the language “sees” or asserts to exist, one just takes out of the original structure one witness to every true existential sentence, etc.

That’s really a rather nice, memorable, analogy (one that will stick in the mind for lectures!). And the headline news is that Lavine aims to rebut the objections offered by McGee to the Skolemite argument against the determinacy of supposedly absolutely unrestricted quantification.
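For those who want the underlying result stated rather than dramatized: the downward Löwenheim–Skolem theorem says that if M is a structure for a countable first-order language and M has an infinite domain, then M has a countable elementary substructure N, i.e. a countable substructure such that for every formula φ(x1, …, xk) and all a1, …, ak in N,

N ⊨ φ(a1, …, ak) if and only if M ⊨ φ(a1, …, ak).

The ‘keep one witness for every true existential’ construction in the quoted passage is in effect the usual proof, via Skolem functions or the Tarski–Vaught test.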

One of McGee’s arguments, as we noted, appeals to considerations about learnability. I didn’t follow the argument and it turns out that Lavine too is unsure what is supposed to be going on. He offers an interpretation and readily shows that on that interpretation McGee’s argument cuts little ice. I can’t do better on McGee’s behalf (not that I feel much inclined to try).

McGee’s other main argument, we noted, is that “[t]he recognition that the rules of logical inference need to be open-ended … frustrates Skolemite skepticism.” Lavine’s riposte is long and actually its thrust isn’t that easy to follow. But he seems, inter alia, to make two points that I did in my comments on McGee. First, talking about possible extensions of languages won’t help since we can Skolemize on languages that are already expanded to contain terms “for any object for which a term can be added, in any suitable modal sense of ‘can'” (though neither Lavine nor I am clear enough about those suitable modal senses — there is work to be done there). And second, Lavine agrees with McGee that the rules of inference for the quantifiers fix (given an appropriate background semantic framework) the semantic values of the quantifiers. But while fixing semantic values — fixing the function that maps the semantic values of quantified predicates to truth-values — tells us how domains feature in fixing the truth-values of quantified sentences, that just doesn’t tell us what the domain is. And Skolemite considerations aside, it doesn’t tell us whether or not the widest domain available in a given context (what then counts as “absolutely everything”) can vary with context as the anti-absolutist view would have it.
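To put that last point in terms of the textbook semantics (my gloss on the ‘appropriate background semantic framework’, not Lavine’s or McGee’s own formulation): relative to a domain D, the semantic values of the quantifiers are the functions given by

⟦∀⟧D(X) = True iff X = D, ⟦∃⟧D(X) = True iff X ≠ ∅, for predicate extensions X ⊆ D.

Open-ended rules may well pin down those functions; but the functions are specified relative to a domain D, and nothing about them settles which D is in play, or whether the widest D available can shift with context.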

So where does all this leave us, twenty pages into Lavine’s long paper? Pretty much where we were. Considerations of indefinite extensibility have been shelved for later treatment. And the Skolemite argument is still in play (though nothing has yet been said that really shakes me out of the view that — as I said before — issues about the Skolemite argument are in fact orthogonal to the interestingly distinctive issues, the special problems, about absolute generality). However, there is a lot more to come …
