Superdegree theories of vagueness

The first day of the first Cambridge Graduate Conference on the Philosophy of Logic and Mathematics. It seems to be going quite well. I was responding briefly to the first paper, by Elia Zardini, on a kind of degree theory of vagueness. Since he hasn’t published the paper, I won’t discuss it in detail here. But here are some rather general worries about certain kinds of logical theory of the family Elia seems to like.

Suppose then — just suppose — you like the general idea of a degree-theory of vagueness, according to which you assign propositions values belonging to some many-membered class of values. And it will do no harm for present purposes to simplify and suppose that the class of values is linearly ordered. The minimal proposal is that propositions attributing baldness, say, to borderline cases get appropriately arranged values between the maximum and minimum. There are lots of immediate problems about how on earth we are to interpret these values, but let that pass just for a moment. Let’s instead note that there are of course going to be various ways of mapping values of propositions to unqualifiedly correct utterances of those propositions. And there are going to be various ways of mapping formal relations defined over values onto unqualifiedly correct inferences among evaluated propositions.

One way to go on the first issue would be to be strict. Unqualifiedly correct propositions are to get the maximum value: any other value corresponds to a less than unqualified assertoric success. Alternatively, we could be more relaxed. We could so assign values that any proposition getting a value above some threshold is quite properly assertible outright, but perhaps we give different such propositions different values, on the basis of some principle or other. (For a possible model, suppose, just suppose, you think of values as projections of rational credences; well, we can and do quite properly assert outright propositions for which our credence is less than the absolute maximum, and we can give principles for fine-tuning assignments of credences.)

Suppose then we take the relaxed view: we play the values game so that there’s a threshold such that propositions which get a value above the threshold — get a designated value, in the jargon — are taken as good enough for assertion. Now, this generous view about values for assertion can be combined with various views about what makes for unqualifiedly correct inferences. The familiar line, perhaps, is to take the view that correct inferences must take us from premisses with designated values to a conclusion with a designated value. But there are certainly other ways to play the game. We could, for example (to take us to something in the vicinity of Elia’s recommendation), again be more relaxed and say that acceptable inferences — inferences good enough to be endorsed without hesitation as unqualifiedly good — are those that take us from premisses with good, designated, values to a conclusion which is, at worst, a pretty near miss. We could tolerate a small drop in value from premisses to conclusion.

Well, ok, suppose, just suppose, we play the game in that relaxed mode. Then we should be able to sprinkle values over a long sorites chain so that the initial premiss is designated (is unqualifiedly assertible): the first man is bald. Each conditional in the sorites series is designated, so they are all assertible too (if the n-th man is bald, so is the n+1-th). Each little inference in the sorites is good enough (the value can’t drop too much from premisses to conclusion). But still the value of ‘man n is bald’ can eventually drop below the threshold for being designated.
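For concreteness, here’s a toy numerical model of that relaxed game. Every specific number below (chain length, value lost per step, designation threshold, inferential tolerance) is my own illustrative choice, not anything from Elia’s paper; the conditional is valued in the Łukasiewicz style, one standard option among several:

```python
# Toy model of a relaxed degree theory on a sorites chain.
# All the specific numbers here are illustrative choices of mine,
# not a considered theoretical proposal.
N = 200            # men in the chain
DROP = 0.001       # truth-value lost per step down the chain
THRESHOLD = 0.95   # designated values are those >= THRESHOLD
TOLERANCE = 0.01   # maximum value drop tolerated from premisses to conclusion

def v_bald(n):
    """Degree of truth of 'man n is bald' (man 0 is clearly bald)."""
    return max(0.0, 1.0 - n * DROP)

def v_conditional(n):
    """Lukasiewicz value of 'if man n is bald, so is man n+1'."""
    return min(1.0, 1.0 - v_bald(n) + v_bald(n + 1))

def designated(v):
    return v >= THRESHOLD

# The initial premiss and every conditional are designated ...
assert designated(v_bald(0))
assert all(designated(v_conditional(n)) for n in range(N))

# ... every modus ponens step loses at most TOLERANCE in value ...
for n in range(N):
    assert min(v_bald(n), v_conditional(n)) - v_bald(n + 1) <= TOLERANCE

# ... and yet the final conclusion falls below the threshold.
assert not designated(v_bald(N))
print(round(v_bald(0), 3), round(v_bald(N), 3))  # 1.0 0.8
```

With these numbers, each conditional scores 0.999 and each little inference loses only 0.001 in value; yet by man 200 the value of ‘man n is bald’ has sunk to 0.8, well below the threshold for designation.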

Terrific. Or it would be terrific if only we had some principled reason to suppose that the sprinkling of values made any kind of semantic sense. But do we? As we all know, degree theories are beset with really nasty problems. There are the problems we shelved about how to interpret the values in the first place. And when we get down to cases, there are all sorts of further issues of detail. Just for a start, how many values are there? Too few (like three) and — for example — the tolerance story is difficult to take seriously, and in any case the many-valued theory tends to collapse into a different mode of presentation of a value-gap theory. Too many values and we seem to be faced with daft questions: what could make it the case that ‘Smith is bald’ gets the value 0.147 in the unit interval of values, as against the value 0.148? (Well, maybe talk about ‘what makes it the case’ is just too realist; maybe the degree theorist’s plan is to tell some story about projected credences: but note the seriously anti-realist tendencies of such a theory.) And to return to issues of arbitrariness, even if we settle on some scale of values, what fixes the range of designated values? Why set the threshold at 0.950 rather than 0.951 in the unit interval, say? And what fixes the degree of tolerance that we allow in acceptable inference in taking us from designated premisses to near-miss conclusions?

Well, there is a prima facie response at least to the issues about arbitrariness, and it is the one that Elia likes. Don’t fix on any one generous degree theory or any one version of the relaxedly tolerant story about inferences. Rather, generalize over such theories and go for a logic of inferences that is correct however we set the numerical details. Then, the story goes, we can diagnose what is happening in the sorites without committing ourselves to any particular assignments of values.

It would be misleading to call this a species of supervaluationism — but there’s a family likeness. For the supervaluationist, any one choice of acceptable boundary sharpening is arbitrary; so what we should accept as correct is what comes out robustly, irrespective of sharpenings. Similarly for what we might call the relaxed superdegree theorist: it can be conceded that any one assignment of values to propositions and degree of tolerance in inferences is arbitrary; the claims we are to take seriously about the logic of vagueness are those that come out robustly, irrespective of the detailed assignments.
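What ‘coming out robustly, irrespective of the detailed assignments’ might amount to can be sketched mechanically. The little check below runs the earlier sort of toy sorites model across a small grid of parameter settings (again, every number is a hypothetical choice of mine, purely for illustration); the superdegree thought is that only claims surviving every setting earn a place in the logic:

```python
# Brute-force illustration of 'robustness across assignments': check that
# the sorites diagnosis holds however we set the numbers, within an
# illustrative (hypothetical) family of relaxed degree theories.
from itertools import product

def sorites_diagnosed(n_men, drop, threshold, tolerance):
    """True iff: the top premiss and every conditional are designated,
    every modus ponens step stays within tolerance, and yet the final
    conclusion is undesignated."""
    v = lambda n: max(0.0, 1.0 - n * drop)            # 'man n is bald'
    cond = lambda n: min(1.0, 1.0 - v(n) + v(n + 1))  # Lukasiewicz arrow
    des = lambda x: x >= threshold
    return (des(v(0))
            and all(des(cond(n)) for n in range(n_men))
            and all(min(v(n), cond(n)) - v(n + 1) <= tolerance
                    for n in range(n_men))
            and not des(v(n_men)))

# The diagnosis comes out robustly across this whole parameter grid:
grid = product([0.0005, 0.001, 0.002],   # value drop per step
               [0.90, 0.95, 0.99],       # designation threshold
               [0.005, 0.01, 0.05])      # inferential tolerance
assert all(sorites_diagnosed(1000, d, th, tol) for d, th, tol in grid)
print("diagnosis robust across all settings tried")
```

Anything that failed for even one setting in the grid would be discarded; what survives every setting is what plays the role of the superdegree theorist’s robust logic. (Of course, a finite grid only gestures at genuine generality over all admissible assignments.)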

Well, as they say, it is a view. Here is just one question about it. And it presses on an apparently significant difference of principle between supervaluationism and the relaxed superdegree theory. On the familiar supervaluationist story, faced with a vague predicate F, we imagine various ways of drawing an exact line between the Fs and the non-Fs. Now, that will be arbitrary up to a point, so long as we respect the uncontentious clear cases of Fs and non-Fs. Still, once we’ve drawn the boundary, and got ourselves a refined sharp predicate F*, we can understand perfectly well what we’ve done, and understand what it means for something to be F* or not F*. The supervaluation base, the various sharpenings of F, can at least in principle be perfectly well understood. On the other hand, the relaxed superdegree theory is generalizing over a spectrum of many-valued assignments of degrees of truth (or whatever) to propositions. It’s not clear what the constraints on allowed assignments would be. But there’s a more basic problem. Take any one assignment. Take the 1742-value theory with the top 37 values designated and inferential tolerance set at a drop of 2 degrees. Well, I’ve said the words, but do you really understand what that theory could possibly come to? What could constitute there being 1742 different truth-values? I haven’t the foggiest, and nor have you. We just wouldn’t understand the supposed semantic content of such a theory. So, given that we don’t begin to understand almost any particular degree theory, what (I wonder) can be so great about generalizing over them all? To put the point bluntly: can abstracting and generalizing away from the details of a lot of specific theories that we don’t understand give us a supertheory we do understand and which is semantically satisfactory?

Stewart Shapiro, “Computability, Proof, and Open-Texture”

It was my turn briefly to introduce the Logic Seminar yesterday, and we were looking at Stewart Shapiro’s “Computability, Proof, and Open-Texture” (which is published in Church’s Thesis After 70 Years). I’ve blogged about this before, and although I didn’t look back at what I said then in rereading the paper, I seemed to come to much the same view of it. Here’s a combination of some of what I said yesterday and what I said before. Though let me say straight away that it is a very nice paper, written with Stewart’s characteristic clarity and good sense.

Leaving aside all considerations about physical computability, there are at least three ideas in play in the vicinity of the Church-Turing Thesis. Or rather, there is first a cluster of inchoate, informal, open-ended, vaguely circumscribed ideas of computability, shaped by some paradigm examples of everyday computational exercises; then second there is the semi-technical idea of effective computability (with quite a carefully circumscribed though still informal definition); and then third there is the idea of Turing computability (and along with that, of course, the other provably equivalent characterizations of computability as recursiveness, etc.).

It should be agreed on all sides that our original inchoate, informal, open-ended ideas could and can be sharpened up in various ways. The notion of effective computability takes some strands in the inchoate notion and refines and radically idealizes them in certain ways. But there are other notions, e.g. of feasible/practicable computability, that can be distilled out. It isn’t that the notion of effective computability is — so to speak — the only clear concept waiting to be revealed as the initial fog clears.

Now, I think that Shapiro’s rather Lakatosian comments in his paper about how concepts get refined and developed and sharpened in mathematical practice are all well-taken, as comments about how we get from our initial inchoate preformal ideas to the semi-technical notion of effective computability. And yes, I agree, it is important to emphasize that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision — guided of course by the desire for interesting, fruitful concepts — in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept (or concept-cluster) and arriving at something more sharply bounded.
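The Ackermann point is easy to make vivid. Here is the standard two-argument Ackermann-Péter function: each individual step is utterly mechanical, so the function counts as effectively computable in the idealized sense, yet its values outrun any feasible computation almost immediately:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """The two-argument Ackermann-Peter function. Each recursive step is
    trivially mechanical, so the function is effectively computable in the
    idealized sense, despite its explosive growth."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9: still tame
print(ackermann(3, 3))  # 61: the growth is kicking in
# ackermann(4, 2) = 2**65536 - 3 already has 19729 decimal digits;
# by ackermann(4, 3), talk of actually running the computation is idle.
```

That effective computability nonetheless counts such a function as computable reflects exactly the element of decision just mentioned: we abstract away entirely from the number of steps, insisting only that each step be mechanical.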

Where a question arises is about the relation between the semi-technical notion of effective computability and the notion of Turing computability. Shapiro writes as if the move onwards from the semi-technical notion is (as it were) just more of the same: the same Lakatosian dynamic (rational conceptual development under the pressure of proof-development) is at work in first getting from the original inchoate notion of computability to the notion of effective computability, as is then at work in eventually refining out the notion of Turing computability. Well, that’s one picture. But an alternative picture is that once we have got as far as the notion of effectively computable functions, we do have a notion which, though informal, is subject to sufficient constraints to ensure that it does indeed have a determinate extension (the class of Turing-computable functions). For some exploration of the latter view, see for example Robert Black’s 2000 Philosophia Mathematica paper.

The key question here is: which picture is right? Looking at Shapiro’s paper, it is in fact difficult to discern any argument for supposing that things go his way. He is good and clear about how the notion of effective computability gets developed. But he seems to assume, rather than argue, that we need more of the same kind of conceptual development before we settle on the idea of Turing computability as a canonically privileged concept of computability. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

Starting the shorter Hodges

First model theory seminar today. We were just limbering up, reading the first chapter and a bit of the second chapter. It fell to me to try to say something to introduce the reading — difficult, as nothing very exciting is going on yet: the interesting stuff starts next week. So I offered a few arm-waving thoughts about the way Hodges defines ‘structure’ and about his (and other model theorists’) stretched use of ‘language’. Here is what I said.

[Later: In the light of comments, I’ve slightly revised the intro piece to make what I was saying clearer at a few points. And of course, I was struggling a bit to find anything thought-provoking to say about the very introductory opening pages of Hodges’s book! — so I’m no doubt slightly over-stating the points too.]

Logical Options, 2

Here are the reading notes for the second session on Logical Options. These are focussed on the discussion in Section 1.5 of styles of proof for propositional logic (other than trees/semantic tableaux), namely axiomatic, natural deduction, and sequent proofs. I’ve ended up writing rather more than Bell, DeVidi and Solomon do because I found their discussion oddly patchy (for example, why introduce the idea of natural deduction proofs without at least outlining how they are usually set out, either as Fitch-style indented proofs, or classically Gentzen-style?). I’ll probably get round to editing these notes, energy and time permitting, once we’ve had the class and I get some feedback. But meanwhile, if you are involved with a similar course (as teacher or a student) you might find these notes useful. Comments welcome.

OK, that’s another small beginning of term task done. Next up, I’ve got to put together some thoughts to introduce the first seminar (for a very different bunch!) on the shorter Hodges. Gulp.

Philosophy of Mathematics: Five Questions

I’ve mentioned before the now newly published Philosophy of Mathematics: Five Questions, in which some twenty-eight philosophers, logicians and mathematicians respond to a bunch of questions related to how they see the current state of the philosophy of mathematics. My copy arrived today. (Some of the contributions are on the authors’ web-sites: Jeremy Avigad, Mark Colyvan, Solomon Feferman, Edward Zalta. There are other pieces worth reading by e.g. Geoffrey Hellman, Stewart Shapiro, Alan Weir, and Crispin Wright.)

My first impressions are that (i) it is worth ordering for your university’s library (I imagine that some of the pieces would be quite useful and interesting orientation for students), but (ii) it is a very mixed bag (thus, Feferman offers twenty informative pages, while Thomas Jech and Penelope Maddy provide barely three pages between them), and overall (iii) I guess it is rather disappointing, with too many remarks too brisk and allusive to be very useful to anyone. It is no surprise, for example, that Steve Awodey’s answer to the question “What do you consider the most neglected topics and/or contributions in late 20th century philosophy of mathematics?” is “Category theory”. But he doesn’t tell us why.

Maybe the editors should have been more directive.


Over the holiday season, the reviews pages were full of lists of books of the year (inexplicably, no-one thought to do a list of “best books on Gödel’s theorems in 2007”). I’ve read embarrassingly few of the listed books, though Orlando Figes’s The Whisperers is on the table, still waiting for me to quite finish his wonderful Natasha’s Dance.

The latest list was in The Times over the weekend, a rather bizarre list of the fifty Greatest British Writers Since 1945. They also published a further list of also-rans. One very odd omission who didn’t even make their long list was Jonathan Raban. His Old Glory and Passage to Juneau (for example) are wonderful books. Here’s part of what Douglas Kennedy said about the latter in the Independent:

Raban is, for my money, one of the key writers of the past three decades – not only for his immense stylistic showmanship, but also for the way he has taken that amorphous genre called “travel writing” and utterly redefined its frontiers … Passage to Juneau is his finest achievement to date. Ostensibly an account of a voyage Raban took from his new home in Seattle to the Alaskan capital through that labyrinthine sea route called the Inside Passage, it is, in essence, a book about the nature of loss …You close this extraordinary book marvelling at this most distressing but commonplace of ironies. He’s home, but he’s lost. Just like the rest of us.

But I don’t think “stylistic showmanship” is quite right. The prose is faultless, “as beautiful and clear as the blue ocean on a crisp morning” as another reviewer put it, but not showy, and never inviting you to admire its cleverness. Raban, for my money, is worth a dozen Martin Amises.

[Later: I hadn’t noticed that, as it happens, Raban wrote an illuminating piece on Obama in this last Saturday’s Guardian.]

Godard … and model theory

Pierrot Le Fou
Two more books from the CUP sale. One is David Wills’ Jean-Luc Godard: Pierrot le Fou in the Cambridge Film Handbooks series, which should be a fun read. Shame that there are no colour stills suggesting the amazingly vibrant look of the thing: but since I only paid £3 I really can’t complain. I’m sure that I never even a quarter understood the film, and I’ve not seen it for many years: but I’ve always remembered it as terrific.

The other purchase is Joyal and Moerdijk’s Algebraic Set Theory. Frankly, I bought that more in ambitious hope than in any firm belief that I’ll get my head around it, as my category theory is fragmentary and fragile. But I’ll give it a shot.

Not soon, though, as this term is model-theory term: as I’ve mentioned before, Thomas Forster and I are going to be running a reading group for a mixed bunch, to work through Hodges’s Shorter Model Theory. I’m going to be doing some more much needed background homework/revision over the next week, and at the moment I’m working through some of Chang and Keisler. Incidentally, that’s surely another candidate for a Dover reprint: even though in some ways it isn’t fantastically well written, and so the authors don’t always make the reader’s job a comfortable one, I guess it is still a rightly classic treatment.

Three cheers for the CUP bookshop sale

Ah, it is that time of year again: so off to gather up some absolute bargains at the CUP bookshop sale. Notionally, they are flogging off ‘damaged’ books: but the Press has a remarkably idealized view of what counts as damaged (indeed, in many cases, the only perceptible damage is produced by a large red “DAMAGED” stamped on the title page). My best buy today: I picked up a copy of Aczel/Simmons/Wainer’s Proof Theory for a tenth of the list price — ok, a paperback is in fact due next month, but that’s still an amazing saving.

I also got the volume on Paul Churchland in the ‘Contemporary Philosophy in Focus’ series, which looks fun. And just to show that I’m not merely a scientistically minded logician, I bought a volume of Alasdair MacIntyre’s essays which look, erm, uplifting (well, for a mere £3, I thought I could do with some enlightenment). If the threatened snow holds off, I’ll return tomorrow, as they keep putting out more sale books to tempt one back. The fact that I haven’t yet got round to reading the purchases from last year’s sales of course doesn’t dampen my enthusiasm one jot …
