Superdegree theories of vagueness
The first day of the first Cambridge Graduate Conference on the Philosophy of Logic and Mathematics. It seems to be going quite well. I was responding briefly to the first paper, by Elia Zardini, on a kind of degree theory of vagueness. Since he hasn’t published the paper, I won’t discuss it in detail here. But here are some rather general worries about certain kinds of logical theory of the family Elia seems to like.
Suppose then — just suppose — you like the general idea of a degree-theory of vagueness, according to which you assign propositions values belonging to some many-membered class of values. And it will do no harm for present purposes to simplify and suppose that the class of values is linearly ordered. The minimal proposal is that propositions attributing baldness, say, to borderline cases get appropriately arranged values between the maximum and minimum. There are lots of immediate problems about how on earth we are to interpret these values, but let that pass just for a moment. Let’s instead note that there are of course going to be various ways of mapping values of propositions to unqualifiedly correct utterances of those propositions. And there are going to be various ways of mapping formal relations defined over values onto unqualifiedly correct inferences among evaluated propositions.
One way to go on the first issue would be to be strict. Unqualifiedly correct propositions are to get the maximum value: any other value corresponds to a less than unqualified assertoric success. Alternatively, we could be more relaxed. We could so assign values that any proposition getting a value above some threshold is quite properly assertible outright, but perhaps we give different such propositions different values, on the basis of some principle or other. (For a possible model, suppose, just suppose, you think of values as projections of rational credences; well, we can and do quite properly assert outright propositions for which our credence is less than the absolute maximum, and we can give principles for fine-tuning assignments of credences.)
Suppose then we take the relaxed view: we play the values game so that there’s a threshold such that propositions which get a value above the threshold — get a designated value, in the jargon — are taken as good enough for assertion. Now, this generous view about values for assertion can be combined with various views about what makes for unqualifiedly correct inferences. The familiar line, perhaps, is to take the view that correct inferences must take us from premisses with designated values to a conclusion with a designated value. But there are certainly other ways to play the game. We could, for example (to take us to something in the vicinity of Elia’s recommendation), again be more relaxed and say that acceptable inferences — inferences good enough to be endorsed without hesitation as unqualifiedly good — are those that take us from premisses with good, designated, values to a conclusion which is, at worst, a pretty near miss. We could tolerate a small drop in value from premisses to conclusion.
Well, ok, suppose, just suppose, we play the game in that relaxed mode. Then we should be able to sprinkle values over a long sorites chain so that the initial premiss is designated (is unqualifiedly assertible): the first man is bald. Each conditional in the sorites series is designated (so they are all assertible too): if the n-th man is bald, so is the n+1-th. Each little inference in the sorites is good enough (the value can’t drop too much from premisses to conclusion). But still the value of ‘man n is bald’ can eventually drop below the threshold for being designated.
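If that sounds like sleight of hand, the arithmetic is easily made vivid. Here is a toy sketch in Python; all the particular numbers in it (a threshold of 0.95, a tolerance of one hundredth, conditionals valued at 0.99, a hundred-man chain) are my own illustrative assumptions, not Elia’s. The point is just that every single step respects the tolerance, and yet the value of ‘man n is bald’ slides below the threshold and eventually all the way to zero.

```python
# A toy model of the relaxed sorites. Every number here is an illustrative
# assumption of mine (threshold, tolerance, conditional values, chain length),
# not anything taken from the paper under discussion.

THRESHOLD = 0.95   # values at or above this are designated, i.e. assertible outright
TOLERANCE = 0.01   # an acceptable inference may drop the value by at most this much
COND_VALUE = 0.99  # value assigned to each conditional 'if man n is bald, so is man n+1'

def run_sorites(chain_length=100):
    value = 1.0  # value of 'man 0 is bald': maximal, so certainly designated
    for n in range(chain_length):
        new_value = max(0.0, value - TOLERANCE)  # value of 'man n+1 is bald'
        # A step counts as acceptable if its conclusion is at most TOLERANCE below
        # the least-valued premiss (the conditional, or 'man n is bald').
        least_premiss = min(value, COND_VALUE)
        assert least_premiss - new_value <= TOLERANCE + 1e-12  # every step passes
        if new_value < THRESHOLD <= value:
            print(f"'man {n + 1} is bald' is the first proposition whose value "
                  f"falls below the threshold")
        value = new_value
    print(f"after {chain_length} individually acceptable steps, the value of "
          f"'man {chain_length} is bald' is {value:.2f}")

run_sorites()
```

(I have taken the value of a set of premisses to be the least of their values, in line with the usual many-valued treatment of conjunction; nothing here hangs on that choice.)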
Terrific. Or it would be terrific if only we had some principled reason to suppose that the sprinkling of values made any kind of semantic sense. But do we? As we all know, degree theories are beset with really nasty problems. There are those problems, which we shelved, about how to interpret the values in the first place. And when we get down to cases, there are all sorts of issues of detail. Just for a start, how many values are there? Too few (like three) and — for example — the tolerance story is difficult to take seriously, and in any case the many-valued theory tends to collapse into a different mode of presentation of a value-gap theory. Too many values and we seem to be faced with daft questions: what could make it the case that ‘Smith is bald’ gets the value 0.147 in the unit interval of values, as against the value 0.148? (Well, maybe talk about ‘what makes it the case’ is just too realist; maybe the degree theorist’s plan is to tell some story about projected credences: but note the seriously anti-realist tendencies of such a theory.) And to return to issues of arbitrariness, even if we settle on some scale of values, what fixes the range of designated values? Why set the threshold at 0.950 rather than 0.951 in the unit interval, say? And what fixes the degree of tolerance we allow an acceptable inference in taking us from designated premisses to a near-miss conclusion?
Well, there is a prima facie response at least to the issues about arbitrariness, and it is the one that Elia likes. Don’t fix on any one generous degree theory or any one version of the relaxedly tolerant story about inferences. Rather, generalize over such theories and go for a logic of inferences that is correct however we set the numerical details. Then, the story goes, we can diagnose what is happening in the sorites without committing ourselves to any particular assignments of values.
It would be misleading to call this a species of supervaluationism — but there’s a family likeness. For the supervaluationist, any one choice of acceptable boundary sharpening is arbitrary; so what we should accept as correct is what comes out robustly, irrespective of sharpenings. Similarly for what we might call the relaxed superdegree theorist: it can be conceded that any one assignment of values to propositions and degree of tolerance in inferences is arbitrary; the claims we are to take seriously about the logic of vagueness are those that come out robustly, irrespective of the detailed assignments.
Well, as they say, it is a view. Here is just one question about it. And it presses on an apparently significant difference of principle between supervaluationism and the relaxed superdegree theory. On the familiar supervaluationist story, faced with a vague predicate F, we imagine various ways of drawing an exact line between the Fs and the non-Fs. Now, that will be arbitrary up to a point, so long as we respect the uncontentious clear cases of Fs and non-Fs. Still, once we’ve drawn the boundary, and got ourselves a refined sharp predicate F*, we can understand perfectly well what we’ve done, and understand what it means for something to be F* or not F*. The supervaluation base, the various sharpenings of F, can at least in principle be perfectly well understood.

On the other hand, the relaxed superdegree theory is generalizing over a spectrum of many-valued assignments of degrees of truth (or whatever) to propositions. It’s not clear what the constraints on allowed assignments would be. But there’s a more basic problem. Take any one assignment. Take the 1742-value theory with the top 37 values designated and inferential tolerance set at a drop of 2 degrees. Well, I’ve said the words, but do you really understand what that theory could possibly come to? What could constitute there being 1742 different truth-values? I haven’t the foggiest, and nor have you. We just wouldn’t understand the supposed semantic content of such a theory. So, given that we don’t begin to understand almost any particular degree theory, what (I wonder) can be so great about generalizing over them all? To put the point bluntly: can abstracting and generalizing away from the details of a lot of specific theories that we don’t understand give us a supertheory we do understand and which is semantically satisfactory?