Me and J.K. Rowling

Good heavens! Amazon UK reports the Gödel book this morning as 3,069 in the sales ranking. That makes me and J.K. Rowling, who lives permanently at number 1, practically neighbours. I’m preparing myself for the inevitable change of life-style.

Googling around to see what their sales rankings really mean, I find that a snapshot ranking means diddly squat. Still … in the academic philosophy rankings, Gödel (currently at 7) beats the pomos, so that is — just for the moment! — cheering.

Isaacson again

Hooray! A version of my talk at the Isaacson day we had in Cambridge a couple of months ago has been accepted by Analysis, and will appear in January. Michael Clark has kindly agreed to publish it as a preprint on the Analysis website shortly (as soon as I can un-LaTeX it into a W*rd document, arggghhh!).

For the moment, I’ve put a link to a late draft of the paper in the “Other materials” page on the Gödel book website which (at last) I’m starting slowly to build up. I need in particular to put my mind to compiling fun(?) sets of exercises. That’s because IGT does not contain end-of-chapter exercises, for two reasons. First, the book is already long and adding copious exercises would have made it longer still. Secondly, I didn’t want to put off the more philosophically inclined half of my readers by making the book look too forbidding.

I discovered the first misprints today. Fortunately they are tiny ones: on p. 341 I oddly use “primitively recursive” twice. But as misprints go, these are not going to cause any loss of sleep!

Absolute Generality 1: Kit Fine and the All in One Principle

OK, time to make a start on blogviewing Absolute Generality, edited by Agustín Rayo and Gabriel Uzquiano (OUP, 2006).

As in the Church’s Thesis volume, the editors take the easy line of printing the papers in alphabetical order by the authors’ names, and they don’t offer any suggestions as to what might make a sensible reading order. So we’ll just have to dive in and see what happens. First up is a piece by Kit Fine called “Relatively Unrestricted Quantification”.

And it has to be said straight away that this is, presentationally, pretty awful. Length issues aside, no way would something written like this have got into Analysis when I was editing it. This isn’t just me being captious: sitting down with three very bright and knowledgeable graduate students and a recent PhD, we all struggled to make sense of it. There really isn’t any excuse for writing this kind of philosophy with less than absolute clarity and plain-speaking directness. It could well be, then, that my comments — such as they are — are based on misunderstandings. But if so, I’m not sure this is entirely my fault!

Fine holds that if there is a good case to be made against absolutely unrestricted quantification, then it will be based on what he calls “the classic argument from indefinite extendibility”. So the paper kicks off by presenting a version of the argument. Suppose the ‘universalist’ purports to use a (first-order) quantifier ∀ that ranges over everything. Then, the argument goes, “we can come to an understanding of a quantifier according to which there is an object … of which every object, in his sense of the quantifier, is a member”. Then, by separation, we can define another object R whose members are all and only the things in the universalist’s domain which are not members of themselves — and on pain of the Russell paradox, this object cannot be in the original domain. So we can introduce a quantifier ∀+ that runs over this too, and hence the universalist’s quantifier wasn’t absolutely general.
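
To fix ideas, here is a minimal schematic rendering of that argument (my notation, not Fine’s). Write D for the universalist’s domain, now treated as a set-like object; then separation gives

\[
R \;=\; \{\, x \in D : x \notin x \,\},
\]

and if R were in D we would have R ∈ R just in case R ∉ R, the familiar Russell contradiction. So R ∉ D, and a quantifier ∀+ ranging over D together with R outstrips the original ∀.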

Well, this general line of argument is of course very familiar. What I initially found a bit baffling is Fine’s claim that it doesn’t involve an appeal to what Cartwright calls the All in One principle. Here’s a statement of the principle at the end of Cartwright’s paper:

Any objects that can be taken to be the values of the variables of a first-order language constitute a domain.

where a domain is something set-like. Which looks to be exactly the principle appealed to in the first step of Fine’s argument. So why does Fine say otherwise?

Well, Fine picks up on Cartwright’s initial statement of the principle:

to quantify over certain objects is to presuppose that those objects constitute a ‘collection’ or a ‘completed collection’ — some one thing of which those objects are members.

And then Fine leans heavily on the word ‘presuppose’, saying that the extendibility argument isn’t claiming that an understanding of the universalist’s ∀ already presupposes a conception of the domain-as-object and hence an understanding of ∀+; it’s the other way around: an understanding of ∀+ presupposes an understanding of ∀. Well, sure. But Cartwright was not saying otherwise; at worst he slightly mis-spoke. His idea, as the rest of his paper surely makes clear, is that the extendibility argument relies on the thought that where there is quantification over certain objects we must be able to take those objects as forming a completed collection; he isn’t saying that understanding quantification already presupposes thinking of the objects quantified over as constituting another object. Anyone persuaded by Cartwright’s paper, then, won’t find Fine’s version of the extendibility argument any more convincing than usual.

[To be continued]

Gödel mangled

Here is E. T. Jaynes writing in Probability Theory: The Logic of Science (CUP, 2003).

A famous theorem of Kurt Gödel (1931) states that no mathematical system can provide a proof of its own consistency. … To understand the above result, the essential point is the principle of elementary logic that a contradiction implies all propositions. Let A be the system of axioms underlying a mathematical theory and T any proposition, or theorem, deducible from them. Now whatever T may assert, the fact that T can be deduced from the axioms cannot prove that there is no contradiction in them, since if there were a contradiction, T could certainly be deduced from them! This is the essence of the Gödel theorem. [pp 45-46, slightly abbreviated]

This is of course complete bollocks, to use a technical term. The Second Theorem has nothing particularly to do with the claim that in classical systems a contradiction implies anything: for a start, the Theorem applies equally to theories built in a relevant logic which lacks ex falso quodlibet.
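
For what it’s worth, the “principle of elementary logic” that Jaynes invokes is standardly got by the familiar Lewis argument, which relevant logicians block precisely by refusing the unrestricted use of disjunctive syllogism:

\[
\frac{A}{A \lor B}\ (\lor\text{-introduction}) \qquad\qquad \frac{A \lor B \quad \neg A}{B}\ (\text{disjunctive syllogism})
\]

so that from A and ¬A together we can get any B whatever.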

How can Jaynes have gone so wrong? Suppose we are dealing with a system A with classical logic, and Con encodes ‘A is consistent’. Then, to be sure, we might reflect that, even were A to entail Con, that wouldn’t prove that A is consistent, because it could entail Con by being inconsistent. So someone might say — students sometimes do say — “If A entailed its own consistency, we’d still have no special reason to trust it! So Gödel’s proof that A can’t prove its own consistency doesn’t really tell us anything interesting.” But that is thumpingly point-missing. The key thing, of course, is that since a system containing elementary arithmetic can’t prove its own consistency, it can’t prove the consistency of any stronger theory either. So we can’t use arithmetical reasoning to prove the consistency of e.g. set theory — thus sabotaging Hilbert’s hope that we could do exactly that sort of thing.
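
Just to have the official content on the table, in its usual schematic form (the details depend on exactly how consistency gets encoded): for any consistent, recursively axiomatized theory T containing enough elementary arithmetic,

\[
T \nvdash \mathrm{Con}(T),
\]

and, given natural arithmetizations, such a T also proves Con(S) → Con(T) whenever S is a recursively axiomatized theory extending T; so T can’t prove Con(S) either, on pain of proving Con(T) after all.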

Jaynes’s ensuing remarks show that he hasn’t understood the First Theorem either. He seems to think it is just the ‘platitude’ that the axioms of a [mathematical] system might not provide enough information to decide a given proposition. Sigh.

How does this stuff get published? I was sent the references by a grad student working in probability theory who was suitably puzzled. Apparently Jaynes is well regarded in his neck of the woods …

Hurry, hurry, while stocks last …

A knock on my office door an hour ago, and the porter brought in two boxes, with half a dozen pre-publication copies each of the hardback and the paperback of my Gödel book.

It looks terrific. Even though I did the LaTeX typesetting, I’m happily surprised by the look of the pages (they are symbol-heavy large format pages in small print, yet they don’t seem off-puttingly dense).

As for content, I’ve learnt from experience that it’s best just to glance proudly at a new book and then put it on the shelf for a few months — for if you start reading, you instantly spot things you don’t like, things that could have been put better, not to mention the inevitable typos. But of course, the content is mostly wonderful … so hurry, hurry to your bookshop or to Amazon and order a copy right now.

Forthcoming attractions …

Well, having perhaps rather foolishly said I was thinking about blogging on the Absolute Generality collection edited by Agustin Rayo and Gabriel Uzquiano, I’ve been asked to review it for the Bulletin of Symbolic Logic. So that decides the matter: it is unrestricted quantification, indefinite extensibility, and similar attractions next! Oh what fun …

You can get a good idea of what is in the collection by reading the introduction here. And I’ll start commenting paper by paper next week — starting with Kit Fine’s paper — as a kind of warm-up to reviewing the book “properly” for BSL. All comments as we go along will of course be very welcome!

Church’s Thesis 15: Three last papers

The next paper in the Olszewski collection is Wilfried Sieg’s “Step by recursive step: Church’s analysis of effective computability”. And if that title seems familiar, that’s because the paper was first published ten years(!) ago in the Bulletin of Symbolic Logic. I’ve just reread the paper, and the historical fine detail is indeed interesting, but not (I think) particularly exciting if your concerns are about the status of Church’s Thesis now the dust has settled. So, given that the piece is familiar, I don’t feel moved to comment on it further here.

Sieg’s contribution is disappointing because it is old news; the last two papers are disappointing because neither says anything much about Church’s Thesis (properly understood as a claim about the coextensiveness of the notions of effective computability and recursiveness). Karl Svozil, in “Physics and Metaphysics Look at Computation”, instead writes about what physical processes can compute, and in particular says something about quantum computing (and says it too quickly to be other than fairly mystifying). And David Turner’s “Church’s Thesis and Functional Programming” really ought to be called “Church’s Lambda Calculus and Functional Programming”.

Which brings us to the end of the collection. A very disappointing (at times, rather depressing) read, I’m afraid. My blunt summary suggestion: read the papers by Copeland, Shagrir, and Shapiro and you can really give the other nineteen a miss …

Church’s Thesis 14: Open texture and computability

Back at last to my blogview of the papers in Church’s Thesis After 70 Years (new readers can start here!) — and we’ve reached a very nice paper by Stewart Shapiro, “Computability, Proof, and Open-Texture”, written with his characteristic clarity and good sense. One of the few ‘must read’ papers in the collection.

But I suspect that Shapiro somewhat misdescribes the basic logical geography of the issues in this area: so while I like many of the points he makes in his paper, I don’t think they support quite the conclusion that he draws. Let me explain.

There are three concepts hereabouts that need to be considered. First, there is the inchoate notion of what is computable, pinned down — in so far as it is pinned down — by examples of paradigm computations. Second, there is the idealized though still informal notion of effective computability. Third, there is the notion of Turing computability (or alternatively, recursive computability).

Church’s Thesis is standardly taken — and I’ve been taking it — to be a claim about the relation between the second and third concepts: they are co-extensive. And the point to emphasize is that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision — guided of course by the desire for interesting, fruitful concepts — in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept and arriving at something more sharply bounded.
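
In case a reminder helps, the Ackermann–Péter function mentioned a moment ago is given by the usual double recursion

\[
A(0,n) = n+1, \qquad A(m+1,0) = A(m,1), \qquad A(m+1,n+1) = A(m,\, A(m+1,n)),
\]

where each individual step is as simple and mechanical as could be; yet already A(4, 2) = 2^65536 − 3, a number with nearly twenty thousand decimal digits.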

However, the question that has lately been the subject of some debate in the literature — the question whether we can give an informal proof of Church’s Thesis — is a question that arises after an initial exercise of conceptual refinement has been done, and we have arrived at the idea of effective computability. Is the next move from the idea of effective computability to the idea of Turing computability (or some equivalent) another move like the initial move from the notion of computability to the idea of effective computability? In other words, does this just involve a further reduction in open texture, guided by further considerations ultimately of the same kind as are involved in the initial reduction of open texture in the inchoate concept of computability (so that the move is rendered attractive for certain purposes but is not uniquely compulsory)? Or could it be that once we have got as far as the notion of effective computability — informal though that notion is — we have in fact imposed sufficient constraints to force the effectively computable functions to be none other than the Turing computable functions?

Sieg, for example, has explored the second line, and I offer arguments for it in my Gödel book. And of course the viability of this line is not in the slightest bit affected by agreeing that the move from the initial notion of computability to the notion of effective computability involves a number of non-compulsory decisions in reducing open texture. Shapiro segues rather too smoothly from discussion of the conceptual move from the inchoate notion of computability to the notion of effective computability to discussion of the move from effective computability to Turing computability. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

But read his paper and judge for yourself!

Normal service will soon be resumed …

The last tripos meeting today, and as always justice was of course done. I wish. (Oh, roll on the day when we stop having to place students in artificially bounded classes — first, upper second, etc. — and just rank-order on each paper and give transcripts …). As chair of the examining boards for Parts IB and II, I’ve been much distracted this week, but I hope to get back to finishing my blogview of the Olszewski collection over the weekend, and then I’m thinking of turning to the papers in the recent collection on Absolute Generality edited by Agustin Rayo and Gabriel Uzquiano.

A delight in the post today (a belated birthday present). Four Haydn CDs to fill random holes in my otherwise almost complete collection of the Lindsays’ recordings (almost, because I’ve never been moved to get the remainder of their Tippett disks). One of the terrific things about living in Sheffield in the nineties was that we overlapped with the Lindsays at their peak. They lived in the city, and often played at the Crucible Studio Theatre. That was a wonderful small space: they would sit facing each other in a square with three hundred or so closely packed around in tiers, and in that intimate atmosphere play with an unmatched intensity and directness, talking to the audience between pieces. Nothing has come close since.

It’s tough being a philosophy student …

The last tripos scripts marked! And by happy chance, the answers I marked today were among the best of the season, and cheered me up a lot. (I’m Chair of the Examining Board for Parts IB and II this year so I’ve meetings to organize this coming week, and still have to examine three M.Phil. dissertations: but at least I’ve read all the relevant undergraduate scripts for the year. And I’m on leave next summer term, so that’s it for two years. Terrific.)

It strikes me again while marking that it’s quite tough being a philosophy student these days: the disputes you are supposed to get your head around have become so sophisticated, the to and fro of the dialectic often so intricate. An example. When I first started teaching, Donnellan’s paper on ‘Reference and Definite Descriptions’ had quite recently been published — it was state of the art. An undergraduate who could talk some sense about his distinction between referential and attributive uses of descriptions was thought to be doing really well. Just think what we’d expect a first class student to know about the Theory of Descriptions nowadays (for a start, Kripke’s response to Donnellan, problems with Kripke’s Gricean manoeuvres, etc.). True there are textbooks, Stanford Encyclopedia articles, and the like to help the student through: but still, the level of sophistication we now expect from our best undergraduates is daunting.
