## Gödel mangled

Here is E. T. Jaynes writing in Probability Theory: The Logic of Science (CUP, 2003).

> A famous theorem of Kurt Gödel (1931) states that no mathematical system can provide a proof of its own consistency. … To understand the above result, the essential point is the principle of elementary logic that a contradiction implies all propositions. Let A be the system of axioms underlying a mathematical theory and T any proposition, or theorem, deducible from them. Now whatever T may assert, the fact that T can be deduced from the axioms cannot prove that there is no contradiction in them, since if there were a contradiction, T could certainly be deduced from them! This is the essence of the Gödel theorem. [pp. 45–46, slightly abbreviated]

This is of course complete bollocks, to use a technical term. The Second Theorem has nothing particularly to do with the claim that in classical systems a contradiction implies anything: for a start, the Theorem applies equally to theories built in a relevant logic which lacks ex falso quodlibet.
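The classical principle Jaynes invokes — that a contradiction entails everything — is at least genuinely elementary. In a proof assistant like Lean, for instance, it is a one-liner:

```lean
-- Ex falso quodlibet: from a contradiction P ∧ ¬P, any proposition Q follows.
example (P Q : Prop) (h : P ∧ ¬ P) : Q :=
  absurd h.1 h.2
```

But of course — and this is the point — the Second Theorem does not turn on that principle at all.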

How can Jaynes have gone so wrong? Suppose we are dealing with a system with classical logic, and Con encodes ‘A is consistent’. Then, to be sure, we might reflect that, even were A to entail Con, that wouldn’t prove that A is consistent, because it could entail Con by being inconsistent. So someone might say — students sometimes do say — “If A entailed its own consistency, we’d still have no special reason to trust it! So Gödel’s proof that A can’t prove its own consistency doesn’t really tell us anything interesting.” But that is thumpingly point-missing. The key thing, of course, is that since a system containing elementary arithmetic can’t prove its own consistency, it can’t prove the consistency of any stronger theory either. So we can’t use arithmetical reasoning to prove the consistency of, e.g., set theory — thus sabotaging Hilbert’s hope that we could do exactly that sort of thing.
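For the record, the step from the Second Theorem to the sabotage of Hilbert’s programme can be set out schematically — this is just a sketch, with T a consistent theory containing enough arithmetic, and S extending T in a way T can verify:

```latex
\begin{align*}
  &T \vdash \mathrm{Con}(S) \to \mathrm{Con}(T)
    && \text{(since $T$ verifies that $S$ extends $T$)}\\
  &T \nvdash \mathrm{Con}(T)
    && \text{(Second Theorem)}\\
  &\therefore\ T \nvdash \mathrm{Con}(S)
    && \text{(so no $T$-proof of the consistency of $S$)}
\end{align*}
```

So arithmetical reasoning formalizable in T can’t underwrite the consistency of set theory or any other stronger S.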

Jaynes’s ensuing remarks show that he hasn’t understood the First Theorem either. He seems to think it is just the ‘platitude’ that the axioms of a [mathematical] system might not provide enough information to decide a given proposition. Sigh.

How does this stuff get published? I was sent the references by a grad student working in probability theory who was suitably puzzled. Apparently Jaynes is well regarded in his neck of the woods …

## Hurry, hurry, while stocks last …

A knock on my office door an hour ago, and the porter brought in two boxes, with half a dozen pre-publication copies each of the hardback and the paperback of my Gödel book.

It looks terrific. Even though I did the LaTeX typesetting, I’m happily surprised by the look of the pages (they are symbol-heavy large format pages in small print, yet they don’t seem off-puttingly dense).

As for content, I’ve learnt from experience that it’s best just to glance proudly at a new book and then put it on the shelf for a few months — for if you start reading, you instantly spot things you don’t like, things that could have been put better, not to mention the inevitable typos. But of course, the content is mostly wonderful … so hurry, hurry to your bookshop or to Amazon and order a copy right now.

## Forthcoming attractions …

Well, having perhaps rather foolishly said I was thinking about blogging on the Absolute Generality collection edited by Agustin Rayo and Gabriel Uzquiano, I’ve been asked to review it for the Bulletin of Symbolic Logic. So that decides the matter: it is unrestricted quantification, indefinite extensibility, and similar attractions next! Oh what fun …

You can get a good idea of what is in the collection by reading the introduction here. And I’ll start commenting paper by paper next week — starting with Kit Fine’s paper — as a kind of warm-up to reviewing the book “properly” for BSL. All comments as we go along will of course be very welcome!

## Church’s Thesis 15: Three last papers

The next paper in the Olszewski collection is Wilfried Sieg’s “Step by recursive step: Church’s analysis of effective computability”. And if that title seems familiar, that’s because the paper was first published ten years(!) ago in the Bulletin of Symbolic Logic. I’ve just reread the paper, and the historical fine detail is indeed interesting, but not (I think) particularly exciting if your concerns are about the status of Church’s Thesis now the dust has settled. So, given that the piece is familiar, I don’t feel moved to comment on it further here.

Sieg’s contribution is disappointing because it is old news; the last two papers are disappointing because neither says anything much about Church’s Thesis (properly understood as a claim about the coextensiveness of the notions of effective computability and recursiveness). Karl Svozil, in “Physics and Metaphysics Look at Computation”, instead writes about what physical processes can compute, and in particular says something about quantum computing (and says it too quickly to be other than fairly mystifying). And David Turner’s “Church’s Thesis and Functional Programming” really ought to be called “Church’s Lambda Calculus and Functional Programming”.
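To give the uninitiated at least a taste of the connection Turner is interested in — the following toy illustration is mine, not Turner’s — Church numerals, the lambda calculus encoding of the natural numbers, transpose directly into any functional idiom. Here rendered in Python:

```python
# Church numerals: Church's encoding of the naturals as pure functions.
# A numeral n is the higher-order function "apply f, n times, to x".
zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application of f

def add(m, n):
    """Addition: apply f n times, then a further m times."""
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two, three)))  # prints 5
```

Nothing deep, but it does show why functional programmers claim Church as an ancestor — which is Turner’s real topic, rather than Church’s Thesis itself.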

Which brings us to the end of the collection. A very disappointing (at times, rather depressing) read, I’m afraid. My blunt summary suggestion: read the papers by Copeland, Shagrir, and Shapiro and you can really give the other nineteen a miss …

## Church’s Thesis 14: Open texture and computability

Back at last to my blogview of the papers in Church’s Thesis After 70 Years (new readers can start here!) — and we’ve reached a very nice paper by Stewart Shapiro, “Computability, Proof, and Open-Texture”, written with his characteristic clarity and good sense. One of the few ‘must read’ papers in the collection.

But I suspect that Shapiro somewhat misdescribes the basic logical geography of the issues in this area: so while I like many of the points he makes in his paper, I don’t think they support quite the conclusion that he draws. Let me explain.

There are three concepts hereabouts that need to be considered. First, there is the inchoate notion of what is computable, pinned down — in so far as it is pinned down — by examples of paradigm computations. Second, there is the idealized though still informal notion of effective computability. Third, there is the notion of Turing computability (or alternatively, recursive computability).

Church’s Thesis is standardly taken — and I’ve been taking it — to be a claim about the relation between the second and third concepts: they are co-extensive. And the point to emphasize is that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision — guided of course by the desire for interesting, fruitful concepts — in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept and arriving at something more sharply bounded.
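Since the Ackermann function has come up: it is worth seeing just how short its definition is — a total, effectively computable function in the idealized sense, whose values explode for tiny arguments. A quick sketch in Python (my illustration, not Shapiro’s):

```python
import sys
sys.setrecursionlimit(100_000)  # the naive recursion gets deep quickly

def ackermann(m, n):
    """The Ackermann-Peter function: total and effectively computable,
    but its values and running time explode for very small arguments."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61
# ackermann(4, 2) = 2**65536 - 3, a number with 19,729 decimal digits:
# 'computable' in the idealized sense, but don't wait for this recursion!
```

Which is exactly the point: our everyday judgements about common-or-garden computations don’t by themselves settle whether such a function counts as ‘computable’; the idealization involves a decision.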

However, the question that has lately been the subject of some debate in the literature — the question whether we can give an informal proof of Church’s Thesis — is a question that arises after an initial exercise of conceptual refinement has been done, and we have arrived at the idea of effective computability. Is the next move from the idea of effective computability to the idea of Turing computability (or some equivalent) another move like the initial move from the notion of computability to the idea of effective computability? In other words, does this just involve further reduction in open texture, guided by more considerations ultimately of the same kind as are involved in the initial reduction of open texture in the inchoate concept of computability (so the move is rendered attractive for certain purposes but is not uniquely compulsory)? Or could it be that once we have got as far as the notion of effective computability — informal though that notion is — we have in fact imposed sufficient constraints to force the effectively computable functions to be none other than the Turing computable functions?

Sieg, for example, has explored the second line, and I offer arguments for it in my Gödel book. And of course the viability of this line is not in the slightest bit affected by agreeing that the move from the initial notion of computability to the notion of effective computability involves a number of non-compulsory decisions in reducing open texture. Shapiro segues rather too smoothly from discussion of the conceptual move from the inchoate notion of computability to the notion of effective computability to discussion of the move from effective computability to Turing computability. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

But read his paper and judge for yourself!

## Normal service will be soon resumed …

The last tripos meeting today, and as always justice was of course done. I wish. (Oh, roll on the day when we stop having to place students in artificially bounded classes — first, upper second, etc. — and just rank-order on each paper and give transcripts …). As chair of the examining boards for Parts IB and II, I’ve been much distracted this week, but I hope to get back to finishing my blogview of the Olszewski collection over the weekend, and then I’m thinking of turning to the papers in the recent collection on Absolute Generality edited by Agustin Rayo and Gabriel Uzquiano.

A delight in the post today (a belated birthday present). Four Haydn CDs to fill random holes in my otherwise almost complete collection of the Lindsays’ recordings (almost, because I’ve never been moved to get the remainder of their Tippett disks). One of the terrific things about living in Sheffield in the nineties was that we overlapped with the Lindsays at their peak. They lived in the city, and often played at the Crucible Studio Theatre. That was a wonderful small space: they would sit facing each other in a square with three hundred or so closely packed around in tiers, and in that intimate atmosphere play with an unmatched intensity and directness, talking to the audience between pieces. Nothing has come close since.

## It’s tough being a philosophy student …

The last tripos scripts marked! And by happy chance, the answers I marked today were among the best of the season, and cheered me up a lot. (I’m Chair of the Examining Board for Parts IB and II this year so I’ve meetings to organize this coming week, and still have to examine three M.Phil. dissertations: but at least I’ve read all the relevant undergraduate scripts for the year. And I’m on leave next summer term, so that’s it for two years. Terrific.)

It strikes me again while marking that it’s quite tough being a philosophy student these days: the disputes you are supposed to get your head around have become so sophisticated, the to and fro of the dialectic often so intricate. An example. When I first started teaching, Donnellan’s paper on ‘Reference and Definite Descriptions’ had quite recently been published — it was state of the art. An undergraduate who could talk some sense about his distinction between referential and attributive uses of descriptions was thought to be doing really well. Just think what we’d expect a first class student to know about the Theory of Descriptions nowadays (for a start, Kripke’s response to Donnellan, problems with Kripke’s Gricean manoeuvres, etc.). True there are textbooks, Stanford Encyclopedia articles, and the like to help the student through: but still, the level of sophistication we now expect from our best undergraduates is daunting.

## Quackery and the philosophy of science

I’ve just added a link to Prof. David Colquhoun’s blog on quackery, now renamed Improbable Science (after a bit of a legal kerfuffle, recounted in today’s Guardian). Great stuff.

And indeed, there are some really terrific internet resources out there, doing their best to counteract the tide of unreason that sometimes seems to be lapping at our doors. A question though. You’d have thought that philosophers of science would have a strong interest in joining in enthusiastically with the public defence of scientific reason, and doing their bit to chip away at the complete bollocks of creationism, intelligent design, and all the rest. Where are they all? Maybe it is just incompetent googling on my part, but I haven’t noticed any vigorous internet interventions. But I’d be very happy to be proved wrong … any pointers to blogs for me to link to?

## Church’s Thesis 13: Gödel on Turing

Phew! At last, I can warmly recommend another paper in Church’s Thesis After 70 Years …

Although initially Gödel was hesitant, by about 1938 he is praising Turing’s work as establishing the “correct definition” of computability. Yet in 1972 he writes a short paper on undecidability, which includes a section headed “A philosophical error in Turing’s work”, in which Gödel takes issue with Turing’s implicit assumption of the finitude of the human mind. Question: is this a change of view on Gödel’s part (from seeing Turing’s work as successful to seeing it as fatally flawed)?

Surely not. What Gödel was praising in 1938 was Turing’s analysis of a finite step-by-step computational procedure. (Recall the context: Turing was originally fired up by the Entscheidungsproblem, which is precisely the question whether there is a finitistic procedure that can be mechanically applied to decide whether a sentence is a first-order logical theorem. So it is analysis of such procedures that is called for, and that was of concern to Gödel too.) What the later Gödel was resisting in 1972 was Turing’s further thought that, in the final analysis, human mental procedures cannot go beyond such finite mechanical procedures (he was inclined to think that, in the words of his Gibbs Lecture, the human mind “infinitely surpasses the powers of any finite machine”). There is no change of mind, just a change of topic.

That’s at any rate what I have previously taken to be the natural view, without doing any careful homework to support it (but it is of course the view that fits with the important distinction I’ve been stressing in these comments, between CT as a claim about effective computability, and bolder theses about what can be computed by machines and/or minds). And so it is very good to see that now Oron Shagrir has given a careful and convincing defence of this view, with lots of detailed references, in his (freely downloadable!) paper “Gödel on Turing on computability”. Good stuff!

## Church’s Thesis 12: Kreisel, Church

I’m going to pass over the next three papers in the Olszewski collection pretty quickly, for various reasons.

First, there’s a piece on “Analog computation and Church’s Thesis” by Jerzy Mycka: I just don’t know anything about this stuff, and am certainly not in a position to comment on this piece, though it looks decent enough.

Then there is a paper by Piergiorgio Odifreddi on “Kreisel’s Church”. Now, I find Kreisel’s writings intriguing and irritating in roughly equal measure (all sorts of interesting things seem to be going on, but he just can’t or won’t write flat-footed technical exposition and direct, transparent, plain-spoken philosophical commentary). So I’d hoped that Odifreddi might take up one or two of Kreisel’s themes from his discussions of CT and give them a careful development and exploration — for Odifreddi’s wonderful book on Recursion Theory shows how clear and judicious he can be. But regrettably, he does something quite different: he takes us on a lightning survey of Kreisel’s many relevant articles, with a lot of quotations and some references to related later work. But if you were puzzled by Kreisel before, you’ll stay pretty puzzled — though you’ll have a longer bibliography!

Next, Adam Olszewski contributes a short piece on “Church’s Thesis as interpreted by Church”. Here Olszewski does at least pick up on the point that I noted Murawski and Wolenski slurred over without comment. CT is nowadays usually taken to be a claim about the co-extensiveness of the two notions of effective computability and recursiveness; but the Founding Fathers were wont to talk of the notions being identical, or of one notion being a definition of the other. Church in 1936 himself uses both identity talk and definition talk. But actually, it isn’t too likely that — at that early date — Church had a clearly worked out position in mind on the status of the correlation he was proposing between computability and recursiveness, and I don’t think we can read too much into his choice of words here. The headline is that Church, at least originally, seemed to take the modest view of the correlation as having something like the status of a Carnapian explication, while Turing thought, more boldly, that it could be forced on us by an analysis of the very notion of a computation. This is familiar stuff, however, and Olszewski’s brief remarks don’t in any way change the familiar picture.
