England in May

Near Newton Blossomville

Often we think about going to live in Italy in a couple of years or so, but it would be a wrench not to see the wooded English countryside in May (we forgot to take the camera when walking last weekend in a quite unregarded area of Bedfordshire — but it was stunning in its quiet way, as even a phone snap hints).

One of the less welcome delights of May, however, is tripos-marking, which starts for me at the end of this week. And before then, I have the tough task of ranking thirty-six applications for the Analysis studentship, many of them impressive. So this blog will probably now slow down a bit after the recent flurry of postings.

Church’s Thesis 5: Effective computability, machine computability

The sixth paper in the Olszewski collection is Jack Copeland’s “Turing’s Thesis”. Readers who know Copeland’s previous writings in this area won’t be surprised by the general line: but truth trumps novelty, and this is all done with great good sense. To take up a theme in my last posting, Copeland insists that ‘effective’ is a term of art, and that

The Entscheidungsproblem for the predicate calculus [i.e. the problem of finding an effective decision procedure] is the problem of finding a humanly executable procedure of a certain sort, and the fact that there is none is consistent with the claim that some machine may nevertheless be able to decide arbitrary formulae of the calculus; all that follows is that such a machine, if it exists, cannot be mimicked by a human computer.

And he goes on to identify Turing’s Thesis as a claim about what can be done ‘effectively’, meaning by a finite step-by-step procedure where each step is available to a cognitive agent of limited (human-like) abilities, etc. Which seems dead right to me (right historically, but also right philosophically in drawing a key conceptual distinction correctly, as I’ve said in previous posts).

Copeland goes on to reiterate chapter and verse from Turing’s writings to verify his reading, and he critically mauls Andrew Hodges’s claims (later in this volume and elsewhere) that Turing originally had a wider thesis about mechanism and also that Turing changed his views after the war about minds and mechanisms. I’m not one for historical minutiae, but Copeland seems clearly to get the best of this exchange.

Church’s Thesis 4: Computability by any means

The next paper is “The Church-Turing Thesis. A last vestige of a failed mathematical program” by Carol E. Cleland. Oh dear. This really is eminently skippable. The first five sections are a lightning (but not at all enlightening) tour through the entirely familiar story of the development of analysis up to Weierstrass, Dedekind and Cantor, the emergence of set theory as a foundational framework, the ‘crisis’ engendered by the discovery of the paradoxes, Hilbert’s formalizing response, and the Entscheidungsproblem as a prompt to the development of a theory of effective computation. No one likely to be reading the Olszewski collection needs the story rehearsing again at this naive level.

And when Cleland comes to the Church-Turing Thesis she runs together, without comment, two importantly different ideas. On p. 133 the claim is [A] one about the ‘effectively computable’ numerical functions — which indeed is the version of the Thesis relevant to the Entscheidungsproblem. But by p. 140 the Thesis is being read as a claim [B] about the functions which are ‘computable (by any means)’. And these are of course distinct claims, requiring distinct arguments. For example, suppose you think that the kind of hypercomputation that exploits Malament-Hogarth spacetimes is in principle possible: then, on that view, there indeed can be computations which are not effective in the standard sense as explicated e.g. by Hartley Rogers, i.e. involving algorithmic procedures which terminate after some finite number of steps. And the questions we can raise about the Hogarth argument are highly relevant to [B] but not to [A].

Cleland’s last section offers some weak remarks about whether computation ‘by any means’ goes beyond Turing computability; but (I’m afraid) nothing here seriously advances discussion of that topic.

Paris 1967, Paris 2007

Well, time to turn to serious matters after such logical diversions … No doubt you’ve all supported the campaign to spare Paris Hilton her prison sentence: after all, in the words of the petition, “She provides hope for young people all over the U.S. and the world. She provides beauty and excitement to (most of) our otherwise mundane lives.” I couldn’t have put it better.

But Guy Debord did, forty years ago (albeit in a French style which isn’t quite mine!):

Behind the glitter of spectacular distractions, a tendency toward making everything banal dominates modern society the world over, even where the more advanced forms of commodity consumption have seemingly multiplied the variety of roles and objects to choose from. … The celebrity, the spectacular representation of a living human being, embodies this banality by embodying the image of a possible role. As specialists of apparent life, stars serve as superficial objects that people can identify with in order to compensate for the fragmented working lives that they actually live. Celebrities exist to act out in an unfettered way various styles of living … They embody the inaccessible result of social labour by dramatizing its by-products of power and leisure (magically projecting them). The celebrity who stars in the spectacle is the opposite of the individual, the enemy of the individual in herself as well as in others. Passing into the spectacle as a model for identification, the celebrity renounces all autonomous qualities …

And there is much more in the same vein, in his La société du spectacle, which I’ve been looking at again (so long after my misspent youth in various leftist groups interminably debating such ideas). Paris 1967 anticipates Paris 2007.

Church’s Thesis 3: Constructivism, informal proofs

The third, short, paper in the Olszewski collection is by Douglas S. Bridges — the author, with Fred Richman, of the terrific short book Varieties of Constructive Mathematics. That book tells us a bit about what happens if you in effect add an axiom motivated by Church’s Thesis to Bishop-style constructive analysis. This little paper says more on the same lines, but I don’t know enough about this stuff to know how novel/interesting this is.

The fourth paper is “On the Provability, Veracity, and AI-Relevance of the Church-Turing Thesis” by Selmer Bringsjord and Konstantine Arkoudas. At least these authors get it right about what the core Thesis is: a numerical function is effectively computable if and only if it is Turing-computable. But that’s about as good as it gets. The first main section is a bash against Mendelson’s argument against “the standard conception of the thesis as mathematically unprovable”. Now, although I am sympathetic to Mendelson’s conclusion, I’d want to argue for it in a rather different way (and do so in the Gödel book). But Bringsjord and Arkoudas’s objections just seem to miss the point about the possibility of a Mendelsonian line. Their argument (p. 69) depends on a bald disjunction between proofs in formal systems and what they call “empirical evidence” for CTT. But of course, tertium datur. Take, for example, the familiar theorem that there are effectively computable functions which aren’t primitive recursive. I’m not being tendentious in calling that a theorem — that’s how the textbooks label the result. And the textbooks, of course, give a proof using a diagonalization argument (sketched below). And it is a perfectly good proof even though it involves the informal notion of an effectively computable function (the argument isn’t a fully formalizable proof, in the sense that we can’t get rid of the informal notions it deploys, but it doesn’t just give “empirical” support either). Now, the Mendelsonian line is — I take it — precisely to resist that dichotomy between formal/formalizable proof and mere quasi-empirical support, and to remind us that there are perfectly good informal proofs involving informal concepts (and Mendelson invites us to be sceptical about whether the informal/formal division is in fact as sharp as we sometimes pretend). And the key question is whether it is possible to give an informal mathematical proof of CTT as compelling as the proof that not all effectively computable functions are primitive recursive. Reiterating the rejected dichotomy is of course no argument against that possibility.
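
Just to make vivid how an informal proof can trade in the informal notion of effective computability, here is the textbook diagonal argument in outline (my compressed rendering, not any particular textbook’s wording):

\begin{enumerate}
  \item The primitive recursive functions can be effectively enumerated as $f_0, f_1, f_2, \dots$:
        each is specified by a definition built up from the basic functions by composition and
        primitive recursion, and those definitions can be effectively listed.
  \item Define $d(n) = f_n(n) + 1$. Given $n$, we can effectively recover the $n$-th definition
        and evaluate it at $n$; so $d$ is effectively computable (each $f_n$ is total, so the
        evaluation always terminates).
  \item But $d$ differs from every $f_n$ at the argument $n$, so $d$ is not primitive recursive.
\end{enumerate}

Step 1 appeals to the informal idea of an effective listing, and step 2 to the informal idea of effective computability; yet the whole is a perfectly good proof.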

That’s not a great start to the paper (though the mistake is a familiar one). But things then go further downhill. For much of the rest of the paper is devoted to a discussion of Bringsjord’s claim that membership of “the set of all interesting stories” (!!) is effectively decidable but not recursively decidable (Gödel number the stories and we’d have a counterexample to CTT). And what, according to Bringsjord, is the rationale behind the claim that membership of that set is effectively decidable? “The rationale is simply the brute fact that a normal, well-adjusted human computist can effectively decide [membership]. Try it yourself!” Well, you might be able to decide in many cases (always? just how determinate is the idea of an interesting story?): but who says that it is by means of implementing a step-by-step algorithm? Effective decidability is a term of art! It doesn’t just mean there is some method or other for deciding (as in: ask Wikipedia or use a cleverly tuned neural net). It means that there is an algorithmic procedure of a certain sort; and in trying to judge whether a story is interesting it most certainly isn’t available to inspection whether I’m implementing a suitable algorithm. This whole discussion just seems badly misguided.

Church’s Thesis 2: What’s an algorithm?

Andreas Blass and Yuri Gurevich’s paper “Algorithms: A Quest for Absolute Definitions” really covers too much too fast to be very satisfactory. The first part is a quick review of the separate histories of Church’s Thesis and Turing’s Thesis, followed by a quick overview of the path from Turing’s original analysis of what we might call a classical algorithmic procedure to its generalization in the work of Kolmogorov and Uspenskii, and Schönhage. But they also say

In fact the notion of algorithm is richer these days than it was in Turing’s days. And there are algorithms … not covered directly by Turing’s analysis, for example, algorithms that interact with their environments, algorithms whose inputs are abstract structures, and geometric or, more generally, non-discrete algorithms.

And they go on to describe very briefly — or at least, too briefly for this reader — some of the work on more abstract general notions of computation.

But, by my lights, once we go beyond Kolmogorov and Uspenskii we lose touch with discussions that are directly relevant to the Church-Turing Thesis, construed as a claim about effectively computable functions (where the notion of effective computability is elucidated in the traditional way, in terms of what can be done by a sequential, step-by-deterministic-step procedure, where each small step is available to a computing agent of limited cognitive abilities, etc.). And indeed, Blass and Gurevich themselves don’t challenge the Thesis: their concern is more with extensions of the core concept of an algorithm — or at least, that’s how I prefer to describe what they are up to.

Church’s Thesis 1: CTT, minds and supertasks

I mentioned a few posts ago the collection Church’s Thesis After 70 Years edited by Adam Olszewski et al. Since the editors helpfully refrain from suggesting a sensible reading order, I’m just going to dive in and read the contributed papers in the order they are printed (they are arranged alphabetically by the authors’ names). And to keep my nose to the grindstone, I’ve promised myself to post comments and thoughts here — so it will be embarrassing to stop doing my homework! Here goes, then, starting with Darren Abramson, “Church’s Thesis and Philosophy of Mind”.

Abramson identifies “the Church-Turing Thesis” with the claim “[…] no human computer, or machine that mimics a human computer, can out-compute a universal Turing machine”. That doesn’t strike me as a terribly helpful move, for it runs together two claims, namely (i) no human computer can (in a reasonable sense) compute something that is not effectively computable (by a finite, step-by-step, algorithmic process), and (ii) whatever is effectively computable is Turing-computable/recursive. In fact, in my Gödel book, I call just (ii) “the Church-Turing Thesis”. But irrespective of the historical justification for using the label my way (as many do), this thesis (ii) is surely to be sharply separated from (i). For a start, the two claims have quite different sorts of grounds. For example, (i) depends on the impossibility of the human performance of certain kinds of supertask. And the question whether supertasks are possible is quite independent of the considerations that are relevant to (ii).

I didn’t know, till Abramson told me, that Bringsjord and Arkoudas have argued for (i) by purporting to describe cases where people do in fact hypercompute. Apparently, according to them, in coming to understand the familiar pictorial argument for the claim that the limit of 1/2^n as n → ∞ is 0, we complete an infinite number of steps in a finite amount of time. Gosh. Really?
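
(For reference, the familiar picture is presumably the repeated-halving one: keep bisecting a unit segment, and at stage n you have covered all but 1/2^n of it. In symbols:

\[
\sum_{k=1}^{n} \frac{1}{2^k} \;=\; 1 - \frac{1}{2^n} \;\longrightarrow\; 1
\quad\text{as } n \to \infty,
\qquad\text{i.e.}\quad \frac{1}{2^n} \to 0 .
\]

That, at any rate, is the standard textbook gloss on the picture.)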

Abramson makes short work of the arguments from Bringsjord and Arkoudas that he reports, though I’m not minded to check now whether he has dealt fairly with them. Nor indeed am I minded to bother to think through his discussion of Copeland on Searle’s Chinese Room Argument: frankly, I’ve never felt that that sort of stuff illuminates serious issues in the philosophy of mind that I might care about. So I pass on to the next paper ….

Interrupted categories

Well, the reading group which was slowly working through Goldblatt’s Topoi got about half-way through but has collectively decided that enough is enough! In retrospect I probably suggested the wrong thing to read — perhaps, after all, Lawvere and Rosebrugh’s much shorter Sets for Mathematics would have been the better bet, as at least we’d have had the satisfaction of getting through it. Goldblatt’s book promised much more meat for logicians to chew on, but most of the group lost faith that ploughing on was going to deliver more insights. One thing is clear: the book that is going to persuade the generality of logicians, or those interested in the foundations of mathematics, that it really is important and rewarding to get on top of a substantial amount of category theory has yet to be written. Still, even if I was in the minority, I was left wanting to know more — so I’m sure I’ll return here to the categorial theme in due course.

Kleeneness is next to Gödelness

This is a pretty shameless trailer for my forthcoming book (which I confess I’m still fiddling with, since the final final version doesn’t have to be with the Press for a week more).

It’s fun and illuminating to show that the First Incompleteness Theorem can be proved without any appeal to sentences that “say” that they are unprovable, or indeed without any appeal to the apparatus of Gödel numbering, Diagonal Lemmas and the like. This can be done in various ways, of course. But there is a simple argument from Kleene’s Normal Form theorem to Incompleteness, which doesn’t seem to be well known. Here’s a version extracted from the book — a version that relies on Church’s Thesis, but only to save labour and make for cuteness. Enjoy! Pre-order the book on Amazon for lots more where that came from!
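
In the barest outline, an argument of that shape can run like this (a compressed sketch of the general strategy only, not the book’s wording, and helping myself to a soundness assumption to keep things painless):

% Sketch: Kleene's Normal Form Theorem plus Church's Thesis yields incompleteness
% for sound, effectively axiomatized theories.
\begin{enumerate}
  \item (Kleene) There is a primitive recursive relation $T(e,n,c)$ and a primitive recursive
        function $U$ such that every one-place partial computable function is
        $\lambda n.\, U(\mu c\, T(e,n,c))$ for some index $e$. A diagonal argument then shows
        that the set $K = \{e : \exists c\, T(e,e,c)\}$ is not decidable.
  \item Suppose $S$ is a sound, effectively axiomatized theory of arithmetic which proves each
        true sentence of the form $\exists c\, T(\overline{e},\overline{e},c)$.
  \item If $S$ were negation-complete, we could decide whether $e \in K$: enumerate $S$'s
        theorems until either $\exists c\, T(\overline{e},\overline{e},c)$ or its negation turns
        up (one must, by completeness); by soundness, the verdict is correct. By Church's Thesis,
        this effective procedure would be recursive, making $K$ decidable after all.
  \item Contradiction. So $S$ is negation-incomplete.
\end{enumerate}

The only work Church’s Thesis does is in step 3, converting the informal theorem-searching procedure into an officially recursive one; that is the labour-saving use just mentioned.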

Wandering stars

Well, if James can link to a YouTube video, I guess I can! Hardly Shaun the Sheep, and perhaps not adding to the gaiety of nations in quite the same way … but here’s a rare recent sighting of a certain group, for late-night listening, from an unannounced performance in Bristol in February. Even so stripped down, the magic still works.

If Beth Gibbons’s words here have always seemed hauntingly evocative, maybe that is because she is drawing from — or should I say sampling? — the King James Version rendering of Jude 13: “Raging waves of the sea, foaming out their own shame; wandering stars, to whom is reserved the blackness of darkness for ever.”

Rumours still surface of another CD, ten years after the last studio album.
