Logic

Church’s Thesis 1: CTT, minds and supertasks

I mentioned a few posts ago the collection Church’s Thesis After 70 Years edited by Adam Olszewski et al. Since the editors helpfully refrain from suggesting a sensible reading order, I’m just going to dive in and read the contributed papers in the order they are printed (they are arranged alphabetically by the authors’ names). And to keep my nose to the grindstone, I’ve promised myself to post comments and thoughts here — so it will be embarrassing to stop doing my homework! Here goes, then, starting with Darren Abramson, “Church’s Thesis and Philosophy of Mind”.

Abramson identifies “the Church-Turing Thesis” with the claim “[…] no human computer, or machine that mimics a human computer, can out-compute a universal Turing machine”. That doesn’t strike me as a terribly helpful move, for it runs together two claims, namely (i) no human computer can (in a reasonable sense) compute something that is not effectively computable (by a finite, step-by-step, algorithmic process), and (ii) whatever is effectively computable is Turing-computable/recursive. In fact, in my Gödel book, I call just (ii) “the Church-Turing Thesis”. But irrespective of the historical justification for using the label my way (as many do), this thesis (ii) is surely to be sharply separated from (i). For a start, the two claims have quite different sorts of grounds. For example, (i) depends on the impossibility of the human performance of certain kinds of supertask. And the question whether supertasks are possible is quite independent of the considerations that are relevant to (ii).

I didn’t know, till Abramson told me, that Bringsjord and Arkoudas have argued for (i) by purporting to describe cases where people do in fact hypercompute. Apparently, according to them, in coming to understand the familiar pictorial argument for the claim that lim_{n→∞} 1/2^n = 0, we complete an infinite number of steps in a finite amount of time. Gosh. Really?
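For contrast, it’s worth remembering that the official verification of the limit claim is itself a perfectly finite bit of reasoning: the usual ε–N derivation, sketched below (the sketch is mine, not anything from their paper).

```latex
% The standard finite verification that 1/2^n tends to 0: given any
% epsilon > 0, choose N large enough that 2^N > 1/epsilon. Then for
% every n >= N,
\[
  \left| \frac{1}{2^{n}} - 0 \right| \;=\; \frac{1}{2^{n}}
  \;\leq\; \frac{1}{2^{N}} \;<\; \epsilon,
\]
% which is exactly what the epsilon-N definition of a limit demands:
\[
  \lim_{n \to \infty} \frac{1}{2^{n}} = 0.
\]
% A finite derivation, with no supertask in sight.
```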

Abramson makes short work of the arguments from Bringsjord and Arkoudas that he reports, though I’m not minded to check now whether he has dealt fairly with them. Nor indeed am I minded to bother to think through his discussion of Copeland on Searle’s Chinese Room Argument: frankly, I’ve never felt that that sort of stuff illuminates serious issues in the philosophy of mind that I might care about. So I pass on to the next paper ….

Kleeneness is next to Gödelness

This is a pretty shameless trailer for my forthcoming book (which I confess I’m still fiddling with, since the final final version doesn’t have to be with the Press for another week).

It’s fun and illuminating to show that the First Incompleteness Theorem can be proved without any appeal to sentences that “say” that they are unprovable, or indeed without any appeal to the apparatus of Gödel numbering, Diagonal Lemmas and the like. This can be done in various ways, of course. But there is a simple argument from Kleene’s Normal Form theorem to Incompleteness, which doesn’t seem to be well known. Here’s a version extracted from the book — a version that relies on Church’s Thesis, but only to save labour and make for cuteness. Enjoy! Pre-order the book on Amazon for lots more where that came from!
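To give the barest flavour, here is the shape of the argument, in a compressed paraphrase of my own rather than the extract itself:

```latex
% A compressed sketch of a Kleene-style route to incompleteness,
% using Church's Thesis only to save labour. (My paraphrase, not the
% book's extract.)
%
% Kleene's Normal Form Theorem: there is a primitive recursive relation
% T(e, x, t) and a primitive recursive function U such that every
% partial computable function f has an index e with
\[
  f(x) \simeq U\bigl(\mu t\, T(e, x, t)\bigr).
\]
% Diagonalizing in the usual way, the set
\[
  K = \{\, n \mid \exists t\, T(n, n, t) \,\}
\]
% is computably enumerable but not decidable. Now let S be a sound,
% effectively axiomatized theory whose language can express `n is in K'
% and which proves each true such claim (any theory extending Q will).
% If S were negation-complete, we could decide K: grind out S's theorems
% (an effective procedure, by Church's Thesis) and wait for `n is in K'
% or its negation to appear; by soundness the verdict is correct. That
% contradicts the undecidability of K, so S must be incomplete.
```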

What are sets for?

Yiannis Moschovakis on p. 1 of his very useful Notes on Set Theory writes that one “basic property of sets” is that

Every set A has elements or members.

And then, on p. 2, he writes

Somewhat peculiar is the empty set ∅ which has no members.

But of course he can’t have it both ways. Either every set has elements (and there is no empty set) or there is an empty set (and so not every set has elements).

I offer this as another example to my esteemed colleagues Alex Oliver and Timothy Smiley, who have a lot of fun with this sort of thing at the beginning of their recent paper “What are sets and what are they for?” (in John Hawthorne, ed., Metaphysics), and who give lots of other examples of set theorists’ arm-waving introductory chat about sets being similarly hopeless. But what are we to do about that? Alex and Timothy take the stern line that we should take such set theorists at their introductory word, and if that word is confused, then so much the worse for them and for the very idea of the empty set (and for the idea of singletons too, if sets are defined to be things with members, plural). Pending a secure argument for the overwhelming utility of postulating them, we should do without empty sets and singletons, and indeed without the whole universe of pure sets.

But it doesn’t seem good strategy to me to take set theorists at their first word — any more than it would be a good strategy to take quantum theorists at their first word (if that introductory word involves a metaphysical tangle about particles-cum-waves). However, I’m with Alex and Timothy that the question “What are sets for?” surely is just the right question to ask. Moschovakis indicates one sort of answer (indeed, Alex and Timothy mention it): the universe of sets provides a unified general framework in which we can give “faithful representations” of systems of mathematical objects by structured sets. Now this is, of course, the sort of thing that category theorists aim to talk about: in their terms, there will be structure-preserving functors from other (small) categories to the category of (pure) sets — roughly because the category of sets has such a plenitude of objects and morphisms to play with. What we need to do here is think through what this kind of use for sets — a use that can be illuminated in category-theoretic terms — really comes to. My hunch is that we’ll end up in a rather different place from Alex and Timothy.
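To make the functor talk concrete, here is the stock toy example (mine, not one from either paper): a group regarded as a one-object category, acting on a set.

```latex
% A group G regarded as a one-object category BG: a single object *, an
% arrow for each element g of G, with composition given by group
% multiplication. A functor
\[
  F : \mathbf{B}G \longrightarrow \mathbf{Set}
\]
% sends * to a set X = F(*) and each arrow g to a function
% F(g) : X -> X, subject to
\[
  F(gh) = F(g) \circ F(h), \qquad F(1_{G}) = \mathrm{id}_{X}.
\]
% Since every g has an inverse, each F(g) is a bijection; so a functor
% from BG to Set is exactly an action of G on a set, i.e. a set-theoretic
% representation of G by permutations. Taking X = G acting on itself by
% left multiplication recovers Cayley's theorem.
```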

I’ve been a bit slow to get thinking about set theoretic matters since being back in Cambridge (there’s the decidedly daunting prospect of having Michael Potter and Thomas Forster locally breathing down my neck …). But Alex and Timothy’s provocations are hard to resist. So something else to add to the list of things to think about.

Isaacson Day

My colleague Michael Potter organized a one-day workshop today focusing on issues arising out of Dan Isaacson’s paper, “Arithmetical truth and hidden higher-order concepts”, which appeared 20 years ago.

It’s quite difficult to tease out the full content of Dan’s claims about the status of first-order PA in his paper, but a core claim — I call it Isaacson’s Thesis — is surely this:

If we are to give a proof for any true sentence of the language of first-order PA which is independent of PA, then we will need to appeal to ideas that go beyond those that are required in understanding PA (and more generally, but more vaguely, beyond those that are required to understand the structure of the natural numbers and thus elementary pure arithmetic).

Dan wants to say more (though that’s where things get a bit murkier), but the core Thesis is reasonably clear and interesting in itself — and if not even that is true, then Dan’s stronger claims won’t be true. In my talk “Ancestral Arithmetic and Isaacson’s Thesis” (NB the link is to an unrevised version!), I suggested that we can construct a formalized arithmetic PA* with a primitive ancestral operator which is in fact within the conceptual grasp of someone who understands PA. If PA* enabled us to prove new theorems in the language of PA, that would be a strike against the Thesis. But I outlined a proof that PA* is in fact conservative over PA, a result not just in harmony with the Thesis but giving it positive support. (The talk went pretty well, though I made a bit of a hash of the questions. But what’s new?)
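For those who haven’t met it, the intended reading of the primitive ancestral operator is the familiar Fregean one. In second-order dress (my gloss, not a quotation from the talk):

```latex
% Frege's second-order definition of the (weak) ancestral R* of a
% relation R: b stands in R* to a just if b has every property which
% a has and which is passed along R-steps.
\[
  R^{*}(a,b) \;:\leftrightarrow\;
  \forall X \bigl[ \bigl( X(a) \land
    \forall x\, \forall y\, ( X(x) \land R(x,y) \rightarrow X(y) )
  \bigr) \rightarrow X(b) \bigr]
\]
% PA*'s primitive operator is governed by first-order rules which only
% partially capture this second-order condition. That is why it is a
% substantive result, not a triviality, that PA* proves no new theorems
% in the language of PA.
```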

The other talks were by Ben Colburn (one of our PhD students) and Hannes Leitgeb, who talked more about what we might reasonably mean by saying with Dan that PA is sound and complete for “arithmetical truth”. And Luca Incurvati (another of our students) and Dan Isaacson himself talked about analogues for set theory of Dan’s thesis for arithmetic. In a way, the day ended too soon: the discussions after the final paper by Hannes were just getting to the heart of issues about arithmetic when we broke up. But a very productive and enjoyable day all the same.

Serendipitous Kreisel

abebooks.com is a wonderful thing — you can so often find cheap copies of in-print books that you want, and (often not-so-cheap) copies of out-of-print books as well. The trouble is that booksellers can of course use the site too to find out what particular books in their stock are worth, and these days you rarely pick up real philosophical or logical bargains sold at silly prices because the seller has little clue of their value (I not so very long ago picked up a complete Principia in excellent condition for £30 — that sort of thing wouldn’t happen now).

But you still make serendipitous little finds. In the “all hardbacks 50p” bargain tray outside a little bookshop at a National Trust house, I found a copy of Bertrand Russell: Philosopher of the Century. It is very much a period piece, reminding us how the reputations of philosophers can change so markedly. But there is an intriguing 72-page essay by Kreisel, ‘Mathematical logic: what has it done for the philosophy of mathematics?’, which I confess I can’t recall reading more than a few pages of before. Like other Kreisel papers, you just wish it were written in a more conventional, less allusive (ok, more flat-footed) style. Just imagine what Kreisel’s reputation would be if he had been able to write with e.g. Putnam’s directness and lucidity (Putnam has a piece in the collection too). So often, he seems to have got there first, but we — well, most of us — can only really see it with hindsight.

Logical options

There’s an afternoon planned soon to review the Faculty’s teaching, so it will be a good occasion to rethink some of our current arrangements for logic teaching. At the moment this looks to me to be about the right pattern, given the calibre of our students (good) and our resources (limited).

  • First year, propositional and predicate logic by trees (up to and including completeness for propositional trees: this is in fact what we do at the moment, using my Introduction to Formal Logic). We should also perhaps have a few lectures on set notation etc. (again, something we do at the moment). That’s compulsory for all students.
  • Second year, we could have five units corresponding roughly to the five main chapters of Logical Options by Bell, DeVidi and Solomon. So that’s a unit on other ways of doing logic, using propositional logic as the illustration — in particular, natural deduction. A unit treating the semantics for predicate logic more carefully, and an explanation of the completeness proof. Something on axiomatic systems built using a first-order logic. A unit introducing modal logic. And a unit on non-classical logics, in particular intuitionistic logic. (Again, nearly all of those are already on the syllabus, but we don’t teach them in a methodical and integrated way. Using Logical Options as a course text could be a way of imposing order on the current slight mess. The book strikes me, having just got a copy for the first time, as very good: it goes quite snappily and needs support from lectures, but it is the right kind of coverage and the right kind of level. This too is for a compulsory paper, though students can avoid answering too-technical questions by concentrating on some associated philosophical logic.)
  • Third year, mathematical logic. This could remain pretty much as we do it at the moment. Some basic model theory, comparisons of first and second order logic, etc. Then Michael Potter’s course using his Set Theory and its Philosophy, and my course using An Introduction to Gödel’s Theorems. We could also make the technical parts of the current paper available as a graduate course for those who haven’t done enough logic when they come to us from elsewhere.

My impression is that even good and large departments elsewhere in the UK are dropping serious logic teaching (so that e.g. a Single Honours student may have access only to one optional honours module using Tomassi’s very, very elementary book). But before getting too upset about this and bemoaning the Dumbing Down of Courses and the general Decline of the West, I am minded to try to organize some kind of survey to find out the state of logic teaching in UK philosophy departments. I’ve put a message out on Philos-L to check that no-one else is in the middle of doing this (and to check too that a survey hasn’t been done recently and I’ve just failed to notice!). Watch this space.

Fixed point logics

At CUSPOMMS today, the audience was depressingly tiny, which was a great shame because Anuj Dawar gave a terrific introductory talk about the problems he is working on in finite model theory. In particular, he said a little about the expressive power of logics which have a ‘least fixed point’ operator, and the relation of this to the question whether P = NP.

The topic of fixed point logics links up very closely with something I need to think about soon, for I’ve promised to put together a talk on Dan Isaacson’s Conjecture for a workshop next month (I mean the claim that, if we are to give a rationally compelling proof of any true sentence of the language of first-order arithmetic which is independent of PA, then we will need to appeal to ideas that go beyond those which are constitutive of our understanding of basic arithmetic). What’s the connection? Well, it is plausible that our understanding of arithmetic involves grasp of the ancestral of the successor relation (“the natural numbers are 0, the next number, the next one, the one after that, and so on, and nothing else”). So it is natural to think about theories of arithmetic which are embedded in stronger-than-first-order logics with a primitive ancestral-forming operator (i.e. a transitive closure operator, which is closely related to a least fixed point operator). We know that such arithmetics semantically entail more than PA but are unaxiomatizable. But what about partial axiomatizations of them? I know that the very natural partial axiomatization that goes back to Myhill gives us a theory which is in fact a conservative extension of PA (so that result is in harmony with Isaacson’s Conjecture: trying to pin down a concept arguably essential to our understanding of arithmetic which isn’t definable in the language of PA still doesn’t take us beyond PA). But I need to discover more about fixed point logics to see what else interesting is to be said here.
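For concreteness, the Myhill-style partial axiomatization I have in mind runs roughly as follows (a schematic paraphrase from memory, so consult Myhill’s original for the details):

```latex
% Schematic axioms for a primitive ancestral operator * applied to the
% successor relation S (my paraphrase of a Myhill-style axiomatization;
% details may differ from Myhill's own presentation).
\[
  S(x,y) \rightarrow S^{*}(x,y)
\]
\[
  (S^{*}(x,y) \land S(y,z)) \rightarrow S^{*}(x,z)
\]
% plus, for each formula phi of the language, an induction-flavoured
% schema:
\[
  \bigl( \varphi(x) \land
    \forall u\, \forall v\, ( \varphi(u) \land S(u,v) \rightarrow \varphi(v) )
  \bigr) \rightarrow \bigl( S^{*}(x,y) \rightarrow \varphi(y) \bigr)
\]
% Because the schema is restricted to formulas phi of the language, it
% captures only part of the intended second-order content of the
% ancestral, which is how the resulting theory can still be conservative
% over PA.
```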

Anuj recommended Leonid Libkin’s Elements of Finite Model Theory as a good place to start finding out more. Great. I’ve ordered a copy.

Back to work: SOSOA

I have been mightily distracted from this blog by, first, trying to get the Gödel book into a state where it could go to CUP’s proof-reader and then, second, by the beginning of term. But the book is off, and life is settling into what passes for normality in term-time.

The high point so far is our new Mathematical Logic Reading Group (started by popular request). We are starting by working through the opening chapter of Simpson’s Subsystems of Second Order Arithmetic. The trouble we are having, wearing our philosophical hats, is getting the five key subsystems that Simpson highlights aligned with clear philosophical motivations. We can do that for ACA0, but not, for example, for the base system RCA0, with its curious mismatch between its Δ⁰₁ comprehension axiom and its Σ⁰₁ induction schema. And it is worrying, isn’t it, that the chosen base system resists a clear conceptual defence. I’ll report back if we come to any less inchoate conclusions.
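To display the mismatch, here are the two characteristic schemas (the standard formulations, as in Simpson’s first chapter; the glossing is mine):

```latex
% The two characteristic schemas of RCA_0 (standard, as in Simpson's
% Chapter I), on top of the usual basic arithmetical axioms.
%
% Delta^0_1 comprehension: for Sigma^0_1 phi and Pi^0_1 psi,
\[
  \forall n\, (\varphi(n) \leftrightarrow \psi(n)) \;\rightarrow\;
  \exists X\, \forall n\, (n \in X \leftrightarrow \varphi(n)).
\]
% Sigma^0_1 induction: for Sigma^0_1 phi,
\[
  \bigl( \varphi(0) \land \forall n\, (\varphi(n) \rightarrow \varphi(n+1)) \bigr)
  \;\rightarrow\; \forall n\, \varphi(n).
\]
% The niggle: sets may be formed only for provably decidable
% (Delta^0_1) conditions, yet induction is allowed for Sigma^0_1
% conditions which need not be decidable.
```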

Greg Restall on arithmetic

Greg Restall has put a very nice paper online, called ‘Anti-realist classical logic and realist mathematics’. I’ve always been tempted by logicism in the very broadest sense: but Greg’s critics, of course, will say that he’s just smuggled the rabbit into the hat before pulling it out again. Still, his piece is nicely thought-provoking.

Oddly, given his multiple-conclusion logical framework, Greg doesn’t mention Shoesmith and Smiley’s great book Multiple-Conclusion Logic in his bibliography. (Very regrettably, it had the bad luck to be published in 1978, around the time when a number of publishers used early computers to print books with what looked like typewritten pages. The first edition of Fogelin’s fine book on Wittgenstein suffered the same fate. And in both cases, I think the repellent and amateurish look of the results was enough to put readers off and stop the books making the impact they deserved at the time. At least Fogelin got a properly printed second edition. But Shoesmith and Smiley has gone out of print, and seems widely forgotten.)

Categories: episode three

Kai von Fintel has just e-mailed to send a link to the Good Math, Bad Math blog on category theory, which is excellent — a series of mini-essays on concepts of category theory, with some very helpful introductory explanations of some of the Big Ideas. Worth checking out, so thanks Kai!

I’ve now read quite a lot of Steve Awodey’s new book. Disappointing: or at least, it doesn’t do quite what I was hoping it would do. Awodey’s two papers in Philosophia Mathematica were among the pieces that got me interested in category theory in the first place (see his ‘Structure in mathematics and logic: a categorical perspective’, 1996, and his reply to Hellman, 2004). So I suppose I was hoping for a book that had more of the discursive, explanatory commentary that Awodey is good at. But there’s very little of that. And although Awodey says in the preface that, if Mac Lane’s book is for mathematicians, his is for ‘everyone else’, in fact Category Theory is still pretty well orientated to maths students (for example, there is a telegraphic proof sketch of Cayley’s Theorem that every group is isomorphic to a permutation group as early as pp. 11–12!). Of course, there are good things: for just one example, it helped me understand better the general idea of limits and colimits. But I wouldn’t really recommend this book to the non-mathematician.

So over the next week or two, I think it’s Goldblatt’s lucid Topoi, and Lawvere and Rosebrugh’s Sets for Mathematics for me.
