I’ve just written a (partial) draft of another chunk of my developing Teach Yourself Logic reading guide. As always, comments and suggestions will be very welcome indeed. I’ve tried to restrain myself, but this segment of the Guide is already rather long. But here goes …
A standard part of any logician’s education will be an encounter with first-order Peano Arithmetic as a non-trivial example of a rigorously formulated axiomatic system, and with Gödel’s epoch-making proof of the incompleteness of PA, and indeed of any nice enough theory that can ‘do’ enough arithmetic.
Gödel’s 1931 proof uses facts about primitive recursive functions (a subclass of the computable functions), and you certainly need to know about them. A more general treatment of the idea of an effectively computable function (arguably capturing all of them) was developed a few years later; it is of the greatest intrinsic interest and also throws more light on the incompleteness phenomenon. So there’s a choice to be made. Do we look at things in a roughly historical order, proving initial versions of Gödel’s incompleteness theorem before moving on to look at the general treatment of computable functions? Or do we do some of the general theory first? Here are two books to start with which take the two different routes:
- Peter Smith, An Introduction to Gödel’s Theorems (CUP 2007) does things in the historical order. Don’t be put off by the series title ‘Cambridge Introductions to Philosophy’: putting it in that series was the price I paid for cheap paperback publication. This is still quite a meaty logic book, with a lot of theorems and a lot of proofs, but I hope rendered very accessible.
- Richard Epstein and Walter Carnielli, Computability: Computable Functions, Logic, and the Foundations of Mathematics (Wadsworth 2nd edn. 2000) does computability theory first. This is a nicely introductory book, clearly done, with lots of interesting historical information too in Epstein’s 28-page timeline on ‘Computability and Undecidability’ at the end of the book.
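As a toy illustration of the primitive recursive functions mentioned above (my own sketch, not taken from either book), here is the primitive recursion scheme at work: addition and multiplication built up from nothing but zero and the successor function.

```python
# Illustrative sketch only: building addition and multiplication
# by primitive recursion, using just zero and successor.

def succ(n):
    return n + 1

def add(m, n):
    # add(m, 0) = m;  add(m, succ(n)) = succ(add(m, n))
    if n == 0:
        return m
    return succ(add(m, n - 1))

def mul(m, n):
    # mul(m, 0) = 0;  mul(m, succ(n)) = add(mul(m, n), m)
    if n == 0:
        return 0
    return add(mul(m, n - 1), m)

print(add(3, 4))  # 7
print(mul(3, 4))  # 12
```

Exponentiation, the factorial, and so on can be stacked up in exactly the same recursive style, which is what makes the class so natural a starting point.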
As you’ll already see from these two books, this is a delightful area. Elementary computability theory is conceptually very neat and natural, and the early Big Results are proved in quite remarkably straightforward ways (just get the hang of the basic ‘diagonalization’ construction, the idea of Gödel-style coding, and one or two other tricks and off you go …).
A notch up in difficulty, here’s another book that focuses on the general theory of computation:
- George Boolos, John Burgess, Richard Jeffrey, Computability and Logic (CUP 5th edn. 2007). This is the latest edition of an absolute classic. The first version (just by Boolos and Jeffrey) was published in 1974; and there’s in fact a great deal to be said for the 1990 third edition as being the best. The later versions have been done by Burgess and have perhaps lost elegance and some of the individuality, and the level of difficulty has become uneven. But this is still great stuff, and you will want to read the first two parts of the book at an early stage, perhaps being more selective when it comes to the last part, ‘Further Topics’.
An interestingly unusual route to the incompleteness phenomenon is traced in
- Melvin Fitting, Incompleteness in the Land of Sets (College Publications, 2007). Short, also fairly elementary, elegant and very clear.
After these, where should you go? Here are three stand-out suggestions:
- Nigel Cutland, Computability: An Introduction to Recursive Function Theory (CUP 1980). This is a rightly much-reprinted classic and is beautifully put together. It does have the look-and-feel of a maths text book. But if you can cope with Boolos and Jeffrey, you can manage this.
- Raymond Smullyan, Gödel’s Incompleteness Theorems (Oxford Logic Guides, OUP, 1992) is another modern classic. Terse and elegant, proving some beautiful slightly abstract versions of the incompleteness theorems.
- Richard Kaye, Models of Peano Arithmetic (Oxford Logic Guides, OUP, 1991) is a third classic, connecting up the treatment of arithmetic with material on model theory, and telling us a great deal about non-standard models of PA (models which contain more than the ‘real’ natural numbers). And this reveals more about what PA can and can’t prove — though be warned that this book is rather more challenging in places.
That is probably more than enough about the logical basics. The literature on computability is huge, however, and I haven’t yet mentioned any books that approach things from a computer-science rather than a pure-logic perspective, and deal inter alia with the fascinating topic of computational complexity. So I had better touch on a few of those. But first, here’s an aside mentioning a few more books with a logical flavour, at very varied levels:
[At this point, in small print, I go on to mention (with shorter comments)
- Rózsa Péter, Recursive Functions (1967).
- Hartley Rogers, Jr., Theory of Recursive Functions and Effective Computability (1967)
- Herbert B. Enderton, ‘Elements of Recursion Theory’, in J. Barwise, ed., Handbook of Mathematical Logic (1977).
- Piergiorgio Odifreddi, Classical Recursion Theory, Vol. 1 (1989)
- Martin Goldstern and Haim Judah, The Incompleteness Phenomenon (1998).
- A. Shen and N. K. Vereshchagin, Computable Functions (2003).
- Per Lindström, Aspects of Incompleteness (2003). Terse, not always reader-friendly, but well repays the effort.
- Torkel Franzén, Inexhaustibility (2004).
- S. Barry Cooper, Computability Theory (2004).]
As I mentioned, computer scientists are — surprise, surprise! — interested in the theory of computation, and their books tend to have a different slant. It is certainly worth knowing at least a little about the topic of computational complexity. I’ll mention just three places to start:
- Michael Sipser, Introduction to the Theory of Computation (Thomson, 2nd edn. 2006) is a standard and very well regarded text. It aims to be very accessible and to take its time giving clear explanations of key concepts and proof ideas. I think it is successful. The last third of the book is on computational complexity.
- Oded Goldreich, P, NP, and NP-Completeness (CUP, 2010). A short, clear, introductory stand-alone treatment of computational complexity.
- Ashley Montanaro, Computational Complexity, notes for a 2012 lecture course here in Cambridge. This has a useful quick guide to further reading, so I don’t need to add anything further here.
And that’s surely more than enough! But any suggestions for additions? deletions? improvements?
As for introductions to computability, I like N. Vereshchagin’s Computable Functions (translated from the Russian original and published by the AMS, 2003). It’s a very nice little book, but gets remarkably far, for instance including a proof of the Friedberg–Muchnik theorem. Definitely worth a look.
Yes, I very much agree. In fact the Shen/Vereshchagin book is already in the small print in the current version — and will get a longer entry in the next version.
(More apropos for the Set Theory references, of course; but is Lawvere and Rosebrugh’s Sets for Mathematics too “maths-y” for the list?)
Wayne
I am planning to put that in the Category Theory section (as a way in from the familiar to the unfamiliar).
I quite like the look of Rebecca Weber’s Computability Theory but haven’t had time to look at it in detail.
There’s supposed to be a new edition of Cooper’s Computability Theory soon (like next week).
Models of Peano Arithmetic is out of print and very expensive. “New” copies are for sale for amounts between £100 and £150. It’s annoying that, despite the availability of print-on-demand technology, such books are not more readily available; and also that books that are now print-on-demand, such as Smullyan’s Gödel book, aren’t available more cheaply. (That one’s currently £38.99 on Amazon UK — though that is cheap compared to Oded Goldreich’s P, NP, and NP-Completeness, which costs £67.45.)
Thanks very much for the pointer to Rebecca Weber’s Computability Theory which I didn’t know of. Taking a “look inside” on Amazon, this indeed looks promising.
Yes, I’ve had a copy of Cooper Mk II on order for ages: it at last seems to be on its way.
As to book pricing, I agree that OUP’s policy seems to be outrageous (Peter Johnstone’s Elephant is now an elephantine £265.00.) CUP tend to be quite a bit better in this respect: the Goldreich is in fact £24.99.
Thanks for that about Goldreich. I hadn’t noticed there was also a paperback.
This is somewhat off-topic, but there’s a “Kurt Gödel Interview” post on the blog Gödel’s Lost Letter and P=NP.
Any thoughts on Francesco Berto, There’s Something About Gödel: The Complete Guide to the Incompleteness Theorem? It is more chatty than mathematical, so it might be one for students to take to the beach before the course starts. Some professors might disagree with some of the philosophical positions taken. It does have the virtue of conducting the logic in typographical number theory, and thereby emphasizing that the results are syntactic (see page 55).
If you search here for “Berto” you’ll find a handful of blog-posts on Berto’s book which explain why I don’t rate it very highly. (I ought to put those comments together into a single page I guess.)
Thanks very much, I see what you mean. I wasn’t following your blog back then.
It seems to me to be very common for authors to confuse CT1 with CT2, but I think the confusion here is more subtle. The B&S argument seems to be interested in CT2a: what a human can compute *in principle*, which may be much more than CT2b: what a human can compute while following a fixed algorithm. For example, the final sentence of B&S reads: “When a problem-solver tackles some problem, it seems to us that he or she, over any session, is capable of an arbitrarily large number of moves.” But a human engaged in algorithmic computation is not tackling a problem, and is not engaged in any kind of original thought or problem solving. She is simply following a particular finite list of instructions that have been handed down to her about how to perform the algorithm. Thus CT2b, in the sense of human computation, does not refer to all possible ways that a human might solve a problem, but only to ways that follow fixed algorithms. For example, one aspect of algorithmic computation is that there is no need for the agent to learn or become smarter during the execution of the algorithm, except in ways that are laid out in the algorithm from the beginning. The activity that is described in section 7.1 of B&S does not seem to be algorithmic, or even to produce a function.