As I’ve said before, CUP have agreed to publish a second edition of An Introduction to Gödel’s Theorems. Camera-ready copy is due to be sent to them rather implausibly soon, by the end of July 2012, with publication six months or so later(?). I have a slightly larger page budget, but don’t at the moment intend adding chapters covering much new material: the plan is to do what the current edition does, but do it better (and with some proofs that are currently only sketched now properly filled out, as the book has a larger mathematical readership than perhaps I was expecting). I also plan (at last!) to have a proper sequence of sets of exercises, but here on the web-site, as there isn’t room even in an expanded book.
I’ve now re-written the first tranche of chapters for the second edition (these replace the current Chs 1 to 7). I’ve actually rather enjoyed doing this, and am quite pleased with the results — in particular, I think I’ve much improved the too-compressed incompleteness argument in the old Ch. 5. But I’m simultaneously also a bit cheesed off to see how much those early chapters did need improving. It all just goes to show that when you’ve finished writing a book, you should really put it in a drawer for three years until you realize what you were really trying to say, and then re-write it. You can just see your research-driven university approving of that policy, eh?
All suggestions/corrections for these revised chapters will be most gratefully received. You can put them in the comments below, though it is probably better all round to email them (my email address is at the bottom of the “About Peter Smith” page). For copyright reasons, I won’t be able to make all the revised chapters so freely available when I’ve done them: but anyone who emails comments will be put on a circulation list for future tranches of the new version.
… and to dent your bank balances, here are three more rather sizeable logic books.
First up, spotted in the CUP bookshop and snapped up, is the just-published Proofs and Computations by Helmut Schwichtenberg and Stanley S. Wainer. A mere 450 action-packed pages, this looks as if it should be an instant classic, a welcome filling of a gap in the literature on the interactions between proof theory and computability theory.
Arnie Koslow told me about Lloyd Humberstone’s The Connectives, which has been out a couple of months and somehow I’d missed seeing. This one weighs in at some 1500 pages (which makes the price remarkably cheap). Again, on a quick browse it looks daunting but amazing.
Very differently, I spotted an announcement a couple of days ago by Michael Gabbay of the publication of the first instalment of a translation of Hilbert and Bernays (or rather a bilingual text, German and English on facing pages). This only gets to p. 44 of the German text (over fifty pages of the book reprint a long essay by Wilfried Sieg on Hilbert’s proof theory). But it is again very inexpensive, and being German-less I certainly wish the project well. So I’ll be sending off for a copy, and will report back here.
So there you are: how can we resist? (Any suggestions for other recent books on logic matters that I might have missed?)
Time then to pause to draw breath for Christmas after a busy/distracting time. But then what’s next, logically speaking?
Well, I plan to blog here in the new year about two more books that I’ve been asked to review together — Leon Horsten’s The Tarskian Turn: Deflationism and Axiomatic Truth and Volker Halbach’s Axiomatic Theories of Truth. Not that I claim any special expertise about their topic: but then both books are written for a reader like me — a philosopher/logician interested in theories of truth, who wants to get a handle on some recent formal developments (to which the respective authors have been notable contributors). Should be very interesting. And since both books have been out for a few months, I hope some readers of the blog will be able to chip in helpfully in comments!
But mostly, it will have to be back to Gödel. At the moment, I’m re-writing the opening chapters of the book for the second edition, and I think very much improving them — about which I have mixed feelings! On the one hand, it’s very good to feel the effort of doing a second edition is going to be worth while, but on the other hand, I’m a bit downcast to see how very far from ideal those chapters previously were. Sigh. Anyway, when I’ve got the first tranche of chapters more to my liking, I’ll post them here for comments.
And I’ve one or two other plans too … So watch this space!
This is a very belated follow-up to an earlier post on Penelope Maddy’s short but intriguing Defending the Axioms.
In my previous comments I was talking about Maddy’s discussion of Thin Realism vs Arealism, and her claim that in the end — for the Second Philosopher — there is nothing to choose between these positions (even though one line talks of mathematical truth and the other eschews the notion). What we are supposed to take away from that is the rather large claim that the very notions of truth and existence are not as central to our account of mathematics as philosophers like to suppose.
The danger in downplaying ideas of truth and existence is, of course, that mathematics might come to be seen as a game without any objective anchoring at all. But surely there is something more to it than that. Maddy doesn’t disagree. Rather, she suggests that it isn’t ontology that underpins the objectivity of mathematics and provides a check on our practice (it is not ‘a remote metaphysics that we access through some rational faculty’), but instead what does the anchoring are ‘the entirely palpable facts of mathematical depth’ (p. 137). So ‘[t]he objective ‘something more’ our set-theoretic methods track is the underlying contours of mathematical depth’ (p. 82).
This, perhaps, is the key novel turn in Maddy’s thought in this book. The obvious question it raises is whether the notion of mathematical depth is robust and settled enough really to carry the weight she now gives it. She avers that ‘[a] mathematician may blanch and stammer, unsure of himself, when confronted with questions of truth and existence, but on judgements of mathematical importance and depth he brims with conviction’ (p. 117). Really? Do we in fact have a single, unified phenomenon here, and shared confident judgements about it? I wonder.
Maddy herself writes: ‘A generous variety of expressions is typically used to pick out the phenomenon I’m after here: mathematical depth, mathematical fruitfulness, mathematical effectiveness, mathematical importance, mathematical productivity, and so on.’ (p. 81) We might well pause to ask, though, whether there is one phenomenon with many names here, or in fact a family of phenomena. It becomes clear that for Maddy seeking depth/fruitfulness/productivity also goes with valuing richness or breadth in the mathematical world that emerges under the mathematicians’ probings. But does it have to be like that?
In a not very remote country, Fefermania let’s say (here I’m picking up some ideas that emerged talking to Luca Incurvati), most working mathematicians—the topologists, the algebraists, the combinatorialists and the like—carry on in very much the same way as here; it’s just that the mathematicians with ‘foundational’ interests are a pretty austere lot, who are driven to try to make do with as little as they really need (after all, that too is a very recognizable mathematical goal). Mathematicians there still value making the unexpected connections we call ‘deep’, they distinguish important mathematical results from mere ‘brilliancies’, they explore fruitful new concepts, just like us. But when they turn to questions of ‘foundations’ they find it naturally compelling to seek minimal solutions, and look for just enough to suitably unify the rest of their practice, putting a very high premium on e.g. low-cost predicative regimentations. Overall, their mathematical culture keeps free invention remote from applicable maths on a somewhat tighter rein than here, and the old hands dismiss the baroquely extravagant set theories playfully dreamt up by their graduate students as unserious recreational games. Can’t we rather easily imagine that mind-set being the locally default one? And yet their local Second Philosopher, surveying the scene without first-philosophical prejudices, reflecting on the mathematical methods deployed, may surely still see her local mathematical practice as being in intellectual good order by her lights. Why not?
Supposing that story makes sense so far (I’m certainly open to argument here, but I can’t at the moment see what’s wrong with it) let’s imagine that Maddy and the Fefermanian Second Philosopher get to meet and compare notes. Will the latter be very impressed by the former’s efforts to ‘defend the axioms’ and thereby lure her into the wilder reaches of Cantor’s paradise? I really doubt it, at least if Maddy in the end has to rely on her appeal to mathematical depth. For her Fefermanian counterpart will riposte that her local mathematicians also value real depth (and fruitfulness when that is distinguished from profligacy): it is just that they also strongly value cleaving more tightly to what is really needed by way of rounding out the mainstream mathematics they share with us. Who is to say which practice is ‘right’ or even the more mathematically compelling?
Musings such as these lead me to suspect that if there is objectivity to be had in settling on our set-theoretic axioms, it will arguably need to be rooted in something less malleable, less contestable than Maddy’s frankly rather arm-waving appeals to ‘depth’.
Which isn’t to deny that there may be some depth to the phenomenon of mathematical depth: all credit to Maddy for inviting philosophers to think hard about its role in our mathematical practice. Still, I suspect she overdoes her confidence about what such reflections might deliver. But dissenting comments are most welcome!
Piled on my study floor — part of the detritus from clearing my faculty office — are some box files containing old lecture notes and the like. I’m going through, trashing some bundles of pages and scanning others for old times’ sake. (By the way, I can warmly recommend PDFscanner to any Mac user).
In particular, there is a long series of notes, some hundreds of pages, from a philosophy of language course that I must have given in alternate years, back in Aberystwyth. The set is dated around 1980 and would have been bashed out on an old steam typewriter. Those were the days. Some of the notes now seem misguided, some seem oddly skew to what now seem the important issues (such are the changes in philosophical fashion). But some parts even after all this time seem to read quite well and might be useful to students: so I’ll link a few excerpts — either in their raw form or spruced up a bit — to the ‘For students’ page. Here, for example, is some very introductory material on Grice’s theory of meaning. Having read too many tripos examination answers recently claiming e.g. that Searle refutes Grice, these ground-clearing introductory explanations might still provide a useful antidote!
If you want to insert LaTeX maths into a comment (‘cos the ascii mock-up of some bit of logical notation is just too horrible), then you now can. If ‘$ some-code $’ gives you what you want in standard LaTeX, then ‘$lat*x some-code $’ should work here (when you replace the ‘*’ with ‘e’ of course!).
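For example (an illustrative formula of my own choosing): typing ‘$lat*x \forall x(Fx \to Gx)$’, with the ‘*’ again replaced by ‘e’, should produce a nicely typeset $latex \forall x(Fx \to Gx)$.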
In defining a first-order syntax, there’s a choice-point at which we can go two ways.
Option (A): we introduce a class of sentence letters (as it might be, $latex \mathsf{P}, \mathsf{Q}, \mathsf{R}, \ldots$) together with a class of predicate letters for different arities (as it might be $latex \mathsf{F}_1, \mathsf{G}_1, \ldots$, $latex \mathsf{F}_2, \mathsf{G}_2, \ldots$, $latex \mathsf{F}_3, \mathsf{G}_3, \ldots$, where the subscript indicates the arity). The rule for atomic wffs is then that any sentence letter is a wff, as also is an $latex n$-ary predicate letter followed by $latex n$ terms.
Option (B): we just have a class of predicate letters for each arity $latex n \geq 0$ (as it might be $latex \mathsf{F}_n, \mathsf{G}_n, \ldots$ for each $latex n$). The rule for atomic wffs is then that any $latex n$-ary predicate letter followed by $latex n$ terms is a wff.
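For concreteness, here is a minimal sketch, in Haskell, of the two resulting grammars of atomic wffs. (This is purely illustrative and the names are mine: integer indices stand in for the lettered symbols, and the requirement that an $latex n$-ary letter gets exactly $latex n$ terms is left as a side condition rather than enforced by the types.)

```haskell
-- A minimal sketch of the two options: integer indices stand in for
-- the lettered symbols, and terms are just bare variables for brevity.
newtype Term = Var Int

-- Option (A): two syntactic categories of atomic wff.
data AtomA
  = SentLetter Int           -- an unstructured sentence letter
  | PredAppA Int Int [Term]  -- arity n, letter index, then n terms

-- Option (B): a single category; a sentence letter is just a 0-ary
-- predicate letter applied to an empty list of terms.
data AtomB = PredAppB Int Int [Term]  -- arity (possibly 0), index, terms
```

The contrast survives the formalisation: option (A) needs two constructors for its two categories, while option (B) makes do with one.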
What’s to choose? In terms of resulting syntax, next to nothing. On option (B) the expressions which serve as unstructured atomic sentences are decorated with subscripted zeros, on option (A) they aren’t. Big deal. But option (B) is otherwise that bit tidier. One syntactic category, predicate letters, rather than two categories, sentence letters and predicate letters: one simpler rule. So if we have a penchant for mathematical neatness, that will encourage us to take option (B).
However, philosophically (or, if you like, conceptually) option (B) might well be thought to be unwelcome. At least by the many of us who follow Great-uncle Frege. For us, there is a very deep difference between sentences, which express complete thoughts, and sub-sentential expressions, which get their content from the way they contribute to fixing the content of the sentences in which they feature. Wittgenstein’s Tractatus 3.3 makes the Fregean point in characteristically gnomic form: ‘Only the proposition has sense; only in the context of a proposition has a name [or predicate] meaning’.
Now, in building the artificial languages of logic, we are aiming for ‘logically perfect’ languages which mark deep semantic differences in their syntax. Thus, in a first-order language we most certainly think we should mark in our syntax the deep semantic difference between quantifiers (playing the role of e.g. “no one” in the vernacular) and terms (playing the role of “Nemo”, which in the vernacular can usually be substituted for “no one” salva congruitate, even if not always so as myth would have it). Likewise, we should mark in syntax the difference between a sentence (apt to express a stand-alone Gedanke) and a predicate (which taken alone expresses no complete thought, but whose sense is fixed in fixing how it contributes to the sense of the complete sentences in which it appears). Option (B) doesn’t quite gloss over the distinction — after all, there’s still the difference between having subscript zero and having some other subscript. However, this doesn’t exactly point up the key distinction, but rather minimises it, and for that reason taking option (B) is arguably to be deprecated.
It is pretty common though to officially set up first-order syntax without primitive sentence letters at all, so the choice of options doesn’t arise. Look for example at Mendelson or Enderton for classic examples. (I wonder if they ever asked their students to formalise an argument involving e.g. ‘If it is raining, then everyone will go home’?). Still, there’s a second issue on which a choice is made in all the textbooks. For in an analogous way, in defining a first-order syntax, there’s another forking path.
Option (C): we introduce a class of constants (as it might be, $latex \mathsf{a}, \mathsf{b}, \mathsf{c}, \ldots$); we also have a class containing function letters for each arity (as it might be $latex \mathsf{f}_1, \mathsf{g}_1, \ldots$, $latex \mathsf{f}_2, \mathsf{g}_2, \ldots$, where again the subscript indicates the arity). The rule for terms is then that any constant is a term, as also is an $latex n$-ary function letter followed by $latex n$ terms, for $latex n \geq 1$.
Option (D): we only have a class of function letters for each arity $latex n \geq 0$ (as it might be $latex \mathsf{f}_n, \mathsf{g}_n, \ldots$ for each $latex n$). The rule for terms is then that any $latex n$-ary function letter followed by $latex n$ terms is a term, for $latex n \geq 0$.
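Again, purely as an illustrative sketch in the same spirit as before (the names are mine; variables are omitted for brevity, and the arity side condition is again left unenforced by the types):

```haskell
-- Option (C): constants and function letters as separate categories.
data TermC
  = ConstC Int               -- an unstructured constant
  | FunAppC Int Int [TermC]  -- arity n >= 1, letter index, then n terms

-- Option (D): a single category; a constant is just a 0-ary function
-- letter applied to an empty list of terms.
data TermD = FunAppD Int Int [TermD]  -- arity (possibly 0), index, terms
```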
What’s to choose? In terms of resulting syntax, again next to nothing. On option (D) the expressions which serve as unstructured terms are decorated with subscripted zeros, on option (C) they aren’t. Big deal. But option (D) is otherwise that bit tidier. One syntactic category, function letters, rather than two categories, constants and function letters: one simpler rule. So mathematical neatness encourages many authors to take option (D).
But again, we might wonder about the conceptual attractiveness of this option: does it really chime with the aim of constructing a logically perfect language where deep semantic differences are reflected in syntax? Arguably not. Isn’t there, as Great-uncle Frege would insist, a very deep difference between directly referring to an object and calling a function (whose application to one or more objects then takes us to some object as value)? So shouldn’t a logically perfect notation sharply mark the difference in the devices it introduces for referring to objects and calling functions respectively? Option (D), however, downplays the very distinction we should want to highlight. True, there’s still the difference between having subscript zero and having some other subscript. However, this again surely minimises a distinction that a logically perfect language should aim to highlight. That seems a good enough reason to me for deprecating option (D).
Here’s a small niggle that’s arisen in rewriting a very early chapter of my Gödel book, and also in reading a couple of terrific blog posts by Tim Gowers (here and here).
We can explicitly indicate that we are dealing with e.g. a one-place total function from natural numbers to natural numbers by using the standard notation for giving domain and codomain thus: $latex f\colon\mathbb{N}\to\mathbb{N}$. What about two-place total functions from numbers to numbers, like addition or multiplication?
“Easy-peasy, we indicate them thus: $latex f\colon\mathbb{N}^2\to\mathbb{N}$.”
But hold on! $latex \mathbb{N}^2$ is standard shorthand for $latex \mathbb{N}\times \mathbb{N}$, the cartesian product of $latex \mathbb{N}$ with itself, i.e. the set of ordered pairs of numbers: and an ordered pair is standardly regarded as one thing with two members, not two things. So a function from $latex \mathbb{N}^2$ to $latex \mathbb{N}$ is in fact a one-place function that maps one argument, an ordered pair object, to a value, not (as we wanted) a two-place function mapping two arguments to a value.
“Ah, don’t be so pernickety! Given two objects, we can find a pair-object that codes for them, and we can without loss trade in a function from two objects to a value to a related function from the corresponding pair-object to the same value.”
Yes, sure, we can eventually do that. And standard notational choices can make the trade invisible. For suppose we use ‘$latex (m, n)$’ as our notation for the ordered pair of $latex m$ with $latex n$; then ‘$latex f(m, n)$’ can be parsed either way, as representing a two-place function with arguments $latex m$ and $latex n$, or as a corresponding one-place function with the single argument $latex (m, n)$. But the fact that the trade between the two-place and the one-place function is glossed over doesn’t mean that it isn’t being made. And the fact that the trade can be made (even staying within arithmetic, using a pairing function) is a result and not quite a triviality. So if we are doing things from scratch — including proving that there is a pairing function that matches two things with one thing in such a way that we can then extract the two objects we started with — then we do need to talk about two-place functions, no? For example, in arithmetic, we show how to construct a pairing function from the ordinary school-room two-place addition and multiplication functions, not some surrogate one-place functions!
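To illustrate with the standard textbook construction (not necessarily the one any given presentation uses): the Cantor pairing function $latex \mathrm{pair}(m, n) = \frac{(m+n)(m+n+1)}{2} + n$ is built directly from the school-room two-place addition and multiplication functions (plus an innocuous halving of a number guaranteed to be even), and it is a bijection, so the original $latex m$ and $latex n$ are uniquely recoverable from the single value $latex \mathrm{pair}(m, n)$.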
So what should be our canonical way of indicating the domains (plural) and codomain of e.g. a two-place numerical function? An obvious candidate notation is $latex f\colon\mathbb{N}, \mathbb{N} \to\mathbb{N}$. But I haven’t found this notation in use, nor indeed any alternative.
Assuming it’s not the case that I (and one or two mathmos I’ve asked) have just missed a widespread usage, this raises the question: why is there this notational gap?
The Belcea Quartet were playing in Cambridge again a couple of days ago, in the wonderfully intimate setting of the Peterhouse Theatre. They are devoting themselves completely to Beethoven for a couple of years, playing all-Beethoven concerts, presumably working up to recording a complete cycle. Their programme began with an ‘early’ and a ‘middle’ Quartet (Op. 18, no. 6, and Op. 95). The utter (almost magical) togetherness, the control, the range from haunting spectral strangeness to take-no-prisoners wildness, the consistent emotional intensity, were just out of this world.
The violist Krzysztof Chorzelski gave a short introduction before the concert and he said how draining they find it to play the Op. 95; and I’m not sure that the Op. 127 after the interval caught fire in quite the same way (though in any other circumstance you’d say it was the terrific performance you’d expect from the Belcea). But the first half of the evening was perhaps the most stunning live performance of any chamber music that I’ve ever heard. I’ve been to a lot of concerts by the truly great Lindsays in their heyday, but this more than bore comparison. Extraordinary indeed. The recordings when they come should be something else.
Meanwhile, let’s contain our impatience! Here are some other recordings to recommend that can be mentioned in the same breath — the new two-CD set of Schubert from Paul Lewis: the D. 850 Sonata, the great D. 894 G major, the unfinished ‘Reliquie’ D. 840, the D. 899 Impromptus — and last but very certainly not least the D. 946 Klavierstücke (try the second of those last pieces for something magical again). By my lights, simply wonderful. I’ve a lot of recordings of this repertoire, but these performances are revelatory. Which is a rather feebly inarticulate response, I do realize — sorry! But if you love Schubert’s piano music then I promise that this is just unmissable.