2013

Edmund de Waal at the Fitz

Image linked from Apollo Magazine’s article ‘Fragile Histories’ by Jon Sanders

There’s a wonderful small exhibition ‘On White: Porcelain Stories from the Fitzwilliam Museum’ by Edmund de Waal at the Fitz until Sunday 23 February 2014. There are three works by de Waal himself, and a series of cases in which he chooses some favourite pieces of porcelain from the museum’s collection, and comments illuminatingly on their significance and strangeness. A delight if you are in Cambridge and want to escape the madness of the town centre during the sales.

Intro to Formal Logic: seventh time lucky?

My Introduction to Formal Logic was published in 2003, and CUP’s initial print run was rather large, so I didn’t get the chance to correct the inevitable typos and thinkos until a reprint in 2009. By that time, needless to say, there were quite a few little presentational things I wanted to change, so I slipped in a load of minor rewritings too. This revised version has been reprinted a number of times. (Oh yes, but of course, I’m making an absolute fortune …)

Along the way, Joseph Jedwab kindly sent me an embarrassingly long list of further errors in the revised printings (I have to put my hand up to having introduced quite a number of these in making the “improvements” in the second printing: thankfully, they were nearly all minor typos). Eventually I had the opportunity to make the needed further corrections, and I’ve just picked up the seventh printing from the CUP bookshop. I hope this latest version is a heck of a lot cleaner than the previous ones. Fingers crossed.

A revised printing is not a new edition with a new ISBN, so I’m afraid you can’t put in a bookshop request for the seventh printing and be guaranteed to get one. But eventually the new version will propagate through the distribution system, and jolly good it is too. Or at any rate, reading through while making corrections and looking for any that Joseph Jedwab had missed (none, as far as I could find), I found I didn’t actually hate the book. Distance lends enchantment, eh?

TYL, #19: the Teach Yourself Guide reorganized and updated

Sooner than I was planning, there’s now yet another update for the Teach Yourself Logic Guide. So here is Version 9.4 of the Guide (pp. iii + 72).

The main change — though it is a significant one, which is why it is worth propagating this new version ahead of schedule — is that the Guide has been reorganized to make it easier to navigate, and hopefully less daunting. Topics on the standard “mathematical logic” curriculum (of interest to mathematicians and philosophers alike) are now separated more sharply from topics likely to be of more specialized interest to some philosophers. I’ve also added comments on books by Devlin, Hodel, Johnstone, and Sider.

As I’ve said before, do spread the word to anyone you think might have use for the Guide. As always, there’s a stable URL for the page which links to the latest version, http://logicmatters.net/students/tyl/. You can reliably use that link in reading lists, or on your website’s resources page for graduate students, etc.

On Sider’s Logic for Philosophy — 2

Suppose that you have some background in classical first-order logic, and want to learn something about modal logic (including quantified modal logic) and, relatedly, about Kripke semantics for intuitionistic logic. Then the second half of Sider’s Logic for Philosophy certainly aims to cover the ground, and it will tell you about formal theories of counterfactuals too. How well does it succeed, especially if you skip the first half of the book and dive straight in, starting with Ch. 6?

These later chapters in fact seem to me to work fairly well (assuming a logic-competent reader). Compared with the early chapters with their inconsistent levels of coverage and sophistication, the discussion here develops more systematically and at a reasonably steady level of exposition. There is a lot of (acknowledged) straight borrowing from Hughes and Cresswell, and student readers would probably do best by supplementing Sider with a parallel reading of that approachable classic text. But if you want a pretty clear explanation of Kripke semantics together with an axiomatic presentation of some standard modal propositional systems, and want to learn e.g. how to search systematically for countermodels, Sider’s treatment could well work as a basis. And then the later treatments of quantified modal logic (and some of the conceptual issues they raise) are also lucid and tolerably approachable.

This is a game of two halves then. Before the interval, Logic for Philosophy is pretty scrappy and I wouldn’t recommend it. After the interval, when Sider plays through some standard modal logics, things look up. I wouldn’t have him at the top of the league for modality-for-philosophers (see the current version of the Guide for preferred recommendations); but Sider’s book-within-a-book turns in a respectable performance.

On Sider’s Logic for Philosophy — 1

It hasn’t been mentioned yet in the Teach Yourself Logic Guide, so I’ve predictably been asked a fair number of times: what do I think about Ted Sider’s Logic for Philosophy (OUP 2010)? Isn’t it a rather obvious candidate for being recommended in the Guide?

Well, I did see some online draft chapters of the book a while back and wasn’t enthused. But, yes, I am more than overdue to take a look at the published version. So here goes …

The book divides almost exactly into two halves. The first half (some 132 pages), after an initial chapter ‘What is logic?’, reviews classical propositional and predicate logic and some variants. The second half (just a couple of pages longer) is all about modal logics. I’ll look at the first half of the book in this post, and leave the second half (which looks a lot more promising) to be dealt with in a follow-up.

OK. I have to say that the first half of Sider’s book really seems to me to be ill-judged (showing neither the serious philosophical engagement you might hope for nor much mathematical appreciation).

Here is one preliminary point. The intended audience for this book is advanced philosophy students, so presumably students who have read or will read their Frege and their Tractatus. So what, for example, will they make of being baldly told in §1.8, without defence or explanation, that relations are in fact objects (sets of ordered pairs), and that functions are objects too (more sets of ordered pairs)? There’s nothing here about intension and extension, or about why we should identify functions with their graphs. We are equally baldly told to think of binary functions as one-place functions on ordered pairs (and the function that maps two things to their ordered pair …?). Puzzled philosophers might well want to square what they have learnt from their Frege and their Wittgenstein with modern logical practice as they first encountered it in their introductory logic courses: so you’d expect a second-level book designed for such students to proceed more cautiously and address the obvious worries. But that doesn’t happen here.

And in fact we get a pretty skewed description of modern logic anyway, even from the very beginning, starting with the Ps and Qs. Sider seems stuck with thinking of the Ps and Qs as Mendelson does (Mendelson’s being the one book which Sider says in the introduction he is drawing on for the treatment of propositional and predicate logic). But Mendelson’s Quinean approach is actually quite unusual among logicians, and certainly doesn’t represent the shared common view of ‘modern logic’. I won’t rehearse the case again now, as I’ve explained it at length here. But students need to know that there isn’t a uniform single line to be taken here.

When Sider turns to looking at formal systems for propositional logic, we get sequent proofs in pretty much the style of Lemmon’s book — which, as anyone who spent their youth teaching a Lemmon-based course knows, students do not find user-friendly. Why do things this way? And how are we to construe such a system? One natural way of understanding what is going on is that the system is a formalized meta-theory about what follows from what in a formal object-language. But no: according to Sider, sequent proofs aren’t metalogic proofs because they are proofs in a formal system. Really? (Has Sider not noticed that in Mendelson too, the formal proofs are all metalogical?)

OK, so the philosophical student is introduced to an unfriendly version of a sequent calculus for propositional logic, and then to an even more unfriendly Hilbertian axiomatic system. Good things to know about, but probably not when done like this. And not — in a book addressed to puzzled philosophers — without a lot more discussion of how this all hangs together with what the student is likely to already know about, natural deduction and/or a tableau system. And not without a better discussion, too, of the way the conception of logic changed between e.g. Principia and Gentzen, from being seen as regimenting a body of special truths to being seen as regimenting inferential practice. Further, the decisions about what to cover and what not to cover are pretty inexplicable. For example, why spend pages actually proving the deduction theorem for axiomatic propositional logic, and later give just one paragraph to the compactness theorem for FOL, which students might really need to know about and understand some applications of?

Predicate logic is then dealt with by an axiomatic system (apparently because this approach will come in handy in the second half of the book — I’m beginning to suspect that the real raison d’être of the book is indeed the discussion of modal logic). I can’t think this is the best way to equip philosophers who have a perhaps shaky grip on formal ideas with a better understanding of first-order logic. The explanation of the semantics of a first-order language isn’t bad, but not especially good either. This certainly isn’t the go-to treatment for giving philosophers what they need.

True, a nice feature of this part of Sider’s book is that it does have a discussion of some non-classical propositional logics, and has a little about descriptions and free logic. But actually the philosophically serious issues of intuitionistic logic and second-order logic are dealt with far too quickly to be useful, so the breadth of Sider’s coverage goes with superficiality.

I could go on. But the headline summary about the first part of Sider’s book is that I found it (whether wearing my mathematician’s or philosopher’s hat) irritating and unsatisfactory. Sorry to be carping!

Comments from those who have used/taught/learnt from the book?

The Škampa Quartet play Mozart and Smetana

We’d booked to see the Pavel Haas Quartet play at lunchtime at the Wigmore Hall today, but they have had to delay restarting their concert schedule, and so we heard the Škampa Quartet as their truly excellent stand-ins. They played Mozart’s Quartet in D, K575 with considerable finesse and charm and persuasiveness. But what made the concert was a performance of the first of Smetana’s quartets, ‘From my life’. Passionate, intimate, dancing and lyrical by turns, the Škampa played with true heart and soul (and wonderful togetherness). This was a very fine performance indeed. We heard the Pavel Haas play the Smetana last year, and that was perhaps even more remarkable an experience: but the Škampa in their current line-up were definitely more than worth the day trip to London to hear.

I’m mentioning this because you can hear the concert for the next week on BBC Radio 3, here.

TYL, #18: another update for the Teach Yourself Logic Guide

After a bit of a hiatus, there’s now another update for the Teach Yourself Logic Guide. So here is Version 9.3 of the Guide (pp. iii + 68). Once more, do spread the word to anyone you think might have use for it.

And by the way, there’s a stable URL for the page which always links to the latest version, http://logicmatters.net/students/tyl/, which you can use in reading lists, or on your website’s resources page for graduate students, etc.

The main new addition is a two-page overview of Peter Hinman’s blockbuster, Fundamentals of Mathematical Logic. But there are also a couple of further reviews in the Big Books appendix, and as always there has been some minor tinkering throughout.

The previous version from 1 September has been downloaded over 2500 times in ten weeks. As I’ve said before, I’m therefore encouraged to continue occasionally revising and expanding the Guide as people seem to be finding it useful. So keep watching this space …

Gödel’s incompleteness theorems, on SEP at last

The Stanford Encyclopedia of Philosophy (what would we do without it?) has at last filled one of its notable gaps in coverage: there is now an entry by Panu Raatikainen on Gödel’s Incompleteness Theorems. It isn’t quite how I would have written such an entry, but it is clear and very sensible, and certainly won’t lead the youth too badly astray (which needs to be said when it comes to discussions of these things!).

Here, though, are a few comments. I start with three places where the discussion could perhaps mislead:

(1) There’s an early section on ‘The Relevance of the Church-Turing Thesis’, which starts

Gödel originally only established the incompleteness of a particular though very comprehensive formalized theory P, a variant of Russell’s type-theoretical system … it was, at the time, unclear just how general [his result] really was … What was still missing was an analysis of the intuitive notion of decidability, needed in the characterization of the notion of an arbitrary formal system.

This rather suggests that we only get a clean general result round about 1936, when we have homed in on a sharp general account of effective decidability. But of course, Gödel’s original paper already gives a perfectly sharp general formulation: if the axioms and rules of inference of a theory T are primitive-recursively definable, and T represents every primitive recursive relation, then T is incomplete so long as it is omega-consistent (and indeed there will be arithmetical sentences undecidable by T). Moreover, the restriction here to primitive-recursively axiomatized systems is no real restriction. And I’m not thinking here of the technical point that any recursively axiomatized theory can be primitive-recursively re-axiomatized; there’s a more humdrum point — any effectively axiomatized system you are likely to dream up will already be primitive-recursively axiomatized (unless we are trying to be perverse, we don’t ordinarily define a class of axioms, for example, in such a way that it will require an open-ended search to determine whether a given sentence belongs to the class). So: I think it would be better to say that Gödel 1931 already gives a beautifully general result. And the fact that we can extend it from primitive-recursively axiomatized to recursively axiomatized theories is pretty much a technical footnote.
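
In other words — and this is my schematic formulation, not a quotation from Gödel — the 1931 paper already establishes:

If T is a primitive-recursively axiomatized theory which represents every primitive recursive relation, then, so long as T is omega-consistent, there is an arithmetical sentence G such that neither T ⊢ G nor T ⊢ ¬G.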

(2) In the section describing how the First Theorem might be proved, we read

The next and perhaps somewhat surprising ingredient of Gödel’s proof is the following important lemma … The Diagonalization Lemma

Let A(x) be an arbitrary formula of the language of F with only one free variable. Then a sentence D can be mechanically constructed such that

F ⊢ D ↔ A(⌈D⌉).

Well, not Gödel’s proof. As in fact Raatikainen himself notes later.

As to the Diagonalization Lemma, actually Gödel himself originally demonstrated only a special case of it, that is, only for the provability predicate. The general lemma was apparently first discovered by Carnap 1934.

But that’s still wrong on both counts. Gödel didn’t state even the restricted version in 1931, nor in his 1934 lectures. Nor is it stated in Carnap’s 1934 Logische Syntax der Sprache.

We need to distinguish two different claims, which we might call the Diagonal Equivalence and the Diagonalization Lemma. The Lemma is the one we have just met, and it is a syntactic claim about what can be proved. The Equivalence is a semantic claim, to the effect that given an arbitrary formula A(x) we can construct a sentence D such that D is true on the intended interpretation just in case A(⌈D⌉) is. And it is this semantic Equivalence claim that Gödel refers to in his 1934 lectures.
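
To display the contrast (in my rough formulation, with ⌈D⌉ the numeral for D’s Gödel number):

Diagonalization Lemma (syntactic): given A(x), we can construct a sentence D such that F ⊢ D ↔ A(⌈D⌉).

Diagonal Equivalence (semantic): given A(x), we can construct a sentence D such that D is true on the intended interpretation if and only if A(⌈D⌉) is.

For a sound theory F, the Lemma entails the corresponding Equivalence; but the Equivalence, being a claim about truth rather than provability, doesn’t by itself give us the Lemma.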

Now, in §35 of his 1934, Carnap neatly proves the general Diagonal Equivalence, and in §36 he uses this, together with the assumption that non-provability is expressible by an open wff of his Language II, to show that Language II is incomplete. But note that establishing the semantic Diagonal Equivalence is not the same as establishing the Diagonalization Lemma. And Carnap doesn’t actually state or prove the latter in his §35. And when we turn to §36, we see that Carnap’s argument for incompleteness is the simple semantic argument depending on the assumed soundness of his Language II. So at this point Carnap is giving a version of the semantic incompleteness argument sketched in the opening section of Gödel 1931 (the one that appeals to a soundness assumption), and not a version of Gödel’s official syntactic incompleteness argument which appeals to omega-consistency. Indeed, Carnap doesn’t even mention omega-consistency in the context of his §36 incompleteness proof. He doesn’t need to.
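
For the record, that simple semantic argument runs roughly like this (my compressed reconstruction, not Carnap’s own wording). Suppose non-provability in the theory is expressed by the open wff ¬Prov(x). By the Diagonal Equivalence, there is a sentence D which is true just in case ¬Prov(⌈D⌉) is true, i.e. just in case D is unprovable. If D were provable it would be false, contradicting soundness; so D is unprovable, and hence true. But then ¬D is false, so by soundness again ¬D is unprovable too. Whence incompleteness — given only soundness and expressibility, with no appeal to omega-consistency.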

Anyway, neither Gödel in 1931 or 1934, nor Carnap in 1934, states the modern syntactic Diagonalization Lemma (as opposed to the semantic Equivalence). And I’m not sure who first did.

(3) We’ve just touched on the point that Gödel 1931 has two proofs that there are undecidable sentences in sufficiently rich theories, the first one depending on the semantic assumption that the theory is sound (and the weak assumption that it can express primitive recursive relations), the other depending on the syntactic assumption that the theory is omega-consistent (and the stronger assumption that it can represent p.r. relations). Students ought to know this, or they will get confused when they read some other discussions.

Well, Raatikainen does bury some relevant remarks at the end of his piece. He speaks of

a weak version of the incompleteness result: the set of sentences provable in arithmetic can be defined in the language of arithmetic, but the set of true arithmetical sentences cannot; therefore the two cannot coincide. Moreover, under the assumption that all provable sentences are true, it follows that there must be true sentences which are not provable. This approach, though, does not exhibit any particular such sentence.

But this perhaps forgets that Gödel’s own initial semantic argument (p. 149 in Collected Works Vol. I) does of course exhibit a particular undecidable sentence.

A couple more observations. First, I thought the section on ‘Feferman’s Alternative Approach to the Second Theorem’ was probably too compressed to be very useful. Second, it is natural to ask: are there ‘ordinary’ arithmetical statements which (like Gödel sentences) we can frame in the language of PA, which are also unprovable in PA, though we can prove them true using richer resources? The section on ‘Concrete Cases of Unprovable Statements’ addresses this, briefly, but then moves on to talking about statements in richer languages, unprovable in richer theories. Students might be puzzled about whether there is supposed to be any unity to these examples and what, if anything, they are supposed to show that they haven’t already learnt from Gödel’s theorems.

But all that said, it is excellent to see the SEP filling an obvious gap with an accessible and (mostly) very clear discussion!

Does mathematics need a philosophy? — 3

Some final thoughts after the TMS meeting last week (again, mostly intended for local mathmos rather than the usual philosophical readers of this blog …).

Consider again that rather unclear question ‘Does mathematics need a philosophy?’. Here’s another way of construing it:

Are mathematicians inevitably guided by some general conception of their enterprise —  by some ‘philosophy’, if you like —  which determines how they think mathematics should be pursued, and e.g. determines which modes of argument they accept as legitimate?

Both Imre Leader and Thomas Forster touched on this version of the question in very general terms. But to help us to think about it some more, I suggest it is illuminating to go into a bit of detail and revisit a genuine historical debate.

We need a bit of jargon first (which comes from Bertrand Russell). A definition is said to be impredicative if it defines an object E by means of a quantification over a domain of entities which includes E itself. An example: the standard definition of the infimum of a set X is impredicative. For we say that y = inf(X) if and only if y is a lower bound for X, and for any lower bound z of X, z ≤ y. And note that this definition quantifies over the lower bounds of X, one of which is the infimum itself (assuming there is one).
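
Spelled out in quantifier form (just regimenting the prose definition above), the definition reads:

y = inf(X) if and only if ∀x(x ∈ X → y ≤ x) ∧ ∀z(∀x(x ∈ X → z ≤ x) → z ≤ y).

The second conjunct quantifies over all the lower bounds z of X — a totality which includes inf(X) itself, which is exactly what makes the definition impredicative.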

Now Poincaré, for example, and Bertrand Russell following him, famously thought that impredicative definitions are actually as bad as more straightforwardly circular definitions. Such definitions, they supposed, offend against a principle banning viciously circular definitions. But are they right? Or are impredicative definitions harmless?

Well, local hero Frank Ramsey (and Kurt Gödel after him) equally famously noted that some impredicative definitions are surely entirely unproblematic. Ramsey’s example: picking out someone as the tallest man in the room (the person such that no one in the room is taller) is picking him out by means of a quantification over the people in the room who include that very man, the tallest man. And where on earth is the harm in that? Surely, there’s no harm at all! In this case, the men in the room are there anyway, independently of our picking any one of them out. So what’s to stop us identifying one of them by appealing to his special status in the plurality of them? There is nothing logically or ontologically weird or scary going on.

Likewise, it would seem, in other contexts where we take a realist stance, and where we suppose that – in some sense – reality already supplies us with a fixed totality of the entities to quantify over. If the entities in question are ‘there anyway’, what harm can there be in picking out one of them by using a description that quantifies over some domain which includes that very thing?

Things are otherwise, however, if we are dealing with some domain with respect to which we take a less realist attitude. For example, there’s a line of thought which runs through Poincaré, an early segment of Russell, the French analysts such as Borel, Baire, and Lebesgue, and then is particularly developed by Weyl in his Das Kontinuum: the thought is that mathematics should concern itself only with objects which can be defined. [This connects with something Thomas Forster said, when he rightly highlighted the distinctively modern conception of a function as any old pairing of inputs and outputs, whether we can define it or not — this is the ‘abstract nonsense’, as Thomas called it, that the tradition from Poincaré to Weyl and onwards was standing out against.] In that tradition, to quote the later great constructivist mathematician Errett Bishop,

A set [for example] is not an entity which has an ideal existence. A set exists only when it has been defined.

On this line of thought, defining a set is – so to speak – defining it into existence. And from this point of view, impredicative definitions will indeed be problematic. For the definitist thought suggests a hierarchical picture. We define some things; we can then define more things in terms of those; and then define more things in terms of those; and so on and on. But what we can’t do is define something into existence by impredicatively invoking a whole domain of things already including the very thing we are trying to define into existence. That indeed would be going round in a vicious circle.

So the initial headline thought is this. If you are full-bloodedly realist — ‘Platonist’ — about some domain, if you think the entities in it are ‘there anyway’, then you’ll take it that impredicative definitions over that domain can be just fine. If you are some stripe of anti-realist or constructivist, you will probably have to see impredicative definitions as illegitimate.

Here then, we have a nice example where your philosophical Big Picture take on mathematics (‘We are exploring an abstract realm which is “there anyway”’ vs. ‘We are together constructing a mathematical universe’) does seem to make a difference to what mathematical devices you can, on reflection, take yourself legitimately to use. Hence the fact that standard mathematics is up to its eyes in impredicative constructions rather suggests that, like it or not, it is committed to a kind of realist conception of what it is up to. So yes, it seems that most mathematicians are implicitly caught up in some general realist conception of their enterprise, as Imre and Thomas in different ways came close to suggesting. In the terms of the previous instalment, we can’t, after all, so easily escape entangling with some of the Big Picture issues by saying ‘not our problem’.

Return to the story I gestured at in the last instalment about what I called the Battle of the Isms. I rather cheated by then assuming that the game was taking mathematics uncritically as it is and seeing how it fits into the rest of our story of the world and of our cognitive grasp of the world. In other words, I took it for granted that the enterprise of trying to get an overview, trying to understand how mathematics fits together with other forms of enquiry, isn’t going to produce some nasty surprises and reveal that the mathematicians might somehow have been doing some of it wrong, and need to mend their ways! But as we’ve just been noting, historically that isn’t how it was at all. So while Logicism (which Imre mentioned) and Hilbert’s sophisticated version of Formalism were conservative Isms, which were supposed to give us ways of holding on to the idea that — despite its very peculiar status — classical mathematics is just fine as it is, these positions were up against some radically critical strands of thought. These included, famously, Brouwer’s Intuitionism as well as Weyl’s Predicativism. The critics argued that the classical maths of the late nineteenth century had over-reached itself in descending into ‘abstract nonsense’ (which was why we got a crisis in foundations when the set-theoretic and other paradoxes were discovered), and that to get out of the mess we need to stick to more constructivist/predicativist styles of reasoning, recognising that the world of mathematics is in some sense our construction (which you might think has something to do with how we can get to know about it).

Now, that’s more than a little crude, and we can’t follow those debates any further here. As a thumbnail history, though, what happened is that as far as mathematical practice is concerned the conservative classical realists won. Predicative analysis, for example, survives in a small back room of the mansion of mathematics, where its practitioners still like to show off how far you can get hopping on one leg, with an arm tied behind your back — as the lovers of abstract nonsense, as Thomas described himself, might put it. Though by the way, it very importantly turns out that predicative analysis is all that science actually needs (so we don’t have, so to speak, external, practical reasons for going classical). But the victory of the classical realists wasn’t a conceptually well-motivated philosophical victory — there are such things, sometimes, but this certainly wasn’t one of them. The conceptual debates spluttered on and on, but the magisterial authority of Hilbert and others was enough to convince most mathematicians that they needn’t change their way of doing things. So they didn’t.

Yet it seems that we can imagine things having gone differently on some Twin Earth, where the internal culture (the philosophy, if you like) of mathematicians developed differently over a hundred years, so that low-commitment approaches were particularly prized, and the constructivists/predicativists got to occupy the main rooms of the mansion, dishing out the grants to their students, while the lovers of abstract nonsense were banished to the attics to play with their wild universe of sets in the Department of Recreational Mathematics. Or if we can’t imagine that, why not?

There’s a lot more to be said. But maybe, just maybe, it does behove mathematicians — before they scorn the philosophers — to reflect occasionally that it really isn’t quite so obvious that our mathematical practice is free from deep underlying philosophical presumptions (even in a broad, Big Picture sense).
