I missed Mic Detlefsen’s talk, and arrived half-way through Steve Awodey’s “Homotopy type theory and univalent foundations”, which was a beautifully presented advertising pitch for the project you can read a lot more about here.
James Studd (a graduate student in Oxford) then gave a paper on a bi-modal axiomatization of the iterative conception of set. Why bimodal? Start with the familiar informal tensed chat where we talk of forming levels of the hierarchy successively. We want to say things like: the new sets formed at a level must have members already formed at past levels, and once formed must persist at future levels. So it is pretty natural to try regimenting such past-looking and future-looking talk with a pair of tense-like modalities. Except, as Studd himself emphasised, they aren’t really temporal modalities. But then, of course, the problem is to make non-metaphorical sense of them. And (as Robert Black pointed out in discussion) the trouble is that the natural ways of doing this presuppose an understanding of talk of levels in the iterative hierarchy, so the modalities thus explained can’t really serve as giving an independent handle on how to understand the iterative conception. If instead, as I think Studd wants, we ultimately take the modalities as new primitives, then it is pretty unclear whether anything has been gained at all over e.g. the Boolos axiomatization of stage theory.
Next up, Michael Potter talked about whether Replacement can be justified, developing worries already expressed in his Set Theory and its Philosophy. Replacement amounts to two claims: roughly, (a) every set is a subset of a level, and (b) there is a level for every set size. In a Scott-style axiomatization, (a) is built in. But what would justify adding (b), which is equivalent to a reflection principle?
‘External’, regressive, arguments — from the supposed need to assume (b) if we are to prove some desired mathematical results outside set theory — don’t work: full Replacement isn’t needed for (enough) “ordinary” mathematics. So we need to appeal to ‘internal’ considerations, flowing from our conception of the set universe. A limitation of size principle might support Replacement, but is itself difficult to motivate. Meanwhile — and this is the key point Michael wants to press — the iterative conception doesn’t deliver the goods.
This particularly nice talk was followed by Hannes Leitgeb on ‘A theory of propositions and truth’. The idea is to model a theory of propositional functions and ‘aboutness’ on a theory of sets and membership: so the aim is an untyped but paradox-free theory, on which is grafted an untyped theory of truth-for-propositions, and then a theory of syntax and a derived theory of truth-for-sentences. Hannes ends up in the Tractarian position of having to say that the axioms of his theory don’t express propositions, a conclusion he cheerfully embraced. But I, for one, am quite unclear what the rules of the game are for this kind of constructional exercise in defining formal gadgets, and hence quite unclear whether such insouciance is justifiable.
In the final talk of the day, Alex Paseau talked interestingly about the possibility and scope of non-deductive knowledge of mathematical propositions. He had two different sorts of cases in mind. First, there’s what we might call ‘experimental’ evidence — as when we draw enough diagrams to convince ourselves that the perpendicular bisectors of a triangle’s sides intersect at a point. And then there is mathematical evidence such as probabilistic considerations, neighbouring theorems, etc. — as in our evidence for Goldbach’s Conjecture or the Riemann Hypothesis. These cases do seem very different to me, however, and I am rather inclined (without much by way of argument) to return a split verdict. That is to say, I am happier to say that I can get to know some propositions of Euclidean geometry by the experimental method than to say that I could get to know Goldbach’s Conjecture by totting up more evidence short of a proof. Perhaps that’s because of the nagging doubts engendered by the dim recollection that there are other number-theoretic conjectures whose only counter-examples are immense, and immensely sparse: so the thought remains that, for all we currently know — even augmented with more of the same — we could still be in for a nasty surprise.
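Paseau’s first sort of case can even be automated. Here is a minimal Python sketch (my illustration, not anything from the talk) of the ‘experimental’ method: for a thousand random triangles, it computes the point where two perpendicular bisectors meet and checks that this point is equidistant from all three vertices, i.e. lies on the third bisector too.

```python
import math
import random

def circumcenter(a, b, c):
    """Intersection of the perpendicular bisectors of sides AB and AC,
    via the standard determinant formula."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy)
          + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx)
          + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ux, uy

random.seed(0)
checked = 0
while checked < 1000:
    a, b, c = [(random.uniform(-10, 10), random.uniform(-10, 10))
               for _ in range(3)]
    # Skip near-degenerate (almost collinear) triangles.
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    if area2 < 1e-3:
        continue
    px, py = circumcenter(a, b, c)
    dists = [math.hypot(px - x, py - y) for (x, y) in (a, b, c)]
    # Equidistance from all three vertices means the point lies on
    # all three perpendicular bisectors: they are concurrent.
    assert max(dists) - min(dists) < 1e-6
    checked += 1
print("bisectors concurrent in all", checked, "random triangles")
```

Of course, a thousand random triangles is still only a sample; the interesting question is why this sample strikes us as representative in a way that a sample of even numbers does not.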
3 thoughts on “Cambridge Foundations, 2”
Replacement is an axiom, rather like Choice, that seems like it ought to be true anyway, given the sort of structure that the iterative conception gives us, especially if Replacement amounts to “(a) every set is a subset of a level, and (b) there is a level for every set size.” That it needs to be a separate axiom (scheme), and is tricky to justify, is rather odd, really. So Michael Potter’s talk sounds very interesting, to me. Is there an online version of it anywhere?
The talks were video-recorded, so they should shortly appear on the Faculty website. I’ll keep people posted.
Here is a possible psychological explanation for a greater willingness to accept experimental evidence for propositions in Euclidean geometry than to accept it for Goldbach’s Conjecture.
If you draw lots of triangles, you may feel that you have considered triangles drawn from right across the range of triangles. You will not have omitted any type that is obviously definable (isosceles, scalene, very sharply peaked, very flat, etc). You may therefore feel that you have considered a representative sample of triangles, even though you have only drawn a finite number of triangles, and there is an infinity of triangles waiting to be drawn.
If you check some even numbers, and find that each is the sum of two primes, you likewise only consider a finite number of cases, out of an infinity. But your sample is skewed towards the low end, and is to that extent not representative. Whatever the largest even number you have considered, the territory above it is vastly greater than the territory up to it. There might, for all you know, be some mathematical reason why any counter-examples should all lurk in numbers that are large, by the standards of those so far tested. Then the non-representative nature of your sample would matter. It would be as if you had only drawn triangles in which the smallest angle was 50 degrees or more, while counter-examples lurked in triangles that had very sharp peaks.
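The low-end skew described here is easy to see in practice. A brute-force check (a minimal Python sketch of my own, not part of the original discussion) can only ever sample even numbers in a tiny initial segment, far below the region where a hypothetical counter-example might lurk:

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def goldbach_witness(n):
    """Return a prime p such that p and n - p are both prime,
    or None if even n has no such decomposition."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p
    return None

# Check every even number from 4 up to 10,000: a large-looking
# sample that is nonetheless confined entirely to the low end.
failures = [n for n in range(4, 10_000, 2) if goldbach_witness(n) is None]
print(failures)  # [] -- no counter-example in this (tiny) initial segment
```

However far such a search is pushed, the territory above its ceiling remains infinitely larger than the territory below it, which is exactly the worry about representativeness.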
Of course, this is only a psychological explanation, not a justification. And I don’t suppose that it extends to evidence for Goldbach, Riemann, and the like which is not a matter of trying out lots of examples, but of considering related theorems in the relevant areas of mathematics. When, however, one thinks about the relevance of related theorems, rather than about lots of examples, one’s thinking is at a level of sophistication that ought to deny psychological factors any influence.