Conceptions of Set

Luca Incurvati’s Conceptions of Set, 15

My comments on Ch. 6 ended inconclusively. But I’ll move on to say just a little about the final chapter of the book, Ch. 7 ‘The Graph Conception’.

Back to the beginning. On the iterative conception, the hierarchy of sets is formed in stages; at each new stage the ‘set of’ operation is applied to some objects (individuals and/or sets) already available at that stage, and outputs a new object. This conception very naturally leads to the idea that sets can’t be members of themselves (or members of members of themselves, etc.), which in turn naturally gives us the Axiom of Foundation.

But now turn the picture around. Instead of thinking of a set as (so to speak) lassoing already available objects, what if we think, top-down, of a set as like a dataset pointing to some things (zero or more of them)? On this picture, being given a set is like being given a bundle of arrows pointing to objects (via the ‘has as member’ relation) — and why shouldn’t one of these arrows loop round so that it points to the very object which is its source (so we have a set one of whose members is that set itself)?

Elaborating this idea a bit more, we’ll arrive at what we might call a graph conception of set. Roughly: take a root node with directed edges linking it to zero or more nodes which in turn have further directed edges linking them to nodes, etc. Then this will be a picture showing the membership structure of a pure set with its members and members of members etc. (a terminal node with no arrows out picturing the empty set); and any pure set can be pictured like this. But there is nothing in this conception as yet which rules out edges forming short or long loops. So on this conception, the Axiom of Foundation will fail.

Talking of graphs in this way takes us into the territory of the non-well-founded set theories introduced by Peter Aczel. And these are the focus of Luca’s interesting chapter. I’m not going to go into any real detail here, because much of the chapter is already available as a standalone paper, The Graph Conception of Set, J. Philosophical Logic 2014. But in §7.2, Luca explains — a bit too briskly for some readers, I imagine — Aczel’s four systems. Then in §7.3, he argues that these systems are not mere technical curiosities but arise out of what we’ve just called a graph conception of set: think of sets top-down as objects which may have members (as I put it, point to some objects) which may have members etc., and “one can just take sets to be what is depicted by graphs of the appropriate form”. More specifically, in §7.4 Luca argues that the particular anti-foundation axiom AFA is very naturally justified on the graph conception. In §7.5, it is argued that some other core set-theoretic axioms are equally naturally justified on the graph conception, while Replacement and Choice remain outliers, as on the iterative conception.
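
For readers who want the axiom itself on the table, here is the standard formulation as I recall it (my summary, not a quotation from the book): a decoration of a graph is a function $latex d$ assigning a set to each node in such a way that $latex d(n) = \{d(m) : n \to m\}$; that is, the set assigned to a node is the set of the sets assigned to the nodes its edges point to. AFA then says: every graph has exactly one decoration. The uniqueness half does the distinctive work, settling identity questions about non-well-founded sets. For instance, the one-node graph whose single edge loops back to its source has exactly one decoration, the set $latex \Omega = \{\Omega\}$, so on this conception there is exactly one such self-membered singleton.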

In the end, then, the claim is that ZFA (ZFC with Foundation replaced by AFA) is as well justified by the graph conception as ZFC is justified by the iterative conception. This is among the most original claims in the book, I think, and seems to me to be very well defended. In §§7.6-7.8, Luca fends off some objections to a set theory based on the graph conception, though he argues that the graph conception doesn’t so naturally accommodate impure sets with urelements. But §7.9 then worries that “ultimately, non-well-founded set theory must be justified by appealing to considerations that come from the theory of graphs. Thus, in this sense, non-well-founded set theory is not justificatory autonomous from graph theory. … [I]f set theory is to provide a foundation for mathematics in the sense … endorsed by many a set theorist, the iterative conception fares better than the graph conception.”

Now “faring better” isn’t “beating hands down”: but Luca will live with that. In the very brief Conclusion which follows Chapter 7, he is open to a “moderate form of pluralism about conceptions of set, according to which, depending on the goals one has, different conceptions of set might be preferable”. Nonetheless, Luca’s final verdict is conservative: “[W]hen it comes to the concept of set, and if set theory is to be a foundation for mathematics, then the iterative conception fares better than its currently available rivals. This vindicates the centrality of the iterative conception, and the systems it appears to sanction, in set theory and the philosophy of mathematics.”

“We shall not cease from exploration/And the end of all our exploring/Will be to arrive where we started/And know the place …” well, not perhaps for quite the first time, but better than we did! And when I’ve had the chance to think about things a bit more, I’ll edit these blogposts into a single booknote, and may also add some afterthoughts. But I’ve enjoyed reading Conceptions of Set and blogging about it a good deal, and I hope I’ve encouraged some of you to read the book (and all of you to ensure it is in your university library). Congratulations again to Luca!

Luca Incurvati’s Conceptions of Set, 14

We were considering the logical conception of set, according to which a set is the extension of a property. But how are we to understand ‘property’ here? In the last post, I mentioned David Lewis’s well-known theory of properties. If we adopted that theory, which sorts of property would sets be the extensions of? The ‘natural’ ones? — no, too few. The ‘abundant’ ones? — too many, it seems, unless we are just to fall back into the combinatorial conception. OK, perhaps Lewis’s isn’t the right choice of a theory of properties! But then what other account of properties gives us a suitable setting for developing a distinctive logical conception of set? Now read on …

Luca does mention the problem just noted about Lewisian abundant properties in his §1.8; but having remarked that this notion of property won’t serve the cause of a logical conception of set, he doesn’t, I think, offer much guidance about what notion of property will be appropriate. This seems a rather significant gap. (Given a prior conception of sets, we might aim to reverse-engineer a conception of properties such that sets can be treated as extensions of properties so conceived, as in effect Lewis does for his abundant properties: but we are here trying to go in the opposite direction, elucidating a conception of sets in terms of a prior notion of property that will surely itself need some clarification.)

Be that as it may. Let’s suppose we have settled on a suitable story about properties (which will presumably be type-disciplined, distinguishing the type of properties of objects from the type of properties of properties from the type of properties of properties of properties, etc.).

Now on the type-theoretic conception of the universe, the types are incommensurable. As Quine pointed out, this is an ontological division. But, at least on an immediate reading, when the types are collapsed [as in NF] this ontological division is removed: properties (of whatever order) are now objects, entities in the first-order domain. Thus, on this reading, NF becomes a theory of [objectified] properties and ∈ becomes a predication relation, by which a property can be predicated of other objects: xy is to be read as x has property y.

So the idea is, in particular, that we are to move from (i) a claim attributing a property P to the object a to the derivative type-shifted claim (ii) that a stands in the membership relation to an object (an extension or, as Luca says, an objectified property) associated with the property P.

But how tight is the association between a property and this associated object, the objectified property? The rhetoric of “objectification” might well suggest a non-arbitrary correlation between items of different types (as non-arbitrary as another type-shifting correlation, that between an equivalence relation and the objects introduced by an abstraction principle — prescinding from Caesar problems, it is surely not an accident that the equivalence relation ‘is parallel to’ gives rise by abstraction to directions rather than e.g. numbers). Luca suggests a different sort of comparison: we can think of the introduction of objectified properties as an ontological counterpart of the linguistic process of nominalization, where we go from e.g. the property-ascribing predicate ‘runs’ to the nominal expression ‘running’. This model too suggests some kind of internal connection between a property and its objectification — after all, it isn’t arbitrary that ‘runs’ goes with ‘running’ as opposed to e.g. ‘sitting’! If we are going to run with this model(!) then there should similarly be a non-arbitrary connection between the property you have when you run and the object that is its objectification.

A page later, however, we get what seems to be a crashing of the gears. Luca tells us that sets are objectified properties in the sense of proxies for properties — and

a particular association of properties with objects is arbitrary: there is no reason for thinking of an object as a proxy for a certain property rather than another one.

Really? Well, we don’t want to be quibbling about terminology, but it does still seem to me a bit of a stretch to call a mere proxy an objectification (for that surely does still sound like some kind of internal ontological relationship). If I arbitrarily associate the properties of being red, being blue, and being yellow with respectively the numbers 1, 2, and 3 as proxies, aren’t the numbers more like mere labels? And this now suggests a picture introduced by Randall Holmes in motivating NF: a singleton is like a label for its ‘member’ (different objects get different labels), and a set comprising some objects, having their singletons as parts, is thus like a catalogue of these objects. Now, this conception gives rise to the thought that the resulting set-theoretic truths ought to be invariant under permutations of labels (since labellings in forming catalogues are indeed arbitrary). And then we can argue that, with a few extra assumptions in play, the desired permutation-invariance is reflected by NF’s requirement of stratification in its comprehension principle. For some details, see Ch. 8 of Holmes’s book.

Because Luca also makes the association of sets with properties arbitrary, he too wants a similar permutation invariance of the resulting truths about sets; and he claims that this invariance will likewise be reflected by an NF-style theory: “The stratification requirement, far from being ad hoc, turns out to be naturally motivated by the idea that sets are objectified properties.” (Luca’s story seems to have fewer moving parts than Holmes’s, for on the latter story it seems to be important that sets not only have labels as parts but are themselves labelled. I haven’t worked out whether this matters for Luca’s argument from permutation invariance.)

So where does this leave us? Given the linkage just argued for, Luca can call his picture of arbitrary proxies for properties the ‘stratified conception’, and he writes:

If we accept that there is a sensible distinction between a logical and a combinatorial conception of a collection, this opens up the way for regarding the stratified conception as existing alongside the iterative conception. According to this proposal, the sets – the entities that we use in our foundations for mathematics – are provided by the iterative conception. This conception is often taken to be, and certainly can be spelled out as, a combinatorial conception of collection. By contrast, objectified properties – the entities that we use in the process of nominalization – are provided by the stratified conception. This conception is a logical conception of collection. … [If] the stratified conception is best regarded as a conception of objectified properties, i.e. extensions, it seems possible for the NF and NFU collections to exist alongside iterative sets.

I’m not sure the NF-istes would be too happy about this proposal: their usual view is that the NF universe includes the iterative hierarchy as a part — they just believe in more sets than the ZF-istes, more sets of the same ontological kind (i.e. they don’t see themselves as changing the subject, and talking about something different). But let that pass. What you make of all this will depend in part on what you think of this talk of objectified properties as mere arbitrary proxies. Holmes’s talk of sets-as-catalogues-based-on-arbitrary-labelling does seem a franker version of the same basic conception. Does that make it more or less attractive?

Luca Incurvati’s Conceptions of Set, 13

Among other things, I need to get more answers to the exercises in IFL2 online before publication, and that’s a ridiculously time-consuming task, which is no doubt why I’ve been rather putting it off! Doing some of the needed work partly explains the hiatus in getting back to Luca’s book. But there’s another reason for the delay too. I’ve found it quite difficult to arrive at a clear view of the second half of his Chapter 6 on NF. However, I must move on, so these remarks will remain tentative:

Early on, in §1.8, Luca distinguished what he called logical and combinatorial conceptions of set. And now in §6.7, he tells us that NF can be treated as a theory of logical collections.

It is a familiar claim that some such distinction between logical and combinatorial collections is to be made. And it seems tolerably clear at least how to make a start on elaborating a combinatorial conception: the initial idea is that any objects, however assorted and however arbitrarily selected they might be, can be combined to form a set with just those objects as members. And then it is reasonable to argue that the iterative conception of set is a natural development of this idea. It is much less clear, however, even how to make a start on elaborating the so-called logical conception. Let’s pause over this again before turning to the details of §6.7.

In §1.8, Luca suggests “Membership in a logical collection is determined by the satisfaction of the relevant condition, falling under the relevant concept or having the relevant property. Membership is, in a sense, derivative: we can say that an object a is a member of a [logical] collection b just in case b is the extension of some predicate, concept, or property that applies to a.” But are there going to be enough actual predicates (linguistic items) to go around to give us the sets we want? Which language supplies the predicates? If we say ‘a logical set is the extension of a possible predicate’ then we are owed an account of possible predicates — and in any case, this doesn’t really seem to tally with the idea of membership as a derivative notion: the picture now would rather seem to be that here already are the sets with their members, and they are (as it were) waiting to serve as extensions for any predicates that we might care to cook up in this or that possible language.

So maybe we need to concentrate on the concepts or properties? The notion of a concept here is slippery to say the least, so let’s think about properties for a bit. How plentiful are properties? We don’t want to get too bogged down in metaphysical discussions here, but for orientation let’s recall that David Lewis famously makes a distinction between the sparse natural ‘elite’ properties (which can appear in laws, where sharing such a property makes for real resemblance, etc., etc.) and the abundant non-natural properties (where Lewis explicitly explains that for any combinatorial set of actual and possible objects, however gerrymandered, there is an abundant property, namely the property an object has just in case it is a member of the given set). Now, taking logical sets to be the extensions of properties in an abundant sense which is anything like Lewis’s will just collapse the supposed logical/combinatorial distinction.

But on the other hand, taking logical sets to be always the extensions of properties in some narrower sense would again seem to be in danger of giving us too few sets. It is a common argument, for example, that x’s being, as we might casually say, F or G is really just a matter of x’s being F or x’s being G — the world doesn’t contain, as part of its furniture, as well as the property F and the property G, a further disjunctive property F-or-G. In other words, even if F and G are real properties which have logical sets as extensions, there is no real property F-or-G to have the union of those sets as its extension. Drat! So perhaps we do want to be thinking in terms of predicates after all, since we can apply Boolean operations to predicates (and get unions and intersections of extensions) in a way we can’t apply them to natural properties (at least on some popular and well-defended views). But defining sets in terms of predicates wasn’t looking like a great idea …

The nagging suspicion begins to emerge that the idea that we can (i) characterize a logical set as “the extension of some predicate … or property” while (ii) not collapsing the idea of a logical set into the notion of a combinatorial set depends on cheating a bit by blurring the predicate/property distinction. We need something predicate-like to give us e.g. the Boolean operations; we need something sufficiently natural-property-like to avoid the unwanted collapse. So what’s the honest story about logical collections as extensions going to be? Let’s see what more Luca has to say!

To be continued

Luca Incurvati’s Conceptions of Set, 12

We turn then to Chapter 6 of Luca’s book, ‘The Stratified Conception’.

This chapter starts with a brief discussion of Russell’s aborted ‘zigzag’ theory, which tries to modify naive comprehension by requiring that it applies only to sufficiently “simple” properties (or strictly, simple propositional functions). It seems that Russell thought of the required simplicity as being reflected in a certain syntactic simplicity in expressions for the relevant properties. But he never arrived at a settled view about how this could be spelt out. It is only later that we get a developed set theory which depends on the comprehension principle being constrained by a syntactic condition. In Quine’s NF, the objects which satisfy a predicate A form a set just when A is stratifiable — when we can assign indices to its (bound) variables so that the resulting A* would be a correctly formed wff of simple type theory. And so the rest of Chapter 6 largely concerns NF.
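
Since the stratifiability condition carries all the weight here, a toy illustration may help (this sketch is mine, not anything in the book, and it looks only at the atomic subformulas, which is all stratification cares about): $latex x \in y$ forces the index assigned to $latex y$ to be one higher than that assigned to $latex x$, while $latex x = y$ forces equal indices, and A is stratifiable just if these constraints can be jointly met.

from collections import defaultdict, deque

def stratifiable(atoms):
    # atoms is a list of ('in', x, y) and ('eq', x, y) triples recording the
    # atomic subformulas; stratification ignores all the other structure.
    edges = defaultdict(list)
    for kind, x, y in atoms:
        d = 1 if kind == 'in' else 0
        edges[x].append((y, d))    # index(y) must equal index(x) + d
        edges[y].append((x, -d))
    index = {}
    for start in list(edges):      # walk each connected component
        if start in index:
            continue
        index[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w, d in edges[v]:
                if w not in index:
                    index[w] = index[v] + d
                    queue.append(w)
                elif index[w] != index[v] + d:
                    return None    # clash: no consistent assignment exists
    return index                   # one witnessing assignment of indices

print(stratifiable([('in', 'x', 'y'), ('in', 'y', 'z')]))  # {'x': 0, 'y': 1, 'z': 2}
print(stratifiable([('in', 'x', 'x')]))  # None: the Russell condition is unstratifiable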

Luca also touches on NFU, the version of Quine’s theory which allows urelements. And — though this is a matter of emphasis — I was a bit surprised that the main focus here isn’t more consistently on this version. At the beginning of the book, Luca seems to hold that the most natural form of a set theory should allow for individuals: thus he describes the iterative conception (for example) as being of a universe which starts with individuals, and then builds up a hierarchy of sets from them. And if, when considering its technical development, we then concentrate on an iterative set theory without individuals, that’s because of easy equiconsistency results: adding urelements to ZFC’s theory of pure sets doesn’t change the scene in a deep way, so for many purposes it just doesn’t matter whether we discuss ZFC or ZFCU. But, famously, it isn’t like this with NF vs NFU. The consistency status of NF is still moot (Randall Holmes claims a proof, but it remains dauntingly opaque), and NF is inconsistent with Choice; NFU, by contrast, is well known to be consistent (if enough of ZF is), and is consistent with Choice. Thus Holmes himself writes of NF and its variants that “only NFU with Infinity, Choice and possibly further strong axioms of infinity … is really mathematically serviceable.” That’s a judgement call, given Rosser’s earlier work on maths within NF. But it is certainly arguable that NF is something of a curiosity, while NFU wins mathematically. And given that it is more natural anyway in allowing for individuals, I’d have rather expected that NFU would have been the ‘best buy’ stratified theory for Luca to choose to highlight.

Now, some would say that a chapter called ‘The Stratified Conception’ must be mistitled — for there isn’t really a conception of the set-theoretic world at work here! In §6.4, Luca talks of the ‘received view’ of NF(U) as being, precisely, a set theory which lacks conceptual motivation. Thus he quotes D. A. Martin calling it “the result of a purely formal trick … there is no intuitive concept”. Similarly, Boolos writes that NF “appear[s] to lack a motivation independent of the paradoxes”. Fraenkel said “there is no mental image of set theory” which leads to NF’s characteristic axiom. If this received view is right, then Luca’s chapter could be very short!

But after outlining NF (and some of its oddities!) in §6.2, putting it into the context of the paradox-avoiding choice between indefinite extensibility and universality in §6.3 (NF chooses the second), and noting the received view in §6.4, Luca turns in §6.5 to describe how there is a path from type theory to NF. To negotiate the annoying repetitions of e.g. cardinal numbers at each type, Russell had already adopted a policy of using untyped claims which are to be read as typically ambiguous. So allowed instances of comprehension should be stratifiable even though the explicit type indications are dropped. Then, as Luca explains, it is a rather natural step to consider keeping the stratifiable instances of comprehension while no longer supposing that the hidden stratifications are ontologically significant — i.e. we collapse the ontological type hierarchy into a single untyped domain. Which is what NF does (and indeed the theory is equiconsistent with the result of adding to the simple theory of types an axiom scheme that registers typical ambiguity by saying that a typed wff is equivalent to the result of raising the types of all variables in that wff by one). So — to some extent contra the ‘received view’ (though in fact this is quite well known) — there is a pretty natural route which ends up with NF(U).

So far so good. But while observing that we can arrive at NF(U) by collapsing types gives the theory some motivation, more than the baldest version of the ‘received view’ might allow, that doesn’t yet really give us a positive conception of the resulting world of sets. We have a genealogy, but more needs to be said: and that’s going to be the business of Luca’s §§6.6 and 6.7, to be discussed in the next post.

But first, it is perhaps worth noting that NFU can be developed without initially introducing the stratifiability condition at all (well, I at least find this intriguing!). Here I’m thinking of the axiomatization for set theory given by Randall Holmes in his Elementary Set Theory with a Universal Set. The domain contains atoms, sets, and (primitive) ordered pairs. Assume as fixed background that (i) extensionality holds for sets, (ii) atoms have no members. And now consider the following bunches of axioms:

  1. Axioms telling us that the sets form a Boolean algebra — there’s the empty set at the bottom, the universal set at the top, every set has a complement, any two sets have an intersection and a union which is also a set.
  2. An axiom of set union of the usual kind (given a set A of sets, there’s a set which is the union of the members of the members of A).
  3. Some axioms telling us that every object has a singleton. Some axioms telling us that for any two objects there is an ordered pair of them, and that pairs and projections from pairs behave sensibly. Various axioms telling us sets have Cartesian products, and that binary relations as sets of ordered pairs behave sensibly.

So far, with details filled in, we get what might look to be a pretty natural base theory if we are trying to articulate a conception of sets which permits sets to behave the Boolean way we naively expect, and also allows a few familiar elementary constructs in the theory of relations and functions. Now suppose we add to this base theory two more specific axioms (I’ll hazard a symbolic gloss after the list). These might not be ‘axioms we first think of’ but the first looks entirely unsurprising as a set-theoretic truth; and while the second gives us a set which is ‘too big’ by limitation-of-size principles, if we are going to buy a universal set, then this seems a reasonably natural assumption of the same flavour too.

  1. For any relation, there is a set of the singletons of the objects that stand in that relation.
  2. The pairs (x, y) where x is a subset of y form a set.
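
In symbols, and hedging since this is my gloss from memory rather than Holmes’s official formulation: the second says that the subset relation is itself a set, i.e. there is a set $latex \{(x, y) : x \subseteq y\}$; and the first, if I am remembering the system aright, is in effect Holmes’s singleton-image axiom, that for any relation $latex R$ there is a set $latex \{(\{x\}, \{y\}) : (x, y) \in R\}$.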

And now here’s the pay-off. A theory with these axioms proves NFU’s stratified comprehension axiom as a theorem. And (still assuming the fixed principles (i) and (ii)), stratified comprehension gives us everything else except the defining axioms for pairs. I’m not quite sure what to make of this re-axiomatization result. But it might suggest that we can also motivate a route to NFU that doesn’t depend on what could perhaps still look like trickery with types.

To be continued.

Guest post: Thomas Forster on Conceptions of Set and motivating NF

The next chapter of Conceptions of Set discusses set theories like NF that modify naive comprehension by imposing a stratification condition. My friend Thomas Forster, NF-iste extraordinaire, has been looking at some of Luca’s book too, and dropped me this note, which I thought it would be good to share here.

The usual on dit about stratification is that it has no semantics. This is the kind of thing people go around saying. And, as Luca points out, it simply isn’t true. There is this theorem of mine (building on work of Petry and Henson) that says that the stratifiable formulae are precisely those that are invariant under Rieger-Bernays permutations. I’m pretty sure that Dana Scott knew this result ages ago but didn’t feel the need to write it out. Henson must have known it too. I wrote it up and published it because it seemed to me that it mattered and that it should be brought out into the open. And that the on dit needed to be knocked on the head.
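
For definiteness, here is the standard definition as I understand it (my gloss, not Thomas’s wording): given a structure $latex (M, E)$ for the language of set theory and a permutation $latex \pi$ of $latex M$, the Rieger-Bernays permutation model $latex (M, E_\pi)$ keeps the same domain but counts $latex x$ as a member of $latex y$ just in case $latex x \mathrel{E} \pi(y)$. The theorem is then, roughly, that a sentence is equivalent to a stratifiable one just in case its truth-value is always preserved in passing from a structure to its permutation models.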

Here is another way of making the same point. There is the di Giorgi picture of (structures for the language of) set theory. A structure for the language of set theory is a set equipped with an injection into its power set (after all, a structure for the language of set theory is a set with an extensional relation on it, and an extensional relation on X is an injection from X into its power set [see my Logic, Induction and Sets, esp. §8.1]). We can think of this picture as each member of the set “coding” a subset of the set. Now Cantor’s theorem tells us that no such injection can be surjective. So some subsets must be left uncoded. So, in constructing a structure for the language of set theory, one has two steps to take. (i) One decides which sets are to be left out, and then (ii) one decides which surviving sets are to be coded by which elements. Natural question Q: which sentences have their truth-values already determined by stage (i)? That is, what sentences have the feature that their truth-values are determined purely by our decision about which subsets are going to be coded, and are not affected by our decision about which subsets are coded by which elements? Some examples are obvious. The structure believes the empty set axiom iff the empty set is in the range of the injection. The structure believes that every set has a singleton iff, whenever s is a subset that is coded, then the singleton of the element coding s is also a coded subset (never mind what the coding is). Answer to Q (of course): the stratified sentences!
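
Thomas’s two examples are easily made concrete. Here is a toy version of the picture in Python (my illustration, not his): the domain is a small finite set, the injection is a dictionary sending each element to the subset it codes, and the two checks visibly depend only on the range of the coding (his stage (i)), never on which element codes which subset.

def believes_empty_set(code):
    # The empty set axiom holds iff the empty subset is among the coded ones.
    return frozenset() in code.values()

def believes_singletons(code):
    # 'Every set has a singleton' holds iff for each element y (which codes
    # the subset code[y]), the subset {y} is also coded (never mind by what).
    coded = set(code.values())
    return all(frozenset({y}) in coded for y in code)

# A four-element structure: 0 codes {}, 1 codes {0}, 2 codes {1}, 3 codes {2}.
code = {0: frozenset(), 1: frozenset({0}), 2: frozenset({1}), 3: frozenset({2})}
assert len(set(code.values())) == len(code)  # injective, hence extensional
print(believes_empty_set(code))   # True
print(believes_singletons(code))  # False: the subset {3} was left uncoded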

But there is a larger question here. The Petry-Henson-Forster theorem relates one particular kind of invariance to one particular syntactic feature (namely stratification). In this case, it’s invariance under change of implementation. But there are other kinds of implementation-insensitivity, and typing systems that accompany them. There are people who understand the axiom scheme of replacement (not very many, it has to be said) and how it is all about implementation insensitivity. Here is an example: Does it make any difference to the abstract theory of ordinals you get whether you use von Neumann ordinals or Scott’s trick ordinals? It shouldn’t! But if you want to prove that it doesn’t then you need replacement.

Tomorrow, I hope, back to me on Luca’s chapter on NF.

Luca Incurvati’s Conceptions of Set, 11

We are continuing to discuss Luca’s Chapter 5. The naive comprehension principle — for every property F, there is a set which is its extension — seems intuitively appealing but leads to paradox. So how about modifying the principle along the following lines: for every good property F, there is a set which is its extension (a set of Fs)? Such a principle might inherit something of the intuitive appeal of the unmodified naive principle, but (with a suitable choice of what counts for goodness) avoid contradiction. So what could make for goodness, here? One suggestion that goes back to Cantor, Russell, and von Neumann is that a property F is good if not too many things fall under it — in other words, we should modify naive comprehension by imposing what Russell called a ‘limitation of size’. How should the story then go?

In §5.2 and §5.3 Luca carefully explores the roots of the Cantorian idea that F is a good property if there are fewer Fs than ordinals. In §5.4 we then meet a proposal inspired by remarks of von Neumann’s: F is good if there are fewer Fs than sets. Luca then goes on to discuss one familiar way of implementing the von Neumann approach, famously explored by Boolos. We add to second-order logic a Frege-like abstraction principle that says (roughly) that, when F and G are good, if everything which is an F is a G and vice versa, then the set of Fs is the set of Gs.

But how exactly are we to formulate the required abstraction principle (a ‘New V’ to replace Frege’s disastrous unrestricted Old Axiom V)? And then just how strong a set theory does the resulting Frege-von Neumann theory yield? §5.5 reviews Boolos’s own discussion, explaining Boolos’s New V, and noting that we get as a result e.g. Separation, Choice and Replacement, plus versions of Foundation and Union, but we don’t get Powerset or Infinity. [There is a sense-destroying typo in the displayed formula on p. 144, which also distractingly uses the same variable both free and quantified.] In §5.6, Luca then considers an objection to Boolos’s version of New V (that it allows pathological cases which we don’t want), and introduces a revised version due to Alex Paseau, New V, which gets round that problem, but still leaves us without Powerset or Infinity. Indeed, Russell had noted this very issue about a limitation of size principle: it tells us that the universe isn’t too big — but this leaves it open that the universe is very small.
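
Since the formula on p. 144 is the one marred by the typo, it is perhaps worth setting down New V as I understand Boolos’s formulation (readers should check against his ‘Iteration Again’, since this is from memory): say that $latex F$ is Big just if the $latex F$s are equinumerous with the whole universe; then New V says that $latex \mathrm{ext}(F) = \mathrm{ext}(G) \leftrightarrow ((\mathrm{Big}(F) \land \mathrm{Big}(G)) \lor \forall x(Fx \leftrightarrow Gx))$. So the small properties get extensions discriminated in the familiar Fregean way, while all the Big properties are assigned one and the same object.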

Now, one reasonable response to this observation could be “Fair point: but then the pure iterative conception doesn’t by itself tell us how high the universe goes either. The idea that sets are built up in levels leaves it open how many levels there are; and in this respect a limitation of size approach is on a par in leaving it open how big the universe is (other than not being too big!). Both conceptions need supplementation by further thoughts, in particular to give us axioms of infinity.” And this is indeed Luca’s response in §5.7. Which some may find a bit of a surprise, given that the author of §2.1 does seem to build into the iterative conception the idea that there are infinite stages in the iterative hierarchy which are indexed by limit ordinals; and indeed I grumbled a bit in my comments about that section that this seemed to be running together the basic iterative conception with a further thought about the height of the universe. But now Luca seems happy to separate those thoughts more emphatically than he did at the outset. So I’m with him on that! However, he doesn’t tell us what kinds of axiom of infinity would sit most naturally inside the second-order framework of Boolos-style Frege-von Neumann set theory.

Back to fundamentals. For Cantor-style and von-Neumann-style limitation of size principles, the choice of ‘the yardstick of excessive bigness’ seems to be baldly motivated by the requirement that we avoid contradiction. So, it might be argued, limitation of size theories don’t reflect an explanation of what gives rise to the paradoxes in the way that the iterative conception does: it is just a brute fact that Bigness is Bad. In §5.8, Luca — correctly, I think — defends this line of criticism against an argument from Linnebo that purports to show that developing the iterative conception must also implicitly rely on a limitation of size principle. So, he concludes, “the limitation of size conception’s explanation of the paradoxes is less attractive than the iterative one, because it ultimately rests on the fact that supposing certain properties to determine a set leads to paradox. Following Boolos, this means that the limitation of size conception is not natural.” Though I suppose we might well wonder: does being not-so-explanatory imply being not natural? (Being less explanatory is already a shortcoming, and a charge I find easier to get my head around than ‘lack of naturalness’.)

In §5.9, Luca considers a different way of elaborating the thought that only good properties have extensions, turning to the idea that F is good if it is definite, i.e. not indefinitely extensible — another idea that goes back to Russell. So what if we modify a Boolos-style FN set theory by adopting a revised New V, call it Def(V) — if F and G are definite, the set of Fs is the set of Gs if and only if every F is a G and vice versa? I found Luca’s discussion of this proposal very interesting; and it engages with a thought-provoking recent paper by Linnebo which was new to me, ‘Dummett on Indefinite Extensibility’ (Philosophical Issues, 2018). I won’t try to summarize the ins and outs of the discussion here. But Luca concludes that it remains obscure how much set theory we’d get from adopting Def(V), and in any case it is the iterative conception which “explains the key insight of the definite conception that, when faced with the paradoxes, we ought to retain indefinite extensibility at the expense of universality”. [Note, by the way, that there are two potentially troubling misprints in the discussion on his p. 155. The ‘Y’ in the second displayed formula should be another ‘X’; and before the third displayed formula we should read ‘if and only if there is a function $latex \delta$ such that the following holds’.]

This chapter leaves me with two related questions. (1) Having raised the possibility of adding an axiom of infinity to a Boolos-style FN framework to give us a more competent set theory, Luca doesn’t tell us more about the options here, and whether the resulting theory would then be stronger than standard (second-order?) ZFC. I’d have really liked to hear more about how the further story would go here. What would second-order FN set theory with enough of Powerset/Infinity look like? (2) How does a Boolos-style FN relate to the more usual, first-order, theory that is often presented with a Limitation of Size axiom, namely NBG (see §5.1 here)?

But here we will have to leave those questions hanging.

Luca Incurvati’s Conceptions of Set, 10

I’m picking up Luca’s book again, at Chapter 5. In the previous chapter, the question was: can we save the naive conception of set from ruin by tinkering with our logic? Short answer: no, not in a well-motivated way that will leave us with a set theory worth having. In this chapter, the question is: can we save the essence of the naive conception while retaining classical logic by minimally restricting naive comprehension?

Quine wrote:

Only because of Russell’s paradox and the like do we not adhere to the naive and unrestricted comprehension schema […] Having to cut back because of the paradoxes, we are well advised to mutilate no more than what may fairly be seen as responsible for the paradoxes.

Which suggests a simple-minded approach. Take one step back from disaster, and just accept all the instances of comprehension that do not generate paradox.

But what would that mean? First option: we should severally accept each instance of comprehension that does not entail contradiction. But it is easy to see that this won’t work, because instances of comprehension which — taken separately — are consistent can together entail contradiction. Luca gives nice examples.

Second option, and surely more in keeping with Quine’s intention: we should accept those instances of comprehension which taken together do not entail contradiction. But this idea too doesn’t work.

As Luca points out, the proposal now is reminiscent of another paradox-avoiding proposal: be almost naive about truth by accepting just the maximal consistent set of all the instances of the T-schema. But McGee has a nice argument showing that that idea won’t wash; and Luca, together with Julien Murzi, has generalized McGee’s argument so that it applies here.

As a warm up, we can show that a maximally consistent set of instances of naive comprehension that e.g. includes the claim that there is an empty set is negation complete and hence (assuming the resulting set theory interprets Robinson arithmetic, a very weak demand!) is not recursively axiomatizable. Now, Luca says that such a set of instances can then hardly count as a set theory; but I’m not quite sure how much that would worry Quine. OK, any nicely axiomatized subset of that maximally consistent set won’t be the full story about sets; but if we regiment enough of those consistent instances of comprehension into a theory rich enough for the working mathematician’s ordinary set-theoretic purposes (extending the theory if and when needed), why worry? We have a working theory (in one familiar sense), and a supposed story about why it is nice, i.e. it is part of the maximal naive set theory (in another familiar sense).

But let’s not pause over the hors d’oeuvre: the main course is the demonstration that (as in the case with instances of the T-schema) there isn’t after all a unique maximal consistent set of instances of naive comprehension. Well, you might think that that too wouldn’t be too disastrous if the possible divergences between consistent stories were remote from ordinary business. But no such luck — uniqueness fails as badly as possible:

For any consistent sentence σ, there is a maximally consistent set of instances of naive comprehension implying σ, and another one implying ¬σ.
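
(Caveat lector: this is my reconstruction of the shape of the Incurvati/Murzi construction rather than a report of its details. The driving trick descends from McGee’s: for any sentence $latex \sigma$, the comprehension instance for the condition $latex x \notin x \lor \sigma$ classically entails $latex \sigma$, since instantiating its witnessing set for $latex x$ takes us straight back to Russell’s paradox unless $latex \sigma$ holds; dually, the instance for $latex x \notin x \lor \neg\sigma$ entails $latex \neg\sigma$. Extend the first to a maximal consistent set of instances, and the second likewise, and uniqueness is gone.)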

So the proposal to look for a maximal consistent set of instances of naive comprehension fixes no determinate theory at all (axiomatizable or otherwise). Hence the simple-minded reading of Quine’s one-step-back-from-disaster is indeed simply hopeless.

Two pernickety quibbles. The McGee 1992 paper in the biblio is the wrong one: it should be his ‘Maximal Consistent Sets of Instances of Tarski’s Schema (T)’ in JPL. And the beginning of the Appendix on the Incurvati/Murzi upgrade of McGee’s theorem isn’t as clear as it ideally could be: e.g. a reader will pause to wonder what $latex \mathsf{S}^{\prime}$ is. But the argument of §5.1 indeed seems conclusive. The rest of this chapter is then about ‘limitation of size’ principles. So let’s pause here and consider them in the next post.

Luca Incurvati’s Conceptions of Set, 9

Naive set theory entails contradictions. Really bad news. Or so most of us think. But what if we are prepared to be more tolerant of contradictions, e.g. by adopting a dialethic and paraconsistent logic, which allows there to be contradictions which are true (as well as false) and where contradictions don’t entail everything? Could we rescue the naive conception of set, accommodate e.g. the idea of a Russell set, by departing from classical logic in this way? A desperate measure, most of us will think. Even if willing, once upon a time, to pause to be amused by varieties of dialethic logic, at this late stage in the game I don’t have much patience left for the idea of going naive about sets by going far-too-clever-by-half about logic. But Luca is evidently a lot more patient than I am! He devotes Chapter 4 of his book to investigating various suggestions about how to save naive set theory by revising our logic. How does the story go?

Luca very helpfully divides his discussion into three main parts, corresponding to three dialethic strategies.

The first he labels The Material Strategy — we adopt a non-classical logic which keeps the material conditional, so that $latex P \to Q$ is still simply equivalent to $latex \neg P \lor Q$, while we reject the classically valid disjunctive syllogism, and hence material modus ponens.
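
To see in miniature why disjunctive syllogism has to go (my illustration, using the standard three-valued presentation of LP, not an example from the book): the truth-values are 1, ½ and 0, with both 1 and ½ designated; negation maps ½ to ½, and disjunction takes the maximum value. Now give $latex P$ the glutty value ½ and $latex Q$ the value 0. Then $latex P$ is designated, and so is the material conditional $latex P \to Q$, i.e. $latex \neg P \lor Q$, which gets the value ½; but $latex Q$ is undesignated. Designated premisses, undesignated conclusion: material modus ponens fails.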

Graham Priest initially thought that a ‘simple and natural choice’ here is his LP, the Logic of Paradox. But LP doesn’t validate the transitivity of the material conditional, and this hobbles the proof of various elementary theorems of set theory — even the usual proof of Cantor’s Theorem fails. And on the usual definition of set identity in terms of coextensionality, Leibniz’s Law fails too.

Can we tinker with LP to avoid these troubles? Luca mentions a few options: all of them have equally unattractive features — giving us a set theory that is too weak to be useful. So, Luca’s verdict seems right: the prospects of saving naive set theory as a formal theory by developing the material strategy in anything like the way originally suggested by Priest are dim indeed.

What about the second strategy, The Relevant Strategy, where we adopt a relevant logic that does validate modus ponens? Luca dives into an extended discussion of so-called depth-relevant logics, also advocated by Priest. We won’t follow the ins and outs of the arguments here. But it is again hard not to agree with Luca’s eventual verdict that (i) “Priest does not seem to have offered a good argument for focusing on the logic he is considering,” and further (ii) the logic is in any case still too weak for us to carry out standard arguments in set theory. At which point, most of us would get off this particular bus! But Luca takes us on another couple of stops, considering some proposals from Zach Weber for strengthening the depth-relevant logic. The arguments become more involved, but again Luca arrives at a strongly negative verdict (enthusiasts can follow up the details): Weber’s logic remains poorly motivated — I would say it smacks of ad-hoc-ery — and Weber’s resulting theory lacks a genuine principle of extensionality, so in the end it can’t be regarded as a set theory anyway.

That leaves us with a third strategy also to be found in Priest, The Model-Theoretic Strategy. An intuitionist can allow that classical logic is just fine when we are reasoning about decidable matters. Similarly, the idea now is that a dialetheist can allow that classical logic is just fine when we are reasoning over consistent domains. So the dialetheist can continue to reason classically about a consistent core subdomain of sets — e.g. the cumulative hierarchy — while asserting that this is just part of the full universe of naive set theory, about which we must reason dialethically.

There is, you might well think, an immediate problem about this position. If you accept the cumulative hierarchy as a consistent core universe of sets about which we can argue classically, and which gives us all the sets we can want for mathematical purposes (ok, push the hierarchy up high enough to keep the category theorists happy!), then just what are we getting by adding more putative sets, the inconsistent ones? It’s not like adding points at infinity to the Euclidean plane, for example, to give us an interestingly enriched mathematical structure (results about which can be reflected down into interesting new results in the original domain). And I’m ‘naturalist’ enough in Maddy’s sense to ask: if the putative enriched structure of naive set theory isn’t mathematically interesting, why bother?

But I’ve jumped the gun: what does this supposed enriched structure look like? With some trickery which Luca describes, we can explain how to extend a model of ZF to become a model of a naive set theory with LP as its logic (so the formal theory might be inadequate to work inside, as argued before, but still have a rich model). But why suppose that this particular fancied-up model captures the structure of the full supposed universe of a naive set theory? As Luca points out, we have no good reason to suppose that. But then “the paraconsistent set theorist needs, after all, to say more about what the universe of sets looks like. It is not enough to simply suppose that it contains some paradigmatic inconsistent sets and has the cumulative hierarchy as an inner model.” The story is radically incomplete.

Summing up, the idea of rescuing naive set theory by going dialethic, using any of the three strategies, looks to be a degenerating research programme. Probably many of us would have predicted that outcome! — but we can thank Luca for doing the hard work of confirming this in some detail.

And that takes us to the end of Chapter 4, and a bit more than half way through the book. And for most readers, the three remaining chapters promise to be considerably more interesting than the one we’ve just been looking at. Next up, for example, is a discussion of the commonly-encountered idea of ‘Limitation of Size’.

Luca Incurvati’s Conceptions of Set, 8

We have reached the last few pages of §3.6; we are still considering whether the iterative conception delivers Replacement. Luca has critically considered two proposed routes from the one to the other; he now turns to discuss a third, an Argument from Reflection.

Naively put, reflection principles tell us that, for any property — or at least, any kosher property of the right kind! — belonging to the universe of all sets, we can find a level of the hierarchy which already has that property. In other words, kosher properties of the whole universe are “reflected” down to a set-sized sub-universe. Three interrelated questions arise:

  1. Does the iterative conception sanction any form of reflection principle like this, and if so why?
  2. How are we to spell out, more formally, some acceptable form(s) of reflection principle? What are the kosher properties that can get reflected down?
  3. Which formal reflection principles entail Replacement?
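
Before taking these in turn, it may help to have the most familiar formal benchmark on the table (my addition, for orientation): the Lévy-Montague reflection theorem, provable in ZF, tells us that for each first-order formula $latex \varphi(\vec{x})$ there are arbitrarily large ordinals $latex \alpha$ such that $latex \varphi$ is absolute between $latex V_\alpha$ and the universe, i.e. $latex \forall \vec{x} \in V_\alpha (\varphi(\vec{x}) \leftrightarrow \varphi^{V_\alpha}(\vec{x}))$. The principles at issue in this section are stronger, higher-order descendants of that schematic idea.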

On (1), Luca outlines a supposed link between (i) the idea that iteration goes on ‘as far as possible’ and the idea (ii) that the hierarchy should be ‘absolutely infinite’ in the sense that it resists unique characterization by any non-trivial property. And from (ii) it is supposed to follow that any property we can correctly assign to the universe must fail to pin down the full universe, so (iii) that property will already be exemplified by some initial part of the hierarchy.

But I don’t really get the supposed link between (i) and (ii). At the end of his discussion, Luca claims that “absolute infinity is a natural way of understanding the idea that the iteration process is to be carried out as far as possible.” But I could have done with rather more explanation of why it should seem natural. And indeed I could have done with more about the step from (ii) to (iii).

But ok, let’s grant that for some class of kosher properties, there is an intuitively appealing chain of reasoning that leads naturally enough from the iterative conception to reflection for those properties.

We now need to go formal: how do we formally capture the relevant reflection principles? The discussion of (2) and (3) inevitably becomes more technical. I won’t attempt to summarize here Luca’s already rather compressed exploration (which also touches on closure principles as a weaker alternative to reflection principles). In fact, I suspect many readers of the book will find this episode pretty tough going. Those beginners who needed e.g. to be reminded about cardinals and ordinals at the end of Chapter 1 are surely not going to easily follow this discussion, which turns, inter alia, on distinguishing which orders of higher-order variables are allowed to occur parametrically in formal versions of reflection principles. But yes, a strong enough formal reflection principle will entail Replacement. Still, I suppose someone might well pause to wonder whether the required open expressions with higher-order parameters express kosher properties of the kind that were being countenanced in the intuitive considerations at stage (1).

Overall, I found Luca’s discussion of what he calls the Argument from Reflection intriguing (it got me re-reading some of the literature he mentions), but inconclusive. But then he too in the end is cautious. He says that “if the idea of iteration as far as possible is perhaps not part of the iterative conception, it harmonizes well with it. Understood or augmented in this way, the iterative conception sanctions the Axiom of Ordinals and possibly the Replacement Schema”. [my emphasis]

Which gets us to the end of Chapter 3, responding to some initial ‘Challenges to the Iterative Conception’. So let’s pause here. The rest of the book looks at various rivals to the iterative conception, starting in Chapter 4 with the naive conception again — can we, after all, rescue it from its apparently damning inconsistency by departing from classical logic? I’ll discuss the whole chapter in one bite in the next posting!
