Phil. of maths

Encore #16: Weir’s formalism

One of the books which I blogged about at length here was Alan Weir’s Truth Through Proof (OUP, 2010). I found this difficult and puzzling, but also enjoyable to battle with (not least because Alan responded in comments at length). Things soon got intricate, but here is a revised version of a very early post in the series, where I try to initially locate Alan’s position on the map. So this is perhaps of stand-alone interest, and there are themes here connecting to issues raised in the previous Encore on Maddy.

TTP, 2. Introduction: Options and Weir’s way forward (May 30, 2011)

Our conception of ourselves as natural agents without God-like powers “imposes a non-trivial test of internal stability” (as Weir puts it) when combined with platonism. As Benacerraf frames the problem in his classic paper, ‘a satisfactory account of mathematical truth … must fit into an over-all account of knowledge in a way that makes it intelligible how we have the mathematical knowledge that we have’. Faced with this challenge, what are the options? Weir mentions a few; but he doesn’t give anything like a systematic map of the various possible ways forward. It might be helpful if I do something to fill the gap — not a complete map, of course, but noting various choice points, and the way Weir goes at each.

Start with this question:

  1. Can we say — without qualification, without our fingers crossed behind our backs! — that yes, 3 is prime and, yes, the Klein four-group is the smallest non-cyclic group?

The platonist will answer ‘yes’. A gung-ho fictionalist of a certain stripe, for instance, will say ‘no’: our ordinary talk of numbers or groups commits us to a platonist ontology of abstracta that a sensible naturalist with a coherent epistemology has no business believing in; the mathematical claims, as they stand, unqualified, are false.

Platonists, however, aren’t the only people who answer ‘yes’ to (1). Move on, then, to a second question:

  2. Are ‘3 is prime’ and ‘the Klein four-group is the smallest non-cyclic group’, for example, to be construed — as far as their ‘logical grammar’ is concerned — at face value, as the surface form suggests (on the same plan as e.g. ‘Alan is clever’ and ‘the tallest student is the smartest philosopher’)?

A more conciliatory stripe of fictionalist can answer ‘yes’ to (1) but ‘no’ to (2), since she doesn’t take ‘3 is prime’ as wearing its logical form on its face but re-construes it as short for ‘in the arithmetic fiction, 3 is prime’ or some such. Likewise, for a certain brand of naive (or naive-ish) formalist who takes ‘3 is prime’ as not attributing a property to a number but as saying that a certain sentence can be derived in a formal game.

Eliminative and modal structuralists will also answer ‘yes’ to (1) and ‘no’ to (2), this time construing the mathematical claims as quantified conditional claims about non-mathematical things (schematically: anything, or anything in any possible world, that satisfies certain structural conditions will satisfy some other conditions). It is actually none too clear how structuralism helps us epistemologically, and when given a modal twist it’s not clear either how it helps us ontologically. But that’s quite another story.

Suppose, however, we answer ‘yes’ to (1) and (2). Then we are committed to agreeing that there are prime numbers and there are non-cyclic groups, etc. (for it is true that 3 is prime and that the Klein four-group is the smallest non-cyclic group, and — construed as surface form suggests — that requires there to be prime numbers and non-cyclic groups). Next question:

  3. Is there a distinction to be drawn between saying there are prime numbers (as an unqualified truth of mathematics, construed at face value) and saying THERE ARE prime numbers? Here ‘THERE ARE’ indicates a metaphysically committing existence-claim, one which aims to represent how things stand with ‘denizens of the mind-independent, discourse-independent world’ (following Weir in borrowing Terence Horgan’s words and Putnam’s typographical device).

According to one central tradition, there is no such distinction to be drawn: thus Quine on the univocality of ‘exists’.

The Wright/Hale brand of neo-Fregean logicism likewise rejects the alleged distinction. Their opponents are sometimes puzzled by the Wright/Hale argument for platonism on the cheap. For the idea is that, once we answer (1) and (2) positively (and just a little more), i.e. once we agree that ‘3 is prime’ is true in some anodyne, minimalist, sense, and that ‘3’ walks, swims and quacks like a singular term, then we are committed to ‘3’ being a successfully referring expression, and so committed to its referent, which (on modest and plausible further assumptions) has to be an abstract object; so there indeed exists a first odd prime which is an abstract object. Opponents think this is too quick as an argument for full-blooded platonism because they think there is a gap to negotiate between the likes of ‘there exists a first odd prime number’ as an anodyne mathematical truth and ‘THERE EXISTS a first odd prime number’. Drawing on inter alia early Dummettian themes (which have Fregean and Wittgensteinian roots), the neo-logicist platonist denies there is a gap to be bridged.

Much recent metaphysics, however, sides with Wright and Hale’s opponents (wrong-headedly maybe, but that’s where the troops are marching). Thus Ted Sider can write ‘There is a growing consensus that taking ontology seriously requires making some sort of distinction between ordinary and ontological understandings of existential claims’ (that’s from his paper ‘Against Parthood’). From this perspective, the claim would be that we must indeed distinguish granting the unqualified truth of mathematics, construed at face value, from being committed to a full-blooded PLATONISM which makes genuinely ontological claims. It is one thing to claim that prime numbers exist, speaking with the mathematicians, and another thing to claim that THEY EXIST ‘in the fundamental sense’ (as Sider likes to say) when speaking with the ontologists.

Now, we can think of Sider et al. as mounting an attack from the right wing on the Quine/neo-Fregean rejection of a special kind of philosophical discourse about what exists: the troops are mustered under the banner ‘bring back old-style metaphysics!’ (Sider: ‘I think that fundamental ontology is what ontologists have been after all along’). But there is a line of attack from the left wing too. Consider, for example, Simon Blackburn’s quasi-realism about morals, modalities, laws and chances. Blackburn is no friend of heavy-duty metaphysics. But the thought is that certain kinds of discourse aren’t representational but serve quite different purposes, e.g. to project our moral attitudes or subjective degrees of belief onto the world (and a story is then told about why a discourse apt for doing that should to a large extent retain the same logical shape as representational discourse). So, speaking inside our moral discourse, there indeed are virtues (courage is one): but as far as the make-up of the world on which we are projecting our attitudes goes, virtues do not EXIST out there. From the left, then, it might be suggested that perhaps mathematics is like morals, at least in this respect: talking inside mathematical discourse, we can truly say e.g. that there are infinitely many primes; but mathematical discourse is not representational, and as far as the make-up of the world goes – and here we are switching to representational discourse – THERE ARE NO prime numbers.

To put it crudely, then, we can discern two routes to distinguishing ‘there are prime numbers’ as a mathematical claim and ‘THERE ARE prime numbers’ as a claim about what there really is. From the right, we drive a wedge by treating ‘THERE ARE’ as special, to be glossed ‘there are in the fundamental, ontological, sense’ (whatever that exactly is). From the left, we drive a wedge by treating mathematical discourse as special, as not in the ordinary business of making claims purporting to represent what there is.

And now we’ve joined up with Weir’s discussion. He answers ‘yes’ to all three of our questions. A fourth then remains outstanding:

  4. Given there is a distinction between saying that there are prime numbers and saying THERE ARE prime numbers, is the latter, stronger claim also true?

If you say ‘yes’ to that, then you are buying into a version of platonism that does indeed look epistemically particularly troubling (in worse shape, at any rate, than the gap-denying neo-logicist position; for what can get us over the claimed gap between the ordinary mathematical claim and the ontologically committing claim?). Weir thinks this position is hopeless. Hence he answers ‘no’ to (4). Hence he endorses claims like this: “There are infinitely many primes but THERE ARE no prime numbers” (p. 8).

But this isn’t because he is, as it were, coming from the right, deploying a special ‘ontological understanding of existence claims’. Rather, he is coming more from the Blackburnian left: his ‘THERE ARE’ is ordinary existence talk in ordinary representational discourse, and the claim is that ‘there are infinitely many primes’, as a mathematical claim, belongs to a different kind of discourse.

OK, what kind of discourse is that? “The mode of assertion of such judgements, I will say, is formal, not representational”. And what does ‘formal’ mean here? Well, part of the story is hinted at by the claim that the formal, inside-mathematics, assertion that there are infinitely many primes is made true by “the existence of proofs of strings which express the infinitude of the primes” (p. 7). Of course, that raises at least as many questions as it answers. There are hints in the rest of the Introduction about how this initially somewhat startling claim is to be rounded out and defended in the rest of the book. But they are much too quick to be usefully commented on at this point; so I think it will be better to say no more here but take them up as the full story unfolds.

Still, we now have an initial specification of Weir’s location in the space of possible positions. His line is going to be that, as a mathematical claim, it is true that there are an infinite number of primes: and this truth isn’t to be secured by reconstruing the claim in some fictionalist, structuralist or other way. But a mathematical claim is one thing, and a representational claim about how things are in the world is another thing. And the gap is to be opened up, not by inflating talk of what EXISTS into a special kind of ontological talk, but by seeing mathematical discourse (like moral discourse) as playing a non-representational role (or dare I say: as making moves in a different language game?). That much indeed sounds not unattractive. The question is going to be whether the nature of this non-representational game can be illuminatingly glossed in formalist terms. Now read on …

If you want to follow the discussions of Alan Weir’s book here on the blog, with some illuminating replies by Alan, then here they are (in reverse order). The long review I wrote for Mind is here.

Encore #15: Maddy, realism and arealism

Continuing the rather random selection of reposts, as the blog’s tenth birthday approaches, here’s a post from 2011 on a fascinating book by Maddy: 

Thin Realism, Arealism, and other Big Ideas (May 30, 2011)

Penelope Maddy’s recent Defending the Axioms is my sort of book. It is short (150 pages), beautifully clearly written (if there are obscurities, they are in the philosophy, not the prose), and I’m in fact rather sympathetic to her overall approach (I share her doubts about ‘First Philosophy’, I share in particular her doubts about the force of supposedly a priori arguments for nominalism and against abstracta, and I rather like her post-Quinean conception of the task for the ‘Second Philosopher’).

I’m not sure, however, that there is much in the book that will be a major surprise to the reader of Maddy’s previous two books. Still, the brevity and the tight focus ‘On the Philosophical Foundations of Set Theory’, to quote the subtitle, might help to make her ideas available to new readers daunted by the length and sweep of Second Philosophy. How well does Maddy succeed in doing that?

I’ve a lot of questions. The book is dedicated to ‘The Cabal’ of Californian set theorists. And — talking things over with Luca Incurvati — one interesting issue that came up is whether Maddy’s vision of the working methods of set theorists isn’t rather skewed by a restricted diet of examples from her local practitioners. But here I’m going to muse about the central metaphysical theme in the book: does the book succeed in selling Maddy’s story about that?

Maddy discusses two positions that she suggests are prima facie open to those with no first-philosophical axes to grind. One she dubs ‘Thin Realism’. Starting from the mathematized science of the early nineteenth century, mathematical ideas came to be pursued increasingly for their own sake, leading inter alia to the great unifying endeavour which is set theory. This is hugely productive of ideas and results and is mathematically deep. If you are not already in the grip of some extra-mathematical prejudices, what’s not to like? So as good second philosophers, who don’t pretend to have special extra-mathematical reasons to criticize successful mathematical practice, we should say that there are sets, and that set theory tells us about them — and also that there is no more to be said about them than what set theory says (other, perhaps, than negative things such as that they aren’t already-familiar bits of the causal, spatiotemporal order). A gung-ho ‘Robust Realist’ is tempted to say more: she is gripped by an extra-mathematical picture of the sets as genuinely ‘out there’ quite independently of the natural world, forming a parallel world of entities sitting in a platonic heaven, with a great gulf fixed between the mathematical abstracta and the sublunary world. The harder she pushes that picture, the tougher it is for the Robust Realist to account for why we should think that the methods we sublunary mathematicians use should be a reliable guide to the lie of the land beyond the great gulf. It can then seem that our methods need backing up by some kind of certification that they will deliver the epistemic goods (and what could that look like?).
Maddy, by contrast, thinks that the demand for such a certification is misplaced: why — as a second philosopher, working away in the thick of our best practices in science and maths — suppose that perfectly standard mathematical reasoning should stand in need of the sort of external supplementation that a Robust Realism seems to imply it requires?

Now, this might make it sound as if Maddy’s Thin Realist is going to end up with the sort of thin line about truth that we find e.g. in Crispin Wright. Then the thought would be: here is a discourse in good order, with appropriate disciplines and standards for making moves within it, so we can construct a minimalist truth-predicate for it. So we can not only say e.g. that the set of natural numbers has a powerset, but that it is true that there exists such a powerset, etc. However, this quick route to a thin realism isn’t Maddy’s line. Indeed, she explicitly contrasts her Thin Realism with Wright’s minimalism:

Wright’s minimalist takes set theory to be a body of truths because it enjoys certain syntactic resources and displays well-established standards of assertion that our set-theoretic claims can be seen to meet: the idea is that a minimalist truth predicate can be defined for any such discourse in such a way that statements assertable by its standards come out true. In contrast, the Thin Realist takes set theory to be a body of truths, not because of some general syntactic and semantic features it shares with other discourses, but because of its particular relations with the defining empirical inquiry from which she begins.

It is important for Maddy’s Thin Realist, then, that our set theory — however wildly abstract it seems — has its connections to less abstract mathematics, which in turn has its connections … ultimately to the messy business of engineering-level science. Set theory, the rather Quinean thought goes, is an outlying but not entirely disconnected part of a network of enquiry with empirical anchors.

But put like that, we might wonder whether this kind of Thin Realist protests too much. To be sure, looking at the historical emergence of modern mathematics, we can trace the slow emergence from roots in mathematized science of purely abstract studies driven increasingly by a purely mathematical curiosity, and can see the (albeit very stretched) lines of connection. Starting from where we now are, however, the picture changes quite sharply: here we are with feet-on-the-ground physicists doing their thing using one bunch of methods and over there are the modern set theorists doing their improvisatory thing with a quite different bunch of rules of play (ok, let’s not worry about where string-theory fantasists fit in!). Physicists and set theorists are, it now seems, playing an entirely different game by different rules. We might ask: whatever the historical route by which we got here, is there really still a sense, however stretched, in which the physicists and the set theorists remain in the same business, so that we can sensibly talk of them both as trying to ‘uncover truths’?

The new suggestion, then, is that mathematicians have such very different fish to fry that it serves no good purpose for the second philosopher to say that the mathematicians too are talking about things that ‘exist’ (sets), or that set-theoretic claims are ‘true’. And note that it isn’t that the mathematicians should now be thought of as trying to talk about existents but failing: to repeat, the idea is that they just aren’t in the same world-tracking game. No wonder, then, that — as Maddy herself puts it on behalf of such a difference-emphasizing philosopher — our ‘well-developed methods of confirming existence and truth aren’t even in play here’. Call this second line, according to which set theory isn’t in the truth game, ‘Arealism’.

So what’s it to be for the second philosopher, Thin Realism or Arealism? What’s to choose? In the end, nothing, says Maddy. Here’s modern science and its methods; here’s modern maths and its methods; here’s the developmental story; here’s a chain of connections; here are the radical differences between the far end points. Squint at it one way, and a sort of tenuous residual unity can be seen: and then we’ll incline to be Realists across the board — while, of course, eschewing over-Robust mythologies. Squint at it all another way, and the modern disunity will be foregrounded, and (so the story goes) Arealism becomes more attractive. There’s no right answer (rather, what this all goes to show is that the very notions of ‘truth’ and ‘existence’ are more malleable than we sometimes like to think).

Thus, roughly put, goes a central line in Maddy’s thought here, if I’m following aright.

But I wonder what underpins Maddy’s hankering here after a more-than-logical conception of truth? The Thin Realist, recall, thinks that Wrightian minimalism about truth isn’t enough: she wants to talk of set-theoretic truth so as to point up the (albeit distant) links from the maths to good old empirical enquiry. The Arealist doesn’t want to talk about set-theoretic truth because she wants to point up the differences between maths and good old empirical enquiry. But look at what they share: they both assume that the idea of truth needs anchoring in some way in the notion of the correct representation of an empirical world (rather than, say, in some cross-discourse formal role for the idea). Why so?

What we need here, if we are going to make progress from this point, is more reflection on the very concepts of truth and existence. And I don’t mean that we need an unwanted injection of first philosophy (so I’m not begging any question against Maddy)! Grant that our malleable inchoate ideas about truth can indeed be pressed in different directions. Still, the naturalistic second philosopher can take a view about the best way to go. She will want to look at our practices of talking and thinking and inferring, and she will want to have a theory about what is going on in various areas of discourse (empirical chat, moral chat, pure-mathematical chat, etc.). Her preferred developed notion of truth should then be the one that does the best theoretical work in her story about those discourses. And it certainly isn’t obvious at the outset how things should go. Will she end up more like a Blackburnian projectivist, privileging a class of representational discourse as the home territory for a basic notion of truth (so that other discourses are playing a different game, and have to earn their right to borrow the clothes that are cut to fit the representational case)? Or will she become a more thorough-going pragmatist (holding that there is no privileged core)? Or — in a different key — will she end up more like Wright’s minimalist? Certainly, the reader of Defending the Axioms isn’t given any reason to suppose that things must fall out anything like the first way, keeping room for a more substantial notion of truth.

And that gap in the end makes the current book rather unsatisfactory as a stand-alone affair (which isn’t to say it is not a fun read). Of course Maddy herself has written extensively about truth elsewhere so as to fill in something of what’s missing here. But this means that, after all, you really will have to go back to read the weighty Second Philosophy to get the whole story, and hence the full defence of the line Maddy wants to take about sets.

You can read the book review that Luca Incurvati and I wrote for Mind, which takes up rather different themes, here.

Encore #13: Tasks for philosophers of mathematics?

Here again is one of my very first blog-posts, musing about what philosophers of mathematics with my cast of mind might usefully get up to …

Tired of ontology? (May 13, 2006)

It requires a certain kind of philosophical temperament — which I do seem to lack — to get worked up by the question “But do numbers really exist?” and excitedly debate whether to be a fictionalist or a modal structuralist or some other -ist. As younger colleagues gambol around cheerfully chattering about these things, wondering whether to be hermeneutic or revolutionary, I find myself sitting on the side-lines, slightly grumpily muttering under my breath ‘And who cares?’.

To exaggerate a bit, I guess there’s a basic divide here between two camps. One camp is primarily interested in analytical metaphysics, or epistemology, or the philosophy of language, and sees mathematics as a test case for their preferred Quinean naturalist line or their Kantian framework (or whatever). The other camp is puzzled by some internal features of the practice of real mathematics and would like to have a satisfying story to tell about them.

Well, if you’re tired of playing the ontology game with the first camp, then there’s actually quite a bit of fun to be had in the second camp, and maybe more prospect of making some real progress. In the broadest brush terms, here are just a few of the questions that bug me (leaving aside Gödelian matters):

  1. How should we develop/improve/augment/replace Lakatos’s model of how mathematics develops in his Proofs and Refutations?
  2. What makes a mathematical proof illuminating/explanatory? (And what are we to make of unsurveyable computer proofs?)
  3. Is there a single conceptual grounding for the standard axioms of set theory? (And what are we to make of the standing of various large cardinal axioms?)
  4. What is the significance of the reverse mathematics project? (Is it just a technical “accident” that RCA_0 is used as a base theory in that project? Can some kind of conceptual grounding be given for that theory? Would it be more principled to pursue Feferman’s predicative project?)
  5. Is there any sense in which category theory provides new foundations/suggests a new philosophical understanding for mathematics?

There’s even a possibility that your local friendly mathematicians might be interested in talking about such things!

That still strikes me as quite a good list, and the questions continue to interest me (particularly, at the moment, the last!). But what really good work has been published on these in the intervening ten years? Suggestions please!

Encore #10: Parsons on intuition

Just yesterday, Brian Leiter posted the results of one of his entertaining/instructive online polls, this time on the “Best Anglophone and German Kant scholars since 1945”. Not really my scene at all. Though I did, back in the day, really love Bennett’s Kant’s Analytic (as philosophy this is surely brilliant, whatever its standing as “scholarship”). I note that in comments after his post, Leiter expresses regret for not having listed Charles Parsons in his list of contributors to Kant scholarship to be voted on. Well, true enough, Parsons has battled over the years to try to make sense of/rescue something from Kantian thoughts about ‘intuition’ as grounding e.g. arithmetical knowledge. But with what success, I do wonder? I found the passages about intuition in Mathematical Thought and Its Objects rather baffling. Here I put together some thoughts from 2008 blog posts.

Is any of our arithmetical knowledge intuitive knowledge, grounded on intuitions of mathematical objects? Parsons writes, “It is hard to see what could make a cognitive relation to objects count as intuition if not some analogy with perception” (p. 144). But how is such an analogy to be developed?

Parsons tries to soften us up for the idea that we can have intuitions of abstracta (where these intuitions are somehow quasi-perceptual) by considering the putative case of perceptions – or are they intuitions? – of abstract types such as letters. The claim is that “the talk of perception of types is something normal and everyday” (p. 159).

But it is of course not enough to remark that we talk of e.g. seeing types: we need to argue that we can take our talk here as indeed reporting a (quasi)-perceptual relation to types. Well, here I am, looking at a squiggle on paper: I immediately see it as being, for example, a Greek letter phi. And we might well say: I see the letter phi written here. But, in this case, it might well be said, ‘perception of the type’ is surely a matter of perceiving the squiggle as a token of the type, i.e. perceiving the squiggle and taking it as a phi.

Now, it would be wrong to suppose that – at an experiential level – ‘seeing as’ just factors into a perception and the superadded exercise of a concept or of a recognitional ability. When the aspect changes, and I see the lines in a drawing as a picture of a duck rather than a rabbit, at some level the content of my conscious perception itself, the way it is articulated, changes. Still, in seeing the lines as a duck, it isn’t that there is more epistemic input than is given by sight (visual engagement with a humdrum object, the lines) together with the exercise of a concept or of a recognitional ability. Similarly, seeing the squiggle as a token of the Greek letter phi again doesn’t require me to have some epistemic source over and above ordinary sight and conceptual/recognitional abilities. There is no need, it seems, to postulate something further going on, i.e. quasi-perceptual ‘intuition’ of the type.

The deflationist idea, then, is that seeing the type phi instantiated on the page is a matter of seeing the written squiggle as a phi, and this involves bringing to bear the concept of an instance of phi. And, the suggestion continues, having such a concept is not to be explained in terms of a quasi-perceptual cognitive relation with an abstract object, the type. If anything it goes the other way about: ‘intuitive knowledge of types’ is to be explained in terms of our conceptual capacities, and is not a further epistemic source. (But note, the deflationist who resists the stronger idea of intuition as a distinctive epistemic source isn’t barred from taking Parsons’s permissive line on objects, and can still allow the introduction of talk via abstraction principles of abstract objects such as types. He needn’t have a nominalist horror of talk of abstracta.)

Let’s be clear here. It may well be that, as a matter of the workings of our cognitive psychology, we recognize a squiggle as a token phi by comparing it with some stored template. But that of course does not imply that we need be able, at the personal level, to bring the template to consciousness: and even if we were to have some quasi-perceptual access to the template itself, it wouldn’t follow that we have quasi-perceptual access to the type. Templates are mental representations, not the abstracta represented.

Parsons, however, explicitly rejects the sketched deflationary story about our intuition of types when he turns to consider the particular case of the perception of expressions from a very simple ‘language’, containing just one primitive symbol ‘|’ (call it ‘stroke’), which can be concatenated. The deflationary reading

does not accurately render our perceptual consciousness of strokes. It would make what I want to call intuition of a string an instance of seeing a certain inscription as of a type …. But in actual cases, the identification of the type will be firmer and more explicit than the identification of any physical inscription that is an instance of the type. That the inscriptions are real physical objects with definite physical properties plays no role in the mathematical treatment of the language, which is what concerns us. An illusory presentation of a string, provided it is sufficiently clear, will do as well to illustrate a mathematical notion as a real one. (p. 161)

There seem to be two points here, neither of which will really trouble the deflationist.

The first point is that the identification of a squiggle’s type may be “firmer and more explicit” than our determination of its physical properties as a token (which I suppose means that a somewhat blurry shape may still definitely be a letter phi). But so what? Suppose we have some discrete conceptual pigeon-holes, and have reason to take what we see as belonging in one pigeonhole or another (as when we are reading Greek script, primed with the thought that what we are seeing will be a sequence of letters from forty eight upper and lower case possibilities). Then fuzzy tokens can get sharply pigeonholed. But there’s nothing here that the deflationist about seeing types can’t accommodate.

The second point is that, for certain illustrative purposes, illusory strings are as good as physical strings. But again, so what? Why shouldn’t seeing illusory strokes as a string be a matter of our tricked perceptual apparatus engaging with our conceptual and/or recognitional abilities? Again, there is certainly no need to postulate some further cognitive achievement, ‘intuition of a type’.

Oddly, Parsons himself, when wrestling with issues about vagueness, comes close to making these very points. For you might initially worry that intuitions which are founded in perceptions and imaginings will inherit the vagueness of those perceptions or imaginings – and how would that then square with the idea that mathematical intuition latches onto sharply delineated objects? But Parsons moves to block the worry, using the example of seeing letters again. His thought now seems to be the one above, that we have some discrete conceptual pigeon-holes, and in seeing squiggles as a phi or a psi (say), we are pigeon-holing them. And the fact that some squiggles might be borderline candidates for putting in this or that pigeon-hole doesn’t (so to speak) make the pigeon-holes less sharply delineated. Well, fair enough. But thinking in these terms surely does not sustain the idea that we need some basic notion of the intuition of the type phi to explain our pigeon-holing capacities.

So, I’m unpersuaded that we actually need (or indeed can make much sense of) any notion of the quasi-perceptual ‘intuition of types’ – and in particular, any notion of the intuition of types of stroke-strings – that resists a deflationary reading. But let’s suppose for a moment that we follow Parsons and think we can make sense of such a notion. Then what use does he want to make of the idea of intuiting strokes and stroke-strings?

Parsons writes

What is distinctive of intuitions of types [here, types of stroke-strings] is that the perceptions and imaginings that found them play a paradigmatic role. It is through this that intuition of a type can give rise to propositional knowledge about the type, an instance of intuition that. I will in these cases use the term ‘intuitive knowledge’. A simple case is singular propositions about types, such as that ||| is the successor of ||. We see this to be true on the basis of a single intuition, but of course in its implications for tokens it is a general proposition. (p. 162)

This passage raises a couple of issues worth commenting on.

One issue concerns the claim that there is a ‘single intuition’ here on the basis of which we see that ||| is the successor of ||. Well, I can think of a few cognitive situations which we might agree to describe as grounding quasi-perceptual knowledge that ||| is the successor of || (even if some of us would want to give a deflationary construal of the cases, one which doesn’t appeal to intuition of abstracta). For example,

  1. We perceive two stroke-strings, ||| and ||, and aligning the two, we judge one to be the successor of the other.
  2. We perceive a single sequence of three strokes ||| and flip to and fro between seeing it as a threesome and as a block of two followed by an extra stroke.

But, even going along with Parsons on intuition, neither of those cases seems aptly described as seeing something to be true on the basis of a single intuition. In the first case, don’t we have an intuition of ||| and a separate intuition of || plus a recognition of the relation between them? In the second case, don’t we have successive intuitions, and again a recognition of the relation between them? It seems that our knowledge that ||| is the successor of || is in either case grounded on intuitions, plural, plus a judgement about their relation. And now the suspicion is that it is the thoughts about the relations that really do the essential grounding of knowledge here (thoughts that could as well be engaging with perceived real tokens or with imagined tokens, rather than with putative Parsonian intuitions that, as it were, reach past the real or imagined inscriptions to the abstracta).

The other issue raised by the quoted passage concerns the way that the notion of ‘intuitive knowledge’ is introduced here, as the notion of propositional knowledge that arises in a very direct and non-inferential way from intuition(s) of the objects the knowledge is about: “an item of intuitive knowledge would be something that can be ‘seen’ to be true on the basis of intuiting objects that it is about” (p. 171). Such a notion looks very restrictive – on the face of it, there won’t be much intuitive knowledge to be had.

But Parsons later wants to extend the notion in two ways. First

Evidently, at least some simple, general propositions about strings can be seen to be true. I will argue that in at least some important cases of this kind, the correct description involves imagining arbitrary strings. Thus, that will be included in ‘intuiting objects that a proposition is about’. (p. 171)

But even if we now allow intuition of ‘arbitrary objects’, that still would seem to leave intuitive knowledge essentially non-inferential. However,

I do not wish to argue that the term ‘intuitive knowledge’ should not be used in that [restrictive] way. Our sense, following that of the Hilbert School, is a more extended one that allows that certain inferences preserve intuitive knowledge, so that there can actually be a developed body of mathematics that counts as intuitively known. This seems to me a more interesting conception, in addition to its historical significance. Once one has adopted this conception, one has to consider case by case what inferences preserve intuitive knowledge. (p. 172)

Two comments about this. Take the second proposed extension first. The obvious question to ask is: what will constrain our case-by-case considerations of which kinds of inference preserve intuitive knowledge? To repeat, the concept of intuitive knowledge was introduced by reference to an example of knowledge seemingly non-inferentially obtained. So how are we supposed to ‘carry on’, applying the concept now to inferential cases? It seems that nothing in our original way of introducing the concept tells us which such further applications are legitimate, and which aren’t. But there must be some constraints here if our case-by-case examinations are not just to involve arbitrary decisions. So what are these constraints? I struggle to find any clear explanation in Parsons.

And what about intuiting ‘arbitrary’ strings? How does this ground, for example, the knowledge that every string has a successor? Well, supposedly, (1) “If we imagine any [particular] string of strokes, it is immediately apparent that a new stroke can be added.” (p. 173) (2) But we can “leave inexplicit its articulation into single strokes” (p. 173), so we are imagining an arbitrary string, and it is evident that a new stroke can be added to this too. (3) “However, …it is clear that the kind of thought experiments I have been describing can be taken as intuitive verifications of such statements as that any string of strokes can be extended only if one carries them out on the basis of specific concepts, such as that of a string of strokes. If that were not so, they would not confer any generality.” (p. 174) (4) “Although intuition yields one essential element of the idea that there are, at least potentially, infinitely many strings …more is involved in the idea, in particular that the operation of adding an additional stroke can be indefinitely iterated. The sense, if any, in which intuition tells us that is not obvious.” (p. 176) But (5) “Once one has seen that every string can be extended, it is still another question whether the string resulting by adding another symbol is a different string from the original one. For this it must be of a different type, and it is not obvious why this must be the case. … Although it will follow from considerations advanced in Chapter 7 that it is intuitively known that every string can be extended by one of a different type, ideas connected with induction are needed to see it” (p. 178).

There’s a lot to be said about all that, though (4) and (5) already indicate that Parsons thinks that, by itself, ‘intuition’ of stroke-strings might not take us terribly far. But does it take us even as far as Parsons says? For surely it is not the case that imagining/intuiting adding a stroke to an inexplicitly articulated string, together with the exercise of the concept of a string of strokes, suffices to give us the idea that any string can be extended. For we can surely conceive of a particularist reasoner, who has the concept of a string, can bring various arrays (more or less explicitly articulated) under that concept, and given a string can recognize that this one can be extended – but who can’t advance to frame the thought that they can all be extended. The generalizing move surely requires a further thought, not given in intuition.

Indeed, we might now wonder quite what the notion of intuition is doing here at all. For note that (1) and (2) are claims about what is imaginable. But if we can get to general results about extensibility by imagining particular strings (or at any rate, imagining strings “leaving inexplicit their articulation into single strokes”, thus perhaps |||| with a blurry filling) and then bringing them under concepts and generalizing, why do we also need to think in terms of having cognitive access to something else which is intrinsically general, i.e. stroke-string types? It seems that Parsonian intuitions actually drop out of the picture. What gives them an essential role in the story?

Finally, note Parsons’s pointer forward to the claim that ideas “connected with induction” can still be involved in what is ‘intuitively known’. We might well wonder again as we did before: what integrity is left to the notion of intuitive knowledge once it is no longer tightly coupled with the idea of some quasi-perceptual source and allows inference, now even non-logical inference, to preserve intuitive knowledge? I can’t wrestle with this issue further here: but Parsons’s ensuing discussion of these matters left me puzzled and unpersuaded.

Again, as with the last post, that’s how things seemed to be more than seven years ago. If other readers have a better sense of what a Parsonian line on intuition might come to, comments are open!

Encore #9: Parsons on noneliminative structuralism

I could post a few more encores from my often rather rude blog posts about Murray and Rea’s Introduction to the Philosophy of Religion. But perhaps it would be better for our souls to turn to an altogether more serious book which I blogged about at length, Charles Parsons’ Mathematical Thought and Its Objects. I got a great deal from trying to think through my reactions to this dense book in 2008. But I often struggled to work out what was going on. Here, in summary, is where I got to in a series of posts about the book’s exploration of structuralism. I’m very sympathetic to structuralist ideas: but I found it difficult to pin down Parsons’s version.

In his first chapter, Parsons defends a thin, logical, conception of ‘objects’ on which “Speaking of objects just is using the linguistic devices of singular terms, predication, identity and quantification to make serious statements” (p. 10). His second chapter critically discusses eliminative structuralism. The third chapter presses objections against modal structuralism. But Parsons still finds himself wanting to say that “something close to the structuralist view is true” (p. 42), and he now moves on to characterize his own preferred noneliminative version. We’ll concentrate on the view as applied to arithmetic.

Parsons makes two key initial points. (1) Unlike the eliminative structuralist, the noneliminativist “take[s] the language of mathematics at face value” (p. 100). So arithmetic is indeed about numbers as objects. What characterizes the position as structuralist is that we don’t “take more as objectively determined about the objects about which it speaks than [the relevant mathematical] language itself specifies” (p. 100). (2) Then there is “the aspect of the structuralist view stressed by Bernays, that existence for mathematical objects is in the context of a background structure” (p. 101). Further, structures aren’t themselves objects, and “[the noneliminativist] structuralist account of a particular kind of mathematical object does not view statements about that kind of object as about structures at all”.

But note, thus far there’s nothing in (1) and (2) that the neo-Fregean Platonist (for example) need dissent from. The neo-Fregean can agree e.g. that numbers only have numerical intrinsic properties (pace Frege himself, even raising the Julius Caesar problem is a kind of mistake). Moreover, he can insist that individual numbers don’t come (so to speak) separately, one at a time, but come all together, forming an intrinsically ordered structure — so in identifying the number 42 as such, we necessarily give its position in relation to other numbers.

So what more is Parsons saying about numbers that distinguishes his position from the neo-Fregean? Well, he in fact explicitly compares his favoured structuralism with the view that the natural numbers are sui generis in the sort of way that the neo-Fregean holds. He writes

One further step that the structuralist view takes is to reject the demand for any further story about what objects the natural numbers are [or are not]. (p. 101)

The picture seems to be that the neo-Fregean offers a “further story” at least in negatively insisting that numbers are sui generis, while the structuralist refuses to give such a story. As Parsons puts it elsewhere

If what the numbers are is determined only by the structure of numbers, it should not be part of the nature of numbers that none of them is identical to an object given independently.

But of course, neo-Fregeans like Hale and Wright won’t agree that their rejection of cross-type identities is somehow an optional extra: they offer arguments which — successfully or otherwise — aim to block the Julius Caesar problem and reveal certain questions about cross-type identifications as ruled out by our very grasp of the content of number talk. So from this neo-Fregean perspective, we can’t just wish into existence a coherent structuralist position that both (a) construes our arithmetical talk at face value, as referring to numbers as genuine objects, yet also (b) insists that the possibility of cross-type identifications is left open, because — so this neo-Fregean story goes — a properly worked out version of (a), together with reflection on the ways that genuine objects are identified under sortals, implies that we can’t allow (b).

Now, on the sui generis view about numbers, claims identifying numbers with sets will be ruled out as plain false. Or perhaps it is even worse, and such claims fail to make the grade for being either true or false (though it is, of course, notoriously difficult to sustain a stable, well-motivated, distinction between the neither-true-nor-false and the plain false — so let’s not dwell on this). Conversely, assuming that numbers are objects, if claims identifying them with sets and the like are false or worse, then numbers are sui generis. So it seems that if Parsons is going to say that numbers are objects but are not sui generis, he must allow space for saying that claims identifying numbers with sets (or if not sets, at least some other objects) are true. But then Parsons is faced with the familiar Benacerraf multiple-candidates problem (if not for sets, then presumably an analogous problem for other candidate objects, whatever they are: let’s keep things simple by running the argument in the familiar set-theoretic setting). How do we choose e.g. between saying that the finite von Neumann ordinals are the natural numbers and saying that the finite Zermelo ordinals are?

It seems arbitrary to plump for either choice. Rejecting both together (and other choices, on similar grounds) just takes us back to the sui generis view — or even to Benacerraf’s preferred view that numbers aren’t objects at all. So that, it seems, leaves just one position open to Parsons, namely to embrace both choices, and to avoid the apparently inevitable absurdity that $latex \{\emptyset,\{\emptyset\}\}$ is identical to $latex \{\{\emptyset\}\}$ (because both are identical to 2) by going contextual. It’s only in one context that ‘$latex 2 = \{\emptyset,\{\emptyset\}\}$’ is true; and only in another that ‘$latex 2 = \{\{\emptyset\}\}$’ is true.
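For readers who want the two candidate ladders side by side, here are the standard definitions (textbook material, not quotations from Parsons):

```latex
\begin{align*}
\text{von Neumann:}\quad & 0 = \emptyset, \quad 1 = \{\emptyset\}, \quad 2 = \{\emptyset,\{\emptyset\}\}, \quad n+1 = n \cup \{n\};\\
\text{Zermelo:}\quad & 0 = \emptyset, \quad 1 = \{\emptyset\}, \quad 2 = \{\{\emptyset\}\}, \quad n+1 = \{n\}.
\end{align*}
```

Both progressions satisfy the conditions for being a simply infinite system, which is exactly why nothing internal to arithmetic favours one identification over the other.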

And this does seem to be the line Parsons seems inclined to take: “The view we have defended implies that [numbers] are not definite objects, in that the reference of terms such as ‘the natural number two’ is not invariant over all contexts” (p. 106). But how are we to understand that? Is it supposed to be rather like the case where, when Brummidge United is salient, ‘the goal keeper’ refers to Joe Doe, but when Smoketown City is salient, ‘the goal keeper’ refers to Richard Roe? So when the von Neumann ordinals are salient, ‘2’ refers to $latex \{\emptyset,\{\emptyset\}\}$, and when the Zermelo ordinals are salient, ‘2’ refers to $latex \{\{\emptyset\}\}$? But then, to pursue the analogy, while ‘the goal keeper’ is indeed sometimes used to talk about now this particular role-filler and now that one, the designator is apparently also sometimes used more abstractly to talk about the role itself — as when we say that only the goal keeper is allowed to handle the ball. Likewise, even if we do grant that ‘2’ sometimes refers to role-fillers, it seems that sometimes it is used to talk more abstractly about the role — perhaps as when we say, when no particular $latex \omega$-sequence of sets is salient, that 2 is the successor of the successor of zero. Well, is this the way Parsons is inclined to go, i.e. towards a structuralism developed in terms of a metaphysics of roles and role-fillers?

Well, Parsons does explicitly talk of “the conclusion that natural numbers are in the end roles rather than objects with a definite identity” (p. 105). But why aren’t roles objects after all, in his official thin ‘logical’ sense of object? — for we can use “the linguistic devices of singular terms, predication, identity and quantification to make serious statements” about roles (and yes, we surely can make claims about identity and non-identity: the goal keeper is not the striker). True, roles are, as Parsons might say, “thin” or “impoverished” objects whose intrinsic properties are determined by their place in a structure. But note, Parsons’s official view about objects didn’t require any sort of ‘thickness’: indeed, he is “most concerned to reject the idea that we don’t have genuine reference to objects if the ‘objects’ are impoverished in the way in which elements of mathematical structures appear to be” (p. 107). And being merely ‘thin’ objects, roles themselves (e.g. numbers) can’t be the same things as ‘thick’ role-fillers. So now, after all, numbers qua number-roles do look to be sui generis entities with their own identity — objects, in the broad logical sense, which are not to be identified with any role-filler — in other words, just the kind of thing that Parsons seems not to want to be committed to.

The situation is further complicated when Parsons briefly discusses Dedekind abstraction, though similar issues arise. To explain: suppose we have a variety of ‘concrete’ structures, whether physically realized or realized in the universe of sets, that satisfy the conditions for being a simply infinite system. Then Dedekind’s idea is that we ‘abstract’ from these a further structure $latex \langle N, 0, S\rangle$ which is — so to speak — a ‘bare’ simply infinite system without other inessential physical or set-theoretic features, and it is elements of this system which are the numbers themselves. (Erich H. Reck nicely puts it like this: “[W]hat is the system of natural numbers now? It is that simple infinity whose objects only have arithmetic properties, not any of the additional, ‘foreign’ properties objects in other simple infinities have.”) Since the bare structure is all that is generated by the Dedekind abstraction, “it conforms to the basic structuralist intuition in that the number terms introduced do not give us more than the structure” (p. 105), to borrow Parsons’s words. But, he continues,

This procedure gets its force from the use of a typed language. Thus, the question arises what is to prevent us from later, for some specific purpose, speaking of numbers in a first-order language and even affirming identities of numbers and objects given otherwise.

To which the answer surely is that, to repeat, on the Dedekind abstraction view, the ‘thin’ numbers determinately do not have intrinsic properties other than those given in the abstraction procedure which introduces them: so, by assumption, they are determinately distinct from any ‘thicker’ object with such further properties. That, surely, is what blocks the later identifications.
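For reference, here is a standard modern statement of the conditions for $latex \langle N, 0, S\rangle$ to be a simply infinite system (the usual textbook formulation, not Parsons’s own wording):

```latex
\begin{align*}
&\text{(i)}\quad 0 \in N;\\
&\text{(ii)}\quad S \colon N \to N \text{ is injective};\\
&\text{(iii)}\quad 0 \notin S[N];\\
&\text{(iv)}\quad \text{for every } X \subseteq N,\ \text{if } 0 \in X \text{ and } S[X] \subseteq X, \text{ then } X = N \quad \text{(induction)}.
\end{align*}
```

Dedekind’s categoricity theorem then tells us that any two simply infinite systems are isomorphic, which is what licenses talk of ‘the’ bare structure abstracted from the various concrete realizations.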

So now I’m puzzled. For Parsons, does ‘the natural number two’ (i) have a fixed reference to a sui-generis ‘thin’ role-object (or Dedekind abstraction, if that’s different), or (ii) have a contextually shifting reference to a role-filler, or (iii) both? Option (iii) is perhaps the most charitable reading. But it would have helped a lot if Parsons had much more explicitly related his position to an articulated metaphysics of role/role-filler structuralism. Elsewhere, he writes that “the metaphysical tradition is likely to be misleading as a source of ideas about the objects of modern mathematics”. Maybe that’s right. But then it is all the more important to be absolutely clear and explicit about what new view is being proposed. And here, I fear, Parsons’s writing falls short of that.

Or so I thought, now over seven years ago. I haven’t re-read Parsons’s text since. I would be very interested to get any comments from readers who worked their way to some clearer understanding of his position.

Encore #6: Gödel vs Turing

And from the same collection of articles, a link to a paper that I (for once!) unreservedly praised and agreed with.

Church’s Thesis 13: Gödel on Turing (June 14, 2007)

Phew! At last, I can warmly recommend another paper in Church’s Thesis after 70 Years.

Some headline background: Although initially Gödel was hesitant, by about 1938 he is praising Turing’s work as establishing the “correct definition” of computability. Yet in 1972 he writes a short paper on undecidability, which includes a section headed “A philosophical error in Turing’s work”.

So an issue arises. Has Gödel changed his mind?

Surely not. What Gödel was praising in 1938 was Turing’s analysis of a finite step-by-step computational procedure. (Recall the context: Turing was originally fired up by the Entscheidungsproblem, which is precisely the question whether there is a finitistic procedure that can be mechanically applied to decide whether a sentence is a first-order logical theorem. So it is analysis of such procedures that is called for, and that was of concern to Gödel too.)

What the later Gödel was resisting in 1972 was Turing’s further thought that, in the final analysis, human mental procedures cannot go beyond such finite mechanical procedures. Gödel was inclined to think that, in the words of his Gibbs Lecture, the human mind “infinitely surpasses the powers of any finite machine”. So, surely, there is no change of mind, just an important change of topic.

That’s at any rate what I have previously taken to be the natural view. But I confess I’d never done any careful homework to support it. Perhaps because it chimes with a view I’ve been at pains to stress in various comments on this collection of articles — namely that there is a very important distinction between (1) the “classic” Church-Turing thesis that the effectively computable functions (step-by-small-step algorithmically computable functions) are exactly the recursive functions, and (2) various bolder theses about what can be computed by (idealised) machines and/or by minds.

And so it is very good to see that now Oron Shagrir has given a careful and convincing defence of this natural view of Gödel’s thoughts on Turing, with lots of detailed references, in his (freely downloadable!) paper “Gödel on Turing on computability”. Good stuff!

Encore #5: Church’s Thesis and open texture

At various times, I have blogged a series of posts as I read through a book, often en route to writing a review. One of the first books to get this treatment was the collection of articles Church’s Thesis After 70 Years edited by Adam Olszewski et al. This was a very mixed bag, as is often the way with such collections. But some pieces stood out as worth thinking about. Here’s one (which I initially posted about in 2007, but returned to a bit later when we read it one Thursday in the Logic Seminar).

Stewart Shapiro, “Computability, Proof, and Open-Texture” (January 18, 2008)

Let me say straight away that it is a very nice paper, written with Stewart Shapiro’s characteristic clarity and good sense.

Leaving aside all considerations about physical computability, there are at least three ideas in play in the vicinity of the Church-Turing Thesis. Or better, there is first a cluster of inchoate, informal, open-ended, vaguely circumscribed ideas of computability, shaped by some paradigm examples of everyday computational exercises. Then second there is the semi-technical idea of effective computability (with quite a carefully circumscribed though still informal definition, as given in various texts, such as Hartley Rogers’ classic). Then thirdly there is the idea of Turing computability (and along with that, of course, the other provably equivalent characterizations of computability as recursiveness, etc.).

It will be agreed on all sides that our original inchoate, informal, open-ended ideas could and can be sharpened up in various ways. Hence, the notion of effective computability takes some strands in the inchoate notion and refines and radically idealizes them in certain ways (e.g. by abstracting from practical considerations of the amount of time or memory resources a computation would use). But there are other notions, e.g. of feasible computability, that can also be distilled out. Or notions of what is computable by a physically realisable set-up in this or other worlds. It isn’t that the notion of effective computability is — so to speak — the only clear concept waiting to be revealed as the initial fog clears.

So I think that Shapiro’s rather Lakatosian comments in his paper about how concepts get refined and developed and sharpened in mathematical practice are all well-taken, as comments about how we get from our initial inchoate preformal ideas to, in particular, the semi-technical notion of effective computability. And yes, I agree, it is important to emphasize that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision — guided of course by the desire for interesting, fruitful concepts — in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes very well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept (or concept-cluster) and arriving at something more sharply bounded.
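To make the Ackermann example concrete, here is the standard Ackermann–Péter recursion, sketched in Python (my illustration, not anything from Shapiro’s paper). Each individual step is as mechanical as could be, so the function counts as effectively computable in the idealized sense; but the values explode so fast that feasible computability comes apart from effective computability almost immediately.

```python
import sys
sys.setrecursionlimit(100_000)  # the naive recursion gets deep quickly

def ack(m: int, n: int) -> int:
    """The Ackermann-Peter function: effectively computable, wildly infeasible."""
    if m == 0:
        return n + 1                      # base case
    if n == 0:
        return ack(m - 1, 1)              # drop to the previous level
    return ack(m - 1, ack(m, n - 1))      # the doubly recursive step

print(ack(2, 3))   # 9   (in general A(2, n) = 2n + 3)
print(ack(3, 3))   # 61  (in general A(3, n) = 2^(n+3) - 3)
# ack(4, 2) is 2**65536 - 3, a number with 19,729 digits; the naive
# recursion above would not finish computing it in the lifetime of the universe.
```

Every step here has the ‘low intelligence’ character mentioned in the text; it is only the number of steps that gets out of hand, which is precisely why effective and feasible computability need to be distilled out as different refinements of the one inchoate notion.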

But another question arises about the relation between the semi-technical notion of effective computability, once we’ve got there, and the notion of Turing computability. Now, Shapiro writes as if the move onwards from the semi-technical notion is (as it were) just more of the same. In other words, the same Lakatosian dynamic (rational conceptual development under the pressure of proof-development) is at work in first getting from the original inchoate notion of computability to the notion of effective computability, as in then going on eventually to refine out the notion of Turing computability. Well, that’s a good picture of what is going on at the conceptual level. But Shapiro seems to assume that this conceptual refinement goes along with a narrowing of extension (in getting our concepts sharper, we are drawing tighter boundaries). But that doesn’t obviously follow. An alternative picture is that once we have got as far as the notion of effectively computable functions, we do have a notion which, though informal, is subject to sufficient constraints to ensure that it does indeed have a determinate extension (the class of Turing-computable functions). We can go on to say more about that extension, in coming up with various co-extensive technical notions of computability, but still the semi-technical notion of effective computability does enough to fix the class of functions we are talking about. For some exploration of the latter view, see for example Robert Black’s 2000 Philosophia Mathematica paper.

So a key issue here is this: is further refinement of “open texture” in the notion of effective computability required to determine a clear extension? Shapiro seems to think so. But looking at his paper, it is in fact difficult to discern any argument for supposing that things go his way. He is good and clear about how the notion of effective computability gets developed. But he seems to assume, rather than argue, that we need more of the same kind of conceptual development before we are entitled to settle on the Turing computable/the recursive as a canonically privileged class of effectively computable functions. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

Conference: Philosophy of mathematics — truth, existence and explanation.

Philosophy of maths AND Italy — what’s not to like? So let me note that the second conference of the Italian Network for the Philosophy of Mathematics has been announced for 26-28 May 2016, University of Chieti-Pescara, Chieti, Italy.  

The invited speakers are Volker Halbach (University of Oxford), Enrico Moriconi (University of Pisa), Achille Varzi (Columbia University) together with ‘early career’ speakers Marianna Antonutti Marfori (IHPST, Paris) and Luca Incurvati (University of Amsterdam).

This is an English language conference, and there is a call for abstracts for contributed talks “in any area of philosophy of mathematics connected with the issues of truth, existence, and explanation”.  All the details can be found at the FilMat website here.

Mathematical depth

In our Mind review of Penelope Maddy’s Defending the Axioms, Luca Incurvati and I were rather skeptical about whether she could really rely on the notion of mathematical depth to do as much work as she wants it to do in that book. But we did add “We agree that there is depth to the phenomenon of mathematical depth: all credit to Maddy for inviting philosophers of mathematics to think hard about its role in mathematical practice.”

Since then, there has been a workshop on mathematical depth at UC Irvine co-organized by Maddy, and now versions of the papers there have been made available as a virtual issue of Philosophia Mathematica which will remain freely available until November this year. Looks interesting.
