In his 1967 paper, ‘God, the Devil, and Gödel’, Paul Benacerraf famously gives a nice argument, going via Gödel’s Second Theorem, for the conclusion that either my mathematical knowledge can’t be simulated by some computing machine (there is no particular Turing machine which enumerates what I know), or, if it can, I don’t know which machine does the trick. Benacerraf’s argument is perhaps not ideally presented, so for a crisper, streamlined version see my Gödel book, §28.6: but the idea should be familiar.
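For those who want the bare bones, here is the shape of the argument on one natural reconstruction (my labelling and glossing, not Benacerraf’s own wording):

```latex
% Benacerraf-style argument, rough skeleton.
% Let K be my (idealized) mathematical knowledge, S a formal theory.
\begin{align*}
&\text{(1)}\quad \mathrm{Thm}(S) = K
  &&\text{(assumption: $S$'s theorems are exactly what I know)}\\
&\text{(2)}\quad \text{Suppose further that I \emph{know} that (1) holds.}\\
&\text{(3)}\quad \text{Everything I know is true, so $S$ is sound, hence consistent;}\\
&\qquad\quad \text{given (2), I can come to know } \mathrm{Con}(S),
  \text{ so } \mathrm{Con}(S) \in K = \mathrm{Thm}(S).\\
&\text{(4)}\quad \text{But if $S$ is consistent and sufficiently strong, Gödel II gives }
  S \nvdash \mathrm{Con}(S).\\
&\text{(5)}\quad \text{Contradiction. So either no $S$ satisfies (1), or (2) fails:}\\
&\qquad\quad \text{I can't know \emph{which} machine/theory does the enumerating.}
\end{align*}
```

This is only a sketch: making steps (2) and (3) watertight is exactly where the hard work lies, as the worries below about idealization and vagueness indicate.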

Of course, how interesting you think this result is will depend on just how seriously you take the notion that there might *be* such a determinate body of truths as my mathematical knowledge. For one thing, any real-world mathematician makes mistakes: what I *know* will be a subset of what I think I know, and I won’t in fact know *which* subset (so it’s no surprise if I wouldn’t recognize which Turing machine enumerates my actual knowledge). OK, it will be replied that the Benacerraf argument is supposed to apply to my *idealized* knowledge, prescinding from mistakes in performance etc. But how is that story supposed to work? And even if we can make the idea fly, and can sensibly idealize away from common-or-garden error, isn’t it going to be vague at the margins what I count as a proof? So isn’t it still going to be irredeemably *vague* what belongs to my idealized mathematical knowledge? If so, the question of simulating it with the crisply determinate output of a Turing machine doesn’t arise.

Similar worries about idealizing mathematicians and the vagueness of the informal notion of proof will beset other attempts to get sharp anti-computationalist conclusions about the mind from Gödelian considerations. And in his quite brief paper, ‘The Gödel Theorem and Human Nature’, Hilary Putnam brings such worries to bear against Penrose in particular. Rather than pick holes again in the details of Penrose’s arguments (which have been chewed over enough in the literature, by Putnam among many others), he now stresses that the whole enterprise is misguided. “The very notion of an ideal mathematician is too problematic” to enable us to set up a contrast between what a suitably idealized version of us can do and what a naturalistically kosher mechanism can do. The complaint is quite a familiar one, but perhaps none the worse for that.

But interestingly, for all his worries about the pointfulness of such tricksy arguments, Putnam does return to explore a relation of Benacerraf’s argument, spelt out this time in terms of the notion of justified belief rather than knowledge.

The target is a (surely implausible!) Chomskian hypothesis to the effect that we have a ‘scientific faculty’ such that this faculty — in idealized form — can be simulated by some particular Turing machine *T*. In other words, (C) *T* enumerates (a coded version of) every true sentence of the form ‘we are justified in accepting *p* on evidence *e*’. Then Putnam has an argument that either (C) isn’t true, or, if it is, we aren’t justified in believing it (we can’t have a justified belief about which machine does the simulation trick).

Oddly, however, Putnam doesn’t mention the analogous Benacerraf argument at all, so — if you are interested in this sort of thing — you’ll need to do your own “compare and contrast” exercise. And like its predecessor, Putnam’s argument isn’t ideally presented either, and a bit of work needs to be done. Perhaps I’ll return to the exercise in a later posting, if it proves fun enough.

Or then again, perhaps I won’t … For in any case, the more interesting tack is to return to Penrose and ask whether he or a defender can sidestep the sort of general worry that Putnam has about arguments with a Lucas/Penrose flavour. Well, the next paper in *KGFM* is another shot by Penrose himself. So let’s turn to that.