# Stewart Shapiro, “Computability, Proof, and Open-Texture”

It was my turn briefly to introduce the Logic Seminar yesterday, and we were looking at Stewart Shapiro’s “Computability, Proof, and Open-Texture” (which is published in Church’s Thesis After 70 Years). I’ve blogged about this before, and although I didn’t look back at what I said then before rereading the paper, I seem to have come to much the same view of it. Here’s a combination of some of what I said yesterday and what I said before. Though let me say straight away that it is a very nice paper, written with Stewart’s characteristic clarity and good sense.

Leaving aside all considerations about physical computability, there are at least three ideas in play in the vicinity of the Church-Turing Thesis. Or rather there is first a cluster of inchoate, informal, open-ended, vaguely circumscribed ideas of computability, shaped by some paradigm examples of everyday computational exercises; and then second there is the semi-technical idea of effective computability (with quite a carefully circumscribed though still informal definition); and then thirdly there is the idea of Turing computability (and along with that, of course, the other provably equivalent characterizations of computability as recursiveness, etc.).

It should be agreed on all sides that our original inchoate, informal, open-ended ideas could and can be sharpened up in various ways. The notion of effective computability takes some strands in the inchoate notion and refines and radically idealizes them in certain ways. But there are other notions, e.g. of feasible/practicable computability, that can be distilled out. It isn’t that the notion of effective computability is — so to speak — the only clear concept waiting to be revealed as the initial fog clears.

Now, I think that Shapiro’s rather Lakatosian comments in his paper about how concepts get refined and developed and sharpened in mathematical practice are all well-taken, as comments about how we get from our initial inchoate preformal ideas to the semi-technical notion of effective computability. And yes, I agree, it is important to emphasize that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision — guided of course by the desire for interesting, fruitful concepts — in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept (or concept-cluster) and arriving at something more sharply bounded.
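The Ackermann example can be made vivid with a few lines of code. Here is a minimal Python sketch (mine, not from the paper): the naive recursive definition is effectively computable in the idealized sense, yet even tiny arguments make it infeasible in practice, which is exactly the gap between effective and feasible computability noted above. (The function name and the particular test values are illustrative choices, not anything from the post.)

```python
import sys

def ackermann(m, n):
    """Naive recursive Ackermann function: total and effectively
    computable in the idealized sense, but wildly infeasible even
    for small arguments."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

sys.setrecursionlimit(100_000)  # the recursion gets deep quickly

print(ackermann(2, 3))  # 9: trivial
print(ackermann(3, 3))  # 61: already thousands of recursive calls

# A(4, 2) = 2**2**16 - 3. The naive recursion would never finish in
# practice, but the known closed form lets us check that the value
# has 19729 decimal digits.
print(len(str(2 ** 2 ** 16 - 3)))  # 19729
```

The point of the sketch: nothing in the step-by-step recursion requires any intelligence beyond mechanical rule-following, so the function counts as effectively computable once we abstract away from the number of steps; but that abstraction is precisely the “element of decision” in refining the inchoate notion.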

Where a question arises is about the relation between the semi-technical notion of effective computability and the notion of Turing computability. Shapiro writes as if the move onwards from the semi-technical notion is (as it were) just more of the same: the same Lakatosian dynamic (rational conceptual development under the pressure of proof-development) is at work in first getting from the original inchoate notion of computability to the notion of effective computability, as is then at work in eventually refining out the notion of Turing computability. Well, that’s one picture. But an alternative picture is that once we have got as far as the notion of effectively computable functions, we do have a notion which, though informal, is subject to sufficient constraints to ensure that it does indeed have a determinate extension (the class of Turing-computable functions). For some exploration of the latter view, see for example Robert Black’s 2000 Philosophia Mathematica paper.

The key question here is: which picture is right? Looking at Shapiro’s paper, it is in fact difficult to discern any argument for supposing that things go his way. He is good and clear about how the notion of effective computability gets developed. But he seems to assume, rather than argue, that we need more of the same kind of conceptual development before we settle on the idea of Turing computability as a canonically privileged concept of computability. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

This entry was posted in Logic.

### One Response to Stewart Shapiro, “Computability, Proof, and Open-Texture”

1. TechTonics says:

SH: I was intrigued by the notion that a particular TM with a particular input could solve individual decidability problems by virtue of being selected, as it were, by an uncomputable function. I thought that violated the undecidability of the Halting problem. The intensional-versus-extensional musing touches on CT briefly at the end. Perhaps you will find this excerpt from the discussion interesting.

CPOT: "Are there any borderline computable functions? Can we construct a sorites series from a clearly computable function to a clearly non-computable function?"

The Age of Alternative Logics: Assessing Philosophy of Logic and Mathematics Today, Chapter 4, “Effectiveness”, pages 45–46, by Stewart Shapiro:

"This leads to my third theme, intensionality. Notice first that the pre-formal notion of effectiveness is pragmatic, or epistemic. It is not a property of sets or functions themselves, independent of the way they are presented. To focus on an example, let HT be a function on strings whose value at any solvable diophantine equation is “yes” and whose value at any unsolvable diophantine equation is “no”. The name “HT” abbreviates “Hilbert’s tenth”. To engage in counter-logical fantasy, suppose that there was a non-constructive proof that HT is recursive, one which did not specify a recursive derivation for it. It would then be known that there is such a derivation (and a Turing machine, etc.), but no one would know of one. This would not count as a solution to Hilbert’s tenth problem, as originally stated. It would still be the case that no one has devised “a process according to which it can be determined by a finite number of operations whether [a given diophantine] equation is solvable in rational integers”. The non-constructive proof would assure us that there is, in fact, such a process, but the envisioned development would not even assure us that a positive solution could be found. For all we would know, there might be some sort of epistemic barrier preventing us from knowing of a particular Turing machine that it computes HT.

On the other hand, the notion of computability, as defined above, is an extensional property of functions. If f is computable and g is the same function as f, then g is computable. Clearly, effectiveness and computability are closely connected. A function is computable if and only if there is an effective presentation that denotes it. But functions are not the same as presentations of functions. Some of the later, philosophical literature on Church’s thesis suffers from confusing these concepts."