Squeezing Church’s Thesis: Bringsjord and Sundar G. miss the point

Distinguish (1) initial, inchoate talk about “mechanical computation” from (2) much more sharply characterized, but still informal, talk about “algorithmic” or “effective” computation (in a finite number of steps, fixed by the instructions of a deterministic algorithm, where each step can be executed by a limited agent of fixed finite capacity, though without putting any limit on the number of steps, amount of paper consumed, etc.). Computation in sense (2) is, of course, the notion articulated in the informal introductory pages of Hartley Rogers’s classic Theory of Recursive Functions and Effective Computability, and in dozens of books since. NB, the talk of a fixed finite bound on the capacity of the computing agent is quite explicitly there in Rogers (see his p. 4). It’s his way of requiring that the steps should be idiot-proof, should be “small”, as of course we want from an algorithm in the traditional sense.

Distinguish now (CT1), the claim that any function computable by a mechanism, in the inchoate broad sense that features in (1), is recursive, from (CT2), the claim that any function effectively computable in sense (2) is recursive.

(CT1) looks false to many. For a start, hypercomputation a la Hogarth would seem to be a challenging counterexample. Well, there are things to be said about such fantasies. But I have no very settled view, and in fact don’t really think (CT1) is clear enough to worry about.

Instead, I make two claims.

(A) [Not so controversial, though controverted] The Church-Turing Thesis is best thought of as being (CT2). “Best” historically: (CT2) captures the thesis the founding fathers were after. “Best” conceptually: we’ve now got something tolerably sharp to think about.

(B) [Much more controversial] Understood as (CT2), the Church-Turing Thesis is open to informal proof (on a par, perhaps, with the proof that there are effective computations which aren’t primitive recursive — another proof relating the still-informal notion of effective computability to a formally defined notion). I argue this in the final chapter of IGT, using a squeezing argument.
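
For comparison, the usual shape of that diagonal argument can be put as follows (a standard textbook sketch, not a quotation from IGT):

```latex
% Sketch of the diagonal argument that effective computability outruns
% primitive recursion (standard presentation, not IGT's exact wording).
% The definitions of the one-place primitive recursive functions can be
% mechanically enumerated, giving an effective listing f_0, f_1, f_2, ...
% Now define the diagonal function
\[
  d(n) \;=\; f_n(n) + 1 .
\]
% d is effectively computable: given n, effectively locate the n-th
% definition in the listing, evaluate f_n(n) by unpacking that definition,
% and add one. But d differs from every f_n at the argument n, so d is not
% primitive recursive. Hence a proof connecting an informally characterized
% notion (effective computability) to a formally defined one (primitive
% recursion).
```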

Now, whether or not the historical claim (A) is true — and I don’t care that much — claim (B) is still interesting. So is (B) right? I won’t, and don’t need to, repeat the details of the squeezing argument here: all you need to recall is that it goes via the Kolmogorov-Uspenskii (KU) generalized model of computation.

Selmer Bringsjord and Naveen Sundar Govindarajulu (henceforth B&S) have written ‘In defense of the unprovability of the Church-Turing Thesis’, discussing that final chapter of mine (thanks here to Bastian Stern for pointing this out to me). They think I go wrong three times over. I think they just change the subject and aren’t talking about the Church-Turing Thesis in the form of (CT2), which I thought I’d pretty clearly signalled was my concern (clearly enough in the 2007 first printing, even more clearly in the 2009 revision, and hammered home in my 2011 Analysis note on squeezing arguments).

Thus their first complaint is that I (initially) accept the constraints of the KU model of computation that, for a given algorithm, (i) there should be a fixed finite bound on the size of the patch of dataspace that a single step in the execution of the algorithm operates on, and (ii) there should be a fixed finite bound on how big a jump between patches of dataspace can be made between successive steps. These bounds can be enormous. The thought is just that if we are to cleave at all to the initial intuitive specification of algorithmic computation as moving by small (idiot-proof) steps, some bound is needed, as Rogers requires. Drop the bound, and the steps can no longer be deemed small, and we are no longer considering algorithmic computation in the sense involved in (CT2).
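
To fix ideas, here is a deliberately schematic toy in code (my illustration only, not the KU model itself, whose dataspace is a graph rather than a tape): a single step reads and writes only a window of fixed radius around the current focus, and may shift the focus by at most a fixed amount.

```python
# A toy illustration of constraints (i) and (ii) -- not the KU model itself.
# The dataspace here is a simple tape of symbols; a single step may read and
# write only a window of fixed radius R around the current focus, and may
# shift the focus by at most J cells. Both bounds are fixed by the algorithm
# once and for all (they may be enormous, but they are fixed).

R = 2   # (i) fixed bound on the patch of dataspace a single step operates on
J = 3   # (ii) fixed bound on how far the focus may jump between steps

def step(tape, focus, rule):
    """Apply one 'small' step: the rule sees only the local window and returns
    replacement symbols for that window plus a bounded shift of the focus."""
    window = tuple(tape.get(focus + i, ' ') for i in range(-R, R + 1))
    new_window, shift = rule(window)
    assert len(new_window) == 2 * R + 1 and abs(shift) <= J  # enforce the bounds
    for i, symbol in zip(range(-R, R + 1), new_window):
        tape[focus + i] = symbol
    return focus + shift

# A made-up example rule: write '1' on the focused cell, move one cell right.
def example_rule(window):
    cells = list(window)
    cells[R] = '1'
    return cells, 1

tape, focus = {0: '0', 1: '0', 2: '0'}, 0
for _ in range(3):
    focus = step(tape, focus, example_rule)
print(sorted(tape.items()))  # cells 0, 1, 2 now hold '1'
```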

B&S write “Now, we are not saying that we know, let alone that you, reader, know, that jumps [bigger than a given bound occur] during human problem-solving “runs” … . We are saying that for all anyone knows, such jumps happen, especially in light of the kind of “Eureka!” phenomena that routinely occur during successful human problem-solving.” Which may very well be true. But if they occur, such outsized “Eureka” jumps aren’t the small steps of executing an algorithm in Rogers’s sense. There may well be human cognition which isn’t algorithmic in that sense. But that is entirely irrelevant to the question whether numerical computation which is algorithmic is always computation of a recursive function.

B&S’s second complaint is that I (along with the KU model of computation) don’t allow computation “over infinitely long expressions”. But the model of course allows everything finite agents can do by way of operating on finite descriptions of infinite expressions (and their supposed example involves no more). So is the complaint that the KU model doesn’t allow an infinite amount of data to be written and operated on? But that’s no complaint. A genuinely algorithmic procedure (in the sense explicated by Rogers) that delivers an output in a finite number of small steps can only operate as it goes along on a finite total amount of data, and untouched data can be expunged without loss. At any stage in the proceedings, then, the dataspace can be treated as finite (though expandable).

Which also answers B&S’s third query: ‘why not infinite but rule-governed workspaces’. Sure, we can play with such ideas: but they certainly don’t regiment the idea of algorithmic computation involved in (CT2).

“Limiting a general problem solver to a fixed cognitive capacity … seems unnatural,” say B&S. Maybe so. But that’s quite another debate, and nothing at all to do with what I was talking about in arguing for (CT2). One more time: my squeezing argument concerned not what can be done by general problem solvers, or by human or other mechanisms: it concerned what can be done, specifically, in a finite number of steps by following a finite deterministic step-by-step algorithmic computation where each step is available to a computing agent whose capacity is subject to a fixed finite bound. B&S’s reflections about Eureka moments or infinitary logics are irrelevant to that question.

8 thoughts on “Squeezing Church’s Thesis: Bringsjord and Sundar G. miss the point”

  1. BTW, I think a key point for understanding B&SG is in a footnote on page 2 of their paper:

    If computationalism is false … then it seems reasonable to hold that it would seem to human persons that some of their hypercomputational problem-solving moves are automatic and instinctive, and hence in the space of the kind of “algorithmic” problem-solving that Smith is focused upon in his case for CTT.

    These moves would count as “small” steps, even if they meant the KU model had to be expanded, in order to accommodate them, in ways that broke the squeezing argument.

  2. I was going to say I found the squeezing argument convincing, but then I realised that I didn’t. What I hadn’t noticed at first was that the “state of play” function in the encoding argument that KU-computability entails μ-recursiveness ignores the instructions in the KU program. So how does the c(n, j) function know the sequence of states for inputs it hasn’t seen before? Unless it were “magically” given the values for every possible case in advance, it would have to run the KU machine to see what it actually did when given those inputs. Neither seems satisfactory.

    So I looked back at the similar argument for Turing machines. I am, BTW, happy to accept that Turing-machine computability entails μ-recursiveness, because my “intuitive” stand-in for μ-recursiveness is programming in a simple functional language, and I know that a program could take a description of a Turing machine’s instructions and either interpret the instructions to decide what to do, for any input, or else “compile” them into a function that would perform the computation. The coding argument in the book doesn’t look like it quite works that way, but at least each “state of play” includes “the label for the i-quadruple we will need to execute next”, and the argument says “just reflect that in running through the first j steps in a Turing computation, any searches for the next instruction are bounded by the length of the Turing program”.

    Now, it seems to me that either that part about the length of the program is irrelevant (nothing like it is mentioned in the argument about KU machines), or the computation of c(n, j) looks at the instructions in the program. Either way, I think it could be explained a bit more clearly; and if it does use the instructions to help determine the next state-of-play, I’d find the argument convincing in a way that I don’t find the one about KU machines convincing.
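
    Concretely, the shape I’d hope for is something like this toy sketch (my own illustrative code, not the book’s construction; the little two-instruction machine is made up): the state of play after j steps is obtained by j-fold iteration of a one-step function that does consult the fixed, finite instruction table.

    ```python
    # Toy sketch (not the book's construction): the "state of play" after j
    # steps is obtained by iterating a one-step function that looks up the
    # machine's next move in its instruction table. The table is a fixed,
    # finite parameter of the whole definition, so nothing needs to be given
    # "magically" in advance.

    # Instructions: (state, scanned symbol) -> (symbol to write, L/R, new state).
    def step(instructions, config):
        """One small step: consult the instruction for the current state and
        scanned symbol, then update the tape, head position and state."""
        tape, head, state = config
        symbol = tape.get(head, '0')
        if (state, symbol) not in instructions:   # no instruction applies: halt
            return config
        write, move, new_state = instructions[(state, symbol)]
        new_tape = dict(tape)
        new_tape[head] = write
        return new_tape, head + (1 if move == 'R' else -1), new_state

    def state_of_play(instructions, initial_config, j):
        """c(n, j) in spirit: the configuration reached after j steps, computed
        by j-fold iteration of `step`, with the instruction table consulted at
        every step."""
        config = initial_config
        for _ in range(j):
            config = step(instructions, config)
        return config

    # A made-up two-instruction machine that writes two '1's and halts.
    prog = {('q0', '0'): ('1', 'R', 'q1'),
            ('q1', '0'): ('1', 'R', 'halt')}
    print(state_of_play(prog, ({}, 0, 'q0'), 5))
    ```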

  3. I do not know if this is beside the philosophical point, but, mathematically, many (all?) formal versions of Church’s Thesis are plainly false in the presence of classical logic. There are other worlds, like intuitionistic Analysis, that falsify it too. The existence of these counter-models suggests that CT is at least independent (not provable).

    1. I wonder if this, too, involves changing the subject? Given that Church’s Thesis (as canonically understood by the Founding Fathers) is a claim about the co-extensiveness of an informally (or at least less formally) characterised concept of effective computability and a formally (or at least more formally) characterised concept of recursiveness, in what sense exactly are there “formal versions” of this claim?

      1. Of course, I know that the original Church’s Thesis is informal, but I wonder then why you claim to prove it. Do you have a method of proof? If so, and assuming that you can describe it in sufficient detail, are you not doing mathematics? And in that way just building one model where CT holds, while there are others where it does not?

        1. Consider the result that not all effectively computable functions are primitive recursive. That is usually labelled a theorem, and the standard diagonal argument is usually called a proof, even though it involves the informal concept of an effectively computable function. All I claim is that the squeezing argument for Church’s Thesis is a proof in the same sense of the word. [For more, read the book!]

  4. It seems to me to be very common for authors to confuse CT1 with CT2, but I think the confusion here is more subtle. The B&S argument seems to be interested in CT2a, what a human can compute *in principle*, which may be much more than CT2b, what a human can compute while following a fixed algorithm. For example, the final sentence of B&S reads “When a problem-solver tackles some problem, it seems to us that he or she, over any session, is capable of an arbitrarily large number of moves.” But a human engaged in algorithmic computation is not “tackling” a problem, and is not engaged in any kind of original thought or problem solving. She is simply following a particular finite list of instructions that have been handed down to her about how to perform the algorithm. Thus CT2b, in the sense of human computation, does not refer to all possible ways that a human might solve a problem, but only to ways that follow fixed algorithms. For example, one aspect of algorithmic computation is that there is no need for the agent to learn or become smarter during the execution of the algorithm, except in ways that are laid out in the algorithm from the beginning. The activity described in section 7.1 of B&S does not seem to be algorithmic, or even to produce a function.

  5. Indeed. And (expanding on your “best” conceptually point), separating out CT2 from theses about human capabilities is very useful; it is then easier (though not easy) to ask about the relation of human capabilities to effective computations, rather than the harder-to-make-out question of whether, for example, humans are Turing Machines.
