I’ll be briefer than usual, though today’s attached episode is longer than usual. After a short Interlude (where we’ve been, where we are going), there’s a quite substantial chapter ‘Primitive Recursive Functions’. And there’s a nice diagonal argument to enjoy!
[Link now removed]
Two typos:
p. 52, point 6 in the interlude. ‘First Theory’ should be ‘First Theorem’.
p. 55 in the recursive definition of the exponential, x^y. For consistency, I think the base case should be x^0 rather than y^0.
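(That is, presumably the intended equations are

x^0 = 1
x^(Sy) = x^y · x

with the base case fixing x^0, to match the x in the recursion step.)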
Indeed, thanks!
Page 62:
I don’t think it’s quite as quick or easy as that makes it sound.
For example, consider the argument that being prime is p.r. on p 65. It says “and if we find a divisor of n other than 1, return the value 1, and otherwise return the value 0”, which can sound as if we could break out of the loop and return 1. But there’s no ‘return’ in the Loop language you’ve described, nor any way to exit a loop before its full number of iterations. The tricks that will eventually be used in such cases are not obvious ones.
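To illustrate the sort of trick I mean, here’s a rough sketch in Python (not in the Loop language itself, and it leans on an ‘if’ that the Loop language would in turn have to simulate). Instead of returning early, you let the loop run its full course and record the answer in a flag:

def has_divisor(n):
    found = 0
    for d in range(2, n):    # d runs over 2, ..., n-1, with no early exit
        if n % d == 0:
            found = 1        # note the fact, but keep looping to the end
    return found             # 1 if n has a divisor other than 1 and n, else 0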
I don’t think it would be clear to many people how to do interesting things in so limited a language unless
* they’re extremely clever,
* they already have experience with such things,
* they develop experience by experimenting (perhaps with help from seeing how other people have done things), or
* they’re lucky enough to hit on the right ideas right away.
(This isn’t an issue only with the Loop language; there’s much the same sort of problem with Turing machines and lambda calculus and other very minimal models of computation. You have to build up a toolkit of techniques before it’s clear how to do the things that can be done in ordinary programming languages.)
For the Loop language, examples like the factorial are good because it’s easy to see how to do them with a loop and to see that the loop computes the right thing. But that nice fit with a simple loop also makes them poor at suggesting how things that don’t fit so neatly can be done.
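For instance, here’s the factorial as a bounded loop, sketched in Python rather than the Loop language; the loop transparently mirrors the defining equations:

def fact(n):
    func = 1                   # base case: fact(0) = 1
    for y in range(n):         # exactly n iterations, fixed in advance
        func = (y + 1) * func  # step: fact(Sy) = Sy * fact(y)
    return func

It’s easy to see both that this terminates and that it computes the right thing. Things that don’t reduce so directly to a single accumulating loop are another matter.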
– * –
I don’t think it’s ever made clear whether it’s possible to have a loop that would go around zero times; and there are a couple of things that suggest you might be thinking that loops have to go around at least once.
One is the notation ‘For y = 0 to n-1’. For that to go around zero times, n would have to be zero. But are we meant to think n-1 is defined in that case?
The other is that in the main text (as opposed to footnote 4), you don’t say “‘while’ loop”: you say “‘do while’ loop”; and a ‘do while’ loop is normally one that has the condition at the end and goes around at least once.
Whatever the rule is, it should be stated explicitly. However, I think zero iterations should be allowed. Some reasons:
* Zero-iteration loops are possible in pretty much every programming language.
* Some things are harder to do if zero iterations aren’t possible. For instance, it’s easy to construct an equivalent of an ‘if’ if the loop body needn’t be executed at all (see the sketch just after this list).
* The Loop language in the Tourlakis paper referred to in IGT, p 105 n 5, allows zero iterations, and that paper is the closest thing there is to a precise specification of the Loop language you use in IGT and GWT. (It has a different syntax that doesn’t raise issues about n-1.)
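Here’s a minimal Python sketch of the ‘if’ construction from the second point, assuming zero-iteration loops are allowed and representing the condition as a 0-or-1 value:

def if_then_else(c, x, y):
    result = y
    for _ in range(c):    # c is 0 or 1; with c = 0 the body never runs
        result = x
    return result         # x if c == 1, otherwise y

If the loop had to go around at least once, this simple trick wouldn’t be available.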
– * –
Speaking of footnote 4 (p 62), while it’s true that some languages don’t make a ‘for’ vs ‘while’ distinction in which ‘for’ loops have a number of iterations fixed in advance, and that such languages include ones in the C syntactic tradition, it doesn’t neatly divide on new vs Old School lines. Pascal is only a bit older than C, for a start, and even a decade earlier Algol 60’s ‘for’ could be used for unbounded search with the following syntax:
for i := 〈expression〉 while 〈condition〉
Many newer languages don’t follow C in their loop syntax.
C also has while-loops written using ‘while’, btw. C’s ‘for’ isn’t quite a case of ‘while’; it’s more a general framework for iterations that aren’t necessarily even numeric.
– * –
P 64, Theorem 25
Is there an intuitive explanation for why the effective listing doesn’t include algorithms for all computable functions?
It certainly seems we can effectively list all recipes (programs), in much the same way as described for p.r. functions on p 63: pick a suitable formal language; generate all possible strings in order of length, alphabetically within each length; select the strings that are valid programs. Even if the test for valid programs lets some invalid ones through, the list would still contain all the valid ones.
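Here’s a sketch of that recipe-lister in Python, where the alphabet and the validity test is_valid are stand-ins for whatever the chosen formal language actually requires:

from itertools import count, product

ALPHABET = "abcdefghijklmnopqrstuvwxyz()=+: "     # stand-in alphabet

def all_strings():
    for n in count(1):                            # lengths 1, 2, 3, ...
        for chars in product(ALPHABET, repeat=n): # alphabetical within a length
            yield "".join(chars)

def all_programs(is_valid):
    # even a leaky is_valid leaves every genuine program in the list
    return (s for s in all_strings() if is_valid(s))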
Part of the explanation could be that there are too many computable functions. There are at least too many *functions* from the natural numbers to the natural numbers, because there are uncountably many of those, while the recipes are countable. But a further step is needed to show there are also too many *computable* functions.
One problem is that the diagonal function looks like it has a recipe. It’s a recipe-lister combined with an interpreter for recipes that adds one to the final result. So why isn’t it in the list? What would stop the recipe-lister from including it?
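In sketch form, with nth_program and run as hypothetical helpers (an enumerator of the listing and an interpreter for recipes), the diagonal recipe would just be:

def diag(n, nth_program, run):
    # nth_program(n): the nth recipe in the effective listing
    # run(p, x): interpret recipe p on input x
    p = nth_program(n)
    return run(p, n) + 1

Each step looks effective, which is what makes the question pressing.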
– * –
Also, the argument in p 64 leading up to Theorem 25 looks like an argument that we can’t effectively list the recipes (because it begins “Suppose we can effectively list the recipes” and then finds a contradiction). But it at least seems we can effectively list the recipes in the way I suggested earlier.
Theorem 25 says instead “No effective listing of algorithms”. However, its wording is ambiguous. Does it mean there will be computable functions that don’t correspond to any algorithm in the list (too many functions) or that there are algorithms that aren’t in the list (too many algorithms as well as too many functions)? Are we meant to think every computable function has an algorithm, or not?
Many thanks for this!
First point: I need to (and will!) make it clearer from the outset that the key contrast we need is between bounded-in-advance loops and open-ended ‘do while’/‘do until’ procedures. I think, for motivational purposes, it is good enough to lean on that distinction (I frankly allow we are being a bit rough and ready!)
On zero looping: fair point but I don’t want to break the currently close tracking between the informal recursion equations and the schematic program (even if that means the schematic program isn’t quite like a “real” program).
On Theorem 25: I meant, of course, effectively list all and ONLY the total computable functions. Bunging in a few “only”s would tidy things up. But in the interests of keeping this section focused, I’ve decided on second thoughts to delete this concluding aside!
I don’t think that allowing 0 iterations (so that the body of the loop would not be executed at all) would break the close tracking between the recursion equations and the schematic program. I’ll try to explain.
We may have been starting from less of a shared understanding than I thought, and I’ve noticed something in the new Ch 8, p 61, that I didn’t notice before. I’ve also looked again at IGT p 104 and GWT2f p 48. So I’ll approach it differently this time.
In effect, you’re already having the loop go around 0 times when n=0, just in a different way. Because what you’re doing now — and this is different from IGT and GWT2f — is to have the schematic program correspond only to the 2nd equation in the 2-equation definitions, and to have it apply only when n is greater than 0. (That’s what I hadn’t noticed.)
See p 60 for factorial and 61 for the general version. Both say the schematic program takes n greater than 0 as input.
So in the new version, the schematic program and its loop aren’t used when n=0.
In IGT and GWT2f, however, the schematic program corresponds to the whole 2-equation p.r. definition.
I don’t know how to make indentation appear in comments here, so I’ll write loops all on one line, as in this copy of the schematic program:
func := g
For y = 1 to n-1: func := h(y, func)
Now, as in IGT and GWT2f, have that correspond to this entire definition:
f(0) = g
f(Sy) = h(y, f(y))
When you define a function using equations, it’s defined for n=0. The first equation is ‘f(0)=g’. Imagine the program is also used with n=0. When n = 0, the result should be g, which is exactly what happens if the loop then goes around zero times.
So it all works, and there’s a nicer, tidier correspondence between the two definition forms (equations and schematic programs), which matters when you’re talking about translating both ways, as you are for Theorems 22 and 23.
What happens when n = 1 and the computer is faced with the instruction “from y = 1 to 0, do some stuff”?
Sorry, I meant ‘0 to n-1’. I don’t know how ‘1’ got in there instead. It was meant to be the same as what’s in IGT and GWT: ‘For y = 0 to n-1’.
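To make the corrected version concrete, here’s a minimal Python sketch, with g a number and h a two-place function as stand-ins:

def f(n, g, h):
    func = g               # 'func := g'
    for y in range(n):     # 'For y = 0 to n-1': zero iterations when n = 0
        func = h(y, func)  # 'func := h(y, func)'
    return func            # so f(0) = g falls out for free

For instance, f(n, 1, lambda y, acc: (y + 1) * acc) computes the factorial, including 0! = 1.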
However, I’ll take this as an opportunity to answer about what happens in practice when a ‘For y = 0 to n-1’ loop starts at 0 and n = 0, so that a computer is faced with ‘from 0 to -1, do some stuff’.
The syntax for such loops is different in different languages, but generally it would not do the stuff (ie, it would iterate 0 times).
Here’s a motivating example. Suppose I’m using a language in which the indexes of characters in a string start at 0 and I want a loop that looks at each character, perhaps calling char(s, i) to get the i-th char of s. This looks like it should work:
for i = 0 to length(s) - 1: … something with char(s,i) …
However, the string might be empty, so that its length is 0 and the loop would be from 0 to -1. If the language then regarded the loop as undefined, or an error, or always went around at least once, I’d have to wrap any such loop in an ‘if’ statement that checked whether the string was empty and handle empty strings as a special case.
There might be exceptions, but languages have generally decided to treat such loops as legitimate and as executing the stuff in the loop zero times (ie, not at all).
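Python, for instance, behaves that way:

def show_chars(s):
    for i in range(len(s)):    # i.e. 'for i = 0 to length(s) - 1'
        print(s[i])            # s[i] is the i-th char of s

show_chars("")                 # prints nothing: the loop runs zero times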
And generally the ‘should I execute the loop body?’ test is done before each time through the loop, so that when it’s ‘no’ initially, the stuff in the loop isn’t executed.
‘While’ and ‘until’ loops are generally like that too, with the test at the start.
However, some languages also have a construct that always does the stuff at least once and has the test at the end. It’s typically written in a way that also places the test textually at the end, something like this:
do
… stuff …
until condition
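Python has no built-in do … until, but the usual emulation shows the shape, with stuff and condition as stand-in callables:

def do_until(stuff, condition):
    while True:
        stuff()            # the body always runs at least once ...
        if condition():    # ... and the test comes at the end
            break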
BTW, in IGT p 286, the ‘do-until’ loop is pretty clearly meant to be able to loop zero times, or else the result could never be 0. And it has the ‘should I execute the loop body?’ test at the beginning.
I think it’s better for ‘for’ loops to fit into the same pattern: test at the start and able to loop zero times.
It’s quite rare in my experience to want an always-at-least-once or test-at-the-end loop.
Many thanks for this! (I think this must have been something I had a grip on when writing IGT, and have wobbled about since — good to know I wobbled unnecessarily, and I’ll revise accordingly!)