And at last, we get to a proof of ‘The First Incompleteness Theorem, semantic version’, using pretty much Gödel’s materials. And given our background work over previous chapters, it is very easy. Grasp the trick in constructing the described Gödel sentence, and everything then falls speedily and simply into place. Enjoy!
[Link now removed]
6 thoughts on “Gödel Without Tears, slowly, 11”
Re ” Could a p.r. ‘decode(n)’ be defined by generating all expressions (as on GWT3 p 21, so that it’s 1-symbol strings, then 2-symbol, and so on), calling ‘encode’ on each, and seeing if the result equals n?”
With our coding, as with any naturally set-up numerical coding for wffs, not all numbers are codes for wffs. So the “generate codes for wffs until we hit n” procedure may not terminate.
I can think of two possible answers to that.
One is that we’re decoding numbers that are encodings for wffs, rather than taking arbitrary numbers and seeing if they correspond to wffs. At least at the start of Ch 11 the description is “algorithms which take us from expressions to code numbers and back again”, which does sound like it’s only codes that have to be decoded.
The other is that the encoding might produce larger numbers for larger wffs in such a way that we could see that n wasn’t a code once ‘encode’ was producing sufficiently large numbers. It’s at least not immediately clear that that would have to involve a bound computed in advance.
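For illustration, here is a toy sketch (my own invention, not the book’s Gödel numbering): read a string as a numeral with digits 1 to k in base k+1, so every longer string gets a strictly larger code. Then ‘decode’ can give up as soon as the smallest code at the current length already overshoots n, without a bound computed in advance:

```python
from itertools import product

# Hypothetical toy alphabet, standing in for the formal language's symbols.
ALPHABET = "()~&v>Ex01'"
K = len(ALPHABET)
BASE = K + 1  # digits 1..K in base K+1: longer strings get strictly larger codes

def encode(s):
    """Code a string as a base-BASE numeral with digits 1..K."""
    n = 0
    for ch in s:
        n = n * BASE + ALPHABET.index(ch) + 1
    return n

def decode(n):
    """Search strings by length, then alphabetically; stop once codes overshoot n."""
    length = 1
    while True:
        # the smallest code any string of this length can have:
        if encode(ALPHABET[0] * length) > n:
            return None  # n codes no string, and we can tell without a precomputed bound
        for tup in product(ALPHABET, repeat=length):
            s = "".join(tup)
            if encode(s) == n:
                return s
        length += 1
```

With this coding some numbers (e.g. 0, or any number with a zero digit in base K+1) code no string, yet the search still terminates on them, because the per-length minimum codes grow past n.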
In any case, I’m actually more interested in the larger issue of how p.r. is defined and how the reader is meant to think when informal proofs talk about what can be done without unbounded search or using only bounded ‘for’ loops. There’s a difference between there being a bound and the bound being computed in advance, as in my example about lists above which has a bound without computing it in advance.
Besides, since Wff(n) is p.r., couldn’t it be used to deal with the numbers that aren’t codes?
I originally had some comments on the “first decode” arguments and “using just ‘for’ loops”, but I decided to delete them and instead give a better (I hope) explanation, later, of what I think the problems with such things are. (See also my reply on Slowly, 10 — the reply with the Haskell example.)
p 81, i — Do we need an algorithm that takes code numbers back to expressions? It’s used in the informal proofs that say “first decode n”, but is it needed for anything else? Some bits of decoding are sometimes done along the way, for instance by using exf, but there doesn’t seem to be any full decoding back to the entire expression that was encoded.
p 82, Theorem 32 — if n isn’t the g.n. of a suitable wff, does it matter that it’s n in particular that diag(n) returns? (I’m thinking the answer is ‘no’.)
p 83, Defn 45 — I find the English description a bit hard to follow. ‘Prf (m, diag(n))’ is easier to understand. (I’m not saying there shouldn’t be an English description, just that that one’s a bit hard to follow.)
I also prefer the official proof to the informal one.
p 83, Defn 45 and the official proof of Theorem 34, “defined to hold just when”, “holds when” —
What do those expressions mean? They each sound like a one-way ‘if’ rather than a both-ways ‘iff’, and the implications even seem to be in opposite directions (like ‘if’ and ‘only if’ are in opposite directions).
Yet when the relationship is expressed in terms of characteristic functions, it seems to be an equivalence (an ‘iff’).
[I looked up ‘just if’, and some people said it meant ‘if and only if’. It doesn’t sound like it means that, though, so I don’t think it can be assumed that readers will know ‘just when’ means ‘when and only when’. In any case, although the definition says ‘just when’, the proof doesn’t.]
p 83, construction of G — Can be a bit hard to follow. Are there two diagonalisations involved, or just one?
p 84, proof of Theorem 36 — is the truth of G related to the ‘remarkable corollary’ from Ch 7 (if a consistent theory which includes Q proves a Π1 sentence, the sentence must be true), and related in a way that would make it worth mentioning that result here?
A lot more to think about!
On p. 81.i — The idea of coding surely suggests two-way algorithms; but yes, we only appeal to the decoding (and only need that to be a bounded procedure) in various proofs. The “it is crucial” remark governing algorithms in both directions seems to be overkill.
On p. 82 — Mmmm, can’t remember whether there was any nice reason (from use elsewhere) for getting diag to default to n rather than e.g. 0. But here maybe defaulting to zero would be better, as it wouldn’t raise any questions (such as yours!).
On Defn 45 — I’ll add a further explanatory sentence.
On “holds when” (or “just when”, “just if”) instead of “iff” — a bad habit of mine. I’ll need to check other occurrences of this when I do a final version.
On construction of G — Well, yes, there is a use of diagonalization in defining the relation Gdl; and then there is a diagonalization in constructing G from the formal predicate Gdl which captures Gdl. So in that sense the idea of diagonalization features twice.
p. 84 — No: there can’t be that connection because G is a Π1 sentence that ISN’T proved by the relevant theory!
“we only appeal to the decoding (and only need that to be bounded procedure) in various proofs” —
I think it matters whether it’s needed in official proofs, or only in informal ones (and either way it could be interesting to know which proofs).
In IGT, pp 138-9, the ability to decode back to the original is used as a kind of universal translator (if something works when encoding A is used, and someone wants to use B instead, B can be handled by decoding back to the original, re-encoding using A, and then doing the same as before).
However, although the translation idea has quite general applicability, it’s being used there only to show that things (such as Term) that are p.r. when applied to numbers produced by our chosen scheme would still be p.r. if a different scheme were used; and when GWT3 turns to that issue (pp 78-9), it doesn’t use the translation idea, instead arguing that we can just use the same informal proof that something is p.r. that we used for the chosen scheme. So it seems decoding back to the full original isn’t necessary there (despite being used in IGT).
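If it helps, the translation idea can be put schematically like this (a toy sketch with two hypothetical numeral codings, not the book’s actual schemes; ‘is_quantified’ is a made-up stand-in for a property like Term):

```python
# Two codings of the same strings: scheme A and scheme B use the same
# (hypothetical) symbols but in different orders, so they assign
# different code numbers to the same expression.
ALPHA_A = "()~&vE"        # toy alphabet, scheme-A ordering
ALPHA_B = ALPHA_A[::-1]   # scheme B: same symbols, reversed ordering

def make_codec(alphabet):
    """Numeral coding with digits 1..K in base K+1, plus its inverse."""
    base = len(alphabet) + 1
    def encode(s):
        n = 0
        for ch in s:
            n = n * base + alphabet.index(ch) + 1
        return n
    def decode(n):
        s = ""
        while n:
            s = alphabet[n % base - 1] + s
            n //= base
        return s
    return encode, decode

encode_A, decode_A = make_codec(ALPHA_A)
encode_B, decode_B = make_codec(ALPHA_B)

def is_quantified_A(n):
    """A property defined directly on scheme-A codes (toy stand-in)."""
    return "E" in decode_A(n)

def is_quantified_B(n):
    """The 'universal translator': decode under B, re-encode under A,
    then apply the scheme-A property as before."""
    return is_quantified_A(encode_A(decode_B(n)))
```

So anything that works on scheme-A codes transfers to scheme-B codes by composing with decode-then-re-encode, which is where full decoding earns its keep in IGT.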
Of course, the informal proofs do tend to begin with “first decode”, so that they use full decoding after all. That might not be essential, though.
BTW, suppose only a p.r. ‘encode’ was provided. Could a p.r. ‘decode(n)’ be defined by generating all expressions (as on GWT3 p 21, so that it’s 1-symbol strings, then 2-symbol, and so on), calling ‘encode’ on each, and seeing if the result equals n?
The ‘generate’ part is open-ended as described but — unlike the search for a proof on p 21 — it should always return an answer after a finite amount of computation. Would it be p.r. or, if not, why not?
That raises an issue about the way p.r. is defined in terms of loops or search with a specific, numeric bound known in advance. There are certainly cases in ordinary programming where something is written without a bound calculated in advance, but where there is a bound.
An example is looking at the elements of a list to see if any satisfy some test, when only finite lists can be created. The bound is the length of the list. However, it’s quite common to implement lists in a way that doesn’t record the length. (That’s how Lisp does it, for example.) To find the length, you have to go through the elements as before, except counting rather than testing. So finding the length also doesn’t have a bound calculated in advance. Yet it’s pretty clearly p.r. (as is the search, so long as the test is p.r.)
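A minimal sketch of that list example (Lisp-style cons cells done in Python; the names are mine):

```python
# A Lisp-style list: cons cells that don't record the overall length.
class Cons:
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

def from_iterable(xs):
    """Build a cons-list; only finite lists can be created this way."""
    result = None
    for x in reversed(list(xs)):
        result = Cons(x, result)
    return result

def any_satisfies(lst, test):
    """Walk the cells until the test succeeds or the list runs out.
    No bound is computed in advance, but the (finite) list's length
    bounds the number of iterations anyway."""
    while lst is not None:
        if test(lst.head):
            return True
        lst = lst.tail
    return False

def length(lst):
    """Finding the length is the same traversal, counting instead of testing —
    so it too lacks a bound computed in advance."""
    n = 0
    while lst is not None:
        n += 1
        lst = lst.tail
    return n
```

Both functions loop “until the list runs out” rather than “for i up to a precomputed bound”, which is exactly the distinction at issue.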
Could the ‘decode’ algorithm described above be a case like that, or is there some reason why it’s not p.r.?
Anyway, this makes me think: for “The set of p.r. functions is effectively enumerable”, on p 63, you say “in alphabetical order” without mentioning the by-length part. Perhaps add something that does, or a ref to p 21.
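The by-length part matters: plain alphabetical order doesn’t even enumerate the strings (infinitely many strings beginning with the first symbol precede any string beginning with the second), whereas by-length-then-alphabetical (shortlex) order does. A toy sketch:

```python
from itertools import count, product

def shortlex(alphabet):
    """Yield every finite non-empty string over `alphabet`:
    all 1-symbol strings alphabetically, then all 2-symbol strings, and so on."""
    for length in count(1):
        for tup in product(sorted(alphabet), repeat=length):
            yield "".join(tup)
```

Over the alphabet “ab” this yields a, b, aa, ab, ba, bb, aaa, … so every string appears at some finite position.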
p 84 and Π1 — don’t know how I missed that! (That G isn’t proved.) Perhaps still fuddled by trying to keep diagonalisations straight.