A good time to join the party, if you haven’t yet been following along with this chapter-by-chapter posting of a new version of *Gödel Without (Too Many) Tears.*

Along with revised versions of early chapters (thanks to all those who sent comments and corrections, a few here, and more by e-mail), here is Chapter 10, on ‘The Arithmetization of Syntax’. So this is where we get to today. And then, over the next four chapters, we (at last) get to prove the first incompleteness theorem, in pretty much Gödel’s way. We also introduce the Diagonalization Lemma, and prove Rosser’s Theorem and Tarski’s Theorem. A busy but fun week!

[*Link now removed*]

Rowsety Moid

To avoid reread burnout, I won’t reread earlier chapters yet. (I also didn’t reread Chs 1-4 when Chs 1-5 appeared.)

*

p 75 “What counts as being ‘sensibly axiomatized’ will become clear soon enough!” — not already? For instance, “effectively axiomatized” (p 30).

*

p 75, “when we are dealing with finite objects, we can associate them with numerical codes” —

Not just associate. “Associate” reminds me of the horrible office software that required every activity to be given a numeric code, so that to know what any code meant, you had to look it up in a table.

*

p 75, “in the late 1920s this wonderfully simple but powerful idea must have seemed revelatory” —

I wonder. It must have been clear, for example, that the endless variety of literature in English (except for the few cases that rely on peculiar layouts and arrangements, drawings, etc) could be expressed using a finite alphabet plus a finite number of digit and punctuation symbols.

It doesn’t seem all that large a step to then see that each character in a text could be seen as a digit in a suitable base, so that the whole text was effectively a very large number.
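For what it’s worth, that step can be made concrete in a few lines. Here is a toy sketch in Python (my own illustrative scheme, nothing like Gödel’s actual numbering): each ASCII code is treated as a digit in base 128, so the whole text becomes a single natural number.

```python
BASE = 128  # big enough for 7-bit ASCII; a toy choice, not Goedel's scheme

def encode(text):
    """Read the text as a base-128 numeral, first character most significant."""
    n = 0
    for ch in text:
        n = n * BASE + ord(ch)
    return n

def decode(n):
    """Recover the text by peeling off base-128 digits."""
    chars = []
    while n > 0:
        chars.append(chr(n % BASE))
        n //= BASE
    return ''.join(reversed(chars))
```

(A real coding scheme has to be more careful, e.g. about texts whose leading character has code zero, but the basic idea is just positional notation.)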

I think what’s harder to see is that / how you can use arithmetical calculations and logical formulas to manipulate the numbers in ways that correspond to things you’d want to do with the original objects.

That can still be hard to see today. You implicitly acknowledge this when you have informal proofs that begin “first decode n”.
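To illustrate what such arithmetical manipulation looks like, here is a hedged Python sketch, again using a toy base-128 coding of ASCII text (my invention for illustration only): concatenating two strings corresponds to a purely arithmetical operation on their codes, with no explicit “first decode n” step in sight.

```python
BASE = 128  # toy scheme: a text is the base-128 numeral of its ASCII codes

def encode(text):
    n = 0
    for ch in text:
        n = n * BASE + ord(ch)
    return n

def code_len(n):
    """Number of base-128 'digits' of n, i.e. the length of the coded text."""
    length = 0
    while n > 0:
        length += 1
        n //= BASE
    return length

def concat(m, n):
    """Arithmetical counterpart, on codes, of concatenating the two texts."""
    return m * BASE ** code_len(n) + n
```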

*

p 77, Defn 40: This has the same problem I mentioned for Defn 15 on p 16, when I said: it feels like something that would include a verb is missing after “Then, relative to that numbering scheme,”.

Now it’s after “Given our scheme for Gödel-numbering PA expressions and sequences of expressions”.

It doesn’t read right as English to go straight from that to “Wff(n) iff n …”.

*

p 78, Gödel “had to do all the hard work … we can perhaps short-cut all that effort and be entirely persuaded by the following:” —

I’m not entirely persuaded by it. It’s only because I know other things that I’m persuaded that Theorem 31 is correct.

There are gaps that need to be bridged by looking at some ‘hard work’. They aren’t eliminated by having “only a very modest familiarity with the ideas of computer programs and p.r. functions”.

There’s even a gap between the simple Loop language used in Ch 8 and p.r. definitions using equations. There’s a bigger gap between either of those and what people do in ordinary programming languages. And some of the gap-bridging requires tricks that are not obvious.

As I said under Slowly 8, you have to build up a toolkit of techniques before it’s clear how the gap is bridged, and I don’t think many people would quickly work it out unless they were extremely clever, or very lucky, or else had or developed experience with such things.

I think that at least ought to be acknowledged.

(In IGT, some helpful gap-bridging is done in the proof of Theorem 14.3.)

*

There’s another gap between talking so much in terms of ‘search’ and what people ordinarily do, and tricks can be needed to turn something that isn’t normally a search into a search.

That leads to another point. I think there’s an interesting / useful rationale for using the equation- and search-based characterisation of p.r.

And that is that it’s ‘half way to logic’. For example, the search operator naturally corresponds to an existential quantifier. Or consider how easily the factorial equations can be turned into a formula about encoded sequences.

Another point in its favour is that the transition from restricted primitive recursion to the full Turing-machine-equivalent case is so simple: just bounded vs unbounded search.
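That contrast is easy to show in code. Here is a minimal Python sketch (the names bounded_mu, mu, and isqrt are my own, purely for illustration): bounded search always terminates and stays within the primitive recursive world, while dropping the bound gives the full μ operator, and a bounded search is in effect a bounded existential quantifier.

```python
def bounded_mu(pred, bound):
    """Least i < bound with pred(i), else bound. Always terminates (p.r.-style)."""
    for i in range(bound):
        if pred(i):
            return i
    return bound

def mu(pred):
    """Least i with pred(i): unbounded search; may run forever if there is none."""
    i = 0
    while not pred(i):
        i += 1
    return i

# Example: integer square root by bounded search --
# the least i <= n such that (i+1)^2 > n.
def isqrt(n):
    return bounded_mu(lambda i: (i + 1) ** 2 > n, n + 1)
```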

*

p 78, “we can implement this as working on their ASCII codes or whatever.”

Think of how someone would actually write a program that determined whether a text string was syntactically a wff. It wouldn’t treat the whole string as a giant number. Nor would it be treated as a giant number at the machine code level.
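To make that concrete, here is a hedged sketch of such a program in Python, for a hypothetical toy grammar of my own (variables p, q, r; negation ‘-’; conjunction ‘(A&B)’) rather than the book’s actual language of PA. It walks the character string directly, as a working programmer would; at no point is the text treated as one giant number.

```python
def is_wff(s):
    """True iff the whole string s is a wff of the toy grammar."""
    pos, ok = _wff(s, 0)
    return ok and pos == len(s)

def _wff(s, i):
    """Try to parse one wff starting at index i; return (next index, success)."""
    if i >= len(s):
        return i, False
    if s[i] in 'pqr':                      # a propositional variable
        return i + 1, True
    if s[i] == '-':                        # negation: '-' followed by a wff
        return _wff(s, i + 1)
    if s[i] == '(':                        # conjunction: '(' wff '&' wff ')'
        j, ok = _wff(s, i + 1)
        if not (ok and j < len(s) and s[j] == '&'):
            return i, False
        k, ok = _wff(s, j + 1)
        if not (ok and k < len(s) and s[k] == ')'):
            return i, False
        return k + 1, True
    return i, False
```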

Peter Smith

Thanks for all these comments! I’ll do some editing to address all of them. But the main worry is about the “we can perhaps short-cut all that effort and be entirely persuaded by the following” passage. I agree I’m arm-waving and that perhaps a good student, in particular, would/should worry. So yes, I think I will add a footnote explicitly acknowledging that.

Rowsety Moid

Though some of my comments could give a different impression, I think your approach is a good one.

In IGT, you build the sort of toolkit I’m talking about in the equation-based traditional p.r. formulation. Once you’ve shown how to handle bounded quantifiers and bounded search, you’ve developed the toolkit to the point where it’s much easier to see how interesting things can be done — especially the interesting things you’ll be doing in logic with existential quantifiers.

The definitions of len, exf, and * also provide very useful tools. Look at how simple the definition of ‘diag’ is on IGT p 146, for example.

Also, it can be much easier these days than it’s often seemed in the past to see that the equation-based language is, in effect, already a programming language. Some existing programming languages can even handle such things quite directly. For instance, here’s successor-based addition defined in Haskell:

data Peano = Zero | Succ Peano

add Zero b = b
add (Succ a) b = Succ (add a b)

(From https://wiki.haskell.org/Peano_numbers)

It’s not necessary to build a toolkit in a more conventional loop-and-assignment language as some people have done in the past. So I think your quite minimal use of such a language is good.

I’m going to try (later, not sure quite when) to give a better explanation of what I think the problems are in some of the ways you’ve done things.