Gödel news!

As I said in my previous post, I’ve been working on a revised version of Gödel Without Tears. Predictably, I’ve not been able to resist tinkering more than I had planned to do. So it will take a few weeks more to finish the job. But I’m not complaining: it’s an enjoyable enough project!

GWT is a sort of cut-down introduction to some of the themes of my  Gödel book. Unexpectedly, I have now acquired the rights to that book. I think the friendly thing to do is to immediately make the second edition of  An Introduction to Gödel’s Theorems available as a free PDF downloadable from Logic Matters, taking the opportunity to correct three dozen minor misprints. This way, students and other readers anywhere can get free access. So here it is! Spread the word …

(I may in due course also make this corrected version of the book available as an inexpensive print-on-demand book via Amazon, for those who want a physical copy. But I doubt that there would be a big demand for that, so one step at a time.)

6 thoughts on “Gödel news!”

  1. Thanks so much. I ordered your book, which according to Amazon in Holland would arrive in a week. My bookshop in Amsterdam said it was not sure when, or whether, it would arrive, because it was print on demand (and quite expensive). So thanks!

  2. If it does not take too much of your time, it would be nice to have the text as a print-on-demand book on Amazon. A printed book is much nicer and tidier than a 400-ish-page bunch of papers!

  3. I think that IGT being available free makes a case for GWT to differ more from IGT than it did before, to cut back in some areas or to explain some things more than before, or differently.

    I’ve been looking at old GWT (GWT2f), and I’m not sure it’s W fewer Ts than IGT when it comes to the things they both cover. GWT even seems more demanding in some ways.

    For example, when introducing |=, GWT (p. 4) comes right out and talks of models, relying on a mere “(re)interpreting the non-logical vocabulary” to explain “model”; IGT doesn’t get to |= until p. 33 and approaches it via the useful terminology “semantically complete”, with propositional-logic truth tables as an explanatory example.

    GWT doesn’t seem to use “semantically complete” — and its useful contrast with negation-complete — at all. IGT does use that contrast and reinforces it by saying semantically complete logic and negation-complete theory.

    So IGT seems friendlier. And it does something I think is good and hard to do successfully: clarify to avoid a potential confusion without causing the confusion or making it seem tricky and difficult to untangle.

    I think GWT’s §1.8, “Gödel’s completeness theorem vs his incompleteness theorem”, is less successful there.

    In the bullet points at the start of Episode 1, it says “A reality check: why there’s no tension between Gödel’s completeness theorem and his incompleteness result”. When we get to §1.8, however, there’s no “why” explanation. It just describes and states the two theorems and then says “There is no evident tension at all between these two quite different results” and “think about it!”

    I’ve thought about it, and I think there is some tension and that there’s something more interesting and useful that might be said.

    Why some tension? A theory T “cast in a first-order language with a standard first-order deductive apparatus” can also be a “sound theory that can express enough arithmetic”, so that both theorems apply to T. And if φ is G, the Gödel sentence for T, completeness can look as if it might be saying G can be proved, while incompleteness says it can’t.

    Also, the Gödel sentence G for T is true. And, roughly, seen from a distance, completeness can seem to say that things that are true can be proved (while soundness says that things that can be proved are true). That can even seem to be what you read if you look “completeness” up on the internet. For example, I just searched for: completeness logic true. One of the first results said:

    Intuitively, a system is called complete in this particular sense, if it can derive every formula that is true.[Completeness (logic) – Wikipedia]

    So, for various reasons, it can make sense to ask: why doesn’t completeness say G can be proved?

    A closer look shows why: what allows G not to be provable must be that G isn’t true in every model of T.

    But when you don’t know that’s the answer, it can be hard to spot it when taking a closer look. It’s better to tell the reader explicitly, and I think it’s useful too. It’s at least something I’ve found useful as part of remembering how things fit together.
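    To spell out that resolution in a couple of lines (a sketch in standard notation, not a quotation from either book):

```latex
\begin{align*}
&\text{Soundness + completeness:}\quad T \vdash \varphi \;\Longleftrightarrow\; T \models \varphi
  \quad (\varphi \text{ true in \emph{every} model of } T).\\
&\text{Incompleteness:}\quad T \nvdash G.\\
&\text{Hence } T \not\models G\text{: although } G \text{ is true in the standard model,}\\
&\text{there is some (nonstandard) model of } T \text{ in which } G \text{ is false.}
\end{align*}
```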

    I don’t think this is covered in IGT until the discussion of PA2 and categoricity in §29.6–7, and even then I don’t think its significance for the first-order case is brought out as explicitly as it could be. So this may be an opportunity for GWT.

  4. I’ve been reading the sections of IGT and GWT about primitive recursive functions. I’ve also written some code I can use to play with some of the ideas. (It’s now in Python, originally in a different language.)

    I’ve found it difficult to fit my comments into a single, nice, overarching structure, so I’ll mention an assortment of things here and perhaps (probably) say something more later.

    Here are some things I think could be given better, more intuitive explanations.

    * Why use the 2-equation language?

    Is there a name for it? I mean the language in which definitions for f(n) by primitive recursion use two equations, one for 0 and one for Sn.

    Why make so much use of that language? It’s unusual and quite hard to use, from a programming point of view.

    I think the answer is that it’s half-way to logic, once it’s been shown that enough notations can be interpreted as specifying p.r. functions. Look at the definition of Prime(n) on IGT p 111, for example. Nice!
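    To make the format concrete, here is a minimal Python sketch of the two-equation scheme (my own illustrative names, nothing from either book), unwinding the recursion as a bounded loop:

```python
def primrec(g, h):
    """Return f defined by the two equations:
       f(x, 0)  = g(x)
       f(x, Sn) = h(x, n, f(x, n))"""
    def f(x, y):
        u = g(x)
        for n in range(y):   # bounded loop: runs exactly y times
            u = h(x, n, u)
        return u
    return f

# add(x, 0) = x;  add(x, Sn) = S(add(x, n))
add = primrec(lambda x: x, lambda x, n, u: u + 1)

# fac is usually given with one recursion variable: fac(0) = 1, fac(Sn) = Sn * fac(n).
# A dummy first argument makes it fit the same scheme:
fac = primrec(lambda _: 1, lambda _, n, u: u * (n + 1))
```

    The loop makes vivid why primitive recursion only ever involves bounded search.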

    * Why care so much about p.r.?

    Why, for instance, prove that a theory can capture all p.r. functions and relations? There must be zillions of p.r. functions that don’t matter.

    I’m thinking the answer will be something like: it’s a neatly characterised restricted form of computability (bounded vs unbounded search) that lets us show that surprisingly simple theories of arithmetic are subject to incompleteness. And it’s easier and nicer to prove all p.r. functions are captured than to have special proofs for each case that matters.

    * Why are the projection functions (GWT p 45) necessary?

    Why can’t the h for addition just be

    h(x,y,u) = Su

    We’re told the h for factorial is

    h(y,u) = u * Sy

    If that’s legit, why isn’t this:

    h(y,u) = 1 * Su

    and if that’s ok, why not

    h(y,u) = Su

    and similarly

    h(x,y,u) = Su
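    One way to see the role of the projections: in the official scheme, h must itself be built from the initial functions by composition, and composition feeds all the arguments to each component function. So “h(x, y, u) = Su” is shorthand for composing S with the projection that selects u. A toy Python sketch (illustrative names, not the book’s notation):

```python
S = lambda n: n + 1

def proj(k, i):
    """The projection I^k_i: takes k arguments, returns the i-th (1-indexed)."""
    return lambda *args: args[i - 1]

# The step function for addition, officially: compose S with I^3_3,
# which throws away x and y and hands u on to S.
I_3_3 = proj(3, 3)
h_add = lambda x, y, u: S(I_3_3(x, y, u))
```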

    * Why can’t the diagonal function (in the proof that there are effectively computable numerical functions which aren’t primitive recursive) be p.r.?

    I’d expect an argument that it requires unbounded search, but the argument on p 51 doesn’t seem to do that.

    Instead, the argument seems to be that it would have to evaluate a different function for each value of n, and for that to be p.r. there would have to be a “nice pattern in the successive computations of all the different functions”.

    However, it can also be seen as evaluating a single function: the function that is given a recipe, r, and a number, n, and sees what r would do when applied to n. Given all the clever encoding and other tricks used to do interesting things as p.r. functions, I don’t think it’s obvious that there can’t be a p.r. interpreter for p.r. recipes.

    Footnote 6 is helpful. However, the idea seems to be that to evaluate a function that has loops of some depth, you have to use at least that depth of nesting. That’s not how it normally is in an interpreter, though. An interpreter for a language that can have loops nested 100s deep wouldn’t have that many loops in its own code.
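    The diagonal construction itself is easy to state; the hard part is ruling out a p.r. interpreter. A toy Python sketch of the diagonal over an enumeration (here just three hand-picked stand-ins; a real enumeration would come from coding up definitions):

```python
# Stand-ins for an effective enumeration f_0, f_1, f_2, ... of
# one-place p.r. functions:
funcs = [lambda n: 0,        # f_0: the zero function
         lambda n: n + 1,    # f_1: successor
         lambda n: n * n]    # f_2: squaring

def diag(n):
    """Differ from f_n at the argument n."""
    return funcs[n](n) + 1

# If diag were some f_k in the enumeration, then f_k(k) = f_k(k) + 1,
# which is impossible -- so diag is not in the enumeration.
```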

    1. *On the 2-equation format*

      We are defining the zero case, f(0). We are then saying how to get f(Sn) from f(n). Two bits of data: so two separate equations surely come naturally! The traditional format seems more than merely conventional.

      *Why care so much about p.r.?*

      In the current context, for the answer you give. But also there’s intrinsic interest in primitive recursive arithmetic (right or wrong, it’s not for nothing that there’s a long tradition of treating PRA as the limit of finitistic reasoning, for example).

      *Why are the projection functions necessary?*

      I guess they are not necessary. As you say, instead of the I(x,y,z) which projects to y, for example, we could have e.g. Z(x) + y + Z(z), where Z is the zero function. Still, the traditional line of having a full suite of identity functions seems conceptually more appropriate as well as being neat (for taking the second value from x, y, z doesn’t seem to conceptually depend on the notion of addition or multiplication).
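      For what it’s worth, the substitution mentioned above is easy to check in a two-line Python sketch (Z is the zero function; the names are just illustrative):

```python
Z = lambda x: 0

def I_3_2(x, y, z):     # the projection to the second argument
    return y

def alt(x, y, z):       # the same function built as Z(x) + y + Z(z)
    return Z(x) + y + Z(z)
```

      The two agree everywhere, but alt drags in addition, which is the conceptual point against it.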

      *Why can’t the diagonal function (in the proof that there are effectively computable numerical functions which aren’t primitive recursive) be p.r.?*

      I think you are right when you say “Given all the clever encoding and other tricks used to do interesting things as p.r. functions, I don’t think it’s obvious that there can’t be a p.r. interpreter for p.r. recipes.” I always had a sense of arm-waving hereabouts! But I guess that the arm-waving is designed not to show (independently of the argument by contradiction) that the diagonal function can’t be p.r., but rather to show that, on reflection, we didn’t have a strong reason to expect that the diagonal function is p.r.
