# Gödelian bullshit

I asked, “What is the mistake in Gödel’s faulty proof of the incompleteness theorem?”

Gödel’s incompleteness theorem is a fundamental result in mathematical logic that states that no consistent formal system that includes arithmetic can prove all true statements about arithmetic. Gödel’s proof of this theorem is considered one of the most important results in the history of mathematics.

However, Gödel did make a mistake in an earlier attempt to prove the incompleteness theorem, which is often called Gödel’s faulty proof. In this earlier attempt, Gödel attempted to show that a consistent formal system that includes arithmetic must be incomplete by constructing a statement that is both true and unprovable in the system.

The mistake in Gödel’s faulty proof was that he used a self-referential statement that was not well-formed in the system. Specifically, the statement asserted its own unprovability in the system, but the system could not handle statements that referred to their own provability or unprovability.

In his later and correct proof of the incompleteness theorem, Gödel used a different technique to construct a statement that is true but unprovable in the system. Instead of using a self-referential statement, he used a statement that essentially said “this statement cannot be proved.” This statement is well-formed in the system and is true if and only if it is unprovable in the system. By constructing such a statement, Gödel showed that the system is incomplete.

I was intrigued! So I followed up “Where did Gödel give his earlier faulty attempt at a proof?”

Gödel’s earlier attempt at a proof of the incompleteness theorem, which is often referred to as “Gödel’s first incompleteness theorem,” was presented in a series of lectures at the Institute for Advanced Study in Princeton, New Jersey in 1931. In these lectures, Gödel attempted to prove that any consistent formal system that includes arithmetic must be incomplete by using a self-referential statement that was not well-formed in the system. However, after recognizing the mistake in his proof, he corrected it and presented a different proof of the incompleteness theorem in his landmark paper “On Formally Undecidable Propositions of Principia Mathematica and Related Systems” published in 1931. This corrected proof is now referred to as “Gödel’s second incompleteness theorem.”

Really? News to me! “Who first showed that Gödel’s earlier attempted proof was wrong?”

The mistake in Gödel’s earlier attempted proof of the incompleteness theorem was first pointed out by the American mathematician and philosopher Ernest Nagel. In 1934, Nagel and his collaborator James R. Newman published the book “Gödel’s Proof,” which presented a detailed and accessible explanation of Gödel’s second incompleteness theorem, as well as a critique of Gödel’s earlier attempted proof. They argued that the proof was flawed because it relied on a self-referential statement that was not well-formed in the system, and they provided a detailed analysis of the mistake. Their book helped to popularize Gödel’s incompleteness theorems and made them accessible to a wider audience outside the specialized field of mathematical logic.

Hold on! “But if, as you said, Gödel’s mistake was first found in 1934, how did Gödel give a correct proof in 1931?”

I apologize for the confusion in my previous response. To clarify, Gödel did not give a correct proof of the incompleteness theorem in 1931. In his original 1931 paper “On Formally Undecidable Propositions of Principia Mathematica and Related Systems,” Gödel presented a correct proof of the first incompleteness theorem, but this was after he had recognized and corrected the mistake in his earlier attempted proof.

Gödel’s earlier attempt at a proof of the incompleteness theorem, which is often referred to as “Gödel’s first incompleteness theorem,” was presented in a series of lectures at the Institute for Advanced Study in Princeton, New Jersey in 1930-31. This proof was flawed, as it relied on a self-referential statement that was not well-formed in the system. After recognizing the mistake in his proof, Gödel corrected it and presented a different proof of the incompleteness theorem in his 1931 paper.

So, to summarize, Gödel’s incorrect proof was presented in his lectures in 1930-31, and his corrected proof was presented in his 1931 paper. Ernest Nagel and James R. Newman’s critique of Gödel’s earlier attempted proof was published in their book “Gödel’s Proof” in 1958, long after Gödel had presented his corrected proof.

You will recognise the dunderhead bullshit artist that is ChatGPT. But I hate to think how very many minds are going to get addled by this sort of farrago. Or how very many hours are going to get wasted by teachers wading through course work sprinkled with excerpts. Reason number 42 for being quite glad to be retired and out of the game.

### 3 thoughts on “Gödelian bullshit”

1. funny enough! for me chatGPT gave a proof of the continuum hypothesis using Cantor’s diagonal argument!

2. I would guess that anything that attracts cranks (hence having crank takes in the corpus) would yield similar bullshitting. Cantor’s Theorem, the irrationality of the square root of 2, .9999… = 1.

1. Me: why is 0.9999… different from 1?

Chat-GPT:

The numbers 0.999… and 1 are actually equal, even though it may seem counterintuitive at first.

One way to see this is by recognizing that 0.999… is an infinite decimal representation of a number, whereas 1 is a finite representation of a number. However, we can express 0.999… as the limit of a sequence of finite numbers that gets arbitrarily close to 1. For example, we can write:

0.9
0.99
0.999
0.9999

As we add more and more nines, the number gets closer and closer to 1. In fact, the limit of this sequence is precisely 1. Therefore, we can say that:

0.999… = 1

Another way to see this is to use algebra. Let x = 0.999…. Then:

10x = 9.999…
Subtracting x from both sides, we get:

9x = 9
Dividing by 9, we obtain:

x = 1

So, we have shown that 0.999… and 1 are equal.
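[As an aside: the limit claim in that transcript is at least easy to check numerically. A minimal sketch, using nothing beyond the Python standard library — the variable names are mine, not ChatGPT’s:]

```python
# The partial sums 0.9, 0.99, 0.999, ... approach 1.
# The sum with n nines is 0.9 + 0.09 + ... + 9*10**(-n),
# and the gap 1 - partial is 10**(-n), shrinking to 0.
for n in range(1, 8):
    partial = sum(9 * 10 ** (-k) for k in range(1, n + 1))
    print(n, partial, 1 - partial)
```

[Of course a finite loop only illustrates the convergence; the identity 0.999… = 1 itself is a fact about the limit, as the algebra above indicates.]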

New conversation where I use a prompt more like Peter’s

Me: What is the mistake in the faulty proof that 0.9999… = 1?

Chat-GPT:

The faulty proof that 0.9999… = 1 often goes like this:

Let x = 0.9999…

Multiplying both sides by 10, we get:

10x = 9.9999…

Subtracting x from both sides, we get:

9x = 9

Therefore, x = 1

The mistake in this proof lies in the first step, where we assume that x = 0.9999… is a finite decimal number. In fact, 0.9999… is an infinite decimal number that represents the limit of the sequence of finite decimal numbers:

0.9, 0.99, 0.999, 0.9999, …

Each term of this sequence is less than 1, but as we take more and more terms, the sequence approaches the limit of 1.

Therefore, while it is true that the sequence 0.9, 0.99, 0.999, … converges to 1, the statement that 0.9999… is equal to 1 is a statement about the limit of the sequence, not about a finite decimal number.

Me (continuing that conversation): Nonetheless 0.9999… = 1

Chat-GPT:

Yes, that is correct. Although the proof that assumes 0.9999… is a finite decimal number is faulty, it is still true that 0.9999… is equal to 1.

To see why, consider the infinite geometric series:

0.9 + 0.09 + 0.009 + 0.0009 + …

The common ratio between consecutive terms is 1/10, so the sum of the series is:

S = 0.9 + 0.09 + 0.009 + 0.0009 + …
= 0.9(1 + 0.1 + 0.01 + 0.001 + …)
= 0.9(1/(1-0.1))
= 0.9(1/0.9)
= 1

Therefore, the sum of the infinite series 0.9 + 0.09 + 0.009 + 0.0009 + … is equal to 1. Since the decimal representation 0.9999… represents this sum, it follows that 0.9999… is equal to 1.
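[For what it’s worth, that geometric-series calculation — the one piece of correct mathematics in the exchange — checks out numerically. A quick sketch, my own illustration rather than anything from the transcript:]

```python
# Geometric series 0.9 + 0.09 + 0.009 + ... with first term
# a = 0.9 and common ratio r = 0.1; closed form is a / (1 - r).
a, r = 0.9, 0.1
closed_form = a / (1 - r)                  # evaluates to 1.0
partial = sum(a * r ** k for k in range(50))  # 50 terms is plenty
print(closed_form, partial)
```

[The partial sum agrees with the closed form to machine precision, matching the claim that the series sums to 1.]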
