# Gödel's second incompleteness theorem

## 1. Introduction

The first incompleteness theorem (and the Gödel-Rosser theorem, which came a little later) shows that sufficiently strong recursive theories of arithmetic are necessarily incomplete. It does not quite settle the most pressing foundational question of 1931: whether a theory of arithmetic can establish the consistency of a stronger and more useful theory for mathematics. Gödel's second incompleteness theorem gives a negative answer to this question. In fact it says that a consistent, sufficiently strong recursive theory of arithmetic $T$ cannot even prove its own consistency, $Con ( T )$.

## 2. The second theorem as a formalisation of the first

One part of the first incompleteness theorem gives, for each recursive theory $T$ extending $DOR$, a statement $G$ equivalent to $∀ x ¬ Proof T ( x , ⌜ G ⌝ )$, and the first part of the argument shows that, assuming $T$ is consistent, $T ⊬ G$. Stepping outside $T$ for a moment, we can then conclude that $G$ must actually be true, because $T$ really cannot prove it.

Much has been made of this argument, to wit that $T$ cannot know that $G$ is true, yet we can see that $G$ is in fact true, and therefore we are in some sense cleverer than $T$. (The Lucas argument against artificial intelligence, for example.) Much of what has been said does not bear close scrutiny, because ultimately it depends on our knowing that the hypotheses of Gödel's first theorem hold, and the trickiest hypothesis we need to know is that $T$ is consistent. Gödel realised this, and indeed this is how he originally stated his second incompleteness theorem: a sufficiently strong theory $T$ can formalise the argument just given and prove $Con ( T ) → G$. Since $T ⊬ G$, it follows that $T ⊬ Con ( T )$.

It is worth looking at this argument in a bit more detail: it is illuminating, it indicates what we expect such theories $T$ to be able to do, and it is essential if we are to say what we mean by "sufficiently strong". We start with our recursive theory $T$ extending $DOR$, our predicate $Proof T ( x , y )$, and our sentence $G$ equivalent (in $DOR$) to $∀ x ¬ Proof T ( x , ⌜ G ⌝ )$. We also write $Con ( T )$ for $∀ x ¬ Proof T ( x , ⌜ ⊥ ⌝ )$. We attempt to prove $Con ( T ) → G$ in $T$, so we take an arbitrary model $M ⊨ T + ¬ G$ and attempt to show $M ⊨ ¬ Con ( T )$.

• We have from $M ⊨ ¬ G$ that $M ⊨ ∃ x Proof T ( x , ⌜ G ⌝ )$, by the defining property of $G$. Note that this statement is $Σ 1$.
• Any $Σ 1$ statement true in the real world is provable in $DOR$. But this is a statement about provability and hence (via Gödel-numbering) a statement of arithmetic: written out, it is the assertion $σ → ∃ x Proof DOR ( x , ⌜ σ ⌝ )$ for each $Σ 1$ sentence $σ$. That $T$ proves all such assertions is a reasonable assumption to make as part of what we mean by "$T$ is sufficiently strong".
• Suppose, then, that $T$ is strong enough to prove that every true $Σ 1$ sentence is provable, as in the previous bullet. We can apply this inside $M$ to the sentence $¬ G$: since $M ⊨ ¬ G$ and $¬ G$ is $Σ 1$, and since every $DOR$-proof is in particular a $T$-proof, we conclude that $M ⊨ ∃ y Proof T ( y , ⌜ ¬ G ⌝ )$.
• But then $M$ contains proofs of both $G$ and $¬ G$. These two proofs can be combined into a single proof of $⊥$, and that combined proof is again an element of $M$. In other words, $M ⊨ ∃ y Proof T ( y , ⌜ ⊥ ⌝ )$, i.e. $M ⊨ ¬ Con ( T )$.
• Since $M$ was an arbitrary model of $T + ¬ G$, it follows by the Completeness Theorem that $T ⊢ ¬ G → ¬ Con ( T )$, i.e. $T ⊢ Con ( T ) → G$, as required.
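Schematically, the chain of implications carried out inside $T$ can be summarised as follows (our own rearrangement, using the same notation as above):

```latex
% Summary of the argument formalised inside T.
\begin{align*}
\neg G &\rightarrow \exists x\, \mathit{Proof}_T(x, \ulcorner G \urcorner)
  && \text{fixed-point property of } G\\
\neg G &\rightarrow \exists y\, \mathit{Proof}_T(y, \ulcorner \neg G \urcorner)
  && \text{provable $\Sigma_1$-completeness, since $\neg G$ is $\Sigma_1$}\\
\neg G &\rightarrow \exists z\, \mathit{Proof}_T(z, \ulcorner \bot \urcorner)
  && \text{combining the two proofs}\\
\mathit{Con}(T) &\rightarrow G
  && \text{contraposition.}
\end{align*}
```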

There are some small details omitted here, concerning how $T$ and $M$ manipulate logic and proofs, in particular the logic of $→$, $¬$ and $⊥$. These details are filled in in the next section.

## 3. Provability predicates

There is a more modern axiomatic treatment of Gödel's second incompleteness theorem that is essentially the same as the argument just given, but identifies exactly what is required of the logic and the provability predicate for it to work.

Definition.

Suppose $T$ is a theory extending $DOR$. A first order formula $Prov ( x )$ in the language of $DOR$ with a single free variable $x$ is called a provability predicate for $T$ if the following hold for all sentences $σ , τ$ in the language of $T$:

1. if $T ⊢ σ$ then $T ⊢ Prov ( ⌜ σ ⌝ )$;
2. $T ⊢ Prov ( ⌜ σ → τ ⌝ ) → ( Prov ( ⌜ σ ⌝ ) → Prov ( ⌜ τ ⌝ ) )$;
3. $T ⊢ Prov ( ⌜ σ ⌝ ) → Prov ( ⌜ Prov ( ⌜ σ ⌝ ) ⌝ )$.

The first of these clauses connects $Prov ( ⌜ σ ⌝ )$ to provability; the second provides all the basic logic needed; and the third expresses that provability, when true, is computationally easy to check and hence provably true.
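A useful consequence of the first two clauses, used implicitly in the proof below, is that $Prov$ respects provable implication: if $T ⊢ σ → τ$ then $T ⊢ Prov ( ⌜ σ ⌝ ) → Prov ( ⌜ τ ⌝ )$. Spelled out:

```latex
% Derived rule from clauses 1 and 2:
% if T proves (sigma -> tau) then T proves Prov(<sigma>) -> Prov(<tau>).
\begin{align*}
T &\vdash \sigma \rightarrow \tau
  && \text{assumption}\\
T &\vdash \mathit{Prov}(\ulcorner \sigma \rightarrow \tau \urcorner)
  && \text{clause 1}\\
T &\vdash \mathit{Prov}(\ulcorner \sigma \urcorner) \rightarrow
     \mathit{Prov}(\ulcorner \tau \urcorner)
  && \text{clause 2 and modus ponens.}
\end{align*}
```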

Definition.

If a provability predicate $Prov ( x )$ is given for a theory $T$ then $Con ( T )$ is $¬ Prov ( ⌜ ⊥ ⌝ )$.

Theorem.

Suppose $T$ is a consistent theory extending $DOR$ with a provability predicate $Prov ( x )$. Then $T ⊬ Con ( T )$.

Proof.

By diagonalisation let $G$ satisfy $DOR ⊢ G ↔ ¬ Prov ( ⌜ G ⌝ )$. Then if $T ⊢ G$ then $T ⊢ Prov ( ⌜ G ⌝ )$ by the first clause, hence $T ⊢ ¬ G$ by the fixed-point property, so $T ⊢ ⊥$, against the assumption that $T$ is consistent. So $T ⊬ G$, and hence $T ⊬ ¬ Prov ( ⌜ G ⌝ )$.

Now let $σ$ be

$G → ( Prov ( ⌜ G ⌝ ) → ⊥ )$

Note that $σ$ is equivalent (in $DOR$) to $G → ¬ Prov ( ⌜ G ⌝ )$, and hence to the tautology $¬ Prov ( ⌜ G ⌝ ) → ¬ Prov ( ⌜ G ⌝ )$, and therefore

$T ⊢ σ$

By the first clause of the definition of a provability predicate

$T ⊢ Prov ( ⌜ σ ⌝ )$

i.e.

$T ⊢ Prov ( ⌜ G → ( Prov ( ⌜ G ⌝ ) → ⊥ ) ⌝ )$

So, using the second clause for a provability predicate twice,

$T ⊢ Prov ( ⌜ G ⌝ ) → ( Prov ( ⌜ Prov ( ⌜ G ⌝ ) ⌝ ) → Prov ( ⌜ ⊥ ⌝ ) )$

But the third clause gives

$T ⊢ Prov ( ⌜ G ⌝ ) → Prov ( ⌜ Prov ( ⌜ G ⌝ ) ⌝ )$

hence

$T ⊢ Prov ( ⌜ G ⌝ ) → Prov ( ⌜ ⊥ ⌝ )$

or

$T ⊢ ¬ Prov ( ⌜ ⊥ ⌝ ) → ¬ Prov ( ⌜ G ⌝ )$

so, since $T ⊬ ¬ Prov ( ⌜ G ⌝ )$, we have

$T ⊬ ¬ Prov ( ⌜ ⊥ ⌝ )$

or

$T ⊬ Con ( T )$

as required.
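The axiomatic argument above is concrete enough to check mechanically. The following Lean 4 sketch is our own formalisation (not an official development): sentences form an abstract type `S`, `Thm` stands for provability in $T$, the axiom schemes `axK` and `axS` together with modus ponens supply the propositional logic, `d1`–`d3` are the three clauses of the definition, and the fixed point $G$ and the consistency of $T$ are taken as hypotheses.

```lean
-- A sketch of the abstract second incompleteness theorem.
structure ProvSetup (S : Type) where
  imp : S → S → S      -- object-level implication
  bot : S              -- object-level ⊥
  Thm : S → Prop       -- "T proves _"
  mp  : ∀ {a b : S}, Thm (imp a b) → Thm a → Thm b
  axK : ∀ a b : S, Thm (imp a (imp b a))
  axS : ∀ a b c : S, Thm (imp (imp a (imp b c)) (imp (imp a b) (imp a c)))
  Pr  : S → S          -- Prov(⌜·⌝); clauses 1–3 of the definition:
  d1  : ∀ {a : S}, Thm a → Thm (Pr a)
  d2  : ∀ a b : S, Thm (imp (Pr (imp a b)) (imp (Pr a) (Pr b)))
  d3  : ∀ a : S, Thm (imp (Pr a) (Pr (Pr a)))

namespace ProvSetup
variable {S : Type} (P : ProvSetup S)

/-- Object-level negation: ¬a is a → ⊥. -/
abbrev neg (a : S) : S := P.imp a P.bot

/-- Derived rule: implications compose. -/
theorem comp {a b c : S} (h1 : P.Thm (P.imp a b))
    (h2 : P.Thm (P.imp b c)) : P.Thm (P.imp a c) :=
  P.mp (P.mp (P.axS a b c) (P.mp (P.axK (P.imp b c) a) h2)) h1

/-- Derived rule: from a → (b → c) and a → b conclude a → c. -/
theorem scomb {a b c : S} (h1 : P.Thm (P.imp a (P.imp b c)))
    (h2 : P.Thm (P.imp a b)) : P.Thm (P.imp a c) :=
  P.mp (P.mp (P.axS a b c) h1) h2

/-- A consistent T with a provability predicate and a fixed point
    G ↔ ¬Pr G cannot prove Con(T) = ¬Pr ⊥. -/
theorem second_incompleteness (G : S) (consis : ¬ P.Thm P.bot)
    (fix1 : P.Thm (P.imp G (P.neg (P.Pr G))))    -- G → ¬Prov(⌜G⌝)
    (fix2 : P.Thm (P.imp (P.neg (P.Pr G)) G)) :  -- ¬Prov(⌜G⌝) → G
    ¬ P.Thm (P.neg (P.Pr P.bot)) := by
  intro hcon
  -- σ := G → (Pr G → ⊥) is literally fix1, so clause 1 gives Pr σ
  have h1 : P.Thm (P.Pr (P.imp G (P.imp (P.Pr G) P.bot))) := P.d1 fix1
  -- clause 2: Pr G → Pr (Pr G → ⊥)
  have h2 : P.Thm (P.imp (P.Pr G) (P.Pr (P.imp (P.Pr G) P.bot))) :=
    P.mp (P.d2 G (P.imp (P.Pr G) P.bot)) h1
  -- clause 2 again: Pr G → (Pr (Pr G) → Pr ⊥)
  have h3 : P.Thm (P.imp (P.Pr G) (P.imp (P.Pr (P.Pr G)) (P.Pr P.bot))) :=
    P.comp h2 (P.d2 (P.Pr G) P.bot)
  -- clause 3: Pr G → Pr (Pr G), hence Pr G → Pr ⊥
  have h4 : P.Thm (P.imp (P.Pr G) (P.Pr P.bot)) := P.scomb h3 (P.d3 G)
  -- with hcon : Pr ⊥ → ⊥ this yields ¬Pr G, hence G, hence Pr G, hence ⊥
  have h5 : P.Thm (P.neg (P.Pr G)) := P.comp h4 hcon
  exact consis (P.mp h5 (P.d1 (P.mp fix2 h5)))
end ProvSetup
```

Note the design choice: taking `axK` and `axS` as theorem-level schemes (rather than a full propositional calculus) is just enough logic to derive the two composition rules the proof needs.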