Gödel's second incompleteness theorem

1. Introduction

The first incompleteness theorem (and the Gödel-Rosser theorem, which came a bit later) shows that sufficiently strong recursive theories of arithmetic are necessarily incomplete. It does not quite settle the most pressing foundational question around in 1931, which is whether a weak theory of arithmetic can establish the consistency of a stronger and more useful theory for mathematics. Gödel's second incompleteness theorem establishes a negative answer to this question. In fact it says that a consistent recursive theory of arithmetic T cannot even prove its own consistency, Con(T).

2. The second theorem as a formalisation of the first

One part of the first incompleteness theorem gives, for each recursive theory T extending DOR, a statement G equivalent to ∀x ¬Proof_T(x, ⌜G⌝), and the first part of the argument shows that, assuming T is consistent, T ⊬ G; at this point we can step outside T momentarily and conclude that G must actually be true, because T really cannot prove it.
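For reference, the fixed point G here is supplied by the diagonal lemma; the following is the standard formulation (stated, not proved, here):

```latex
% Diagonal (fixed-point) lemma: for each formula \varphi(x) with one
% free variable there is a sentence \sigma with
\[
  \mathrm{DOR} \vdash \sigma \leftrightarrow \varphi(\ulcorner\sigma\urcorner).
\]
% Taking \varphi(x) := \forall y\,\neg\mathrm{Proof}_T(y,x) yields the G above:
\[
  \mathrm{DOR} \vdash G \leftrightarrow \forall y\,\neg\mathrm{Proof}_T(y,\ulcorner G\urcorner).
\]
```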

Much has been made of this argument, to wit that T cannot know that G is true but we can see that G is in fact true, and therefore we are in some sense cleverer than T. (The Lucas argument against artificial intelligence, for example.) Much of what has been said doesn't bear too close scrutiny, because ultimately it depends on our knowing that the hypotheses of the Gödel first theorem are true, and the trickiest hypothesis that we need to know is that T is consistent. Gödel realised this, and in fact this is how he originally stated his second incompleteness theorem: a sufficiently strong theory T can formalise the argument just given and prove Con(T) → G. Since T ⊬ G, Gödel concluded that T ⊬ Con(T).

It's worth looking at this argument in a bit more detail, because it is illuminating, it indicates what we expect such theories T to be able to do, and of course it is essential if we are to say what we mean by sufficiently strong. We start with our recursive theory T extending DOR, our predicate Proof_T(x, y), and our sentence G equivalent (in DOR) to ∀x ¬Proof_T(x, ⌜G⌝). We also write Con(T) for ∀x ¬Proof_T(x, ⌜⊥⌝). We attempt to prove Con(T) → G in T, so we take an arbitrary model M ⊨ T + ¬G and attempt to show M ⊨ ¬Con(T).
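In outline the intended argument runs roughly as follows (a sketch only; the manipulations of proofs inside M are precisely the details discussed next):

```latex
% Working inside an arbitrary M \models T + \neg G:
\[
  M \models \neg G
  \;\Longrightarrow\;
  M \models \exists x\,\mathrm{Proof}_T(x, \ulcorner G \urcorner),
\]
% so M has a (possibly nonstandard) proof p of G.  From p and the
% equivalence G \leftrightarrow \forall x\,\neg\mathrm{Proof}_T(x,\ulcorner G\urcorner),
% M builds a proof of \forall x\,\neg\mathrm{Proof}_T(x,\ulcorner G\urcorner);
% and M can also verify, provably, that \mathrm{Proof}_T(p,\ulcorner G\urcorner).
% Combining the two yields a proof of \bot in M's sense:
\[
  M \models \exists x\,\mathrm{Proof}_T(x, \ulcorner \bot \urcorner),
  \quad\text{i.e.}\quad M \models \neg\mathrm{Con}(T).
\]
```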

There are some small details left out here, to do with how T and M can manipulate logic and proofs, in particular the logic of →, ¬ and ⊥. These details are filled in in the next section.

3. Provability predicates

There is a more modern axiomatic treatment of the Gödel second incompleteness theorem that is basically the same as the one just given, but identifies exactly what is required of the logic and the provability predicate for it to work.


Definition. Suppose T is a theory extending DOR. A first-order formula Prov(x) in the language of DOR with single free variable x is called a provability predicate for T if the following hold for all sentences σ, τ in the language of T:

  1. if T ⊢ σ then T ⊢ Prov(⌜σ⌝);
  2. T ⊢ Prov(⌜σ → τ⌝) → (Prov(⌜σ⌝) → Prov(⌜τ⌝));
  3. T ⊢ Prov(⌜σ⌝) → Prov(⌜Prov(⌜σ⌝)⌝).
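Writing □σ as shorthand for Prov(⌜σ⌝), these are the Hilbert-Bernays-Löb derivability conditions; in that modal notation they read:

```latex
\begin{align*}
  &\text{(D1)}\quad T \vdash \sigma \;\Longrightarrow\; T \vdash \Box\sigma \\
  &\text{(D2)}\quad T \vdash \Box(\sigma \to \tau) \to (\Box\sigma \to \Box\tau) \\
  &\text{(D3)}\quad T \vdash \Box\sigma \to \Box\Box\sigma
\end{align*}
```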

The first of these clauses connects Prov(⌜σ⌝) to provability; the second provides all the basic logic needed; and the third reflects the fact that provability, when true, is computationally easy to check and hence provably true.


Definition. If a provability predicate Prov(x) is given for a theory T, then Con(T) is the sentence ¬Prov(⌜⊥⌝).


Theorem (Gödel's second incompleteness theorem). Suppose T is a consistent theory extending DOR with a provability predicate Prov(x). Then T ⊬ Con(T).


Proof. By diagonalisation let G satisfy DOR ⊢ G ↔ ¬Prov(⌜G⌝). Then if T ⊢ G then T ⊢ Prov(⌜G⌝), hence T ⊢ ¬G, so T ⊢ ⊥, against the assumption that T is consistent. So T ⊬ G, and hence T ⊬ ¬Prov(⌜G⌝).

Now let σ be

G → (Prov(⌜G⌝) → ⊥)

Note that σ is equivalent (in DOR) to G → ¬Prov(⌜G⌝), and hence to ¬Prov(⌜G⌝) → ¬Prov(⌜G⌝), and therefore

T ⊢ σ

By the first property of being a provability predicate,

T ⊢ Prov(⌜σ⌝)


i.e.

T ⊢ Prov(⌜G → (Prov(⌜G⌝) → ⊥)⌝)

So, using the second axiom for a provability predicate twice,

T ⊢ Prov(⌜G⌝) → (Prov(⌜Prov(⌜G⌝)⌝) → Prov(⌜⊥⌝))

But the third axiom gives

T ⊢ Prov(⌜G⌝) → Prov(⌜Prov(⌜G⌝)⌝)


so

T ⊢ Prov(⌜G⌝) → Prov(⌜⊥⌝)


and contrapositively

T ⊢ ¬Prov(⌜⊥⌝) → ¬Prov(⌜G⌝)

so as T ⊬ ¬Prov(⌜G⌝) we have

T ⊬ ¬Prov(⌜⊥⌝)


i.e.

T ⊬ Con(T)

as required.
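In the modal shorthand □σ for Prov(⌜σ⌝), the whole derivation compresses to a few lines (the same steps, in the same order as above):

```latex
\begin{align*}
  &T \vdash G \to (\Box G \to \bot)
     && \text{since } \mathrm{DOR} \vdash G \leftrightarrow \neg\Box G \\
  &T \vdash \Box\bigl(G \to (\Box G \to \bot)\bigr)
     && \text{by D1} \\
  &T \vdash \Box G \to (\Box\Box G \to \Box\bot)
     && \text{by D2, twice} \\
  &T \vdash \Box G \to \Box\Box G
     && \text{by D3} \\
  &T \vdash \Box G \to \Box\bot
     && \text{so } T \vdash \neg\Box\bot \to \neg\Box G \\
  &T \nvdash \neg\Box G \;\Longrightarrow\; T \nvdash \neg\Box\bot
     && \text{i.e. } T \nvdash \mathrm{Con}(T)
\end{align*}
```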