The incompleteness theorems of Gödel (1931) are rather well-known
through many popular accounts, but they are often stated incorrectly, and
there are a number of misconceptions about them - especially with
respect to their philosophical consequences - which I hope to
make a start in putting right here. I start by stating the three main
theorems, almost as slogans, and then define most of the terms in
these statements, commenting on the others.

**Theorem 1.1 (Gödel's first incompleteness theorem)**

A recursive first-order theory which is $\omega $-consistent and sufficiently strong for this theorem is incomplete.

**Theorem 1.2 (Gödel's second incompleteness theorem)**

A recursive first-order theory which is consistent and sufficiently strong for this theorem cannot prove its own consistency.

**Theorem 1.3 (The Gödel-Rosser incompleteness theorem)**

A recursive first-order theory which is consistent and sufficiently strong for this theorem is incomplete.

Definitions of most of the terminology here appear on this web
page. Anything that does not will be discussed in other web pages in
this set. In particular I do not say here what "sufficiently strong"
means, though it is discussed later. The important points to
make about these notions are that (traditionally at least) the
"good" theories that we have, such as Peano Arithmetic and Zermelo-Fraenkel
set theory, are recursive, and we have strong evidence to
believe them to be at least consistent and hopefully also
$\omega $-consistent (a slight strengthening of "consistent").
They are also quite easily seen to be sufficiently strong in the
senses required. Note that the idea of "sufficiently strong" is
slightly different for each of the theorems above, but this is not a
problem, as we expect all such good theories to be sufficiently strong
in all three senses, and (as the terminology suggests) any
strengthening of a sufficiently strong theory that can prove more is
also sufficiently strong.

The theorems stated above concern
formal theories of arithmetic, which can be taken to mean formal
theories (axioms and proof rules) for true statements about the
natural numbers with addition and multiplication. "Sufficiently strong"
means that some basic facts about arithmetic must be provable.
Surprisingly little is required for the first theorem and for Rosser's
version of it. Rather more is required for the second theorem: in
particular, we will have to explain how it happens that a theory of
arithmetic can even express its own consistency in the formal
language, and a theory needs to be a bit stronger for the
second theorem to work.

Except for the definition of what "sufficiently strong" means, I
shall provide here the background information and definitions needed to
understand these theorems.

As already said, these theorems discuss formal theories that prove true statements about the natural numbers. We shall take this to mean that they all discuss a first-order theory in some first-order language $L$. So the theories are completely specified (by the usual rules of first-order logic) when we have the specification of $L$ itself and a set of sentences of $L$ that will be taken as the theory's mathematical (i.e. non-logical) axioms.

**Definition 2.1**

A first-order language $L$ is *recursive* if it
has countably many nonlogical symbols, each one identified with a
distinct natural number, and there is a Turing machine (or register machine, or
your preferred type of machine characterising "computable") that can
decide the set of these non-logical symbols together with their
arities.

In fact, we shall see later that there are sufficiently strong and interesting languages with only finitely many nonlogical symbols. These are obviously recursive by the above definition. (I take a finite set to be countable.) It is important (as you will have observed in a previous course in logic) that the formation rules for first-order languages are computable in a natural sense. Indeed they are, and rather easily so: most of the relevant syntactic operations in a first-order language are polynomial-time computable.
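To illustrate how easily computable such syntax is, here is a minimal sketch of a decision procedure for well-formed terms, using a toy grammar of my own (terms are $0$, $1$, or a bracketed sum of two terms) rather than the exact language discussed later:

```python
# Toy grammar (an illustrative assumption, not the book's language):
#   term ::= "0" | "1" | "(" term "+" term ")"

def parse_term(s, i=0):
    """Return the index just past one term starting at s[i], or None."""
    if i < len(s) and s[i] in "01":
        return i + 1
    if i < len(s) and s[i] == "(":
        j = parse_term(s, i + 1)          # left subterm
        if j is None or j >= len(s) or s[j] != "+":
            return None
        k = parse_term(s, j + 1)          # right subterm
        if k is None or k >= len(s) or s[k] != ")":
            return None
        return k + 1
    return None

def is_term(s):
    """Decide whether the whole string s is a well-formed term."""
    return parse_term(s) == len(s)
```

The procedure makes a single left-to-right pass with recursion on bracket depth, so it runs in time polynomial (in fact linear) in the length of the string.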

**Definition 2.2**

A first-order theory $T$ is specified by a first-order language $L$ and a set of sentences of $L$, the non-logical axioms of $T$. Since $L$ is usually understood from the context, and first-order logic is the default and is implicit, we generally identify a theory $T$ with its set of non-logical axioms. The set of theorems of $T$, $\text{Th}\left(T\right)$, is the set of $L$-sentences that can be proved from $T$ using first-order logic: $\text{Th}\left(T\right)=\left\{\sigma :T\vdash \sigma \right\}$.

Any sentence or other expression in a first-order language $L$ is a string of symbols of finite length. If the nonlogical symbols of $L$ are identified with natural numbers in a computable way we shall see how this means that all expressions in $L$ can also be mapped to natural numbers in a computable way. (This is called Gödel-numbering.) The Gödel-number of a string of symbols $\sigma $ is written $\ulcorner \sigma \urcorner $.
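As a hedged sketch of one possible Gödel-numbering, here is a base-$b$ positional encoding of symbol codes; the particular symbol set and codes are illustrative assumptions of mine, not the numbering used in the text:

```python
# A toy Gödel-numbering for strings over a finite symbol set.
# The symbol list and the base-BASE digit encoding are illustrative
# choices; any computable injection of strings into N would do.
SYMBOLS = ["0", "1", "+", "(", ")", "=", "x"]
CODE = {s: i + 1 for i, s in enumerate(SYMBOLS)}  # codes 1..7; digit 0 unused
BASE = len(SYMBOLS) + 1                            # base 8

def godel_number(string):
    """Encode a list of symbols as a natural number via base-BASE digits."""
    n = 0
    for s in string:
        n = n * BASE + CODE[s]
    return n

def decode(n):
    """Invert godel_number: recover the symbol list from its code."""
    out = []
    while n > 0:
        n, d = divmod(n, BASE)
        out.append(SYMBOLS[d - 1])
    return out[::-1]
```

Since no symbol is coded by the digit $0$, the encoding is injective and both directions are clearly computable, which is all that Gödel-numbering requires.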

**Definition 2.3**

A first-order theory $T$ is *recursive* if it is in a recursive language and the set of Gödel-numbers of the axioms of $T$ is a recursive (computable) set of natural numbers.

Recall that $\perp $ denotes a canonical false statement in first-order logic.

**Definition 2.4**

A first-order theory $T$ is *consistent* if
$T\nvdash \perp $, i.e. $\perp \notin \text{Th}\left(T\right)$.

**Definition 2.5**

A first-order theory $T$ is *complete* if
for every sentence $\sigma $ of the language of $T$, either
$T\u22a2\sigma $ or $T\u22a2\neg \sigma $.

The last definition here is the most difficult one. The languages we shall consider for arithmetic will have expressions (closed terms) representing each natural number. It is possible to add a constant symbol for each natural number $n$, but if we prefer we need not add infinitely many constants: it suffices to add symbols for $0,1$ and a symbol for addition. Thus where used in a sentence or formula of $L$, the number $n$ might be understood to be some canonical closed term $\text{cterm}\left(n\right)$ such as $(\dots (1+1)+\dots +1)$ with $n$ $1$s and some standard convention on brackets.

**Definition 2.6**

In a first-order language $L$ with constant symbols $0,1$
and binary function symbol $+$, the *canonical term* of a natural number $n\in \mathbb{N}$,
$\text{cterm}\left(n\right)$, is defined by:
$\text{cterm}\left(0\right)$ is the string with single symbol $0$;
and for all $n\ge 0$, $\text{cterm}\left(n+1\right)$ is the string $\left(\text{cterm}\left(n\right)+1\right)$,
i.e. the string formed by concatenating the symbol $($,
the string $\text{cterm}\left(n\right)$ and the string $+1)$.
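Definition 2.6 can be transcribed directly as a small program; this is just a sketch operating on ordinary character strings rather than formal symbol strings:

```python
def cterm(n):
    """Canonical term of n per Definition 2.6:
    cterm(0) = "0" and cterm(n+1) = "(" + cterm(n) + "+1)"."""
    s = "0"
    for _ in range(n):
        s = "(" + s + "+1)"
    return s
```

For example, $\text{cterm}(2)$ is the string $((0+1)+1)$.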

**Definition 2.7**

A first-order theory $T$ in such a language $L$
is *$\omega $-consistent* if
whenever $\theta \left(x\right)$ is an $L$-formula with a single free variable $x$
and $T\vdash \theta \left(\text{cterm}\left(n\right)\right)$ for all $n\in \mathbb{N}$,
then $T\nvdash \exists x\,\neg \theta \left(x\right)$.

When confusion does not arise we will just write $n$ for $\text{cterm}\left(n\right)$.

Thus a theory is $\omega $-consistent if it cannot proclaim
(i.e. prove!) some statement of the form "there is a number $x$ with the property $\theta \left(x\right)$"
unless it is *consistent with $T$* that some concrete number $0,1,2,\dots $ has this property.

**Exercise 2.1**

Verify the assertion just made, i.e. show that $T$ is $\omega $-consistent if and only if whenever $\theta \left(x\right)$ is an $L$-formula with a single free variable $x$ and $T\vdash \exists x\theta \left(x\right)$, then for some $n\in \mathbb{N}$, $T\cup \left\{\theta \left(\text{cterm}\left(n\right)\right)\right\}$ is consistent.

The last definition makes sense for a theory that purports to be about natural numbers,
i.e. where all objects and variables are intended to be interpreted as natural numbers.
Some theories (such as set theory) include the natural numbers as a subclass of the collection of
objects, and this subclass is defined in a natural way by a formula $\nu \left(x\right)$ of the language.
For such theories we have a slightly different definition of $\omega $-consistency.

**Definition 2.8**

A first-order theory $T$ in a first-order language $L$
with constant symbols $0,1$, binary function symbol $+$,
and a formula $\nu \left(x\right)$ representing the class of natural numbers,
is *$\omega $-consistent* if
whenever $\theta \left(x\right)$ is an $L$-formula with a single free variable $x$
and $T\vdash \theta \left(\text{cterm}\left(n\right)\right)$ for all $n\in \mathbb{N}$,
then $T\nvdash \exists x\left(\nu \left(x\right)\wedge \neg \theta \left(x\right)\right)$.

It is easy to see that there are $\omega $-consistent theories.

**Exercise 2.2**

Show that any theory of natural numbers $T$ whose non-logical axioms are true in a structure with domain the usual natural numbers and with the usual interpretations of $0,1,+$ is $\omega $-consistent.

The existence of $\omega $-inconsistent theories follows from Gödel's first incompleteness theorem, as we shall see later.

This web page is copyright. It is one of Richard Kaye's Logic Web Pages,
designed to supplement the book *The Mathematics of Logic:
a guide to completeness theorems and their applications*
(CUP, 2007).