Gödel's incompleteness theorems came as a shock to the mathematical logic community in 1931. This was not just because they show that Russell's logicist programme had serious shortcomings (something I won't say much about here) but also because they show that Hilbert's Programme for the foundations of mathematics cannot succeed either. I suspect this was a bit of a surprise even to Gödel, who (it seems to me) set out to criticise Russell but was a supporter of Hilbert's position. The 1931 paper even ends with some suggestions (that do not really work) on how to resurrect Hilbert's position.

It is often stated that it is the first incompleteness theorem that kills Hilbert's Programme. In fact it is the second incompleteness theorem that delivers the blow. To explain, I will outline Hilbert's Programme (in rather anachronistic and modern terms) here.

Starting around 1900, Hilbert developed his Programme for the
Foundations of Mathematics over a number of years, up to 1930,
with much of the important work and papers published in the mid 1920s.
Although he did not use theories of arithmetic similar
to our $\text{DOR}$, nor the notion of ${\Sigma}_{1}$ and ${\Pi}_{1}$
statements, he had related notions. In particular
he talked about Finitistic Mathematics, which included mathematics
that could be expressed in a ${\Delta}_{0}$ way, and a little more
besides.

The starting point of Hilbert's Programme is, as it has been elsewhere in this section, the idea that some theories such as $\text{DOR}$ are ${\Sigma}_{1}$-complete: if $\sigma $ is ${\Sigma}_{1}$ and true then it is provable in $\text{DOR}$. Hilbert understood a version of this statement for his idea of finitistic mathematics, and he understood a ${\Sigma}_{1}$ sentence as expressing that there is a complete computation verifying that a finitistic statement is correct.
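As a concrete illustration of ${\Sigma}_{1}$-completeness (the particular sentence is my own hypothetical example, not one from the text), consider the true ${\Sigma}_{1}$ sentence $\exists x\,(x\cdot x = 9)$. Its proof in a ${\Sigma}_{1}$-complete theory amounts to exhibiting a witness and checking it by a finite computation:

```latex
% Witness x = 3 makes the \Delta_0 matrix true by direct computation,
% and existential introduction then yields the \Sigma_1 sentence.
\underbrace{3 \cdot 3 = 9}_{\text{a finite computation, provable in } \text{DOR}}
\;\implies\;
\text{DOR} \vdash \exists x\,(x \cdot x = 9)
```

This is the sense in which a true ${\Sigma}_{1}$ sentence carries its own verification: the witness determines a complete computation that any ${\Sigma}_{1}$-complete theory can reproduce.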

Hilbert also had a notion of ${\Pi}_{1}$ statements. He called them *real*.
He understood that ${\Pi}_{1}$ statements are the negations of
${\Sigma}_{1}$ statements and vice versa. He proposed dividing
mathematics into finitistic methods, real statements and the
rest, which was called *ideal* and based on *imaginary*
concepts, analogous to the theory of the complex numbers.
Just as for complex numbers, where theories of $\mathbb{C}$
can have important consequences for $\mathbb{R}$, a theory
involving imaginary or ideal concepts can have many important real
or ${\Pi}_{1}$ consequences.

From now on, all theories considered here include $\text{DOR}$, either directly or via an interpretation. Hilbert observed that this means it suffices to concentrate on the consistency of theories. This is because if such a theory $T$ proves a real (${\Pi}_{1}$) statement $\sigma $ then $\sigma $ must actually be true: if not, $\neg \sigma $, being ${\Sigma}_{1}$ and true, would be provable in $\text{DOR}$, hence in $T$, so $T$ would be inconsistent. Conversely, if $T$ is inconsistent then it proves some false ${\Pi}_{1}$ statement, such as $\forall x\,(\neg x=x)$.
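The argument in the last paragraph can be set out as a short chain of implications (my schematic summary, using the ${\Sigma}_{1}$-completeness of $\text{DOR}$ stated earlier):

```latex
% Consistency transfers to the truth of \Pi_1 consequences.
\text{Suppose } T \supseteq \text{DOR} \text{ is consistent and }
T \vdash \sigma \text{ with } \sigma \in \Pi_1. \text{ Then:}
\\[4pt]
\sigma \text{ false}
\;\implies\; \neg\sigma \text{ true and } \neg\sigma \in \Sigma_1
\;\implies\; \text{DOR} \vdash \neg\sigma
\;\implies\; T \vdash \neg\sigma,
\\[4pt]
\text{contradicting the consistency of } T.
\text{ Hence } \sigma \text{ is true.}
```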

This means that mathematics involving high-powered and abstract concepts, such as those found in set theory, or involving new kinds of numbers, is acceptable provided the theories of these things are consistent, can express real statements, and extend a ${\Sigma}_{1}$-complete theory such as $\text{DOR}$. In other words, the foundation of abstract mathematics rests on its predictions (or theorems) about real mathematics. If such a theory can be proved consistent then its real consequences must be correct, so the theory is well-founded.

As far as I can see, Hilbert never expected useful mathematical
theories to be complete, in the sense that they decide all statements
expressible in their language. (The first incompleteness theorem says that
$\omega $-consistent, recursively axiomatized theories containing enough arithmetic cannot be complete.)
But Hilbert did famously believe that we (as mathematicians)
would eventually be able to decide the truth of all real statements. As he said,
*Wir müssen wissen; wir werden wissen!* His suggestion was to look for a hierarchy
of mathematical theories of ideal or abstract mathematics
such as $\text{FIN}\subseteq {T}_{1}\subseteq {T}_{2}\subseteq {T}_{3}\subseteq \dots $
where $\text{FIN}$ is Hilbert's base theory of finitistic mathematics
(similar to, but a bit stronger than, $\text{DOR}$) and
prove $\text{FIN}\vdash \text{Con}\left({T}_{1}\right)$,
${T}_{1}\vdash \text{Con}\left({T}_{2}\right)$, and so on.
In this way we establish, within the theories $\text{FIN},{T}_{1},{T}_{2},\dots $,
the consistency of the theories that follow them. The beauty of this proposal is that consistency,
being a ${\Pi}_{1}$ notion, is real, so that if it is provable in a consistent
theory then it is actually true. In other words this would establish the
actual truth of the consistency of these theories and hence the truth
of their real consequences.
I don't think there was ever any expectation that the sequence of theories would have to be
linearly ordered as just indicated. Mathematical progress might better go via a lattice of
theories, or a partially ordered set of theories. But the new theories should be justified
by proving their consistency in the previously obtained theories.

The last sentence is the main point, and that is what Gödel's second incompleteness theorem says is impossible: a sufficiently strong theory such as $\text{PA}$ or $\text{FIN}$ cannot prove its own consistency, let alone the stronger statement of the consistency of a stronger theory.
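For reference, here is one standard modern formulation of the second incompleteness theorem, together with the step connecting it to the hierarchy above (the exact strength hypotheses vary between presentations; "enough arithmetic" can safely be taken to mean containing $\text{PA}$, though weaker assumptions suffice):

```latex
% Second incompleteness theorem and why it blocks the hierarchy.
\text{If } T \text{ is a consistent, recursively axiomatized theory}
\text{ containing enough arithmetic, then }
T \nvdash \text{Con}(T).
\\[4pt]
\text{In particular, if } T_i \subseteq T_{i+1} \text{ and }
T_i \vdash \text{Con}(T_{i+1}),
\text{ then } T_{i+1} \vdash \text{Con}(T_{i+1}),
\\[2pt]
\text{so by the theorem } T_{i+1} \text{ is inconsistent.}
```

So each step $T_i \vdash \text{Con}(T_{i+1})$ of Hilbert's proposed ladder is possible only when the stronger theory is inconsistent, which is exactly the opposite of what the Programme needed.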