# The definition of truth

## 1. Introduction

This web page presents and discusses the precise notion of semantics for first-order logic, in particular the definition of what it means for a sentence $σ$ to be true in a structure $M$.

The definition of what it means for $M ⊨ σ$ to hold is attributed to Tarski in the 1920s. It is not clear to me whether other people at the time even realised that this was a notion that could be defined adequately by mathematical means. (As a student I had various non-mathematical friends who clearly thought there was something heretical when I announced one day that I had just read an account of the definition of truth.) Tarski himself seemed at this time to regard his definition as methodological book-keeping, and not as useful mathematics, though it is clearly an important step before the question of completeness of first-order logic can even be stated, let alone proved. (The first proof of the Completeness theorem for first-order logic is due to Gödel in 1930.) In much more advanced work and certain specific circumstances (for instance the theory of satisfaction classes in models of Peano Arithmetic) this definition of truth has genuine mathematical importance, but in most contexts it really is just book-keeping, and so the discussion is relegated to these web pages.

This web page should be read in conjunction with Section 9.1 of The Mathematics of Logic. In particular we aim here to give more detail and information on Definition 9.23.

## 2. Tarski's Definition of Truth

The formal definition of $M ⊨ σ$ is by induction on the complexity of the statement $σ$, and there are several sub-clauses corresponding to the different kinds of sub-formulas that may occur in $σ$. But before we give these we indicate the main complication, which is that sub-formulas of $σ$ are not necessarily sentences but are formulas, and as such may contain free variables, which must also be given specific meanings during the inductive definition.

There are two possible approaches to this problem, each with advantages and disadvantages. One approach is to work with a larger first-order language with constant symbols naming all elements of our structure $M$. This is convenient as it will eventually allow us to discuss with ease a great number of properties of $M$ and its elements. It does however rely on the idea of substitution of terms for variables, another tricky technical issue that is addressed in these web pages elsewhere. To make this web page more self-contained, we approach the definition of truth by starting with a different way of attaching meanings to free variables.

For the rest of this web page, we fix a first-order language $L$ with variables $v 0 , v 1 , … , v i , …$ (for $i ∈ ℕ$) and various constant symbols, relation and function symbols (of possibly different arities) as described in Definition 9.1. We denote by $VAR L$ the set $\{ v 0 , v 1 , … , v i , … \}$ of variables of $L$. In particular, note that we assume a one-to-one correspondence $ℕ → VAR L$ given by $i ↦ v i$. The definition of truth given below will rely on the fact that our first-order language has exactly these variables, and the variables $v i$ are all distinct.

We also fix for this discussion an $L$-structure $M = ( M , … , c , … , f , … , R , … )$ as in Definition 9.22, where $M$ is a non-empty set, the domain of the structure, and for simplicity of notation we use the same symbol $M$ to denote both the structure and its domain.

Definition.

An assignment of variables into $M$ is a function $a : VAR L → M$.

An assignment of variables into $M$ extends to a function (which we shall also denote $a$) that takes an arbitrary term $t$ of $L$ and gives an element $a ( t )$ of the domain of $M$. This definition is very similar to the way a valuation $v : X → B$ extends to a map $v : BT ( X ) → B$ on a boolean algebra $B$.

Definition.

Let $TERM L$ denote the set of terms of $L$ as defined in Definition 9.2.

An assignment of variables into $M$ extends to a function $a : TERM L → M$ as follows.

• if $t$ is a variable $v i$, then $a ( t ) = a ( v i )$, as before.
• if $t$ is a constant symbol $c$, then $a ( t )$ is the element $c ∈ M$ of the $L$-structure $M$ realising that constant symbol.
• if $t = f ( t 1 , … , t k )$ is built from a $k$-ary function symbol and $k$ subterms $t 1 , … , t k$, then $a ( t )$ is the element $f ( a ( t 1 ) , … , a ( t k ) )$ of the $L$-structure $M$ obtained by applying the function $f : M k → M$ realising the function symbol $f$ to the result of inductively applying the function $a$ to all of the subterms.
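The three clauses above can be sketched as a short recursion. This is purely illustrative and not from the text: the tuple encoding of terms and every name in it (`eval_term`, the dictionaries `consts` and `funcs`) are my own conventions.

```python
# A minimal sketch (my own encoding, not the book's): a term is a tuple
#   ("var", i)                  for the variable v_i,
#   ("const", "c")              for a constant symbol c,
#   ("func", "f", t1, ..., tk)  for f(t1, ..., tk).
def eval_term(t, a, consts, funcs):
    """Extend the assignment a : VAR_L -> M (a dict i -> a(v_i)) to terms."""
    kind = t[0]
    if kind == "var":
        return a[t[1]]                    # a(v_i), as before
    if kind == "const":
        return consts[t[1]]               # the element of M realising c
    if kind == "func":                    # apply the function realising f
        return funcs[t[1]](*(eval_term(s, a, consts, funcs) for s in t[2:]))
    raise ValueError(f"not a term: {t!r}")

# Example structure: domain {0, 1, 2} with a successor function f(x) = x + 1 mod 3.
a = {0: 2}                                # the assignment v_0 -> 2
t = ("func", "f", ("var", 0))             # the term f(v_0)
value = eval_term(t, a, {}, {"f": lambda x: (x + 1) % 3})  # gives 0
```

Note how unique readability is reflected in the code: the recursion matches each term in exactly one way, which is what makes the extension of $a$ well defined.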

(We remark that the notation is potentially confusing as the same symbol is used for different objects: $M$ for both the structure and its domain; $c$ for the constant symbol and the element of $M$ realising it; $f$ for the function symbol and the function on $M$ realising it; $a$ for the assignment of variables and for the assignment of terms derived from it. However, the alternative would be cumbersome and pedantic. Such use of overloading of symbols is common in computer programming and mathematics and is highly beneficial to everyone once the reader has observed it and appreciated it.)

Because a term can be read in exactly one way (this is the unique readability of terms), the definition above shows that every assignment of variables extends in a unique way to an assignment of terms $a : TERM L → M$.

The next step is to define an assignment of truth values in $2 = \{ ⊤ , ⊥ \}$ to each formula $σ$ of $L$, following Definition 9.4. The third clause here is exactly the extension of a valuation into a boolean algebra to the whole of $BT ( X )$ as given following Definition 7.1.

Definition.

If $a : VAR L → M$ is an assignment of variables, $c ∈ M$ and $i ∈ ℕ$ then $a [ c / v i ]$ denotes the assignment of variables $b : VAR L → M$ given by $b ( v i ) = c$ and $b ( v j ) = a ( v j )$ for all $j ≠ i$. By the above definition, $a [ c / v i ]$ can be considered as an assignment of terms of $L$ in the same way.
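In programming terms, $a [ c / v i ]$ is just a copy-and-update of the assignment, leaving the original untouched. A hedged sketch; the function name `update` and the dict representation are mine:

```python
def update(a, i, c):
    """Return the assignment a[c/v_i]: send v_i to c, every other v_j to a(v_j)."""
    b = dict(a)        # copy first, so the original assignment a is not mutated
    b[i] = c
    return b

a = {0: 5, 1: 7}
b = update(a, 0, 9)    # b(v_0) = 9, b(v_1) = 7; a itself is unchanged
```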

Definition.

Let $FORM L$ denote the set of formulas of $L$ as defined in Definition 9.4.

An assignment of terms $a : TERM L → M$ gives rise to an assignment of formulas $a : FORM L → 2 = \{ ⊤ , ⊥ \}$ as follows.

• If $σ$ is $( t = s )$ where $t , s$ are terms, then $a ( σ ) = ⊤$ if $a ( t ) = a ( s )$, i.e. if these terms are assigned the same element of $M$. Otherwise $a ( σ ) = ⊥$.
• If $σ$ is $R ( t 1 , … , t k )$ where $R$ is a $k$-ary relation symbol and $t 1 , … , t k$ are terms, then $a ( σ ) = ⊤$ if $( a ( t 1 ) , … , a ( t k ) ) ∈ R ⊆ M k$, i.e. if this tuple of values of terms is in the relation on $M$ realising the symbol $R$. Otherwise $a ( σ ) = ⊥$.
• If $σ$ is $( ϕ ∧ ψ )$ then $a ( σ ) = a ( ϕ ) ∧ a ( ψ )$ where $∧$ is computed in the boolean algebra $2$ on the right hand side. Similarly $a ( ( ¬ ϕ ) ) = ¬ a ( ϕ )$, $a ( ( ϕ ∨ ψ ) ) = a ( ϕ ) ∨ a ( ψ )$, $a ( ⊤ ) = ⊤$ and $a ( ⊥ ) = ⊥$.
• If $σ$ is $∀ v i ϕ$ then $a ( σ ) = ⊤$ if $a [ c / v i ] ( ϕ ) = ⊤$ for every $c ∈ M$; otherwise $a ( σ ) = ⊥$.
• If $σ$ is $∃ v i ϕ$ then $a ( σ ) = ⊤$ if $a [ c / v i ] ( ϕ ) = ⊤$ for some $c ∈ M$; otherwise $a ( σ ) = ⊥$.
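For a finite structure, the five clauses above can be run directly: the quantifier clauses become loops over the domain. Below is a sketch under my own encoding of formulas and structures (nothing here is from the book): a structure is a dict with keys `"domain"`, `"consts"`, `"funcs"`, `"rels"`, and formulas are tagged tuples.

```python
def eval_term(t, a, M):
    """Value of a term under the assignment a (a dict i -> a(v_i))."""
    kind = t[0]
    if kind == "var":
        return a[t[1]]
    if kind == "const":
        return M["consts"][t[1]]
    return M["funcs"][t[1]](*(eval_term(s, a, M) for s in t[2:]))

def holds(phi, a, M):
    """Compute a(phi) in 2 = {True, False} for the L-structure M."""
    op = phi[0]
    if op == "eq":                        # (t = s)
        return eval_term(phi[1], a, M) == eval_term(phi[2], a, M)
    if op == "rel":                       # R(t1, ..., tk)
        args = tuple(eval_term(t, a, M) for t in phi[2:])
        return args in M["rels"][phi[1]]
    if op == "and":
        return holds(phi[1], a, M) and holds(phi[2], a, M)
    if op == "or":
        return holds(phi[1], a, M) or holds(phi[2], a, M)
    if op == "not":
        return not holds(phi[1], a, M)
    if op == "forall":                    # {**a, i: c} plays the role of a[c/v_i]
        return all(holds(phi[2], {**a, phi[1]: c}, M) for c in M["domain"])
    if op == "exists":
        return any(holds(phi[2], {**a, phi[1]: c}, M) for c in M["domain"])
    raise ValueError(f"not a formula: {phi!r}")

# Example: domain {0, 1, 2} with one binary relation R = "successor mod 3".
M = {"domain": {0, 1, 2}, "consts": {}, "funcs": {},
     "rels": {"R": {(0, 1), (1, 2), (2, 0)}}}
sigma = ("forall", 0, ("exists", 1, ("rel", "R", ("var", 0), ("var", 1))))
# holds(sigma, {}, M) gives True: every element has a successor.
```

Of course this only terminates because the domain is finite; the mathematical definition above places no such restriction on $M$.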

Once again, by unique readability, an assignment $a : VAR L → M$ extends in a unique way to an assignment $a : FORM L → 2$.

The idea of a sentence of $L$ was outlined in Definition 9.6. (A further web page on free variables explains this idea in more detail.) We claimed there that sentences are independent of any choice of meaning of variables. This can now be formally stated as a proposition and proved. In fact we will prove something slightly stronger.

Proposition.

Let $σ$ be a formula of $L$ and $a , b : VAR L → M$ be assignments such that $a ( v i ) = b ( v i )$ for all variables $v i ∈ VAR L$ that occur free in $σ$. Then $a ( σ ) = b ( σ )$.

Proof.

By induction on $σ$, where the induction hypothesis $H ( n )$ is the statement of the proposition for all formulas $σ$ that are constructed using a construction sequence of length at most $n$. We do some of the steps here, leaving the remaining steps as exercises for the reader.

If $σ$ is an atomic formula of type $R ( t 1 , … , t k )$ or $( t 1 = t 2 )$ then $a ( σ )$ and $b ( σ )$ can only depend on the values $a ( v j )$ and $b ( v j )$ for variables $v j$ actually occurring in the terms $t i$. (A formal proof of this fact requires a proof by induction on terms!) Therefore $a ( σ ) = b ( σ )$, as these values $a ( v j )$ and $b ( v j )$ are equal.

If $σ$ is a quantified formula of the type $∀ v i ϕ$ then suppose $a ( σ ) = ⊤$; we must show $b ( σ ) = ⊤$, using the induction hypothesis applied to $ϕ$. Now $a ( σ ) = ⊤$ means that $a [ c / v i ] ( ϕ ) = ⊤$ for all $c ∈ M$. But for each such $c ∈ M$ the assignment $b [ c / v i ]$ agrees with $a [ c / v i ]$ on all variables free in $ϕ$, since such a variable is either free in $∀ v i ϕ$ (where $a$ and $b$ agree by hypothesis) or else is $v i$ itself, and both of these assignments give $c$ to $v i$. Therefore by the induction hypothesis $b [ c / v i ] ( ϕ ) = ⊤$, and as this holds for all $c ∈ M$ we have $b ( σ ) = ⊤$. By symmetry, $b ( σ ) = ⊤$ implies $a ( σ ) = ⊤$, which proves the induction step in this case. The case of $σ$ of the type $∃ v i ϕ$ is similar.

Corollary.

Let $σ$ be a sentence of $L$ and $a , b : VAR L → M$ be assignments. Then $a ( σ ) = b ( σ )$.

Definition.

For a sentence $σ$ of $L$, we write $M ⊨ σ$, and say that $σ$ is true in $M$, to mean that $a ( σ ) = ⊤$ for every (equivalently, some) assignment of variables into $M$.
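Since a sentence receives the same truth value from every assignment, code checking $M ⊨ σ$ may start from any assignment at all, even the empty one. A self-contained sketch for a fragment with just $=$ between variables and the two quantifiers; the encoding and all names are my own illustrative choices:

```python
def holds(phi, a, domain):
    """a(phi) in {True, False}; terms here are restricted to plain variables."""
    op = phi[0]
    if op == "eq":                        # (v_i = v_j)
        return a[phi[1]] == a[phi[2]]
    if op == "forall":                    # {**a, phi[1]: c} is a[c/v_i]
        return all(holds(phi[2], {**a, phi[1]: c}, domain) for c in domain)
    if op == "exists":
        return any(holds(phi[2], {**a, phi[1]: c}, domain) for c in domain)
    raise ValueError(f"not a formula: {phi!r}")

def models(sigma, domain):
    """M |= sigma: every variable in a sentence is bound before it is used,
    so evaluation from the empty starting assignment never looks one up."""
    return holds(sigma, {}, domain)

# forall v0 exists v1 (v0 = v1) is true in every non-empty domain.
sigma = ("forall", 0, ("exists", 1, ("eq", 0, 1)))
```

The non-emptiness of the domain matters here just as in the text: over an empty domain `models(sigma, set())` would declare every universally quantified sentence true vacuously.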