This web page gives a little more detail on the syntactic nature of the definition of $\mathrm{BT}(X)$, the set of boolean terms over letters from $X$, given in Chapter 6 of The Mathematics of Logic.
We also give the unique readability theorem for such terms
and explain how definitions by induction can be made on
terms. Much of the material here consists of very precise
(almost pedantic) constructions concerning strings of
symbols, and the theorems proved here about them are
in some sense obvious. The material here is aimed at
readers who really need or want to see the full story with
all the gory details.
In Definition 6.1 the set $\mathrm{BT}(X)$ is defined. The essential ingredients of this definition are that the symbols $\top$, $\perp$, $\neg$, $\wedge$, $\vee$ and the parentheses are all distinct from each other and from the letters in $X$ (this can always be arranged by renaming, taking letters such as $\{x_n : n \in \mathbb{N}\}$). In the sequel we will always assume that this is the case.
Here then is the full definition of $\mathrm{BT}\left(X\right)$, making Definition 6.1 more precise.
Definition.
A string of symbols $\sigma$ is in $\mathrm{BT}(X)$ if there is a finite sequence $\sigma_0, \sigma_1, \dots, \sigma_n$ of strings of symbols, such that $\sigma_n$ is the string $\sigma$ and for each $i \leqslant n$ one of the following holds.

1. $\sigma_i$ is the one-symbol string $\top$.
2. $\sigma_i$ is the one-symbol string $\perp$.
3. $\sigma_i$ is the one-symbol string $x$, for some letter $x \in X$.
4. $\sigma_i$ is the string $\neg \sigma_j$, for some $j < i$.
5. $\sigma_i$ is the string $(\sigma_j \wedge \sigma_k)$, for some $j, k < i$.
6. $\sigma_i$ is the string $(\sigma_j \vee \sigma_k)$, for some $j, k < i$.
Definition.
A finite sequence $\sigma_0, \sigma_1, \dots, \sigma_n$ of strings of symbols such that for each $i \leqslant n$ one of the six conditions above holds is called a construction sequence for the boolean term $\sigma_n$.
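The six clauses translate directly into operations on strings. The following Python sketch (my own illustration, not from the book) checks whether a given sequence of strings is a construction sequence; it uses the ASCII stand-ins `T`, `F`, `~`, `&`, `|` for $\top$, $\perp$, $\neg$, $\wedge$, $\vee$, and assumes letters are single characters distinct from these symbols.

```python
def is_construction_sequence(seq, letters):
    """Check that seq is a construction sequence: each entry is licensed
    by one of the six clauses, referring only to strictly earlier entries.
    ASCII stand-ins: T, F for the constants, ~ for negation, & and | for
    the two binary operators; letters are single characters."""
    for i, s in enumerate(seq):
        earlier = seq[:i]
        ok = (
            s == "T"                                  # clause 1
            or s == "F"                               # clause 2
            or s in letters                           # clause 3
            or any(s == "~" + t for t in earlier)     # clause 4
            or any(s == "(" + t + "&" + u + ")"       # clause 5
                   for t in earlier for u in earlier)
            or any(s == "(" + t + "|" + u + ")"       # clause 6
                   for t in earlier for u in earlier)
        )
        if not ok:
            return False
    return True
```

For instance, `["x", "y", "~x", "(~x&y)"]` is a construction sequence for the term $(\neg x \wedge y)$, while `["(x&y)"]` alone is not, since its subterms do not appear earlier in the sequence.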
Exercise.
Rewrite the above definitions so that they are given by a formal system (which you must specify), so that a boolean term over $X$ is a theorem of the system and a construction sequence is essentially a proof in the formal system.
All syntactical definitions by induction in the book can be re-written more precisely in this sort of form, expressing the existence of a finite sequence of subexpressions, or by introducing other secondary formal systems.
We can use this formal definition now to prove statements that might have seemed obvious but nevertheless deserve a proof in a highly rigorous treatment.
Example.
A construction sequence of length $1$ consists of a single string $\sigma_0$, which must be a string of a single symbol. That symbol may be $\top$, $\perp$, or an element of $X$.
Proposition.
Each $\sigma \in \mathrm{BT}\left(X\right)$ is a string of finitely many symbols.
Proof.
By induction.
Subproof.
Our induction hypothesis $H\left(n\right)$ is the statement that if ${\sigma}_{0},{\sigma}_{1},\dots ,{\sigma}_{n}$ is a construction sequence then ${\sigma}_{n}$ is a string of finitely many symbols.
$H(0)$ is true by the preceding example. If we could prove $H(n)$ for all $n$ then we would have proved the proposition, for if $\sigma \in \mathrm{BT}(X)$ there is a construction sequence $\sigma_0, \sigma_1, \dots, \sigma_n$ for it, for some $n$, so by $H(n)$ there are finitely many symbols in $\sigma$.
The induction step, saying that if $H\left(k\right)$ holds for all $k<n$ then $H\left(n\right)$ holds, is proved by looking at the six individual clauses in the definition of construction sequence. Note that if ${\sigma}_{0},{\sigma}_{1},\dots ,{\sigma}_{n}$ is a construction sequence then for each $i<n$, ${\sigma}_{0},{\sigma}_{1},\dots ,{\sigma}_{i}$ is also a construction sequence. So by $H\left(i\right)$ for $i<n$ we may assume that each ${\sigma}_{i}$ is finite. It follows by looking at the individual clauses that ${\sigma}_{n}$ must be finite too.
Unique readability is the proposition that we chose a good method for encoding terms as strings. For example, if we had omitted the brackets, writing $(\alpha \wedge \beta)$ simply as $\alpha \wedge \beta$ and $(\alpha \vee \beta)$ simply as $\alpha \vee \beta$, the resulting encoding would not satisfy unique readability, as $\neg \alpha \wedge \beta \vee \gamma$ could variously be read as $(\neg \alpha) \wedge (\beta \vee \gamma)$, $\neg(\alpha \wedge (\beta \vee \gamma))$, $(\neg(\alpha \wedge \beta)) \vee \gamma$, $\neg((\alpha \wedge \beta) \vee \gamma)$, etc.
Thus unique readability is a theorem about correctness of encodings of a natural mathematical idea (terms and expressions) into the realm of strings of symbols. Most of the work is done by the following technical proposition.
Proposition.
Suppose $\alpha $ and $\beta $ are boolean terms over a set $X$ and that $\alpha $ is an initial part of $\beta $, that is $\beta $ is the concatenation $\alpha \gamma $ for some string $\gamma $. Then in fact $\alpha $ and $\beta $ have the same length and are equal as strings.
Proof.
By induction on the length of construction sequences. We assume we are given $n\in \mathbb{N}$ with the property that whenever $\alpha $ and $\beta $ are boolean terms over a set $X$ both with construction sequences of length at most $n$ and $\alpha $ is an initial part of $\beta $ then $\alpha =\beta $. Observe that this is true when $n=1$ since in this case both $\alpha $ and $\beta $ are $\top $, $\perp $, or an element of $X$, and so if $\alpha $ is an initial part of $\beta $ and both have length $1$ then $\alpha =\beta $.
For the induction step we suppose that $\alpha$ is an initial part of $\beta$, both having construction sequences of length at most $n+1$. We look at the first symbol in $\alpha$, which is also, necessarily, the first symbol in $\beta$.
If this symbol is $\top $, $\perp $ or $x\in X$ then both $\alpha $ and $\beta $ are one-symbol strings, as the only clauses in the construction allowing these as the first symbol are the first three, and these construct single symbol strings. Thus $\alpha =\beta $ in this case.
If the first symbol in $\alpha $ is $\neg $ then $\alpha =\neg \gamma $ for some $\gamma $ with construction sequence of length at most $n$ as this is the only possible construction clause. Similarly $\beta =\neg \delta $ for some $\delta $ with construction sequence of length at most $n$. Also as $\alpha $ is an initial part of $\beta $, $\gamma $ is also an initial part of $\delta $, so $\gamma =\delta $ by our induction hypothesis and hence $\alpha =\beta $.
If the first symbol in $\alpha$ is an open parenthesis then there are two possibilities: either $\alpha$ is $(\gamma_1 \wedge \gamma_2)$ or $\alpha$ is $(\gamma_1 \vee \gamma_2)$. Similarly $\beta$ is $(\delta_1 * \delta_2)$ where $*$ is $\wedge$ or $\vee$. This means that $\gamma_1$ is an initial part of the string $\delta_1 * \delta_2$, and hence either $\gamma_1$ is an initial part of $\delta_1$ or $\delta_1$ is an initial part of $\gamma_1$. Therefore by our induction hypothesis $\gamma_1 = \delta_1$. Now, comparing the symbols of $\alpha$ and $\beta$ following this common initial part, we see that the operator of $\alpha$ is the same operator $*$, i.e. $\alpha$ is $(\gamma_1 * \gamma_2)$, which is an initial part of $\beta = (\delta_1 * \delta_2)$. So $\gamma_2$ is an initial part of $\delta_2$, and hence by the induction hypothesis they are equal. This shows $\alpha = \beta$ in this case too.
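The proposition can also be checked empirically on small cases. The following Python sketch (my own illustration, not from the book, again using the ASCII stand-ins `T`, `F`, `~`, `&`, `|` and single-character letters) enumerates every term buildable in two rounds of the clauses and confirms that none is a proper initial part of another.

```python
def terms_up_to(rounds, letters):
    """All boolean terms obtainable from T, F and the letters by at most
    `rounds` applications of the negation and binary clauses.
    ASCII stand-ins: T, F for the constants, ~ for negation, & and | for
    the binary operators; letters are single characters distinct from these."""
    terms = {"T", "F", *letters}
    for _ in range(rounds):
        new = set(terms)
        new |= {"~" + t for t in terms}
        new |= {"(" + a + op + b + ")"
                for a in terms for b in terms for op in "&|"}
        terms = new
    return terms

ts = terms_up_to(2, {"x"})

# The proposition predicts there are no proper prefixes among distinct terms.
violations = [(a, b) for a in ts for b in ts if a != b and b.startswith(a)]
```

On this sample of some 1200 terms, `violations` comes out empty, as the proposition requires.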
Proposition.
Suppose $\sigma \in \mathrm{BT}(X)$. Then exactly one of the following holds.

1. $\sigma$ is the one-symbol string $\top$.
2. $\sigma$ is the one-symbol string $\perp$.
3. $\sigma$ is the one-symbol string $x$, for some $x \in X$.
4. $\sigma$ is $\neg \alpha$, for some $\alpha \in \mathrm{BT}(X)$.
5. $\sigma$ is $(\alpha \wedge \beta)$, for some $\alpha, \beta \in \mathrm{BT}(X)$.
6. $\sigma$ is $(\alpha \vee \beta)$, for some $\alpha, \beta \in \mathrm{BT}(X)$.
Moreover, in the last three cases, the strings $\alpha ,\beta $ are uniquely determined from $\sigma $.
Proof.
Case-by-case, using the previous proposition.
The importance of unique readability is that it enables new definitions by induction on the way a boolean term $\sigma$ is built up. One hugely important example of such a definition is given in Chapter 7, page 80, where a valuation, a function $f : X \to B$, is extended to a function $\mathrm{BT}(X) \to B$. If a boolean term could be ambiguously read in two or more different ways, the definition given would not be sound and the whole theory would collapse.
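As an illustration of such a definition by recursion, here is a Python sketch (my own, not from the book, with $B$ taken to be the truth values and the usual ASCII stand-ins `T`, `F`, `~`, `&`, `|` and single-character letters) extending a valuation $f$ to all boolean terms. Unique readability is exactly what guarantees that the "main operator" located below is unambiguous, so the function is well defined.

```python
def evaluate(term, f):
    """Extend a valuation f (a dict from single-character letters to bools)
    to all boolean terms, by recursion on how the term is built up.
    ASCII stand-ins: T, F for the constants, ~ for negation, & and | for
    the two binary operators."""
    if term == "T":
        return True
    if term == "F":
        return False
    if term in f:                 # a letter: use the valuation directly
        return f[term]
    if term.startswith("~"):      # term is ~alpha
        return not evaluate(term[1:], f)
    # Otherwise term is (alpha & beta) or (alpha | beta).  By unique
    # readability the main operator is the unique & or | occurring at
    # bracket depth 1, so the split below is well defined.
    depth = 0
    for i, c in enumerate(term):
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif c in "&|" and depth == 1:
            a = evaluate(term[1:i], f)
            b = evaluate(term[i + 1:-1], f)
            return (a and b) if c == "&" else (a or b)
    raise ValueError("not a boolean term: " + term)
```

Without unique readability, different splits of the same string could yield different truth values, and `evaluate` would not define a function at all.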
Exercise.
Let $L$ be a first order language. Assume that the symbols of $L$, especially the variables, constant symbols, function symbols and relation symbols are all distinct. State and prove a unique readability theorem for terms of $L$.
Exercise.
With $L$ as in the previous exercise, state and prove a unique readability theorem for formulas of $L$.