# Why do we need to do analysis rigorously?

## 1. Introduction

At this stage in your mathematical career you have seen a lot of mathematics and seen how useful it can be, but it may come as a surprise if you stop to think about how a simple function such as $x^y$ is defined, in other words what this expression really means for real numbers $x$ and $y$. You may have wondered about such thorny problems as what $0^0$ is. (Is it $1$, since $x^0 = 1$ for all $x$? Or is it $0$, since $0^y = 0$ for all $y$?)

Almost certainly, at least if you are a beginning first-year maths student, you will never have seen a proper definition of $x^y$. (You probably have not heard a good explanation of what it means to be a real number either.) The reason is that, although it is possible to give a perfectly good definition, the issues are much more complicated than can be explained at school level. And to resolve delicate conundrums such as $0^0$ we need to be clear about what we mean.

One method is to appeal to the fact that we know how to compute $x^y$ when $y = k/n$ is a rational number, since $x^{k/n}$ is the $n$th root of the integer power $x^k$. Now all we need to do is find a sequence of rationals $q_n$ that converges to $y$ and define $x^y$ to be the limit of $x^{q_n}$. Of course we need to do a bit more than this: we must also check that our definition is a good one, in that it can never give contradictory answers, but at least we have an approach to the problem.
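This first method can even be sketched numerically. The Python fragment below is my own illustration (the function names are invented, not from these notes): it computes $x^{k/n}$ using only integer powers and a bisection $n$th root, then feeds in rational exponents $q_n$ converging to $\sqrt{2}$.

```python
from fractions import Fraction

def nth_root(a, n, tol=1e-12):
    """Bisection for the n-th root of a >= 1, using only integer powers."""
    lo, hi = 1.0, 2.0
    while hi ** n < a:        # grow the bracket; hi**n stays below 2**n * a
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** n < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def rational_power(x, q):
    """x**(k/n) for a rational q = k/n: the n-th root of the integer power x**k."""
    k, n = q.numerator, q.denominator
    return nth_root(x ** k, n)

# Rational approximations q_n -> sqrt(2); the values approach
# 2**sqrt(2) = 2.66514...
for q in [Fraction(1), Fraction(3, 2), Fraction(7, 5), Fraction(17, 12), Fraction(41, 29)]:
    print(q, rational_power(2.0, q))
```

Checking that different sequences $q_n \to y$ always give the same limit is exactly the "bit more" work mentioned above.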

An alternative method is to define $e^x$ via its familiar power series, define $\log x$ as the inverse function, and then define $x^y = e^{y \log x}$. Both of these methods, it will be seen, involve some kind of limiting process (in one case a sequence of rationals converging to $y$, in the other an infinite series that must be summed), and that is what this course is going to be about.
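The second method can be sketched in a few lines too. This is again my own illustration, not part of the notes: the series for $e^x$ is truncated, and the bisection inverse is only a crude stand-in for a proper definition of $\log$.

```python
def exp_series(x, terms=150):
    """Truncate the power series e**x = sum over n >= 0 of x**n / n!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # next term: x**(n+1) / (n+1)!
    return total

def log_inverse(a, tol=1e-12):
    """Invert exp_series by bisection (a sketch; fine for moderate a > 0)."""
    lo, hi = -30.0, 30.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if exp_series(mid) < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power(x, y):
    """The second definition: x**y = e**(y * log x)."""
    return exp_series(y * log_inverse(x))

print(power(2.0, 0.5))  # close to sqrt(2) = 1.41421...
```

Note that both sketches quietly rely on limits existing, which is precisely what the course will pin down.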

Both methods seem pretty difficult, but they at least look as though they might work. It would be a good idea to try them both out and to prove that they give equivalent definitions of $x^y$, and that also seems pretty difficult. But it is very important if we are going to have any confidence in our $x^y$ function and use it much. (Of course, we do use this function a lot!)

## 2. Problems with infinite series

One of the main goals of this course is to explain and use infinite series such as

$\sum_{n=1}^{\infty} a_n$

rigorously.

You may have been using such series already, so the question I address now is: why should we need to learn about series all over again? The point will be that considerable care (rigour) is really needed, and I hope to explain why through some examples.

### 1. Geometric series

The series

$A = \sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots$

should be familiar, and you should know its sum. In the past people have worried about even this series. In fact, going back to the Ancient Greeks, it is the series behind Zeno's Paradox, which says: to go from X to Y, I first have to go half the distance. Then, having got halfway, I must travel half of the remaining distance. After this I must travel half of the distance still remaining, and after this... This sentence never stops, so it seems that I will never complete all the tasks required to get to Y!

The fact that motion is possible should show you that it is possible to sum this series, and as you know $A = 1$.
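There is no real mystery here, and a one-line check (my illustration, not from the notes) confirms it: after $n$ terms the partial sum is exactly $1 - 2^{-n}$, which approaches $1$.

```python
# Partial sums of 1/2 + 1/4 + ... + 1/2**n; each equals exactly 1 - 2**-n,
# so they approach 1, the sum of the series.
for n in [1, 2, 5, 10, 20]:
    s = sum(1 / 2 ** k for k in range(1, n + 1))
    print(n, s, 1 - 2 ** -n)
```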

Before we go any further, you should take careful note of the notation for such a series. It is $\sum_{n=1}^{\infty} \frac{1}{2^n}$, and the variable $n$ is the one that does the work: it is not equal to any particular integer, but ranges over all possible positive integers at once. Beginners often fix on a single possible value for $n$, perhaps because an example is easier to think about than the full range of values, but you should by now be able to think in terms of the set of all positive integers in one go. The variable $n$ is often called a dummy variable, because its name does not matter. It could have been $m$ or $k$ or something else. The series $\sum_{r=1}^{\infty} \frac{1}{2^r}$ is just the same as $\sum_{n=1}^{\infty} \frac{1}{2^n}$.

The last series might have worried the Ancient Greeks but it didn't tax us very much. Others can be more problematic.

### 2. The series $\sum_{n=1}^{\infty} \frac{1}{n^2}$

For example, consider

$B = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{1}{1} + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$

What is the sum?

This one is harder to see straight away. One approach might be to try to compute a few terms on a calculator or computer, but unfortunately we need a lot of terms before things seem to settle down. I wrote a computer program to compute this sum. After 100000 terms, it seems that the series settles down at around 1.64492..., but the sum is still increasing and I can't be sure which of the digits after the decimal point are actually correct.
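The original Java program and its output are not reproduced here, but the experiment is easy to repeat. This Python stand-in (mine, not the author's program) prints partial sums of $B$; the limit is in fact $\pi^2/6 = 1.64493\ldots$, though proving that is quite another story.

```python
# Partial sums of B = sum of 1/n**2; they creep up towards 1.64493...
def partial_sum(terms):
    return sum(1 / n ** 2 for n in range(1, terms + 1))

for terms in [10, 100, 1000, 100000]:
    print(terms, partial_sum(terms))
```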

Here is a sketch of an argument that shows the answer should be 1-point-something.

First note that for $n \geqslant 2$ we have

$\frac{1}{n^2} < \frac{1}{n(n-1)}$

since $\frac{1}{n} < \frac{1}{n-1}$. (You should be clear why I only said this was true for $n \geqslant 2$. What goes wrong for $n = 1$?) Thus

$B = \sum_{n=1}^{\infty} \frac{1}{n^2} < \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \cdots = 1 + \sum_{n=2}^{\infty} \frac{1}{n(n-1)}$

But

$\frac{1}{n(n-1)} = \frac{1}{n-1} - \frac{1}{n}$

as you can check by putting this over a common denominator, so

$B < 1 + \sum_{n=2}^{\infty} \frac{1}{n(n-1)} = 1 + \left(\frac{1}{1} - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + \left(\frac{1}{3} - \frac{1}{4}\right) + \left(\frac{1}{4} - \frac{1}{5}\right) + \cdots = 2$

since on removing the brackets adjacent terms cancel.
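The telescoping step can be checked with exact rational arithmetic. In this sketch (my addition) the partial sums of $\sum_{n \geqslant 2} \frac{1}{n(n-1)}$ collapse to exactly $1 - \frac{1}{N}$, which approaches $1$, giving $B < 1 + 1 = 2$.

```python
from fractions import Fraction

# The finite sum over n = 2..N of 1/(n(n-1)) telescopes to exactly 1 - 1/N.
def telescoped(N):
    return sum(Fraction(1, n * (n - 1)) for n in range(2, N + 1))

for N in [5, 50, 500]:
    assert telescoped(N) == 1 - Fraction(1, N)
print(float(telescoped(500)))  # 0.998, approaching 1
```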

This seems to be OK, and happily agrees with our computer experiment. But in one of the next few examples we shall see that moving terms and brackets about inside a big infinite summation without thinking carefully can lead to some difficulties.

### 3. The harmonic series

For another example, let

$C = \sum_{n=1}^{\infty} \frac{1}{n}$

This example is rather special and will come up again later. It is called the harmonic series. It seems to converge much more slowly than the last one, and I couldn't find any tricks to sum it so once again I had to resort to using the computer.

I ran the same sort of program on this series. According to its output, after rather a lot of terms it seems to be settling down. But you can repeat the computation yourself and see if you believe me.
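Here is such a computation (my Python sketch, not the original program). Notice that each tenfold increase in the number of terms adds roughly the same amount, about $2.3$, to the sum, which might make you doubt that it is really settling down at all.

```python
# Partial sums of the harmonic series.  They grow very slowly, which makes
# them *look* as though they settle down, but each row below is about 2.3
# larger than the last.
def H(terms):
    return sum(1 / n for n in range(1, terms + 1))

for terms in [100, 1000, 10000, 100000]:
    print(terms, H(terms))
```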

### 4. A nonconvergent series

The following series is defined in a rather similar sort of way. You might be able to spot its sum.

$D = \sum_{n=0}^{\infty} (-1)^n = 1 - 1 + 1 - 1 + 1 - 1 + \cdots$

By bracketing and cancelling terms just like before we get

$D = (1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0$

But by bracketing in a different way we have

$D = 1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + \cdots = 1$

So we obtain the somewhat disconcerting news that $0 = 1$. What has gone wrong?
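The partial sums of $D$ show what is going on: they oscillate forever between $1$ and $0$, so there is no single value for them to settle on. A quick check (my illustration):

```python
# Partial sums of 1 - 1 + 1 - 1 + ... flip between 1 and 0 forever,
# so there is no single number the series can reasonably be said to sum to.
def partial_sum(terms):
    return sum((-1) ** n for n in range(terms))

print([partial_sum(k) for k in range(1, 9)])  # 1, 0, 1, 0, ...
```

The bracketed versions above are really two *different* series of bracketed terms, which is why they can have different sums.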

### 5. An alternating series

Finally, consider

$E = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots$

For this one too it is difficult to spot what the sum is, but by bracketing the terms in pairs we can quickly see that it must be greater than a half:

$E = \left(1 - \frac{1}{2}\right) + \left(\frac{1}{3} - \frac{1}{4}\right) + \left(\frac{1}{5} - \frac{1}{6}\right) + \cdots > \frac{1}{2}$

since the first bracket is a half and the remaining brackets are all positive.

Now let's bracket the series in a different way. First write

$E = \left(1 - \frac{1}{2} - \frac{1}{4}\right) + \left(\frac{1}{3} - \frac{1}{6} - \frac{1}{8}\right) + \left(\frac{1}{5} - \frac{1}{10} - \frac{1}{12}\right) + \cdots$

Strictly speaking, this is both a re-arrangement of the order of terms and a re-bracketing of the terms. But all the terms that were originally in the series are still there and we haven't duplicated any terms so it really should be the same number and we should be all right.

I am now going to compute $E - E$ by subtracting the bracketed terms of the new arrangement from the corresponding bracketed terms of the old one. Since both represent the same series, we should end up with zero.

For the difference of the first terms of each we get

$\left(1 - \frac{1}{2}\right) - \left(1 - \frac{1}{2} - \frac{1}{4}\right) = \frac{1}{4}$

and

$\left(\frac{1}{3} - \frac{1}{4}\right) - \left(\frac{1}{3} - \frac{1}{6} - \frac{1}{8}\right) > 0$

for the second term, and

$\left(\frac{1}{5} - \frac{1}{6}\right) - \left(\frac{1}{5} - \frac{1}{10} - \frac{1}{12}\right) > 0$

for the next term, and so on. A quarter plus a load of positive numbers is greater than a quarter, so putting all this together we obtain

$0 = E - E > \frac{1}{4}$

which certainly doesn't look right...
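A numerical experiment (my addition, not part of the notes) makes the situation vivid: summing exactly the same terms in the two different orders gives partial sums that appear to approach two *different* values. In fact the original order converges to $\log 2$ and the rearrangement to $\frac{1}{2} \log 2$, though proving this needs the theory developed later. The two "representations" of $E$ were never the same number, so the subtraction above was doomed.

```python
# The alternating harmonic series summed in its original order...
def original(groups):
    return sum((-1) ** (n + 1) / n for n in range(1, 2 * groups + 1))

# ...and in the rearranged order 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
def rearranged(groups):
    return sum(1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
               for k in range(1, groups + 1))

print(original(100000))    # about 0.69315, i.e. log(2)
print(rearranged(100000))  # about 0.34657, i.e. log(2)/2
```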

## 3. Conclusion

We need limiting processes, such as limits of sequences and infinite series, to define many common mathematical functions including $x^y$. However, a naive approach to them can lead to problems, paradoxes, or inconsistencies. It would be a great shame to spend years on our mathematics only to be shown it is wrong, so the only possible approach is to analyse these ideas and arguments rigorously and determine which parts are right and which are not. (This means giving proofs of all the theorems and assertions we need.) That way we will have a powerful mathematical theory that can be relied on in further calculations and theorems.