At this stage in your mathematical career you have seen a lot of mathematics and seen how useful it can be, but it may come as a surprise if you stop to think about how a function as simple as ${x}^{y}$ is defined—in other words, what this expression really means for real numbers $x$ and $y$. You may have wondered about such thorny problems as what ${0}^{0}$ is. (Is it $1$, since ${x}^{0}=1$ for all $x$? Or is it $0$, since ${0}^{y}=0$ for all $y$?)

Almost certainly, at least if you are a beginning first-year maths student, you will not have seen a proper definition of ${x}^{y}$. (You probably have not heard a good explanation of what it means to be a real number either.) The reason is that, although it is possible to give a perfectly good definition, the issues are much more complicated than is possible to explain at school level. And to resolve delicate conundrums such as ${0}^{0}$ we need to be clear about what we mean.

One method is to appeal to the fact that we know how to compute
${x}^{y}$ when $y=k/n$ is a *rational* number, since
${x}^{k/n}$ is the $n$th root of the integer
power ${x}^{k}$ (at least for positive $x$). Now all we need to do is find a sequence
of rationals ${q}_{n}$ that converges to $y$ and define
${x}^{y}$ to be the limit of ${x}^{{q}_{n}}$.
Of course we need to do a bit more than this: we also need to check
that our definition is a good one, in that the limit always exists and
does not depend on which sequence of rationals we happened to choose,
so that it can't ever give contradictory answers. But at least we have
an approach to the problem.

An alternative method
is to define ${e}^{x}$ via its familiar power series,
define $\log x$ as the inverse function, and then define
${x}^{y}={e}^{y\log x}$. Both these methods, it will
be seen, involve some kind of *limiting process*—in one case
a sequence of rationals converging to $y$ and in the other an
infinite series that has to be summed—and that is what this
course is going to be about.
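The second method can also be sketched in a few lines of Python. This is my own illustration, with my own choices of names and of how many terms to sum: `exp_series` sums a truncated power series for ${e}^{t}$, and `log_inverse` inverts it by bisection.

```python
def exp_series(t, terms=60):
    """e**t via the power series 1 + t + t**2/2! + t**3/3! + ..."""
    total, term = 0.0, 1.0
    for n in range(1, terms + 1):
        total += term
        term *= t / n            # turns t**(n-1)/(n-1)! into t**n/n!
    return total

def log_inverse(a, tol=1e-12):
    """log a, defined as the inverse of exp: the t with exp_series(t) == a."""
    lo, hi = -50.0, 50.0         # assumes log a lies in this range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if exp_series(mid) < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power(x, y):
    """x**y defined as e**(y * log x), for x > 0."""
    return exp_series(y * log_inverse(x))

print(power(2.0, 0.5))           # should be close to sqrt(2) = 1.41421...
```

Notice that limiting processes appear twice here: the series is cut off after finitely many terms, and the bisection is cut off at a finite tolerance, so every number this code produces is really an approximation to a limit.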

Both methods seem pretty difficult, but they at least
look like they might be made to work. It would also be a good idea
to try them both out and *prove* that they give equivalent
definitions of ${x}^{y}$, and that too seems pretty difficult.
But it is very important if we are going to have any confidence in our
${x}^{y}$ function whenever we use it. (And of course, we do use
this function a lot!)

One of the main goals of this course is to explain and use *infinite
series* such as
$$\sum _{n=1}^{\infty}\frac{1}{{2}^{n}}=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots$$
rigorously.

You may have been using such series already, so the question I address now
is: why should we need to learn about series all over again?

The point will be that considerable care
(rigour) is really needed, and I hope to explain why through some
examples.

The series
$$A=\sum _{n=1}^{\infty}\frac{1}{{2}^{n}}=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots$$
should be familiar, and you should know its sum. In the past
people have worried about even this series. In fact, going back to the
people have worried about even this series. In fact, going back to the
Ancient Greeks, it is the series behind *Zeno's Paradox* which
says, To go from X to Y, I first have to go half the distance.
Then having got halfway, I must travel half of the remaining distance.
After this I must travel half of the distance still remaining,
and after this... This sentence doesn't stop, so it seems that
I will never complete all the tasks required to get to Y!

The fact that motion is possible should show you that it is possible to sum this series, and as you know $A=1$.
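If you want to see this numerically, a few lines of Python (my own illustration, not part of the notes) show the partial sums closing in on $1$; the partial sum up to $N$ equals $1-1/{2}^{N}$.

```python
# Partial sums of 1/2 + 1/4 + ... + 1/2**N, compared with 1 - 1/2**N
for N in (1, 2, 5, 10, 20):
    partial = sum(1 / 2 ** n for n in range(1, N + 1))
    print(N, partial, 1 - 1 / 2 ** N)   # the last two columns agree
```

Each extra term halves the distance left to $1$, which is exactly Zeno's description of the journey.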

Before we go any further, you should take careful note of the
notation for such a series. It is
$\sum _{n=1}^{\infty}\frac{1}{{2}^{n}}$, and
the variable $n$ is the variable that does the work.
$n$ is not equal to any particular integer, but ranges
over all possible positive integers at once. Beginners often
fix on a single possible value for $n$, perhaps because it
is easier to think of an example than of the full range of
values, but you should by now be able to think in terms
of the set of *all* positive integers in one go. The variable
$n$ is often called a dummy variable, because its name doesn't
matter: it could have been $m$ or $k$ or something else.
The series $\sum _{r=1}^{\infty}\frac{1}{{2}^{r}}$ is
just the same as
$\sum _{n=1}^{\infty}\frac{1}{{2}^{n}}$.

The last series might have worried the Ancient Greeks but it didn't tax us very much. Others can be more problematic.

For example, consider
$$B=\sum _{n=1}^{\infty}\frac{1}{{n}^{2}}=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\cdots.$$
What is the sum?

This one is harder to see straight away. One approach might be to try to compute a few terms on a calculator or computer. Unfortunately we need a lot of terms before things seem like they are going to settle down. I wrote a computer program to compute this sum. The answers it gave are available here and (for those who know java) the program I used is here. After doing 100000 terms, it seems that the series settles down at around 1.64492..., but the sum is still increasing and I can't be sure which of the digits after the decimal point are actually correct.
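For anyone who wants to repeat the experiment without the java program, a direct Python version of the same idea (mine, using the same 100000-term cutoff) is only a few lines:

```python
# Sum the first 100000 terms of 1/1 + 1/4 + 1/9 + ...
s = 0.0
for n in range(1, 100001):
    s += 1.0 / (n * n)
print(s)   # about 1.64492..., and still creeping upwards
```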

Here is a sketch of an argument that shows the answer should be 1-point-something.

First note that for $n\geqslant 2$ we have
$$\frac{1}{{n}^{2}}<\frac{1}{n(n-1)}$$
since $\frac{1}{n}<\frac{1}{n-1}$. (You should be clear why I only said this was true for $n\geqslant 2$. What goes wrong for $n=1$?) Thus
$$\sum _{n=2}^{\infty}\frac{1}{{n}^{2}}<\sum _{n=2}^{\infty}\frac{1}{n(n-1)}.$$
But
$$\frac{1}{n(n-1)}=\frac{1}{n-1}-\frac{1}{n},$$
as you can check by putting the right-hand side over a common denominator, so
$$\sum _{n=2}^{\infty}\frac{1}{n(n-1)}=\left(1-\frac{1}{2}\right)+\left(\frac{1}{2}-\frac{1}{3}\right)+\left(\frac{1}{3}-\frac{1}{4}\right)+\cdots=1,$$
since on removing the brackets adjacent terms cancel. Adding back the first term $\frac{1}{{1}^{2}}=1$, the whole sum is greater than $1$ but less than $1+1=2$: in other words, 1-point-something.

This seems to be OK, and happily agrees with our computer experiment. But in one of the next few examples we shall see that moving terms and brackets about inside a big infinite summation without thinking carefully can lead to some difficulties.
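The telescoping step is easy to double-check with exact rational arithmetic. This little check of mine (not part of the argument) confirms that the partial sums of $\sum \frac{1}{n(n-1)}$ from $n=2$ to $N$ come out as exactly $1-\frac{1}{N}$:

```python
from fractions import Fraction

# Partial sums of 1/(n(n-1)) from n = 2 to N telescope to 1 - 1/N
for N in (2, 5, 10, 100):
    partial = sum(Fraction(1, n * (n - 1)) for n in range(2, N + 1))
    assert partial == 1 - Fraction(1, N)
print("telescoping checks out")
```

Using `Fraction` keeps everything exact, so the equality really is an equality and not a floating-point coincidence.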

For another example, let
$$C=\sum _{n=1}^{\infty}\frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots.$$
This example is rather special and will come up again later. It is called the harmonic series. It seems to converge much more slowly than the last one, and I couldn't find any tricks to sum it, so once again I had to resort to using the computer.

The computer program is here and here is its output. According to this data, after rather a lot of terms it seems to be settling down. But you can inspect the computer output too and see if you believe me.
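You can also run your own version of the experiment. This short Python loop (my own, standing in for the java program) prints the partial sums after $10, 100, 1000, \ldots$ terms, so you can judge for yourself whether the series is settling down:

```python
# Partial sums of the harmonic series 1 + 1/2 + 1/3 + ...
s, mark = 0.0, 10
for n in range(1, 1000001):
    s += 1.0 / n
    if n == mark:
        print(n, s)     # partial sum after n terms
        mark *= 10
```

Look carefully at how much the sum grows each time the number of terms is multiplied by ten.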

The following series is defined in a rather similar sort of way. You might be able to spot its sum.
$$D=1-1+1-1+1-1+\cdots$$
By bracketing and cancelling terms just like before we get
$$D=(1-1)+(1-1)+(1-1)+\cdots=0+0+0+\cdots=0.$$
But by bracketing in a different way we have
$$D=1+(-1+1)+(-1+1)+\cdots=1+0+0+\cdots=1.$$
So we obtain the somewhat disconcerting news that $0=1$. What has gone wrong?
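One way to investigate is to look at the partial sums. A quick check in Python (my own) shows them hopping between $1$ and $0$ forever, never settling on either value:

```python
# Partial sums of 1 - 1 + 1 - 1 + ...
partials = []
s = 0
for n in range(1, 11):
    s += (-1) ** (n + 1)    # terms are +1, -1, +1, -1, ...
    partials.append(s)
print(partials)             # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```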

Finally, consider
$$E=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\cdots.$$
For this one, too, it is difficult to spot what the sum is, but by bracketing we can quickly see that it must be greater than a half:
$$E=\left(1-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{4}\right)+\left(\frac{1}{5}-\frac{1}{6}\right)+\cdots>\frac{1}{2},$$
since the first bracket is a half and the remaining brackets are all positive.

Now let's bracket the series in a different way. First write
$$E=\left(1-\frac{1}{2}-\frac{1}{4}\right)+\left(\frac{1}{3}-\frac{1}{6}-\frac{1}{8}\right)+\left(\frac{1}{5}-\frac{1}{10}-\frac{1}{12}\right)+\cdots.$$
Strictly speaking, this is both a re-arrangement of the order of terms and a re-bracketing of the terms. But all the terms that were originally in the series are still there and we haven't duplicated any terms so it really should be the same number and we should be all right.

I am going to subtract the bracketed terms of the new arrangement from the corresponding bracketed terms of the old arrangement. Since both are the same series, we are just computing $E-E$, so we should end up with zero.

For the difference of the first terms of each we get
$$\left(1-\frac{1}{2}\right)-\left(1-\frac{1}{2}-\frac{1}{4}\right)=\frac{1}{4}$$
and
$$\left(\frac{1}{3}-\frac{1}{4}\right)-\left(\frac{1}{3}-\frac{1}{6}-\frac{1}{8}\right)=\frac{1}{6}+\frac{1}{8}-\frac{1}{4}=\frac{1}{24}$$
for the second term, and
$$\left(\frac{1}{5}-\frac{1}{6}\right)-\left(\frac{1}{5}-\frac{1}{10}-\frac{1}{12}\right)=\frac{1}{10}+\frac{1}{12}-\frac{1}{6}=\frac{1}{60}$$
for the next term, and so on. The sum of $\frac{1}{4}$ plus a load of positive numbers is greater than a quarter, so putting all this together we obtain
$$0=E-E\geqslant \frac{1}{4},$$
which certainly doesn't look right...
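Numerically the mystery only deepens. Summing many terms of the two arrangements in Python (my own experiment, not part of the argument above) suggests they are heading for two different values:

```python
import math

N = 100000
# E = 1 - 1/2 + 1/3 - 1/4 + ...
E = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
# the re-arranged, re-bracketed version:
# (1 - 1/2 - 1/4) + (1/3 - 1/6 - 1/8) + (1/5 - 1/10 - 1/12) + ...
R = sum(1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k) for k in range(1, N + 1))
print(E, R)   # the two partial sums are nowhere near each other
```

So rearranging the terms really does seem to change the answer, even though no term was lost or duplicated; sorting out when this can and cannot happen is one of the things a rigorous theory has to settle.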

We need limiting processes, such as limits of sequences and infinite series, to define many common mathematical functions, including ${x}^{y}$. However, a naive approach to them can lead to problems, paradoxes, or inconsistencies. It would be a great shame to spend years on our mathematics only to be shown it is wrong, so the only possible approach is to analyse these ideas and arguments rigorously and determine which parts are right and which are not. (This means giving proofs of all the theorems and assertions we need.) That way we will have a powerful mathematical theory that can be relied on in further calculations and theorems.