Archive for February, 2014

Differentiating power series

February 22, 2014

I’m writing this post as a way of preparing for a lecture. I want to discuss the result that a power series \sum_{n=0}^\infty a_nz^n is differentiable inside its circle of convergence, and the derivative is given by the obvious formula \sum_{n=1}^\infty na_nz^{n-1}. In other words, inside the circle of convergence we can think of a power series as like a polynomial of degree \infty for the purposes of differentiation.
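Before getting into why this needs proof, here is a quick sanity check (my example, not part of the original post): take the geometric series, where a_n=1 for every n and the radius of convergence is 1.

```latex
% Sanity check with a_n = 1 (radius of convergence 1): for |z| < 1,
\[
  f(z) = \sum_{n=0}^\infty z^n = \frac{1}{1-z},
  \qquad\text{so}\qquad
  f'(z) = \frac{1}{(1-z)^2},
\]
% while differentiating term by term gives
\[
  \sum_{n=1}^\infty n z^{n-1} = \sum_{m=0}^\infty (m+1) z^m = \frac{1}{(1-z)^2},
\]
% which agrees, as the theorem predicts. (The middle sum can be evaluated
% as the square of the geometric series, via the Cauchy product.)
```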

A preliminary question about this is why it is not more or less obvious. After all, writing f(z)=\sum_{n=0}^\infty a_nz^n, we have the following facts.

  1. Writing S_N(z)=\sum_{n=0}^N a_nz^n, we have that S_N(z)\to f(z).
  2. For each N, S_N'(z)=\sum_{n=1}^N na_nz^{n-1}.

If we knew that S_N'(z)\to f'(z), then we would be done.
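Why does that last step need justification at all? Because differentiation does not in general commute with limits of functions, even uniform ones. Here is a standard cautionary example (my illustration, not from the post, and not a power series):

```latex
% Let f_N(x) = \frac{\sin(N^2 x)}{N}. Then f_N \to 0 uniformly on \mathbb{R},
% since |f_N(x)| \le 1/N for all x, but
\[
  f_N'(x) = N \cos(N^2 x),
\]
% which is unbounded as N \to \infty (for instance f_N'(0) = N). So the
% derivatives of a convergent sequence of functions need not converge at
% all, let alone to the derivative of the limit.
```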

Ah, you might be thinking, how do we know that the sequence (S_N'(z)) converges? But it turns out that that is not the problem: it is reasonably straightforward to show that it converges. (Roughly speaking, inside the circle of convergence the series \sum_n a_nz^{n-1} converges at least as fast as a GP, and multiplying the nth term by n doesn’t stop a GP converging, as can easily be seen with the help of the ratio test; I spell this computation out just after the list below.) So, writing g(z) for \sum_{n=1}^\infty na_nz^{n-1}, we have the following facts at our disposal.

  1. S_N(z)\to f(z)
  2. S_N'(z)\to g(z)
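(Here is the promised computation, a standard one; the letters r, M and t are introduced just for this purpose.)

```latex
% Suppose |z| < R, where R is the radius of convergence, and pick r with
% |z| < r < R; set t = |z|/r < 1. Since \sum_n a_n r^n converges, its terms
% are bounded, say |a_n| r^n \le M for every n. Then
\[
  n |a_n| |z|^{n-1} \;\le\; \frac{M}{r}\, n t^{n-1},
\]
% and \sum_n n t^{n-1} converges by the ratio test, since
\[
  \frac{(n+1) t^n}{n t^{n-1}} = \Bigl(1 + \frac{1}{n}\Bigr) t \;\to\; t < 1.
\]
% Hence \sum_n n a_n z^{n-1} converges absolutely whenever |z| < R.
```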

Doesn’t it follow from that that f'(z)=g(z)?
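It is worth recording why this question is not a triviality: for general sequences of functions the answer is no, even when the convergence in 1 is uniform. A standard example (again mine, not the post’s, and again not a power series):

```latex
% Let f_N(x) = \sqrt{x^2 + 1/N} on [-1,1]. Then f_N \to f uniformly, where
% f(x) = |x|, since |f_N(x) - |x|| \le 1/\sqrt{N}. Moreover,
\[
  f_N'(x) = \frac{x}{\sqrt{x^2 + 1/N}} \;\longrightarrow\; g(x)
\]
% pointwise, where g(x) = x/|x| for x \ne 0 and g(0) = 0. Yet f is not even
% differentiable at 0, so f'(0) = g(0) fails. Whatever makes the
% power-series case work must use more than facts 1 and 2 on their own.
```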
(more…)

Recent news concerning the Erdős discrepancy problem

February 11, 2014

I’ve just learnt from a reshare by Kevin O’Bryant of a post by Andrew Sutherland on Google Plus that a paper appeared on the arXiv today with an interesting result about the Erdős discrepancy problem, which was the subject of a Polymath project hosted on this blog four years ago.

The problem is to show that if (\epsilon_n) is an infinite sequence of \pm 1s, then for every C there exist d and m such that \sum_{i=1}^m\epsilon_{id} has modulus at least C. This result is straightforward to prove by an exhaustive search when C=2. One thing that the Polymath project did was to discover several sequences of length 1124 such that no sum has modulus greater than 2, and despite some effort nobody managed to find a longer one. That was enough to convince me that 1124 was the correct bound.
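To make the finite statement concrete, here is a rough sketch (mine, not from the paper or the Polymath project) of a brute-force checker for the maximum modulus of a sum along a homogeneous arithmetic progression; a sequence of length 1160 of the kind described below is one for which this returns 2. The function name max_hap_sum is made up for the illustration.

```python
def max_hap_sum(eps):
    """Return the max over d >= 1 and m >= 1 of |eps_d + eps_{2d} + ... + eps_{md}|.

    eps is a list of +1/-1 values, with eps[0] playing the role of eps_1.
    """
    n = len(eps)
    best = 0
    for d in range(1, n + 1):
        partial = 0
        for i in range(d, n + 1, d):  # i runs over d, 2d, 3d, ... up to n
            partial += eps[i - 1]
            best = max(best, abs(partial))
    return best

# The alternating sequence does badly: along the progression 2, 4, ..., 2m
# every term is -1, so the sums grow without bound as the sequence lengthens.
alternating = [(-1) ** i for i in range(20)]  # +1, -1, +1, -1, ...
print(max_hap_sum(alternating))  # prints 10, achieved with d = 2
```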

However, the new result shows the danger of this kind of empirical evidence. The authors used state-of-the-art SAT solvers to find a sequence of length 1160 with no sum having modulus greater than 2, and also showed that this bound is best possible. Of this second statement, they write the following: “The negative witness, that is, the DRUP unsatisfiability certificate, is probably one of longest proofs of a non-trivial mathematical result ever produced. Its gigantic size is comparable, for example, with the size of the whole Wikipedia, so one may have doubts about to which degree this can be accepted as a proof of a mathematical statement.”

I personally am relaxed about huge computer proofs like this. It is conceivable that the authors made a mistake somewhere, but that is true of conventional proofs as well. The paper is by Boris Konev and Alexei Lisitsa and appears here.

Taylor’s theorem with the Lagrange form of the remainder

February 11, 2014

There are countless situations in mathematics where it helps to expand a function as a power series. Therefore, Taylor’s theorem, which gives us circumstances under which this can be done, is an important result of the course. It is also the one result that I was dreading lecturing, at least with the Lagrange form of the remainder, because in the past I have always found that the proof is one that I have not been able to understand properly. I don’t mean by that that I couldn’t follow the arguments I read. What I mean is that I couldn’t reproduce the proof without committing a couple of things to memory, which I would then forget again once I had presented them. Briefly, an argument that appears in a lot of textbooks uses a result called the Cauchy mean value theorem, and applies it to a cleverly chosen function. Whereas I understand what the mean value theorem is for, I somehow don’t have the same feeling about the Cauchy mean value theorem: it just works in this situation and happens to give the answer one wants. And I don’t see an easy way of predicting in advance what function to plug in.

I have always found this situation annoying, because a part of me said that the result ought to be a straightforward generalization of the mean value theorem, in the following sense. The mean value theorem applied to the interval [x,x+h] tells us that there exists y\in (x,x+h) such that f'(y)=\frac{f(x+h)-f(x)}{h}, and therefore that f(x+h)=f(x)+hf'(y). Writing y=x+\theta h for some \theta\in(0,1), we obtain the statement f(x+h)=f(x)+hf'(x+\theta h). This is the case n=1 of Taylor’s theorem. So can’t we find some kind of “polynomial mean value theorem” that will do the same job for approximating f by polynomials of higher degree?
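For reference, and only as a signpost (the precise hypotheses are spelt out behind the fold), the statement that such a “polynomial mean value theorem” should deliver is Taylor’s theorem with the Lagrange form of the remainder:

```latex
% If f is n times differentiable on an interval containing [x, x+h]
% (precise hypotheses as in the full post), then there exists
% \theta \in (0,1) such that
\[
  f(x+h) = \sum_{k=0}^{n-1} \frac{h^k}{k!}\, f^{(k)}(x)
         + \frac{h^n}{n!}\, f^{(n)}(x + \theta h).
\]
% The case n = 1 is exactly f(x+h) = f(x) + h f'(x+\theta h) above.
```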

Now that I’ve been forced to lecture this result again (for the second time actually — the first was in Princeton about twelve years ago, when I just suffered and memorized the Cauchy mean value theorem approach), I have made a proper effort to explore this question, and have realized that the answer is yes. I’m sure there must be textbooks that do it this way, but the ones I’ve looked at all use the Cauchy mean value theorem. I don’t understand why, since it seems to me that the way of proving the result that I’m about to present makes the whole argument completely transparent. I’m actually looking forward to lecturing it (as I add this sentence to the post, the lecture is about half an hour in the future), since the demands on my memory are going to be close to zero.
(more…)

How to work out proofs in Analysis I

February 3, 2014

Now that we’ve had several results about sequences and series, it seems like a good time to step back a little and discuss how you should go about memorizing their proofs. And the very first thing to say about that is that you should attempt to do this while making as little use of your memory as you possibly can.

Suppose I were to ask you to memorize the sequence 5432187654321. Would you have to learn a string of 13 symbols? No, because after studying the sequence you would see that it is just counting down from 5 and then counting down from 8. What you want is for your memory of a proof to be like that too: you just keep doing the obvious thing, except that from time to time the next step isn’t obvious, so you need to remember it. Even then, the better you can understand why the non-obvious step was in fact sensible, the easier it will be to memorize it, and as you get more experienced you may find that steps that previously seemed clever and non-obvious start to seem like the natural thing to do.

For some reason, Analysis I contains a number of proofs that experienced mathematicians find easy but many beginners find very hard. I want to try in this post to explain why the experienced mathematicians are right: in a rather precise sense, many of these proofs really are easy, in that if you just repeatedly do the obvious thing the proof will more or less write itself. Others are mostly like that, with perhaps one smallish idea needed when the obvious steps run out. And even the hardest ones have easy parts to them.
(more…)