I thought it might be good to have a post where I tried to take stock and assess the chances that Polymath5 will actually result in a solution to the Erdős discrepancy problem. But before I talk about that, I want to make the point, which I have made before, that solving the problem one sets out to solve is not a necessary condition for the attempt to count as worthwhile. There is easily enough material on the blog now for it to be possible to base a publishable paper on it, even if quite a lot of that paper ends up being a combination of a survey about the problem and a report on the interesting phenomena that have become apparent as a result of computations, some of which we understand reasonably well and some of which are still rather mysterious.
I am not saying this in order to hint that I think that it might be time to give up. At the moment it feels to me as though we still have momentum, so my enthusiasm is not flagging. What is mathematical momentum? I might define it as follows: an assault on an open problem has momentum if one’s understanding of the problem is continuing to develop. Right now I feel as though with each passing day I understand things about EDP that I did not understand before — and I presume that applies to everyone else too. And for as long as that is the case, one can never quite be sure that the problem won’t suddenly yield.
After Terry’s reduction of the problem to one about multiplicative functions, we have devoted almost all our attention to them. (Almost, but not quite: Klas and Alec are still trying to find a sequence of discrepancy 2 and length greater than 1124. It looks as though 1124 may be the upper bound though.) At this stage, I think it is fair to say that the main thing we are doing is playing around in multiplicative-function space, trying to get a feel for what it looks like. But this playing around has a serious purpose, as I shall now explain.
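For concreteness, the discrepancy of a $\pm 1$ sequence is the largest absolute value of a sum along a homogeneous arithmetic progression $d, 2d, \ldots, kd$, and "discrepancy 2" means this never exceeds 2. A minimal checker (the function name and example are mine, not the project's actual search code, which does something far cleverer) might look like:

```python
def discrepancy(x):
    """Max over d and k of |x_d + x_{2d} + ... + x_{kd}|
    for a ±1 sequence given as a 0-indexed list (x[0] is x_1)."""
    n = len(x)
    best = 0
    for d in range(1, n + 1):
        s = 0
        for m in range(d, n + 1, d):   # m = d, 2d, 3d, ...
            s += x[m - 1]
            best = max(best, abs(s))
    return best

# The alternating sequence looks promising along d = 1, but along d = 2
# it is constantly -1, so its discrepancy grows linearly:
seq = [(-1) ** (i + 1) for i in range(1, 13)]  # +1, -1, +1, ...
print(discrepancy(seq))  # 6, from the HAP 2, 4, ..., 12
```

Verifying that a candidate sequence of length 1124 really has discrepancy 2 is exactly a call to a checker like this; the hard part is finding the sequence.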
In my EDP4 post, I wrote about how there seem to be some very different kinds of multiplicative functions, which have unbounded partial sums for very different kinds of reasons. On the one hand you have functions like the Liouville function $\lambda$, which behaves fairly like a random function and therefore has partial sums that grow like $n^{1/2+o(1)}$ (if you believe the Riemann hypothesis, and at least that fast unconditionally). On the other, there are functions like $\lambda_3$ and $\mu_3$, which have a great deal of additive structure (as well as their multiplicative structure) and give rise to logarithmic growth. Because of this, it felt as though one might be forced to classify multiplicative sequences to some extent. Some of them would be “similar to character-like functions” and would have unbounded partial sums for similar reasons to the reasons that apply to $\lambda_3$ and $\mu_3$, while others would be “not similar to character-like functions”, which would cause them to behave sufficiently randomly to have unbounded partial sums for that reason.
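The logarithmic behaviour is easy to see for oneself. Recall that $\lambda_3$ is the completely multiplicative function with $\lambda_3(3)=1$ and $\lambda_3(n)=\pm 1$ according as $n \equiv 1$ or $2 \pmod 3$ (and $\mu_3$ is the same except that $\mu_3(3)=-1$). A quick sketch, my own code rather than anything from the project:

```python
def lam3(n):
    # λ₃: completely multiplicative, λ₃(3) = +1,
    # λ₃(n) = +1 if n ≡ 1 (mod 3), -1 if n ≡ 2 (mod 3)
    while n % 3 == 0:   # strip out the factors of 3, each contributing +1
        n //= 3
    return 1 if n % 3 == 1 else -1

s, worst = 0, 0
for n in range(1, 10 ** 5 + 1):
    s += lam3(n)
    worst = max(worst, abs(s))
print(worst)  # stays around log n, far below sqrt(10**5) ≈ 316
```

In fact the partial sum up to $n$ turns out to equal the number of 1s in the base-3 expansion of $n$, which makes the logarithmic growth transparent: the additive structure modulo powers of 3 is doing all the work.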
It is still conceivable that some kind of strategy like that could work, but certain difficulties with it have come to light. At one point, I had the following serious anxiety about the approach. The only proof I knew of that the partial sums of the Liouville function were unbounded went via the Riemann zeta function and some very special properties of $\zeta$. Therefore, in order to prove EDP it looked as though it would be necessary to find an entirely new, and highly generalizable, argument that applied to $\lambda$. And this looked as though it might be a known hard problem in number theory.
Fortunately, Terence Tao was able to supply a much simpler proof that these partial sums were unbounded. His proof relies on the fact that the Dirichlet convolution of $\lambda$ with the constant function 1 is the characteristic function of the squares, which is a very special property of $\lambda$, so Terry’s proof does not look like the massively generalizable argument that one would need, but at least it is reassuringly elementary. It might still be worth trying to find new ways of understanding this fact, though.
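The identity in question is classical: $\sum_{d \mid n} \lambda(d)$ equals 1 when $n$ is a perfect square and 0 otherwise (each prime power $p^a$ contributes $1 - 1 + 1 - \cdots$, which is 1 or 0 according to the parity of $a$). One can at least check it numerically; this is a sketch of mine, not the proof:

```python
import math

def liouville(n):
    """Liouville function λ(n) = (-1)^Ω(n),
    where Ω counts prime factors with multiplicity."""
    count = 0
    d = 2
    while d * d <= n:           # trial division
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:                   # leftover prime factor
        count += 1
    return (-1) ** count

# (λ * 1)(n) = Σ_{d | n} λ(d) should be 1 at squares and 0 elsewhere
for n in range(1, 201):
    conv = sum(liouville(d) for d in range(1, n + 1) if n % d == 0)
    assert conv == (1 if math.isqrt(n) ** 2 == n else 0)
print("identity verified for n <= 200")
```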
However, I have another worry about the strategy of splitting multiplicative functions into those that are character-like and those that are not. It is that I now believe that there are probably multiplicative functions that have no obvious additive structure but that still have slow-growing partial sums. (For now let me say that the partial sums are slow-growing if they are bounded above by $n^{o(1)}$, but I think one might be able to get a small power of $\log n$.) The evidence for this comes from thinking about algorithms for choosing the values at primes in a way that is designed to keep the partial sums small. The obvious algorithm is a greedy one: if the partial sum up to $p-1$ is positive then let $f(p)=-1$, and if it is negative then let $f(p)=1$, and do what you like if it is zero. However, computations suggest that this fails spectacularly. I don’t know if anyone has a clear understanding of why this should be — I certainly don’t.
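The greedy rule is easy to simulate: the rule decides the value at each prime, and complete multiplicativity forces the value at every composite. The sketch below is my own illustration of the rule as just described (all names are mine), not the project's code:

```python
def run_rule(N, choose):
    """Build a completely multiplicative ±1 function f on 1..N, setting
    f(p) = choose(S) at each prime p, where S is the partial sum up to p-1.
    Returns the maximum |partial sum| encountered."""
    # sieve of smallest prime factors, so composites can be factored cheaply
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    f = [0] * (N + 1)
    f[1] = 1
    s, worst = 1, 1                    # partial sum after n = 1
    for n in range(2, N + 1):
        if spf[n] == n:                # n prime: the rule decides
            f[n] = choose(s)
        else:                          # composite: forced by multiplicativity
            f[n] = f[spf[n]] * f[n // spf[n]]
        s += f[n]
        worst = max(worst, abs(s))
    return worst

greedy = lambda s: -1 if s > 0 else 1  # the greedy rule (+1 on ties)
print(run_rule(10000, greedy))
```

Running this for a range of $N$ is one way to watch the greedy strategy lose control of the partial sums.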
Equally mysterious is a fascinating discovery of Johan de Jong: that if you change your criterion slightly and choose $f(p)=1$ if and only if the partial sum up to $p-1$ is less than $-10$, you do far better. I would love to have an explanation for this. I don’t care whether it is rigorous as long as it is convincing.
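De Jong's variant can be tried the same way. The precise form used below, $f(p)=+1$ exactly when the running sum has dropped below $-10$ and $f(p)=-1$ otherwise, is my reading of the rule, so treat it as an assumption; the code is again mine:

```python
def max_partial_sum(N, threshold=-10):
    """Completely multiplicative ±1 function on 1..N with f(p) = +1 iff the
    partial sum so far is below `threshold` (one reading of de Jong's rule);
    returns the maximum |partial sum|."""
    spf = list(range(N + 1))           # smallest-prime-factor sieve
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    f = [0] * (N + 1)
    f[1] = 1
    s, worst = 1, 1
    for n in range(2, N + 1):
        if spf[n] == n:                # prime: threshold rule decides
            f[n] = 1 if s < threshold else -1
        else:                          # composite: forced by multiplicativity
            f[n] = f[spf[n]] * f[n // spf[n]]
        s += f[n]
        worst = max(worst, abs(s))
    return worst

print(max_partial_sum(10000))
```

Comparing this with the greedy rule over the same range is the quickest way to see the gap that wants explaining.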
Those two links are part of a sub-conversation that starts here. I think that by this point it is fairly safe to say that the experiments are telling us that multiplicative functions with slow-growing partial sums do not have to be all that structured. It is still possible that if you insist that the growth is at most logarithmic then your function is forced to be similar to a character-like function, but it also seems clear that we are not going to be able to prove fast growth when the function is not similar to a character-like function.
To me this suggests that we may have to look for a unified proof after all. I do not feel 100% confident about this, but it is where my own energies are concentrated for the time being.