Recall from earlier posts Gil’s modular conjecture for HAPs. It states that if $n$ is large enough and $f$ is a function from $\{1,2,\dots,n\}$ to $\mathbb{Z}_p$ that never takes the value 0, then for every $a\in\mathbb{Z}_p$ there exists a HAP $P$ such that $\sum_{x\in P}f(x)\equiv a$ mod $p$. It is easy to see that this implies EDP, so it may well be very hard, or even false. However, one can hold out a little hope that, as sometimes happens with strengthenings of statements, it is in fact easier, because it is in some way more symmetrical and fundamental. Given that, it makes good sense, as Gil has suggested, to try to prove modular versions of known discrepancy theorems, in the hope of developing general techniques that can then be tried out on the modular EDP conjecture.
A very obvious candidate for a discrepancy theorem that we could try to modularize is Roth’s theorem, which asserts that for any $\pm 1$-valued function $f$ on $\{1,2,\dots,n\}$ there exists an arithmetic progression $P$ such that $|\sum_{x\in P}f(x)|\geq cn^{1/4}$. That gives rise to the following problem.
Problem. Let $p$ be a prime. What is the smallest $n$ such that for every function $f:\{1,2,\dots,n\}\to\mathbb{Z}_p$ that never takes the value 0, every $a\in\mathbb{Z}_p$ can be expressed as $\sum_{x\in P}f(x)$ for some arithmetic progression $P\subset\{1,2,\dots,n\}$?
In this post I shall collect together a few simple observations about this question.
1. We can at least prove that such an $n$ exists. Indeed, by Szemerédi’s theorem, if $n$ is large enough then there is an arithmetic progression of length $p$ on which $f$ is constant. Since $f$ takes non-zero values and $p$ is prime, the sums along initial segments of that progression run through all the numbers mod $p$.
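The arithmetic behind this observation can be checked in a couple of lines (a sanity check of my own; the prime $p=13$ is an arbitrary choice):

```python
# If f is constant with value v != 0 on an AP of length p, the sums along
# initial segments are v, 2v, ..., pv, and since p is prime these hit
# every residue mod p.
p = 13
for v in range(1, p):
    sums = {(k * v) % p for k in range(1, p + 1)}
    assert sums == set(range(p))
print("initial-segment sums of a constant AP hit all residues mod", p)
```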
So the question is really asking whether the result can be proved with a reasonable bound for $n$ in terms of $p$.
2. Roth’s discrepancy result tells us that if $n\geq Cp^4$ and $f$ takes only the values $\pm 1$, then $\sum_{x\in P}f(x)$ takes all values mod $p$. That is because there is a progression $P$ such that $|\sum_{x\in P}f(x)|\geq p$ in $\mathbb{Z}$, and by what one might call the discrete intermediate value theorem (the sums along initial segments of $P$ change by $\pm 1$ at each step), it therefore takes all values in between $0$ and $p$ or between $-p$ and $0$. This is reasonably compelling evidence that the conjecture is true with a decent bound, but of course the intermediate-value-theorem argument breaks down completely when $f$ can take arbitrary non-zero values mod $p$.
The known lower bounds for Roth’s discrepancy theorem for APs (which are equal to the upper bounds up to a constant) show that the best possible bound for the modular question is at least $cp^4$.
3. To make the problem more symmetrical, and therefore potentially easier to handle, it might be a good idea to begin by tackling the following variant.
Problem. Let $p$ be a prime. What is the smallest $n$ such that for every function $f:\mathbb{Z}_n\to\mathbb{Z}_p$ that never takes the value 0, every $a\in\mathbb{Z}_p$ can be expressed as $\sum_{x\in P}f(x)$ for some arithmetic progression $P$ in $\mathbb{Z}_n$?
The big difference here is that arithmetic progressions are allowed to “wrap around”. That means that for every arithmetic progression $P$, every $a\in\mathbb{Z}_n$ and every $d$ coprime to $n$ (it might be convenient to insist that $n$ is prime), the sets $P+a$ and $d\cdot P$ are also arithmetic progressions. I think the bounds in the integer case with $\pm 1$-valued functions show that the best bound one could hope for for this modified problem is again of the order $p^4$, but I need to check that.
4. One natural approach to proving that a function takes all values mod $p$ would be to attempt to show that it takes all values with approximately the same frequency. This would potentially allow us to bring in analytic tools. However, it is not in general true. For example, let $f$ be the function that is $1$ up to $n/2$ and $-1$ from $n/2$ to $n$. Then a small calculation shows that if $P$ is a mod-$n$ AP of common difference $d$, then $|\sum_{x\in P}f(x)|$ is never more than $Cn/\|d\|$. I think $C$ can probably be taken to be 2, or maybe even 1. (By $\|d\|$ I mean the smallest modulus of any number congruent to $d$ mod $n$.) Thus, for a positive proportion of common differences $d$, the sums along APs of common difference $d$ never exceed $10C$, say, in modulus. If $p$ is large, this implies that the values of the sums are very far from uniformly distributed, since they are concentrated around 0. On the other hand, $f$ is obviously not a counterexample to the conjecture (by which I mean the assertion that the modular statement holds with a good bound).
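For what it’s worth, here is a small brute-force computation (my own, with the arbitrarily chosen prime $n=59$; the constant it reports is purely empirical) that measures how large the AP sums of this particular $f$ can get relative to $n/\|d\|$:

```python
# f = +1 on the first half of Z_n and -1 on the second half, n prime.
# For each common difference d, find the largest |partial sum| over all
# mod-n APs of difference d, and compare it with n/||d||.
n = 59  # a prime, so every nonzero d generates the whole of Z_n
f = [1 if x < (n + 1) // 2 else -1 for x in range(n)]

def norm(d):
    # ||d||: smallest modulus of any integer congruent to d mod n
    return min(d % n, n - d % n)

worst_ratio = 0.0
for d in range(1, n):
    biggest = 0
    for a in range(n):
        s = 0
        for j in range(n):  # partial sums of APs of every length up to n
            s += f[(a + j * d) % n]
            biggest = max(biggest, abs(s))
    worst_ratio = max(worst_ratio, biggest * norm(d) / n)
print(f"max over d of (max |AP sum|) * ||d|| / n = {worst_ratio:.2f}")
```

So for this $n$ the sums really are of order $n/\|d\|$, though the computation suggests the best constant may be a little larger than 2.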
5. That observation does not rule out a proof that goes via showing that the values of sums along APs are approximately uniformly distributed, but it does demonstrate that in order to prove that, we would have to put some conditions on $f$. That is, we would need a two-step argument along the following lines (reminiscent of proofs of Szemerédi’s theorem, though I would hope that this problem is easier).
(i) If $f$ is quasirandom, or at least “contains substantial quasirandomness”, then the values of $\sum_{x\in P}f(x)$ are approximately uniformly distributed mod $p$.
(ii) If $f$ is not quasirandom, or better still “does not contain substantial quasirandomness”, then we can deduce in some other way (such as finding a long AP on which $f$ is constant, but that may be too much to ask) that the sums take all values.
6. The weaker the property we can get away with in (i), the better. However, in order to get started it would be good to find any quasirandomness property that implies that the values of $\sum_{x\in P}f(x)$ are approximately uniformly distributed.
7. One thing that needs deciding here is what we mean by a random progression $P$. The simplest choice would be to fix some $k$ and pick random mod-$n$ APs of length $k$. These don’t work for the modular conjecture in general, since the constant function $f=1$ is a counterexample, but they might work for functions with a suitable quasirandomness property.
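To spell out why the constant function is a counterexample for fixed lengths (a two-line check of my own, with arbitrary small parameters):

```python
# For f = 1, every mod-n AP of length k has sum k, so the sums over
# length-k APs attain only the single residue k mod p.
n, p, k = 30, 11, 5
f = [1] * n
sums = {sum(f[(a + j * d) % n] for j in range(k)) % p
        for a in range(n) for d in range(1, n)}
assert sums == {k % p}
print("length-", k, "AP sums attain only the residues", sums)
```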
8. The fact that we can’t fix the length of the progressions shows that the modular conjecture differs in an important way from Roth’s discrepancy theorem, where fixing the length is not a problem (especially in the mod-$n$ version). This dampens any hopes one might start off with of adapting the proof of Roth’s discrepancy theorem to cope with the modular version. But that might in the end be a good thing, since the point of the modular conjecture is to introduce new techniques that can then be brought to bear on EDP.
9. When we are looking for a suitable quasirandomness property, it is tempting to turn to Fourier analysis. However, the following (modification of a standard) example is a bit troubling. Let $f:\mathbb{Z}_n\to\mathbb{Z}_p$ be defined as follows. For each $x$ we randomly decide whether to set $f(x)$ equal to $x$ or $-x$ mod $p$ (where by $x$ I mean the residue between 0 and $n-1$). If we now choose a random mod-$n$ arithmetic progression $(a,a+d,a+2d,a+3d)$ of length 4, there is an absolute constant $c>0$ such that with probability at least $c$ the progression does not wrap around, and $f(a)=a$, $f(a+d)=-(a+d)$, $f(a+2d)=-(a+2d)$, and $f(a+3d)=a+3d$. When this happens $\sum_{x\in P}f(x)=a-(a+d)-(a+2d)+(a+3d)=0$. Therefore, the value 0 occurs much too often, but according to the natural Fourier definitions of quasirandomness (which we could get by looking at the function $\epsilon(x)=f(x)/x$) this function is highly quasirandom.
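A quick simulation (my own, with arbitrary parameters $n=101$, $p=97$; note that this toy $f$ does take the value 0 at the few points with $x\equiv 0$ mod $p$, which a genuine example would have to avoid) makes the over-representation of 0 visible:

```python
import random

# f(x) = +x or -x mod p with independent random signs. For a length-4 AP
# a, a+d, a+2d, a+3d that does not wrap around, the sign pattern +,-,-,+
# forces a - (a+d) - (a+2d) + (a+3d) = 0, so AP sums equal 0 mod p far
# more often than the uniform rate 1/p.
random.seed(0)
n, p = 101, 97
sign = [random.choice([1, -1]) for _ in range(n)]
f = [(sign[x] * x) % p for x in range(n)]

zero, total = 0, 0
for d in range(1, n):
    for a in range(n):
        s = sum(f[(a + j * d) % n] for j in range(4)) % p
        total += 1
        zero += (s == 0)
print(f"fraction of length-4 AP sums that are 0 mod p: {zero/total:.4f} "
      f"(uniform would be {1/p:.4f})")
```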
10. Of course, we are looking at much longer arithmetic progressions. However, it is difficult to think of proofs that work for long arithmetic progressions and fail for short ones. (It is mildly encouraging that at least the above source of examples seems to break down when the progressions become long — though that should be checked carefully.)
11. Hmm … just after writing that I thought of the following modified example. Instead of choosing randomly whether to set $f(x)$ equal to $x$ or $-x$, let’s do it in the following highly deterministic way: if $x\equiv 0$ mod 4 we set $f(x)$ equal to $x$, if $x\equiv 1$ mod 4 we set it equal to $-x$, if $x\equiv 2$ mod 4 we set it equal to $-x$, and if $x\equiv 3$ mod 4 we set it equal to $x$. In addition, let’s suppose that $n$ is a multiple of $p$, since I seem to need that to make this example work. (Even if we can’t get rid of this restriction somehow, it will still show that a proof that worked for prime values of $n$ would have to make strong use of the fact that $n$ was not a multiple of $p$ — and there would be many other examples of a similar kind that would lead to similar requirements — which would be quite a big restriction on what it could look like.)
Then if $m$ is a multiple of 4 and $P$ is a random mod-$n$ AP of length $m$, there is a probability 1/16 that $a\equiv 0$ and $d\equiv 1$ mod 4. (Note that because $n$ is a multiple of 4 this makes sense independently of which representatives we pick.) If that is the case, we can partition $P$ into APs of length 4 such that the sum of $f$ along each is zero. This shows that with probability at least 1/16 the sum $\sum_{x\in P}f(x)$ is 0 mod $p$.
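Here is a toy verification (my own small parameters, $p=7$ and $n=4p$; as before this illustrative $f$ vanishes at multiples of $p$, which a genuine example would have to tweak) of the cancellation just described:

```python
# Signs +,-,-,+ according to x mod 4, and f(x) = (sign * x) mod p.
# Since 4 | n the signs are well defined on Z_n, and since p | n the
# reduction Z_n -> Z_p respects addition, so for every AP with
# a = 0 and d = 1 (mod 4) and length a multiple of 4, each consecutive
# block of 4 sums to a' - (a'+d) - (a'+2d) + (a'+3d) = 0 mod p.
p = 7
n = 4 * p  # a multiple of both 4 and p
sign = {0: 1, 1: -1, 2: -1, 3: 1}
f = [(sign[x % 4] * x) % p for x in range(n)]

for a in range(0, n, 4):
    for d in range(1, n, 4):
        for m in range(4, 2 * n, 4):
            s = sum(f[(a + j * d) % n] for j in range(m)) % p
            assert s == 0
print("all such AP sums vanish mod", p)
```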
Is this function quasirandom? According to many definitions, yes, since although we split into cases in a highly structured way, the behaviour of $f$ within each case is highly quasirandom.
I mentioned above that there are many similar examples. To give just one illustration, we could take $n$ to be a multiple of $3p$ and let $f$ take turns taking the values $x$, $-2x$ and $x$. Essentially the same argument would work, but now the probability would be 1/9. (Actually, I now see that the probabilities can be doubled in both cases, since $d\equiv -1$ works just as well as $d\equiv 1$.)
I’ll leave it there. I’ve mentioned some proof strategies, and also discussed some difficulties with them. Does anyone have other proof strategies to suggest? I haven’t mentioned the polynomial method, but that is an obvious thing to try: that is, write down a polynomial that vanishes identically only if the modular conjecture for APs is true, and try to prove that it vanishes identically. It is certainly worth looking into, though it is a little discouraging that the conjecture is false for APs of any fixed length, since that would necessarily make the polynomials less symmetrical. If the above strategy seems the most promising (or should that be the least unpromising?) then does anyone have any thoughts about quasirandomness properties that might do the job? (Gil mentioned something in a recent comment — it would be good to have that thought spelt out in more detail.)
I’ll end by saying that even if proving the modular conjecture for mod-$n$ APs ended up having nothing much to do with EDP, it is a very nice problem on its own, and could make an excellent spin-off Polymath project.