Problems related to Littlewood’s conjecture

This is the third in a series of posts in which I discuss problems that could perhaps form Polymath projects. Again, I am elaborating on a brief discussion that appeared in an earlier post on possible future projects. [Not for the first time in my experience, WordPress's dates have gone funny and this was posted not on the 17th as it says above but on the 20th.]

An obvious objection to the Littlewood conjecture as a Polymath project is that it is notoriously hard. On its own that might not necessarily be a convincing argument, since part of the point of Polymath is to attempt to succeed where individual mathematicians have failed. However, a second objection is that the best results in the direction of the Littlewood conjecture, due to Einsiedler, Katok and Lindenstrauss, use methods that are far from elementary (and far from understood by me). I envisage this project as an elementary one, at least to begin with, so does that make it completely unrealistic? I shall try to argue in this post that there is plenty that could potentially be done by elementary methods, even if attempting to prove the conjecture itself is probably too ambitious.

Another advantage of tackling the conjecture by elementary means is that if we find ourselves forced to reintroduce the non-elementary methods that have led to the very interesting results of Einsiedler, Katok and Lindenstrauss, we will have a deeper understanding of those methods than if we had just passively learnt about them. I myself prefer rediscovering things to learning them: it isn’t always practical, but it is easier if you half bear in mind that the results are out there and have a vague idea of what they say.

Still on the topic of elementary versus non-elementary, one of the factors that contributed to the success of the DHJ project was that there was a non-elementary way of looking at the problem that we were trying to solve by elementary means. That meant that those participants who knew their ergodic theory could look at proposed proof strategies and make useful comments about whether they were likely to succeed. I think of it as like trying to find one’s way out of a jungle. The non-elementary methods give you something like a satellite picture of your surroundings, which enables you to plan your route instead of wandering around in the dense vegetation with no idea where you are. But there are things that the satellite picture won’t show, such as narrow but useful paths. So sometimes the person with the satellite picture knows that the person without it is wasting time, but sometimes there are important ideas that the person with the satellite picture will not spot. With a collaboration, one can of course have the best of both worlds.

For those who are interested, there is a very nice survey article by Akshay Venkatesh about the work of Einsiedler, Katok and Lindenstrauss, which explains how dynamics comes into the problem.

Before we discuss Littlewood’s conjecture, let us look at an easier question. Suppose that \alpha is a real number. To what extent can the multiples of \alpha stay away from integers? To make this question precise, let us write \|x\| for the distance from x to the nearest integer. Then we would like to examine the sequence \|\alpha\|, \|2\alpha\|, \|3\alpha\|,\dots.

Before thinking about the right question to ask, let us quickly prove a simple and very well-known theorem. (Unless my history is letting me down, which is possible, it is due to Dirichlet.)

Theorem 1. For every real number \alpha and every positive integer n there exists a positive integer m\leq n such that \|m\alpha\|\leq n^{-1}.

Proof. Let \beta be chosen uniformly at random from the interval [0,1). Then the expected number of terms of the sequence 0,\alpha,2\alpha,\dots,n\alpha that lie in the interval [\beta,\beta+n^{-1}) mod 1 is (n+1)n^{-1}=1+n^{-1}. Since this expectation is greater than 1, there must exist \beta and 0\leq s<t\leq n such that both s\alpha and t\alpha lie in the interval [\beta,\beta+n^{-1}) mod 1, from which it follows that \|\alpha(t-s)\|\leq n^{-1}. QED

Note (i) that Dirichlet used the pigeonhole principle rather than an averaging argument and (ii) that we could, if we had been bothered, have obtained a bound of (n+1)^{-1} from the above argument. (This illustrates the point, which is often handy, that even if partitioning a set X into subsets of a certain form is hard — not that it is here — finding subsets Y_1,\dots,Y_M of that form such that every element of X is in the same number of Y_i may well be easy. In that case, something very similar to the pigeonhole principle works. I should expand this parenthetical remark into a Tricki article.)
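Theorem 1 is easy to check numerically. Here is a quick Python sanity check (not part of the argument, just a brute-force search for the m that the theorem promises):

```python
from math import sqrt

def dist_to_int(x):
    """Distance from x to the nearest integer, i.e. ||x||."""
    return abs(x - round(x))

def dirichlet_m(alpha, n):
    """Return some m with 1 <= m <= n and ||m*alpha|| <= 1/n.
    Such an m exists by Theorem 1; here we just search for it."""
    for m in range(1, n + 1):
        if dist_to_int(m * alpha) <= 1.0 / n:
            return m
    return None  # cannot happen, by Theorem 1

# Spot-check Theorem 1 for a few irrationals and sizes of n.
for alpha in (sqrt(2), sqrt(3), 3.14159265358979):
    for n in (10, 100, 1000):
        m = dirichlet_m(alpha, n)
        assert m is not None and dist_to_int(m * alpha) <= 1.0 / n
```

For instance, with \alpha=\sqrt{2} and n=10 the search finds m=5, since 5\sqrt{2}\approx 7.071.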

What this argument shows is that sometimes \|\alpha m\| must be small. Since it clearly won’t stay small (at least if \alpha\ne 0), this suggests that we should be interested in the sequence c(n,\alpha)=\min_{1\leq m\leq n}\|\alpha m\|. And the obvious question then is: how fast must (or how slowly can) c(n,\alpha) tend to 0 as n tends to infinity?

The argument above shows that c(n,\alpha)\leq n^{-1}, and this is asymptotically best possible, because it is not hard to show that if the continued-fraction expansion of \alpha has bounded partial quotients then c(n,\alpha)\geq an^{-1} for some positive constant a (which depends on the upper bound for those quotients). In particular, if \alpha is a quadratic irrational, then \|\alpha n\| is at least an^{-1} for every positive integer n.

This suggests that the slowest possible rate at which c(n,\alpha) can tend to zero is achieved when \alpha is the golden ratio, and that is indeed the case. So this problem is completely understood: basically, the continued fraction expansion of \alpha tells you exactly how c(n,\alpha) behaves.
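One can watch this happening numerically. The following Python sketch computes n\,c(n,\alpha) for the golden ratio at a few values of n; the quantity hovers around 1/\sqrt{5}\approx 0.447 (the precise asymptotics are classical, but here we only check boundedness below):

```python
phi = (1 + 5 ** 0.5) / 2  # golden ratio

def dist_to_int(x):
    return abs(x - round(x))

def c(n, alpha):
    """c(n, alpha) = min over 1 <= m <= n of ||m*alpha||."""
    return min(dist_to_int(m * alpha) for m in range(1, n + 1))

# n * c(n, phi) should stay bounded below by a positive constant.
ratios = [n * c(n, phi) for n in (10, 100, 1000, 10000)]
assert min(ratios) > 0.4
```

The minima are attained at Fibonacci numbers m, reflecting the fact that the Fibonacci ratios are the continued-fraction convergents to the golden ratio.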

Littlewood’s conjecture is about what at first seems to be an innocent generalization of the above result. Instead of looking at one real number \alpha, let us look at two, \alpha and \beta. In broad terms, our goal is to understand to what extent it is possible for both \|\alpha n\| and \|\beta n\| to be reasonably large. The precise question that Littlewood asked is (equivalent to) the following.

Problem 2. Let \alpha and \beta be real numbers, and for each n define c(n,\alpha,\beta) to be \min_{1\leq m\leq n}\|m\alpha\|\|m\beta\|. Must c(n,\alpha,\beta) tend to zero faster than n^{-1}?

By “faster”, I mean at a rate o(n^{-1}). Littlewood conjectured that the answer was yes. A more usual way of formulating the conjecture is as the statement that \lim\inf_n n\|\alpha n\|\|\beta n\|=0, which is obviously equivalent to a positive answer to Problem 2.

Since c(n,\alpha,\beta)\leq c(n,\alpha), it is trivial that c(n,\alpha,\beta) tends to zero at least as fast as n^{-1}, and the only way that it could fail to tend to zero faster than that is if \beta n magically stays away from an integer whenever \alpha n threatens to get close, and vice versa.

One question that it is reasonable to ask is the following: why, when we are considering the question of whether (\alpha n,\beta n) can remain far from a pair of integers, do we multiply together \|\alpha n\| and \|\beta n\|? Is that really a natural thing to do? I shall come back to this question, but for now let me remark (slightly cryptically) that this formulation leads to symmetries that are critical to the work that has been done on it.

A more general question.

Theorem 1 leads to the following nice construction of a set X of n points in [0,1]^2 such that if (x_1,y_1) and (x_2,y_2) are distinct elements of X, then |x_1-x_2||y_1-y_2|\geq cn^{-1}. (Note that this is best possible, since just |x_1-x_2| must sometimes be at most (n-1)^{-1}.) One takes (x_r,y_r) to be (r/n,\alpha r), where \alpha r is reduced mod 1 and \alpha is a number whose continued-fraction expansion has bounded partial quotients (or, if you prefer, \alpha is something concrete like \sqrt{2}).

Why does this work? Well,

\displaystyle |x_r-x_s||y_r-y_s|=n^{-1}|r-s|\,|\{\alpha r\}-\{\alpha s\}|

\displaystyle \geq n^{-1}|r-s|\,\|\alpha(r-s)\|\geq cn^{-1},

where \{x\} denotes the fractional part of x. The first inequality holds because \{\alpha r\}-\{\alpha s\} differs from \alpha(r-s) by an integer, and the second holds by our choice of \alpha.

This shows that we can use a quadratic irrational, or any “counterexample to the one-dimensional analogue of the Littlewood conjecture”, to produce a set of points with the desired property. Let us define the “distance” between two points (x_1,y_1) and (x_2,y_2) to be |x_1-x_2||y_1-y_2|, noting that this is very definitely not a metric. Then what we have produced is n points in the unit square such that any two have distance at least cn^{-1} from each other.
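Here is a quick Python check of the construction with \alpha=\sqrt{2}, confirming that n times the minimal pairwise “distance” stays bounded below (the threshold 0.1 is just a safe margin; the true constant for \sqrt{2} is about 0.34):

```python
from math import sqrt

def construction(n, alpha):
    """The n points (r/n, alpha*r mod 1) for r = 1..n."""
    return [(r / n, (alpha * r) % 1.0) for r in range(1, n + 1)]

def min_separation(points):
    """min over pairs of |x1-x2||y1-y2| (the 'distance' of the post)."""
    return min(abs(p[0] - q[0]) * abs(p[1] - q[1])
               for i, p in enumerate(points) for q in points[:i])

n = 500
pts = construction(n, sqrt(2))
# n times the minimal separation is bounded below by a constant.
assert n * min_separation(pts) > 0.1
```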

Can we do this in three dimensions? This time, we define the “distance” between (x_1,y_1,z_1) and (x_2,y_2,z_2) to be |x_1-x_2||y_1-y_2||z_1-z_2|. Let me ask this question as a serious problem.

Problem 3. Is there a constant c>0 such that for every n it is possible to find n points in [0,1]^3 such that the distance between any two of them (as defined above) is at least cn^{-1}?

The reason for asking this question is the following simple observation.

Observation 4. If there is a counterexample to Littlewood’s conjecture, then the answer to Problem 3 is yes.

Proof. We generalize the two-dimensional construction in the obvious way, by defining (x_r,y_r,z_r) to be (r/n,\alpha r,\beta r). Again, \alpha r and \beta r are reduced mod 1, and (\alpha,\beta) is a pair of real numbers such that n\|\alpha n\|\|\beta n\| is bounded below by a positive constant c. Checking that this works is straightforward. QED

What is the point of asking this question? Well, one reason is that it is clearly very closely related to Littlewood’s conjecture, but it is far more general. Therefore, if such a set of points does exist, then it should be easier to find it than to find a counterexample to Littlewood’s conjecture, whereas if it doesn’t exist, then proving that it doesn’t exist might also be easier (because now it would be a purely combinatorial problem rather than a number-theoretic one).

Unfortunately, there are objections to the above arguments. It seems to be very hard to find a set of well-separated points in three dimensions (I’ll discuss what one can do in a moment), and part of the reason is that there are cn^2 pairs of points to worry about. So it is tempting to introduce dependences, by which I mean to come up with a construction such that if one pair of points is separated it causes many other pairs to be separated as well. But that thought leads one inexorably to the idea that the best approach could be to start by finding a counterexample to the Littlewood conjecture … (This is an issue that I have discussed before on this blog. Sometimes combinatorial generalizations seem to be no easier to tackle than the problems they generalize.)

If finding an example of a collection of well-separated points turns out not to be easier to think about than finding a counterexample to Littlewood’s conjecture, what about trying to prove that such a collection cannot exist? Formally speaking, it is of course harder, but if such a collection cannot exist, then it could be that to prove it one is forced to think about the problem in the correct, more general way. I’m not sure how I feel about this. I have no decent idea about how to go about proving that a well-separated collection doesn’t exist, and if I attempt to do so there is always a little imaginary voice saying to me, “If you managed to prove this, then you’d have shown the highly non-trivial (indeed, not known) number-theoretic fact that \lim\inf n\|n\sqrt{2}\|\|n\sqrt{3}\|=0.” Do I believe that such a fact might conceivably be provable by much more general combinatorial techniques? It seems like a long shot, but I can’t think of a good reason for its being impossible.

Incidentally, it is easy to find n points that are separated by c(n\log^2 n)^{-1}. There are two (ultimately not all that different) ways of doing this. One is just to choose any old maximal separated set of points. The set of points at distance at most \delta from any given point has volume at most \delta\log^2(1/\delta) or thereabouts (in three dimensions the product condition gives two logarithmic factors), so if \delta is (n\log^2 n)^{-1} then this volume is about 1/n, which means that you can fit in n points before there is any chance of the set being maximal. The other way of doing it is to choose the points randomly until the expected number of pairs that are too close exceeds half the number of points. You then throw away one point from each pair. This uses exactly the same volume calculation (since the volume of the set of points close to x is the probability that y gets too close to x) and gives essentially the same bound.
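The greedy version of this argument can be simulated directly. In the Python sketch below, \delta is taken a comfortable constant factor below the counting bound (the constant 1/50 is an arbitrary safety margin, nothing canonical), and the greedy process easily collects n points:

```python
import math
import random

def product_dist(p, q):
    """The product 'distance' |x1-x2||y1-y2||z1-z2| of the post."""
    return abs(p[0] - q[0]) * abs(p[1] - q[1]) * abs(p[2] - q[2])

def greedy_separated(n, delta, trials=20000, seed=0):
    """Greedily keep random points of [0,1]^3 whose product-'distance'
    to every point kept so far is at least delta."""
    rng = random.Random(seed)
    pts = []
    for _ in range(trials):
        p = (rng.random(), rng.random(), rng.random())
        if all(product_dist(p, q) >= delta for q in pts):
            pts.append(p)
            if len(pts) == n:
                break
    return pts

n = 200
delta = 1.0 / (50 * n * math.log(n) ** 2)  # safety factor 1/50, chosen with room to spare
pts = greedy_separated(n, delta)
assert len(pts) == n
```

The volume computation guarantees that a maximal \delta-separated set has more than n points, so the greedy process cannot get stuck early.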

But the weakness with those arguments is that they don’t exploit the fact that the “distance” is not in fact a metric and the “ball” around a point is far from convex. The greedy approach takes no account of the fact that the balls around the points could overlap considerably — all the more so if you try to choose the points in order to make this happen. However, I know of no better argument. (Of course, in the two-dimensional case I have just given a better argument, but that doesn’t help us.)

A variant of Problem 3.

One of the difficulties in thinking about Problem 3 is that we do not have a big supply of examples in the two-dimensional case. However, there is another construction that doesn’t quite work, but “morally” does work, which goes as follows. For convenience, let n=2^k and take all points of the form (a,b)\in[0,1)^2 such that both a and b have binary expansions that terminate after exactly k digits (if necessary, one should add zeros to achieve this) and the digit sequence for a is the reverse of the digit sequence for b. For example, if n=16 then one of the points is (0.1101,0.1011) in binary.

Why should this work? Well, for two distinct points to have x coordinates that are very close, they will have to differ in some late digit, which means that their y coordinates will have to differ in an early digit. More precisely, if the x coordinates first differ in the (k+1-j)th digit, then the y coordinates differ at or before the jth digit, so the product of the differences should be at least 2^{-(k+1-j)}2^{-j}=n^{-1}/2.

So why doesn’t it work? Unfortunately, that argument makes the elementary blunder of assuming that if two numbers differ in an early digit, then they must be far apart. But the example of 0.1000000 and 0.0111111 shows that that is false. And if we take the numbers 0.10000001 and 0.01111110 then we have two numbers such that both they and their reversals are close.

I mentioned this to Nets Katz once, and he came up with the suggestion of changing the metric to a dyadic one. That is, we define the distance between two numbers in [0,1) to be 2^{-j} if they first differ in the jth digit. With this definition, which is in some ways more natural than the “real” metric on [0,1), the digit-reversal construction works.

That may feel a bit like cheating, but if that’s the way you feel, then have a go at the next question, and also read on to the end of the next section.

Problem 5. Is it possible to find n points in [0,1)^3 such that if (x,y,z) and (x',y',z') are any two of them then d(x,x')d(y,y')d(z,z')\geq c/n?

Of course, one wants to do this for all n and an absolute constant c>0.

I have a hazy memory that Nets Katz and I managed to find such a set when we discussed this problem, but I’m not quite sure whether that memory is correct. In any case, I don’t have an example at my fingertips right now and it could be a good place to start. (It could be that what I am remembering is something else, namely a way of modifying the digit-reversal construction so that it works for the usual metric. Added later: I now think that that is indeed what I was remembering, since I have reconstructed the modification. See below.) Note that the dyadic distance between any two numbers is at least half the actual distance, so if one could prove that no dyadic-separated set existed, one would have proved Littlewood’s conjecture. (Added later still: I have now found a dyadic-separated set. See the sixth comment below. I don’t remember it from before, but neither am I certain that it is new. Added later still: on further reflection, it does feel faintly familiar. Nets, if you ever read this, does the example ring any bells with you?)

While I am at it, here is a general question.

Problem 6. Does the Littlewood conjecture, or indeed any of the related problems discussed here, become easy if the dimension is sufficiently high?

For example, if \alpha_1,\dots,\alpha_{100} are real numbers, is it known that \lim\inf n\|n\alpha_1\|\dots\|n\alpha_{100}\|=0? I think it may be but I have not found it by Googling. Problem 6 is potentially important in the context of the dyadic problem: if the answer to Problem 5 turns out to be yes (as I vaguely remember), then we do not have a proof of Littlewood’s conjecture. But it could be that the answer to a sufficiently high-dimensional analogue of Problem 5 is no, in which case one would have a combinatorial proof of a high-dimensional version of Littlewood’s conjecture.

Are the two constructions related?

We now have two ways of producing well-separated subsets of [0,1)^2 (one of them by changing the metric): using a quadratic irrational, and the digit-reversal idea. However, these two methods may be less different than they at first appear. To see this, let us consider what happens if we try to produce an example using the golden ratio, except that to keep the discussion finite we shall use a good approximation to it of the form F_{n-1}/F_n, where F_{n-1} and F_n are consecutive Fibonacci numbers. In fact, let us go further and take 8/13. What’s more, let us make our points live in the grid \{0,1,\dots,12\}^2 rather than in [0,1)^2. Thus, we shall take the set of all multiples of (1,8) mod 13. These are (1,8), (2,3), (3,11),(4,6),(5,1),(6,9),(7,4),(8,12),(9,7),(10,2),(11,10),(12,5). Note how beautifully these points behave themselves: when one of them is close to 0 (mod 13) then the other one stays away.
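Before connecting this with digit reversal, here is a quick Python check that the multiples of (1,8) mod 13 really are well separated, where distances are measured cyclically mod 13 (the discrete analogue of \|\cdot\|):

```python
n = 13  # the prime modulus playing the role of n

def circ(x, n):
    """Distance from x to the nearest multiple of n."""
    x %= n
    return min(x, n - x)

points = [(m % n, (8 * m) % n) for m in range(1, n)]
assert points[:3] == [(1, 8), (2, 3), (3, 11)]

# The product of cyclic coordinate distances, minimised over all pairs,
# is a constant fraction of n.
min_prod = min(circ(p[0] - q[0], n) * circ(p[1] - q[1], n)
               for i, p in enumerate(points) for q in points[:i])
assert min_prod == 5  # attained e.g. by consecutive multiples (1,8) and (2,3)
```

So the minimal product is 5, which is about 0.38 times 13.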

How can we connect this with digit-reversal? Obviously we do not want to take base-ten representations. The natural base if we are thinking about Fibonacci numbers is … to write numbers as sums of Fibonacci numbers. To make the representation unique, we forbid the use of consecutive Fibonacci numbers (it is an easy exercise to check that this gives a unique representation, known as the Zeckendorf representation). An easy algorithm for working out the representation is to keep subtracting the largest Fibonacci number that is at most what you have: for instance, subtracting 8 from 12 leaves 4, and then subtracting 3 leaves 1, so the representation of 12 is 8+3+1. There is a small issue that needs to be sorted out: the number 1 occurs twice in the Fibonacci sequence, so are 1 and 2 consecutive Fibonacci numbers? I shall use the algorithm just described, so I shall never need the first 1. Thus, I regard 1 and 2 as consecutive.

Let us write our numbers as we would in base 10: with the largest term at the left. If we do that, and if we put zeros at the beginnings of numbers, then the points above work out to be (00001,10000), (00010,00100), (00100,10100), (00101,01001), (01000,00001), (01001,10001), (01010,00101), (10000,10101), (10001,01010), (10010,00010), (10100,10010), (10101,01000).

The first thing to say is that we are clearly not reversing the digits here. And yet, if one looks more closely, then one starts to see patterns emerging: there seems to be a mixture of digit-reversal and moving “blocks” of numbers around. To see what I mean here, look at what happens if we take the multiples of 8 and reverse their Fibonacci digits. The resulting sequence is this: 1,3,4,10,8,9,11,12,7,5,6,2. Let us throw in 0 at the beginning of the sequence and imagine it arranged in a cycle. Now we can partition this cycle into two chunks, one of length 8 and one of length 5. These chunks are (7,5,6,2,0,1,3,4) and (10,8,9,11,12). The first chunk naturally splits up as (7,5,6) and (2,0,1,3,4), each of which is again a permutation of an interval of Fibonacci length. And the second does too: it splits up as (10,8,9) and (11,12). And we can keep doing this. For example, (2,0,1,3,4) splits up into (2,0,1) and (3,4), and so on.
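The digit-reversed sequence above is easy to reproduce in Python (using the post's convention that the first Fibonacci 1 is never used, so the digits run over 8, 5, 3, 2, 1):

```python
def zeckendorf_digits(x, fibs):
    """Greedy Fibonacci representation of x over the given Fibonacci
    numbers (listed largest first), as a 0/1 digit list."""
    digits = []
    for f in fibs:
        if f <= x:
            digits.append(1)
            x -= f
        else:
            digits.append(0)
    return digits

def from_digits(digits, fibs):
    """Value of a 0/1 digit list over the same Fibonacci numbers."""
    return sum(f for f, d in zip(fibs, digits) if d)

fibs = [8, 5, 3, 2, 1]  # enough to represent everything below 13
multiples = [(8 * m) % 13 for m in range(1, 13)]
reversed_seq = [from_digits(zeckendorf_digits(x, fibs)[::-1], fibs)
                for x in multiples]
assert reversed_seq == [1, 3, 4, 10, 8, 9, 11, 12, 7, 5, 6, 2]
```

(Reversing a digit string with no two consecutive 1s again has no two consecutive 1s, so the reversed strings are themselves valid representations.)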

It should be fairly easy to describe exactly what is going on here and prove that it always happens, but I have not got round to doing so. What I would like to see is this. First, there should be a simple algorithm that tells you how to unscramble the sequence of digit-reversed numbers by iterating the following process — take a sequence of numbers of Fibonacci length, divide it into two subintervals of Fibonacci length and possibly swap them round (but not always, so one would need to know when, and also when the longer interval comes first) — until you get down to singletons. The second thing should be a direct combinatorial proof that if you do this process in reverse, then you will get a sequence of numbers such that nearby terms in the sequence have reversals that are far away.

Note that it was obvious in advance that we would not get digit-reversal, because the Fibonacci base is subject to the same problem that the binary base was: it is possible for two numbers to be close and for their digit-reversals to be close. But this is interesting, as it suggests that if we understand what is going on combinatorially in the above patterns, then we might be able to come up with a flexible construction that uses digit reversal but cleverly modifies it to get round the nought-point-one-recurring difficulty. More generally, it suggests that thinking about the dyadic problem could give useful insights into the original problem.

Added later: after editing the above (to add the description of the interesting pattern that appears when you multiply by 8 and reverse the Fibonacci digits), I realized that it is indeed possible to mimic what is going on there and deal with the nought-point-one-recurring difficulty. This gives rise to a construction that is probably what Nets Katz and I came up with when we discussed the problem (about eight years ago), though it is certainly not how we came up with it.

What I want to do is construct a sequence of pairs of points (a_i,b_i) such that both a_i and b_i live in [0,1) and have k binary digits. I also want |a_i-a_j||b_i-b_j| to be at least c2^{-k} whenever i\ne j. Digit reversal doesn’t work, but what about a combination of digit reversal and “sometimes exchanging clumps”? The most obvious idea would be to organize the a_i into a binary tree, and then go down the levels of this tree, alternately not swapping and swapping. That is, I let a_i=i/2^k for every 0\leq i\leq 2^k-1. The first level of the tree is the partition into two intervals according to the first digit of a_i. So I do that partition and then don’t swap the intervals. At the second level, I do swap the intervals, so now I have four intervals in the following order: numbers that start 01, numbers that start 00, numbers that start 11, numbers that start 10. I continue this process.

The resulting order will be what you get if you take each number, change every other binary digit, and then put the numbers in increasing order. Let \phi be the map that changes every other binary digit of a number and let \rho be the map that reverses the digits. Then the set of points I want to take is the set of all (\phi(a),\rho(a)) such that a is of the form i/2^k.

Why does this work? Well, if a and a' first differ in the jth binary place, then \rho(a) and \rho(a') differ in the (k+1-j)th place and are identical after that. Therefore, |\rho(a)-\rho(a')|\geq 2^{-k}2^j/2. The only way that this can fail to help is if \phi(a) and \phi(a') are much closer than 2^{-j}. Now if \phi(a) is greater than \phi(a'), then this means that the jth digit of \phi(a) must be a 1 and several subsequent digits are 0, while the jth digit of \phi(a') is a 0 and several subsequent digits are 1. But then a and a' have alternating 0s and 1s after the jth digit, and in different places. If the lengths of these strings of alternating digits are around t, then it is not hard to see that \rho(a) and \rho(a') must differ by at least c2^t2^{-k}2^j, basically because they now first differ in roughly the (k+1-j-t)th place and are not followed by long runs of 0s or 1s after they first differ.

Conclusion: digit reversal and shifting intervals can combine to give a second example of a well-separated subset of [0,1)^2 with respect to the Euclidean metric (in each coordinate) rather than the dyadic metric. The rough reason this works is that the digit-reversal \rho almost works, but fails because of the nought-point-one-recurring problem, while \phi takes numbers that are “close when they shouldn’t be” and pulls them apart. (Of course, \phi then has to bring together numbers that used to be far apart, but this turns out not to matter because the images of those numbers under \rho turn out to be sufficiently far apart to compensate for it.)
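The \phi/\rho construction can be checked by brute force for small k. In the Python sketch below, \phi flips the second, fourth, … digits from the top, to match the swapping pattern described above; the threshold 0.05 in the assertion is an empirical safety margin, not a claimed constant (for the small k checked, the normalised minimum is in fact 1/8):

```python
def reverse_bits(a, k):
    """rho: reverse the k-digit binary expansion of a."""
    r = 0
    for _ in range(k):
        r = (r << 1) | (a & 1)
        a >>= 1
    return r

def flip_alternate(a, k):
    """phi: change every other binary digit (the 2nd, 4th, ... from the top)."""
    mask = sum(1 << (k - j) for j in range(2, k + 1, 2))
    return a ^ mask

def min_ratio(k):
    """min over pairs of |phi(a)-phi(a')||rho(a)-rho(a')| * 2^k,
    with all quantities scaled so the points live in [0,1)^2."""
    pts = [(flip_alternate(a, k), reverse_bits(a, k)) for a in range(2 ** k)]
    m = min(abs(p[0] - q[0]) * abs(p[1] - q[1])
            for i, p in enumerate(pts) for q in pts[:i])
    return m / 2 ** k  # (m / 4**k) * 2**k: normalised product times 2^k

# The normalised minimum stays comfortably bounded away from zero.
for k in range(3, 9):
    assert min_ratio(k) > 0.05
```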

More variants.

Returning to Littlewood’s conjecture itself, there is a substantial weakening of it that I still do not know how to answer. I think it has a good chance of being known, and I would be interested to hear from anyone who knows the answer, though in an ideal world I would prefer to come up with an answer for myself (where “for myself” really means “by thinking about it in collaboration with other Polymath participants”). (Added later: I have now found a paper that makes it clear that the question I am about to ask is indeed a known result. I will give the reference at the end of this section.) As I have already mentioned, one of the difficulties of Littlewood’s conjecture is that the function d(x,y)=|x_1-y_1||x_2-y_2| is not a metric. This is why the usual techniques for solving packing problems (up to a constant) fail for Problem 3. However, suppose we know that the ratio of |x_1-y_1| to |x_2-y_2| is roughly C to 1. Then |x_1-y_1||x_2-y_2| is roughly C^{-1}|x_1-y_1|^2, which is roughly C^{-1}\max\{|x_1-y_1|,C|x_2-y_2|\}^2, which equals \max\{C^{-1/2}|x_1-y_1|,C^{1/2}|x_2-y_2|\}^2.

Let k\approx\log_2n and for each j between -k and k let us write d_j(x,y)=\max\{2^{-j/2}|x_1-y_1|,2^{j/2}|x_2-y_2|\}. Then it is not hard to check that d(x,y)\approx\min_{-k\leq j\leq k}d_j(x,y)^2.
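The approximation is easy to verify numerically: the minimum over j recovers the product up to a factor of \sqrt{2}, since the best real j makes the two terms in the maximum equal and rounding j to an integer costs at most 2^{1/2}. A quick Python check (the range |j|\leq 60 is just wide enough to cover all ratios that arise here):

```python
import math
import random

def approx_min(u, v, k=60):
    """min over integer j in [-k, k] of max(2^{-j/2} u, 2^{j/2} v)^2,
    where u, v play the roles of |x1-y1| and |x2-y2|."""
    return min(max(2.0 ** (-j / 2) * u, 2.0 ** (j / 2) * v) ** 2
               for j in range(-k, k + 1))

rng = random.Random(1)
for _ in range(1000):
    u, v = rng.random(), rng.random()
    m = approx_min(u, v)
    # The minimum over j recovers uv up to a factor of sqrt(2).
    assert u * v * (1 - 1e-9) <= m <= math.sqrt(2) * u * v * (1 + 1e-9)
```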

Therefore, if (\alpha,\beta) were a counterexample to Littlewood’s conjecture and n=2^k, it would have to be the case that for every j between -k and k the numbers m d_j(\|\alpha m\|,\|\beta m\|)^2 were bounded below by some c>0. In particular, this would need to be the case when j=0, which would say that the numbers m\max\{\|\alpha m\|,\|\beta m\|\}^2 were bounded below. This motivates the following weak version of Littlewood’s conjecture.

Problem 7. Do there exist real numbers \alpha and \beta such that \lim\inf m^{1/2}\max\{\|\alpha m\|,\|\beta m\|\}>0?

Here is what this problem is asking geometrically. For each n the points (\alpha m,\beta m) (mod 1) with 1\leq m\leq n form a subset of [0,1)^2 of size n. A trivial volume argument shows that there must be two of these points within a distance of n^{-1/2}. Problem 7 asks whether there is some pair (\alpha,\beta) such that this trivial upper bound is correct up to a constant for every n. Thus, there is not much room to play around, but neither is there in the one-dimensional case and yet numerous examples exist. Here is a somewhat vaguer problem.

Problem 8. If the answer to Problem 7 is yes, then is there some general procedure, analogous to picking a continued fraction with bounded partial quotients, that yields a large family of examples?

One might think that if the answer to Problem 7 is yes, then the answer to Problem 8 is almost certainly yes as well. But it does not seem to be all that easy to generalize continued fractions in the appropriate way. (Of course, many people have tried to produce higher-dimensional generalizations of continued fractions, and it is possible that out there is just what would be needed.)

Another indication that the Littlewood conjecture is hard is that it is not known even in very concrete cases such as \alpha=\sqrt{2} and \beta=\sqrt{3}, as I have already mentioned. This means that a positive solution to the conjecture would have serious number-theoretic content. However, it might be worth thinking about whether a concrete pair of numbers like this could work for Problem 7. (Added later: this too seems to have been done.)

Here is another problem, which I think of as a “dual” version of Littlewood’s conjecture. I will try to justify this name in a moment. (Added later: it turns out that dual problems such as the ones I am about to mention have been considered in the literature as well, and they are dual in a much more direct, though closely related, sense than the one that I talk about.)

Problem 9. Do there exist real numbers \alpha and \beta such that \lim\inf (r+1)(s+1)\|r\alpha+s\beta\|>0?

The lim inf there is over all pairs of non-negative integers r and s of which at least one is positive. Again, if an example exists then it is best possible, since out of all the numbers r\alpha+s\beta with r,s\leq n^{1/2} there must be two, r\alpha+s\beta and t\alpha+u\beta, that differ by at most n^{-1}. The difference between this and Littlewood’s conjecture is that we are taking a two-dimensional collection of points in [0,1) rather than a one-dimensional collection of points in [0,1)^2.

This problem also has a weak version.

Problem 10. Do there exist real numbers \alpha and \beta such that \lim\inf \max\{r,s\}^2\|r\alpha+s\beta\|>0?

What is “dual” about these problems? I’ll give an informal argument here: yet another aspect of this project would be the question of whether the argument can be turned into a formal argument for equivalences, or at least one-way implications, between these problems and some of the earlier ones.

Let us begin with the weak versions. Problem 7 asks for \alpha and \beta such that at least one of \|r\alpha\| and \|r\beta\| is at least cm^{-1/2} whenever r is a positive integer less than m. Let us simplify things a bit by taking n to be a prime number and trying to find a and b mod n such that at least one of ra and rb lies outside the interval [-cn/m,cn/m] whenever 0<r\leq m.

Now for any given m there are two sets of interest here. One is the set of r such that ar\in [-cn/m,cn/m], and the other is the set of r such that br\in [-cn/m,cn/m]. We would like the intersection of these two sets, which are equal to a^{-1}[-cn/m,cn/m] and b^{-1}[-cn/m,cn/m], to be trivial, that is, equal to \{0\}. If that happens, then we cannot find r in [1,m] with ar and br both in the interval [-cn/m,cn/m].

Here is where I will get very informal. Heuristically speaking, the Fourier transform of a set such as a^{-1}[-k,k] behaves like an arithmetic progression of common difference a and length n/k. (There are all sorts of tricks for making this heuristic better, such as convolving the set with itself so that it has non-negative Fourier coefficients. I won’t worry about such issues here.) Now the intersection of two sets corresponds to the pointwise product of their characteristic functions, and that transforms to the convolution of their Fourier transforms. And the size of the intersection is given by the square of the \ell_2 norm of this convolution. If we ensure that the Fourier transform is real and positive, then the \ell_1 norm of the convolution will not depend on a and b. Therefore, in order to make the \ell_2 norm as small as possible, we want the convolution to be as “spread out” as possible, which happens if there are as few linear relations amongst a and b with coefficients less than k as possible. This suggests that we should look for a and b such that all the numbers ar+bs with r,s\in[-m/c,m/c] are as distinct as possible. And now we are getting close to Problem 10. I haven’t checked, but I think a slightly more complicated argument establishes a similar connection between Problems 2 and 9.

It’s quite possible that the answers to some of these questions are known, or at least that the questions would seem easy to experts in the area.

Added later: as I mentioned above, this is indeed the case. See for example this paper of Pollington and Velani, which shows that the set of all pairs (\alpha,\beta) such that \alpha and \beta are badly approximable and \max\{\|m\alpha\|,\|m\beta\|\}\geq cm^{-1/2} for every m, where c=c(\alpha,\beta)>0, has Hausdorff dimension 2. (Their result is in fact more general than this.) Davenport proved in the early 1950s that the answer to Problem 7 is yes, and that there are uncountably many examples.

The paper where I found a mention of a dual problem is “Simultaneous Diophantine approximation” by Davenport, published in the Proceedings of the LMS, December 1952, pages 406-416.

Could one make a serious attempt to find a counterexample?

Let me return yet again to the question of whether elementary methods are appropriate for the Littlewood problem. As I have shown above, there are several problems that seem to be very closely related to it but that are in one way or another easier, and I don’t see any good reason for ruling out elementary approaches to those. That is particularly true of Problem 5 (the dyadic variant of the separated-points problem).

However, another reason I think that one should not rule out the use of elementary methods is that it is possible that Littlewood’s conjecture is false. I want in this section to outline a few thoughts about how one might go about disproving it.

The first thing to say is that if the conjecture is false, it is not false by very much. The headline result of Einsiedler, Katok and Lindenstrauss is that the set of counterexamples to the conjecture has Hausdorff dimension zero, and this result has a strong bearing on what kind of methods stand a chance of yielding a counterexample.

In order to elaborate on this remark, let me discuss a “combinatorial” way in which one might end up with a real number \alpha for which \liminf_n n\|\alpha n\| is positive. We have already seen that any \alpha with bounded partial quotients in its continued-fraction expansion will do: what I am about to discuss is a way of seeing that without explicitly thinking about continued fractions.

The idea is to start with an attempt \alpha_0 and to make a series of adjustments to it. So let’s start with a truly stupid choice of \alpha_0, namely 0. Instantly we are in trouble, because \|\alpha_0\| is not large. So let us make it as large as we can by taking \alpha_1=1/2. Now that sorts out \|\alpha_1\| but we quickly run into trouble again, since \|2\alpha_1\|=\|1\|=0. So let us adjust again, by moving \alpha_1 until \|2\alpha_1\| becomes as large as possible. There are two minimal ways of doing that: we could take \alpha_2=1/4 or \alpha_2=3/4. Let’s try increasing at every stage, so we take \alpha_2 to be 3/4. Now we run into problems only at \|4\alpha_2\|, when we get \|3\|=0. So let us adjust by taking \alpha_3 such that 4\alpha_3=3\frac 12, which gives us \alpha_3=7/8.

We are running into difficulties, because it appears that \alpha_n=1-2^{-n} if we continue with this algorithm. But at this point we might pause to observe that if m\alpha_n is an integer, we do not need to increase \alpha_n until m\alpha_n is as far from an integer as possible: all that really matters is that it should be further from an integer than c/m. In retrospect, it makes sense for our successive adjustments to be as small as possible, so that we don’t mess up the progress we have made previously.

With this thought in mind, let us try again. We take \alpha_0=0. Since 1 is very small, we had better take \alpha_1 to be as far from an integer as possible, so let us once again take \alpha_1 to be 1/2. But now, when we come to deal with the problem that 2\alpha_1 is an integer, let us increase it from 1 to 4/3 instead of to 3/2. In other words, let us take \alpha_2 to be 2/3. Our next problem comes with 3\alpha_2, so let us change that from 2 to 2\frac 14, which gives us \alpha_3=3/4.

Unfortunately, we are in trouble again, because \alpha_n works out to be n/(n+1). So let us think slightly harder about what we have to do in order to ensure that we do not get into trouble. We are trying to build a sequence \alpha_0,\alpha_1,\alpha_2,\dots that converges to a limit \alpha in such a way that \|n\alpha\|\geq c/n for every n. This we can achieve if we ensure that \|n\alpha_m\|\geq c/n for every n\leq m, and this we can achieve if we make sure that \|n\alpha_m\|\geq 2c/n and |n\alpha-n\alpha_m|\leq c/n for every n\leq m.

Here is another approach. Let us again start with \alpha_0=0 and \alpha_1=1/2. To define \alpha_2, we shall use the following principle: we look for the “worst place” and the “second worst place” and share the badness equally between them. Here the “worst place” is where the disaster occurs that 2\alpha_1=1 and the “second worst place” is where \alpha_1=1/2. So we increase \alpha_1 to \alpha_2=2/3, so that both \|\alpha_2\| and \|2\alpha_2\| are equal to 1/3.

The next disaster occurs when we notice that 3\alpha_2=2. The second worst place could be taken to be either \alpha_2 or 2\alpha_2. There is something to be said for alternately increasing and decreasing our \alpha_n, since that will make the total change smaller than if we always increase, so let us focus on 2\alpha_2. To share out the badness equally, we would like to decrease \alpha_2 slightly so that 2\alpha_3-1=2-3\alpha_3. That tells us to take \alpha_3=3/5.

Probably you can guess by now what happens: we keep going like this, and we end up producing the fractions \frac 01, \frac 12, \frac 23, \frac 35, \frac 58,\dots, and in the limit we get the reciprocal of the golden ratio.
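The limit really is as good as promised, and this is easy to check numerically. A small sketch in Python (dist_to_int is my name; the liminf of n\|n\alpha\| for this \alpha is the classical 1/\sqrt 5\approx 0.447, and the global minimum below is the n=1 term \|\alpha\|\approx 0.382):

```python
from math import sqrt

def dist_to_int(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

# The limit of 0/1, 1/2, 2/3, 3/5, 5/8, ... is (sqrt(5) - 1)/2.
alpha = (sqrt(5) - 1) / 2

# n * ||n alpha|| should stay bounded away from zero.
values = [n * dist_to_int(n * alpha) for n in range(1, 5001)]
worst = min(values)
```

The small values occur at Fibonacci values of n, exactly as the construction suggests.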

But the method of making successive adjustments is much more general and flexible than this — we just happened to do things so efficiently that we got the best result. I confess that I haven’t quite nailed this explanation, but ultimately it should be possible to devise a successive-adjustment procedure that could in principle give any real number whose continued-fraction expansion has bounded partial quotients, and thereby present the continued-fraction solution as the outcome of a just-do-it proof.

Unfortunately, the main point I now want to make is that it is highly unlikely that any such approach could give rise (except perhaps very indirectly) to a counterexample to the Littlewood conjecture. The idea would be to start with a pair of real numbers (\alpha_0,\beta_0) and make successive adjustments to it each time some multiple misbehaved. If you try this, you will soon find yourself running into difficulties, but the result of Einsiedler, Katok and Lindenstrauss tells you in advance to expect this, or at least I think it does.

My heuristic reasoning here is as follows: if you can use a reasonably flexible successive-adjustment argument, then you will tend to get a set that is somewhat Cantor-like: you start with an interval of possibilities, then from it you remove some subintervals, and from what remains you remove further subintervals, and so on. These kinds of ideas show that the Hausdorff dimension of the set of reals with bounded partial quotients is not small: I haven’t yet found a good reference and I don’t know what the precise results are. But turning this on its head, one would expect that for Littlewood’s conjecture, if any successive-adjustment technique worked, it would have to be incredibly non-robust and work almost miraculously: perhaps as one removed subintervals one would sometimes remove whole branches of the tree and it would be far from obvious that by the end of the process there would be any tree left.

It is thoughts such as these that make me interested in Problem 7 (the weak version of Littlewood’s conjecture). For this problem, (\|m\alpha\|,\|m\beta\|) has to avoid a box of radius m^{-1/2}, and therefore area O(m^{-1}). This feels much more like what you have to do when you are trying to find \alpha such that \|m\alpha\| avoids an interval of length O(m^{-1}), so it is tempting to try a successive-adjustment strategy. (Added later: This is indeed the type of argument that Pollington and Velani use. At one point they use a very clever idea of Davenport.)

My belief is that such a pair (\alpha,\beta) exists, but finding one using a successive-adjustment strategy seems not to be wholly straightforward, because there is not an obvious direction in which to take the adjustment. (What is clear is that this direction must vary enough to stop the set of multiples becoming “approximately one-dimensional”, but I don’t see how to do this systematically.) Anyhow, I very much like this problem, because it seems to be of intermediate difficulty. If one did manage to find a good supply of examples, and perhaps even understand the Hausdorff dimension of the set of examples, then one might be able to understand how that set behaves when you intersect it with copies of itself obtained by stretching in one direction and shrinking in the other (which is supposed to help you avoid other rectangles of the same area, and thereby to attempt to give a counterexample to Littlewood’s conjecture). Perhaps the set is sufficiently non-symmetric under transformations of the kind (x,y)\mapsto(2^kx,2^{-k}y) that images of it under such transformations are “independent” in some way and therefore have a non-empty, but very small, intersection.

(Added later: Given what I now know about results in the area, the goal here would have to be different. It would be to understand the set of examples well enough to be able to show that it has non-empty intersections with various transformations of itself. This, it seems, is a known hard problem: see Conjecture 1 of the paper of Pollington and Velani.)

Computer-generated pictures.

I’m a hopeless programmer. If I weren’t, then I would have done the following very simple computer experiment, and I would love it if someone else could do it. It would be to generate a picture as follows. Choose a fairly large prime p and take a square of p\times p pixels. Choose also a small constant c such as 1/10 and an integer M that’s much larger than 1 and (probably) quite a bit smaller than p. (It would be interesting to do this with different constants.) Think of the square as \{0,1,\dots,p-1\}^2. Now make the pixel (x,y) black if there exists a positive integer m\leq M such that (mx,my) mod p lies in the box [-cp/m^{1/2},cp/m^{1/2}]^2. I would very much like to see what the resulting set looks like. In particular, I would like to know whether it is obvious from looking at it that the horizontal and vertical distance scales are the same. (One would have to look at just a small chunk of it since it would be symmetrical in x and y.) And I’d also like to know whether it has a Cantor-ish feel to it. If anyone is willing to spend the half hour it would take a competent programmer to generate a few pictures like this, I would be extremely grateful to have my curiosity satisfied. This wouldn’t have to wait for an official project to start — it could be an interesting preliminary.

Needless to say, one could also generate images of sets related to Littlewood’s conjecture itself, rather than the weak version. One would simply colour a pixel (x,y) black if \|mx/p\|\|my/p\|\leq c/m for some positive integer m\leq M. It would be interesting to try this for various values of c, p and (especially) M. In particular, I am interested to know what the residual set looks like, and whether it gets smaller very quickly (as the dimension-zero result suggests that it should).
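A minimal sketch of this second kind of picture in Python (littlewood_picture is my name, and the parameters below are arbitrary choices; displaying the boolean grid is left to whatever plotting tool is to hand):

```python
def dist_to_int(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def littlewood_picture(p, c, M):
    """Return a p-by-p grid of booleans in which entry (x, y) is True
    ('black') when ||mx/p|| * ||my/p|| <= c/m for some 1 <= m <= M."""
    black = [[False] * p for _ in range(p)]
    for m in range(1, M + 1):
        d = [dist_to_int(m * x / p) for x in range(p)]
        for x in range(p):
            for y in range(p):
                if d[x] * d[y] <= c / m:
                    black[x][y] = True
    return black
```

The whole row x=0 and column y=0 are always black, and the picture is symmetric in x and y, so as with the earlier picture one would look at a chunk away from the axes.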

As well as generating pictures, one could try to test the conjecture in a crude experimental way, by searching for pairs of numbers (\alpha,\beta) such that m\|\alpha m\|\|\beta m\| does not get small (or at least does not get small unless m is large). Indeed, we could define f(M) to be the largest possible value of \min_{1\leq m\leq M}m\|\alpha m\|\|\beta m\|. One can get good estimates of f(M) if M is not too large by simply trying all pairs (\alpha,\beta) such that the decimal expansion terminates after k digits for some manageable k. Of course, if m\|\alpha m\|\|\beta m\| is small for some smallish m, then the same will be true for all pairs in some neighbourhood of (\alpha,\beta), so there may be good ways of speeding up this brute-force approach. (At the very least, one could eliminate all pairs that fail when m=1, then all pairs that fail when m=2, and so on, so that one would search through fewer pairs when one had got further with the search.)
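The brute-force estimate can be sketched as follows (Python; f_estimate is my name, and with k decimal digits the search costs on the order of 10^{2k}M operations, so only very small k are feasible without the speed-ups just mentioned):

```python
def dist_to_int(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def f_estimate(M, k):
    """Estimate f(M) from below: the best value of
    min over 1 <= m <= M of m * ||alpha m|| * ||beta m||,
    taken over all pairs (alpha, beta) of k-digit decimals in [0, 1)."""
    denom = 10 ** k
    best = 0.0
    for a in range(denom):
        alpha = a / denom
        for b in range(a, denom):  # the quantity is symmetric in alpha, beta
            beta = b / denom
            worst = min(m * dist_to_int(m * alpha) * dist_to_int(m * beta)
                        for m in range(1, M + 1))
            best = max(best, worst)
    return best
```

As a sanity check, for M=1 the best pair is \alpha=\beta=1/2, giving the value 1/4.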

Conclusion.

One of the things I like about this project is that there is quite a long list of subproblems of graded difficulty: it seems likely that at least something interesting could be proved, and the aim would be simply to get as far as we can. Even doing a few simple computer investigations would be quite interesting (though if it turns out that f(M) tends to zero slowly with M, then this might well not show up clearly in the data). The dyadic version of the packing problem looks to me as though it could have a reasonably easy solution, at least if a good packing exists. The “weak Littlewood problem” ought to be substantially easier than the main problem — I think there should be a reasonably large set of counterexamples — but hard enough to be interesting (unless the answer is already known, which is certainly possible). Likewise, the higher-dimensional problem should be easier to solve, this time with a positive answer (but again the answer may be known). And perhaps the “dual versions” are sufficiently different that they too have a chance of being of the right level of difficulty. (Added later: the fact that the weak version is indeed known changes my assessment of this project somewhat. I would be interested to understand the proof of that as well as possible, but given that it is known, the place to focus on first might be the dyadic packing problem in three dimensions.)

For the sake of commenters, here is a set of abbreviations that might be useful.

LP (Littlewood problem). PP (packing problem — the one where you want a large well-separated set with the distance that isn’t really a distance). DPP (dyadic packing problem). WLP (weak version of Littlewood problem). HDLP (high-dimensional Littlewood problem). HDDPP, HDWLP (obvious meanings). DLP (dual Littlewood problem). WDLP (weak dual Littlewood problem).


66 Responses to “Problems related to Littlewood’s conjecture”

  1. gowers Says:

    After I posted the above, it occurred to me that there is a question that is even more of a toy problem than DPP. It is the following. Let d be the dyadic metric on [0,1), let n=2^k, let a_i=i/n (i=0,1,\dots,n-1) and let b_i be a_i with its digits reversed. Then we know that d(a_r,a_s)d(b_r,b_s) is always at least 1/2n. DPP asks whether we can find three sequences with a similar property. A more specialized question that may turn out to be interesting (though at this stage I haven’t thought about it enough to be sure) is whether one can do it by using the two sequences above and adding a third. That is, if a_i and b_i are defined as above, is it possible to find a sequence (c_0,c_1,\dots,c_{n-1}) that again permutes the a_i in such a way that d(a_r,a_s)d(b_r,b_s)d(c_r,c_s) is always at least \gamma/n for some absolute constant \gamma>0?

    What is appealing about this question is that the condition on the c_i can be expressed in a nice way. What we need is for d(c_r,c_s) to be large when d(a_r,a_s)d(b_r,b_s) is small. But d(a_r,a_s)d(b_r,b_s)=2^t/4n, where t is the length of the shortest interval that goes from the first digit where a_r and a_s differ to the last digit where they differ. For example, if a_r and a_s are 0.11011000001001 and 0.11010100011001, then they differ in places five, six and ten, which means that d(a_r,a_s)d(b_r,b_s)=2^6/4n.

    A quick proof of the general assertion: if the first place where they differ is in digit i and the last place is in digit j, then d(a_r,a_s)=2^{-i} and d(b_r,b_s)=2^{-(k+1-j)}, so the product is 2^{j-i-k-1}=2^t/4n, since t=j-i+1 and 2^k=n.
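    The formula is easy to confirm by machine. A quick Python check (bit_reverse and dyadic_dist are my names; digits are counted from 1 at the most significant end, as in the text):

```python
def bit_reverse(r, k):
    """Reverse the k-bit binary expansion of r."""
    out = 0
    for _ in range(k):
        out = (out << 1) | (r & 1)
        r >>= 1
    return out

def dyadic_dist(r, s, k):
    """d(a_r, a_s) = 2^(-i), where i is the first of the k binary
    digits (counted from the most significant end) where r, s differ."""
    i = k - (r ^ s).bit_length() + 1
    return 2.0 ** (-i)

k = 8
n = 2 ** k
# Minimum of d(a_r, a_s) d(b_r, b_s) over all pairs r != s:
worst = min(dyadic_dist(r, s, k) *
            dyadic_dist(bit_reverse(r, k), bit_reverse(s, k), k)
            for r in range(n) for s in range(r))
```

    The minimum comes out as exactly 1/2n, attained when r and s differ in a single digit (t=1), which matches the formula 2^t/4n.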

    What this tells us about c_r and c_s is that they must differ somewhere in the first t+C digits, where C is some absolute constant. To put this in a tidy way, let us define a notion of “distance” between two 01 sequences of length k. It resembles the Hamming distance but is not even close to being a metric. We define D(a,a') to be the length of the smallest interval that contains the bits where a and a' differ. We are then looking for a bijection \phi from \{0,1\}^k to itself with the property that if D(a,a')\leq t then \phi(a) and \phi(a') differ somewhere in the first t+C bits.

    For fixed t we can think of this condition as colouring a graph. We join two sequences if their “distance” is at most t, and we would like to find a proper 2^{t+C} colouring of the vertices (with the additional condition that the colour classes have equal size). It’s possible that even this problem is interesting, though ultimately one would have to do it for all t at once, which one would expect to be considerably harder.

  2. Proposals (Tim Gowers): Polynomial DHJ, and Littlewood’s problem « The polymath blog Says:

    [...] Tim Gowers described two additional proposed polymath projects. One about the first unknown cases of the polynomial Density Hales Jewett problem. Another about Littlewood’s conjecture. [...]

  3. gowers Says:

    I now realize that the problem for fixed t (see end of my previous comment) is not interesting, since one can define \phi_t(a) as follows. When r\leq t, the rth digit of \phi_t(a) is the sum mod 2 of all the digits in places that are equal to r mod t. For instance, the first digit of \phi_2(a) is the sum of the digits of a in odd places and the second digit of \phi_2(a) is the sum of the digits in even places. One then extends \phi_t to a bijection. The reason this works is that if a and a' differ only on an interval of length t, then at least one of the parities is different for a and a', so \phi_t(a) and \phi_t(a') differ in at least one of the first t places, as required.
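    A sketch of this map in Python, together with an exhaustive check of the claimed property for one value of t (phi_t_prefix is my name; only the first t digits of \phi_t matter here, so only those are computed):

```python
from itertools import product

def phi_t_prefix(bits, t):
    """First t digits of phi_t: digit r is the parity of the digits of
    the input in positions congruent to r mod t (positions from 1)."""
    out = [0] * t
    for pos, bit in enumerate(bits, start=1):
        out[(pos - 1) % t] ^= bit
    return out

# Exhaustive check for k = 10, t = 3: flipping any non-empty pattern
# inside a window of length t changes one of the first t output digits.
k, t = 10, 3
for bits in product((0, 1), repeat=k):
    for start in range(k - t + 1):
        for flips in product((0, 1), repeat=t):
            if not any(flips):
                continue
            other = list(bits)
            for j, f in enumerate(flips):
                other[start + j] ^= f
            assert phi_t_prefix(list(bits), t) != phi_t_prefix(other, t)
```

    The check succeeds for exactly the reason given above: the positions inside a window of length t have distinct residues mod t, so at least one parity class changes.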

    From this perspective, the problem now is that the functions \phi_t seem to be rather different from each other, so getting a single function that works for all t still appears to be a challenge.

    Added later: I can now do it. Details to follow very soon in a later comment.

  4. A possible Polymath problem « Euclidean Ramsey Theory Says:

    [...] http://gowers.wordpress.com/2009/11/17/problems-related-to-littlewoods-conjecture-2/ [...]

  5. Tom Says:

    Tim, I think this code will draw the pictures you’re after in the first paragraph of ‘Computer-generated pictures’. I’m a lurking Computer Scientist and don’t claim to understand the whole of this post, so I may have got the wrong end of the stick. But it’s here all the same in case it’s useful. Run this function in MATLAB (or the free alternative Octave) with values of c, p and M. It doesn’t do anything clever, so start with small values of p and work up gradually!

    function weak_littlewood_ex(c, p, M)
    [x y] = meshgrid(0:p-1, 0:p-1);
    r = zeros(p, p);

    for m = 1:M
    mx = mod(m .* x, p);
    my = mod(m .* y, p);
    % residues near p also count as close to 0 mod p
    xt = min(mx, p - mx) <= c * p / sqrt(m);
    yt = min(my, p - my) <= c * p / sqrt(m);
    r = r | (xt & yt);
    end

    imagesc(1 - r); colormap gray; axis equal; axis xy;

  6. gowers Says:

    OK, I think I’ve solved Problem 5. I also think that doesn’t ruin the project, for reasons that I’ll explain later. Recall from my first comment that it is sufficient to find a function \phi from \{0,1\}^k to \{0,1\}^k such that if D(a,a')\leq t then \phi(a) and \phi(a') differ in one of the first t+C digits. One can in fact find a best possible \phi. That is, we can find a map \phi such that if a and a' differ only on an interval of length t, then \phi(a) and \phi(a') differ in one of the first t coordinates. In fact, we can make it linear (and doing so makes it easier to find \phi). To do this, we build up the matrix of \phi row by row as follows. The first row is all 1s. Note that if a and a' differ in an interval of length 1 then they differ in exactly one place, so they have different parities, which means that their images under \phi will differ in the first coordinate. (I am thinking of a and a' as column vectors here.)

    I now want to choose the second row in such a way that all the 2\times 2 submatrices you can form by taking two consecutive terms of the first row and the two terms below them in the second row are non-singular. It is easy to see that this forces the second row to be either 10101… or 01010… . Why did we want this property? Because if a and a' differ just in places i and i+1, then the difference between their images will be the image of a non-zero vector that is zero in all coordinates apart from i and i+1. Therefore, it is the image of a non-zero vector under one of the 2\times 2 non-singular matrices, so it is non-zero.

    We now continue this process. We will be done if we can find a k\times k matrix over \mathbb{F}_2 such that any r\times r submatrix that is formed from the first r rows and r consecutive columns is non-singular. Suppose we have managed to construct the first r rows of such a matrix. Let the first r terms of the next row be arbitrary. I now need the following simple lemma: for any (r+1)\times(r+1) matrix A whose top left-hand r\times r submatrix is non-singular, either A is also non-singular or it becomes non-singular when you change the entry in the last row and the last column. This lemma will guarantee that I can fill up the rest of row r+1 and keep the property I want.

    The lemma is simple. Since the top left-hand r\times r submatrix is non-singular, there is a unique combination of the first r rows that equals the last row in the first r places. So to ensure independence, all we have to do is make sure that the last entry of the last row is not equal to the last entry of this linear combination.

    Thus, the answer to Problem 5 is yes.
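    The greedy row-by-row construction can be carried out mechanically. Here is a sketch in Python over \mathbb{F}_2 (build_matrix, window_ok and f2_rank are my names; by the lemma, the choice of the new entry never fails):

```python
def f2_rank(vecs):
    """Rank over F_2 of a list of bitmask row vectors."""
    pivots = {}
    for v in vecs:
        while v:
            b = v.bit_length() - 1
            if b in pivots:
                v ^= pivots[b]
            else:
                pivots[b] = v
                break
    return len(pivots)

def window_ok(M, r, j):
    """Is the r x r submatrix on rows 0..r-1, columns j..j+r-1 non-singular?"""
    vecs = [sum(M[i][j + c] << c for c in range(r)) for i in range(r)]
    return f2_rank(vecs) == r

def build_matrix(k):
    """Greedily build a k x k 0/1 matrix in which every r x r submatrix
    formed from the first r rows and r consecutive columns is non-singular."""
    M = [[1] * k]               # first row: all ones
    for r in range(1, k):
        row = [0] * k           # the first r entries may be arbitrary
        M.append(row)
        for j in range(k - r):
            # choose entry (r, j + r) to make the (r+1) x (r+1) window
            # on rows 0..r, columns j..j+r non-singular
            for bit in (0, 1):
                row[j + r] = bit
                if window_ok(M, r + 1, j):
                    break
            else:
                raise AssertionError("the lemma says this cannot happen")
    return M
```

    Running it reproduces the forced alternating second row 0101… and verifies the window property for every size at once.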

  7. gowers Says:

    In the light of the previous comment, there are two natural follow-up questions. First, is it possible to use the trick of moving intervals about to convert the example that solves Problem 5 into an example that solves Problem 3? Secondly, is it possible to find a four-dimensional dyadic example?

    If there did turn out to be a relatively simple solution to Problem 3, then the next question would be a rather speculative one. We have seen that if we write numbers in a Fibonacci base, then the “two-dimensional Littlewood” example resembles a digit-reversal with interval exchanges. Is it possible to take a solution to Problem 3, understand the combinatorial phenomena that make it work, and get the same combinatorial phenomena to appear with a clever choice of numbers \alpha and \beta? There is not yet any reason to suppose that the answer is yes, since we have not demonstrated that there is any way of finding something like the linear map \phi in the behaviour of multiples of some number \beta. Having said that, the fact that \phi is linear is not completely unpromising …

  8. Ernie Croot Says:

    What a great set of problems! It’s too bad I don’t have a lot of time right now to think about them…

    Here are a few basic observations about your problems:

    1. I wonder whether there might be some natural interpretations of your questions in terms of elliptic curves: One can map a 2D torus (Z x Z) mod 1 to elliptic curves y^2 = x^3 + ax + b using the Weierstrass p-function. And, computing multiples of points back in the torus corresponds to taking multiples of the corresponding elements in the elliptic curve group.

    2. Here is a thought on Problem 6: Perhaps Olson’s theorem on zero sums mod p is somehow useful. How? Suppose you work in dimension k, and let your real numbers be \alpha_1,...,\alpha_k. Let A be the multiset of all vectors

    v_s = (||s \alpha_1||,\ ...,\ ||s \alpha_k||), where s=0,1,...,\lfloor 2 \sqrt{n} \rfloor.

    If a subset sum of elements from A (not all 0) comes close enough to 0\ mod\ 1 in enough coordinates, then it produces for us an integer m such that

    || m \alpha_1|| \cdots ||m \alpha_k|| is very near 0.

    And what does this have to do with Olson’s theorem? Well, perhaps there is a way to discretize the above problem somehow, so as to interpret the problem of trying to get v_m \in \ell A near 0\ mod\ 1 in all or lots of coordinates as a 0-sum problem: Olson’s theorem says that if A' \subseteq G := (Z_p)^k contains (0,0,...,0), and |A'| \geq k p or so, then there exists a subset of A' whose sum is 0 in G. What’s encouraging here is the fact that |A'| need not be so large — only of size kp or so. Now, if we were to somehow discretize A, to produce A', so that a sum of elements in A is near the 0 vector mod 1 if and only if the corresponding sum of vectors in A' is near 0\ \mod\ p in all or most coordinates, we could take p to be of size about \sqrt{n}/k, unless I am missing something. This seems quite large to me, and perhaps it means that we can get a vector v_m so that

    ||m \alpha_i|| < k/\sqrt{n}, i=1,2,…,k.

    If so, then we get

    ||m \alpha_1 || \cdots ||m \alpha_k|| < k^k/n^{k/2}.

    3. There is a theorem of David Bailey, the Borwein brothers, and Simon Plouffe called the “hotspot theorem”, some version of which was originally proved by Piatetski-Shapiro back in the 1950s (I had produced my own elementary proof, and was about to publish it, and then was informed by Andrew Granville that it was proved by P-S in an old Russian journal — it left me depressed for a whole month) that might be relevant to attacking Littlewood’s conjecture, particularly when you go to consider the dynamics of (||2^k \alpha_1||, ||2^k \alpha_2||). Although this theorem of Bailey et al is only stated for a single real number \alpha, perhaps there is a 2D generalization.

    Basically, their result allows you to show that if 2^k \alpha\ mod\ 1 is even slightly non-uniformly distributed, then there is a “hot spot” — that is, there is a number \beta \in [0,1] such that the sequence 2^k \alpha\ mod\ 1 lies in [\beta-\epsilon, \beta+\epsilon] many times more often than it “should”. So, the theorem gives you some sort of measure-focusing information, which perhaps can be exploited somehow (at least if a 2D version could be proved).

    Here is the paper of Bailey et al: Go to

    http://crd.lbl.gov/~dhbailey/dhbpapers/

    and click on paper 75. Alternatively, you can see my elementary proof

    http://www.math.gatech.edu/~ecroot/hotspot_journal.pdf

    It may not be so well written — I forgot whether I got it into a journal-quality form. If it isn't well written, my apologies in advance.

    • Ernie Croot Says:

      I see I should have read it more carefully before posting the above comments: In comment 1 above, I should have written R x R mod 1, not Z x Z mod 1; in fact, the usual way the torus is written is C/\Lambda, where C denotes the complex numbers and where \Lambda denotes the particular lattice we use, in this case the integer lattice Z + iZ. The mapping used sends a point z on this torus to (p(z),p'(z)), where p(z) is the Weierstrass p-function. This point lies on a certain elliptic curve of complex solutions to y^2 = 4x^3 + a x + b, where the a,b are determined by our particular lattice \Lambda.

      For comment 2, of course the conjectured bounds I wrote down are too good to be true. But still, Olson’s theorem may be useful to achieve much, much weaker bounds that nonetheless solve the problem for large k.

      And most importantly, in comment 3, I left off the name “Michal Misiurewicz”. It was he and Bailey alone who proved the “strong hotspot theorem”, though it was applied to various problems by the Borweins et al.

  9. gowers Says:

    I have had a small think and am fairly convinced that the example I presented three comments ago for Problem 5 can be pushed to give an example for the dyadic problem in four dimensions. To do this, it is sufficient to find a linear map \phi such that if a and a' differ only in an interval of length t then \phi(a) and \phi(a') differ somewhere in the first t+C coordinates and somewhere in the last t+C coordinates. For this it turns out that we need a k\times k matrix such that every submatrix you form by taking r consecutive columns and either the first r+C rows or the last r+C rows has rank r. I haven’t yet proved that such a matrix exists, but the constraints don’t seem that tight (at least if C is something like 3 or 4) and I’m fairly sure that it won’t turn out to be all that hard to come up with a proof.

    If this turns out to be correct, then the next question, I suppose, is whether one can produce dyadic examples in arbitrarily high dimension. That would be pretty strange, but I’m beginning to think that it is a serious possibility. Quite what bearing it would have on the Littlewood problem is another question altogether.

  10. Kevin O'Bryant Says:

    Here’s Mathematica code that does what you asked. A nice thing about the code is that, while it is only practical for disappointingly small primes, you get a slider bar to change the value of c, and that works rapidly.

    Note: the picture is for \{-p/2,...,0,...,p/2\}^2, which seemed more reasonable given the calculation being made.

    Closeness[p_, M_] :=
    Closeness[p, M] =
    1/p Table[
    Min @@ Table[
    N[Sqrt[m]] Max[Abs[Mod[m x, p, -(p - 1)/2]],
    Abs[Mod[m y, p, -(p - 1)/2]]], {m, M}], {x, -(p - 1)/
    2, (p - 1)/2}, {y, -(p - 1)/2, (p - 1)/2}];
    pic2[p_, M_, c_] :=
    pic2[p, M, c] = Map[Boole[# > c] &, Closeness[p, M], {2}];
    Manipulate[
    Image[pic2[p, M, c], ImageSize -> 4 p], {{p, 83}, 11, 149,
    2}, {{M, 17}, 1, p, 1}, {{c, .1}, 0, 1/2}]

  11. gowers Says:

    I should say that Sean Lip has also produced some code, which works for reasonably large primes (up to about 3000 or so). It has produced some fabulous pictures, some of which I will post on this blog soon. But before I do that I want to find a setting of the parameters that gives as informative a picture as possible (but whether they are informative or not, they look wonderful).

  12. Luke G Says:

    Is it at all suspected that this problem could be unsolvable (along the lines of the continuum hypothesis)? It asks a question about a “tricky” property of the real numbers, and that raises a red flag in my mind about whether it can be solved at all.

    • Christian Reiher Says:

      For problems involving quantification over infinite totalities (such as Littlewood’s problem) it is, of course, difficult to prove solvability without actually solving them. But nevertheless one could argue that there is at least one mathematical reason as to why Littlewood’s conjecture (=LC) should be easier than the continuum hypothesis (=CH):

      In contradistinction to CH, LC is only quantifying (=talking) over single real numbers, and not over arbitrary collections of real numbers, i.e. over the power set of $\mathbb{R}$. Similarly, most of the “well-known” undecidable analytical statements, such as Borel’s question as to whether every set of real numbers having “measure zero in the strong sense” has to be countable, deal with the power set of $\mathbb{R}$.

      If one is interested in LC only for very concrete real numbers, such as $\sqrt{2}$ and $\sqrt{3}$, or, more generally, quadratic irrationalities, then it becomes formalizable in Peano Arithmetic, and there are not many statements that can be formalized in that theory and which are known to be undecidable in the very strong theories we are usually working with, e.g. axiomatic set theory, ZFC.

      Still, it just occurred to me that even the restricted version of LC dealing with quadratic irrationalities only seems to be of logical complexity $\Delta_2$ (does somebody see a clever way to formalize it as $\Pi_1$?) and that some twenty-five years ago or so Saharon Shelah wrote a complicated paper exhibiting a $\Delta_2$ statement formally undecidable over Peano arithmetic and whose truth or falsity in the absolute sense is (in contradistinction to, e.g. the Paris–Harrington theorem or Gödel’s examples) not known.

  13. Miguel Lacruz Says:

    Tim, the following is an attempt to generalize your finite construction with Fibonacci numbers. Let (F_n) denote the Fibonacci sequence and set G_r=rF_n\mod F_{n+1}\; for \;1 \leq r \leq F_{n+1}-1. We have the following

    Claim. If 1 < r < s < F_{n+1}\; then \;|r-s| \cdot |G_r -G_s| \geq F_{n-1}.

    Although I only have numerical evidence that the claim above holds when n=6,7,8 and that the inequality is best possible, I have the feeling that it leads to a simplification of the two-dimensional problem.

    Now, let x_r=r/F_{n+1}, \; y_r=G_r/F_{n+1}\; for \;1 \leq r \leq F_{n+1}-1. It is plain that (x_r,y_r) \in [0,1] \times [0,1] for 1 \leq r \leq F_{n+1}-1 and

    \displaystyle{|x_r - x_s| \cdot |y_r - y_s| = \frac{|r-s| \cdot |G_r - G_s|}{F_{n+1}^2} \geq \frac{F_{n-1}}{F_{n+1}^2} \geq \frac{1}{4F_{n+1}}}.

    If we could only prove the claim above, then we would have a new two-dimensional example. Also, perhaps the above construction could be pushed to solve Problem 3: for instance, if we let G_r=rF_{n} \mod F_{n+2} and H_r=rF_{n+1} \mod F_{n+2}, then a possible candidate could be

    \displaystyle{x_r=\frac{r}{F_{n+2}}, \;y_r=\frac{G_r}{F_{n+2}}\;, z_r=\frac{H_r}{F_{n+2}}.}
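    For anyone who wants to test the claim beyond n=6,7,8, here is a short numerical check (a sketch of my own; the indexing convention F_1 = F_2 = 1 and the function names are my choices):

```python
# Numerical check of the claim: |r - s| * |G_r - G_s| >= F_{n-1}
# for 1 < r < s < F_{n+1}, where G_r = r*F_n mod F_{n+1}.
# (Indexing convention F_1 = F_2 = 1 is an assumption on my part.)

def fib(k):
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

def min_product(n):
    Fn, Fn1 = fib(n), fib(n + 1)
    G = [(r * Fn) % Fn1 for r in range(Fn1)]   # G[r] = r*F_n mod F_{n+1}
    return min(abs(r - s) * abs(G[r] - G[s])
               for r in range(2, Fn1 - 1)
               for s in range(r + 1, Fn1))

for n in (6, 7, 8):
    assert min_product(n) >= fib(n - 1)   # the claimed inequality
```

    For n=6 the minimum is exactly F_5 = 5, consistent with the inequality being best possible.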

  14. gowers Says:

    If I understand correctly, this example is a special case of the general construction I talked about in the post: you take a number \alpha with bounded quotients in its continued-fraction expansion and take multiples of (n^{-1},\alpha) mod 1. The inequality in your claim is true and follows from the fact that the continued-fraction expansion of F_n/F_{n+1} is [1,1,1,\dots,1]. And unfortunately the suggestion you have for Problem 3 doesn’t work (I have attempted something similar in the past) because for any quadratic irrational \alpha there are linear relationships between 1, \alpha and \alpha^2. For example, if x is the reciprocal of the golden ratio, then x^2=1-x. This reflects itself in the fact that in your case, (F_n/F_{n+2})+(F_{n+1}/F_{n+2})=1, so whenever y_r is close to an integer, z_r will be as well, which is exactly what you don’t want. Similar problems happen with any construction that is based on just one quadratic irrational.

  15. Miguel Lacruz Says:

    Tim, all I wanted to show is how your construction with 8/13 can be done in general with any quotient of consecutive Fibonacci numbers. My claim that this would lead to a new solution of the two-dimensional packing problem was pretentious. Sorry about that.

  16. jozsef Says:

    I have to apologize for suggesting another project here, but this other one might have something to do with Littlewood’s conjecture. The problem I would love to see as a polymath project is the Heilbronn triangle problem: place n points in a unit square so as to maximize the area of the smallest of the triangles determined by the n points. Heilbronn conjectured that this maximum is at most \frac{c}{n^2} for some constant c, but Komlós, Pintz, and Szemerédi showed that it can be as large as \frac{\ln{n}}{n^2}. The best upper bound, due to Roth, is \frac{1}{n^{1.117...}}. This problem has geometry, combinatorics, analysis, and probably much more in it, and it might be accessible to many mathematicians.

    • gowers Says:

      I see what you mean by everything you say here, including the connection with Littlewood’s conjecture. (I don’t think solving one would help to solve the other, but both this problem and Problem 3 above involve sets of points that are in some area-related sense separated.) And it’s always nice when there is a yawning chasm between the best known upper and lower bounds.

  17. obtext Says:

    What an interesting problem! One way to make the question look more ‘natural’ (in some sense) could be to replace \Vert x \Vert by \lvert \sin \pi x \rvert (an equivalent ‘metric’, up to a constant). Using simple trigonometric identities and a change of variables, the conjecture can then be reformulated as:

    \liminf_{n \rightarrow \infty} n \lvert \cos n \alpha - \cos n \beta \rvert = 0

    for all values of \alpha and \beta . This raises questions about the density properties of the set

    S(\alpha, \beta) = \{ n (e^{i n \alpha} - e^{i n \beta}) : n \geq 1 \} \subseteq \mathbb{C}

    and its projections. Is it even known, for example, that zero is not a limit point of this set?
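    As a first numerical poke at this question, one can simply scan for the smallest element of S(\alpha, \beta) in modulus up to some cutoff (a sketch of mine; the pair \alpha, \beta and the cutoff N are arbitrary illustrative choices):

```python
# Smallest modulus of an element of S(alpha, beta) = {n(e^{in alpha} - e^{in beta})}
# over 1 <= n <= N, for one illustrative pair (alpha, beta).
import cmath
import math

def smallest_modulus(alpha, beta, N):
    return min(abs(n * (cmath.exp(1j * n * alpha) - cmath.exp(1j * n * beta)))
               for n in range(1, N + 1))

alpha = math.pi * math.sqrt(2)
beta = math.pi * (1 + math.sqrt(5)) / 2
m = smallest_modulus(alpha, beta, 1000)   # for this pair the minimum occurs at n = 157
```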

    • Ernie Croot Says:

      I haven’t had time to think about what you have written carefully (I am about to go to a conference and have to prepare my talk), but perhaps the usual “Padé approximation” technology will allow you to say something useful and interesting about the distribution of your S(\alpha,\beta) for \alpha,\beta algebraic. Basically, if \alpha_1, \dots, \alpha_k are algebraic (solutions to polynomials having rational coefficients), then it is known how to use Padé approximation (i.e. rational-function approximation) technology for e^x to produce a common integer denominator D, and integer numerators C_1,\dots,C_k, such that

      |e^{\alpha_i} - C_i/D| < D^{-\Omega(1)},

      where \Omega(1) denotes a function tending to infinity as D \to \infty. Note that this is much stronger than what one would expect — one would expect that the error in the approximation is at best 1/D^2 or so. So, when you plug in algebraic numbers to the exponential function, you get out special numbers that can be unusually simultaneously well-approximated by rationals.

      My memory here could be in error, as I haven’t thought about transcendental number theory and diophantine geometry in ages; however, I once wrote an expository note on some of this for an REU student (this technology is, for example, the standard way that one shows that \pi is transcendental); you can find the note by going here:

      http://www.math.gatech.edu/~ecroot/transcend.pdf

  18. gowers Says:

    Let me check that. Certainly |\sin\pi x| is equivalent to \|x\|. Then n|\sin\pi nx||\sin\pi ny|=n|\sin\pi nx\sin\pi ny|, which is equal to half of n|\cos n(x+y)-\cos n(x-y)|. So we take \alpha=x+y and \beta=x-y.

    The odd thing is that S(\alpha,\beta) does not have to have zero as a limit point, since we can take \alpha=0 and \beta=2\pi\gamma, where \gamma is some quadratic irrational. The size |1-e^{in\beta}|=|1-e^{2\pi in\gamma}| is proportional to \|n\gamma\|, so \liminf n|1-e^{in\beta}| is greater than zero.

    Ah, I understand better what is going on. The point is that e^{in\alpha} and e^{in\beta} need not get closer than n^{-1}, but for the projections to the x axis also to be separated by n^{-1} you need e^{in\alpha} and e^{in\beta} to be close only when they are both comfortably away from 1 and -1. It does seem like a nice reformulation, though whether it actually changes how one tackles the problem I don’t know. I must think about it.

    • Alec Edgington Says:

      I have made a few plots to help visualize these sets: http://www.obtext.com/littlewood/

      (fig_5 illustrates the case you describe in your second paragraph.)

      If nothing else, these pictures confirm (as we know already) that any counterexample is going to have to be really pathological. I quite like fig_4 (\alpha =1 , \beta = (1 + \sqrt{5})\pi ), though, which with N = 10^5 points plotted looks like a 17-petalled flower; I’m not sure what’s going on here!

  19. gowers Says:

    I realize there was a small misprint in my previous comment, which is that I should have written n|\cos\pi n(x+y)-\cos\pi n(x-y)|. So we take \alpha=\pi(x+y) and \beta=\pi(x-y).

    Alec, I’d be curious to know what it looks like when \alpha=\pi\sqrt{2} and \beta=\pi(1+\sqrt{5})/2. I’d be particularly interested in a blow-up of the area near the origin.

    • Alec Edgington Says:

      Hi Tim,

      I’ve added some plots for those values (fig_6 to fig_9).

      The point very near zero corresponds to n = 157 .

    • gowers Says:

      Many thanks for that. For anyone who’s interested, \|157\sqrt{2}\| and \|157\phi\| are both around 0.03, so multiplying them together and multiplying the product by 157 gets you around 0.15.
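      The arithmetic is easy to reproduce (a two-line check; the helper name dist is mine):

```python
# Reproducing the arithmetic: ||157*sqrt(2)|| and ||157*phi|| are both
# about 0.03, and 157 times their product is about 0.155.
import math

def dist(t):
    """Distance from t to the nearest integer, i.e. ||t||."""
    return abs(t - round(t))

phi = (1 + math.sqrt(5)) / 2
val = 157 * dist(157 * math.sqrt(2)) * dist(157 * phi)   # about 0.155
```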

    • Alec Edgington Says:

      In light of the change of variables, the calculation I would expect to do here is 157 \cdot \Vert 157 x \Vert \cdot \Vert 157 y \Vert , where x, y = \phi \pm \sqrt{2} . This is indeed small (about 0.0005 ), mainly because 157 (\phi - \sqrt{2} ) is so close to an integer (\frac{32}{157} turns out to be a convergent of the continued-fraction expansion of \phi - \sqrt{2}). It is interesting that you also get a (fairly) small value with \sqrt{2} and \phi but I wonder if this is a red herring?

      Anyway, fig_10 shows what you get with \phi \pm \sqrt{2} : circular patterns seem to emerge.

    • gowers Says:

      Ah — I didn’t think 0.15 was all that impressive, but was too polite to say so. So thanks for that clarification!

    • Alec Edgington Says:

      You’re welcome, and I’ve just noticed I dropped a factor of two :-). It’s late…

  20. Miguel Lacruz Says:

    Alec’s formulation of the conjecture brings my attention to the sequence (T_n) of Chebyshev polynomials. Since T_n(\cos \alpha)=\cos(n \alpha), the question is whether or not

    \displaystyle{\liminf_{n \rightarrow \infty}n|T_n(x)-T_n(y)|=0}

    for all values of x, y \in [-1,1].

    Let \varepsilon >0 and consider the set

    E_n=\{(x,y) \in [-1,1]^2: n|T_n(x)-T_n(y)| > \varepsilon\}.

    I plotted some of these sets using Wolfram Alpha and I obtained some nice looking pictures. If you copy the code
    plot 4|chebyshevT[4,x]-chebyshevT[4,y]|>1, x=-1..1, y=-1..1

    and paste it into Wolfram Alpha, you get the picture of E_4 when \varepsilon=1, and if you play with the parameters n=4 and \varepsilon=1, you get other pictures. Notice that the blue regions E_n have increasing area as n increases.

    The conjecture would be false if one could prove that

    \displaystyle{\bigcap_{k=1}^\infty \bigcup_{n=k}^\infty E_n \neq \emptyset}.

    This reminds me of the Borel-Cantelli lemma. If the events (E_n) were independent, then the condition

    \displaystyle{\sum_{n=1}^\infty \mu(E_n) = \infty}

    would imply that

    \displaystyle{\limsup_{n \rightarrow \infty}n|T_n(x)-T_n(y)| \geq \varepsilon}

    for almost every (x,y) \in [-1,1]^2. Unfortunately, the events (E_n) are not independent. (The probability measure \mu here is the normalized two-dimensional Lebesgue measure on the square [-1,1]^2.) Now, I wonder if one could apply a probabilistic argument in the absence of independence, such as one of the counterparts of the Borel-Cantelli lemma.
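    For readers without Wolfram Alpha to hand, the (normalized) area of E_n can be estimated with a crude grid count, using T_n(x)=\cos(n \arccos x) (a sketch of mine; the grid resolution is arbitrary):

```python
# Grid estimate of the normalized area of
# E_n = {(x, y) in [-1, 1]^2 : n |T_n(x) - T_n(y)| > eps},
# using T_n(x) = cos(n * arccos(x)).
import math

def T(n, x):
    return math.cos(n * math.acos(x))

def area_estimate(n, eps, grid=200):
    pts = [-1 + 2 * (i + 0.5) / grid for i in range(grid)]   # cell midpoints
    vals = [T(n, x) for x in pts]
    hits = sum(1 for u in vals for v in vals if n * abs(u - v) > eps)
    return hits / grid ** 2

a4 = area_estimate(4, 1.0)   # rough area of E_4 for eps = 1
```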

    • Alec Edgington Says:

      The Chebyshev polynomials do give a nice visualization. I think that the condition (for the conjecture to be false) is not quite as you state — unless I misunderstand your notation, shouldn’t it be

      \bigcup_{\epsilon > 0} \bigcap_{n \geq 1} E_n(\epsilon) \neq \emptyset

      (where I’ve written E_n(\epsilon) for your E_n )?

    • Miguel Lacruz Says:

      Alec, the condition for Littlewood’s conjecture to be false is that

      \displaystyle{\bigcup_{\varepsilon>0}\bigcup_{k \geq 1}\bigcap_{n \geq k} E_n({\varepsilon}) \neq \emptyset}.

    • Alec Edgington Says:

      Hi Miguel,

      Yes — and our statements are equivalent, I think. For suppose we have \epsilon > 0 , \alpha , \beta , and k \geq 1 such that n \lvert \cos{n \alpha} - \cos{n \beta} \rvert \geq \epsilon for all n \geq k . Let \epsilon_1 = \epsilon / k , \alpha_1 = k \alpha and \beta_1 = k \beta . Then for all n \geq 1 , n \lvert \cos{n \alpha_1} - \cos{n \beta_1} \rvert = n \lvert \cos{nk\alpha} - \cos{nk\beta} \rvert = \frac{1}{k}(nk) \lvert \cos{(nk)\alpha} - \cos{(nk)\beta} \rvert \geq \frac{1}{k} \epsilon = \epsilon_1 . (And the converse is trivial.)

      So to find a counterexample it’s enough to show that for some \epsilon > 0, the E_n for n \geq 1 have non-empty intersection.


    • Miguel Lacruz Says:

      Alec, you are right about the equivalence of both statements. Also, your last comment shows that Littlewood’s conjecture is equivalent to the condition
      \displaystyle{\inf_{n \geq 1} n |\cos n \alpha -\cos n \beta|=0}
      for all values of \alpha, \beta \in \mathbb{R}.

  21. gowers Says:

    I’ll try that Wolfram alpha thing soon. But let me say that fairly soon I’ll also post some of Sean Lip’s pictures, which are of similar sets for quite large values of n and look beautiful.

    While on the topic of Wolfram alpha, another experiment that is quite easy to do is the following: you type in something like “approximate fractional part of (nth Fibonacci number times square root of 2)” for various values of n, which can be quite large. If you do it for a range of consecutive values of n, it looks as though the answers you get out are approximately uniformly distributed, which, if true, would imply that the square root of 2 and the golden ratio do not give a counterexample to the Littlewood conjecture. However, proving uniform distribution for something like that is very similar to some notorious open problems …
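    This experiment is also easy to script directly. One caveat: ordinary floating point loses the fractional part completely once F_n is large, so the sketch below (my own; the precision setting is an arbitrary choice) uses high-precision decimal arithmetic:

```python
# Fractional parts of F_n * sqrt(2).  Floats lose the fractional part
# once F_n is large, so use Decimal with generous precision instead.
from decimal import Decimal, getcontext

getcontext().prec = 120
SQRT2 = Decimal(2).sqrt()

def frac_parts(count):
    parts, a, b = [], 1, 1
    for _ in range(count):
        parts.append(float((a * SQRT2) % 1))   # {F_n * sqrt(2)}
        a, b = b, a + b
    return parts

parts = frac_parts(50)
```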

  22. Miguel Lacruz Says:

    I posted some pictures of the level sets for Chebyshev polynomials so that you can see them at the click of a mouse.

    http://mlacruz.wordpress.com/

  23. gowers Says:

    Those are gorgeous, and similar to the images that Sean Lip produced, though his were purely black and white but with a cutoff for n. (In other words, he blacked out all points for which the minimum was less than \epsilon if you took it over all n up to some N.) I must hurry up and finish the post in which I discuss these images and some other issues related to the problem.

  24. Alec Edgington Says:

    Here’s a trivial observation, which nevertheless gave me a little insight into why LC is hard. Consider the two statements:

    \mathrm{A} \quad \inf_{n \geq 1} n \Vert nx \Vert = 0 \quad (\forall x)

    and

    \mathrm{B} \quad \inf_{n \geq 1} n^\frac{1}{2} \Vert nx \Vert = 0 \quad (\forall x) .

    Now it’s obvious that \mathrm{A} \Rightarrow \mathrm{LC} \Rightarrow \mathrm{B} . But A is false; B is true; and LC is sitting in the middle of the huge space between them!
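    The golden ratio makes the gap concrete: it witnesses the failure of A, while, as it must, it fails to dodge B. A small numerical illustration (the cutoff N and helper name are my choices):

```python
# x = golden ratio: n*||n*x|| stays bounded away from zero (A fails at x),
# while sqrt(n)*||n*x|| still dips toward zero (consistent with B).
import math

def dist(t):
    return abs(t - round(t))   # ||t||, distance to the nearest integer

phi = (1 + math.sqrt(5)) / 2
N = 10_000
min_A = min(n * dist(n * phi) for n in range(1, N + 1))
min_B = min(math.sqrt(n) * dist(n * phi) for n in range(1, N + 1))
```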

  25. Alec Edgington Says:

    I’ve been thinking about another way to generalize the problem. Let A = \{ (x,y) : \lvert xy \rvert < 1 \} . For this A , Littlewood’s conjecture asserts that \bigcup_{n \geq 1} \frac{1}{n} (\mathbb{Z}^2 + \epsilon n^{-\frac{1}{2}} A) = \mathbb{R}^2 for all \epsilon > 0 .

    What can we say about sets A \subseteq \mathbb{R}^2 and functions f such that

    \bigcup_{n \geq 1} \frac{1}{n} (\mathbb{Z}^2 + \epsilon f(n) A) = \mathbb{R}^2 for all \epsilon > 0 ?

    More generally still, in k dimensions we can ask whether:

    \mathrm{LC}_{k, A, f} \quad \bigcup_{n \geq 1} \frac{1}{n} (\mathbb{Z}^k + \epsilon f(n) A) = \mathbb{R}^k \quad (\forall \epsilon > 0 ).

    For tractability, let’s assume that A is a ‘star’ (a union of line segments from the origin).

    In one dimension, we are asking about sets A \subseteq \mathbb{R} and functions f such that

    \bigcup_{n \geq 1} \frac{1}{n} (\mathbb{Z} + \epsilon f(n) A) = \mathbb{R} for all \epsilon > 0 .

    It’s natural to consider A =(-1,1) , and then this condition boils down to:

    \inf_{n \geq 1} f(n)^{-1} \Vert nx \Vert = 0 for all x \in \mathbb{R} .

    Here we can get away with f(n) = n^{-1+c} for any c > 0, but not with f(n) = n^{-1}. Thinking about how volumes scale, I guess that f(n) = n^{-\frac{1}{k}} is going to be the ‘difficult’ case in general.

    Obviously, if f \leq g and A \subseteq B then \mathrm{LC}_{k, A, f} \Rightarrow \mathrm{LC}_{k, B, g} . Also, I think (though haven’t checked rigorously) that if A is bounded, then

    \mathrm{LC}_{k, A, f} \Rightarrow \sum_n f(n)^k = \infty

    (consider the volume of the union intersected with [0,1]^k).

    Tim’s Problem 7 corresponds to the case A = \{ (x,y) : \max(\lvert x \rvert, \lvert y \rvert) < 1 \} and f(n) = n^{-\frac{1}{2}} .


  28. Alec Edgington Says:

    Addendum: just noticed that since the truth of \mathrm{LC}_{k, A, f} is unaffected when A (or f) is multiplied by a positive constant, the fact that it fails in the context of Problem 7 means that it fails whenever A is bounded (with f(n) = n^{-\frac{1}{2}}). So the unboundedness of the set A is essential to LC.

  29. Alec Edgington Says:

    Further to the above, here’s a simple question that may be easier to answer than Littlewood’s, if the answer is negative (or harder, if the answer is positive). Let \mathcal{A}_k be the set of open subsets A \subseteq \mathbb{R}^k such that [-1,1] A = A. (The requirement of openness is not crucial, but rather convenient.) Let \mathcal{L}_k be the set of A \in \mathcal{A}_k such that

    (\forall \epsilon > 0) \quad \bigcup_{n \geq 1} \frac{1}{n} (\mathbb{Z}^k + \epsilon n^{-\frac1k} A) = \mathbb{R}^k.

    If A \in \mathcal{A}_k, is it conceivable that A \in \mathcal{L}_{k} if and only if A has infinite volume?

    This is true for k=1, where there are just two cases to consider (A is either \mathbb{R} or a finite symmetric open interval). When k = 2, the ‘if’ would imply Littlewood’s conjecture, but it would also imply strengthenings of it. With A = \{ (x,y) : \lvert xy \rvert < 1 \; \wedge \; \lvert y \rvert < 1 \} , it would say that not only is there a sequence n_r \rightarrow \infty such that n_r \Vert n_r x \Vert \Vert n_r y \Vert \rightarrow 0 , but that we can find such a sequence along which \sqrt{n_r} \Vert n_r y \Vert \rightarrow 0 as well. The ‘only if’ for k=2 would imply that for any \beta > 0 there exist x and y such that

    n \Vert nx \Vert \Vert ny \Vert \cdot (\sqrt{n} \max(\Vert nx \Vert, \Vert ny \Vert))^\beta

    is bounded away from zero. This comes from taking A = \{ (x,y): \lvert xy \rvert \cdot \max(\lvert x \rvert, \lvert y \rvert)^\beta < 1 \}; it can also be seen as a strengthening of Problem 7.

    Calculations like these lead us naturally to consider the function \phi_A : \mathbb{R}^k \rightarrow \mathbb{R}_{\geq 0} defined by

    \phi_A (\mathbf{x}) = \inf \{ \lvert \lambda \rvert : \mathbf{x} \in \lambda A \}

    (so that A = \{ \mathbf{x} : \phi_A (\mathbf{x}) < 1 \} and \phi_A (\alpha \mathbf{x}) = \lvert \alpha \rvert \phi_A (\mathbf{x})). This is a little bit like a norm. The correspondence between sets and LC-like conditions is simplest in the case where A has the additional property that

    \lvert x_i \rvert \leq \lvert y_i \rvert (\forall i) \Rightarrow \phi_A (\mathbf{x}) \leq \phi_A (\mathbf{y})

    (as in both the examples above). Then the condition A \in \mathcal{L}_k translates to:

    (\forall \mathbf{x} \in \mathbb{R}^k) \quad \inf_{n \geq 1} \phi_A (n^\frac1k \Vert n \mathbf{x} \Vert) = 0

    where \Vert \mathbf{x} \Vert for \mathbf{x} \in \mathbb{R}^k is defined pointwise by \Vert \mathbf{x} \Vert_i = \Vert x_i \Vert.


  33. Jack D'Aurizio Says:

    I have a simple question: is it true that, even if \alpha is a badly approximable irrational number,

    \displaystyle{\inf_{n \in \mathbb{N}} \Vert n^2 \alpha \Vert = 0}?

    Thanks in advance, Jack D’Aurizio.

    • gowers Says:

      Yes it is true. Here’s a lovely proof I was shown by Vitaly Bergelson. (It doesn’t give the best known bound, but it’s a very nice argument.)

      Split [0,1) up into k equal-sized subintervals, and colour each integer n according to which interval n^2\alpha/2 lies in mod 1. By van der Waerden’s theorem we can find n-d, n and n+d all of the same colour. Since any two values lying in the same subinterval differ by something within 1/k of an integer, this implies that \alpha((n-d)^2-2n^2+(n+d)^2)/2 is within 2/k of an integer. But it also equals \alpha d^2, so we are done (since k was arbitrary).
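      As a sanity check, the infimum is easy to watch becoming small numerically, say for \alpha=\sqrt{2} (a sketch of mine; the cutoff is arbitrary):

```python
# min over n <= 1000 of ||n^2 * sqrt(2)||: the van der Waerden argument
# says the infimum over all n is 0; numerically it gets small quickly
# (already about 0.002 at n = 13).
import math

def dist(t):
    return abs(t - round(t))   # ||t||, distance to the nearest integer

best = min(dist(n * n * math.sqrt(2)) for n in range(1, 1001))
```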

  34. Jack D'Aurizio Says:

    Really, really nice.
    And what about improvements of the inequalities of Heilbronn, Zaharescu and Iwaniec (which I just discovered),

    \Vert n^2 \alpha \Vert < n^{-\theta} for infinitely many n,

    with \theta=1/2 or \theta=2/3-\epsilon?

  35. Thomas Says:

    I have an idea for the conjecture: let f(n)=\max_{0<a,b<1} \min_{1 \leq q \leq n} q \Vert qa \Vert \Vert qb \Vert. Littlewood’s conjecture is then equivalent to \lim_{n \rightarrow \infty} f(n)=0. With some help, I have computed f(n) for n up to 100000, and one of the interesting results I have found is that the values for a and b for some n are pretty close to \sqrt{2}-1 and 2-\sqrt{3}, so solving the problem for those values might shed some light on the problem.


    • Alec Edgington Says:

      This reminds me of some computations that I did a while ago (perhaps the same ones, though it sounds like you went further). Namely, define \epsilon_N (x, y) = \min \{ n \cdot \Vert nx \Vert \cdot \Vert ny \Vert : 1 \leq n \leq N \}. For each N there will be some point (x_N, y_N) (conceivably more than one) where \epsilon_N (x, y) is maximized.

      To find x_N, y_N and \epsilon_N = \epsilon_N (x_N, y_N) for a given N, we end up having to solve three simultaneous equations of the form n_i (n_i x - r_i) (n_i y - s_i) = q_i \epsilon (i = 0, 1, 2), where n_i, r_i, s_i \in \mathbb{N} and q_i = \pm 1. I found the n_i, r_i, s_i, q_i, up to N=62. It appears from the data so far that r_i and s_i are determined by n_i, according to some unknown rule exemplified in the following table:

      n r s
      1 0 0
      2 1 1
      3 1 1
      4 2 1
      5 2 1
      7 3 2
      8 3 2
      15 6 4
      17 7 5
      18 7 5
      26 11 7
      34 13 9

      Here is a table of values of n_0, n_1, n_2 and q_0, q_1, q_2 = \pm 1 for 3 \leq N \leq 62. I’ll also list the resulting values of x_N, y_N and \epsilon_N:

      N n0 q0 n1 q1 n2 q2 x y e
      1 .5000 .5000 .2500
      2 .3694 .3694 .1365
      3 1 + 2 + 3 – .4286 .2857 .1224
      ...
      7 4 – 5 + 7 + .4119 .2680 .1013
      ...
      15 5 + 7 + 15 + .4113 .2692 .0977
      ...
      17 3 – 5 – 15 – .3889 .2692 .0962
      18 5 + 7 + 17 + .4110 .2699 .0957
      ...
      22 3 – 15 – 18 + .3871 .2688 .0937
      ...
      26 8 + 15 + 18 + .3877 .2644 .0936
      ...
      34 8 + 18 + 34 – .3877 .2643 .0928
      ...
      49 7 + 17 – 26 – .4125 .2697 .0882
      ...
      63 ? ? ? ? ? ? ? ? ?

      In this table, three dots indicate repetition of the row above until the next displayed value of N. For example, the values of x_N, y_N, \epsilon_N are the same for N = 3, 4, 5, 6.

      From the data it looks plausible that (\sqrt{2} - 1, 2 - \sqrt{3}) \approx (0.4142, 0.2679) is a limit point of (x_N, y_N): in other words, that (\sqrt{2}, \sqrt{3}) is in some sense a ‘worst case’ for LC.
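      For what it’s worth, \epsilon_N can also be evaluated directly at the conjectured worst point (a sketch of mine; note this gives the value at that single point, not the maximum over (x,y) tabulated above):

```python
# epsilon_N(x, y) = min over 1 <= n <= N of n * ||n*x|| * ||n*y||,
# evaluated at (x, y) = (sqrt(2) - 1, 2 - sqrt(3)) with N = 62 to
# match the table.  This is the value AT the point, which can be much
# smaller than the tabulated maxima over (x, y).
import math

def dist(t):
    return abs(t - round(t))   # ||t||, distance to the nearest integer

def eps_N(x, y, N):
    return min(n * dist(n * x) * dist(n * y) for n in range(1, N + 1))

e = eps_N(math.sqrt(2) - 1, 2 - math.sqrt(3), 62)
```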

  36. Thomas Says:

    Your definition of \epsilon_N(x,y) is what I used in my f(n): my f(n)=\max_{(a,b)} \epsilon_n(a,b). Hope my math doesn’t get mangled again.

  37. Thomas Says:

    Good, my math worked this time :-)

  38. Thomas Says:

    And LC is equivalent to f(n) converging to 0.

  39. Measures Are Better | Gödel's Lost Letter and P=NP Says:

    [...] There are conjectures that seem to be approachable, and there are those that seem unapproachable. The Littlewood conjecture falls into the former category. Yet it has resisted efforts to solve it now for years. Akshay Venkatesh of NYU has a lovely survey about the problem. Also Tim Gowers has some interesting ideas about it—see this post. [...]

  40. mkatkov Says:

    By Theorem 1, for every n there exist integers s, t, k, s', t', k' such that 0 \leq \alpha (t-s) n -k <1 and 0 \leq \beta (t'-s') n - k' <1 . By common sense, there exist integers k_1, k_1' such that 0 \leq \alpha (t-s) (t'-s') n^2 -k_1 <1 and 0 \leq \beta (t-s) (t'-s') n^2 -k'_1 <1 , that is, k_1, k_1' such that || \alpha (t-s) (t'-s') || <\frac{1}{n^2} and || \beta (t-s) (t'-s') || <\frac{1}{n^2} . Therefore, n^2 || \alpha (t-s) (t'-s') || \times || \beta (t-s) (t'-s') || <\frac{1}{n^2} . Since t-s \leq n and t'-s' \leq n , we get (t-s) (t'-s') || \alpha (t-s) (t'-s') || \times || \beta (t-s) (t'-s') || < \frac{1}{n^2} .
