This is the third in a series of posts in which I discuss problems that could perhaps form Polymath projects. Again, I am elaborating on a brief discussion that appeared in an earlier post on possible future projects. [Not for the first time in my experience, WordPress’s dates have gone funny and this was posted not on the 17th as it says above but on the 20th.]

An obvious objection to the Littlewood conjecture as a Polymath project is that it is notoriously hard. On its own that might not necessarily be a convincing argument, since part of the point of Polymath is to attempt to succeed where individual mathematicians have failed. However, a second objection is that the best results in the direction of the Littlewood conjecture, due to Einsiedler, Katok and Lindenstrauss, use methods that are far from elementary (and far from understood by me). I envisage this project as an elementary one, at least to begin with, so does that make it completely unrealistic? I shall try to argue in this post that there is plenty that could potentially be done by elementary methods, even if attempting to prove the conjecture itself is probably too ambitious.

Another advantage of tackling the conjecture by elementary means is that if we find ourselves forced to reintroduce the non-elementary methods that have led to the very interesting results of Einsiedler, Katok and Lindenstrauss, we will have a deeper understanding of those methods than if we had just passively learnt about them. I myself prefer rediscovering things to learning them: it isn't always practical, but it's easier if you half bear in mind that they are there and have a vague idea about them.

Still on the topic of elementary versus non-elementary, one of the factors that contributed to the success of the DHJ project was that there was a non-elementary way of looking at the problem that we were trying to solve by elementary means. That meant that those participants who knew their ergodic theory could look at proposed proof strategies and make useful comments about whether they were likely to succeed. I think of it as like trying to find one’s way out of a jungle. The non-elementary methods give you something like a satellite picture of your surroundings, which enables you to plan your route instead of wandering around in the dense vegetation with no idea where you are. But there are things that the satellite picture won’t show, such as narrow but useful paths. So sometimes the person with the satellite picture knows that the person without it is wasting time, but sometimes there are important ideas that the person with the satellite picture will not spot. With a collaboration, one can of course have the best of both worlds.

For those who are interested, there is a very nice survey article by Akshay Venkatesh about the work of Einsiedler, Katok and Lindenstrauss, which explains how dynamics comes into the problem.

Before we discuss Littlewood's conjecture, let us look at an easier question. Suppose that $\alpha$ is a real number. To what extent can the multiples of $\alpha$ stay away from integers? To make this question precise, let us write $\|x\|$ for the distance from $x$ to the nearest integer. Then we would like to examine the sequence $\|\alpha\|, \|2\alpha\|, \|3\alpha\|, \dots$.

Before thinking about the right question to ask, let us quickly prove a simple and very well-known theorem. (Unless my history is letting me down, which is possible, it is due to Dirichlet.)

**Theorem 1.** *For every real number $\alpha$ and every positive integer $m$ there exists a positive integer $n\leq m$ such that $\|n\alpha\|\leq 1/m$.*

**Proof.** Let $t$ be chosen uniformly at random from the interval $[0,1)$. Then the expected number of numbers in the sequence $t, t+\alpha, t+2\alpha, \dots, t+m\alpha$ that lie in the interval $[0,1/m)$ mod 1 is $(m+1)/m>1$. Therefore, there must exist $i$ and $j$ with $i<j$ such that both $t+i\alpha$ and $t+j\alpha$ lie in the interval $[0,1/m)$ mod 1, from which it follows that $\|(j-i)\alpha\|\leq 1/m$. QED

Note (i) that Dirichlet used the pigeonhole principle rather than an averaging argument and (ii) that we could, if we had been bothered, have obtained a bound of $1/(m+1)$ from the above argument. (This illustrates the point, which is often handy, that even if partitioning a set $X$ into subsets of a certain form is hard — not that it is here — finding subsets $X_1,\dots,X_k$ of that form such that every element of $X$ is in the same number of the $X_i$ may well be easy. In that case, something very similar to the pigeonhole principle works. I should expand this parenthetical remark into a Tricki article.)
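As a quick sanity check (this numerical experiment is mine, not part of the original argument), one can watch Dirichlet's bound, in the slightly stronger $1/(m+1)$ form just mentioned, hold for a concrete irrational:

```python
from math import sqrt

def dist(x):
    """||x||: the distance from x to the nearest integer."""
    return abs(x - round(x))

def f(alpha, m):
    """min over 1 <= n <= m of ||n*alpha||; Theorem 1 says this is at most 1/m."""
    return min(dist(n * alpha) for n in range(1, m + 1))

# Check the slightly stronger pigeonhole bound 1/(m+1) for alpha = sqrt(2).
alpha = sqrt(2)
ok = all(f(alpha, m) <= 1 / (m + 1) for m in range(1, 300))
```

Nothing here is a proof, of course, but it is a useful template for experimenting with other $\alpha$.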

What this argument shows is that $\|n\alpha\|$ must sometimes be small. Since it clearly won't stay small (at least if $\alpha$ is irrational), this suggests that we should be interested in the quantity $f(m)=\min_{1\leq n\leq m}\|n\alpha\|$. And the obvious question then is: how fast must $f(m)$ (or how slowly can it) tend to $0$ as $m$ tends to infinity?

The argument above shows that $f(m)\leq 1/m$, and this is asymptotically best possible, because it is not hard to show that if $\alpha$ has a continued fraction with bounded quotients then $\|n\alpha\|\geq c/n$ for some positive constant $c$ (that depends on the upper bound of those quotients). In particular, if $\alpha$ is a quadratic irrational, then $\|n\alpha\|$ is at least $c/n$ for every positive integer $n$.
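The lower bound $\|n\alpha\|\geq c/n$ for a quadratic irrational can also be observed directly; here is a small empirical sketch of mine for $\alpha=\sqrt 2$ (the minimum of $n\|n\sqrt 2\|$ turns out to occur at $n=2$):

```python
from math import sqrt

def dist(x):
    """||x||: the distance from x to the nearest integer."""
    return abs(x - round(x))

# For the quadratic irrational sqrt(2), n * ||n*sqrt(2)|| should stay bounded
# below by a positive constant; empirically the minimum is attained at n = 2,
# where it equals 2 * (3 - 2*sqrt(2)) = 0.3431...
worst = min(n * dist(n * sqrt(2)) for n in range(1, 10 ** 5))
```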

This suggests that the slowest possible rate at which $f(m)$ can tend to zero is achieved when $\alpha$ is the golden ratio, and that is indeed the case. So this problem is completely understood: basically, the continued-fraction expansion of $\alpha$ tells you exactly how $f(m)$ behaves.

Littlewood's conjecture is about what at first seems to be an innocent generalization of the above result. Instead of looking at one real number $\alpha$, let us look at two, $\alpha$ and $\beta$. In broad terms, our goal is to understand to what extent it is possible for *both* $\|n\alpha\|$ and $\|n\beta\|$ to be reasonably large. The precise question that Littlewood asked is (equivalent to) the following.

**Problem 2.** *Let $\alpha$ and $\beta$ be real numbers, and for each $n$ define $g(n)$ to be $\|n\alpha\|\|n\beta\|$. Must $g(n)$ tend to zero faster than $n^{-1}$?*

By "faster", I mean at a rate that makes $\liminf_{n\to\infty}ng(n)=0$. Littlewood conjectured that the answer was yes. A more usual way of formulating the conjecture is as the statement that $\liminf_{n\to\infty}n\|n\alpha\|\|n\beta\|=0$, which is obviously equivalent to a positive answer to Problem 2.

Since $\|n\alpha\|\leq 1/n$ for infinitely many $n$ (and $\|n\beta\|$ is always at most $1/2$), it is trivial that $g(n)$ tends to zero at least as fast as $n^{-1}$ along a subsequence, and the only way that it could fail to tend to zero faster than that is if $n\beta$ magically stays away from an integer whenever $n\alpha$ threatens to get close to one, and vice versa.

One question that it is reasonable to ask is the following: why, when we are considering the question of whether the point $(n\alpha,n\beta)$ can remain far from a pair of integers, do we *multiply* together $\|n\alpha\|$ and $\|n\beta\|$? Is that really a natural thing to do? I shall come back to this question, but for now let me remark (slightly cryptically) that this formulation leads to symmetries that are critical to the work that has been done on it.

**A more general question.**

Theorem 1 leads to the following nice construction of a set $P$ of $m$ points in $[0,1]^2$ such that if $(x,y)$ and $(x',y')$ are distinct elements of $P$, then $|x-x'||y-y'|\geq c/m$. (Note that this is best possible up to the constant, since $|x-x'|$ alone must sometimes be at most $1/(m-1)$.) One takes $P$ to be the set of all points $(n/m,n\alpha)$ with $0\leq n<m$, where $n\alpha$ is reduced mod 1 and $\alpha$ is a number with bounded quotients in its continued-fraction expansion (or, if you prefer, is something concrete like $\sqrt 2$).

Why does this work? Well, if $0\leq n<n'<m$, then

$|n/m-n'/m|\cdot|n\alpha-n'\alpha| \geq \frac{n'-n}{m}\|(n'-n)\alpha\| \geq \frac{n'-n}{m}\cdot\frac{c}{n'-n}=\frac{c}{m},$

by our choice of $\alpha$.

This shows that we can use a quadratic irrational, or any other "counterexample to the one-dimensional analogue of the Littlewood conjecture", to produce a set of points with the desired property. Let us define the "distance" between two points $(x,y)$ and $(x',y')$ to be $|x-x'||y-y'|$, noting that this is very definitely not a metric. Then what we have produced is $m$ points in the unit square such that any two have distance at least $c/m$ from each other.
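The two-dimensional construction is easy to test in code. Here is a sketch of my own, with $\alpha=\sqrt 2$; the quantity `sep` is $m$ times the smallest pairwise "distance", which should stay bounded away from zero:

```python
from math import sqrt
from itertools import combinations

def construction(m, alpha=sqrt(2)):
    """The m points (n/m, n*alpha mod 1) described in the text."""
    return [(n / m, (n * alpha) % 1.0) for n in range(m)]

def min_distance(points):
    """Smallest pairwise 'distance' |x-x'||y-y'| (not a metric!)."""
    return min(abs(x - u) * abs(y - v) for (x, y), (u, v) in combinations(points, 2))

m = 200
sep = m * min_distance(construction(m))  # should be a constant, around 0.34 here
```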

Can we do this in three dimensions? This time, we define the "distance" between $(x,y,z)$ and $(x',y',z')$ to be $|x-x'||y-y'||z-z'|$. Let me ask this question as a serious problem.

**Problem 3.** *Is there a constant $c>0$ such that for every $m$ it is possible to find $m$ points in $[0,1]^3$ such that the distance between any two of them (as defined above) is at least $c/m$?*

The reason for asking this question is the following simple observation.

**Observation 4.** *If there is a counterexample to Littlewood’s conjecture, then the answer to Problem 3 is yes.*

**Proof.** We generalize the two-dimensional construction in the obvious way, by defining $P$ to be the set of all points $(n/m,n\alpha,n\beta)$ with $0\leq n<m$. Again, $n\alpha$ and $n\beta$ are reduced mod 1, and $(\alpha,\beta)$ is a pair of real numbers such that $n\|n\alpha\|\|n\beta\|$ is bounded below by a positive constant $c$. Checking that this works is straightforward. QED

What is the point of asking this question? Well, one reason is that it is clearly very closely related to Littlewood’s conjecture, but it is far more general. Therefore, if such a set of points *does* exist, then it should be easier to find it than to find a counterexample to Littlewood’s conjecture, whereas if it *doesn’t* exist, then proving that it doesn’t exist might *also* be easier (because now it would be a purely combinatorial problem rather than a number-theoretic one).

Unfortunately, there are objections to the above arguments. It seems to be very hard to find a set of well-separated points in three dimensions (I'll discuss what one *can* do in a moment), and part of the reason is that there are $\binom m2$ pairs of points to worry about. So it is tempting to introduce dependences, by which I mean to come up with a construction such that if one pair of points is separated it causes many other pairs to be separated as well. But that thought leads one inexorably to the idea that the best approach could be to start by finding a counterexample to the Littlewood conjecture … (This is an issue that I have discussed before on this blog. Sometimes combinatorial generalizations seem to be no easier to tackle than the problems they generalize.)

If finding an example of a collection of well-separated points turns out not to be easier to think about than finding a counterexample to Littlewood's conjecture, what about trying to prove that such a collection cannot exist? Formally speaking, it is of course harder, but *if* such a collection cannot exist, then it could be that to prove it one is forced to think about the problem in the correct, more general way. I'm not sure how I feel about this. I have no decent idea about how to go about proving that a well-separated collection doesn't exist, and if I attempt to do so there is always a little imaginary voice saying to me, "If you managed to prove this, then you'd have shown the highly non-trivial (indeed, not known) number-theoretic fact that $\liminf_{n\to\infty}n\|n\sqrt 2\|\|n\sqrt 3\|=0$." Do I believe that such a fact might conceivably be provable by much more general combinatorial techniques? It seems like a long shot, but I can't think of a good reason for its being impossible.

Incidentally, it is easy to find $m$ points that are separated by $cm^{-1}(\log m)^{-2}$. There are two (ultimately not all that different) ways of doing this. One is just to choose any old maximal $\delta$-separated set of points. The set of points at distance at most $\delta$ from any given point has volume at most $\delta(\log(1/\delta))^2$ or thereabouts, so if $\delta$ is $cm^{-1}(\log m)^{-2}$ then this is about $c/m$, so you can fit $m$ points in before there is any chance of the set being maximal. The other way of doing it is to choose points randomly until the expected number of pairs that are too close exceeds half the number of points. You then throw away one point from each such pair. This uses exactly the same volume calculation (since the volume of the set of points close to $x$ is the probability that a random $y$ gets too close to $x$) and gives essentially the same bound.
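The maximal-set argument can be simulated directly. In this sketch of mine the constant in front of $m^{-1}(\log m)^{-2}$ is chosen conservatively (0.01) so that the greedy loop accepts most candidates and terminates quickly:

```python
import random
from math import log

def pdist3(p, q):
    """The three-dimensional product 'distance'."""
    return abs(p[0] - q[0]) * abs(p[1] - q[1]) * abs(p[2] - q[2])

random.seed(0)
m = 100
delta = 0.01 / (m * log(m) ** 2)  # separation of order 1/(m (log m)^2)

points = []
while len(points) < m:
    cand = (random.random(), random.random(), random.random())
    # keep a candidate only if it is delta-separated from everything so far
    if all(pdist3(cand, p) >= delta for p in points):
        points.append(cand)
```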

But the weakness with those arguments is that they don’t exploit the fact that the “distance” is not in fact a metric and the “ball” around a point is far from convex. The greedy approach takes no account of the fact that the balls around the points could overlap considerably — all the more so if you try to choose the points in order to make this happen. However, I know of no better argument. (Of course, in the two-dimensional case I have just given a better argument, but that doesn’t help us.)

**A variant of Problem 3.**

One of the difficulties in thinking about Problem 3 is that we do not have a big supply of examples in the two-dimensional case. However, there is another construction that doesn't quite work, but "morally" does work, which goes as follows. For convenience, let $n=2^k$ and take all $n$ points $(x,y)$ such that both $x$ and $y$ have binary expansions that terminate after exactly $k$ digits (if necessary, one should add zeros to achieve this) and the digit sequence for $y$ is the reverse of the digit sequence for $x$. For example, if $k=4$ then one of the points is $(0.1011, 0.1101)$ in binary.

Why should this work? Well, for two distinct points to have $x$ coordinates that are very close, they will have to first differ in some late digit, which means that their $y$ coordinates will have to differ in an early digit. More precisely, if the $x$ coordinates first differ in the $j$th digit, then the $y$ coordinates differ at or before the $(k+1-j)$th digit, so the product of the differences should be at least about $2^{-j}2^{-(k+1-j)}=2^{-(k+1)}$.

So why doesn't it work? Unfortunately, that argument makes the elementary blunder of assuming that if two numbers differ in an early digit, then they must be far apart. But the example of $0.100000$ and $0.011111$ shows that that is false. And if we take the numbers $0.1000001$ and $0.0111110$, then we have two numbers such that both they and their reversals are close (indeed, each of these two digit strings is a palindrome, so each number is its own reversal).

I mentioned this to Nets Katz once, and he came up with the suggestion of changing the metric to a dyadic one. That is, we *define* the distance between two numbers in $[0,1)$ to be $2^{-j}$ if their binary expansions first differ in the $j$th digit. With this definition, which is in some ways more natural than the "real" metric on $[0,1)$, the digit-reversal construction works.
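A small experiment (mine, not from the original discussion) confirms both halves of this: the digit-reversal points are perfectly separated in the dyadic metric, while in the ordinary metric they fail because of palindromic near-misses such as $0.1000001$ and $0.0111110$:

```python
def reverse_bits(j, k):
    """Reverse the k binary digits of j (0 <= j < 2**k)."""
    return int(format(j, '0{}b'.format(k))[::-1], 2)

def dyadic(j1, j2, k):
    """Dyadic distance: 2^(-i) if the k-digit expansions first differ in digit i."""
    if j1 == j2:
        return 0.0
    i = k - (j1 ^ j2).bit_length() + 1  # 1-based position of the first differing digit
    return 2.0 ** (-i)

k = 9
n = 2 ** k
rev = [reverse_bits(j, k) for j in range(n)]

# Product of dyadic distances for the point set {(j / 2^k, rev[j] / 2^k)}:
worst = min(dyadic(a, b, k) * dyadic(rev[a], rev[b], k)
            for a in range(n) for b in range(a + 1, n))
# worst comes out as exactly 2^(-k-1): dyadically, the construction works.

# By contrast, 65/128 = 0.1000001 and 62/128 = 0.0111110 in binary are
# palindromes, so with the ordinary metric both coordinate differences of the
# corresponding points are 3/128, and the product is far below 2^(-8).
```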

That may feel a bit like cheating, but if that’s the way you feel, then have a go at the next question, and also read on to the end of the next section.

**Problem 5.** *Is it possible to find $n$ points in $[0,1]^3$ such that if $(x,y,z)$ and $(x',y',z')$ are any two of them then the product of the dyadic distances between corresponding coordinates is at least $c/n$?*

Of course, one wants to do this for all $n$ with an absolute constant $c>0$.

I have a hazy memory that Nets Katz and I managed to find such a set when we discussed this problem, but I’m not quite sure whether that memory is correct. In any case, I don’t have an example at my fingertips right now and it could be a good place to start. (It could be that what I am remembering is something else, namely a way of modifying the digit-reversal construction so that it works for the usual metric. *Added later: I now think that that is indeed what I was remembering, since I have reconstructed the modification. See below.*) Note that the dyadic distance between any two numbers is at least half the actual distance, so if one could prove that no dyadic-separated set existed, one would have proved Littlewood’s conjecture. (*Added later still: I have now found a dyadic-separated set. See the sixth comment below. I don’t remember it from before, but neither am I certain that it is new. Added later still: on further reflection, it does feel faintly familiar. Nets, if you ever read this, does the example ring any bells with you?*)

While I am at it, here is a general question.

**Problem 6.** *Does the Littlewood conjecture, or indeed any of the related problems discussed here, become easy if the dimension is sufficiently high?*

For example, if $\alpha_1,\dots,\alpha_d$ are real numbers, is it known that $\liminf_{n\to\infty}n\|n\alpha_1\|\dots\|n\alpha_d\|=0$ provided $d$ is large enough? I think it may be but I have not found it by Googling. Problem 6 is potentially important in the context of the dyadic problem: if the answer to Problem 5 turns out to be yes (as I vaguely remember), then we do not have a proof of Littlewood's conjecture. But it could be that the answer to a sufficiently high-dimensional analogue of Problem 5 is no, in which case one would have a combinatorial proof of a high-dimensional version of Littlewood's conjecture.

**Are the two constructions related?**

We now have two ways of producing well-separated subsets of $[0,1]^2$ (one of them by changing the metric): using a quadratic irrational, and the digit-reversal idea. However, these two methods may be less different than they at first appear. To see this, let us consider what happens if we try to produce an example using the golden ratio, except that to keep the discussion finite we shall use a good approximation to it of the form $p/q$, where $p$ and $q$ are consecutive Fibonacci numbers. In fact, let us go further and take $p/q=8/13$. What's more, let us make our points live in the grid $\{0,1,\dots,12\}^2$ rather than in $[0,1]^2$. Thus, we shall take the set of all points $(n,8n)$ with the second coordinate reduced mod 13. These are $(0,0)$, $(1,8)$, $(2,3)$, $(3,11)$, $(4,6)$, $(5,1)$, $(6,9)$, $(7,4)$, $(8,12)$, $(9,7)$, $(10,2)$, $(11,10)$, $(12,5)$. Note how beautifully these points behave themselves: whenever one coordinate is close to $0$ (mod 13), the other one stays away.

How can we connect this with digit-reversal? Obviously we do not want to take base-ten representations. The natural base if we are thinking about Fibonacci numbers is … to write numbers as sums of Fibonacci numbers. To make the representation unique, we forbid the use of two consecutive Fibonacci numbers (it is an easy exercise to check that this gives a unique representation). An easy algorithm for working out the representation is to keep subtracting the largest Fibonacci number that is at most what you have: for instance, 12=8+4=8+3+1 and there is our representation. There is a small issue that needs to be sorted out: the number 1 occurs twice in the Fibonacci sequence, so are 1 and 2 consecutive Fibonacci numbers? I shall use the algorithm just described, so I shall never need the first 1. Thus, I regard 1 and 2 as consecutive.

Let us write our numbers as we would in base 10: with the largest term at the left, so that the "place values" are the Fibonacci numbers 8, 5, 3, 2, 1. If we do that, and if we put zeros at the beginnings of numbers, then the points above work out to be $(00000,00000)$, $(00001,10000)$, $(00010,00100)$, $(00100,10100)$, $(00101,01001)$, $(01000,00001)$, $(01001,10001)$, $(01010,00101)$, $(10000,10101)$, $(10001,01010)$, $(10010,00010)$, $(10100,10010)$, $(10101,01000)$.

The first thing to say is that we are clearly *not* reversing the digits here. And yet, if one looks more closely, then one starts to see patterns emerging: there seems to be a mixture of digit-reversal and moving "blocks" of numbers around. To see what I mean here, look at what happens if we take the multiples of 8 (mod 13) and reverse their Fibonacci digits. The resulting sequence is this: 1,3,4,10,8,9,11,12,7,5,6,2. Let us throw in 0 at the beginning of the sequence and imagine it arranged in a cycle. Now we can partition this cycle into two chunks, one of length 8 and one of length 5. These chunks are (7,5,6,2,0,1,3,4) and (10,8,9,11,12). The first chunk naturally splits up as (7,5,6) and (2,0,1,3,4), which again consists of permutations of intervals of Fibonacci lengths. And the second does too: it splits up as (10,8,9) and (11,12). And we can keep doing this: for example, (2,0,1,3,4) splits up into (2,0,1) and (3,4), and so on.
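The Fibonacci-digit bookkeeping above is easy to check by computer. Here is a sketch of my own that reproduces the reversed sequence:

```python
FIBS = [8, 5, 3, 2, 1]  # enough Fibonacci numbers for the values 0, 1, ..., 12

def fib_digits(n):
    """Greedy (Zeckendorf-style) Fibonacci representation of n as a digit string."""
    digits = ''
    for f in FIBS:
        if f <= n:
            digits += '1'
            n -= f
        else:
            digits += '0'
    return digits

def from_digits(d):
    """Evaluate a Fibonacci digit string back to an integer."""
    return sum(f for f, c in zip(FIBS, d) if c == '1')

# Reverse the Fibonacci digits of the multiples of 8 mod 13:
seq = [from_digits(fib_digits(8 * n % 13)[::-1]) for n in range(1, 13)]
# seq reproduces the sequence 1, 3, 4, 10, 8, 9, 11, 12, 7, 5, 6, 2 above.
```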

It should be fairly easy to describe exactly what is going on here and prove that it always happens, but I have not got round to doing so. What I would like to see is this. First, there should be a simple algorithm that tells you how to unscramble the sequence of digit-reversed numbers by iterating the following process — take a sequence of numbers of Fibonacci length, divide it into two subintervals of Fibonacci length and possibly swap them round (but not always, so one would need to know when, and also when the longer interval comes first) — until you get down to singletons. The second thing should be a direct combinatorial proof that if you do this process in reverse, then you will get a sequence of numbers such that nearby terms in the sequence have reversals that are far away.

Note that it was obvious in advance that we would not get digit-reversal, because the Fibonacci base is subject to the same problem that the binary base was: it is possible for two numbers to be close and for their digit-reversals to be close. But this is interesting, as it suggests that if we understand what is going on combinatorially in the above patterns, then we might be able to come up with a flexible construction that uses digit reversal but cleverly modifies it to get round the nought-point-one-recurring difficulty. More generally, it suggests that thinking about the dyadic problem could give useful insights into the original problem.

Added later: after editing the above (to add the description of the interesting pattern that appears when you multiply by 8 and reverse the Fibonacci digits), I realized that it is indeed possible to mimic what is going on there and deal with the nought-point-one-recurring difficulty. This gives rise to a construction that is probably what Nets Katz and I came up with when we discussed the problem (about eight years ago), though it is certainly not how we came up with it.

What I want to do is construct a set of points $(x,y)$ such that both $x$ and $y$ live in $[0,1)$ and have $k$ binary digits. I also want $|x-x'||y-y'|$ to be at least $c2^{-k}$ whenever $(x,y)\ne(x',y')$. Digit reversal doesn't work, but what about a combination of digit reversal and "sometimes exchanging clumps"? The most obvious idea would be to organize the $x$ into a binary tree, and then go down the levels of this tree, alternately not swapping and swapping. That is, I let $x$ run over the multiples of $2^{-k}$ in $[0,1)$. The first level of the tree is the partition into two intervals according to the first digit of $x$. So I do that partition and then don't swap the intervals. At the second level, I *do* swap the intervals, so now I have four intervals in the following order: numbers that start 01, numbers that start 00, numbers that start 11, numbers that start 10. I continue this process.

The resulting order will be what you get if you take each number, change every other binary digit, and then put the numbers in increasing order. Let $\sigma$ be the map that changes every other binary digit of a number and let $\rho$ be the map that reverses the digits. Then the set of points I want to take is the set of all $(x,y)$ such that $y$ is of the form $\rho(\sigma(x))$.

Why does this work? Well, if $x$ and $x'$ first differ in the $j$th binary place, then $\rho(\sigma(x))$ and $\rho(\sigma(x'))$ differ in the $(k+1-j)$th place and are identical after that. Therefore, $|\rho(\sigma(x))-\rho(\sigma(x'))|\geq 2^{-(k+1-j)}$. The only way that this can fail to help is if $x$ and $x'$ are much closer than $2^{-j}$. Now if $|x-x'|$ is much smaller than $2^{-j}$, then this means that the $j$th digit of $x$ must be a 1 and several subsequent digits are 0, while the $j$th digit of $x'$ is a 0 and several subsequent digits are 1 (or the other way round). But then $\sigma(x)$ and $\sigma(x')$ have alternating 0s and 1s after the $j$th digit, and in complementary places. If the lengths of these strings of alternating digits are around $r$, so that $|x-x'|$ is around $2^{-(j+r)}$, then it is not hard to see that $\rho(\sigma(x))$ and $\rho(\sigma(x'))$ must differ by at least $c2^{-(k+1-j-r)}$, basically because they now differ in the $(k+1-j-r)$th place and are not followed by dangerous strings of 0s and 1s after they first differ. Multiplying the two estimates together gives a product of at least $c2^{-(k+1)}$.

Conclusion: digit reversal and shifting intervals can combine to give a second example of a well-separated subset of $[0,1]^2$ with respect to the Euclidean metric (in each coordinate) rather than the dyadic metric. The rough reason this works is that the digit-reversal $\rho$ almost works, but fails because of the nought-point-one-recurring problem, while $\sigma$ takes numbers that are "close when they shouldn't be" and pulls them apart. (Of course, $\sigma$ then has to bring together numbers that used to be far apart, but this turns out not to matter because the images of those numbers under $\rho$ turn out to be sufficiently far apart to compensate for it.)
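The combined map is also easy to test by computer. In this sketch of mine, `flip_alternate` plays the role of the change-every-other-digit map and `reverse_bits` the digit reversal; `worst` is the smallest pairwise product $|x-x'||y-y'|$, scaled by $2^k$ so that a healthy construction gives a constant (which digit the flipping starts at only reflects the point set, so it does not matter):

```python
def reverse_bits(j, k):
    """Reverse the k binary digits of j (0 <= j < 2**k)."""
    return int(format(j, '0{}b'.format(k))[::-1], 2)

def flip_alternate(j, k):
    """Flip every other binary digit (here the 2nd, 4th, ... from the left)."""
    mask = int(('01' * k)[:k], 2)
    return j ^ mask

k = 9
n = 2 ** k
ys = [reverse_bits(flip_alternate(j, k), k) for j in range(n)]

# Points (j / 2^k, ys[j] / 2^k); the claim is |dx||dy| >= c * 2^(-k), so the
# scaled quantity below should be bounded below by a constant c > 0.
worst = min(abs(a - b) * abs(ys[a] - ys[b])
            for a in range(n) for b in range(a + 1, n)) / 2 ** k
```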

**More variants.**

Returning to Littlewood's conjecture itself, there is a substantial weakening of it that I still do not know how to answer, though I think it has a good chance of being known and would be interested to hear from anyone if they do know the answer, though in an ideal world I would prefer to come up with an answer for myself (where "for myself" really means "by thinking about it in collaboration with other Polymath participants"). (*Added later: I have now found a paper that makes it clear that the question I am about to ask is indeed a known result. I will give the reference at the end of this section.*) As I have already mentioned, one of the difficulties of Littlewood's conjecture is that the "distance" $|x-x'||y-y'|$ is not a metric. This is why the usual techniques for solving packing problems (up to a constant) fail for Problem 3. However, suppose we know that the ratio of $\|n\alpha\|$ to $\|n\beta\|$ is roughly $2^{2t}$ to $1$. Then $\|n\alpha\|\|n\beta\|$ is roughly $2^{-2t}\|n\alpha\|^2$, which is roughly $(2^{-t}\|n\alpha\|)^2$, which equals $\max(2^{-t}\|n\alpha\|,2^t\|n\beta\|)^2$.

Let $r=\lceil\log_2 m\rceil$ and for each integer $t$ between $-r$ and $r$ let us write $d_t((x,y),(x',y'))=\max(2^{-t}|x-x'|,2^t|y-y'|)$. Then it is not hard to check that, for the pairs of points that matter here, $|x-x'||y-y'|$ is within a constant factor of $\min_{-r\leq t\leq r}d_t((x,y),(x',y'))^2$. The point is that each $d_t$, unlike the product, *is* a metric.
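To make this concrete: for each integer $t$ set $d_t=\max(2^{-t}|x-x'|,2^t|y-y'|)$; each $d_t$ is an honest metric, the product is at most $d_t^2$ for every $t$, and the best integer $t$ loses at most a factor of 2. A quick numerical check of my own:

```python
import random

def d_t(t, dx, dy):
    """Each d_t is a genuine metric: the max of two scaled coordinate metrics."""
    return max(2.0 ** (-t) * dx, 2.0 ** t * dy)

random.seed(1)
r = 20  # an arbitrary but comfortably large range for t
ok = True
for _ in range(1000):
    dx = random.uniform(2 ** -15, 1)
    dy = random.uniform(2 ** -15, 1)
    prod = dx * dy
    best = min(d_t(t, dx, dy) ** 2 for t in range(-r, r + 1))
    # prod <= d_t(...)^2 for every t, and rounding t costs at most a factor of 2:
    ok = ok and prod <= best <= 2 * prod
```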

Therefore, if $(\alpha,\beta)$ were a counterexample to Littlewood's conjecture and $n\|n\alpha\|\|n\beta\|$ were bounded below, it would have to be the case that for every $t$ between $-r$ and $r$ the numbers $n\max(2^{-t}\|n\alpha\|,2^t\|n\beta\|)^2$ were bounded below by some $c>0$. In particular, this would need to be the case when $t=0$, which would say that the numbers $n\max(\|n\alpha\|,\|n\beta\|)^2$ were bounded below. This motivates the following weak version of Littlewood's conjecture.

**Problem 7.** *Do there exist real numbers $\alpha$ and $\beta$ such that the numbers $n\max(\|n\alpha\|,\|n\beta\|)^2$ are bounded below by a positive constant?*

Here is what this problem is asking geometrically. For each $m$ the points $(n\alpha,n\beta)$ (mod 1) with $0\leq n<m$ form a subset of $[0,1]^2$ of size $m$. A trivial volume argument shows that there must be two of these points within a distance of $Cm^{-1/2}$ of each other. Problem 7 asks whether there is some pair $(\alpha,\beta)$ such that this trivial upper bound is correct up to a constant for every $m$. Thus, there is not much room to play around, but neither is there in the one-dimensional case and yet numerous examples exist. Here is a somewhat vaguer problem.

**Problem 8.** *If the answer to Problem 7 is yes, then is there some general procedure, analogous to picking a continued fraction with bounded quotients, that yields a large family of examples?*

One might think that if the answer to Problem 7 is yes, then the answer to Problem 8 is almost certainly yes as well. But it does not seem to be all that easy to generalize continued fractions in the appropriate way. (Of course, many people have tried to produce higher-dimensional generalizations of continued fractions, and it is possible that one of those generalizations is just what is needed.)

Another indication that the Littlewood conjecture is hard is that it is not known even in very concrete cases such as $\alpha=\sqrt 2$ and $\beta=\sqrt 3$, as I have already mentioned. This means that a positive solution to the conjecture would have serious number-theoretic content. However, it might be worth thinking about whether a concrete pair of numbers like this could work for Problem 7. (*Added later: this too seems to have been done.*)

Here is another problem, which I think of as a “dual” version of Littlewood’s conjecture. I will try to justify this name in a moment. (*Added later: it turns out that dual problems such as the ones I am about to mention have been considered in the literature as well, and they are dual in a much more direct, though closely related, sense than the one that I talk about.*)

**Problem 9.** *Do there exist real numbers $\alpha$ and $\beta$ such that $\liminf\max(m,1)\max(n,1)\|m\alpha+n\beta\|>0$?*

The lim inf there is over all pairs of non-negative integers $m$ and $n$ of which at least one is positive. Again, if an example exists then it is best possible, since out of all the numbers $m\alpha+n\beta$ with $0\leq m,n\leq M$ there must be two, $m\alpha+n\beta$ and $m'\alpha+n'\beta$, that differ by at most $M^{-2}$ mod 1. The difference between this and Littlewood's conjecture is that we are taking a two-dimensional collection of points in $[0,1]$ rather than a one-dimensional collection of points in $[0,1]^2$.

This problem also has a weak version.

**Problem 10.** *Do there exist real numbers $\alpha$ and $\beta$ such that $\|m\alpha+n\beta\|\geq c\max(m,n)^{-2}$ for every pair $(m,n)$ of non-negative integers, not both zero?*

What is “dual” about these problems? I’ll give an informal argument here: yet another aspect of this project would be the question of whether the argument can be turned into a formal argument for equivalences, or at least one-way implications, between these problems and some of the earlier ones.

Let us begin with the weak versions. Problem 7 asks for $\alpha$ and $\beta$ such that one of $\|n\alpha\|$ and $\|n\beta\|$ is always at least $cm^{-1/2}$ whenever $n$ is a positive integer less than $m$. Let us simplify things a bit by taking $m$ to be a prime number $p$ and trying to find $a$ and $b$ mod $p$ such that at least one of $na$ and $nb$ lies outside the interval $[-\sqrt p,\sqrt p]$ whenever $n\not\equiv 0$.

Now for any given $a$ and $b$ there are two sets of interest here. One is the set of $n$ such that $na\in[-\sqrt p,\sqrt p]$, and the other is the set of $n$ such that $nb\in[-\sqrt p,\sqrt p]$. We would like the intersection of these two sets, which are equal to $a^{-1}[-\sqrt p,\sqrt p]$ and $b^{-1}[-\sqrt p,\sqrt p]$, to be the trivial one, namely $\{0\}$. If that happens, then we cannot find $n\not\equiv 0$ with $na$ and $nb$ both in the interval $[-\sqrt p,\sqrt p]$.
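The discretized search is easy to carry out. In this sketch of mine I have shrunk the interval to $[-s,s]$ with $s=5\approx\sqrt p/2$, because with the full $[-\sqrt p,\sqrt p]$ the constant matters and the search can come up empty for small $p$:

```python
p = 101
s = 5  # about sqrt(p)/2; a smaller constant makes the search succeed

def centered(x):
    """Absolute value of the representative of x mod p closest to 0."""
    x %= p
    return min(x, p - x)

def bad_set(a):
    """The set of n != 0 with n*a in [-s, s] mod p, i.e. a^(-1)[-s, s]."""
    return {n for n in range(1, p) if centered(n * a) <= s}

sets = {a: bad_set(a) for a in range(1, p)}
# Pairs (a, b) whose two bad sets meet only trivially:
good = [(a, b) for a in range(1, p) for b in range(a + 1, p)
        if not (sets[a] & sets[b])]
```

Each bad set is an interval dilated by $a^{-1}$, so the disjointness condition is exactly the condition that $b/a$ avoids the small set of ratios $k'/k$ with $|k|,|k'|\leq s$, which is why plenty of good pairs exist here.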

Here is where I will get very informal. Heuristically speaking, the Fourier transform of a set such as $a^{-1}[-\sqrt p,\sqrt p]$ behaves like an arithmetic progression of common difference $a$ and length $\sqrt p$. (There are all sorts of tricks for making this heuristic better, such as convolving the set with itself so that it has non-negative Fourier coefficients. I won't worry about such issues here.) Now the intersection of two sets corresponds to the pointwise product of their characteristic functions, and that transforms to the convolution of their Fourier transforms. And the size of the intersection is given by the square of the $\ell_2$ norm of this convolution. If we ensure that the Fourier transforms are real and non-negative, then the $\ell_1$ norm of the convolution will not depend on $a$ and $b$. Therefore, in order to make the $\ell_2$ norm as small as possible, we want the convolution to be as "spread out" as possible, which happens if there are as few linear relations amongst $a$ and $b$ with coefficients less than $\sqrt p$ as possible. This suggests that we should look for $a$ and $b$ such that all the numbers $ma+nb$ with $0\leq m,n\leq\sqrt p$ are as distinct as possible. And now we are getting close to Problem 10. I haven't checked, but I think a slightly more complicated argument establishes a similar connection between Problems 2 and 9.

It’s quite possible that the answers to some of these questions are known, or at least that the questions would seem easy to experts in the area.

*Added later: as I mentioned above, this is indeed the case. See for example this paper of Pollington and Velani, which shows that the set of all pairs $(\alpha,\beta)$ such that $\alpha$ and $\beta$ are badly approximable and $\|n\alpha\|\|n\beta\|\leq c(n\log n)^{-1}$ for infinitely many $n$, where $c$ is an absolute constant, has Hausdorff dimension 2. (Their result is in fact more general than this.) Davenport proved in the early 1950s that the answer to Problem 7 is yes, and that there are uncountably many examples.*

*The paper where I found a mention of a dual problem is “Simultaneous Diophantine approximation” by Davenport, published in the Proceedings of the LMS, December 1952, pages 406-416.*

**Could one make a serious attempt to find a counterexample?**

Let me return yet again to the question of whether elementary methods are appropriate for the Littlewood problem. As I have shown above, there are several problems that seem to be very closely related to it but that are in one way or another easier, and I don’t see any good reason for ruling out elementary approaches to those. That is particularly true of Problem 5 (the dyadic variant of the separated-points problem).

However, another reason I think that one should not rule out the use of elementary methods is that it is possible that Littlewood’s conjecture is false. I want in this section to outline a few thoughts about how one might go about disproving it.

The first thing to say is that if the conjecture is false, it is not false by very much. The headline result of Einsiedler, Katok and Lindenstrauss is that the set of counterexamples to the conjecture has Hausdorff dimension zero, and this result has a strong bearing on what kind of methods stand a chance of yielding a counterexample.

In order to elaborate on this remark, let me discuss a "combinatorial" way in which one might end up with a good real number $\alpha$ if you want $\liminf_{n\to\infty}n\|n\alpha\|$ to be positive. We have already seen that any $\alpha$ with bounded partial quotients in its continued-fraction expansion will do: what I am about to discuss is a way of seeing that without explicitly thinking about continued fractions.

The idea is to start with an attempt and to make a series of adjustments to it. So let's start with a truly stupid choice of $\alpha$, namely $\alpha=0$. Instantly we are in trouble, because $\|\alpha\|$ is not large. So let us make it as large as we can by taking $\alpha=1/2$. Now that sorts out $n=1$ but we quickly run into trouble again, since $\|2\alpha\|=0$. So let us adjust again, by moving $\alpha$ until $\|2\alpha\|$ becomes as large as possible. There are two minimal ways of doing that: we could take $\alpha=1/4$ or $\alpha=3/4$. Let's try increasing at every stage, so we take $\alpha$ to be $3/4$. Now we run into problems only at $n=4$, when we get $\|4\alpha\|=0$. So let us adjust by taking $\alpha$ such that $4\alpha=3\tfrac 12$, which gives us $\alpha=7/8$.

We are running into difficulties, because it appears that $\alpha$ will tend to $1$ if we continue with this algorithm. But at this point we might pause to observe that if $4\alpha$ is an integer, we do not need to increase $4\alpha$ all the way to the nearest half-integer to make it as far from an integer as possible: all that really matters is that it should end up further from an integer than some fixed positive multiple of $1/4$. In retrospect, it makes sense if our successive adjustments are as small as possible, so that we don't mess up the progress we have made previously.

With this thought in mind, let us try again. We take $\alpha=1/2$. Since $\|2\alpha\|$ is very small, we had better take $2\alpha$ to be as far from an integer as possible, so let us once again take $\alpha$ to be $3/4$. But now, when we come to deal with the problem that $4\alpha$ is an integer, let us increase it from $3$ to $3\tfrac 14$ instead of to $3\tfrac 12$. In other words, let us take $\alpha$ to be $13/16$. Our next problem comes with $16\alpha=13$, so let us change that from $13$ to $13\tfrac 14$, which gives us $\alpha=53/64$.

Unfortunately, we are in trouble again, because the limit works out to be $5/6$ if we carry on in this way. So let us think slightly harder about what we have to do in order to ensure that we do not get into trouble. We are trying to build a sequence $\alpha_1,\alpha_2,\alpha_3,\dots$ that converges to a limit $\alpha$ in such a way that $n\,\|n\alpha\|\ge c$ for every $n$. This we can achieve if we ensure that $\|n\alpha_k\|\ge 2c/n$ for every $n\le q_k$ (where $q_k$ is the denominator of $\alpha_k$), and this we can achieve if we make sure that each adjustment is small enough and that $|\alpha-\alpha_k|\le c/q_k^2$.

Here is another approach. Let us again start with $\alpha=0$ and correct it to $\alpha=1/2$. To define the next approximation, we shall use the following principle: we look for the “worst place” and the “second worst place” and share the badness equally between them. Here the “worst place” is $n=2$, where the disaster occurs that $2\alpha$ is an integer, and the “second worst place” is $n=1$, where $\|\alpha\|=1/2$. So we increase $\alpha$ to $2/3$, so that both $\|\alpha\|$ and $\|2\alpha\|$ are equal to $1/3$.

The next disaster occurs when we notice that $3\alpha=2$. The second worst place could be taken to be either $n=1$ or $n=2$. There is something to be said for alternately increasing and decreasing our $\alpha$, since that will make the total change smaller than if we always increase, so let us focus on $n=2$. To share out the badness equally, we would like to decrease $\alpha$ slightly so that $\|2\alpha\|=\|3\alpha\|$. That tells us to take $\alpha=3/5$.

Probably you can guess by now what happens: we keep going like this and we end up producing the fractions $2/3, 3/5, 5/8, 8/13,\dots$, and in the limit we get the golden ratio (or rather its fractional part $(\sqrt5-1)/2$).
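The “share the badness” procedure is easy to simulate. Here is a short Python sketch; the specific rule below — taking the second worst place to be the previous denominator $m$ and moving $\alpha$ by $\|m\alpha\|/(q+m)$, alternately up and down, so that $\|q\alpha\|$ and $\|m\alpha\|$ become equal — is my own way of formalizing the idea described above.

```python
from fractions import Fraction

def dist(x):
    # distance from the rational x to the nearest integer
    f = x % 1
    return min(f, 1 - f)

def share_badness(steps):
    # Start with alpha = 1/2.  At each step the "worst place" is n = q,
    # the denominator of the current alpha (where q*alpha is an integer),
    # and the "second worst place" is taken to be the previous denominator m.
    # Moving alpha by ||m*alpha|| / (q + m), alternately up and down,
    # makes ||q*alpha|| and ||m*alpha|| equal afterwards.
    history = [Fraction(1, 2)]
    sign = 1
    for _ in range(steps):
        alpha = history[-1]
        q = alpha.denominator
        m = history[-2].denominator if len(history) > 1 else 1
        history.append(alpha + sign * dist(m * alpha) / (q + m))
        sign = -sign
    return history
```

Running `share_badness(4)` produces 1/2, 2/3, 3/5, 5/8, 8/13 — exactly the Fibonacci ratios described above.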

But the method of making successive adjustments is much more general and flexible than this — we just happened to do things so efficiently that we got the best result. I confess that I haven’t quite nailed this explanation, but ultimately it should be possible to devise a successive-adjustment procedure that could in principle give any real $\alpha$ for which the continued-fraction expansion has bounded quotients, and thereby present the continued-fraction solution as the outcome of a just-do-it proof.

Unfortunately, the main point I now want to make is that it is highly unlikely that any such approach could give rise (except perhaps very indirectly) to a counterexample to the Littlewood conjecture. The idea would be to start with a pair of real numbers $(\alpha,\beta)$ and make successive adjustments to it each time some multiple $(n\alpha,n\beta)$ misbehaved. If you try this, you will soon find yourself running into difficulties, but the result of Einsiedler, Katok and Lindenstrauss tells you in advance to expect this, or at least I think it does.

My heuristic reasoning here is as follows: if you can use a reasonably flexible successive-adjustment argument, then you will tend to get a set that is somewhat Cantor-like: you start with an interval of possibilities, then from it you remove some subintervals, and from what remains you remove further subintervals, and so on. These kinds of ideas show that the Hausdorff dimension of the set of reals with bounded partial quotients is not small: I haven’t yet found a good reference and I don’t know what the precise results are. But turning this on its head, one would expect that for Littlewood’s conjecture, if any successive-adjustment technique worked, it would have to be incredibly non-robust and work almost miraculously: perhaps as one removed subintervals one would sometimes remove whole branches of the tree and it would be far from obvious that by the end of the process there would be any tree left.

It is thoughts such as these that make me interested in Problem 7 (the weak version of Littlewood’s conjecture). For this problem, the point $(n\alpha,n\beta)$ has to avoid a box of radius $c/\sqrt{n}$, and therefore of area about $c^2/n$. This feels much more like what you have to do when you are trying to find $\alpha$ such that $n\alpha$ avoids an interval of length $c/n$, so it is tempting to try a successive-adjustment strategy. (*Added later: This is indeed the type of argument that Pollington and Velani use. At one point they use a very clever idea of Davenport.*)

My belief is that such a pair exists, but finding one using a successive-adjustment strategy seems not to be wholly straightforward, because there is not an obvious direction in which to take the adjustment. (What is clear is that this direction must vary enough to stop the set of multiples becoming “approximately one-dimensional”, but I don’t see how to do this systematically.) Anyhow, I very much like this problem, because it seems to be of intermediate difficulty, and if one did manage to find a good supply of examples, and perhaps even understand the Hausdorff dimension of the set of examples, then one might be able to understand how that set behaves when you intersect it with copies of that set that you obtain by stretching it in one direction and shrinking it in the other (which is supposed to help you avoid other rectangles of the same area and thereby attempt to give a counterexample to Littlewood’s conjecture). Perhaps the set is sufficiently non-symmetric under transformations of the kind $(x,y)\mapsto(\lambda x,\lambda^{-1}y)$ that images of it under such transformations are “independent” in some way and therefore have a non-empty, but very small, intersection.

(*Added later: Given what I now know about results in the area, the goal here would have to be different. It would be to understand the set of examples well enough to be able to show that it has non-empty intersections with various transformations of itself. This, it seems, is a known hard problem: see Conjecture 1 of the paper of Pollington and Velani.*)

**Computer-generated pictures.**

I’m a hopeless programmer. If I weren’t, then I would have done the following very simple computer experiment, and I would love it if someone else could do it. It would be to generate a picture as follows. Choose a fairly large prime $p$ and take a $p\times p$ square of pixels. Choose also a small constant such as $c=1/10$ and an integer $M$ that’s much larger than 1 and (probably) quite a bit smaller than $p$. (It would be interesting to do this with different constants.) Think of the square as $\{0,1,\dots,p-1\}^2$. Now make the pixel $(x,y)$ black if there exists a positive integer $m\le M$ such that $(mx,my)$ mod $p$ lies in the box $[-cp/\sqrt{m},\,cp/\sqrt{m}]^2$. I would very much like to see what the resulting set looks like. In particular, I would like to know whether it is obvious from looking at it that the horizontal and vertical distance scales are the same. (One would have to look at just a small chunk of it since it would be symmetrical in $x$ and $y$.) And I’d also like to know whether it has a Cantor-ish feel to it. If anyone is ready to spend the half hour it would take a competent programmer to generate a few pictures like this, I would be extremely grateful to have my curiosity satisfied. This wouldn’t have to wait for an official project to start — it could be an interesting preliminary.

Needless to say, one could also generate images of sets related to Littlewood’s conjecture itself, rather than the weak version. One would simply colour a pixel $(x,y)$ black if $m\,\|mx/p\|\,\|my/p\|\le c$ for some $m\le M$. It would be interesting to try this for various values of $c$, $M$ and (especially) $p$. In particular, I am interested to know what the residual set looks like, and whether it gets smaller very quickly (as the dimension-zero result suggests that it should).
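For anyone who wants a starting point, here is a rough sketch in Python with NumPy; the pixel discretization and the cut-off $c$ are as described above, and the function name is mine.

```python
import numpy as np

def littlewood_picture(p, M, c):
    # Mark the pixel (x, y), with x, y in {0, ..., p-1}, as black (True)
    # if m * ||m*x/p|| * ||m*y/p|| <= c for some m <= M, where ||t|| is
    # the distance from t to the nearest integer.
    t = np.arange(p) / p
    black = np.zeros((p, p), dtype=bool)
    for m in range(1, M + 1):
        f = (m * t) % 1.0
        d = np.minimum(f, 1.0 - f)           # ||m * x / p|| for each x
        black |= (m * np.outer(d, d)) <= c   # condition on the pair (x, y)
    return black
```

The white pixels then approximate the residual set; shrinking $c$ and watching how quickly the white region empties is the experiment suggested above.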

As well as generating pictures, one could try to test the conjecture in a crude experimental way, by searching for pairs of numbers $(\alpha,\beta)$ such that $n\,\|n\alpha\|\,\|n\beta\|$ does not get small (or at least does not get small unless $n$ is large). Indeed, we could define $f(N)$ to be the largest possible value of $\min_{1\le n\le N} n\,\|n\alpha\|\,\|n\beta\|$. One can get good estimates of $f(N)$ if $N$ is not too large by simply trying all pairs $(\alpha,\beta)$ such that the decimal expansions terminate after $d$ digits for some manageable $d$. Of course, if $n\,\|n\alpha\|\,\|n\beta\|$ is small for some smallish $n$, then the same will be true for all pairs in some neighbourhood of $(\alpha,\beta)$, so there may be good ways of speeding up this brute-force approach. (At the very least, one could eliminate all pairs that fail when $n=1$, then all pairs that fail when $n=2$, and so on, so that one would search through fewer pairs when one had got further with the search.)
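The brute-force search just described is also easy to sketch. A minimal Python version (the function names and the decimal-grid discretization are mine; the progressive-elimination speed-up is left out):

```python
from itertools import product

def littlewood_min(alpha, beta, N):
    # min over 1 <= n <= N of n * ||n*alpha|| * ||n*beta||, where ||t||
    # is the distance from t to the nearest integer
    def dist(t):
        f = t % 1.0
        return min(f, 1.0 - f)
    return min(n * dist(n * alpha) * dist(n * beta) for n in range(1, N + 1))

def f_estimate(N, d):
    # crude estimate of f(N): maximize over pairs (alpha, beta) whose
    # decimal expansions terminate after d digits (denominator 10**d)
    q = 10 ** d
    best = 0.0
    for a, b in product(range(1, q), repeat=2):
        best = max(best, littlewood_min(a / q, b / q, N))
    return best
```

This is exponential in $d$, of course, which is exactly why the elimination trick in the parenthetical remark above would matter in practice.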

**Conclusion.**

One of the things I like about this project is that there is quite a long list of subproblems of graded difficulty: it seems likely that at least something interesting could be proved, and the aim would be simply to get as far as we can. Even doing a few simple computer investigations would be quite interesting (though if it turns out that $f(N)$ tends to zero slowly with $N$, then this might well not show up clearly in the data). The dyadic version of the packing problem looks to me as though it could have a reasonably easy solution, at least if a good packing exists. The “weak Littlewood problem” ought to be substantially easier than the main problem — I think there should be a reasonably large set of counterexamples — but hard enough to be interesting (unless the answer is already known, which is certainly possible). Likewise, the higher-dimensional problem should be easier to prove, this time with a positive answer (but again the answer may be known). And perhaps the “dual versions” are sufficiently different that they too have a chance of being of the right level of difficulty. (*Added later: the fact that the weak version is indeed known changes my assessment of this project somewhat. I would be interested to understand the proof of that as well as possible, but given that it is known, the place to focus on first might be the dyadic packing problem in three dimensions.*)

For the sake of commenters, here is a set of abbreviations that might be useful.

LP (Littlewood problem). PP (packing problem — the one where you want a large well-separated set with the distance that isn’t really a distance). DPP (dyadic packing problem). WLP (weak version of Littlewood problem). HDLP (high-dimensional Littlewood problem). HDDPP, HDWLP (obvious meanings). DLP (dual Littlewood problem). WDLP (weak dual Littlewood problem).

November 20, 2009 at 10:54 am |

After I posted the above, it occurred to me that there is a question that is even more of a toy problem than DPP. It is the following. Let $\|x-y\|$ be the dyadic metric on $\{0,1\}^n$, let $x=(x_1,\dots,x_n)$, and let $x'$ be $x$ with its digits reversed. Then we know that $\|x-y\|\,\|x'-y'\|$ is always at least $2^{-n-1}$ whenever $x\ne y$. DPP asks whether we can find three sequences with a similar property. A more specialized question that may turn out to be interesting (though at this stage I haven’t thought about it enough to be sure) is whether one can do it by using the two sequences above and adding a third. That is, if $x$ and $x'$ are defined as above, is it possible to find a sequence $x''$, depending on $x$ in a way that again permutes $\{0,1\}^n$, such that $\|x-y\|\,\|x'-y'\|\,\|x''-y''\|$ is always at least $c\,2^{-n}$ for some absolute constant $c>0$?

What is appealing about this question is that the condition on the $x''$ can be expressed in a nice way. What we need is for $\|x''-y''\|$ to be large when $\|x-y\|\,\|x'-y'\|$ is small. But $\|x-y\|\,\|x'-y'\|=2^{D(x,y)-n-2}$, where $D(x,y)$ is the length of the shortest interval that goes from the first digit where $x$ and $y$ differ to the last digit where they differ. For example, if $x$ and $y$ differ in places five, six and ten, then $D(x,y)=6$, which means that $\|x-y\|\,\|x'-y'\|=2^{4-n}$.

A quick proof of the general assertion: if the first place where they differ is in digit $a$ and the last place is in digit $b$, then $\|x-y\|=2^{-a}$ and $\|x'-y'\|=2^{-(n+1-b)}$, so the product is $2^{D(x,y)-n-2}$, since $D(x,y)=b-a+1$ and $a+(n+1-b)=n+2-D(x,y)$.

What this tells us about $x''$ and $y''$ is that they must differ somewhere in the first $D(x,y)+C$ digits, where $C$ is some absolute constant. To put this in a tidy way, let us define a notion of “distance” between two 01 sequences of length $n$. It resembles the Hamming distance but is not even close to being a metric. We define $D(x,y)$ to be the length of the smallest interval that contains the bits where $x$ and $y$ differ. We are then looking for a bijection $\phi$ from $\{0,1\}^n$ to itself with the property that if $D(x,y)\le k$ then $\phi(x)$ and $\phi(y)$ differ somewhere in the first $k+C$ bits.
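The relationship between the dyadic product and $D(x,y)$ is easy to check by machine. A small Python sketch, under the convention assumed here that $\|x-y\|=2^{-a}$ when the first differing digit is the $a$th:

```python
def D(x, y):
    # length of the smallest interval containing all bits where x, y differ
    diff = [i for i, (a, b) in enumerate(zip(x, y)) if a != b]
    return diff[-1] - diff[0] + 1 if diff else 0

def dyadic(x, y):
    # dyadic "metric": 2**(-a) where a is the first digit (counting from 1)
    # at which the sequences differ, and 0 if they agree everywhere
    for i, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** (-(i + 1))
    return 0.0
```

On the example from the text (sequences of length $n=10$ differing in places five, six and ten) this gives $D(x,y)=6$ and a product of exactly $2^{D(x,y)-n-2}=2^{-6}$.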

For fixed $k$ we can think of this condition as colouring a graph. We join two sequences if their “distance” is at most $k$, and we would like to find a proper colouring of the vertices with $2^k$ colours (with the additional condition that the colour classes have equal size). It’s possible that even this problem is interesting, though ultimately one would have to do it for all $k$ at once, which one would expect to be considerably harder.

November 20, 2009 at 11:09 am |

[…] Tim Gowers described two additional proposed polymath projects. One about the first unknown cases of the polynomial Density Hales Jewett problem. Another about Littlewood’s conjecture. […]

November 20, 2009 at 12:32 pm |

I now realize that the problem for fixed $k$ (see end of my previous comment) is not interesting, since one can define $\phi_k$ as follows. When $1\le i\le k$, the $i$th digit of $\phi_k(x)$ is the sum mod 2 of all the digits of $x$ in places that are equal to $i$ mod $k$. For instance, the first digit of $\phi_2(x)$ is the sum of all the odd digits of $x$ and the second digit of $\phi_2(x)$ is the sum of all the even digits of $x$. One then extends $\phi_k$ to a bijection. The reason this works is that if $x$ and $y$ differ only on an interval of length at most $k$, then at least one of the parities is different for $x$ and $y$, so $\phi_k(x)$ and $\phi_k(y)$ differ in at least one of the first $k$ places, as required.
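Here is a quick Python check of the parity construction; `phi_head` computes just the first $k$ digits of $\phi_k$, which is the part of the map that matters for the argument.

```python
def phi_head(x, k):
    # First k digits of phi_k(x): digit i (0-indexed) is the parity of the
    # digits of x in places congruent to i mod k.
    n = len(x)
    return [sum(x[j] for j in range(i, n, k)) % 2 for i in range(k)]
```

If two sequences differ only on an interval of length at most $k$, the differing places fall into distinct residue classes mod $k$, so some class changes parity and the heads differ; when the interval is longer than $k$, two differing places can share a class and cancel, which is why fixed $k$ does not solve the full problem.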

From this perspective, the problem now is that the functions $\phi_k$ seem to be rather different from each other, so getting a single function that works for all $k$ still appears to be a challenge.

Added later: I can now do it. Details to follow very soon in a later comment.

November 20, 2009 at 6:54 pm |

[…] https://gowers.wordpress.com/2009/11/17/problems-related-to-littlewoods-conjecture-2/ Possibly related posts: (automatically generated)Polynomial DHJUpcoming polymath projectNew Polymath blogPossible future Polymath projects […]

November 21, 2009 at 11:31 am |

Tim, I think this code will draw the pictures you’re after in the first paragraph of ‘Computer-generated pictures’. I’m a lurking Computer Scientist and don’t claim to understand the whole of this post, so I may have got the wrong end of the stick. But it’s here all the same in case it’s useful. Run this function in MATLAB (or the free alternative Octave) with values of c, p and M. It doesn’t do anything clever, so start with small values of p and work up gradually!

function weak_littlewood_ex(c, p, M)
[x y] = meshgrid(0:p-1, 0:p-1);
r = zeros(p, p);
for m = 1:M
dx = mod(m .* x, p);
dy = mod(m .* y, p);
xt = min(dx, p - dx) <= c * p / sqrt(m);  % distance to the nearest multiple of p
yt = min(dy, p - dy) <= c * p / sqrt(m);
r = r | (xt & yt);
end
imagesc(1 - r); colormap gray; axis equal; axis xy;

November 21, 2009 at 5:01 pm |

OK, I think I’ve solved Problem 5. I also think that doesn’t ruin the project, for reasons that I’ll explain later. Recall from my first comment that it is sufficient to find a function $\phi$ from $\{0,1\}^n$ to $\{0,1\}^n$ such that if $D(x,y)\le k$ then $\phi(x)$ and $\phi(y)$ differ in one of the first $k+C$ digits. One can in fact find a best possible $\phi$. That is, we can find a map $\phi$ such that if $x$ and $y$ differ only on an interval of length $k$ then $\phi(x)$ and $\phi(y)$ differ in one of the first $k$ coordinates. In fact, we can make it linear (and doing so makes it easier to find $\phi$). To do this, we build up the matrix of $\phi$ row by row as follows. The first row is all 1s. Note that if $x$ and $y$ differ in an interval of length 1 then they differ in exactly one place, so they have different parities, which means that their images under $\phi$ will differ in the first coordinate. (I am thinking of $x$ and $y$ as column vectors here.)

I now want to choose the second row in such a way that all the $2\times2$ submatrices you can form by taking two consecutive terms of the first row and the two terms below them in the second row are non-singular. It is easy to see that this forces the second row to be either 10101… or 01010… . Why did we want this property? Because if $x$ and $y$ differ just in places $i$ and $i+1$, then the difference between their images will be the image of a non-zero vector that is zero in all coordinates apart from $i$ and $i+1$. Therefore, in its first two coordinates it is the image of a non-zero vector under one of the non-singular $2\times2$ matrices, so it is non-zero.

We now continue this process. We will be done if we can find an $n\times n$ matrix over $\mathbb{F}_2$ such that any $j\times j$ submatrix that is formed from the first $j$ rows and $j$ consecutive columns is non-singular. Suppose we have managed to construct the first $j$ rows of such a matrix. Let the first $j$ terms of the next row be arbitrary. I now need the following simple lemma: given any $(j+1)\times(j+1)$ matrix such that the top left-hand $j\times j$ submatrix is non-singular, either the matrix is also non-singular or it becomes non-singular when you change the entry in the last row and the last column. This lemma will guarantee that I can fill up the rest of row $j+1$ and keep the property I want.

The lemma is simple. Since the top left-hand $j\times j$ submatrix is non-singular, there is a unique linear combination of the first $j$ rows that equals the last row in the first $j$ places. So to ensure independence, all we have to do is make sure that the last entry of the last row is not equal to the last entry of this linear combination.

Thus, the answer to Problem 5 is yes.
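The greedy construction described above can be carried out mechanically. A Python sketch (`det2` and `build_matrix` are my names, and I set the “arbitrary” initial entries of each row to 0):

```python
def det2(M):
    # determinant over GF(2) via Gaussian elimination; returns 0 or 1
    M = [row[:] for row in M]
    m = len(M)
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r][c]), None)
        if piv is None:
            return 0
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, m):
            if M[r][c]:
                M[r] = [a ^ b for a, b in zip(M[r], M[c])]
    return 1

def build_matrix(k, n):
    # Build k rows of an n-column matrix over GF(2) so that, for every
    # j <= k, each j x j submatrix formed from the first j rows and j
    # consecutive columns is non-singular.
    A = [[1] * n]  # first row: all ones
    for r in range(1, k):
        row = [0] * n          # the first r entries are arbitrary
        for c in range(r, n):
            # the lemma guarantees one of the two choices for row[c] makes
            # the window (rows 0..r, columns c-r..c) non-singular
            for bit in (0, 1):
                row[c] = bit
                win = [A[i][c - r:c + 1] for i in range(r)] + [row[c - r:c + 1]]
                if det2(win):
                    break
        A.append(row)
    return A
```

With this convention the second row comes out as 01010…, one of the two possibilities forced by the $2\times2$ condition.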

November 21, 2009 at 5:13 pm |

In the light of the previous comment, there are two natural follow-up questions. First, is it possible to use the trick of moving intervals about to convert the example that solves Problem 5 into an example that solves Problem 3? Secondly, is it possible to find a four-dimensional dyadic example?

If there did turn out to be a relatively simple solution to Problem 3, then the next question would be a rather speculative one. We have seen that if we write numbers in a Fibonacci base, then the “two-dimensional Littlewood” example resembles a digit-reversal with interval exchanges. Is it possible to take a solution to Problem 3, understand the combinatorial phenomena that make it work, and get the same combinatorial phenomena to appear with a clever choice of numbers $\alpha$ and $\beta$? There is not yet any reason to suppose that the answer is yes, since we have not demonstrated that there is any way of finding something like the linear map $\phi$ in the behaviour of the multiples of some real number. Having said that, the fact that $\phi$ is linear is not completely unpromising …

November 21, 2009 at 6:57 pm |

What a great set of problems! It’s too bad I don’t have a lot of time right now to think about them…

Here are a few basic observations about your problems:

1. I wonder whether there might be some natural interpretations of your questions in terms of elliptic curves: One can map a 2D torus (Z x Z) mod 1 to elliptic curves using the Weierstrass p-function. And, computing multiples of points back in the torus corresponds to taking multiples of the corresponding elements in the elliptic curve group.

2. Here is a thought on Problem 6: Perhaps Olson’s theorem on zero-sums mod $p$ is somehow useful. How? Suppose you work in dimension k, and let your real numbers be . Let be the multiset of all vectors

, where .

If a subset sum of elements from (not all 0) comes close enough to in enough coordinates, then it produces for us an integer such that

is very near .

And what does this have to do with Olson’s theorem? Well, perhaps there is a way to discretize the above problem somehow, so as to interpret the problem of trying to get near in all or lots of coordinates as a -sum problem: Olson’s theorem says that if containing , and or so, then there exists a subset of whose sum is in . What’s encouraging here is the fact that need not be so large — only of size or so. Now, if we were to somehow discretize , to produce , so that a sum of elements in is near the vector mod 1 if and only if the corresponding sum of vectors in is near in all or most coordinates, we could take to be of size about , unless I am missing something. This seems quite large to me, and perhaps it means that we can get a vector so that

, i=1,2,…,k.

If so, then we get

.

3. There is a theorem of David Bailey, the Borwein brothers, and Simon Plouffe called the “hotspot theorem”, some version of which was originally proved by Piatetski-Shapiro back in the 1950s (I had produced my own elementary proof, and was about to publish it, and was then informed by Andrew Granville that it had been proved by P-S in an old Russian journal — it left me depressed for a whole month) that might be relevant to attacking Littlewood’s conjecture, particularly when you go to consider the dynamics of the multiples $(n\alpha,n\beta)$ mod 1. Although this theorem of Bailey et al is only stated for a single real number $\alpha$, perhaps there is a 2D generalization.

Basically, their result allows you to show that if the sequence $(n\alpha)$ mod 1 is even slightly non-uniformly distributed, then there is a “hot spot” — that is, there is a number $\beta$ such that the sequence lies in small intervals around $\beta$ many times more often than it “should”. So, the theorem gives you some sort of measure-focusing information, which perhaps can be exploited somehow (at least if a 2D version could be proved).

Here is the paper of Bailey et al: Go to

http://crd.lbl.gov/~dhbailey/dhbpapers/

and click on paper 75. Alternatively, you can see my elementary proof

http://www.math.gatech.edu/~ecroot/hotspot_journal.pdf

It may not be so well written — I forgot whether I got it into a journal-quality form. If it isn't well written, my apologies in advance.

November 27, 2009 at 4:05 pm

I see I should have read it more carefully before posting the above comments: In comment 1 above, I should have written R x R mod 1, not Z x Z mod 1; in fact, the usual way the torus is written is $\mathbb{C}/L$, where $\mathbb{C}$ denotes the complex numbers and where $L$ denotes the particular lattice we use, in this case the integer lattice $\mathbb{Z}+i\mathbb{Z}$. The mapping used sends a point $z$ on this torus to $(\wp(z), \wp'(z))$, where $\wp$ is the Weierstrass p-function. This point lies on a certain elliptic curve of complex solutions to $y^2 = 4x^3 - g_2 x - g_3$, where the $g_i$ are determined by our particular lattice $L$.

For comment 2, of course the conjectured bounds I wrote down are too good to be true. But still, Olson’s theorem may be useful to achieve much, much weaker bounds that nonetheless solve the problem for large $k$.

And most importantly, in comment 3, I left off the name “Michal Misiurewicz”. It was he and Bailey alone who proved the “strong hotspot theorem”, though it was applied to various problems by the Borweins et al.

November 21, 2009 at 8:05 pm |

I have had a small think and am fairly convinced that the example I presented three comments ago for Problem 5 can be pushed to give an example for the dyadic problem in four dimensions. To do this, it is sufficient to find a linear map $\phi$ such that if $x$ and $y$ differ only in an interval of length $k$ then $\phi(x)$ and $\phi(y)$ differ somewhere in the first $Ck$ coordinates and somewhere in the last $Ck$ coordinates. For this it turns out that we need a matrix such that every submatrix you form by taking $j$ consecutive columns and either the first $Cj$ rows or the last $Cj$ rows has rank $j$. I haven’t yet proved that such a matrix exists, but the constraints don’t seem that tight (at least if $C$ is something like 3 or 4) and I’m fairly sure that it won’t turn out to be all that hard to come up with a proof.

If this turns out to be correct, then the next question, I suppose, is whether one can produce dyadic examples in arbitrarily high dimension. That would be pretty strange, but I’m beginning to think that it is a serious possibility. Quite what bearing it would have on the Littlewood problem is another question altogether.

November 22, 2009 at 6:21 am |

Here’s Mathematica code that does what you asked. A nice thing about the code is that, while it is only practical for disappointingly small primes, you get a slider bar to change the value of c, and that works rapidly.

Note: the picture is for the quantity $\min_{m\le M}\sqrt{m}\,\max(\|mx/p\|,\|my/p\|)$ thresholded at $c$, which seemed more reasonable given the calculation being made.

Closeness[p_, M_] :=
  Closeness[p, M] =
    1/p Table[
      Min @@ Table[
        N[Sqrt[m]] Max[Abs[Mod[m x, p, -(p - 1)/2]],
          Abs[Mod[m y, p, -(p - 1)/2]]], {m, M}],
      {x, -(p - 1)/2, (p - 1)/2}, {y, -(p - 1)/2, (p - 1)/2}];

pic2[p_, M_, c_] :=
  pic2[p, M, c] = Map[Boole[# > c] &, Closeness[p, M], {2}];

Manipulate[
  Image[pic2[p, M, c], ImageSize -> 4 p],
  {{p, 83}, 11, 149, 2}, {{M, 17}, 1, p, 1}, {{c, .1}, 0, 1/2}]

November 22, 2009 at 10:15 am |

I should say that Sean Lip has also produced some code, which works for reasonably large primes (up to about 3000 or so). It has produced some fabulous pictures, some of which I will post on this blog soon. But before I do that I want to find a setting of the parameters that gives as informative a picture as possible (but whether they are informative or not, they look wonderful).

November 23, 2009 at 3:01 am |

Is it at all suspected that this problem could be unsolvable (along the lines of the continuum hypothesis)? It asks a question about a “tricky” property of the real numbers, so that sets a red flag in my mind for whether it can be solved at all.

November 23, 2009 at 2:24 pm

For problems involving quantification over infinite totalities (such as Littlewood’s problem) it is, of course, difficult to prove solvability without actually solving them. But nevertheless one could argue that there is at least one mathematical reason as to why Littlewood’s conjecture (=LC) should be easier than the continuum hypothesis (=CH):

In contradistinction to CH, LC is only quantifying (=talking) over single real numbers, and not over arbitrary collections of real numbers, i.e. over the powerset of $\mathbb{R}$. Similarly, most of the “well-known” undecidable analytical statements, such as Borel’s question as to whether every set of real numbers having “measure zero in the strong sense” has to be countable, deal with the power set of $\mathbb{R}$.

If one is interested in LC only for very concrete real numbers, such as $\sqrt{2}$ and $\sqrt{3}$, or, more generally, quadratic irrationalities, then it becomes formalizable in Peano Arithmetic, and there are not many statements that can be formalized in that theory and which are known to be undecidable in the very strong theories we are usually working with, e.g. axiomatic set theory, ZFC.

Still, it just occurred to me that even the restricted version of LC dealing with quadratic irrationalities only seems to be of logical complexity $\Delta_2$ (does somebody see a clever way to formalize it as $\Pi_1$?) and that some twenty-five years ago or so Saharon Shelah wrote a complicated paper exhibiting a $\Delta_2$ statement formally undecidable over Peano arithmetic whose truth or falsity in the absolute sense is (in contradistinction to, e.g., the Paris–Harrington theorem or Gödel’s examples) not known.

November 25, 2009 at 5:43 am |

Tim, the following is an attempt to generalize your finite construction with Fibonacci numbers. Let $(F_k)$ denote the Fibonacci sequence and set $\alpha_k = F_k/F_{k+1}$ for $k\ge1$. We have the following

Claim. If $1\le n<F_{k+1}$, then $n\,\|n\alpha_k\|$ is bounded below by an absolute constant.

Although I only have numerical evidence that the claim above holds, and that the inequality is best possible, I have the feeling that it leads to a simplification of the two-dimensional problem.

Now, one can define a companion sequence analogously, and it is plain that the corresponding bounds appear to hold in the cases I have checked.

If we could only prove the claim above, then we would have a new two-dimensional example. Also, perhaps the above construction could be pushed to solve Problem 3: a possible candidate could be built out of these sequences.

November 25, 2009 at 9:24 am |

If I understand correctly, this example is a special case of the general construction I talked about in the post: you take a number $\alpha$ with bounded quotients in its continued-fraction expansion and take multiples of $\alpha$ mod 1. The inequality in your claim is true, and follows from the fact that the continued-fraction expansion of $F_k/F_{k+1}$ is $[0;1,1,\dots,1]$. And unfortunately the suggestion you have for Problem 3 doesn’t work (I have attempted something similar in the past), because for any quadratic irrational $\alpha$ there are linear relationships between $1$, $\alpha$ and $\alpha^2$. For example, if $\alpha$ is the reciprocal of the golden ratio, then $\alpha^2=1-\alpha$. This reflects itself in a similar relationship in your case, so whenever $n\alpha$ is close to an integer, $n\alpha^2$ will be as well, which is exactly what you don’t want. Similar problems happen with any construction that is based on just one quadratic irrational.

November 25, 2009 at 12:27 pm |

Tim, all I wanted to show is how your construction with the golden ratio can be done in general with any quotient of consecutive Fibonacci numbers. My claim that this would lead to a new solution of the two-dimensional packing problem was pretentious. Sorry about that.

November 25, 2009 at 8:56 pm

No problem — and I’ve just had a look at your post about Fibonacci numbers, which looks very nice.

November 25, 2009 at 8:17 pm |

I have to apologize for suggesting another project here, but this other one might have something to do with Littlewood’s conjecture. The problem I would love to see as a polymath project is the Heilbronn Triangle Problem. The problem is to place n points in a unit square so as to maximize the area of the smallest of the triangles determined by the n points. Heilbronn conjectured that this area is at most $c/n^2$ for some constant c, but Komlós, Pintz, and Szemerédi showed that it can be as large as $c\log n/n^2$. The best upper bound, of the form $n^{-8/7+\epsilon}$, is also due to Komlós, Pintz, and Szemerédi (improving earlier bounds of Roth). This problem has geometry, combinatorics, analysis, and probably much more in it and it might be accessible for many mathematicians.

November 26, 2009 at 12:07 am

I see what you mean by everything you say here, including the connection with Littlewood’s conjecture. (I don’t think solving one would help to solve the other, but both this problem and Problem 3 above involve sets of points that are in some area-related sense separated.) And it’s always nice when there is a yawning chasm between the best known upper and lower bounds.

November 26, 2009 at 2:20 pm |

What an interesting problem! One way to make the question look more ‘natural’ (in some sense) could be to replace $\|x\|$ by $|\sin \pi x|$ (an equivalent ‘metric’, up to a constant). Using simple trigonometric identities and a change of variables the conjecture can then be reformulated as:

$\inf_{n\ge1} n\,|\cos(nx)-\cos(ny)| = 0$

for all values of $x$ and $y$. This raises questions about the density properties of the set

$\{\,n(\cos(nx)-\cos(ny)) : n=1,2,3,\dots\,\}$

and its projections. Is it even known, for example, whether zero must be a limit point of this set?

November 30, 2009 at 5:03 pm

I haven’t had time to think about what you have written carefully (I am about to go to a conference and have to prepare my talk), but perhaps the usual “Pade approximation” technology will allow you to say something useful and interesting about the distribution of your set for algebraic $x$ and $y$: Basically, if $\alpha_1,\dots,\alpha_k$ are algebraic (solutions to polynomials having rational coefficients), then it is known how to use Pade approximation (i.e. rational function approximations) technology, applied to the exponential function, to produce a common integer denominator $q$, and integer numerators $p_1,\dots,p_k$, such that

$|e^{\alpha_i} - p_i/q| \le \dfrac{1}{q^{1+1/k}\,f(q)}$,

where $f(q)$ denotes a function tending to infinity as $q\to\infty$. Note that this is much stronger than what one would expect — one would expect that the error in the approximation is at best $q^{-1-1/k}$ or so. So, when you plug in algebraic numbers to the exponential function, you get out special numbers that can be unusually simultaneously well-approximated by rationals.

My memory here could be in error, as I haven't thought about transcendental number theory and diophantine geometry in ages; however, I once wrote an expository note on some of this for an REU student, which covers, for example, the standard way that one shows that $e$ is transcendental — you can find this note by going here:

http://www.math.gatech.edu/~ecroot/transcend.pdf

November 26, 2009 at 7:29 pm |

Let me check that. Certainly $\|x\|$ is equivalent to $|\sin \pi x|$. Then $\|n\alpha\|\,\|n\beta\|$ is comparable to $\sin(\pi n\alpha)\sin(\pi n\beta)$, which is equal to half of $\cos(\pi n(\alpha-\beta))-\cos(\pi n(\alpha+\beta))$. So we take $x=\pi(\alpha-\beta)$ and $y=\pi(\alpha+\beta)$.

The odd thing is that the set does not have to have zero as a limit point, since we can take $x=0$ and $y=2\pi\gamma$, where $\gamma$ is some quadratic irrational. The size $n\,|\cos(nx)-\cos(ny)|$ is then proportional to $n\,\|n\gamma\|^2$, so its infimum is greater than zero.

Ah, I understand better what is going on. The point is that $nx$ and $ny$ need not get closer than $c/n$ (mod $2\pi$), but for the projections to the axis — that is, $\cos(nx)$ and $\cos(ny)$ — also to be separated by $c/n$, you need them to be close only when they are both comfortably away from $1$ and $-1$. It does seem like a nice reformulation, though whether it actually changes how one tackles the problem I don’t know. I must think about it.

November 27, 2009 at 9:38 am

I have made a few plots to help visualize these sets: http://www.obtext.com/littlewood/

(fig_5 illustrates the case you describe in your second paragraph.)

If nothing else, these pictures confirm (as we know already) that any counterexample is going to have to be really pathological. I quite like fig_4, though, which with many points plotted looks like a 17-petalled flower; I’m not sure what’s going on here!

November 27, 2009 at 4:13 pm |

I realize there was a small misprint in my previous comment, in the formula specifying what we should take $x$ and $y$ to be.

Alec, I’d be curious to know what the picture looks like for some other values of $x$ and $y$, and I’d be particularly interested in a blow-up of the area near the origin.

November 27, 2009 at 7:43 pm

Hi Tim,

I’ve added some plots for those values (fig_6 to fig_9).

The point very near zero corresponds to .

November 27, 2009 at 8:55 pm

Many thanks for that. For anyone who’s interested, and are both around so multiplying them together and multiplying the product by 157 gets you around 0.15.

November 27, 2009 at 11:11 pm

In light of the change of variables, the calculation I would expect to do here is , where . This is indeed small (about ), mainly because is so close to an integer ( turns out to be a convergent of the continued-fraction expansion of ). It is interesting that you also get a (fairly) small value with and but I wonder if this is a red herring?

Anyway, fig_10 shows what you get with : circular patterns seem to emerge.
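For checks like the continued-fraction one above, here is a generic sketch for listing convergents of a real number (my own sketch; it takes floating-point input, so it is only reliable for the first several convergents before precision runs out):

```python
from fractions import Fraction
from math import floor

def convergents(x, count=8):
    """First `count` continued-fraction convergents p/q of a real x,
    via the standard recurrence h_k = a_k h_{k-1} + h_{k-2} (same for k).
    Only reliable while floating-point precision holds out."""
    h_prev2, h_prev1 = 0, 1
    k_prev2, k_prev1 = 1, 0
    out = []
    for _ in range(count):
        a = floor(x)
        h = a * h_prev1 + h_prev2
        k = a * k_prev1 + k_prev2
        out.append(Fraction(h, k))
        h_prev2, h_prev1 = h_prev1, h
        k_prev2, k_prev1 = k_prev1, k
        frac = x - a
        if frac == 0:
            break
        x = 1.0 / frac
    return out
```

For example, `convergents(2 ** 0.5, 8)` gives 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, the familiar convergents of sqrt(2).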

November 27, 2009 at 11:18 pm

Ah — I didn’t think 0.15 was all that impressive, but was too polite to say so. So thanks for that clarification!

November 27, 2009 at 11:29 pm

You’re welcome, and I’ve just noticed I dropped a factor of two🙂. It’s late…

November 30, 2009 at 12:49 am |

Alec’s formulation of the conjecture draws my attention to the sequence of Chebyshev polynomials. Since the question is whether or not

for all values of

Let and consider the set

I plotted some of these sets using Wolfram Alpha and I obtained some nice looking pictures. If you copy the code

plot 4|chebyshevT[4,x]-chebyshevT[4,y]|>1, x=-1..1, y=-1..1

and paste it into Wolfram Alpha, you get the picture of the set when n = 4, and if you play with the parameters you get other pictures. Notice that the blue regions have increasing area as n increases.
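For anyone without Wolfram Alpha to hand, the same experiment can be scripted in a few lines (my own sketch; the grid resolution and the values of n are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def region_fraction(n, gridsize=400):
    """Estimate the fraction of the square [-1,1]^2 on which
    n * |T_n(x) - T_n(y)| > 1, i.e. the region plotted by the
    Wolfram Alpha command, by sampling on a uniform grid."""
    xs = np.linspace(-1.0, 1.0, gridsize)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                       # coefficient vector of T_n
    t = C.chebval(xs, coeffs)             # T_n evaluated on the grid
    diff = np.abs(t[:, None] - t[None, :])
    return float(np.mean(n * diff > 1))   # fraction of grid points in the region
```

Comparing `region_fraction(4)` with `region_fraction(8)` reproduces numerically the observation that the blue regions grow with n.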

The conjecture would be false if one could prove that

This reminds me of the Borel-Cantelli lemma. If the events were independent, then the condition

would imply that

for almost every point. Unfortunately, the events are not independent. The probability measure is the normalized two-dimensional Lebesgue measure on the square. Now, I wonder if one could apply a probabilistic argument in the absence of independence, like some counterpart of the Borel-Cantelli lemma.
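For reference, the direction of the Borel-Cantelli lemma that needs independence is the second one:

```latex
% Second Borel--Cantelli lemma (this is the direction requiring independence)
\textbf{Lemma.} If $E_1, E_2, \dots$ are independent events with
$\sum_{n=1}^{\infty} \mathbb{P}(E_n) = \infty$, then
\[
  \mathbb{P}\Bigl(\limsup_{n \to \infty} E_n\Bigr)
  \;=\; \mathbb{P}\bigl(E_n \text{ occurs for infinitely many } n\bigr) \;=\; 1 .
\]
```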

November 30, 2009 at 1:35 pm

The Chebyshev polynomials do give a nice visualization. I think that the condition (for the conjecture to be false) is not quite as you state — unless I misunderstand your notation, shouldn’t it be

(where I’ve written for your )?

November 30, 2009 at 7:55 pm

Alec, the condition for Littlewood’s conjecture to be false is that n ||n alpha|| ||n beta|| should be bounded away from zero, for some pair (alpha, beta) and all positive integers n.

November 30, 2009 at 9:27 pm

Hi Miguel,

Yes — and our statements are equivalent, I think: for suppose we have , , , and such that . Let , and . Then for all , . (And the converse is trivial.)

So to find a counterexample it’s enough to show that for some , the for have non-empty intersection.

November 30, 2009 at 9:30 pm

Sorry for the bad formatting — and I should have added ‘for all ‘ at the end of my first sentence.

November 30, 2009 at 10:03 pm

Alec, you are right about the equivalence of both statements. Also, your last comment shows that Littlewood’s conjecture is equivalent to the condition

for all values of

November 30, 2009 at 9:51 am |

I’ll try that Wolfram Alpha thing soon. But let me say that fairly soon I’ll also post some of Sean Lip’s pictures, which are of similar sets for quite large values of n and look beautiful.

While on the topic of Wolfram Alpha, another experiment that is quite easy to do is the following: you type in something like “approximate fractional part of (nth Fibonacci number times square root of 2)” for various values of n, which can be quite large. If you do it for a range of consecutive values of n, it looks as though the answers you get out are approximately uniformly distributed, which, if true, would imply that the square root of 2 and the golden ratio do not give a counterexample to the Littlewood conjecture. However, proving uniform distribution for something like that is very similar to some notorious open problems …
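The same experiment can be scripted directly; since Fibonacci numbers quickly grow large enough to swamp floating point, this sketch computes the fractional parts exactly with integer square roots (the range of n is an arbitrary choice of mine):

```python
from math import isqrt

def fib(n):
    """n-th Fibonacci number (fib(1) = fib(2) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def frac_times_sqrt2(m, digits=12):
    """Fractional part of m*sqrt(2), computed via m*sqrt(2) = sqrt(2*m^2)
    and integer square roots, to avoid floating-point precision loss."""
    scale = 10 ** digits
    r = isqrt(2 * m * m * scale * scale)  # floor(m*sqrt(2) * 10^digits)
    return (r % scale) / scale

# fractional parts of F_n * sqrt(2) for a range of n
parts = [frac_times_sqrt2(fib(n)) for n in range(10, 40)]
```

A histogram of `parts` is what one would inspect for approximate uniformity.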

November 30, 2009 at 12:14 pm |

I posted some pictures of the level sets for Chebyshev polynomials so that you can see them at the click of a mouse.

http://mlacruz.wordpress.com/

December 1, 2009 at 9:32 am

It would be great to see a greyscale (or similar) plot in which the lightness (or darkness) of the point was proportional to

(for some fairly small value of , say ).

I wonder if this might give some sort of feel for the significance of the result about Hausdorff dimension.

December 2, 2009 at 7:02 am

OK, here is a plot of

for and , on a logarithmic scale where high values correspond to bright pixels:

And here is a similar plot of

for and :

The images are rather large but they do give food for thought.

December 2, 2009 at 7:37 am

Edit: the second plot is of

. Apologies.

December 2, 2009 at 1:46 pm

I would like to see a plot of

December 2, 2009 at 1:50 pm

Oops!

I meant to say a plot of

for

December 2, 2009 at 9:37 pm

Miguel, see x1000_25.png for the Tchebyshev plot with .

December 2, 2009 at 10:32 am |

Those are gorgeous, and similar to the images that Sean Lip produced, though his were purely black and white but with a cutoff for n. (In other words, he blacked out all points for which the minimum was less than if you took it over all up to some .) I must hurry up and finish the post in which I discuss these images and some other issues related to the problem.

December 3, 2009 at 2:37 pm |

Here’s a trivial observation, which nevertheless gave me a little insight into why LC is hard. Consider the two statements:

and

.

Now it’s obvious that A implies B. But A is false; B is true; and LC is sitting in the middle of the huge space between them!

December 4, 2009 at 10:09 pm |

I’ve been thinking about another way to generalize the problem. Let .

What can we say about sets and functions such that

for all ?

More generally still, in dimensions we can ask whether:

.

For tractability, let’s assume that is a ‘star’ (a union of line segments from the origin).

In one dimension, we are asking about sets and functions such that

for all .

It’s natural to consider , and then this condition boils down to:

for all .

Here we can get away with for any , but not with . Thinking about how volumes scale, I guess that is going to be the ‘difficult’ case in general.

Obviously, if and then . Also, I think (though haven’t checked rigorously) that if is bounded, then

(consider the volume of the union intersected with ).

Tim’s Problem 7 corresponds to the case and .

December 4, 2009 at 10:13 pm |

Something funny happened to the formatting of the first equation, which should be:

.

December 4, 2009 at 10:16 pm |

Actually a whole sentence got lost at the beginning! The point was that for this , LC asserts:

.

December 5, 2009 at 8:05 am |

Addendum: just noticed that since the truth of is unaffected when (or ) is multiplied by a positive constant, the fact that it fails in the context of Problem 7 means that it fails whenever is bounded (with ). So the unboundedness of the set is essential to LC.

December 13, 2009 at 9:41 pm |

Further to the above, here’s a simple question that may be easier to answer than Littlewood’s, if the answer is negative (or harder, if the answer is positive). Let be the set of open subsets such that . (The requirement of openness is not crucial, but rather convenient.) Let be the set of such that

.

If , is it conceivable that if and only if has infinite volume?

This is true for , where there are just two cases to consider ( is either or a finite symmetric open interval). When , the ‘if’ would imply Littlewood’s conjecture, but it would also imply strengthenings of it. With there exist and such that

is bounded away from zero. This comes from taking ; it can also be seen as a strengthening of Problem 7.

Calculations like these lead us naturally to consider the function defined by

(so that and ). This is a little bit like a norm. The correspondence between sets and LC-like conditions is simplest in the case where has the additional property that

(as in both the examples above). Then the condition translates to:

where for is defined pointwise by .

December 13, 2009 at 9:48 pm |

More formatting weirdness. The section from ‘This is true’ to ‘Problem 7’ should read:

This is true for , where there are just two cases to consider ( is either or a finite symmetric open interval). When , the ‘if’ would imply Littlewood’s conjecture, but it would also imply strengthenings of it. With there exist and such that

is bounded away from zero. This comes from taking ; it can also be seen as a strengthening of Problem 7.

December 13, 2009 at 9:55 pm |

Sorry Tim, let me try again. I hope this works.

With equal to the set of such that and there exist and such that

is bounded away from zero.

December 13, 2009 at 10:02 pm |

I give up! Missing text, in plain text this time:

With A equal to the set of (x,y) such that | xy | < 1 and | y | infinity such that n_r . || n_r x || . || n_r y || -> 0, but we can find such a sequence where sqrt{n_r} || n_r y || -> 0 as well.

The ‘only if’ for k=2 would imply that for any beta > 0 there exist x and y such that…

July 1, 2010 at 5:16 pm |

I have a simple question: is it true that, even if alpha is a badly approximable

irrational number,

inf_(n in N) || n^2 alpha || = 0

? Thanks in advance, Jack D’Aurizio.

July 1, 2010 at 6:33 pm

Yes it is true. Here’s a lovely proof I was shown by Vitaly Bergelson. (It doesn’t give the best known bound, but it’s a very nice argument.)

Split [0,1) up into k equal-sized subintervals, and colour each integer n according to which interval 2n^2 alpha lies in mod 1. By van der Waerden’s theorem we can find a, a+d and a+2d, all of the same colour. That implies that 2(a+2d)^2 alpha - 4(a+d)^2 alpha + 2a^2 alpha is within 2/k of an integer. But it also equals 4d^2 alpha = (2d)^2 alpha, so we are done (since k was arbitrary).
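One can also watch the conclusion inf_n ||n^2 alpha|| = 0 emerge numerically for a concrete bounded-type alpha. Here is a sketch for alpha = sqrt(2), using integer square roots for exactness since m^2 * sqrt(2) overwhelms floating point (the search bound 20000 is an arbitrary choice of mine):

```python
from math import isqrt

def dist_nsq_sqrt2(m, digits=15):
    """|| m^2 * sqrt(2) ||, the distance from m^2*sqrt(2) to the nearest
    integer, computed via m^2*sqrt(2) = sqrt(2*m^4) and integer sqrt."""
    scale = 10 ** digits
    r = isqrt(2 * m ** 4 * scale * scale)  # floor(m^2*sqrt(2) * 10^digits)
    f = (r % scale) / scale                # fractional part
    return min(f, 1.0 - f)

# record-setting m: each entry beats every smaller m
records = []
best = 1.0
for m in range(1, 20001):
    d = dist_nsq_sqrt2(m)
    if d < best:
        best = d
        records.append((m, d))
```

The sequence of records drops steadily towards zero, as the van der Waerden argument guarantees it must.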

July 1, 2010 at 7:02 pm |

Really, really nice.

And what about improvements of the Heilbronn, Zaharescu and Iwaniec inequalities (which I just discovered)

|| n^2 alpha || < n^(-theta)

for infinitely many n

with theta=1/2 or 2/3-epsilon?

July 1, 2010 at 8:43 pm

The last I heard, which was about eight years ago, the result of Zaharescu’s was the record.

March 15, 2012 at 3:46 am |

I have an idea for the conjecture: let f(n) = max over 0 < a,b < 1 of min over 1 <= q <= n of q ||qa|| ||qb||. Then the conjecture is equivalent to lim as n -> infinity of f(n) = 0. With some help, I have computed f(n) for n up to 100000, and one of the interesting results I have found is that the values of a and b for some n are pretty close to sqrt(2)-1 and 2-sqrt(3), so solving the problem for those values might shed some light on the problem.

March 15, 2012 at 3:49 am

Oops, sorry, to edit: f(n) = max over 0 < a,b < 1 of min over 1 <= q <= n of q ||qa|| ||qb||, and the conjecture is equivalent to lim as n -> infinity of f(n) = 0.

March 15, 2012 at 6:41 am

Your maths formatting seems to have been mangled, but this does remind me of some computations that I did a while ago (perhaps the same ones, though it sounds like you went further). Namely, define E_N(x,y) = min over 1 <= n <= N of n ||nx|| ||ny||. For each N there will be some point (x_N, y_N) (conceivably more than one) where E_N is maximized.
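These maxima can be approximated crudely by a grid search (my own sketch, nothing like an exact calculation; the grid resolution is an arbitrary choice):

```python
def dist(t):
    """Distance from t to the nearest integer, ||t||."""
    return abs(t - round(t))

def E(N, x, y):
    """E_N(x, y) = min over 1 <= n <= N of n * ||n x|| * ||n y||."""
    return min(n * dist(n * x) * dist(n * y) for n in range(1, N + 1))

def f_approx(N, grid=120):
    """Coarse grid approximation of f(N) = max over (x, y) in (0,1)^2
    of E_N(x, y); a lower bound for the true maximum."""
    pts = [(i + 0.5) / grid for i in range(grid)]
    return max(E(N, x, y) for x in pts for y in pts)
```

Since E_N decreases pointwise as N grows, f_approx(N) is nonincreasing in N; for N = 1 it comes out just under the exact value 0.25 attained at (.5, .5).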

To find , and for a given , we end up having to solve three simultaneous equations of the form (), where and . I found the , up to . It appears from the data so far that and are determined by , according to some unknown rule exemplified in the following table:

n r s
1 0 0
2 1 1
3 1 1
4 2 1
5 2 1
7 3 2
8 3 2
15 6 4
17 7 5
18 7 5
26 11 7
34 13 9

Here is a table of values of and for . I’ll also list the resulting values of , and :

N n1 q1 n2 q2 n3 q3 x y e
1 .5000 .5000 .2500
2 .3694 .3694 .1365
3 1 + 2 + 3 – .4286 .2857 .1224
…
7 4 – 5 + 7 + .4119 .2680 .1013
…
15 5 + 7 + 15 + .4113 .2692 .0977
…
17 3 – 5 – 15 – .3889 .2692 .0962
18 5 + 7 + 17 + .4110 .2699 .0957
…
22 3 – 15 – 18 + .3871 .2688 .0937
…
26 8 + 15 + 18 + .3877 .2644 .0936
…
34 8 + 18 + 34 – .3877 .2643 .0928
…
49 7 + 17 – 26 – .4125 .2697 .0882
…
63 ? ? ? ? ? ? ? ? ?

In this table, three dots indicate repetition of the sequence above until the next displayed value of N. For example, the values of (n_i, q_i) are the same for 3 <= N <= 6.

From the data it looks plausible that (sqrt(2) - 1, 2 - sqrt(3)) is a limit point of the (x_N, y_N): in other words, that this point is in some sense a ‘worst case’ for LC.

March 15, 2012 at 8:25 am |

Your definition of E_n(x,y) is what I used in my f(n): my f(n) = max over (a,b) of E_n(a,b). Hope my math doesn’t get mangled again.

March 15, 2012 at 8:26 am |

Good, my math worked this time🙂

March 15, 2012 at 8:42 am |

And LC is equivalent to f(n) converging to 0.

April 11, 2013 at 10:35 pm |

[…] There are conjectures that seem to be approachable, and there are those that seem unapproachable. The Littlewood conjecture falls into the former category. Yet it has resisted efforts to solve it now for years. Akshay Venkatesh of NYU has a lovely survey about the problem. Also Tim Gowers has some interesting ideas about it—see this post. […]

April 14, 2013 at 12:26 pm |

By Theorem 1, for every n there exist integers such that and . By common sense there exist integers such that and , that is, , . Therefore, . Since and

November 13, 2016 at 6:25 pm |

Here is an argument suggesting that the Littlewood conjecture may be correct. The first statement is rather ideological: all bounded-type irrational numbers are equal; that is, there are no numbers “more equal than others”. Then, if there is a counterexample for some alpha of bounded type, it must be a counterexample for all alpha of bounded type. But then the Hausdorff dimension would be positive, which is impossible by Einsiedler, Katok and Lindenstrauss.

November 13, 2016 at 6:38 pm |

I have another comment. It is actually a question.

Consider two badly approximable irrational numbers alpha and beta, i.e. alpha and beta are of bounded type. Now form a two-dimensional vector (alpha, beta). Is it possible that this vector is badly approximable as a two-dimensional vector?

I suspect the answer is yes but I don’t know any reference.
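As a numerical companion to the question, here is a sketch for one concrete pair of bounded-type numbers, sqrt(2) and sqrt(3) (my choice of example). Being badly approximable as a two-dimensional vector would mean that sqrt(q) * max(||q*alpha||, ||q*beta||) stays bounded away from zero; by Dirichlet's theorem it cannot exceed roughly 1 infinitely often:

```python
from math import sqrt

def dist(t):
    """Distance from t to the nearest integer, ||t||."""
    return abs(t - round(t))

def worst_quality(alpha, beta, N):
    """min over 1 <= q <= N of sqrt(q) * max(||q*alpha||, ||q*beta||).
    Staying bounded away from zero as N grows is what 'badly
    approximable as a 2-d vector' would mean for (alpha, beta)."""
    return min(sqrt(q) * max(dist(q * alpha), dist(q * beta))
               for q in range(1, N + 1))

val = worst_quality(sqrt(2), sqrt(3), 100000)
```

A scan like this can only ever suggest an answer, of course: it cannot distinguish a positive liminf from a very slow decay to zero.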