This short post is designed as a possible way in to EDP for anyone who might be interested in participating but daunted by the idea of reading large amounts of material. One of the natural strategies for proving EDP is to try to formulate and prove stronger statements. At first that sounds paradoxical: isn’t it even harder to prove a stronger statement? But the answer to that question is often no. To give a slightly silly example, suppose you were asked to prove that for every $a$ and every $k$ there exists $m$ such that for every $n\geq m$, if $n$ is odd, coprime to $a$, and has at least $k$ prime factors (counted with multiplicity), then $a^{\phi(n)}\equiv 1$ mod $n$, where $\phi$ is Euler’s totient function. You could make the problem easier by proving Euler’s theorem, that $a^{\phi(n)}\equiv 1$ mod $n$ for every $n$ and every $a$ that is coprime to $n$. You wouldn’t have as many hypotheses to use, but that’s good, since they can’t be used. Perhaps a better and more relevant example is when you generalize the class of numbers you are working with so as to allow a wider set of methods. For instance, suppose you want to prove that the largest possible product of three positive integers that add to 300 is at most $10^6$. If you replace positive integers by positive reals, then you suddenly have available methods that you didn’t have before — for example, you could use compactness plus a lemma that says that if any two of the numbers are not equal then you can increase the product by replacing both of them by their average. (I’m not saying that’s the easiest proof — just that it’s a proof that you can’t do without first generalizing the statement.)

In this post I want to mention three strengthenings of EDP. One of them I find interesting but not promising as a way of proving EDP, for reasons that I will explain. The other two also look very interesting to me, and much more promising. All of them have been mentioned already, but the point of this post is to collect them in one convenient place.

**Restricting the allowable common differences.**

I know of no $\pm 1$-sequence that has bounded discrepancy on all HAPs with common difference either a prime or a power of 2. One can find periodic sequences that have bounded discrepancy on all HAPs of prime common difference, but if you allow powers of 2 as well, then no periodic construction can work. Indeed, if the period is $2^km$ for some odd integer $m$, then consider a HAP of common difference $2^{k+1}$: each interval of length $2^{k+1}m$ contains exactly $m$ multiples of $2^{k+1}$, and by periodicity these hit the same values of the sequence in every such interval. Since $m$ is odd, the sum of those values is odd and hence non-zero, so the HAP has a bias of at least 1 in each interval of length $2^{k+1}m$, for parity reasons, and this bias will gradually accumulate.
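To see the parity obstruction numerically, here is a quick check (a sketch in Python; the particular period-6 sequence is an arbitrary illustration, not a distinguished example):

```python
# An arbitrary +-1 sequence of period 6 = 2^1 * 3, so k = 1 and m = 3.
period = [1, -1, -1, 1, 1, -1]

def x(n):
    # 1-indexed periodic sequence
    return period[(n - 1) % 6]

# Partial sums along the HAP of common difference 2^(k+1) = 4.
d = 4
partial, worst = 0, 0
for j in range(1, 10001):
    partial += x(d * j)
    worst = max(worst, abs(partial))

# Each window of 12 = lcm(4, 6) integers contains 3 multiples of 4, and
# periodicity makes every window contribute the same odd sum, so the
# partial sums drift linearly.
print(worst)
```

For this particular choice the window sum happens to be $-1$, so the discrepancy after $j$ steps along the HAP is roughly $j/3$.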

One can defeat powers of 2 by using the Morse sequence (if you haven’t seen it, then it’s a nice exercise to see how it is generated), but that has unbounded discrepancy on HAPs whose common difference is an odd prime. (I can’t remember exactly why.)

Why do I not think that this potential strengthening of EDP is likely to be useful for attacking the problem? Well, one promising feature of EDP is that it seems to generalize to sequences taking other kinds of values, such as complex numbers of modulus 1 or even unit vectors in an arbitrary Hilbert space. However, something that I’ve only just noticed (though others may have spotted it ages ago) is that if one restricts to, say, prime-power common differences, then even the complex version of the problem becomes false. A very simple counterexample is the sequence $(\omega^n)$, where $\omega=e^{\pi i/3}$ (that is, a primitive sixth root of 1). Then the sum along any HAP with common difference $d$ that isn’t a multiple of 6 will be a sum of a GP with common ratio $\omega^d\neq 1$, and will therefore be (uniformly) bounded. In particular, this is true of HAPs with prime-power common difference.
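One can check the boundedness claim directly (a Python sketch; the cut-off $N$ is an arbitrary choice):

```python
import cmath

# x_n = w^n with w = exp(i*pi/3), a primitive sixth root of unity.
w = cmath.exp(1j * cmath.pi / 3)

def is_prime_power(d):
    for q in range(2, d + 1):
        if d % q == 0:          # q is the smallest prime factor of d
            while d % q == 0:
                d //= q
            return d == 1
    return False

# Largest |sum along a HAP| over all prime-power common differences d
# and all lengths L with d * L <= N.
N = 3000
worst = 0.0
for d in range(2, N + 1):
    if not is_prime_power(d):
        continue
    s = 0
    for j in range(1, N // d + 1):
        s += w ** (d * j)
        worst = max(worst, abs(s))
print(worst)
```

The bound comes from summing the geometric progression: $\bigl|\sum_{j\leq L}\omega^{dj}\bigr|\leq 2/|1-\omega^d|\leq 2$, since $|1-\omega^d|\geq 1$ whenever $d$ is not a multiple of 6, and prime powers never are.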

This simple example also shows that a certain real generalization of EDP is false when you restrict the common differences in this way: you cannot prove unbounded discrepancy for sequences that take values in the set $\{-2,-1,1,2\}$. The counterexample is simply twice the real part of the example above: the sequence $1,-1,-2,-1,1,2$ repeated over and over again. So if (and I think it’s a big if) EDP is true for HAPs with common differences that are either primes or powers of 2, then any proof must make pretty strong use of the fact that the sequence is a $\pm 1$-valued sequence. This rules out a lot of promising techniques, so it appears to make the problem harder rather than easier.
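The same numerical check works for the real version (again a sketch, with arbitrary parameters):

```python
# Twice the real part of w^n: a {+-1, +-2}-valued sequence of period 6.
period = [1, -1, -2, -1, 1, 2]

def x(n):
    return period[(n - 1) % 6]

def is_prime_power(d):
    for q in range(2, d + 1):
        if d % q == 0:          # q is the smallest prime factor of d
            while d % q == 0:
                d //= q
            return d == 1
    return False

# Largest |HAP sum| over prime-power common differences d <= N.
N = 3000
worst = 0
for d in range(2, N + 1):
    if not is_prime_power(d):
        continue
    s = 0
    for j in range(1, N // d + 1):
        s += x(d * j)
        worst = max(worst, abs(s))
print(worst)
```

Since each partial sum here is twice the real part of the corresponding complex partial sum, everything is bounded by 4.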

**A discrepancy question about matrices.**

In my earlier post I mentioned what I called the non-symmetric vector-valued EDP. I have subsequently realized (though perhaps a better word is “remembered” since I must have been sort of aware of this at some point) that it is equivalent to another discrepancy statement that I now find more appealing. The statement is the following.

**Conjecture.** Let $f$ be a function from $\mathbb{N}^2$ to $\mathbb{R}$ and suppose that $f(n,n)=1$ for every $n$. Then for every real number $C$ there exist HAPs $P$ and $Q$ such that $\left|\sum_{m\in P,n\in Q}f(m,n)\right|\geq C$.

If we take a function $g:\mathbb{N}\to\{-1,1\}$ and define $f(m,n)=g(m)g(n)$, then for every $C$ the above conjecture, if true, gives us HAPs $P$ and $Q$ such that $\left|\sum_{m\in P}g(m)\right|\left|\sum_{n\in Q}g(n)\right|\geq C$, which proves that the discrepancy of $g$ is at least $\sqrt{C}$. So the above conjecture implies EDP. But the class of functions of the form $g\otimes g$ with $g$ a $\pm 1$-valued function is a very small subset of the class of all functions that take the value 1 along the diagonal, so this conjecture is very much stronger than EDP.
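The factorization step is easy to check numerically (sketch; the two HAPs and the random $g$ are arbitrary choices):

```python
import random

# For f(m, n) = g(m) g(n), a HAP-product sum factorizes as the product
# of the two HAP sums, so a big product sum forces one HAP sum to be
# at least sqrt(C).
random.seed(0)
N = 120
g = [0] + [random.choice([-1, 1]) for _ in range(N)]   # g[1..N]

def hap(d, L):
    return [d * j for j in range(1, L + 1)]

P, Q = hap(3, 20), hap(4, 25)
lhs = sum(g[m] * g[n] for m in P for n in Q)
rhs = sum(g[m] for m in P) * sum(g[n] for n in Q)
print(lhs, rhs)
```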

It is also equivalent to the statement that for every $c>0$ there exists a diagonal matrix of trace at least 1 that can be written as a linear combination $\sum_i\lambda_iP_i\otimes Q_i$, where for each $i$ the functions $P_i$ and $Q_i$ are characteristic functions of HAPs, and $\sum_i|\lambda_i|\leq c$. (This is the statement that I discussed at length in my previous post.) In one direction this is easy: if a decomposition of that kind exists, then $\sum_i\lambda_i\langle f,P_i\otimes Q_i\rangle$ equals the trace of the diagonal matrix and is therefore at least 1, but it also equals $\sum_i\lambda_i\sum_{m\in P_i,n\in Q_i}f(m,n)$, which is at most $c\max_i\left|\sum_{m\in P_i,n\in Q_i}f(m,n)\right|$. It follows that there is some $i$ such that $\left|\sum_{m\in P_i,n\in Q_i}f(m,n)\right|\geq 1/c$.

In the other direction, if no such decomposition of a diagonal matrix exists, then the Hahn-Banach theorem gives us a linear functional that separates the class of diagonal matrices with trace at least 1 from the class of linear combinations of HAP products defined above. It is easy to check that this functional can be represented by a matrix $f$ that takes a constant value of at least 1 along the diagonal and satisfies $\left|\sum_{m\in P,n\in Q}f(m,n)\right|\leq 1/c$ for any two HAPs $P$ and $Q$.

The conjecture is so much stronger than EDP that I think it would be a mistake just to assume that it is true. My guess is that it *is* true, but I would be very interested if there were a counterexample (even if, like the complex sequence above, it is disappointingly simple — in fact, if there is a counterexample, then it seems quite likely that it will be fairly simple). And if there isn’t a counterexample, then the fact that it is so much stronger a conjecture than EDP does this time make me think that the result might be easier to prove than EDP itself.

Until very recently, I had been mainly interested in the dual version of the question (that is, the question about decomposing diagonal matrices), but now it seems to me that the matrix discrepancy question is worth thinking about directly. It is a clean question, and it has the big advantage over the original EDP question that it does not restrict values to the set , so a number of methods can be used that cannot be used directly for EDP. For instance, linear programming can be used to get experimental results: Sasha Nikolov may be going to look into this.
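Here is a minimal version of the kind of LP experiment this suggests, assuming SciPy is available (the tiny size $n=6$ and the encoding are illustrative choices, not anything canonical): minimise the largest HAP-product sum over matrices with 1s on the diagonal and free off-diagonal entries.

```python
from scipy.optimize import linprog

# LP sketch: over n x n matrices f with f(i, i) = 1, minimise t
# subject to |sum of f over P x Q| <= t for all HAPs P, Q in {1,...,n}.
n = 6
haps = [[d * j for j in range(1, L + 1)]
        for d in range(1, n + 1) for L in range(1, n // d + 1)]
off = [(i, j) for i in range(1, n + 1) for j in range(1, n + 1) if i != j]
idx = {e: k for k, e in enumerate(off)}
nv = len(off) + 1                      # off-diagonal entries, then t

A_ub, b_ub = [], []
for P in haps:
    for Q in haps:
        coeffs = [0.0] * (nv - 1)
        for m1 in P:
            for m2 in Q:
                if m1 != m2:
                    coeffs[idx[(m1, m2)]] += 1.0
        diag = len(set(P) & set(Q))    # diagonal entries contribute 1 each
        A_ub.append(coeffs + [-1.0]); b_ub.append(-diag)   # sum <= t
        A_ub.append([-c for c in coeffs] + [-1.0]); b_ub.append(diag)  # -t <= sum

c = [0.0] * (nv - 1) + [1.0]           # objective: minimise t
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (nv - 1) + [(0.0, None)])
print(res.fun)
```

The singleton products $\{d\}\times\{d\}$ already force the optimum to be at least 1; the interesting experimental question is how the optimum grows as $n$ increases.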

Given any matrix $M$, one can find vectors $(u_m)$ and $(v_n)$ such that $M(m,n)=\langle u_m,v_n\rangle$, so this matrix question is actually trivially equivalent to the non-symmetric vector-valued question I had formulated earlier. But expressing it in terms of vectors makes it harder to think about rather than easier, and that is what had previously put me off thinking about the question directly.

There is one class of matrices that is particularly worth mentioning, I think. If EDP is true but this matrix question is false, then the best candidates for counterexamples to the matrix problem are probably matrices of high rank, and an obvious class of matrices that tend to have high rank is matrices that are constant on diagonals (that is, Toeplitz matrices).

Suppose, then, that our matrix is defined by $A(m,n)=f(m-n)$ for some function $f$ such that $f(0)=1$. Then $\sum_{m\in P,n\in Q}A(m,n)=\sum_dr(d)f(d)$, where by $r(d)$ I mean the number of ways of writing $d$ as $m-n$ with $m\in P$ and $n\in Q$. So we have a class of functions $r$ that I’ll call *HAP convolutions*, and we’d like to show that if $f$ is any function with $f(0)=1$ and $C$ is any constant, then there exists a HAP convolution $r$ such that $\left|\sum_dr(d)f(d)\right|\geq C$. This is true if and only if there is an efficient way of writing the function that is 1 at 0 and 0 everywhere else as a linear combination of HAP convolutions. This is another question that could be investigated using linear programming, and perhaps since it concerns functions of just one variable we could get more extensive results than we could for the general matrix problem.
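To make the definition concrete, here is the collapsing identity checked numerically (a sketch; the two HAPs and the function $f$ are arbitrary):

```python
from collections import Counter
import random

# HAP convolution: for the Toeplitz matrix A(m, n) = f(m - n), the sum
# over P x Q collapses to sum_d r(d) f(d), where r(d) counts the
# representations d = m - n with m in P and n in Q.
def hap(d, L):
    return [d * j for j in range(1, L + 1)]

P, Q = hap(2, 15), hap(3, 9)
r = Counter(m - n for m in P for n in Q)   # the HAP convolution of P and Q

random.seed(1)
f = {d: random.uniform(-1, 1) for d in r}  # arbitrary values, except that
f[0] = 1.0                                 # f(0) = 1 as in the text

lhs = sum(f[m - n] for m in P for n in Q)
rhs = sum(r[d] * f[d] for d in r)
print(lhs, rhs)
```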

**The modular conjecture.**

In his most recent guest post, Gil Kalai reformulates EDP as a question about sums mod $p$. EDP is trivially equivalent to the following assertion.

**Conjecture.** For every prime $p$ there exists $N$ such that for every $\pm 1$ sequence $x_1,\dots,x_N$ of length $N$ and every $a$ there exists a HAP $P\subseteq\{1,\dots,N\}$ such that $\sum_{n\in P}x_n\equiv a$ mod $p$.

At first that doesn’t look like a very interesting reformulation since it is too obviously equivalent to EDP. But what makes it interesting is that it has a very natural generalization that doesn’t have any obvious counterexamples.

**Stronger Conjecture.** For every prime $p$ there exists $N$ such that for every sequence $x_1,\dots,x_N$ of length $N$ of non-zero numbers mod $p$ and every $a$ there exists a HAP $P\subseteq\{1,\dots,N\}$ such that $\sum_{n\in P}x_n\equiv a$ mod $p$.

In other words, we replace the condition that the sequence takes values $\pm 1$ by the much weaker condition that it is never zero. Gil calls this the *modular conjecture*. (He also presented it in a comment on a much earlier EDP post.)

As Gil points out in his post, one can write down a polynomial that is identically zero (mod $p$) if and only if the modular conjecture is true for $p$. It is tempting to try to prove that it is zero by analysing its coefficients. More generally, this approach to the problem appears to open the door to a number of algebraic methods.

If you want to prove EDP this way, you have to solve the conjecture for some non-zero $a$. (Since $p$ is prime, if you can show it for one non-zero $a$ then you’ve shown it for all, since multiplying the whole sequence by a constant multiplies every HAP sum by that constant.) However, the problem with $a=0$ is interesting in its own right, and in particular it seems to be interestingly different from the problem with non-zero $a$. At the time of writing, I don’t see any way of modifying the EDP examples to obtain exponentially long sequences mod $p$ that avoid zero sums — it seems to me that the upper bound on the length could be significantly smaller. That would be interesting, as it would place constraints on what a proof could look like for non-zero $a$.

**A fourth strengthening.**

This isn’t really meant as part of the body of the post, but more of an afterthought. A strengthening of EDP that has been considered since very early on in the project is where you replace a $\pm 1$ sequence by a sequence of unit vectors in a Hilbert space. To be precise, one looks at the following statement.

**Conjecture.** Let $x_1,x_2,x_3,\dots$ be a sequence of unit vectors in a Hilbert space and let $C$ be a real number. Then there exists a HAP $P$ such that $\left\|\sum_{n\in P}x_n\right\|\geq C$.

There is something slightly curious about this conjecture, which is that it is very hard to see how having infinitely many dimensions to play with could help one find a counterexample. In some sense, if you use too many dimensions, then it ought to be the case that there will be a HAP $P$ such that the vectors $x_n$ with $n\in P$ are pointing in lots of different directions and therefore not cancelling. That makes me wonder whether one can prove that if there is a counterexample to the above conjecture, then there must be a counterexample for some finite-dimensional Hilbert space. Or if that is too much to ask, perhaps there might have to be a counterexample where all the $x_n$ live in some compact set. I think it would be very interesting to think about whether the vague intuition I have just expressed can be made precise. As it stands, it is of course not even close to a proper argument. (I should stress one aspect of what I am saying. If there is any vector sequence of bounded discrepancy, then one can arbitrarily modify the vector $x_p$ for each prime $p$ and the sequence will still have bounded discrepancy, at least on every HAP of common difference greater than 1, since such a HAP contains at most one prime. So I’m not suggesting that every bounded-discrepancy sequence lives, or almost lives, in finite dimensions, but that it can be used to construct one that does.)

As I write that, another question occurs to me. For this one I don’t even have a vague intuitive argument. Let’s suppose that we can find a counterexample $M$ to the matrix discrepancy question earlier. Suppose also that $M$ takes the form $M(m,n)=\langle u_m,v_n\rangle$ for some (not necessarily unit) vectors $u_m$ and $v_n$ in a Hilbert space. Must there be an example where the $u_m$ and $v_n$ lie in a finite-dimensional Hilbert space, or at least in a compact subset of a Hilbert space?

**A combined generalization.**

A second afterthought is that the matrix question has a modular version that might be of some interest. Let $p$ be a prime and let $f$ be a function from $\{1,\dots,N\}^2$ to $\mathbb{Z}_p$ with $f(n,n)=1$ for every $n$. Must the sums of $f$ on products of HAPs take all possible values mod $p$? What if we merely ask for $f$ to take non-zero values on the diagonal?

If the strong modular conjecture is false, then we can turn a counterexample $(x_n)$ into a diagonal matrix in the obvious way and we get a counterexample to the second question. Indeed, suppose that $x_n\not\equiv 0$ for every $n$ and define $f(n,n)=x_n$ and $f(m,n)=0$ when $m\neq n$. Then $\sum_{m\in P,n\in Q}f(m,n)=\sum_{n\in P\cap Q}x_n$, which avoids some value, since $P\cap Q$ is a HAP. But with matrices there are many more ways of trying to avoid particular values, so the strong modular matrix conjecture looks like a much stronger statement. Again there are two cases — avoiding 0 and avoiding a non-zero value — of which only the second obviously implies EDP.

September 6, 2012 at 12:04 pm |

I have a suggestion for how one might attempt to get a handle on the matrix question in either its real or its mod-$p$ form in the special case where we insist that the HAPs have the same common difference. Let’s imagine trying to build a counterexample: that is, an $N\times N$ matrix (with $N$ arbitrarily large) such that the sum over any product of HAPs $P$ and $Q$ of the same common difference has absolute value at most $C$ (or avoids $a$ mod $p$). We might attempt to do this by finding an ordering $d_1,d_2,\dots,d_N$ of the numbers from 1 to $N$ such that every number in that ordering precedes all its factors. We could then attempt to define the matrix by choosing its values at points with highest common factor $d_1$, then points with highest common factor $d_2$, and so on.

If we pursued such a strategy, then what would we have to do when we got to $d$? We could decide that we would worry only about sums over $P\times Q$ where the common differences of $P$ and $Q$ were both $d$. If we did this, we wouldn’t mess up any of our earlier work, since we would be choosing values at points that were not both multiples of any predecessor of $d$ (since all factors of $d$ appear after $d$ in the ordering).

So far so good, but the problem is that the number of points where we get to choose values is smaller than the number of HAP products that we are trying to deal with. For example, when we get to the last number in the ordering, which equals 1, we have chosen the values of the matrix for all non-coprime pairs $(m,n)$. That leaves us approximately $(6/\pi^2)N^2$ values to choose, since that is roughly the number of coprime pairs, but the number of sums we need to worry about is $N^2$, since that is the number of products of HAPs of common difference 1.

However, in the mod-$p$ version, it’s not clear that we can’t deal with several HAPs at once. For example, if I have numbers $a_1,\dots,a_k$ and want to choose $x$ such that none of $x+a_1,\dots,x+a_k$ is congruent to $b$ mod $p$, I can do it easily if $k<p$.
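In code, the observation is just this (a trivial sketch):

```python
# If k < p, one can always pick x with x + a_i != b (mod p) for all i:
# only the k residues b - a_i are forbidden, leaving p - k choices.
def choose_x(avals, b, p):
    forbidden = {(b - a) % p for a in avals}
    for x in range(p):
        if x not in forbidden:
            return x
    return None                  # can only happen if k >= p

p = 5
x = choose_x([0, 1, 3], 2, p)
assert x is not None
assert all((x + a) % p != 2 for a in [0, 1, 3])
print(x)
```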

So here is a question that could be helpful. Suppose you have defined a function $f$ on all pairs $(m,n)$ that have a non-trivial common factor. Can you choose the values of $f$ at coprime pairs in such a way that the sum of $f$ over any set of the form $\{1,\dots,r\}\times\{1,\dots,s\}$ is not congruent to $a$?

September 6, 2012 at 2:26 pm |

Let me try to answer the question at the end of the previous comment in a trivial way. I don’t expect to succeed, but would like to know what goes wrong.

I think it will be easier if I ask a more abstract question. Suppose I’ve got sets $A_1,\dots,A_m\subseteq\{1,\dots,n\}$ (where $m$ and $n$ are not the same as the $m$ and $n$ above) and numbers $b_1,\dots,b_m$ and I want to find a sequence $x_1,\dots,x_n$ of numbers mod $p$ such that $\sum_{i\in A_j}x_i$ is never congruent to $b_j$ mod $p$. Is there a simple sufficient condition that allows me to choose such a sequence? I’m hoping for something a bit like Hall’s condition for finding a perfect matching, but unlike with that problem I think it is unrealistic to ask for a simple sufficient condition that is also necessary.

A very simple sufficient condition is this: that for each $i$ the number of sets $A_j$ with maximum element $i$ is at most $p-1$. If that is the case, then let’s suppose we have chosen $x_1,\dots,x_{i-1}$ in such a way that the conclusion we want is true for all sets $A_j$ that are contained in $\{1,\dots,i-1\}$. There are at most $p-1$ sets of maximum element $i$, so we can choose $x_i$ so as to ensure that none of the sums over those sets takes a value we don’t want it to. (Proof: for each such set, there is one bad value of $x_i$, so at most $p-1$ bad values in all.)
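The greedy proof translates directly into an algorithm (a sketch; the little set system at the end is a made-up example satisfying the condition):

```python
# Greedy choice: if for each i at most p - 1 sets have maximum element
# i, pick x_1, x_2, ... in turn, at each step avoiding the one bad
# residue contributed by each set completed at that step.
def greedy_sequence(n, sets, targets, p):
    x = [None] * (n + 1)             # 1-indexed
    for i in range(1, n + 1):
        bad = set()
        for A, b in zip(sets, targets):
            if max(A) == i:          # completing A forbids one value of x_i
                partial = sum(x[j] for j in A if j != i) % p
                bad.add((b - partial) % p)
        # at most p - 1 bad values by hypothesis, so a choice remains
        x[i] = next(v for v in range(p) if v not in bad)
    return x[1:]

p = 5
sets = [[1, 2], [2, 3], [1, 3], [3], [2, 4]]
targets = [0, 1, 2, 3, 4]
x = greedy_sequence(4, sets, targets, p)
for A, b in zip(sets, targets):
    assert sum(x[j - 1] for j in A) % p != b
print(x)
```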

That condition can be generalized in a trivial way: it is enough for it to hold for *some* total ordering of the ground set. So now let’s ask whether we can find a permutation that does the job when the ground set is the set of all coprime pairs and the sets are intersections of this set with sets of the form $\{1,\dots,r\}\times\{1,\dots,s\}$. We would like to find a total ordering such that for each coprime pair the number of sets that contain that pair and no later coprime pair is at most $p-1$. Is this possible?

The answer would be trivially no if the number of coprime pairs were less than $N^2/(p-1)$, but that is not the case, and becomes less and less the case as $p$ gets bigger. However, there are little problematic places, such as rows where one of the coordinates has lots of small prime factors, which leads to long strings of points that are not coprime pairs. I don’t yet see whether that kind of problem makes the answer obviously no, or whether it might be possible to get round it in a clever way.
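For what it’s worth, the density of coprime pairs is easy to estimate numerically (sketch):

```python
from math import gcd, pi

# The proportion of coprime pairs in {1, ..., N}^2 approaches
# 6 / pi^2 ~ 0.6079, comfortably more than 1 / (p - 1) for p >= 3.
N = 400
coprime = sum(1 for a in range(1, N + 1) for b in range(1, N + 1)
              if gcd(a, b) == 1)
density = coprime / N**2
print(density, 6 / pi**2)
```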

September 6, 2012 at 4:41 pm |

So far that isn’t particularly reminiscent of Hall’s theorem. But what if we ask for conditions under which we can be sure that an appropriate ordering of the ground set exists? If there were a neat characterization, then it might conceivably look a bit like Hall’s condition.

One small remark is that if all the sets in the set system have size 2, then we are looking for an ordering of the vertices of a graph such that each vertex is joined to at most $p-1$ earlier vertices. That is a well-known condition: such a graph is said to be $(p-1)$-*degenerate*. So we are looking at a certain hypergraph generalization of degeneracy.

Another small remark is that without the coprimeness condition the problem is easy: just take any ordering of $\{1,\dots,N\}^2$ that extends the partial order in which $(a,b)\leq(c,d)$ if and only if $a\leq c$ and $b\leq d$. Then the only set $\{1,\dots,r\}\times\{1,\dots,s\}$ that has $(a,b)$ as its last element in this ordering is $\{1,\dots,a\}\times\{1,\dots,b\}$. So that system is 1-degenerate.
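For graphs, the ordering in question is exactly a degeneracy ordering, which the usual minimum-degree peeling algorithm produces (a sketch, checked on a path graph):

```python
# Degeneracy ordering: repeatedly delete a minimum-degree vertex; the
# reversed removal order is an ordering in which every vertex has at
# most d earlier neighbours, where d is the degeneracy of the graph.
def degeneracy_ordering(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    removal, degeneracy = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum remaining degree
        degeneracy = max(degeneracy, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        removal.append(v)
    removal.reverse()          # now each vertex has few *earlier* neighbours
    return removal, degeneracy

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}      # the path 1-2-3-4
order, d = degeneracy_ordering(adj)
pos = {v: i for i, v in enumerate(order)}
for v, ns in adj.items():
    assert sum(1 for u in ns if pos[u] < pos[v]) <= d
print(order, d)
```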

Suppose we totally order just the coprime pairs in a way that extends the coordinate-domination order. That alone doesn’t work. Suppose, for instance, that none of the points $(x,y)$ with $a<x\leq a+k$ and $b<y\leq b+k$ is a coprime pair, and that our order happens to put the coprime pair $(a,b)$ after every other coprime pair $(x,y)$ with $x\leq a+k$ and $y\leq b+k$. Then the point $(a,b)$ is the maximal coprime pair belonging to all the sets $\{1,\dots,r\}\times\{1,\dots,s\}$ with $a\leq r\leq a+k$ and $b\leq s\leq b+k$. So if $(k+1)^2>p-1$, then the condition is violated.

More generally, suppose we have an $m\times m$ square of points such that none of them is a coprime pair. Is there any hope of finding an ordering that works?

The answer is not obviously no. Suppose, for instance, that we take the complement of the set of all points in the square and try to order this complement in such a way that the degeneracy condition holds.

Hmm … I’ve changed my mind. I think the answer *is* obviously no. Suppose that our square is $\{u+1,\dots,u+m\}\times\{v+1,\dots,v+m\}$, and suppose that the maximal pair in our ordering that belongs to the set $\{1,\dots,u+m\}\times\{1,\dots,v+m\}$ is the pair $(a,b)$. Then it is also the maximal pair in any set $\{1,\dots,r\}\times\{1,\dots,s\}$ with $a\leq r\leq u+m$ and $b\leq s\leq v+m$, since any such set contains $(a,b)$ and is contained in $\{1,\dots,u+m\}\times\{1,\dots,v+m\}$. Since $(a,b)$ does not belong to the square above, we have $a\leq u$ or $b\leq v$, so there are at least $m+1$ such sets.

September 6, 2012 at 5:22 pm

OK, the failure above is good news, because it shows that there isn’t a trivial just-do-it counterexample to the strong modular matrix EDP. Before I discuss what that teaches us, let me briefly remark that the simple argument just given points to an obvious necessary condition for an ordering to exist: every set $A$ in the set system must contain a point that belongs to at most $p-1$ of the other sets in the system that are subsets of $A$. If that fails for some $A$, then trivially there is no ordering of the points such that the maximal element of $A$ is the maximal element of at most $p-1$ other sets in the system.

But that’s not particularly relevant, because an ordering of that kind does *not* exist. So if we are determined to find a counterexample, then either we have to worry about big rectangles of points that do not contain any coprime pairs, or we have to ensure somehow that the values we have already chosen in those rectangles make choosing the coprime pairs easier than it would normally be, or we have to give up on the idea of dealing with HAPs with common difference $d$ before we deal with HAPs whose common difference is a factor of $d$. None of these options seems very palatable. In short, finding a counterexample looks hard.

September 6, 2012 at 10:04 pm |

A trivial observation about the ‘stronger’ modular conjecture for $a=0$: to find an exponentially long sequence modulo $p$ that avoids zero sums along HAPs it would be natural to look for a *multiplicative* function whose partial sums avoided zero.

September 7, 2012 at 8:15 am

One reason I thought that avoiding zero might be harder than avoiding a non-zero value is that finding a multiplicative sequence of that kind seems quite hard. The techniques we have used up to now tend to rely heavily on the fact that a sum of small numbers is small. But if you want to avoid zero, then you can’t use that. You can use the fact that a big number plus a small number is big, but then you have to have a bigger supply of big numbers than it seems possible to get. I’m therefore quite surprised by what you say about $p=5$, but perhaps it is the law of small numbers striking.

September 7, 2012 at 7:01 am

If I’ve understood the comments that Gil linked to from earlier posts, it seems that, at least for $p=5$, avoiding zero is easier than avoiding a non-zero value. (One can have sequences at least as long as 155 in the former case, but no longer than 83 in the latter.)

September 9, 2012 at 4:06 am

I was going to add that it may not necessarily be easier to avoid 0 than 1, since with zero we have more symbols to work with: {1, 2, 3, 4} for 0 mod 5 but only {2, 3, 4} for 1 mod 5. But I tried avoiding 0 with only {2, 3, 4} and I actually get an example of length 149 (which I think is maximal):

2, 2, 2, 2, 3, 2, 3; 2, 3, 3, 2, 2, 3, 3; 2, 2, 3, 3, 2, 3, 3

2, 2, 2, 3, 3, 2, 3; 2, 2, 3, 2, 3, 3, 2; 3, 3, 2, 2, 3, 2, 3

3, 2, 3, 2, 3, 2, 4; 3, 3, 3, 2, 2, 2, 3; 3, 2, 3, 2, 3, 3, 2

2, 3, 3, 2, 3, 2, 2; 3, 3, 2, 3, 2, 2, 3; 2, 2, 3, 3, 4, 3, 3

2, 3, 2, 2, 3, 3, 2; 2, 3, 3, 2, 2, 4, 2; 2, 3, 2, 3, 2, 3, 3

2, 2, 2, 4, 2, 2, 3; 4, 3, 3, 2, 3, 3, 2; 2, 3, 3, 2, 3, 2, 2

3, 2, 3, 3, 2, 3, 4; 2, 3, 3, 2, 2, 4, 2; 2, 3, 2, 3, 3, 2, 2

3, 2
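Claims like this are easy to machine-check. Here is a small verifier (a Python sketch, sanity-checked below on two tiny inputs; applied to the 149-term sequence above it would confirm or refute the claim):

```python
# Checker: does a sequence of values mod p avoid HAP sums congruent to
# a given forbidden value? Every HAP {d, 2d, ..., Ld} with Ld <= len(seq)
# is tested via running partial sums.
def avoids(seq, p, forbidden):
    n = len(seq)
    for d in range(1, n + 1):
        s = 0
        for j in range(d, n + 1, d):
            s = (s + seq[j - 1]) % p
            if s == forbidden:
                return False
    return True

assert avoids([2, 2], 5, 0)          # sums are 2, 4 (d = 1) and 2 (d = 2)
assert not avoids([2, 3], 5, 0)      # 2 + 3 = 5 = 0 mod 5
print("ok")
```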

September 7, 2012 at 10:42 am |

It occurs to me that I haven’t completely killed off the simple approach I was trying to use to obtain a counterexample to the strong modular matrix conjecture in the case of HAPs with the same common difference. I’ve shown that the simple approach fails if you insist on choosing the value at $(m,n)$ before the value at $(m',n')$ whenever the highest common factor of $m'$ and $n'$ strictly divides the highest common factor of $m$ and $n$. But what if you don’t insist on that? The question then becomes the following. Let $\mathcal{A}$ be the set of all products $P\times Q$, where $P$ and $Q$ are HAPs with the same common difference and subsets of $\{1,2,\dots,N\}$. Does there exist an ordering of $\{1,2,\dots,N\}^2$ minus the diagonal such that with respect to that ordering each point is the maximal element of at most $p-1$ sets in $\mathcal{A}$?

Let me try to rephrase that question as a question about a certain bipartite graph, because I think it will have a dual formulation. The bipartite graph has two sets of vertices: points in $\{1,\dots,N\}^2$ and sets in $\mathcal{A}$, with an edge joining a point to a set whenever the point is an element of the set.

In the abstract, we have a bipartite graph with vertex sets $X$ and $Y$, and we would like to find an ordering of $X$ such that each vertex of $X$ is the maximal neighbour of at most $p-1$ vertices in $Y$.

Here is a sufficient condition for such an ordering to exist. Suppose we have an ordering of the vertices not of $X$ but of $Y$. We could order the vertices of $X$ by saying that $x<x'$ if the minimal $y$ that is joined to $x$ is smaller than the minimal $y$ that is joined to $x'$. In other words, we go through the vertices of $Y$ in order and write down any neighbours that we have not yet written down. (In cases where several new neighbours appear at once, we choose the order arbitrarily.)

Under this ordering, how many elements of $Y$ have maximum neighbour $x$? Well, let’s write the elements of $Y$ in order as $y_1,y_2,y_3,\dots$. Then $x$ will first appear as a new neighbour of some $y_i$, and it will need to be the largest new neighbour of $y_i$. And then we will need the neighbours of $y_{i+1},\dots,y_{i+k}$ all to be vertices of $X$ that have already appeared as neighbours of earlier vertices of $Y$. And finally $y_{i+k+1}$ will have a neighbour that has not yet appeared. Under those and only those circumstances, $x$ will be the maximal neighbour of the $k+1$ vertices $y_i,\dots,y_{i+k}$ of $Y$.

So if we want to *avoid* that, then we need to find an ordering of the vertices of $Y$ such that as we write down their neighbours, there is never a string of $p-1$ consecutive vertices of $Y$ without a new neighbour appearing. Going back to our specific bipartite graph, that tells us that we would like to find an ordering of the HAP products such that if we look at the unions of initial segments of the ordering, there is never a string of $p-1$ of these unions without a new element appearing.

Can that be done? It is crucial that the common differences should be the same, or there will be too many HAP products and the answer is trivially no. But if the common differences have to be the same, then the number of HAP products is roughly $\sum_{d=1}^N(N/d)^2$ and is therefore at most $CN^2$ for some absolute constant $C$. So the existence of such an ordering is not ruled out on trivial grounds. So far I have established that if such an ordering exists, then it will sometimes be necessary for a HAP product with common difference $d$ to come earlier than a HAP product with common difference $d'$ even when $d'$ is a non-trivial multiple of $d$. That’s worrying — in general, it seems much better to put smaller sets earlier in the ordering — but it doesn’t by itself rule out the existence of the ordering, especially as the problematic places (that is, large rectangles full of points that all have coordinates with non-trivial common factors) appear to be quite rare, and the rectangles are small compared with how far out they are from the origin.

Here is the kind of thing that we might try to do. Suppose $R$ is one of these troublesome rectangles. What worries us is that when we come to HAP products with small common differences (I’m thinking about common differences of 1 as I write this) and maximal points (that is, top right-hand corners) in $R$, we may find that we have already chosen lots of other HAP products with maximal points in $R$, so new points will not easily appear. To counter this, we could simply identify the HAP products with top right-hand corners in factor-rich rectangles and label them “troublesome”. Then we could do a preliminary ordering of the non-troublesome HAP products and insert the troublesome ones at various points, in such a way that each troublesome product appears only after all its points have already appeared (so it won’t mess up anything that has gone on earlier, but also won’t have a new element) and also in such a way that the troublesome products are reasonably spread out, so that the longest interval of sets that don’t contribute a new point is not much longer after the insertions than it was before.

September 7, 2012 at 3:27 pm |

I’ve just noticed that my argument that the ordering of coprime pairs doesn’t exist was wrong. So I’m going to go back to that question, but think about it in its dual formulation. That is, I’d like to find an ordering of the rectangles $\{1,\dots,r\}\times\{1,\dots,s\}$ such that when you take partial unions you add at least one new point that’s a coprime pair for every $p-1$ new rectangles that you include in the union.

Let’s suppose that we’ve chosen an initial segment of the ordering, and let’s suppose that we’ve done it in such a way that a rectangle is never chosen after another rectangle that contains it. Then the union will be a down-set (that is, a set of points such that if $(x,y)$ belongs to the set and $x'\leq x$ and $y'\leq y$, then $(x',y')$ belongs to the set), and if we want the next rectangle we pick to contribute exactly one new point, namely its top right-hand corner, then that corner must be a minimal element of the complement of the union.

One way we could choose our ordering would result from time to time in the union itself being a rectangle. If we did that, then we would have very little choice about which points to pick next, since the complement would have only two minimal elements. But if we’re a bit smarter, we can aim for something more like a triangle — say, the set of all $(x,y)$ such that $xy\leq n$ — in which case the complement will have lots of minimal elements.

Here’s another example of a tricky situation that could arise. Suppose that we decide to fill up diagonal by diagonal. Things will go OK for a while, but at some point we will try to fill up a diagonal such as $\{(x,y):x+y=n\}$ for some large $n$. If $x+y=n$, then $x$ and $y$ are coprime if and only if $x$ has no prime factor in common with $n$. But the proportion of $x\leq n$ that satisfy this condition is $\phi(n)/n$, which we can make as small as we like by giving $n$ enough small prime factors, so there will be some diagonals that contain very few coprime pairs. And we can also have several consecutive diagonals with very few coprime pairs: what it takes is several consecutive numbers each of which has many small prime factors. This happens, by the Chinese remainder theorem and the fact that the sum of the reciprocals of the primes diverges. We just have to pick several disjoint sets of primes all of which have large sums of reciprocals and then pick an integer $n$ such that $n$ is a multiple of all the primes in the first set, $n+1$ is a multiple of all the primes in the second set, and so on.
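The construction in the last two sentences can be carried out explicitly on a small scale (a sketch; the two batches of primes are arbitrary):

```python
from math import prod

# CRT construction: find n with n divisible by one batch of primes and
# n + 1 divisible by a disjoint batch, so that two consecutive
# anti-diagonals are both poor in coprime pairs.
batch1, batch2 = [2, 3, 5], [7, 11, 13]
m1, m2 = prod(batch1), prod(batch2)        # 30 and 1001, coprime moduli

# Solve n = 0 (mod m1), n = -1 (mod m2) by brute force over one period;
# the Chinese remainder theorem guarantees a solution exists.
n = next(n for n in range(m1 * m2) if n % m1 == 0 and (n + 1) % m2 == 0)

phi_ratio = prod(p - 1 for p in batch1) / m1   # phi(m1)/m1 = 4/15
print(n, phi_ratio)
```

Larger batches of primes with large reciprocal sums drive the corresponding $\phi/n$ ratios down further, at the cost of making $n$ enormous.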

But $n$ has to be very large for us to be able to do this, and there is absolutely no obligation to fill up diagonal by diagonal. If you think of the boundary of the down-set you have filled up so far as a kind of curve that is gradually moving away from the origin, then you can make sure that when a dangerous set of diagonals comes along, the curve doesn’t cross them in too parallel a way.

Of course, there are plenty of other constraints of a similar kind, and the difficulty is trying to work out how to satisfy all of them at the same time. It’s a tough one because we can’t hope to use some nice description of the set of coprime pairs: I don’t see any alternative to identifying a property that this set has that is sufficient for the expanding curve to be able to pick up points in the set on a regular basis.

This comment is getting a bit long, so I’ll move to a new one.

September 7, 2012 at 9:31 pm

I’m not sure what got into my head there — the argument I said was wrong was right. So quite a bit of what I’ve written in response to its supposed wrongness is not all that relevant to anything.

September 7, 2012 at 3:42 pm |

Again it feels helpful to think about an abstract version of the problem. Because of the constraint I’m putting on the ordering, the corner $(r,s)$ is always the unique new element of the union contributed by the rectangle $\{1,\dots,r\}\times\{1,\dots,s\}$. So what I’m looking for is a total ordering of the points that extends the obvious partial ordering and has the property that any $p-1$ consecutive points in the total ordering include a coprime pair. In the abstract, I can ask for conditions that allow me to extend a partial ordering to a total ordering in such a way that any $k$ consecutive elements in the total ordering (where $k$ is independent of $N$) contain an element from some prescribed set. Is there a nice sufficient condition for this?

September 7, 2012 at 5:08 pm |

To answer that last question, it seems a good idea to try to find some trivial *necessary* conditions for such a total ordering to exist. If the partial order doesn’t make any comparisons at all, then a necessary and sufficient condition is obviously that the prescribed set has size at least $n/k$ (give or take integer parts). If the partial order is non-trivial, then we’ll need some kind of relationship between the set and the partial order. But what?

Since I’m looking for a necessary condition, I should try to think of a fairly general situation where it is *not* possible to find a total ordering with the desired property. One situation would be if you can find points $x$ and $y$ such that at least $m$ points are obliged to lie between $x$ and $y$, and fewer than $m/k$ of the points that are allowed to lie between $x$ and $y$ belong to the prescribed set.

This is a rather strong-seeming condition. A point $z$ is obliged to lie between $x$ and $y$ if $x<z<y$ in the partial order, whereas $z$ is allowed to lie between $x$ and $y$ if neither $z<x$ nor $y<z$ holds. Typically there will be many more points of the second kind than the first.

Thus, this observation gives rise to only a very weak necessary condition for the ordering to exist, which makes it rather unlikely to be sufficient — though of course there are some beautiful combinatorial theorems where apparently weak necessary conditions do turn out to be sufficient.

September 7, 2012 at 5:10 pm

A trivial *sufficient* condition is that there is a partition of the poset into antichains such that if then no point in is greater than any point in and such that at least points in belong to the prescribed set. That is perhaps the natural generalization of the comment about the trivial poset.

September 7, 2012 at 5:54 pm

I also wanted to try to find an algorithm that would fail only under certain circumstances that one could hope didn’t happen. Let be the prescribed set. The rough idea would be as follows. We pick the total ordering element by element. For each let be the complement of . Then we try to pick the points in such a way that stays as large as possible.

It’s not obvious that that is the right quantity to preserve. The reason I go for it is that I think of the process as “saving points of for when the going gets tough”. If you find yourself approaching a region where lots of points don’t belong to , you want to have a number of points of waiting to be chosen, so that you can intersperse them amongst the points not in as you continue to choose your sequence.

If you have chosen a point of recently (by which I mean more recently than steps ago) and have a choice between picking a point in and picking a point not in , could there ever be a disadvantage in picking the point not in ? I don’t see how there could, but I haven’t quite seen a formal argument for that. If there is no disadvantage, then we can assume that the algorithm has a kind of reverse-greedy property: you pick points of only if either that’s all you’ve got in front of you or not doing so leads to a sequence of consecutive points not in .
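Here is a minimal executable sketch of that reverse-greedy rule, under my own simplifying assumptions (the partial order is given as a set of pairs, ties are broken by list order, and “recently” means within the last k − 1 steps):

```python
def reverse_greedy(elems, less, S, k):
    """Sketch of the 'save S for when the going gets tough' rule:
    always pick a minimal available element, avoiding S unless a run
    of k - 1 consecutive non-S picks would otherwise be completed."""
    below = {x: {a for a, b in less if b == x} for x in elems}
    placed, order, since_S = set(), [], 0
    while len(order) < len(elems):
        avail = [x for x in elems if x not in placed and below[x] <= placed]
        non_S = [x for x in avail if x not in S]
        in_S = [x for x in avail if x in S]
        if non_S and (since_S < k - 1 or not in_S):
            x = non_S[0]          # safe to postpone spending a point of S
        else:
            x = in_S[0] if in_S else non_S[0]
        order.append(x)
        placed.add(x)
        since_S = 0 if x in S else since_S + 1
    return order
```

On a chain the rule just follows the chain, but with incomparable elements it visibly hoards the prescribed points: with no order relations at all it picks the non-S points it safely can before spending a point of S.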

September 7, 2012 at 6:03 pm

I think I may have made a mistake earlier. Suppose I have a total ordering of the HAP products in such a way that a new element appears at least every steps. If I order the elements according to when they first appear in one of the HAP products, is it really true that each element can be a maximum of at most of the HAP products?

Unfortunately, it doesn’t follow at all. I need to think about this.

September 7, 2012 at 9:47 pm

Let me go back to the abstract question about bipartite graphs. Suppose you have a bipartite graph with vertex sets and and a total ordering on such that for every there are at most vertices such that is the maximal neighbour of . I’d like to characterize this condition in a more -focused way. That is, I’d like conditions on the neighbourhoods of the vertices in that are necessary and sufficient for such an ordering to exist.

A sufficient condition is that can be partitioned into sets of size at most such that no neighbourhood of any vertex in is contained in the union of the neighbourhoods of the vertices in . If such a partition exists, then let be the set of vertices that are neighbours of a vertex in but not of any vertex in for any . If we order the vertices in in such a way that if then every vertex in precedes every vertex in , then a vertex in can only be a maximal neighbour of if . (Proof: it isn’t a neighbour at all if for some , and if then by assumption has a neighbour that is not joined to any of , so it is greater than all vertices in in the ordering.) Since the have size at most , we are done.
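This construction is easy to run on toy graphs. In the sketch below (my own encoding: `nbrs[y]` is the neighbourhood in X of a vertex y, and `parts` is an assumed partition Y_1, Y_2, … satisfying the containment condition), the ordering of X is built exactly as described and the maximal-neighbour counts are then tallied:

```python
def order_from_partition(nbrs, parts):
    """Order X so that the vertices first covered by part Y_j come
    before those first covered by Y_{j+1}."""
    seen, order = set(), []
    for part in parts:
        new = sorted(set().union(*(nbrs[y] for y in part)) - seen)
        order.extend(new)
        seen |= set(new)
    return order

def max_neighbour_counts(nbrs, order):
    """For each x, count the vertices y whose maximal neighbour is x."""
    pos = {x: i for i, x in enumerate(order)}
    counts = {}
    for y, N in nbrs.items():
        top = max(N, key=pos.get)
        counts[top] = counts.get(top, 0) + 1
    return counts

# toy graph: Y = {a, b, c}, X = {1, 2, 3, 4}, parts of size 1
nbrs = {'a': {1, 2}, 'b': {2, 3}, 'c': {3, 4}}
order = order_from_partition(nbrs, [['a'], ['b'], ['c']])
print(order, max_neighbour_counts(nbrs, order))  # → [1, 2, 3, 4] {2: 1, 3: 1, 4: 1}
```

Here each part has size 1 and the containment condition holds, so no x is the maximal neighbour of more than one y, matching the proof.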

This condition is also trivially necessary, since if an ordering satisfies the original condition, then we can define to be the set of all with maximal element and the sets satisfy this condition.

In the case of HAP products, we want to partition the set of all such products into sets of size at most in such a way that no in is contained in the union of all the sets in .

September 9, 2012 at 10:06 pm

I feel like returning to EDP itself, and the diagonal decomposition approach where one is allowed to use products of HAPs that may have different common differences (since that would prove EDP and it’s not clear that the result is even true if you insist on the same common difference).

In general, it seems that the reason it is difficult to come up with a counterexample to EDP is that eventually you start facing too many constraints all at once. That happens because some numbers are contained in lots of HAPs. It occurs to me that the reason we have trouble proving anything is that we are trying to find decompositions in a place where they don’t exist, and what we should be doing is looking at some long interval in which every number has lots of small factors. Exactly what that means is not quite clear, and one might perhaps loosen it to almost all numbers having lots of small factors, but let me try to give a clearer idea.

An observation we made a long time ago was that instead of looking at HAPs we could instead look at shifted HAPs. The way it would work is that if is a prime power then we choose an integer and let be the residue class of all integers congruent to mod , making sure that . Then for general we let be the intersection of the sets . The reason we can do this is that if we manage to prove a large discrepancy for these shifted HAPs, then we can use the Chinese remainder theorem to find an integer such that is divisible by if and only if . In other words, “near the HAPs look like shifted HAPs”.
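To make the Chinese-remainder step concrete, here is a small sketch (the particular prime powers and shifts are invented for illustration): choose a shift for each prime power, one prime power per prime so the moduli are coprime, and solve for an integer N that lands in all the shifted classes at once.

```python
from math import prod

def crt(residues):
    """Solve N ≡ r (mod q) simultaneously for each (q, r) in `residues`;
    the moduli must be pairwise coprime (e.g. powers of distinct primes)."""
    M = prod(residues)
    N = 0
    for q, r in residues.items():
        Mq = M // q
        N += r * Mq * pow(Mq, -1, q)  # pow(·, -1, q) is the modular inverse
    return N % M

shifts = {4: 1, 3: 2, 5: 3}  # invented shifts b_q for the prime powers 4, 3, 5
N = crt({q: (-b) % q for q, b in shifts.items()})
print(N)  # → 7: indeed 7+1 is divisible by 4, 7+2 by 3, and 7+3 by 5
```

So near this N the HAPs really do look like the chosen shifted HAPs, in the sense that N + a is divisible by q exactly when a lies in the shifted class mod q.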

It occurs to me now that if we do this then we may find it easier to find a decomposition, because we won’t be near the incredibly unusual number 0, which is a multiple of everything. If we shift all HAPs randomly (subject to the constraints), then a typical number will belong to for lots of small . The sort of reason this might be useful is that if we take lots of products , then we can hope to cover the pairs much more evenly than if we use HAPs and have to say things like that if is a prime then it is contained in only two HAPs.

September 12, 2012 at 2:06 pm

I should add that it’s a point that I too missed recently (or had perhaps forgotten about) and that Gil put me right on. For the complex version of the EDP you can’t prove anything unless every divides the common difference of one of your HAPs, since if doesn’t then you can rotate through the th roots of unity. In the real case, you can’t use a periodic example if for every you have HAPs with common difference , so it’s less clear what happens, but since the promising approaches (apart from the modular conjecture) would yield the complex case as well, it looks difficult to prove anything without a pretty rich set of HAPs.

September 12, 2012 at 12:17 pm

Ah, good point.

September 12, 2012 at 7:10 am

Thinking in terms of these shifted HAPs seems to be easier if we restrict ourselves to HAPs with square-free common difference, since then we just need to choose an arbitrary residue class modulo for each . I don’t remember whether we ever thought much about this strengthening of EDP — it isn’t quite the same as Gil’s ‘square-free EDP’, where the sequence is set to zero at non-square-free numbers — or whether we rejected it as false.

September 12, 2012 at 9:16 am

The sequence 1,1,-1,-1,1,1,-1,-1,… is a counterexample for square-free EDP, so unfortunately we can’t make that simplification.

September 29, 2012 at 5:19 am

Actually Tim, I find your example quite deep. It is not at all obvious that what makes competition or school problems difficult is sometimes the same as what makes research problems difficult: misleading hints or context.

In the case of competition problems, after all, there is the psychological aspect of a “game against the problem poser” (e.g. is he trying to trap students?).

So your example is thought-provoking.

Thanks.

September 29, 2012 at 5:25 am

Well, I phrased my comment poorly, but I hope useful meaning can still be extracted from it. Anyway, thanks for the nice post.