DHJ — the triangle-removal approach

The post, A combinatorial approach to density Hales-Jewett, is about one specific idea for coming up with a new proof for the density Hales-Jewett theorem in the case of an alphabet of size 3. (That is, one is looking for a combinatorial line in a dense subset of \null [3]^n.) In brief, the idea is to imitate a well-known proof that uses the so-called triangle-removal lemma to prove that a dense subset of \null [n]^2 contains three points of the form (x,y), (x,y+d) and (x+d,y). Such configurations are sometimes called corners, and we have been referring to the theorem that a dense subset of \null [n]^2 contains a corner as the corners theorem.

The purpose of this post is to summarize those parts of the ensuing discussion that are most directly related to this initial proposal. Two other posts, one hosted by Terence Tao on his blog, and the other here, take up alternative approaches that emerged during the discussion. (Terry’s one is about more combinatorial approaches, and the one here will be about obstructions to uniformity and density-increment strategies.) I would be very happy to add to this summary if anyone thinks there are important omissions (which there could well be — I have written it fast and from my own perspective). If you think there are, then either comment about it or send me an email.

Two suggestions, made by Terry in comment number 4 (the second originating with his student Le Thai Hoang), led to two of the main threads of the discussion relating to the triangle-removal approach. One was that since Sperner’s theorem is the density Hales-Jewett theorem for alphabets of size 2, and since there is no obvious way of generalizing the existing proofs of Sperner’s theorem to obtain the density Hales-Jewett theorem, it would make sense to look for other proofs of Sperner’s theorem. The other was that the existing methods of tackling density theorems tend to prove not just that you get one configuration of the desired type but that you get a positive proportion of all the configurations in the whole set. One can usually obtain a seemingly stronger result of this type by an easy averaging argument (a prototype of which was proved by Varnavides in the 1950s), but in the case of the density Hales-Jewett theorem the stronger statement is false: one can easily come up with dense subsets of \null [3]^n with far fewer than 4^n combinatorial lines (roughly the number that you get in \null [3]^n itself).

The Varnavides thread.

This led to a search for a suitable Varnavides-like statement, and that search seems to have been successful — though we have not checked this carefully. (See comments 61-64 and 66.) The basic idea is to restrict attention to the set T_m of all sequences x such that the numbers of 1s, 2s and 3s in x are all within m of n/3, where m is a large integer that is much smaller than n. An easy averaging argument shows that if you can prove that a dense subset of T_m contains a combinatorial line, then you have the density Hales-Jewett theorem. (It seems that one can take m to be anything from a very large constant that depends on the density only, to an integer that is a small multiple of \sqrt n.) The reason this helps is that it avoids the problem one has in \null [3]^n that the points in a random combinatorial line are very atypical: with high probability they contain about n/2 of one value and about n/4 of the other two values.
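That last claim about atypicality is easy to check empirically. Here is a quick Monte Carlo sketch (my own toy illustration, not part of the discussion): sample random line templates from \{1,2,3,*\}^n and count symbol frequencies in the point obtained by setting the wildcards to 1.

```python
import random

def random_line_template(n, rng):
    """Sample a combinatorial line of [3]^n uniformly: choose each
    coordinate from {1, 2, 3, '*'}, insisting on at least one wildcard."""
    while True:
        t = [rng.choice((1, 2, 3, '*')) for _ in range(n)]
        if '*' in t:
            return t

def symbol_counts(template, star_value):
    """Counts of 1s, 2s and 3s in the point of the line obtained by
    substituting star_value for every wildcard."""
    point = [star_value if c == '*' else c for c in template]
    return [point.count(v) for v in (1, 2, 3)]

rng = random.Random(0)
n, trials = 2000, 100
totals = [0, 0, 0]
for _ in range(trials):
    counts = symbol_counts(random_line_template(n, rng), 1)
    totals = [t + c for t, c in zip(totals, counts)]
avgs = [t / trials for t in totals]

# The substituted value appears about n/2 times, the other two about
# n/4 times each, so a point of a random line is far from T_m.
print(avgs)
```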

This seems to have cleared up the Varnavides problem, so it is something to hold in reserve and use if we get to the stage where we have a definite approach to try out on the problem. For now, one can either just pretend that the Varnavides problem doesn’t exist and think about \null [3]^n, or one can simply bear in mind that one is thinking about sequences with roughly equal numbers of 1s, 2s and 3s and combinatorial lines with small variable sets, or sets of wildcards as some of us are calling them.

The Sperner thread.

Some definite progress has been made on trying to find alternative proofs of Sperner’s theorem. One approach was initiated in comments 21 and 22. The idea here was to try to find a statement that would have the same relationship to Sperner’s theorem as the hoped-for (but not yet even precisely formulated) triangle-removal lemma had to density Hales-Jewett. An argument was given in comments 21 and 22 that the statement, if it existed, should have the following form.

Let \mathcal{A} and \mathcal{B} be collections of subsets of \null [n] such that there are very few pairs (U,V) with U\in\mathcal{A}, V\in\mathcal{B} and U\cap V=\emptyset. Then it is possible to remove a small number of sets from \mathcal{A} and \mathcal{B} and end up with no such pairs.

We have been calling statements like this pair-removal lemmas, even though we have not yet proved any useful ones. A statement like this implies a weak form of Sperner’s theorem as follows. Suppose you have a dense set system \mathcal{A} that contains no pair of distinct sets (U,W) with U\subset W. Then let \mathcal{B} consist of all complements of sets in \mathcal{A}. If we can find U\in\mathcal{A} and V\in\mathcal{B} with U\cap V=\emptyset, then U\subset V^c. Since V^c\in\mathcal{A}, this is a contradiction unless V^c=U. Therefore, the number of pairs (U,V) with U\in\mathcal{A}, V\in\mathcal{B} and U\cap V=\emptyset is very small. The pair-removal statement then implies that one can remove small subsets from \mathcal{A} and \mathcal{B} in order to end up with no such pairs. But that contradicts the fact that \mathcal{A} is dense and for every U\in\mathcal{A} we have the pair (U,U^c).
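The bookkeeping in this reduction can be verified by brute force on a small cube. The following sketch (toy parameters, purely illustrative) takes \mathcal{A} to be the middle layer, which is an antichain, and confirms that the only disjoint pairs are the |\mathcal{A}| pairs (U,U^c).

```python
from itertools import combinations

n = 8
full = frozenset(range(n))

# An antichain: the middle layer of the cube (Sperner's extremal example).
A = {frozenset(c) for c in combinations(range(n), n // 2)}
B = {full - U for U in A}        # complements of the members of A

# Since A contains no pair U strictly contained in W, a disjoint pair
# (U, V) with U in A and V in B forces V = U^c, so there are exactly
# |A| disjoint pairs.
disjoint_pairs = [(U, V) for U in A for V in B if not (U & V)]
assert len(disjoint_pairs) == len(A)
print(len(A), len(disjoint_pairs))
```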

After some difficulty in formulating a precise pair-removal lemma that wasn’t either trivial or false, we have arrived at the following test problem. (It’s not actually the pair-removal lemma we’re going to want, but it is cleaner, looks both true and non-trivial, and seems to involve all the right difficulties.)

Tentative conjecture. Let n=2m+k, where k tends slowly to infinity with n. Let \mathcal{A} and \mathcal{B} be subsets of \binom{[n]}{m} such that the number of pairs (U,V) with U\in\mathcal{A}, V\in\mathcal{B} and U\cap V=\emptyset is o(1)\binom{n}{m,m,k}. Then one can remove o(1)\binom{n}{m} sets from each of \mathcal{A} and \mathcal{B} and end up with no such pairs.

Some promising suggestions have been made for how to go about proving this conjecture. Jozsef (in comment 124) has drawn attention to a theorem of Bollobás that implies that if each U\in\mathcal{A} is disjoint from precisely one V\in\mathcal{B}, then \mathcal{A} and \mathcal{B} must both be small. And Ryan (in comments 82, 145, 150 and 152) has drawn attention to existing papers on very closely related subjects. Very recently (at the time of writing it is in the last two or three hours) Jozsef has given us what appears to be a proof of a pair-removal lemma. Assuming there aren’t any irritating surprises, such as it turning out not to be quite the statement we really want, the obvious thing to do here seems to be to play around with Jozsef’s argument and see whether it can be generalized from 01-sequences to 012-sequences.

A very quick word about what to expect of such a generalization. It would be “completing the square”, where vertex 00 is the trivial pair-removal lemma, vertex 01 is the pair-removal lemma in Kneser graphs, vertex 10 is the normal triangle-removal lemma (as described in the background-information post), and we are looking for vertex 11. Since getting from 00 to 10 is by no means a triviality, getting from 10 to 11 is unlikely to be a triviality either. But now we are in a stronger position than we were initially, because to get to 11 there are two potential routes. If we are really lucky, then we’ve got all the ingredients we need and just have to decide how to mix them together.

Other approaches to Sperner.

The above approach to re-proving Sperner has a certain naturalness to it, at least in the context of the main problem, but it is not the only one to have been discussed. If generalizing it runs into difficulties, then we have other ideas to explore that have definitely not yet been explored as far as they could have. They begin with Ryan’s comment 57 (in which he was responding to a related comment of Boris). This introduces the idea of what Boris calls “pushing shadows around”, which is apparently what Sperner himself did.

Other ideas that are still to be explored.

If we are going to follow the triangle-removal proof of the corners theorem quite closely, then at some point we will need to find an analogue of Szemerédi’s regularity lemma for subgraphs of the graph where you join two sets if they are disjoint. There are some obvious problems with doing this (discussed in the initial post in comment V, and comments EE to LL), and a very speculative suggestion for how a proof might look in comment 110. There would also be a need for a counting lemma: an offer to search for such a lemma in a systematic way has not yet been taken up.

[Remark added 7/2/09. It’s important to understand that this post is how the situation seemed to be when it was written yesterday. If you read the comments you will see that some of the assertions that looked plausible (such as the precise form of the tentative conjecture) have now been discovered to be wrong.]

71 Responses to “DHJ — the triangle-removal approach”

  1. gowers Says:

    300. One of the main reasons for this comment is to establish the numbering. Comments on Terry’s post will be numbered 200-299 (with a convention that from now on if a post receives 100 comments then it must be summarized and restarted), comments on this post will be 300-399, and comments on the obstructions-to-uniformity post, when it is written, will be 400-499.

    To get the ball rolling for this post, I’d like to write out Jozsef’s proof of pair removal in full detail, mainly for my own benefit so that I understand it properly, but also because if, as I hope, a major subproject for this thread will be to see if we can generalize Jozsef’s argument, it will be convenient to have that argument here for easy reference.

    The first thing to note is that the result Jozsef proves is not quite as strong as the pair-removal lemma we originally asked for, because what he shows is not that you can remove a small number of sets and end up with no disjoint pairs, but rather that you can remove a small number of sets and end up having removed almost all the disjoint pairs you started with. This kind of weakening of a removal lemma is good enough for many applications. For instance, if we applied it to the corners problem we would find that we could remove a small number of edges and get rid of almost all the degenerate triangles, which is just as much of a contradiction as the usual one. (In fact, even if we could show that we could get rid of one percent of the degenerate triangles we would have a contradiction.)

    I’ll get on to the actual proof in the next comment.

  2. gowers Says:

    301. Pair removal in Kneser graphs.

    The Kneser graph K_{n,m} is the graph whose vertices are all subsets of \null [n] of size m, with two sets U and V joined if and only if they are disjoint. We should think of n as being 2m+k for some fairly small but nonconstant k, but for now I will avoid trying to specify them.

    Let \mathcal{A} and \mathcal{B} be collections of sets of size m (i.e., subsets of the vertex set of K_{n,m}). We can form a bipartite graph by joining U\in\mathcal{A} to V\in\mathcal{B} if and only if U\cap V=\emptyset: this is not quite a bipartite subgraph of K_{n,m} (since \mathcal{A} and \mathcal{B} are not required to be disjoint), but it is an induced subgraph of what one might call the bipartite version of K_{n,m} (where each vertex set contains all m-element sets and you join two such sets if they are disjoint). From now on I will blur the distinction between this bipartite graph and K_{n,m} itself.

    The maximum number of edges that could possibly join \mathcal{A} to \mathcal{B} is the number of edges in K_{n,m} (bipartite version), which is \binom nm\binom{m+k}m=\binom n{m,m,k}. So we shall assume that the total number of edges is at most c\binom n{m,m,k} for some very small constant c>0. Our aim is to remove at most a\binom nm vertices and get rid of at least half these edges (or any other fixed fraction you might care to go for), where a tends to zero as c tends to zero.
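    The edge count above can be checked numerically for small parameters; the following sketch (my own, purely illustrative) confirms both the product formula and the multinomial identity by brute force.

```python
from itertools import combinations
from math import comb, factorial

def multinomial(n, *parts):
    """n! / (parts[0]! * parts[1]! * ...), with sum(parts) == n."""
    assert sum(parts) == n
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

m, k = 3, 2
n = 2 * m + k

# Each m-set U is joined to the m-subsets of its (m+k)-element complement.
edges_formula = comb(n, m) * comb(m + k, m)
assert edges_formula == multinomial(n, m, m, k)

# Brute-force count of ordered disjoint pairs (the bipartite version).
edges_brute = sum(1 for U in combinations(range(n), m)
                    for V in combinations(range(n), m)
                    if not set(U) & set(V))
assert edges_brute == edges_formula
print(edges_formula)
```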

    I’ll discuss how Jozsef does this in the next comment.

  3. gowers Says:

    302. Pair removal in Kneser graphs.

    What Jozsef ends up showing is that there is a very simple way of deciding which vertices to remove: you just get rid of all vertices of large degree. It turns out that if you do this then you either get rid of a significant fraction of all the edges, or you had almost no vertices to start with (in which case you can just remove all of them).

    Now all the time we are hoping to remove a number of edges that is large as a fraction of the total number of edges, so “large degree” means “significantly larger than the average degree”. Let us suppose, then, that the average degree in our graph is d. (Here I am averaging over all vertices in K_{n,m}, including those that are not in \mathcal{A} or \mathcal{B}.) It follows from Markov’s inequality that for any C the number of vertices of degree at least Cd is at most 2C^{-1}\binom nm. So we will need to choose C so that 2C^{-1} tends to zero as c tends to zero.

    Suppose that removing these vertices doesn’t get rid of half the edges. That means that after the removal we have a bipartite induced subgraph of K_{n,m} such that the average degree is at least d/2 and the maximal degree is at most Cd. In the next comment I’ll explain what Jozsef does in this case.

  4. gowers Says:

    303. Pair removal in Kneser graphs.

    What he does is to use what he calls “the permutation trick of Lubell” to get an upper bound on the number of vertices that there can be in a graph with average degree at least d/2 and maximal degree at most Cd. Take a random permutation \pi of \null [n]. Given two disjoint sets U and V, we shall say that \pi separates U and V if every element of \pi(U) is less than every element of \pi(V) or vice versa. What is the expected number of separated pairs with U\in\mathcal{A} and V\in\mathcal{B}?

    Let’s suppose that the total number of vertices in the bipartite graph is N. Then the total number of disjoint pairs is at least (d/2)N. Each disjoint pair (U,V) has a probability of 2\binom{2m}m^{-1} of being separated (since there are \binom{2m}m ways of ordering U\cup V of which precisely two separate them), so the expected number of separated pairs is at least dN\binom{2m}m^{-1}.
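    The 2\binom{2m}m^{-1} separation probability can be verified exhaustively for tiny parameters. This sketch (toy values, purely illustrative) enumerates all permutations of \null [n] for one fixed disjoint pair.

```python
from itertools import permutations
from math import comb, factorial
from fractions import Fraction

n, m = 6, 2
U, V = {0, 1}, {2, 3}            # one fixed disjoint pair of m-sets

def separates(pi, U, V):
    """True if pi puts every element of U before every element of V,
    or vice versa."""
    pU = {pi[i] for i in U}
    pV = {pi[i] for i in V}
    return max(pU) < min(pV) or max(pV) < min(pU)

hits = sum(separates(pi, U, V) for pi in permutations(range(n)))
prob = Fraction(hits, factorial(n))
assert prob == Fraction(2, comb(2 * m, m))    # 2/6 = 1/3 here
print(prob)
```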

    In particular, there exists a permutation \pi that gives rise to at least dN\binom{2m}m^{-1} separated pairs. But how many separated pairs can there be for a single permutation? Well, let’s suppose first that that permutation is the identity (without loss of generality) and that we have separated pairs (U_1,V_1),\dots,(U_r,V_r), ordered in such a way that every element of U_i is less than every element of V_i. The crucial observation here is that if U_1 is a set with the smallest maximal element out of any of the U_i (again without loss of generality), then (U_1,V_i) is a separated pair for all V_i. Therefore, there can be at most Cd distinct sets V_i (since we know that U_1 has degree at most Cd). Similarly, there can be at most Cd distinct U_i, so the total number of separated pairs is at most C^2d^2.

    It follows that dN\binom{2m}m^{-1}\leq C^2d^2, and hence that N\leq C^2d\binom{2m}m=C^2d\binom{n-k}m. And having got to this point, I realize that I don’t understand why the right-hand side is small. Does anyone see it, or do we wait till Jozsef wakes up? (I also have trouble understanding the significance of the final inequality in his comment 155).

  5. gowers Says:

    304. Pair removal in Kneser graphs.

    Maybe the point is that the right-hand side doesn’t have to be small if all we know about d is that it’s a small multiple of \binom{m+k}m, but if we assume that d is much smaller (it’s possible that for applications to Sperner we may even be able to take d=1) and k is reasonably large, then the right-hand side is much smaller than \binom nm. So this would fit in with my comment 153, where I suggested that using a stronger hypothesis might be helpful to prove a pair-removal lemma.

  6. gowers Says:

    305. Triangle removal.

    Here is a possible toy problem to think about, as an initial contribution to the project of generalizing Jozsef’s proof. This time we let n=3m+k. Our basic objects will be pairs (U,V) of disjoint sets. We are given three collections \mathcal{A}, \mathcal{B} and \mathcal{C} of such pairs. A triangle is a triple of the form (U,V),(U,W),(V,W) with (U,V)\in\mathcal{A}, (U,W)\in\mathcal{B} and (V,W)\in\mathcal{C}. We would like to prove that if for each pair (U,V)\in\mathcal{A} there is at most one W such that (U,W)\in\mathcal{B} and (V,W)\in\mathcal{C}, and similarly for the other two ways round, then it is possible to remove a small number of elements from \mathcal{A}, \mathcal{B} and \mathcal{C} and end up removing a significant fraction of all the triangles.
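    To make the definitions concrete, here is a brute-force triangle counter (a toy instance of my own in which \mathcal{A}, \mathcal{B} and \mathcal{C} contain every disjoint pair, so every pairwise-disjoint triple is a triangle):

```python
from itertools import combinations

n, m = 7, 2
msets = [frozenset(c) for c in combinations(range(n), m)]
pairs = [(U, V) for U in msets for V in msets if not (U & V)]

# Toy collections: every ordered disjoint pair lies in all three families.
A = B = C = set(pairs)

def triangles(A, B, C):
    """Triples (U, V, W) with (U,V) in A, (U,W) in B and (V,W) in C."""
    return [(U, V, W)
            for (U, V) in A
            for W in msets
            if not (U & W) and not (V & W)
            and (U, W) in B and (V, W) in C]

# Here every pairwise-disjoint ordered triple counts:
# C(7,2) * C(5,2) * C(3,2) = 21 * 10 * 3 = 630.
print(len(triangles(A, B, C)))
```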

    There are no degenerate triangles here, so this wouldn’t do DHJ straight off, but I think it would be a good way of trying to understand whether Jozsef’s methods generalize. An alternative line of investigation would be to generalize Jozsef’s methods to a statement about several central layers of the cube rather than just one.

  7. gowers Says:

    306. Triangle removal.

    An optimistic guess about how it might go: we just routinely try to generalize Jozsef’s argument, and at a certain point a difficulty arises, which we turn out to be able to solve using the usual triangle-removal lemma.

  8. gowers Says:

    307. Triangle removal.

    At the very least, let’s try to do the routine generalization and see what happens. So without any serious thought about what is sensible and what is not, let us say that a permutation \pi separates the three sets U, V and W if every element of U comes before every element of V, which in turn comes before every element of W, or any one of the other five such statements you get by permuting the three sets. We’ll also say that \pi separates a triangle if it separates the three sets used to build it.

    Given any two disjoint sets U and V, let us define the \mathcal{A}-degree of (U,V) to be the number of W such that (U,V)\in\mathcal{A}, (U,W)\in\mathcal{B} and (V,W)\in\mathcal{C}. (In this case, we say that U,V,W form a triangle.) We have similar definitions for the other two possible degrees.

    Now let’s suppose that the average degree (over all three bipartite graphs) is d and the maximal degree is Cd. What can we say?

    Let’s think about the expected number of separated triangles. The total number of triangles is d\binom n{m,m,n-2m}=d\binom{3m+k}{m,m,m+k} (up to some absolute constant depending on whether one orders the vertices etc.), and the probability that \pi separates any given triangle is 6\binom{3m}{m,m,m}^{-1}. So the expected number is 6d\binom{3m}{m,m,m}^{-1}\binom {3m+k}{m,m,m+k}. Therefore, there must be some permutation \pi that separates at least this number of triangles.
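    The 6\binom{3m}{m,m,m}^{-1} separation probability can again be checked exhaustively for tiny parameters (an illustrative sketch of my own, taking k=0 so that the three sets partition \null [n]):

```python
from itertools import permutations
from math import factorial
from fractions import Fraction

m = 2
n = 3 * m                        # k = 0, so U, V, W partition [n]
U, V, W = {0, 1}, {2, 3}, {4, 5}

def separates_triangle(pi, sets_):
    """True if the images of the three sets under pi occupy three
    consecutive blocks, in any of the six possible orders."""
    images = sorted(({pi[i] for i in S} for S in sets_), key=min)
    return (max(images[0]) < min(images[1])
            and max(images[1]) < min(images[2]))

hits = sum(separates_triangle(pi, (U, V, W))
           for pi in permutations(range(n)))
prob = Fraction(hits, factorial(n))
trinomial = factorial(3 * m) // factorial(m) ** 3   # binom(3m; m,m,m)
assert prob == Fraction(6, trinomial)               # 6/90 = 1/15 here
print(prob)
```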

    Now let’s think about how many triangles a permutation can separate. Our assumption is that no pair of sets forms a triangle with more than Cd other sets. It’s at this point that, if our wildest dreams come true, we find that normal dense triangle removal is exactly what we need. But no reason to suppose that just yet. I’m going to continue in a new comment.

  9. gowers Says:

    308. Triangle removal.

    Let’s suppose that (U_1,V_1,W_1),\dots,(U_r,V_r,W_r) are separated triangles. We would like an upper bound on r. If we choose i such that the maximal element of V_i is minimized, we see that the number of distinct sets W_j can be at most D=Cd. Similarly, the number of distinct U_i can be at most D. But each pair (U_i,W_j) is allowed at most D sets V_h in the middle, so we seem to have at most D^3 triangles separated by \pi. This is slightly worrying as it came out a bit too simply for my liking.

    If it’s correct, it shows that 6d\binom{3m}{m,m,m}^{-1}\binom{3m+k}{m,m,m+k}\leq C^3d^3, and therefore that d^2\geq 6C^{-3}\binom{3m}{m,m,m}^{-1}\binom{3m+k}{m,m,m+k}, which is exponential in k. So for very small d, such as we might expect to have in DHJ, we get a contradiction.

    I’ve got to go now, so all I’ll say at this point is that this argument feels too simple to be correct, but I can’t see what’s wrong with it. (If it’s right, then the explanation must simply be that it is not what we need.)

  10. Ryan O'Donnell Says:

    309. Pair removal in Kneser graphs.

    Can someone help me with this dumb question?

    Suppose A = B are the family of sets not including the last element n. Then A and B have density about 1/2 within KN_{n, n/2-k/2}. (We’re thinking k(n) \to \infty, k(n)/n \to 0 here, right?) It seems that the fraction of Kneser graph edges which are in A \times B is about k/n, which is “negligible”. So we should be able to delete any small constant fraction of vertices from A = B and make it an intersecting family. But A is 100\% of the Kneser graph KN_{n-1, n/2-k/2}, and doesn’t the largest intersecting family inside such a Kneser graph have density only around 1/2?
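    The k/n fraction here is in fact exact: the proportion of disjoint ordered pairs of m-sets that avoid the last element works out to be precisely k/n. A small brute-force check (toy parameters, purely illustrative):

```python
from itertools import combinations
from fractions import Fraction

n, k = 10, 2
m = (n - k) // 2                     # m = n/2 - k/2
all_m_sets = [frozenset(c) for c in combinations(range(n), m)]

# A = B: the m-sets avoiding the last element of [n].
A = [S for S in all_m_sets if n - 1 not in S]

def disjoint_pairs(X, Y):
    return sum(1 for U in X for V in Y if not (U & V))

# Fraction of Kneser-graph edges lying inside A x B.
frac = Fraction(disjoint_pairs(A, A),
                disjoint_pairs(all_m_sets, all_m_sets))
assert frac == Fraction(k, n)
print(frac)
```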

  11. gowers Says:

    310. Pair removal in Kneser graphs.

    Ryan, all I can say is that I can’t see a flaw in your reasoning. If it’s correct, then it shows something rather interesting. I couldn’t get Jozsef’s argument to work unless I assumed that degrees were very small rather than just small, and your example suggests that it’s actually false when you merely assume that the degrees are a negligible fraction of the maximum possible. In your example, the degree of a vertex in A is still large, even if it’s proportionately small, so there’s still some hope that Jozsef’s argument is correct and adaptable to DHJ.

    Now I must stare at the argument that I produced in 308 and try to work out what’s going on.

  12. Ryan O'Donnell Says:

    311. I guess you can make the degree go down fairly rapidly compared to the density by requiring the last t coordinates to be {}0. (Density goes down like 2^{-t}, fraction of edges goes down like (k/n)^t.)

  13. Jason Dyer Says:

    312. Counting lemma

    This is idle wondering, but given a random (not quasi-random) tripartite graph, do we know the probability P of n triangles appearing given a density D? This seems like the sort of thing that ought to be just a textbook example.

  14. gowers Says:

    313. Triangle removal.

    I’m going to try to do two things at once. The first is to express myself more clearly when it comes to arguments such as the one in 308, and the second is to try to develop an argument that involves more than one slice, in the hope that degenerate triangles will come into play somehow.

    Let me begin by trying to define the nicest possible triangular grid of slices. I’ll define T_m to be the union of all \Gamma_{a,b,c} such that (a,b,c) is a triple of integers that belongs to the convex hull of (n/3+m,n/3-m,n/3), … actually, cancel this. It seems to be more natural, if one is going for the nicest possible and most symmetrical set of slices, to go for a hexagon. So let’s take all (a,b,c) that sum to n and that satisfy \max\{|a-n/3|,|b-n/3|,|c-n/3|\}\leq m. Here I’m imagining that n is a multiple of 3, not that it’s all that important. (This is a hexagon because there are six linear inequalities, and the symmetry of the situation makes it a regular hexagon.) Let’s write \Gamma for the union of all these \Gamma_{a,b,c}. We would like to prove that any dense subset A of \Gamma contains a combinatorial line.
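    For n a multiple of 3 the number of slices in this hexagon is the centred hexagonal number 3m^2+3m+1, which the following quick check (toy values, purely illustrative) confirms:

```python
n, m = 99, 4        # toy values: n a multiple of 3, m much smaller than n

triples = [(a, b, n - a - b)
           for a in range(n + 1)
           for b in range(n + 1 - a)
           if max(abs(a - n / 3),
                  abs(b - n / 3),
                  abs((n - a - b) - n / 3)) <= m]

# Six linear constraints cut a regular hexagon out of the triangular
# lattice a + b + c = n; it contains 3m^2 + 3m + 1 lattice triples.
assert len(triples) == 3 * m * m + 3 * m + 1
print(len(triples))
```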

    Now I’m going to do roughly the same thing as before. I’ll assume that there are no combinatorial lines in A. For each sequence x in A and each j\in\{1,2,3\}, let’s define the j-set of x to be \{i:x_i=j\}, and we’ll denote this by U_j(x). As ever, we define \mathcal{A} to be the set of all pairs (U_1(x),U_2(x)) with x\in A, \mathcal{B} to be the set of all (U_1(x),U_3(x)) and \mathcal{C} to be the set of all (U_2(x),U_3(x)) (again, always with x\in A).

    The assumption that there are no combinatorial lines tells us that there are no triples (U,V,W) with (U,V)\in\mathcal{A}, (U,W)\in\mathcal{B} and (V,W)\in\mathcal{C}, except in the degenerate case where U\cup V\cup W=[n]. In the next comment I’ll try to hit this with Jozsef’s permutation argument.

  15. Jason Dyer Says:

    (Meta-comment)

    Ryan, as with the brackets problem, you should toss a {} before the 0 there if you want to render it in LaTeX in WordPress.

  16. gowers Says:

    314. Triangle removal.

    So now let’s choose a random permutation and see what we can say about the number of separated triangles. To do this, my first task is to say how many actual triangles there are. And the answer is |A|, one degenerate triangle for each element of A and no others.

    Next, I’d like some idea of the probability that a random permutation \pi separates a given triangle. Let’s suppose that the point x of A that gives rise to the triangle belongs to \Gamma_{a,b,c}. Then the probability that \pi separates (U_1(x),U_2(x),U_3(x)) is 6\binom n{a,b,c}^{-1}. So the expected number of separated triangles is 6|A|\binom n{a,b,c}^{-1}.

    Now suppose I just choose an arbitrary permutation — without loss of generality the identity. How many separated triangles can there be? I think this may be where the problems arise, but let’s see. Suppose that (U_1,V_1,W_1),\dots,(U_s,V_s,W_s) are separated triangles. We know that for each (U_i,V_i) …

    I may come back to this, but I’ve now seen where I went wrong in 308, so I should think about that first, as it’s a simpler problem.

  17. gowers Says:

    315. Triangle removal: my mistake in 308.

    I started 308 with the claim that if you choose (U_i,V_i) so as to minimize the maximum element of V_i, then you could see immediately that there were at most D distinct W_j. The thinking behind this was that each W_j would form a triangle.

    But that was wrong: there is no reason for (U_i,W_j) to belong to \mathcal{B} or for (V_i,W_j) to belong to \mathcal{C}. So it’s back to the drawing board here. (But this, I should stress, is good news as I wanted there to be a complication to give triangle-removal a chance to creep in.)

    Here is a new proof strategy. First I’ll copy the first part of Jozsef’s argument to get to the stage where each vertex is contained in roughly the same number of triangles. (If I can’t, I’ll remove a few vertices and get rid of a significant fraction of the triangles.) Next, I’ll argue as above that the expected number of separated triangles is reasonably large. Then I’ll try to bound the number of separated triangles if no vertex is in too many triangles and no edge is in more than one triangle. The first step of that will be to pick … actually, I still don’t see it.

    I was going to say that one should pick the U_i with smallest maximum, but I don’t see what that gives us.

    What we can say is this. For each U there will be some system of sets (V_i,W_i) such that (U,V_i,W_i) forms a triangle. If in this set system we choose i such that the maximum of V_i is minimized, then we can deduce that there are not too many distinct W_j. Except that I don’t know why I even say this so confidently: what makes me think that (V_i,W_j) should lie in \mathcal{C}? OK I’m floundering around a bit here so I’d better stop. But I’ll leave these messy thoughts here in case anyone can get a clearer idea of what ought to be done. It feels as though it’s in promising territory somehow.

  18. gowers Says:

    316. Triangle removal.

    This is the kind of reason I find it promising. The graph where you join two sets if and only if one is completely to the left of the other is, under normal circumstances, much denser than the graph where you just join two sets if they are disjoint. So what Jozsef’s permutation argument is cleverly achieving seems to be to get us from a sparse context to a dense context. (I don’t yet have a fully precise formulation of this remark.)

    Now let’s suppose we have our separated triangles (U_i,V_i,W_i) above. Suppose also that we have managed to show somehow that many of the pairs (U_i,V_j) belong to \mathcal{A}, and so on. We know that there won’t be many triangles, so this will imply, by the usual triangle-removal lemma, that we can remove a small number of edges and end up with no triangles. Could there be an argument along these lines that allows us to prove density Hales-Jewett by hitting it permutation by permutation? We’re in that familiar state where there’s a small probability p of being close to a solution and a large probability 1-p of having written a load of nonsense.

  19. jozsef Says:

    317. Pair removal in Kneser graphs.

    Bad news – good news. Good morning! I hope that I didn’t derail the project with my last calculation yesterday – that the “removal lemma” works when m\sim n - \sqrt{n}. The main inequality is correct, and we see that it tells us something non-trivial if m is small compared to n. As I see now, the average degree, d, grows as m=o(n - \sqrt{n}). I think that this is the range where we have a removal lemma so far. This is a removal lemma we get by simple double-counting, so I’m afraid that we can’t expect too much out of it, but it supports Tim’s conjecture.

  20. gowers Says:

    318. Pair/triangle removal.

    Hi Jozsef. Two remarks. It seems to me (but I’d be reassured if I knew you agreed about this) that we have a removal lemma if the average degree is very small. And in interesting contexts it is. For example, if \mathcal{A} is a counterexample to Sperner’s theorem and we let \mathcal{B} consist of all complements of sets in \mathcal{A}, then each set in \mathcal{A} is joined just to its complement, so the average degree is 1.

    The second remark is that it’s not impossible that a simple double counting gives us something good. The reason is that when going from 2 to 3 in the dense case you go from trivial to interesting, so when going from 2 to 3 here, perhaps you go from simple to very interesting.

  21. Ryan O'Donnell Says:

    319. Sperner.

    Hi Jozsef, is that a typo? m needs to be less than n/2; otherwise the KN_{n,m} Kneser graph is empty.

    I’m not sure what scenario we should consider now… we have counterexamples to the Tentative Conjecture for k both constant and superconstant.

  22. jozsef Says:

    320 Pair removal lemma

    Now I feel a bit of a dummy; I wanted to check where the calculation went wrong yesterday in my last post, but I can’t see it. Tim, in 304 you asked why the right-hand side is small. By checking \binom{n}{m}/\binom{2m}{m} what I see – again – is that it should be around 2^{2\sqrt{n}}. I’m probably badly overlooking the same thing again …
    I’ll go and have a morning coffee.

  23. gowers Says:

    321. Sperner

    Ryan, you may be ahead of me here and already see why the suggestion I’ve been making is a bad one, but what about strengthening the hypothesis of the Tentative Conjecture from the number of disjoint pairs being o(1)\binom{n}{m,m,k} to each set belonging to at most one disjoint pair? (With this second hypothesis one tries to prove that the sets must be small.)

    Actually, isn’t that just Bollobás’s theorem? I think it is because if I take d=C=1 in 303 then I get the correct bound of \binom{2m}m. Is that the standard proof of Bollobás’s theorem?

  24. Jason Dyer Says:

    322. Pair removal in Kneser graphs.

    Ok, I am trying to understand Jozsef’s proof (I notice it uses the greedy algorithm, interesting) and I must be missing something elementary at 303:

    The crucial observation here is that if U_1 is a set with the smallest maximal element out of any of the U_i (again without loss of generality), then (U_1,V_i) is a separated pair for all V_i.

    Could someone expand on this a little more? I’m not seeing why.

  25. gowers Says:

    323. Pair removal in Kneser graphs.

    Jason, if U_1 has the smallest maximal element and (U_i,V_i) is separated, then the largest element of U_1 is at most the largest element of U_i, which is less than the smallest element of V_i. If this doesn’t explain it then I don’t understand what you’re not understanding.

  26. gowers Says:

    324. Triangle removal

    A quick technical question: I think it should be reasonably straightforward to answer. Suppose we have a dense set A in \null[3]^n. Suppose now that we take a random permutation \pi and just take from A the sequences that start with 1s, continue with 2s and finish with 3s (after you’ve used \pi to scramble the coordinates). And now suppose you form the tripartite graph from just those sequences. That is, you let \mathcal{A} consist of all pairs (U,V) that are equal to (U_1(x),U_2(x)) for some x\in A, and so on. We can think of \mathcal{A} as a bipartite graph, whose vertices are subsets of \null [n], where we join U to V if U finishes before V starts. What I’d like to know is whether this graph tends to be dense. I think the straightforward answer is no, so what I really mean is whether it can be made dense if you remove a smallish number of vertices and edges, or something like that. The motivation for this question is that I’d very much like to be able to define a dense tripartite graph with few triangles and see what happens when we apply the usual triangle-removal lemma.

  27. jozsef Says:

    325. Pair removals

    Re: 319, yes, there was a typo: I meant n/2 there, thanks. But I still don’t see anything wrong with my calculation; please help me: is it correct that if m=n/2-\sqrt{n} then d should be at least c2^{2\sqrt{n}}, or else we have a removal lemma? After this I will try to digest Tim’s triangle-removal technique.

  28. Ryan O'Donnell Says:

    326. Kneser.

    Hi Tim, in #321, if each set is involved in at most one Kneser edge, then I think the least number of sets you need to delete in order to eliminate all Kneser edges is within a factor of two of the number of sets (vertices), no?

  29. gowers Says:

    327. Kneser

    Yes, so if you can prove that you can remove a small number of sets then you’ve proved that there weren’t very many sets in the first place.

  30. gowers Says:

    328. Pair removal

    Jozsef in 325, my calculation (which I checked and I’m pretty sure it works out to be the same as yours) suggests that you get something interesting as long as C is large and C^2d\binom{2m}m is a lot smaller than \binom{2m+k}m. Since \binom{2m+k}m is about 2^k\binom{2m}m, it follows that we need d to be small compared with 2^k/C^2 in order to get a removal lemma. I think this agrees with you, since you are taking m=n/2-\sqrt n and k=2\sqrt n. So it seems correct to you and it seems correct to me, which means that there’s an \epsilon^{3/2} chance that it’s incorrect.
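    A quick numerical sanity check of the estimate \binom{2m+k}m\approx 2^k\binom{2m}m used above (the particular values of m and k are my choice):

    ```python
    from math import comb

    # Check that binom(2m+k, m) is about 2^k * binom(2m, m) when m >> k.
    # The exact ratio is prod_{i=1}^{k} (2m+i)/(m+i), which tends to 2^k.
    m, k = 1000, 10
    ratio = comb(2 * m + k, m) / comb(2 * m, m)
    print(ratio / 2 ** k)  # close to 1 for m much larger than k
    ```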

  31. jozsef Says:

    329. Thank you Tim. I think I should apologize that, after reading 303, I didn’t pay much attention to later posts, from which I could have learned that things were OK with the calculation. In the next post I will try to understand and extend your toy version of a triangle removal lemma. Unfortunately I now have to go to the university and probably won’t read the posts for the next 5-6 hours.

  32. gowers Says:

    330. Triangle removal

    Jozsef, I should make it clear that I don’t yet have an interesting new triangle removal lemma proved, so if you prove anything at all it counts as an extension of what I’ve done. But I might think just a little bit further about the vague idea of 324.

    Suppose then that we have a dense set of sequences in \null[3]^n, and for simplicity let’s assume that they’re all reasonably well balanced (meaning that they all have roughly equal numbers of 1s, 2s and 3s). How many times do we expect a given set U to occur as U_1(x) for some x\in A? The very rough answer is \binom{2n/3}{n/3}. Let’s be even more rough and call that 2^{2n/3}. Now A has size around \binom{n}{n/3,n/3,n/3}=\binom{n}{n/3}\binom{2n/3}{n/3}.

    I’m going to interrupt this comment because I’ve had a different idea that I want to pursue urgently.

  33. gowers Says:

    331. Sperner and DHJ

    The proof I know of Sperner goes like this. You take a random sequence of sets of the form \emptyset=A_0\subset A_1\subset A_2\subset\dots\subset A_n=[n], where A_i has cardinality i. If \mathcal{A} is an antichain, then it intersects each such chain in at most one set.

    Now let’s imagine we choose such a chain at random, and ask: for each i, what is the probability that its set of size i lies in \mathcal{A}? Let’s write \delta_i for the density of \mathcal{A} inside layer i (that is, the number of sets in \mathcal{A} of size i divided by \binom ni). Then that probability is \delta_i, so the expected size of the intersection of \mathcal{A} with our random chain is \delta_0+\delta_1+\dots+\delta_n. It follows that the \delta_i add up to at most 1 (since at least one chain has at least the expected number of elements of \mathcal{A} in it, and no chain has more than one). Since the middle layer(s) is (are) biggest, it’s clear that we maximize the size of \mathcal{A} by choosing a middle layer.
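    The averaging argument above can be checked mechanically on a small example; here is a sketch (the particular antichain, which mixes two layers, is my choice):

    ```python
    from itertools import combinations, permutations
    from math import comb

    n = 5
    # A small antichain mixing two layers: the 2-sets containing the element 1,
    # together with the 3-sets avoiding it (no containments between the groups).
    antichain = [frozenset({1, x}) for x in range(2, n + 1)]
    antichain += [frozenset(s) for s in combinations(range(2, n + 1), 3)]

    # LYM sum: a random maximal chain hits a given set of size i with
    # probability 1/binom(n, i), so the densities delta_i sum to at most 1.
    lym = sum(1 / comb(n, len(S)) for S in antichain)
    assert lym <= 1

    # Direct check: every maximal chain meets the antichain at most once.
    for perm in permutations(range(1, n + 1)):
        chain = {frozenset(perm[:i]) for i in range(n + 1)}
        assert sum(S in chain for S in antichain) <= 1
    ```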

    The idea that’s just occurred to me is that we’ve just deduced Sperner from the one-dimensional corners result (which states that if you have a dense subset of \null [n] then you must be able to find x and x+d in it with d\ne 0). What happens if we try to deduce density Hales-Jewett from 2-dimensional corners?

    Well, first we need to decide what replaces a random chain. For reasons of symmetry I’d like to think of a random chain as follows: you pick a random permutation of \null[n] and then once you’ve scrambled the set you take all partitions (A,B) of \null [n] into two sets such that every element of A is less than every element of B.

    At this point I’m just going to guess that it might be reasonable to choose for DHJ purposes a random permutation of \null[n] followed by all partitions (A,B,C) such that every element of A comes before every element of B, which comes before every element of C. This is at least a two-dimensional structure, since it’s determined by the end points of A and B. I will also think of this as the set S of all sequences in \null[3]^n that start with some 1s, continue with some 2s, and end with some 3s.

    Next question: if \mathcal{A} contains no combinatorial line, can we deduce that the intersection of \mathcal{A} with S is small? What I’d like to do is prove that it has no corners, in some useful sense of the word “corners”.

    How would we get a combinatorial line in this set? We can of course encode each element of S as a triple (a,b,c) where a+b+c=n. And at this point we have a problem: S contains no combinatorial lines.

    Well, I suppose if that had worked it would have been discovered by now. But it suggests an amusing problem. Let T_m be the hexagonal portion of the triangular grid x+y+z=0 (a 2-dimensional subset of \mathbb{Z}^3) defined by \max\{|x|,|y|,|z|\}\leq m. Let us call a function f:T_m\rightarrow [3]^n a corner isomorphism if it is an injection such that every aligned equilateral triangle in T_m maps to a combinatorial line. (An aligned equilateral triangle is a set of the form (a+r,b,c),(a,b+r,c),(a,b,c+r).) If we could find a corner isomorphism for some fairly large m, then we could probably move the image about and prove density Hales-Jewett using an averaging argument. So can anyone either find a big corner isomorphism or (more likely) prove that no such function exists? I don’t expect this problem to be very hard.

  34. gowers Says:

    332. Sperner and DHJ

    My last contribution for the day: I’ve just checked and there isn’t even a corner isomorphism from T_2 into \null [3]^n. The proof goes like this: you take an arithmetic progression of three points in a line and map them to 12333, 12233 and 12223. (I think I may also have wanted to insist that lines go to sets where the appropriate j-set is fixed.) Those three points determine a T_2 and their images determine what the images of the other three points under a corner isomorphism would have to be, but those three images don’t lie in a combinatorial line. (For what it’s worth, they are 12133, 12113 and 12213.)
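    To double-check the final step, here is a small script that tests whether three strings form a combinatorial line in any order (the encoding as strings over {1,2,3} and the helper name are mine):

    ```python
    from itertools import permutations

    # A column of a combinatorial line is either constant or runs through 1,2,3.
    LINE_COLS = {("1", "1", "1"), ("2", "2", "2"), ("3", "3", "3"), ("1", "2", "3")}

    def is_comb_line(x, y, z):
        """True iff some ordering of the three strings is a combinatorial line,
        with at least one coordinate actually varying."""
        for a, b, c in permutations((x, y, z)):
            cols = list(zip(a, b, c))
            if all(col in LINE_COLS for col in cols) and ("1", "2", "3") in cols:
                return True
        return False

    assert is_comb_line("11333", "12333", "13333")      # a genuine line (wildcard at position 2)
    assert not is_comb_line("12133", "12113", "12213")  # the forced images from the argument above
    ```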

  35. jozsef Says:

    333. Triangle removal

    To understand the notation and arguments below, one should read 305-307-308 first. Tim’s proof in 307-308 works nicely if we change the initial conditions. Our base set \mathcal{S} is now just a collection of m-element subsets (where n=3m+k) instead of \mathcal{A},\mathcal{B},\mathcal{C}, and we now consider every triangle formed by any three pairwise disjoint elements of \mathcal{S}. For this set of triangles, we do have the removal lemma as Tim has calculated it. Let me add two technical remarks which might be needed for the complete proof. A triangle is called “normal” if none of its edges is an edge of more than Cd other triangles. The argument then gives a bound on the number of normal triangles, and if most of the triangles aren’t normal, then we have a removal lemma. The conclusion is that if we have a small number of triangles in \mathcal{S} compared to the size of \mathcal{S}, then a small fraction of the edges covers most of the triangles. My second small remark is that when we bound the normal triangles separated by a permutation, we consider the sets to the right of the leftmost middle element (the one having the smallest last point) and the sets to the left of the rightmost element (the one having the largest first point). These are both bounded by Cd, and they span at most (Cd)^3 separated triples, as Tim said. I’m just emphasizing the selection of the left side here, only a technical detail.
    This theorem is more restricted than Tim’s original statement, but I think it might be useful. In DHJ the dense subset is given and we will work with the induced substructure.

  36. jozsef Says:

    334. Sperner

    I was wondering what the proper Sperner-type theorem in {}[3]^n would be, one that is as useful there as Sperner is in {}[2]^n. Is there something before DHJ?

  37. gowers Says:

    335. Sperner

    Jozsef, I think I have the answer to your last question. Let’s define the Kneser product of two set systems \mathcal{A} and \mathcal{B} to be the set of all pairs (U,V) such that U\in\mathcal{A}, V\in\mathcal{B} and U\cap V=\emptyset. Then the theorem would be that every Kneser product that is dense in the set of all disjoint pairs (of which there are 3^n) contains a corner: that is, a configuration of the form (U,V), (U,V\cup D), (U\cup D,V) with all three of U, V and D disjoint.
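    The correspondence between corners in the Kneser product and combinatorial lines can be made concrete in a few lines (the particular sets U, V, D are my choice):

    ```python
    def seq(U, V, n):
        """Encode a disjoint pair (U, V) as a string: 1 on U, 2 on V, 3 elsewhere."""
        return "".join("1" if i in U else "2" if i in V else "3" for i in range(n))

    # A corner (U,V), (U, V|D), (U|D, V) with U, V and D pairwise disjoint:
    n = 6
    U, V, D = {0, 1}, {2, 3}, {4}
    line = [seq(U | D, V, n), seq(U, V | D, n), seq(U, V, n)]
    # Read in this order, the coordinates in D run through 1, 2, 3 while every
    # other coordinate is fixed: a combinatorial line with wildcard set D.
    cols = list(zip(*line))
    assert all(cols[i] == ("1", "2", "3") for i in D)
    assert all(len(set(cols[i])) == 1 for i in range(n) if i not in D)
    ```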

    I’ve stated it like that to make it as similar as I can to Sperner. But a more symmetrical version is this. Suppose that \mathcal{S}, \mathcal{T} and \mathcal{U} are set systems such that there are at least c3^n ways of partitioning \null [n] as S\cup T\cup U with S\in\mathcal{S} etc. Then the corresponding collection of sequences contains a combinatorial line.

    To see that this is a natural extension of Sperner, consider the following silly way of stating (weak) Sperner. If \mathcal{A} and \mathcal{B} are two set systems such that there are at least c2^n ways of partitioning \null [n] as A\cup B with A\in\mathcal{A} and B\in\mathcal{B}, then the corresponding collection of 01-sequences contains a combinatorial line.

    It looks to me as though there should be a two-dimensional family of density Hales-Jewett statements. That is, for each 1\leq j<k there should be a DHJ(j,k), where Sperner is DHJ(1,2), density Hales-Jewett in \null [k]^n is DHJ(k-1,k), and the theorem just discussed (for which it looks as though we may have a proof) is DHJ(1,3). The idea would be that for DHJ(j,k), whether or not a sequence belongs to the set depends on a j-uniform hypergraph with 2^{[n]} as its vertex set.

    We should see if there is any relation between this and Terry’s DHJ(2.5) problem. I suspect there isn’t, in which case it’s amusing to have come up with a different way of “interpolating” between Sperner and DHJ.

  38. gowers Says:

    336. Sperner and DHJ.

    A small observation. One could regard the previous comment as showing that at least one natural generalization of Sperner to \null [3]^n turns out not to be the statement we want. That is unfortunate, though by no means catastrophic. In this comment I want to point out that there is a different way of generalizing Sperner that leads (I think) to a deeper theorem, though it still isn’t DHJ.

    Let’s go back to comment 331 and try to generalize the proof of Sperner and see what comes out, not worrying if it isn’t DHJ. To do this, we associate with any triple a+b+c=n the three sets \{1,2,\dots,a\}, \{a+1,\dots,a+b\} and \{a+b+1,\dots,n\}. Let’s call this the triple U_{a,b,c}. Now let’s suppose we have a corner, by which in this context I mean a triple of triples of the form (a+r,b,c), (a,b+r,c) and (a,b,c+r) (with a+b+c+r=n). What combinatorial configuration do we get from the corresponding triples U_{a+r,b,c}, U_{a,b+r,c} and U_{a,b,c+r}? The answer is that we partition \null [n] into sets A_1=\{1,2,\dots,a\}, A_2=\{a+1,\dots,a+r\}, A_3=\{a+r+1,\dots,a+b\}, A_4=\{a+b+1,\dots,a+b+r\} and A_5=\{a+b+r+1,\dots,n\}. Then the sequence corresponding to U_{a+r,b,c} is 1 on A_1 and A_2, 2 on A_3 and A_4, and 3 on A_5. The sequence corresponding to U_{a,b+r,c} is 1 on A_1, 2 on A_2, A_3 and A_4, and 3 on A_5. Finally, the sequence corresponding to U_{a,b,c+r} is 1 on A_1, 2 on A_2 and A_3, and 3 on A_4 and A_5.

    Thus, we have three fixed sets, namely A_1, A_3 and A_5, and two variable sets, namely A_2 and A_4. If we identify each set with a point, we can think of the three sequences as 11223, 12223 and 12233. If we forget about the fixed sets, the restrictions to the variable sets go 12, 22, 23. Also, and this is important, the two variable sets have the same size. (If they didn’t, then the resulting theorem would be much easier.)
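    This construction is concrete enough to sanity-check directly; a minimal sketch (the parameter values are mine):

    ```python
    def peculiar_line(a, b, c, r):
        """The three sequences corresponding to U_{a+r,b,c}, U_{a,b+r,c} and
        U_{a,b,c+r} in the construction above, with n = a + b + c + r."""
        s1 = "1" * (a + r) + "2" * b + "3" * c
        s2 = "1" * a + "2" * (b + r) + "3" * c
        s3 = "1" * a + "2" * b + "3" * (c + r)
        return s1, s2, s3

    a, b, c, r = 2, 3, 2, 1          # so n = 8
    s1, s2, s3 = peculiar_line(a, b, c, r)
    V = set(range(a, a + r))          # A_2, 0-indexed positions
    W = set(range(a + b, a + b + r))  # A_4, 0-indexed positions
    for i in range(a + b + c + r):
        if i in V:
            assert (s1[i], s2[i], s3[i]) == ("1", "2", "2")  # pattern 1,2,2 on V
        elif i in W:
            assert (s1[i], s2[i], s3[i]) == ("2", "2", "3")  # pattern 2,2,3 on W
        else:
            assert s1[i] == s2[i] == s3[i]                   # fixed outside V and W
    ```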

    Given a dense set of triples that start with 1s, continue with 2s and end with 3s, then the corners theorem will tell you that you must have a configuration of the above kind. I haven’t checked this, but I’m about 99% certain that if you imitate the proof of Sperner by averaging over a random permutation, then you will be able to deduce that a dense subset of \null [3]^n will contain a “peculiar combinatorial line” of the above form. (Just to be clear, a peculiar combinatorial line is a triple of sequences that are fixed outside two sets V and W that have the same size, and inside V and W they go 12,22,23.)

    The reason I call this a deeper theorem is that it relies on the corners theorem. Indeed, it implies the corners theorem if you restrict to sets that are unions of slices. (This is because we insist that V and W have the same size.)

    Again, this is a slightly depressing observation because it shows that a reasonably natural generalization of the Sperner proof gives us the wrong theorem. However, again it doesn’t actually tell us that we can’t find a different way of generalizing Sperner that gives the right theorem.

  39. gowers Says:

    337. Sperner and DHJ

    A tiny observation, following on from 332. The main difficulty seems to be as follows. Let’s define the order profile of a sequence to be what you get when you collapse each block of equal coordinates to just one coordinate. For example, the order profile of 113323311113 is 132313. In \null [2]^n you can have a combinatorial line where every element has the same order profile: e.g. 11122 and 12222. But in \null [3]^n that is impossible. (Proof: if the last fixed coordinate before a variable string starts is i and the first fixed coordinate after it finishes is j, then the variable string can only take the values i and j.)
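    The order profile is easy to compute by collapsing runs; a sketch illustrating both halves of the observation:

    ```python
    from itertools import groupby

    def order_profile(s):
        """Collapse each block of equal coordinates to a single coordinate."""
        return "".join(key for key, _ in groupby(s))

    assert order_profile("113323311113") == "132313"  # the example above
    # In [2]^n both points of a combinatorial line can share an order profile:
    assert order_profile("11122") == order_profile("12222") == "12"
    ```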

    I’ve vaguely wondered about having a two-dimensional ground set, so that it’s possible for three sets to meet at a point, but I haven’t managed to define a nice two-parameter family of sequences.

  40. gowers Says:

    338. Sperner and DHJ

    Actually, I can’t resist just slightly expanding on the last paragraph of 337. Let’s take not \null[3]^n but {}[3]^{n^2}, which we think of as the space of all functions from \null [n]^2 to \{1,2,3\}. I’d like to regard such a function as “simple” if you’ve basically got three big simply connected patches, of 1s, 2s and 3s, and if the boundaries between these patches are not too wiggly. A combinatorial line of functions like this would have the desirable property that it wouldn’t be obvious from just one of the functions what the variable set was: in some sense the “two-dimensional order profile” would be the same for all three points.

    One could now ask whether the simpler structure here would make DHJ easier to prove. If so, then one could average up in the Sperner fashion. But it doesn’t look easy to say exactly what this simpler structure is.

  41. gowers Says:

    339. Sperner and DHJ

    I take that back. Here’s an actual conjecture.

    Let’s take as our basic object the kind of thing one talks about in Sperner’s proof of the Brouwer fixed point theorem (surely a coincidence though). Chop out a large hexagonal piece from a tiling of the plane by hexagons and colour two of its edges (next to each other) red, two green and two blue, and extend these colourings from the boundary into the inside of the hexagon until you’ve coloured it all, and do so in such a way that you have three simply connected regions that meet at some point. Colourings of this kind are the basic objects that we’ll consider. Of course, “red”, “green” and “blue” is another way of saying 1, 2 and 3.

    The question is, if you have a dense set of such colourings, must you have a combinatorial line? A combinatorial line in this context will be like what I described except that the three regions meet not at a point but round the boundary of a fourth region, which is the set of variable coordinates.

    At the moment I know pretty well nothing about this question. I don’t know whether there’s an easy counterexample, I don’t know how to think about the measure on all those colourings (which feels like a fairly nasty object of a kind that the late and much lamented Oded Schramm would have been able to tell us about), and I don’t know whether a positive answer follows from density Hales-Jewett. The one thing I feel fairly confident of is that this, if true, would imply DHJ. Oh, and one other thing is that I don’t have any feel for whether this is more or less equal to DHJ in difficulty, or whether there is a chance that it is easier.

    Actually, there’s another thing that I’m confident of — in fact, certain of — which is that it is a natural generalization of the trivial fact that underlies Sperner. (That is the fact that if you have a dense set of 2-colourings of a line of points, then two of those 2-colourings will form a combinatorial line.)

    This conjecture is probably not to be taken too seriously, but I quite like the fact that it doesn’t follow easily from DHJ.

  42. Sune Kristian Jakobsen Says:

    340. Variant of Sperner

    First I must admit that I haven’t read much in this thread, so I don’t know if this has been said before. I got this idea while trying to find some low values of c_n.
    In 331 Gowers gives a proof that if \mathcal{A} is an antichain, then $\sum_{A\in\mathcal{A}}\frac{1}{\binom n{|A|}}\leq 1$. Using the same idea it is possible to show that if \mathcal{A} is a family that intersects every chain \emptyset=A_0\subset A_1\subset A_2\subset\dots\subset A_n=[n], then $\sum_{A\in\mathcal{A}}\frac{1}{\binom n{|A|}}\geq 1$. Notice that when the elements of \mathcal{A} are only allowed to have one of two given sizes, these two theorems are equivalent. The latter theorem seems to be easier to generalize to the k=3 case, and it might be useful, at least in the 200-thread.

  43. jozsef Says:

    341. Sperner

    I’m reading 335, and I’m trying to understand the conjecture. First, there is a colouring theorem for subsets which says that one can find three disjoint sets in the same colour class such that their unions are also in that colour class. It is a special case of the Finite Union Theorem (Folkman’s Theorem), which is equivalent to the finite sums theorem. It states that any finite colouring of the natural numbers contains arbitrarily large monochromatic IP sets. The density statement is clearly false without extra conditions on the dense subset. It is obvious that the finite union theorem implies the finite sum theorem; however, the other direction is difficult, and one can use van der Waerden for the reduction. Maybe we can also prove some density theorem for disjoint set unions using Szemerédi’s theorem. It is still not Tim’s cross-product conjecture, but it seems to be quite promising. Let me also mention that the other proof of the finite union theorem uses HJ! About the details: a possible reference is Graham, Rothschild, Spencer, “Ramsey Theory”. Now it’s family lunch-time, so I will be away for a while, but I’m quite excited about this possible “density finite union theorem” so I will come back a.s.a.p.

  44. gowers Says:

    342. Sperner

    Jozsef, What I wrote in 335 was meant to be more or less the same as what you suggested in 333 — a very slight generalization perhaps, but you can make it the same if you look at the special case \mathcal{S}=\mathcal{T}=\mathcal{U}. And I think that, as you say, it can be proved by the argument of 308. Or are we talking about different things?

  45. Boris Says:

    343. In comment 335 what is meant in general by DHJ(j,k)?

  46. gowers Says:

    344. Sperner/DHJ

    Boris, I hadn’t actually worked it out, but I think I can. Basically, DHJ(j,k) is density Hales-Jewett in \null [k]^n but with j placing a restriction on the complexity of the dense set A. Let me give the case j=k-1 first. This is equivalent to the normal density Hales-Jewett theorem, but I’ll state it slightly differently.

    Actually, I think I may as well go for the whole thing right from the start.

    Suppose that for each subset E of \null [k] of size j you have a set \mathcal{A}_E of functions from E to the power set of {}[n] such that the images of all the points in E are disjoint sets. Let’s write (U_i:i\in E) for a typical element of \mathcal{A}_E. I now define a set \mathcal{A} of (ordered) partitions of \null [n] into k sets (U_1,\dots,U_k) by taking all partitions (U_1,\dots,U_k) such that for every E of size j we have that (U_i:i\in E) belongs to \mathcal{A}_E. Thus, in a certain sense the set of partitions (which are of course in one-to-one correspondence with sequences in \null [k]^n) depends only on what collections of j sets there are in the partition. Then DHJ(j,k) is the claim that if your dense set \mathcal{A} has this special form, then it must contain a combinatorial line.

    My suggestion, which I think I could justify completely if pushed to do so, is that if j=1 then one can prove DHJ(j,k) by a fairly simple adaptation of the proof of Sperner as given by Jozsef. Sperner, by the way, is DHJ(1,2), and the problem we have been considering is DHJ(2,3). In general, I suspect that DHJ(j,k) is of roughly equal difficulty to DHJ(j,j+1). It might be interesting to see if we can deduce DHJ(j,k) from DHJ(j,j+1), though at the moment my feeling is that we can prove DHJ(1,k) not by deducing it from DHJ(1,2) but by imitating the proof of DHJ(1,2), which is not quite the same thing.

  47. jozsef Says:

    345. Triangle Removal in Kneser Graphs

    I will go back to the “density finite union theorem” soon, as I think it is relevant to our project, but first here is a general remark about the removal lemma. I just want to make it clear that in a certain range we have a real removal lemma. Given three families of sets A, B, and C: if the number of disjoint triples (one element from each) is smaller than X (something depending on the sizes |A|, |B|, and |C|), then a small fraction of the disjoint pairs covers EVERY triangle. To see this, we just have to iterate the counting of “normal” triangles (post 333).

  48. Boris Says:

    346. I think I misunderstand comment 344. Consider DHJ(k-1,k). We have k sets \mathcal{A}_E, each of which gives rise to a subset of {}[k]^n which I call B_E. Then the subset B of {}[k]^n that corresponds to \mathcal{A} is the intersection \bigcap_{E} B_E. However, nothing stops us from making the B_E large and their intersection empty.

  49. gowers Says:

    347. Boris, I wasn’t clear enough in what I meant. Of course, you are right in what you say, but the statement is that if the intersection is dense then you get a combinatorial line; the hypothesis is not the weaker one that each individual B_E is dense. Of course, in the case j=k-1 you can consider just a set system B_{[k-1]}, just as in Sperner you can consider just the set of points where your sequence is 0.

  50. jozsef Says:

    348. Density Finite Union (DFU) Theorem

    My previous remark was about a possible DFU theorem. Let us see the simplest version. What would be a necessary density condition to guarantee a pair of disjoint sets A and B such that A, B, and A\cup B are all in this “dense” set? Considering the arithmetic analogue: if we have a set S\subset [n] with cn elements a such that 2a is also in S, then there is a nontrivial solution of x+y=z.

  51. jozsef Says:

    349. DFU contd.

    The first density condition would be that the number of disjoint pairs is at least {}c3^n. The second condition should be something related to sets having 2n/3\pm k elements. I’m still not sure what a meaningful notion of density could be here.

  52. jozsef Says:

    350. DFU

    A possible statement could be the following: given a family F of subsets of [n], suppose that the number of disjoint pairs in F is at least {}c3^n. We define a graph on F: the vertices are the sets in F, and two are connected iff they are disjoint and their union is also in F. Then either one can remove {}o(|F|) vertices to destroy all edges, or there are {}c'3^n edges, where c’ depends on c only. I’m not sure that the statement is true, but something like this should be true.
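    The graph just defined is simple to build for small n; a sketch (the example family, the full power set of a 3-element set, is my choice, and in it every disjoint pair of distinct sets is an edge):

    ```python
    from itertools import combinations

    def dfu_graph(F):
        """The graph of comment 350: vertices are the sets in F, and two sets
        are joined iff they are disjoint and their union also lies in F."""
        members = set(F)
        return [(A, B) for A, B in combinations(F, 2)
                if not (A & B) and (A | B) in members]

    # All 8 subsets of {1,2,3}; since F is closed under unions, every
    # disjoint pair of distinct sets is an edge.
    F = [frozenset(s) for r in range(4) for s in combinations((1, 2, 3), r)]
    edges = dfu_graph(F)
    print(len(edges))  # (3^3 - 1) / 2 = 13 unordered disjoint pairs
    ```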

  53. jozsef Says:

    351. DFU (Density Finite Union) theorem.

    Well, now I see that {}c'3^n is way too ambitious; let me change it to f(n)|F| where f(n) goes to infinity.

  54. gowers Says:

    352. DFU

    I’ve got to go to bed, but as a last comment, let me ask whether 350 is intended as a set-theoretic analogue of Ben Green’s theorem that if you have a set with few solutions of x+y=z then you can remove a small number of elements and end up with no solutions of x+y=z. It would be very nice to be able to prove something like that (and a generalization to larger finite unions).

    If one were going to try to find a counterexample, one would have a large collection of sets of size around n/3, and one would partition them into disjoint pairs, and one would add to the collection the set of unions of these pairs, which would turn them into a matching in your graph. The question would then be whether one could do this without creating c'3^n further edges.

    Here’s an attempt to do so. First we choose a random collection \mathcal{F} of sets of size 2n/3, big enough that its n/3 lower shadow is all of \binom{[n]}{n/3}. Next, we choose all sets of size at most n/3. Then the number of disjoint pairs is certainly at least c3^n. Also, the number of edges is at most 2^{n/3}|\mathcal{F}|. Next, it seems to me that it will be very hard to remove all edges: the fact that \mathcal{F} is random and that every vertex is contained in an edge (and we could make \mathcal{F} slightly bigger so that each vertex is in several edges) ought to mean that we can choose a large matching. How big is \mathcal{F}? Well, each set in \mathcal{F} covers 2^{n/3} small sets, so by my calculations the number of edges is 2^{2n/3}, which is a lot smaller than 3^n. Sorry if some of this turns out to be nonsense — I must sleep now.

  55. gowers Says:

    353. DFU

    Just to say that I wrote 352 before seeing your 351.

  56. jozsef Says:

    354. DFU (Density Finite Union conjecture)

    Let me state here a conjecture. It might not be directly connected to our project, however I’ve been looking for the right formulation for a while and I think that this is the right one; First we have to find the proper notation of density since simple density wouldn’t guarantee the result we are looking for. The “weighted density” of a set S\subset [n] is given by the sum of the (normalized) number of pairwise disjoint k-tuples in S for all 1<kc>0 there are m pairwise disjoint sets, B_1,...,B_m that \cup_{i\in I} B_i  \in S for any nonempty I\subset [m] . This form is clearly a density version of Folkman’s theorem; If we colour the subsets of [n] by K colours, then at least one class has density > 1/K.

  57. jozsef Says:

    Somehow I lost the middle of the text between 10

    355. DFU (Density Finite Union conjecture)

    Let me state a conjecture here. It might not be directly connected to our project; however, I’ve been looking for the right formulation for a while and I think that this is it. First we have to find the proper notion of density, since simple density wouldn’t guarantee the result we are looking for. The “weighted density” of a set S\subset 2^{[n]} is given by the sum of the (normalized) numbers of pairwise disjoint k-tuples in S: D(S)={\frac{1}{n-1}}\sum_{k=2}^n |\{(A_1,...,A_k):A_i\in S,\ A_i\cap A_j=\emptyset \text{ for } i\neq j\}|/(k+1)^n. The maximum number of pairwise disjoint k-tuples is (k+1)^n, so D(S) is a number between 0 and 1. The conjecture is that for any number m, if S contains no m pairwise disjoint elements together with all 2^m-1 possible nonempty unions, then it is sparse: D(S) goes to 0 as n goes to infinity.

    In other words, if S is dense (and n is large enough) then there are m pairwise disjoint sets B_1,...,B_m such that \cup_{i\in I}B_i\in S for every nonempty index set I\subset [m].
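    For tiny n the weighted density can be computed by brute force, which also confirms the normalization (the function name is mine; the full power set attains D(S)=1):

    ```python
    from itertools import combinations, product

    def weighted_density(S, n):
        """D(S) from the conjecture above: average over k = 2..n of the number
        of ordered pairwise-disjoint k-tuples from S, normalized by (k+1)^n."""
        members = [frozenset(A) for A in S]
        total = 0.0
        for k in range(2, n + 1):
            count = sum(all(not (t[i] & t[j])
                            for i in range(k) for j in range(i + 1, k))
                        for t in product(members, repeat=k))
            total += count / (k + 1) ** n
        return total / (n - 1)

    # For the full power set, each element of [n] independently lands in one of
    # the k sets or in none, giving exactly (k+1)^n disjoint tuples, so D = 1.
    n = 3
    S = [set(c) for r in range(n + 1) for c in combinations(range(n), r)]
    print(weighted_density(S, n))  # 1.0
    ```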

  58. gowers Says:

    356. DFU

    Dear Jozsef, I’ve edited your last comment so that the formulae show up, and made a couple of other tiny changes (such as changing S\subset [n] to S\subset 2^{[n]}). Please check that I haven’t introduced any errors.

    As it stands, I don’t understand your conjecture. A typical disjoint k-tuple has all its sets of size around n/(k+1), so it seems to me that the set of all sets of size around n/(k+1) is dense in your sense. And there is of course no hope of getting finite unions in this set. Am I misunderstanding something?

  59. DHJ — quasirandomness and obstructions to uniformity « Gowers’s Weblog Says:

    […] and lower bounds for the density Hales-Jewett problem , hosted by Terence Tao on his blog, and DHJ — the triangle-removal approach, which is on this blog. These three threads are exploring different facets of the problem, though […]

  60. jozsef Says:

    357. DFU

    Tim, there is a 1/(n-1) multiplier, so I thought that any one layer (a single k) would contribute only a small fraction. In particular, I think that rich layers form arithmetic progressions, which might be useful in a possible proof.

  61. gowers Says:

    358. DFU

    Ah — I hadn’t spotted that you were averaging over lots of layers. Now it starts to make more sense to me.

  62. Boris Says:

    359. Averaging over multiple layers does not seem to help in comment 355. Consider the family S of all sets of odd size. It contains no disjoint B_1,B_2\in S such that B_1\cup B_2\in S.

  63. Boris Says:

    Oops… d(S)=1/n for the example I gave.

  64. Boris Says:

    361. I think Jozsef’s conjecture is true, but not for interesting reasons. I claim that for every fixed positive integer t the condition d(S)>0 implies that |S\cap \binom{[n]}{t}|\geq (1-o(1))\binom{n}{t}. Suppose not, and let T=\binom{[n]}{t}\setminus S. Now let us generate a partition P_1,\dotsc,P_k of {}[n] into k random parts. If k=cn with c>0, then Pr[P_i\in T]=p is positive, and is independent of i. Had P_1,\dotsc,P_k been independent, it would have followed that Pr[\forall i\ P_i\notin T]=(1-p)^k is exponentially small. Clearly, the P‘s are not independent, but they are nearly independent. More precisely, let R_i be the event that |P_i|\leq n^{1/4}. Then for independently sampled sets Pr[P_i\cap P_j=\emptyset\mid R_i\wedge R_j]=1-O(n^{-1/2}). Hence, sampling n^{1/6} random sets by choosing each element with probability 1/(k+1) is virtually indistinguishable from choosing a random k-tuple of disjoint sets and then looking at the first n^{1/6} of them. I believe I can write out the appropriate string of inequalities to show this if asked.

    Since the density of S is nearly 1 on each \binom{[n]}{t}, the existence of any given configuration follows by simple averaging.
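[Editorial aside, not part of the original comment: the heuristic behind the near-independence claim above can be sanity-checked in a toy setting. If two subsets of {}[n] are sampled independently, each element included with probability q, then they are disjoint with probability exactly (1-q^2)^n, since each element must avoid lying in both sets. The helper name below is my own; this is an illustrative sketch, not code from the discussion.]

```python
# Exact check that two independently sampled q-random subsets of [n]
# are disjoint with probability (1 - q^2)^n.
from fractions import Fraction
from itertools import product

def exact_disjoint_probability(n, q):
    """Brute-force P(A and B disjoint) over all pairs of q-random subsets of [n]."""
    total = Fraction(0)
    for A in product([0, 1], repeat=n):
        for B in product([0, 1], repeat=n):
            if any(a and b for a, b in zip(A, B)):
                continue  # A and B share an element; skip this pair
            weight = Fraction(1)
            for bit in A + B:
                weight *= q if bit else 1 - q
            total += weight
    return total

n, q = 4, Fraction(1, 3)
print(exact_disjoint_probability(n, q) == (1 - q * q) ** n)  # True
```

For small sets (size at most n^{1/4}, as in the R_i events above), this disjointness probability is 1-O(n^{-1/2}), which is what makes the independent samples a good proxy for a random partition.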

  65. jozsef Says:

    362.

    Thanks Boris! It seems that this is still not the right formulation for a DFU conjecture. Actually, I should have noticed that a random selection of sets with probability 1/2 already gives density zero. So, I’m still looking for the right density definition. Boris, do you have any suggestion?

  66. Terence Tao Says:

    363. DHJ(j,k)

    I had to struggle a bit to understand what DHJ(j,k) meant – I had not followed this thread for a while – but when I finally got it, I could see how it fits nicely with the whole hypergraph regularity/hypergraph removal story, and so it does look quite promising. Anyway, I thought I’d try to restate it here in case anyone else also wants to know about it…

    In {}[3]^n, we can define the notion of a “(basic) complexity 1 set” to be a set A \subset [3]^n of strings whose 1-set lies in some fixed class {\mathcal A}_1 \subset 2^{[n]}, whose 2-set lies in some fixed class {\mathcal A}_2 \subset 2^{[n]}, and whose 3-set lies in some fixed class {\mathcal A}_3 \subset 2^{[n]}. A simple example here is \Gamma_{a,b,c}; in this case {\mathcal A}_1 is the set of subsets of {}[n] of size a, and so forth. As I just recently understood over at the obstructions to uniformity thread, basic complexity 1 sets are obstructions to uniformity, and so we definitely need to figure out how to find lines in these sets.
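[Editorial aside, not part of the original comment: the definitions above are concrete enough to code up directly. The sketch below, with helper names of my own choosing, represents a string in {}[3]^n by its 1-, 2- and 3-sets and tests membership in a basic complexity 1 set, using \Gamma_{a,b,c} as the example from the comment.]

```python
# Representing strings in [3]^n by their 1-, 2- and 3-sets, and testing
# membership in a basic complexity 1 set determined by classes A1, A2, A3.
from itertools import combinations

def i_set(x, i):
    """Coordinates of the string x (0-indexed) carrying the letter i."""
    return frozenset(j for j, letter in enumerate(x) if letter == i)

def in_basic_complexity_1(x, A1, A2, A3):
    """True if the 1-, 2- and 3-sets of x lie in A1, A2, A3 respectively."""
    return i_set(x, 1) in A1 and i_set(x, 2) in A2 and i_set(x, 3) in A3

# Example: Gamma_{a,b,c} with (a,b,c) = (2,1,1) inside [3]^4, realised by
# taking each class A_i to be all subsets of the appropriate size.
n, a, b, c = 4, 2, 1, 1
A1 = {frozenset(s) for s in combinations(range(n), a)}
A2 = {frozenset(s) for s in combinations(range(n), b)}
A3 = {frozenset(s) for s in combinations(range(n), c)}

x = (1, 1, 2, 3)  # 1-set {0,1}, 2-set {2}, 3-set {3}
print(in_basic_complexity_1(x, A1, A2, A3))  # True: x lies in Gamma_{2,1,1}
```

Note that for k=3 any two of the i-sets determine the third (it is the complement of their union), which is the observation behind the later remark that every set is complexity 2.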

    In analogy with hypergraph regularity (and also with ergodic theory), the next more complicated type of set might be “non-basic complexity 1 sets” – sets which are unions of a bounded number of basic complexity 1 sets. (In graph theory, basic complexity 1 corresponds to a complete bipartite weighted graph between two cells with some constant weight, and non-basic complexity 1 corresponds to a Szemeredi-type weighted graph between a bounded number of cells, with any pair of cells V_i, V_j having a constant weight d_{ij} on the edges connecting them.) But it seems fairly obvious that if dense basic complexity 1 sets have lines, then so do dense non-basic complexity 1 sets (though one may have to be careful with quantifiers, and use the usual Szemeredi double induction to organise the epsilons properly).

    Then we have the basic complexity 2 sets in {}[3]^n, which are those sets A in which the 12-profile of A (i.e. the pair consisting of the 1-set and 2-set of A; one can also identify this with an element of 3^{[n]}) lies in some fixed class {\mathcal A}_{12} \subset 3^{[n]}, the 23-profile of A lies in some fixed class {\mathcal A}_{23} \subset 3^{[n]}, and the 31-profile of A lies in some fixed class {\mathcal A}_{31} \subset 3^{[n]}. But of course when k=3, any one of these complexity 2 profiles determines the entire set A, so every set A is complexity 2, which is why DHJ(2,3) is the same as DHJ(3). [Incidentally, to answer a previous question, this hierarchy seems to be unrelated to my DHJ(2.5) – I was simplifying the lines rather than the sets.]

    If we knew that sets A which were uniformly distributed wrt complexity 1 sets were quasirandom (a “counting lemma”), then we would be well on our way to reducing DHJ(2,3) to DHJ(1,3) by the usual regularity type arguments. (I’m sure Tim and the others here already know this, this is just me trying to get up to speed.)

  67. gowers Says:

    364. Big picture

    There is now a definite convergence of this thread and the obstructions-to-uniformity thread, which perhaps illustrates the advantages of not splitting the discussion up. At any rate, this last comment of Terry’s is closely related to comment 411, and also to earlier comments on the 400-499 thread that attempt to establish some sort of counting lemma. I now get the feeling that the problem has been softened up, and that it is not ridiculous to hope that we might actually solve it. But it could be that certain difficulties that feel technical will turn out to be fundamental.

    Unfortunately, I’ve got two big teaching days coming up, so my input is going to go right down for a while. But perhaps that’s in the spirit of polymath — we get involved, then we go off and do other things, then we come back and catch up with what’s going on, but all the while polymath continues churning away at the problem.

  68. Terence Tao Says:

    Metacomment: Ideally, I suppose we should be working in an environment in which comments from different threads can somehow overlap, coalesce, or otherwise interlink to each other. But, failing that, I think this division into subthreads has been the least worst solution (e.g. the c_n for small n thread seems to have taken off once it was separated from the “mainstream” threads of the project). And revisiting the threads every 100 comments or so seems to be just about the right tempo, I think.

  69. gowers Says:

    365. Technical clean-up idea

    This is another comment that could go either here or on the obstructions-to-uniformity thread, but I think it is slightly more appropriate here. Terry, that’s a good point about the 100-comments-per-post rule, and if the 300s and 400s threads really do seem to be converging, then they can merge into the 500s thread.

    My comment is that I have a variant of DHJ that ought to be equivalent to DHJ itself but has the potential to tidy up a lot of the annoying technicalities to do with slices being of different sizes. (I don’t know for certain that it will work, however.) It’s motivated by the proof of Sperner that I gave in 331. There one shows the stronger result that if the slice densities add up to more than 1 then there is a “combinatorial line”. What should the analogue be for DHJ?

    My contention is the following: define the slice density \delta_{a,b,c} to be |A\cap\Gamma_{a,b,c}|/|\Gamma_{a,b,c}|. Then I claim that if \sum_{a+b+c=n}\delta_{a,b,c}\geq cn^2 (for fixed positive c and sufficiently large n), then A contains a combinatorial line. This conjecture bears the same relation to corners that the strong form of Sperner bears to the trivial fact that if you have at least two points in \null[n] then you get a configuration of the form x,x+d with d\ne 0. Given that the averaging argument for Sperner naturally proves this stronger result, I think there is reason to hope that the numbers will all work out miraculously better if we go for this revised version of DHJ. I hope that this way we can avoid the boring details about being near the middle.

    I haven’t begun to check whether this miracle really does happen, but I think it’s definitely worth investigating.
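[Editorial aside, not part of the original comment: the quantities in the conjecture above are easy to compute for small n. The sketch below, with helper names of my own, computes the slice densities \delta_{a,b,c} and their sum for a set A \subset [3]^n; for A = [3]^n every density is 1, so the sum is just the number of slices, \binom{n+2}{2}.]

```python
# Slice densities delta_{a,b,c} = |A ∩ Gamma_{a,b,c}| / |Gamma_{a,b,c}|
# and the quantity sum_{a+b+c=n} delta_{a,b,c} from the conjecture.
from itertools import product
from collections import Counter
from fractions import Fraction

def slice_density_sum(A, n):
    """Sum of delta_{a,b,c} over all slices of [3]^n, for A a set of strings."""
    in_slice = Counter()    # |A ∩ Gamma_{a,b,c}|
    slice_size = Counter()  # |Gamma_{a,b,c}|
    for x in product((1, 2, 3), repeat=n):
        s = (x.count(1), x.count(2), x.count(3))
        slice_size[s] += 1
        if x in A:
            in_slice[s] += 1
    return sum(Fraction(in_slice[s], slice_size[s]) for s in slice_size)

n = 3
everything = set(product((1, 2, 3), repeat=n))
# Number of slices = number of triples (a,b,c) with a+b+c=n = binom(n+2,2).
print(slice_density_sum(everything, n))  # 10 when n = 3
```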

  70. gowers Says:

    366. Technical clean-up idea.

    Another way of looking at 365. All we’re doing is choosing a random point of {}[3]^n not with the uniform measure but by first choosing a slice and then choosing a random point in that slice. It seems to me that this should have huge advantages. For instance, and most notably, the distribution of a random point used to be radically different if you conditioned on it belonging to a typical combinatorial line. Now it’s different but not that different. (With the new measure the situation is on a par with the situation for corners, where the difference between the distributions of random point and random vertex of a random corner is not a serious problem.) But I think there may be all sorts of arguments that will now work out exactly. Perhaps even the quasirandomness calculations I was trying to do will become clean and nice. No time to check though!
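[Editorial aside, not part of the original comment: the measure described in 366 can be sketched directly. The function below, hypothetical and named by me, first picks a slice (a,b,c) with a+b+c=n uniformly at random among all such triples, and then a uniformly random point of \Gamma_{a,b,c} by shuffling the multiset of letters.]

```python
# Sampling a point of [3]^n from the "choose a slice first" measure.
import random

def random_equal_slices_point(n, rng=random):
    """Pick a uniform slice (a,b,c) with a+b+c=n, then a uniform point of it."""
    compositions = [(a, b, n - a - b)
                    for a in range(n + 1) for b in range(n + 1 - a)]
    a, b, c = rng.choice(compositions)
    letters = [1] * a + [2] * b + [3] * c
    rng.shuffle(letters)  # a uniform random point of Gamma_{a,b,c}
    return tuple(letters)

x = random_equal_slices_point(10)
print(len(x))  # 10
```

Under this measure a point in a small slice is far more likely than under the uniform measure on {}[3]^n, which is exactly what flattens out the conditioning issues mentioned above.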

  71. Nicholas Locascio Says:

    Hi everyone. While the Hales-Jewett theorem may be a little beyond my current level of mathematics, I did put something together that I believe the mathematicians who solved this problem would be interested in.

    I created a fully playable 12 dimensional tic-tac-toe iPhone game.
    It is available for free download on the app store.

    http://itunes.apple.com/us/app/twelvetacto/id457438285?mt=8

    My calculation gives 121,804,592 possible solutions for an unaltered playing space via the self-discovered formula:
    s(d) = (1/2) * ((k+2)^d - k^d)
    where d = # of dimensions
    and k = length of side

    I’ve also defined it recursively (with s(0) = 0) as:
    s(d) = (k+2)*s(d-1) + k^(d-1)

    In my game, d = 12 and k = 3.
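[Editorial aside, not part of the original comment: the closed form and the recursion above can be checked mechanically; the sketch below, with names of my own, confirms that they agree and that for d = 12, k = 3 the count is 121,804,592 as stated.]

```python
# Checking the closed form against the recursion for the number of winning
# lines in k^d tic-tac-toe.

def s_closed(d, k):
    """Closed form: ((k+2)^d - k^d) / 2."""
    return ((k + 2) ** d - k ** d) // 2

def s_recursive(d, k):
    """Recursion s(d) = (k+2)*s(d-1) + k^(d-1), with s(0) = 0."""
    return 0 if d == 0 else (k + 2) * s_recursive(d - 1, k) + k ** (d - 1)

print(s_closed(12, 3))  # 121804592, as claimed
print(all(s_closed(d, 3) == s_recursive(d, 3) for d in range(13)))  # True
```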

    Using the basic framework of the game, I do believe a computer-assisted proof would be possible. I do know that computer proofs are generally considered less elegant in the world of higher mathematics, but if anyone is interested, I would be willing to share my source code and collaborate on such a proof.
