A potential new Polymath project: intransitive dice

A few days ago I received an email from Brian Conrey, who has a suggestion for a possible Polymath project. The problem he wants to see solved is a little different from the problems in most previous projects, in that it is not a well known and long-standing question of acknowledged importance. However, that is not an argument against trying it out, since it is still far from clear what kinds of problems are best suited to the polymathematical approach, and it would be good to get more data. And this problem has other qualities that could make it very suitable indeed. First of all, it is quite new — it arises from a paper published last year, though it appeared on arXiv in 2013 — so we do not yet have a clear idea of how difficult it is, which should give us hope that it may turn out to be doable. Secondly, and more importantly, it is a very attractive question.

Suppose you have a pair of dice D_1,D_2 with different numbers painted on their sides. Let us say that D_1 beats D_2 if, thinking of them as random variables, the probability that D_1>D_2 is greater than the probability that D_2>D_1. (Here, the rolls are of course independent, and each face on each die comes up with equal probability.) It is a famous fact in elementary probability that this relation is not transitive. That is, you can have three dice D_1,D_2,D_3 such that D_1 beats D_2, D_2 beats D_3, and D_3 beats D_1.
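
To make the phenomenon concrete, here is a minimal sketch in Python (the triple below is a standard textbook example of intransitive dice, not one taken from the paper discussed in this post):

```python
from itertools import product

def beats(A, B):
    """True if die A beats die B, i.e. P(A > B) > P(B > A), when the
    two dice are rolled independently and each face is equally likely."""
    wins = sum(a > b for a, b in product(A, B))
    losses = sum(a < b for a, b in product(A, B))
    return wins > losses

# A classic intransitive triple: each die beats the next with probability 5/9.
A = (2, 2, 4, 4, 9, 9)
B = (1, 1, 6, 6, 8, 8)
C = (3, 3, 5, 5, 7, 7)
print(beats(A, B), beats(B, C), beats(C, A))  # True True True: a cycle
```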

Brian Conrey, James Gabbard, Katie Grant, Andrew Liu and Kent E. Morrison became curious about this phenomenon and asked the kind of question that comes naturally to an experienced mathematician: to what extent is intransitivity “abnormal”? The way they made the question precise is also one that comes naturally to an experienced mathematician: they looked at n-sided dice for large n and asked about limiting probabilities. (To give another example where one might do something like this, suppose one asked “How hard is Sudoku?” Well, any Sudoku puzzle can be solved in constant time by brute force, but if one generalizes the question to arbitrarily large Sudoku boards, then one can prove that the puzzle is NP-hard to solve, which gives a genuine insight into the usual situation with a 9\times 9 board.)

Let us see how they formulate the question. The “usual” n-sided die can be thought of as a random variable that takes values in the set \{1,2,\dots,n\}, each with equal probability. A general n-sided die is one where different probability distributions on \mathbb N are allowed. There is some choice about which ones to go for, but Conrey et al. go for the following natural conditions.

  1. For each integer m, the probability p_m that it occurs is a multiple of 1/n.
  2. If p_m>0, then 1\leq m\leq n.
  3. The expectation \sum_m mp_m is the same as it is for the usual die — namely (n+1)/2.

Equivalently, an n-sided die is a multiset of size n with elements in \{1,2,\dots,n\} and sum n(n+1)/2. For example, (2,2,3,3,5,6) and (1,2,3,3,6,6) are six-sided dice.

If we have two n-sided dice A and B represented in this way as (a_1,a_2,\dots,a_n) and (b_1,b_2,\dots,b_n), then A beats B if the number of pairs (a_i,b_j) with a_i>b_j exceeds the number of pairs with a_i<b_j.
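
As a quick concrete check of this definition, here is a small sketch comparing the two example dice above:

```python
from itertools import product

def compare(A, B):
    """Count the pairs (a_i, b_j) with a_i > b_j and with a_i < b_j."""
    wins = sum(a > b for a, b in product(A, B))
    losses = sum(a < b for a, b in product(A, B))
    return wins, losses

print(compare((2, 2, 3, 3, 5, 6), (1, 2, 3, 3, 6, 6)))
# (14, 14): neither count exceeds the other, so these two dice are tied.
```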

The question can be formulated a little over-precisely as follows.

Question. Let A, B and C be three n-sided dice chosen uniformly at random. What is the probability that A beats C if you are given that A beats B and B beats C?

I say “over-precisely” because there isn’t a serious hope of finding an exact formula for this conditional probability. However, it is certainly reasonable to ask about the limiting behaviour as n tends to infinity.

It’s important to be clear what “uniformly at random” means in the question above. The authors consider two n-sided dice to be the same if the probability distributions are the same, so in the sequence representation a random die is a random non-decreasing sequence of integers from \{1,2,\dots,n\} that add up to n(n+1)/2 — the important word there being “non-decreasing”. Another way of saying this is that, as indicated above, the distribution is uniform over multisets (with the usual notion of equality) rather than sequences.

What makes the question particularly nice is that there is strong evidence for what the answer ought to be, and the apparent answer is, at least initially, quite surprising. The authors make the following conjecture.

Conjecture. Let A, B and C be three n-sided dice chosen uniformly at random. Then the probability that A beats C if you are given that A beats B and B beats C tends to 1/2 as n tends to infinity.

This is saying that if you know that A beats B and that B beats C, you basically have no information about whether A beats C.

They back up this conjecture with some experimental evidence. When n=6, there turn out to be 4417 triples of dice (A,B,C) such that A beats B and B beats C. For 930 of these triples, A and C were tied, for 1756, A beat C, and for 1731, C beat A.
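
For anyone who wants to reproduce these numbers, a brute-force count along the following lines should do it (a sketch; note that if A beats B and B beats C then A, B and C are automatically distinct, so no convention about repeated dice is needed):

```python
from itertools import combinations_with_replacement, product

n = 6
dice = [d for d in combinations_with_replacement(range(1, n + 1), n)
        if sum(d) == n * (n + 1) // 2]  # all n-sided dice in the model

def result(A, B):
    """+1 if A beats B, -1 if B beats A, 0 for a tie."""
    wins = sum(a > b for a, b in product(A, B))
    losses = sum(a < b for a, b in product(A, B))
    return (wins > losses) - (wins < losses)

res = {(A, B): result(A, B) for A in dice for B in dice}
ties = ac = ca = 0
for A, B, C in product(dice, repeat=3):
    if res[A, B] == 1 and res[B, C] == 1:
        r = res[A, C]
        ties += (r == 0)
        ac += (r == 1)
        ca += (r == -1)
print(ties + ac + ca, ties, ac, ca)
# If I have the set-up right, this should print 4417 930 1756 1731.
```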

It seems obvious that as n tends to infinity, the probability that two random n-sided dice are tied tends to zero. Somewhat surprisingly, that is not known, and is also conjectured in the paper. It might make a good first target.

The reason these problems are hard is at least in part that the uniform distribution over non-decreasing sequences of length n with entries in \{1,\dots,n\} that add up to n(n+1)/2 is hard to understand. In the light of that, it is tempting to formulate the original question — just how abnormal is intransitivity? — using a different, more tractable distribution. However, experimental evidence presented by the authors in their paper indicates that the problem is quite sensitive to the distribution one chooses, so it is not completely obvious that a good reformulation of this kind exists. But it might still be worth thinking about.

Assuming that the conjecture is true, I would imagine that the heuristic reason for its being true is that for large n, two random dice will typically be “close” in the sense that although one beats the other, it does not do so by very much, and therefore we do not get significant information about what it looks like just from knowing that it beats the other one.

That sounds a bit vague, so let me give an analogy. Suppose we choose random unit vectors u,v,w in \mathbb R^2 and are given the additional information that \langle u,v\rangle\geq 0 and \langle v,w\rangle\geq 0. What is the probability that \langle u,w\rangle\geq 0? This is a simple exercise, and, unless I’ve messed up, the answer is 3/4. That is, knowing that in some sense u is close to v and v is close to w makes it more likely that u is close to w.

But now let’s choose our random vectors from \mathbb R^n. The picture changes significantly. For fixed u, the concentration of measure phenomenon tells us that for almost all v the inner product \langle u,v\rangle is close to zero, so we can think of u as the North Pole and the unit sphere as being almost all contained in a thin strip around the equator. And if v happens to be just in the northern hemisphere — well, it could just as easily have landed in the southern hemisphere. After a change of basis, we can assume that u=e_1 and v is very close to e_2. So when we choose a third vector w, we are asking whether the sign of its second coordinate is correlated with the sign of its first. And the answer is no — or rather, yes but only very weakly.
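
Both the two-dimensional answer of 3/4 and the high-dimensional collapse towards 1/2 are easy to check by simulation. Here is a minimal sketch (the dimension 100 is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_prob(d, trials=20_000):
    """Estimate P(<u,w> >= 0 | <u,v> >= 0 and <v,w> >= 0) for independent
    uniformly random unit vectors u, v, w in R^d.  Gaussian vectors have
    uniformly random directions, and normalisation does not affect the
    signs of the inner products, so we skip it."""
    u = rng.standard_normal((trials, d))
    v = rng.standard_normal((trials, d))
    w = rng.standard_normal((trials, d))
    cond = (np.sum(u * v, axis=1) >= 0) & (np.sum(v * w, axis=1) >= 0)
    return np.mean(np.sum(u * w, axis=1)[cond] >= 0)

print(conditional_prob(2))    # should come out close to 3/4
print(conditional_prob(100))  # should come out close to 1/2
```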

One can pursue that thought and show that the graph where one joins u to v if \langle u,v\rangle\geq 0 is, for large n, quasirandom, which means, roughly speaking, that it has several equivalent properties that are shared by almost all random graphs. (For a more detailed description, Googling “quasirandom graphs” produces lots of hits.)

For the problem of Conrey et al, the combinatorial object being examined is not a graph but a tournament: that is, a complete graph with orientations on each of its edges. (The vertices are dice, and we draw an arrow from D_1 to D_2 if D_1 beats D_2. Strictly speaking this is not a tournament, because of ties, but I am assuming that ties are rare enough for this to make no significant difference to the discussion that follows.) It is natural to speculate that the main conjecture is a consequence of a much more general statement, namely that this tournament is quasirandom in some suitable sense. In their paper, the authors do indeed make this speculation (it appears there as Conjecture 4).

It turns out that there is a theory of quasirandom tournaments, due to Fan Chung and Ron Graham. Chung and Graham showed that a number of properties that a tournament can have are asymptotically equivalent. It is possible that one of the properties they identified could be of use in proving the conjecture described in the previous paragraph, which, in the light of the Chung-Graham paper, is exactly saying that the tournament is quasirandom. I had hoped that there might be an analogue for tournaments of the spectral characterization of quasirandom graphs (which says that a graph is quasirandom if its second largest eigenvalue is small), since that could give a significantly new angle on the problem, but there is no such characterization in Chung and Graham’s list of properties. Perhaps it is worth looking for something of this kind.

Here, once again, is a link to the paper where the conjectures about dice are set out, and more detail is given. If there is enough appetite for a Polymath project on this problem, I am happy to host it on this blog. All I mean by this is that I am happy for the posts and comments to appear here — at this stage I am not sure what level of involvement I would expect to have with the project itself, but I shall certainly follow the discussion to start with and I hope I’ll be able to make useful contributions.

76 Responses to “A potential new Polymath project: intransitive dice”

  1. Brian Conrey Says:

    Thanks, Tim, for posting this. The only thing I’d like to add is that
    we do have a small amount of theoretical evidence for our conjecture.
    We call a die a “one-step” if it differs from the standard die only in
    that one pip is removed from one face and placed on another. There are
    about n^2 of these and so about n^6 triples. Most of these triples
    have ties but around n^2 of the triples have pairwise winners. Restricting to those triples of one-step dice which have pairwise winners, we can prove that asymptotically 1/4 of the triples are intransitive, which is what our conjecture predicts. Our proof is very combinatorial; it considers 64 cases, and includes an interesting wrinkle having to do with triples that have special automorphisms. The details of the proof are in the first version of the paper we posted to the arXiv: https://arxiv.org/pdf/1311.6511v1.pdf

  2. gowers Says:

    Continuing the analogy with the angles-between-unit-vectors example, it is natural to ask the following question. Suppose we have two sets of n-sided dice, \mathcal A and \mathcal B, and suppose that every die in \mathcal A beats every die in \mathcal B. Must at least one of \mathcal A and \mathcal B have very small measure (where the measure is the uniform probability measure on the set of all n-sided dice)? The analogy is that if you have two subsets A and B of the n-sphere and \langle a,b\rangle\geq 0 for every a\in A and b\in B, then at least one of A and B has exponentially small measure.

    This problem might be more tractable than the main one, but it is in the same ballpark.

  3. Kevin Costello Says:

    For the question of “How likely is it that two random dice are tied”…

    The first natural thing you could hope for is “For any fixed die A, the probability that B ties A is small”. This is false — if A is the standard die, every die ties A (due to the sum of the values on a die being fixed). But maybe somehow this is the only exception?

    Suppose we fix die A to have values a_1\geq a_2\geq\dots\geq a_n. This naturally gives rise to a function

    f_A(j) = | \{ i \textrm{ with } a_i<j \} |-|\{i \textrm{ with } a_i>j \}|.

    The question of whether B beats A or not then comes down to whether the sum of this function over B is positive, negative, or 0.

    So you might hope that as long as f_A wasn’t somehow “too close” to the perfectly linear behavior for a standard die, a sum of 0 wouldn’t be too likely.

    The complication here is that the faces b_j are not quite independent, both because we’re considering multisets instead of sequences and because we’re requiring the sum of B to be a fixed value.
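
    Here is a minimal sketch of this set-up; the last loop checks the observation above that every die in the model ties the standard die:

    ```python
    def f(A, j):
        """f_A(j) = #{i : a_i < j} - #{i : a_i > j}."""
        return sum(a < j for a in A) - sum(a > j for a in A)

    def margin(A, B):
        """Positive / zero / negative according as B beats A / ties A / loses to A."""
        return sum(f(A, b) for b in B)

    standard = tuple(range(1, 7))
    for B in [(1, 2, 3, 3, 6, 6), (2, 2, 3, 3, 5, 6), (1, 1, 4, 5, 5, 5)]:
        print(margin(standard, B))  # all 0: every die in the model ties the standard die
    ```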

    • gowers Says:

      I like this idea. To make a start on it, one could consider a few specific examples of dice A. For example, the simplest one to analyse is probably
      A=((n+1)/2,(n+1)/2,\dots,(n+1)/2).
      Now take a random multiset B from the given model, and define g_A(B) to be the number of values of B that are less than (n+1)/2 plus half the number that are equal to (n+1)/2. It seems pretty plausible that the distribution of g_A(B) is something like a normal distribution with mean n/2 and variance tending to infinity (quite possibly of order \sqrt n), which would make the probability that it hits n/2 on the nose be o(1).

    • gowers Says:

      Also, it looks to me as though the ties question becomes much easier if we use the alternative distribution mentioned in Kent Morrison’s comment below. That is (if I understand correctly) we take a random n-sided die to be a random sequence (a_1,\dots,a_n) of integers between 1 and n. Now for each fixed A the summands f_A(b_j) are independent. There are some extreme cases where ties are quite likely, such as when a_i=1 half the time and n the other half of the time, in which case f_A(b_j)=0 with probability close to 1 and there’s a good chance that every f_A(b_j) is zero. But for any reasonable A — perhaps one where the variance of a random a_i isn’t almost as big as it can be — it looks as though this won’t happen and that standard probabilistic techniques can be used to show that a tie happens with probability tending to zero.

      So unless I’ve misunderstood the model, it looks to me as though studying random sequences as above would be a good idea: it would be nice to prove that the tournament is quasirandom for some sensible model, even if it turns out to be hard for the original model.

  4. Kent Morrison Says:

    Our dice are modeled as multi-sets of size n with elements being integers from 1 to n and with the total being n(n+1)/2. But we also looked at defining them to be ordered n-tuples with the uniform probability distribution. In this picture the induced probability on the multi-set model is no longer uniform. However, the computer simulations of random triples of these dice with n=100 sides (or so) showed the same results as for the multi-set model, and so we think that the conjectures hold for either of these two definitions of random dice.

    On the other hand, the problem does show sensitivity to the choice of model. Suppose we define an n-sided die to be a partition of n(n+1)/2 with exactly n parts. Now it is possible for sides to have values greater than n. Put the uniform probability measure on the set of these partitions. Then the numerical simulations show that random triples are much less likely to form an intransitive cycle than they are to be a transitive chain. In this case the “typical” die is farther from the standard die than in the two models above.

    There are a couple of other models that show different degrees of intransitivity that are described in our paper.

  5. gowers Says:

    I’m beginning to think that the main conjecture should be true and not too hard to prove in the alternative model where a random n-sided die is simply given by choosing each face independently and uniformly from \{1,\dots,n\}. I think it should be the case that for almost all pairs (A,B) of random dice, if you choose a random die C, then the four possibilities for which of A,B it beats all occur with probability roughly 1/4. The proof would come down to analysing two sums of i.i.d. random variables. The first ones would be distributed as f_A and the second ones as f_B. The subtle point is that the two sums are not themselves independent: when we choose a random j to go into C, each f_A(j) is quite strongly correlated with f_B(j). But as long as A and B are not too similar, there should be plenty of j with f_A(j)>f_B(j) and plenty with f_B(j)>f_A(j). So I feel optimistic that we should be able to identify a condition that is almost always satisfied that makes precise the idea that A and B are not too similar.

    • gowers Says:

      Actually, the condition is not that mysterious. We need to look at the random variables (f_A(j),f_B(j)). I.e., we care about the joint distribution of f_A(j) and f_B(j) when a random j is chosen. This will give us a sum of 2D i.i.d. random variables and its distribution should be a 2D normal distribution. Then I think what matters is whether the corresponding ellipse is equally distributed in the four quadrants, or something like that.

      All this feels like the kind of thing that I or someone else should be able to work out given a quiet hour or two, which as it happens I don’t have for a little while …

    • gowers Says:

      I still don’t have time to sit down with a piece of paper, but I can be slightly more specific about what I think needs to be worked out. Given two random dice A and B, we will find that a typical C will beat each of them with probability approximately 1/2. That’s because the probability is related to a sum of independent bounded random variables, and that sum is with high probability close to its mean.

      To examine the correlation, we need to do something like this. We find that for some j we have f_A(j)>f_B(j) and for other j we have the reverse, and the average value of f_A(j)-f_B(j) should I think be close to zero even if we condition on A beating B, because typically A will only just beat B. I think it’s possible that one will be able to express all this rather neatly so that calculations become simple, but I can’t yet see it in my head.

  6. Henry Towsner Says:

    Regarding a spectral characterization for quasi-random tournaments, it seems that one does at least have a characterization in terms of counting a single tournament analogous to C_4 for graphs, namely the tournament with four vertices x,x’,y,y’ with both x vertices pointing at both y vertices.

    To see this, let \rho(x,y) be the function which is 1 if x points to y and -1 if y points to x. Then we can consider the value ||\rho||=\int \rho(x,y)\rho(x',y)\rho(x,y')\rho(x',y') dxdx'dydy'. Just like in the graph case, one can show that ||\rho||\geq||\rho(x,y)\chi_A(x)\chi_B(y)|| for any rectangle A\times B.

    In particular, if T is not quasi-random, we have A,B so that \int_{A} |\int_B \rho(x,y)dy|dx=\int|\int \rho(x,y)\chi_A(x)\chi_B(y)dy|dx=c (in the notation of Chung-Graham, this is written \sum_{v\in A}|d^+(v,B)-d^-(v,B)|=\Omega(n^2)), and therefore by Cauchy-Schwarz, ||\rho\chi_A\chi_B||\geq c^4>0.
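
    For a genuinely random tournament this four-point quantity is easy to estimate numerically. A minimal sketch (the normalised value, divided by the number of quadruples, should come out small, of order 1/m for a random tournament on m vertices):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    m = 201

    # A random tournament: rho[x, y] = +1 if x points to y, -1 if y points to x.
    upper = np.triu(rng.choice([-1, 1], size=(m, m)), k=1)
    rho = upper - upper.T  # skew-symmetric with zero diagonal

    # The sum over x, x', y, y' of rho(x,y) rho(x',y) rho(x,y') rho(x',y')
    # equals the squared Frobenius norm of rho @ rho.T.
    print(np.sum((rho @ rho.T) ** 2) / m**4)  # small for a random tournament
    ```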

    • gowers Says:

      This looks similar, but not identical, to the property P3 in Chung and Graham’s paper, which says that you have roughly the right number of “even 4-cycles” where that’s defined as a 4-cycle where the number of edges going in each direction is even.

    • Henry Towsner Says:

      Ah, you’re right. I was expecting to see a characterization of this kind, and didn’t notice that P3 could be interpreted that way.

  7. gowers Says:

    After thinking a bit more, I have realized that I have clearly misinterpreted Kent Morrison’s sentence “But we also looked at defining them to be ordered n-tuples with the uniform probability distribution.” I took it to mean that n-tuples were chosen uniformly from the n^n possible n-tuples of integers from \{1,2,\dots,n\}. It’s actually been quite instructive to think about this case and to see why one would not expect intransitivity to be common for it.

    Let A and B be two dice chosen from this model. As above, let f_A be the function defined by taking f_A(j) to be the number of i such that a_i>j minus the number of i such that a_i<j. (Actually, that’s minus what we had before, but it’s more convenient this way round for this comment.) Then A beats B if \sum_jf_A(b_j)>0.

    A key point with this model is that we are not assuming that the sum of the values of the faces is n(n+1)/2. That is what it will be on average, but there will be a standard deviation of order n^{3/2}, and that turns out to make a big difference.

    The reason it makes a difference is that f_A does not have to have average zero, so if we find that A beats B, that is quite strong evidence that for this particular A we have that f_A has an average somewhat greater than zero — perhaps some smallish multiple of n^{1/2}. And if that’s the case, then it makes A a whole lot more likely to beat some other C as well. So for this model, we certainly do not expect the tournament to be quasirandom.

    There will be a second part to this comment, but no time for that right now.

  8. gowers Says:

    Another way of seeing that if you define a random n-sided die to be a random element of \{1,2,\dots,n\}^n then you shouldn’t get intransitivity that often is as follows. Let A and B be two such dice chosen randomly and then fixed. Then the random variables f_A and f_B will have mean zero and variance of order n. Also, they will typically be fairly close to the uniform distribution on [-n,n].

    The variance of f_A+f_B will therefore be of order n^2, but the variance of f_A-f_B will be much smaller — I think a typical value will be of order \sqrt n so the variance should be around n.

    I’m not guaranteeing that everything I write is correct, but it seems to me, therefore, that if we take a sum of n independent random variables distributed like (f_A,f_B), we will get something close to a 2D normal distribution with major axis in the x+y direction and minor axis in the x-y direction, with a hugely bigger variance in the x+y direction. But this means that it will almost always lie either in the positive quadrant or in the negative quadrant. That is, if A beats C, then almost always B will beat C as well.

    Now that I’ve written that, I don’t believe it, because it’s too extreme. It would seem to imply that if I choose two random dice and find that one beats the other, then almost all the remaining dice are in between, and that clearly can’t be true: beating another die can give some information, but surely not that much. I’ll have to try to work out where I’ve gone wrong.

    • gowers Says:

      I think my mistake was in accidentally assuming that f_A-f_B will average zero. In this model, that may well not be the case, and if it isn’t, then a sum of random copies of f_A-f_B will have a significant drift.
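
      This is easy to look at numerically. A sketch: for a fixed pair A, B in the unconstrained model, the cloud of points (\sum_j f_A(c_j), \sum_j f_B(c_j)) over random C should indeed be a thin cigar along the diagonal, but typically centred well away from the origin, which is exactly the drift just described:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      n, trials = 100, 3000

      def S(A, C):
          """S = sum_j f_A(c_j), where f_A(j) = #{i: a_i > j} - #{i: a_i < j},
          so that S > 0 iff A beats C (the sign convention used just above)."""
          return sum(int(np.sum(A > c) - np.sum(A < c)) for c in C)

      # Two fixed dice with i.i.d. uniform faces and no constraint on the sum.
      A = rng.integers(1, n + 1, size=n)
      B = rng.integers(1, n + 1, size=n)

      pts = np.array([[S(A, C), S(B, C)] for C in
                      (rng.integers(1, n + 1, size=n) for _ in range(trials))])

      print(pts.mean(axis=0))                             # typically far from (0, 0)
      print(np.corrcoef(pts[:, 0], pts[:, 1])[0, 1])      # close to 1: a thin cigar
      print(np.mean((pts[:, 0] > 0) == (pts[:, 1] > 0)))  # same-side frequency for C
      ```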

  9. justatest29 Says:

    $math$

    • gowers Says:

      The answer to your implicit question is that if you want some mathematics to appear, you write it like TeX but with the word “latex” after the first dollar (with no space). For example, if you want to render the expression \int_0^1f(x)dx you write $laatex \int_0^1 f(x) dx$, except that there should be only one “a” in “latex”.

  10. gowers Says:

    Despite my not yet resolved confusion in my 10:12pm comment above, let me mention something else that seems relevant.

    Suppose now that we take as our model the uniform distribution over sequences of length n with values in \{1,2,\dots,n\} that add up to n(n+1)/2. (I think perhaps this is the model Kent Morrison meant that gives behaviour similar to the random-multiset model. But I’m not sure.) If we fix A and now take a random die C, we are taking n random samples f_A(j) of the random variable f_A, except that this time we know that the sum of the j is n(n+1)/2.

    Now let g_A(j) be the number of a_i that are at most j, and let h_A(j)=g_A(j)-j. Thus, h_A(j) is the number of a_i that are at most j minus what we expect that number to be. The order of magnitude of a typical h_A(j) will be about \sqrt n.

    If we know that c_1+\dots+c_n=n(n+1)/2, then we know that \sum_jh_A(c_j)=\sum_jg_A(c_j)-n(n+1)/2. I haven’t been careful enough about ties, but if I had, then I think this would be equal to \sum_jf_A(c_j).

    The basic point I want to make is that we seem, with this conditioning, to be able to write the quantity that determines whether A beats C as a sum of random variables that should have a much smaller variance than f_A, namely the variables h_A. Probably what was wrong with the earlier comment is that we can do something a bit like that there.

    I still feel confused, but I think the random variables h_A may be useful for showing that certain correlations are weak: I would expect h_A and h_B to be hardly correlated at all for most pairs A,B.

    • gowers Says:

      I think my confusion is now resolved (see May 1st 10:31am comment just above).

    • Kent Morrison Says:

      Yes, that’s the model that I meant. The sums of the n-tuples are n(n+1)/2 just as in the multiset model.

  11. gowers Says:

    OK, let me say a few things rigorously about the variables h_A.

    Let g_A(j) be |\{i:a_i\leq j\}|. Then

    \sum_jg_A(j)=\sum_i\sum_j\mathbf 1_{a_i\leq j}=\sum_i(n-a_i+1)

    If we make the assumption that \sum_ia_i=n(n+1)/2, then this equals n^2-n(n+1)/2+n=n(n+1)/2. Therefore, if we set h_A(j)=g_A(j)-j, then we get that \sum_jh_A(j)=0.

    Now let C=(c_1,\dots,c_n) and suppose that \sum_jc_j=n(n+1)/2. Then

    \sum_jh_A(c_j)=\sum_j(g_A(c_j)-c_j)

    =\sum_jg_A(c_j)-n(n+1)/2=\sum_i\sum_j\mathbf 1_{a_i\leq c_j}-n(n+1)/2.

    Actually, this is still unsatisfactory from the point of view of ties. I’ll write another comment soon with a better choice. But the above is coming close to showing that the h_A average zero and that to determine whether A beats a randomly chosen C we take the almost independent sum \sum_jh_A(c_j) and look at its sign.

    What I am now hoping is that for almost all pairs (A,B) the random variables h_A and h_B are of mean zero (that bit is OK) and are uncorrelated. That is, I want h_Ah_B to have mean close to zero as well. That should I think be enough to allow us to argue that if A beats C it gives us very little clue about whether B beats C.

  12. gowers Says:

    Here’s a cleaner choice of random variables to use. Again I’ll just do a few simple calculations.

    I’ll return to Kevin Costello’s random variables

    f_A(j)=|\{i:a_i<j\}|-|\{i:a_i>j\}|.

    On average, we expect that to be j-1-(n-j)=2j-(n+1). So we now create some new variables that we expect to be a lot smaller, namely

    h_A(j)=f_A(j)-2j+n+1.

    The main calculation I want to do is to show that if \sum_ia_i=n(n+1)/2, then \sum_jh_A(j)=0. Here it is.

    \sum_jh_A(j)=\sum_{i,j}(\mathbf 1_{a_i<j}-\mathbf 1_{a_i>j})-2\sum_jj+n(n+1)
    =\sum_i((n-a_i)-(a_i-1))=n(n+1)-2\sum_ia_i=0.

    • gowers Says:

      And another calculation is the following. Let C be another die, again with faces adding up to n(n+1)/2. Then

      \sum_jh_A(c_j)=\sum_{i,j}(\mathbf 1_{a_i<c_j}-\mathbf 1_{a_i>c_j})-2\sum_jc_j+n(n+1)

      The last two terms cancel, so the sign of this sum tells us whether A beats C or C beats A or it’s a tie.
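
      These identities are easy to sanity-check numerically. A quick sketch (the dice here are generated by simple rejection, which is not the uniform multiset distribution, but the identities hold for every die with the right face-sum, so that does not matter):

      ```python
      import random

      n = 12
      target = n * (n + 1) // 2

      def random_die():
          """Some n-tuple from {1,...,n} with face-sum n(n+1)/2, by rejection."""
          while True:
              d = [random.randrange(1, n + 1) for _ in range(n)]
              if sum(d) == target:
                  return d

      def f(A, j):
          return sum(a < j for a in A) - sum(a > j for a in A)

      def h(A, j):
          return f(A, j) - 2 * j + n + 1

      A, C = random_die(), random_die()
      print(sum(h(A, j) for j in range(1, n + 1)))  # 0, as in the calculation above
      # With the face-sums constrained, the next two sums are equal, and their
      # common sign says whether C beats A (positive), A beats C, or it's a tie:
      print(sum(h(A, c) for c in C), sum(f(A, c) for c in C))
      ```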

  13. gowers Says:

    I’ve done another calculation, which I won’t present here, but I’ll give the result. (Maybe I’ll give the full calculation in a second post at some point.) It’s an expression for \sum_jh_A(j)h_B(j), where h_A and h_B are defined as in the previous comment. I end up with

    n^3 - 2\sum_{i,i'}|a_i-b_{i'}|+2(\sum a_i^2+\sum b_i^2)

    -2n^2(n+1)+\frac 23n(n+1)(2n+1)-n^2(n+1)

    I’ve written it in that not quite fully simplified form to make it more transparent where the terms come from if anyone else feels like doing the calculation too.

    What I’d like is to be able to show that for a typical pair (A,B) this is a lot smaller than \sum_jh_A(j)^2 or \sum_jh_B(j)^2. However, I don’t see how to do that at the moment. But intuitively I’d expect h_A(j)^2 to be around n on average, and I’d expect h_A(j)h_B(j) to be around 0 on average.
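
    Here is a rough numerical test of that expectation (a sketch; I draw the dice from the i.i.d.-faces-conditioned-on-the-sum model by rejection, which I am assuming does not change the orders of magnitude, and for moderate n the separation is modest but should grow):

    ```python
    import random

    n, trials = 40, 200
    target = n * (n + 1) // 2

    def random_die():
        while True:
            d = [random.randrange(1, n + 1) for _ in range(n)]
            if sum(d) == target:
                return d

    def h_values(A):
        """h_A(j) = f_A(j) - 2j + n + 1 for j = 1,...,n."""
        return [sum(a < j for a in A) - sum(a > j for a in A) - 2 * j + n + 1
                for j in range(1, n + 1)]

    cross = diag = 0.0
    for _ in range(trials):
        hA, hB = h_values(random_die()), h_values(random_die())
        cross += abs(sum(x * y for x, y in zip(hA, hB)))
        diag += sum(x * x for x in hA)
    print(cross / trials, diag / trials)  # the cross term should be the smaller
    ```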

  14. Bogdan Grechuk Says:

    A couple of suggestions/ questions / reformulations.

    The question is whether the tournament G whose vertices are dice, with arrows recording who beats whom, is quasi-random. A necessary (but not sufficient) condition for this is that, for almost every vertex/die A, |Out(A)-In(A)|=o(|G|), where Out(A) and In(A) are the number of dice beaten by A and the number of dice that beat A, respectively. More precisely, for every e>0 there is N such that if |G|>N, then |Out(A)-In(A)| < e|G| holds for all except an e fraction of the A.

    I would start by investigating for which distributions this holds. For example, if we do not require the sum to be fixed, then it obviously fails for those A which happen to have a sum much higher or lower than average.

    On a different note, it seems convenient to subtract (n+1)/2 from the values (1,2,…,n), and to assume that n=2k+1 is odd, so that the possible values are (-k,-k+1,…,k-1,k) and the sum is required to be 0. More generally, why not allow arbitrary distributions, including continuous ones? A nice special case is when the die values are taken from the standard normal distribution, subject to the condition that the sum is 0. By the rotation-invariance of the normal distribution, this may be equivalent (?) to selecting dice uniformly at random from the n-dimensional unit sphere intersected with the hyperplane x_1+…+x_n=0. This distribution looks “nicer” than the one suggested by the authors. At least, in this setting, the ties question trivially holds. Now the question is: does (almost) every point A on the sphere beat a region of measure about half of the sphere?
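
    This continuous model is very easy to simulate (a sketch; centring and normalising a Gaussian vector gives a uniform point of the sphere-slice, by rotation-invariance):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, opponents = 50, 2000

    def random_die():
        """Uniform point on the unit sphere intersected with x_1+...+x_n = 0."""
        x = rng.standard_normal(n)
        x -= x.mean()                # project onto the hyperplane
        return x / np.linalg.norm(x)

    def beats(A, B):
        """A beats B iff #{(i,j): a_i > b_j} > #{(i,j): a_i < b_j}."""
        diff = A[:, None] - B[None, :]
        return np.sum(diff > 0) > np.sum(diff < 0)

    A = random_die()
    wins = sum(beats(A, random_die()) for _ in range(opponents))
    print(wins / opponents)  # the question: is this close to 1/2 for almost all A?
    ```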

    • gowers Says:

      That sounds like a nice question at the end there.

      Maybe it will be helpful (for others) if I mention an example that demonstrates that the condition you discuss is indeed not sufficient. Take the cyclic group of order n and put an arrow from x to y if y-x\in\{1,2,\dots,\lfloor n/2\rfloor\}. Then each vertex is joined to almost exactly half the other vertices, but the out-neighbourhoods of vertices that are close to each other are strongly correlated, so the tournament is far from quasirandom.
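
      This example also shows up clearly in the four-point quantity from Henry Towsner's comment above (a quick sketch; m is taken odd so that the cyclic rule really does orient every edge):

      ```python
      import numpy as np

      m = 201  # odd, so that x -> y and y -> x never both hold below
      diff = (np.arange(m)[None, :] - np.arange(m)[:, None]) % m
      rho = np.where(diff == 0, 0, np.where(diff <= m // 2, 1, -1))

      # The same normalised four-point quantity as for a random tournament:
      print(np.sum((rho @ rho.T) ** 2) / m**4)  # bounded away from 0: not quasirandom
      ```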

  15. Robert Kent Says:

    Possible typo (unless I’m being dim): in your list of Conrey’s 3 conditions in the fourth para of your post, shouldn’t the formula for the expectation be the sum over m of [m.p subscript m] rather than the sum over m of [p subscript m]?

    Thanks, it was indeed a typo — now corrected.

  16. Graham Jones Says:

    It seems it might be easier, or anyway neater, if the probabilities don’t have to be multiples of 1/n. So a die is a vector p=(p_1, \dots, p_n)^T with p_i \geq 0, \sum_i p_i = 1, and \sum_i i p_i = (n+1)/2. According to my calculations, which need to be checked…

    Let M be an n\times n matrix with 1s on the diagonal, 0s above the diagonal and 2s below. Let s=(1/n, 1/n, \dots, 1/n)^T be the standard die. Then p beats q iff p^T Mq > 1. Also p^T Mp = 1 for any allowed p, and the condition \sum_i i p_i = (n+1)/2 is equivalent to p^T Ms= 1.

    In the n=4 case the allowed ps lie in a 2D kite shape, with corners at (1/2,0,0,1/2), (0,3/4,0,1/4), (0,1/2,1/2,0), and (1/4,0,3/4,0). If you draw a line through p and s, it divides the kite into the dice that p beats and those that beat it.
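
    These identities are easy to check numerically. A sketch (the sampler below produces some distribution on the polytope of allowed p by mixing two Dirichlet samples; it is not a canonical choice, and is only meant for testing the algebra):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 6
    M = np.tril(2.0 * np.ones((n, n))) - np.eye(n)  # 1s on diag, 0s above, 2s below
    s = np.full(n, 1.0 / n)                          # the standard die
    vals = np.arange(1, n + 1)

    def random_die():
        """Some point of {p >= 0, sum p = 1, sum_i i p_i = (n+1)/2}: a mixture
        of two Dirichlet samples straddling the expectation constraint."""
        while True:
            q1, q2 = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
            e1, e2 = q1 @ vals, q2 @ vals
            if min(e1, e2) <= (n + 1) / 2 <= max(e1, e2):
                lam = ((n + 1) / 2 - e2) / (e1 - e2)
                return lam * q1 + (1 - lam) * q2

    p, q = random_die(), random_die()
    print(np.isclose(p @ M @ p, 1.0))   # p^T M p = 1 for every allowed p
    print(np.isclose(p @ M @ s, 1.0))   # the expectation condition p^T M s = 1
    P_win = sum(p[i] * q[j] for i in range(n) for j in range(i))          # P(p > q)
    P_lose = sum(p[i] * q[j] for i in range(n) for j in range(i + 1, n))  # P(p < q)
    print((p @ M @ q > 1) == (P_win > P_lose))  # the "beats" criterion matches
    ```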

    • Kent Morrison Says:

      This is an attractive model that should show the same behavior as the multiset dice model and the ordered tuple dice model (both with sums constrained to be n(n+1)/2). There is the advantage that ties obviously have probability zero, and I think that the probabilities of the tournament graphs converge faster to the equally likely probability as n grows.

      I’ve just done some random sampling of three distribution vectors for n \le 8, and for each n the 8 different tournament graphs all occur with about the same frequency for a few thousand samples.

      Rather than using M, one could use the skew-symmetric matrix G with 1’s below the diagonal, and then, necessarily, 0’s on the diagonal and -1’s above the diagonal. Then p beats q iff p^T G q > 0, and the condition that \sum_i i p_i = (n+1)/2 is equivalent to p^T G s =0.

  17. gowers Says:

    Suppose that A=(a_1,\dots,a_n) is a random n-tuple of integers from \{1,2,\dots,n\}, chosen uniformly from all such n-tuples subject to the condition that \sum_ia_i=n(n+1)/2. Now let u_A(j) be the number of i such that a_i\leq j, and let v_A(j)=u_A(j)-j. Given j_1 and j_2, I would like to understand how the random variables v_A(j_1) and v_A(j_2) depend on each other, and roughly how they are distributed.

    I would expect that if j is not too close to 1 or n, then v_A(j) is approximately normally distributed with mean zero. (The mean zero part is OK.) But while that’s easy to prove without the conditioning on the sum, it is less clear with that conditioning.

    Also, if j_1 and j_2 are very close, then obviously v_A(j_1) and v_A(j_2) will be positively correlated. But what about, say, v_A(n/3) and v_A(2n/3)?

    If A is a purely uniform n-tuple of elements of \{1,2,\dots,n\}, then these two random variables are correlated. Indeed, if u_A(n/3)=n/3+C\sqrt n, then there are 2n/3-C\sqrt n more numbers, all in the range from n/3 to n, to be chosen randomly. On average, half of these will be less than or equal to 2n/3, so on average u_A(2n/3) will be
    n/3+C\sqrt n+(2n/3-C\sqrt n)/2=2n/3+C\sqrt n/2.
    So v_A(n/3) and v_A(2n/3) are positively correlated.

    But if we condition on the sum being n(n+1)/2, I am no longer sure what happens, and I don’t even have a confident guess. If v_A(n/3) is C\sqrt n (with C some smallish positive constant, say) then “too many” terms in the sequence are below n/3. On average, that should mean that in order to make sure that the sum of the sequence isn’t too small, the terms are on average slightly bigger than one would expect given whether they are below n/3 or above it. But that would produce a tendency for v_A(2n/3) to be smaller than expected. The question I’m left with is whether it is smaller by an amount that exactly cancels out the positive correlation that occurred when we didn’t condition on the sum.

    This would be an easy question to test experimentally. The experiment I would be interested in is simply to take a random sample of several sequences, say of length 300 and taking values from 1 to 300, conditioned on having sum 45150, plotting the values (v_A(100),v_A(200)) for each one, and looking at how those points are distributed. My dream would be that they would look like a standard 2D normal distribution, but that might be too much to ask.

    If I were less busy, I would code this up for myself. If nobody feels like doing it, then maybe I’ll get round to it in a couple of weeks.
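
    For anyone tempted, here is one way the experiment might be set up (a sketch; it uses a simple Metropolis-style chain with sum-preserving moves, on the assumption, which I have not verified, that the chain mixes reasonably quickly):

    ```python
    import random

    n = 300
    # The standard die already has the target sum n(n+1)/2 = 45150.

    def sample_sequence(steps=20_000):
        """Approximately uniform over sequences in {1,...,n}^n with the given
        sum: the moves preserve the sum and the proposal is symmetric, so the
        uniform distribution is stationary for the chain."""
        a = list(range(1, n + 1))
        for _ in range(steps):
            i, j = random.randrange(n), random.randrange(n)
            d = random.randint(-5, 5)
            if 1 <= a[i] - d <= n and 1 <= a[j] + d <= n:
                a[i] -= d  # move d from one face to another; the sum is unchanged
                a[j] += d
        return a

    def v(a, j):
        """v_A(j) = #{i : a_i <= j} - j."""
        return sum(x <= j for x in a) - j

    pts = [(v(a, 100), v(a, 200)) for a in (sample_sequence() for _ in range(500))]
    m1 = sum(p for p, _ in pts) / len(pts)
    m2 = sum(q for _, q in pts) / len(pts)
    cov = sum((p - m1) * (q - m2) for p, q in pts) / len(pts)
    var1 = sum((p - m1) ** 2 for p, _ in pts) / len(pts)
    var2 = sum((q - m2) ** 2 for _, q in pts) / len(pts)
    print(m1, m2, cov / (var1 * var2) ** 0.5)  # the correlation in question
    ```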

    • Daniel Fremont Says:

      Unless I made a mistake (quite possible since I coded this up hastily), it looks like they are negatively correlated. I’m not sure if I can post pictures here, so here’s a PDF (original notebook is here).

      The plots (showing the 2D distribution and its marginals) are made from 40,000 samples with n=150 (I couldn’t get to 300 because of some scalability issue I don’t have time to look at now).

    • gowers Says:

      Thanks for this, and apologies that it has sat in the moderation queue for over a week.

  18. Ian Says:

    Hello Professor Gowers,

    My brother and I took a shot at doing the numerical computation suggested in your last post. We collected 100K sequences that summed to 45150 and generated the requested statistics. We are fairly sure we understood your formulation and constraints, but regardless all of the data / images / code (matlab script) can be found in the following google drive link:

    https://drive.google.com/drive/folders/0B8z2ZOOo7rKRYjcwOERpMWc3S2M?usp=sharing

    We found that v(100) and v(200) were both very close to zero, and that the joint distribution does look approximately normal.

    If this is helpful, we can increase our collection to somewhere around 1M sequences, and can post the results tomorrow.

    • Ian Says:

      Sorry, I meant the means of v(100) and v(200) are both close to zero. There is also a text file (Sequences.txt) in the google drive folder that contains ~100K sequences of length 300 that sum to 45150.

    • gowers Says:

      Thank you very much for doing that. It does indeed seem as though the joint distribution is normal, though I had hoped that the variables would be, by some kind of miracle, uncorrelated, whereas in fact they seem to be negatively correlated. Assuming that that is genuinely the case, I’ll have to think where it leaves us.

    • Ian Says:

      Please forgive me, but maybe the negative correlation makes sense? If you have a significant number of faces in this case with values less than 100, in order to preserve the sum wouldn’t many of the remaining faces have to have values greater than 200? Of course, this presupposes that we understand the problem at hand.

    • gowers Says:

      I forgive everything except the words “please forgive me”! I completely agree that the negative correlation makes sense. Let me try to explain why I entertained the hope that there might be zero correlation. It was because there are two competing effects. On the one hand there is the effect you talk about: if there are too many values less than 100, then typically the remaining values are going to have to be a bit bigger to compensate. That certainly has a tendency to make correlations negative. On the other hand, if there are more small values, then that has a tendency to increase f_A(j) for all j\geq 100 and create positive correlations. (To give an extreme example, f_A(100) is clearly very strongly and positively correlated with f_A(101).) I was hoping that a miracle might occur and that for j_1 and j_2 sufficiently far apart these two effects might cancel each other out exactly. But I had no heuristic argument for why they should, and it now appears that they don’t. Probably what happens in reality is that if j_1 and j_2 are close, then there is a positive correlation, and if they are far apart then there is a negative one.

      By a piece of good fortune, Persi Diaconis is in Cambridge for a few days. I had a brief chat with him yesterday evening, and it seems that the kinds of questions that are arising here are much studied by probabilists. So I hope to post soon with some pointers to the relevant literature (unless someone here beats Diaconis to it).

      To give an example of the kind of question I plan to ask him, suppose we know that u_A(n/3)=n/3+C\sqrt n. Then what is known about the distribution of \sum_{a_j\leq n/3}a_j?

      If we had a clear picture of that, then we could condition on the value of that sum, and we would then know that the rest of A consisted of 2n/3-C\sqrt n numbers between n/3 and n, and we would know their sum, so we would be back in the original situation but with slightly different parameters.

      None of this is quite what we want to know, since what is really interesting to us is how the random variables f_A(j) and f_B(j) relate to each other for a typical pair of dice A and B. But if we can get a handle on the distributions and correlations for a single die, we should be in a much better position to answer this correlation question. Watch this space …

    • Ian Says:

      If anyone is interested, we generated a correlation plot for all the Vs:

      https://drive.google.com/drive/folders/0B8z2ZOOo7rKRYjcwOERpMWc3S2M

      I think the plot is beautiful. It does show that Vs close to each other (near the diagonal) are positively correlated and Vs farther away from each other tend to be negatively correlated, as Professor Gowers speculated.

      We used 4.5M length 300 sequences to calculate the correlations.

    • gowers Says:

      I’m definitely interested. It was a stroke of luck that I suggested looking at v_A(100) and v_A(200), where the negative correlation is quite pronounced. I had considered going for n/2 and 3n/4, but if we’d done that, then we’d have taken 150 and 225, which would have made the correlation a lot weaker — perhaps enough to fool us into thinking that it was zero.

      I hope that after talking to Diaconis tomorrow, I’ll know how to calculate the correlation function for large n, or at least that I’ll know where to look in order to learn how to do it. I still haven’t thought about how these correlations affect the conjecture about intransitive dice.

  19. Da Zhao Says:

    I think we may extend the definition of “dice” a bit: say a die is a partition of n into exactly k parts.

  20. Dsp Says:

    Was the polymath project on the union-closed sets conjecture ever officially ended?

    • gowers Says:

      No. I feel guilty about that, because it’s on my to-do list to write a document summarizing the ideas we had and the observations we made, but I’m not sure when I’ll get round to it.

      But there’s little doubt that the project is closed — even if there wasn’t a neat closure.

  21. Gil Kalai Says:

    Dice paradoxes are closely related to voting paradoxes à la Condorcet, and there is a very large body of work on such paradoxes (for elections and also for dice). Here are a few papers where these connections are mentioned:

    1) Richard Savage, The Paradox of Nontransitive Dice, American Math. Monthly, 1994.

    2) N. Alon, Voting paradoxes and digraphs realizations, Advances in Applied Math. 29 (2002), 126-135 http://www.tau.ac.il/~nogaa/PDFS/voting.pdf

    3) N. Alon, G. Brightwell, H. A. Kierstead, A. V. Kostochka and P. Winkler, Dominating Sets in k-Majority Tournaments, J. Combinatorial Theory, Ser. B 96 (2006), 374-387. http://www.tau.ac.il/~nogaa/PDFS/abkkw2.pdf

    Regarding the probability of paradoxical outcomes there is also a large literature for majority rules in elections, but I am not aware of much work on random dice. For general election rules the phenomenon that all tournaments occur in the limit with the same probability is related to “noise sensitivity” (this is a connection that I discussed here:

    4) Noise sensitivity and chaos in social choice theory.
    http://www.ma.huji.ac.il/~kalai/CHAOS.pdf ).

    I would guess that this connection extends also to the case of dice, although I did not study it. For the majority rule different tournaments do not occur with the same probability, and this is related to the noise stability of majority. For dice, I suppose the issue would be whether “die A beats die B” is a noise-sensitive property.

    • Gil Kalai Says:

      One further remark on the case of tournaments based on permutations. The model is this: You have a Boolean function f on n variables. There are n voters. For each voter you consider a random permutation on m elements (candidates). To decide if candidate k beats candidate m you compute f(x_1,…,x_n), where x_i is 1 if on the ith permutation k is ahead of m. (You need that f(1-x_1,…,1-x_n)=1-f(x_1,…,x_n).)

      The tournament you obtain is a random tournament on m vertices. It turns out that if it is quasi random (in the very weak sense that it is random on m elements) then it is fully random (as n goes to infinity).

    • Gil Kalai Says:

      It turns out that if it is quasi random (in the very weak sense that it is random on three elements) then it is fully random. (Both statements as n goes to infinity).

    • Tobias Fritz Says:

      As I just learned in a talk of Winfried Bruns, there are recent results on the probability of occurrence of a Condorcet paradox: see https://arxiv.org/abs/1704.00153.

      I haven’t been following the intransitive dice project much, so I’m not sure how interesting or useful this will be here.
