However, I don’t rule out that there is in the literature, and perhaps even in the Lawler-Limic book, a result we could quote that would save us from having to prove this small modification of existing results. It would be nice to be able to do that.

The reason is connected with the ideas in the comment thread started by Thomas Budzinski just above. If those ideas are roughly correct, then what they tell us is that quasirandomness fails to occur because there is a non-negligible probability that two dice are “close” in a suitable sense. In particular, I currently believe that a proof of non-quasirandomness along the following lines should work, though clearly there will be details to be sorted out.

1. Choose some fixed positive integer, such as 10.

2. Prove that with probability bounded away from zero, the values of are approximately when and when .

3. Prove that if and both satisfy the above condition, then there is significant correlation between the events “ beats ” and “ beats ”.
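As an illustration of the kind of correlation step 3 predicts, here is a rough Monte Carlo sketch. The model and names are my own choices for illustration, not from the thread: dice are sampled as sequences in [n]^n conditioned on the standard sum, two dice A and B are fixed, and we estimate the covariance of the indicator events “A beats C” and “B beats C” over random C.

```python
import random
from itertools import product

def beats(a, b):
    """+1 if die a beats die b, -1 if it loses, 0 on a tie
    (count which die shows the strictly higher face more often)."""
    wins = sum(1 for x, y in product(a, b) if x > y)
    losses = sum(1 for x, y in product(a, b) if x < y)
    return (wins > losses) - (wins < losses)

def random_balanced_die(n, rng):
    """Rejection-sample a sequence from [n]^n conditioned on the
    standard-die sum n(n+1)/2."""
    target = n * (n + 1) // 2
    while True:
        d = [rng.randint(1, n) for _ in range(n)]
        if sum(d) == target:
            return d

def beat_covariance(n=6, trials=1000, seed=0):
    """Empirical covariance of the events 'A beats C' and 'B beats C'
    for fixed random balanced A, B and freshly sampled C."""
    rng = random.Random(seed)
    A, B = random_balanced_die(n, rng), random_balanced_die(n, rng)
    pa = pb = pab = 0
    for _ in range(trials):
        C = random_balanced_die(n, rng)
        x, y = beats(A, C) == 1, beats(B, C) == 1
        pa += x
        pb += y
        pab += x and y
    return pab / trials - (pa / trials) * (pb / trials)
```

Quasirandomness would predict this covariance is negligible; the conjecture above is precisely that it is not.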

Assuming a proof like that can be made to work, I find it very hard to see what extra condition could hope to stop it working. The proof would be telling us that the set of dice is in some sense “low-dimensional”, and that seems hard to cure by passing to a naturally defined subset.

If we denote the values of the die faces by v_i, then the constraint on the sum is:

sum v_i = n(n+1)/2

And the “complementary die” will also have the same sum of squares:

sum (n+1-v_i)^2 = sum [ (n+1)^2 - 2(n+1)v_i + (v_i)^2 ] = n(n+1)^2 - 2(n+1)n(n+1)/2 + sum (v_i)^2 = sum (v_i)^2
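This algebra is easy to sanity-check numerically. A minimal sketch, assuming the convention that a die is a list of face values and n is the number of sides (note the last step of the algebra needs the sum constraint):

```python
def complement(die, n):
    """The complementary die: v_i -> n + 1 - v_i."""
    return [n + 1 - v for v in die]

def same_moments(die, n):
    """Check that complementation preserves both the sum and the sum
    of squares; per the algebra above this requires sum = n(n+1)/2."""
    assert sum(die) == n * (n + 1) // 2, "needs the sum constraint"
    c = complement(die, n)
    return (sum(c) == sum(die)
            and sum(v * v for v in c) == sum(v * v for v in die))
```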

I’m not sure of a good way to generate random dice with the usual average constraint along with the sum of squares constrained to that of the standard die, but for small n it isn’t too hard to just calculate all of them. With n=14, there are 3946 proper dice with the additional sum-of-squares constraint. A die picked at random beats roughly as many dice as it loses to. But if we pick two at random and look at how they separate the remaining dice into four quadrants, they do not appear to split the space evenly. There still appears to be a lot of correlation.
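The brute-force enumeration described here can be sketched as follows. I am assuming the multiset model; the quoted count of 3946 for n=14 may correspond to a different convention (e.g. sequences), so treat the function as illustrative rather than definitive.

```python
def constrained_dice(n):
    """Enumerate all multisets of n faces from {1,...,n} whose sum and
    sum of squares both match the standard die, i.e. sum = n(n+1)/2
    and sum of squares = n(n+1)(2n+1)/6."""
    target_s = n * (n + 1) // 2
    target_q = n * (n + 1) * (2 * n + 1) // 6
    out = []

    def rec(min_face, remaining, s, q, acc):
        if remaining == 0:
            if s == 0 and q == 0:
                out.append(list(acc))
            return
        for v in range(min_face, n + 1):
            # Faces are chosen in non-decreasing order, so once v
            # overshoots either remaining budget, larger v will too.
            if v > s or v * v > q:
                break
            acc.append(v)
            rec(v, remaining - 1, s - v, q - v * v, acc)
            acc.pop()

    rec(1, n, target_s, target_q, [])
    return out
```

For n=4 the standard die [1, 2, 3, 4] is the only such die; the count grows quickly with n.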

So to get random outcomes for three dice, maybe there is some combination of constrained expectation + “condition X” that would be sufficient. I’m not sure what condition X is, but it still feels to me like the “complementary die” involution plays an important role. I’ve toyed with some ideas but haven’t managed to come up with anything convincing. Such a result would also apply directly to the multiset case (which also meets those conditions).

We can also compare (balanced and non-balanced) dice based on other Boolean functions.

The condition for the strong conjecture set out in that post is that with probability we should have

.

But it seems to me that that could easily be false: with macroscopic probability we should have small, and therefore with macroscopic probability both and are small. But with macroscopic probability that should not imply that is small.

Actually, it looks as though proving these assertions will be of roughly equivalent difficulty to proving the ones you wanted: perhaps we just want a general theorem that says that certain broad shapes occur with macroscopic probability.

The harder part would be to show that if and are uniform balanced dice, they are close to each other (and not too close to ) with macroscopic probability. It seems more or less equivalent to proving that the sequence of processes is tight and does not converge to (it should actually converge to some Brownian-motion-related process, the most natural candidate being a Brownian bridge conditioned to have integral on ).

To better fit the improper dice model with this proof technique, it can be viewed as looking at [n]^n sequences and conditioning on the sum being m(m+1)/2 for some m&lt;n. Since this is just conditioning on a different V value, maybe this is just a small step from the current results.
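A sketch of this improper-dice model via rejection sampling (the function name and parameters are illustrative, not from the thread; note the target sum m(m+1)/2 must be at least n for any sample to exist):

```python
import random

def random_improper_die(n, m, seed=None):
    """Sample a sequence from [n]^n conditioned on the sum being
    m(m+1)/2 for some m < n, i.e. conditioning on a different
    value of the sum than the standard die's."""
    assert m < n and m * (m + 1) // 2 >= n, "target sum unreachable"
    target = m * (m + 1) // 2
    rng = random.Random(seed)
    while True:
        d = [rng.randint(1, n) for _ in range(n)]
        if sum(d) == target:
            return d
```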

Given one model where the average-value constraint isn't sufficient, along with one showing that removing the average-value constraint breaks things (if the proof for #3 works out), it feels like the constraint is necessary but must be "balanced". Another nugget pointing this way is that the multiset model also appears to work, and the constraints appear "balanced" here as well.

Notice that in the unconstrained [n]^n sequence model, the "complementary die" involution often changes the average value. And notice that in the "improper dice" model this involution does not exist (the complementary die may not even be in the set). I wonder if these two conditions together are necessary.

Are there any models that both have a constrained average and for which the involution exists, yet the dice are transitive? I think someone mentioned that there is a "continuous" model of the dice which is transitive, which would be an example. Has this been shown? And are there any finite models of dice that also display this?
