We looked at that book, and it is not impossible that it contains a result we can use. However, it wasn’t clear that there was a result we could quote directly. The problem is that (if my understanding is correct) there is a constant involved that depends on the distribution. And the dependence is necessary, since one could have a distribution that is almost entirely, but not quite, concentrated in a sublattice. What we need is an estimate after steps of a random walk, where the random walk itself depends on . So in the end we had to prove explicit bounds on the characteristic function outside a certain small region and go through what I imagine is essentially the standard characteristic-function approach to the LCLT (local central limit theorem), making sure that the final bound obtained is good enough.

However, I don’t rule out that there is in the literature, and perhaps even in the Lawler-Limic book, a result we could quote that would save us from having to prove this small modification of existing results. It would be nice to be able to do that.

I think there may not be any nice condition that guarantees quasirandomness, though I would be happy to be wrong about this.

The reason is connected with the ideas in the comment thread started by Thomas Budzinski just above. If those ideas are roughly correct, then what they tell us is that quasirandomness fails to occur because there is a non-negligible probability that two dice are “close” in a suitable sense. In particular, I currently believe that a proof of non-quasirandomness along the following lines should work, though clearly there will be details to be sorted out.

1. Choose some fixed positive integer such as 10.

2. Prove that with probability bounded away from zero, the values of are approximately when and when .

3. Prove that if and both satisfy the above condition, then there is significant correlation between the events “ beats ” and “ beats ”.

Assuming a proof like that can be made to work, I find it very hard to see what extra condition could hope to stop it working. The proof would be telling us that the set of dice is in some sense “low-dimensional”, and that seems hard to cure by passing to a naturally defined subset.
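The failure of quasirandomness predicted by the sketch above is something one can probe numerically. Here is a minimal Monte Carlo sketch (all names and parameters are my own choices, not from the thread) in the balanced-sequence model: a die is a sequence in [n]^n conditioned on having the standard sum n(n+1)/2, and quasirandomness would predict that the events “A beats C” and “B beats C” are approximately independent for random A, B, C.

```python
# Monte Carlo probe of the correlation the sketch predicts.
# "Die" = sequence in {1..n}^n conditioned on sum n(n+1)/2 (rejection sampling).
import random

def random_balanced_die(n, rng):
    """Rejection-sample a sequence in {1..n}^n with sum n(n+1)/2."""
    target = n * (n + 1) // 2
    while True:
        die = [rng.randint(1, n) for _ in range(n)]
        if sum(die) == target:
            return die

def beats(a, b):
    """True if die a beats die b: strictly more winning face pairs than losing ones."""
    wins = sum(1 for x in a for y in b if x > y)
    losses = sum(1 for x in a for y in b if x < y)
    return wins > losses

def correlation_estimate(n, trials, rng):
    """Estimate P(A beats C), P(B beats C), P(both) over random triples."""
    both = pa = pb = 0
    for _ in range(trials):
        a = random_balanced_die(n, rng)
        b = random_balanced_die(n, rng)
        c = random_balanced_die(n, rng)
        ea, eb = beats(a, c), beats(b, c)
        pa += ea
        pb += eb
        both += ea and eb
    return pa / trials, pb / trials, both / trials

rng = random.Random(0)
pa, pb, pboth = correlation_estimate(8, 500, rng)
# under quasirandomness one would expect pboth close to pa * pb
print(pa, pb, pboth)
```

If the sketch above is right, pboth should be noticeably larger than pa * pb, reflecting the positive correlation coming from pairs of “close” dice.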

Hmm… maybe this should be obvious, but I don’t understand this intuitively: For some reason a constraint on the sum of squares is compatible with the existence of the “complementary die” involution.

If we denote the values of the die faces as v_i, then the constraint on the sum is:

sum v_i = n(n+1)/2

And the “complementary die” will also have the same sum of squares:

sum (n+1-v_i)^2 = sum [ (n+1)^2 - 2(n+1)v_i + (v_i)^2 ] = n(n+1)^2 - 2(n+1)·n(n+1)/2 + sum (v_i)^2 = n(n+1)^2 - n(n+1)^2 + sum (v_i)^2 = sum (v_i)^2
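The identity can be sanity-checked numerically. A quick sketch (the sampling method and parameters are mine): draw a balanced die in the sequence model and verify that its complement has the same sum and the same sum of squares.

```python
# Numeric check of the identity above: the complementary die n+1-v_i
# has the same sum of squares whenever sum v_i = n(n+1)/2.
import random

n = 14
target = n * (n + 1) // 2
rng = random.Random(1)

# rejection-sample a die (sequence model) with the standard sum
while True:
    v = [rng.randint(1, n) for _ in range(n)]
    if sum(v) == target:
        break

comp = [n + 1 - x for x in v]
assert sum(comp) == target                                # complement is also balanced
assert sum(x * x for x in comp) == sum(x * x for x in v)  # same sum of squares
print("identity holds for a random balanced die, n =", n)
```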

I’m not sure of a good way to generate random dice with the usual average constraint along with the sum of squares constrained to that of the standard die, but for small n it isn’t too hard to just calculate all of them. With n=14, there are 3946 proper dice with the additional sum-of-squares constraint. Picking a random die, it beats roughly as many dice as it loses to. But if we pick two at random and check how they separate the remaining dice into four quadrants, they do not appear to split the space evenly. There still appears to be a lot of correlation.
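The exhaustive calculation described above can be sketched as follows, using the multiset model of proper dice (I use n=8 here to keep the search small; the n=14 count quoted above would come from the same kind of search). The structural checks — that the standard die qualifies and that the constrained set is closed under the complementary-die involution — follow from the identity discussed in this comment.

```python
# Enumerate all multisets of n faces from {1..n} that have both the
# standard sum and the standard sum of squares.
from itertools import combinations_with_replacement

def constrained_dice(n):
    """All nondecreasing n-tuples over {1..n} with the standard sum
    n(n+1)/2 and the standard sum of squares n(n+1)(2n+1)/6."""
    s1 = n * (n + 1) // 2
    s2 = n * (n + 1) * (2 * n + 1) // 6
    return [d for d in combinations_with_replacement(range(1, n + 1), n)
            if sum(d) == s1 and sum(x * x for x in d) == s2]

dice = constrained_dice(8)
standard = tuple(range(1, 9))
assert standard in dice
# the constrained set is closed under the complementary-die involution
for d in dice:
    comp = tuple(sorted(9 - x for x in d))
    assert comp in dice
print(len(dice), "constrained proper dice for n = 8")
```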

I meant that the value is n(n+1)/2. (Sorry, not the sum I wrote, but the verbal description.) The additional condition is that the sum of squares should also be its expected value over all dice.

Assuming by “balancedness” you mean constraining the expected value of a die roll to be the same for all dice, then balancedness does not imply random outcomes for three dice. For example, if we use the sequence model but condition on an average other than n(n+1)/2, transitivity can come back. Or at least that is what is suggested by the “improper” dice in the original paper.

So to get random outcomes for three dice, maybe there is some combination of a constrained expectation plus “condition X” that would be sufficient. I’m not sure what condition X is, but it still feels to me like the “complementary die” involution plays an important role. I’ve toyed with some ideas but haven’t managed to come up with anything convincing. Such a result would also apply directly to the multiset case (which also meets those conditions).
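One experiment that bears on the claim that a non-standard average can bring transitivity back (a sketch of my own, with no claim about what the numbers will show): sample triples of sequence dice conditioned on a target sum, and measure how often the triple forms a 3-cycle, comparing the balanced target with a shifted one.

```python
# Compare the frequency of intransitive triples at the balanced sum
# versus a shifted sum, in the conditioned-sequence model.
import random

def random_die_with_sum(n, target, rng):
    """Rejection-sample a sequence in {1..n}^n with the given sum."""
    while True:
        die = [rng.randint(1, n) for _ in range(n)]
        if sum(die) == target:
            return die

def beats(a, b):
    wins = sum(1 for x in a for y in b if x > y)
    losses = sum(1 for x in a for y in b if x < y)
    return wins > losses

def intransitive_fraction(n, target, trials, rng):
    """Fraction of sampled triples forming a beats-cycle in either direction."""
    hits = 0
    for _ in range(trials):
        a = random_die_with_sum(n, target, rng)
        b = random_die_with_sum(n, target, rng)
        c = random_die_with_sum(n, target, rng)
        if (beats(a, b) and beats(b, c) and beats(c, a)) or \
           (beats(b, a) and beats(c, b) and beats(a, c)):
            hits += 1
    return hits / trials

rng = random.Random(2)
n = 8
balanced = intransitive_fraction(n, n * (n + 1) // 2, 300, rng)
shifted = intransitive_fraction(n, n * (n + 1) // 2 + n, 300, rng)
print(balanced, shifted)
```

If the picture suggested by the improper dice is right, the shifted fraction should come out noticeably lower, but I haven’t verified this.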

We can also compare (balanced and non-balanced) dice based on other Boolean functions.

Maybe it would be simpler to follow the approach in this post. For this comment, I’ll assume enough familiarity with that post to know what I mean by the random variables and .

The condition for the strong conjecture set out in that post is that with probability we should have

.

But it seems to me that that could easily be false: with macroscopic probability we should have small, and therefore with macroscopic probability both and are small. But with macroscopic probability that should not imply that is small.

Actually, it looks as though proving these assertions will be of roughly equivalent difficulty to proving the ones you wanted: perhaps we just want a general theorem that says that certain broad shapes occur with macroscopic probability.

Sorry to be so slow to respond, but this seems like a very promising approach. Also, I feel reasonably optimistic that what you describe as the harder part may in fact be not *too* hard. Suppose, for instance, that we split into equal-sized intervals for some suitably chosen absolute constant . Then setting I think it is the case that, with macroscopic probability, in each of the first intervals there are approximately points from and in each of the last intervals there are approximately points. (Here “approximately” means something like “to within ”.)

The harder part would be to show that if and are uniform balanced dice, they are close to each other (and not too close to ) with macroscopic probability. It seems more or less equivalent to proving that the sequence of processes is tight and does not converge to (it should actually converge to some Brownian motion-related process; the most natural candidate is a Brownian bridge conditioned to have integral on ).
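The discrete process behind this claim can be sketched concretely (the normalization here is my own choice, not necessarily the one intended above): for a balanced die A, let W(j) = #{faces of A that are ≤ j} − j. Then W(n) = 0 automatically, and a short computation shows the balance condition sum(A) = n(n+1)/2 forces the “integral” sum_j W(j) to vanish as well, consistent with a Brownian bridge conditioned to have integral zero as the candidate limit.

```python
# The centered counting process of a random balanced die:
# W(j) = #{faces <= j} - j.  Endpoint and integral both vanish.
import random

def random_balanced_die(n, rng):
    """Rejection-sample a sequence in {1..n}^n with sum n(n+1)/2."""
    target = n * (n + 1) // 2
    while True:
        die = [rng.randint(1, n) for _ in range(n)]
        if sum(die) == target:
            return die

rng = random.Random(3)
n = 12
a = random_balanced_die(n, rng)
W = [sum(1 for x in a if x <= j) - j for j in range(1, n + 1)]

assert W[-1] == 0   # all n faces are <= n, so W(n) = n - n = 0
assert sum(W) == 0  # sum_j #{x <= j} = n(n+1) - sum(a) = n(n+1)/2 = sum_j j
print(W)
```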

To better fit the improper dice model with this proof technique, it can be viewed as looking at sequences in [n]^n and conditioning on the sum being m(m+1)/2 for some m < n. Since this is just conditioning on a different V value, maybe this is just a small step from the current results.

Having one model where the average value constraint isn't sufficient, along with one showing that removing the average value constraint breaks things (if the proof for #3 works out), it feels like the constraint is necessary but must be "balanced". Another nugget pointing this way is that the multiset model also appears to work, and the constraints appear "balanced" here as well.

Notice that in the unconstrained [n]^n sequence model, the "complementary die" involution often changes the average value. And notice that for the "improper dice" model this involution does not exist (the complementary die may not even be in the set). I wonder if these two conditions together are necessary.

Are there any models that both have a constrained average and admit the involution, yet for which the dice are transitive? I think someone mentioned there is a "continuous" model of the dice which is transitive, which would be an example. Has this been shown? And are there any finite models of dice that also display this?
