The idea of the proof was to show that has certain properties with sufficiently high probability that, when we condition on the sum being zero, it still has those properties. The main properties are the following two.

1. .

2. The characteristic function of is very small when are not both small.

I think that in the rest of the argument I don’t use probabilistic arguments, but just these properties. So what I have written is not clearly expressed, but I think it is correct.

What I think I should have done is got to the end of Section 5 and then fixed an that had the desired properties, together with a sum that is equal to , noting that almost all random -sided dice have this property (which is the main content of Section 5). After that, probabilistic arguments would be banned and the rest of the paper should be deterministic. I need to check, but I think that the only reason certain statements after Section 5 are true only with high probability is that they make use of statements in Section 5 that themselves hold with high probability.

So, to summarize, thank you for picking that up, and at the moment I am fairly confident that it will be easy to correct.

It still needs a bibliography, some comments about the experimental evidence for the stronger quasirandomness conjecture, and I also think we should try to solve two further problems. The first is the question of whether we can prove rigorously that the stronger quasirandomness conjecture is false, and the second is whether we can use similar techniques to prove the weaker conjecture for the multisets model. One other thing that I very much want to add is a rigorous proof that if you just take random elements of , then which die wins is, with very high probability, determined by the sums of the faces of the dice: that is, the die with the bigger sum almost certainly wins. I think that should in fact be quite a lot easier than what is proved in the draft so far; it just needs doing, as it explains why, in the unconstrained model, the experimental evidence suggests that one gets transitivity almost all the time.
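A quick Monte Carlo sketch may make that last claim plausible. This is my own check, not part of the draft, and it assumes the unconstrained model means n faces drawn independently and uniformly from {1, …, n}, with A beating B when a_i > b_j for more pairs (i, j) than a_i < b_j:

```python
import numpy as np

rng = np.random.default_rng(0)

def beats(a, b):
    """#{(i, j): a_i > b_j} minus #{(i, j): a_i < b_j}."""
    diff = a[:, None] - b[None, :]
    return np.sum(diff > 0) - np.sum(diff < 0)

n, trials = 100, 2000
agree = decided = 0
for _ in range(trials):
    a = rng.integers(1, n + 1, n)  # unconstrained die: no sum conditioning
    b = rng.integers(1, n + 1, n)
    s = int(np.sum(a)) - int(np.sum(b))
    w = beats(a, b)
    if s != 0 and w != 0:          # skip exact ties in either statistic
        decided += 1
        agree += (s > 0) == (w > 0)

print(agree / decided)  # fraction of decided trials where the bigger sum wins
```

If the claim is right, this fraction should already be close to 1 at moderate n, since disagreement requires the two sums to be unusually close.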

Maybe, for instance, by pulling on the thread that the non-uniform weighting changes the weights equally on (die A) and (inverse die A)?

“Then for around (in the torus topology), there exists a constant such that ”

That final bound should be , and unfortunately that weakens the bound to the point where it is no longer good enough. However, that just means that looking at successive differences is insufficient, and I’m pretty sure that I am on track to complete the proof with a more complicated argument that looks at longer-range differences. I’m travelling at the moment, so it will probably take me a week or two to have a complete draft of the write-up.

Just for the record: by the definition one finds immediately that (I guess that was the intuition behind the Poisson distribution). Basically, I guess the only question is whether the conditioning destroys anything, but that seems unlikely to me.
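For what it’s worth, the Poisson heuristic is easy to check numerically before any conditioning. This sketch is mine (variable names hypothetical): among n faces drawn uniformly from {1, …, n}, the multiplicity of a fixed value is Binomial(n, 1/n), which is close to Poisson(1), so multiplicity minus one should have mean 0 and variance close to 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
faces = rng.integers(1, n + 1, n)                 # uniform, no sum conditioning
counts = np.bincount(faces, minlength=n + 1)[1:]  # multiplicity of each value k
delta = counts - 1                                # the Delta_k-style statistic

print(delta.mean())  # exactly 0: multiplicities always sum to n
print(delta.var())   # close to 1, as the Poisson(1) heuristic predicts
```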

The Python code used is below:

import numpy as np
import numpy.random
import matplotlib.pylab as plt

def sample(n):
    """Sample a die: n faces uniform in {1, ..., n}, conditioned on the standard sum."""
    # Rejection sampling (slow for large n); randint's upper bound is
    # exclusive, so n + 1 gives faces in {1, ..., n}.
    while True:
        dist = numpy.random.randint(1, n + 1, n)
        if np.sum(dist) == n * (n + 1) // 2:
            return dist

def delta(dist, k):
    """Return the first component of Delta_k: the multiplicity of k, minus 1."""
    return np.sum(dist == k) - 1

n = 10000
dist = sample(n)
kList = np.arange(n) + 1
deltaList = np.array([delta(dist, k) for k in kList])

# Plot directly
plt.figure()
plt.plot(kList, deltaList)
plt.xlabel('k')
plt.ylabel(r'$\Delta_k$')

# Histogram plot (shift so that np.bincount only sees non-negative values)
deltaListP = deltaList - deltaList.min()
binData = np.bincount(deltaListP)
binLabels = np.arange(len(binData)) + deltaList.min()
plt.figure()
plt.bar(binLabels, binData / n)
plt.xlabel('Value of $h_A(k)-h_A(k-1)$')
plt.ylabel('Frequency')
plt.show()

Maybe some other approach to bound the Fourier transform, focusing on the direct differences.

Consider the Fourier transform

(I drop the constant shift in the second component, because it only adds a constant factor with modulus one.)

Let

with the convention . Then

With this

Let

Then we can rewrite the Fourier transform as

As , the absolute value can be bounded as

The sum consists of summands, and each of them is at most . Hence, if we can bound a positive fraction of the summands as strictly less than , then is strictly bounded away from .

As an illustration, assume that the die is such that there exist constants and and such that

and

and

(I guess that could already be true for a typical die , but otherwise the argument could easily be refined. Moreover, looking at the distribution of for a typical die seems approachable.)

Then for around (in the torus topology), there exists a constant such that

(if we are not close to , the modulus of the Fourier transform is easily bounded away from ).

This shows

and

With the constants , we can then easily bound the Fourier transform as

This seems pretty good to me, because we only need a bound when is relatively large (in particular, I guess, significantly larger than ), and this shows that

for an appropriate constant .

N.B.: We can recover the approach sketched in the post, in a generalised setting, by comparing each term with its th neighbouring term, i.e. writing the sum as

so that we need to bound terms of the form

where

In the above approach we took , but for refined estimates maybe could be more helpful.
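The displayed formulas in this comment did not survive the transfer, so the following is only a generic sanity check of my own. It assumes the object being bounded is, up to the dropped unit-modulus factor, F(θ) = Σ_j exp(2πiθ a_j) for a die (a_1, …, a_n); the triangle inequality gives |F(θ)| ≤ n, with equality at θ = 0, and numerically the modulus does drop far below n once θ stays away from 0 (mod 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
a = rng.integers(1, n + 1, n)  # a random (unconditioned) die, for illustration

def F(theta):
    """Modulus of the assumed Fourier transform sum_j exp(2*pi*i*theta*a_j)."""
    return np.abs(np.sum(np.exp(2j * np.pi * theta * a)))

thetas = np.linspace(0.05, 0.95, 91)  # bounded away from 0 in the torus topology
print(F(0.0))                         # equals n: all n summands have modulus 1
print(max(F(t) for t in thetas) / n)  # far below 1 away from theta = 0
```

This only illustrates the shape of the bound being sought; the real work above is to prove such a drop for (almost all) conditioned dice rather than to observe it for one sample.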

This seems vaguely analogous to forcing the sum of discrete dice to be n-choose-2, but other than that it may not be relevant to whether discrete dice are intransitive or not.

I’m not familiar with R, so I don’t quite follow the details of this generation (or the comparison function). But it roughly appears to generate a continuous function on [0,1], with length_scale scaling the derivatives so as to control the length scale over which the function values are roughly equal. Is that understanding at least close?

Unless I misunderstood your model, it sounds like the only difference is that in the large-n limit this model yields a continuous distribution, whereas the discrete dice would represent a discontinuous distribution. If that is the only difference, then somehow the intransitivity _depends_ on the discontinuity!? Somehow normalized continuous functions on [0,1] are almost totally ordered (ties and exceptions to ordering have probability 0), whereas normalized discontinuous functions are almost perfectly “not ordered”? Is there some simple reason this should be obvious? If true, this seems fascinating and non-intuitive to me.

Would this mean that in the large-n limit, if an idea doesn’t somehow capture the essential “discontinuities”, and instead approximates them as something smooth, it is bound to fail to describe the discrete dice?
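I can’t reconstruct the R model either, so purely as a way of poking at the continuous question, here is a stand-in model of my own, with every name and parameter hypothetical: each “continuous die” is a smoothed random perturbation of the identity quantile function on [0,1], renormalised to mean 1/2 as the analogue of fixing the face-sum, and A beats B when the estimate of P(A > B) exceeds 1/2. In a completely random tournament about a quarter of all triples are cyclic, so a fraction near zero would indicate that this continuous model is close to totally ordered:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 256          # grid resolution for values on [0, 1]
num_dice = 20

def continuous_die(amplitude=0.3, width=8.0):
    """A smoothed random quantile function, shifted to have mean 1/2."""
    noise = rng.normal(size=m)
    x = np.arange(-32, 33)
    kernel = np.exp(-0.5 * (x / width) ** 2)
    smooth = np.convolve(noise, kernel / kernel.sum(), mode="same")
    q = np.linspace(0.0, 1.0, m) + amplitude * smooth
    return q - q.mean() + 0.5  # analogue of the sum condition

def beats(qa, qb):
    """Estimate of P(A > B) - 1/2 over all grid-value pairs."""
    return np.mean(qa[:, None] > qb[None, :]) - 0.5

dice = [continuous_die() for _ in range(num_dice)]
wins = np.array([[beats(qa, qb) for qb in dice] for qa in dice])

cyclic = total = 0
for i in range(num_dice):
    for j in range(i + 1, num_dice):
        for k in range(j + 1, num_dice):
            total += 1
            # cyclic iff A beats B, B beats C and C beats A (or the full reverse)
            if (wins[i, j] > 0) == (wins[j, k] > 0) == (wins[k, i] > 0):
                cyclic += 1

print(cyclic / total)  # fraction of cyclic (intransitive) triples
```

Whatever fraction this prints, the interesting experiment would be how it changes as the smoothing width shrinks towards the discontinuous regime.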

So I hope you continue poking at this continuous model of dice to pull out some more information. It sounds interesting and could also be telling us something important.
