Yes, the good multiplicative examples (e.g. define f(n) to be 1 or -1 according to whether the final non-zero digit of n in base 3 is 1 or 2) have partial sums that grow logarithmically.
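For what it's worth, this is easy to check numerically. A sketch of the base-3 example (the function names are mine):

```python
def f(n):
    # final non-zero digit of n in base 3: +1 if that digit is 1, -1 if it is 2
    while n % 3 == 0:
        n //= 3
    return 1 if n % 3 == 1 else -1

def max_partial_sum(N):
    # largest value attained by the partial sums f(1) + ... + f(n) for n <= N
    s, best = 0, 0
    for n in range(1, N + 1):
        s += f(n)
        best = max(best, s)
    return best

# The maximum partial sum up to 3^k - 1 is exactly k: logarithmic growth.
for k in range(1, 8):
    print(k, max_partial_sum(3**k - 1))
```

(The partial sum at n equals the number of digits 1 in the base-3 expansion of n, which is why the maximum up to 3^k is k, attained at the base-3 repunit (3^k-1)/2.)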


for all $d$ we have $\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n x_{id} = 0$?

I’m still interested in the problem, and would endeavour to join in a new discussion. I haven’t given it any thought lately, having come to the conclusion that it was just too hard! But perhaps the time is ripe to have another go.

Thanks for this interesting comment. We should perhaps also think about whether there are enough people with enough enthusiasm for another serious discussion of this problem. I’d be quite interested myself.

“A fractional version of the Erdős-Faber-Lovász conjecture”, Combinatorica 12 (2): 155–160.

I thought for a while about why this could possibly help them, and my conclusion was that it just made their argument much easier to discover. As expected, it can be rewritten in a way that does not dualize at all, and without giving details I would like to indicate a way to do this. First one restricts one’s attention to the case where “all inequalities one has” are in fact equalities, and for that case one runs the (of course still very clever) argument they perform after having dualized twice. Now suppose instead that some of the inequalities floating around are strict; then the subhypergraph of our original hypergraph containing those edges that correspond to the “sharp” inequalities is strictly smaller, so we may apply induction to it, think a few minutes, and then finish easily. Thus the modified strategy we get is just this: make some extra assumptions, solve your problem assuming them, and then get rid of your assumptions using induction.

At the moment, I cannot see how this could help for EDP, but maybe someone else has an idea.

Interesting point. Is there some kind of duality argument lurking around? In the proof of Roth’s theorem we use Fourier expansion of the APs and get a discrepancy result for general sequences, whereas in the above one uses Fourier expansion of primitive Dirichlet characters to get a discrepancy result (with respect to HAPs). It seems as if one could trade symmetry in the progressions for symmetry in the functions and vice versa.

This is interesting.

I have been wondering whether it is possible to prove that if there is an infinite sequence of bounded discrepancy then either G is infinite, or the sequence is completely multiplicative.

The idea is that each of an infinite number of primes has to be mapped to a member of the group. Take a prime p and look at other primes q – can a finite group consistently control the values at all the pq?

For G cyclic of order 2 (the completely multiplicative case) it may be possible, or a proof may be difficult.

Where the order of G is divisible by an odd prime, can this be used to show a contradiction? (Work with a p that maps to an element of G having odd order – can we keep the discrepancy under control? This seems to link a bit with Tim’s questions above and the relationship between the prime 2 and the odd primes.)

And if G has order a power of 2 – what can be said then?

The idea was that long low-discrepancy sequences might be obtained by composing a multiplicative function (for some abelian group ) with a (not necessarily multiplicative) function . In particular, the group looked like a promising candidate. Later, when I analysed the 64 sequences of length 1120 found by Klas, I noticed that they tended to match (up to a change of sign) a particular such function on the 5-smooth numbers. So it seemed interesting to look for long discrepancy-2 sequences of this form.

The particular examples that arose were all of the form

where is multiplicative, is additive (), , and is defined to be if and if .

From the point of view of HAP-discrepancy, the choice of is arbitrary: has discrepancy bounded by on all HAPs if and only if the three functions

all have absolute partial sums bounded by .

I wrote a program to search for the and that maximized the length over which the absolute partial sums of these three sequences are all bounded by 2.

Despite the prominence of sequences like this among long discrepancy-2 sequences, it turns out that, for discrepancy 2, they cannot beat a multiplicative sequence (that is, for all ). So one can’t get a sequence longer than 246 in this way.

Define the maximum , the minimum , the first appearance of a maximum and the first appearance of a minimum . Obviously . Since equals we have .

For we obtain as possible maximal values.

To get the other possible maximal values we observe that equals and have .

Therefore

Combining the results yields the upper bound .

To get lower bounds we consider respectively and obtain

respectively Hence

In summary we have . For each prime p equality holds with one of the four options for infinitely many i.

As a reminder, let me first show how to get the ‘maxima’ of . The generalization to is straightforward, albeit some work, and is done in another comment.

Let with 0<l and 0<=d_i<p be the p-ary representation of some number n. The following can be established by induction on l: Observe that

Thus .

Furthermore, since and we have for that Thus and equality holding for .

We know that mu_3[.] attains its “maxima” at (3^(2m)-1)/8. For mu_p[.] we have similar descriptions. The computations are elementary albeit lengthy, and I put them in an upcoming comment. Since the descriptions are explicit in terms of properties of the Legendre symbol, we can use known discrepancy results to give lower bounds for the discrepancy of mu_p.
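The claimed location of the maxima can be checked numerically. A sketch, under the assumption that mu_3 denotes the completely multiplicative extension of the character mod 3 with mu_3(3) = -1 (this choice is what makes the maxima land at (3^(2m)-1)/8):

```python
def mu3(n):
    # assumed definition: completely multiplicative, mu3(3) = -1,
    # and mu3(n) = +1 / -1 according to n = 1 / 2 (mod 3)
    s = 1
    while n % 3 == 0:
        n //= 3
        s = -s
    return s if n % 3 == 1 else -s

def partial_sums(N):
    # list of partial sums mu3(1) + ... + mu3(n) for n = 1, ..., N
    out, s = [], 0
    for n in range(1, N + 1):
        s += mu3(n)
        out.append(s)
    return out

for m in range(1, 5):
    sums = partial_sums(9**m - 1)
    peak = (9**m - 1) // 8
    # maximum over [1, 9^m) and the value at (9^m - 1)/8 should both be m
    print(m, max(sums), sums[peak - 1])
```

(With mu_3(3) = -1 the partial sum at n is the alternating count of digits 1 in the base-3 expansion of n, whose maximum m is first attained at the base-3 number 1010…01, i.e. at (9^m-1)/8.)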

To do this, I first thought we would have to generalize some idea of Gauss (this was in my first comment), but there seems to be an easier way: a classical result of Schur on the discrepancy of primitive Dirichlet characters. By the way, in case there is some open-access version of Schur’s result and its proof, I would be grateful for a link.

In any case, I proceed to post the ‘easy’ stuff on mu_p.

Roth’s theorem also likely suffices to prove optimality of among , as follows. Fix . If is sufficiently large, then Roth’s theorem guarantees an AP (modulo p, and not hitting 0 mod p) on which $\mu_p$ has discrepancy at least . By multiplicativity, the homogeneous AP with difference 1 then must have a drift of , whence the discrepancy is at least C. We can get by even without using multiplicativity, since every AP modulo p is a drift of a homogeneous AP.

I haven’t checked the numbers, but I think our proof of Roth’s result is sufficiently explicit that this bounds p enough that brute force can handle the rest. Back in the spring, I did computations showing that there is no Matryoshka sequence that beats with small modulus (if I recall correctly, that means any modulus smaller than 81).

I have kept this program running, but since writing an MPS file takes several days it is very slow going. The optimum value remains 5/7 at least up to N=43.

That all sounds interesting. How does it relate to what Kevin O’Bryant was thinking about some way back? It sounds like a similar direction at least. One other immediate reaction is that I don’t understand what you mean by “the idea scales for higher discrepancies”. Can you elaborate on that?

with being the Legendre symbol.

Then, with this notation, is our record-holder for slow-growing discrepancy. Since its growth is logarithmic, any proof of optimality, e.g. for completely multiplicative functions, would naturally settle EDP in the respective setting. I cannot handle such a general situation and have therefore considered the simpler problem:

Is optimal within the ‘s?

The idea is as follows: By a recurrence relation one can show that the growth of the ‘s is solely determined by properties of the Legendre symbol, especially by the difference of the maximal and the minimal value for arguments .
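This max-minus-min quantity for Legendre-symbol partial sums is easy to compute directly. A small sketch (the primes chosen are only for illustration; the symbol is evaluated via Euler's criterion):

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 for residues, p-1 for non-residues
    r = pow(a % p, (p - 1) // 2, p)
    return 0 if r == 0 else (1 if r == 1 else -1)

def max_minus_min(p):
    # difference between the maximal and minimal partial sum of the
    # Legendre symbol over the arguments 1, ..., p-1
    s, hi, lo = 0, 0, 0
    for a in range(1, p):
        s += legendre(a, p)
        hi, lo = max(hi, s), min(lo, s)
    return hi - lo

for p in [3, 7, 11, 19, 23]:
    print(p, max_minus_min(p))
```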

There are discrepancy results for Legendre symbols, usually reported in terms of the existence of consecutive quadratic residues. (Obviously, three in a row imply discrepancy greater than or equal to two.) Let me just sketch a proof of such a result. Define and as the number of triples of quadratic residues in 1,…,p-1. Now consider , which is 1 if i-1, i and i+1 are quadratic residues modulo p and 0 otherwise. Summing yields , and quadratic reciprocity eventually implies if . (The other case is trivial for our purposes and omitted.) The tricky part is now to estimate . This is done by observing that the diophantine equation (p prime) has solutions and for some non-residue t modulo p. (T_p is constant on residues and on non-residues, respectively.) Putting the pieces together we get , and plugging this into our representation of yields a lower bound on primes such that the Legendre symbol for any larger prime contains at least one triple of consecutive quadratic residues and thus has discrepancy at least 2.
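The indicator described here is presumably the product (1+leg(i-1))(1+leg(i))(1+leg(i+1))/8, which is the standard way to write it; a sketch counting the triples of consecutive quadratic residues (the function names are mine):

```python
def legendre(a, p):
    # Euler's criterion for the Legendre symbol
    r = pow(a % p, (p - 1) // 2, p)
    return 0 if r == 0 else (1 if r == 1 else -1)

def consecutive_qr_triples(p):
    # (1+leg(i-1))(1+leg(i))(1+leg(i+1))/8 is 1 exactly when i-1, i, i+1
    # are all quadratic residues mod p, and 0 otherwise (for 2 <= i <= p-2)
    total = 0
    for i in range(2, p - 1):
        total += ((1 + legendre(i - 1, p))
                  * (1 + legendre(i, p))
                  * (1 + legendre(i + 1, p))) // 8
    return total

# three consecutive residues force discrepancy >= 2 for the Legendre symbol
for p in [13, 23, 101]:
    print(p, consecutive_qr_triples(p))
```

(For instance, 13 has no triple of consecutive quadratic residues, while mod 23 the residues 1, 2, 3, 4 already give two triples.)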

That should already be enough to prove optimality of among the ‘s. However, what excites me the most is that the idea scales for higher discrepancies. Currently I consider the fifth degree polynomial to get discrepancies larger than 2. Unfortunately the above ‘quadratic reciprocity eventually implies’-part of the proof requires a lot of work and I am stuck. In case all this is considered ‘on-topic’ I would, of course, elaborate things further.

If there is an infinite sequence of bounded discrepancy (on HAPs or some sub-selection of HAPs – this to be understood throughout this comment), then there is a smallest discrepancy d for which such a sequence exists.

Then there is a bound N on the length of any sequence of discrepancy at most (d-1).

Now examine the structure of an infinite sequence of discrepancy d. Suppose there are arbitrarily long stretches m-1 < p < m+n, of length n, on which the partial sum of the HAP with difference 1 is non-zero (so either greater than zero throughout, or less than zero throughout) …

Then there is such a stretch of length kN! for arbitrary k, and starting at a multiple of N! gives a sequence in which the discrepancy on the HAP of difference 1 is no greater than (d-1). If it could be shown that there were constraints on the other HAPs with differences up to N/(d-1), there would be a model of a sequence starting at 0 (N! or some multiple takes the role of 0; large primes can only add additional constraints), longer than N, of discrepancy at most (d-1). So if that could be shown, there would be a bound on the distance between returns to zero on the HAP with difference 1.

Once these are bounded (if indeed they can be) the sequence breaks up into finite segments between returns to zero. It may be that the assumption that returns to zero are bounded in length leads to a contradiction.

Or maybe not. Sometimes it seems that either of these ought to be easy, and then difficulties come to mind.

I’ve been thinking a little about those places where the sequence sums to zero (on the HAP with difference 1, for example). These points must come at even values. Can we say anything about the way in which the zero values are distributed? How dense are they? Are there stretches of arbitrary length (in an infinite sequence of bounded discrepancy on some set of HAPs) where the sum of the HAP remains positive (or negative) [think random walk]?

It just struck me that parity played a large part in analysing discrepancy-2 sequences. What is the strongest parity-type tool for other cases? Bringing in the powers of 2 suggests parity may be significant.

If I understand the question, I think the Walters example works for . This is the completely multiplicative function defined by if , if , and . Because it’s completely multiplicative it suffices to check the condition for . But this follows because it’s made up of blocks of the form and .
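Assuming the Walters example meant here is the same base-3 function as in the first comment above (+1 or -1 according to whether the final non-zero base-3 digit is 1 or 2), a quick numerical sanity check shows its partial sums never go negative:

```python
def walters(n):
    # assumed definition: sign given by the final non-zero digit of n in base 3
    while n % 3 == 0:
        n //= 3
    return 1 if n % 3 == 1 else -1

# track the minimum of the partial sums walters(1) + ... + walters(n)
s, lo = 0, 0
for n in range(1, 3**9):
    s += walters(n)
    lo = min(lo, s)
print(lo)  # prints 0: the partial sums never drop below zero
```

(This one-sidedness comes from the block structure: away from multiples of 3 the sequence goes +1, -1, so each block closes with a non-negative running total.)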

Does there exist and a positive integer such that for no positive integer the sequence has a -run?

Indeed, if we can show that for all discrepancy- sequences satisfying (for some and ), then we know that for all such sequences.

Conversely, if we can *eliminate* the case , then we know that is not *forced* to be . Therefore, for the purpose of proving EDP we can assume, without loss of generality, that .

I see from the wiki page

http://michaelnielsen.org/polymath1/index.php?title=Longest_constrained_sequences

that we have eliminated the case , for . If I’m not mistaken this means that to prove EDP for we can assume without loss of generality that .

There are a few other results on that page that imply alternative hypotheses. For example, the fact that we’ve eliminated the case means that we could alternatively assume w.l.o.g. that .

This sort of observation might help us to find a shorter (albeit still computer-assisted) proof that the constant must be at least 3.

The multiplicative structure does come out here – and essentially this seems to say that if there is a bounded infinite sequence (for any reasonable discrepancy problem) where the first value is 1, then if the value at some prime p is provably -1, the sequence is completely multiplicative with respect to that prime.

Given a long sequence, it might be worth testing whether the +1 values at small primes were forced (i.e. if such a value is replaced by -1, does the sequence terminate quickly?). This would create additional constraints.

How far are we from showing that if there is an infinite sequence having some discrepancy constraint, there is a completely multiplicative infinite sequence with the same constraint (or of identifying a class of constraints for which this is true)?

That’s an interesting line of thought and I plan to ponder it. It also may tie in with a question that occurred to me on my plane journey yesterday, which is this.

Adrian Mathias proved not only the easy result that the best bound for EDP is at least 2, but the much stronger result that if no HAP ever has a sum that goes above 1, then the sums in the negative direction must be unbounded. Now if I understand correctly, we now know that a sequence of length 2000 (or thereabouts) must have discrepancy at least 3. Can we get from that the stronger result that any sequence such that all HAP sums are at most 2 must be unbounded on the negative side?

It is certainly true if we have any kind of relationship such as whenever the sums are bounded, since then we can argue as follows. If the sums are bounded above by 2 and below by , then the sums along HAPs with even common differences are bounded below by -2 and above by . But since they are also bounded above by 2, we get a contradiction.

Since this would be a rather nice (and clearly publishable) result, and since we already have results that suggest that EDP counterexamples must have multiplicative structure (or at the very least imply the existence of other counterexamples with multiplicative structure, which might perhaps be induced to have one-sided boundedness too), it looks to me like a good thing to think about.
