The Erdős discrepancy problem III

I’m having to write posts at a ridiculous rate now, but until the official launch (at which point I expect a significant theoretical component to add to the experimental one — one decision we should make is whether to have separate theoretical and experimental threads or whether it is better to have a unified discussion) I’ll keep them short and mainly aimed at summarizing what is going on for the benefit of people who want to keep up with the discussion without reading hundreds of comments.

Most of the comments in the last post have concerned sequences with low discrepancy and additional constraints. If we define the map T_d to be the one that takes the sequence (x_n) to the sequence (x_{dn}), then it is notable that the long sequences that we have found satisfy, not quite all the time but certainly most of the time, constraints such as T_d=\pm T_1 or more generally T_d=\pm T_{d'}. This suggests searching for sequences that satisfy these constraints exactly. One reason for doing so is that there are many fewer such sequences, so searching for low-discrepancy examples should be much quicker, and we have some reason to expect them to exist. And indeed, although we have not matched the length 1124, we have got sequences of length well over 500 that satisfy additional constraints.
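
For readers who want to experiment themselves, here is a minimal sketch in Python (the language of some of the scripts mentioned below) of the two computations that everything in this thread revolves around: the discrepancy of a finite \pm 1 sequence over all HAPs, and the map T_d. The function names are mine, purely for illustration.

def discrepancy(x):
    # maximum of |x_d + x_{2d} + ... + x_{kd}| over all HAPs that fit
    # inside the sequence; x[0] is interpreted as x_1
    n, best = len(x), 0
    for d in range(1, n + 1):
        s = 0
        for m in range(d, n + 1, d):
            s += x[m - 1]
            best = max(best, abs(s))
    return best

def dilate(x, d):
    # T_d: the subsequence (x_d, x_{2d}, x_{3d}, ...)
    return x[d - 1::d]

A constraint such as T_d = -T_{d'} is then just the statement that dilate(x, d) and dilate(x, d') are pointwise negatives of each other as far as the shorter of the two goes.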

The best way to keep up with what has been done is probably to check out the experimental results page on the Polymath5 wiki. And the wiki itself is still growing fairly quickly, so if you read through that then you will have a digest of most of what has been discussed in the comments.

A phenomenon that has recently been observed is that sometimes pairs of HAP-subsequences of good sequences do not agree all that well to start with, but after a while “lock in” to each other. We have not yet properly thought about this, but it seems pretty interesting.


104 Responses to “The Erdős discrepancy problem III”

  1. Harrison Says:

    This will be a more theoretical comment than most are right now, but it might have some computational significance, and besides, it (hopefully) sums up some “conventional wisdom” that I don’t think I’ve seen explicitly stated. It builds on Tim’s post from a while back about the alternative basis for F_2^Z, the basis of arithmetic progressions. I will call the basis composed of HAPs the “HAP basis,” and will denote the basis vector corresponding to the characteristic function of n, 2n, …, by (n). This basis is well-behaved if we truncate the sequence, and nicely computable.

    A nice property of the HAP basis which I don’t think anyone has stated explicitly is that it behaves very nicely under the transformations T_k. Actually the T_k are linear (for k an integer, anyway), and if p is prime, then T_p((n)) = (n) if p does not divide n and (n/p) if it does. Probably we can use the HAP basis to give some characterization of T_k-invariant sequences (ignoring discrepancy), even sequences with relations between different T_k although I’m still not totally sure how to compute discrepancy given the HAP basis. I suspect that this is something that the “polymathematical mind” already knows, but again, I don’t think I’ve seen it stated outright.

    The major disadvantage of the HAP basis is the fact that it’s not very good at characterizing “almost invariant sequences,” since changing a function at one point can lead to a totally different HAP decomposition. Is there any hope of finding some other basis (or, perhaps, some data which overdetermines a sequence) that: 1. behaves nicely under T_k; 2. allows one to compute or estimate the discrepancy, or at least say something nontrivial about it; 3. is well-behaved under pointwise transformation? Would having such a basis give us a good line of attack on EDP? Would it help with computational aspects?

    (Aside: I really don’t know enough about the subject to say anything substantive, but I’m thinking of this problem analogously to how wavelets are “intermediate” between a Fourier transform and a pixel-by-pixel decomposition…)

    Here’s a candidate for what such a thing might look like: Instead of using infinite HAPs as our basis, we take bounded subprogressions of the HAPs. The nice thing about this is that if any such subprogression has discrepancy 2d+1, the sequence has discrepancy at least d. I’m pessimistic about how far we can push the idea — in particular looking at all small subprogressions is a lot of data with a lot of nontrivial relationships — but in the polymath spirit, I thought I’d put it out there.

  2. gowers Says:

    There’s a question I’d like an answer to, which I think Alec might be able to provide. Suppose we regard two sequences as in some sense essentially the same if you can get from one to the other by tinkering with the values at large primes and numbers with large prime factors. Then amongst the vast number of sequences we now have of length 1124, are there any two that are not essentially the same? I realize that this is not a precise question, but I’d be satisfied with an imprecise answer.

    On a related note, it occurred to me to wonder what would happen if you took the 2-sequence of one of these sequences of length 1124 and used it to seed a new sequence.

    Wait, that wasn’t what I meant to say, though it is also interesting. What I meant to say was, what if you take the sequence of length 1124 and try to create a new sequence of which this one is the 2-sequence? That is, you’re given (x_n)_{n=1}^{1124} and you insist that y_{2n}=x_n for every n. How long a sequence can you get? And how do you do if you first wait till you get stuck and then backtrack in the normal way?

    It’s possible that something like this may already have been tried — it would be great if someone could write a brief account on the wiki of how the algorithms that discovered the existing sequences worked. (Somebody has created dummy sections for this already.)

    • Mark Bennet Says:

      Could we have a term for the process by which one takes a base sequence and uses it as the n-sequence (i.e. as a seed) for a new sequence and tries to fill in the gaps – “n-expansion”?

      It would also be interesting to try 2-expansion of some of the very regular sequences which arise from constraints (like T_2(x) = T_5(x) = -x).

  3. Gil Says:

    (Maybe this was already asked.) What is the smallest K so that we can give all positive integers +1, -1 signs so that on every HAP of length at most K the discrepancy is at most 2?
    Another question: if we can extend the 1124 to arbitrarily large lengths, does this give a way to do it for all positive integers?

    • gowers Says:

      I’m not sure I understand your second question, but the answer is “Yes, by an obvious compactness argument” to what seems like the natural interpretation of it. (But that makes me think I have in fact misinterpreted it.)

    • obryant Says:

      The first question should presumably be about the largest such K? Take x(1) = -x(2) = 1, and x(n) = -x(n/3) or x(n) = x(n-3) according to whether n is a multiple of 3 or not. Then for any positive integer d, and any k less than 91, the sum \sum_{i=1}^k x(id) is between -2 and 2. So the largest such K is at least 90.
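
      A quick machine check of this construction, as a sketch in Python (the iterative fill avoids deep recursion; the bound D on the differences tested is arbitrary):

      N, D, K = 18000, 200, 90
      x = [0] * (N + 1)
      x[1], x[2] = 1, -1
      for n in range(3, N + 1):
          # the recurrence above: x(3m) = -x(m), otherwise x(n) = x(n-3)
          x[n] = -x[n // 3] if n % 3 == 0 else x[n - 3]
      for d in range(1, D + 1):
          s = 0
          for i in range(1, K + 1):
              s += x[i * d]
              assert abs(s) <= 2

      In fact this x is completely multiplicative (all n \equiv 1 mod 3 get +1, all n \equiv 2 mod 3 get -1, and x(3m) = -x(m)), so the sum along a d-HAP is x(d) times a partial sum along the 1-HAP, and checking d = 1 already suffices.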

  4. Sune Kristian Jakobsen Says:

    @Alec, about your N -> C_6 -> {-1,1} program:
    Of course I don’t know how you wrote your program, but here is what I would have done: The C_6 -> {-1,1} function is given, so we only have to define the N -> C_6 function. Since it is multiplicative we only have to define it for the prime numbers, so we try to define f(2)=0, f(3)=0,… until we get into trouble. Then we try to increase the value the last prime is sent to, and so on until we should increase a value that is already 5; then we go one prime back, and so on… And now the problem is that the program is slow, so the low primes are sent to 0 or 3 all the time?

    If this is the approach you use, I have some suggestions:
    1) An obvious suggestion would be to initialize the sequence with the values we found in the first 1124 sequence. But perhaps you already tried this?

    2) Instead of trying f(p)=0 first and then f(p)=1,f(p)=2,…, you could try: 0,3,1,4,2,5. The idea is, if you have tried f(19)=0, and the longest sequence you got this way had length 35, there is no point trying f(19)=1 and f(19)=2.

    We could also try this:
    From the 1124 sequences we deduce the values f(2), f(3) and f(5). This way we have determined all the values x_{2^a3^b5^c}. Now we try to find a long {-1,1} sequence with these values without thinking about C_6. Then we analyze the result, and hope that we can deduce f(7) (or at least we try to find a value of f(7) that would give almost the same sequence), and now we have determined the values of x_{2^a3^b5^c7^d}. And so on…
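
    To make the search structure concrete, here is a rough Python sketch of the backtracking over prime values in C_6 described above. I don’t know the actual C_6 -> {-1,1} map used, so the table G below (sending {0,1,2} to +1 and {3,4,5} to -1) is purely an assumption for illustration; the value ordering 0,3,1,4,2,5 is the one suggested in 2).

    N = 500                      # how far we attempt to extend the sequence
    G = [1, 1, 1, -1, -1, -1]    # assumed map C_6 -> {-1, 1} (a guess)

    spf = list(range(N + 1))     # smallest-prime-factor sieve
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, N + 1, p):
                if spf[q] == q:
                    spf[q] = p
    primes = [p for p in range(2, N + 1) if spf[p] == p]

    def longest(f):
        # longest prefix of discrepancy <= 2 given f(p) for the primes in f;
        # stops at the first n involving a prime not yet assigned
        sums = [0] * (N + 1)
        for n in range(1, N + 1):
            m, h = n, 0
            while m > 1:
                p = spf[m]
                if p not in f:
                    return n - 1
                while m % p == 0:
                    h += f[p]
                    m //= p
            v = G[h % 6]
            for d in range(1, n + 1):
                if n % d == 0:
                    sums[d] += v
                    if abs(sums[d]) > 2:
                        return n - 1
        return N

    best = 0
    def search(f, i):
        global best
        L = longest(f)
        best = max(best, L)
        if i == len(primes) or primes[i] != L + 1:
            return               # the prefix is capped whatever we assign later
        for v in (0, 3, 1, 4, 2, 5):
            f[primes[i]] = v
            search(f, i + 1)
        del f[primes[i]]

    search({}, 0)
    print(best)

    This brute force is of course exponential; it is only meant to pin down the loop structure, and initializing f from the 1124 sequence, as in 1), slots straight in.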

  5. New thread for polymath5 « Euclidean Ramsey Theory Says:

    [...] for experimental results here. Let me update this there is now a third thread for polymath5 see here. [...]

  6. Alec Edgington Says:

    Well, there is a lot to think about! First off: I’ve just put on the wiki a sequence of length 974 that satisfies 1=-2=-15 and 11=-13 exactly.

    • Mark Bennet Says:

      I’ve posted an analysis of the subsequences. The initial segments of the subsequences are different from those in the record 1124 sequences, so we are working with a different structure altogether.

    • Mark Bennet Says:

      What I mean is that the subsequences which dominate the analysis of the 1124 sequence show no sign of appearing in this one.

      Also the constraint should read 1=-2=-5

    • Alec Edgington Says:

      Yes it should.

    • Mark Bennet Says:

      Here is a theory about what is happening with the ‘group’ structure here:

      2 maps to an element a of order 2
      3 maps to an element b of order 3
      7 maps to an element c of order 6

      Writing the group operation multiplicatively we have the relation abc=1 or bc=a.

      I think we may be in the non-cyclic abelian group of order 18, and that the 49-sequence “would” split off if the sequence were longer (eg if we have to move to looking at discrepancy 3).

    • Alec Edgington Says:

      If we modify the 974 sequence in just four places, we get a sequence that also satisfies 17=-19 and 23=29 exactly. I’ve put this on the wiki too.

  7. Alec Edgington Says:

    Sune, in answer to your question in the last thread about sequences satisfying 1=-2=-3, the maximum length is 470. I’ll post the sequence on the wiki. It’s probably worth starting to organize these facts in some way: which are the most promising constraints of this form, how far one can get with them, and so on.

  8. Harrison Says:

    I have a question/conjecture related to “restricted-difference HAPs,” although it’s kind of tangential.

    Let S be a subset of the natural numbers, and consider the discrepancy of an infinite sequence only over the HAPs whose difference is in S. If S contains no even numbers, we trivially have a sequence with bounded discrepancy (restricting differences to lie within S). But apart from this “modular problem” I think all the examples of S for which we have a construction of a bounded-discrepancy sequence are very sparse. For instance, if we take S to be the powers of 2, then the Morse sequence has discrepancy 1 over S.

    Question: Is there a general condition we can put on S that guarantees the existence of a bounded-discrepancy sequence, restricting the differences to S? In particular, can we construct such a sequence whenever the sum of the reciprocals of elements of S converges? Does someone have an example of an S whose harmonic series converges but for which there doesn’t seem to be a bounded-discrepancy sequence? Saying something like “the twin primes, and 2” doesn’t count, by the way. (Can we even construct such a sequence when S is finite? Here I’m pretty sure the answer is “yes,” but I haven’t written out the details and there’s a nontrivial chance my construction is wrong.)
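
    Here is a quick numerical check of the Morse claim, as a sketch. With the T_2-invariant indexing discussed in comment 11 below (x_n = +1 exactly when n has an odd number of binary 1s), the partial sums along every power-of-2 HAP stay inside a window of width 2, namely {0, 1, 2}; shifting the index by one (parity of 1s in n-1) recentres them in [-1, 1].

    N = 1 << 14

    def morse(n):
        # +1 iff n has an odd number of 1s in binary; then morse(2n) = morse(n)
        return 1 if bin(n).count("1") % 2 else -1

    for j in range(8):                   # differences 1, 2, 4, ..., 128
        d, s = 1 << j, 0
        for i in range(1, N // d + 1):
            s += morse(i * d)
            assert 0 <= s <= 2           # the sums never leave {0, 1, 2}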

    • Harrison Says:

      I realized soon after posting that the union of the twin primes and {2} is also easy, since in fact the union of all odd numbers and {2} is easy. But the general question still stands.

  9. gowers Says:

    Here’s an observation relating to questions Gil has raised. It’s possible that he has already made the same observation, in which case I apologize in advance. It’s that the question for \mathbb{N} is equivalent to the question for \mathbb{N}^2 (or \mathbb{N}^d for any fixed d). By “the problem for \mathbb{N}^2” I mean the assertion that for any \pm 1 function defined on \mathbb{N}^2 you can find HAPs (obvious definition) of arbitrarily high discrepancy.

    The proof is simple. Obviously if the result is true for \mathbb{N} then it’s true for \mathbb{N}^2, since you can just look at points of the form (n,n). In the reverse direction, observe first that by compactness if the result is true for \mathbb{N}^2 then for any C there must exist N such that there is no function defined on \{1,2,\dots,N\}^2 with discrepancy less than C. If you now take any \pm 1 function f on \{1,2,\dots,N^2+N\}, define g(m,n) to be f(mN+n) and apply the two-dimensional result to g, you get a one-dimensional HAP on which f has discrepancy at least C.

    This equivalence might seem too easy to prove to be helpful, but I think there’s a chance that one might be able to extend to some infinite-dimensional example, perhaps on the space of all non-negative integer sequences with finite support or something like that. It’s just conceivable, but not very likely, that that could be useful if one were searching for an infinitary proof.

  10. Alec Edgington Says:

    Tim, I’ve written a program to investigate your ‘quasi-periodic T_2‘, using z = \pm 1. For C=2, the maximum length of sequence is 33. The program is just checking C=3 now, and has got to 159; I’ll let you know if and when it finishes.

    • Alec Edgington Says:

      Oops, I made a little mistake (using \alpha/2 instead of \alpha). Corrected, with C=2 the limit is 59.

    • Alec Edgington Says:

      With C=3, it’s been stuck for a long time at 143. I’m mildly surprised, since I would have expected the difference between C=2 and C=3 to be enormous, but perhaps there is some barrier implied by the additional constraint. (It’s found over 424\,892\,550 sequences of length 143 and only tracked back to 59.)

  11. Sune Kristian Jakobsen Says:

    On the wiki, Tim asked:
    “Can one characterize all functions on the rationals that are pointwise limits of sequences of the form T_kx, where x is the Morse sequence? ”

    I assume (because of “It is T_2-invariant”) that it is indexed such that the sequence begins 1101001… starting from the first element? So the n’th element is the parity of 1s in the binary representation of n. I think the answer is: all functions y such that y_{2a}=y_a for all rational a.

    Sketch of proof:
    We have x_a=x_{2a} (in the following, x is the Morse sequence), so all limits must be of this form. Now I need to prove that all functions of this form are limits of sequences of the form T_kx. It is enough to show this for sequences (indexed by \mathbb{N}), and in order to show this I only need to show: For any choice of y_1,y_3,y_5,...,y_{2m+1} I can find a k such that (T_kx)_1=y_1, (T_kx)_3=y_3\dots (T_kx)_{2m+1}=y_{2m+1}. The set of choices of y_1,y_3,y_5,...,y_{2m+1} that have a solution k is closed under addition (mod 2): If k_1 is a solution to one set, and k_2 is a solution to the other, then k_1+2^ik_2 is a solution to the sum for sufficiently large i (remember that y_n is the parity of 1s in kn). So if, for each m, I can find y_1,y_3,y_5,...,y_{2m-1} such that both y_1,y_3,y_5,...,y_{2m-1}, y_{2m+1}=0 and y_1,y_3,y_5,...,y_{2m-1}, y_{2m+1}=1 have a solution k, I’m done (because the sum of these is 0,0,…,0,1).

    For each m I can find an n and an i such that n(2m+1)>2^i and n(2m-1)<2^i. Now consider the numbers k_1=n and k_2=n+2^i+2^h where h is a sufficiently large number. The parities of the 1s in ak_1 and ak_2 agree for a=1,3,5\dots 2m-1 but not for a=2m+1, and we are done.

    I wish I could formulate the proof better, but I have to go to bed now.

  12. Harrison Says:

    I’ve been looking at the “points where similar subsequences diverge,” if you will, and while I don’t have enough to make any solid conjectures, what I do have is intriguing.

    Using Thomas’ table for one of the 1124 sequences (I don’t know which, sorry), I derived the following:

    x_{5n} = x_{6n} for n < 176.

    x_{2n} = x_{21n} = x_{22n} for n < 47;

    x_{21n} = x_{22n} for n = 47 but not for n = 48; note that 22*48 = 6*176 has quite a lot of divisors.

    The really interesting behavior, though, is the following:

    x_n = x_{8n} for 7 < n < 47.

    From n = 47, they agree most of the time, but disagree at a few values of n, up until around n = 112, after which things fall apart rapidly.

    Here's what seems to be going on. The sequence "wants to be" T_8-invariant, but there's the problem of x_8 \neq x_1. (In addition, the 2- and 4- subsequences have some control over what the 8- subsequence looks like). So it "swaps irregularities" at 1 and a small prime (in this case, 7) and goes as far as possible, until other sequences force a new irregularity — say, x_k = 1, x_{8k} = -1. Now the 8-subsequence has a different discrepancy again, and the sequence repairs this as quickly as possible, usually at a prime number: x_p = -1, x_{8p} = 1. It continues like this for a while, occasionally swapping irregularities at primes. But eventually we get an irregularity followed by a long prime gap (in this case, the gap between 113 and 127) — now we can't correct the difference in discrepancy without messing up some other subsequence, at which point everything falls apart.

    Note that the above narrative, while it seems plausible, is based on pretty much just the one piece of evidence, so it should be taken with a grain of salt.

    • Alec Edgington Says:

      I find this particularly interesting because it suggests a link between the difficulty of maintaining multiplicativity and the presence of large gaps in the primes. (I suppose there’s no reason why the corrections have to happen at primes, but they would be more likely to.) And it may shed some light on why the gradient pictures seem to be composed of little pieces with gradient \pm 1: the HAPs are, so to speak, trying to compensate for each other’s imperfections.

  13. gowers Says:

    Alec, I’m quite surprised to know just how badly the T_2-quasiperiodicity failed. While we’re on the subject of sequences that don’t work very well, I have another thing I’d be extremely interested to know about. Suppose we impose the following antimultiplicativity property: that x_nx_{2n}x_{3n}x_{6n}=-1 for every n. (If a sequence is multiplicative we should of course get 1.) It is easy to see that such sequences can be infinite. For example, one could insist that if n=2^a3^bm where m is odd and not a multiple of 3, then x_n=(-1)^{ab}x_m. But can they have low discrepancy?

    I was hoping for the answer no, but just writing this has made me see that my question is probably not completely well conceived, since we could still have quasimultiplicative sequences with the above property. So here is a second question. Suppose we choose a random function f:\mathbb{N}\rightarrow\{-1,1\} and then try to find the best discrepancy of a sequence subject to the constraint that x_{2^am}=f(a)x_m for every odd m. The randomness of f kills off any hope of T_2 having any periodicity properties, but it leaves open the possibility that other dilations could have such properties. So what I’d like to know is whether that completely messes up the sequence and forces it to have high discrepancy, or whether just messing up T_2 is in fact not all that damaging.

    One other thought that occurred to me was that it would surely be worth trying to find the longest completely multiplicative sequence of discrepancy at most 3, or does that seem to be so long as to be unfindable?
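
    To check that the antimultiplicativity property really is satisfiable, here is a small Python verification of the rule x_n=(-1)^{ab}x_m; filling the free values at indices coprime to 6 at random is just one arbitrary choice.

    import random
    random.seed(0)

    N = 10 ** 4
    free = {}                        # a free choice of sign at each m coprime to 6
    x = [0] * (6 * N + 1)
    for n in range(1, 6 * N + 1):
        a = b = 0
        m = n
        while m % 2 == 0:
            a += 1
            m //= 2
        while m % 3 == 0:
            b += 1
            m //= 3
        if m not in free:
            free[m] = random.choice([-1, 1])
        x[n] = (-1) ** (a * b) * free[m]

    for n in range(1, N + 1):
        # exponent ab + (a+1)b + a(b+1) + (a+1)(b+1) = 4ab + 2a + 2b + 1: odd
        assert x[n] * x[2 * n] * x[3 * n] * x[6 * n] == -1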

    • Alec Edgington Says:

      On the failure of T_2-quasiperiodicity, it may not lend itself particularly well to small-number investigations, since quasiperiodicity is essentially a property of infinite rather than finite sequences (and we are only using the first seven or eight terms of it in getting up to 143). Nonetheless it is interesting that constraining the sequences (x_{2^k m}) does appear to hinder rather than help the search. (Restricting to z = \pm 1 necessarily limits us to constraining such sequences only where m is odd.)

    • Alec Edgington Says:

      I’m fairly sure that the random f you propose would, for small numbers, produce similar results. (I’ve just tried it for a couple of randomish-looking f.) What happens in the limit may be different, of course, but other methods would be needed then. The moral seems to be that if we want very small discrepancy, we shouldn’t mess with T_2!

  14. Alec Edgington Says:

    I’ve put up on the wiki a sequence of length 1112, which may be of interest because it was derived differently from the 1124 sequences, by backtracking from the sequence of length 974 satisfying 1=-2=-5 and 11=-13. In particular, I will be interested to see whether the sequences still display the ‘late correlation’ property.

    • Alec Edgington Says:

      See:

      This does not seem to have anything like the late correlation shown by the 1124 sequence (though there does seem to be some ‘piecewise’ correlation). Overall it appears more regular, which isn’t that surprising.

    • gowers Says:

      I’m delighted to see that this time there do seem to be some approximately linear functions with gradients other than 1 and -1. And I’m also secretly hoping that the record of 1124 will at some stage be broken …

    • Alec Edgington Says:

      Yes, and gradients around \pm \frac{1}{3} and \pm \frac{2}{3} do seem to be popular among the smaller prime pairs. Here is a plot using primes up to 29:

      Gradients between \pm \frac{1}{3} seem to be avoided: why is this?

    • Alec Edgington Says:

      Actually, T_3 x and T_9 x are almost orthogonal. This plot shows the gradients for all pairs (not just primes) up to 9:

  15. Alec Edgington Says:

    I’d like to pop a thought up here in the hope that it may become clearer. I want to move temporarily from \pm 1-valued sequences to \mathbb{Z}-valued sequences (I might even want to move to \mathbb{C}-valued sequences). It looks as if the long multiplicatively-constrained sequences often satisfy (or seem to try to satisfy) relations like T_3 x + T_{13} x = 2 \chi_3, where \chi_3(n) cycles between 1, -1 and 0, like a Dirichlet character modulo 3. (This is just an example I think I noticed once.) I suppose the thought is that there may be further multiplicative structure hidden in the linear combinations of dilations.

    • Alec Edgington Says:

      What I meant (the example I think I remember) is:

      T_3 (T_3 + T_{13}) x = (T_9 + T_{39}) x = 2 \chi_3

    • gowers Says:

      Presumably you will be imposing some kind of additional constraint so as to avoid examples like \chi_3 itself that have bounded discrepancy. Perhaps it is enough to insist that every number in the sequence has modulus at least 1.

    • Alec Edgington Says:

      Roughly speaking, I think my question is whether, if x is a \pm 1 sequence of bounded discrepancy, the vector space generated by the T_k x must contain a (non-zero) multiplicative sequence. Or even if one can say anything about that space. (For example, one might consider the Dirichlet series it gives rise to…).

  16. gowers Says:

    When we start thinking theoretically, one of the things I hope to concentrate on (and I also hope very much that others will join me in thinking about it) is whether an infinite sequence of bounded discrepancy is forced to have some kind of multiplicative structure. To that end, I am interested in experimental back-up in the form of constraints that force a sequence to have high discrepancy. I’ve already suggested one or two experiments along those lines. Here’s another. What if you force a constraint such as T_2=T_3 up to some n such as 100 and thereafter insist that T_2=-T_3? In other words, you get a certain multiplicative structure established and then insist on the opposite. That would give a curve that started out with gradient 1 and then switched to gradient -1, which is quite unlike what we seem to observe (though the 1124 example did have some switches of gradient). What I would hope for here is that it was not possible to keep the discrepancy down. It would be even better if one could identify a reason for its being difficult: for instance, it seems that multiplicativity causes many of the constraints to agree with each other when you are choosing a new value for the sequence, so perhaps suddenly reversing the multiplicativity would cause constraints to pull in opposite directions and therefore force the discrepancy to go up.

    • Harrison Says:

      “…whether an infinite sequence of bounded discrepancy is forced to have some kind of multiplicative structure.”

      This is something that I’ve been trying not to think too much about before the official launch, but I can’t resist chiming in with the suspicions that I do have…

      So the very long sequences we’ve discovered so far are “close to multiplicative.” There are often pairs m, n for which T_m x = T_n x almost always early in the sequence, but as we near the end T_m x * T_n x bounces between -1 and +1 almost at random.

      It seems like early “multiplicative errors” are quickly locally corrected, but globally they force other entries in the sequence which ramify and eventually come back to cause more “multiplicative errors.” So in our infinite sequence of bounded discrepancy, in the long run we should expect the partial sums of inner products of T_m, T_n to either behave essentially randomly, or have gradient 1.

      Of course, there is some experimental evidence that seems to go against this theory — the sequences that start out differently and then fall into lockstep, and to a lesser extent the “linear-looking sequences” of gradient about 1/3. And I’m not sure how to account for these, except that I’ll voice my suspicion that the gradient-1/3 line is starting to descend into (pseudo)randomness toward the end. But there is a lot of data that does support the story.

      So to test this experimentally, I’d even be interested in trying to extend some low-discrepancy sequences which have only a couple of “multiplicative errors” and otherwise satisfy, say, T_1 = T_4 or T_2 = T_3, and see if the errors do indeed propagate.

    • Mark Bennet Says:

      It seems to me that there may be an infinite group involved, with each prime mapping to an element of finite order (so each prime has a ‘natural’ order). Orders 3 and 6 seem to be preferred, but other finite orders (e.g. 5) can be forced.

      What is striking is that the number of initial segments (e.g. of length 13 or 15) which appear in each of the long sequences with discrepancy 2 is so small. But it seems to me that in the highly ordered examples the first difference between two subsequences can be in position 71 (or another prime of similar size) – and the sequences are nowhere near long enough to really get a handle on that. It seems possible that sequences which seem to be the same are ‘ultimately’ different.

      The reason for the finite group structure appearing so dominant might just be that the statistics for subsequences are dominated by multiples of small primes – it always seems to be some subset of larger primes which look problematic.

  17. Gil Kalai Says:

    Here is another “generic” suggestion (related perhaps to comment http://gowers.wordpress.com/2010/01/11/the-erds-discrepancy-problem-iii/#comment-4988 ) that may be relevant both to the theoretical and the experimental aspect. If we take random +1, -1 sequences then the discrepancy for a sequence of length n is roughly square root n. There are various ways to make random choices of +1s and -1s that will cause the distribution of partial sums to be much more concentrated, even “tight”. (Well, I am quite sure that this is well known, even to me, although I don’t remember anything specific right now.) In our case we want some random process where the value of x_m reflects the various HAPs containing m and is biased towards the target of having low discrepancy.

    So perhaps the easiest suggestion would be to try to create random sequences so that the value x_m is tilted towards having small discrepancy on HAPs, and to experiment with such processes.

    Understanding such probabilistic processes may be relevant even to lower bounds, even though our problem is entirely “deterministic”. (I can think of some examples where this is the case, but they are quite far from discrepancy theory, so maybe there are much more relevant examples.)

    • Gil Kali Says:

      So perhaps the simplest suggestion is this. We start with a function f(k). For simplicity we can think about f(k)=1/k. When we assign a sign \chi(m) to an integer m (after we have already assigned signs to smaller integers) we compute two numbers P+ and P-. P+ is the product of f(k) for all positive k of the form

      k=\sum \chi(id) where d divides m and i ranges from 1 to (m/d-1).

      P- is the product of f(-k) for all negative such k.

      Now we let \chi(m) be 1 with probability P+/(P-+P+).

      If f(k) tends to 0 very rapidly then this is just the greedy algorithm to get signs with minimum discrepancy. I don’t know how good this greedy algorithm is (I suppose this info can be found on the experimental wiki or discussion, and the 1124 record represents much better algorithms).

      If f(k) does not tend to 0 so rapidly, this can be of interest. Maybe some randomization can also be added to the other more sophisticated algorithms. The hope is that looking at a stochastic model will make it easier to analyze, and maybe also it will behave more nicely than a greedy/deterministic one.
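
      In code the process reads roughly as follows (a sketch: f(k)=1/k, and I have taken the k’s to be the current partial sums along the d-HAPs for the divisors d of m, one k per divisor, just before m is assigned; any of these readings may be off):

      import random
      random.seed(1)

      def run(N, f=lambda k: 1.0 / k):
          sums = [0] * (N + 1)           # sums[d] = current sum along the d-HAP
          worst = 0
          for m in range(1, N + 1):
              divisors = [d for d in range(1, m + 1) if m % d == 0]
              p_plus = p_minus = 1.0
              for d in divisors:
                  k = sums[d]            # = sum_{i=1}^{m/d - 1} chi(id)
                  if k > 0:
                      p_plus *= f(k)     # positive drift discourages another +1
                  elif k < 0:
                      p_minus *= f(-k)
              chi = 1 if random.random() < p_plus / (p_plus + p_minus) else -1
              for d in divisors:
                  sums[d] += chi
                  worst = max(worst, abs(sums[d]))
          return worst

      print(run(2000))                   # discrepancy reached by one random run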

    • gowers Says:

      I suggested something very similar at the wish list over on the wiki. I think it would be extremely interesting to try out some semi-sophisticated greedy algorithms (and I agree that a bit of randomization could well help too), since there seems at least a chance that they would give rise to very slow-growing discrepancy. If the experimental evidence suggests that they do, then one could start trying to prove that there exists a sequence such that the discrepancy grows sublogarithmically by attempting to prove rigorously that the algorithm does what it seems to be doing.

      Central to such an analysis, it seems to me, would be the question of how many competing wishes you have when you choose the value at a given point. However, this raises some subtle issues that I’ll try to describe in a future post.

      In the light of the fact that multiplicative structure seems to help, another thing one might perhaps do is this. A sequence is completely multiplicative if x_ax_b=x_cx_d whenever ab=cd. Therefore, each time you choose some x_n, it might be worth looking at all triples (a,b,c) such that a,b,c are all less than n and ab=cn, and biasing the choice of x_n towards the predominant value of x_ax_bx_c over such triples. The idea would be that there would be a bit of a pressure towards greed and a bit of a pressure towards multiplicativity, and if you got the balance right you might end up producing extremely good sequences.

      Having said that, I also think (and have also put on the wish list) that it would be very interesting to try to choose fully multiplicative sequences by means of some greedy algorithm and then look experimentally at how that algorithm performs. It’s not obvious that that wouldn’t produce sublogarithmic growth.
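
      For what it’s worth, here is a sketch of that “multiplicativity vote” combined with the crudest greedy rule (no backtracking; all the details are illustrative guesses):

      def vote(x, n):
          # sum of x_a x_b x_c over triples with ab = cn and a, b, c < n
          v = 0
          for c in range(1, n):
              m, a = c * n, 2
              while a * a <= m:
                  if m % a == 0 and a < n and m // a < n:
                      v += x[a] * x[m // a] * x[c]
                  a += 1
          return v

      def build(N, C=2):
          x = [0] * (N + 1)
          sums = [0] * (N + 1)
          for n in range(1, N + 1):
              divisors = [d for d in range(1, n + 1) if n % d == 0]
              legal = [v for v in (1, -1)
                       if all(abs(sums[d] + v) <= C for d in divisors)]
              if not legal:
                  return n - 1, x        # stuck: this sketch never backtracks
              if len(legal) == 1:
                  x[n] = legal[0]        # greed: the discrepancy bound decides
              else:
                  x[n] = 1 if vote(x, n) >= 0 else -1    # follow the vote
              for d in divisors:
                  sums[d] += x[n]
          return N, x

      print(build(300)[0])

      Getting the balance right would presumably mean weighting the vote against the greedy pressure, rather than only consulting it when greed is indifferent as here.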

  18. gowers Says:

    Thomas, if you had a spare moment and could face using your program to produce HAP tables for several of the new sequences that Alec has produced and put up on the wiki, it would be wonderful. In principle, it should be enough to do what you have already done and just post the code. In practice, I have never managed to run a piece of code.

    Actually, while I’m at it, here’s a question that I really shouldn’t have to ask. I have a reasonably recent model of Mac. If someone posts a piece of code in C and I want to run it on a given piece of data, what do I do? I don’t even know where to start. My experience with trying to find out the answer to questions like this from the internet is that people assume I know things I don’t know. For example, I haven’t the faintest idea what a shell script is (if it’s anything at all). I do know how to open a terminal, so that at least is something. But how do I create a file with the code in it and then run the code? Is C automatically there on my Mac or do I have to download it? If the latter, then how do I do it? I need instructions for utter idiots. (Also, I don’t know C, so it’s vital that I should be able to treat the code as a black box.)

    If I’m asking too much, then that would also be useful information.

    • Jason Dyer Says:

      You need Xcode, the Mac development environment, which supposedly comes on the same DVD the Mac software is on but I don’t believe gets installed by default.

      It will include the C compiler (GCC) that you can use for compiling the files, but since I’m not a Mac expert I don’t want to attempt exact directions.

      I will run the program through Alec’s sequence when I get home if Thomas hasn’t gotten to it yet.

    • Harrison Says:

      If you can’t find the DVD, it appears that you can get Xcode online, I think for free, by signing up for an online membership with Apple Developers. But from what I’ve read, it seems like the DVD is much easier and faster.

      gcc is kind of hard to understand at first, but luckily it’s very well-documented. If you don’t want to do anything hard, you can get away with knowing pretty much one flag, which is -o.

      For right now it’s a moot point, though; I got Thomas’ program up and running on my (Linux) machine yesterday, and I don’t have a class until this afternoon, so I can create the tables now (and I was planning on doing so anyway!)

    • Thomas Sauvaget Says:

      Sorry I didn’t see this message earlier: of course you can compile C code on your Mac, I was planning to write a tutorial actually next week-end. I’ve made a super-quick starting guide here, so as not to clutter the current thread:

      http://thomas1111.wordpress.com/2010/01/12/super-quick-tutorial-for-polymath5-users/

      Please do ask any further questions there.

  19. Harrison Says:

    I’ve created a HAP table for the length-1112 sequence and put it on its own page.

    • Harrison Says:

      Added a HAP table for the 1 = -2 = -3 sequence of length 470, and it’s sort of fantastic.

    • Thomas Sauvaget Says:

      Wow, indeed this 470 sequence is highly regular!

      (Great to see also that the tables in fact fit nicely on the wiki; it’s better than having them scattered over many blogs perhaps.)

    • Harrison Says:

      Thomas: Actually the wiki doesn’t much like having thousands of lines of HTML dumped onto a page; it gave me a warning saying that the page shouldn’t be over 32 kB since that makes it hard to edit. Fortunately, though, you can ignore it, although it may very well be an issue if we ever try to find long discrepancy-3 sequences (and create HAP tables…)

      But it is nice to be able to put the data in a central location.

  20. Sune Kristian Jakobsen Says:

    “Suppose that I want to pass to an example that is multiplicative. I can do this if for every finite multiset A of rationals there exists some rational r such that y_ry_{rab} = y_{ra}y_{rb} for every a,b in A. The reason is that if I have such an r for a given set A and I set z_a = y_{ar} / y_r, then z_{ab} = z_az_b for every a,b in A. Taking a collection of sets A whose union is all of \mathbb{Q} and passing to the limit of the corresponding z-sequences, we obtain an example that is multiplicative everywhere. ” - Tim

    Shouldn’t the “collection of sets” be an increasing sequence?
    I think this can be improved using your shift idea: If we can find an increasing sequence (A_n) of multisets of rationals, and two sequences (s_n) and (r_n) of rationals, such that the union of (A_n) is \mathbb{Q}, for each integer k there is an N such that for all n>N, k divides s_n, and for all a,b\in A_n we have y_{r_n-s_n}y_{r_nab-s_n} = y_{r_na-s_n}y_{r_nb-s_n}.

    • Sune Kristian Jakobsen Says:

      Instead of “for each integer k there is an N such that for all n>N, k divides s_n”
      I should have said:
      for each p and i, there is an N such that for all n>N, the power of p in s_n is at least i. The s_n‘s don’t have to be integers.

    • gowers Says:

      Yes it should indeed be an increasing sequence — that’s what I had in my head but I expressed myself sloppily.

  21. Mark Bennet Says:

    I’d be interested in what happened with constraints of the form 1=-2, 3=-5, or 1=-2, 3=-7 or 1=-2, 5=-7.

    Like Tim I use a Mac, and if there were a tutorial page on the Wiki for compiling and running some of the code on a Mac, that would help me. At present I use MSWord search and replace to do conversions, and I have some tables set up in MSExcel which do the subsequence analysis, but I am acutely aware of the limitations of these tools. They would be completely inadequate to investigate the C=3 case.

    It would also be interesting if there were some code posted which allowed for the input of particular constraints.

    • Alec Edgington Says:

      My code is so hacked that I’m almost embarrassed to post it, but I will do anyway. It’s not specially user-friendly, and the constraints are just given as a #define in the code itself, but I’ll be happy to provide assistance. At some point I should tidy it up!

    • Alec Edgington Says:

      OK, it’s there on the wiki, with some explanation. Let me know if you need any help getting it to work. (It’s only tested on Linux, though I’ve tried to ensure that it will compile on Windows.)

    • harrison Says:

      Alec, I realize that this was several weeks ago, but looking through the old threads I see you calculated the maximal length of a discrepancy-2 multiplicative sequence. I assume you wrote something to calculate that — it’d be a lot of work by hand, or even by Excel.

      What was your approach there? Do you happen to still have the source code on hand? I’d like to put a few hunches about multiplicative, quasi-multiplicative, and “almost” quasi-multiplicative sequences to the test, but I don’t think that depth-first search is the best way to examine those…

    • Alec Edgington Says:

      harrison, I do have the code to hand, and have put it on the wiki. It is a simple Python script, good enough for a complete search with C=2. Hope it’s useful. Let me know if you need any help with it.

  22. Alec Edgington Says:

    I have added an item to the ‘wish list’ for experiments on the wiki. I think it would be quite interesting actually to compute the Dirichlet series

    f(s) = \sum x_n n^{-s}

    corresponding to some of our long low-discrepancy sequences, and to see what this function looks like, particularly in the vicinity of 1 but also elsewhere. I imagine that one of the commercial mathematics packages could be persuaded to do something like this, and produce nice plots, fairly easily. I imagine too that there must be numerical techniques for analytic continuation, but have no knowledge of this.
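
    In case it helps someone start, the obvious numerical sketch is tiny (real s shown; for complex s one would simply pass a complex value, which numpy accepts; genuine analytic continuation is another matter):

    import numpy as np

    def dirichlet_partial(x, s):
        # partial sum of sum_n x_n n^{-s} for a finite +-1 sequence x
        n = np.arange(1, len(x) + 1, dtype=float)
        return np.sum(np.asarray(x) * n ** (-s))

    x = [1 if n % 3 != 2 else -1 for n in range(1, 1125)]   # toy input only:
    for s in (0.8, 0.9, 1.0, 1.1, 1.5, 2.0):                # use a wiki sequence
        print(s, dirichlet_partial(x, s))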

  23. obryant Says:

    A rough estimate would help me, an exact number would be nice, and a list of all of them (if there are a workable number) posted somewhere would be nicest:

    How many discrepancy 2 sequences are there of length 48 and 96?

    • obryant Says:

      …and how many of those are completely multiplicative?

    • Alec Edgington Says:

      To the first question: there are 8\,436\,986 of length 48 (counting x and -x as different). But 96 is well beyond the point where my computer runs out of memory.

      To the second question: there are 89 multiplicative discrepancy-2 sequences of length 48, and 119 of length 96 (and, if you’re interested, 304 of length 192).

    • Thomas Sauvaget Says:

      I’ve just added these numbers to the wiki ‘table of short sequences statistics’.

    • gowers Says:

      To get a rough estimate for Kevin’s question, how about randomly sampling sequences of length 48 that work and seeing how many extensions they have, on average, to sequences of length 96? It’s not immediately clear how one would do the initial random sample, but here’s a crude approach. You inductively build up a sequence as follows. Once you’ve chosen the first k bits of the sequence, you calculate the number of extensions to a sequence of length 48 if the next bit is a 0, and the number of extensions if the next bit is a 1, and you then choose the next bit with probabilities that match the ratio of these two numbers.

      Only a small number of samples would suffice to get a pretty accurate estimate for the number of sequences of length 96. I too would be interested to know what the average number of extensions per sequence of length 48 was, though Kevin sounds as though he has something more precise in mind.

    • Alec Edgington Says:

      I’m probably missing something, but couldn’t you just generate all the length-48 sequences and throw away a large (random) proportion of them before continuing? Or did you have in mind a sample that was biased in some way?

    • gowers Says:

      You could indeed do that. I didn’t realize that was feasible, but if you’ve got enough storage space then that is of course what you would do.

    • Alec Edgington Says:

      Yes, I run out of memory somewhere in the 50s, so that is feasible. I’ll give it a try and see what comes out.

    • gowers Says:

      Actually, even if you didn’t have enough memory, my suggestion would have been needlessly complicated. All you’d have to do is run through all the sequences of length 48 and for each one decide to store it with some probability p, and then for each stored sequence work out how many extensions there are. (All this is to deal with situations where you’ve got time to work out lots of sequences, but not the memory to store them.)

    • Alec Edgington Says:

      Yes, and it’s just occurred to me that this may be a good way to break the record: store as many sequences as you can at each stage, and throw away as many as you need to at the next.
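
      That is easy to prototype; a sketch (uniform random pruning under a memory cap is the whole idea; a cleverer version would prune non-uniformly and would carry the HAP sums along incrementally instead of recomputing them):

      import random
      random.seed(0)

      def can_extend(seq, v, C=2):
          # check every HAP sum that changes when v is appended at position n
          n = len(seq) + 1
          for d in range(1, n + 1):
              if n % d == 0:
                  s = v + sum(seq[i - 1] for i in range(d, n, d))
                  if abs(s) > C:
                      return False
          return True

      def pruned_search(N, cap=500):
          pool = [()]
          for n in range(1, N + 1):
              pool = [s + (v,) for s in pool for v in (1, -1) if can_extend(s, v)]
              if not pool:
                  return n - 1
              if len(pool) > cap:            # over the cap: keep a random sample
                  pool = random.sample(pool, cap)
          return N

      print(pruned_search(100))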

    • Alec Edgington Says:

      If I throw away about 99.99% of the length-48 sequences, keeping 638 of them, then from those 638 I can generate 1\,610\,844 of length 96. So the expansion factor seems to be of the order of 2000.

    • gowers Says:

      Maybe one could attempt to explain these observations by interpreting the logs of the numbers as the “amount of freedom” one has to choose the sequences — that is, a sort of entropy-type idea. And maybe one could come up with a heuristic argument that predicts how much freedom there should be.

      Re your very interesting proposal for breaking the record, perhaps it would also be good to look at popular beginnings for sequences and choose them with higher probability (the theory being that they should be easier to extend).

    • Klas Markström Says:

      With regards to the record I should mention that I still have a computer, about 200 CPU cores, working on that. I expect to either improve the record or find an upper bound on the length of the longest sequence for C=2.

    • Alec Edgington Says:

      I was wondering about such a heuristic argument too — and also in relation to completely multiplicative sequences. I guess the ingredients would be a probability distribution over the values from -C to C that was considered ‘stable’ (apart from a dependence on parity), and some reasoning about the distribution of the divisors of large numbers.

      The probabilistic idea sounds good. This could be seen as a refinement of Klas’ method, with ‘continuous assessment’.

    • obryant Says:

      I had two things in mind when I asked the question. Let R(N) be the number of sequences of length N satisfying some property P (for example, discrepancy at most 2 and completely multiplicative). If there are only finitely many infinite sequences (perhaps even none) satisfying P, then I would expect R(N) to increase rapidly for small N, then taper off, and then decrease. Of course, the arithmetic of N is in play, and I’ve no idea how to compensate for that. The shape of R(N) may even indicate (approximately) which N will have R(N)=0. This seems to be a quite plausible computation to get for discrepancy-2-completely-multiplicative, from the look of Alec’s numbers above. To handle “discrepancy 2”, it appears to me like it will be necessary to proceed by sampling, as Alec has done.

      Basically, I am curious as to why 1124 is such a sticking point. Is R(1125) much smaller than R(1124)?

      That was the first thing, this is the second. I haven’t been doing any coding for this, so forgive me if this is naive. I was contemplating the usefulness of the following “picture” from a computational angle. I’ll use the numbers for the completely multiplicative case for specificity, but I’m really thinking about the non-multiplicative case. First, pick one of the 304 length-192 sequences for the 1-HAP. That gives the first 96 colors along the 2-HAP, so there are only 119 possibilities. Whichever one we have, it has on average 304/119=2.6 possible extensions. If it has zero, then we can eliminate one of the 304 stubs. Otherwise, pick one for the 2-HAP. Now the 3-HAP is constrained at 192/3 positions in [1,192] and at 192/6 positions in (192,384], so again there are 2.6 extensions on average, but if there are zero we can eliminate a pair of stubs (not individually, but as a pair). Continuing, the 4-HAP is constrained at 112 positions, the 5-HAP at 93 positions, the 6-HAP at 122 positions, and so on.

      This will never tell us what to do with 193, but it does fill in quite a few blanks. It may even reveal that some of the 304 192-stubs can be eliminated from consideration. Is it true that each of the 96-stubs extends in 2 or 3 ways to a 192-stub, or are there a few that extend in many ways and many that extend in zero ways?
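
      For the completely multiplicative counts the computation is small enough to redo from scratch; presumably Alec’s script is something like this sketch (the one real simplification being that for a completely multiplicative x the sum along a d-HAP is x_d times a 1-HAP partial sum, so only the d = 1 sums need watching):

      def R(N, C=2):
          # completely multiplicative +-1 sequences of length N with all
          # partial sums x_1 + ... + x_k in [-C, C]
          spf = list(range(N + 1))           # smallest-prime-factor sieve
          for p in range(2, int(N ** 0.5) + 1):
              if spf[p] == p:
                  for q in range(p * p, N + 1, p):
                      if spf[q] == q:
                          spf[q] = p

          def go(x, n, s):
              if n > N:
                  return 1
              if spf[n] == n and n > 1:      # a prime: free choice of sign
                  total = 0
                  for v in (1, -1):
                      if abs(s + v) <= C:
                          x[n] = v
                          total += go(x, n + 1, s + v)
                  return total
              x[n] = 1 if n == 1 else x[spf[n]] * x[n // spf[n]]
              return go(x, n + 1, s + x[n]) if abs(s + x[n]) <= C else 0

          return go([0] * (N + 1), 1, 0)

      print([R(N) for N in (48, 96, 192)])   # should reproduce 89, 119, 304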

    • Alec Edgington Says:

      Regarding the growth of R(N): here is a plot of the logarithm of the number of multiplicative discrepancy-2 sequences of varying length. It demonstrates that in this case at least, there is no gradual reduction at the end, but a sudden collapse:

    • Thomas Sauvaget Says:

      Kevin,
      regarding multiplicative sequences with discrepancy C=2, I’ve now used Alec’s nice python script to list their number as a function of length on the wiki, and I’ve made this plot out of it http://thomas1111.files.wordpress.com/2010/01/multnumzoom.png
      It does increase monotonically at the beginning as you mentioned, but very soon some complicated behaviour appears. So if the same phenomenon occurs for other classes of sequences satisfying some property P (including perhaps classes to which the record 1124 sequences belong), then R(1125) could well be much smaller.

    • Thomas Sauvaget Says:

      Oops, while I was making my plot for this reply I didn’t notice Alec had just produced one too…

    • gowers Says:

      Those plots are very nice. It’s interesting that they have bits that are approximately linear (or seem to be). I mean linear in the log graph that is. That suggests that most of the time there are some local constraints you have to satisfy and the rest doesn’t matter, but just occasionally there is a crisis, perhaps at particularly smooth numbers or something like that, at which point several of the sequences are pruned. This picture is also supported by the fact that the two most obvious linear parts have roughly the same gradient.

    • obryant Says:

      If you look at a plot of R(3N)/R(2N) (for completely multiplicative, discrepancy at most 2 sequences), you see a lot of variation caused by the arithmetic of N, but basically it’s around 2 for N<40. That is, each 2N-stub gives birth to two 3N-stubs. For 40<N<68, it’s under 1/2. Most 2N-stubs are childless and the whole process nearly dies out: R(135)=44. Then it picks up dramatically, presumably with many “irrelevant primes” (the p,q trick in action?).

  24. gowers Says:

    This is partly something I was going to write anyway and partly a response to Alec’s 2:14pm comment. I decided to use an “intelligent greedy algorithm” to generate (by hand) a multiplicative sequence of discrepancy 2. By that I mean that each time I chose the value at a prime, I looked not just back but forwards too and tried to make the choice in a way that would avoid clusters of 1s or -1s. Of course, the top priority was to keep the partial sum between -2 and 2, but most of the time that didn’t force the decision.

    What was interesting about the little experiment was that the sequence it generated agreed with the best known multiplicative sequence for quite a long time, and when it started to disagree that was probably because I wasn’t looking far enough ahead. So I think an idea along these lines might give a quick algorithm for finding a long completely multiplicative example of discrepancy at most 3. As it stands, I don’t have a precise algorithm — I was just making the decision that seemed most sensible at each stage — but I think what I was doing could be developed into a precise algorithm with a bit of further thought.

    If I have a spare moment, I’ll try repeating the experiment but with a much longer track-forward and see if I can get anywhere near to the longest possible sequence by hand. If I can, then it will suggest that I have an efficient algorithm hiding in there somewhere.
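
    A crude mechanisation, to make “look-ahead” precise in at least one way (this is emphatically not the by-hand procedure above; the window, the scoring and the tie-breaking are all guesses): at each prime, simulate the forced composite values as far as the next prime under each sign, discard signs that break the bound, and otherwise pick the sign whose window ends nearest zero.

    def greedy_mult(N=500, C=2):
        spf = list(range(N + 1))             # smallest-prime-factor sieve
        for p in range(2, int(N ** 0.5) + 1):
            if spf[p] == p:
                for q in range(p * p, N + 1, p):
                    if spf[q] == q:
                        spf[q] = p

        x = [0] * (N + 1)
        x[1], s = 1, 1
        for n in range(2, N + 1):
            if spf[n] < n:                   # composite: the value is forced
                x[n] = x[spf[n]] * x[n // spf[n]]
            else:                            # prime: look ahead to the next one
                stop = next((m for m in range(n + 1, N + 1) if spf[m] == m), N + 1)
                def endsum(v):
                    # sum reached just before the next prime if x_n = v, else None
                    t = s + v
                    if abs(t) > C:
                        return None
                    for m in range(n + 1, stop):
                        x[m] = x[spf[m]] * x[m // spf[m]]
                        t += x[m]
                        if abs(t) > C:
                            return None
                    return t
                options = [(abs(endsum(v)), v) for v in (1, -1)
                           if endsum(v) is not None]
                if not options:
                    return n - 1, x          # stuck (no backtracking here)
                x[n] = min(options)[1]
            s += x[n]
        return N, x

    print(greedy_mult()[0])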

    • gowers Says:

      I have now repeated it, tracking forward up to 112 (after which my piece of paper ran out). By the end I had filled in all the primes up to 47 apart from 43, at which point the tracking forward didn’t seem to place enough constraints to force my choices. For all those primes I ended up making the right choices (that is, the only choices possible if you want a sequence that goes up to the maximum, as worked out by Alec). So I’m now convinced that we could get some interesting results by developing an algorithm along these lines and thinking about it theoretically. (The kind of theoretical thought I had in mind was to consider what it is that causes the algorithm to get into trouble and to estimate how often one would expect the corresponding difficult circumstances to arise.)

      Incidentally, when we do come to think more theoretically, I hope we will try to develop heuristic arguments based on assumptions about how many factors a typical number has: when the numbers get large, then almost all of them will belong to quite a few HAPs, and keeping the discrepancy down should be much harder — except that if we have good multiplicativity properties then the conditions that have to be satisfied will be far from independent.

    • gowers Says:

      I’ve now tracked further forward again. Interestingly, the first prime where I didn’t feel forced to choose as I chose was 71 — exactly as it should be since that’s the first prime that genuinely isn’t forced. Playing around with it, I have found many ways of forcing choices (a trivial example would be that if you have a run + + ? + + then the ? has to be minus, but parity makes quite a difference too), and I’m now pretty sure that I could come up with a maximal fully multiplicative example without using a computer.

      Although I’ve been sounding optimistic about finding a much longer multiplicative example with discrepancy 3, there is a difficulty to face, which is that a lot less will be forced. In the light of that, does one make arbitrary decisions? I think what I would probably do is try to avoid sums equal to plus or minus 3 as much as possible. In other words, I wouldn’t just be trying to keep within [-3,3] but would be trying to keep within [-2,2] unless I was forced out of it.

    • Jason Dyer Says:

      I’m still a little unclear on your hand algorithm: how far ahead is your “look-ahead”?

      I was thinking perhaps an optimal algorithm would not necessarily look ahead consecutively, but to the next “most difficult” number (whether that be the next prime p + 1 or determined by more complicated means).

    • Jason Dyer Says:

      One more brief thought. Perhaps a greedy algorithm could have two look aheads: one consecutive, and one for the next term with a large number of distinct factors.

    • Harrison Says:

      “Although I’ve been sounding optimistic about finding a much longer multiplicative example with discrepancy 3, there is a difficulty to face, which is that a lot less will be forced. In the light of that, does one make arbitrary decisions? I think what I would probably do is try to avoid sums equal to plus or minus 3 as much as possible. In other words, I wouldn’t just be trying to keep within [-3,3] but would be trying to keep within [-2,2] unless I was forced out of it.”

      I’ve thought about this and I agree that you want most of your sequences to have even smaller discrepancy than is necessary. But there are some gray areas; say you’re up to adding the 2*3*3331th element, and making that element a +1 would make the 3331-subsequence have discrepancy 3, but making it a -1 would take the partial sums of the 2- and 3-subsequences both to -2. (Ignore the other subsequences for now.) From the purely analytic point of view, I’d think that a -1 would be better, but intuitively speaking that’s crazy, since you’re looking at possibly forcing a lot of elements of the sequence — I mean genuinely forcing — a lot sooner. Of course, it’s possible that that sort of thing doesn’t come up in practice, but… unlikely. Especially if we move beyond discrepancy 3.

    • gowers Says:

      I’m not sure I fully understand what you mean. When my “algorithm” got to 2*3*3331 it would have already decided what sign to give it because it is composite.

      In answer to Jason’s question, my look-ahead is as far as I can be bothered to make it. Or to put that slightly differently, when I start to feel that my moves are not being sufficiently forced, I look a bit further ahead. In the most recent attempt, I decided that after each prime I would do all forced moves before looking at the next prime. Sometimes I would find that the value at 2p was forced by the values at surrounding (composite) numbers, and would use that to choose p when not all smaller primes had been chosen. It was quite surprising to see just how much was forced when I did this. (Another example of the kind of information you can use is that if you have a sequence – + + + starting at an odd number, then the partial sum up to but not including that segment of the sequence has to be zero: it can’t be odd, it can’t be -2 because of the initial minus, and it can’t be 2 because then the partial sum at the end would be 4. Come to think of it, all I needed for that argument to work was – + +.)

      Going back to Harrison’s message, I agree that there is a tension between trying to minimize the discrepancy so far and trying to maximize the choice that you will have later. There is also a tension between making the algorithm as clever as possible (e.g. looking ahead and making full use of all the information you can deduce from the later values) and making it easy to analyse theoretically. It may be that doing something a bit more stupid would lead to a function that grew more quickly but for which it was actually possible to prove an upper bound like C\sqrt{\log n} instead of just speculating that such a bound was true.

    • Harrison Says:

      Gah, somehow I managed to completely overlook the fact that your sequence was assumed to be multiplicative. OK, then my concern doesn’t strictly apply, although it’ll still probably make trying to construct good non-multiplicative sequences difficult.

      Your comment about “forcing a value from the surrounding numbers” gave rise to this thought, though, which is hopefully at least false for a less trivial reason:

      So you can lower-bound the discrepancy “locally.” For instance, if you can find an arithmetic subsequence whose terms have a nontrivial mutual common divisor, and this subsequence has K more plusses than minuses (or vice versa), then the full sequence has discrepancy at least K/2 - 1. This is very nice, since it lets us force things without having to compute a whole bunch of stuff.

      Question: Can you upper-bound the discrepancy “locally,” i.e. non-homogeneously (but able to be continued to a HAP)? Essentially, is there a dual argument to the lower-bound one? (Sorry if this is a bit incoherent, I haven’t managed to get nearly enough sleep the past couple of nights.)
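
      For the contiguous case at least, this local lower bound is easy to compute; here is a short Python sketch (mine, not anything posted in the thread). It uses the weaker but simpler fact that if a contiguous block of the HAP with difference d sums to K, then two partial sums of that HAP differ by K, so the discrepancy is at least \lceil |K|/2 \rceil.

        import math

        def segment_sum_bound(x):
            """Lower-bound the HAP discrepancy of a +/-1 sequence x
            (x[0] plays the role of x_1): any contiguous block of the
            HAP d, 2d, 3d, ... summing to K is a difference of two HAP
            partial sums, so the discrepancy is at least ceil(|K|/2)."""
            n, best = len(x), 0
            for d in range(1, n + 1):
                s, lo, hi = 0, 0, 0
                for m in range(d, n + 1, d):
                    s += x[m - 1]
                    lo, hi = min(lo, s), max(hi, s)
                best = max(best, math.ceil((hi - lo) / 2))
            return best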

  25. Gil Kalai Says:

    There is another way to introduce randomization: replace the problem with one in which the structure of the positive integers is replaced by something random but similar.

    Here is a suggestion. Suppose that for every n there is a counter K_n, and that for each i<n the number n affects the counter K_i with probability 1/i (independently). So the first thing we do is run over the positive integers and decide, by this random rule, which n affects the counter of which i.

    Next we want to assign a sign 1 or -1 to every n. At step n, the sign of n is added to all the counters K_i that n affects.

    The question: can we, with probability 1 (over the random choices), assign signs to the integers so that at every step n all the counters are bounded by a constant? Or perhaps with positive probability they are all bounded by 2? Or perhaps we can typically guarantee that all the counters K_i with i \le n are bounded by \log n? Or by \log\log n? Etc.
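
    To get a feel for the model, here is a small Python simulation (my sketch; the greedy signing rule is just a placeholder, not something proposed above):

      import random

      def greedy_counters(N, seed=0):
          """Simulate the random-counter model: each n affects the
          counter K_i, for i < n, independently with probability 1/i.
          The sign of n is chosen greedily to keep the worst affected
          counter as small as possible; returns the largest |K_i| seen."""
          rng = random.Random(seed)
          K = [0] * (N + 1)  # K[i] holds the counter K_i; index 0 unused
          worst = 0
          for n in range(2, N + 1):
              hits = [i for i in range(1, n) if rng.random() < 1.0 / i]
              up = max((abs(K[i] + 1) for i in hits), default=0)
              down = max((abs(K[i] - 1) for i in hits), default=0)
              sign = 1 if up <= down else -1
              for i in hits:
                  K[i] += sign
              worst = max(worst, min(up, down))
          return worst

      print(greedy_counters(2000))  # how does this grow with N?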

    • gowers Says:

      That sounds like a very interesting line of enquiry, which could lead to problems that were more tractable. However, I would expect the behaviour of this random model to be importantly different from the behaviour of the original problem. For example, in this case we would be greatly smoothing the number of “factors” of each integer n, whereas the original problem seems to be importantly affected by the fact that there are lots of primes, where we are relatively free to make choices. Of course, the primes get much sparser later on, but there is still much more variability in the number of factors of a random integer than there would be with these pseudofactors.

      Then again, perhaps one could incorporate that into the model. Probably quite a lot is known about the distribution of the number of factors of n. (I mean factors rather than prime factors, so I’m not talking about the Erdős–Kac theorem here.) So perhaps one could let n affect the counter K_i with a probability p(n), where 1/p(n) is distributed in the same way as the number of factors of integers around n.

  26. Sune Kristian Jakobsen Says:

    Two ideas, inspired by Gil’s idea of modeling the natural numbers with something random:

    The first one is a small modification of Gil’s model. For every prime p we have a counter K_p, and the number n affects the counter K_p with probability 1/p (of course we could also use a different number). We also have counters K_{p^i} for prime powers: n can only affect this counter if it affects K_{p^{i-1}}, and in that case it affects it with probability 1/p. Finally, we have a counter K_{p_1^{a_1}p_2^{a_2}\dots p_n^{a_n}} for each natural number, and n affects this counter if and only if it affects all of K_{p_1^{a_1}}, K_{p_2^{a_2}},\dots, K_{p_n^{a_n}}.

    A rather different approach is to reorder the natural numbers. For each prime p define L(p)>0 (think of this as the log of p) such that L is increasing and its values are linearly independent over \mathbf{Q}. Now define p_1^{a_1}p_2^{a_2}\dots p_n^{a_n}> p_1^{b_1}p_2^{b_2}\dots p_n^{b_n} if a_1L(p_1)+a_2L(p_2)+\dots +a_nL(p_n)>b_1L(p_1)+b_2L(p_2)+\dots+ b_nL(p_n). The HAP with length m and difference d then consists of the m smallest numbers (with respect to this ordering) that are divisible (in the usual sense) by d.
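
    As an illustration of the reordering (my sketch, under the specific assumption that L(p) is realised as \log p plus a tiny random jitter, which keeps L increasing for the primes in range and generically breaks rational linear relations; with a jitter this small the order rarely differs from the usual one, and less log-like choices of L would give more interesting reorderings):

      import math, random

      def factorise(n):
          """Trial division: return {prime: exponent} for n >= 2."""
          f, d = {}, 2
          while d * d <= n:
              while n % d == 0:
                  f[d] = f.get(d, 0) + 1
                  n //= d
              d += 1
          if n > 1:
              f[n] = f.get(n, 0) + 1
          return f

      def reordered_hap(d, m, cap=1000, jitter=1e-6, seed=0):
          """First m terms of the HAP with difference d under the
          reordering: multiples of d (only up to `cap`, so this is
          approximate) sorted by the sum of a_i * L(p_i) over the prime
          factorisation, with L(p) = log p + a small random jitter."""
          rng = random.Random(seed)
          L = {}
          def weight(n):
              total = 0.0
              for p, a in factorise(n).items():
                  if p not in L:
                      L[p] = math.log(p) + rng.uniform(0.0, jitter)
                  total += a * L[p]
              return total
          return sorted(range(d, cap + 1, d), key=weight)[:m]

      print(reordered_hap(3, 10))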

  27. Alec Edgington Says:

    In a different line of experimentation, I have done some elementary computations of the Dirichlet series \sum x_n n^{-s} for real s, where (x_n) is a discrepancy-2 sequence of length 1124 or 974. There is a formula for the abscissa of convergence that is valid as long as the series is not convergent for s=0 (as it isn’t): it is given by

    \limsup_{n \to \infty} \frac{\log |x_1 + \ldots + x_n|}{\log n}

    (http://eom.springer.de/d/d032920.htm; Hardy and Riesz, 1915). So if the partial sums are bounded, the abscissa of convergence ought to be zero. This seems to be the case, though the manner of convergence is interesting. These pictures show partial sums of the series, for 0.1 \leq s < 2, from the 100th partial sum onwards:

    As the partial sums of the sequence move between the points \{-2, -1, 0, 1, 2\}, the curves move between five areas of convergence, which tend slowly towards each other and towards the central one. (The process is clearer when you actually see the curves being successively plotted.) The central group of curves stays extremely well bounded, so this looks like a good way to compute the function: just sum up to those points where the partial sum of the sequence is zero. The function seems to tend to zero as s \rightarrow 0.
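
    For anyone who wants to reproduce this, the computation is simple; here is a sketch (mine, not the code actually used), which truncates the series at the last point where the partial sum of the sequence returns to zero:

      def dirichlet_at(x, s):
          """Evaluate sum_n x_n n^{-s} for a +/-1 sequence x (x[0]
          plays the role of x_1), truncated at the last index where
          the plain partial sum x_1 + ... + x_n returns to zero."""
          total, last_zero = 0, 0
          for n, v in enumerate(x, start=1):
              total += v
              if total == 0:
                  last_zero = n
          return sum(v * n ** (-s) for n, v in enumerate(x[:last_zero], start=1))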

    • Alec Edgington Says:

      (My last message is awaiting moderation, but I’ll post this follow-up now anyway.) The curious gaps in the two groups of curves corresponding to \pm 2 in the 974-sequence plot are explained by the fact that the partial sums of the sequence avoid -2 for a long stretch, and then avoid +2 for another long stretch.

    • Alec Edgington Says:

      A quick thought from last night: if f(s) = \sum x_n n^{-s} \rightarrow 0 as s \rightarrow 0, then \log f(s) here diverges. There are formulae for the logarithm of a Dirichlet series under absolute convergence, and even better ones for when the function is completely multiplicative. We don’t have either absolute convergence or complete multiplicativity, but it might be interesting to plug s=0 into these formulae and see what they suggest.

    • gowers Says:

      Back here I thought a bit about Euler product formulae in the case where the sequence is completely multiplicative. I got into a bit of a muddle, but the following experiment would interest me. Suppose we take one of the record multiplicative examples and form a corresponding Euler product using the values it takes at primes. And suppose we also form the corresponding Dirichlet series. That gives us two functions. In what region do they appear to agree with each other?

      It could be that this is not the best question to ask, particularly near s=0 (though it is probably too optimistic to hope for agreement round there anyway) since setting x_p=1 for a large prime leads to a big contribution from the term (1-x_pp^{-s})^{-1} in the product. Actually, I take that back: it should be OK down to s of order 1/\log n, since then p^{-s} will not be almost equal to 1. But it could still be that one would want somehow to attach less weight to the contribution of large primes to the Euler product, since they shouldn’t have much effect on the relevant part of the Dirichlet series.

      I’ve no idea what I expect to come out of such an experiment, but I do feel pretty curious about it, and I think the answer could be interesting. The fact that the Dirichlet series doesn’t converge absolutely doesn’t worry me too much: I’m hoping that the amazing properties of the sequence will cause there to be an extraordinary amount of cancellation that keeps it in bounds, as appears to be the case with your other series.
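
      A sketch of the comparison this experiment calls for (mine, not the code Alec ran), for a completely multiplicative \pm 1 sequence:

        import math

        def series_vs_product(x, s):
            """Compare the Dirichlet series sum_n x_n n^{-s} with the
            Euler product over primes p <= len(x) of (1 - x_p p^{-s})^{-1},
            for a completely multiplicative +/-1 sequence x (x[0] is x_1)."""
            n = len(x)
            series = sum(v * k ** (-s) for k, v in enumerate(x, start=1))
            product = 1.0
            for p in range(2, n + 1):
                if all(p % t for t in range(2, math.isqrt(p) + 1)):  # p prime
                    product /= 1.0 - x[p - 1] * p ** (-s)
            return series, product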

    • Alec Edgington Says:

      Here are the two functions (the blue curve is the Dirichlet series, the green curve is the Euler product), plotted from 0.5 to 2.0:

      Agreement seems very good for s \geq 1, less good in the region below.

      By the way, I looked up the formula for the logarithm of the Dirichlet series when the function is completely multiplicative. If we want it to diverge at zero, as seems to be the case for our functions, then we get a condition on this sum over prime powers:

      \sum_{n, n = p^m} \frac{1}{m} x_{p^m} = \pm \infty

      I’d like to do some calculations of this sum, and also of the corresponding sum in the general case (which involves the Möbius mu function), for our sequences.
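
      The prime-power sum is easy to truncate and compute; a sketch (mine, not Alec’s actual code):

        import math

        def prime_power_log_sum(x):
            """Sum of x_{p^m} / m over prime powers p^m <= len(x),
            the quantity displayed above (x[0] plays the role of x_1)."""
            n, total = len(x), 0.0
            for p in range(2, n + 1):
                if all(p % t for t in range(2, math.isqrt(p) + 1)):  # p prime
                    q, m = p, 1
                    while q <= n:
                        total += x[q - 1] / m
                        q, m = q * p, m + 1
            return total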

    • Alec Edgington Says:

      That plot was produced with the first maximal sequence produced by my program, so it may reflect a bias towards values of +1 at higher primes.

    • gowers Says:

      I feel pretty sure that this is an avenue worth exploring, but I’m not so sure what should be done next. Perhaps it would be interesting to compare the two if you truncate the Euler product by doing something like only using primes up to 67 (the point after which the series ceased to be forced).

      Incidentally, I’ve just successfully completed a hand-computation of a record-equalling multiplicative discrepancy-2 sequence. It’s been quite a good intuition-builder. For example, I have now felt for myself the (not very surprising) fact that if you’re assigning values to primes where the density of primes is quite high (as it is, say, between 100 and 110), then your moves are not at all forced, whereas in regions with a lower density they are.

      If I can face it, then there are two things I may do next. One is to write a wiki page where I show exactly what I’ve done, by displaying several stages of the process and explaining how they were generated. The other is to try to continue the process. I was forced to mess up by a long run of -1s at composite numbers. They were 242 (forced to be -1 because 2 goes to -1 and 121 is a perfect square), 243 (forced to be -1 as 3 goes to -1 and this is 3^5), 245 (49 is a square and 5 goes to -1), 246 (more subtle, but it just happens that 2, 3 and 41 all have to go to -1), 247 (it happens that 13 goes to -1 and 19 goes to 1), 248 (31 goes to 1 and 2 goes to -1).

      The bad news is that since the sum up to 241 is at most 1, this implies that the sum up to 248 is at most -4. In other words, as soon as the 2 barrier is broken, the 3 barrier is immediately broken as well. This is bad because it implies that if we want to find a super-long sequence of discrepancy 3, then a greedy algorithm will not work: we’d have done better to relax a little earlier on and allow a few 3s and -3s.

      So perhaps the thing to do is develop a more sophisticated algorithm still, one that takes note in advance of long runs of composite numbers (or something cleverer that takes account of how composite they are) and gets worried if they are all getting the same sign. Or perhaps one could do something that’s a bit closer to what I’ve actually been doing, which is to fill in all values that are implied by multiplicativity and to deduce as many inequalities as possible from those values, some of which will force decisions for composite numbers with large prime factors that had not yet had their values assigned.
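
      The purely mechanical half of that, filling in every value implied by complete multiplicativity, might look like this (a sketch of mine; the example signs are the ones mentioned above):

        def extend_multiplicatively(prime_signs, N):
            """Fill in x_n for n <= N from a partial assignment of signs
            to primes, using complete multiplicativity.  Entries stay
            None while some prime factor of n is still unassigned."""
            x = [None] * (N + 1)
            x[1] = 1
            for n in range(2, N + 1):
                p = next((q for q in prime_signs if n % q == 0), None)
                if p is not None and x[n // p] is not None:
                    x[n] = prime_signs[p] * x[n // p]
            return x

        # Signs forced in the hand-computation described above:
        signs = {2: -1, 3: -1, 5: -1, 13: -1, 19: +1, 31: +1, 41: -1}
        print(extend_multiplicatively(signs, 30))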

      On a completely different note, the fact that one does run into difficulties with long sequences of composite numbers would almost certainly have a bearing on any proof that multiplicative sequences have to have unbounded discrepancy. When the project gets more theoretical, I think this should be one of the first things we think about.

  28. gowers Says:

    I have added to the wiki the thought that one might use an idempotent ultrafilter to obtain from a hypothetical counterexample a function defined on the rationals that had extra properties. I have restrained myself from thinking about what those properties might be. The discussion can be found in the subsections entitled Method 3 and Using an Idempotent Ultrafilter on this page of the wiki.

  29. Thomas Sauvaget Says:

    I’ve written a new program to address some questions mentioned in the wish list about the growth of the discrepancy of completely multiplicative sequences and the relevant greedy algorithms. I’m planning to add data gradually over the next few days on the ‘multiplicative sequences’ page of the wiki. I shall also try to take into account Tim’s recent comments.

  30. obryant Says:

    Discussion has moved here:

    http://gowers.wordpress.com/2010/01/14/the-erds-discrepancy-problem-iv/
