This is an emergency post, since the number of comments on the previous post about Erdős’s discrepancy problem has become unwieldy while I’ve been away enjoying a bit of sunshine in Luxor. My head is full of amazing details from the walls and columns of ancient Egyptian tombs and temples. The main thing I didn’t see, or at least not until I finally saw them sticking up through the mist from my plane window yesterday morning, was the pyramids, since those are near Cairo. However, I didn’t feel too bad about that as my guide book, the Lonely Planet guide to Egypt, assured me that the pyramids never fail to disappoint (which was just one of many little gems of bad writing in that book).
For the first time for ages, I was completely away from all email and internet for a week, but just before I left, the results of the polls, as they then stood, about the next Polymath project were already suggesting that the Erdős discrepancy problem was the clear favourite, so I couldn’t help thinking about it a certain amount while I was in Egypt. I said that the process of choosing the next problem was not going to be fully democratic, but the fact is that I love the Erdős discrepancy problem and am currently somewhat grabbed by it, so I see no reason to go against such a clear majority, especially as it also has a clear majority of people who say that they are ready to work on it.
Now that I'm back, I see that the longest sequence yet found with discrepancy at most 2 in any arithmetic progression of the form $d, 2d, 3d, \dots$ has gone up to 1124, and is exhibiting some clear structure. So yet another argument for choosing this particular project is that it has in a sense already begun. I'd like to regard what is happening now as a kind of preliminary stage before the true launch. The fact that such long sequences with low discrepancy can exist, and the fact that they appear to have to have some structure, are two extremely helpful pieces of information, since they place very definite constraints on what any proof could look like. For example, it no longer seems all that likely that the true bound for the best possible discrepancy of a sequence of length $n$ is logarithmic in $n$, and even if it is, the proof will not be some easy induction if the bound is logarithmic to a very high base. And I don't think one can rule out the possibility that there exists an infinite sequence of bounded discrepancy: at any rate, it seems foolish to stop trying to find longer and longer sequences, since there is almost certainly more that we can learn from them.
There are a couple of terminological decisions I'd like to make collectively before we start properly. The main one is what we should call an arithmetic progression of the form $d, 2d, 3d, \dots$. I've never managed to come up with a good answer to this, and the lack of a good phrase for it is annoying when writing about the Erdős discrepancy problem, so any suggestions would be very welcome. [While writing this post, I found a description of the problem that called them homogeneous arithmetic progressions. That seems a pretty good solution to me, so perhaps we should go for that unless someone comes up with a clear improvement.] A more minor decision is how to refer to the problem itself. I think I favour EDP, since three-letter acronyms seem somehow right. The only argument against that is that we talked about DHJ rather than DHJP (or DHJT), but I think that is outweighed by the catchiness of EDP: ED just doesn't do it for me in the same way. A yet more minor decision is what number to give to this project. I'm pretty sure it should be polymath5 (following on from Terry Tao's polymath4, which was the search for a deterministic algorithm for finding primes). But before giving it that tag I thought I'd quickly check in case anyone thinks I'm wrong. Another decision I'd like to make, since not having made it clearly yet is another source of slight annoyance when I try to write about the Erdős discrepancy problem, is what counts as a positive answer. I'm not sure what Erdős actually conjectured, but I'd like to assume that the conjecture is that all sequences have unbounded discrepancy. Therefore, if I refer to a positive answer, that is what I mean, and a counterexample is an infinite sequence with bounded discrepancy. Another practical question: is there some way of displaying good examples so that their structure can be more easily seen? At the very least it might be good to put them in tabular form, perhaps with each row of some highly composite length such as 24. If I just see a long line of pluses and minuses and I want to know what the subsequence $x_d, x_{2d}, x_{3d}, \dots$ is like, then it is extremely tedious to find out. I don't know what WordPress's support for tables is like, though. If anyone can help me to display the sequence of length 1124 (and others) in a nice way, I'd be extremely grateful.
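In case it helps with the display question, here is a minimal Python sketch (my own, purely illustrative, not anything already posted in the thread) that prints a ±1 sequence in rows of 24 with a blank cell in position 0, which is roughly the tabular layout suggested above:

def print_table(seq, width=24):
    """Print a +/-1 sequence (seq[0] is x_1) in fixed-width rows.
    A blank cell is inserted for position 0, so that column j of row i
    holds x_{width*i + j}, which makes HAPs with small differences
    easier to scan by eye."""
    symbols = {1: '+', -1: '-'}
    cells = [' '] + [symbols[v] for v in seq]
    for i in range(0, len(cells), width):
        print(' '.join(cells[i:i + width]))

Applied to the 1124-term sequence this gives 47 rows of at most 24 symbols each.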
At some point reasonably soon, I plan to write a post that will describe various approaches to the problem that do not work, in the hope of stimulating a discussion about what kind of approach could conceivably work — which at the moment is not at all clear to me (at least if the conjecture is true). That post, rather than this one, will be the “official launch” of the project, and it is at that point that I shall start working seriously on the problem.
For the benefit of anyone who does not want to wade through well over a hundred comments on the previous post, here is a very quick summary of what we know now that we did not know when I put up the post. The main thing is that it is possible to have extremely long sequences of discrepancy at most 2 (meaning that the sum over any homogeneous arithmetic progression has modulus at most 2). But there is more to say. A simple observation that I mentioned in the previous post is that multiplicative sequences, by which I mean sequences $(x_n)$ such that $x_{mn}=x_mx_n$ for every $m$ and $n$, are good candidates for low-discrepancy sequences, since the discrepancy of such a sequence is bounded by the largest size of any of its partial sums, which sort of means that there is less to check.
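To spell out why (a one-line check rather than anything new): if $x_{mn}=x_mx_n$ for all $m$ and $n$, then the sum along the homogeneous progression with common difference $d$ factorizes as
$\sum_{k=1}^{K}x_{kd}=x_d\sum_{k=1}^{K}x_k$,
so its modulus equals the modulus of an ordinary partial sum, and the discrepancy of the sequence is exactly the largest modulus of a partial sum.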
Of course, that is not a very convincing argument, because the price we pay for having less to check is a huge constraint on the sequence: we are now free to choose its values only at primes and everything else is determined. And the initial experimental evidence quickly showed that multiplicativity was indeed too strong a constraint if one is looking for the longest possible sequence with a given discrepancy, since the longest sequence of discrepancy 2 is much longer than the longest multiplicative sequence of discrepancy 2. However, what the examples given in the comments on the previous post are suggesting is that a weaker form of multiplicativity may well still hold, perhaps in some approximate sense. We can characterize a multiplicative sequence as a sequence $(x_n)$ that starts with $x_1=1$ and has at most two distinct subsequences of the form $(x_d, x_{2d}, x_{3d}, \dots)$ (one of which will be minus the other — there will be exactly two unless the original sequence is 1 everywhere). The weaker property is simply that there should be a small number of distinct subsequences. The very long examples don't quite exhibit this, but they do show something a bit like it: if you look at the subsequences of the given form, then there are six beginnings that appear a lot. This has led to a suggestion that another way of building good examples is to take an Abelian group $G$, to choose a function $f:\mathbb{N}\to G$ such that $f(mn)=f(m)+f(n)$ for all $m$ and $n$, and to compose that with a function from $G$ to $\{-1,1\}$. In the light of what we have seen, a good choice for $G$ appears to be $\mathbb{Z}_6$. It turns out that if a $\pm 1$ sequence $(x_n)$ has only finitely many distinct subsequences $(x_d, x_{2d}, x_{3d}, \dots)$, then it must have a subsequence of this form.
It doesn't look as though the example of a sequence of length 1124 is of this form for the group $\mathbb{Z}_6$, but it certainly does seem that we can get some understanding of the sequence (which we do not yet fully have) by looking at it with $\mathbb{Z}_6$ very much in mind. If the conjecture is false, then it seems possible that to produce an infinite sequence of bounded discrepancy one would start with a long finite sequence produced with the help of a small group, and one would then introduce complications that would eventually be explained by the presence of a much larger group, and one would iterate this process. (That is intentionally vague — I do not know how to make it less so.)
Incidentally, the account of the problem I alluded to above, by Josh Cooper, gives it as an open problem whether a multiplicative sequence can have bounded discrepancy. I have some dim memory of being told by a number theorist once that it couldn't: does anyone know? The most obvious $\pm 1$-valued multiplicative sequence, the Liouville function, defined to be $-1$ raised to the power the number of prime factors of $n$, has partial sums that grow at a rate that is, I think, known to be at least in the rough region of $\sqrt{n}$, and at most at this sort of rate if and only if the Riemann hypothesis is true. [Added later: I now know that both these statements are correct.] But what about an arbitrary multiplicative sequence? We know from the Walters base-3 sequence that the partial sums can grow quite slowly, so perhaps this question really is also unsolved, in which case it would make an excellent auxiliary project: a strictly easier question that now appears to be highly relevant to the main question.
January 6, 2010 at 11:45 am |
“It doesn't look as though the example of a sequence of length 1124 is of this form for the group $\mathbb{Z}_6$”
I think it is worth noticing that if you pass to the subsequences $(x_{2n})$ or $(x_{3n})$ you'll get rid of many (if not all) the sporadic sequences.
January 6, 2010 at 12:38 pm |
Maybe someone can answer the following question, about which I do not yet feel entirely clear, though I might be able to work it out from a closer look at the relevant comments on the earlier post. Is the claim that there are just six (major) ways in which sequences of the form $(x_d, x_{2d}, x_{3d}, \dots)$ begin, or are we talking about the entire subsequences? If the former, how far does one typically go before the phenomenon stops and the sequences that start out the same become different?
The reason I ask is that I'm wondering whether one can argue that some kind of group structure is highly likely to appear. The rough reasoning would be as follows. If it is hard, or at least not very easy, to produce long sequences of discrepancy 2, then there are probably not all that many ways that such a sequence can start. But if that is the case, then quite a lot of what has been observed is fairly forced. For instance, here is a simple result: if there were a unique infinite sequence with $x_1=1$ and discrepancy 2, then it would be forced to be multiplicative (since any subsequence of the form $(x_d, x_{2d}, x_{3d}, \dots)$ would have to be either the original sequence or minus the original sequence). So it seems that in a certain sense, the more difficult it is to produce examples, the more those examples have to have multiplicativity properties, which suggests that if the conjecture is true, then the best examples should indeed have a multiplicative nature. So, oddly enough, the existence of these interesting examples could be taken as weak evidence that the conjecture is true.
January 6, 2010 at 1:22 pm
There look to be six major ways in which the subsequences can begin. These are
(d, code, length of subsequence, start of subsequence starting with position 1)
2, 2674, 562, -1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1
3, 1421, 374, 1, -1, 1, 1, -1, -1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, 1, 1, -1, 1
4, 1205, 281, 1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, 1, -1, 1
5, 2890, 224, -1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1
8, 2892, 140, -1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1
10, 1203, 112, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, -1, 1, -1, -1
The value of d is the smallest for which the particular sequence appears. The code is a binary one: convert -1 to 0 and read the earliest members of the sequence as the least significant digits (not sure this is the best coding, but it was what worked at the time).
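For anyone wanting to reproduce or decode those labels, here is a small Python sketch of the coding as I understand it from the description above (convert -1 to 0 and read the earliest terms as the least significant bits); encoding the first twelve terms of each row does give the codes listed, e.g. 2674 for the d=2 row:

def seq_code(prefix):
    """Encode a +/-1 prefix as an integer: -1 -> 0, +1 -> 1, with the
    earliest term as the least significant bit."""
    return sum(1 << i for i, v in enumerate(prefix) if v == 1)

def seq_decode(code, length):
    """Inverse of seq_code: recover a +/-1 prefix of the given length."""
    return [1 if (code >> i) & 1 else -1 for i in range(length)]

# seq_code([-1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1]) == 2674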
Working on longer subsequences is possible, but for higher values of d there are divergences at the tail end, and it is not yet possible to say whether these are necessary. If they turn out to be, the 6 sequences might become 12, for example, as each sequence diverges into two possibilities. Given that there are two choices at each stage, it is natural to expect a split into two.
The alternative is that the long sequence needs adjusting to extend it, and once adjusted the regular pattern will persist.
[eg the 15th value in the subsequence for d=56 is at position 840, and previous major extensions to the sequence have involved adjustments that far back]
The existence of the factor 3 in a process which seems to have such a clear binary nature intrigues me. It looks as though the subsequences for d=4 and d=10 have a lot in common, but they are different and behave differently, and diverge further.
One matter worth considering is whether early divergences in the case of d=prime which give rise to some of the sporadic effects eventually disappear (so that eventually the sequence becomes identical with some canonical one).
I am not sure that the sporadic elements do disappear on passing to “regular” subsequences – that is a conjecture, which would require proof.
January 6, 2010 at 1:57 pm
To answer your first point further.
Working with the 1124 sequence:
Subsequence difference 2 is the same as the negative of subsequence 3 up to position 71 [71 is a sporadic prime, and this is a significant point since this affects position 213 of the main sequence]
Subsequence 4 is the negative of subsequence 5 as far as it goes and 6 is the same as 5 up to and including position 175. (4 and 6 – cf 2 and 3), 9 is the negative of 5 up to position 105 and of 40 to position 27 (as far as it goes)
Subsequence 7 is the negative of subsequence 1 up to position 60 [corrected from previous comment]
Subsequence 8 is the negative of 10 up to position 80 and of 12 up to position 88 (cf relationships between sequences 4,5,6). It is the same as subsequence 15 up to position 73, and the negative of 27 up to position 40 (and the same as 1 up to position 47).
Subsequence 16 is the negative of sequences 20 and 24 (up to 43)
Most of these divergences are out beyond f(800) except for the one between 1 and 7 – but the main sequence and 7 are sporadic approximating the sequences for 8 and 10 respectively.
So, to answer your question, we could very well be looking at entire subsequences.
January 6, 2010 at 2:35 pm
Sorry, subsequence 8 diverges from the main sequence in positions 1, 7, 47, 49, 53, 61, 71, 73, 94, 98, 107, 112, 116
[I suggested above that the first divergence was at position 47]
These positions have a strong relationship with the values of d for sporadic sequences.
January 6, 2010 at 4:59 pm |
Thanks for all that information, which I am getting to grips with. Apologies if I repeat things that others have said, but let me do so in case it helps anyone else who might be reading this. Looking at those six sequence starts, one can get a cyclic structure on them by means of the operation of taking every third element. Denoting each sequence by its starting element, this gives us the permutation (2 5 8 3 4 10). From this it is clear that one should expect 2 and 3, 4 and 5, and 8 and 10 to form negative pairs.
It seems a good idea to try to factor this sequence into a map from $\mathbb{N}$ to $\mathbb{Z}_6$ and a map from $\mathbb{Z}_6$ to $\{-1,1\}$. Without loss of generality, we can map 3 to 1 (where 3 is a natural number and 1 is the generator of $\mathbb{Z}_6$). Of course, we also map 1 to 0. And 2 maps to 4, since the 2-sequence is obtained from the 3-sequence by multiplying everything by 3 three times, which translates into adding 1 three times to 1. For similar reasons, 5 maps to 5. By multiplicativity we then know that 4 maps to 2, 8 maps to 0 (which corresponds to Mark's observation that the 8-sequence is more or less the same as the original sequence) and 10 maps to 3. We can also get those more directly.
It would be nice if someone could produce a table of the values associated with each prime. Just to clarify what I mean here, for each prime p I'd look at the sequence $(x_p, x_{2p}, x_{3p}, \dots)$ and see which of the six fundamental sequences it corresponded most closely to. Then I'd define $f(p)$ to be the element of $\mathbb{Z}_6$ corresponding to that sequence. It would then be nice to write out the whole function that results from these assignments and multiplicativity (where again that is now the property $f(mn)=f(m)+f(n)$). I'm also intrigued to know whether it connects in any surprising way with Alec's hexagonal investigations.
One can also see what the map from $\mathbb{Z}_6$ to $\{-1,1\}$ is by looking at the initial values of the sequences numbered 3,4,10,2,5,8, which should also be the values taken at the first six powers of 3, with the caution that subsequences 8 and 10 differ from what they "should be" in the first position. This gives us the map that takes 1,2,3,4,5,0 to 1,1,-1,-1,-1,1. This is pretty close to what Sune Kristian Jakobsen was saying, though it doesn't seem to be quite identical.
Yet another question, which might be answerable by brute force. If we insist on producing a sequence by first mapping multiplicatively to $\mathbb{Z}_6$ and then composing that with the map just defined in order to get a $\pm 1$ sequence, then what is the best we can do? In particular, can we do substantially better than we can do with a multiplicative sequence? (We can do at least as well by just mapping everything to 0 and 3 in $\mathbb{Z}_6$, which will result in a multiplicative sequence.) Has anyone tried that?
It's also tempting to be slightly more general and compose a multiplicative homomorphism from $\mathbb{N}$ to the complex unit circle with the map that takes complex numbers with positive imaginary part to 1 and those with negative imaginary part to -1 (and takes 1 to 1 and -1 to -1). Here is a situation where working with the positive rationals might be rather nicer, since they form a group under multiplication, so we could talk about a character in the usual sense. The idea here would be that each sequence obtained by restricting to a homogeneous progression would be a sort of "rotation" of each other one, so if the original sequence was sufficiently well-distributed then we'd be completely done and have a counterexample.
January 6, 2010 at 5:55 pm
Here is a table of primes – the sporadic primes seem to behave quite like the regular ones, so I’ve allocated these to the sequence to which they seem to belong. The primes 47, 71, and 73 are conjectural. 7 is a potential counterexample to regularity.
2, 37
3, 17
4, 29, 41 [sporadic: 53, 61]
5, 31, 43, 67, 103 [sporadic ?47]
8, 11, 23, 79, 97 [sporadic: 1, 49, ?73]
10, 13, 19, 59, 83, 89, 101 [sporadic: 7, ?71]
I like the idea of working with the rationals and with characters.
January 6, 2010 at 6:29 pm
Let me tabulate that in a different way: f(2)=4, f(3)=1, f(5)=5, f(7)=3?, f(11)=0, f(13)=3, f(17)=1, f(19)=3, f(23)=0, f(29)=2, f(31)=5, f(37)=4, f(41)=2, f(43)=5, f(47)=5??, f(53)=2?, f(59)=3, f(61)=2?, f(67)=5, f(71)=3??, f(73)=0??, f(79)=0, f(83)=3, f(89)=3, f(97)=0, f(101)=3.
Now I'll put together a sequence that's the multiplicative function from $\mathbb{N}$ to $\mathbb{Z}_6$ that you get by assigning these values to the primes. I'll arrange it in rows of 10, to make it clearer what each number is the image of. I'll go up to 50.
0, 4, 1, 2, 5, 5, 3, 0, 2, 3
0, 3, 3, 1, 0, 4, 1, 0, 3, 1
4, 4, 0, 1, 4, 1, 3, 5, 2, 4
5, 2, 1, 5, 2, 4, 4, 1, 4, 5
2, 1, 5, 2, 1, 4, 5, 5, 0, 2
Next, I'll map that to $\{-1,1\}$ by mapping 0,1,2 to 1 and 3,4,5 to -1.
+ - + + - - - + + -
+ - - + + - + + - +
- - + + - + - - + -
- + + - + - - + - -
+ + - + + - - - + +
Encouragingly, the partial sums of this sequence are well-behaved. So are the sums along multiples of 2, 3, 4, 5 and 7, which implies (since that covers all six possible subsequences) that it gives us an example of a sequence that works up to 50 and for all I know will work quite a bit further.
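For readers who want to experiment with this recipe, here is a short Python sketch (mine, simply implementing the construction described in the last few comments: extend an assignment of $\mathbb{Z}_6$ values at primes to a completely multiplicative map, compose with the map 0,1,2 to +1 and 3,4,5 to -1, and report the largest HAP sum):

def z6_sequence(prime_values, N):
    """prime_values: dict {prime p <= N: element of Z_6}.  Extend it to a
    completely multiplicative g on {1,...,N} (g(mn) = g(m) + g(n) mod 6),
    then compose with 0,1,2 -> +1 and 3,4,5 -> -1."""
    spf = list(range(N + 1))                  # smallest-prime-factor sieve
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    g = [0] * (N + 1)                         # g[1] = 0, the identity of Z_6
    for n in range(2, N + 1):
        p = spf[n]
        g[n] = (g[n // p] + prime_values[p]) % 6
    return [1 if g[n] <= 2 else -1 for n in range(1, N + 1)]

def max_hap_sum(x):
    """Largest |x_d + x_{2d} + ... + x_{kd}| over all HAPs in the range."""
    N, worst = len(x), 0
    for d in range(1, N + 1):
        s = 0
        for m in range(d, N + 1, d):
            s += x[m - 1]
            worst = max(worst, abs(s))
    return worst

# The assignment tabulated above, truncated to the primes up to 50:
primes_to_50 = {2: 4, 3: 1, 5: 5, 7: 3, 11: 0, 13: 3, 17: 1, 19: 3, 23: 0,
                29: 2, 31: 5, 37: 4, 41: 2, 43: 5, 47: 5}
print(max_hap_sum(z6_sequence(primes_to_50, 50)))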
January 6, 2010 at 9:13 pm
I reckon that this scheme requires an adjustment (anomalous value) at position 112 (=7 x 16).
January 6, 2010 at 9:49 pm
And there seems to be an issue with the 7 sequence generally, though it is easy to get out to 182 by altering values at 112 and 122.
The scheme seems to have a long-term bias towards the value -1, so new primes have to come in at +1 more often than not. There also look to be issues with the 61 sequence.
The other previously sporadic primes look to be under control, but the multiples of 2 and 3 tend to blow up, suggesting that there will need to be some variation/anomalous values in these sequences.
January 6, 2010 at 11:38 pm
Tim, your sequence works all the way up to …, if we further set … and … .
January 7, 2010 at 8:21 am
One can get rather further with this type of sequence, as follows:
f(2) = 3; f(3) = 3; f(5) = 0; f(7) = 3; f(11) = 0; f(13) = 4; f(17) = 5; f(19) = 0; f(23) = 3; f(29) = 2; f(31) = 5; f(37) = 1; f(41) = 2; f(43) = 5; f(47) = 5; f(53) = 5; f(59) = 0; f(61) = 2; f(67) = 5; f(71) = 5; f(73) = 0; f(79) = 0; f(83) = 0; f(89) = 3; f(97) = 3; f(101) = 3; f(103) = 3; f(107) = 3; f(109) = 0; f(113) = 0; f(127) = 3; f(131) = 0; f(137) = 3; f(139) = 0; f(149) = 0; f(151) = 3; f(157) = 0; f(163) = 0; f(167) = 3; f(173) = 0; f(179) = 3; f(181) = 3; f(191) = 0; f(193) = 0; f(197) = 3; f(199) = 3; f(211) = 3; f(223) = 3; f(227) = 3; f(229) = 0; f(233) = 0; f(239) = 0; f(241) = 0; f(251) = 0; f(257) = 3; f(263) = 0; f(269) = 0; f(271) = 3.
However, one can't get much further than with a purely multiplicative sequence. In other words, with this particular map from $\mathbb{Z}_6$ to $\{-1,1\}$, factoring the sequence through $\mathbb{Z}_6$ is only slightly better than factoring through $\{-1,1\}$ itself.
January 7, 2010 at 8:52 am
Apologies — I made a mistake in my program. It’s quite possible you can get longer sequences. I’ll correct the program and try again!
January 7, 2010 at 9:01 am
It’s striking that there are large numbers of 3s and 0s there, and in particular that from a fairly early point on every single choice is a 3 or a 0. Do you have an explanation for that?
January 7, 2010 at 9:02 am
I was writing my comment before seeing your last one, so I’ll wait for the new data since it seems that my question may not apply after all.
January 7, 2010 at 9:41 am
Yes, that was my mistake: I thought that for primes $p$ greater than $\sqrt{N}$ (where $N$ is the limit of the search), it only mattered whether $p$ mapped to $+1$ or $-1$. Of course, this is only the case for primes above $N/2$.
January 7, 2010 at 11:40 am
Alec
For p greater than the square root of N, all the multiples of p up to N will also belong to earlier progressions, which are already determined. For p of any reasonable size these values will determine the class to which p belongs.
January 7, 2010 at 11:44 am
Mark, I don’t understand what you mean here. If, say, p=89, then in what sense is 178 “already determined”?
January 7, 2010 at 12:09 pm
My mistake – I was going back to the era of my thinking when the subsequences were ‘given’ rather than constructed.
It would, though, be possible for the value at 178 to be forced because it is in the 2 subsequence, which will be complete up to 176, and if the sum along the 2 subsequence to 176 is 2 or -2 the value at 178 will have to be -1 or 1 respectively.
January 7, 2010 at 12:18 pm
That's an interesting point, since it could in theory speed up the search for multiplicative examples: after choosing the values at all primes up to n, one would fill in all values that were forced by multiplicativity, and then search for further values that were forced because they lay at the end of a homogeneous AP where the sum was 2 or -2, followed by further values that followed from multiplicativity, and so on until the process stopped. And one would then have a free choice for the smallest prime that had not yet been filled in. It may be that Alec has already been doing something like this, but it may also not have been worth his while. Unfortunately, it seems less likely to be all that helpful for the more sophisticated going-through-$\mathbb{Z}_6$ examples.
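To make the propagation idea concrete, here is a rough Python sketch (my own interpretation of the procedure just described, not code anyone has actually run): given the values already chosen at some primes, it repeatedly fills in values forced by complete multiplicativity and values forced because they complete a homogeneous AP whose known sum has already reached +2 or -2.

def propagate(known, N, C=2):
    """known: dict {n: +1 or -1}, assumed to contain 1 -> 1.
    Apply two rules until nothing changes:
      (i) if a and b are known and a*b <= N, then a*b is known (x_{ab} = x_a * x_b);
     (ii) if along the HAP d, 2d, ... every term before the first unknown one
          is known and their sum is +C or -C, that unknown term is forced.
    Returns the enlarged dictionary (no consistency checking is attempted).
    One could also propagate downwards: if x_{ab} and x_a are known, so is x_b."""
    values = dict(known)
    changed = True
    while changed:
        changed = False
        for a in list(values):                      # rule (i)
            for b in list(values):
                ab = a * b
                if ab <= N and ab not in values:
                    values[ab] = values[a] * values[b]
                    changed = True
        for d in range(1, N + 1):                   # rule (ii)
            s = 0
            for m in range(d, N + 1, d):
                if m in values:
                    s += values[m]
                elif abs(s) == C:
                    values[m] = -s // C
                    changed = True
                    break
                else:
                    break
    return values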
January 7, 2010 at 12:32 pm
Tim
Looking at the prime 89 we need to know whether it maps to 0, 1 … 5. This is not determined by its first value, so if the value at 178 is forced – and some later values too, these may help, I think.
January 7, 2010 at 1:33 pm
“It's also tempting to be slightly more general and compose a multiplicative homomorphism from $\mathbb{N}$ to the complex unit circle with the map that takes complex numbers with positive imaginary part to 1 and those with negative imaginary part to -1 (and takes 1 to 1 and -1 to -1).”
I will call the above function from the complex numbers to $\{-1,1\}$ $s$ (s for sign).
Let's call a finite multiset $A$ with elements from the unit circle $C$-balanced (or just balanced) if for any $z$ on the unit circle the multiset $\{s(za):a\in A\}$ contains almost the same number of 1's and -1's (so the difference between the number of 1s and -1s is at most C). What can we say about balanced sets? It is easy to construct a set with sum 0 that is not balanced (choose i and a lot of numbers just below 1 and -1), but will every balanced set have sum close to 0?
Let's call a (finite or infinite) sequence $C$-balanced (or balanced) if for any n the multiset of the first n elements in the sequence is C-balanced. What can we say about balanced sequences?
The motivation for the definitions is that if we can find an infinite balanced multiplicative sequence, then s taken on this sequence is a counterexample to the conjecture.
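To spell the motivation out in one line (under the reading of "balanced" used above, i.e. balance under arbitrary rotations $z$): if $x$ is completely multiplicative with values on the unit circle, then for any difference $d$,
$\sum_{k=1}^{n}s(x_{kd})=\sum_{k=1}^{n}s(x_dx_k)$,
which is the number of 1's minus the number of -1's in the multiset $\{x_1,\dots,x_n\}$ rotated by $z=x_d$; so if the sequence is $C$-balanced, every HAP sum of the $\pm1$ sequence $s(x_n)$ has modulus at most $C$.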
January 7, 2010 at 9:44 pm
To answer my own question: The sum of the elements in a $C$-balanced set is at most $C$. Sketch proof: Assume WLOG that the sum is real and positive. Because the set is C-balanced, we can prove that the number of elements with positive real part is almost C greater than the number of elements with negative real part. So the sum of the real part of the element with (C+i)'th greatest real part and the real part of the element with i'th smallest real part is negative. So the sum of all the real parts is less than the sum of the C greatest real parts.
QED
This implies that if the partial sums of a sequence are unbounded, it cannot be balanced, and we cannot use it in the way Gowers suggested. So if we can prove that for any sequence that takes values on the complex unit circle and any C, the sequence has a HAP (homogeneous arithmetic progression) with a sum greater than C in absolute value (e.g. using the approach Gowers suggested here:
https://gowers.wordpress.com/2010/01/06/erdss-discrepancy-problem-as-a-forthcoming-polymath-project/#comment-4758 ) we would have to:
1) Use a different function from the complex numbers to {-1,1}, or
2) Let the group $G$ be a non-cyclic group, or
3) Abandon the idea of using a multiplicative function from N to a group G and compose it with a function from G to {-1,1}.
January 7, 2010 at 9:47 pm
“is almost C greater”:
“almost” should have been “at most”
January 6, 2010 at 6:20 pm |
I don’t know how useful this might be as a means of visualising a sequence, but perhaps this will inspire some further ideas:
Click to access fibspiralKlas717.pdf
The number n is plotted at radius Sqrt[n] and angle n*phi, where phi = (Sqrt[5] – 1)/2. The colouring is based on Klas Markström’s sequence of length 717 from https://gowers.wordpress.com/2009/12/17/erdoss-discrepancy-problem/#comment-4646
Red is +1, blue is -1. I’ve also drawn on spirals which hit the numbers d*n, for the cases d=5 and d=83. For d=5 and near the center it is easy enough to follow along and tally the sums. Further out, and for d=83 it isn’t so appealing.
I can tidy up and post the Mathematica code if anyone is interested.
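For anyone without Mathematica, here is a rough Python/matplotlib analogue of the plot described above (my own sketch: the layout, with radius $\sqrt{n}$ and angle proportional to $n\phi$ where $\phi=(\sqrt5-1)/2$, follows the description; the input is assumed to be a ±1 list indexed from 1):

import math
import matplotlib.pyplot as plt

def spiral_plot(x, fname="spiral.png"):
    """Plot n = 1..len(x) at radius sqrt(n) and angle 2*pi*n*phi,
    coloured red for +1 and blue for -1."""
    phi = (math.sqrt(5) - 1) / 2
    xs, ys, cs = [], [], []
    for n, v in enumerate(x, start=1):
        theta = 2 * math.pi * n * phi
        xs.append(math.sqrt(n) * math.cos(theta))
        ys.append(math.sqrt(n) * math.sin(theta))
        cs.append("red" if v == 1 else "blue")
    plt.figure(figsize=(6, 6))
    plt.scatter(xs, ys, c=cs, s=8)
    plt.axis("equal")
    plt.axis("off")
    plt.savefig(fname, dpi=150)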
January 7, 2010 at 9:56 am
I like the idea of representing a sequence as a spiral, as it may show up interesting patterns. However, I suggest that for normal posts of interesting sequences a format that is easily cut-and-pastable into a program (for analysis) would be best. I’d hope that WordPress tables would be fine for this; otherwise (or if tables prove to be a lot of work to enter), just a convention of putting line breaks after every 24 (or 30) numbers, and ensuring that the numbers align in a fixed-width font (for example, by inserting plus signs as well as minus signs), would work.
January 6, 2010 at 6:42 pm |
In combinatorial optimization EDP refers to the well-known edge-disjoint paths problem.
January 6, 2010 at 8:48 pm |
I think you are right that this should be polymath5. Polymath3 is scheduled to start in April of 2010 according to http://gilkalai.wordpress.com/2009/12/08/plans-for-polymath3/#comment-2244.
January 6, 2010 at 8:59 pm |
“Homogeneous arithmetic progressions” seems to be the accepted term for arithmetic progressions containing 0, see for example this paper on quasi-arithmetic progressions:
Click to access v15i1r104.pdf
(It’s possibly a relevant paper to our problem, since it proves the conjecture false in the case of quasi-arithmetic progressions.)
January 7, 2010 at 3:33 pm
Just to add to the terminology confusion, here's another source that states: an arithmetic progression is homogeneous if it is of the form $\{ld, (l+1)d, \dots, (l+k)d\}$.
Click to access 0811.1311v2.pdf
January 6, 2010 at 9:06 pm |
[…] The topic has been chosen for polymath 5. it is Erdős’s discrepancy problem. See here for more […]
January 6, 2010 at 10:34 pm |
Hi, just dropping by to confirm that to my knowledge, Polymath5 is not “reserved” by any other project.
Granville and Soundararajan have made some study of the discrepancy of bounded multiplicative functions. The situation is remarkably delicate and number-theoretical (and is closely tied with the Granville-Soundararajan theory of pretentious characters). See for instance http://arxiv.org/abs/math/0702389 .
I am unfortunately so swamped with other work right now that I will probably not be able to contribute in any meaningful manner for at least a month, but I certainly give this project my moral support ;-). Also, one should open up a page for this project on the wiki as soon as possible, it seems like there is a lot to put on there already. I hope someone else other than myself will be willing to take the initiative to do this, otherwise it won’t get done for ages…
January 7, 2010 at 1:23 am
Thanks for that reference, and also for reminding me about setting up a wiki page, which I had (slightly surprisingly, given what an integral part it was of DHJ) not thought about. I’ll get that done before the project starts properly. If nothing else, it would be good to have a place with some interesting and well-displayed sequences, and other experimental data too.
January 7, 2010 at 7:56 pm
It may also be good to try to have a parallel discussion thread to accompany the research thread, as I can imagine that there are going to be a number of meta-issues to discuss which will involve people (like myself) who are not going to be able to follow the flurry of activity on the research thread.
January 7, 2010 at 2:47 am |
In the U.S., there are an insane number of commercials advertising medicines to treat “ED” (google it if you aren’t sure what it is!). So many, in fact, that I suspect we might find trouble with spam filters, and I certainly couldn’t tell anyone that I was working on “ED”.
I vote for “EHD” as the name for the problem, as in Erdos’ Homogeneous Discrepancy problem.
January 7, 2010 at 11:07 am
That has the added feature that EHD is sort of a bit like an echo of DHJ backwards.
January 7, 2010 at 9:43 am |
Is it really more natural to look at the discrepancy in the positive integers than in the nonnegative integers? Including 0, of course, only changes the discrepancy by at most 1, but it doesn’t always change the discrepancy at all. This means that the extremal sequences will be different, and may exhibit more structure. Or less, or the same.
January 7, 2010 at 10:08 am
I wondered about this too. In particular, it could be worth investigating sequences of discrepancy at most $2$ in this arena, as they may reveal different structures.
January 7, 2010 at 10:44 am
Do you mean that you want to look at APs which begin $1, d, 2d, \dots$?
January 7, 2010 at 10:46 am
No: APs that begin at zero.
January 7, 2010 at 11:16 am
Interesting. This changes the parity considerations, which seem to have a powerful effect.
Assume C=2. Suppose we have a finite sequence f(n) starting with f(0)=1. Take g(2r) = f(r) and try to fill in the gaps. The d=2 sequence is under control, and the d=1 sequence can always be controlled (by parity).
Thus it remains to manage the other sequences, for which there is some inherent flexibility.
January 7, 2010 at 11:15 pm
If one asks for discrepancy at most $2$ with respect to all zero-based HAPs, one is much more constrained. I believe the longest attainable sequence has length $84$ (in other words, one can't get beyond $x_{83}$, counting from $x_0$). Here is an example of a maximal sequence:
+1, +1, -1, +1, -1, +1, -1, -1, +1, -1, -1, +1,
+1, -1, +1, -1, -1, +1, +1, -1, +1, +1, -1, +1,
-1, -1, +1, -1, -1, +1, +1, -1, +1, -1, -1, +1,
-1, -1, +1, +1, -1, +1, -1, +1, +1, -1, -1, +1,
+1, -1, +1, -1, -1, -1, +1, -1, +1, +1, -1, +1,
-1, -1, +1, +1, -1, +1, +1, -1, +1, +1, -1, -1,
-1, +1, +1, -1, -1, +1, -1, -1, +1, +1, -1, +1
The set of such sequences is small, so it may be interesting to analyse it for symmetries.
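Here is a small Python backtracking search in the spirit of what must lie behind these computations (my own sketch, not Alec's actual program): it extends a ±1 sequence $x_0,x_1,\dots$ one term at a time, rejecting any extension that pushes some zero-based HAP sum outside $[-C,C]$, and records the longest sequence found.

def search_zero_based(C=2):
    """Depth-first search for long +/-1 sequences x_0, x_1, ... with
    |x_0 + x_d + x_{2d} + ... + x_{kd}| <= C for every d >= 1 and k >= 0.
    Returns the maximal length reached and one witness of that length."""
    best = {"len": 0, "seq": []}
    seq = []

    def ok(n):
        # only HAP prefixes ending at position n need rechecking (d divides n)
        for d in range(1, n + 1):
            if n % d == 0:
                s = seq[0] + sum(seq[m] for m in range(d, n + 1, d))
                if abs(s) > C:
                    return False
        return True

    def extend(n):
        if n > best["len"]:
            best["len"], best["seq"] = n, seq[:]
        for v in (1, -1):
            seq.append(v)
            if ok(n):
                extend(n + 1)
            seq.pop()

    extend(0)
    return best["len"], best["seq"]

This naive version is only practical for small C; it is meant as an illustration of the constraints rather than as a serious search tool.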
January 8, 2010 at 12:52 am
Well that one you’ve put up is certainly quite interesting. Here are the first twelve multiples (starting at 0) of 1, 2, 3, 4 and 5:
+1 +1 -1 +1 -1 +1 -1 -1 +1 -1 -1 +1
+1 -1 -1 -1 +1 -1 +1 +1 -1 +1 +1 -1
+1 +1 -1 -1 +1 -1 +1 +1 -1 -1 +1 -1
+1 -1 +1 +1 -1 +1 -1 -1 +1 -1 -1 +1
+1 +1 -1 -1 +1 -1 -1 +1 -1 -1 +1 -1
If you compare these, you find that the multiples of 1 and 4 are very similar, though not quite identical, and the same goes for the multiples of 2, 3 and 5, which are similar to minus the multiples of 1 and 4. In short, the sequence is struggling to be multiplicative, but not with complete success.
In fact, the sequence is pretty close to the Walters example: if you look at integers that are 1 mod 3 then their images are almost all -1, while those that are 2 mod 3 have images that are almost all 1. Similarly, numbers that are 3 mod 9 almost all go to 1 and numbers that are 6 mod 9 almost all go to -1. (In fact, they all do this for a long time but the pattern breaks unexpectedly towards the end.)
January 8, 2010 at 8:59 am
Potentially, at least, there is a convincing argument for why it is so much harder to find sequences if you define HAPs to start at 0. If, as the experimental evidence is suggesting, we are getting a group structure out of the sequences, and if that group has even order, then we should expect there to be some d such that the values of $(x_n)$ are minus the values of $(x_{dn})$. If we really did have two sequences that were exact negatives of each other, then in order to get discrepancy at most 2 with zero included, we need to confine the partial sums of that sequence, if we don't start at zero, between -1 and 3, and also between -3 and 1. In other words, we need to obtain a $C=1$ sequence in the old sense. Of course, what actually happens is that the sequences aren't exact negatives of each other near the beginning, but it's at least believable that including 0 is going to create difficulties.
January 8, 2010 at 10:30 am
The first thing to note about the … maximal sequences of zero-based discrepancy 2 is that they are quite similar! In fact, modulo negation of the whole sequence, they only differ at the points: … In particular, fixing … forces the sequence up to … .
January 8, 2010 at 11:25 am
The second thing to note is that we have complete freedom to choose the values at …, …, … and … . With these values set, there are … possible sequences, which differ at the points: …
January 7, 2010 at 10:26 am |
Fix an odd prime p, and let f map $\mathbb{Z}_p$ into $\{+1,-1\}$, so that $f(n+p)=f(n)$ if we regard f as a function on the integers. Define the cyclic discrepancy $\delta_p(f)$ to be the maximum of $\left|\sum_{k=1}^{n}f(kd)\right|$ (with $kd$ reduced mod p), with the maximum being taken over all n, d in $\mathbb{Z}_p$. Let $\delta_p$ be the minimum of $\delta_p(f)$ taken over all functions f.
Now take an f so that $\delta_p(f)=\delta_p$ and so that the worst discrepancy is along the $d=1$ progression.
Then we can make a sequence $F$ over the naturals by setting $F(n)=f(n\bmod p)$ if n is not a multiple of p, and $F(n)=F(n/p)$ if n is a multiple of p. This sequence will have logarithmic discrepancy. In particular, it should have discrepancy around $\delta_p\log_p N$.
So then, what is the smallest value of $\delta_p/\log p$? If this quantity goes to 0, then we probably have accomplished (using a compactness argument, but I haven't done it) a sub-logarithmic discrepancy. My intuition is that it achieves its smallest value at $p=3$, unfortunately.
January 7, 2010 at 11:05 am
We investigated something pretty like this here and in the surrounding comments. It seems that things are definitely not optimized at $p=3$. (In the example we looked at there, we had an example with $p=211$ and a fairly small cyclic discrepancy, achieved with a multiplicative function $f$.)
January 7, 2010 at 2:31 pm
Ahh, wonderful. That’s exactly what I was suggesting.
I think the analysis of the example there is flawed though, and doesn't have such a small cyclic discrepancy. For example, the progression with difference 209 isn't automatically handled by the way we truncated and extended the sequence, so the discrepancy could be much larger.
Specifically, with p=11 and the seed sequence
f = + + - - - + - + - +
(which has normal discrepancy 3 but cyclic discrepancy 4), we get an F sequence with F(9), F(18), F(27), F(36) being f(9), f(7), f(5), f(3), which are all -. Also F(2), …, F(12) shows a new difficulty.
Am I missing something about that example? It doesn’t seem that it coming from a completely multiplicative f is relevant, since that property won’t survive the truncation and periodic extension.
January 7, 2010 at 2:51 pm
Yes you're right: I stupidly didn't spot that mod-211 multiplicativity was needed, which is a much stronger property.
My instinct is now the same as yours. I think $\delta_p$ probably has to be around $\sqrt{p}$, so one has to look at small $p$ only. The way I would attempt to prove that would be to find a Fourier coefficient of about that size and then do some averaging tricks to get from a trigonometric function to a progression mod p. (I'm not using multiplicativity, but I don't think I need to.)
January 8, 2010 at 4:43 am
Unfortunately, van der Waerden numbers grow too rapidly.
January 10, 2010 at 8:59 am
Claim: $\delta_p\gg p^{1/4}$. Proof: By Roth's lower bound for the nonhomogeneous discrepancy, there is some AP with discrepancy at least $cp^{1/4}$. Since any AP in $\{1,\dots,p\}$ is a sub-AP of a homogeneous mod-p AP (with the same difference), we get $\delta_p\gg p^{1/4}$.
January 7, 2010 at 12:21 pm |
I have now started, in a very small way, a wiki page for Polymath5. In due course I shall add to it, but anybody else is of course welcome to do so as well. (You have to register, as we’ve had problems with spam, but the registration process is easy.) It would be particularly good to have nice tables of some of the sequences that people have been producing.
I’ve put a link to the wiki from the main page of this blog, so you don’t necessarily have to find this comment again if you want to visit the wiki.
January 7, 2010 at 3:49 pm
Mainly because I had to keep digging for the link, I added an "annotated bibliography" stub with a link to our (as of the moment) most important paper.
As a gentle request, could we keep the bibliography annotated? One of the issues with the bibliography of polymath1 is that it started to become unclear what each particular reference was used for.
January 7, 2010 at 2:45 pm |
A couple of thoughts about multiplicative sequences and the like. First, let me try to describe a not fully precise algorithm for producing good sequences: the idea is that it would be something to investigate theoretically rather than use as a basis for computational experiments, though perhaps the latter could be done too.
To choose a multiplicative sequence, one just chooses its values at each prime. But might there be a sort of semi-greedy algorithm for doing this? The idea would be that, having chosen the values at primes $p_1,\dots,p_k$, which are not necessarily the first k primes, one would then have a look at everything that is implied by multiplicativity, try to identify "areas of danger" and make further choices to alleviate the danger. For example, if one found an interval that contained many more 1s than -1s, one might choose $p_{k+1}$ in such a way that a multiple of $p_{k+1}$ lay inside that interval, and choose the value at $p_{k+1}$ so that the value at the multiple would be -1. At any one moment there might be quite a number of competing constraints, but for large n the constraints would be quite weak (because the set of places where the values had been chosen would be quite sparse). It seems to me at least possible that some cleverly designed procedure could produce multiplicative sequences that do better than the logarithmic growth rate.
To turn that thought into an algorithm, one would need to think carefully about how to ascribe danger levels to intervals. (An extreme example would be something like that if $p$ is a prime whose value has not yet been chosen and your choices so far imply that f(p-1)=f(p-2)=f(p+1)=f(p+2)=1, then there's a super-dangerous interval {p-2,p-1,p,p+1,p+2} containing p, so we're forced to choose f(p)=-1. But the idea of this quasi-algorithm is to start getting anxious before one's moves are completely forced. Erdős himself had some results about game strategies based on this kind of idea: one could perhaps attach to each unspecified point in the sequence a number that measured the "pressure" that number felt to be assigned a value, where the sign of the number would tell you whether the value should be 1 or -1. Then one would try to choose values that relieved as much pressure as possible. Forced moves would correspond to infinite pressure, or perhaps just pressure that was so large that it swamped everything else.)
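Here is one crude way to turn the semi-greedy idea into code (entirely my own toy interpretation, with an ad hoc scoring rule; nothing here is part of the proposal above): primes are taken in increasing order, both signs are tried at each one, the consequences are filled in by multiplicativity, and the sign leaving the smaller maximum |partial sum| is kept. Since for a completely multiplicative sequence the discrepancy equals the largest partial sum, scoring partial sums alone is legitimate; the "pressure" refinement would replace this one-step lookahead with something cleverer.

def greedy_multiplicative(N):
    """Toy semi-greedy choice of a completely multiplicative +/-1 sequence."""
    x = [0] * (N + 1)          # 0 means "not yet determined"
    x[1] = 1

    def extend(values):
        """Fill in every position determined by complete multiplicativity."""
        y = values[:]
        for n in range(2, N + 1):
            if y[n] == 0:
                for d in range(2, int(n ** 0.5) + 1):
                    if n % d == 0 and y[d] != 0 and y[n // d] != 0:
                        y[n] = y[d] * y[n // d]
                        break
        return y

    def score(values):
        worst = s = 0
        for n in range(1, N + 1):
            if values[n] != 0:          # undetermined positions are skipped
                s += values[n]
            worst = max(worst, abs(s))
        return worst

    for p in range(2, N + 1):
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):   # p is prime
            trials = []
            for v in (1, -1):
                x[p] = v
                trials.append((score(extend(x)), v))
            x[p] = min(trials)[1]
    return extend(x)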
The other idea is that we might be able to get somewhere by aiming first for a weaker, but quite natural, target. The way discrepancy is measured, there is a sharp cutoff: a homogeneous AP reaches the end and then suddenly stops. One could smooth this out as follows. Instead of looking at homogeneous APs, let's look at functions of the following form. You pick a constant s>0 and a positive integer d. You then define f(nd) to be $n^{-s}$ and set f to be 0 everywhere else. Finally, you define the discrepancy of a $\pm 1$ sequence $(x_n)$ with respect to f to be the sum $\sum_nf(n)x_n=\sum_nn^{-s}x_{nd}$. One can then ask, for a given sequence $(x_n)$, what is the largest possible discrepancy with respect to functions f of the above type for fixed s and varying d. (Here, s relates to something like the length of the homogeneous AP. We think of s as quite small, so $n^{-s}$ starts to get small only when n is exponentially large in 1/s: that is, we can think of the length as something like exp(1/s).)
If $(x_n)$ is a multiplicative sequence, then the function $F(s)=\sum_nx_nn^{-s}$ has all sorts of nice properties. Can it be bounded near zero? If not, then I think partial summation shows that the partial sums of $(x_n)$ are unbounded too, in which case the conjecture would be proved for multiplicative functions. But I think it's quite likely that it can be bounded: if one smooths the progressions like this, then the occasional "accident" can be compensated for, whereas with ordinary homogeneous progressions every accident is fatal.
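For what it is worth, here is the Abel-summation identity behind that remark, spelled out just so that the direction of the implication is clear (standard bookkeeping, nothing new): writing $S(t)=\sum_{n\le t}x_n$, for any $X\ge1$ and $s>0$ we have
$\sum_{n\le X}x_nn^{-s}=S(X)X^{-s}+s\int_1^XS(t)t^{-s-1}\,dt$,
so if $|S(t)|\le M$ for all $t$ then $\bigl|\sum_{n\le X}x_nn^{-s}\bigr|\le M+sM\int_1^{\infty}t^{-s-1}\,dt=2M$ for every $s>0$. Contrapositively, if the smoothed sums are unbounded as $s\to0$ then the partial sums must be unbounded, which is the direction used above.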
January 8, 2010 at 7:25 am
If $(x_n)$ is the indicator function of the set A of positive integers, then $\lim_{s\to1^+}(s-1)\sum_{n\in A}n^{-s}$ is exactly the logarithmic density of A:
$\lim_{s\to1^+}(s-1)\sum_{n\in A}n^{-s}=\lim_{N\to\infty}\frac{1}{\log N}\sum_{n\in A,\,n\le N}\frac1n.$
This leads me to wonder if the meaning of "the occasional accident can be compensated for" may actually be that the density of 1's in each progression is 1/2. That is, if we let s go to a pole of the generating function, we lose all information except density. Maybe something more can be salvaged, though, doing as you suggest and taking a fixed s.
January 7, 2010 at 5:55 pm |
Continuing with the second of the above themes, an obvious thing to do would be to write $\sum_{n=1}^{\infty}x_nn^{-s}$ as the Euler product $\prod_p(1-x_pp^{-s})^{-1}$. We want that to be bounded. The whole thing is a little bit strange because we don't have absolute convergence, so we're relying heavily on cancellation. But that is not too worrying when in a sense the original problem is about cancellation.
I'm not sure whether my intuition is correct here, but it also looks as though there needs to be a fairly heavy bias towards -1 as the value at primes. That's because when s is small then $(1-p^{-s})^{-1}$ is extremely large.
And now I'm starting to wonder whether the product formula itself is correct: in the absence of absolute convergence it is not as obvious as I unthinkingly took it to be. And it worries me because it seems as though to make the function bounded my best chance is trivially to take $x_p$ to be -1 for every prime p, in which case we end up with the Möbius function.
Formally speaking, $\sum_n\mu(n)n^{-s}=1/\zeta(s)$, but this isn't valid near s=0. If we pretend it is, then since $\zeta(0)=-\tfrac12$ the sum tends to $-2$ as $s\to0$, which suggests that the Möbius function does in fact give an example for boundedness in this weaker sense.
So there's a wildly non-rigorous and not obviously correct argument that it is indeed easier to get boundedness when you smooth off the ends of the homogeneous APs. Maybe some number theorist out there can go through the previous paragraphs and tell me what I ought to have said. If by a miracle the argument is correctable, then it doesn't help much, except to demonstrate that the hardness of the problem lies, in a certain sense, in the sharpness of the cutoff.
January 7, 2010 at 11:12 pm
I thought the Möbius function had zeros?
As I said above, it might be interesting to consider sequences over $\{-1,0,1\}$.
Meta comment:
Perhaps we could ask some of the small isolated questions at http://mathoverflow.net/ ? Of course there is a risk that the discussion gets too spread out, but it would make it possible to contribute without reading though lots of comments. (I think I’ve seen someone, perhaps you?, mentioning the same idea, but I couldn’t find it anywhere.)
January 7, 2010 at 11:42 pm
Oops, you’re right. I meant the Liouville function (which is -1 raised to the number of prime factors). Of course, now it is no longer the case that we get the inverse of the zeta function.
A quick glance at Wikipedia tells me that $\sum_{d|n}\lambda(d)$ equals 1 if n is a perfect square and 0 otherwise. If we set $F(s)=\sum_n\lambda(n)n^{-s}$, then this tells us that $F(s)\zeta(s)=\sum_mm^{-2s}=\zeta(2s)$. Thus, $F(s)=\zeta(2s)/\zeta(s)$. This would still seem to suggest that $F(s)$ was bounded near $s=0$, though the argument is still so unrigorous that it could well be wrong.
I like the idea of asking questions at mathoverflow, though it would be important to make the answers available here too. A possible code of practice would be as follows. If you have a question that can be understood in isolation from the rest of the discussion and think the answer is known, and if nobody participating here supplies an answer, then you post it at mathoverflow. If someone answers it, then you write a comment explaining the answer if it is short, or else summarizing it and providing a link to the relevant mathoverflow page if it is long.
January 8, 2010 at 12:01 am
I see that Kevin has put a link on the wiki page to a paper of Borwein and Choi that is relevant. Amongst the results it mentions is that showing that the partial sums of the Liouville function are bounded in absolute value by $n^{1/2+\epsilon}$ (for every $\epsilon>0$) is equivalent to proving the Riemann hypothesis. I didn't find any mention of results in the opposite direction.
Perhaps this would make a good question for mathoverflow actually.
January 8, 2010 at 12:17 am
Perhaps we should have a polymath or even a polymath5 user, like the 20 questions user http://mathoverflow.net/users/85/20-questions, or a polymath tag?
January 8, 2010 at 10:24 am
When you wrote that, I had already asked the question here. I now have some useful answers. In particular, the Liouville function (-1 raised to the power the number of prime factors of n) has discrepancy at least a constant times $\sqrt{n}$ infinitely often. I don't have any answers about general multiplicative functions — this fact and a similar one for the Möbius function depend on the close relationships between these functions and the Riemann zeta function.
January 8, 2010 at 10:31 pm
There is a new answer, and Engelbrekt's answer has been updated.
January 8, 2010 at 3:49 am |
Here’s another (equivalent) form of the problem whose finite approximations enjoy a bit more symmetry.
Let D be a finite set of positive integers (the differences) with least common multiple M, and let N be the set of divisors of M that are also multiples of some element of D (include 0 in N). The discrepancy of a function f (from N to $\{-1,+1\}$) is the maximum of $\bigl|\sum_{j\in N,\,d\mid j,\,j\le n}f(j)\bigr|$, with the maximum going over all $d\in D$ and all $n\in N$. Denote this discrepancy by $\delta_D(f)$, and denote the minimum of $\delta_D(f)$ over all f by $\delta_D$.
Note that $\delta_D$ is bounded as D goes through the sets $\{1,2,\dots,n\}$ if and only if Erdos's question has a negative answer.
Understanding $\delta_D$ for various D may help us understand how some progressions are interacting. Also, the inclusion of 0 in the discrepancy-defining summation seems to introduce some useful symmetry of the "x goes to $M/x$" sort.
January 8, 2010 at 4:33 am |
Does anybody know the discrepancy of the Thue-Morse sequence?
January 9, 2010 at 10:39 am
Another famous example with low discrepancy (when you replace 2 by -1) is the sequence that equals its own run sequence (Conway's?): 122112122…
Also (inspired by properties of Morse sequences) we can ask the following strengthening of the original problem. Is it possible to find a ±1 sequence x_1, x_2, … so that $\bigl|\sum_{i=1}^{n}x_{r+id}\bigr|\le C'$ for every n, d and r? (Or we can restrict our attention to $r\le R$ for some fixed R.)
January 9, 2010 at 11:00 am
Gil, do you mean Conway’s Look and Say Sequence? I thought that included 3s?
There is a binary encoding version.
January 9, 2010 at 1:00 pm
Jason, I do not think it is the same but it is a very similar idea.
In the sequence that equals its runs there are only 1s and 2s. For example, the runs in the sequence I wrote above are 122112, which is the first 6 terms of the sequence; and if you want to add more terms to the sequence so that the sequence of runs will capture all 9 terms, you need to add a few terms: 12211212212211, and now this determines even more terms: 122112122122112112212, etc.
It is a conjecture that the density of 1s tends to 1/2 and the discrepancy is possibly logarithmic. I do not have references, and when I tried to google it I got the "look and say" sequence, which is a little different.
January 9, 2010 at 7:06 pm
Regarding the question: Is it possible to find a ±1 sequence x_1, x_2, … so that $\bigl|\sum_{i=1}^{n}x_{r+id}\bigr|\le C'$ for every n, d and r? It looks as if it will be easier to show that this is impossible. On the other hand, is it possible that an example for the original problem is also automatically an example for the more general problem? Do the huge sequences with C=2 also have a low value of C' for the more general requirement?
January 8, 2010 at 8:08 am |
Here’s a quick proof that it’s infinite. Some kind of quantitative bound can be obtained from the argument too.
The Morse sequence tells you the parity of the number of 1s in the binary expansion. It follows that if you take a number such as 100000001 (in binary) as your d, then you will get $\sum_{m=1}^{n}x_{md}=n$ for a large value of $n$. Indeed, if we take $d=2^k+1$, then we get a discrepancy of $2^k$ appearing inside an interval of length $4^k$ or so. In other words, the discrepancy of the sequence is at least $\sqrt{n}$-like. I suspect that's an upper bound as well but I don't know of a proof.
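A few lines of Python make the phenomenon visible (my own illustration of the argument just given, using the 0-indexed convention $x_n=(-1)^{s(n)}$ with $s(n)$ the number of binary 1s of $n$, to sidestep the off-by-one issue raised below): for $d=2^k+1$ the first $2^k-1$ terms of the HAP are all +1.

def thue_morse(n):
    """x_n = (-1)**(number of 1s in the binary expansion of n)."""
    return -1 if bin(n).count("1") % 2 else 1

def hap_sums(d, terms):
    """Partial sums x_d + x_{2d} + ... + x_{md} for m = 1..terms."""
    out, s = [], 0
    for m in range(1, terms + 1):
        s += thue_morse(m * d)
        out.append(s)
    return out

# For k = 5, d = 33: the first 31 multiples of d all have an even number
# of binary 1s, so the partial sum climbs straight to 31.
print(hap_sums(2 ** 5 + 1, 31)[-1])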
January 8, 2010 at 9:48 am
This makes me wonder whether there is a bias towards positive values in the partial sums of the HAPs in this sequence, say for small values of $d$, and even (though it seems unlikely) whether they might be bounded below.
January 8, 2010 at 10:19 am
“The Morse sequence tells you the parity of the number of 1s in the binary expansion”
$x_n$ tells you the parity of the number of 1s in n-1. But you can modify your proof to work in this case too: like before we choose $d=2^k+1$, but now we find a number n such that … . Now … , because the high bits of … are identical and the low bits contain an even number of 1s for the same reason as before.
I think it would be more natural to index the sequence such that $x_n$ tells you the parity of the number of 1s in n.
January 8, 2010 at 10:56 am
That's an interesting question of Alec's, and might be something worth thinking about, even if it's not likely to have a bearing on the Erdős discrepancy problem itself. But before even that it might be amusing to do some computations. For instance, it could turn out that if you choose a fairly random $d$ then the values at multiples of $d$ are themselves fairly random and therefore the discrepancy is $\sqrt{n}$-like in both the positive and negative directions. This hypothesis seems worth testing experimentally before any attempt at proving it.
January 8, 2010 at 1:35 pm
Here is one intriguing observation. The partial sums $\sum_{m\le n}x_{md}$ never go below zero for any $d$ and $n$ in the range I have computed.
January 8, 2010 at 2:10 pm
It appears that some $d$ (those divisible by …) show a strong bias towards positive values, while others show a stronger-than-expected neutrality. An example of the latter is $d=…$, whose first … partial sums lie between … and … .
(For $d$ a power of two, the partial sums are bounded by $1$, because $x_{2n}=-x_n$ for all $n$.)
January 8, 2010 at 5:41 pm
An aside related to the Thue-Morse sequence (which might be trivial for everyone, but I just noticed it) is that if we restrict ourselves to considering d to be powers of 2, we get a discrepancy of 0. The sequence can also be modified to give a discrepancy of 0 or 1 when restricting d to be the powers of any natural number. (Suppose we want powers of p. The sequence would be placed into x_p, x_{2p}, x_{3p}, …, and the remaining values would be an alternating sequence.)
This suggests perhaps we can (cyclically) add different versions of the Thue-Morse sequence to obtain some useful result.
January 9, 2010 at 2:27 am
For how partial sums of HAPs in Thue-Morse for d=3 behave, see J. Coquet, A Summation Formula Related to the Binary Digits, Invent. math. 73, 107-115 (1983). There are similar results for other d, but I don’t remember the details or have references to hand.
January 9, 2010 at 8:02 am
I could only find the first page of that paper online, but it mentions that Newman proved in 1969 that the sums $\sum_{0\le n<N}(-1)^{s_2(3n)}$ (where $s_2$ counts binary 1s) all lie between $\tfrac{1}{20}N^{\alpha}$ and $5N^{\alpha}$, where $\alpha=\log3/\log4$; Coquet proves more precise bounds. There are some related results (though no discussion of other $d$) in this paper of Wang:
Click to access IJNSVol07No1Paper13.pdf
January 8, 2010 at 9:57 am |
I'm still working on optimizing the logarithmic-discrepancy sequence. In the polymath spirit, I'm recording a first lemma in that direction.
Fix an odd prime p and a function f from $\{1,2,\dots,p-1\}$ into $\{-1,+1\}$. Extend f to $\mathbb{N}$ by $f(n)=-f(n/p)$ if n is a multiple of p, and $f(n)=f(n\bmod p)$ if n is not a multiple of p. For any d, k, the discrepancy of f along the HAP $\{d,2d,\dots,kd\}$ is $(-1)^a\sum_{i\ge0}(-1)^i\sum_{u=1}^{k_i}f\bigl(u\,(d/p^a)\bmod p\bigr)$, where $p^a$ is the largest power of p dividing d, and $k=\sum_ik_ip^i$ is the base-p expansion of k.
The next step is to work out the corresponding lemma replacing the odd prime p with any odd m. Then, to do some computation modulo m to find the best seed for the f sequence.
January 8, 2010 at 10:42 am
Oops. Also need to assume $\sum_{x=1}^{p-1}f(x)=0$.
BTW, this already gives us a new record for an infinite sequence with smallest discrepancy. The improvement is due to the minus sign in the rule $f(pn)=-f(n)$.
Corollary: there is an f with discrepancy at most $\tfrac12\log_5n+C$, for some constant C.
Proof sketch: Start with p=5 and f taking 1,2,3,4 to 1,-1,-1,1, respectively. In the above formula, for any particular d the inner summation is always in {0,1} or in {0,-1}, depending on d. So half the terms drop out! The worst cases are …, in which case …
January 8, 2010 at 1:55 pm
Oops again. The part about the inner summation being in either {0,1} or {0,-1} is wrong. The proof seems to work for p=3, and f taking 1, 2 to 1, -1, though, giving discrepancy roughly $\log_9n$, a slight improvement.
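Since the p=3 construction is completely explicit, here is a tiny Python check one can run (my own sketch: it builds the sequence with $f(1)=1$, $f(2)=-1$, $f(3n)=-f(n)$ and $f(n)=f(n\bmod3)$ when $3\nmid n$, and reports the largest HAP sum up to N, which the comment above suggests should grow roughly like $\log_9N$):

def f3(n):
    """The p = 3 construction: f(1) = 1, f(2) = -1, f(3n) = -f(n),
    and f(n) = f(n mod 3) whenever 3 does not divide n."""
    sign = 1
    while n % 3 == 0:
        sign, n = -sign, n // 3
    return sign if n % 3 == 1 else -sign

def max_hap_discrepancy(N):
    """Largest |f(d) + f(2d) + ... + f(kd)| over all HAPs in {1,...,N}."""
    x = [f3(n) for n in range(1, N + 1)]
    worst = 0
    for d in range(1, N + 1):
        s = 0
        for m in range(d, N + 1, d):
            s += x[m - 1]
            worst = max(worst, abs(s))
    return worst

print(max_hap_discrepancy(10000))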
January 8, 2010 at 2:29 pm
While I was in Egypt I had a similar thought (that it should be possible to get from log to base 3 to log to base 9 by alternating the signs at powers of 3), though I also thought that that would improve the wrong base-211 example to a base-$211^2$ one.
If something like this is best possible, then it’s interesting as it gives a bound that is surprisingly far from the best for all functions.
January 8, 2010 at 10:04 am |
Recall that … is defined in terms of the set of all HAPs contained in $\{1,2,\dots,N\}$. We can also define … in terms of the set of all HAPs of length at most N. The same f that shows it is possible for … shows that it is possible for … .
Many of the arguments so far end up caring about the length of the progression instead of its diameter.
January 8, 2010 at 1:00 pm |
I'll try to make the group idea more general, so let $(x_n)$ be any sequence.
The natural numbers under multiplication form a commutative monoid. If we identify $m$ and $n$ whenever $x_{mk}=x_{nk}$ for every $k$, we get another commutative monoid (I think). If, for every $m$, there is an $n$ such that $x_{mnk}=x_k$ for every $k$ (that is: Every HAP has a HAS identical to the original sequence), then every element is invertible, and we have an abelian group.
January 8, 2010 at 3:17 pm
“Every HAP has a HAS” -> “Every HAP has a HAP”
January 8, 2010 at 3:21 pm
I interpreted HAS as “homogeneous arithmetic subprogression” …
January 9, 2010 at 11:39 am |
I've made the following tree to help visualize the 1124-term discrepancy 2 example. The vertices are the +/- patterns that a HAP starts with, with an edge connecting e1 to e2 if e2 is a continuation of e1. Each vertex is labeled with the pattern and the number of HAPs that start with that pattern.
For example, the vertex “63 – – +” is connected to “45 – – + +” and to “7 – – + -“. There are 63 HAPs that start “- – +”, and of those 45 continue with +, 7 continue with -, and 9 are unknown (the next term would be after 1124).
http://obryant.wordpress.com/2010/01/09/a-rauzy-tree/
I may be naive, but perhaps it is even possible to work backwards from the distribution seen here to speed up the search. For example, almost all “+ + – -” patterns get extended to “+ + – – + + – +”, so a smart search might try that extension first.
Except for the exact layout of the tree, the process is automated within Mathematica. If anyone would like to see variants, or this kind of picture for different stubs, it’d likely take me less than two minutes.
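For anyone without Mathematica, here is a rough Python sketch (mine) of the underlying bookkeeping: it counts, for each sign pattern up to a given depth, how many HAPs of the sequence begin with that pattern; these counts are the vertex labels of the tree.

```python
from collections import Counter

def prefix_pattern_counts(x, depth):
    """x[n] in {+1, -1} for n = 1, ..., L (x[0] is unused); returns a Counter mapping
    each tuple (x[d], x[2d], ..., x[kd]) with k <= depth to the number of differences d
    whose HAP begins with that pattern."""
    L = len(x) - 1
    counts = Counter()
    for d in range(1, L + 1):
        pattern = []
        for k in range(1, depth + 1):
            if k * d > L:
                break                    # the continuation is unknown beyond the data
            pattern.append(x[k * d])
            counts[tuple(pattern)] += 1
    return counts

# counts[(-1, -1, 1)] would then be the number of HAPs starting "- - +".
```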
January 9, 2010 at 12:04 pm
Could you plot it for the subsequences $(x_{2n})$ and $(x_{3n})$, where $(x_n)$ is the 1124 sequence? Perhaps also the tree for the sequence with only the first 1124/2 resp. 1124/3 terms of the 1124 sequence, so we can compare trees of sequences of the same length.
January 9, 2010 at 12:19 pm
Timothy, I went ahead and wrote a computer program to finish your numbering. I can also use it to generate CSV files to make spreadsheet format, but I need to know what you think would be most helpful.
January 9, 2010 at 12:47 pm
Two things. First, thanks for finishing off the numbering, which I had been doing laboriously by hand. Secondly, it might be quite nice to have exactly the same thing (that is, number followed by sign) but arranged in a table. I think a good arrangement of the table would be to have rows of length 24, but the first row should go from 0 to 23 (with the 0 position of the table left blank) so that it is easier to identify at least some of the HAPs.
The other thing is that, like Sune, I am interested in the possibility that by passing to subsequences of the form $(x_{dn})$ we may remove some of the “errors” and get better properties. So it would be very nice to have some of these subsequences displayed too. One that would be interesting is the subsequence $(x_{dn})$ for a particular small d, which should be the same as the original but is in fact slightly different. A selection of subsequences that should be the same could be fascinating, especially if a majority vote led to a sequence with good multiplicativity properties.
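As a stopgap before the spreadsheets below, here is a tiny Python sketch (mine) that prints a sequence in rows of 24 with the 0 position left blank, and extracts the subsequence at multiples of a given d.

```python
def print_rows_of_24(x):
    """x[n] in {+1, -1} for n = 1, ..., L; x[0] is ignored and shown as a blank."""
    L = len(x) - 1
    symbols = [" "] + ["+" if x[n] == 1 else "-" for n in range(1, L + 1)]
    for start in range(0, L + 1, 24):
        print(" ".join(symbols[start:start + 24]))

def hap_subsequence(x, d):
    """The list [x_d, x_{2d}, x_{3d}, ...] as far as the data goes."""
    L = len(x) - 1
    return [x[n] for n in range(d, L + 1, d)]
```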
January 9, 2010 at 1:13 pm
As an experiment (before I saw the most recent message) I tried converting the table to Google Docs, although I did not have a 0 space. When I make a 24-column table I will put one in. I am however running into a stupid problem — does anyone know how to change column size in Google Docs?
http://spreadsheets.google.com/ccc?key=0AkbsKAn5VTtvdGY4VDlRU2dSUG9Lb1JtOWdROUtPMkE&hl=en
Only displaying the subsequences will also be easy. Could you list all the ones you want specifically?
January 9, 2010 at 1:23 pm
I’m in the process of writing code to automatically convert a sequence of + and – into an HTML table with colors.
Here’s the 1124 sequence as a 24-column table. Is that the kind of thing you’re looking for?
http://thomas1111.wordpress.com/2010/01/09/erdos-discrepancy-a-program-to-get-html-tables/
January 9, 2010 at 1:29 pm
Here’s my google docs version of what you were looking for
http://spreadsheets.google.com/ccc?key=0AkbsKAn5VTtvdGpoOG9xYWlUYTNpa1I0UktEMUsxZmc&hl=en
although I like the color formatting Thomas is using better.
January 9, 2010 at 1:38 pm
It seems that Thomas’ sequence is wrongly indexed: the first element of the sequence is called 0 in his plot. But it is nice to have the colors.
January 9, 2010 at 1:47 pm
Thomas, that’s a nice display, but at the moment the colour you have assigned to n is the colour you ought to have assigned to n+1 (and zero should have no colour, or a special colour).
January 9, 2010 at 1:56 pm
Yes Sune and Tim, apologies, I’ve now fixed it and updated the code and table.
January 9, 2010 at 2:18 pm
I have used the magic of Google Docs to add coloring to my spreadsheet (same link as above).
It’s also quite easy with the spreadsheet to do the subsequences whose differences are factors of 24: just remove columns.
January 9, 2010 at 2:29 pm
One more link and I’m going to bed. This is the multiples-of-2 subsequence of the sequence only; all I did was delete columns.
http://spreadsheets.google.com/ccc?key=0AkbsKAn5VTtvdDdrTDd1YmM3bGZESEFwZWhnSVBZMEE&hl=en
I still think also having the data in HTML format is useful.
January 9, 2010 at 3:26 pm
One thing that jumps out from the data when it is presented in rows of 24 is that the sequence is highly biased with respect to most non-zero residue classes mod 24. I can think of various possible reasons for this, but I somehow can’t express them clearly enough to myself to feel ready to post them. At any rate, it seems to me to be a phenomenon worth investigating.
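One quick way to quantify that bias (a sketch of mine) is simply to sum the sequence over each residue class mod 24; an unbiased class would have a sum near zero.

```python
def residue_bias(x, modulus=24):
    """x[n] in {+1, -1} for n = 1, ..., L (x[0] unused); returns {r: (count, sum)},
    where sum is the total of the x_n with n congruent to r mod modulus."""
    L = len(x) - 1
    stats = {r: [0, 0] for r in range(modulus)}
    for n in range(1, L + 1):
        stats[n % modulus][0] += 1
        stats[n % modulus][1] += x[n]
    return {r: (c, s) for r, (c, s) in stats.items()}
```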
January 9, 2010 at 8:25 pm
@sune: I’ve given the same tree for various subsequences (the same link still works). I’m letting Mathematica place the vertices, so some of the labels are obscured but the shape of the graphs is more consistent.
January 9, 2010 at 8:36 pm
Thanks. But I can only find the old tree.
January 10, 2010 at 1:59 am
Sorry, I was having problems with “update” hanging, but it’s fixed now.
http://obryant.wordpress.com/2010/01/09/a-rauzy-tree/
January 9, 2010 at 1:09 pm |
If we let C depend on d, we get a weaker version of EDP.
This is motivated by the fact that for the sequence -1, 1, -1, 1, … we can choose $C_d = 1$ for all odd d, but $C_d$ has to be infinite for even d.
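A two-line check of this example (mine): for $x_n = (-1)^n$ the partial sums along a HAP of odd difference never leave $\{-1, 0\}$, while for even difference every term is $+1$.

```python
def max_hap_sum_alternating(d, K):
    """max_k |x_d + x_{2d} + ... + x_{kd}| for x_n = (-1)^n and k <= K."""
    partial, worst = 0, 0
    for k in range(1, K + 1):
        partial += (-1) ** (k * d)
        worst = max(worst, abs(partial))
    return worst

print([max_hap_sum_alternating(d, 1000) for d in range(1, 7)])  # [1, 1000, 1, 1000, 1, 1000]
```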
January 9, 2010 at 2:56 pm |
@Gowers: Have you decided when to officially launch polymath5?
January 9, 2010 at 3:15 pm
That’s a good question. I wanted to wait till I had done various things that absolutely need doing, but I’m also finding it hard not to think about the problem. I am slightly holding back thoughts about how one might start to build a proof. What about other people? There still seems to be some mileage in looking at the experimental data, but is there some impatience to get going properly, or are people happy to wait a bit longer?
Perhaps a compromise could be this. I have another post ready, which I was going to put up when the number of comments here reached three figures, which it is just about to do. It isn’t the official launch post, but it does contain some more theoretical thoughts. It could perhaps be regarded as a warm-up post, and the one after that could be the official one. But again, I’d welcome any views on this, particularly from people who plan to be serious participants.
January 9, 2010 at 6:27 pm
Well, I’m impatient, but I won’t have much time the next week. I’ll try to be “an interested non-participant”, as I said I would, but “interested” and “non-participant” is a difficult combination!
January 9, 2010 at 7:15 pm
As far as I’m concerned, the current level of activity is satisfactory. There’re still about a dozen things I’d like to compute…
January 9, 2010 at 3:20 pm |
I’ve added a section to the ‘Experimental results’ page about the maximal lengths of sequences with varying upper and lower bounds for the partial sums along HAPs.
If $N(a,b)$ is the maximum length of a $\pm 1$ sequence with partial sums along its HAPs bounded below by $-a$ and above by $b$, and the corresponding maximum length for a zero-based sequence is defined analogously, then each of these quantities determines the other, so I have just listed values of $N(a,b)$.
Sets of sequences that are $(a,b)$-admissible in this sense could be useful objects of study when it comes to building a proof.
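For small a and b the quantity $N(a,b)$ can be found by a completely naive backtracking search; here is a minimal Python sketch (mine, far cruder than the searches described in this thread) in case anyone wants to check small values independently.

```python
def longest_bounded_sequence(a, b, limit=200):
    """Length of the longest +/-1 sequence (capped at `limit`) whose partial sums
    along every HAP stay in the interval [-a, b]."""
    sums = {}                                  # sums[d] = current partial sum along difference d

    def ok(n, v):
        # placing value v at position n changes the partial sum of every d dividing n
        return all(-a <= sums.get(d, 0) + v <= b
                   for d in range(1, n + 1) if n % d == 0)

    def extend(n):
        if n > limit:
            return limit
        best = n - 1
        for v in (+1, -1):
            if ok(n, v):
                divisors = [d for d in range(1, n + 1) if n % d == 0]
                for d in divisors:
                    sums[d] = sums.get(d, 0) + v
                best = max(best, extend(n + 1))
                for d in divisors:
                    sums[d] -= v
                if best >= limit:              # stop once the cap is reached
                    break
        return best

    return extend(1)

print(longest_bounded_sequence(1, 1))   # the forced case N(1,1) analysed by hand further down
```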
January 9, 2010 at 7:12 pm
Awesome. Thanks for the data (I’d just been dreaming about N(a,b)!), and the formatting!
January 9, 2010 at 3:59 pm |
I had two more examples of length 1008, again not optimized in any way:
{1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, -1, 1, -1, \
-1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1, \
1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1, 1, -1, \
-1, 1, -1, 1, 1, -1, 1, -1, -1, -1, 1, 1, -1, -1, 1, 1, 1, -1, -1, 1, \
-1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, -1, 1, 1, \
-1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1, \
1, 1, -1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, 1, -1, -1, -1, 1, 1, \
-1, 1, 1, -1, 1, 1, -1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, 1, 1, \
-1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1, \
-1, 1, -1, -1, 1, -1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, \
1, 1, -1, 1, 1, -1, 1, 1, -1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, \
-1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, \
-1, -1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, \
-1, 1, 1, -1, 1, 1, 1, -1, -1, -1, 1, 1, -1, -1, 1, 1, 1, -1, -1, 1, \
-1, 1, -1, -1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, 1, 1, \
1, -1, -1, 1, -1, -1, 1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, \
-1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, 1, 1, -1, 1, \
-1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, \
-1, 1, 1, 1, -1, -1, 1, -1, -1, -1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, \
1, -1, 1, 1, -1, 1, -1, -1, -1, 1, -1, 1, -1, -1, 1, -1, 1, 1, 1, -1, \
1, 1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, -1, 1, \
-1, 1, 1, -1, -1, 1, 1, -1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, 1, \
-1, 1, -1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, -1, \
-1, 1, 1, -1, -1, 1, -1, 1, -1, 1, 1, 1, 1, -1, -1, -1, 1, -1, -1, 1, \
1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, \
1, -1, -1, 1, -1, 1, 1, 1, -1, 1, 1, -1, 1, -1, -1, -1, 1, -1, 1, 1, \
-1, 1, -1, -1, 1, 1, -1, 1, -1, 1, -1, -1, 1, -1, 1, -1, 1, 1, -1, 1, \
-1, -1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, \
-1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, \
1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1, 1, 1, -1, \
1, -1, -1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1, -1, 1, 1, 1, -1, 1, 1, \
-1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, -1, -1, 1, \
1, -1, -1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, -1, \
-1, 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, \
1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, \
-1, 1, 1, -1, 1, -1, 1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, 1, -1, 1, \
-1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1, 1, -1, 1, 1, \
-1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, 1, -1, 1, \
-1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, 1, 1, -1, -1, 1, -1, \
-1, 1, 1, -1, 1, 1, -1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, 1, \
-1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1, \
-1, -1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, 1, -1, \
-1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, \
-1, 1, 1, -1, -1, 1, -1, 1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, \
-1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, \
-1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, \
1, -1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1, -1, 1, -1, -1, 1, 1, 1, \
1, -1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, 1, -1, -1, -1, 1, -1, 1, 1, \
-1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, 1, -1, 1, 1, -1, \
-1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, 1, 1, -1, -1, \
-1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, 1, 1, \
-1, -1, 1, -1}
and the second one
{1, -1, -1, 1, 1, -1, -1, -1, 1, 1, -1, 1, -1, 1, -1, -1, 1, -1, 1, \
1, 1, -1, -1, 1, -1, 1, 1, -1, -1, -1, 1, 1, 1, -1, -1, -1, 1, 1, -1, \
-1, 1, 1, -1, 1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1, \
1, -1, 1, -1, -1, 1, -1, 1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1, 1, 1, \
1, -1, -1, 1, -1, -1, 1, 1, -1, -1, -1, 1, 1, -1, 1, 1, 1, -1, -1, 1, \
-1, -1, -1, 1, 1, 1, -1, 1, -1, 1, 1, -1, 1, -1, 1, -1, -1, -1, 1, \
-1, -1, 1, -1, 1, 1, 1, 1, -1, -1, -1, -1, 1, 1, -1, 1, 1, 1, -1, -1, \
1, -1, 1, -1, -1, -1, 1, 1, -1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, \
-1, 1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1, -1, \
1, 1, 1, -1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, \
-1, 1, -1, -1, 1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, \
1, -1, 1, -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, -1, -1, \
1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, -1, 1, 1, -1, 1, 1, -1, 1, 1, \
-1, 1, -1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1, \
-1, 1, 1, 1, 1, -1, 1, -1, 1, -1, -1, -1, -1, 1, -1, 1, 1, -1, -1, 1, \
-1, 1, 1, -1, 1, 1, -1, -1, 1, 1, 1, -1, -1, -1, 1, 1, -1, 1, -1, -1, \
1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1, \
1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, 1, 1, -1, \
-1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, 1, -1, -1, \
-1, 1, -1, -1, 1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, -1, -1, 1, 1, \
-1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, 1, 1, -1, -1, -1, 1, -1, 1, \
1, -1, -1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, \
1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, 1, -1, -1, -1, 1, 1, \
-1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1, 1, -1, -1, \
1, -1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, \
-1, 1, 1, 1, 1, -1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, -1, 1, \
1, -1, 1, 1, -1, -1, 1, -1, 1, -1, 1, 1, -1, 1, -1, 1, -1, -1, 1, 1, \
1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, \
-1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, \
-1, 1, 1, 1, -1, -1, 1, 1, -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, \
-1, 1, 1, -1, 1, 1, -1, -1, -1, -1, 1, 1, -1, 1, 1, -1, -1, 1, 1, -1, \
-1, -1, 1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, \
-1, 1, 1, -1, -1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, \
-1, 1, -1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1, \
-1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, \
-1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, -1, -1, -1, \
1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, \
-1, 1, 1, -1, -1, 1, -1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, \
1, -1, -1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1, -1, 1, 1, -1, 1, 1, -1, \
-1, 1, 1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, \
-1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 1, \
-1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, \
-1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, \
-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, -1, 1, -1, 1, 1, 1, 1, -1, \
-1, -1, 1, 1, 1, -1, 1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, -1, 1, \
-1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, -1, \
-1, 1, -1, 1, 1, -1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, 1, \
-1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, 1, -1, 1, -1, -1, 1, -1, 1, \
1, -1, -1, 1, -1, 1, 1, -1, 1, -1, 1, -1, -1, 1, -1, 1, -1, 1, 1, -1, \
-1, 1, -1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, -1, -1, 1, \
-1, -1, 1, -1, 1, 1, -1, -1, 1, 1, 1, 1, -1, 1, -1, -1, -1, 1, -1, 1}
I’m busy with other things at the moment but once I have done some modifications to my program I hope to be able to push it a bit further too. The computer can keep on working on this even if I am busy writing a paper.
January 9, 2010 at 4:22 pm
The first of these yields another sequence of length 1124, which I’ll post on the Wiki.
January 9, 2010 at 4:57 pm
It would be great if someone could do a quick Mark-Bennet-style analysis of this sequence (that is, look at its HAP subsequences and classify them up to close resemblance) to see whether it too has a quasi-multiplicative structure derived from the group.
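In case it helps anyone reproduce such an analysis, here is a rough Python sketch (mine; it does not reproduce Mark Bennet’s actual coding scheme) that groups the differences d by the initial segment of their HAP subsequences; allowing a global sign flip or a few mismatches would give the “close resemblance” classes.

```python
from collections import defaultdict

def classify_hap_subsequences(x, max_d=74, depth=15):
    """x[n] in {+1, -1} for n = 1, ..., L (x[0] unused).
    Returns {pattern: [d1, d2, ...]} grouping differences whose HAP subsequences
    (x_d, x_{2d}, ..., truncated to `depth` terms) agree exactly."""
    L = len(x) - 1
    classes = defaultdict(list)
    for d in range(1, max_d + 1):
        pattern = tuple(x[n] for n in range(d, min(L, d * depth) + 1, d))
        classes[pattern].append(d)
    return dict(classes)
```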
January 9, 2010 at 5:27 pm
I’ve just made tables for that second 1124 sequence here:
http://thomas1111.wordpress.com/2010/01/09/tables-for-the-second-1124-sequence/
January 9, 2010 at 5:34 pm
Thomas, that’s great, but the 8-sequence is once again shifted by 1.
January 9, 2010 at 5:55 pm
Oops, indeed, sorry! I’ve now fixed the 8-subsequence tables in both cases.
January 9, 2010 at 5:58 pm
At numbers congruent to a particular residue modulo a particular modulus, the first sequence seems to contain a preponderance of one sign, the second a preponderance of the opposite sign.
Incidentally, my program is still chugging away; it’s tracked back a fair way now and found a large number of sequences of length 1124. This is using the (p,q) optimization trick, so it is probably an underestimate by several orders of magnitude of the actual number reachable from the initial segment of the sequence.
January 9, 2010 at 7:03 pm
What is the (p,q) optimization trick?
January 9, 2010 at 7:21 pm
I’m guessing, based on the format of the output, that you are using Mathematica. Mind sharing the code?
January 9, 2010 at 7:46 pm
Sune, I was referring to this observation:
https://gowers.wordpress.com/2009/12/17/erdoss-discrepancy-problem/#comment-4645
It means that under certain circumstances during a search you can know that one of two choices will work if and only if the other will work, so you only need check one of these.
January 9, 2010 at 8:56 pm
This is a brief analysis of the second of the 1124-length sequences.
With the same coding as before, the same six common sequences arise. It is also interesting to note the prime pairs 11, 23 and 13, 19, which are paired both times in sporadic sequences, and that the sequence appears in some respects more regular than the previous one (the basic sequence has no discrepancy and 7 appears to be regular).
179 11 23
1187 61
1203 1 8 15 18 49 58 64 70
1205 5 6 28 31 34 40 43 48 52 55 63 66
1206 73
1268 67
1421 2 16 21 22 25 30 36 39 46 57
1422 37
2674 3 14 17 20 24 26 33 38 45 54 69
2676 47
2844 59
2889 41 71
2890 4 9 29 32 35 42 44 50 51 60 65 72
2891 53 74
2892 7 10 12 27 56 62 68
3916 13 19
January 10, 2010 at 9:25 am |
Oddball thought. Every positive integer has a unique representation in the form $\sum_{i \ge 1} c_i \cdot i!$, where $0 \le c_i \le i$. In this representation, the tail of every HAP has a nice digital expansion! The HAP with difference d, for instance, has a finite set of possible low digits $(c_1, \dots, c_{d-1})$, and then the remaining digits are completely arbitrary.
Here’s one way not to use this, but maybe it can be fixed. Color m according to whether its factorial expansion has digit sum even or odd. The discrepancy on each HAP is bounded (if I’m thinking straight), but unfortunately the bounds aren’t bounded. The HAP for 11 isn’t controlled (at least not a priori) until N gets up to 11! or so.
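Here is a small Python sketch (mine) of the factorial representation, which also illustrates the point about HAPs: divisibility by d is decided by the digits $c_1, \dots, c_{d-1}$ alone, since $d$ divides $i!$ for every $i \ge d$.

```python
def factorial_digits(m):
    """Digits [c_1, c_2, c_3, ...] with m = sum_i c_i * i! and 0 <= c_i <= i."""
    digits, i = [], 2
    while m:
        digits.append(m % i)     # this is c_{i-1}
        m //= i
        i += 1
    return digits

def low_digit_prefix(m, d):
    """The digits (c_1, ..., c_{d-1}) of m, padded with zeros."""
    digs = factorial_digits(m)
    digs += [0] * (d - 1 - len(digs))
    return tuple(digs[:d - 1])

def low_digit_patterns(d, how_many=200):
    """The finitely many low-digit prefixes that occur among multiples of d."""
    return {low_digit_prefix(k * d, d) for k in range(1, how_many + 1)}

print(factorial_digits(11))    # [1, 2, 1], since 11 = 1*1! + 2*2! + 1*3!
print(low_digit_patterns(4))   # only a few prefixes occur; the higher digits are unconstrained
```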
January 11, 2010 at 4:54 pm |
Random extra source: Erdős brings up the problem in this paper and also mentions the weaker problem which I believe we’ve already brought up: if $f \colon \mathbb{N} \to \{-1, +1\}$ is completely multiplicative, then is $\sum_{n \le N} f(n)$ unbounded?
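To make the weaker question concrete, one standard completely multiplicative example is the Liouville function $\lambda(n) = (-1)^{\Omega(n)}$; the sketch below (mine, not something from Erdős’s paper) computes its partial sums with a smallest-prime-factor sieve so one can watch how slowly they move.

```python
def liouville_partial_sums(N):
    """Partial sums of lambda(n) = (-1)^{Omega(n)} for n = 1, ..., N."""
    spf = list(range(N + 1))                  # smallest prime factor of each n
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:                       # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]            # completely multiplicative: one sign flip per prime factor
    sums, total = [], 0
    for n in range(1, N + 1):
        total += lam[n]
        sums.append(total)
    return sums

s = liouville_partial_sums(100000)
print(min(s), max(s))                          # the sums wander, but very slowly
```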
January 11, 2010 at 5:17 pm |
Looking at the case (in Alec’s notation) of N(1,1), I note that it can be proved by brute force, in that every placement of + or – is forced.
Suppose 6 is +. (Our first placement can be arbitrary, as our bounds are symmetrical). This forces 12-, which forces 3- and 9+, which forces 2- and 4+, which forces 1+ and 8-, which forces 5- and 10+, which forces 7+, and a contradiction when d = 1 and k = 10.
The sequence is thus: + – – + – + + – + + . –
January 11, 2010 at 5:26 pm |
The discussion continues here:
https://gowers.wordpress.com/2010/01/09/erds-discrepancy-problem-continued/
January 12, 2010 at 1:57 am |
The problem reminded me of the following result:
There exists a binary (i.e. 0,1) sequence $(b_n)_{-\infty < n < \infty}$ such that for any binary periodic sequence $(y_n)_{n \ge 0}$ with $y_{n+T} = y_n$,
$\lim_{N \to \infty} \frac{\sum_{0 \le k \le N} b_{i+k} \oplus y_k}{N+1} = \frac{1}{2}$
uniformly (for a given periodic $y_n$) in $i$. Here $a \oplus b$ is the sum modulo 2.
This appeared in my LAA 1995 paper “Stability of Discrete Linear Inclusion”.
March 6, 2015 at 12:27 am |
Can you help me gain a better understanding?
For the counterexample it sounds like you need two irrational numbers that you convert into two separate continued fractions.
Then you multiply the continued fractions together in order so the smallest continued fraction of both will be multiplied followed by the second largest of both, then the third largest of both, etc…
Then you order this new set of combined fractions with counting numbers so the first combined fraction, then the 2nd, then the 3rd, etc…
Then you multiply the continued fractions by their counting numbers.
Finally you find the counterexample if none of the denominators in the combined fractions from one to infinity cancel with the specific counting numbers.
That’s what I got from the explanation on this page, but I don’t feel like it’s right. Could someone try and explain to me how you are supposed to get the counterexample in the way that I tried to explain it without using math symbols.
October 1, 2015 at 8:05 pm |
[…] quickly attracted nearly 150 comments, and on January 6, 2010, Gowers wrote what he called an“emergency” post saying that this problem was clearly the people’s […]