http://michaelnielsen.org/polymath1/index.php?title=Generalize_to_a_graph-theoretic_formulation

The prime factorization algorithm is horribly messy (and not entirely bug-checked).

I still have hope for a statement stronger than EDP which would be a generalized combinatorial version of the problem.

]]>http://numberwarrior.wordpress.com/2010/04/21/a-gentle-introduction-to-the-5th-polymath-project/

I did not link to this blog, only to the wiki page. I figure anyone qualified enough to join in will be able to find their way here.

]]>Here is another problem on restricted sets of HAPs.

Assume that we look at sequences of length and include the HAPs with difference 1, and the HAPs with difference for all prime .

For sequences with bounded discrepancy are easy to construct. How small can we make without getting a large discrepancy?
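The stripped symbols make the exact parameters unclear, but questions of this shape can be explored by brute force. The sketch below is my own illustration, not taken from the comment: the function names and the parameterisation (difference 1 together with every prime difference at least m) are assumptions. It finds, by exhaustive search, the longest ±1 sequence whose partial sums along every allowed HAP stay within ±C.

```python
def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def longest_bounded(n_max, m, C):
    """Length of the longest +-1 sequence (capped at n_max) whose partial
    sums along every allowed HAP stay within [-C, C].  Allowed common
    differences (an assumed reading): 1, plus every prime in [m, n_max]."""
    diffs = [1] + [d for d in range(max(m, 2), n_max + 1) if is_prime(d)]
    best = 0

    def extend(x):
        nonlocal best
        best = max(best, len(x))
        if len(x) == n_max:
            return True
        for v in (1, -1):
            y = x + [v]
            n = len(y)
            ok = True
            for d in diffs:
                if n % d == 0:
                    # partial sum of the HAP d, 2d, ..., n
                    s = sum(y[k - 1] for k in range(d, n + 1, d))
                    if abs(s) > C:
                        ok = False
                        break
            if ok and extend(y):
                return True
        return False

    extend([])
    return best
```

The search is exponential, so only small values of n_max are feasible; it is meant for experimenting with which restricted families already force large discrepancy.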

]]>Let f be completely multiplicative, fix a prime p and define another completely multiplicative g by g(p):=-f(p) and g(q):=f(q) for primes q not equal to p. Write $F(n):=\sum_{k\le n}f(k)$ and $G(n):=\sum_{k\le n}g(k)$. Then, since f and g coincide on non-multiples of p, the partial sum of g up to n satisfies

$G(n) = F(n) + \sum_{k\le n/p}\big(g(pk)-f(pk)\big)$.

Using c.m. and the definition of g we end up with a recurrence

$G(n) = F(n) - f(p)\big(F(\lfloor n/p\rfloor)+G(\lfloor n/p\rfloor)\big)$.

We solve this for $n=p^k$ and get

$G(p^k) = \sum_{j=0}^{k-1}(-f(p))^j\big(F(p^{k-j})-f(p)F(p^{k-j-1})\big)+(-f(p))^k$.

For f with bounded discrepancy, say $|F(n)|\le C$ for all n, this yields the estimate $|G(p^k)|\le 2Ck+1$ at prime powers $p^k$. Therefore, for a c.m. function to have bounded discrepancy (i.e. to be a counterexample to EDP) it is necessary that the flipped function is log-bounded at the powers of the ‘flipped prime’.
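As a sanity check, the recurrence can be verified numerically. The sketch below is illustrative: it takes the Liouville function as the sample completely multiplicative f (which does not have bounded discrepancy, so only the algebraic identity is being tested, not the log-bound conclusion), flips the prime p = 3, and checks the recurrence for all n up to 500, writing F and G for the partial sums of f and g.

```python
def factorize(n):
    """Prime factors of n with multiplicity, by trial division."""
    fs = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def cm(n, prime_val):
    """Completely multiplicative function determined by its values at primes."""
    v = 1
    for q in factorize(n):
        v *= prime_val(q)
    return v

p = 3
f = lambda n: cm(n, lambda q: -1)                   # Liouville function, a sample c.m. f
g = lambda n: cm(n, lambda q: 1 if q == p else -1)  # f with its value at p flipped

N = 500
F = [0] * (N + 1)
G = [0] * (N + 1)
for n in range(1, N + 1):
    F[n] = F[n - 1] + f(n)
    G[n] = G[n - 1] + g(n)

# the recurrence: G(n) = F(n) - f(p) * (F(floor(n/p)) + G(floor(n/p)))
assert all(G[n] == F[n] - f(p) * (F[n // p] + G[n // p]) for n in range(1, N + 1))
```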

O’Bryant’s log-bounded examples have discrepancy less than (in his notation). In principle they could be flips of bounded-discrepancy functions. I am not optimistic that this is the case, though.

]]>I don’t know enough about Kadison-Singer to say if there is a connection. Gil, can you elaborate on what you think the relation might be ?

]]>Is there any relation to the Paving conjecture (aka the Kadison-Singer conjecture)? (Maybe this is related to EDP12’s discussion.) It looks superficially similar.

]]>Consider a sequence $X = (x_1, \dots, x_n)$, and consider the associated matrix $A$ such that $A_{ij} = x_i x_j$, i.e. $A = XX^T$. The approach is to prove something about $A$ which would imply that the HAP discrepancy of $X$ is large, i.e. that $P^T A P$ is large for some $P$ that is the characteristic vector of a HAP. In proving something like this, we need to exploit the fact that $A$ is obtained from a sequence. The various approaches differ in what properties of $A$ they use, giving rise to a sequence of strengthenings of EDP. At the very least, they use the fact that the diagonal entries of $A$ are 1, and in fact some use only this fact.

Here is a sequence of conjectures, ordered from strongest to weakest — each is stronger than EDP and would imply EDP if true.

**Conjecture 1:** For any matrix $A$ with *ones on the diagonal*, there exists a HAP $P$ such that $P^T A P$ is large.

If true, this would imply EDP, because, in particular, we may consider $A = XX^T$. Then $P^T A P = (X \cdot P)^2$. Thus $P^T A P$ large implies that the discrepancy of $X$ is large. Unfortunately, this conjecture is too strong. Alec gave a construction of a matrix $A$ with ones on the diagonal such that $P^T A P$ is bounded for all HAPs $P$.

**Conjecture 2:** For any matrix $A$ with *ones on the diagonal*, there exist HAPs $P, Q$ such that $P^T A Q$ is large.

The proof that this would imply EDP is similar to the argument earlier. Consider $A = XX^T$. Then $P^T A Q = (X \cdot P)(X \cdot Q)$, hence $P^T A Q$ large implies that the discrepancy of $X$ is large.
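Both identities ($P^T A P = (X \cdot P)^2$ and $P^T A Q = (X \cdot P)(X \cdot Q)$ for $A = XX^T$) are easy to confirm numerically. A quick sketch, in which the helper `hap` and all parameters are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
X = rng.choice([-1.0, 1.0], size=n)   # a random +-1 sequence
A = np.outer(X, X)                    # A_ij = x_i x_j; note the all-ones diagonal

def hap(d, k, n=n):
    """0/1 indicator vector of the HAP {d, 2d, ..., kd} inside {1, ..., n}."""
    v = np.zeros(n)
    v[d - 1:d * k:d] = 1.0
    return v

P, Q = hap(3, 7), hap(5, 9)
assert np.allclose(P @ A @ P, (X @ P) ** 2)
assert np.allclose(P @ A @ Q, (X @ P) * (X @ Q))
```

So bounding $|P^T A Q|$ over all HAP pairs bounds the product of any two HAP sums of $X$, which is exactly how largeness of the bilinear form transfers to largeness of the discrepancy.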

This conjecture is the basis for Tim’s representation-of-diagonals approach. We can express the problem of finding an $n \times n$ matrix $A$ with ones on the diagonal such that $\max_{P,Q} |P^T A Q|$ is bounded as a linear program; the dual problem then gives a lower bound for how small this maximum can be made by choosing $A$ appropriately. In fact, the dual is exactly the problem of constructing the diagonal representation that Tim defined.

We have also considered the stronger

**Conjecture 1.5:** $P^T A Q$ is large for some HAPs $P, Q$ with the *same common difference*. Experimental evidence suggests that this is false (see Problem 1 in Tim’s EDP13 post). So far, we don’t have an explicit construction of a matrix with ones on the diagonal that falsifies this conjecture, but I believe this should be possible.

**Conjecture 3:** For any *positive semidefinite* matrix $A$ with *ones on the diagonal*, there exists a HAP $P$ such that $P^T A P$ is large.

Note that $XX^T$ is positive semidefinite, hence the statement applies to it. This conjecture is the basis of the SDP approach to proving a lower bound on EDP. The problem of finding the “best” psd matrix $A$ can be formulated as a semidefinite program, and giving a lower bound for the value of this SDP by constructing a feasible dual solution amounts to producing a certain quadratic form and subtracting a large diagonal term from it such that the remainder is positive semidefinite.

We ought to consider the analog of Conjecture 2 for psd matrices giving the potentially weaker statement:

**Conjecture 4:** For any *positive semidefinite* matrix $A$ with *ones on the diagonal*, there exist HAPs $P, Q$ such that $P^T A Q$ is large.

In fact, it turns out that Conjecture 4 is equivalent to Conjecture 3, since (as Tim pointed out earlier) for psd matrices $A$, $|P^T A Q| \le \sqrt{(P^T A P)(Q^T A Q)}$.
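The inequality here is Cauchy–Schwarz for the positive semidefinite form: a large $|P^T A Q|$ forces $P^T A P$ or $Q^T A Q$ to be large, so Conjecture 4 implies Conjecture 3 (the converse is immediate by taking $Q = P$). A quick numerical sanity check, with an arbitrary random psd matrix rescaled to unit diagonal; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
B = rng.normal(size=(n, n))
A = B @ B.T                           # a random positive semidefinite matrix
d = np.sqrt(np.diag(A))
A = A / np.outer(d, d)                # rescale so the diagonal is all ones

def hap(dd, k, n=n):
    """0/1 indicator vector of the HAP {dd, 2*dd, ..., k*dd} in {1, ..., n}."""
    v = np.zeros(n)
    v[dd - 1:dd * k:dd] = 1.0
    return v

P, Q = hap(2, 10), hap(3, 8)
lhs = abs(P @ A @ Q)
rhs = np.sqrt((P @ A @ P) * (Q @ A @ Q))
assert lhs <= rhs + 1e-9              # Cauchy-Schwarz for the psd form
```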

]]>Dear Kevin, here I mean the following: you want to assign the square free numbers values -1 and 1 so that when you sum these values over the square free numbers in every HAP, the discrepancy (the absolute value of the sum) is small.

I would expect that, just like for the original EDP, you cannot make the discrepancy uniformly bounded, but that you can make it grow logarithmically.
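One concrete sample assignment (my choice for illustration, not something from the discussion) is the Möbius function, which is supported exactly on the squarefree numbers and takes values ±1 there. The sketch below measures the largest HAP partial sum of μ up to N = 2000; since μ vanishes off the squarefree numbers, each HAP sum automatically runs over the squarefree members only.

```python
import math

def mobius_sieve(N):
    """mu[n] for n <= N: flip the sign at each prime factor, zero at squares."""
    mu = [1] * (N + 1)
    is_comp = [False] * (N + 1)
    mu[0] = 0
    for i in range(2, N + 1):
        if not is_comp[i]:                 # i is prime
            for j in range(i, N + 1, i):
                if j > i:
                    is_comp[j] = True
                mu[j] *= -1
            for j in range(i * i, N + 1, i * i):
                mu[j] = 0                  # not squarefree
    return mu

N = 2000
mu = mobius_sieve(N)

disc = 0
for d in range(1, N + 1):
    s = 0
    for k in range(d, N + 1, d):           # walk the HAP d, 2d, 3d, ...
        s += mu[k]
        disc = max(disc, abs(s))

print(disc, disc / math.log(N))            # discrepancy vs. log N
```

This only probes one assignment, so it says nothing about the minimum over all assignments; it just gives a feel for the logarithmic-growth speculation.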

]]>What do you mean by “EDP for square free numbers”?

]]>1) How does our basic multiplicative example (based on the least significant nonzero digit in the ternary expansion) behave if we restrict it just to the first k primes (and let it be 0 on the other primes)?

2) How small a maximum discrepancy can we ensure simultaneously for a multiplicative -1, +1 function and all its restrictions, when we force the value 0 on a finite subset of primes?

3) Suppose we just ask EDP for square free numbers. (I still tend to speculate that you can get logarithmically low discrepancy in this case as well.) How good is ? (Maybe this was already answered.)

4) How does the greedy look-ahead algorithm for multiplicative sequences work? In this algorithm, given n, you choose the value of f on the kth prime so as to minimize the discrepancy (over all intervals [0,r], r <= n), when the values at all larger primes are set to 0.

I think there is a good shot that it will be logarithmic.
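Here is one reading of the greedy look-ahead algorithm as a sketch; the tie-breaking rule and the choice to measure only the interval sums [0, r] are my assumptions. Primes are decided in increasing order, each value chosen to minimise the running discrepancy with all undecided primes treated as 0.

```python
def greedy_multiplicative(N):
    """Greedy look-ahead sketch: decide f at each prime in turn, picking the
    value in {+1, -1} that minimises max_{r <= N} |f(1) + ... + f(r)|,
    with f = 0 at any number having an undecided (larger) prime factor."""
    fval = {}                                # chosen values at primes

    def f(n):
        v, m, d = 1, n, 2
        while d * d <= m:
            while m % d == 0:
                if d not in fval:
                    return 0                 # prime factor not yet decided
                v *= fval[d]
                m //= d
            d += 1
        if m > 1:
            if m not in fval:
                return 0
            v *= fval[m]
        return v

    def disc():
        s, worst = 0, 0
        for j in range(1, N + 1):
            s += f(j)
            worst = max(worst, abs(s))
        return worst

    for q in range(2, N + 1):
        if any(q % r == 0 for r in range(2, q)):
            continue                         # q is not prime
        fval[q] = 1
        d_plus = disc()
        fval[q] = -1
        d_minus = disc()
        if d_plus < d_minus:
            fval[q] = 1                      # ties go to -1
    return fval, disc()

fv, worst = greedy_multiplicative(30)
```

For instance f(2) = -1 is forced immediately, since f(2) = +1 already gives a partial sum of 2 at n = 2. How `worst` grows with N is exactly Gil's question.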

]]>This led me to consider whether there might be a finite field construction of the sets A that work well. The way finite fields are used to generate as-good-as-random constructions is to use the connection between the cyclic multiplicative group of $\mathbb{F}_q$ and the representation of its elements in some fixed basis. This isn’t so nice for us, since the multiplicative group will always have even order, whereas we need m to be odd.

Wait, the multiplicative group isn’t always of even order. If $q = 2^k$, then the multiplicative group has order $m = 2^k - 1$, which is at least odd. This still isn’t so nice for us, though, since such m will be prime for infinitely many k (no, I can’t prove that), and we know from Roth’s result that $M(p) \gg p^{1/4}$ for primes $p$.

Short story even shorter: I don’t see finite field arithmetic generating any good EDP constructions.

My machine is chugging away on deciding if M(121) is 2 or 3 or 4, and I expect that it will finish in a week or two.

]]>More data:

M>2 for m=43,47,51,53,55,57,59,61,63,65

M=2 for m=45 (A={1,3,4,9,10,11,12,14,16,19,21,25,26,29,30,31,34,36,39,40,41,44})

Currently working on: M(121).

]]>I’m still thinking about trying to find a good decomposition of a diagonal matrix. I’m going to assume that this matrix is non-negative (even though we don’t actually need that to be the case). So let us call it $D$ and write $D = \Lambda^2$, where $\Lambda$ is the non-negative diagonal square root of $D$.

Now let $P$ be an orthogonal matrix and let the columns of $P$ be $v_1, \dots, v_n$. Then $\sum_i v_i v_i^T$ can easily be checked to be $PP^T$. Since that is the identity, it follows that $D = \Lambda\big(\sum_i v_i v_i^T\big)\Lambda = \sum_i (\Lambda v_i)(\Lambda v_i)^T$. Now the columns of $\Lambda P$ are $\Lambda v_1, \dots, \Lambda v_n$, so $D$ is the sum of the outer squares of the columns of $\Lambda P$.

This gives us another way we could perhaps approach finding a diagonal representation. The idea would be to begin by finding some natural orthonormal basis — an obvious choice would be the trigonometric functions — then multiply them pointwise by a vector $\lambda$ (the diagonal coefficients of $\Lambda$), and hope to choose the $\lambda_i$ in some clever way to make the resulting vectors efficiently representable as linear combinations of HAPs.

It might sound as though a serious disadvantage of this approach is the usual difficulty that we don’t know how to guess the $\lambda_i$. But I’m not sure that’s true: we could begin by looking at which vectors we find we can represent efficiently by HAPs, and then use those to guide us to our choice of the $\lambda_i$. And we also have some clues from the experimental evidence about which diagonal matrices can be represented.
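The decomposition itself is easy to sanity-check numerically. A sketch, with D the non-negative diagonal matrix, L its diagonal square root, and P an arbitrary orthogonal matrix obtained from a QR factorization (all names here are illustrative):

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)

lam = rng.random(n) + 0.5            # positive diagonal entries (illustrative)
D = np.diag(lam)                     # the non-negative diagonal matrix
L = np.diag(np.sqrt(lam))            # its diagonal square root, L @ L = D

# an arbitrary orthogonal matrix, from a QR factorization of random data
P, _ = np.linalg.qr(rng.normal(size=(n, n)))

# D = sum_i (L v_i)(L v_i)^T, where v_i are the columns of P;
# L @ v is just v multiplied pointwise by the vector sqrt(lam)
terms = [np.outer(L @ P[:, i], L @ P[:, i]) for i in range(n)]
assert np.allclose(sum(terms), D)
```

The check works for any orthonormal basis, which is the point: the freedom is entirely in choosing the basis and the pointwise multiplier.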

]]>m=41, M>3

m=49, M=2, A=({1,2,4}+{0,7,14,21,28,35,42}) \cup (7*{3,5,6})

]]>Latest data implies M(105)>2, so it can’t set a record. All that’s needed for a record is an odd m strictly between and with , or to show that for some r. For r=1, this is impossible by computation for m=11 and because any sequence of length 12 has homo. disc. at least 2 even without wrap-around. I was hoping that perhaps m=105 could be worked with by understanding m=15,21,35, but even if that's the case we must have $M(105)\geq M(21) =3$, so that it won't be a record.

I'm not convinced that M(m)/log(m) is bounded away from 0 (though it certainly seems likely), and I don't see any way to make progress on this.

Here's the new data (giving one example for each modulus)

m=3, M=1, A={1}

m=5, M=1, A={1,4}

m=7, M=2, A={1,2,4}

m=9, M=1, A={1,4,6,7}

m=11, M=2, A={1,2,4,7,9}

m=13, M=2, A={1,3,4,9,10,12}

m=15, M=2, A={1, 3, 4, 6, 10, 11, 13}

m=17, M=2, A={1,2,4,8,9,13,15,16}

m=19, M=3, A={1, 4, 5, 6, 7, 9, 11, 16, 17}

m=21, M=3, A={1, 2, 3, 6, 8, 13, 14, 15, 17, 20}

m=23, M=4, A={1, 2, 3, 4, 6, 8, 11, 12, 15, 17, 20}

m=25, M=2, A={1, 2, 6, 7, 11, 12, 15, 16, 17, 20, 21, 22}

m=27, M=2, A={1, 4, 6, 7, 10, 13, 15, 16, 18, 19, 22, 24, 25}

m=29, M=3, A={1, 4, 5, 6, 7, 9, 13, 16, 20, 22, 23, 24, 25, 28}

m=31, M=4, A={1, 2, 3, 4, 6, 8, 11, 12, 19, 23, 25, 27, 28, 29, 30}

m=33, M=3, A={1,2,4,7,9,12,13,15,18,20,22,23,24,26,29,31}

m=35, M=3, A={1,3,6,8,10,13,14,17,20,21,22,24,25,27,29,31,34}

m=37, M=4, A={1, 3, 4, 7, 9, 10, 11, 12, 16, 21, 25, 26, 27, 28, 30, 33, 34, 36}

m=39, M=3, A={1,2,7,9,11,12,13,14,15,19,20,24,25,27,28,30,32,37,38}

m=81, M=2, A=({1,4,6,7}+{0,9,18,27,36,45,54,63,72}) \cup (9*{1,4,6,7})

]]>In summary, quadratic residues are the unique optimal set for m in {3,5,17,29}, are optimal but not uniquely so for m in {7,13,19,37}, and are not optimal for m in {11,23,31}.

Just being concerned about the coefficient of the log in the discrepancy gives some symmetries: all of A, t*A (the dilation of A mod m by a factor of t, with (t,m)=1), A’ (the complement of A in {1,2,…,m-1}), and t*A’ give the same coefficient. Ignoring those symmetries, here are the optimal A I’ve found so far:

m=3, A={1} (quadratic residues)

m=5, A={1,4} (quadratic residues)

m=7, A={1,2,4} (quadratic residues)

m=7, A={1,5,6}

m=11, A={1,2,4,7,9}

m=13, A={1,3,4,9,10,12} (quadratic residues)

m=13, A={1,2,4,9,11,12}

m=17, A={1,2,4,8,9,13,15,16} (quadratic residues)

m=19, A={1, 4, 5, 6, 7, 9, 11, 16, 17} (quad. res.)

m=19, A={1, 2, 3, 5, 7, 12, 14, 17, 18}

m=19, A={1, 2, 3, 5, 9, 10, 16, 17, 18}

m=19, A={1, 2, 3, 6, 7, 12, 13, 16, 18}

m=19, A={1, 2, 3, 6, 7, 12, 13, 17, 18}

m=23, A={1, 2, 3, 4, 6, 8, 11, 12, 15, 17, 20}, and 567 other equivalence classes of optimal A, none of which are the quad. res.

m=29, A={1, 4, 5, 6, 7, 9, 13, 16, 20, 22, 23, 24, 25, 28} (quad res)

m=31, A={1, 2, 3, 4, 6, 8, 11, 12, 19, 23, 25, 27, 28, 29, 30}, and 51 other equivalence classes of optimal A, none of which are the quad. res.

m=37, A={1, 3, 4, 7, 9, 10, 11, 12, 16, 21, 25, 26, 27, 28, 30, 33, 34, 36} (quad res), and 12 other equivalence classes of optimal A

]]>Kevin, for the primes p that you’ve worked out, is the optimal set always the set of quadratic residues modulo p?

]]>I have made an error in the above posts. I need drift 7 to force discrepancy 4. Drift 6 only works for one where the drift starts with consecutive values of the same sign starting at an odd number. Drift 5 can be used to prove discrepancy 3.

So it is …

]]>If I’ve got the right question here after a quick reading, you find that the average of over all in the unit circle is .

]]>