### The purpose of this post is to ignite some new activity related to polymath5 about Erdős’ Discrepancy Problem. This post is thus a polymath5 research thread, and comments (including comments unrelated to the suggestions in this post) are welcome.

### The general form of a discrepancy problem

Let $\mathcal H$ be a hypergraph, i.e., a collection of subsets of a ground set $V$. The **discrepancy** of $\mathcal H$, denoted by $\mathrm{disc}(\mathcal H)$, is the minimum over all functions $f: V \to \{-1,+1\}$ of the maximum over all $S \in \mathcal H$ of

$\left|\sum_{x \in S} f(x)\right|$.

We will mention one additional definition, that of **hereditary discrepancy**. When $\mathcal H$ is a hypergraph and $W \subseteq V$, the restriction of $\mathcal H$ to $W$ is the hypergraph with vertex set $W$ whose edges are all sets of the form $S \cap W$ for edges $S$ of $\mathcal H$. The hereditary discrepancy of $\mathcal H$ is the maximum over all $W \subseteq V$ of the discrepancy of $\mathcal H$ restricted to $W$.

Here is a link to a recent post discussing discrepancy and the famous Beck–Fiala theorem. The Beck–Fiala theorem asserts that if every element of $V$ is included in at most $t$ sets of $\mathcal H$, then $\mathrm{disc}(\mathcal H) \le 2t-1$. (Of course, the theorem applies also to the hereditary discrepancy.)

## Erdős’ Discrepancy Problem reviewed

### The problem and links to polymath5 material

**Erdős Discrepancy Problem (EDP).** *Is it possible to find a $\pm 1$-valued sequence $f(1), f(2), f(3), \dots$ and a constant $C$ such that $\left|\sum_{i=1}^{m} f(ik)\right| \le C$ for every $m$ and every $k$?*

A HAP (Homogeneous Arithmetic Progression) is an arithmetic progression of the form $\{k, 2k, \dots, rk\}$. EDP asks about the sums of a sequence along HAPs.

Given a sequence $f$, define $E_f(n)$ to be the maximum of the absolute values of the sums of $f$ over HAPs which are subsets of $\{1,2,3,\dots,n\}$. EDP asks if we can find a sequence for which $E_f(n)$ is uniformly bounded, and we will be interested in finding sequences where $E_f(n)$ grows slowly.
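Since several of the problems below ask how various algorithms perform, here is a small brute-force utility for the maximum HAP sum just defined (my own helper with a hypothetical name, not polymath5 code):

```python
def hap_discrepancy(f, n):
    """Maximum of |f(k) + f(2k) + ... + f(rk)| over all HAPs
    {k, 2k, ..., rk} contained in {1, ..., n}."""
    best = 0
    for k in range(1, n + 1):          # difference of the HAP
        s = 0
        for m in range(k, n + 1, k):   # running sum along k, 2k, 3k, ...
            s += f[m]
            best = max(best, abs(s))
    return best

# The alternating sequence +1, -1, +1, -1, ... already shows that a naive
# choice fails: the HAP 2, 4, ..., 100 consists only of -1's.
f = {i: (1 if i % 2 == 1 else -1) for i in range(1, 101)}
print(hap_discrepancy(f, 100))  # 50
```

Running it over all differences costs about $n \log n$ additions, so lengths in the thousands are instantaneous.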

EDP was extensively discussed and studied in polymath5. Here is the link to the first post. Here are links to all polymath5 posts. Here is the link to the polymath5 wiki.

A sequence $f$ is called *completely multiplicative* if $f(mn) = f(m) f(n)$ for every $m$ and $n$. EDP is of great interest even if we restrict our attention to completely multiplicative sequences. For those we only have to consider the partial sums $f(1) + f(2) + \cdots + f(n)$, since the sum along the HAP with difference $k$ is just $f(k)$ times such a partial sum.

### An example where the discrepancy grows like log n.

The function that takes $n$ to $1$ if the last non-zero digit of $n$ in its ternary representation is 1, and to $-1$ if the last non-zero digit is 2, is completely multiplicative, and its partial sum up to $n$ is easily shown to be at most $\log_3 n + 1$. Therefore, the rate at which the worst discrepancy grows, as a function of the length of the homogeneous progression, can be as slow as logarithmic.
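This example is easy to verify by computer. The sketch below (mine, not from the post) checks complete multiplicativity and that the worst partial sum up to $3^8 = 6561$ is exactly $8 = \log_3 n$; since the function is completely multiplicative, the worst partial sum is also the worst HAP sum:

```python
import math

def f3(n):
    """+1 if the last non-zero ternary digit of n is 1, -1 if it is 2."""
    while n % 3 == 0:
        n //= 3
    return 1 if n % 3 == 1 else -1

# complete multiplicativity: f(ab) = f(a) * f(b)
for a in range(1, 60):
    for b in range(1, 60):
        assert f3(a * b) == f3(a) * f3(b)

# the worst partial sum up to n = 3^8 equals the maximal number of
# ternary digits equal to 1 among integers up to n, namely 8 = log_3 n
n = 3 ** 8
partial, worst = 0, 0
for i in range(1, n + 1):
    partial += f3(i)
    worst = max(worst, abs(partial))
print(worst, round(math.log(n, 3)))  # 8 8
```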

### What a random assignment gives us.

A random sequence (or a random completely multiplicative sequence) gives us discrepancy close to $\sqrt{n}$. (There is apparently an additional logarithmic factor, but I am not sure of the precise asymptotic behavior.)
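For comparison, here is a quick seeded simulation of the random assignment (my own script). It only illustrates the $\sqrt{n}$-type growth; it says nothing about the precise logarithmic factor:

```python
import math
import random

def hap_discrepancy(f, n):
    """Maximum absolute HAP partial sum of the sequence f over {1..n}."""
    best = 0
    for k in range(1, n + 1):
        s = 0
        for m in range(k, n + 1, k):
            s += f[m]
            best = max(best, abs(s))
    return best

random.seed(1)
results = {}
for n in (1000, 4000, 16000):
    f = {i: random.choice((-1, 1)) for i in range(1, n + 1)}
    results[n] = hap_discrepancy(f, n)
    # ratio against sqrt(n log n), just to eyeball the growth rate
    print(n, results[n], round(results[n] / math.sqrt(n * math.log(n)), 3))
```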

## Two greedy algorithms

From now on we write $E(n)$ instead of $E_f(n)$.

**Greedy algorithm 1** (for multiplicative functions): Assign the values one by one so as to minimize the maximum discrepancy over all partial sums whose terms are all already determined.

**Greedy algorithm 1** (for general functions): Assign the values one by one so as to minimize the maximum discrepancy over all HAPs whose terms are all already determined.

**Problem 1:** How does Greedy algorithm 1 perform?

**Empirical observation:** (This was claimed in some polymath5 remarks, but I am not sure if there was definite evidence. I would appreciate clarifications.) The discrepancy for sequences based on Greedy algorithm 1 (for multiplicative functions and for general functions) is large.

**Interpretation:** Greedy algorithm 1 optimizes an “irrelevant task”.

We would like to suggest here

**Greedy algorithm 2** (for multiplicative sequences): Assign the values one by one so as to minimize the maximum discrepancy over all partial sums, where unassigned entries get the value zero.

**Greedy algorithm 2** (for general sequences): Assign the values $f(n)$ one by one so as to minimize the maximum discrepancy over all partial sums of all HAPs, where unassigned entries get the value zero.

**Problem 2:** How does Greedy algorithm 2 perform?

Omri Floman ran Greedy 2 on inputs such as N=10000 and got a discrepancy of around 20 (there is a bit of randomness involved in cases of ties). For N=100000 he got about 45. It is unclear what the behavior is.
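To make Problem 2 concrete, here is one possible implementation of Greedy algorithm 2 for general sequences (my own reading of the rule; ties are broken at random, as in the experiments above). Since values are assigned in the natural order, treating unassigned entries as zero means that after step $n$ every HAP partial sum inside $\{1,\dots,n\}$ is determined, so it suffices to track one running sum per difference:

```python
import random

def greedy2(n_max, seed=0):
    """Greedy algorithm 2 (general sequences) in the order 1, 2, ..., n_max.
    S[d] is the running sum of f over the HAP with difference d; assigning
    f(n) changes S[d] exactly for the divisors d of n, so we pick the sign
    minimizing the worst |S[d] + sign| over those divisors."""
    rng = random.Random(seed)
    S = [0] * (n_max + 1)
    f = [0] * (n_max + 1)
    disc = 0
    for n in range(1, n_max + 1):
        divisors = [d for d in range(1, n + 1) if n % d == 0]
        cost = {sign: max(abs(S[d] + sign) for d in divisors)
                for sign in (1, -1)}
        if cost[1] != cost[-1]:
            sign = 1 if cost[1] < cost[-1] else -1
        else:
            sign = rng.choice([1, -1])   # random tie-breaking
        f[n] = sign
        for d in divisors:
            S[d] += sign
            disc = max(disc, abs(S[d]))
    return f, disc

f, disc = greedy2(2000)
print(disc)
```

The naive divisor loop makes this quadratic in the length; a divisor sieve would speed it up for experiments at the scale mentioned above.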

**Problem 3:** Can we find an optimal compromise between Greedy 1 and Greedy 2?

**Problem 4:** Can randomization help?

Of course, since we know that a sequence of length $n$ and discrepancy $\log n$ exists, if we draw a random sequence there is a small but positive probability of reaching such a low-discrepancy sequence. What we want to ask is whether randomization can lead to a method of getting a low-discrepancy sequence with larger probability, or even better with provably larger probability. (Or better yet, to a sequence of provably low discrepancy via an effective algorithm.)

**Problem 5:** How do Greedy algorithms 1 and 2 perform if we apply them to a random ordering of {1,2,…,n}?

## Extensions of Erdős’ Discrepancy problems

Here are some variations of the Erdős’ Discrepancy Problem along with some guesses for the answers. I will explain where these guesses came from next time.

**E0:** Erdős’ Discrepancy Problem

Guess:

**E1**: Allow f(n) to attain values which are complex numbers of norm 1.

Guess: same answer.

(Perhaps we can even consider norm-1 vectors in some Euclidean space or Banach space. If this is too permissive, we may go down to a constant.)

### Square-free integers

**E2**: The EDP for square-free integers

Here we simply consider sequences where $f(n) = 0$ if $n$ is not square-free. For this variation the multiplicative version of the problem is not a special case of the general question. (Here multiplicative means that $f(mn) = f(m) f(n)$ if $m$ and $n$ are coprime.)

Guess: for both versions, same answer.

### Random subsets of integers

**E3:** Instead of square-free integers consider a random dense subset of integers and assume that the sequence vanishes for indices not in the subset.

**Guess:** same answer .

### Random sets of primes

Here we consider multiplicative functions which are non-zero only on a random dense set of primes.

**E4**: The EDP for a random dense subset of primes.

Guess: same answer.

**Problem 6:** Find sequences for problems E2, E3, and E4 with as low a discrepancy as possible. **Update**: see below. It can be shown that low-discrepancy sequences exist for all these variations.

### Prime power differences

**E5**: The EDP for HAP with prime power differences.

Guess:.

### Beurling primes and integers

Beurling primes were defined by Arne Beurling in 1937, and he also proved a prime number theorem for them. The most general definition is very simple: consider a sequence of real numbers, each greater than 1, regarded as “primes”, and consider the (ordered) sequence of their products (multiplicities allowed) as the “integers”. (We will assume that all products are distinct, although for the original purpose of defining a zeta function multiplicities may play a role.) Beurling primes played a role in the polymath4 discussions.

**E6**: The EDP for Beurling primes and integers

Careless guess: at most .

### Sune Kristian Jakobsen’s systems of pseudointegers

One way to think about Beurling primes is to identify the $i$-th Beurling prime with the $i$-th ordinary prime, and then to reorder the natural numbers according to the ordering of the corresponding Beurling integers. Actually, given the ordering we can recover the Beurling primes uniquely. A much more general notion of “pseudointegers” was suggested during polymath5 by Sune Kristian Jakobsen. See also the overview on the polymath5 wiki.

An ordering $\prec$ of the natural numbers is an SKJ-ordering if it fulfills the following two conditions:

1) If $a \prec b$ then $ac \prec bc$ for every $c$.

and

2) For every $n$ the set $\{m : m \prec n\}$ is finite.

**Remark:** 1) Sune Kristian considered orderings on sequences of integers (the exponents in the prime factorization). This is equivalent to (but perhaps less provocative than) the formulation here. 2) We can expect that Beurling orderings are a tiny, tiny subset of SKJ-orderings.

Given an SKJ-ordering of the natural numbers we can ask the EDP for that ordering.
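To get a feel for the two conditions, here is a small sanity check (entirely my own construction; the prime “sizes” below are arbitrary choices for illustration). A multiplicative, Beurling-style weight induces an ordering on smooth numbers, and the two conditions can be spot-checked directly:

```python
import itertools

# Hypothetical Beurling-style "sizes" for the first six primes:
SIZES = {2: 2.0, 3: 3.4, 5: 4.7, 7: 7.2, 11: 11.9, 13: 13.1}

def is_smooth(n):
    """True if all prime factors of n are among the keys of SIZES."""
    for p in SIZES:
        while n % p == 0:
            n //= p
    return n == 1

def weight(n):
    """Multiplicative weight; ordering naturals by weight(n) gives an
    SKJ-style reordering (defined here only on 13-smooth numbers)."""
    w = 1.0
    for p, s in SIZES.items():
        while n % p == 0:
            n //= p
            w *= s
    assert n == 1, "prime factor outside the pseudo-prime list"
    return w

smooth = [n for n in range(1, 500) if is_smooth(n)]

# Condition 1: a before b implies ac before bc. Automatic here, since
# the weight is multiplicative, but we spot-check it anyway.
for a, b in itertools.combinations(smooth[:40], 2):
    for c in smooth[:20]:
        if weight(a) < weight(b):
            assert weight(a * c) < weight(b * c)

# Condition 2: the set of elements preceding any given one is finite;
# here are the first few predecessors of 100 in the new order.
below = [n for n in smooth if weight(n) < weight(100)]
print(sorted(below, key=weight)[:10])  # [1, 2, 3, 4, 5, 6, 7, 8, 10, 9]
```

Note that the induced order already disagrees with the usual one: 10 precedes 9 because weight(10) = 9.4 while weight(9) = 11.56.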

**E7**: The EDP for Sune Kristian Jakobsen systems of integers.

Careless guess: at most polylog $n$.

**Problem 7**: How does Greedy algorithm 2 perform for variations E2–E7?

The answers for E2, E3 and E4 are especially interesting because these are examples where the best upper bounds I know behave like $\sqrt{n}$ and are obtained from a random assignment. (See Problem 5.)

**Problem 8:** Prove the assertion of EDP (namely, that the discrepancy is unbounded) for a system of SKJ-pseudointegers of **your** choice.

## Related Discrepancy Problems

### Discrepancy and hereditary discrepancy

Recall that for a hypergraph $\mathcal H$ defined on a set $V$, the discrepancy of $\mathcal H$, denoted by $\mathrm{disc}(\mathcal H)$, is the minimum over all functions $f: V \to \{-1,+1\}$ of the maximum over all $S \in \mathcal H$ of

$\left|\sum_{x \in S} f(x)\right|$.

When $\mathcal H$ is a hypergraph and $W \subseteq V$, the restriction of $\mathcal H$ to $W$ is the hypergraph with vertex set $W$ whose edges are all sets of the form $S \cap W$ for edges $S$ of $\mathcal H$. The hereditary discrepancy of $\mathcal H$ is the maximum over all $W \subseteq V$ of the discrepancy of $\mathcal H$ restricted to $W$.

### An operation on hypergraphs

Let $\mathcal H$ be a hypergraph on a vertex set $V$ and assume that $V$ is ordered, e.g., $V = \{1,2,\dots,n\}$. Consider the hypergraph $\mathcal H'$ obtained from $\mathcal H$ by adding, for every set $S \in \mathcal H$, all its initial segments with respect to the ordering. We will consider the operation of moving from $\mathcal H$ to $\mathcal H'$. Note that for EDP and all our variations E1–E8 the underlying hypergraph is obtained by this construction (from certain natural hypergraphs). I further guess that for all the variations E1–E8, if we consider the hypergraph before taking initial segments, then the discrepancy is bounded.

### Probabilistic versions

Let $V = \{1,2,\dots,n\}$ be a ground set and let $p_1, \dots, p_m$ be a set of reals in $[0,1]$. First consider the discrepancy problem for a random hypergraph $\mathcal H$ with $m$ edges, whose $i$th edge is a random set $S_i$ such that every element of $V$ belongs to $S_i$ with probability $p_i$ (and all these events are statistically independent). Next we can move from $\mathcal H$ to $\mathcal H'$ as described above.

The case of taking $m = n$ and $p_i = 1/i$ for the probabilities can be seen as a probabilistic analog of EDP. My guess for the discrepancy of this case is also the same. I also guess that the discrepancy of $\mathcal H$ itself is bounded.

**Problem 9:** How do Greedy algorithms 1 and 2 perform for the random hypergraph $\mathcal H'$?

### Roth’s theorem about the discrepancy of arithmetic progressions

For EDP the ground set is all natural numbers, or just the set {1,2,…,n}, and the hypergraph is the collection of all HAPs (homogeneous arithmetic progressions). Roth considered the hypergraph of **all** arithmetic progressions. Roth proved that the discrepancy in this case is at least $cn^{1/4}$. The existence of colorings with discrepancy $n^{1/4}$ times a polylogarithmic factor was proved by Beck, and the polylogarithmic factor was removed by Matoušek and Spencer. These works by Beck, Matoušek and Spencer may be very relevant for proving the existence of low-discrepancy sequences for EDP and some of its extensions.

## Some results with Noga Alon

Here are some results proved in collaboration with Noga Alon, which are based on the Beck–Fiala theorem and a general argument for moving from $\mathcal H$ to $\mathcal H'$. (It is likely that some of them are known, and we will be happy to know.)

**Proposition 1:** For $m = n$ and $p_i = 1/i$, the discrepancy of the random hypergraph $\mathcal H'$ described above is at most polylogarithmic in $n$.

This follows from the following general proposition:

**Proposition 2:** Let $\mathcal H$ be a hypergraph on an $n$-element ordered set and let $t$ be the maximum degree of a point of $\mathcal H$. Then $\mathrm{disc}(\mathcal H') = O(t \log^2 n)$.

**Proof**: Let $V = I_1 \cup I_2$ be a partition of $V$ into two nearly equal intervals, $I_1 = I_{11} \cup I_{12}$ a partition of $I_1$ into 2 nearly equal intervals, and similarly $I_2 = I_{21} \cup I_{22}$, etc. ($\log n$ levels). Now define a new hypergraph $\mathcal H''$ obtained from $\mathcal H$ by replacing each set $S$ in $\mathcal H$ by the following sets (possibly some empty):

$S \cap I_1$, $S \cap I_2$, $S \cap I_{ij}$ (for all $i, j$), and so on through all the levels.

Note that the maximum degree of a point in $\mathcal H''$ is at most $t \log n$, hence the discrepancy of $\mathcal H''$ is at most $2t \log n$ by the Beck–Fiala theorem. Also, each initial segment of an edge $S$ is a union of at most $\log n$ pairwise disjoint members of $\mathcal H''$. This gives Proposition 2, and since for the case described in Proposition 1 the degree of a vertex behaves like $\log n$, we obtain Proposition 1 as well.
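The interval-splitting step in the proof is easy to visualize in code. The sketch below (mine) builds, for each initial segment $\{1,\dots,k\}$ of $\{1,\dots,n\}$, a decomposition into canonical intervals of the halving scheme, and confirms that at most $\lceil \log_2 n \rceil + 1$ pairwise disjoint pieces are ever needed:

```python
import math

def dyadic_decomposition(k, n):
    """Write the initial segment {1, ..., k} of the ordered set {1, ..., n}
    as a disjoint union of canonical intervals of the halving scheme,
    using at most one interval per level."""
    pieces = []
    lo, hi = 1, n                      # current canonical interval
    while lo <= k:
        if k >= hi:                    # the whole interval fits
            pieces.append((lo, hi))
            break
        mid = (lo + hi) // 2
        if k >= mid:                   # left half fits; continue on the right
            pieces.append((lo, mid))
            lo = mid + 1
        else:                          # recurse into the left half
            hi = mid
    return pieces

n = 1000
for k in range(1, n + 1):
    pieces = dyadic_decomposition(k, n)
    covered = [i for lo, hi in pieces for i in range(lo, hi + 1)]
    assert covered == list(range(1, k + 1))           # exact disjoint cover
    assert len(pieces) <= math.ceil(math.log2(n)) + 1  # one piece per level
print("ok")
```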

**Proposition 3:** For versions **E1–E8** there are examples where the discrepancy is at most $n^{o(1)}$.

**Proof:** In all these examples we consider a hypergraph on the ground set $A = \{1,2,\dots,n\}$, or on a subset of $A$ with positive density (or, for E7 and E8, on another non-conventionally ordered set of integers). Then we move to the hypergraph of initial segments of edges of $\mathcal H$. To apply Proposition 2 and the Beck–Fiala theorem we need an upper bound on the degree of a vertex in the hypergraph $\mathcal H$. Consider the case of EDP. (The other cases are similar.) The maximum degree is attained at an integer that is the product of the first $k$ distinct primes; in this case the degree is smaller than $2^k = n^{o(1)}$. Note that the proof applies even to hereditary discrepancy.

**Problem 10:** Use similar ideas to prove better (even polylog (n)) constructions for EDP and its variations **E1**–**E8**.

I extend all my earlier guesses to the setting where we move from discrepancy to hereditary discrepancy.

**Proposition 4:** The hereditary discrepancy of the hypergraph of HAPs on {1,2,…,n} is at least $c\sqrt{\log n / \log\log n}$.

The proof is obtained as follows: consider the first $m$ primes, where $m$ is roughly $\log n / \log\log n$, and a hypergraph on $\{1,2,\dots,m\}$ of discrepancy $\sqrt{m}$. (For example, a Hadamard matrix of order $m$ describes such a hypergraph.) Next, for every edge $e$, consider the integer that corresponds to the product of the primes whose indices are in $e$. Then restrict your attention only to these integers.

Propositions 3 and 4 show that the hereditary discrepancy of the hypergraph of HAPs in {1,2,…,n} is between $c\sqrt{\log n / \log\log n}$ and $n^{o(1)}$.

## A Quick Outlook at Polymath 5: EDP 1-21

Before our quick review of polymath5, let me mention a major difficulty which seems relevant to all approaches:

If we allow zero entries in our sequence, even a small density of zero entries, then the discrepancy can be bounded.

For example consider the sequence 1, -1, 0, 1, -1, 0, 1, -1, 0, …
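Indeed, this period-3 sequence is completely multiplicative (it is the non-trivial multiplicative character mod 3), so the sum along any HAP with difference $k$ is $0$ or $\pm 1$ times a partial sum of the sequence itself, which never leaves $\{0, 1\}$. A quick check (my script):

```python
def chi3(n):
    """The period-3 sequence 1, -1, 0, 1, -1, 0, ..."""
    return [0, 1, -1][n % 3]

n = 3000
best = 0
for k in range(1, n + 1):              # HAP difference
    s = 0
    for m in range(k, n + 1, k):       # partial sums along k, 2k, 3k, ...
        s += chi3(m)
        best = max(best, abs(s))
print(best)  # 1
```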

**Experimentation:** Computer experiments played a large role in polymath5. One of the most striking discoveries was a sequence of length 1124 of discrepancy 2. Later a second sequence of length 1124 and discrepancy 2 was found, but no longer one. Several people made great contributions with computer experiments.

**Additive Fourier analysis ideas:** Using Fourier analysis on our sequence and somehow reaching a contradiction when we assume that the discrepancy is bounded was a suggestion that was dominant in the first few threads, and we came back to it from time to time.

**Terry Tao’s reduction:** Terry Tao found a reduction from the general question to the variation of multiplicative functions with complex norm-1 values (**E1**). The beautiful proof relies on “multiplicative” Fourier analysis and it is striking how “little” the proof uses.

**Problem 11** (Sune Kristian Jakobsen): Does Tao’s reduction apply to arbitrary SKJ-pseudointegers?

**Semi-definite programming:** A major turning point was a suggestion by Moses Charikar to use a natural semi-definite relaxation for the problem. This promising avenue was explored over several threads and here too computer experimentation was done. One reason to regard Charikar’s approach (and the related linear programming approach described next) as hopeful is precisely because it offers a convincing way to get around the difficulty demonstrated by the sequence 1, -1, 0, 1, -1, 0, …

**A generalization and linear programming:** The last few threads were centered around a different related relaxation proposed by Tim Gowers. The problem was generalized from a single-sequence problem to a pair-of-sequences problem. (This is a common motif in extremal combinatorics although here the motivation came from functional analysis.) Then relaxation of the problem led to a very appealing linear programming question.

**Mathoverflow questions:** Mathoverflow was used to ask several questions related to the project.

**Participation:** Polymath 5 attracted much interest and wide participation. Overall, it did not attract researchers with major prior interest in discrepancy theory.

## Discrepancy

Discrepancy theory is a huge and exciting area. Let me just give references to some books.

József Beck and William W. L. Chen: *Irregularities of Distribution*, Cambridge Tracts in Mathematics 89, Cambridge University Press, 1987.

Bernard Chazelle: *The Discrepancy Method: Randomness and Complexity*, Cambridge University Press, 2000.

Jiří Matoušek: *Geometric Discrepancy: An Illustrated Guide*, Algorithms and Combinatorics 18, Springer, 1999.

József Beck: *Combinatorial Games: Tic-Tac-Toe Theory*, Cambridge University Press, 2008.

József Beck: *A forthcoming book.*

Reading these books will prepare you better to deal with EDP and will enrich your life tremendously.

August 23, 2012 at 9:02 am |

[…] Erdos discrepancy problem (polymath5): Tim Gowers and I have a plan to try soon to revive the EDP project at least for a short while. We plan 3 posts about it in the second half of August over Tim’s blog. Update: The first post EDP22 just appeared. […]

August 23, 2012 at 10:59 am |

You have the formula twice in the post. I think there should be absolute value signs around it in both cases.

Corrected now — Tim.

August 23, 2012 at 1:28 pm

Dear Sune, Right, my mistake. BTW did you explore further your notion of pseudointegers?

August 23, 2012 at 2:09 pm

Not any more than what I have posted.

August 24, 2012 at 11:42 am

Sune, I found your pseudointegers very interesting. I just remind the readers that they can be regarded as total order relations on the positive integers, where a<b implies ac<bc, and where there are only finitely many integers smaller than any number n. For example you can ask: if the number of pseudointegers smaller than a prime p is roughly p, does this imply the statement of the prime number theorem? (Something of this kind is known for Beurling primes.) Two further questions: 1) I don’t remember if you had examples for your pseudoprimes where EDP was violated? 2) Did you check if Tao’s reduction works in this generality?

August 24, 2012 at 12:53 pm

1): Yes, here: https://gowers.wordpress.com/2010/02/08/edp7-emergency-post/#comment-6120 (as a reply to the comment you linked to. I couldn’t remember it before I read it).

We can also construct examples where the multiplicative EDP is true, by having arbitrarily long runs of consecutive squares.

2) No. I remember that I didn’t understand Tao’s proof, at least not well enough to check whether it worked for the pseudointegers. I had hoped Tao could tell if it works!

September 1, 2012 at 9:08 am

“We can also construct examples where the multiplicative EDP is true, by having arbitrarily long runs of consecutive squares.” That does not hold: we can’t have two consecutive pseudointegers that are squares in any set of pseudointegers. Assume for contradiction that x and y are consecutive and squares. We can write them as $x = a^2$ and $y = b^2$ with $a \prec b$. But then $a^2 \prec ab \prec b^2$, a contradiction.

So I still haven’t found a set of pseudointegers where I can prove EDP.

September 3, 2012 at 8:17 pm

Dear Sune, it somehow looks to me that your pseudo-integers are the same as Beurling’s integers. Maybe we can relax your definition and insist that c and d are relatively prime to a and b to get a more general notion.

September 4, 2012 at 9:02 am

Sune, if you can tell me what the statement following “such that” was supposed to be, then I will edit the comment. Tim

Dear Gil, yes it seems to be almost the same. Two Beurling integers are also allowed to correspond to the same real number, and that corresponds to the statement that we can have pseudo-integers a and b such that a … b^{n} for all n. The only difference seems to be that pseudointegers have a complete ordering. The ordering of the Beurling integers comes from a (possibly not injective) map to the reals, so it might not be complete. But that is just a little extra structure, so in the following I will use “Beurling’s integers” to mean what I used to call “pseudo-integers”. I think that if you have a set of Beurling’s integers then for all n, the set of the first n of these Beurling integers will be isomorphic to a subset of the ordinary integers, and I think you lose this property if you generalise further.

August 23, 2012 at 11:46 am |

For $p > 0$, define $Z(p)$ to be the smallest possible discrepancy over HAPs of a sequence in which the density of 0’s is at most p. We know that this is finite for all p>0: generalizing the sequence 1, -1, 0, 1, -1, 0, …, we can take a multiplicative character mod q for a prime q with 1/q ≤ p; it vanishes only on multiples of q and its HAP sums are bounded by a constant depending only on q.

EDP is equivalent to the statement that $Z(p) \to \infty$ as $p \to 0$. We could hope that the behavior of Z is simpler than that of E, since in the definition of E we are postponing problems, while in the definition of Z we are spreading them out.

August 23, 2012 at 2:15 pm

Could you elaborate on your last sentence: I find it intriguing but I don’t see exactly what you mean by it?

August 23, 2012 at 3:23 pm

First, in the above should have been (I hope).

It seems that I was a bit too sloppy both when I read the definition of E and when I posted the above comment. I thought the definition of E(n) was “the smallest possible discrepancy of a sequence of length n”.

What I meant was: One way to investigate EDP is to fix some integer d and try to find the longest possible sequence with discrepancy below d. The sequences we get are then optimized to keep the discrepancy at most d for as long as possible, and it might be that for all extensions of the sequence the discrepancy grows to, say, (3/2)d within the next few terms. This is what I mean by “postponing the problems”.

Instead we might try to keep the discrepancy below d while defining the sequence on a subset of with positive density (and setting all other terms to 0). If the positions of these 0’s are periodic, as in the low-discrepancy examples we know, then it will be possible to fill out more terms without increasing the discrepancy by much.

August 23, 2012 at 3:44 pm

The downside of this approach is that the definition of Z involves infinite sequences. It is not even clear if there is a way to compute Z(p) for a given p.

August 24, 2012 at 6:52 am

You could combine the two ideas, by considering $Z_p(n)$, the minimal HAP-discrepancy over sequences where the entry is 0 for at most $pn$ values of the index in the range $\{1,\dots,n\}$ (and there’s no restriction for values greater than $n$).

Then your $Z(p)$ is just $\lim_{n \to \infty} Z_p(n)$, and $E(n)$ is $Z_0(n)$.

It could be interesting to think specifically about where the zero values should be placed in order to minimize the discrepancy in this context.

August 26, 2012 at 1:35 am |

Very interesting post!


August 26, 2012 at 1:39 am |

Sorry, the above didn’t parse right, I am not very good with this wordpress-latex thing..

Some notes on the operation on hypergraphs. The short version of this comment is that Prop 2 can be tightened, and also generalized to show that the operation on hypergraphs increases hereditary discrepancy by at most a polylog factor.

1) Prop 2 holds with $\sqrt{t}$ in place of $t$, up to logarithmic factors (assuming the number of sets of $\mathcal H$ is polynomial in $n$). This follows from the following theorem by Banaszczyk:

For any convex set $K$ of Gaussian measure at least 1/2 and any set of vectors $u_1, \dots, u_m$, each of Euclidean norm at most 1/10, one can find signs $\epsilon_i \in \{-1, 1\}$ such that $\sum_i \epsilon_i u_i \in K$.

Let $A$ be the incidence matrix of $\mathcal H''$ (as you define it in your proof). Scale down the columns of $A$ by $c\sqrt{t \log n}$ for a large enough constant $c$, so that each column has norm at most 1/10, and let the scaled columns be $u_1, \dots, u_n$.

Let $B$ be the incidence matrix of $\mathcal H'$ and write $B = PA$. As you argue, each row of $P$ can be taken to be a 0-1 vector with at most $\log n$ 1’s, so each row of $P$ can be taken to have Euclidean norm at most $\sqrt{\log n}$. Define $K = \{x : \|Px\|_\infty \le \lambda\}$. Clearly $K$ is convex, and for $\lambda$ large enough (polylogarithmic in $n$) it has Gaussian measure at least 1/2. By Banaszczyk’s theorem, we can find signs $\epsilon$ such that $\sum_i \epsilon_i u_i \in K$, and scaling back up we have found $\epsilon$ whose discrepancy on $\mathcal H'$ is $O(\sqrt{t} \cdot \mathrm{polylog}(n))$.

2) If $\mathcal H'$ is the result of applying the transformation to $\mathcal H$, and $\mathcal H$ has $m$ sets, then the hereditary discrepancy of $\mathcal H'$ exceeds that of $\mathcal H$ by at most a polylogarithmic factor.

The proof is similar to yours. Take $\mathcal H_1$ to be the result of replacing each set $S$ with $S \cap I_1$ and $S \cap I_2$; $\mathcal H_2$: the result of replacing $S$ with $S \cap I_{ij}$ for all $i, j$; and so forth. This gives us $\mathcal H_k$ for $k \le \log n$. Since each $\mathcal H_k$ is the union of *disjoint* restrictions of $\mathcal H$, for all $k$ we have $\mathrm{herdisc}(\mathcal H_k) \le \mathrm{herdisc}(\mathcal H)$. Then $\mathcal H''$ is the union of the $\mathcal H_k$, and by a recent result of Matoušek its hereditary discrepancy is larger than $\max_k \mathrm{herdisc}(\mathcal H_k)$ by at most a polylogarithmic factor. Finally, as you argue, $\mathrm{herdisc}(\mathcal H') \le \log n \cdot \mathrm{herdisc}(\mathcal H'')$.

August 26, 2012 at 2:14 pm

Dear Sasho, This is very interesting!

August 26, 2012 at 8:43 am |

Some information regarding greedy algorithm 2 can be found in

Robert Lemke Oliver’s answer to this MathOverflow question

http://mathoverflow.net/questions/105383/the-behavior-of-a-certain-greedy-algorithm-for-erds-discrepancy-problem

August 26, 2012 at 9:30 am |

Terry Tao’s reduction of the problem to one about multiplicative sequences (described here) also invites a class of negative attacks that may conceivably be easier than the direct search for a counterexample. Namely, can one find a method of generating a random multiplicative sequence such that (with respect to the resulting random distribution) the expectation of the discrepancy up to $n$ is bounded as a function of $n$? Such a method would (as shown at the bottom of that wiki page) imply a counterexample to the Hilbert-space version of EDP.

August 26, 2012 at 9:59 pm |

[…] Autumn I prepared three posts on the problems and we decided to launch them now. The first post is here. Here is a related MathOverflow question. Discrepancy theory is a wonderful theory and while I am […]

August 27, 2012 at 6:26 pm |

Perhaps it is worth adding also

**Proposition 5:** For every $\epsilon > 0$ there is a set of the natural numbers of density $1 - \epsilon$ such that, restricted to this set, the discrepancy of HAPs in {1,2,…,n} is at most polylog(n). **Proof:** Simply consider the natural numbers with at most $K \log\log n$ prime factors, for $K$ sufficiently large, and apply Proposition 2.

August 29, 2012 at 4:52 pm

I meant the hereditary discrepancy. (For discrepancy there are easier things that can be done.)

September 4, 2012 at 7:15 pm |

Last week Kunal Talwar and I thought a bit about the hereditary discrepancy of HAPs. We can show that the hereditary discrepancy is not polylogarithmic; in fact it is $n^{\Omega(1/\log\log n)}$, which, as the upper bound in Gil’s post shows, is tight up to the constant in the exponent.

Below I am writing a sketch of how the proof goes. Since the proof itself got a bit long, we wrote a note: you can find it here.

(1) Consider the following set system of boolean subcubes; let’s call it $\mathcal C$. Each element of the ground set is identified with a boolean vector $x \in \{0,1\}^d$. We get each set by fixing some of the coordinates and leaving the other ones free. That is, each set is associated with a vector $y \in \{0, 1, \star\}^d$ and consists of those $x$ for which $x_i = y_i$ whenever $y_i \ne \star$.

(2) We construct a set of positive integers by associating an integer to each boolean vector, so that each subcube corresponds to a HAP restricted to this set of integers. This is not hard and is similar to the construction in the post: we pick the first $d$ primes $p_1, \dots, p_d$ and associate to each vector $x$ the integer $\prod_i p_i^{x_i}$. A subcube then corresponds to the HAP whose difference is the product of the primes at the coordinates fixed to 1.

(3) We show that the hereditary discrepancy of $\mathcal C$ is $2^{\Omega(d)}$.

This proceeds in two steps:

3.1 Use the determinant lower bound of Lovasz, Spencer, and Vesztergombi to show that the matrix of Fourier characters of weight at most $\epsilon d$ ($\epsilon$ a constant) has hereditary discrepancy $2^{\Omega(d)}$. Call this matrix $F$.

3.2. Show that the hereditary discrepancy of $F$ is at most a factor $2^{O(\epsilon d)}$ times that of $\mathcal C$. This goes by writing a character of weight $w$ as a linear combination of indicator vectors of subcubes. This is possible since a character of weight $w$ is a function of $w$ components.

We can choose $\epsilon$ so that 3.1 and 3.2 imply (3).

September 13, 2012 at 3:35 am

By request from Gil: you can read another perspective on the results above in a blog post Kunal posted on an MS Research computer science theory blog: http://windowsontheory.org/2012/09/05/from-discrepancy-to-privacy-and-back/. The post focuses on connections with computer science (specifically differential privacy) that gave us some intuition.

September 13, 2012 at 7:03 pm

Very nice result. It is nice to see the hereditary discrepancy of HAPs being completely understood, and the relation with privacy is also very nice.

September 5, 2012 at 4:33 pm |

[…] the EDP, it is natural to ask how large the hereditary discrepancy is. Alon and Kalai show that it is and at most . They also showed that for constant , it is possible to delete an […]

October 15, 2012 at 4:56 am |

[…] Kunal Talwar. See this post From discrepancy to privacy and back and the paper. Noga Alon and I showed that it is and at most , and to my surprise Alexander and Kunal showed that the upper bound is […]

November 17, 2012 at 5:35 pm |

[…] and Talwar of the hereditary discrepancy analog for Erdos’ discrepancy problem. See also this post in Gowers’s […]

August 19, 2013 at 2:06 pm |

[…] Doron’s introduction; 00:43-4:00 On the screen: Erdős discrepancy problems: showing this post (EDP22) from Gowers’s blog. I talked about the chaotic nature of mathematics on the Internet. Then I explained what are […]

February 28, 2014 at 4:10 pm |

[…] in Logic and Computation. They have recently made important progress on the Erdős Discrepancy Problem using computer tools from logic. Unlike the approach of a PolyMath project on the problem, their […]