## FUNC3 — further strengthenings and variants

February 13, 2016

In the last post I concentrated on examples, so in this one I’ll concentrate on conjectures related to FUNC, though I may say a little about examples at the end, since a discussion has recently started about how we might go about trying to find a counterexample to FUNC.

### A proposal for a rather complicated averaging argument

After the failure of the average-overlap-density conjecture, I came up with a more refined conjecture along similar lines that has one or two nice properties and has not yet been shown to be false.

The basic aim is the same: to take a union-closed family $\mathcal A$ and use it to construct a probability measure on the ground set in such a way that the average abundance with respect to that measure is at least 1/2. With the failed conjecture the method was very basic: pick a random non-empty set $A\in\mathcal A$ and then a random element $x\in A$.

The trouble with picking random elements is that it gives rise to a distribution that does not behave well when you duplicate elements. (What you would want is that the probability is shared out amongst the duplicates, but in fact if you duplicate an element many times, the set of duplicates gains an advantage that the original element did not have.) This is not just an aesthetic concern: it was at the heart of the downfall of the conjecture. What one really wants, and this is a point that Tobias Fritz has been emphasizing, is to avoid talking about the ground set altogether, something one can do by formulating the conjecture in terms of lattices, though I’m not sure whether what I’m about to describe makes sense for lattices.
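For concreteness, here is a minimal Python sketch of the failed averaging scheme: pick a uniformly random non-empty set in the family, then a uniformly random element of it, and compute the average abundance under the resulting measure. The family used is a small hypothetical example, not one from the discussion.

```python
def abundance(family, x):
    """Fraction of the sets in the family that contain x."""
    return sum(x in A for A in family) / len(family)

def average_abundance(family):
    """Average abundance of x when A is a uniformly random non-empty set
    in the family and x is then a uniformly random element of A."""
    nonempty = [A for A in family if A]
    return sum(sum(abundance(family, x) for x in A) / len(A)
               for A in nonempty) / len(nonempty)

# A small union-closed family, chosen purely for illustration.
family = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({3}),
          frozenset({1, 3}), frozenset({1, 2, 3})]
print(average_abundance(family))  # at least 1/2 for this family
```

Duplicating an element (say replacing 1 by two elements 1 and 1′ in every set containing 1) changes the value this scheme computes, which is exactly the bad behaviour described above.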
Read the rest of this entry »

## FUNC2 — more examples

February 8, 2016

The first “official” post of this Polymath project has passed 100 comments, so I think it is time to write a second post. Again I will try to extract some of the useful information from the comments (but not all, and my choice of what to include should not be taken as some kind of judgment). A good way of organizing this post seems to be to list a few more methods of constructing interesting union-closed systems that have come up since the last post — where “interesting” ideally means that the system is a counterexample to a conjecture that is not obviously false.

### Standard “algebraic” constructions

#### Quotients

If $\mathcal A$ is a union-closed family on a ground set $X$, and $Y\subset X$, then we can take the family $\mathcal A_Y=\{A\cap Y:A\in\mathcal{A}\}$. The map $\phi:A\mapsto A\cap Y$ is a homomorphism (in the sense that $\phi(A\cup B)=\phi(A)\cup\phi(B)$), so it makes sense to regard $\mathcal A_Y$ as a quotient of $\mathcal A$.

#### Subfamilies

If instead we take an equivalence relation $R$ on $X$, we can define a set-system $\mathcal A(R)$ to be the set of all members of $\mathcal A$ that are unions of equivalence classes of $R$.

Thus, subsets of $X$ give quotient families and quotient sets of $X$ give subfamilies.
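The two constructions above can be sketched in a few lines of Python (the example family is hypothetical, chosen only to be small and union-closed):

```python
def quotient(family, Y):
    """Quotient family A_Y = {A ∩ Y : A in family} for a subset Y of the ground set."""
    return {A & Y for A in family}

def subfamily(family, classes):
    """Subfamily A(R): the members of the family that are unions of
    equivalence classes of R (classes = the partition of the ground set)."""
    result = set()
    for A in family:
        # A is a union of classes iff every class it meets lies inside it.
        if all(C <= A for C in classes if C & A):
            result.add(A)
    return result

# Illustrative union-closed family on ground set {1, 2, 3}.
family = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({3}),
          frozenset({1, 3}), frozenset({1, 2, 3})}
print(quotient(family, frozenset({1, 3})))
print(subfamily(family, [frozenset({1, 2}), frozenset({3})]))
```

Both outputs are again union-closed, as the general discussion predicts.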

#### Products

Possibly the most obvious product construction of two families $\mathcal A$ and $\mathcal B$ is to make their ground sets disjoint and then to take $\{A\cup B:A\in\mathcal A,B\in\mathcal B\}$. (This is the special case with disjoint ground sets of the construction $\mathcal A+\mathcal B$ that Tom Eccles discussed earlier.)

Note that we could define this product slightly differently by saying that it consists of all pairs $(A,B)\in\mathcal A\times\mathcal B$ with the “union” operation $(A,B)\sqcup(A',B')=(A\cup A',B\cup B')$. This gives an algebraic system called a join semilattice, and it is isomorphic in an obvious sense to $\mathcal A+\mathcal B$ with ordinary unions. Looked at this way, it is not so obvious how one should define abundances, because $(\mathcal A\times\mathcal B,\sqcup)$ does not have a ground set. Of course, we can define them via the isomorphism to $\mathcal A+\mathcal B$ but it would be nice to do so more intrinsically.
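Here is a short Python sketch of the disjoint-ground-set product (the tagging trick and the example families are just for illustration):

```python
def product(family_a, family_b):
    """Product of two families: tag the elements so that the ground sets
    become disjoint, then take all unions A ∪ B."""
    def tag(A, t):
        return frozenset((t, x) for x in A)
    return {tag(A, 'a') | tag(B, 'b') for A in family_a for B in family_b}

fam_a = {frozenset(), frozenset({1})}
fam_b = {frozenset(), frozenset({2}), frozenset({1, 2})}
prod = product(fam_a, fam_b)
# With disjoint ground sets the map (A, B) -> A ∪ B is injective, so the
# product has |fam_a|·|fam_b| sets, and it is again union-closed.
print(len(prod))
```

The injectivity of $(A,B)\mapsto A\cup B$ is exactly why the pairs formulation $(\mathcal A\times\mathcal B,\sqcup)$ is isomorphic to this one.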
Read the rest of this entry »

## FUNC1 — strengthenings, variants, potential counterexamples

January 29, 2016

After my tentative Polymath proposal, there definitely seems to be enough momentum to start a discussion “officially”, so let’s see where it goes. I’ve thought about the question of whether to call it Polymath11 (the first unclaimed number) or Polymath12 (regarding the polynomial-identities project as Polymath11). In the end I’ve gone for Polymath11, since the polynomial-identities project was listed on the Polymath blog as a proposal, and I think the right way of looking at things is that the problem got solved before the proposal became a fully-fledged project. But I still think that that project should be counted as a Polymathematical success story: it shows the potential benefits of opening up a problem for consideration by anybody who might be interested.

Something I like to think about with Polymath projects is the following question: if we end up not solving the problem, then what can we hope to achieve? The Erdős discrepancy problem project is a good example here. An obvious answer is that we can hope that enough people have been stimulated in enough ways that the probability of somebody solving the problem in the not too distant future increases (for example because we have identified more clearly the gap in our understanding). But I was thinking of something a little more concrete than that: I would like at the very least for this project to leave behind it an online resource that will be essential reading for anybody who wants to attack the problem in future. The blog comments themselves may achieve this to some extent, but it is not practical to wade through hundreds of comments in search of ideas that may or may not be useful. With past projects, we have developed Wiki pages where we have tried to organize the ideas we have had into a more browsable form. One thing we didn’t do with EDP, which in retrospect I think we should have, is have an official “closing” of the project marked by the writing of a formal article that included what we judged to be the main ideas we had had, with complete proofs when we had them. An advantage of doing that is that if somebody later solves the problem, it is more convenient to be able to refer to an article (or preprint) than to a combination of blog comments and Wiki pages. Read the rest of this entry »

## Frankl’s union-closed conjecture — a possible Polymath project?

January 21, 2016

Although it was from only a couple of people, I had an enthusiastic response to a very tentative suggestion that it might be rewarding to see whether a polymath project could say anything useful about Frankl’s union-closed conjecture. A potentially worrying aspect of the idea is that the problem is extremely elementary to state, does not seem to yield to any standard techniques, and is rather notorious. But, as one of the commenters said, that is not necessarily an argument against trying it. A notable feature of the polymath experiment has been that it throws up surprises, so while I wouldn’t expect a polymath project to solve Frankl’s union-closed conjecture, I also know that I need to be rather cautious about my expectations — which in this case is an argument in favour of giving it a try.

A less serious problem is what acronym one would use for the project. For the density Hales-Jewett problem we went for DHJ, and for the Erdős discrepancy problem we used EDP. That general approach runs into difficulties with Frankl’s union-closed conjecture, so I suggest FUNC. This post, if the project were to go ahead, could be FUNC0; in general I like the idea that we would be engaged in a funky line of research.
Read the rest of this entry »

## Entropy and Sidorenko’s conjecture — after Szegedy

November 18, 2015

Here is a simple but important fact about bipartite graphs. Let $G$ be a bipartite graph with (finite) vertex sets $X$ and $Y$ and edge density $\alpha$ (meaning that the number of edges is $\alpha |X||Y|$). Now choose $(x_1,x_2)$ uniformly at random from $X^2$ and $(y_1,y_2)$ uniformly at random from $Y^2$. Then the probability that all of $x_1y_1, x_1y_2, x_2y_1$ and $x_2y_2$ are edges is at least $\alpha^4$.

The proof is very simple. For each $x$, let $f_x:Y\to\{0,1\}$ be the characteristic function of the neighbourhood of $x$. That is, $f_x(y)=1$ if $xy$ is an edge and $0$ otherwise. Then $\sum_{x,y}f_x(y)$ is the sum of the degrees of the $x\in X$, which is the number of edges of $G$, which is $\alpha |X||Y|$. If we set $f=\sum_{x\in X}f_x$, then this tells us that $\sum_{y\in Y}f(y)=\alpha|X||Y|$. By the Cauchy-Schwarz inequality (in the form $\sum_{y\in Y}f(y)^2\geq(\sum_{y\in Y}f(y))^2/|Y|$), it follows that $\sum_{y\in Y}f(y)^2\geq\alpha^2|X|^2|Y|$.

But by the Cauchy-Schwarz inequality again,
$(\sum_{y\in Y}f(y)^2)^2=(\sum_{y\in Y}\sum_{x_1,x_2\in X}f_{x_1}(y)f_{x_2}(y))^2$
$= (\sum_{x_1,x_2\in X}\sum_{y\in Y}f_{x_1}(y)f_{x_2}(y))^2$
$\leq |X|^2\sum_{x_1,x_2\in X}(\sum_{y\in Y}f_{x_1}(y)f_{x_2}(y))^2$
$=|X|^2\sum_{x_1,x_2\in X}\sum_{y_1,y_2\in Y}f_{x_1}(y_1)f_{x_2}(y_1)f_{x_1}(y_2)f_{x_2}(y_2).$
That last expression is $|X|^2$ times the number of quadruples $x_1,x_2,y_1,y_2$ such that all of $x_1y_1, x_1y_2, x_2y_1$ and $x_2y_2$ are edges, and our previous estimate shows that it is at least $\alpha^4|X|^4|Y|^2$. Therefore, the probability that a random such quadruple consists entirely of edges is at least $\alpha^4$, as claimed (since there are $|X|^2|Y|^2$ possible quadruples to choose from). Read the rest of this entry »
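The bound is easy to check numerically. Here is a brute-force Python sketch that verifies it on a random bipartite graph (the graph and parameters are chosen only for illustration; the inequality itself holds for every bipartite graph, as proved above):

```python
import random

def c4_density(edges, X, Y):
    """Probability that random x1, x2 in X and y1, y2 in Y give a quadruple
    with all of x1y1, x1y2, x2y1, x2y2 edges, computed by brute force."""
    E = set(edges)
    hits = sum((x1, y1) in E and (x1, y2) in E and (x2, y1) in E and (x2, y2) in E
               for x1 in X for x2 in X for y1 in Y for y2 in Y)
    return hits / (len(X) ** 2 * len(Y) ** 2)

random.seed(1)
X, Y = range(8), range(8)
edges = [(x, y) for x in X for y in Y if random.random() < 0.4]
alpha = len(edges) / (len(X) * len(Y))
assert c4_density(edges, X, Y) >= alpha ** 4  # the inequality proved above
```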

## Interesting times in academic publishing

November 10, 2015

In this post I want briefly to mention four current goings on in the world of academic publishing.

First, I’ll just briefly say that things are going well with the new journal Discrete Analysis, and I think we’re on course to launch, as planned, early next year with a few very good accepted papers — we certainly have a number of papers in the pipeline that look promising to me. Of course, we’d love to have more.

Secondly, a very interesting initiative has recently been started by Martin Eve, called the Open Library of Humanities. The rough idea is that they provide a platform for humanities journals that are free to read online and free for authors (or, as some people like to say, are Diamond OA journals). Perhaps the most interesting aspect of this initiative is that it is funded by a consortium of libraries. Librarians are the people who feel the pain of ridiculous subscription prices, so they have great goodwill towards people who are trying to build new and cheaper publication models. I think there is no reason that the sciences couldn’t do something similar — in fact, it should be even easier to find money.
Read the rest of this entry »

## Gil Kalai starts Polymath10

November 5, 2015

I’ve already mentioned this on Google Plus, but my readership there is different from my readership here, so it seems a good idea to write a similar post here, to give what publicity I can to the fact that Gil Kalai has started a new Polymath project. The problem he would like solved (and I would very much like to see it solved too) is the famous delta-systems or sunflower conjecture of Erdős and Rado.

The problem, as with many problems in combinatorics, is easy to state, but fascinatingly hard to solve. It is a classic extremal problem, in that it asks how big some combinatorial object needs to be before it is guaranteed to contain a subobject of some particular kind. In this case, the object is a $k$-uniform hypergraph, which just means a collection of sets of size $k$. The subobject one would like to find is a sunflower of size $r$, which means a collection of sets $A_1,\dots,A_r$ such that we can find pairwise disjoint sets $H, P_1,\dots,P_r$ with $A_i=H\cup P_i$ for each $i$. I have used the letters $H$ and $P$ to stand for “head” and “petal” — $H$ is the head of the sunflower and $P_1,\dots,P_r$ are the petals. Read the rest of this entry »
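As a concrete illustration, here is a brute-force Python check for sunflowers (a sketch only; the function names are made up). It uses the fact that sets form a sunflower exactly when their pairwise intersections all equal the common intersection $H$, i.e. when the petals $A_i-H$ are pairwise disjoint.

```python
from itertools import combinations

def is_sunflower(sets):
    """True iff the given sets form a sunflower: the petals A - H
    (with H the common intersection) are pairwise disjoint."""
    head = frozenset.intersection(*sets)
    petals = [A - head for A in sets]
    return all(not (P & Q) for P, Q in combinations(petals, 2))

def contains_sunflower(family, r):
    """Brute-force search: does the family contain a sunflower of size r?"""
    return any(is_sunflower(S) for S in combinations(family, r))

family = [frozenset({1, 2}), frozenset({1, 3}), frozenset({1, 4})]
print(contains_sunflower(family, 3))  # head {1}, petals {2}, {3}, {4}
```

The brute force is of course hopeless at the sizes the conjecture concerns, but it pins down the definition.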

## EDP28 — problem solved by Terence Tao!

September 20, 2015

I imagine most people reading this will already have heard that Terence Tao has solved the Erdős discrepancy problem. He has blogged about the solution in two posts, a first that shows how to reduce the problem to the Elliott conjecture in number theory, and a second that shows (i) that an averaged form of the conjecture is sufficient and (ii) that he can prove the averaged form. Two preprints covering (i) and (ii) are here and here: the one covering (i) has been submitted to Discrete Analysis.

This post is therefore the final post of the polymath5 project. I refer you to Terry’s posts for the mathematics. I will just make a few comments about what all this says about polymath projects in general.
Read the rest of this entry »

## Discrete Analysis — an arXiv overlay journal

September 10, 2015

This post is to announce the start of a new mathematics journal, to be called Discrete Analysis. While in most respects it will be just like any other journal, it will be unusual in one important way: it will be purely an arXiv overlay journal. That is, rather than publishing, or even electronically hosting, papers, it will consist of a list of links to arXiv preprints. Other than that, the journal will be entirely conventional: authors will submit links to arXiv preprints, and then the editors of the journal will find referees, using their quick opinions and more detailed reports in the usual way in order to decide which papers will be accepted.

Part of the motivation for starting the journal is, of course, to challenge existing models of academic publishing and to contribute in a small way to creating an alternative and much cheaper system. However, I hope that in due course people will get used to this publication model, at which point the fact that Discrete Analysis is an arXiv overlay journal will no longer seem interesting or novel, and the main interest in the journal will be the mathematics it contains.

The members of the editorial board so far — but we may well add further people in the near future — are Ernie Croot, me, Ben Green, Gil Kalai, Nets Katz, Bryna Kra, Izabella Laba, Tom Sanders, Jozsef Solymosi, Terence Tao, Julia Wolf, and Tamar Ziegler. For the time being, I will be the managing editor. I interpret this as meaning that I will have the ultimate responsibility for the smooth running of the journal, and will have to do a bit more work than the other editors, but that decisions about journal policy and about accepting or rejecting papers will be made democratically by the whole editorial board. (For example, we had quite a lot of discussion, including a vote, about the title, and the other editors have approved this blog post after suggesting a couple of minor changes.)

I will write the rest of this post as a series of questions and answers.
Read the rest of this entry »

## Is Nick Clegg a Liberal Democrat?

April 28, 2015

All my life I have found that Liberal Democrat policies (and before that, Liberal-SDP Alliance policies, and before that, Liberal policies) have been, if not a perfect match to my own views, then at least the closest to them. In particular, I am broadly centrist, tilting somewhat to the left, strongly in favour of voting reform, strongly in favour of remaining part of the European Union, and very keen to take much more radical action to combat climate change. However, Nick Clegg is doing his best to persuade me that to vote Liberal Democrat is no longer a good way of promoting these values. Here is what he has said about building coalitions after the election:

> As we have always said, the party with the most votes and the most seats in this election has the first right to seek to form a Government. The British people would rightly question the legitimacy of a coalition that didn’t allow the party with the largest number of seats and votes the opportunity to attempt to form a Government first.
>
> I’m proud that the Liberal Democrats have proved we can form a strong and stable coalition government, able to bring prosperity to Britain.
>
> Just like we would not put UKIP in charge of Europe, we are not going to put the SNP in charge of Britain – a country they want to rip apart.

The current projections at FiveThirtyEight put the Conservatives on 281 seats, Labour on 268, the Scottish Nationalists on 49 and the Liberal Democrats on 26. If these are correct, then Clegg is saying that he will try first to form a Government with the Conservatives. I claim that this is inconsistent with all four of the fundamental Liberal values I mentioned.
Read the rest of this entry »