## Discrete Analysis one year on

October 5, 2016

This is cross-posted from the blog on the Discrete Analysis web page.

Approximately a year on from the announcement of Discrete Analysis, it seems a good moment to take stock and give a quick progress report, so here it is.

At the time of writing (5th October 2016) we have 17 articles published and are on target to reach 20 by the end of the year. (Another is accepted and waiting for the authors to produce a final version.) We are very happy with the standard of the articles. The journal has an ISSN, each article has a DOI, and articles are listed on MathSciNet. We are not yet listed on Web of Science, so we do not have an impact factor, but we will soon start the process of applying for one.

We are informed by Scholastica that between June 6th and September 27th 2016 the journal had 18,980 pageviews. (In the not too distant future we will have the analytics available to us whenever we want to look at them.) The number of views of the page for a typical article is in the low hundreds, but that probably underestimates the number of times people read the editorial introduction for a given article, since that can be done from the main journal pages. So getting published in Discrete Analysis appears to be a good way to attract attention to your article — we hope more than if you post it on the arXiv and wait for it to appear a long time later in a journal of a more conventional type.

We have had 74 submissions so far, of which 14 are still in process. Our acceptance rate is 37%, but some submissions are not serious mathematics, and if these are discounted then the rate is probably somewhere around 50%. I think the 74 includes revised versions of previously submitted articles, so the true number of distinct submissions is a little lower. Our average time to reject a non-serious submission is 7 days, our average time to reject a more serious submission is 47 days, and our average time to accept is 121 days. There is considerable variance in these figures, so they should be interpreted cautiously.

There has been one change of policy since the launch of the journal. László Babai, founder of the online journal Theory of Computing, which, like Discrete Analysis, is free to read and has no publication charges, very generously offered to provide for us a suitable adaptation of their style file. As a result, our articles will from now on have a uniform appearance and, more importantly, will appear with their metadata: after a while it seemed a little strange that the official version of one of our articles would not say anywhere that it was published by Discrete Analysis, but now it tells you that, and the number of the article, the date of publication, the DOI, and so on. So far, our two most recent articles have been formatted — you can see them here and here — and in due course we will reformat all the earlier ones.

If you have an article that you think might suit the journal (and now that we have several articles on our website it should be easier to judge this), we would be very pleased to receive it: 20 articles in our first year is a good start, but we hope that in due course the journal will be perceived as established and the submission rate of good articles will increase. (For comparison, Combinatorica published 31 articles in 2015, and Combinatorics, Probability and Computing publishes around 55 articles a year, to judge from a small sample of issues.)

## In case you haven’t heard what’s going on in Leicester …

September 15, 2016

Strangely, this is my second post about Leicester in just a few months, but it’s about something a lot more depressing than the football team’s fairytale winning of the Premier League (but let me quickly offer my congratulations to them for winning their first Champions League match — I won’t offer advice about whether they are worth betting on to win that competition too). News has just filtered through to me that the mathematics department is facing compulsory redundancies.

The structure of the story is wearily familiar after what happened with USS pensions. The authorities declare that there is a financial crisis, and that painful changes are necessary. They offer a consultation. In the consultation their arguments appear to be thoroughly refuted. The refutation is then ignored and the changes go ahead.

Here is a brief summary of the painful changes that are proposed for the Leicester mathematics department. The department has 21 permanent research-active staff. Six of those are to be made redundant. There are also two members of staff who concentrate on teaching. Their number will be increased to three. How will the six be chosen? Basically, almost everyone will be sacked and then invited to reapply for their jobs in a competitive process, and the plan is to get rid of “the lowest performers” at each level of seniority. Those lowest performers will be considered for “redeployment” — which means that the university will make efforts to find them a job of a broadly comparable nature, but doesn’t guarantee to succeed. It’s not clear to me what would count as broadly comparable to doing pure mathematical research.

How is performance defined? It’s based on things like research grants, research outputs, teaching feedback, good citizenship, and “the ongoing and potential for continued career development and trajectory”, whatever that means. In other words, on the typical flawed metrics so beloved of university administrators, together with some subjective opinions that will presumably have to come from the department itself — good luck with offering those without creating enemies for life.

Oh, and another detail is that they want to reduce the number of straight maths courses and promote actuarial science and service teaching in other departments.

There is a consultation period that started in late August and ends on the 30th of September. So the lucky members of the Leicester mathematics faculty have had a whole month to marshal their to-be-ignored arguments against the changes.

It’s important to note that mathematics is not the only department that is facing cuts. But it’s equally important to note that it is being singled out: the university is aiming for cuts of 4.5% on average, and mathematics is being asked to make a cut of more like 20%. One reason for this seems to be that the department didn’t score all that highly in the last REF. It’s a sorry state of affairs for a university that used to boast Sir Michael Atiyah as its chancellor.

I don’t know what can be done to stop this, but at the very least there is a petition you can sign. It would be good to see a lot of signatures, so that Leicester can see how damaging a move like this will be to its reputation.

## ∈

June 2, 2016

For several reasons, I am instinctively in favour — strongly so — of remaining in the EU: I have a French wife and two bilingual children, and I am an academic living in the age of the internet. The result is that my whole outlook is international, and leaving the EU would feel to me like a gigantic step in the wrong direction. But in this post I want to try to set those instincts aside and go back to first principles, which doesn’t make it a mathematical post, but does make it somewhat mathematical in spirit. That is why I have chosen as my title the mathematical symbol for “is a member of”, which can also be read (in some contexts) as “in”, and which conveniently looks like an E for Europe too.

I’ll consider three questions: why we need supranational organizations, to what extent we should care about sovereignty, and whether we should focus on the national interest.

### The need for supranational organizations

In the abstract, the case for supranational organizations is almost too obvious to be worth making: just as it often benefits individual people to form groups and agree to restrict their behaviour in certain ways, so it can benefit nations to join groups and agree to restrict their behaviour in certain ways.

To see in more detail why this should be, I’ll look at some examples, starting with an example concerning individual people. It has sometimes been suggested that a simple way of dealing with the problem of drugs in sport would be to allow people to use whatever drugs they want. Even with the help of drugs, the Ben Johnsons of this world can’t set world records and win Olympic gold medals unless they are also amazing athletes, so if we allowed drugs, there would still be a great deal of room for human achievement.

There are many arguments against this proposal. A particularly powerful one is that allowing drugs has the effect of making them compulsory: they offer enough of a boost to performance that a drug-free athlete would almost certainly be unable to compete at the highest level if a large proportion of other athletes were taking drugs. Since taking drugs has serious adverse health effects — for instance, it has led to the deaths of several cyclists — it is better if competitors agree to forswear this method of gaining a competitive advantage. But just saying, “I won’t take drugs if you don’t” isn’t enough, since for any individual there will always be a huge temptation to break such an agreement. So one also needs organizations to which athletes belong, with precise rules and elaborate systems of testing.
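The argument in the previous paragraph has the classic structure of a prisoner’s dilemma, and it can be made concrete with a toy payoff table (the numbers below are entirely made up for illustration, and are not from any real analysis):

```python
# Hypothetical payoffs, purely for illustration: each athlete chooses
# "clean" or "dope"; doping improves results but damages health.
payoff = {
    ("clean", "clean"): (3, 3),
    ("clean", "dope"):  (0, 4),
    ("dope",  "clean"): (4, 0),
    ("dope",  "dope"):  (1, 1),
}

def best_response(their_choice, me):
    """The choice that maximizes player me's payoff, holding the
    opponent's choice fixed."""
    def my_payoff(mine):
        pair = (mine, their_choice) if me == 0 else (their_choice, mine)
        return payoff[pair][me]
    return max(["clean", "dope"], key=my_payoff)

# Doping is the best response to either choice by the opponent, even
# though both athletes prefer (clean, clean) to (dope, dope):
assert all(best_response(c, p) == "dope" for c in ["clean", "dope"] for p in [0, 1])
assert payoff[("clean", "clean")][0] > payoff[("dope", "dope")][0]
```

Whatever the other athlete does, doping is the individually better response, even though both would prefer the all-clean outcome, which is exactly why an external body with rules and testing is needed.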

## Reflections on the recent solution of the cap-set problem I

May 19, 2016

Sometimes blog posts about recent breakthroughs can be useful because they convey the main ideas of a proof without getting bogged down in the technical details. But the recent solution of the cap-set problem by Jordan Ellenberg, and independently and fractionally later by Dion Gijswijt, both making crucial use of an amazing lemma of Croot, Lev and Pach that was made public a week or so before, does not really invite that kind of post, since the papers are so short, and the ideas so transparent, that it’s hard to know how a blog post can explain them more clearly.

But as I’ve got a history with this problem, including posting about it on this blog in the past, I feel I can’t just not react. So in this post and a subsequent one (or ones) I want to do three things. The first is just to try to describe my own personal reaction to these events. The second is more mathematically interesting. As regular readers of this blog will know, I have a strong interest in the question of where mathematical ideas come from, and a strong conviction that they always result from a fairly systematic process — and that the opposite impression, that some ideas are incredible bolts from the blue that require “genius” or “sudden inspiration” to find, is an illusion that results from the way mathematicians present their proofs after they have discovered them.

From time to time an argument comes along that appears to present a stiff challenge to my view. The solution to the cap-set problem is a very good example: it’s easy to understand the proof, but the argument has a magic quality that leaves one wondering how on earth anybody thought of it. I’m referring particularly to the Croot-Lev-Pach lemma here. I don’t pretend to have a complete account of how the idea might have been discovered (if any of Ernie, Seva or Peter, or indeed anybody else, want to comment about this here, that would be extremely welcome), but I have some remarks.

The third thing I’d like to do reflects another interest of mine, which is avoiding duplication of effort. I’ve spent a little time thinking about whether there is a cheap way of getting a Behrend-type bound for Roth’s theorem out of these ideas (and I’m not the only one). Although I wasn’t expecting the answer to be yes, I think there is some value in publicizing some of the dead ends I’ve come across. Maybe it will save others from exploring them, or maybe, just maybe, it will stimulate somebody to find a way past the barriers that seem to be there.

## The L-functions and modular forms database

May 10, 2016

With each passing decade, mathematics grows substantially. As it grows, mathematicians are forced to become more specialized — in the sense of knowing a smaller fraction of the whole — and the time and effort needed to get to the frontier of what is known, and perhaps to contribute to it, increases. One might think that this process will eventually mean that nobody is prepared to make the effort any more, but fortunately there are forces that work in the opposite direction. With the help of the internet, it is now far easier to find things out, and this makes research a whole lot easier in important ways.

It has long been a conviction of mine that the effort-reducing forces we have seen so far are just the beginning. One way in which the internet might be harnessed more fully is in the creation of amazing new databases, something I once asked a Mathoverflow question about. I recently had cause (while working on a research project with a student of mine, Jason Long) to use Sloane’s database in a serious way. That is, a sequence of numbers came out of some calculations we did, we found it in the OEIS, which gave us a formula, and we could prove that the formula was right. The great thing about the OEIS was that it solved an NP-ish problem for us: once the formula was given to us, it wasn’t that hard to prove that it was correct for our sequence, but finding it in the first place would have been extremely hard without the OEIS.
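To illustrate the shape of that workflow (with the Catalan numbers standing in, since the actual sequence from our project is not the point here): once a lookup has suggested a candidate formula, checking it against the computed data is trivial, even though guessing the formula unaided would not have been.

```python
from math import comb

# Suppose, hypothetically, that some counting code produced these numbers
# and an OEIS search suggested the Catalan-number formula
# C(n) = binom(2n, n) / (n + 1). Verifying the guess is now easy:
computed = [1, 1, 2, 5, 14, 42, 132]
assert all(comb(2 * n, n) // (n + 1) == computed[n] for n in range(len(computed)))
```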

## Should I have bet on Leicester City?

May 3, 2016

If you’re not British, or you live under a stone somewhere, then you may not have heard about one of the most extraordinary sporting stories ever. Leicester City, a football (in the British sense) team that last year only just escaped relegation from the top division, has just won the league. At the start of the season you could have bet on this happening at odds of 5000-1. Just 12 people availed themselves of this opportunity.

Ten pounds bet then would have netted me 50,000 pounds now, so a natural question arises: should I be kicking myself (the appropriate reaction given the sport) for not placing such a bet? In one sense the answer is obviously yes, as I’d have made a lot of money if I had. But I’m not in the habit of placing bets, and had no idea that these odds were being offered anyway, so I’m not too cut up about it.

Nevertheless, it’s still interesting to think about the question hypothetically: if I had been the betting type and had known about these odds, should I have gone for them? Or would regretting not doing so be as silly as regretting not choosing and betting on the particular set of numbers that just happened to win the national lottery last week?
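The standard expected-value calculation is the obvious place to start. Here is a quick sketch (the 1-in-1000 probability estimate below is purely hypothetical):

```python
def expected_profit(stake, odds_to_one, p):
    """Expected net profit of a stake at odds of odds_to_one against,
    if the event has probability p: win stake * odds_to_one with
    probability p, lose the stake with probability 1 - p."""
    return p * stake * odds_to_one - (1 - p) * stake

# The bet has positive expectation precisely when p exceeds
# 1 / (odds + 1), here 1/5001, or roughly 0.0002:
break_even = 1 / 5001
assert abs(expected_profit(10, 5000, break_even)) < 1e-9

# If, hypothetically, you thought Leicester's true chance was 1 in 1000,
# a ten-pound bet was worth about forty pounds in expectation:
assert expected_profit(10, 5000, 1 / 1000) > 40
```

So the question reduces to whether an honest estimate of the probability, made at the start of the season, should have beaten one chance in five thousand and one.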

## Discrete Analysis launched

March 1, 2016

As you may remember from an earlier post on this blog, Discrete Analysis is a new mathematics journal that runs just like any other journal except in one respect: the articles we publish live on the arXiv. This is supposed to highlight the fact that in the internet age, and in particular in an age when it is becoming routine for mathematicians to deposit their articles on the arXiv before they submit them to journals, the only important function left for journals is organizing peer review. Since this is done through the voluntary work of academics, it should in principle be possible to run a journal for almost nothing. The legacy publishers (as they are sometimes called) frequently call people naive for suggesting this, so it is important to have actual examples to prove it, and Discrete Analysis is set up to be one such example. Its website goes live today.

## FUNC4 — further variants

February 22, 2016

I’ve been in Paris for the weekend, so the number of comments on the previous post got rather large, and I also fell slightly behind. Writing this post will, I hope, help me catch up with what is going on.

### FUNC with symmetry

One question that has arisen is whether FUNC holds if the ground set is the cyclic group $\mathbb Z_n$ and $\mathcal A$ is rotationally invariant. This was prompted by Alec Edgington’s example showing that we cannot always find $x$ and an injection from $\mathcal A_{\overline x}$ to $\mathcal A_x$ that maps each set to a superset. Tom Eccles suggested a heuristic argument that if $\mathcal A$ is generated by all intervals of length $r$, then it should satisfy FUNC. I agree that this is almost certainly true, but I think nobody has yet given a rigorous proof. I don’t think it should be too hard.
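The claim is easy to sanity-check by brute force for very small cases (not that this proves anything). Here is a minimal sketch, where the abundance of $x$ means the fraction of sets in the family that contain $x$:

```python
def union_closure(generators):
    """Close a collection of sets under pairwise unions by fixed-point iteration."""
    family = set(map(frozenset, generators))
    while True:
        new = {a | b for a in family for b in family} - family
        if not new:
            return family
        family |= new

def cyclic_intervals(n, r):
    """The n cyclic intervals of length r in Z_n."""
    return [frozenset((i + j) % n for j in range(r)) for i in range(n)]

def max_abundance(family):
    """The largest fraction of sets containing any single element."""
    ground = set().union(*family)
    return max(sum(1 for A in family if x in A) for x in ground) / len(family)

# FUNC holds (comfortably) for a few small interval-generated families:
for n, r in [(5, 2), (6, 2), (7, 3)]:
    assert max_abundance(union_closure(cyclic_intervals(n, r))) >= 1 / 2
```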

One can ask similar questions about ground sets with other symmetry groups.

A nice question that I came across on Mathoverflow is whether the intersection version of FUNC is true if $\mathcal A$ consists of all subgroups of a finite group $G$. The answers to the question came very close to solving it, with suggestions about how to finish things off, but the fact that the question was non-trivial was quite a surprise to me.

## FUNC3 — further strengthenings and variants

February 13, 2016

In the last post I concentrated on examples, so in this one I’ll concentrate on conjectures related to FUNC, though I may say a little about examples at the end, since a discussion has recently started about how we might go about trying to find a counterexample to FUNC.

### A proposal for a rather complicated averaging argument

After the failure of the average-overlap-density conjecture, I came up with a more refined conjecture along similar lines that has one or two nice properties and has not yet been shown to be false.

The basic aim is the same: to take a union-closed family $\mathcal A$ and use it to construct a probability measure on the ground set in such a way that the average abundance with respect to that measure is at least 1/2. With the failed conjecture the method was very basic: pick a random non-empty set $A\in\mathcal A$ and then a random element $x\in A$.

The trouble with picking random elements is that it gives rise to a distribution that does not behave well when you duplicate elements. (What you would want is for the probability to be shared out amongst the duplicates, but in fact duplicating an element lots of times gives the set of duplicates an advantage that the original element did not have.) This is not just an aesthetic concern: it was at the heart of the downfall of the conjecture. What one really wants, and this is a point that Tobias Fritz has been emphasizing, is to avoid talking about the ground set altogether, something one can do by formulating the conjecture in terms of lattices, though I’m not sure whether what I’m about to describe makes sense for lattices.
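For concreteness, here is a minimal sketch of the basic construction described above, with the convention (one has to pick one) that the abundance of $x$ counts every set in the family in its denominator:

```python
from fractions import Fraction
from itertools import chain, combinations

def overlap_measure(family):
    """The measure from the failed average-overlap-density conjecture:
    pick a uniformly random non-empty set A in the family, then a
    uniformly random element x of A; mu[x] is the resulting probability."""
    nonempty = [A for A in family if A]
    mu = {}
    for A in nonempty:
        for x in A:
            mu[x] = mu.get(x, Fraction(0)) + Fraction(1, len(nonempty) * len(A))
    return mu

def average_abundance(family):
    """Average, under the measure above, of the abundance of x
    (here: the fraction of all sets in the family containing x)."""
    mu = overlap_measure(family)
    n = len(family)
    return sum(p * Fraction(sum(1 for A in family if x in A), n)
               for x, p in mu.items())

# Sanity check on the power set of {0, 1, 2}, where every element has
# abundance exactly 1/2, so the average is 1/2 whatever the measure:
ground = [0, 1, 2]
powerset = [frozenset(c) for c in
            chain.from_iterable(combinations(ground, k) for k in range(4))]
assert average_abundance(powerset) == Fraction(1, 2)
```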

## FUNC2 — more examples

February 8, 2016

The first “official” post of this Polymath project has passed 100 comments, so I think it is time to write a second post. Again I will try to extract some of the useful information from the comments (but not all, and my choice of what to include should not be taken as some kind of judgment). A good way of organizing this post seems to be to list a few more methods of construction of interesting union-closed systems that have come up since the last post — where “interesting” ideally means that the system is a counterexample to a conjecture that is not obviously false.

### Standard “algebraic” constructions

#### Quotients

If $\mathcal A$ is a union-closed family on a ground set $X$, and $Y\subset X$, then we can take the family $\mathcal A_Y=\{A\cap Y:A\in\mathcal{A}\}$. The map $\phi:A\mapsto A\cap Y$ is a homomorphism (in the sense that $\phi(A\cup B)=\phi(A)\cup\phi(B)$), so it makes sense to regard $\mathcal A_Y$ as a quotient of $\mathcal A$.

#### Subfamilies

If instead we take an equivalence relation $R$ on $X$, we can define a set-system $\mathcal A(R)$ to consist of all members of $\mathcal{A}$ that are unions of equivalence classes of $R$.

Thus, subsets of $X$ give quotient families and quotient sets of $X$ give subfamilies.
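Both constructions are essentially one-liners in code. Here is a quick sketch, using the power set of a three-element set as a toy input, checking that each operation preserves union-closedness:

```python
from itertools import chain, combinations

def quotient(family, Y):
    """The quotient family A_Y = {A ∩ Y : A in family} on a subset Y."""
    return {A & Y for A in family}

def subfamily(family, classes):
    """A(R): those members of the family that are unions of the given
    equivalence classes (classes should partition the ground set)."""
    return {A for A in family
            if all(c <= A or not (c & A) for c in classes)}

def is_union_closed(family):
    return all(a | b in family for a in family for b in family)

# Power set of {0, 1, 2} as a small union-closed family:
ground = [0, 1, 2]
fam = {frozenset(c) for c in
       chain.from_iterable(combinations(ground, k) for k in range(4))}

q = quotient(fam, frozenset({0, 1}))                      # power set of {0, 1}
s = subfamily(fam, [frozenset({0, 1}), frozenset({2})])   # unions of the two classes
assert is_union_closed(q) and is_union_closed(s)
```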

#### Products

Possibly the most obvious product construction of two families $\mathcal A$ and $\mathcal B$ is to make their ground sets disjoint and then to take $\{A\cup B:A\in\mathcal A,B\in\mathcal B\}$. (This is the special case with disjoint ground sets of the construction $\mathcal A+\mathcal B$ that Tom Eccles discussed earlier.)

Note that we could define this product slightly differently by saying that it consists of all pairs $(A,B)\in\mathcal A\times\mathcal B$ with the “union” operation $(A,B)\sqcup(A',B')=(A\cup A',B\cup B')$. This gives an algebraic system called a join semilattice, and it is isomorphic in an obvious sense to $\mathcal A+\mathcal B$ with ordinary unions. Looked at this way, it is not so obvious how one should define abundances, because $(\mathcal A\times\mathcal B,\sqcup)$ does not have a ground set. Of course, we can define them via the isomorphism to $\mathcal A+\mathcal B$ but it would be nice to do so more intrinsically.
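Here is a sketch of the first version of the product (the one on disjoint ground sets), with a toy check that the result is union-closed:

```python
def product(family_a, family_b):
    """The product {A ∪ B : A in family_a, B in family_b}; the two
    families are assumed to live on disjoint ground sets."""
    return {A | B for A in family_a for B in family_b}

def is_union_closed(family):
    return all(a | b in family for a in family for b in family)

# Toy example on the disjoint ground sets {0, 1} and {2, 3}:
A = {frozenset(), frozenset({0}), frozenset({0, 1})}
B = {frozenset({2}), frozenset({2, 3})}
P = product(A, B)
assert is_union_closed(A) and is_union_closed(B)
assert is_union_closed(P)
assert len(P) == len(A) * len(B)  # no collisions, as the ground sets are disjoint
```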