As the ICM recedes further into the past, these posts start to feel less and less fresh. I’ve had an enforced break from them as over the course of three days I drove my family from the south of France back to Cambridge. So I think I’ll try to do what I originally intended to do with all these posts, and be quite a lot briefer about each talk.
As I’ve already mentioned, Day 3 started with Jim Arthur’s excellent lecture on the Langlands programme. (In a comment on that post, somebody questioned my use of “Jim” rather than “James”. I’m pretty sure that’s how he likes to be known, but I can’t find any evidence of that on the web.) The next talk was by Demetrios Christodoulou, famous for some extraordinarily difficult results he has proved in general relativity. I’m not going to say anything about the talk, other than that I didn’t follow much of it, because he had a series of dense slides that he read word for word. The slides may even have been a suitably chopped up version of his article for the ICM proceedings, but I have not been able to check that. Anyhow, after a gentle introduction of about three or four minutes, I switched off.
I switched on again for János Kollár’s lecture, which was, like some of the others, what I feel a plenary lecture should be: a lecture that gives the non-expert a feel for what is important in the area being talked about. The first thing I wrote down was his brief description of the minimal model problem, one of the central questions in algebraic geometry. I think that by that time he had spent a while telling us what algebraic sets were, explaining why the picture you get if you just work over the reals is somewhat incomplete (for example, you may get a graph with two components, when if you work over the extended complex plane you have a torus), and so on.
The minimal model problem is this: given an algebraic variety $X$, find a variety $X'$ (the “minimal model” of $X$) such that the space of meromorphic functions on $X'$ is isomorphic to the space of meromorphic functions on $X$ and the geometry of $X'$ is as simple as possible. The condition that the function spaces are isomorphic seems (from a glance at Wikipedia) to be another way of saying that the two varieties are birationally equivalent, which is a fundamental notion of equivalence in algebraic geometry. So one is trying to find a good representative of each birational equivalence class.
The problem was solved for curves by Riemann in 1851, for surfaces by Enriques in 1914 and by Kodaira in 1966 (I don’t know exactly what that means, but I suppose Enriques made major inroads into the problem and Kodaira finished it off). And for higher dimensions there was the Mori program of 1981. As I understand it, Mori made huge progress towards understanding the three-dimensional case, and Christopher Hacon and James McKernan, much more recently, made huge progress in higher dimensions.
Another major focus of research is the moduli problem. This, Kollár told us, asks what are the simplest families of algebraic varieties, and how can we transform any family into a simplest one? I don’t know what this means, but I would guess that when he said “families of algebraic varieties” he was talking about some kind of moduli space (partly because that seems the most likely meaning, and partly because of the word “moduli” in the name of the problem). So perhaps the problem is sort of like a “family version” of the minimal model problem: you want to find a simplest moduli space that is in some sense similar to the one you started with.
Anyhow, whatever the problem is, it was done for curves by Deligne and Mumford in 1969, for surfaces by Kollár and Shepherd-Barron in 1988 and Alexeev in 1996 (again I don’t know who did what), and apparently in higher dimensions the Kollár-Shepherd-Barron-Alexeev method works, but there are technical details. (Does that mean that Kollár is confident that the method works but that a full proof has not yet been written out? He may well have told us, but my notes don’t tell me now.)
Kollár then explained to us a third problem. A general technique for studying a variety $X$ is to find a variety $X'$ that is birationally equivalent to $X$ and study the question for $X'$ instead. Under these circumstances, there will be lower-dimensional subvarieties $Z\subset X$ and $Z'\subset X'$ such that $X\setminus Z\cong X'\setminus Z'$. So one is left needing to answer a similar question for $Z$ and $Z'$, and since these are of lower dimension, one has the basis for an inductive proof. But for that to work, we want $X'$ to be adapted to the problem, so the question, “When is a variety simple?” arises.
Apparently this was not even a precisely formulated question until work of Mori and Reid (1980-2) and Kollár, Miyaoka and Mori (1992). The precise formulation involves the first Chern class.
And that’s all I have, other than a general memory that this lecture continued the generally high standard of plenary lectures at the congress.
At 2pm, Avila gave his Fields medallist’s lecture. As with Hairer, I don’t feel I have much to say that I have not already said when describing the laudatio, so I’ll move on to 3pm, or rather 3:05 — by today the conference organizers had realized that it took a non-zero amount of time to get from one talk to another — when David Conlon was speaking.
David is a former student and collaborator of mine, and quite a bit of what he talked about concerned that collaboration. I’ll very briefly describe our main result.
There are many combinatorial theorems that can be regarded as questions about arbitrary subsets of nice structures such as the complete graph on $n$ vertices or the cyclic group of order $n$. For example, Ramsey’s theorem says that if you 2-colour the edges of the complete graph on $n$ vertices, then (as long as $n$ is large enough) one of the colour classes will contain a complete graph on $k$ vertices. And Szemerédi’s theorem is equivalent to the assertion that for every $\delta>0$ and every positive integer $k$ there exists $n$ such that for every subset $A$ of the integers mod $n$ of size at least $\delta n$ there exist $a$ and $d\ne 0$ such that all of $a, a+d, \dots, a+(k-1)d$ belong to $A$.
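To pin down the mod-$n$ statement, here is a tiny brute-force checker (my own illustration, not anything from the talk) that tests whether a subset of the integers mod $n$ contains a $k$-term arithmetic progression:

```python
def has_progression(A, n, k):
    """Check whether A, viewed as a subset of the integers mod n, contains a
    k-term arithmetic progression a, a+d, ..., a+(k-1)d (mod n) with d != 0."""
    S = set(x % n for x in A)
    for a in range(n):
        for d in range(1, n):
            if all((a + i * d) % n in S for i in range(k)):
                return True
    return False

# {0, 1, 2} contains the progression 0, 1, 2, while {0, 1, 3} contains
# no 3-term progression mod 7.
```

The search is $O(n^2 k)$ and purely illustrative; it is only meant to make the definition concrete.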
For many such questions, one can generalize them from the “nice” structures to arbitrary structures. For instance, one can ask of a given graph $G$ whether, if you colour its edges with two colours, one of those colours must contain a complete subgraph with $k$ vertices. Obviously, the answer will be yes for some $G$ and no for others, but to make it an interesting question, one can ask what happens for a random $G$. More precisely, how sparse can a random graph be and still have the Ramsey property?
This question was answered in full by Rödl and Ruciński, but our method gives a new proof of the upper bound (on how dense the random graph needs to be), and also gives a very general method that solves many problems of this type that were previously unsolved. For example, for Szemerédi’s theorem it tells us the following. Define a subset $X$ of $\mathbb{Z}_n$ to be $(\delta,k)$-Szemerédi if every subset $Y\subset X$ of size at least $\delta|X|$ contains an arithmetic progression of length $k$. Then if $C$ is large enough (depending on $\delta$ and $k$ only), a random subset of $\mathbb{Z}_n$ where elements are chosen independently with probability $p=Cn^{-1/(k-1)}$ is $(\delta,k)$-Szemerédi with high probability.
This bound is within a constant of best possible, since if the probability dips below $cn^{-1/(k-1)}$ for a small enough constant $c>0$, around half the elements of the random set will not even belong to an arithmetic progression of length $k$, so those elements form a dense subset that proves that the random set is not $(\delta,k)$-Szemerédi.
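The reason most elements of a very sparse random set lie in no progression at all is a simple first-moment count (my own back-of-envelope reasoning, not taken from the talk). Each element of $\mathbb{Z}_n$ lies in roughly $kn$ arithmetic progressions of length $k$ (about $n$ choices of common difference and $k$ choices of position), and a given progression survives into the random set only if its other $k-1$ points are all chosen, so the expected number of surviving progressions through a fixed chosen element is about

```latex
kn \cdot p^{k-1} \;=\; kn \cdot \bigl(cn^{-1/(k-1)}\bigr)^{k-1} \;=\; k\,c^{k-1},
```

which is much less than 1 when $c$ is small. By Markov’s inequality, most chosen elements then lie in no progression of the set at all.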
The method David and I used was inspired by the “transference principle” that Green and Tao used to prove their famous result about arithmetic progressions in the primes, though it involved several additional ingredients. A completely different approach was discovered independently by Mathias Schacht. Like ours, his approach established a large number of previously open “sparse random versions” of well-known combinatorial theorems.
David always gives very nice talks, and this one was no exception.
After his talk, I went to hear Nets Katz — with some regret as it meant missing Maria Chudnovsky, who followed on from David in the combinatorics section. I must try to watch the video of her talk some time, though I’m bad at finding time to watch videos on the internet if they last for more than about three minutes.
Nets talked about work related to his famous solution with Larry Guth of the Erdős distance problem. That problem asks how many distinct distances there must be if you have $n$ points in the plane. If you put them evenly spaced along a line, you get $n-1$ distinct distances. You can do a bit better than that by putting them in a $\sqrt n$-by-$\sqrt n$ grid: because the density of numbers that can be expressed as a sum of two squares is roughly $1/\sqrt{\log n}$, one gets around $n/\sqrt{\log n}$ distinct distances this way.
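One can see the line-versus-grid gap experimentally. The sketch below (names and parameters are mine) counts distinct squared distances, which is the same as counting distinct distances but keeps everything in exact integer arithmetic:

```python
from math import isqrt

def distinct_distances(points):
    """Count distinct distances among planar points with integer coordinates,
    by counting distinct squared distances (avoids floating-point issues)."""
    pts = list(points)
    seen = set()
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            seen.add(dx * dx + dy * dy)
    return len(seen)

n = 100
line = [(i, 0) for i in range(n)]                          # n - 1 distances
side = isqrt(n)
grid = [(x, y) for x in range(side) for y in range(side)]  # noticeably fewer
```

With 100 points, the line gives 99 distinct distances while the 10-by-10 grid gives only 50, and the gap widens as $n$ grows.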
Erdős asked whether this was anywhere near to being best possible. More precisely, he asked whether there was a lower bound of $cn/\log n$, and that is what Guth and Katz proved. This was a great result that answered a question that many people had worked on, but it is also notable because the proof was very interesting. One of the main tools they used was the polynomial method, which I will not attempt to describe here, but if you are curious, then Terence Tao has posted on it several times. Nets Katz’s talk is here.
Then it was back (quite some way) to the combinatorics room to hear Michael Krivelevich talking about games. (This link is quite hard to find because they’ve accidentally put his name as Michael Krivelerich.) By “games” I mean two-player positional games, which are defined as follows. You have a set $X$ (the board) and a collection $\mathcal F$ of subsets of $X$ (the winning positions). There are then two kinds of games that are typically studied. In both kinds, a move consists in choosing a point of $X$ that has not yet been chosen. In the first kind of game, the players alternate choosing points and the winner is the first player who can make a set in $\mathcal F$ out of his/her points. (If neither player can do this by the time the entire board is filled up, then the result is a draw.) Noughts and crosses (or tic-tac-toe) is an example of this: $X$ is a 3-by-3 grid and $\mathcal F$ consists of all lines of three points in that grid.
A well-known argument that goes back (at least) to John Nash when he was thinking about the game of Hex proves that the second player cannot have a winning strategy for this game. The argument, referred to as strategy stealing, is as follows. Suppose that the second player does have a winning strategy. Then the first player has a winning strategy as well, which works like this. First choose an arbitrary point $x$. Then ignore $x$, pretend that your opponent is the first player, and play the second player’s winning strategy. If you ever find that you have already played the point that the strategy dictates, then play an arbitrary unoccupied point instead. (Having an extra point on the board can never hurt you, since occupying more points only ever helps in this kind of game.)
This contradiction (a contradiction since it is not possible for both players to have winning strategies) proves that the first player can guarantee at least a draw, but it is a highly inexplicit argument, so it gives no clue about how the first player can do that. An interesting open problem that Krivelevich mentioned relates this to the Hales-Jewett theorem. A consequence of the Hales-Jewett theorem is that if you play noughts and crosses on an $n$-dimensional board where each side has length $m$, then provided $n$ is large enough in terms of $m$, it is not possible for the outcome to be a draw, since there is no 2-colouring of the points of the grid that does not give rise to a monochromatic line. So we know that the first player has a winning strategy. However, no explicit strategy is known, even if $n$ is allowed to be ridiculously large. (I am talking here about general $m$: for small $m$ such as 3, and perhaps even 4, a winning strategy is known for fairly small $n$.)
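For the ordinary 3-by-3 game, the strategy-stealing conclusion can be checked directly by brute force. A minimal minimax sketch (entirely my own illustration) confirms that with best play the game is a draw, so in particular the first player can always avoid losing:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 if X can force a win,
    -1 if O can force a win, 0 if best play is a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, cell in enumerate(board) if cell == '.']
    return max(vals) if player == 'X' else min(vals)

print(value('.' * 9, 'X'))  # 0: perfect play from the empty board is a draw
```

The memoised search visits only a few thousand positions, so it runs instantly; of course nothing like this is feasible for the high-dimensional boards in the open problem above.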
I asked Krivelevich about this problem, and his opinion was that it was probably very hard. The difficulty is that the first player has to devote too much attention to stopping the second player from winning, so cannot concentrate on trying to build up a line.
Another open problem is to find an explicit strategy that proves the following statement: there exist positive integers $t$ and $n_0$ such that for every $n\ge n_0$, if the game is played on the complete graph on $n$ vertices (that is, the players are alternately choosing edges), then the first player can create the first clique of size 5 in at most $t$ moves.
A moment I enjoyed in the talk was when Krivelevich mentioned something called the extra set paradox, which is the statement that if you add an extra set to the collection of winning positions, a game that was previously a win for the first player can become a draw.
At first that seemed to me obviously false. When that happens, it is always interesting to try to analyse one’s thoughts and formulate the incorrect proof that has sprung to mind. The argument I had was something like this: adding an extra set only increases the options available to the first player, so cannot make it harder to win. And that argument is complete garbage, because the extra set increases the options for the second player too. So if, for example, the first player plays as though the extra winning positions didn’t exist, the second player could potentially win by reaching one of those positions. The extra effort required to stop this can potentially (and sometimes does) kill the first player’s winning strategy.
Games of the kind I’ve just been discussing seem to be very hard to analyse, so attention has turned to a different kind of game, called a maker-breaker game. Here, the first player’s objective is to occupy a winning position, and the second player’s objective is to stop that happening. Also, the number of moves allotted to the two players is often different: we may allow one player to take $b$ moves for each move that the other player takes.
A typical question looked at is to take a graph property such as “contains a Hamilton cycle” and to try to find the threshold at which breaker can win. That is, if breaker gets $b$ moves for each move of maker, how large does $b$ need to be in order for breaker to be able to stop maker from making a Hamilton cycle? The answer to this, discovered by Krivelevich in 2011, is that the threshold is at $b=n/\log n$, in the sense that if $b\le(1-\epsilon)n/\log n$ then maker wins, while if $b\ge(1+\epsilon)n/\log n$ then breaker wins.
What makes this result particularly interesting is that the threshold occurs when the number of edges that maker gets to put down is (approximately) equal to the number of edges a random graph needs to have in order to contain a Hamilton cycle. This is the so-called random paradigm that allows one to guess the answers to many of these questions. (It was Erdős who first conjectured that this paradigm should hold.) It seems to be saying that if both players play optimally, then the graph formed by maker will end up looking like a random graph. It is rather remarkable that this has in some sense actually been proved.
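The numerology behind the coincidence is easy to check (a back-of-envelope count of my own). The game has $\binom n2$ edges in total, and maker claims roughly one edge in every $b+1$, so at the critical bias $b=n/\log n$ maker ends up with about

```latex
\frac{\binom{n}{2}}{b+1} \;\approx\; \frac{n^2/2}{n/\log n} \;=\; \frac{1}{2}\,n\log n
```

edges, which is precisely the order of the number of edges at which a random graph becomes Hamiltonian with high probability.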
Next up, at 6pm (this was a very long day) was the Abel lecture. This is a tradition started in 2010, where one of the last four Abel Prize winners gives a lecture at the ICM. The chosen speaker this time was John Milnor, whose title was “Topology through four centuries.” I did not take notes during this lecture, so I have to rely on my memory. Here’s what I remember. First of all, he gave us a lot of very interesting history. A moment I enjoyed was when he discussed the proof of a certain result and said that he liked it because it was the earliest example he knew of the use of Morse theory. A long time ago, when I had very recently got my PhD, I thought about a problem about convex bodies that caused me to look at Milnor’s famous book on Morse theory. I can’t now remember what the problem was, but I think I was trying to think hard about what happens if you take the surface of a symmetric convex body with a sphere inside, gradually shrink it until it is inside the sphere, and look at the intersection of the two surfaces. That gives you (generically) a codimension-1 subset of the sphere that appears, moves about, and eventually vanishes again. That’s exactly the kind of situation studied by Morse theory.
Much more recently, indeed, since the talk, I have had acute personal experience of Morse theory in the outdoor unheated swimming pool at the place where I was staying in France. Because I am worried about setting my heart out of rhythm if I give it too much of a shock, I get into cold swimming pools very slowly, rather than jumping in and getting the discomfort over all at once. This results in what my father describes as a ring of pain: the one-dimensional part of the surface of your body that is not yet used to the water and not safely outside it. Of course, the word “ring” is an oversimplification. Ignoring certain details that are inappropriate for a family post such as this, what I actually experience is initially two rings that after a while fuse to become a figure of eight, which then instantly opens out into a single large ring, to be joined by two more small rings that fuse with the large ring to make a yet larger ring that then becomes a lot smaller before increasing in size for a while and finally shrinking down to a point.
It is clear that if you are given the cross-sections of a surface with all the planes in a certain direction that intersect it, then you can reconstruct the surface. As I understand it, the basic insight of Morse theory is that what really matters if you want to know about the topology of the surface is what happens at the various singular moments such as when there is a figure of eight, or when a ring first appears, etc. The bits in between where the rings are just moving about and minding their own business don’t really affect anything. How this insight plays out in detail I don’t know.
As one would expect from Milnor, the talk was a beautiful one. In traditional fashion, he talked about surfaces, then 3-manifolds, and finally 4-manifolds. I think he may even have started in one dimension with a discussion of the bridges-of-Königsberg problem, but my memory of that is hazy. Anyhow, an indication of just how beautiful the talk was is what happened at the end. He misjudged the time, leaving himself about two minutes to discuss 4-manifolds. So he asked the chairman what he should do about it, and the chairman (who was Helge Holden) told him to take as much time as he wanted. Normally that would be the cause for hate rays to emanate towards the chairman and the speaker from the brains of almost the entire audience. But with this talk, the idea of missing out on the 4-manifold equivalent of what we had just heard for 2-manifolds and 3-manifolds was unthinkable, and there was a spontaneous burst of applause for the decision. I’ve never seen anything like it.
The one other thing I remember was a piece of superhuman modesty. When Milnor discussed examples of extraordinary facts about differentiable structures on 4-manifolds, the one he mentioned was the fact that there are uncountably many distinct such structures on $\mathbb{R}^4$, which was discovered by Cliff Taubes. The way Milnor presented it, one could have been forgiven for thinking that the fact that there can be distinct differentiable structures on a differentiable manifold was easy, and the truly remarkable thing was getting uncountably many, whereas in fact one of Milnor’s most famous results was the first example of a manifold with more than one differentiable structure. (The result of Taubes is remarkable even given what went before it: the first exotic structures on $\mathbb{R}^4$ were discovered by Freedman and Kirby.)
Just to finish off the description of the day, I’ll mention that in the evening I went to a reception hosted by the Norwegians (so attending the Abel lecture was basically compulsory, though I’d have done so anyway). Two things I remember about that are a dish that contained a high density of snails and the delightful sight of Maryam Mirzakhani’s daughter running about in a forest of adult legs. Then it was back to my hotel room to try to gather energy for one final day.