Normally at an ICM the first day has its own very distinct atmosphere because of the opening ceremony, the laudationes, and so on, after which the congress settles into a more regular, working format, with plenary lectures in the morning and invited lectures in parallel sessions in the afternoon. This year, because the number of prizes has increased and one of them needed inaugurating, some of the first-day feeling continued into the second day. And because my panel discussion was in the afternoon and lasted two hours, I had to miss the parallel sessions, so I personally had no sense of the ICM having properly started until the third day.

After the coffee break that failed to live up to its name was a lecture by Yan Yan Li on the work of Louis Nirenberg. From a distance, Yan Yan Li looked about 25, though later I caught a glimpse of him from closer up and could see that he belonged to the age range one might expect. (After writing that, I looked him up and found that he was slightly older than me — I certainly wasn’t expecting that. I must ask him how he got hold of the elixir of youth.) He was another member of the read-directly-from-slides school. I have no good suggestion about what to do when a slide with the next two or three minutes of talk appears. Do you read through it in the twenty seconds it takes and then listen to a repeat of all but the first twenty seconds? Do you try to discipline yourself not to read ahead? (I think the effort of doing that would soon cause me to forget about actually thinking about what the words mean.) Or do you ignore the speaker entirely, and if you can get through the slides more quickly than the speaker then you spend the remaining 75-80% of the talk catching up on sleep in little two-minute bursts? There are drawbacks with all these strategies. It follows, as night follows day, that there is a drawback with this style of talk …

However, some of the content was quite amusing. I didn’t write much in my notebook, so I may as well reproduce the lot (not word for word, but in slightly expanded form).

Why did Nirenberg get the first ever Chern prize for lifetime achievement? Yan Yan Li’s short answer was that he was one of the most outstanding analysts of the twentieth century, who worked in partial differential equations, inequalities, and regularity theory.

If you like amusing statistics, then you’ll be interested to know that in each of the last ten years, Nirenberg has had at least two of the top fifteen most cited papers. His mathematical ancestry can be traced back to Hilbert as follows: Hilbert — Schmidt — Hopf — Stoker — Nirenberg. And Nirenberg is one up on Chern with 45 students.

Two famous results were his solutions of the Weyl and Minkowski problems, pioneering the use of nonlinear PDEs in geometry. What are these problems? They turn out to have very nice simple statements. Weyl asked whether, if you have a smooth metric on the 2-sphere with positive Gaussian curvature everywhere, you can find an embedding of the sphere into $\mathbb{R}^3$ that gives rise to precisely that metric. To put it another way, an obvious way of producing smooth metrics on the sphere with positive curvature everywhere is to take smooth embeddings of the sphere into $\mathbb{R}^3$ that have positive curvature (defined extrinsically) everywhere and pull back the metric you get. (That is, if $\phi$ is the embedding, you define the distance between $x$ and $y$ to be the distance within the surface $\phi(S^2)$ between $\phi(x)$ and $\phi(y)$.) In 1938, Lewy solved this problem for *analytic* metrics; Nirenberg managed to remove the analyticity assumption.

The Minkowski problem was a similar problem, but the question was about curvature instead. Since the first problem also mentioned curvature, I had better explain further: this time one prescribes the curvature rather than the metric, and wants an embedding $\phi$ such that the prescribed curvature at $x$ is given by the extrinsically defined curvature at $\phi(x)$.

I would love to tell you something about the ideas that went into the proofs of these results, but I have nothing written down. I cannot remember whether Yan Yan Li told us anything about the proofs, apart from the fact that nonlinear PDEs came into them somehow.

So let me give a total guess instead, one that I would not have been able to make had I not heard innumerable accounts of Richard Hamilton’s general strategy for solving the Poincaré conjecture. Perhaps Nirenberg devised a very clever PDE such that if you start with a smooth embedding of the sphere as your initial data and allow your embedding to evolve according to the PDE, then it will converge to an embedding that has the metric you want. But it’s equally possible that he did nothing of the kind.

After a while, I have written in my notes, the talk became a long list of statements of theorems of Nirenberg and related results of other people. I stopped being able to concentrate, I think because I lost all sense of what was important and why. I started to long for the talk to end. It became rather like a brilliant spoof by Dudley Moore of a Beethoven piano sonata, where he keeps playing what you think must be the final climactic cadence of the piece, and it keeps not being the end. Similarly, Yan Yan Li would come to a good stopping point, and then up would come a new slide and he would say, “Part 6. [Insert branch of analysis here],” which we had of course all read in the previous second or two. But he did enough to convince me that Nirenberg was a worthy winner of the prize.

I’ve just realized that I made a mistake in my previous post. The coffee break came not after Bryant’s talk but after this one of Yan Yan Li’s. Perhaps that explains better why I was so desperate for it to finish. (The overrunning was principally due to Bryant, so I had been hoping, against all likelihood, that Yan Yan Li would not use up his 45 minutes.) And perhaps it excuses the decision I made next, which was to sit near the back for the next talk, Ingrid Daubechies on Yves Meyer, so that I could listen to the beginning and leave early when it got more technical (if it did).

The half or so of the talk that I heard was extremely good, and under normal circumstances would definitely not have justified walking out. (I later heard from other people that the whole talk was excellent.) Meyer was best known to me, as Daubechies herself is, as a key protagonist in the story of the discovery of wavelets and the explosion in their use that took place from the 1980s onwards. But Daubechies began by telling us about a very pretty early result of Meyer’s that I had not heard of: he constructed sequences $(a_n)$ of integers such that for each real number $x$ the fractional parts of $a_nx$ are equidistributed in $[0,1)$ if and only if $x$ is transcendental. At the moment all I can tell you is this statement; I need to think about the result for a while to appreciate it properly. But I presume that it is a genuinely hard result and not some easyish exercise that uses nothing more than the fact that the set of algebraic numbers is countable.
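Equidistribution mod 1 is easy to probe numerically. Here is a minimal sketch of my own (nothing from the talk): by Weyl’s theorem the fractional parts of the multiples of an irrational number fill $[0,1)$ evenly, while the multiples of a rational number land on only finitely many values, so most bins stay empty.

```python
import math

def bin_counts(values, nbins=10):
    """Count the fractional parts of the given values in nbins equal bins of [0, 1)."""
    counts = [0] * nbins
    for v in values:
        counts[int((v % 1.0) * nbins)] += 1
    return counts

N = 100000

# Weyl: the fractional parts of n*sqrt(2) fill every bin almost equally.
irr = bin_counts(n * math.sqrt(2) for n in range(1, N + 1))

# Multiples of 1/4 only ever land on 0, 0.25, 0.5, 0.75, so most bins stay empty.
rat = bin_counts(n * 0.25 for n in range(1, N + 1))

print(irr)
print(rat)
```

Every count in the first list comes out very close to $N/10$; in the second, only four bins are ever hit at all.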

In fact, having written that, let me at least check that it does not have a trivial proof of that kind. Let $A$ be a countable set of reals. Can we construct a sequence $(a_n)$ such that $(a_nx)$ is not equidistributed mod 1 for any $x\in A$ but *is* equidistributed mod 1 for any $x\notin A$? Let’s think how one might find a very general class of sequences satisfying just the first condition. One obvious method would be to enumerate $A$ as $x_1,x_2,x_3,\dots$ and insist that $(a_n)$ has the property that $a_nx_m$ lies in the interval $[-1/4,1/4]$ mod 1 for every $m$ and every $n\geq m$. By Dirichlet’s box principle we can certainly construct such a sequence, and we can make it grow as fast as we like.
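For a single target number the first condition can be realized very concretely. As an illustration of my own (using continued fractions rather than the box principle): the denominators of the continued-fraction convergents of $\sqrt 2$ form a sequence whose multiples of $\sqrt 2$ hug the integers, so they are certainly not equidistributed mod 1.

```python
import math

def convergent_denominators(k):
    """Denominators q_n of the continued-fraction convergents of
    sqrt(2) = [1; 2, 2, 2, ...]; they satisfy q_{n+1} = 2*q_n + q_{n-1},
    and the convergent p_n/q_n satisfies |q_n*sqrt(2) - p_n| < 1/q_{n+1}."""
    qs = [1, 2]
    while len(qs) < k:
        qs.append(2 * qs[-1] + qs[-2])
    return qs[:k]

def dist_to_int(x):
    """Distance from x to the nearest integer."""
    return min(x % 1.0, 1.0 - x % 1.0)

qs = convergent_denominators(15)

# These distances shrink rapidly, so (q_n * sqrt(2)) clusters ever more
# tightly around the integers instead of spreading out mod 1.
dists = [dist_to_int(q * math.sqrt(2)) for q in qs]
```

The same trick works for any single irrational; the hard part of the construction above is doing it for a whole countable set at once while keeping everything else equidistributed.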

But what would make the sequences $(a_nx)$ for all the other numbers $x$ equidistributed mod 1? Now we have uncountably many things to check. The only way I can even vaguely imagine of dealing with this would be an intricate bookkeeping exercise: one would choose a large $N$ and ensure that all but a small set of numbers $x$ have initial multiples $a_1x,\dots,a_Nx$ that are approximately equidistributed mod 1; then one would repeat the exercise with a much larger $N$, and so on.

Of course, I’m not expecting this to work, since I don’t actually believe that Meyer’s result can be generalized to all cocountable sets (or, to avoid triviality, all cocountable sets of irrational numbers).

Come to think of it, I don’t see why I don’t have a counterexample to Meyer’s result. Given a sequence $(a_n)$, let’s try to build a transcendental number $x$ such that the sequence $(a_nx)$ is *not* equidistributed mod 1. To begin with …

OK, I take that back. My proposed proof would have worked for irrational numbers too, but we all know that the sequence $a_n=n$ will do for those. My misconception was to think of $(a_n)$ as a very rapidly growing sequence, whereas in fact to make the result work one will need the opposite. Now that I’ve been through that thought process, I have a slightly more informed respect for the result. (I’d be even happier with an example of a cocountable set of irrationals for which the result is false. The next exercise I would want to try is to take all irrationals apart from $\sqrt 2$. That is, if you know that $(a_nx)$ is equidistributed for all irrationals $x$ apart from $\sqrt 2$, can you deduce it for $\sqrt 2$ as well? If that turns out to be easy, as I think it may, then I would change it to all irrationals apart from rational multiples of $\sqrt 2$. There I think it is probably enough to take $(a_n)$ to do something like running through all the integers $m$ such that $m\sqrt 2$ lies between $k$ and $k+1/10$ for some integer $k$.)

Another very nice-looking number-theoretic result proved by Meyer, and again one that I haven’t understood properly in the sense of appreciating where the difficulty lies, is this. Define a subset $\Lambda$ of $\mathbb{R}^n$ to be a *model set* if there exists a finite set $F$ such that $\Lambda-\Lambda\subset\Lambda+F$. That is, the differences of elements of $\Lambda$ all belong to the union of a finite set of translates of $\Lambda$. Meyer showed that if $\Lambda$ is a model set, then there must be a Pisot or Salem number $\theta$ such that $\theta\Lambda\subset\Lambda$. Conversely, if $\theta$ is a Pisot or Salem number, then there must exist a model set $\Lambda$ such that $\theta\Lambda\subset\Lambda$.

If you haven’t heard of Pisot or Salem numbers, let me tell you what I remember from an explanation I received from Jonatan Katznelson (son of Izzy) sixteen years ago. (I’ve been reminded a few times since, or I would have forgotten by now.) We know that the multiples of an irrational number are equidistributed mod 1, but what about the *powers* of an irrational number? Well, trivially they don’t have to be equidistributed, since you could take an irrational number that’s less than 1 in modulus. But what if we insist on an irrational number that is greater than 1? Must its powers be equidistributed? And do we even need the number to be irrational? Are the powers of 3/2 equidistributed, for instance?
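The question about powers of 3/2 is, as far as I know, wide open, but one can at least look at the data with exact rational arithmetic. A quick sketch of my own (the helper name is invented):

```python
from fractions import Fraction

def frac_part(q):
    """Exact fractional part of a positive rational."""
    return q - q.numerator // q.denominator

# Exact fractional parts of (3/2)^n; nobody knows whether these are
# equidistributed mod 1, but numerically they look well scattered.
fracs = [frac_part(Fraction(3, 2) ** n) for n in range(1, 31)]
print([float(f) for f in fracs[:6]])  # [0.5, 0.25, 0.375, 0.0625, 0.59375, 0.390625]
```

Using `Fraction` avoids the floating-point error that would otherwise swamp the fractional parts long before $n=30$, since $(3/2)^n$ needs $n$ binary digits after the point to represent exactly.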

This kind of question takes one rapidly to some fascinating open problems, but one question at least is easily answered: there are some very nice irrational numbers with powers that are not equidistributed. The canonical example is the golden ratio $\phi=(1+\sqrt 5)/2$. Its conjugate $\psi=(1-\sqrt 5)/2$ has modulus less than 1, and $\phi^n+\psi^n$ is always an integer (the $n$th Lucas number), so the powers of $\phi$ get closer and closer to integers, and hence tend to zero mod 1. If I remember correctly, a *Pisot number* is a number for which this proof works: that is, an algebraic number greater than 1 such that all its conjugates are less than 1 in modulus. Actually, thinking about it, one should also insist that it is a root of a monic polynomial with integer coefficients, so that the corresponding recurrence relation gives you an integer sequence with each term an integer combination of previous terms. Why might such numbers pop up when one has a model set? I do not know, though for reasons that I cannot explain, even to myself, I don’t find it completely implausible.
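For what it’s worth, this Pisot phenomenon is easy to watch numerically. A small sketch of my own, using the Lucas numbers $L_n=\phi^n+\psi^n$, where $\psi=(1-\sqrt 5)/2$ is the conjugate of $\phi$:

```python
import math

phi = (1 + math.sqrt(5)) / 2    # the golden ratio, a Pisot number
psi = (1 - math.sqrt(5)) / 2    # its conjugate, with |psi| < 1

# The nth Lucas number is the integer phi**n + psi**n, so phi**n is
# within |psi|**n of an integer: the powers of phi tend to 0 mod 1.
lucas = [2, 1]
while len(lucas) < 30:
    lucas.append(lucas[-1] + lucas[-2])

for n in range(30):
    assert abs(phi ** n + psi ** n - lucas[n]) < 1e-6

# Distance from phi**n to the nearest integer, shrinking like |psi|**n.
dists = [min(phi ** n % 1.0, 1.0 - phi ** n % 1.0) for n in range(2, 30)]
```

The distances decay geometrically, by a factor of $|\psi|\approx 0.618$ per step, which is exactly the behaviour the conjugate-root argument predicts.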

Going back to the general question of when powers of a number are equidistributed, I suppose there are numbers like $\sqrt 2$ that fail for the trivial reason that every other power is an integer. But I’m fairly sure that all the known examples of numbers with powers that are not equidistributed have fairly simple reasons for not being equidistributed: as soon as you don’t have a simple reason (as you don’t, for instance, for the number $3/2$), then you have an open problem.

Meyer is interested in all sorts of topics that are of huge practical interest. For instance, he has worked on denoising and deconvolution: if you have a noisy or blurred image, can you find a good algorithm for producing from it a clean, focused one? The human brain somehow manages to extract information from imperfect images. Obviously, that is partly because it *interprets* those images. But there seems to be a big automatic part there too that is amenable to mathematics: the kind of thing one would like to do is expand the image in a clever basis so that the noise or blur goes to certain coefficients that can be thrown away, whereas the “real” image goes to other coefficients. And wavelets are just such a clever basis for many applications.
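To make the throw-away-the-bad-coefficients idea concrete, here is a toy sketch of my own: a one-level Haar transform of a 1-D signal, with the small detail coefficients (where uniform noise mostly lives) zeroed before reconstructing. Real wavelet denoising, with Meyer’s and Daubechies’s wavelets, is far more sophisticated; the function names, signal, and threshold here are all invented for illustration.

```python
import random

def haar(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, diffs

def inverse_haar(avgs, diffs):
    """Invert haar(): each (average, difference) pair gives back two samples."""
    out = []
    for a, d in zip(avgs, diffs):
        out.extend([a + d, a - d])
    return out

def denoise(signal, threshold):
    """Zero out the small detail coefficients, then reconstruct."""
    avgs, diffs = haar(signal)
    diffs = [d if abs(d) > threshold else 0.0 for d in diffs]
    return inverse_haar(avgs, diffs)

random.seed(0)
clean = [1.0] * 32 + [5.0] * 32                      # a piecewise-constant "image row"
noisy = [x + random.uniform(-0.2, 0.2) for x in clean]
recovered = denoise(noisy, threshold=0.25)

err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_rec = sum((a - b) ** 2 for a, b in zip(recovered, clean))
```

On this signal every detail coefficient is pure noise (the jump falls between pairs), so thresholding removes noise without blurring the edge, and `err_rec` comes out smaller than `err_noisy`.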

A beautiful moment in the talk came when Daubechies reminded us of a strange few seconds in the Jim Simons video, during which he stopped looking like Jim Simons and looked more like the villain in Terminator 2 during one of his (its?) semi-liquid moments. I would have forgotten about that almost immediately, but as soon as Daubechies mentioned it, I realized that it was quite puzzling: what kind of fault would lead to that very strange effect? She explained that with modern image compression techniques, if you lose part of an image, then it won’t be a particular region or anything like that, but a particular *aspect* of the image, such as its textural quality. In this case, she told us, we had lost the texture but retained a sense of edges and boundaries. It made me want to go back and look at the weird video moment all over again and appreciate it properly (instead of just sniggering as I had the first time) but it was too late.

There was more, not just of the talk, but even of the part of the talk that I attended, but I think I’ll leave it there for now. When I get time, I’ll cover Smirnov’s talk, the panel discussion, and the conference dinner. And that will take me to the end of the second day.

I’ll finish this post with a reminder that the work of Nirenberg and Meyer is discussed by Terry Tao on his blog.
