I’ve saved the best till last, which should not be taken as a negative comment about the other laudationes, since Gil Kalai’s was a tour de force. His first distinction was that he was the only one of the five speakers not to be wearing a tie. He was, however, wearing a suit, so the result was to look smart in a trendy way rather than smart in a more standard mathematician-giving-important-talk way. And he opened his talk daringly with the promise that his would be a comprehensible talk, which got a laugh from the audience.

On a more negative note, I was a bit shocked that a significant proportion of the audience got up to leave before he started, as if to say, “The real business is over — this is just the Nevanlinna prize.” All I can say is that it was their loss, not just because of the wonderful talk but also because of the wonderful mathematics described in the talk.

One of the big achievements of Spielman (in joint work with his long-time collaborator Shanghua Teng) is a technique known as smoothed analysis. Here is Gil’s explanation (except that if anything seems wrong about it, that is my fault, not his) of what it is and what the point of it is.

Much of complexity theory is concerned with *worst-case* analysis of algorithms. That is, you have an algorithm, and to assess how good it is, you ask how long it could possibly take over all inputs of a given length $n$. But there are some algorithms, notably the simplex algorithm, that work very well in practice, even though there are examples that show that their worst-case behaviour is exponentially bad. This was a mystery for decades after the algorithm was first introduced. (More accurately, at first it was believed to be good for all inputs, but then Klee and Minty found an example that demonstrated that it was not, and that was the start of the mystery.)
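To give an idea of what such a bad example looks like, here is one standard way of writing the Klee–Minty cube (I am reconstructing this from memory, so take the details as indicative rather than definitive): for a small $\varepsilon \in (0,1/2)$, consider the linear program

```latex
\max\ x_n \quad \text{subject to} \quad
0 \le x_1 \le 1, \qquad
\varepsilon x_{i-1} \le x_i \le 1 - \varepsilon x_{i-1} \quad (2 \le i \le n).
```

The feasible region is a slightly squashed $n$-dimensional cube with $2^n$ vertices, and the simplex algorithm with Dantzig's classical pivot rule can be made to visit every one of them, so its running time on this family of inputs is exponential in $n$.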

One obvious idea is to study the *average* case behaviour. That is, one might like to show that the algorithm works quickly for *almost* all inputs. However, this is not a great idea, since what it is basically saying is that the algorithm works well for random inputs, but in practice the inputs to which one applies algorithms such as the simplex algorithm are far from random. In principle, one might prove excellent average-case behaviour and have a result that tells you nothing about any input that you are likely to need the algorithm for.

The highly ingenious idea of Spielman and Teng (at least I think it was their idea — certainly, they were the ones who got it to work) was to mix worst-case analysis and average-case analysis as follows: you take an arbitrary input (that is the worst-case part) and randomly perturb it (that is the average-case part) and prove that with high probability the algorithm runs in a time that is polynomial in the size of the input, the dimension, and the reciprocal of the size of the perturbation. (The perturbation is Gaussian, and its size is measured by its variance, though more recently Tao and Vu have shown that the result holds for a much wider class of perturbations.) So we now have a beautiful and conceptual explanation for the success of the simplex algorithm.
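To see the flavour of this in a setting much simpler than the simplex algorithm, here is a toy sketch of my own (it is an analogy, not Spielman and Teng's actual analysis): quicksort with a first-element pivot takes quadratic time on a sorted input, but a small Gaussian perturbation of that same adversarial input destroys the bad structure, and the running time collapses.

```python
# Toy illustration of the smoothed-analysis idea (an analogy only, not
# the Spielman-Teng simplex result): take a worst-case input for an
# algorithm, perturb it with small Gaussian noise, and watch the
# running time drop from the worst case towards the typical case.
import random

def quicksort_comparisons(a):
    """Iterative quicksort with a first-element pivot.

    Returns the number of comparisons, counting one comparison per
    non-pivot element at each partitioning step.
    """
    comparisons = 0
    stack = [list(a)]
    while stack:
        xs = stack.pop()
        if len(xs) <= 1:
            continue
        pivot = xs[0]
        smaller = [x for x in xs[1:] if x < pivot]
        larger = [x for x in xs[1:] if x >= pivot]
        comparisons += len(xs) - 1
        stack.append(smaller)
        stack.append(larger)
    return comparisons

random.seed(0)
n, sigma = 400, 0.05
worst = [i / n for i in range(n)]                        # adversarial (sorted) input
smoothed = [x + random.gauss(0, sigma) for x in worst]   # Gaussian perturbation

print(quicksort_comparisons(worst))     # exactly n(n-1)/2: quadratic behaviour
print(quicksort_comparisons(smoothed))  # far fewer comparisons after perturbing
```

The point of the experiment is that the adversary's carefully constructed bad input is fragile: almost every input in a small neighbourhood of it is fine, which is exactly the situation that smoothed analysis is designed to capture.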

I’m trying to be a bit more concise now, especially given that I am still on day one of the ICM. So I’ll finish by saying that Spielman has also proved a result that I found out about while writing my smiley-complexity conversations: there is an error-correcting linear code that has a positive rate, corrects a positive fraction of errors, and can be encoded and decoded in *linear* time. That was joint with Michael Sipser. If you’ve heard of tornado codes, that is Spielman too (with Luby, Mitzenmacher and Shokrollahi). Finally, he has worked on fast algorithms for solving systems of linear equations — which was the topic of his talk the next day. Here he has found almost linear-time algorithms for several important classes of equations.

Gil summarized Spielman’s achievements by describing them as a beautiful interface between theory and practice. And that is quite obviously correct: all the results are deeply rooted in the need for fast algorithms, and they all turn out to involve mathematics that is extremely interesting quite independently of the applications.

August 22, 2010 at 4:02 pm

That last coauthor is Shokrollahi; for some reason the ACM has it truncated at http://portal.acm.org/citation.cfm?id=276756

August 22, 2010 at 5:51 pm

Many thanks — I’ve corrected it now.


October 27, 2010 at 1:25 am

Is smoothed analysis the same as basically taking the worst case modulo sets of measure zero (at least in the case where such terminology applies), like how $L^{\infty}$ functions are “essentially bounded”? The terminology used to describe it always uses a more awkward description based on shrinking Gaussians.