Actually, Dinur’s gap amplification via expanders is more direct in this sense, and resembles more closely what you may have in mind. But you can get weaker results by reducing some error-correcting- (and linearity-testing-) friendly NP-complete problem, like solving quadratic equations over finite fields, to your PCP approach; that has a rather different feel from manipulating a constraint problem, yet it works. Even the general approach of passing between the CSP view and the proof view needs a few clever moves with the notion of NP-hardness, which might be mind-twisting for someone not that familiar with complexity-theoretic thinking.
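
To make the linearity-testing ingredient concrete: the idea that a global property can be checked with a handful of random queries is already visible in the classic BLR linearity test. Here is a minimal, illustrative Python sketch (the function `linear` and all parameters are my own toy choices, not anything from an actual PCP construction):

```python
import random

def blr_linearity_test(f, n, trials=100):
    """BLR test over GF(2)^n: accept if f(x) XOR f(y) == f(x XOR y)
    on randomly chosen pairs. A function passing many trials is close
    to a linear (parity) function with high probability."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # caught a violated linearity equation
    return True  # probably close to linear

# A genuinely linear function: parity of a masked subset of the bits.
def linear(x, mask=0b1011):
    return bin(x & mask).count("1") % 2

print(blr_linearity_test(linear, n=4))  # True
```

A far-from-linear function, such as the AND of two bits, violates the tested equation on a constant fraction of pairs, so a few hundred trials reject it except with vanishing probability.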

]]>Are there sharp implications for what can and cannot be done in polynomial time if UGC is proven false? Any insights here would be appreciated. Thank you.

]]>I really like this formulation, because it highlights the difficulty better. Of course, it is possible that I have not internalized this correctly yet, and that I also wrongly assume the checking part is easy once you have this algorithm.

]]>Does that make it less intuitive?

]]>I for one look forward to the day when there will not be a separate prize for informatics AND we see the first informatician Fields medalist.

]]>On the other hand, you can surely take the interpretation and language of this result to reach philosophical heights when combined with the fact that all formalized math theorems are basically SAT instances, and PCP guarantees that these strange switch-off-a-few-bits proofs exist for anything you like.

]]>It works the other way around: an instance in which the standard algorithm performs poorly on a problem P (of a certain type) can be turned into a reduction from Unique Games to the problem P, showing that all algorithms have to perform similarly poorly, or else UGC is false.

Very roughly, if the problem P is a graph problem, then a reduction from UG to the problem P describes how to turn each variable into a little graph, and how to turn each constraint into a set of edges between the two little graphs corresponding to the two variables. An instance in which the standard algorithm performs poorly can be used to define the little graphs and the connections. The properties that a graph has to have to make the standard algorithm perform poorly are precisely the properties that make the reduction work.
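
For a concrete picture of the "little graphs" shape, here is a tiny Python sketch of the standard label-extended graph of a Unique Games instance. This is only a schematic illustration of the form such reductions take, not any particular reduction, and the variable names are mine:

```python
# Label-extended graph of a Unique Games instance with k labels:
# each variable u becomes a cloud of k vertices (u, 0), ..., (u, k-1)
# (its "little graph"), and each constraint (u, v, pi), with pi a
# permutation of the labels, contributes matching edges ((u, a), (v, pi[a])).
def label_extended_graph(constraints, k):
    edges = []
    for (u, v, pi) in constraints:
        for a in range(k):
            edges.append(((u, a), (v, pi[a])))
    return edges

# One constraint x -> y whose permutation swaps the two labels:
print(label_extended_graph([("x", "y", [1, 0])], k=2))
# [(('x', 0), ('y', 1)), (('x', 1), ('y', 0))]
```

A labeling of the variables satisfies a constraint exactly when the corresponding pair of chosen vertices is connected by one of these matching edges, which is what lets graph properties stand in for constraint satisfaction.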

]]>there is a natural synergy and also a low-level antagonism between mathematicians & computer scientists, captured somewhat in the anecdote about all the mathematicians leaving the breakthrough/award CS talk (the kind of offhand/colorful/revealing observation that would never be reported almost anywhere else except a blog)…. have observed this friction in some occasional ways & it can be seen in your blog comments at times… the word “frenemies” comes to mind… often applied to teenage-girl relationships :/

anyway, you are leading/pioneering in building bridges in this area, e.g. with your interest/fascination with P vs NP etc., & I think that is highly commendable. & there is more on the theme of (long-term) math-CS fusion in my blog…

]]>The CSP reformulation sounds very intuitive: it is hard to find a satisfying assignment and, if you cannot find one, it is also hard to guarantee that no assignment satisfies more than, say, 99% of the constraints. (Did I understand it correctly?)

On the other hand, the original formulation in terms of probabilistically checkable proofs is kind of mind-blowing: if you have a logically correct statement, you can explain it a little better (increasing the length polynomially), and then I only need to read 1000 words to convince myself with 99% confidence that the proof is correct!
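
That "read only a few things, gain 99% confidence" phenomenon has a much simpler classical cousin worth keeping in mind. This is not the PCP verifier, just an illustration of probabilistic spot-checking: Freivalds' check of a claimed matrix product.

```python
import random

def freivalds(A, B, C, trials=7):
    """Probabilistic check of the claim A @ B == C.
    Each trial multiplies by a random 0/1 vector in O(n^2) time
    (versus O(n^3) for recomputing the product); a wrong C survives
    any single trial with probability at most 1/2."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # claim refuted with certainty
    return True  # claim accepted with confidence >= 1 - 2**(-trials)
```

A rejection is always correct, while an acceptance can only be wrong with probability at most 2^(-trials), which is the same one-sided flavor of guarantee the PCP verifier gives.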

Does that mean the equivalence between these two statements is just as non-trivial as the PCP formulation itself, rather than, let’s say, an equivalence in the same sense that any true statement is equivalent to 2 × 2 = 4?

]]>*Thanks — updated.*

Well, the whole Nevanlinna prize is very strange. I don’t understand why it is promoted by the IMU as a fifth Fields medal, since it only covers one area of mathematics. I think most people would react the same way if it were a prize reserved for topologists, or analysts, or any other specific field.

]]>About Maryam Mirzakhani – my impression was that her recent very deep work with Eskin was the main reason for her Fields medal, not the earlier work that came out of her Ph.D. thesis.

]]>Structure of theoretical regularity.

Structure of regularity theory.

Theory of structure regularity.

Theory of regularity structures.

Regularity of structure theory.

Regularity of theoretical structure.