Archive for October, 2009

A conversation about complexity lower bounds, VIII

October 27, 2009

In this next instalment, our characters discuss the norm you get by looking for the best possible correlation with a quadratic phase function. They end up discussing a heuristic argument that might, just conceivably, show that this norm is one of a wide class of norms that cannot possibly give rise to superlinear lower bounds. Along the way they have several thoughts, some of which are quite interesting, some not interesting at all, and some plain wrong. (The more interesting ones are mostly later on in the instalment.)

*******************************

:) Last time we met, I came to the understanding that if you build a norm by means of a formula of the form \|f\|=\max\{|\langle f,g\rangle|:g\in\Gamma\}, then there are two properties that \Gamma might have that will give the norm an outside chance of being a useful quasirandomness norm for proving nontrivial lower bounds. The first is that the cardinality of \Gamma should be superpolynomial in 2^n (or else there is a trivial polynomial-time algorithm for working out the norm). The second, which implies the first, but which I prefer to think of as a separate property, is that there should not be some clever polynomial-time way of working out the norm—which, when \Gamma has superexponential size, would require one to exploit special properties of the particular set of functions.

As I see it, if you take \Gamma to be the set of all quadratic phase functions, then you get a set that definitely has the first property and could well have the second. So I want to go back to thinking about this quadratic-correlation norm. Earlier I convinced myself that a random low-complexity function should not correlate with any quadratic phase function. But if for any fixed quadratic phase function I can get only an exponentially small probability of a huge correlation, and if there are superexponentially many quadratic phase functions, then perhaps we need to revisit this statement. Is it conceivable that every function of linear circuit complexity correlates quite heavily with a quadratic phase function?
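To make the quadratic-correlation norm concrete, here is a small brute-force sketch (my illustration, not part of the dialogue). It takes \Gamma to be all quadratic phase functions on \mathbb{F}_2^n, i.e. functions g(x)=(-1)^{q(x)} with q a quadratic polynomial over \mathbb{F}_2, and computes \|f\|=\max_{g\in\Gamma}|\langle f,g\rangle| with the inner product normalized by 2^n. For n=3 there are 2^{n(n-1)/2+n+1}=128 such functions; the 2^{\Theta(n^2)} size of \Gamma, superpolynomial in 2^n, is exactly what makes the brute force below infeasible for large n.

```python
import itertools

n = 3
points = list(itertools.product([0, 1], repeat=n))
N = len(points)  # 2^n

def quad_phase(quad, lin, const):
    # g(x) = (-1)^{ sum_{i<j} q_ij x_i x_j + sum_i l_i x_i + c }, arithmetic over F_2
    vals = []
    for x in points:
        s = const
        for (i, j), q in zip(itertools.combinations(range(n), 2), quad):
            s ^= q & x[i] & x[j]
        for i, l in enumerate(lin):
            s ^= l & x[i]
        vals.append(1 - 2 * s)  # F_2 value 0 -> +1, 1 -> -1
    return vals

def quad_corr_norm(f):
    # ||f|| = max over all quadratic phases g of |<f,g>|, with <f,g> = E_x f(x)g(x)
    best = 0.0
    pairs = n * (n - 1) // 2
    for quad in itertools.product([0, 1], repeat=pairs):
        for lin in itertools.product([0, 1], repeat=n):
            for const in (0, 1):
                g = quad_phase(quad, lin, const)
                best = max(best, abs(sum(a * b for a, b in zip(f, g))) / N)
    return best

# A quadratic phase correlates perfectly with itself, so its norm is 1.
f = quad_phase((1, 0, 0), (0, 0, 0), 0)
print(quad_corr_norm(f))  # 1.0
```

The triple loop visits all 128 members of \Gamma, so the running time is |\Gamma|\cdot 2^n; the question in the dialogue is precisely whether anything cleverer than this is possible.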

Triple negatives and Conservapedia’s support for Hitler

October 23, 2009

In an entry entitled “Negatives” in his Modern English Usage, Henry Fowler gave an amusing collection of examples of blunders that had been made with them. (If you follow this link, you have to scroll down a page to find the article I’m talking about.) Unaware of this, though not surprised to see it, I have been making a little collection myself. Since this is supposed to be a maths blog, let me feebly justify posting it by saying that it is a reflection on the fact that (-1)^3=-1 (and at one point on the corollary that (-1)^4=1).

A conversation about complexity lower bounds, VII

October 21, 2009

Another somewhat inconclusive instalment, but it introduces another candidate for a non-natural property that might be of some use. (In a later instalment, a heuristic argument is proposed that would show, if correct, that in fact it is not of any use …)

*****************************************

8) What’s on the agenda for today?

:) Well, there are two main things I’d like to discuss. One is our “proof” that a random low-complexity function was indistinguishable from a purely random function if you use anything U^k-ish to do the distinguishing. I have a small problem with the argument, though I wouldn’t go so far as to say that it is wrong.

The second is that we haven’t got round to having a serious discussion of the lessons that can be drawn from the failure of the approach. If it’s true that it doesn’t work, then I think we can come up with a pretty general class of “simplicity properties” that fail for similar reasons. But I’d like to be as precise about that as possible.

Miscellaneous matters

October 20, 2009

Michael Nielsen and I have written an Opinion Piece for Nature about the Polymath project and related matters. Thanks almost entirely to Ryan O’Donnell, a preprint at last exists that contains Polymath’s proof of the density Hales-Jewett theorem with all the details. It will be posted on the arXiv very soon and I will update this post when it is.

Update: it can be found here. Owing to a misunderstanding, it was posted before I had any input into it, but in any case, the full proof is here, even if the version that is submitted for publication will have some changes.

The Notices of the AMS have published five back-to-back reviews of the Princeton Companion to Mathematics. They are by Bryan Birch, Simon Donaldson, Gil Kalai, Richard Kenyon and Angus Macintyre.

From Quomodocumque I learned of a new website, Math Overflow, where you can ask and answer mathematical questions. It seems to be very active, with a lot of users, rating systems for comments and commenters, and the like. So in principle it could be another mechanism for pooling the resources of mathematicians with the help of the internet. For example, if you need a certain statement to be true and do not know whether it is known, then my guess is that you could find out pretty quickly if you post a question there. For more discussion, see a post over at the Secret Blogging Seminar.

A conversation about complexity lower bounds, VI

October 16, 2009

The story so far: our three characters, :), 8) and :|, have discussed in detail an approach that :) has to proving complexity lower bounds. At the beginning of the discussion, they established that the idea would be very unlikely to prove lower bounds for circuit complexity, which is what would be needed to prove that P\neNP. More recently, they have given arguments that strongly suggest that the idea cannot prove interesting new bounds even for weaker models. In this segment of the dialogue, they nevertheless consider what kinds of methods would be needed in order to prove lower bounds for circuit complexity (as opposed to formula complexity). They don’t really get anywhere, and this part of the discussion ends rather abruptly, but they mention one or two interesting facts along the way.

***************************************

:) After all that, I have to say that I’m still slightly disappointed that it seems that even if the ideas I’ve been having could be got to work, they are most unlikely to show that P\neNP.

:| I didn’t really understand why not. Can somebody remind me?

A conversation about complexity lower bounds, V

October 11, 2009

Here is the next instalment of the dialogue. Whereas the previous one ended on an optimistic note, this one is quite the reverse: plausible arguments are given that suggest that the U^k dual norm approach is unlikely to give superlinear lower bounds. A few other ideas are introduced in the process.

********************************************************

:| Can I make a slightly cheeky comment?

:) OK

:| Well, over the years I’ve heard about a number of approaches that people have had to famous problems, including the P versus NP problem, that have the following characteristic. Somebody who works in a different area of mathematics comes in and claims that the right way of solving the famous problem is to reformulate it as a problem in their area. They then make grand claims for the reformulation, and there’s quite a bit of excitement, though also scepticism from those who have thought hardest about the problem, and ten years later nothing has actually happened. I’m usually a little suspicious of these reformulations, unless they feel so natural as to be almost unavoidable. So the fact that you like additive combinatorics and are trying to reformulate the P versus NP problem (or even just the formula-complexity problem) as a problem in additive combinatorics makes me very slightly wary.

A conversation about complexity lower bounds, IV

October 7, 2009

This is the fourth part of a dialogue between three characters :) (a mathematical optimist who likes to suggest approaches to problems), 8) (a more sceptical mathematician who knows a small amount of theoretical computer science), and :| (an onlooker who occasionally asks for more detail, and occasionally makes comments about what :) and 8) say). They are discussing the possibility of proving lower bounds for computational complexity. :) wanted at first to prove that P\neNP, but has now been persuaded that proving a lower bound for formula size is a more realistic target, at least for the time being, and is still a major unsolved problem. In part II of the conversation, :) proposed trying to show that the dual of the U^k norm of a low-complexity function cannot be too large, at least when k has order of magnitude \log n. Numerous problems have been identified with this approach, but the idea has not yet been completely killed off. We return to the conversation as a new topic is introduced. This instalment is shorter than average, so I’ll probably post the next instalment sooner than average.

*****************************************************

8) You know, something has been bothering me about our discussion so far.

:) You’ve made that pretty clear.

8) I’m not talking about the fact that you don’t have any heuristic argument in favour of the U^{C\log n} dual norm being a useful complexity measure, but about something else that would be equally fundamental to any lower bound proof.

:) What’s that?

8) Well, you haven’t said how you would go about proving that some function in NP has a large norm.

A conversation about complexity lower bounds, III

October 2, 2009

The story so far: :), 8) and :| have been discussing how it might be possible to prove lower bounds for formula or circuit complexity. They have come to some understanding of what a proof could not be like, and in particular they have looked at examples of proof attempts that are ruled out by the result of Razborov and Rudich on natural proofs. But :) has proposed basing a proof on the dual of the U^k norm, with k around \log n. Several observations have been made that suggest that this is very unlikely to work, but the approach has not been definitively killed off. In the next stage of the conversation, our characters investigate the approach in more detail, finding further ways to attack it. They end on an optimistic note, but that optimism will be shaken in later instalments of the dialogue.

Incidentally, I’ve decided to let the rate that I post instalments be governed by the statistics: when they suggest that most people who are going to read an instalment have done so, then I’ll post the next one. It’s not an exact science, of course.

*************************************************

:) Before we dismiss the U^k norm for unbounded k as a useful complexity measure, it still seems worth thinking about the following problem.

Problem. Let f and g be two Boolean functions. If \|f\|_{U^k}^*\leq A and \|g\|_{U^k}^*\leq B, then how big can \|f\vee g\|_{U^k}^* be?

:| I suppose quite a nice feature of that problem is that you can start right at the bottom, with k=2.
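As a small illustration of why k=2 is such a convenient base case (my sketch, not part of the dialogue): for k=2 the dual norm has a closed form, since \|g\|_{U^2} equals the \ell^4 norm of the Fourier coefficients of g, so by Parseval and \ell^p duality \|f\|_{U^2}^* is the \ell^{4/3} norm of the Fourier coefficients of f. The experiment below computes \|f\|_{U^2}^*, \|g\|_{U^2}^* and \|f\vee g\|_{U^2}^* for random Boolean functions; I am assuming here the convention that Boolean functions are \{0,1\}-valued, so that f\vee g is a pointwise OR.

```python
import random

n = 4
N = 1 << n  # 2^n

def fourier(f):
    # \hat f(alpha) = E_x f(x) (-1)^{alpha . x}, x and alpha as n-bit integers
    return [sum(f[x] * (-1) ** bin(x & a).count("1") for x in range(N)) / N
            for a in range(N)]

def u2_dual(f):
    # ||f||_{U^2}^* = l^{4/3} norm of the Fourier coefficients,
    # the dual of ||.||_{U^2} = l^4 norm of Fourier coefficients (k = 2 only)
    return sum(abs(c) ** (4 / 3) for c in fourier(f)) ** (3 / 4)

random.seed(0)
f = [random.randint(0, 1) for _ in range(N)]
g = [random.randint(0, 1) for _ in range(N)]
h = [a | b for a, b in zip(f, g)]  # pointwise OR, f and g being {0,1}-valued
print(u2_dual(f), u2_dual(g), u2_dual(h))
```

For k\geq 3 no such Fourier formula is available, which is one reason the general problem is so much harder than the k=2 case.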

