Archive for the ‘complexity’ Category

A possible answer to my objection

August 14, 2010

It seems that the informal argument in my previous post may have been aiming off target. Here is a sort of counter to it, by Jun Tarui. (See also this earlier comment of his, and the ensuing discussion.)

If I’ve understood various comments correctly, Deolalikar is making a move that I, for one, had not thought of, which is to assume that P=NP and derive a contradiction. How could that help? Well, it allows us to strengthen any hypothesis we might like to make about polynomial-time computable functions to one that is preserved by projection.

Let me spell that out a bit. Suppose that f is a polynomial-time computable function on n+m bits, where m is at most polynomial in n. Then we can fix a set A of n bits, and for each x\in\{0,1\}^A we can ask the question, “Does there exist y\in\{0,1\}^{A^c} such that f(x,y)=1?” (By (x,y) I mean the sequence that agrees with x on A and with y on A^c.) And we can set g_A(x)=1 if and only if the answer is yes.
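To see why this helps, note that the question “does there exist y with f(x,y)=1?” defines an NP predicate, so if P=NP then g_A is again polynomial-time computable, and any hypothesis about polynomial-time functions applies to it too. Here is a minimal brute-force sketch of the projection (my own illustration; the helper names are invented, and the loop over y is exponential in m, which is exactly what the P=NP assumption would let one bypass):

    # A minimal sketch of the projection g_A, assuming f is given as a
    # Python predicate on pairs of bit-tuples and A is the first n
    # coordinates.  Brute force over y, so exponential in m; under P = NP
    # this existential projection would itself be polynomial-time.
    from itertools import product

    def bitstrings(k):
        """All 0/1 tuples of length k."""
        return product((0, 1), repeat=k)

    def project(f, n, m):
        """g_A(x) = 1 iff there exists y in {0,1}^m with f(x, y) = 1."""
        def g(x):
            return int(any(f(x, y) for y in bitstrings(m)))
        return g

    # Example: f(x, y) = 1 iff y is the reverse of x; a witness always
    # exists, so g_A is identically 1.
    f = lambda x, y: y == x[::-1]
    g = project(f, 3, 3)
    print([g(x) for x in bitstrings(3)])  # [1, 1, 1, 1, 1, 1, 1, 1]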
(more…)

Could anything like Deolalikar’s strategy work?

August 13, 2010

This post comes with the same warning as my previous one: my acquaintance with the ideas of Deolalikar’s paper is derived not from the paper itself (apart from a glance through it to get some kind of general impression) but from some of the excellent commentary that has been taking place on the blogs of Dick Lipton and Scott Aaronson. One comment that many people, including me, have found very useful has been one by Terence Tao, who attempted to summarize the state of the discussion up to the time he made the comment. Here is a portion of his summary.

The paper introduces a property on solution spaces, which for sake of discussion we will call “property A” (feel free to suggest a better name here). Formally, a solution space obeys property A if it can be efficiently defined by a special family of LFP formulae, the monadic formulae (and there may be additional requirements as well, such as those relating to order). Intuitively, spaces with property A have a very simple structure.
Deolalikar then asserts
Claim 1. The solution space to P problems always obeys property A.
Claim 2. The solution space to k-SAT does not obey property A.
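To have something concrete in mind for Claim 2, recall that the “solution space” of a k-SAT instance is simply its set of satisfying assignments, viewed as a subset of \{0,1\}^n. A purely illustrative brute-force enumeration (nothing below comes from Deolalikar’s paper; the encoding of clauses is my own):

    # Purely illustrative: the solution space of a tiny 3-SAT instance.
    # A clause is a tuple of nonzero literals; literal +i (resp. -i)
    # means variable i is true (resp. false).
    from itertools import product

    def solution_space(clauses, n):
        """All assignments in {0,1}^n satisfying every clause."""
        return [a for a in product((0, 1), repeat=n)
                if all(any((a[abs(l) - 1] == 1) == (l > 0) for l in c)
                       for c in clauses)]

    clauses = [(1, 2, 3), (-1, -2, 3), (1, -2, -3)]
    print(solution_space(clauses, 3))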

The latest from Dick Lipton’s blog seems to be that Neil Immerman has identified two serious flaws in Deolalikar’s proof. However, Tao, commenting on that, has pointed out that Deolalikar’s general strategy has not yet been shown to be hopeless in the way that, say, an attempted natural proof would have been shown to be hopeless by the natural-proofs result of Razborov and Rudich. In this post, I’d like to argue that it might be hopeless. I’m not sure I can say what that means in the abstract, so let me just give the argument. (more…)

My pennyworth about Deolalikar

August 11, 2010

Given that I have expressed rather unambiguously an interest in the P/NP problem on this blog, I feel duty bound to make some kind of remark about Deolalikar’s recent attempt to prove that P\neq NP. (If by some extraordinary chance you don’t know what I am talking about, then an excellent starting point is Dick Lipton’s blog, a link to which can be found in my blogroll.)

The problem is that I haven’t looked at Deolalikar’s paper in any detail, and already I can tell that I cannot possibly compete with the extremely knowledgeable comments that have been made by several TCS experts. So instead, let me say that, like everybody else, I greatly admire what Deolalikar has done, whether or not it turns out to be correct, and am happy to wait to see how things pan out.

Instead of commenting directly on his work, I thought I’d air some thoughts I had a few years ago. I was reminded of them by reading that an important feature of Deolalikar’s proof is that it uses uniformity in an essential way (though whether this is really the case has been disputed by some commenters). (more…)

A conversation about complexity lower bounds, IX

November 3, 2009

This instalment has a brief discussion of another barrier to proving that P\neq NP, known as algebrization. I don’t fully understand it, and therefore neither do my characters. (I’m hoping that maybe someone can help me with this.) But even a fuzzy understanding has some consequences, and the characters are led to formulate a simpler (and almost certainly already considered by the experts) problem that has the merit that when trying to solve it one is not tempted by proof techniques that would run up against the algebrization barrier. However, for all the usual reasons, this “simpler” problem looks very hard as well.

**************************************

🙂 I’m afraid I’m not yet ready to tell you what basic 3-bit operations do to quadratic phase functions.

8) In that case, can I instead mention something I read that looks relevant?

🙂 Sure, go ahead.

8) Well, you may remember my mentioning that Scott Aaronson and Avi Wigderson have a paper in which they introduce another barrier to lower bound proofs, which they call “algebrization”. If a proof can be algebrized, then it can’t prove that P\neq NP. (more…)

A conversation about complexity lower bounds, VIII

October 27, 2009

In this next instalment, our characters discuss the norm you get by looking for the best possible correlation with a quadratic phase function. They end up discussing a heuristic argument that might, just conceivably, show that this norm is one of a wide class of norms that cannot possibly give rise to superlinear lower bounds. Along the way they have several thoughts, some of which are quite interesting, some not interesting at all, and some plain wrong. (The more interesting ones are mostly later on in the instalment.)

*******************************

🙂 Last time we met, I came to the understanding that if you build a norm by means of a formula of the form \|f\|=\max\{|\langle f,g\rangle|:g\in\Gamma\}, then there are two properties that \Gamma might have that will give the norm an outside chance of being a useful quasirandomness norm for proving nontrivial lower bounds. The first is that the cardinality of \Gamma should be superpolynomial in 2^n (or else there is a trivial polynomial-time algorithm for working out the norm). The second, which implies the first, but which I prefer to think of as a separate property, is that there should not be some clever polynomial-time way of working out the norm; any such method, when \Gamma has superexponential size, would have to exploit special properties of the particular set of functions.

As I see it, if you take \Gamma to be the set of all quadratic phase functions, then you get a set that definitely has the first property and could well have the second. So I want to go back to thinking about this quadratic-correlation norm. Earlier I convinced myself that a random low-complexity function should not correlate with any quadratic phase function. But if for any fixed quadratic phase function I can get only an exponentially small probability of a huge correlation, and if there are superexponentially many quadratic phase functions, then perhaps we need to revisit this statement. Is it conceivable that every function of linear circuit complexity correlates quite heavily with a quadratic phase function? (more…)
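To pin down the object under discussion, here is a brute-force computation of the quadratic-correlation norm for very small n (a sketch of my own, taking a quadratic phase function to be one of the form (-1)^{q(x)} with q a quadratic polynomial over \mathbb{F}_2; the function and variable names are illustrative only):

    # Brute-force quadratic-correlation norm for tiny n:
    #   ||f|| = max over quadratic phases g of |E_x f(x) g(x)|,
    # where g(x) = (-1)^{q(x)} and q(x) = sum_{i<j} a_ij x_i x_j
    #                                     + sum_i b_i x_i   (mod 2).
    # There are 2^{n(n-1)/2 + n} such phases, so this is hopeless for
    # large n, which is exactly the first property discussed above.
    from itertools import combinations, product

    def quad_corr_norm(f, n):
        points = list(product((0, 1), repeat=n))
        pairs = list(combinations(range(n), 2))
        best = 0.0
        for a in product((0, 1), repeat=len(pairs)):
            for b in product((0, 1), repeat=n):
                corr = sum(f(x) * (-1) ** (
                               sum(aij * x[i] * x[j]
                                   for aij, (i, j) in zip(a, pairs))
                               + sum(bi * xi for bi, xi in zip(b, x)))
                           for x in points) / len(points)
                best = max(best, abs(corr))
        return best

    # Sanity check: a quadratic phase correlates perfectly with itself.
    g = lambda x: (-1) ** (x[0] * x[1])
    print(quad_corr_norm(g, 3))  # 1.0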

A conversation about complexity lower bounds, VII

October 21, 2009

Another somewhat inconclusive instalment, but it introduces another candidate for a non-natural property that might be of some use. (In a later instalment, a heuristic argument is proposed that would show, if correct, that in fact it is not of any use …)

*****************************************

8) What’s on the agenda for today?

🙂 Well, there are two main things I’d like to discuss. One is our “proof” that a random low-complexity function was indistinguishable from a purely random function if you use anything U^k-ish to do the distinguishing. I have a small problem with the argument, though I wouldn’t go so far as to say that it is wrong.

The second is that we haven’t got round to having a serious discussion of the lessons that can be drawn from the failure of the approach. If it’s true that it doesn’t work, then I think we can come up with a pretty general class of “simplicity properties” that fail for similar reasons. But I’d like to be as precise about that as possible. (more…)

A conversation about complexity lower bounds, VI

October 16, 2009

The story so far: our three characters, :), 8) and :|, have discussed in detail an approach that 🙂 has to proving complexity lower bounds. At the beginning of the discussion, they established that the idea would be very unlikely to prove lower bounds for circuit complexity, which is what would be needed to prove that P\neq NP. More recently, they have given arguments that strongly suggest that the idea cannot prove interesting new bounds even for weaker models. In this segment of the dialogue, they nevertheless consider what kinds of methods would be needed in order to prove lower bounds for circuit complexity (as opposed to formula complexity). They don’t really get anywhere, and this part of the discussion ends rather abruptly, but they mention one or two interesting facts along the way.

***************************************

🙂 After all that, I have to say that I’m still slightly disappointed that it seems that even if the ideas I’ve been having could be got to work, they are most unlikely to show that P\neq NP.

😐 I didn’t really understand why not. Can somebody remind me? (more…)

A conversation about complexity lower bounds, V

October 11, 2009

Here is the next instalment of the dialogue. Whereas the previous one ended on an optimistic note, this one is quite the reverse: plausible arguments are given that suggest that the U^k dual norm approach is unlikely to give superlinear lower bounds. A few other ideas are introduced in the process.

********************************************************

😐 Can I make a slightly cheeky comment?

🙂 OK.

😐 Well, over the years I’ve heard about a number of approaches that people have had to famous problems, including the P versus NP problem, that have the following characteristic. Somebody who works in a different area of mathematics comes in and claims that the right way of solving the famous problem is to reformulate it as a problem in their area. They then make grand claims for the reformulation, and there’s quite a bit of excitement, though also scepticism from those who have thought hardest about the problem, and ten years later nothing has actually happened. I’m usually a little suspicious of these reformulations, unless they feel so natural as to be almost unavoidable. So the fact that you like additive combinatorics and are trying to reformulate the P versus NP problem (or even just the formula-complexity problem) as a problem in additive combinatorics makes me very slightly wary. (more…)

A conversation about complexity lower bounds, IV

October 7, 2009

This is the fourth part of a dialogue between three characters 🙂 (a mathematical optimist who likes to suggest approaches to problems), 8) (a more sceptical mathematician who knows a small amount of theoretical computer science), and 😐 (an onlooker who occasionally asks for more detail, and occasionally makes comments about what 🙂 and 8) say). They are discussing the possibility of proving lower bounds for computational complexity. 🙂 wanted at first to prove that P\neq NP, but has now been persuaded that proving a lower bound for formula size is a more realistic target, at least for the time being, and is still a major unsolved problem. In part II of the conversation, 🙂 proposed trying to show that the dual of the U^k norm of a low-complexity function cannot be too large, at least when k has order of magnitude \log n. Numerous problems have been identified with this approach, but the idea has not yet been completely killed off. We return to the conversation as a new topic is introduced. This instalment is shorter than average, so I’ll probably post the next instalment sooner than average.
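For anyone meeting these norms for the first time, here is the definition the approach rests on, transcribed into a brute-force computation (an illustrative sketch of my own, usable only for very small n and k): over \{0,1\}^n one has \|f\|_{U^k}^{2^k}=\mathbb{E}_{x,h_1,\dots,h_k}\prod_{\omega\in\{0,1\}^k}f(x+\omega_1h_1+\dots+\omega_kh_k), with addition mod 2, and the dual norm is \|f\|_{U^k}^*=\max\{\langle f,g\rangle:\|g\|_{U^k}\leq 1\}.

    # Illustrative brute-force Gowers uniformity norm over F_2^n for a
    # +/-1-valued f:
    #   ||f||_{U^k}^{2^k} = E_{x, h_1, ..., h_k}
    #       prod_{w in {0,1}^k} f(x + w_1 h_1 + ... + w_k h_k),
    # with all coordinate sums taken mod 2.
    from itertools import product

    def gowers_norm(f, n, k):
        points = list(product((0, 1), repeat=n))
        total = 0.0
        for x in points:
            for hs in product(points, repeat=k):
                prod = 1
                for w in product((0, 1), repeat=k):
                    y = tuple((x[i] + sum(w[j] * hs[j][i] for j in range(k))) % 2
                              for i in range(n))
                    prod *= f(y)
                total += prod
        return (total / len(points) ** (k + 1)) ** (1.0 / 2 ** k)

    # Sanity checks: a linear phase has U^2 norm exactly 1, while a
    # quadratic phase has U^2 norm strictly less than 1 (its U^3 norm is 1).
    lin = lambda x: (-1) ** x[0]
    quad = lambda x: (-1) ** (x[0] * x[1])
    print(gowers_norm(lin, 3, 2), gowers_norm(quad, 3, 2))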

*****************************************************

8) You know, something has been bothering me about our discussion so far.

🙂 You’ve made that pretty clear.

8) I’m not talking about the fact that you don’t have any heuristic argument in favour of the U^{C\log n} dual norm being a useful complexity measure, but about something else that would be equally fundamental to any lower bound proof.

🙂 What’s that?

8) Well, you haven’t said how you would go about proving that some function in NP has a large norm. (more…)

A conversation about complexity lower bounds, III

October 2, 2009

The story so far: :), 8) and 😐 have been discussing how it might be possible to prove lower bounds for formula or circuit complexity. They have come to some understanding of what a proof could not be like, and in particular they have looked at examples of proof attempts that are ruled out by the result of Razborov and Rudich on natural proofs. But πŸ™‚ has proposed basing a proof on the dual of the U^k norm, with k around \log n. Several observations have been made that suggest that this is very unlikely to work, but the approach has not been definitively killed off. In the next stage of the conversation, our characters investigate the approach in more detail, finding further ways to attack it. They end on an optimistic note, but that optimism will be shaken in later instalments of the dialogue.

Incidentally, I’ve decided to let the rate that I post instalments be governed by the statistics: when they suggest that most people who are going to read an instalment have done so, then I’ll post the next one. It’s not an exact science, of course.

*************************************************

🙂 Before we dismiss the U^k norm for unbounded k as a useful complexity measure, it still seems worth thinking about the following problem.

Problem. Let f and g be two Boolean functions. If \|f\|_{U^k}^*\leq A and \|g\|_{U^k}^*\leq B, then how big can \|f\vee g\|_{U^k}^* be?

😐 I suppose quite a nice feature of that problem is that you can start right at the bottom, with k=2. (more…)
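Indeed, at k=2 everything is explicit: \|g\|_{U^2} equals the \ell^4 norm of the Fourier coefficients of g, so by Hölder duality \|f\|_{U^2}^* equals the \ell^{4/3} norm of the Fourier coefficients of f, where \hat{f}(r)=\mathbb{E}_xf(x)(-1)^{r\cdot x}. That makes the k=2 case of the Problem easy to experiment with numerically; here is a small sketch of my own using a fast Walsh–Hadamard transform (the example at the end gives A=B=1 and \|f\vee g\|_{U^2}^*=\sqrt{2}):

    # The k = 2 case is explicit: ||f||_{U^2} = (sum_r fhat(r)^4)^{1/4},
    # so by Holder duality ||f||_{U^2}^* = (sum_r |fhat(r)|^{4/3})^{3/4},
    # where fhat(r) = E_x f(x) (-1)^{r.x}.
    import numpy as np

    def fourier(vals):
        """Fourier coefficients over F_2^n via an in-place fast
        Walsh-Hadamard transform, normalized by 2^n."""
        a = np.array(vals, dtype=float)
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
                a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
            h *= 2
        return a / len(a)

    def u2_dual(vals):
        return float((np.abs(fourier(vals)) ** (4 / 3)).sum() ** (3 / 4))

    # Truth tables on 3 bits, +/-1-valued with -1 playing the role of
    # "true", so that f OR g is the pointwise minimum.
    f = np.array([1, 1, 1, 1, -1, -1, -1, -1])  # depends on one coordinate
    g = np.array([1, 1, -1, -1, 1, 1, -1, -1])  # depends on another
    print(u2_dual(f), u2_dual(g), u2_dual(np.minimum(f, g)))
    # prints 1.0, 1.0 and 1.414..., i.e. sqrt(2)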