This is the second part of a dialogue in which three characters, :), 8) and π, discuss issues connected with the P versus NP problem. There are now nine parts: if this number continues to increase at a faster rate than I post them, then I may have to increase the rate of posting.

The story so far: :) has put forward one or two ideas, and 8) has drawn attention to the work of Razborov and Rudich that places very serious constraints on what any proof that P ≠ NP could possibly look like. This makes precise an intuition that had been beginning to build up after several failed attempts to find such a proof. :) then enthusiastically discusses further possible applications of the basic idea of Razborov and Rudich (seemingly unaware that Razborov and Rudich themselves had had similar but much more precise thoughts of the kind), before retiring to think for a while about what a “non-natural” proof might look like.

****************************************************************

:) Hi again.

8) Hello. Got anything for us today?

:) I’m not really sure. But I have had a few thoughts that I’d like to try out on you.

8) Fire away.

:) Well, the result of Razborov and Rudich tells us that if we want to define some notion of “structure” or “simplicity” that all functions of low complexity have and some function in NP does not have, then we have a choice. Either almost all functions must be simple, or it must not be possible to decide in polynomial time whether a function is simple. I have to say that I rebel against the first, so I’ve been trying to come up with properties that one might be able to prove something about, despite their not being in P.

If you take the attitude that any property not in P must be “weird” and “complex”, then this seems hopeless. But it’s gradually been dawning on me that there are plenty of properties that we understand perfectly easily even though we don’t have efficient algorithms for determining whether they hold. A pretty obvious class of such properties is NP!

For instance, if the property were “this graph on $n$ vertices contains a clique of size $m$” one can easily imagine proving that it holds for some specific graph, even if we do not have a polynomial-time algorithm for determining whether it holds for a general graph.

8) Yes, but if you allow properties in NP, then you allow the trivial universal properties such as “has circuit complexity at most $m$.”

π Hang on just a moment. Can you explain why that’s in NP?

8) Certainly. If you want to convince me in polynomial time that a certain function $f$ from $\{0,1\}^n$ to $\{0,1\}$ has circuit complexity at most $m$, then all you have to do is tell me a sequence of functions $f_1,\dots,f_m$ that you claim is a straight-line computation of $f$. If you want to make it easy for me, you can also tell me what the Boolean operations are that produce each term in the sequence. I will then go away and check that for each of the $2^n$ input strings $x$, if I apply the Boolean operations you have given me then I end up with $f(x)$.

In short, if you tell me how $f$ is built up using Boolean operations, it’s easy for me to check that you are telling me the truth. But it is not at all easy for me to work out from the values of $f$ what the Boolean operations were that led to those values.
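8)’s checking procedure can be sketched in a few lines of Python. The gate set and the program encoding below are invented for illustration (a straight-line program is given as a list of gates, each combining two earlier values):

```python
from itertools import product

def verify_straight_line(n, f, program):
    """Check that a claimed straight-line program computes f on all 2^n inputs.

    program: list of (op, i, j) triples; step t computes g_t = g_i op g_j,
    where g_0 .. g_{n-1} are the input bits.  The check takes time
    polynomial in 2^n and the program length."""
    for x in product([0, 1], repeat=n):
        vals = list(x)
        for op, i, j in program:
            a, b = vals[i], vals[j]
            vals.append(a & b if op == "AND" else a | b if op == "OR" else a ^ b)
        if vals[-1] != f(x):
            return False
    return True

# A one-line program computing XOR of two bits:
prog = [("XOR", 0, 1)]
print(verify_straight_line(2, lambda x: x[0] ^ x[1], prog))  # True
```

Finding such a program from the truth table alone is the hard direction; checking a given one, as here, is easy.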

π OK, I get the general idea I think.

:) Hmm, so you’re saying that if I look at properties in NP then I run into the other difficulty—that I am allowing properties that lead to trivial reformulations of the original problem?

8) Well, it sort of looks that way.

:) Maybe. But it still doesn’t rule out that there might be *some* property in NP that does the job for us. For instance, perhaps for every polynomial-time computable function $f$ on $\{0,1\}^n$ one can find some subset $X$ of $\{0,1\}^n$ such that the restriction of $f$ to $X$ has a property that *can* be verified in polynomial time. But there might be so many candidates for $X$ that it would not be possible to tell in polynomial time whether this was the case. And perhaps it would be possible to prove for some specific function in NP that no such $X$ existed.

8) Well, in theory that might be possible I suppose. But the kind of thing you have just proposed won’t do as it stands, because if $f$ restricts to a nice function on $X$ and $g$ restricts to a nice function on some disjoint set $Y$, then it might be impossible to say anything about $f \wedge g$. You’d need some much stronger inductive hypothesis, such as that there were “many” sets $X$ for which the restriction of $f$ was nice.

:) Hmm. But then if $\mathcal{A}$ is the set of all $X$ such that $f$ restricts nicely to $X$, and $\mathcal{B}$ is the set of all $X$ such that $g$ restricts nicely to $X$, then it might be that the corresponding set for $f \wedge g$ was something like $\mathcal{A} \cap \mathcal{B}$. That’s a bit worrying, as the size of the intersection could go down quickly.

But wait. Is it as bad as all that? Let’s suppose that these collections had some kind of positivity property that meant that if $\mathcal{A}$ has density $\alpha$ and $\mathcal{B}$ has density $\beta$ then $\mathcal{A} \cap \mathcal{B}$ has density at least $\alpha\beta$. There are many examples of collections of sets with this kind of property. Then if basic functions had sets associated with them of density $\theta$, then the density after $m$ steps would have to be at least $\theta^m$. That sounds bad until one remembers that each set that we are talking about is a collection of *subsets of* $\{0,1\}^n$. In other words, we are potentially talking about a set of size $2^{2^n}$ here, in which case a density of $\theta^m$ could be considered large even when $m$ is much larger than $n$.

8) So, have you got any suggestion for a property along these lines? You appear to be wanting to say that $f$ is “structured” if there is a “large” and “nice” collection $\mathcal{A}$ of subsets $X$ of $\{0,1\}^n$ such that the restriction of $f$ to each $X \in \mathcal{A}$ is “nice”. And you appear also to want it to be the case that if the restrictions of $f$ and $g$ to $X$ are both nice, then so are the restrictions of both $f \wedge g$ and $f \vee g$. Good luck!

:) I see what you mean. In particular, that last requirement is very strong indeed, since it implies that for every $X$ that we use, the nice functions on $X$ are closed under Boolean operations—or perhaps just approximately closed in some sense. But then, if niceness on $X$ is a property in P, it would seem to follow (from natural-proofs type considerations) that almost every Boolean function on $X$ is nice. (I admit that I don’t have a precise argument here—I’m just trying to make useful guesses.)

That wouldn’t quite kill off the approach, because it might be that for a typical function, even if the restriction to almost every $X$ was nice, it was nevertheless hard to find a nice collection of such sets. But rather than explore this desperate idea I’d like to mention something else I thought of.

8) OK, let’s hear it.

:) Well, I was trying to think of naturally occurring properties that are in NP and not obviously in P, and I suddenly realized that additive combinatorics can supply us with some.

8) Oh yes. Like what?

:) Well, an interesting phenomenon in additive combinatorics is that there are certain norms that can be computed in polynomial time, but that have dual norms that seem to be hard to think about except in an NP-ish way.

π Whoah, you’re losing me here. What is a dual norm, and what on earth has it got to do with NP?

:) Sorry, I keep forgetting that many people have an irrational dread of norms. I myself find that I can’t really understand anything until I’ve phrased it in terms of norms. (That’s an exaggeration of course, but it contains a kernel of truth.)

Anyhow, suppose we’ve got a finite set $X$ and a norm $\|\cdot\|$ defined on the vector space of all functions from $X$ to $\mathbb{R}$ or $\mathbb{C}$. And let’s suppose also that we have defined an inner product $\langle f,g\rangle$ to be $\mathbb{E}_x f(x)\overline{g(x)}$. Here, $\mathbb{E}_x$ is shorthand for $|X|^{-1}\sum_{x\in X}$.

We now define the dual norm $\|\cdot\|^*$ by the formula

$\|f\|^* = \sup\{|\langle f,g\rangle| : \|g\| \le 1\}.$

Note the trivial but useful inequality $|\langle f,g\rangle| \le \|f\|^*\,\|g\|$, which holds for any two functions $f$ and $g$ and any norm $\|\cdot\|$.

Now suppose that the norm of a function can be computed in polynomial time. It takes a little bit of effort to say precisely what that means, since we are dealing with real numbers here, but that’s a technicality I don’t want to get into. It’s sort of obvious in any individual case. For example, if I define $\|f\|$ to be the average of the $k$ largest values of $|f(x)|$, then that is in P because all I need to do to calculate $\|f\|$ is arrange the values of $|f|$ in non-increasing order and take the average of the first $k$ of them.

Next, suppose that we have a norm $\|\cdot\|$ that can be computed in polynomial time and we want to prove that $\|f\|^* \ge C$ for some function $f$. This statement belongs to NP because one can verify it by exhibiting a function $g$ such that $\|g\| \le 1$ and $\langle f,g\rangle \ge C$: note that checking those two inequalities can be done in polynomial time, the first one by hypothesis and the second one by just working out all those products and taking their average.
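As a concrete toy instance, take the norm just mentioned, the average of the $k$ largest values of $|f|$. The certificate check for a dual-norm lower bound is then a short polynomial-time computation (the specific arrays below are invented for illustration):

```python
import numpy as np

def top_k_norm(f, k):
    # ||f|| = average of the k largest values of |f|; computable by sorting
    return np.sort(np.abs(f))[-k:].mean()

def inner(f, g):
    # <f, g> = E_x f(x) g(x)  (real-valued case)
    return np.mean(f * g)

def verify_certificate(f, g, k, C):
    """Accept iff g witnesses ||f||* >= C, i.e. ||g|| <= 1 and <f,g> >= C."""
    return top_k_norm(g, k) <= 1 and inner(f, g) >= C

f = np.array([1.0, -1.0, 1.0, 1.0])
g = np.array([1.0, -1.0, 1.0, 1.0])          # taking g = f certifies ||f||* >= 1
print(verify_certificate(f, g, k=2, C=1.0))  # True
```

Finding a good witness $g$ is the hard part; checking a proposed one, as here, is easy.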

π That’s all very well, but for the examples I can think of, it seems to be possible to calculate the dual norm in polynomial time. For example, assuming this concept of duality is basically the same as a concept I was once taught about in a functional analysis course, isn’t the dual of $L_p$ equal to $L_q$, where $\frac1p + \frac1q = 1$? If so, it would seem that at least for $L_p$ spaces the dual space is of the same complexity.

:) That’s what I thought as well to start with. Here’s a more sophisticated example. A well-known norm in additive combinatorics is the $U^2$ norm. If $G$ is a finite group and $f$ is a function from $G$ to $\mathbb{C}$, then the $U^2$ norm of $f$ is defined by the formula

$\|f\|_{U^2}^4 = \mathbb{E}_{x,a,b}\, f(x)\overline{f(x+a)}\,\overline{f(x+b)}\,f(x+a+b).$

Now the direct definition of $\|f\|_{U^2}^*$ would be

$\|f\|_{U^2}^* = \sup\{|\langle f,g\rangle| : \|g\|_{U^2} \le 1\},$

and if we define it that way then it looks like a function in NP that’s not obviously in P. However, it is an easy exercise to prove that $\|f\|_{U^2}$ is actually equal to $\|\hat f\|_4$, where $\hat f$ is the Fourier transform of $f$. But from that it follows from duality that $\|f\|_{U^2}^*$ is equal to $\|\hat f\|_{4/3}$, which can be calculated in polynomial time (since the Fourier transform can).
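The identity $\|f\|_{U^2} = \|\hat f\|_4$ is easy to check numerically. A small sketch over a cyclic group $\mathbb{Z}_N$ (the group and its size are arbitrary choices), comparing the direct definition with the Fourier-side computation:

```python
import numpy as np

def u2_norm_direct(f):
    # ||f||_{U^2}^4 = E_{x,a,b} f(x) conj(f(x+a)) conj(f(x+b)) f(x+a+b) over Z_N
    N = len(f)
    idx = np.arange(N)
    total = 0.0
    for a in range(N):
        for b in range(N):
            total += np.mean(f[idx] * np.conj(f[(idx + a) % N])
                             * np.conj(f[(idx + b) % N]) * f[(idx + a + b) % N]).real
    return (total / N ** 2) ** 0.25

def u2_norm_fourier(f):
    # Normalize so that fhat(r) = E_x f(x) e^{-2 pi i r x / N}; then
    # ||f||_{U^2} equals the l^4 norm of the Fourier coefficients.
    fhat = np.fft.fft(f) / len(f)
    return (np.abs(fhat) ** 4).sum() ** 0.25

rng = np.random.default_rng(0)
f = rng.choice([-1.0, 1.0], size=8)
print(abs(u2_norm_direct(f) - u2_norm_fourier(f)) < 1e-10)  # True
```

The Fourier route costs $O(N\log N)$ operations, against $O(N^3)$ for the direct triple average.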

This shows that sometimes one can do a clever transformation, in this case the Fourier transform, that makes a dual norm easy to calculate, even if at first it appears to be hard to calculate.

8) OK then, what’s your point?

:) My point is that there are norms around where it’s nothing like so obvious how to do this. Here, for example, is a problem that I don’t know the answer to. The $U^3$ norm of a function $f$ is defined by the formula

$\|f\|_{U^3}^8 = \mathbb{E}_{x,a,b,c} \prod_{\epsilon \in \{0,1\}^3} C^{\epsilon_1+\epsilon_2+\epsilon_3} f(x + \epsilon_1 a + \epsilon_2 b + \epsilon_3 c),$

where $C$ denotes complex conjugation.

Now quite a lot is known about the $U^3$ norm. In particular, if a Boolean function has large $U^3$ norm, then a theorem of Samorodnitsky says that it must correlate quite well with something called a quadratic phase function. But there are around $2^{cn^2}$ such functions, which is more than polynomially many in $2^n$, and we don’t know how to express the $U^3$ norm or the dual norm of $f$ in terms of some easily calculated decomposition of $f$ into quadratic phase functions. So it is not at all obvious whether there is a polynomial-time algorithm for calculating $\|f\|_{U^3}$.

8) Are you trying to suggest that the dual of the $U^3$ norm could be a useful complexity measure?

:) Well, it seems like a long shot, since I thought of it just as an example of a fairly natural function that wasn’t obviously in P, and have no reason to suppose that it really is a formal complexity measure.

Maybe we should think about that. But first I want to tell you another reason that I can’t help being slightly excited about dual norms. Recall that we are trying to define a property of functions that we think of as “simplicity” or “structuredness”. A natural idea would be to define a notion of quasirandomness and say that a function is simple if it isn’t quasirandom. The $U^2$ and $U^3$ norms can be thought of as measures of quasirandomness of a certain type, and they are very useful in additive combinatorics.

However, there is a problem, which is that if $f$ and $g$ are non-quasirandom, it certainly doesn’t follow that $fg$ is non-quasirandom. For example, suppose that we divide our group up into two nice sets $X$ and $Y$. We define $f$ to be $1$ on $X$ and random on $Y$, and we define $g$ to be random on $X$ and $1$ on $Y$. Then neither $f$ nor $g$ is quasirandom (their $U^2$ and $U^3$ norms will be big) but $fg$ is a random function.

However, if we use *dual* norms, we are in much better shape. We say that $f$ is simple if $\|f\|^*$ is not too big. Now what does this mean intuitively? It means that $f$ is not only not quasirandom, but it doesn’t even *correlate* with a quasirandom function. So the functions $f$ and $g$ above are completely ruled out. In general, it seems far more likely that we would end up with a complexity measure. After all, we’ve already got the triangle inequality, which isn’t what we want but does at least resemble it a bit.

π So you’re saying that it’s no good using “is not quasirandom” as a definition, but “does not correlate with a quasirandom function” might be OK.

:) That’s exactly what I’m saying.

π There’s something here that’s confusing me. Earlier, you mentioned what you called a trivial inequality, that $|\langle f,g\rangle| \le \|f\|^*\|g\|$.

:) Yes I did.

π Well, a special case of that is the statement that $\langle f,f\rangle \le \|f\|^*\|f\|$.

:) Yes.

π So if $f$ is a Boolean function, and therefore takes values $\pm 1$ everywhere, we get that $1 \le \|f\|^*\|f\|$.

:) Oh. I think I see where this is going.

π So the only way $\|f\|^*$ can be small is if $\|f\|$ is reasonably large. Or to put it another way, if $\|f\|$ is very small, which is your interpretation of the statement “$f$ is quasirandom,” then $\|f\|^*$ must be very large.

Thus, for $\|\cdot\|^*$ to be a useful complexity measure, we need $\|f\|$ to be not all that small whenever $f$ is a function of low complexity. But if the norm $\|\cdot\|$ can be calculated in polynomial time, then the property “$\|f\|$ is not that small” is in P. Therefore, given all those results about natural proofs that we were talking about in our earlier conversation, almost all Boolean functions must have the property that $\|f\|$ is not all that small. That doesn’t instantly imply that $\|f\|^*$ cannot be big, because in theory $f$ could correlate with a quasirandom function that is not a Boolean function. But it is a bit worrying, since it no longer seems to make sense to describe the norm as small for quasirandom functions if it tends to be big for random Boolean functions.

:) Damn. I was hoping to use some “magic power of duality”, but the observation you’ve just made shows that even the fact that the predual of some norm is in P causes problems for that norm.

That’s annoying, because dual norms had other virtues.

π What were those other virtues?

:) I just mean the things I’ve already mentioned: that the property of not even *correlating* with a quasirandom function seems much more likely to be closed under Boolean operations (in some useful sense that one would have to invent). I was also encouraged by the fact that the duals of the $U^k$ norms are known to have what one might call “algebra-like” properties. In particular, a crucial lemma in the proof by Green and Tao that the primes contain arbitrarily long arithmetic progressions asserts that certain special functions in the dual space have pointwise products with dual norms that are not too large. Of course, pointwise products are not the same as pointwise maxima and minima, but if $f$ and $g$ are functions that take values $\pm 1$, then $\max(f,g) = \frac12(f+g+1-fg)$, for example, so there is some connection. The lemma of Green and Tao does not come close to what one would need for the dual norm to be a useful complexity measure, but I found it encouraging nevertheless. But now it seems that the whole idea is completely wrecked.

8) Perhaps we could formalize the difficulty and give a very slight (and very simple) extension of the Razborov-Rudich result. The reason your NP property (having a small dual norm) doesn’t work is that it implies a property in P (having a large norm) that holds for almost no functions. And that implies, by the work of Razborov and Rudich, that it cannot hold for all functions of low complexity. Therefore, your original property cannot hold for all functions of low complexity.

:) In fact, this is not an extension of the Razborov-Rudich result, since they have a notion of a *naturalizable* proof, which essentially means a proof that can easily be turned into a natural proof. And that is the case whenever the “simplicity” property you are considering implies another property that can be checked in polynomial time and does not hold for almost all functions.

That brings me to another point I wanted to make. It seems to me that coming up with an NP function that is quasirandom is not a particularly easy task, whereas coming up with an NP function that *correlates* with a quasirandom function ought to be significantly easier. I have examples up my sleeve to illustrate this, which perhaps I’ll talk about later. But for now I just wanted to point out that this was supposed to be another advantage of dual norms. The odd thing is that the failure of the proof seems to have nothing to do with the part of the argument where you show that some NP function is “complex”.

8) There. I told you you’d end up sadder but wiser.

:) I find that these feelings of sadness pass very quickly. I’ve just had another thought that might make things work.

8) I’m all ears.

:) Let me begin by saying what the $U^k$ norm is. It’s the obvious generalization of the $U^2$ and $U^3$ norms. If $k \ge 2$, then $\|f\|_{U^k}$ is defined by the formula

$\|f\|_{U^k}^{2^k} = \mathbb{E}_{x,a_1,\dots,a_k} \prod_{\epsilon \in \{0,1\}^k} C^{|\epsilon|} f(x + \epsilon \cdot a).$

Here, $C$ is the operation of taking the complex conjugate, $|\epsilon|$ stands for $\epsilon_1 + \dots + \epsilon_k$, and $\epsilon \cdot a$ is shorthand for $\epsilon_1 a_1 + \dots + \epsilon_k a_k$. So if your function is real you can ignore the $C$s.

π By the way, is it supposed to be obvious that that formula really does define a norm?

:) Oh, sorry. No, it’s not obvious, but the proof is not too hard either. It’s a generalization of the proof that the $U^2$ norm is a norm. It’s proved in a similar way, but one uses the Cauchy-Schwarz inequality more times.

Anyhow, the thought that’s occurred to me is that perhaps we’d get something useful if we took $k$ to be a slowly growing function of $n$ rather than just a fixed constant. Then the number of $(k+1)$-tuples $(x,a_1,\dots,a_k)$ is no longer a polynomial in $2^n$, so the norm no longer appears to be in P.
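For concreteness, the direct definition can be computed by brute force over $\mathbb{F}_2^n$ (real-valued, so the conjugations can be ignored); the cost grows like $2^{(k+1)n}$, which is exactly the point being made. The parameters and test function below are arbitrary choices:

```python
import numpy as np
from itertools import product

def uk_norm(f, n, k):
    """Direct computation of the U^k norm of f : F_2^n -> R,
    averaging over all 2^((k+1)n) tuples (x, a_1, ..., a_k)."""
    N = 1 << n
    total = 0.0
    for tup in product(range(N), repeat=k + 1):
        x, a = tup[0], tup[1:]
        p = 1.0
        for eps in product([0, 1], repeat=k):
            y = x
            for e, ai in zip(eps, a):
                if e:
                    y ^= ai          # group operation in F_2^n is XOR
            p *= f[y]
        total += p
    return (total / N ** (k + 1)) ** (1.0 / 2 ** k)

# Sanity check: a "linear phase" chi(x) = (-1)^{r.x} has U^2 norm exactly 1.
chi = np.array([(-1.0) ** bin(x & 0b101).count("1") for x in range(8)])
print(round(uk_norm(chi, 3, 2), 6))  # 1.0
```

Even modest $k$ and $n$ make this loop enormous, which is why the polynomial-time question is about finding something cleverer than the definition.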

8) You do realize, I suppose, that there is more to proving a complexity lower bound than just identifying a non-natural property. Do you have any reason to suppose that this one works, or are you just trying to find an approach that doesn’t trivially not work? Given the state of play with the problem, even the second aim has something to be said for it, but if you just make wild guesses like this, then the likelihood is that they won’t actually work.

:) You’re absolutely right. At this stage I just want to try to think about what a non-natural proof could conceivably look like. But now that I’ve got a candidate, why don’t we try to attack it and show that it doesn’t work?

8) Sounds good to me. And here’s my first attack. What makes you so confident that this norm does not belong to P?

:) I’ve just explained that. There are about $2^{(k+1)n}$ configurations to look at, so if $k$ tends to infinity then we have a superpolynomial number.

8) Yes, but do we really have to look at all those configurations? Why couldn’t we take a much smaller random sample and get a good approximation to the norm with high probability? I’m pretty sure that would make the property naturalizable. (Here I’m using the general philosophy that all randomized algorithms can be derandomized — I could go into that, but suffice it to say that if we are assuming that strong pseudorandom generators exist then it shouldn’t be a problem to use randomized algorithms.)

:) I see. That’s a serious attack. So let’s think about how big the random sample needs to be.

Basically, it needs to be big enough that if there is some non-quasirandomness around, we will find it. So to make it as hard as possible to find, how about if we squash all the non-quasirandomness into a small space? For example, we could choose a function $f$ to be entirely random on the whole of $\mathbb{F}_2^n$, apart from on some subspace $V$ of codimension $d$, where it takes the constant value 1.

I can’t be bothered to work out precisely what the $U^k$ norm of this function is, but a woolly argument would be that when we choose a random $(x,a_1,\dots,a_k)$, we get a contribution of $1$ if all of $x,a_1,\dots,a_k$ belong to $V$, and on average everything else gives almost zero. The probability that all lie in $V$ is $2^{-(k+1)d}$, so the norm ought to be something like $2^{-(k+1)d/2^k}$. This will be large even when $(k+1)d = 2^k$, in which case the probability that all lie in $V$ is $2^{-2^k}$. So in order to pick up any configurations at all that make a contribution to the norm, we would need a random sample of size at least $2^{2^k}$. That would be superpolynomial in $2^n$ if $2^k$ is $n\log n$, or $k = \log n + \log\log n$. (I’m taking logs to base 2 here.)

What this seems to show is that, for this approach to have any chance of working, we would have to take $2^k$ to be greater than $n$. But that’s only $k$ of order $\log n$, so the approach doesn’t seem to be killed off by your random-sampling objection.
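A toy version of this planted-subspace example can be simulated. The sketch below (the dimension and codimension are arbitrary illustrative choices) plants the constant value 1 on a subspace of $\mathbb{F}_2^n$ inside an otherwise random $\pm 1$ function, and compares $U^2$ norms via the Walsh-Hadamard transform, using the fact that the $U^2$ norm is the $\ell^4$ norm of the Fourier coefficients:

```python
import numpy as np

def wht(h):
    """Walsh-Hadamard transform, normalized so hhat[r] = E_x h(x)(-1)^{r.x}."""
    h = h.astype(float).copy()
    m = 1
    while m < len(h):
        for i in range(0, len(h), 2 * m):
            a = h[i:i + m].copy()
            b = h[i + m:i + 2 * m].copy()
            h[i:i + m] = a + b
            h[i + m:i + 2 * m] = a - b
        m *= 2
    return h / len(h)

def u2_norm(h):
    return (np.abs(wht(h)) ** 4).sum() ** 0.25   # ||h||_{U^2} = ||hhat||_4

rng = np.random.default_rng(0)
n, d = 10, 2                                 # ambient dimension, codimension of V
N = 1 << n
f = rng.choice([-1.0, 1.0], size=N)          # fully random function
g = f.copy()
for x in range(N):
    if x & ((1 << d) - 1) == 0:              # V = {x : last d coordinates are 0}
        g[x] = 1.0                           # plant the constant value 1 on V
print(u2_norm(f), u2_norm(g))                # the planted function has the larger norm
```

The planted structure lives on only $2^{n-d}$ of the points, yet it dominates the norm, which is the spirit of the hiding argument above.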

8) Maybe not. Actually, I find your argument quite convincing. If all the non-quasirandomness of some function can be hidden in a very small subspace, it could well be quite hard to find that subspace.

π I’m not sure I’m quite as convinced. With your example, the function was *constant* on a subspace. Surely there must be ways of identifying large subspaces on which a function is constant if it is otherwise random. For instance, couldn’t one just take the Fourier transform?

:) Yes, but that was just one example. Having found that, one could disguise it a bit. For example, instead of making $f$ constant on $V$, one could instead make it a polynomial phase function of degree $k-1$. The estimate above would still go through, and it is much less clear how one would identify a subspace on which an otherwise random function was equal to some arbitrary polynomial phase function of unbounded degree. And one could make it harder still by randomly flipping a few values in order to defeat any clever algebraic tricks.

π OK, you’ve bludgeoned me into submission.

:) By the way, I should make clear that the complexity measure I would then go for is the dual norm of the $U^k$ norm. Although, as we’ve seen, this is not a philosophical advantage, the other advantages of dual norms were enough to make this a good thing to do. Another way I like to think of this: if a low-complexity function cannot be quasirandom, then it ought to be the case that no restrictions of it can be quasirandom, since they too are built up from simple functions by means of a limited number of Boolean operations. This idea of being “non-quasirandom everywhere” is much better captured by duals of quasirandomness norms than by the quasirandomness norms themselves.

Having said all that, I still don’t really believe that I’ve just stumbled on a genuinely useful formal complexity measure. Here’s why.

We already more or less know that for bounded $k$ the duals of the $U^k$ norms don’t give us formal complexity measures, because a random formula probably has almost maximal dual norm. Therefore, we must have many pairs of functions such that $\|f \wedge g\|^* > \|f\|^* + \|g\|^*$. Probably in the worst case, the left-hand side is greater than the right-hand side by at least a constant factor depending on $k$.

Suppose we were very lucky and could prove that $\|f \wedge g\|^* \le (1+c_k)(\|f\|^* + \|g\|^*)$, for some constant $c_k$ that tended to zero with $k$. One might at first think that one could just choose $k$ nice and large and end up with a good enough approximation to a formal complexity measure to be able to prove some serious results. But unfortunately something else is decreasing at the same time: the largest possible dual norm of a Boolean function. Roughly speaking, the $2^k$th power of this is equal to the reciprocal of the probability that a configuration is degenerate, which is of order $2^{2k-n}$.

π What do you mean by a degenerate configuration?

:) Well, with the $U^3$ norm, for example, one takes an expectation over quadruples $(x,a,b,c)$. Associated with each quadruple is a three-dimensional structure that consists of the points $x + \epsilon_1 a + \epsilon_2 b + \epsilon_3 c$ with $\epsilon \in \{0,1\}^3$. Normally, these points are all distinct, but if some of them are equal then we say that the configuration is degenerate.

Anyhow, I’m pretty sure that the largest possible dual norm of a Boolean function is around $p^{-1/2^k}$, where $p$ is the probability that a configuration is degenerate. So it is not enough for that constant to tend to zero: it has to do so “faster”, in a certain sense, than $p^{-1/2^k}$ tends to 1. And there doesn’t seem to be any reason for that to be the case.

8) I’d like to try out another demolition job if I could.

:) Fine by me, since I’m feeling less and less emotionally attached to this particular idea.

8) Well, if I understand correctly your definition of the $U^k$ norm, there is a simple combinatorial interpretation of the $U^k$ norm of a Boolean function, at least if one ignores the degenerate cases.

:) Yes, I’m pretty sure there is. But why don’t you explain it?

8) Well, let $(x,a_1,\dots,a_k)$ be a non-degenerate $(k+1)$-tuple and let $f$ be a Boolean function taking values $\pm 1$. Let $V$ be the subspace of $\mathbb{F}_2^n$ generated by $a_1,\dots,a_k$ and let $W = x + V$. Then the product that appears in the definition of the norm is $-1$ if the number of points $w \in W$ where $f(w) = -1$ is odd, and $1$ otherwise. In other words, it measures the parity of $f$ inside $W$. So the $2^k$th power of $\|f\|_{U^k}$ is the average parity of $f$ in a random $k$-dimensional affine subspace of $\mathbb{F}_2^n$.

Thus, for a Boolean function to have a large $U^k$ norm, one needs the function to have even parity noticeably more often than odd parity, or vice versa.
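This combinatorial interpretation is easy to check numerically. The sketch below (small invented parameters) takes a non-degenerate tuple over $\mathbb{F}_2^4$ and confirms that the product in the definition of the $U^k$ norm equals the parity of $f$ over the coset $x + V$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, k = 4, 2
N = 1 << n
f = rng.choice([-1, 1], size=N)       # a random +-1 Boolean function

def span(a):
    # subspace of F_2^n generated by the directions a_1, ..., a_k
    s = {0}
    for ai in a:
        s |= {v ^ ai for v in s}
    return s

x, a = 3, (1, 6)                      # a non-degenerate tuple: 1 and 6 independent
V = span(a)
assert len(V) == 1 << k

prod_eps = 1
for eps in product([0, 1], repeat=k):
    y = x
    for e, ai in zip(eps, a):
        if e:
            y ^= ai
    prod_eps *= f[y]                  # the product from the U^k definition

parity = np.prod([f[x ^ v] for v in V])   # parity of f over the coset x + V
print(prod_eps == parity)  # True
```

For non-degenerate tuples the $2^k$ points $x + \epsilon \cdot a$ run exactly once over the coset, which is why the two quantities agree.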

:) A simple argument shows that the expectation is non-negative, so if there is going to be any bias of this kind it will have to be towards even parity.

8) OK, forget the “or vice versa”.

Now the parity of the restriction of $f$ to an affine subspace is an easily computable function, so if we believe that a random low-complexity $f$ is polynomially indistinguishable from a genuinely random $f$, then it would seem to follow that $f$ is guaranteed to have even parity almost exactly half the time. And now by linearity of expectation, it seems to follow that the expected parity is small. So even when $k$ is unbounded, it seems that the $U^k$ norm of a random low-complexity $f$ is small, and therefore that its dual norm is large.

:) Hmm … it sounds as though we’ve hit a fairly general obstacle here.

Oh, but hang on a moment. There’s a flaw in your argument. You correctly show that the expectation of the $2^k$th power of the $U^k$ norm of a random low-complexity function will be small, but how small will it be? Polynomial indistinguishability requires only that it should be at most $n^{-C}$ for any fixed $C$. But if we raise this to the power $2^{-k}$, we get $n^{-C/2^k}$. If we’ve chosen $k$ to be $\log n$, then this is $n^{-C/n}$, which is more or less equal to 1.

The point I’m making here is that if $k$ is as big as $\log n$, then even a very tiny bias, smaller than any fixed power of $n^{-1}$, in the number of affine subspaces where the parity is even, is detectable by the $U^k$ norm. And the polynomial indistinguishability doesn’t tell us that we can’t find bias that’s as small as that.

8) Hmm, I’m going to come back to this point, because I’m still very suspicious of it. And there’s also your own objection to worry about — the one about the largest possible dual norm getting smaller and smaller. In fact, if $k = \log n$ that looks very small indeed. And if the bias is as small as $2^{-n}$ then its $2^{-k}$th power is no longer more or less equal to 1.

:) Yes, but somehow that feels like an objection to one particular proposal for a complexity measure. And that proposal was, as you say, just a pure guess, with no thought for why low-complexity functions might turn out to have small dual norms. Perhaps some more carefully thought out quasirandomness norm might do better.

Also, I have to say that I don’t want to dismiss the dual of the $U^k$ norm until I have understood rather better why it doesn’t work.

September 28, 2009 at 6:49 pm |

This might be a good time to point to the paper of Timothy Y. Chow, http://math.mit.edu/~tchow/almost.pdf. Therein he shows that the Razborov-Rudich result provably fails if you relax the “largeness” condition slightly; specifically, if you only insist that a random function have the property with probability at least $2^{-q(n)}$ for a quasipolynomial function $q$, rather than $2^{-O(n)}$.

He gives Salil Vadhan’s short proof of this statement at the top of page 10.

September 29, 2009 at 7:00 am |

Incidentally, 8), the calculation of the $U^k$ norm can be derandomized unconditionally using “epsilon-biased” spaces.


September 29, 2009 at 4:35 pm |

Thought-provoking post. Re. the “desperate idea”, it seems that niceness on X cannot be decidable in polynomial time (under a complexity hypothesis). Razborov and Rudich show that any formal complexity measure must be high for most Boolean functions. The same argument seems to carry through to say that if there is any function that is not nice on X, then it follows from the closure (or approximate closure) of niceness under Boolean operations that most functions on X are not nice on X. But then any criterion for niceness on X can be used to distinguish between random functions on X and pseudorandom functions (since pseudorandom functions ought to be nice on X), and hence cannot be decidable in polynomial time.

Perhaps a good test for a complexity measure in general is to check that it’s small for simple functions in NC^1, such as Recursive Majority. Indeed, if a measure is small for a complete function such as Boolean Formula Evaluation or iterated multiplication over S_5, and if it’s “closed under AC^0 reductions”, it’s small for all of NC^1. This would typically be the “easy” direction of a lower bound, the hard part being showing that there’s a function in NP for which the complexity measure is high. Maybe for “interesting” complexity measures, there would be a different tradeoff between these two parts of the argument, where it would now be non-trivial to show that the measure is small for functions with small formula size, but the proof that there’s a function in NP for which the measure is large would be simplified.

September 29, 2009 at 6:14 pm

The “desperate idea” took that point on board, and was proposing a definition that was something like this: a function f is “simple” if there is a “structured collection” A of “structured sets” X such that the restriction of f to any X in A is “nice”. If f is a typical low-complexity function, then for any individual X one would expect that the probability that f has a nice restriction to X is small (if niceness is in P), for the reason you give. The desperate idea was that for any low-complexity f there was nevertheless a structured collection A of exceptional sets X for which f did have a nice restriction, and that this property was preserved (in some sense) by Boolean operations. I think what I’m writing here is consistent with what you write, so apologies if this clarification is not needed.

I get the general gist of your second paragraph (after looking up what NC^1 is, but I haven’t yet got on to iterated multiplication over S_5 or closure under AC^0 reductions). I like the idea of trying to move the tradeoff.

September 30, 2009 at 4:43 pm |

Thanks for the explanation; I understand better now.

With regard to the second paragraph, the idea was that complexity measures which are proved to be useful by induction (i.e., by analyzing the effect of each individual AND and OR gate) might turn out to be just as hard to work with as formula size itself. A different way to show that a complexity measure is useful is to prove it’s small for a problem that is complete for NC^1, and then argue that it is small for all of NC^1 by using reductions. Typically, complete problems are complete under some simple form of reduction, such as AC^0 reductions. AC^0 reductions are reductions computable by a polynomial-size constant-depth family of circuits, where the circuits consist of AND and OR gates with arbitrary fan-in. The nice thing about AC^0 is that it’s fairly well-understood as a complexity class (relatively speaking!). Linial, Mansour and Nisan showed that any AC^0 Boolean function is well-approximated in L^2 norm by the Fourier spectrum restricted to coefficients of weight at most polylog(n). As for completeness, there are some non-trivial (and simple) examples of problems that are complete for NC^1 – Barrington showed that multiplying a sequence of n elements of S_5 is complete, which is quite surprising since this can be done even by a finite automaton. An optimist might consider all this as evidence that NC^1 is a “simpler” and “easier to analyze” class than P.

September 30, 2009 at 5:09 pm

Exactly the same first sentence as yours!

Let me check that I really do understand what you’re saying—I’m fairly sure I do though. I haven’t checked, but I’m assuming that you focus on NC^1 because it is closely related to the set of functions of polynomial formula complexity, though if I’ve understood the definition correctly (bounded fanin, logarithmic depth, polynomial size circuits) then it doesn’t seem to be obviously identical. (In my head, it feels as though any NC^1 circuit can be expanded out into a polynomial-sized formula, but that some high-depth formulae cannot obviously be rewritten so as to have logarithmic depth.) But I’m also guessing that proving that some function is not in NC^1 would still be of major interest. So then you’re suggesting that to prove that using a complexity measure, it might be a good idea to do it in two stages. First, prove that some complete problem, such as multiplying permutations of a set of size 5 together (which can be done in NC^1 if you pair them up and multiply, then iterate that process until you’ve got a tree of permutations, each the product of the two before it) has low complexity according to that measure. Then prove that if you compose a low-complexity function with a function of bounded depth, you still have a low-complexity function. You suggest, but don’t quite say, that every function in NC^1 can be AC^0-reduced to the multiplying-permutations function. So then one would be done.

Even if that’s only approximately correct, what you say looks very interesting, and ties in a little bit with things that the characters say in later instalments of the dialogue. (What I mean here is that they consider computational models where you do one explicit function to scramble things up a bit and follow it by a random function from a rather simple model. The composition appears to end up looking a lot more random than either of the two ingredients that make it up.) And whether or not your approach could be made to work, it focuses the mind on a very interesting question: just what is it that is “simple” about iterated multiplication over S_5?

September 30, 2009 at 5:39 pm

Actually, NC^1 is equivalent to having polynomial formula size, but there’s a (standard) trick required to see that. A formula can be thought of as a binary tree, and one can always find a node in this tree such that the size of the subtree rooted at that node is between n/3 and 2n/3 (where n is the size of the tree). This allows us to “re-balance” the formula to depth O(log(n)). And yes, you’re right, every function in NC^1 can be AC^0 reduced to the standard complete problems, such as Boolean formula evaluation or multiplication over S_5.
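The node-finding step in the re-balancing trick can be made concrete: starting from the root, keep descending into the larger child; the first node whose subtree size falls to at most 2n/3 must still have size at least n/3, since each such step loses less than half the remaining nodes. A small sketch (my own encoding of formula trees as nested pairs, leaves as strings):

```python
def size(t):
    # A formula tree is either a leaf (a string) or a pair (left, right).
    return 1 if isinstance(t, str) else 1 + size(t[0]) + size(t[1])

def balancing_node(t):
    # Descend into the larger child until the subtree size is <= 2n/3.
    # Stepping from a node of size > 2n/3 to its larger child keeps at
    # least (size - 1)/2 >= n/3 nodes, so the result has size >= n/3.
    n = size(t)
    node = t
    while size(node) > 2 * n / 3:
        left, right = node
        node = left if size(left) >= size(right) else right
    return node

# A maximally lopsided formula on 5 leaves (9 nodes in total):
t = (((("x1", "x2"), "x3"), "x4"), "x5")
m = size(balancing_node(t))
assert size(t) / 3 <= m <= 2 * size(t) / 3
```

Cutting the formula at such a node and recursing on both pieces is what drives the depth down to O(log n).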

What makes me more hopeful about dealing with NC^1 rather than P is that regular languages are well understood, and AC^0 is somewhat well understood, so one might expect (naively) that it’s also possible to get a handle on functions that are AC^0-reducible to a regular language. In any case, I look forward to future instalments of the dialogue!

October 6, 2009 at 5:20 am

This is somewhat tangential to the main post, but I wanted to mention a related open problem, which is to find faster ways to compute the U^k norm of a function deterministically. For k=2, the FFT lets one compute this quantity in O(n log n) arithmetic operations. But for k=3, even with the FFT the fastest I can see is O(n^2 log n) (by using the recursive formula for the uniformity norms); more generally, the fastest algorithm I know of for the U^k norm is O(n^{k-1} log n). It may be that an improvement on this could lead to some further understanding of these norms (historically, there has been a reasonable correlation between the ability to quickly compute an expression, and the ability to control that expression).
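For the k=2 case, the identity behind the FFT computation is that the fourth power of the U^2 norm of f: Z_n -> C equals the sum of the fourth powers of the Fourier coefficients. A quick numerical check in Python (a naive O(n^2) DFT is used for clarity; an FFT would bring the cost down to O(n log n)):

```python
import cmath
import random

def u2_norm_direct(f):
    # |f|_{U^2}^4 = E_{x,h,k} f(x) conj(f(x+h)) conj(f(x+k)) f(x+h+k),
    # all indices taken mod n: an O(n^3) computation straight from the definition.
    n = len(f)
    total = 0
    for x in range(n):
        for h in range(n):
            for k in range(n):
                total += (f[x] * f[(x + h) % n].conjugate()
                          * f[(x + k) % n].conjugate() * f[(x + h + k) % n])
    return abs(total / n**3) ** 0.25

def u2_norm_fourier(f):
    # |f|_{U^2}^4 = sum_xi |fhat(xi)|^4, where fhat(xi) = E_x f(x) e(-x xi / n).
    n = len(f)
    fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / n) for x in range(n)) / n
            for xi in range(n)]
    return sum(abs(c) ** 4 for c in fhat) ** 0.25

random.seed(0)
f = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(8)]
assert abs(u2_norm_direct(f) - u2_norm_fourier(f)) < 1e-9  # the two agree
```

Nothing like this clean Fourier formula is known for k >= 3, which is why the recursive formula seems to be the best available route there.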

Of course, by random sampling (Monte Carlo integration, basically), one can quickly get the most significant digits of the U^k norm (assuming f bounded by 1, say), but this says nothing about the circuit complexity of these norms. I suspect though that the O(n^{k-1} log n) type bound for the U^k norm can be beaten.

October 6, 2009 at 12:32 pm

A related question that I find interesting is to find a polynomial-time algorithm for the U^3 dual norm. I haven’t completely convinced myself that it’s hard, but I certainly don’t see how to do it. In fact, I don’t even see how to do the following (but again I haven’t sweated over it). If you have a function with large U^3 norm, then you know that it correlates with a quadratic phase function on a Bohr set (or a bracket polynomial, or whatever your favourite quadratic structure is — or look at the equivalent problem for F_2^n and take straight quadratics). There are superpolynomially many such structures, so one cannot find one in polynomial time by brute force. But can one do so by clever techniques?

October 7, 2009 at 3:18 am

Tim’s question is answered in the F_2^n case for boolean functions in this paper:

Parikshit Gopalan, Adam R. Klivans, David Zuckerman: List-decoding reed-muller codes over small fields. STOC 2008: 265-274. (link)

They show that if $f$ agrees with a degree-2 polynomial on a sufficiently large constant fraction of inputs, then:

1) There are at most a bounded number of polynomials having agreement at least that large with $f$, and

2) They can all be found in polynomial time.

Even the first result is non-trivial because there are about 2^{n^2/2} degree-2 polynomials.

October 7, 2009 at 2:54 pm

I think Parikshit generalized some of these results, here:

http://research.microsoft.com/pubs/80251/Quadratic.pdf

although I haven’t read his paper carefully yet…