A conversation about complexity lower bounds, III

The story so far: :), 8) and :| have been discussing how it might be possible to prove lower bounds for formula or circuit complexity. They have come to some understanding of what a proof could not be like, and in particular they have looked at examples of proof attempts that are ruled out by the result of Razborov and Rudich on natural proofs. But :) has proposed basing a proof on the dual of the U^k norm, with k around \log n. Several observations have been made that suggest that this is very unlikely to work, but the approach has not been definitively killed off. In the next stage of the conversation, our characters investigate the approach in more detail, finding further ways to attack it. They end on an optimistic note, but that optimism will be shaken in later instalments of the dialogue.

Incidentally, I’ve decided to let the rate that I post instalments be governed by the statistics: when they suggest that most people who are going to read an instalment have done so, then I’ll post the next one. It’s not an exact science, of course.

*************************************************

:) Before we dismiss the U^k norm for unbounded k as a useful complexity measure, it still seems worth thinking about the following problem.

Problem. Let f and g be two Boolean functions. If \|f\|_{U^k}^*\leq A and \|g\|_{U^k}^*\leq B, then how big can \|f\vee g\|_{U^k}^* be?

:| I suppose quite a nice feature of that problem is that you can start right at the bottom, with k=2.

:) Yes indeed. I’ve been thinking about that. I have to say that it’s not obvious to me even that the answer is independent of n. A preliminary observation is that if f is a basic function, then \|f\|_{U^2}^*=1, and if f and g are two different basic functions, then \|f\vee g\|_{U^2}^*=2^{1/2}. Both these facts can be proved by “cheating” and using the fact that \|f\|_{U^2}^*=\|\hat{f}\|_{4/3}. The Fourier transform of f is 1 at one point and 0 everywhere else, while the Fourier transform of f\vee g is \pm 1/2 at four points and 0 everywhere else.
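
Just to be concrete, here is a minimal numerical sketch of that calculation (nothing clever: it is just the identity \|f\|_{U^2}^*=\|\hat{f}\|_{4/3} applied directly, with the Fourier transform normalized as an average over \{0,1\}^n; the choice n=4 is arbitrary).

    import itertools
    import numpy as np

    n = 4
    xs = np.array(list(itertools.product([0, 1], repeat=n)))    # all points of {0,1}^n
    chars = (-1.0) ** ((xs @ xs.T) % 2)                          # character matrix (-1)^{<r,x>}

    def u2_dual(f_vals):
        """||f||_{U^2}^* = (sum_r |hat f(r)|^{4/3})^{3/4}, with hat f(r) = E_x f(x)(-1)^{<r,x>}."""
        coeffs = chars @ f_vals / len(xs)
        return (np.abs(coeffs) ** (4 / 3)).sum() ** 0.75

    e1 = (-1.0) ** xs[:, 0]                                      # basic function e_1(x) = (-1)^{x_1}
    e2 = (-1.0) ** xs[:, 1]
    e1_or_e2 = np.where((e1 == 1) | (e2 == 1), 1.0, -1.0)        # OR, with +1 playing the role of "true"

    print(u2_dual(e1))         # 1.0
    print(u2_dual(e1_or_e2))   # 2^{1/2}, i.e. about 1.414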

Ah, I think I’ve come up with a fairly disastrous example. Choose a positive integer r and take f(x) to be a random function that depends just on x_1,\dots,x_r and g(x) to be a random function that depends just on x_{r+1},\dots,x_{2r}. Then f\vee g is a randomish function of x_1,\dots,x_{2r}. It’s not hard to show that the U^2 dual norms of f and g are something like 2^{r/4}. I’m not sure what the norm of f\vee g is, but I think it might be more like 2^{2r/4}=2^{r/2}. If so, then we’d be in big trouble.

:| What makes you think that the norm of f\vee g is that big?

:) Well, somehow f\vee g is a bit like a “tensor product” of f and g, and so the badnesses of f and g feel as though they could multiply together.

:| But what would the function be that norms f\vee g?

:) Good question. Let’s try to find one. I think that f norms itself and g does too. (If not, I can create explicit examples of quadratic phase functions where that is definitely the case.) So we could try taking fg as a norming function. Since f and g depend on disjoint sets of variables, it isn’t too hard to prove that \|fg\|_{U^2}=\|f\|_{U^2}\|g\|_{U^2}\approx 2^{-r/2}. Just to make things simpler, let’s look at f\wedge g instead. Then, as we’ve already commented, f\wedge g=(1+f)(1+g)/2-1=(f+g+fg-1)/2, so we want a lower bound for

\displaystyle \langle fg,-1+f+g+fg\rangle.

Now the pointwise product of fg with f is just g, and similarly the pointwise product with g is f. Also, since f and g are “independent”, the expectation of fg is essentially zero. So the first three terms are all small. The last one, since fg is a Boolean function, is 1. Therefore, \langle fg,f\wedge g\rangle is approximately 1/2. Since \|fg\|_{U^2}\approx 2^{-r/2}, we get that \|f\wedge g\|_{U^2}^* is at least about 2^{r/2}/2. So my prediction seems to have been correct, unfortunately.
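
Here, similarly, is a small exact check of that calculation for r=4 and n=2r (so everything is still tiny enough to compute the Fourier transform in full); the test function fg gives the lower bound \langle f\wedge g,fg\rangle/\|fg\|_{U^2} for the dual norm, exactly as above.

    import itertools
    import numpy as np

    r = 4
    n = 2 * r
    xs = np.array(list(itertools.product([0, 1], repeat=n)))
    chars = (-1.0) ** ((xs @ xs.T) % 2)

    def u2_norm(h):
        """||h||_{U^2} = (sum_r |hat h(r)|^4)^{1/4}."""
        coeffs = chars @ h / len(xs)
        return (np.abs(coeffs) ** 4).sum() ** 0.25

    rng = np.random.default_rng(1)
    f_table = rng.choice([-1.0, 1.0], size=2 ** r)    # random function of x_1,...,x_r
    g_table = rng.choice([-1.0, 1.0], size=2 ** r)    # random function of x_{r+1},...,x_{2r}
    f = f_table[xs[:, :r] @ (2 ** np.arange(r))]
    g = g_table[xs[:, r:] @ (2 ** np.arange(r))]
    f_and_g = np.minimum(f, g)                        # +1 only when f = g = +1

    # <fg, f^g> / ||fg||_{U^2} is a lower bound for ||f^g||_{U^2}^*; for large r it
    # behaves like 2^{r/2}/2, since <fg, f^g> is close to 1/2 and ||fg|| = ||f|| ||g||.
    lower = (f * g * f_and_g).mean() / u2_norm(f * g)
    print(u2_norm(f), u2_norm(g))                     # each of order 2^{-r/4} (constants matter at this tiny r)
    print(lower, 0.5 / (u2_norm(f) * u2_norm(g)))     # these agree up to the small "first three terms"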

If, as this calculation suggests, pointwise maxima somehow behave more like tensor products than sums, then one might expect \log\|f\|_{U^k}^* to be a formal complexity measure, or at least to have more of a chance of being one. But that’s not very encouraging, since the largest value it can take is not going to be more than \log(2^n)=n, so there would be no hope of proving superlinear lower bounds.

8) Well, I can’t say that came as a huge surprise.

:) The argument seems to apply to any dual norm if the original norm has what one might call the product property: that configurations in a product group G\times H are of the form ((g_1,h_1),\dots,(g_s,h_s)), where (g_1,\dots,g_s) is a configuration in G and (h_1,\dots,h_s) is a configuration in H. Unfortunately, that’s a pretty common property for these norms to have.

8) That’s the price you pay for proposing a complexity measure without any thought for why it should do the job.

:) I think that’s slightly unfair. I did at least think about why it wouldn’t trivially not do the job.

Actually, hang on a moment.

8) Groan.

:) No, bear with me for two more minutes. Probably this is nothing, but it occurs to me that we ought to think a little bit harder about the argument above before being certain that it applies to the U^k norm when k=C\log n.

8) Why? Where was the dependence on k?

:) Well, it looks like a merely technical point at first, but I’m starting to wonder. You recall that we found Boolean functions f and g for which we could prove that \|f\vee g\|^*\geq\|f\|^*\|g\|^*/2. Now in order for that to be significant, we need at least one of \|f\|^* and \|g\|^* to be bigger than \sqrt{2}. When k is bounded, that’s easy to achieve: you just pick a random function that depends on a small but bounded number of variables. But when k is unbounded, the number of variables it has to depend on becomes unbounded too. Indeed, a random function that depends on r variables will have U^k norm about 2^{-r/2^k}, so we appear to need r to be about 2^k, which is bigger than n. So here’s a potential reason that the U^k norm could behave genuinely differently when k passes the \log n threshold.

Of course, we’re still in the realms of guesswork here, but this suggests we need to think a little further.

:| Can I interject at this point?

:) Of course.

:| Well, doesn’t the argument you’ve just given show that no function has small U^k norm? After all, if random functions have the smallest U^k norms and you need more than n variables to get them to be small, then you seem to have a proof that all functions have large U^k norm. Doesn’t that just kill off the idea completely?

:) Oops. Not for the first time I feel quite embarrassed. Indeed, the U^k norm of an arbitrary Boolean function is at least N^{-1/2^k}, which becomes constant when 2^k=\log N=n. I was forgetting that when you calculate the U^k norm then the power you take is exponentially small in k, so k=\log\log N is actually a bigger function than it looks.

8) Can we stop this conversation now?

:) OK.

…………….

No, sorry, no, hang on a moment!

8) All right, but at the next collapse I’m off for some coffee, and I can’t promise you that I’ll be back.

:) Well, all I wanted to say was that :|’s observation doesn’t kill off the approach at all. In fact, the more I think about it, the more I wonder whether we shouldn’t in fact be encouraged by it.

8) Your optimism really does know no bounds.

:) Just let me explain what I mean. The point I want to make is that even if all U^k norms lie between 1 and 2 (or perhaps 1 and 1.01 or something like that), the U^k dual norm could still be a useful measure of complexity. It might be that all the norms of low-complexity functions were incredibly close to 1. Perhaps we would find that it was more natural to raise everything to the power 2^k, in which case the largest possible value would be N, which is back to being acceptably large. In short, what I’m saying is that what matters is behaviour within a scale: if the scale looks a bit narrow, then rescale it.

That looks like another desperate manoeuvre, and perhaps it is. But why did I say that it might even be positively encouraging rather than merely not disastrous? The reason is that the argument that the dual of the U^k norm is not a complexity measure really does fail when k is of order \log n, because then that seemingly innocuous division by 2 becomes much more important. So we’re back in the nice position of having a proposed measure of complexity, and no convincing reason for its not working.

8) Yes, but you don’t have any convincing reason in favour of its working either.

:) I fully admit that. All I’m saying is that we need to go back to the phase of trying to attack it.

There seem to me to be two natural lines of attack, and I can’t get either of them to work. The first is to look for a more sophisticated example of a pair of Boolean functions where \|f\vee g\|_{U^k}^* or \|f\wedge g\|_{U^k}^* is big given the values of \|f\|_{U^k}^* and \|g\|_{U^k}^*. The second is to try to come up with some low-complexity function f such that \|f\|_{U^k}^* is near to being maximal.

Let me start with the first approach. Now this is a bit strange, because so far, as a result of the division by 2, I haven’t even got an example where I’ve managed to show that \|f\vee g\|_{U^k}^* is bigger than \|f\|_{U^k}^*\vee\|g\|_{U^k}^*. But let’s have a look again at the example where f is a random function of r variables and g is a random function of a disjoint set of r variables. As I commented before, f\wedge g=(f+g+fg-1)/2. If k is small and r is big, then it seems pretty clear that fg is a good choice of function to norm (f+g+fg-1)/2. That is because \|fg\|_{U^k} is much smaller than any of 1, \|f\|_{U^k} or \|g\|_{U^k} so it is an efficient function to use for providing a lower bound for the U^k dual norm. But if k is very large, then the norms of all vectors are pretty similar and this argument breaks down. So we could for example consider taking f+g+fg-1.

But what is \|f+g+fg-1\|_{U^k}? Let’s assume that \|f\|_{U^k} and \|g\|_{U^k} are both at most 1-\epsilon. Then \|fg\|_{U^k} is at most (1-\epsilon)^2, so by the triangle inequality \|f+g+fg-1\|_{U^k} is at most 1+2(1-\epsilon)+(1-\epsilon)^2=(2-\epsilon)^2. It follows that \|f+g+fg-1\|_{U^k}^* is at least 4/(2-\epsilon)^2=1/(1-\epsilon/2)^2 (the 4 appears because f+g+fg-1=2(f\wedge g) takes the values \pm 2, so its inner product with itself is exactly 4). Unfortunately, this is smaller than 1/(1-\epsilon), which is the lower bound we know for \|f\|_{U^k}^* and \|g\|_{U^k}^*. But that could be because of our inefficient use of the triangle inequality. What, for instance, do we expect \|f+g\|_{U^k} to be when f and g are random Boolean functions of disjoint sets of r variables each?

Hmm … I don’t yet see how to answer this question precisely enough. So let me make a general remark that clarifies a little :|’s remark about the dual norm never being big. If k is C\log n and f is any function defined on \{0,1\}^n (not necessarily Boolean), then a trivial upper bound for \|f\|_{U^k} is \|f\|_\infty, since if |f(x)|\leq 1 for every x, then the average in the expression for \|f\|_{U^k}^{2^k} is also at most 1.

But for large k this is also a lower bound. Indeed, if f(u)=1, then when you choose a random x,a_1,\dots,a_k, the probability that x=u and a_1=\dots=a_k=0 is N^{-(k+1)}=2^{-n(k+1)}. It’s not too hard to prove that this implies that \|f\|_{U^k}\geq 2^{-n(k+1)/2^k}. So as long as 2^k is at least comparable to n(k+1), the U^k norm is equal to the L_\infty norm up to a constant. It follows that the U^k dual norm is equal to the L_1 norm up to a constant.
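
Since we keep appealing to “the expression for \|f\|_{U^k}^{2^k}”, here is a crude Monte Carlo sketch of it, just to pin down the average we keep talking about; the sample size is an arbitrary choice, and for a random f the true value is so small that the estimate is dominated by sampling noise, so this is only a rough probe.

    import itertools
    import numpy as np

    def uk_power_estimate(f, n, k, samples=20000, seed=0):
        """Monte Carlo estimate of ||f||_{U^k}^{2^k} = E_{x,a_1,...,a_k} prod_{eps} f(x + eps.a),
        for f : {0,1}^n -> R given as a table of length 2^n (addition is in F_2^n)."""
        rng = np.random.default_rng(seed)
        weights = 2 ** np.arange(n)
        eps = np.array(list(itertools.product([0, 1], repeat=k)))   # the 2^k vertices of the cube
        total = 0.0
        for _ in range(samples):
            x = rng.integers(0, 2, size=n)
            a = rng.integers(0, 2, size=(k, n))                      # the directions a_1,...,a_k
            corners = (x + eps @ a) % 2                              # the 2^k points x + eps_1 a_1 + ... + eps_k a_k
            total += np.prod(f[corners @ weights])
        return total / samples

    n, k = 10, 3
    f = np.random.default_rng(1).choice([-1.0, 1.0], size=2 ** n)    # a random Boolean function
    print(uk_power_estimate(f, n, k))                # near 0: the true value is tiny, and lost in the sampling noise
    print(uk_power_estimate(np.ones(2 ** n), n, k))  # exactly 1 for the constant function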

:| Doesn’t that make it useless as a complexity measure?

:) No. As I’ve already said, but perhaps not quite in these terms, I am now trying to understand the U^k and U^k dual norms at a finer level of detail than just equality up to a constant. In fact, if the approach stands any chance of working, I think it will be just this feature that distinguishes it from the U^k norms for bounded k and may lead to different behaviour.

8) I still don’t see any particular reason to suppose that for large k the U^k norm of every low-complexity function is large.

:) OK, you’re moving to the second line of attack, so let’s have a go at that. A first idea is that when k is bounded, if we believe the natural-proofs heuristic, then a random formula (I’m still thinking about formula complexity, by the way) gives an extremal example. So it makes sense to try to work out, or at least guess, what \|f\|_{U^k}^* is when f is a random low-complexity function. At the moment I don’t have a clear idea how to go about that. All I would point out is that, at least to start with, the functions one builds up all have norm 1. That’s because (-1)^{p(x)}\wedge (-1)^{q(x)}=(-1)^{p(x)q(x)}. Therefore, if f and g are polynomial phase functions of low degree (meaning functions like (-1)^{p(x)}, with p a low-degree polynomial), then so are f\vee g and f\wedge g.

:| Sorry to interrupt with a very basic question, but how do you prove that for \vee? Your proof applied to \wedge only.

:) Yes, but basically anything you can do for \wedge you can do for \vee. That’s because f\vee g=-((-f)\wedge(-g)), which is one way of stating de Morgan’s laws.

:| I see. And if f(x)=(-1)^{p(x)}, then -f(x)=(-1)^{1-p(x)}, so you’ve got another function of the same kind.

:) Exactly. So as I say, it takes a while to produce a Boolean function that doesn’t have norm 1. That is slightly disconcerting, because it makes it a little hard to see how an inductive argument would work. But maybe one could come up with a more sophisticated complexity measure that looked at all U^k dual norms at once and picked out the point at which they started to be large, or something like that.

Let me leave that question aside and turn to the question of trying to devise an explicit low-complexity Boolean function f such that \|f\|_{U^k}^* is large.

If you recall from way back in the conversation, the natural explicit example of a function with very low U^2 norm was a quadratic phase function. I suggested the function f(x)=(-1)^{x_1x_2+\dots+x_{n-1}x_n+x_nx_1}. Now why didn’t I just go for the simpler (-1)^{x_1x_2}? The reason is that if you use a low-rank quadratic form, then you can cut the function up into nice linear pieces: this function, for example, is constant on each of the four subspaces you get by fixing the coordinates x_1 and x_2. The result is that you don’t get a small U^2 norm after all.
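
For what it’s worth, here is a quick numerical illustration of that contrast for a small n, comparing the U^2 norm of the cyclic quadratic phase function with that of (-1)^{x_1x_2}.

    import itertools
    import numpy as np

    n = 10
    xs = np.array(list(itertools.product([0, 1], repeat=n)))
    chars = (-1.0) ** ((xs @ xs.T) % 2)

    def u2_norm(h):
        """||h||_{U^2} = (sum_r |hat h(r)|^4)^{1/4}."""
        coeffs = chars @ h / len(xs)
        return (np.abs(coeffs) ** 4).sum() ** 0.25

    cyclic = (-1.0) ** np.array([sum(x[i] * x[(i + 1) % n] for i in range(n)) % 2 for x in xs])
    rank_one = (-1.0) ** (xs[:, 0] * xs[:, 1])

    print(u2_norm(cyclic))     # noticeably smaller: 2^{-rank/4}, where rank is that of the associated bilinear form
    print(u2_norm(rank_one))   # 2^{-1/2}, about 0.707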

With this thought in mind, it would seem that the natural choice for an explicit Boolean function with almost minimal U^k norm would be a polynomial of degree k+1 that in some sense has “high rank” so that it can’t be built out of lower-degree polynomials.

Now a problem I have when trying to build an explicit polynomial phase function of high degree is that it doesn’t seem to be easy to find one for which I can prove that it really does have a small U^k norm. Incidentally, the unboundedness of k makes a difference here: when k is bounded we can just pick a random collection \mathcal{A} of sets of size k, define p(x) to be \sum_{A\in\mathcal{A}}\prod_{i\in A}x_i (that is a function from \mathbb{F}_2^n to \mathbb{F}_2), and set f(x)=(-1)^{p(x)}. Computing this function will take something like n^k steps (and can easily be done by a formula of that size too). But n^k is not polynomially bounded if k is unbounded, so we need to think of an explicit example.

For instance, we could try the obvious generalization of what I did in the case k=2 and consider the polynomial that you get when \mathcal{A} consists of the n sets of the form \{i+1,\dots,i+k\}, where addition is mod n.

But if we restrict to the subspace x_k=x_{2k}=\dots=x_{\lfloor n/k\rfloor k}=0, then this polynomial is identically zero. The codimension of this subspace is about n/k, so the U^k norm looks to me as though it is at least (2^{-n/k})^{1/2^k}, which is very close to 1, since n/(k2^k)\rightarrow 0. (Again, this argument does not work when k is bounded.)
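
Here is a tiny check of that restriction argument for toy values of n and k: every cyclic window of k consecutive coordinates contains one of the coordinates that have been set to zero, so every monomial, and hence the restricted polynomial, vanishes identically.

    import itertools

    n, k = 12, 3

    def p(x):
        """sum over i of x_{i+1}...x_{i+k}, indices cyclic mod n, arithmetic mod 2 (0-indexed internally)."""
        return sum(all(x[(i + j) % n] for j in range(k)) for i in range(n)) % 2

    fixed = [i for i in range(n) if (i + 1) % k == 0]    # the coordinates x_k, x_{2k}, ... (1-indexed)
    free = [i for i in range(n) if (i + 1) % k != 0]

    values = set()
    for bits in itertools.product([0, 1], repeat=len(free)):
        x = [0] * n                                      # the fixed coordinates stay 0
        for pos, b in zip(free, bits):
            x[pos] = b
        values.add(p(x))

    print(values)    # {0}: the restricted polynomial is identically zero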

That doesn’t really prove much, since I could obviously make some much cleverer choices of \mathcal{A}, though always with the strong constraint that the resulting polynomial must be of low complexity. But I can also make some much cleverer choices when I try to restrict the polynomial to a set where it is constant: for example, instead of restricting to a linear subspace, I can restrict to a degree-(k-1) submanifold of \mathbb{F}_2^n.

That’s where I’ve got so far. I have to say that the question I like best out of the ones I’ve just raised (either explicitly or implicitly) is the one about random low-complexity functions and their U^k norms and dual norms. If I could convince myself that a random low-complexity function almost certainly had a fairly large U^k norm, then I’d be pretty excited.

8) What you’re saying is reminiscent of a standard proof technique: the method of random restrictions.

:) What’s that?

8) Well, some nice lower bounds have been proved as follows. You take a function of low complexity, and you randomly fix a number of its coordinates. You then argue that this hugely simplifies the function — for instance, by making it almost constant. And then you point out that the parity function cannot be simplified unless you restrict all the variables. In this way it has been shown that the parity function has formula-size at least n^2 (this is a result of Andreev) and cannot be computed by a circuit of constant depth and polynomial size (this is a result of Håstad).

It seems to me that you are randomly restricting to a k-dimensional subspace and hoping that a very slight simplification will occur that encourages your Boolean function to have even parity. It’s not quite the same, since you don’t expect this simplification to be all that dramatic — just faintly detectable.

:) Hmm. I think it is fairly different, but it’s quite a nice way of thinking about the problem, because if we take a random low-complexity Boolean function and restrict it to a random k-dimensional affine subspace, the result ought itself to be some kind of random low-complexity Boolean function.

8) How do you mean?

:) Well, let X be a random k-dimensional affine subspace and let e_i be the basic function e_i(x)=(-1)^{x_i}. Then there are two possibilities. Either X is contained in one of the two subspaces x_i=0 or x_i=1, or we have two subspaces X\cap\{x_i=0\} and X\cap\{x_i=1\}, and the restriction of e_i to X is 1 on the first and -1 on the second. In other words, the restriction of a basic function is either constant or very similar to a basic function. And with high probability it is the latter, since a k-dimensional subspace will have constant x_i coordinate with probability 2^{-k}.

Since 2^k>n, this in fact shows that with reasonably high probability all the basic functions will end up non-constant when you restrict them to X. And then when you build up a low-complexity function f using Boolean operations, you do exactly the same to the restriction of f to X.
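
A quick simulation of that observation, for illustrative values of n and k with 2^k>n: sample random k-dimensional affine subspaces of \mathbb{F}_2^n and count how many of the basic functions become constant when restricted to them.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 16, 5                                         # 2^k = 32 > n
    eps = np.array(list(itertools.product([0, 1], repeat=k)))
    weights = 2 ** np.arange(n)

    def random_affine_subspace():
        """The 2^k points of a random k-dimensional affine subspace of F_2^n."""
        while True:
            x0 = rng.integers(0, 2, size=n)
            B = rng.integers(0, 2, size=(k, n))          # rows are the directions
            pts = (x0 + eps @ B) % 2
            if len(set(pts @ weights)) == 2 ** k:        # all points distinct, so the dimension really is k
                return pts

    counts = []
    for _ in range(200):
        pts = random_affine_subspace()
        counts.append(sum(len(set(pts[:, i])) == 1 for i in range(n)))   # basic functions that became constant

    print(np.mean(counts), max(counts))   # the mean should be roughly n/2^k = 0.5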

Does this mean that inside X there is a little copy of f? Not at all, because if you look at the intersections of the sets where the basic functions are constant, then you find that they don’t behave as they should. To give a trivial example, the intersection of all the sets \{x\in X:x_i=0\} is almost certainly empty.

And now my intuition is starting to desert me again. What I want to understand is what happens when I just build up a function randomly using Boolean operations. Could it be that then the random restriction cleverly manages not to lose information? I think what I mean by that is something like that one should be able to reconstruct the original function pretty well from its restriction to a random k-dimensional subspace.

That feels plausible somehow. But if it is true, then why would it help to produce a large U^k norm? One might think it would do the opposite, since all those random restrictions of random low-complexity functions would still look random. It’s not clear where the bias towards even parity would come in. But that thought may be orthogonal to what is really going on: even if a random restriction does indeed look pretty random, we are looking for just a small proportion of subspaces where it isn’t random. In other words, we are interested in a low-probability event.

8) If I had to evaluate what I’ve heard so far, I think the point where your approach is most vulnerable to attack is one you mentioned just before we started talking about random restrictions. You’ve just mentioned that if it were possible to take a random polynomial of degree k, then it would have a very low U^k norm. You then made a rather pathetic effort to come up with an explicit example that would do the job, and were encouraged by the fact that it failed. But perhaps there is a clever method of defining a pseudorandom polynomial of degree k.

:) Do let me know if you find one. But I’m having trouble doing so. And bear in mind that there’s an important distinction between explicitly defining a set system \mathcal{A} that looks like a random collection of sets of size k, and defining a low-complexity function f such that f is a polynomial of the form f(x)=\sum_{A\in\mathcal{A}}\prod_{i\in A}x_i, where \mathcal{A} looks like a random collection of sets of size k.

Also, now that I think about it, there is a better reason that my “pathetic” attempt, as you put it, failed. That is that the polynomial \sum_i x_{i+1}\dots x_{i+k} is almost always zero, so the polynomial phase function that results is almost always 1. Indeed, the probability that it is not zero if you choose a random x is at most 2^{-k}n, which is very small when k is 2\log n, say. One could cure that by taking a sum of 2^k monomials, but if it’s a small challenge even to get the polynomial not to be almost constant, a lack of quasirandomness that even the U^1 seminorm will pick up, think how much harder it will be to make it quasirandom for the U^k norm.

Ah, but I’ve now had an idea. It’s a way of building up a random polynomial of degree at most k and complexity at most m. I’m going to think of it as a function from \{0,1\}^n to \{0,1\} rather than to \{-1,1\}. I then allow myself three operations, which are f\mapsto 1-f, (f,g)\mapsto fg and (f,g)\mapsto f+g. The first two of these are Boolean operations (NOT and AND) and the third, addition mod 2, or parity, is easily built out of Boolean operations.

:| If you can easily build it out of the other two operations, then why include it?

:) Because I’m going to insist that it is used a lot. Once I’ve got these operations, I can build up functions rather as one does with circuit complexity: I just take a sequence of functions f_1,\dots,f_m such that each f_r is built from earlier functions f_s and f_t, or possibly just f_s, using one of the permitted operations. Because parity can be built easily out of \wedge and \vee (and I should have mentioned that multiplication is \wedge), the circuit complexity of f_m is at most C m. Also, the degree of f_r is the maximum of the degrees of f_s and f_t if f_r=f_s+f_t, and the sum of their degrees if f_r=f_sf_t (and the degree of f_s if f_r=1-f_s). Therefore, if I want to keep the degree down, then all I have to do is make sure that I use multiplication only a limited amount.

One way of doing that is as follows. I start by producing a “layer” of polynomially many functions of the form \sum_{i\in A}e_i, where the e_i are basic functions. From these, I form a second layer of polynomially many functions, each of which is a product of two functions from the first layer. Then for the third layer I take sums of functions from the second layer, and so on. I can keep this up for \log_2k layers before I will start getting polynomials of degree k.
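
Here is a minimal sketch of that layered construction for toy parameters, with every function stored as a truth table over \mathbb{F}_2; the width of each layer and the random choices are arbitrary illustrative decisions, and after \log_2k product layers the degree is at most k.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n, width, k = 12, 20, 8                          # k = 8, so log_2(k) = 3 product/sum rounds
    xs = np.array(list(itertools.product([0, 1], repeat=n)))

    # Layer 1: random F_2-linear forms sum_{i in A} x_i, as truth tables mod 2.
    layer = [(xs @ rng.integers(0, 2, size=n)) % 2 for _ in range(width)]

    for _ in range(3):
        # Product layer: each new function is a product of two random earlier ones (degree at most doubles).
        products = [layer[rng.integers(len(layer))] * layer[rng.integers(len(layer))]
                    for _ in range(width)]
        # Sum layer: each new function is a random F_2-sum of the products (degree does not grow).
        layer = [sum(m * q for m, q in zip(rng.integers(0, 2, size=width), products)) % 2
                 for _ in range(width)]

    p = layer[0]                                     # a random polynomial of degree at most k
    f = (-1.0) ** p                                  # the corresponding polynomial phase function
    print(p.mean(), f.mean())                        # roughly balanced, unless the random choices degenerated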

My question now is this. If I do that randomly, then do I get a polynomial p(x) that is random enough for the corresponding phase function (-1)^{p(x)} to have a small U^k norm? What we’ve done is sufficiently linear that it’s not even obvious that its U^2 norm will be all that small.

:| That at least I can answer. If you let your first layer consist just of the basic functions, your second layer consist of a random collection of products of basic functions, and your third layer be the sum of all those products, then you have a random quadratic, and I thought you said that random quadratic phase functions had tiny U^2 norm.

:) Oh yes, sorry. So we can be pretty certain that the random polynomials I’ve just constructed will have almost minimal U^d norm for all bounded d. But what happens when d gets to k=\log n? I can see a problem arising at the last layer. There we would be wanting to take the sum of a whole lot of functions, each of which would typically be a product of two polynomials of degree k/2. I don’t know, but it sounds to me as though such a polynomial might be rather “simple” and “of low rank”. And it might be that you had to add together a superpolynomial number of “low-rank” polynomials to get a “high-rank” one.

:| I’m a bit confused here. Ultimately what you want to prove is that all functions of polynomial complexity have large U^k norms (and even small U^k dual norms). In order to test this hypothesis you are attempting to find a counterexample. And in order to do that you are going to a lot of effort to ensure that your example is a polynomial of low degree. But why bother? All you need is a counterexample. Why not just take a random function of complexity m?

:) It’s because that would result in a mere rephrasing of the question. It seems almost certain that either the hypothesis is true or a random function of complexity m is a counterexample to it. But I don’t know how to argue that random functions of complexity m are so random that even the U^k norm fails to detect any nonrandomness. So instead of looking for a generic counterexample I am looking for the simplest one. And that is why I am restricting my attention to polynomials of degree at most k. If there seem to be reasons for such an example not to exist, it may give us some kind of hint about why the hypothesis could after all be true.

8) OK, I think we can all agree that this problem doesn’t look completely straightforward. But still, it seems possible that somebody will have a clever idea for producing such a polynomial — indeed, your random polynomial may even be such an idea — and if they do then your U^k dual proposal is destroyed instantly.

:) I admit that. But it looks to me like a nice problem, unless there’s some construction that computer scientists already know about, in which case I can’t really think of any consolation, other than the weak one that I will have learnt something.

8) If it failed for that reason, then perhaps you could analyse the failure and come up with a new barrier to proving complexity lower bounds.

:) I suppose that would be quite nice if it could be done.

:| I’ve thought of another way of producing randomish polynomials of degree k in n variables. Perhaps it might work.

:) Uh oh. What is it?

:| I can think of many variants, but I’ll choose one arbitrarily. Pick some m that’s at most polynomially large in n and choose a random function \phi:\{1,2,\dots,m\}^2\rightarrow\{1,2,\dots,n\}. I’m going for a function of polynomial circuit complexity, but if I wanted it to be in P I could replace “random” by “pseudorandom” there.

Now, given a Boolean sequence x=(x_1,\dots,x_n), form the m\times m matrix A(x)=(a_{ij}), where a_{ij}=x_{\phi(i,j)}. In other words, fill up the matrix with the values taken by the sequence x. At this point one could just take the determinant, which would be a polynomial of degree m, and I don’t see any reason for it to have a large U^k norm. But you insist on a polynomial of degree k, so we have to be slightly cleverer. What I suggest here is to calculate the characteristic polynomial of A(x) and then the coefficient of \lambda^{m-k} in that polynomial. That will be a polynomial of degree k in the variables x_1,\dots,x_n, and that is my proposed “random looking polynomial of degree k“.
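
For concreteness, here is a toy rendering of that construction (leaving aside for the moment how one would compute it efficiently; sympy is used below only as a convenient way of getting the characteristic polynomial of a very small matrix, and m, k and the random \phi are arbitrary illustrative choices).

    import numpy as np
    import sympy

    rng = np.random.default_rng(0)
    n, m, k = 10, 6, 3
    phi = rng.integers(0, n, size=(m, m))            # a random map from {1,...,m}^2 to {1,...,n} (0-indexed here)

    def p(x):
        """The coefficient of lambda^{m-k} in the characteristic polynomial of A(x), reduced mod 2."""
        A = sympy.Matrix(m, m, lambda i, j: int(x[phi[i, j]]))
        coeffs = A.charpoly().all_coeffs()           # [1, c_{m-1}, ..., c_0]; the lambda^{m-k} coefficient is coeffs[k]
        return int(coeffs[k]) % 2

    for _ in range(5):                               # evaluate the degree-k polynomial on a few random inputs
        x = rng.integers(0, 2, size=n)
        print(list(x), p(x))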

:) Can we just think about how one calculates the characteristic polynomial in polynomial time?

:| Well, it’s just the determinant of \lambda-A(x), and calculating determinants can be done in polynomial time, by using row operations.

:) I’m happy with that when you’ve got a matrix of numbers, but it doesn’t seem quite so straightforward when you’ve got variables involved.

:| Oh yes …

Right, I’ve done a search on the web, and it seems that there are polynomial time algorithms for calculating the characteristic polynomials of 01 matrices over the reals (often known as “chemical graphs” in the literature). So one could just reduce mod 2.

:) Oh dear. If that’s right then it does indeed look like a pretty random polynomial of degree k. I can’t think offhand of a reason that it ought to have “low rank”.

8) One way to think about that is to see exactly what it is. You put all your x_is into an m\times m matrix and then you sum up all the determinants of all the k\times k submatrices, without worrying about signs since we’re working in \mathbb{F}_2. Each determinant is itself a sum over all products of k entries, where you pick exactly one from each row and each column. Since each product determines the k\times k submatrix it has come from, the entire polynomial is the sum over all products of k entries of the matrix that do not have two entries from the same row or the same column.

:) I have two remarks here. The first is that it’s not so obvious that the determinant has polynomial formula size, so perhaps there is still some possibility of using the U^k dual norm as a useful formal complexity measure. The other, which I don’t think you will like, is that I can’t think how I’d go about estimating the U^k norm of your “random-like polynomial”. So if I believe strongly enough that the U^k norm of a function of polynomial circuit complexity must be large, then I can believe that the very fact that your polynomial is in fact computable in polynomial time implies that it has large U^k norm.

8) You must be getting truly desperate to have said that.

:) I sort of am. But what I’m really saying is that if we don’t manage to come up with two Boolean functions f and g such that the U^k dual norm of f\vee g is quite large, given the U^k dual norms of f and g (all this relative to the very fine scale that we must take, given that the norms of all Boolean functions are the same up to a constant), then I won’t feel I can completely rule out the existence of some inductive argument that shows that an explicit function has superpolynomial formula complexity. Who knows, it might even be the function that :| has just thought of.

8) You’ve switched back to formula complexity again. But earlier you were talking about circuit complexity. You clearly haven’t let go of your dream of proving that P\ne NP, but neither have you given the slightest reason to suppose that the U^k norm of a function of low circuit complexity must be large. And :| has given us an example of a polynomially computable function that it’s very plausible to suppose has an almost minimal U^k norm.

:) I almost agree with you. It’s just that I don’t see any strong arguments for whether :|’s function has a large U^k norm or a small one. Perhaps the fact that determinants are used gives the function some special structure that causes it to have “low rank” in some sense. I feel as though more thought is needed before we accept this example.

In fact, here’s an argument that this determinant construction isn’t all that different from my random polynomials construction. Unfortunately, owing to the annoyances of characteristic 2, it only seems to work when k is odd. But in that case recall that we are summing over a particular set of products of k-tuples of entries from the matrix, namely the ones that do not involve two entries from the same row or column. To evaluate that we could take each entry of the matrix in turn, remove its row and column, work out the corresponding degree-(k-1) polynomial of the resulting (m-1)\times(m-1) matrix, multiply it by the given entry, and add. This counts each k-tuple that we want to count k times, which is fine if k is odd.

That argument shows that the polynomial you have defined is a sum of m^2 polynomials that are “simple”, in the sense that they are products of a linear polynomial with a polynomial of degree k-1. So we’re back to the question we had earlier: in order for a degree-k polynomial to have sufficiently high rank for its U^k norm to be very small, is it enough to take a sum of only polynomially many “simple” polynomials? Given that k depends on n, I don’t think we can claim that the answer is obvious.

It may not be much, but we seem to have arrived at a purely mathematical question that would shed quite a lot of light on this whole approach (either demolishing it or giving quite a good heuristic reason in its favour, depending on whether or not a polynomial number of “simple” degree-k polynomials can be considered as having high rank).

:| I don’t know why you keep going on about this. My intuition tells me that you shouldn’t need more than n simple polynomials to make a truly complex one.

:) Can you back that up?

:| A little, as follows. Consider the following simple polynomial of degree k. I take two polynomials p and q of degree k-1 and I define f(x) to be p(x) if x_1=0 and q(x) if x_1=1. I can write f(x) as (1-x_1)p(x)+x_1q(x), so f is a simplish polynomial of degree k, according to your definition. Now let’s suppose I build random polynomials f_i like that, one for each coordinate i, and define f(x) to be f_1(x)+\dots+f_n(x). Then f looks to me like a polynomial that’s “truly of degree k“. In particular, a k-dimensional subspace isn’t going to detect any unusual correlations, as far as I can see, because as you change coordinates you keep changing from one random degree-(k-1) polynomial to another. And because we’ve got a simple polynomial for each coordinate, we can’t avoid these changes of coordinate. This isn’t a rigorous proof, but it convinces me for the time being.

:) Ah, but you haven’t said how to produce all those random polynomials of degree k-1. You can’t just iterate the procedure, because then you’ve got to build each of those polynomials out of 2n polynomials of degree k-2, and so on, and the computational complexity of the eventual polynomial is (2n)^k or something. So you have to recycle the lower-degree polynomials (as you clearly do in both my random-polynomials construction and your determinant one).

:| Agreed, but all I was trying to criticize was your intuition that when k is unbounded you need a superpolynomial number of simple polynomials to create one with small U^k norm. If you want to prove that a polynomial of low complexity cannot have small U^k norm, it is now clear that you can’t just look back one layer in the way it is constructed, but will have to argue from the entire construction, as you were starting to do just now. And that looks a lot harder.

:) Unless we could prove somehow that the U^k dual norm was a good complexity measure. Then we’d be done by induction, for formula size at least.

By the way, a small change to my random-polynomials construction makes it look more like your determinant one. You build up layers as follows. If you’ve got a layer j that consists of a bunch of polynomially many polynomials of degree j, then you form a layer j+1 by taking polynomially many random sums of the form q(x)=\sum_{i\in A_q}x_ip_{i,q}(x), where A_q\subset\{1,2,\dots,n\} and each p_{i,q} is a polynomial drawn from layer j. The advantage of this is that it goes on for k layers rather than \log k layers, so it looks more random. In fact, it looks more random than your determinant construction, so I’m no longer sure that yours affects my perception of the chances that low-complexity functions have large U^k norms.

By the way, I take your point about the lemma I was hoping for being highly likely to be false. But my point is that I still don’t find it clear whether we can build up an explicit degree-k polynomial with almost minimal U^k norm.

8) But that’s still the weakest-looking point of your approach in my view.

:) Quite possibly.

:| I don’t know whether I agree. I’ve just done a bit of Googling and got led to a paper by Viola and Wigderson that contains a number of ideas that are quite close to the ideas we are discussing here. Of particular relevance is a theorem that exhibits a polynomially computable function that has low correlation with any degree-d polynomial. But the bound that they get for the correlation is of the form \exp(-\alpha n/2^d). Since functions with very small U^{d+1} norm have very small correlation with degree-d polynomials …

:) How did you know that? It’s true, but you seem to have picked it up very quickly.

:| A form of telepathy. Anyway, what I’m saying is that if the result of Viola and Wigderson is state of the art (as it may well be given that it was published in 2008), then it suggests that finding an example to demonstrate that your approach doesn’t work may be a challenging problem, even if it doesn’t work.

8) I wouldn’t celebrate just yet. It suggests to me that getting the approach to work could be a very challenging problem even if it does work.

:) I don’t know whether you have any grounds for saying that, but it is obviously an important issue. Let’s suppose that it really is true that the U^k dual norm is small for Boolean functions of low complexity — and as :| has pointed out, there is probably no example known that contradicts this.

8) I have to admit that that is a point in your proposal’s favour.

:) And let’s suppose also that there exists some function in NP that has a large U^k dual norm. There still remains the question of whether proving these two facts is any easier than proving that P\ne NP.

8) How can it be easier if it actually implies that P\ne NP?

:) Oh dear, that’s one of those philosophical questions that keeps coming up. But this one has an answer, which, if you believe that P\ne NP, or even if you don’t know how to prove that P=NP, you have to take seriously. If you want to prove that A implies C, it may be that there exists a statement B such that A implies B easily and B implies C easily, but it may also be that it is hard to think of statement B. (What I mean about P and NP is that it’s easy to see that B works when you’re told it, so what I’m saying doesn’t make sense if P=NP.)

When one is trying to prove that A implies C, one often tries to find a statement B that will have this nice property. If you have some candidate for B, then one obvious thing that can go wrong is that either A doesn’t imply B or B doesn’t imply C. But another thing that can go wrong is that B so obviously implies C that proving that A implies B is no easier than proving that A implies C. So my question is whether, if it is true that the U^k dual norm of every low-complexity function is small, proving that is easier than proving that P\ne NP or whether it’s typical of the kinds of problems that are already known to be excruciatingly difficult.

One reason to suspect the latter is that it deals, implicitly at least, with polynomials of degree \log n, and that seems to be the point at which people don’t know how to prove anything. But then again, we wouldn’t be trying to improve the result of Viola and Wigderson: we’d be hoping that it couldn’t be improved! And perhaps if we thought hard enough about the U^k dual norm we’d be able to say something interesting about the norms of f\vee g and f\wedge g. There seems to me to be at least a chance, if a slim one, that this property is an intermediate property of the kind one wants — not something that makes the whole problem easy, but at least something that splits it into two parts that are strictly easier than the original problem.

21 Responses to “A conversation about complexity lower bounds, III”

  1. Emanuele Says:

    Hi,

    I thought I’d point out a recent survey on correlation bounds for polynomials: http://www.ccs.neu.edu/home/viola/papers/viola-sigact-gf2.pdf.

    Let me also take advantage of this to mention a few things:

    Exhibiting an explicit function (on n bits) that has small correlation (1/n) with polynomials of low degree (log n) is necessary to prove circuit lower bounds (if for every distribution you can find a low-degree polynomial that correlates with f, then f can be written as a threshold of few such polynomials).

    We have no reason to believe that this is hard to do (in particular, Razborov-Rudich is not known to apply to polynomials). In fact, Razborov and Smolensky did prove some correlation bounds for high-degree polynomials. While their bounds are worse than 1/n, there is no excuse like “we cannot say anything about these things.”

    Here’s a candidate hard function: divide the input into blocks of sqrt(n) bits. Take Mod3 in each block, take parity of the results.
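
    For concreteness, here is one way of reading that candidate in Python (taking “Mod3” to be the gate that outputs 1 when the block sum is divisible by 3, and blocks of size roughly sqrt(n); these particular conventions are guesses, not part of the candidate itself):

        import math

        def candidate(x):
            """x is a list of 0/1 bits; blocks have size about sqrt(len(x))."""
            n = len(x)
            b = max(1, math.isqrt(n))
            blocks = [x[i:i + b] for i in range(0, n, b)]
            mod3_bits = [1 if sum(block) % 3 == 0 else 0 for block in blocks]
            return sum(mod3_bits) % 2                # parity of the Mod3 outputs

        print(candidate([1, 0, 1, 1, 0, 1, 0, 0, 1]))    # a 9-bit example with blocks of size 3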

  2. gowers Says:

    Dear Emanuele,

    I am very interested by what you write (and also by the survey paper you mention, which I discovered and printed out in between writing and posting this instalment). I would like to understand it better, so let me ask you two questions.

    I don’t see why it matters if f can be written as a threshold of polynomials of degree \log n, because those polynomials don’t seem to me to be obviously computable in polynomial time. But perhaps there is some clever argument here that I don’t know, since the number of polynomials of degree \log n appears to be comparable to the number of polynomial-size circuits.

    Assuming, however, that it does matter, is the proof of the result you mention in parentheses something like this? I am guessing that “threshold of few such polynomials” means a reasonably general linear combination and not just some majority-like function. (Probably you can get them to be roughly equivalent by repeating polynomials.) Anyhow, if f is not a linear combination, with certain natural properties, of few such polynomials, then the Hahn-Banach theorem should give some kind of linear functional that separates f from all those polynomials and also has some other properties. That, I can imagine, would translate into a distribution with respect to which f does not correlate with any low-degree polynomial.

    The problem of trying to obtain improved correlation results for the function you mention looks very interesting. I am particularly interested that you think it is a problem that is highly relevant to circuit lower bounds but potentially easier. For similar reasons I liked Ryan’s problem about threshold circuits. In both cases, I wonder whether they might make good Polymath projects.

    • Emanuele Says:

      Hi Tim,

      thanks for the interest (and the blog).

      I am not sure I completely understand your first question, but let me try to answer anyway.
      All I meant is that a polynomial of degree log n has only quasipolynomially many (n^{O(log n)}) terms and so it can be computed by a circuit of size (n^{O(log n)}).
      Is the question about the difference between n^{O(1)} and n^{O(log n)}?
      If so, indeed the circuit has slightly superpolynomial size. On the other hand, it has a very simple structure (just a threshold of polynomials).

      Regarding the proof: indeed one can have a majority-like function by just repeating polynomials (i.e., the weights in the threshold only have polynomial magnitude).
      If I understand it correctly, the proof you describe is similar to the one I have in mind.
      The paper by Goldmann, Håstad, and Razborov
      http://people.cs.uchicago.edu/~razborov/files/pt1.ps (Theorem 10) has a simple proof using the min max theorem: if for every distribution D on inputs one can find a polynomial that correlates well (1/n) with f with respect to D, then by min max there is a distribution on polynomials that for every input x correlates well with f(x). By concentration of measure, the sum of poly(n) independent polynomials has correlation exponentially close to 1 with f (and so some choice of the polynomials will work for every input x).

      I am absolutely thrilled at the idea of having a polymath project on correlation bounds for polynomials! I would definitely like to be involved.

      I think Ryan’s problem about threshold of thresholds would be great too.
      I have thought more about polynomials, so my opinion is severely biased, but let me try to comment more on the two problems.
      Let me stress right away the obvious thing that I feel I have absolutely no clue of which problem is more interesting or easier.

      The threshold of thresholds problem looks “close” to being related to the “natural-proofs barrier:” It seems that it is known how to compute pseudorandom functions using only 4 levels of majority gates (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.5547&rep=rep1&type=pdf), and in general, I think, few levels of majorities can do amazing things. On the other hand, polynomials may be “further” away. (Or maybe not? An interesting point here might be trying to see if low-degree polynomials correlate with pseudorandom functions, which might have bearing on this research via an extension of the Razborov-Rudich approach. I tried this a bit but in vain.) Finally, for polynomials we can already say something for high degree, and correlation bounds with respect to the uniform distribution would give pseudorandom generators as a bonus.

      Anyway, I think any project on low-level lower bounds would be exciting.

    • gowers Says:

      Yes it was indeed just a question of whether your remark applied to beating n^{\log n} rather than n^{O(1)}. I think the proof you refer to must be exactly the kind of argument I had in mind: previous experience suggests that where I say “Hahn-Banach”, computer scientists say “min-max”.

      A quick question: what is the best-known explicit function for not correlating with quadratics?

      Also, apologies that your comments take a while to appear. For some reason they are being treated as spam (the above one perhaps because it had two links). I don’t know if there’s some way I can mark you as a legitimate commenter — I’ll look into it.

    • Emanuele Says:

      Hi Tim,

      for quadratics, there is a result by Frederic Green that computes exactly the correlation between parity and polynomials modulo 3 (the input is still {0,1}^n).
      ( http://mathcs.clarku.edu/~fgreen/papers/quad.pdf ).
      To my knowledge, no such exact result is known in any other case. His paper also discusses some of the difficulties in extending his result.

      For polynomials modulo 2, I am not aware of any result which improves on the \exp(-\alpha n/2^d) bound for any d > 2; I think that it would be exciting to prove any bound that is better than 2^{-n/2^d}.

      As is well-known, this n/2^d in the exponent comes from the iterated Cauchy-Schwarz approach. The same loss appears in the best known lower bounds in multiparty communication complexity, so I think it would be great if one could understand this better.

    • Ryan O'Donnell Says:

      @Emanuele: Yeah, it’s basically the case [Yao90] that ACC0 is contained in MAJ o MAJ o MAJ. I think Yao showed that bounded-depth circuits with arbitrary AND, OR, NOT, and MOD gates of quasipoly size are computable by MAJ o MAJ o AND_{polylog} circuits of quasipoly size. So the lower bound scene looks grim even once you step up from THR o THR to MAJ o MAJ o MAJ. In that sense, I think THR o THR lower bounds are a bit of a dead end, since even if you get them, you would despair of extending things to the next level. (Still, I think it’s a kind of fun and possibly tractable dead end.) Correlation bounds for log-degree polynomials seems a bit more open-ended to me.

    • Emanuele Says:

      Sorry, I still cannot get that paragraph to show right. One last try.

      What I meant to say is that alpha is related to constructions of small-bias generators, that for degree two I would guess something can be done using the fact that the polynomials can be diagonalized up to a linear transformation of the variables, and that for degree bigger than two I think that any bound better than alpha = 1 would be interesting.

      [I've corrected the first attempt now, and deleted the failed attempt to correct it.]

    • gowers Says:

      A very minor correction to what you say. I think the bound to beat is \alpha=1/2. For example, here’s a sketch of how to obtain a bound of 2^{-n/4} in the case of quadratic correlation. Let \kappa be a random cubic, by which I mean that \kappa(x)=\sum_{\{i,j,k\}\in A}x_ix_jx_k for some random collection A of triples. Now let f(x)=(-1)^{\kappa(x)}. Then by two applications of Cauchy-Schwarz one can show that

      \displaystyle |\mathbb{E}_xf(x)|^4\leq\mathbb{E}_{u,v,x}(-1)^{\kappa(x+u+v)-\kappa(x+u)-\kappa(x+v)+\kappa(x)}.

      The exponent on the right-hand side is a trilinear function of x,u,v plus a function of u and v only. The randomness of A should make the expectation over x zero almost always. Indeed (and this is the bit where I am being sketchy and possibly even incorrect, but I hope not) I believe the probability (over u and v) that it is 1 rather than 0 is pretty close to the trivial minimum of 2^{-n}. If that argument works, it gives 2^{-n/4} for quadratic correlation and in general it gives 2^{-n/2^{d-1}}. (It can also be regarded as the natural generalization of the argument that gives 2^{-n/2} in the linear case.) Sorry, I forgot to say that the above argument works if you add an arbitrary quadratic to \kappa, which is how one deduces something about quadratic correlation.

      One might object that a random cubic doesn’t give a uniform family of functions, but I suspect that one could derandomize it somehow.

      Another question I wanted to ask. It seems that in the case of the Gauss sum \sum_{x=0}^{p-1}e(ax^k/p) much better bounds are known than you get from iterated Cauchy-Schwarz arguments. For instance, I think you can get (k-1)p^{-1/2} by using Weil’s results. I would guess that the same is true if you replace x^k by any polynomial of degree k. If so, then it is slightly surprising if there isn’t some analogue for \mathbb{F}_2^n, but perhaps there are good reasons for its being a genuinely different problem. (For instance, the number of quadratics is superpolynomial in 2^n so one might expect correlation results to be harder to prove.)

    • Emanuele Says:

      Regarding your correction, there has to be something very simple I don’t see:
      Are we thinking of the bound 2^{- \alpha n/2^d}?
      Then your example d = 2 achieves just 2^{-n/4} which indeed corresponds to \alpha = 1, right?

      Perhaps some of (my?) confusion comes from the fact that actually n/2^d is a bit better than what is stated in some papers, which apply Cauchy-Schwarz d+1 times to handle degree d.

      Anyway, in your argument, shouldn’t it be easier to take \kappa to be a truly random function, rather than a random cubic? I would guess that if you can make the proof go through in that case then you can derandomize it (that’s what some of the known bounds do).

      Ryan: I was wondering if you have some specific motivation for the cylinder intersection problem you mention. Is it connected to some problem in 3-party communication complexity? (By the way, I think superpoly lower bounds for bounded-depth circuits are due to Ajtai independently of FSS, then Yao improved them to get oracles that separate PH, i.e., superquasipolynomial bounds, I think.)

      (I hope the latex will show up right; I wish there was a way to preview comments before posting them.)

    • Emanuele Says:

      Regarding the question on Gauss sum and Weil’s results, just for the record let me say that I don’t know: I think a few people thought about Weil’s bound but did not see how to make progress on the GF(2) problem.

      From a biased computer science perspective, perhaps it may not be too surprising that the GF(2) problem may be different from the problem in larger domains.

    • Ryan O'Donnell Says:

      @Emanuele: Oops, yeah, I meant to write Ajtai when I wrote Yao. Thanks. And yes, the motivation for Sergey’s problem is 3-party communication complexity… unfortunately I don’t remember the exact details.

    • gowers Says:

      Responding to Emanuele’s last comment but one, I wrote something stupid. I think the bound you give for \alpha can be improved by a factor of 2, but what I should have said is that you can take \alpha=2. Or at least, I’m pretty sure of this. So when d=2 you get 2^{-n/2}, which is indeed the bound you get from quadratics. And then the exponent goes down by a factor of 2 each time.

      Hang on, I’m being even more stupid than I thought. I keep getting confused between the degree of the polynomial I am thinking about and the degrees of the polynomials I am trying not to correlate with. So you were indeed right all along and \alpha=1 is the correct factor.

      I’m not sure I understand your remark about purely random functions: I thought it was fairly obvious that they didn’t correlate, just by a counting argument. But let me check, since I haven’t. The probability of correlating by t2^{-n/2} should be e^{-ct^2}, and the number of quadratics is at most 2^{n^2}. So it looks as though we can organize for the best correlation with a quadratic to be about n2^{-n/2}. In other words, within a log factor of the best possible bound. I don’t see how one could derandomize that, since precisely the same argument can be used to show that a random function has small correlation with all polynomially computable functions.

      Probably I am misunderstanding what you write, though.

    • Emanuele Says:

      Regarding the remark about purely random functions:
      It seems that your argument only exploits a certain “hitting-the-subcube” property of \kappa, i.e., you want that over the choice of A with high probability the bias over x is small (or something like that). I guess this is easier to prove when you take \kappa to be a purely random function. As long as that’s the only property you need, that can be derandomized. Indeed, if I am not misunderstanding, that’s exactly what some previous bounds do. (Perhaps a minor point is that derandomizing a purely random function may give slightly worse parameters than derandomizing a cubic, say. So there might be some slight advantage in this, or at least a different tradeoff, but I am not sure about this and in any case it seems to have minimal consequences.)

    • gowers Says:

      It seems to me that for such a derandomization to work, one would need to have a pseudorandom generator that is indistinguishable from a random generator by any quadratic. But I don’t see how constructing such a generator is any different from solving the original problem. My guess is that we are talking about different problems: perhaps you are talking about correlations like n^{-O(1)}, whereas I am talking about correlations closer to 2^{-n/2}, which would of course be best possible.

      Ah, actually, I think I understand. You’re saying that to get 2^{-n/4} one just uses the fact that a generic cubic should have a small U^4 norm. I agree that that would be easier to do by using random functions and derandomizing them. What I am saying (which doesn’t contradict you) is that the trivial argument that I gave in the second paragraph of my previous comment in this sequence gives a best possible result (more or less) for purely random functions but does not derandomize. So when you said “your argument” you were referring not to that argument but to the one that used iterated Cauchy-Schwarz.

  3. Ryan O'Donnell Says:

    Emanuele: I’m glad you posted and linked to your survey. I was going to do the same!

    A collection of short comments:

    . Tim: the proof you sketched of Emanuele’s parenthetical is right, I believe. Regarding your question about circuit lower bounds implying correlation bounds, perhaps Emanuele was referring to super-quasipolynomial lower bounds?

    . The correlation problem(s) mentioned in Emanuele’s survey are very scary to me — a lot of heavyweights have worked on them since maybe Babai-Nisan-Szegedy in the late ’80s. I’m glad Emanuele is enthusiastic though; it’s true that there’s been tremendous progress on understanding low-degree polynomials lately, and no killer reason why a correlation bound for degree log(n) couldn’t be proved.

    . On that note, here is a nice problem Sergey Yekhanin told me (I hope I’m stating it correctly): Show that a cylinder intersection in [N]^3 of density \delta has a much larger than \delta^8 fraction of 2 x 2 x 2 hypercubes. Being a cylinder intersection means that (x,y,z) is present iff f(x,y), g(x,z), and h(y,z) all hold, where f, g, h are predicates on [N]^2. (A small counting sketch of these definitions appears after this list.)

    . Ran Raz has the latest on formula size bounds for the determinant: http://www.wisdom.weizmann.ac.il/~ranraz/publications/Pmultlin.ps

    [. On attributions: n^2 formula-size lower bound for Parity was Khrapchenko, I believe, and superpoly size for bounded-depth circuits was first Furst-Saxe-Sipser (and sometimes Yao is credited too).]
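
    Here is a small toy computation spelling out the definitions in Sergey’s problem: build a cylinder intersection from random predicates f, g, h, and compare its density \delta, the quantity \delta^8, and its fraction of 2 x 2 x 2 boxes. The values of N and the predicate density are arbitrary choices, and the numbers are for one random instance only.

        import numpy as np

        rng = np.random.default_rng(0)
        N, density = 16, 0.7
        f = rng.random((N, N)) < density             # predicate on pairs (x, y)
        g = rng.random((N, N)) < density             # predicate on pairs (x, z)
        h = rng.random((N, N)) < density             # predicate on pairs (y, z)

        S = f[:, :, None] & g[:, None, :] & h[None, :, :]    # S[x,y,z] = f(x,y) and g(x,z) and h(y,z)
        delta = S.mean()

        # Q[x,x',y,y'] = sum_z S[x,y,z] S[x,y',z] S[x',y,z] S[x',y',z]; summing Q^2 over
        # (x,x',y,y') counts the 2x2x2 boxes, since z and z' contribute independently.
        T = S.astype(float)
        Q = np.einsum('xyz,xYz,Xyz,XYz->xXyY', T, T, T, T)
        box_fraction = (Q ** 2).sum() / float(N) ** 6

        print(delta ** 8, box_fraction)              # for this instance the box fraction is far larger than delta^8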

  4. luca Says:

    I think it’s useful to state the following general form of the Goldreich-Goldwasser-Micali / Razborov-Rudich construction:

    Under standard assumptions, for every time bound T(n) > 2^n there is a distribution over functions f: \{ 0,1 \}^n \rightarrow \{0,1\} such that each function is computable by a formula of size (log T(n))^{O(1)} and such that functions from the family are indistinguishable from truly random functions by probabilistic distinguishers running in time T(n).

    (A sufficient assumption is that factoring k-digit composites of a certain type requires time at least 2^{k^\epsilon} for a constant \epsilon>0.)

    In particular, there is a family of polynomial size formulas such that their U^{\log n} norm (which is computable in time roughly 2^{n\log n}) has approximately the same distribution as the U^{\log n} norm of truly random functions.

    This means that, to avoid the natural proof barrier, it is not enough to have a property that is not computable in time polynomial in 2^n; the property should also be not computable in time 2^{n^{O(1)}}.

    Another, more “philosophical,” obstacle is that if one has a property that distinguishes functions of formula size \leq S(n) from random functions, then the computation of the property must not only take time at least 2^{(S(n))^\epsilon}, but it must also “encode” the computation of a factoring algorithm operating on integers with S(n)^\delta digits. More generally, it must encode computations of inverting algorithms for all one-way functions computable by formulas of size S(n)^\delta. This means that the property is already a nearly universal computation, and then exhibiting a specific function in NP which does not satisfy the property has nearly “all” the difficulty of proving a lower bound for NP.

    • gowers Says:

      I’m very interested in what you write. The first part explains why some of the ideas I was playing with began to seem as though they couldn’t, even in principle, give superpolynomial bounds — which led me to focus more on superlinear bounds (which forced me to think more about circuit complexity). This comes up in a later instalment, but I’ll insert a note with a link to this comment of yours.

      As for the second, I’ll think about it. I find this concept of a property that is useless because it is too universal an interesting one. Is there some way of formalizing it (and hence proving results that say that such-and-such a property cannot give a proof because it leads to a simple reformulation of the problem)? Informally, there seems to be a serious barrier there: natural proofs tell you that your property cannot be too simple, and this one tells you that it cannot be too “complex” — except that that’s not quite what I want to say. Somehow the second barrier should rule out things like properties that are defined inductively in terms of Boolean operations.

    • Emanuele Says:

      Tim:
      I don’t feel confident that there would be something to gain in considering superlinear instead of superpolynomial. I’d think it’s only a matter of optimizing the constructions mentioned in Luca’s comment to obtain that the natural-proofs obstacle applies to linear size circuits as well.
      In particular there is a recent result that shows how to construct pseudorandom functions using linear-size circuits (“cryptography with constant computational overhead”). (From a cursory glance there might be a loss in security in their result which does not quite give the optimal connection, I have not checked that carefully, but to me it seems to indicate that linear size circuits are a difficult target already.)

    • gowers Says:

      I don’t feel confident about that either! I just mean that the arguments I was trying seemed to force me to give up all hope of proving anything better, but in the end even superlinear seemed out of reach too. (In fact, in later instalments I have some speculations about linear-sized pseudorandom functions, which sound pretty similar to what you are referring to: I suggest first applying a clever linear-sized code and then doing a small amount of extra scrambling randomly.)

    • luca Says:

      Emanuele, even if you have linear size pseudorandom functions of exponential hardness, it remains possible that a complexity measure computable in time 2^{O(n log n)} would distinguish functions of circuit complexity n log n from random functions. Indeed, the following complexity measure is computable in time 2^{O(n log^2 n)} and does perform such a distinguishing:

      \mu (f) := \min \{ n log n , circuit complexity of f \}

      Then, of course, we are back to Tim’s question of whether the only complexity measures that do the distinguishing are those like the above one which shift all the difficulty of the lower bound proof into showing that a given function has large measure.

  5. Emanuele Says:

    Luca, all I meant is that, probably, even proving a superlinear lower bound requires going beyond natural proofs.
    But I now see that this comment may have been somewhat off-topic.
