A conversation about complexity lower bounds, VII

Another somewhat inconclusive instalment, but it introduces another candidate for a non-natural property that might be of some use. (In a later instalment, a heuristic argument is proposed that would show, if correct, that in fact it is not of any use …)

*****************************************

8) What’s on the agenda for today?

πŸ™‚ Well, there are two main things I’d like to discuss. One is our “proof” that a random low-complexity function was indistinguishable from a purely random function if you use anything U^k-ish to do the distinguishing. I have a small problem with the argument, though I wouldn’t go so far as to say that it is wrong.

The second is that we haven’t got round to having a serious discussion of the lessons that can be drawn from the failure of the approach. If it’s true that it doesn’t work, then I think we can come up with a pretty general class of “simplicity properties” that fail for similar reasons. But I’d like to be as precise about that as possible.

8) Sounds like a good plan to me. Shall we start with your doubts about my argument against your U^k-dual approach?

πŸ™‚ Yes. I’m talking about the modified one where we ignore the degenerate cases. And for now I’m forgetting about the duality. So the question I want to address is this. Suppose you take a random function f of superlinear complexity and look at the probability, for a random subspace X of \{0,1\}^n of dimension k (which is around \log n), that the restriction of f to X has an even number of 1s. (Here I am fixing f and letting X vary, so this probability is a parameter associated with the function f.)

Now let’s just suppose that the subspace X in question is the subspace generated by the first k basis vectors (that is, the subspace consisting of all sequences x such that x_{k+1}=\dots=x_n=0). Then if you do random Boolean operations, most of them will not involve any of the first k coordinates. Indeed, if you do m operations, then the probability that you miss the first k coordinates completely is something like (1-Ck/n)^m, or \exp(-Ckm/n). And for that to be exponentially small we need m to be n^2/Ck.
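[Editorial note: a quick numerical check, in Python, of the estimate above. The per-operation probability of avoiding the first k coordinates is \binom{n-k}{3}/\binom{n}{3}\approx(1-k/n)^3, so the probability of missing them in all m operations is roughly \exp(-3km/n), which only drops below 2^{-n} once m is of order n^2/k. The choices n=1024 and k=10 are arbitrary, made purely for illustration.]

```python
import math

n, k = 1024, 10    # k is around log2(n)
# probability that one random 3-subset of the n coordinates avoids the first k of them
p_miss_one = math.comb(n - k, 3) / math.comb(n, 3)
# number of operations m needed before p_miss_one**m drops below 2**(-n)
m_needed = math.ceil(-n * math.log(2) / math.log(p_miss_one))
print(p_miss_one, math.exp(-3 * k / n))   # per-step probability vs the exp(-3k/n) approximation
print(m_needed, n * n / (3 * k))          # threshold for m vs the n^2/(Ck) prediction with C=3
```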

😐 Yes, but you’re now fixing X and letting f vary.

πŸ™‚ I know, and for that reason I think this argument may be going nowhere. But let me pursue it a tiny bit further. I’m now trying to think how many Boolean operations are needed if you want to scramble every single subspace that is generated by k unit vectors. And it’s not obvious to me that linearly many operations will do the job. If we choose random Boolean operations, then we need around n\log n of them before we can be fairly sure that we’ve involved every single coordinate. And if we haven’t involved x_1, say, then flipping x_1 makes no difference to the value of the function, so f is guaranteed to have even parity in any subspace that involves x_1.
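[Editorial note: a small simulation, in Python, of the claim that around n\log n random basic operations are needed before every coordinate has been involved. It only keeps track of which coordinates the randomly chosen 3-element sets touch, which is all that the claim concerns; the value n=300 and the number of trials are arbitrary.]

```python
import math
import random

def ops_to_touch_every_coordinate(n, trials=200):
    """Average number of uniformly random 3-element subsets of {0,...,n-1}
    needed before every coordinate has appeared in at least one of them."""
    total = 0
    for _ in range(trials):
        untouched = set(range(n))
        steps = 0
        while untouched:
            untouched -= set(random.sample(range(n), 3))
            steps += 1
        total += steps
    return total / trials

n = 300
print(ops_to_touch_every_coordinate(n))   # empirically of the same order as (n/3) * ln(n)
print(n / 3 * math.log(n))
```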

😐 But why is it a problem for f to have even parity in some subspaces?

πŸ™‚ That’s something we’d have to think about. Possibly one could go further and argue for some kind of bias. But let me continue with what I was saying.

😐 OK.

πŸ™‚ We could try starting with a couple of non-random layers in order to ensure in a crude way that we involve every coordinate. But involving every coordinate isn’t enough. For example, suppose we chose random operations …

8) Can I stop you here?

πŸ™‚ I suppose so.

8) It’s just that if all you’re planning to do is look at subspaces of this special type then you’re looking at only \binom nk subspaces. Even if you allow the fixed coordinates to take arbitrary (fixed) values, then you’re still looking at only 2^{n-k}\binom nk subspaces. So it’s hard to see how you will end up saying anything that can’t be checked in polynomial time in 2^n.

πŸ™‚ OK you’re right. I suppose it means that there is some clever pre-scrambling that can be done with linearly many operations. For instance, perhaps there’s a nice function from \{0,1\}^n to \{0,1\}^{2n} such that the image of every sequence has large support. If we applied that function and then did random Boolean operations, we’d probably get all our probabilities decaying exponentially.

8) You’re asking there for a nice code that can be computed in linear time. I don’t know enough about coding theory to know whether such a thing exists.

😐 A quick Google search reveals that it does. [Apologies — this file seems to be stored as an image and takes a long time to download.]

πŸ™‚ OK, I admit it. Any argument that tries to make use of the behaviour of the restriction of f to subspaces generated by k unit vectors is doomed to failure, and there are two ways of seeing why. The first is to observe, as 8) did, that anything you can say about these restrictions will be checkable in polynomial time. The second is to note that in linear time you can map \{0,1\}^n to \{0,1\}^{2n} in such a way that every vector has an image with lots of 0s and lots of 1s.

😐 I’m sorry, I don’t understand the second of those arguments.

πŸ™‚ No apologies needed — I was far too vague. But explaining myself will take us to the second topic I wanted to discuss, which is trying to come up with informal, but reasonably precise, descriptions of the kinds of arguments that run up against the natural-proofs barrier. In the light of what we’ve just said, I want to refine the heuristic picture I had earlier that was based on the Gowers model of random computable functions.

8) I wish you wouldn’t call it the Gowers model. It’s just the obvious model of a random reversible computation.

😐 Why is it obvious? In particular, why do the basic operations depend on three bits rather than say two or four?

πŸ™‚ That’s easy to answer. It’s because if you base them on two bits then the transformations you get are affine over \mathbb{F}_2, and therefore are by no means quasirandom. By contrast, if you use three bits then you can generate all even permutations of the cube. So three is chosen because it’s the smallest number that works. A related reason is that if you allow some extra “rubbish” bits then you can do any computation you like. For example, to simulate an AND gate, you take the two bits you want to do AND to, and a third bit that you set to 0. Then on those three bits you do the transposition that swaps 110 and 111. At the end of that transposition, the only way that the third bit can end up as a 1, given that it started as a 0, is if the previous two bits are 11.
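[Editorial note: a minimal sketch, in Python, of the simulation of AND just described: the 3-bit basic operation that swaps the patterns 110 and 111 (a Toffoli gate) writes the AND of the first two bits into a third bit that starts at 0.]

```python
def toffoli(a, b, c):
    """The 3-bit operation that swaps the patterns 110 and 111 and fixes the other six."""
    if (a, b) == (1, 1):
        c = 1 - c
    return a, b, c

# With the third bit initialized to 0, it ends up holding a AND b.
for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", toffoli(a, b, 0))   # the third output bit equals a AND b
```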

Gowers looked at random compositions of basic operations, but his probabilities were affected by the fact that if a pair of sequences differed in just one coordinate, then the probability of not doing anything to that coordinate when you did a random basic operation was 1-3/n, which meant that the probability of not touching it after m steps was around \exp(-3m/n), which did not become smaller than n^{-1} until m was around n\log n.

This is actually a genuine phenomenon: for example, if you do a random walk in the discrete cube, then the mixing time is about n\log n because before then there will be directions that you have not moved in.

However, we should not be misled into thinking that we can somehow exploit this phenomenon in order to prove complexity lower bounds of n\log n, because if we do a bit of preprocessing, by applying a linear-time code first, so that any two sequences now differ in some constant fraction of their bits, then the expected time it takes to separate any two sequences is constant. This means that after only a linear number of operations we can expect to have chopped everything up.

😐 Is it possible for you to say more precisely what you mean by “chopped everything up”?

πŸ™‚ I can try. Recall what a basic operation does. It takes the discrete cube and divides it into eight equal parts, according to the values taken by three selected coordinates. It then shuffles these parts by translating them. For example, if the basic transformation looks at the first three bits and exchanges 000 with 111, then to every sequence that begins 000 or 111 you add the vector (111000\dots 0). So if x and y belong to the same part, then they haven’t really been shuffled relative to each other.
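[Editorial note: a small check, in Python, of the description above: a basic operation that applies a permutation \pi of \{0,1\}^3 to three selected coordinates moves each of the eight parts by a fixed translation, namely addition of \pi(a)+a (padded with zeros) on the part labelled a. The particular permutation and coordinates below are random choices made purely for illustration.]

```python
import random

n = 8
coords = (0, 1, 2)                                        # the three selected coordinates
patterns = [(i >> 2 & 1, i >> 1 & 1, i & 1) for i in range(8)]
pi = dict(zip(patterns, random.sample(patterns, 8)))      # a random permutation of {0,1}^3

def basic_op(x):
    a = tuple(x[i] for i in coords)
    y = list(x)
    for i, bit in zip(coords, pi[a]):
        y[i] = bit
    return tuple(y)

# On the part labelled a, the operation is translation by pi(a) XOR a on the selected coordinates.
for x in [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(5)]:
    a = tuple(x[i] for i in coords)
    shift = tuple(p ^ q for p, q in zip(pi[a], a))
    translated = tuple(b ^ shift[coords.index(i)] if i in coords else b for i, b in enumerate(x))
    print(basic_op(x) == translated)                      # always True
```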

I suppose what I’m saying there is that x-y remains the same if x and y are in the same part. But one can also look at more complicated but very important relationships. For example, for a composition of basic operations to look random, we need it to “break up linearity”. One way we could measure that is to look at the extent to which the equation x+y=z+w implies that \phi(x)+\phi(y)=\phi(z)+\phi(w), where \phi is the composition of basic operations.

Now x and y differ in a constant fraction of places, as do x and z, and as do y and z. So with constant probability we choose three coordinates such that the restrictions of x, y and z are all different. And if x+y=z+w then it follows that the restriction of w is different again, and the four restrictions form a 2-dimensional affine subspace of \{0,1\}^3. With positive probability the basic transformation will turn that into a subset of size 4 that is not an affine subspace, and we will have “broken the linearity” of the quadruple.

At this point I want to jump straight to the vague thought that I hope to make precise. Let f be a random Boolean function defined on \{0,1\}^n as follows. You first map \{0,1\}^n to \{0,1\}^{2n} as described, then do a random sequence of m basic operations, and finally you look at the first coordinate of the result (or take -1 raised to that power if you want an answer in \{-1,1\}). I want to suggest that if S is any set of Boolean functions on \{0,1\}^n such that membership of S can be computed in polynomial time (in 2^n, or equivalently in exponential time in n), then the probability that f\in S is 2^{-2^n}|S|+c(m), where c(m) tends to zero exponentially quickly.
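[Editorial note: a sketch, in Python, of the random model just described. The embedding of \{0,1\}^n into \{0,1\}^{2n} used below, x\mapsto(x,\bar x), is only a placeholder: it guarantees that every image has plenty of 0s and plenty of 1s, but it is not a genuine linear-time code with linear minimum distance, which is what the preprocessing really wants. The parameters n=16, m=1000n and the random seed are arbitrary choices.]

```python
import random

def make_pseudorandom_function(n, m, seed=0):
    """Return f: {0,1}^n -> {-1,1} built by embedding into {0,1}^{2n}, applying m fixed
    random 3-bit basic operations, and reading off the first coordinate of the result."""
    rng = random.Random(seed)
    patterns = [(i >> 2 & 1, i >> 1 & 1, i & 1) for i in range(8)]
    ops = []
    for _ in range(m):
        coords = rng.sample(range(2 * n), 3)
        shuffled = patterns[:]
        rng.shuffle(shuffled)
        ops.append((coords, dict(zip(patterns, shuffled))))

    def f(x):                                  # x is a tuple of n bits
        y = list(x) + [1 - b for b in x]       # placeholder embedding x -> (x, complement of x)
        for coords, pi in ops:
            a = tuple(y[i] for i in coords)
            for i, bit in zip(coords, pi[a]):
                y[i] = bit
        return 1 - 2 * y[0]                    # the first coordinate, as an element of {-1,1}

    return f

n = 16
f = make_pseudorandom_function(n, 1000 * n)
print([f(tuple(random.randint(0, 1) for _ in range(n))) for _ in range(10)])
```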

I now want to think about (i) whether that is actually true and (ii) what the consequences are for complexity proofs if it is true. I feel as though it was something like the above fact that caused us problems earlier.

I don’t really have anything sensible to say about (i). We sort of know from Razborov and Rudich that something like it is probably true, so I think I’ll just concentrate on (ii), because that is what really interests me.

So let’s suppose that S is a set of Boolean functions. I’m looking for properties of S that ensure that it cannot distinguish between random low-complexity functions and purely random functions. And the starting point will be what you told me right at the beginning of our conversation: that if S is a polynomially computable property then it has no chance of working.

Now it follows trivially from this that if S (which we think of as our “simplicity” property) implies some property T that is polynomially computable and does not hold for random functions, then again S cannot work. Why? Because then T would hold for random low-complexity functions and not for purely random functions, while also being polynomially computable. And that takes us back to our starting point.

8) I should make clear that Razborov and Rudich included this observation in their paper, and indeed gave examples of properties S that had been used in the literature that were not themselves natural but that were “naturalizable” in the sense of implying natural properties T.

πŸ™‚ Yes, I think you mentioned that earlier. Perhaps I should make clear that I’m not claiming originality for any of this. One way of describing what I’m doing now is to say that I’m trying to describe general classes of properties that turn out to be naturalizable.

8) I agree that that’s interesting, even if it doesn’t count as a theoretical advance.

πŸ™‚ OK. Now one of the first “unnatural” (as far as I could tell) properties that I was interested in was the property of having a not too large U^3 dual norm. But as 😐 pointed out, that property is easily naturalizable, since it implies the property of having a not too small U^3 norm. I had been hoping that duality was a way of building nice properties that were not polynomially computable out of ones that were, and therefore of generating a large number of good candidates for properties that might distinguish between pseudorandom functions and random functions. However, it seems that no construction of this kind can work.

😐 Can you say that formally?

πŸ™‚ It’s not at all hard to say it formally. The question is how generally one can say it. Here’s what I’ve got so far. Let \|.\| be a polynomially computable norm. Then the property \|f\|^*\leq C does not work as a simplicity property. The reason is that if f is a Boolean function, then \langle f,f\rangle=1, and \langle f,f\rangle\leq\|f\|\,\|f\|^*, so the property \|f\|^*\leq C implies the property \|f\|\geq C^{-1}. So if \|f\| is small for random functions (as it is for U^k norms), then we have a polynomial-time way of distinguishing between random and pseudorandom, which is not allowed.

😐 You don’t seem to have ruled out the possibility that there might be a norm \|.\| that is not small for random functions, but such that the dual norm \|.\|^* is large for random functions.

πŸ™‚ That’s true. So maybe I need to add that as an assumption. I’ll call \|.\| a quasirandomness norm if it is small for random functions. So then the statement above is that duals of polynomially computable quasirandomness norms cannot work.

We should perhaps bear in mind what you’ve just said, though it seems to me to be a very long shot. Perhaps one could devise a norm \|.\| that was large for random functions but small for functions that were “even better than random”. To give an example, if you define \|f\| to be \|\hat{f}\|_\infty, then the norm of a random function is (\log N/N)^{1/2} rather than N^{-1/2}. The reason is that each Fourier coefficient has a Gaussian concentration around 0 with standard deviation N^{-1/2}, so if you’ve got N Fourier coefficients to worry about, then you’ll expect one of them to have size equal to about \sqrt{\log N} standard deviations. However, if you construct a function using a high-rank quadratic form, then you can produce several functions f with \|\hat{f}\|_\infty\leq CN^{-1/2}.

Now the dual norm of f is \|\hat{f}\|_1, where this is the \ell_1 norm rather than the L_1 norm (meaning that you add up the moduli of the Fourier coefficients rather than taking their average). Since a typical Fourier coefficient of a random function has modulus about N^{-1/2}, it seems that the dual norm of a random function is bigger by a factor of \sqrt{\log N}=\sqrt{n} than the reciprocal of its (non-dual) norm.
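[Editorial note: a quick numerical illustration, in Python with numpy (an assumption about available libraries), of the two claims above. For a random \pm 1 function on \{0,1\}^n, with \hat{f}(S) taken to be the average of f(x)(-1)^{S\cdot x}, the largest Fourier coefficient comes out around \sqrt{\log N/N}, while \|\hat{f}\|_1 comes out around \sqrt{N}, which is about \sqrt{\log N} times bigger than the reciprocal of \|\hat{f}\|_\infty. The choice n=14 is arbitrary.]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 14
N = 1 << n
f = rng.choice([-1.0, 1.0], size=N)

# In-place fast Walsh-Hadamard transform, then divide by N so that
# fhat[S] is the average of f(x) * (-1)^{S.x} over x.
fhat = f.copy()
h = 1
while h < N:
    for i in range(0, N, 2 * h):
        a, b = fhat[i:i + h].copy(), fhat[i + h:i + 2 * h].copy()
        fhat[i:i + h], fhat[i + h:i + 2 * h] = a + b, a - b
    h *= 2
fhat /= N

print(np.abs(fhat).max(), np.sqrt(2 * np.log(N) / N))  # largest coefficient, roughly sqrt(log N / N)
print(np.abs(fhat).sum(), np.sqrt(2 * N / np.pi))      # l1 norm of the Fourier transform, roughly sqrt(N)
```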

8) Are you trying to say that you’ve got a new candidate for a property that could be used to prove nontrivial lower bounds?

πŸ™‚ I don’t know, since this idea has only just occurred to me and I haven’t had time to think about it. But let’s break off for a second and try to see whether we have some way of ruling out this property.

😐 Of course we do. To calculate the dual norm you just work out the \ell_1 norm of the Fourier transform!

πŸ™‚ Oops. But that doesn’t quite kill off the underlying idea. For example, perhaps we could define a “quadratic” version of the above norm. We would let \|f\| be the largest inner product of f with any quadratic phase function (-1)^{q(x)}. There are 2^{cn^2} of these, so we might expect one of the inner products to have size about n=\log N times the typical size of N^{-1/2}. But perhaps we can also show that a random function correlates very well with a function g whose norm, in this sense, is only about the expected N^{-1/2}.

If that worked, then we might conceivably be able to get superlinear bounds. There would be two stages to the proof, both of which look as though they are either true but very hard to prove, or false.

😐 What are they?

πŸ™‚ Well, unfortunately superlinear bounds for formula complexity are not interesting since they are already known (e.g. for the parity function). So we’d have to look at circuit complexity. We’d need to show that for every Boolean function f of circuit complexity at most n\omega(n) (for some function \omega that tends to infinity) there is a quadratic phase function g that has very slightly more than the expected correlation with f. That is, one would be looking for a bound like \langle f,g\rangle\geq N^{-1/2}\log N/1000, or perhaps something smaller still. It seems highly unlikely that one could prove that by identifying a property of f and using the property to find a g with good correlation. But perhaps one could somehow prove that to get rid of all the correlations would take a superlinear number of steps. It might be a bit like the result that you need \log_2n partitions of an n-element set into two sets if you want every pair of elements to be in different cells of at least one of the partitions.

But at this stage I don’t even have a feel for what happens if you take a polynomial of the form p(x)=\sum_{\{i,j,k\}\in\mathcal{A}}x_ix_jx_k for some collection \mathcal{A} of Cn sets of size 3. That will have linear circuit complexity, and perhaps \mathcal{A} can be chosen so that it has essentially minimal U^3 norm and hence minimal correlation with any quadratic phase function. At the moment my guess is that that is indeed what happens.

8) Mine too, I have to say.

πŸ™‚ But if we were very lucky and that didn’t happen, then the second stage of the argument would be to show that … oh wait. No, the second stage would in fact be easy, because I’m sure there is at least some cubic phase function with minimal U^3 norm. So that would be a function in P that had superlinear circuit complexity. The issue, therefore, is whether we can find one that’s computable in linear time.

8) There’s something that’s worrying me here. It’s that there’s a concept called algebrization that appears in a paper of Scott Aaronson and Avi Wigderson. It’s to do with low-degree polynomials, and says something like this: if you have a proof that still works when you replace the field \mathbb{F}_2 by some extension of it, then you’re in trouble.

πŸ™‚ Can you be more precise?

8) I’m afraid not. I don’t know enough about it to know whether it’s relevant here, but I think you should check. It’s another barrier to proving that P\ne NP and it is known to kill off some proof techniques that get round the natural-proofs barrier. [Algebrization is discussed in more detail in a later instalment of this dialogue.]

πŸ™‚ Hmm, that is indeed slightly worrying.

😐 Can I chip in here?

πŸ™‚ Of course.

😐 Well, even if everything works, I don’t see how you are proposing to distinguish between random and pseudorandom functions. You’ve argued that it’s conceivable that a random function of complexity 1000n has a correlation with at least some quadratic phase function that is not quite minimal, but that’s true of random functions.

πŸ™‚ Yes, but then I was going to look at the dual of that, arguing that random functions do have maximal dual norm.

😐 Right, but why shouldn’t random functions of complexity 1000n also have maximal dual norm?

πŸ™‚ Ah yes, I see your point. I’m not sure I have any reason to think that.

OK, let’s get back to the “negative” discussion and see whether we understand better the kinds of proofs that won’t work.

Oh, but hang on, I’ve just realized I’ve been making a calculation error.

8) Not for the first time …

πŸ™‚ I’m sorry about that — it’s a bad habit of mine, thinking about ideas without properly checking their flimsy foundations.

But it’s just occurred to me that the dreaded polynomial example isn’t as frightening as I thought. This is also relevant to the discussions we were having earlier about the U^k norm for unbounded k. Let me explain.

It can be shown that if p is a polynomial of degree k-1, g is the polynomial phase function (-1)^p and f is some other function, then \langle f,g\rangle\leq \|f\|_{U^k}. Now the smallest that \|f\|_{U^k} can be is around N^{-1/2^k}, which is much bigger than the correlation between g and a random function.
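[Editorial note: a sketch of the easiest case, k=2, of the inequality just quoted, with \langle f,g\rangle denoting the average of f(x)g(x). If g=(-1)^{\langle r,x\rangle+c} is a degree-1 phase function, then \langle f,g\rangle=\pm\hat{f}(r), while \|f\|_{U^2}^4=\sum_s\hat{f}(s)^4, so |\langle f,g\rangle|=|\hat{f}(r)|\leq(\sum_s\hat{f}(s)^4)^{1/4}=\|f\|_{U^2}. Moreover, if f takes values in \{-1,1\}, then Cauchy-Schwarz and Parseval give \sum_s\hat{f}(s)^4\geq(\sum_s\hat{f}(s)^2)^2/N=1/N, so \|f\|_{U^2}\geq N^{-1/4}, which is the case k=2 of the lower bound N^{-1/2^k}. The general case follows from repeated applications of the Cauchy-Schwarz inequality.]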

😐 But I thought you said that in the case where k=2 and f is a suitably chosen quadratic you got N^{-1/2} rather than N^{-1/4}.

πŸ™‚ So I did. What’s going on?

OK, I’ve done a few calculations, and it seems that using the inequality above is inefficient. You’ll just have to take my word for this, but the best bound one can hope to prove by easy Cauchy-Schwarz methods is that a well-chosen polynomial of degree k has a correlation of at most N^{-1/2^{k-1}} with any polynomial of degree less than k. So when k=2 we get N^{-1/2} and when k=3 we get N^{-1/4}.

😐 When you say “by easy Cauchy-Schwarz methods” are you trying to suggest that more advanced techniques could give better bounds?

πŸ™‚ Possibly. This is very close to some difficult problems in analytic number theory concerning exponential sums of the form \sum_{x=1}^n\exp(i\alpha x^k), on the assumption that \alpha is not too close to a rational with small denominator. I’m not sure what the state of the art is when k=3, but I’m pretty sure people don’t know how to get to bounds like \sqrt{n}. Also, I think the best bounds that are known use algebraic geometry.

This changes things, of course. It means that it is probably hopeless to try to kill off the approach using a polynomial. (And as for my earlier attempts to do that for polynomials of degree \log n, they now look laughable.) Instead, I think our best bet is to try to guess what happens if you take a random function of linear circuit complexity, where by that I mean a function taken from the model we were discussing earlier: first embed nicely into \{0,1\}^{2n}, then apply 1000n random basic operations, and then look at the first coordinate.

😐 I’m sorry to interrupt you yet again, but I think that model is still flawed.

πŸ™‚ Why?

😐 Because most of the random basic operations you took will have no effect on the final answer. To see what I mean, let’s suppose that the sets of size 3 that are used for the basic operations are A_1,\dots,A_m, in that order. Then we can certainly say that the mth basic operation has no effect on the first coordinate unless 1\in A_m. So we’ll expect the last cn of the basic operations to have no effect. But that’s not all. Let’s suppose that r is maximal such that 1\in A_r. We now know that no previous A_s has any effect unless A_s\cap A_r\ne\emptyset. And that happens with probability c/n as well. Indeed, it seems to me that only a constant number of the basic operations will have any effect at all on the final answer.
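[Editorial note: a small simulation, in Python, of the point just made. It traces the first output coordinate backwards through m=Cn random 3-element sets and counts how many of them are able to influence it, that is, how many intersect the set of coordinates on which the final first coordinate is already known to depend. The number of such operations seems to grow quickly with C but not with n, so a small value of C is used here purely so that the effect is visible at feasible sizes of n.]

```python
import random

def effective_ops(n, C=0.5, trials=10):
    """Trace the first coordinate backwards through m = C*n random 3-subsets of the
    coordinates and count how many of them can influence it."""
    counts = []
    for _ in range(trials):
        m = int(C * n)
        A = [set(random.sample(range(n), 3)) for _ in range(m)]
        depends = {0}                        # coordinates the final first coordinate depends on
        effective = 0
        for t in range(m - 1, -1, -1):       # walk backwards through the operations
            if A[t] & depends:
                depends |= A[t]
                effective += 1
        counts.append(effective)
    return sum(counts) / trials

for n in (10_000, 40_000, 160_000):
    print(n, effective_ops(n))               # stays roughly constant as n grows, for fixed C
```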

πŸ™‚ You’re quite right of course. But I think we can deal with that problem in a simple way. Instead of looking at the first coordinate, let’s do something more global like taking the parity. In other words, the revised model is this. First you map into \{0,1\}^{2n} in a clever way, then you do a random sequence of 1000n basic operations, and finally you count how many 1s you’ve got and set f(x)=1 if this number is even and -1 otherwise.

8) It’s still not obvious to me that this model is a good one, because the randomized part (that is, the product of a random collection of basic transformations) is very simple and very easy to distinguish from a truly random function.

πŸ™‚ I find that quite interesting. It’s not clear to me either way. The random part does indeed look very simple, but I don’t see why it should remain simple when composed with a carefully chosen function. For instance, the random part will have constant depth and bounded fanin, which computer scientists would regard as very simple indeed, but the explicit parts that come before and after cannot be computed by bounded-depth circuits (because the parity function can’t), so perhaps one part of the procedure gives you depth and the other part gives you randomness.

But in a way, all this is irrelevant, because I still haven’t addressed 😐’s earlier question. Do I have any proposal at all for a property that would distinguish between pseudorandom functions and random ones? Unfortunately, I don’t think I do, or at least not one that I believe in. Let me explain and then we can get back to trying to rule out more general kinds of approach.

The property I’ve just been considering is the property of having a very large dual norm, where the norm itself is given by the maximum possible correlation with a quadratic phase function.

Let me say that more clearly. I’ll let Q be the set of all quadratic phase functions, which are functions of the form (-1)^{p(x)} where p is a quadratic in n variables over \mathbb{F}_2. I’ll then define \|f\| to be \max\{\langle f,g\rangle:g\in Q\}. Finally, the property I’m interested in is the property of satisfying the inequality \|f\|^*\geq N^{1/2}/1000.
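[Editorial note: a brute-force sketch, in Python, of the norm just defined, with the inner product taken to be the average over \{0,1\}^n. It enumerates every quadratic polynomial q over \mathbb{F}_2 in n variables, of which there are 2^{1+n+\binom n2}, so it is only feasible for tiny n; the choice n=4 and the random f are purely for illustration.]

```python
from itertools import combinations, product
import random

def quadratic_correlation_norm(f_values, n):
    """max over all quadratic phase functions g = (-1)^q of the average of f(x)*g(x),
    where q runs over all quadratic polynomials over F_2 in n variables."""
    points = list(product((0, 1), repeat=n))
    pairs = list(combinations(range(n), 2))
    best = 0.0
    for const in (0, 1):
        for lin in product((0, 1), repeat=n):                  # linear coefficients
            for quad in product((0, 1), repeat=len(pairs)):    # coefficients of the x_i x_j terms
                total = 0.0
                for x, fx in zip(points, f_values):
                    q = const
                    q ^= sum(a * b for a, b in zip(lin, x)) & 1
                    q ^= sum(c * x[i] * x[j] for c, (i, j) in zip(quad, pairs)) & 1
                    total += fx * (1 - 2 * q)
                best = max(best, total / len(points))
    return best

n = 4
f_values = [random.choice((-1, 1)) for _ in range(2 ** n)]   # f listed in the order of product((0,1), repeat=n)
print(quadratic_correlation_norm(f_values, n))
```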

I don’t have a proof, but I’m guessing that random functions have this property (because they do if you replace “quadratic” by “linear”). Also, I’m guessing that this property is not in P.

😐 Can you quickly justify that guess?

πŸ™‚ I can, but not all that quickly. A first step would be the claim that distinguishing between Boolean functions with \|f\|\approx CN^{-1/2}\log N (the random functions) and Boolean functions with \|f\|\approx CN^{-1/2} (the better-than-random functions) is impossible in polynomial time. My reasoning there would be that there are 2^{n^2} functions in Q, and the sort of correlation one is looking for is so small that there are no clever methods to choose which of these functions will correlate with some random Boolean function f. So one has to do a brute-force search, and 2^{n^2} is a superpolynomial function of 2^n.

And then I’d claim that calculating the dual is even harder. One can in fact prove that the dual norm of f is the smallest possible sum \sum_i|\lambda_i| for which it is possible to write f as a linear combination \sum\lambda_iq_i, where each q_i\in Q. But there seems to be no earthly way of guessing which functions q_i will be useful for the purposes of expressing f as a linear combination of this kind.
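[Editorial note: the characterization just mentioned, that the dual norm is the least \sum_i|\lambda_i| over representations f=\sum_i\lambda_iq_i with q_i\in Q, can be written as a small linear program. The sketch below, in Python with numpy and scipy (assumptions about available libraries), splits each \lambda_i into positive and negative parts; it is only feasible for tiny n, since the number of variables is comparable to |Q|=2^{1+n+\binom n2}.]

```python
from itertools import combinations, product
import numpy as np
from scipy.optimize import linprog

n = 3
points = list(product((0, 1), repeat=n))
pairs = list(combinations(range(n), 2))
N = len(points)

# Columns of Q are the quadratic phase functions (-1)^q, evaluated at every point of {0,1}^n.
columns = []
for const in (0, 1):
    for lin in product((0, 1), repeat=n):
        for quad in product((0, 1), repeat=len(pairs)):
            col = []
            for x in points:
                q = const
                q ^= sum(a * b for a, b in zip(lin, x)) & 1
                q ^= sum(c * x[i] * x[j] for c, (i, j) in zip(quad, pairs)) & 1
                col.append(1 - 2 * q)
            columns.append(col)
Q = np.array(columns, dtype=float).T      # shape (N, |Q|)
M = Q.shape[1]

f = np.random.default_rng(2).choice([-1.0, 1.0], size=N)

# Minimize sum(lambda_plus + lambda_minus) subject to Q @ (lambda_plus - lambda_minus) = f.
res = linprog(np.ones(2 * M), A_eq=np.hstack([Q, -Q]), b_eq=f,
              bounds=[(0, None)] * (2 * M), method="highs")
print(res.fun)        # the dual of the quadratic-correlation norm, evaluated at this f
```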

😐 Thanks — that sounds reasonably convincing.

πŸ™‚ I think so too, but unfortunately it doesn’t help. Prompted again by what goes on in the linear case, one might well guess that the dual norm of a random function f is maximal to within a constant — in other words, it is around N^{1/2}. The rough reason for this would be that f itself would correlate with some quadratic phase functions, but by modifying it a bit, one should be able to come up with a function f' that correlates well with f but no longer correlates with those few quadratic phase functions that f correlated with.

😐 Now you have stopped convincing me.

πŸ™‚ Why?

😐 Because it seems to me that there’s an important difference between the linear case and the quadratic case. The expected number of linear phase functions that a random f correlates with is a small fraction of 2^n. So it seems reasonable to suppose that with a small amount of tinkering one can get rid of these correlations without radically changing f. But in the quadratic case the number of troublesome correlations is a small fraction of 2^{n^2}, which will be far bigger than 2^n, the dimension of the space we are talking about. So it’s not at all obvious that we can get rid of all these correlations: the functions we are trying to avoid will probably span the whole space many times over.

πŸ™‚ That’s actually quite an interesting point. It affects what I was saying, but it doesn’t affect the main point I was leading up to, which is that I still don’t see any distinction between random and pseudorandom functions. To see why I’m saying this, consider some fixed quadratic phase function g and let’s consider the behaviour of \langle f,g\rangle, where f is a pseudorandom function of complexity m (from the model that 8) doesn’t entirely trust). My heuristic principle is that the expected correlation decays exponentially, so when m is something like 1000n then it is as small as it can be, on average at least.

Next, I get to a statement that I don’t really see how to justify, even heuristically, which is that the probability of deviating from the mean should be similar in the pseudorandom case to what it is in the random case. So if you now look at all quadratic phase functions, you will find that about e^{-ct^2} of them will have correlations greater than tN^{-1/2}.

8) I think I might be able to justify that.

πŸ™‚ Oh really? How?

8) Well, one could say that the property of having an inner product of at least tN^{-1/2} with some fixed quadratic phase function is a polynomially computable property that holds for an appreciable fraction of all Boolean functions (that fraction being e^{-ct^2}), so the probability that it happens for a pseudorandom function should be roughly the same as the probability that it happens for a random function.

πŸ™‚ Oh yes. I think I buy that. I think from that argument we can extract a fairly general principle. Let \Phi be any collection of functions that take Boolean functions to the real numbers. (It may well be possible to generalize this, but I’ll stick with real-valued functions for now.) Suppose that every function in \Phi is polynomial-time computable, in the sense that we can compute in polynomial time (in 2^n) the approximate value of \Phi(f) (up to an error of C^{-n}, say). Then the distribution of \Phi(f) over random functions f should be almost exactly the same as the distribution over pseudorandom functions.

Oh … just a moment.

8) What’s happened this time?

πŸ™‚ Well, it’s occurred to me that there might after all be a difference between the norm of a random function and the norm of a pseudorandom function, where the norm is that quadratic-correlation norm.

It goes back to your argument that purported to show that the two norms were the same. It seems to me that that argument is fine up to t=\sqrt{n}, but once you get bigger than that, the probability e^{-ct^2} becomes smaller than exponentially small, so approximating it to within an exponentially small error is not good enough.

So at the moment it feels as though the norm of a random function should be around cnN^{-1/2} (because there are e^{cn^2} chances for it to be big, with a probability of e^{-cn^2} each), whereas it seems very unlikely that a random model based on just Cn random steps would give rise to events with that low a probability.

😐 Are you saying that pseudorandom functions are more random than random ones?

πŸ™‚ Oh dear, something odd is going on here isn’t it? Maybe I am paying the price for ignoring the fact that the values of \langle f,g\rangle for the various quadratic phase functions g are not independent. I was hoping that they would be sufficiently independent, however.

😐 Yes, but once you know 2^n of those correlations, then morally you know all the rest, so perhaps it was just completely unrealistic to hope for events to have subexponential probabilities.

πŸ™‚ There’s also something else that I haven’t quite got to the bottom of, which is that if we define a norm by taking \Gamma to be the set of all functions of circuit complexity at most n^{\log n} and set \|f\| to be \max\{|\langle f,g\rangle|:g\in\Gamma\}, then trivially all functions of polynomial circuit complexity have norm 1, whereas random functions have very small norm. But why can’t we say that for any fixed function g, the probability that |\langle f,g\rangle|\geq tN^{-1/2} given that f has circuit complexity at most 1000n is roughly the same as the probability for a purely random f?

8) I think you can. It’s just that “roughly the same as” means that the two probabilities are exponentially close. So the fact that a random function is astronomically unlikely to have almost perfect correlation with a fixed function g of circuit complexity at most n^{\log n} allows us to deduce that a pseudorandom function has an exponentially small probability of this. The latter is compatible with every pseudorandom function correlating with at least one function of circuit complexity at most n^{\log n} (as there are more than exponentially many of these), while the former is compatible with almost every random function not correlating with any such functions.

πŸ™‚ There’s a simple but quite useful general moral to draw from that, I think. It’s that if you want your simplicity property to imply correlation with a function from some collection \Gamma, then on the one hand you need \Gamma to have superexponential size in n, or equivalently superpolynomial size in 2^n. That’s because otherwise you can check in polynomial time (in 2^n) just by brute force what the largest correlation is with a function in \Gamma. Conversely, if \Gamma does have superexponential size, then there is at least a theoretical chance that it can be used as the basis for a simplicity property. All this makes me want to think some more about the quadratic correlation norm, where \Gamma, the set of quadratic phase functions, has size 2^{cn^2}.

😐 I’m getting a bit tired. Can we save that for another day?

πŸ™‚ Fine by me.

8) And by me.

One Response to “A conversation about complexity lower bounds, VII”

  1. Boaz Barak Says:

    All this discussion of “mixing” and touching every coordinate sounds very similar to issues that arise in the construction of block ciphers. (Indeed, the Gowers model is very similar to such block ciphers.)

    There one often has a small number of rounds (since it has fixed parameters, it’s hard to tell whether we should think of this number as a constant or as log n), which combine “mixing/diffusion” operators that can be linear, and then simple local non-linear operations.

    Interestingly, what is believed in the block cipher community is that for many of these ciphers, there is no non-trivial attack. That means that if the cipher is a random function on n bits from a collection of 2^k functions (n is known as the block length, and k as the key length), then it should not be distinguishable from a random (in some cases even, though this is not explicitly stated) permutation in time less than 2^k. If k is superlinear in n, then this time is not just polynomial in 2^n but super-polynomial.

    Lastly, as indicated in this discussion, things might be somewhat easier when you’re trying to prove just a super-linear lower bound, as compared to lower bounds better than n log n.

    In fact, Valiant has shown a non-natural property of (multi-bit output) functions f:\{0,1\}^n\rightarrow \{0,1\}^n that implies that they cannot have a circuit of O(n) size and O(\log n) depth. The property is that there is a constant \epsilon > 0 such that for every function g that is n^{\epsilon}-sparse (i.e. each output of g depends on at most n^{\epsilon} coordinates), the probability that f agrees with g on a random input is at most 2^{-\epsilon n}.

    There is also a variant of this for *linear* functions, talking about *linear* circuits (all operations are linear in some field). There the property becomes that the matrix f cannot be written as g + h, where g has at most n^{\epsilon} non-zero elements in each column and h has rank at most \epsilon n. If f has this property we say that it is *rigid*, and this implies that it has no linear-operation circuits of linear size and logarithmic depth.

    Alekhnovich argued that this property is not natural in the following paper:
    http://www.math.ias.edu/~misha/papers/average.ps

    Note that there, indeed, the hard cases for this approach are random “low complexity” functions, obtained by taking a random low-rank matrix and adding noise to it.
