A conversation about complexity lower bounds, VI

The story so far: our three characters, :), 8) and :|, have discussed in detail an approach that :) has to proving complexity lower bounds. At the beginning of the discussion, they established that the idea would be very unlikely to prove lower bounds for circuit complexity, which is what would be needed to prove that P\neNP. More recently, they have given arguments that strongly suggest that the idea cannot prove interesting new bounds even for weaker models. In this segment of the dialogue, they nevertheless consider what kinds of methods would be needed in order to prove lower bounds for circuit complexity (as opposed to formula complexity). They don’t really get anywhere, and this part of the discussion ends rather abruptly, but they mention one or two interesting facts along the way.

***************************************

:) After all that, I have to say that I’m still slightly disappointed that it seems that even if the ideas I’ve been having could be got to work, they are most unlikely to show that P\neNP.

😐 I didn’t really understand why not. Can somebody remind me?

8) I can. Recall that a formal complexity measure is a function \kappa such that \kappa(f)=1 for every basic function f, \kappa(f)=\kappa(-f) for every f, and \kappa(f\vee g) and \kappa(f\wedge g) are both at most \kappa(f)+\kappa(g) for every f and g. Now if all you know about \kappa is that it is a formal complexity measure, then all you can deduce from the fact that \kappa(f)\geq m is that f has formula size at least m. Therefore, if there exist functions with low circuit complexity but very high formula size, then one cannot use a formal complexity measure to prove that P\neNP unless it has some further properties.

:) Hmm … it seems that you need a pretty strong statement to back up what you’ve just said. If I can find a formal complexity measure \kappa and also a function f in NP with \kappa(f)=M, then that will prove that P\neNP unless there exist functions of polynomial circuit complexity and formula size at least M. Is it believed that a function of polynomial circuit complexity could have exponentially large formula size, for instance?

8) I’d have to check. But if you take a fairly random circuit, then the obvious way of turning it into a formula certainly does require the size of that formula to be exponential, and it’s hard to see how to do any better.

😐 How do you mean, “the obvious way of turning it into a formula”?

8) Well, if you’ve got a sequence of functions f_1,\dots,f_m with each one either a basic function or a Boolean operation of two earlier functions, you can just build up a formula in the same way. For instance, if we write e_i for the ith basic function, then we could define a function by taking f_1=e_1, f_2=e_2, f_3=e_3, f_4=e_4, f_5=f_1\vee f_2, f_6=f_3\vee f_4, f_7=f_5\wedge f_6. Unpacking this, we get

\displaystyle f_7=(f_1\vee f_2)\wedge(f_3\vee f_4).

If you don’t count the basic functions in the definition of circuit complexity, then the circuit complexity of f_7 is 3, while this formula has length 4. And that lack of economy goes up rapidly as more and more functions in the circuit depend on earlier ones. If you play around with it a bit, you’ll be convinced.
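
[If you want to play around with this concretely, here is a small illustrative Python sketch (it is not part of the conversation, and the representation of circuits as lists of gates is just a convenient choice): it unfolds a circuit into the “obvious” formula and counts the leaves, for a circuit in which each new gate reuses the two most recent ones.]

```python
# A circuit is a list of gates; each gate is ("input", i) or ("and"/"or", j, k)
# with j and k pointing at earlier gates.  Circuit complexity is the number of
# and/or gates; the "obvious" formula is obtained by unfolding the sharing, and
# its length is the number of leaves of the resulting tree.

def formula_leaves(circuit):
    leaves = []
    for gate in circuit:
        if gate[0] == "input":
            leaves.append(1)
        else:
            _, j, k = gate
            leaves.append(leaves[j] + leaves[k])
    return leaves[-1]

def reusing_circuit(num_gates):
    """Two inputs; each new gate combines the two most recent gates, so later
    gates reuse earlier ones heavily.  (Whether the functions computed are
    interesting is beside the point: we are only counting the size of the
    obvious unfolding.)"""
    circuit = [("input", 0), ("input", 1)]
    for i in range(num_gates):
        op = "and" if i % 2 == 0 else "or"
        circuit.append((op, len(circuit) - 1, len(circuit) - 2))
    return circuit

for num_gates in (5, 10, 20, 40):
    print(num_gates, "gates ->", formula_leaves(reusing_circuit(num_gates)),
          "leaves in the obvious formula")
```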

😐 Great, thanks, I believe you. In fact, it’s sort of obvious really: as you increase the length of the sequence of functions, to get the formula complexity of f_m you add the formula complexities of earlier functions, whereas to get the circuit complexity you just add 1. So the formula complexity really does look as though it goes up much more rapidly. Of course, that’s not a rigorous proof, since there might be some less obvious way of producing a formula that was far shorter. But I agree that that looks very unlikely.

Actually, couldn’t one use this thought to tackle the formula-size question? Perhaps one way to produce a function in NP with large formula size is actually to produce a function in P with large formula size by picking a random sequence of Boolean operations.

8) Two comments on that. One is that a random sequence of Boolean operations will give you a function of low circuit complexity, but that’s not quite the same as a function that can be computed in polynomial time. The second is that you’ll still need a proof that your random function has high formula complexity.

[See this comment of Ryan Williams for an explanation of why the first objection can be dealt with.]

😐 I was wondering whether maybe it would have maximal formula complexity in the following sense: the obvious formula really is the best you can do, as long as you obey some simple rules that don’t allow you to produce a shorter formula in a trivial way. For example, you can shorten (f\vee g)\wedge(f\vee h) to f\vee(g\wedge h), but that depended on the fact that f\vee g and f\vee h have some overlap. Perhaps in the random case you just wouldn’t get that kind of overlap.

8) If that’s the sort of thing you’d like to do, then let me tell you something depressing. It’s a well-known approach that morally ought to give a lower bound for formula complexity, but nobody has managed to get it to work.

Suppose that f_1,\dots,f_k are formulae, and that they depend on disjoint sets of variables. Suppose also that f is a formula that depends on k variables. Then you might expect that the shortest formula for computing the function f(f_1(x),\dots,f_k(x)) is the obvious one, where you compute f_i(x) for each i and then apply f. Therefore, if f has formula size r and all the f_i have formula size s, you might expect the formula size of f(f_1(x),\dots,f_k(x)) to be rs.

Now choose a random function f of r=\log n variables. This will have formula size m, which is somewhere close to n, by a simple counting argument. We can then build a new function f_1=f(f,f,\dots,f), where it is to be understood that we are applying f to r disjoint sets of r variables. Then f_1 has formula size m^2, if we believe our previous argument. Then we can build another function f_2=f(f_1,\dots,f_1), which now depends on r^3 variables and has formula size m^3, and so on. We keep going until we use all n variables, which happens when we reach the function f_{k-1}, where r^k=n. This function has formula size m^k, which equals n^{\log m/\log r}, which is about n^{\log n/\log\log n}, which is superpolynomial. But clearly the function f_{k-1} can be computed in polynomial time, since f can (as it depends on just \log n variables).
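
[To spell out the arithmetic in the last step: since r^k=n we have k=\log n/\log r, so the conjectured formula size is

\displaystyle m^k=2^{k\log m}=n^{\log m/\log r}\approx n^{\log n/\log\log n},

which is superpolynomial because m is close to n and r=\log n.]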

😐 That’s pretty cool. So what’s wrong with the argument?

8) Just the fact that I stated a major lemma and didn’t prove it. I didn’t justify the statement that the shortest formula for f(f_i,f_i,\dots,f_i) is the obvious one. And I think that’s an unsolved problem: certainly, it’s either an unsolved problem or someone’s found a counterexample, since I know that the formula-size problem is still open.

😐 I suppose another point is that even if this argument worked, it wouldn’t actually give a superpolynomial bound for formula size, because the function f that you used to build everything up was chosen randomly rather than given by some algorithm.

8) You’re right. In the absence of any further tricks, what this argument would show is that there was a function of polynomial circuit complexity and superpolynomial formula size. [See again Ryan Williams’s comment.] But even without any new ideas, one would in principle be able to use this approach to obtain arbitrarily large polynomial lower bounds for formula size.

😐 How would you do that?

8) Well, even if we can’t choose a random function f to get things going, we could in theory take some fixed r and find a suitable function by brute force.

😐 Sorry to be slow, but how would you do even that?

8) You’d look at every single Boolean function of r variables and every single formula of size at most, say, (3/2)^r, and pick out a Boolean function that is not given by one of the formulae.
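
[For very small r this brute-force search really can be carried out. The following Python sketch is purely illustrative (the helper name hard_function is just a label, and negations are pushed onto the leaves, which does not change the leaf count): it computes, for each s up to the bound, the truth tables of all functions having a formula with s leaves, and then picks out a function that has no such formula.]

```python
from itertools import product

def hard_function(r, size_bound):
    """Return the truth table (a tuple of 0s and 1s) of some Boolean function
    of r variables that has no and/or formula with at most size_bound leaves,
    or None if every function has one."""
    points = list(product([0, 1], repeat=r))

    # Formulas with one leaf: the literals x_i and not-x_i.
    literals = {tuple(x[i] for x in points) for i in range(r)}
    literals |= {tuple(1 - a for a in u) for u in literals}

    # computable[s] = truth tables having a formula with exactly s leaves.
    computable = {1: literals}
    for s in range(2, size_bound + 1):
        new = set()
        for a in range(1, s):
            for u in computable[a]:
                for v in computable[s - a]:
                    new.add(tuple(p | q for p, q in zip(u, v)))   # u or v
                    new.add(tuple(p & q for p, q in zip(u, v)))   # u and v
        computable[s] = new

    easy = set().union(*computable.values())
    for table in product([0, 1], repeat=2 ** r):   # all 2^(2^r) functions
        if table not in easy:
            return table
    return None

# For r = 3 the bound (3/2)^r is about 3.4, so look for a function of three
# variables with no formula of at most 3 leaves.
print(hard_function(3, 3))
```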

😐 Wow. There are 2^{2^r} Boolean functions in r variables, so that’s going to take a pretty long time, even for very small r.

8) Yes. That’s why I used the phrase “in principle” earlier. But even distinguishing between polynomial circuit complexity and polynomial formula size would be very interesting.

:) Er, excuse me.

8) Yes?

:) We seem to have got sidetracked. I was just wondering whether it would be possible to return to the main topic.

8) How do you mean, “the main topic”?

:) I mean the question of whether any of the ideas we’ve been discussing could be used to prove that P\neNP rather than just the weaker statement that some explicit functions have large formula size.

8) Ah yes, your obsession with the million-dollar prize …

:) That’s not fair. I absolutely agree that the formula-size problem is a great problem, but that doesn’t mean we shouldn’t even think about circuit complexity.

8) Relax, I’m just teasing you. Everyone would prefer to solve the P versus NP problem if there was a chance of doing so, including me.

:) OK, well perhaps you’d like to answer a question that’s been bugging me. Here is a proof that the notion of a formal complexity measure is very natural if one is thinking about formula size. We’ve already discussed it, but let me quickly go through it again just so I can be clear about the question I want to go on to ask.

One can arrive at the notion of a formal complexity measure by asking the following question: if we define \lambda(f) to be the formula size of f, then what properties are satisfied by \lambda? And one quickly observes that \lambda(f)=\lambda(-f), and that \lambda(f\vee g) and \lambda(f\wedge g) are both at most \lambda(f)+\lambda(g). Having done that, one can define a formal complexity measure to be any function with these properties, and one then observes that \lambda is the largest possible formal complexity measure.
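
[For functions of very few variables one can check all of this directly. Here is an illustrative Python sketch (not from the dialogue) that computes the exact formula size of every function of two variables by a simple dynamic programme and verifies that it satisfies the properties just listed, together with \lambda(f)=1 for the basic functions.]

```python
from itertools import product

n = 2
points = list(product([0, 1], repeat=n))

def negate(u):   return tuple(1 - a for a in u)
def vee(u, v):   return tuple(a | b for a, b in zip(u, v))
def wedge(u, v): return tuple(a & b for a, b in zip(u, v))

# The basic functions (the literals x_i and -x_i) have formula size 1.
literals = {tuple(x[i] for x in points) for i in range(n)}
literals |= {negate(u) for u in literals}

# size[f] = number of leaves in a smallest and/or formula for f, found by
# repeatedly relaxing size(u or v), size(u and v) <= size(u) + size(v).
size = {u: 1 for u in literals}
changed = True
while changed:
    changed = False
    for u, v in product(list(size), repeat=2):
        for w in (vee(u, v), wedge(u, v)):
            if size[u] + size[v] < size.get(w, float("inf")):
                size[w] = size[u] + size[v]
                changed = True

assert len(size) == 2 ** (2 ** n)   # every function of 2 variables has a formula

# lambda(f) = formula size really does satisfy the stated properties:
for u in literals:
    assert size[u] == 1
for u in size:
    assert size[negate(u)] == size[u]
    for v in size:
        assert size[vee(u, v)] <= size[u] + size[v]
        assert size[wedge(u, v)] <= size[u] + size[v]
print("formula size is a formal complexity measure for n =", n)
```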

8) Indeed.

:) The idea behind our previous discussion was this. In principle, one can prove that a function f has large formula size by finding a formal complexity measure \kappa and proving that \kappa(f) is large. But we have two problems. Obviously, to increase our chances of success, we’d like \kappa to be as large a function as possible. But if we take \lambda itself then we have done precisely nothing: we’ve shown that a function has large formula size if and only if it has large formula size. So there’s another requirement: we want \kappa to be a function that we can actually say something about, so that proving that \kappa(f) is large is strictly easier than proving that f has large formula size. However, here we run into the problem that if \kappa is too simple, and in particular if it can be calculated using a polynomial-time algorithm (in N=2^n), then it will almost certainly not work.

8) Let me quickly interject here that in their paper Razborov and Rudich showed that if a formal complexity measure is ever large, then it will be large for a positive fraction of all Boolean functions. So if you believe that natural proofs can’t solve the formula-size problem either, as you seem to do, then you definitely can’t use a polynomially computable formal complexity measure.

:) Right, that just confirms what I was saying.

Anyhow, on to my question. I’m just wondering whether we can go through the whole thought process again, but this time for circuit complexity. But I seem to have got stuck at the very first step.

8) How do you mean?

:) Well, the first step of the previous argument was to look for properties satisfied by the function \lambda. So I thought I’d define \sigma(f) to be the circuit complexity of f and try to find some properties of \sigma. But the properties I am coming up with don’t seem to be strong enough. For instance, if you know \sigma(f) and \sigma(g), what can you say about \sigma(f\vee g)? The best I can come up with is that it is at most \sigma(f)+\sigma(g), which is no stronger than we have for formula size.

8) Ah ha, that is a very good question, particularly as there isn’t any reason to think that that statement can be strengthened.

😐 Why do you say that?

8) Well, suppose that f and g depended on entirely different sets of variables. Then it is hard to see how to calculate f\vee g more efficiently than calculating f, calculating g, and taking the max of the two. That gives an upper bound \sigma(f\vee g)\leq\sigma(f)+\sigma(g)+1.

:) What? You mean it’s even worse than it is for \lambda?

8) I think so. The trivial bound requires you to add 1.

I ought to say that the argument I’ve just given is another “obvious” argument that is not in fact known to be correct. I claimed that calculating the max of two functions that depend on disjoint sets of variables cannot be done more efficiently than the obvious way of doing it. But if that is really so, then one can obtain arbitrarily large linear lower bounds for circuit complexity as follows. Let r be a small constant, and by brute force find a function f that has circuit complexity at least (3/2)^r. Then let f_1,\dots,f_s be copies of f, each depending on a disjoint set of r variables. Here, s is \lfloor n/r\rfloor. Then the obvious straight-line computation of f_1\vee\dots\vee f_s has length around (3/2)^r s, which is around (3/2)^r n/r. However, the best known lower bound for circuit complexity is something like 4n, so I’m pretty sure that the result about functions depending on disjoint sets of variables is not known.
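
[Explicitly, the obvious computation uses s circuits of size (3/2)^r together with s-1 OR gates, so its length is

\displaystyle s\bigl((3/2)^r+1\bigr)-1\approx (3/2)^r n/r

when s=\lfloor n/r\rfloor and r is a constant.]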

:) What you’ve just said manages to be more irritating than one might have thought possible. I’m trying to say something about circuit complexity, and you have managed to convince me not only that I can’t, but also that I won’t even be able to make rigorous the demonstration that I can’t.

8) Welcome to the wonderful world of unconditional results in theoretical computer science.

Hey, cheer up.

😦

8) OK, let me try to say something to make you feel better. I want to point out that you’re not forced to define \sigma(f) to be a number, and if you don’t then there’s a slightly better chance of proving something interesting.

:) I feel better already. Tell me more.

8) I’d better warn you that what I’m about to say is less promising than it might at first seem. It’s still worth mentioning, but promise me you won’t get too excited about it. It’s a brilliant idea of Razborov, but he has also proved that it cannot be used to obtain interesting lower bounds for circuit complexity.

:) What is it with this guy?

8) Well, I think like you he’d love to prove that P\neNP. But unlike you he has made (at least) two extraordinary contributions to the area: he has proved major results in the direction of the problem, and he has checked very very carefully that they cannot be pushed any further — indeed, he has proved this rigorously.

Anyhow, let’s not worry about that for now. I just want to show you that there is a different notion of formal complexity measure, where you allow it to take values that are sets rather than numbers.

To explain what I mean, let’s suppose that we have a function \rho that associates with each Boolean function f some set \rho(f). We are thinking of \rho as being a measure of the complexity of f, in some sense. What properties might \rho(f) have that could be useful? One obvious one is to insist that \rho(f\vee g) and \rho(f\wedge g) are both contained in \rho(f)\cup\rho(g). But that is not terribly helpful, because it implies that every \rho(f) is contained in the union \rho(e_1)\cup\dots\cup\rho(e_n)\cup\rho(-e_1)\cup\dots\cup\rho(-e_n). So it looks as though \rho(f) can’t be bigger than 2n times what it is for a basic function.

😐 Hang on. It seems to me that you are implicitly taking your sets to have some notion of size attached to them.

8) Yes I am. What I want is for \rho to be small for basic functions and very large for some function f in NP. And I also want it to grow only slowly when you do Boolean operations.

Here is a set of axioms that would be very useful to us if we could obtain them. The idea is to allow a small error on the right-hand side. That is, instead of asking for \rho(f\vee g) and \rho(f\wedge g) to be contained in \rho(f)\cup\rho(g), we ask for them to be contained in \rho(f)\cup\rho(g)\cup\Delta where \Delta is a set of small measure. And to get things started, we’ll assume that \rho(f)=\emptyset whenever f is plus or minus a basic function.

If that is the case, then what can we say about f if it has small circuit complexity? Well, we have a sequence of functions f_1,\dots,f_m, and for each r we can find s,t<r such that \rho(f_r)\subset\rho(f_s)\cup\rho(f_t)\cup\Delta_r for some small set \Delta_r. From this and a trivial inductive argument it follows that \rho(f_m)\subset\Delta_1\cup\dots\cup\Delta_m. So if we can now show for some function f in NP that \rho(f) is a set with measure that is larger than that of any set \Delta_r by a superpolynomial factor, then we have proved that f cannot have polynomial circuit complexity.
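
[The inductive argument here is simply that \rho(f_r)\subset\Delta_1\cup\dots\cup\Delta_r for every r: this holds at the start because \rho is empty on the basic functions, and it is preserved at each later step because \rho(f_r)\subset\rho(f_s)\cup\rho(f_t)\cup\Delta_r with s,t<r.]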

:) OK, I’ve got lots of questions about this, but here are just two to get started. How might one go about defining a useful set-theoretic complexity measure like this? And is there some trivial argument, as there is in the case of formula complexity, that gives a kind of “maximal” set-theoretic complexity measure such that the size of \rho(f) is actually equal to the circuit complexity of f?

8) Let me deal with the first of these questions by telling you about Razborov’s method of approximations. The idea of this is to construct a lattice \mathcal{L} of Boolean functions, where all that this means is that you have the basic functions (and their negatives), together with some operations, \sqcap and \sqcup, under which \mathcal{L} is closed.

:) Aren’t you going to assume anything about those operations? For example, shouldn’t they be associative?

8) Oddly enough, we don’t need to assume that, but if \mathcal{L} is to be of any use, then the operations will have to be chosen very carefully, as you’ll see.

Indeed, what we want from \mathcal{L} is two properties. We would like the operations \sqcup and \sqcap to approximate the operations \vee and \wedge, and we would also like \mathcal{L} to be as small as possible.

Now suppose that we have a straight-line computation f_1,\dots,f_m. If we carry out exactly the same computation but replacing the operations \vee and \wedge by \sqcup and \sqcap, then we obtain a sequence g_1,\dots,g_m. Now for each r and s, let’s suppose that g_r\sqcup g_s=g_r\vee g_s and g_r\sqcap g_s=g_r\wedge g_s, except on a small set \Delta(g_r,g_s). Then, for each t, define \Delta_t to be the set \Delta(g_r,g_s) for the r and s such that f_t equals f_r\vee f_s or f_r\wedge f_s (and \Delta_t to be empty if f_t is a basic function). An easy inductive argument shows that f_m agrees with g_m except possibly on \Delta_1\cup\dots\cup\Delta_m. So we can set \rho(f_m) to be \Delta_1\cup\dots\cup\Delta_m.
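
[Here is a toy Python illustration of that bookkeeping. The “lattice” used below (functions depending only on the first two variables, with rounding by majority vote) is chosen purely so that something concrete runs; it is nothing like the lattices Razborov actually uses, and the only point being illustrated is the error-tracking. The assert at the end is exactly the claim that f_m agrees with g_m outside \Delta_1\cup\dots\cup\Delta_m.]

```python
import random
from itertools import product

n = 4
points = list(product([0, 1], repeat=n))        # the 2^n inputs, in a fixed order

def literal(i):  return tuple(x[i] for x in points)
def OR(u, v):    return tuple(a | b for a, b in zip(u, v))
def AND(u, v):   return tuple(a & b for a, b in zip(u, v))

def round_to_lattice(h):
    """Project h onto the toy lattice of functions depending only on the first
    two variables, by a majority vote over the remaining variables."""
    out = []
    for x in points:
        block = [h[j] for j, y in enumerate(points) if y[:2] == x[:2]]
        out.append(1 if 2 * sum(block) >= len(block) else 0)
    return tuple(out)

def SQCUP(u, v): return round_to_lattice(OR(u, v))    # approximate join
def SQCAP(u, v): return round_to_lattice(AND(u, v))   # approximate meet

# A random straight-line computation over the n inputs.
random.seed(0)
program = [("input", i) for i in range(n)]
for _ in range(8):
    j, k = random.randrange(len(program)), random.randrange(len(program))
    program.append((random.choice(["or", "and"]), j, k))

exact, approx, deltas = [], [], []
for gate in program:
    if gate[0] == "input":
        exact.append(literal(gate[1]))
        approx.append(literal(gate[1]))
    else:
        op, j, k = gate
        true_op, lattice_op = (OR, SQCUP) if op == "or" else (AND, SQCAP)
        exact.append(true_op(exact[j], exact[k]))
        g = lattice_op(approx[j], approx[k])
        approx.append(g)
        # Delta for this gate: the inputs where the lattice operation misbehaves.
        honest = true_op(approx[j], approx[k])
        deltas.append({x for x, (a, b) in enumerate(zip(g, honest)) if a != b})

union = set().union(*deltas) if deltas else set()
for x in range(len(points)):
    if x not in union:
        assert exact[-1][x] == approx[-1][x]
print("final functions agree outside a set of size", len(union), "out of", len(points))
```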

😐 Something’s bothering me here. It’s that your function \rho isn’t obviously well-defined. It seems to depend on the particular way that f is computed.

8) You’re absolutely right of course. I must apologize. But it turns out that for some problems, one can prove highly non-trivial lower bounds using this kind of idea, though I have in fact oversimplified what Razborov did.

The bad news is that Razborov has proved that the method of approximations doesn’t yield even superlinear lower bounds for circuit complexity, unless you allow extra variables, in which case you can produce a suitable lattice in a trivial universal way.

:) Ah, that’s sounding close to an answer to my second question. But perhaps we can think about it for ourselves.

Wait, there is indeed a spectacularly trivial set-theoretic complexity measure if it’s allowed to depend on how the function is computed.

😐 What’s that?

:) You just take the set of all functions that you used in the computation. Then if f_m=f_r\vee f_s, say, then you’d have \rho(f_m)=\rho(f_r)\cup\rho(f_s)\cup\{f_m\}. So the size of \rho(f_m) would be at most m, and it’s not hard to show that the size of \rho(f_m) is in fact the length of a computation closely related to the one given. (For uninteresting reasons it could be different, since r and s might both be less than m-1, but in that case f_{m-1} would not have been used in the computation of f_m.)

I’m still interested to know whether there might exist a set-theoretic complexity measure that applies to functions rather than to computations of functions. Has Razborov shown that they cannot exist?

8) I’m not sure. What I can say is that his method of approximations doesn’t give you one. If you wanted it to do so then you’d have to find some way of associating with each f a function g=L(f) that belonged to the lattice \mathcal{L}, in such a way that L(f\vee g) was equal to L(f)\sqcup L(g) and L(f\wedge g) was equal to L(f)\sqcap L(g). But that would be asking for a partition of \{0,1\}^n that somehow respected Boolean operations. I’m basically certain that such a thing doesn’t exist. In fact, I think I’ve got a proof. If \mathcal{L} is small, then by the pigeonhole principle L(f) would have to be the same for a large set of functions f. But then you’d have two functions f and g that were almost orthogonal, from which it would follow that f\vee g was nothing like either f or g, but L(f)\vee L(g) was equal to L(f) and L(g).

:) I don’t think an additive combinatorialist would give up just yet.

8) What do you mean?

:) Well, I can see that there is a problem with random functions f if you want to define L(f) to be a Boolean function. But couldn’t one generalize the notion of a lattice to allow functions to take values other than -1 and 1? In additive combinatorics, one tends to think of a random \pm 1 function as being “just a random perturbation of the zero function”.

8) If you want to try something like that, then you’ve still got some serious thinking to do.

:) Why?

8) Well, if you take two independent random Boolean functions f and g, then f\vee g is a random function as well, but it takes the value 1 with probability 3/4 instead of 1/2. This suggests that if the zero function 0 belongs to \mathcal{L}, then 0\vee 0 should be the constant function 1/2. But if f and g are not independent, then this is no longer appropriate. For instance, if g=-f, then f\vee g is the constant function 1. So you can’t just send all random functions to the zero function and hope to preserve Boolean operations in a nice way.
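
[A quick sanity check of those two observations, with the \pm 1 convention and \vee interpreted as the pointwise maximum (illustrative Python, not part of the conversation):]

```python
from fractions import Fraction
from itertools import product

# f(x) and g(x) are independent uniformly random +/-1 values at a fixed point x,
# and "f or g" is the pointwise maximum.
pairs = list(product([-1, 1], repeat=2))
print(Fraction(sum(1 for f, g in pairs if max(f, g) == 1), len(pairs)))   # 3/4
print(Fraction(sum(max(f, g) for f, g in pairs), len(pairs)))             # 1/2

# Whereas if g = -f, then f or g = max(f, -f) is identically 1.
print(all(max(f, -f) == 1 for f in (-1, 1)))                              # True
```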

:) I see what you mean. That looks pretty bad. But does it show that no set-theoretic complexity measure that depends just on f can exist? Or does it merely show that it cannot be derived from some lattice \mathcal{L}?

8) I’m not sure. I’d be quite surprised if one existed. But perhaps it’s possible to find one in a trivial way.

:) It’s difficult to see how one could define such a set without making any reference to circuits. So perhaps one would have to associate a set with each straight-line computation and then for each f one would take the intersection of all the sets associated with computations of f.

😐 I’m not sure I like the sound of that. For instance, if you apply it to the “trivial” function that associates with each computation the set of Boolean functions involved in that computation, then you don’t get anything interesting when you intersect over all computations.

:) Yes, you’re right. One would probably have to do something fairly clever.

It’s a very long shot, but I wonder whether the U^k idea could somehow be converted into a set-theoretic complexity measure. For example — and I should quickly say that I do not for a moment expect this particular example to work — one could associate with each Boolean function f the set \rho(f) of all Boolean functions g such that g is a witness to the fact that \|f\|_{U^k}^*\geq 1. That is, \rho(f)=\{g:\langle f,g\rangle\geq\|g\|_{U^k}\}. I’ve chosen that just because the sets get bigger when \|f\|_{U^k}^* gets bigger. But I have no reason to suppose that anything much can be said about \rho(f\vee g) in terms of \rho(f) and \rho(g).
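
[Purely to make the proposal concrete in the simplest case k=2, here is an illustrative Python sketch that computes \rho(f)=\{g:\langle f,g\rangle\geq\|g\|_{U^2}\} for every \pm 1 function on \{0,1\}^2. The normalizations below (averages rather than sums) are an assumption and may differ from the conventions used earlier in the conversation.]

```python
from itertools import product

n = 2
points = [tuple(x) for x in product([0, 1], repeat=n)]

def add(x, y):                 # addition in {0,1}^n, coordinatewise mod 2
    return tuple((a + b) % 2 for a, b in zip(x, y))

def inner(f, g):               # <f,g> = average of f(x)g(x)
    return sum(f[x] * g[x] for x in points) / len(points)

def u2_norm(g):                # ||g||_{U^2}^4 = average of g(x)g(x+a)g(x+b)g(x+a+b)
    total = sum(g[x] * g[add(x, a)] * g[add(x, b)] * g[add(add(x, a), b)]
                for x in points for a in points for b in points)
    return (total / len(points) ** 3) ** 0.25

# All 2^(2^n) = 16 functions from {0,1}^2 to {-1,1}.
functions = [dict(zip(points, values))
             for values in product([-1, 1], repeat=len(points))]

def rho(f):                    # the proposed witness set (g = f is always in it)
    return [g for g in functions if inner(f, g) >= u2_norm(g) - 1e-9]

for f in functions:
    print(tuple(f[x] for x in points), "|rho(f)| =", len(rho(f)))
```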

In fact, I can’t face thinking about it for now. I need a rest.

8) :| OK, see you again some other time.

:) But just to end on not too depressed a note, I’d like to point out that a property such as “The set of functions that witness that the dual norm of f is at least 1 is not too large” is not completely obviously naturalizable. More generally, perhaps we can get somewhere by considering properties that are not in NP but in some class like #P (but probably not quite as extreme as that) where you have to recognise roughly how many ways there are of checking that an NP property holds. We may manage to come up with some reason for this not working, but I think we’d have to think about it first.

8 Responses to “A conversation about complexity lower bounds, VI”

  1. Mark Bennet Says:

    Can’t do the latex, but there is a point about half way through where Rho(Fm) is used with “V” notation rather than “U” (Here is a set of axioms which would be very useful to us …)

    And thank you for a cogent and interesting summary.

  2. luca Says:

    You raise the following “direct product conjecture:” if f() has circuit complexity \geq C_f and g() has circuit complexity \geq C_g, is it the case that h(x,y) := f(x) \vee g(y) has circuit complexity \geq C_f + C_g?

    The intuition is that there should be no other way to compute h(x,y) than to first compute f(x) and g(y), since the two are “independent.” This basic intuition is not necessarily true: take two operations +, \cdot such that + is much easier to compute than \cdot and such that a distributive law applies.

    Then computing a\cdot x + a\cdot y is strictly easier than computing a\cdot x and then a\cdot y, because one can do the computation a\cdot (x+y) instead.

    In fact I think that there are known counterexamples to the “direct product conjecture” above. Specifically (and this would apply to your candidate for a super-linear bound) if f: \{0,1\}^n \rightarrow \{0,1\} is any function, and N = 2^n, then the function

    F(x_1,\ldots,x_N) =  f(x_1) \vee \cdots\vee  f(x_N)

    should always be computable in size at most N \cdot (\log N)^{O(1)}. This contradicts the direct product conjecture if we pick f to be a function of maximal circuit complexity \geq N/log N.

    Maybe someone who knows a bit more about circuit complexity can confirm? My idea would be to sort the input sequence x_1,\ldots,x_N using a sorting network realizable as a circuit of size N (\log N)^2. Next we say that given the truth-table of f, and given a sorted sequence of inputs y_1,\ldots,y_N, there is a two-tape Turing machine that computes f(y_1),\ldots,f(y_N) in time O(N \log N), and so it can be simulated by a circuit of size O(N (\log N)^{O(1)}). Use this circuit, with the truth-table of f hard-wired into it, to turn the sequence y_1,\ldots,y_N produced by the sorting network into the sequence f(y_1),\ldots,f(y_N). Now compute the OR of the above bit sequence.

  3. Ryan Williams Says:

    (I really hope this comes out right, because I haven’t tried WordPress Latex before.)

    Luca: Yes, if the function f has circuit complexity c where c is very high, then computing k copies of f can indeed have much lower circuit complexity than O(kc). A version of your idea is illustrated in Lemma 2 of

    Click to access 75-215.pdf

    which discusses these questions at length. (The above link is just an old tech report of Wolfgang Paul, but I can’t seem to find a free version of the published paper online.)

    Tim: Suppose you find an infinite sequence \{C_n\} of circuits (constructed randomly, or however you like) such that \{C_n\} has circuit complexity at most polynomial in n, and formula complexity that grows superpolynomially in n.

    Then there is an explicit polynomial time computable function that has superpolynomial formula complexity. Namely, the function CircuitEval(C,x) which is 1 iff the circuit described by C on input x outputs 1.

    If \{C_n\} has high formula complexity, then any circuit family which correctly computes CircuitEval must also have high formula complexity. Otherwise, I could just plug in the (polynomial length) code of each C_n as input into the appropriate small formulas for CircuitEval, and now I have small formulas that compute \{C_n\}.

    So you do not need to worry if your circuits are inefficient to compute; in this case you get that for free.

  4. Ryan Williams Says:

    Also, Chapter 10.2 of Ingo Wegener’s book “The Complexity of Boolean Functions” available at

    Click to access cobf.pdf

    contains more information about this direct product stuff, which Wegener calls “mass production”.

  5. Boaz Barak Says:

    In a slightly different context (circuit and not formula complexity, multibit output) the canonical counterexample for the “direct sum” question (do n copies of f have circuit complexity n times the circuit complexity of f) is fast matrix multiplication. (This has to do with Luca’s example of the distributive law.)

    This example is in the case where we consider functions with multiple bits of output, and hence the operation is not OR but simple concatenation, where it seems “obvious” that you could not compute n copies of the function on n independent inputs without paying n times the price.

    But by simple counting you know that there is a linear function f:\{0,1\}^n\rightarrow \{0,1\}^n such that f has circuit complexity \tilde{\Omega}(n^2) (e.g., take f to be computed by a random matrix), but if you look at the function f^n:\{0,1\}^{n^2}\rightarrow \{0,1\}^{n^2} that is defined as f^n(x_1,\ldots,x_n) = f(x_1)f(x_2)\cdots f(x_n) then this is just multiplying the matrix f by the matrix x_1\cdots x_n and can be done in O(n^{2.4}) time.

  6. Boaz Barak Says:

    p.s. I learned from Avi Wigderson that one should use “direct sum” for questions of this form (does the amount of resources grow when combining functions) and “direct product” for questions of the form: “does the probability of successfully computing the function decrease exponentially when combining functions”.

  7. Kristal Cantwell Says:

    This reminds me of a chess book I once read. There was an analysis of a chess game as I recall. There were several participants in a dialog involving various features of chess analysis. The use of emoticons also reminds me of the ongoing project of translating Moby Dick into Japanese emoticons. Today was the deadline for the raising of the 3500 dollars needed and apparently they have reached their goal:

    Also on this survey article:

    http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext

    There is a mention of Geometric complexity theory. So apparently there is an approach using complexity that works although the authors say it may take one hundred years to reach an answer. There is this reference:

    Mulmuley, K. and Sohoni, M. Geometric complexity theory I: An approach to the P vs. NP and related problems. SIAM Journal on Computing 31, 2, (2001) 496–526.
