DBD1 — initial post

This post is intended as a launch of Polymath9. I have no idea how the project will go, but I think it may be rather short-lived, since the difficulties I am having at the moment look as though they could turn out to be serious ones that rule out any approach along the lines I have been thinking about. However, it is difficult to say that with any certainty, because the approach is fairly flexible, so even if the precise statements I have been trying to prove are false, it might be possible to come up with variants that are true. In a way I find that a good state of affairs, because it increases the chances of proving something interesting. Obviously it increases the chances of proving that P\ne NP if one has more ways of attacking the problem. (I’m not claiming that it increases the probability to one that is not small — just that it increases it.) But it also increases the chances of what I would regard as a very nice consolation prize if, as expected, the approach does not work, namely a new barrier to proving that P\ne NP. I don’t think it would be as fundamental a barrier as the three main barriers discovered so far, since it would not be showing that existing methods cannot work. Rather, it would be saying, “Here’s something we could try. Oh dear, it doesn’t work.” But as long as that something was reasonably general, I think its failure to work could be interesting enough to publish.

I’ve thought a little about what phrase to attach to the project (the equivalent of “density Hales-Jewett” or “Erdős discrepancy problem”). I don’t want to call it “P versus NP” because that is misleading: the project I have in mind is much more specific than that. It is to assess whether there is any possibility of proving complexity lower bounds by drawing inspiration from Martin’s proof of Borel determinacy. Only if the answer turned out to be yes, which for various reasons seems unlikely at the moment, would it be reasonable to think of this as a genuine attack on the P versus NP problem. So the phrase I’ve gone for is “discretized Borel determinacy”. That’s what DBD stands for above. It’s not a perfect description, but it will do.

For the rest of this post, I want to set out once again what the approach is, and then I want to explain where I am running into difficulties. I’m doing that to try to expose the soft underbelly of my proof attempt, in order to make it as easy as possible for somebody else to stick the knife in. (One could think of this as a kind of Popperian method of assessing the plausibility of the approach.) Another thing I’ll try to do is ask a number of precise questions that ought not to be impossible to solve and that can be thought about in isolation. Answers to any of these questions would, I think, be very helpful, either in demolishing the approach or in advancing it.

Reminder of basic definitions

This section is copied from my previous post.

I define a complexity structure to be a subset X of a set \Gamma_1\times\dots\times\Gamma_n. I call the union of the \Gamma_i the alphabet associated with the structure. Often I consider the case where \Gamma_1=\dots=\Gamma_n. The maps between complexity structures that I consider (if you like, you can call them the morphisms in my category) are maps \pi:Y\to X such that for each i, the coordinate \pi(y)_i depends only on y_i. To put that another way, if Y\subset\Theta_1\times\dots\times\Theta_n is another complexity structure, the maps I consider are ones of the form (y_1,\dots,y_n)\mapsto(\pi_1(y_1),\dots,\pi_n(y_n)). I have found it inconvenient not having a name for these, but I can’t think of a good one. So I hereby declare that when I use the word “map” to talk about a function between complexity structures, I shall always mean a map with this property.

I call a subset of a complexity structure X basic if it is of the form \{x\in X:x_i\in\Delta_i\} for some i and some \Delta_i\subset\Gamma_i. The motivation for the restriction on the maps is that I want the inverse image of a basic set to be basic.

The non-trivial basic sets in the complexity structure \{0,1\}^n are the coordinate hyperplanes \{x:x_i=0\} and \{x:x_i=1\}. The circuit complexity of a subset of \{0,1\}^n measures how easily it can be built up from basic sets using intersections and unions. The definition carries over almost unchanged to an arbitrary complexity structure, and the property of maps ensures that the inverse image of a set of circuit complexity m has circuit complexity at most m.
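To spell out that last claim: if \pi:Y\to X has the coordinatewise form described above, then

\pi^{-1}(\{x\in X:x_i\in\Delta_i\})=\{y\in Y:\pi_i(y_i)\in\Delta_i\}=\{y\in Y:y_i\in\pi_i^{-1}(\Delta_i)\},

which is again basic; and since inverse images commute with unions and intersections, a circuit that builds a set pulls back gate by gate to a circuit of the same size that builds its inverse image.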

Given a complexity structure X, we can define a game that I call the shrinking-neighbourhoods game. For convenience let us take n to be 2m for some positive integer m. Then the players take turns specifying coordinates: that is, they make declarations of the form x_i=\gamma_i. The only rules governing these specifications are the following.

  1. Player I must specify coordinates from 1 to m.
  2. Player II must specify coordinates from m+1 to n.
  3. At every stage of the game, there must be at least one x\in X that satisfies all the specifications so far (so that the game can continue until all coordinates are specified).

Note that I do not insist that the coordinates are specified in any particular order: just that Player I’s specifications concern the first half and Player II’s the second.

To determine who wins the game, we need a payoff set, which is simply a subset A\subset X. Player I wins if the sequence (\gamma_1,\dots,\gamma_n) that the two players have specified belongs to A, and otherwise Player II wins. I call a set A I-winning if Player I has a winning strategy for getting into A and II-winning if Player II has a winning strategy for getting into A. (Just in case there is any confusion here, I really do mean that A is II-winning if Player II has a winning strategy for getting into A. I didn’t mean to write A^c.)

Because the game is finite, it is determined. Therefore, we have the following Ramseyish statement: given any 2-colouring of a complexity structure X, either the red set is I-winning or the blue set is II-winning. (Normally with a Ramsey statement one talks about containing a structure of a certain kind. If we wanted to, we could do that here by looking at minimal I-winning and minimal II-winning sets.)
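Since the game is completely finite, one can check all of this by machine for tiny examples. Here is a minimal brute-force sketch in Python. The encoding is an ad hoc choice of mine, not anything fixed by the post: coordinates are 0-indexed, a position is a dictionary of specifications, and I take Player I to move first. The search is exponential, so it is only for experimentation.

    from itertools import product

    def is_I_winning(X, A, n):
        """Brute-force backward induction for the shrinking-neighbourhoods game.

        Player I owns coordinates 0..m-1, Player II owns m..n-1 (0-indexed).
        Moves are only offered when some point of X remains consistent with all
        specifications so far, which enforces rule 3 automatically."""
        m = n // 2

        def wins(spec, player):
            live = [x for x in X if all(x[i] == v for i, v in spec.items())]
            if len(spec) == n:
                return live[0] in A              # exactly one point remains
            own = range(m) if player == 'I' else range(m, n)
            moves = {(i, x[i]) for x in live for i in own if i not in spec}
            nxt = 'II' if player == 'I' else 'I'
            outcomes = [wins({**spec, i: g}, nxt) for i, g in moves]
            return any(outcomes) if player == 'I' else all(outcomes)

        return wins({}, 'I')

    # Example on {0,1}^4: the basic set {x : x_1 = 0} is I-winning, since
    # coordinate 1 (index 0 here) belongs to Player I.  By determinacy, a
    # False answer means the complement of A is II-winning.
    X = set(product((0, 1), repeat=4))
    A = {x for x in X if x[0] == 0}
    print(is_I_winning(X, A, 4))                 # True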

Given a complexity structure X, I define a lift of X to be a complexity structure Y together with a map \pi:Y\to X that satisfies the condition set out earlier. I define a lift to be Ramsey if \pi(W) is a winning subset of X whenever W is a winning subset of Y, and moreover it is winning for the same player. A more accurate name would be “winning-set preserving”, but I think of “Ramsey” as an abbreviation for that.

This gives us a potential method for showing that a subset A\subset X is I-winning: we can find a Ramsey lift \pi:Y\to X such that \pi^{-1}(A) is simple enough for it to be easy to show that it is a I-winning subset of Y. Then the Ramsey property guarantees that \pi(\pi^{-1}(A)) is I-winning in X; and since \pi(\pi^{-1}(A))\subset A and any superset of a I-winning set is I-winning, A is I-winning in X as well.

The definition of a Ramsey lift is closely modelled on Martin’s definition of a lift from one game to another.

What I would love to be able to show

Suppose that we have a suitable definition of “simple”. Then I would like to prove the following.

  1. If a set A\subset\{0,1\}^n has polynomial circuit complexity, then there exists a Ramsey lift (Y,\pi) of \{0,1\}^n with Y\subset\Theta_1\times\dots\times\Theta_n such that \pi^{-1}(A) is simple and the cardinality of \Theta_1\times\dots\times\Theta_n is much less than doubly exponential.
  2. If A is a random subset of \{0,1\}^n, then with high probability the smallest Ramsey lift that makes A simple has an alphabet of doubly exponential size.
  3. There exists an NP set A\subset\{0,1\}^n such that the smallest Ramsey lift that makes A simple has an alphabet of doubly exponential size.

Obviously, the first and third statements combined would show that P\ne NP. For the time being, I would be delighted even with just the first of these three statements, since that would give an example of a property of functions that follows non-trivially from low circuit complexity. (That’s not guaranteed, since there might conceivably be a very simple way of constructing lifts from circuits. However, I think that is unlikely.)

Having the first and second statements would be a whole lot better than just having the first, since then we would have not just a property that follows non-trivially from low circuit complexity, but a property that distinguishes between functions of low circuit complexity and random functions. Even if we could not then go on to show that it distinguished between functions of low circuit complexity and some function in NP, we would at least have got round the natural-proofs barrier, which, given how hard that seems to be to do, would be worth doing for its own sake. (Again this is not quite guaranteed, since again one needs to be confident that the distinguishing property is interestingly different from the property of having low circuit complexity.)

As I said in my previous post, I think there are three reasons that, when combined, justify thinking about this potential distinguishing property, despite the small probability that it will work. The first is of course that the P versus NP problem is important and difficult enough that it is worth pursuing any approach that you don’t yet know to be hopeless. The second is that the property didn’t just come out of nowhere: it came from thinking about a possible analogy with an infinitary result (that in some rather strange sense it is harder to prove determinacy of analytic sets than it is to prove determinacy of Borel sets). And finally, the property appears not to be even close to a natural property in the Razborov-Rudich sense: for one thing it quantifies over all possible complexity structures that are not too much bigger than \{0,1\}^n, and then it demands that the maps should preserve the I-winning and II-winning properties.

It is conceivable that the property might turn out to be natural after all. For instance, maybe the property of preserving I-winning and II-winning sets is so hard to achieve (I have certainly found it hard to come up with examples) that all possible Ramsey lifts are of some very special type, and perhaps that makes checking whether there is a Ramsey lift that simplifies a given set possible with a polynomial-time algorithm (as always, polynomial in 2^n). But I think I can at least say that if the above property is natural, then that is an interesting and surprising theorem rather than just a simple observation.

An approach that initially looked promising, but that ran up against a serious difficulty

Let A_1,A_2,\dots,A_m be a straight-line computation of a set A\subset\{0,1\}^n. That is, each A_i is either a coordinate hyperplane (a set of the form \{x\in\{0,1\}^n:x_j=\epsilon\} for some 1\leq j\leq n and some \epsilon\in\{0,1\}), or the intersection or union of two earlier sets in the sequence, and A_m=A. We would like to find a complexity structure X\subset\Gamma^n with \Gamma not too large, together with a map \pi:X\to\{0,1\}^n that has the properties required of a Ramsey lift, such that \pi^{-1}(A) is simple. Since a composition of Ramsey lifts is a Ramsey lift, and since taking inverse images (under the kinds of maps we are talking about) preserves simple sets, for any definition of “simple” we are likely to take, as well as preserving all Boolean operations, a natural approach is an inductive one. The inductive hypothesis is that we have found a Ramsey lift \pi_r:X_r\to\{0,1\}^n such that the sets \pi_r^{-1}(A_i) are simple for every i\leq r. We now look at \pi_r^{-1}(A_{r+1}). By the inductive hypothesis, this is a union or intersection of two simple sets, so we now look for a Ramsey lift \phi:X_{r+1}\to X_r such that \phi^{-1}(\pi_r^{-1}(A_{r+1})) is simple. Setting \pi_{r+1}=\pi_r\circ\phi, we then have a Ramsey lift \pi_{r+1}:X_{r+1}\to\{0,1\}^n such that \pi_{r+1}^{-1}(A_i) is simple for every i\leq r+1.
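For concreteness, here is a minimal Python rendering of the straight-line model just described (the instruction encoding is an ad hoc choice of mine, not anything fixed by the post):

    from itertools import product

    def straight_line_set(n, program):
        """Evaluate a straight-line computation A_1,...,A_m over {0,1}^n.

        Each instruction is ('hyp', j, eps) for the coordinate hyperplane
        {x : x_j = eps} (0-indexed), or ('and', i, k) / ('or', i, k) for the
        intersection / union of the i-th and k-th earlier sets (0-indexed
        positions in the sequence).  Returns the final set A_m."""
        cube = set(product((0, 1), repeat=n))
        sets = []
        for op in program:
            if op[0] == 'hyp':
                _, j, eps = op
                sets.append({x for x in cube if x[j] == eps})
            elif op[0] == 'and':
                sets.append(sets[op[1]] & sets[op[2]])
            else:                                  # 'or'
                sets.append(sets[op[1]] | sets[op[2]])
        return sets[-1]

    # The set {x : x_1 = x_2 = 1} on {0,1}^3 as a length-3 computation.
    A = straight_line_set(3, [('hyp', 0, 1), ('hyp', 1, 1), ('and', 0, 1)])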

Thus, if we can find a very efficient Ramsey lift that turns a given intersection or union of two simple sets into a simple set, then we will be done. “Very efficient” means efficient enough that repeating the process m times (where m is polynomial in n — though even superlinear in n would be interesting) does not result in an alphabet of doubly exponential size. Note that if our definition of “simple” is such that the complement of a simple set is simple, then it is enough to prove this just for intersections or just for unions.
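(To spell that out: A\cup B=(A^c\cap B^c)^c, so if complements of simple sets are simple, then a lift that turns the intersection of two simple sets into a simple set also turns their union into one, and vice versa.)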

What might we take as our definition of “simple”? The idea I had that ran into trouble was the following. I defined “simple” to be “basic”. I then tried to find a very efficient lift — I was hoping to multiply the size of the alphabet by a constant — that would take the intersection of two basic sets to a basic set.

Let us very temporarily define a basic set to be i-basic if it is defined by means of a restriction of the ith coordinate. That is, it is of the form \{x\in X:x_i\in\Delta_i\}. (I want this definition to be temporary because most of the time I prefer to use “k-basic” to refer to an intersection of at most k basic sets.) If A is i-basic and B is j-basic, then it is natural to expect that if we can lift A\cap B to a basic set, that basic set should be either i-basic or j-basic. Furthermore, by symmetry we ought to be able to choose whether we want it to be i-basic or j-basic. But then if we let A be the 1-basic set X and let B be any other basic set, that tells us that we can lift B so that it becomes a 1-basic set.

Now let us apply that to the 2n coordinate hyperplanes in \{0,1\}^n. If we can lift these very efficiently one by one until they all become 1-basic sets, then we have a complexity structure X with a small alphabet and a map \pi:X\to\{0,1\}^n such that \pi^{-1}(E) is 1-basic for every coordinate hyperplane E\subset\{0,1\}^n. But applying Boolean operations to 1-basic sets yields 1-basic sets, and every subset of \{0,1\}^n is a Boolean combination of coordinate hyperplanes. Therefore, every subset of \{0,1\}^n has become a 1-basic set!
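The last step deserves to be made explicit: for every A\subset\{0,1\}^n,

A=\bigcup_{a\in A}\bigcap_{i=1}^n\{x\in\{0,1\}^n:x_i=a_i\},

so once every coordinate hyperplane has a 1-basic inverse image, every subset does too.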

This is highly undesirable, because it means that we have shown that the property “Can be made simple by means of an efficient Ramsey lift” does not distinguish functions of low circuit complexity from arbitrary functions.

Because of that undesirability, I have not tried as hard as I might have to find such a lift. An initial attempt can be found in this tiddler. Note that the argument I have just given does not show that there cannot be a Ramsey lift that turns an i-basic set into a 1-basic set at the cost of multiplying the size of the alphabet by a constant. What I have shown is that if this could be done, then there would be a Ramsey lift that converted all sets simultaneously into 1-basic sets, with an alphabet of size at most C^n. If that were the case, then I think the approach would be completely dead. (Correction: if the sets to be preserved are the I-winning and II-winning sets, then the approach would almost certainly be dead, and I have no reason to think that the situation would be any different if one tried to preserve other classes of sets.) So that is one possible way to kill it off.

Problem 1. Let X\subset\Gamma^n be a complexity structure and let A be a basic subset of X. Must there exist a complexity structure Y\subset\Theta^n and a Ramsey lift \pi:Y\to X such that \pi^{-1}(A) is 1-basic and |\Theta|\leq C|\Gamma|?

In fact, if all one wants to do is disprove the statement that for a random set there is a doubly exponential lower bound, it is enough to obtain a bound here of the form |\Theta|\leq 2^{2^{o(n)}}|\Gamma|.

Another difficulty

The above observation tells us that we are in trouble if we have a definition of “simple” such that simple sets are closed under unions and intersections. More generally, we have a problem if we can modify our existing definition so that it becomes closed under unions and intersections. (What I have in mind when I write this is the example of basic sets. Those are not closed under intersections and unions, but if one could prove that every intersection of two basic sets can be lifted to a basic set, then, as I argued above, one could probably strengthen that result and show that every intersection of two basic sets can be lifted to a 1-basic set. And the 1-basic sets are closed under intersections and unions.)

Before I go on to discuss what other definitions of “simple” one might try, I want to discuss a second difficulty, because it gives rise to another statement that, if true, would deal a serious blow to this approach.

In the previous post, I gave an example of a lift that provides us with what I think of as the “trivial upper bound”: a Ramsey lift that turns every single subset of \{0,1\}^n into an n-basic set, with an alphabet of doubly exponential size. So if we want an inductive argument of the kind I have discussed above, we will need to show that an intersection or union of two simple sets can be lifted to a simple set with the size of the alphabet increasing in such a way that if one iterates that increase polynomially many times, the resulting size will be less than doubly exponential. (Actually, that isn’t quite necessary: maybe we could establish a lower bound of 2^{2^{cn}} for a function in NP and an upper bound of 2^{m2^{c'n}} for functions of circuit complexity m, where c'<c.) This makes it highly problematic if we want to do anything that squares the size of the alphabet after only polynomially many steps. If we do that, then the size of the alphabet after n times that polynomial number of steps, which is of course still a polynomial number of steps, will be at least 2^{2^n} and we will have proved nothing.
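To spell out that arithmetic: squaring an alphabet of size at least 2 a total of s times produces an alphabet of size |\Gamma|^{2^s}\geq 2^{2^s}, so n rounds of squaring, each taking only polynomially many steps, already reach 2^{2^n}.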

The reason this is troubling is that even if I forget all about simplifying any set A, I find it very hard to come up with examples of Ramsey lifts. (All I mean by a Ramsey lift of X is a complexity structure Y and a map \pi:Y\to X that takes I-winning sets to I-winning sets and II-winning sets to II-winning sets.) The only ones I know about can be found on this tiddler here. And they all have the property that the players have to provide “extra information” of a kind that at the very least squares the size of the alphabet. In fact, it is usually quite a lot worse than that.

Maybe I can try to be slightly more precise about what I mean there. All the lifts I have considered (and I don’t think this is much of a restriction) take the form of sets Y where a typical sequence in Y is of the form ((x_1,u_1),\dots,(x_n,u_n)), with u_i\in U_i for some auxiliary sets U_i, and the map \pi takes that sequence to (x_1,\dots,x_n). What makes it interesting is that we do not take all sequences of the above form (that is, sequences with arbitrary x\in X and arbitrary (u_1,\dots,u_n)\in U_1\times\dots\times U_n); rather, we take only some of them. (It is that restriction that makes it possible to simplify sets. Otherwise, there would be nothing interesting about lifts.) So if Player I makes an opening move y_i=(\gamma_i,u_i), we can think of this as a move x_i=\gamma_i in the original game together with a binding obligation on the two players that the eventual sequence (x_1,\dots,x_n) will have at least one preimage y\in Y such that y_i=(\gamma_i,u_i). The set of all such sequences is a set X(i,\gamma_i,u_i) that may well be a proper subset of \{x\in X:x_i=\gamma_i\}.

Suppose now that this extra information u_i is enough to determine some other coordinate x_j. Then unless there are already very few options for how to choose x_j, the number of possibilities for u_i will be comparable in size to the size of the alphabet, and therefore the size of the alphabet is in serious danger of squaring, and certainly of raising itself to the power 3/2, say. And that is, as I have just pointed out, much too big an increase to iterate superlinearly many times.

So it looks as though any “extra information” we declare has to be rather crude, in the sense that it does not cut down too drastically the set in which the game is played. But I have no example of a Ramsey lift with this property. What’s more, the kind of difficulty I run into makes me worry that such a lift may not exist. If it doesn’t, then that too will be a serious blow to the approach.

Let me ask a concrete problem, the answer to which would I think be very useful. It is a considerable weakening of Problem 1.

Problem 2. Let X\subset\Gamma^n be a complexity structure. Does there necessarily exist a non-trivial Ramsey lift \pi:Y\to X with Y\subset\Theta^n and |\Theta|/|\Gamma| bounded above by a function of n?

The main concern is that |\Theta|/|\Gamma| should not depend on |\Gamma|.

I have not sorted out completely what “non-trivial” means here, but let me give a class of examples that I consider trivial. Let \Theta be a large enough set and let \phi:\Theta\to\Gamma be a surjection. Define a map \pi:\Theta^n\to\Gamma^n by \pi(\theta_1,\dots,\theta_n)=(\phi(\theta_1),\dots,\phi(\theta_n)). Finally, let Y=\pi^{-1}(X). Then we can think of \pi as a map from Y to X. Note that Y is in some sense just like X: it’s just that the coordinates of X may have been repeated.

I claim that this is a Ramsey lift. Indeed, suppose that W is a I-winning subset of X. Then a winning strategy for Player I for \pi^{-1}(W) is simply to project the game so far to X, play a winning strategy in X, and choose arbitrarily how to lift each specification of a coordinate of x to a specification of the corresponding coordinate of y.

To put that more formally, if the specifications so far are y_i=\theta_i for i\in K and it is Player I’s turn, then she works out the specification she would make in X in response to the specifications x_i=\phi(\theta_i) for i\in K. If this specification is x_j=\gamma_j, then she picks an arbitrary preimage \theta_j of \gamma_j and makes the specification y_j=\theta_j.

A similar argument works for winning sets for Player II.

It is the fact that this can always be done that makes the lift in some sense “trivial”. Another way of thinking about it is that there is an equivalence relation on \Theta such that replacing a point by an equivalent point makes no difference.
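For concreteness, here is the trivial lift, and the strategy-lifting argument just given, as a small Python sketch (the interface, in which a strategy maps a partial specification {i: letter} to a next move, is my own ad hoc convention; fixing one preimage per letter is one instance of the “arbitrary” choice described above):

    from itertools import product

    def trivial_lift(X, phi, Theta, n):
        """Y = all Theta-sequences whose letterwise image under phi lies in X.

        phi is a dict mapping Theta onto Gamma; pi is the induced projection."""
        return {y for y in product(Theta, repeat=n)
                if tuple(phi[t] for t in y) in X}

    def lift_strategy(sigma, phi):
        """Lift a strategy sigma for the game on X to one for the game on Y:
        project the specifications so far, consult sigma, then answer with a
        (fixed, arbitrary) phi-preimage of sigma's chosen letter."""
        preimage = {}
        for theta, gamma in phi.items():
            preimage.setdefault(gamma, theta)    # one preimage per letter
        def tau(spec):                           # spec is a dict {i: theta_i}
            j, gamma_j = sigma({i: phi[t] for i, t in spec.items()})
            return j, preimage[gamma_j]
        return tau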

As far as I can tell at this stage, the problem is interesting if one takes “non-trivial” to mean not of the form I have just described. However, I reserve the right to respond to other examples by enlarging this definition of triviality. The real test of non-triviality is that an interesting Ramsey lift is one that has the potential to simplify sets.

A positive answer to the problem above will not help us if |\Theta|/|\Gamma| is an enormously large function of n. However, for now my main concern is to decide whether it is possible to obtain a bound independent of |\Gamma|. If it is, then a major source of worry is removed. If it is not, then the approach will be in serious trouble.

A simple observation

I stopped writing for a few hours after that last paragraph, and during those few hours I realized that my definition of non-triviality was not wide enough. Before I explain why not, I want to discuss a worry I have had for a while, and a very simple observation that explains why I don’t have it any more.

Because the worry was unfounded, it is rather hard to explain it, but let me try. Let’s suppose that we are trying to find an interesting Ramsey lift \pi:X\to\{0,1\}^n. Suppose also that we choose a random subset A of X with the critical probability p. That is, we include each element independently with the probability p that makes the probability that A is a I-winning set equal to 1/2. Then it seems highly likely that A will be “only just” a I-winning set if it is one. And we’ll need to make sure that every time A just happens to be I-winning, \pi(A) is I-winning, and every time it just fails to be I-winning, \pi(A^c) is II-winning. This seems extraordinarily delicate, unless somehow the winning strategies in \{0,1\}^n are derived rather directly from the winning strategies in X (as seems to be the case for the examples we have so far).

The observation I have now made is almost embarrassingly simple: if A is only just a I-winning set, we do not mind if \pi(A^c) is a II-winning set. That is because \pi(A^c) is not usually the complement of \pi(A). In fact, if A is a random set and every element of \{0,1\}^n has many preimages in X, then both \pi(A) and \pi(A^c) will be pretty well all of \{0,1\}^n.
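To quantify that: if every point of \{0,1\}^n has t preimages in X and A contains each point of X independently with probability p, then a given point of \{0,1\}^n fails to lie in \pi(A) with probability (1-p)^t and fails to lie in \pi(A^c) with probability p^t, both of which are tiny once t is even moderately large.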

It is worth drawing attention to the way that it seems to be most convenient to prove that a lift \pi:Y\to X is Ramsey. Instead of taking a winning subset of Y and trying to prove that its image is winning (for the same player) in X, I have been taking a winning subset of X and trying to prove that its inverse image is winning (for the same player) in Y. Let me prove a very easy lemma that shows that this is OK.

Lemma. Suppose that \pi:Y\to X is a lift. Then the following two statements are equivalent.
(i) The image of every winning subset of Y is winning in X for the same player.
(ii) The inverse image of every winning subset of X is winning in Y for the same player.

Proof. Suppose that the second condition holds and let A be a winning subset of Y. If \pi(A) is not a winning subset of X for the same player, then \pi(A)^c is a winning subset of X for the other player, which implies that \pi^{-1}(\pi(A)^c) is a winning subset of Y for the other player. But \pi^{-1}(\pi(A)^c)\subset A^c, so this contradicts A being a winning set for the original player.

Conversely, suppose that the first condition holds and let A be a winning subset of X. If \pi^{-1}(A) is not a winning subset of Y for the same player, then \pi^{-1}(A)^c is a winning subset of Y for the other player, which implies that \pi(\pi^{-1}(A)^c) is a winning subset of X for the other player. But \pi(\pi^{-1}(A)^c)\subset A^c, so this contradicts A being a winning set for the original player. QED

Another way of saying all this is that if we want to prove that a map \pi:Y\to X is a Ramsey lift, then the only winning sets B\subset Y for which we need to prove that \pi(B) is also a winning set are inverse images of sets A\subset X. And the reason for that is that one can replace B by the superset \pi^{-1}(\pi(B)) without affecting the image.

A slightly less trivial, but still uninteresting, class of Ramsey lifts

The quick description of these is as follows: take a trivial Ramsey lift of the kind I described earlier (one that duplicates each coordinate several times) and pass to a random subset of it.

Let me sketch an argument for why that, or something similar to it, works. The reason is basically the same as the reason that the trivial lift works. For the sake of clarity let me introduce a little notation. I’ll start with a complexity structure X\subset\Gamma^n. I’ll then take Y to be a random subset of X\times U^n, where U is some set and I write a typical element of X\times U^n as a sequence ((x_1,u_1),\dots,(x_n,u_n)). The map \pi:Y\to X takes this sequence to (x_1,\dots,x_n). I’m thinking of U as a fairly large set, and Y is formed by including each element of X\times U^n independently with some suitable probability p.

Now let W be a winning subset of X. I want to show that \pi^{-1}(W) is a winning subset of Y for the same player. So let \sigma be a winning strategy for W for Player I (the case of Player II is very similar, so I won’t discuss it). Then in Y she can play as follows. If it is her turn and the specifications so far are of (x_i,u_i) for i\in K, then she looks at what the strategy \sigma dictates in X in response to the specifications of the x_i, ignoring the u_i. This will involve specifying some x_j. Now she must find some u_j such that there exists a sequence in Y that satisfies the specifications so far as well as the specification (x_j,u_j).

Typically, the proportion of u\in U that will serve as a suitable u_j is approximately p^{|K|}, so what we need, roughly speaking, is that |U| should be bigger than (1/p)^n. It’s not quite as simple as that, since if the alphabet \Gamma is very very large, then there may be occasional pieces of extraordinary bad luck. However, I’m pretty sure it will be possible to modify the above idea to make it watertight.

A wider definition of non-triviality

Let X\subset\Gamma^n and Y\subset\Theta^n be complexity structures and \pi:Y\to X a Ramsey lift. Let us say that \pi is trivial if for any set of specifications x_i=\gamma_i (i\in K) that can arise during the game in X, for any set of specifications y_i=\theta_i (i\in K) with \pi(\theta_i)=\gamma_i (this is a slight abuse of notation) and for any further specification x_j=\gamma_j, there exists a further specification y_j=\theta_j, consistent with all the previous ones, such that \pi(\theta_j)=\gamma_j.

This is an attempt to describe the property that makes it very easy to lift strategies in X to strategies in Y: you just see what you would do at each stage in X and lift that to Y — a policy that does not work in general but works in some simple cases.

One thing that is probably true but that it would be good to confirm is that a Ramsey lift of this simple kind cannot be used to simplify sets. I’ll state this as a problem, but I’m expecting it to be an easy exercise.

Problem 3. Let \pi:Y\to X be a lift that is trivial in the above sense. Is it the case that for every A\subset X the straight-line complexity of \pi^{-1}(A) is equal to the straight-line complexity of A?

(A quick reminder: in a general complexity structure, I define the straight-line complexity of a set A to be the length of the smallest sequence of sets that ends with A, where all earlier sets in the sequence are either basic sets or unions or intersections of two earlier sets.)

Assuming that the answer to Problem 3 is yes, the next obvious question is this. It’s the same as Problem 2 except that now we have a candidate definition of “non-trivial”.

Problem 4. Let X\subset\Gamma^n be a complexity structure. Does there necessarily exist a non-trivial Ramsey lift \pi:Y\to X where the size of the alphabet goes up by at most a factor that depends on n only?

I very much hope that the answer is yes. I was beginning to worry that it was no, but after the simple observation above, my perception of how difficult it is to create Ramsey lifts has altered. In that direction, let me ask a slightly more specific problem.

Problem 5. Is there a “just-do-it” approach to creating Ramsey lifts?

What I mean there is a procedure for enumerating all the winning sets in X and then building up Y and \pi in stages, ensuring for each winning set in turn that its inverse image is a winning set for the same player. I would be surprised if this could be done efficiently, but I think that it would make it much clearer what a typical Ramsey lift looked like.

Let me also recall a problem from the previous post.

Problem 6. Let A be the set of all sequences in \{0,1\}^n of odd parity. Does there exist a Ramsey lift \pi:X\to\{0,1\}^n such that \pi^{-1}(A) is a basic set and the alphabet of X is not too large?

I would also be interested in a Ramsey lift that made A simple in some other sense. Indeed, I suspect that the best hope for this approach is that the answer to Problem 6 is no, but that for some less restrictive definition of “simple” it is yes.

Matters of procedure

Maybe that’s enough mathematics for one post. I’d like to finish by trying to clarify what I mean by “micro-publication” on the TiddlySpace document. I can’t do that completely, because I’m expecting to learn on the job to some extent.

I’ll begin by saying that Jason Dyer answered a question I asked in the previous post, and thereby became the first person to be micro-published. I don’t know whether it was his intention, but anyway I was pleased to have a contribution suitable for this purpose. He provided an example that showed that a certain lift that turns the parity function into a basic function was (as expected) not a Ramsey lift. It can be found here. There are several related lifts for which examples have not yet been found. See this tiddler for details.

Jason’s micro-publication should not be thought of as typical, however, since it just takes a question and answers it. Obviously it’s great if that can be done, but what I think of as the norm is not answering questions but more like this: you take a question, decide that it cannot be answered straight away, and instead generate new questions that should ideally have the following two properties.

  • They are probably easier than the original question.
  • If they can be answered, then the original question will probably become easier.

One could call questions of that kind “splitting questions”, because in a sense they split up the original task into smaller and simpler tasks — or at least there is some chance that they do so.

What I have not quite decided is what constitutes a micro-publication. Suppose, for example, somebody has a useful comment about a question, but does not generate any new questions. Does that count? And what if somebody else, motivated by the useful comment, comes up with a good question? I think what I’ll probably want to do in a case like that is write a tiddler with the useful comment and the splitting question or questions, carefully attributing each part to the person who contributed it, with links to the relevant blog comments.

Also, I think that when someone asks a good question, I will automatically create an empty tiddler for it. So one way of working out quickly where there are loose ends that need tying up is to look for empty tiddlers. (TiddlySpace makes this easy — their titles are in italics.)

Some people may be tempted to think hard about a question and then present a fairly highly developed answer to it. If you feel this temptation, then I’d be very grateful if you could do one of the following two things.

  1. Resist it.
  2. Keep a careful record of all the questions you ask in the process of answering the original question, so that your thought processes can be properly represented on the proof-discovery tree.

By “resist it”, what I mean is not that you should avoid thinking hard about a question, but merely that each time you generate new questions, you should write up your thoughts so far in the form of blog comments, so that we get the thought process and not just the answer. The main point is that if we end up proving something interesting, then I would like it to be as clear as possible how we did it. With this project, I am at least as interested in trying to improve my understanding of the research process as I am in trying to make progress on the P versus NP problem.

60 Responses to “DBD1 — initial post”

  1. gowers Says:

    A quick comment to say that I am thinking about Problem 3 above: that is, I am attempting to show that a trivial lift (according to the definition in the post) cannot reduce the circuit complexity of a set. The problem is set out in this tiddler here. It doesn’t seem completely straightforward, so I have come up with two splitting questions and am thinking about the first. I am writing up my thoughts about that first subquestion in this tiddler.

    • gowers Says:

      The first subquestion turned out to be easy, so the next thing to do is decide whether it can be generalized to answer the question I was originally trying to answer.

    • gowers Says:

      I have subsequently answered some more splitting questions. For instance, a trivial lift cannot turn a set that is not an intersection of two basic sets into a set that is an intersection of two basic sets. Unfortunately, the proof doesn’t seem general enough to yield the general case.

  2. Polymath 9 Starts | Euclidean Ramsey Theory Says:

    […] Polymath 9 has started. See this post: https://gowers.wordpress.com/2013/11/03/dbd1-initial-post/ […]

  3. Jason Dyer Says:

    I’m fine with my micro-publishing page, except maybe at the end we should have the cases with Player II responding take the cases x_4, x_5, and x_6 in order?

    Going on; I’ve only started thinking about this, but trying to stick to your philosophical ideal:

    Regarding the case where Player I always specifies the parity on the first move, it’s clear just by checking cases that Player I’s winning set is reduced to null when n = 2. I’m a little less clear on how to generalize this to larger n, but I imagine it can happen by doing parity locks across the sequence (x_i = x_{i+1} for odd i) making it so Player I has no control over parity and Player II has just enough control to foil the payoff set solely on that criterion.

    As far as new Ramsey lifts go, I was thinking about a variant where three versions of the game are played simultaneously, and a player needs to win two out of the three to win. This isn’t an obvious lift fail to me and it also doesn’t increase the alphabet much but it isn’t obvious what the lift would be, exactly. I’ll write it up more formally when I have time.

    • gowers Says:

      I’d be happy to change the order, but before I do, let me explain why I instinctively ordered the cases as I did. It’s that I felt that the case where Player II responds by specifying y_6 first is in some sense the “real” case of interest, whereas the other two are cases where Player II does something a bit strange and it’s pretty obvious that he won’t win but you have to check.

      Another possible reordering would be to pair coordinate i with coordinate i+3. In that case, the natural ordering of the cases would also be the ordering of the indices.

    • Jason Dyer Says:

      Your second suggestion works for me.

      For the game where Player I and Player II are forced to pick parity on their first move, let the payoff set be any sequence. Player I will clearly win the unmodified game, but in the parity-choice game Player II will have the last move and be able to choose so the parity opposes Player I. This will apply no matter the value of n.

    • gowers Says:

      I’m afraid I don’t understand what you are saying about the game where Player I and Player II are forced to pick parities on their first move. Can you give more detail?

      I’ve changed the indices in the “micro-publication”. I hope I have done so consistently.

  4. Polymath9: P=NP? | The polymath blog Says:

    […] Gowers Proposed and launched a new polymath proposal aimed at a certain approach he has for proving that […]

  5. Jason Dyer Says:

    Sure:

    Take the unmodified version of the game, and suppose W^c is the null set. Clearly Player I will automatically win.

    Now suppose your variant where “both players commit themselves to their parities as soon as they make their first moves”. Player I plays and picks a parity (\omega), Player II picks the opposite parity (\eta). Play alternates until the last move, which is Player II’s. If the current parity is equal to \eta, Player II puts a 0 in the last empty position and wins. If the current parity is equal to \omega, Player II puts a 1 in the last empty position and wins.

    • gowers Says:

      But Player II doesn’t win because the eventual sequence belongs to W.

    • Jason Dyer Says:

      Doesn’t your revised version include the parities in W? Is it merely restricting the placement, without determining the win if a player picks a parity that contradicts the final sequence?

      In any case, I’ve moved on to looking at the general case; I’m fairly certain at this point any sort of parity will cause the Ramsey lift to fail.

    • gowers Says:

      I’m afraid I don’t understand your question. The modified game I had in mind is one where each player, when making their first move, promises that the parity of their moves will take a particular value. Their remaining moves have to keep that promise. The payoff set is still W.

    • Jason Dyer Says:

      I was implicitly making it so the meta-rule of parity was affecting the winning set in such a way that winning set + wrong parity still equaled a loss.

    • gowers Says:

      I now have an example that shows that the modified version of the game where both players declare their parities at the beginning is not a Ramsey lift. The general idea behind the construction can probably be used to rule out a number of other over-simple candidates for Ramsey lifts but I haven’t looked into that yet.

  6. Boaz Barak Says:

    I am very happy to see such a different approach to this problem! I imagine that many theoretical computer scientists might be able to look at this more carefully and possibly contribute after Nov 11th (the STOC deadline). I definitely fall in that category…

    It might be useful if there are concrete “morally related” technical problems (perhaps toy versions of the original problems) that could be stated concisely without requiring following all the definitions in this and the previous post. But perhaps once I really try to read it I’ll see these definitions are less scary than they look…

    • gowers Says:

      Ah — that’s good news that the silence so far from TCS experts has a possible explanation other than a general feeling that the approach is obviously doomed (which it may be, but if it is, then I think it would be useful to understand why).

      Concerning the scariness of the definitions, I think it is important to distinguish between two types of scariness. One is to do with what they mean, and here I think they aren’t scary once one makes the effort to read them carefully. The definitions of a complexity structure and of maps between complexity structures are extremely elementary, the shrinking-neighbourhoods game is also easy to understand, and then the “Ramsey” maps are just maps that take sets that are a win for Player I/II to sets that are a win for Player I/II. I don’t think that’s scary at all.

      However, there is another sense in which I myself find this definition scary even after thinking about it for some time, and that is that I have very little feel for how “flexible” it is. At the moment, I find it very hard to give interesting examples of these winning-set-preserving maps, but also very hard to think of any argument that would show that they are in any sense hard to come up with. So my general strategy at the moment is to forget all about P versus NP and just try to understand better this strange class of functions between complexity structures. Without that understanding, I think it will be difficult to say whether the approach has the slightest chance of working (unless it turns out to fall foul of some general barrier, but as I said in the previous post, it doesn’t appear to me to be easy to naturalize).

    • Jason Dyer Says:

      If there’s interest I might do another one of my “gentle introduction” posts like I did for Polymath 1 and 5, but I need a better feel for the problem first. (I’m still very shaky on the P vs NP connection because it looks nothing like the NP-related stuff I’ve seen before, but I suppose that’s part of the point.)

    • gowers Says:

      I thought I didn’t have any toy problems, but I’ve now come up with a possibility, which is to look at the case n=2. A complexity structure then becomes a bipartite graph. I think at least some of the problems may be non-trivial even in this very small case (since the alphabets can be large even if the dimension is small) and may shed useful light on some of the main problems, such as Problem 4.

    • Jason Dyer Says:

      Regarding toy problems:

      One problem that ought to be simple but I’m not sure the answer to — in the shrinking-neighbourhoods game, does \lim_{n\rightarrow \infty} \frac{\left | W^c \right |}{\left | W \right |} converge, and if so, to what?

    • gowers Says:

      What’s W here? I think I will be able to answer the question but I need more details to be sure of what you are asking.

    • Jason Dyer Says:

      Let me rephrase and change letters so I don’t mix up with other uses:

      Problem 1:

      Let C be the set of payoff sets that are I-winning.
      Let D be the set of all possible payoff sets.

      Does \lim_{n\rightarrow \infty} \frac{\left | C \right |}{\left | D \right |} converge?

      Problem 2:

      Let C be the set of payoff sets that are I-winning.
      Let D be the set of payoff sets that are II-winning.

      Does \lim_{n\rightarrow \infty} \frac{\left | C \right |}{\left | D \right |} converge?

    • gowers Says:

      Here’s a heuristic argument that both ratios tend to zero. If we choose a random set, then the advantage Player II gets from playing last means that it will almost certainly be a II-winning set. (That’s because Player I needs to get to the point where both Player II’s possible final moves result in points not in the random set.)

      One can make this argument more precise if one insists that the players play their coordinates in a specific order. Then the critical probability for the game is the probability p that satisfies the equation p^2=1-p. The reason is that if you pick a random point, then its probability of being in the set is p, so if you pick a random specification of all but the final coordinate of Player II, then the probability that Player II can go on and win after that specification is 1-p^2, so the probability that it is a winning position for Player I is p^2. If that equals 1-p, then the probability that it is a winning position for Player II is p. Therefore, applying the same argument tells us that if we specify all but the final coordinate of each player, then the probability that we have a winning position for Player I is p again, and back it goes.
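      (Explicitly, p^2=1-p gives p=(\sqrt{5}-1)/2\approx 0.618, the reciprocal of the golden ratio.)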

      Thus, for that version of the game, the critical probability for the set to be I-winning is not 1/2 but something golden-ratio related (and greater than 1/2). Therefore, almost all sets are II-winning. (I’m relying on a general principle that there will almost certainly be an extremely sharp threshold here, so if p is even a small amount less than the critical probability, then the chances of the set being I-winning ought to be absolutely tiny.)

      I’m pretty sure that something similar ought to hold for the version of the game where the coordinates can be chosen in any order as long as each player sticks to his/her half.

  7. Bhupinder Singh Anand Says:

    “Obviously, the first and third statements combined would show that P \neq NP.”

    Without having quite grasped the subtlety of your approach and for what it is worth, here are two observations which suggest that set-theoretic formulations of the PvNP problem, as also those that involve ‘polynomial-time’, may not be well-defined from a computational perspective.

    First: It is obscure why conventional wisdom treats it as self-evident that if we can define a polynomial-time checking relation R(x, y), which holds iff x codes a propositional formula F and y codes a truth assignment to the variables of F that makes F true, then F is necessarily algorithmically decidable.

    Reason: If F is well-defined, we can always define a checking relation R(x, y) which holds iff x codes a propositional formula F and y codes a truth assignment to the variables of F which makes F true, even when F is not algorithmically decidable.

    In other words, we first need to be able to logically prove that, as formulated currently, both P and NP are well-defined algorithmically decidable classes.

    Second: The PvNP problem is formulated in terms of set-theoretically defined ‘languages’. However, in set theory the distinction between computable (recursive) sets and computably (recursively) enumerable sets may not be well-defined from a computational perspective.

    Reason: Since functions are defined extensionally in set theory, the theory cannot distinguish that two number-theoretic functions such as below—when defined set-theoretically by their extensions only—could be instantiationally equivalent but computationally distinct (i.e., they may be associated with distinctly different computational properties).

    For instance, by Theorem VII of Gödel’s famous 1931 paper on undecidable arithmetical propositions, it can be shown (see Lemmas 8 and 9 of this paper) that every algorithmically computable (recursive) number-theoretic function f(x) is instantiationally equivalent to an arithmetical function g(x) which is algorithmically verifiable but not algorithmically computable.

    The distinction could be of significance in a game where Alice knows f(x) is algorithmically computable, but Bob only knows that g(x) is algorithmically verifiable (but may not be computable).

  8. domotorp Says:

    @Jason: I would love to see an introduction (by someone other than Gowers). I feel at the moment that there are too many things to cope with – learning TiddlySpace, following the stream-of-consciousness-style ideas and the math.

    Also, unrelated – two-player games are usually PSPACE-complete, so maybe a separation of P and PSPACE would come out, which would also be great.

  9. gowers Says:

    I’ve now looked more closely into the n=2 case and come to the conclusion that it is too simple to be truly informative about the general case. I’m considering going for the n=4 case instead, but it’s not clear that that will be easier to think about than the general case. See this page and this page for an analysis of the n=2 case.

  10. Albert Atserias Says:

    I think it would be useful to see some examples of Ramsey lifts for some very simple functions, much simpler than the parity function. For example, is there an efficient Ramsey lift for the n-bit OR function that simplifies it to a basic set? By a lift for a function f : \{0,1\}^n \rightarrow \{0,1\} I mean a lift for its 1-set f^{-1}(1), so in this case for the set \{0,1\}^n \setminus \{(0,\ldots,0)\}. We can ask the same question for a function whose 1-set is almost basic itself: is there an efficient Ramsey lift for the function “x_1 = x_n = 1” that simplifies it to a basic set? I was stuck for a while even with the constant-1 function, until I realized that its 1-set is already a basic set on \{0,1\}^n, namely the set \{ x : x_1 \in \{0,1\} \}.

    • gowers Says:

      I’d love to know whether such examples exist: some pages of the proof-discovery tree are devoted to precisely these questions (which is not meant as a way of saying that you should have read those pages — I’m glad you’re coming to similar conclusions independently). Let me explain why I hope that the answer to your question about simplifying the function x_1=x_n=1 is that it cannot be done efficiently. It relates closely to a point in the post above. If one could do that, then probably one would simplify it to a basic set of the form \{y:y_n\in\Delta_n\}. But then we could do the same for the set x_1=1, x_n=0, and then, by taking unions, we would have converted the set x_1=1 into a set of the form y_n\in\Delta_n'. Then we could do that for x_i=1 (i=1,\dots,m), or rather we could compose m efficient lifts, one for each i, and we would have converted all the first m coordinate functions into basic functions based on the nth coordinate. But then all Boolean combinations of those basic functions — i.e., all functions that depend only on the first m coordinates — would be converted into basic functions, and that would be bad news as we are trying to discriminate between simple and complex functions.

      So what I’d actually like is two things. One is a lower bound on the “lift complexity” of “shifting a basic set” from one coordinate to another. And the other is some notion of simplicity that’s a bit more complicated than that of being a basic set and that has the property that all sets of small circuit complexity can be made simple efficiently. It’s quite a lot to hope that both these can be done, but I don’t want to give up until I have a compelling reason (not necessarily 100% rigorous) that it can’t be done.

    • Albert Atserias Says:

      I see the point: basic sets are probably “too simple” so a lower bound would be good news and confirm the need to define simplicity a bit higher in the hierarchy. Even a lower bound showing that the alphabet must square would morally confirm this, since one expects an iterative construction, and squaring at each iteration is more than we can afford.

      However, here is one question that arises (and I really hope it wasn’t asked in the proof-tree already and I missed it). Let’s look at the lift that provides the “trivial” doubly exponential upper bound on the alphabet size. That lift is built in one shot. Is there a way to think of that lift, or one that has the same effect of providing a doubly exponential upper bound, as being built iteratively, by an iteration step that, say, squares the alphabet at each step? Just to imagine how this could go, perhaps each iteration step would be designed to take care of one more move in the game, so the auxiliary information would not be based on a full strategy but just a strategy for the first few moves (I found in the pages something along the lines of considering a few moves only, but the goal looked different). If this iterative construction existed, the hope would be to exploit it as a skeleton on which to build other more efficient lifts by redesigning each step to fit whatever goal we are after (e.g. simplifying a set of low circuit complexity).

    • gowers Says:

      I think you’re right that what I did in the pages was different: very roughly speaking, Player I declares a strategy for the first few moves and Player II declares an outcome for those moves that is consistent with Player I’s strategy. But that’s still a “one-shot” lift. The question of whether one could get from zero to simplifying all sets in a series of gradual steps is one that I don’t know the answer to and I agree that it would be well worth answering.

  11. Victor Porton Says:

    I have created my own TiddlySpace site: http://portonmath.tiddlyspace.com
    I can’t work out how to make the hierarchy/hierarchical presentation. Please help.

  12. vznvzn Says:

    wow 1st thought this is way over my head. think this is the 1st case of anyone thinking that borel sets have some major application in CS, right? skimming over your notes on the tiddly site, it seems highly tree-focused. agreed with your notes that there seems like a huge size gap between trees and DAGs although am not aware of thms that actually prove that. [some basic thms relating trees to DAGs would be a great start in this area]
    just asked a similar question on stackexchange
    P/Poly vs NP separation based on trees instead of DAGs

    as for the idea of a circuit-related game relating to lower bounds, interesting idea. it would help just to establish any game that is closely connected to circuits. am not aware of such a result in CS theory. it did occur to me that circuit formulas do indeed bear a close resemblance to game trees. also note for monotone formulas you have OR,AND allowed only which are eerily similar to “min/max” approaches used in various game tree analysis strategies.

    one idea, the so-called “pebble game” does show up in some key TCS complexity class type theorems. havent seen nice surveys on the subject, its more a sort of disconnected technique so far.
    (wikipedia pebble game.) it does seem to relate closely to compressibility (a metaphor I use a lot to understand complexity class results & lower bounds).

    ok, this is a left field/curve ball compared to your ideas, but have you ever looked into graph complexity eg stasys jukna notes? seems to me the community is relatively unaware of this very nice strategy/direction, have seen very few papers on the subject yet it seems to me to be in many ways quite promising.

    also of course I sketched out an approach based on hypergraph decompositions, slice fns, monotone circuits in my blog which has still gotten no expert attn/feedback.

    • gowers Says:

      It’s not the first case of somebody trying to relate the theory of Borel sets to computational complexity. As far as I know, that was Michael Sipser. As I said elsewhere, there are worrying disanalogies between Borel sets and sets of low circuit complexity that suggest that this approach to the P versus NP problem is an extremely long shot.

  13. vznvzn Says:

    another quick thought. wonder if there is a game where one side chooses the ANDs of the circuit, the adversary player chooses ORs, and one can have DAGs instead of game trees resulting, and the analogous P=?NP theorem for this would be something like, the game can conclude (ie can be won by either player) only after a large number of plays….. apparently the game is about computing an NP function and is won when the final player applies the last move equivalent to the NP function.

    • gowers Says:

      Can you describe more precisely how this game works?

    • vznvzn Says:

      It’s really quite simple =)
      I haven’t worked this all out, but it is something like the following. The goal is to compute the NP-complete function for a specific number of input bits. The score of a player is how many “errors” remain (a simple calculation using the Hamming distance between the truth table of the partially constructed function and that of the actual function). The “board” is the circuit constructed so far, and each player adds one gate to the DAG. The key question is whether there are any theorems in game theory showing that certain games must take a large number of moves for either player to succeed; that would give a lower bound. I am not an expert on game theory.

    • gowers Says:

      Can you be more precise about what the payoff set is? That is, under what circumstances does the game stop and one of the players get declared as the winner?

    • vznvzn Says:

      Hmm, OK, on further thought this may be related to some of your musings on XORs of functions (I should perhaps look at those again). Let $f_n(x)$ be a function that is meant to “approximate” a target function $g_n(x)$, where the latter is NP-complete. Then $f_n(x)\oplus g_n(x)$ flags an “error” at input $x$ when $f_n$ is used as an approximation of $g_n$, and the total error is the sum of those bits over all inputs. This is a common pattern in monotone circuit lower-bound proofs, including Razborov’s. Note that the total error is just the Hamming distance between the truth tables of the two functions. (I suspect something like this concept also shows up in natural proofs, and I imagine it appears somewhere in coding theory too.)
      It is similar to this question:
      cnf/dnf conversion to minimize errors
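
      For concreteness, here is a minimal Python sketch of this error count, assuming $f$ and $g$ are given as functions on $n$-bit tuples; the parity/OR pair is purely illustrative.

        from itertools import product

        def total_error(f, g, n):
            """Hamming distance between the truth tables of f and g on
            {0,1}^n: the sum over all inputs x of f(x) XOR g(x)."""
            return sum(f(x) ^ g(x) for x in product((0, 1), repeat=n))

        # illustrative only: parity as the "target", OR as the "approximation"
        parity = lambda x: x[0] ^ x[1] ^ x[2]
        or3 = lambda x: x[0] | x[1] | x[2]
        print(total_error(parity, or3, 3))  # 3 inputs where the tables differ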

  14. Victor Porton Says:

    I have also created my TiddlySpace: http://portonmath.tiddlyspace.com/ – now it is about values of functions at singular points (from a novel topological point of view). Please look at my space http://portonmath.tiddlyspace.com/#%5B%5BTheory%20of%20singularities%20using%20generalized%20limits%5D%5D and comment on whether the clarity of exposition is OK. Especially bad is http://portonmath.tiddlyspace.com/#%5B%5BMore%20on%20galufuncoids%5D%5D, where my ideas are not properly set out or sorted (they need to be precisely formulated and moved to other tiddlers). Is it OK overall? Should I organize it more before I “officially” announce my wiki about singularities?

  15. Bogdan Says:

    Nice project!
    I am reading your TiddlySpace description, looking for the first “open problem” I can contribute to. In the section “A more complicated class of games” you posed the question “Open Task: Analyse this (3-SAT) game for some very simple payoff sets, such as x_1=1”.
    First, let us clarify the problem formulation. If you mean that the 3-SAT clauses players can use are GIVEN AND LISTED, and there are m of them, then your statement “rules force the game to terminate: it terminates when there is just one sequence that satisfies all the clauses” would be wrong if there are at least two sequences that satisfy all m clauses.
    So, I conclude that players can use ANY clauses they want, subject to rules 1 and 2 in the game description. Then I claim that player 1 wins your game in 4 moves:
    Move 1. x_1=1 or x_2=1 or x_3=1
    Move 2. x_1=1 or x_2=1 or x_3=0
    Move 3. x_1=1 or x_2=0 or x_3=1
    Move 4. x_1=1 or x_2=0 or x_3=0
    (no matter what player 2 does)
    After move 4, all the remaining feasible sequences have x_1=1, and player 1 is guaranteed to win.
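
    (A quick brute-force check of this claim, as a hypothetical Python sketch that is not part of the original argument: enumerate all eight assignments and verify that the only ones satisfying all four clauses have x_1=1.)

      from itertools import product

      # After Player 1's four clauses, every assignment satisfying all of
      # them has x_1 = 1, whatever Player 2 does in between.
      clauses = [
          lambda x1, x2, x3: x1 or x2 or x3,              # move 1
          lambda x1, x2, x3: x1 or x2 or (1 - x3),        # move 2
          lambda x1, x2, x3: x1 or (1 - x2) or x3,        # move 3
          lambda x1, x2, x3: x1 or (1 - x2) or (1 - x3),  # move 4
      ]
      survivors = [x for x in product((0, 1), repeat=3)
                   if all(c(*x) for c in clauses)]
      print(survivors)  # [(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
      assert all(x1 == 1 for (x1, x2, x3) in survivors)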

    • gowers Says:

      You are of course right. When I wrote that, I must have been misremembering a question I had thought about that didn’t have a simple answer like that. In fact, in my notes (written before I wrote the TiddlySpace document) I made the observation you made above. I then commented that it didn’t seem obvious which player won the game defined by the parity function, so that may be a better question. I’ll change the “open task” to that.

  16. Bogdan Says:

    Open task: think further about whether there might be some notion of “simple” strategy for which it can be shown that games with payoff sets of low circuit complexity have simple winning strategies.

    I would call a winning strategy for player i “simple” if there is an efficient algorithm (say, polynomial in 2^n) which, given the payoff set A and the sequence of moves so far, returns the next move for player i. I understand that a definition of simplicity involving an “efficient algorithm” is maybe not what you wanted, but this is exactly what I intuitively understand by “simple strategy”. For example, chess is determined, but there is no simple strategy in the sense above, and therefore the game is interesting to play. If there were one, and I knew it, I could easily win against any opponent. So, for me this seems to be THE CORRECT definition of “simple strategy”. And, by the way, your question with this definition of simplicity seems to be interesting.

    • gowers Says:

      I’ve wondered about this definition of “simple”, but it has some problems. First, the whole point of any definition of simplicity is that it should be something we can use to prove results about computational complexity. If to prove a lower bound on circuit complexity we are required to show that there is no polynomially computable strategy of some kind, then the obvious question arises of how we are going to do that, given that proving superpolynomial lower bounds is more or less the problem we started with (but perhaps harder in this case, since strategies are complicated objects to analyse).

      Secondly, if you take any monotone game, then there is a very simple optimal strategy (at least in the version of the game where each player has their own set of coordinates to play): Player I always plays 1 and Player II always plays 0. Since there are plenty of NP-complete monotone functions, this is fairly worrying.

    • Bogdan Says:

      Completely agree that proving the statement S: “every set A of low circuit complexity has an efficiently computable strategy” would not directly solve the P vs NP problem, for the reasons you described. However, it would be at least a good starting point, because, under any definition of simplicity I can imagine, a simple strategy is efficiently computable. Therefore the statement T: “every set A of low circuit complexity has a simple strategy” is much stronger than S, and so we cannot hope to prove T before we even know how to prove S.

      I do not completely agree with the second worry: given T, we would only need to show that ONE (not every) NP-complete function fails to have a simple strategy. OK, so we would need to choose a non-monotone example at that stage.

    • gowers Says:

      I agree that the second worry doesn’t rule out an approach along these lines, but I still find it a worrying sign that some of the hardest NP functions, such as the clique function, are not hard from this point of view.

      However, your first point is one that I agree with: it would be interesting to know whether functions of low circuit complexity have efficient strategies (either for the 0-set or for the 1-set). My instinct is that they don’t. If, for example, one creates a highly pseudorandom function, then it seems to me very unlikely that there would be a polynomially computable function for deciding how to play optimally (incidentally, I think it should be polynomial in n rather than 2^n, because the payoff set is not part of the input).

      However, that is not a very good argument, since the weirdness of a computable function could be matched by the weirdness of a computable strategy. For example, it might be that one could build up a circuit for the strategy in terms of a circuit for the function. So it would be good to try to find either a proof that efficient strategies exist, which would be a very interesting result I think, or a convincing heuristic argument that they don’t.

      When I get time, I think I’ll add a page suggesting this question (including your thoughts above and crediting it and them to you).

    • Bogdan Says:

      “My instinct is that they don’t.”

      If they don’t, then your current open task

      “think further about whether there might be some notion of “simple” strategy for which it can be shown that games with payoff sets of low circuit complexity have simple winning strategies.”

      may be solvable only with a “simple” strategy which is not efficiently computable…

      On the other hand, if they do, it would be a very interesting result indeed.

    • Bogdan Says:

      Hm… In the obvious game, by setting the winning set A to contain all sequences with x_2=1, we can force the second player to choose a specific move (x_2=0 in our case), and thus allow the first player to move twice in a row. Thus, we can assume that each player has k consecutive moves.
      Now, take any PSPACE-complete game, for example the Node Kayles game (where players mark vertices in an undirected graph, a player cannot choose a marked vertex or any of its neighbours, and the player who cannot make a legitimate move loses), and let G be any hard instance of this game with m vertices. Label the vertices of G with 0-1 sequences of length k. Let n=mk, and consider the following obvious game. If the first k moves of player 1 do not describe a vertex of G, then the sequence does not belong to A. Otherwise, if the next k moves of player 2 do not describe a legitimate move in the Node Kayles game, then the sequence belongs to A, and so on. As a result, the first player wins this “obvious” game if and only if the first player wins the corresponding Node Kayles game. For every given sequence it is easy to say whether it belongs to A, so A has low circuit complexity. However, there is no “simple” winning strategy unless either P=PSPACE or we define simplicity in such a way that “simple” does not imply “efficiently computable”…
      Does it make sense?
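
      For what it’s worth, here is a brute-force Python sketch of the Node Kayles rules described above; it only works for tiny graphs, which is consistent with the point of the comment, since deciding large instances is PSPACE-complete. The example graph is hypothetical.

        from functools import lru_cache

        def first_player_wins(n, edges):
            """Brute-force Node Kayles on a graph with vertices 0..n-1.
            A move marks a vertex that is neither marked nor adjacent to
            a marked vertex; the player who cannot move loses."""
            adj = {v: set() for v in range(n)}
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)

            @lru_cache(maxsize=None)
            def wins(available):
                # a position is winning iff some legal move leaves the
                # opponent in a losing position
                return any(not wins(available - {v} - adj[v])
                           for v in available)

            return wins(frozenset(range(n)))

        # toy example: on the path 0-1-2, taking the middle vertex wins
        print(first_player_wins(3, [(0, 1), (1, 2)]))  # True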

  17. porton Says:

    TiddlySpace seems to have a deficiency: it is not convenient to track changes in somebody else’s tiddly space. For example, in MediaWiki/Wikipedia there is tracking of recent changes (and, if one wants, one can see all changes by others but not by oneself). That is a pity for mathematical projects like this. Could more convenient change-tracking be added? (Yes, I’ve noticed “Recent” in the right sidebar, but that is just not convenient enough.)

  18. Gareth Rees Says:

    There’s a small omission here: in the definition of the complexity class NP, the function g must be computable in a time that is at most polynomial in n.

    Thanks — corrected.

  19. bmk Says:

    This is just a minor point regarding step (3) of what you would like to prove. Would it be any easier to establish (3) with NEXP in place of NP? If one of your goals is to show that some class is not contained in P/poly via a combination of (1) and (3), why not start with an (ostensibly) weakened version of (3)? (which would still be very interesting)

    • gowers Says:

      I don’t know whether it would be easier, but that’s an interesting suggestion. A related suggestion is to try to establish (1) with formula complexity replacing circuit complexity. That again would potentially lead to very interesting results that fell short of solving the P vs NP problem.

  20. Anonymous Says:

    Sorry if there’s a better place than this for naive questions, but I’m confused about what it means to say that a single set $A\subset\{0,1\}^n$ has polynomial (circuit) complexity, as opposed to a sequence of sets $A_n\subset\{0,1\}^n$ for all $n$.

    • gowers Says:

      Strictly speaking it doesn’t make sense for a single n. But non-strictly speaking I mean that n is arbitrary and the bound one can prove for the circuit complexity is a polynomial that does not depend on n.

  21. Gowers "discretized Borel determinacy" approach | Question and Answer Says:

    […] has recently outlined a problem, which he calls “discretized Borel determinacy,” whose solution is related to […]

  22. Anonymous Says:

    Hi everybody, just wondering: is there any news about this approach? It looks very interesting. Any results since 2014? Thanks!
