This post is a kind of postscript to the previous four. It consists of miscellaneous observations that shed some light on the proof of Borel determinacy. I’m writing it mainly for my own benefit, but there seems to be no harm in posting it, just in case anyone else finds it interesting.

#### An attempt to lift that sort of works and sort of doesn’t

One way to get some further insight into the proof of Borel determinacy is to look at some auxiliary games that don’t work as lifts. A particularly natural candidate is this. Let T be an infinite (pruned) tree, let A be a set of infinite paths in T and let G be the game G(T, A). Now define a game G′ as follows. Player I plays a move of the form (n₁, σ), where σ is a strategy for G with first move n₁. Player II plays a move of the form (n₂, τ), where τ is a strategy for G and n₂ is τ’s response to n₁. Thereafter, the two players must play the strategies σ and τ.

Clearly the outcome of this game is decided after the first two moves: if σ beats τ (that is, if the run produced when σ plays against τ lies in A) then Player I wins and otherwise Player II wins.
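Since a strategy-against-strategy pairing pins down the whole run, the situation can be sketched concretely for finite games. The following toy illustration (the depth-4 binary tree and the particular strategies are my own choices, not anything from the proof) shows that once both players have committed to strategies, the run — and hence the winner, whatever the payoff set — is fully determined:

```python
# Toy model of "play sigma against tau": a position is the tuple of moves so
# far, a strategy maps a position to the next move, Player I moves at even
# turns.  Once both strategies are fixed, the entire run is forced.

def play(sigma, tau, depth):
    """Run sigma (Player I) against tau (Player II) for `depth` moves."""
    position = ()
    for turn in range(depth):
        mover = sigma if turn % 2 == 0 else tau
        position += (mover(position),)
    return position

sigma = lambda pos: 1            # Player I always plays 1
tau = lambda pos: 1 - pos[-1]    # Player II answers with the opposite bit

print(play(sigma, tau, 4))  # -> (1, 0, 1, 0): the whole run is forced
```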

Now let’s try to map G′ strategies to G strategies. Given a strategy σ′ for Player I for G′, let its first move be (n₁, σ). Then it makes sense to let σ be the image of σ′. Does this give us the lifting property? Well, if s is a run of G with Player I playing the strategy σ, then there must be some Player II strategy τ for G (in fact there are several) such that σ ∗ τ = s (that is, if Player I plays σ and Player II plays τ then s is the sequence produced). So if Player II plays (n₂, τ) (where n₂ is the second term of s) as his first move in G′, the result of the run of G′ when projected to T will be the sequence s. So far so good.

The problem comes when we try to map Player II strategies. Let τ′ be a strategy for Player II in G′. This can be thought of as a map that takes Player I strategies σ for G to Player II strategies τ for G. (Player I’s first move is of the form (n₁, σ), but n₁ is actually determined by σ. Likewise, n₂ is determined by σ and τ.) Given such a map, how are we supposed to create a G strategy out of it?

Let’s think about the set of all possible runs of the game G′ that can result if Player II uses the strategy τ′. They are the sequences of the form σ ∗ τ′(σ). In order to be able to find a suitable image for τ′, we need to be able to find a strategy τ such that every possible run of G if Player II uses τ gives us a sequence of the form σ ∗ τ′(σ). In other words, we need Player II to have a winning strategy for this set of sequences. Note that we need to be able to do this for *every* strategy τ′ — that is, for every function from Player I strategies to Player II strategies.

Now the one thing that matters about τ′(σ) is the sequence σ ∗ τ′(σ), and the one thing we know about that sequence is that it can result if Player I plays the strategy σ. Since τ′ is arbitrary, we need to show that if B is any set of sequences with the property that for every Player I strategy σ there is at least one sequence in B that can result if Player I plays σ, then Player II has a winning strategy for B. But the assumption we are making about B is precisely that Player I does *not* have a winning strategy for the complement of B. (Proof: if σ is any strategy for Player I, there is, by assumption, a sequence not in the complement of B that can result if Player II plays appropriately.)

So we find that what we are asking for is precisely this: that if C is any set for which Player I does not have a winning strategy, then Player II has a winning strategy for the complement of C. (Here C is playing the role of the complement of B above.) But this is the statement that all sets are determined, otherwise known as the axiom of determinacy. And of course, if we have that, then we don’t need to go to any trouble to prove that all Borel sets are determined.

A small additional remark here is that, as I’ve sort of said but not fully spelt out, we could change the rules of the game G′ so that Player II plays not a strategy τ but just some sequence s that is consistent with σ. In other words, we don’t actually care about τ itself: we just care about the run σ ∗ τ.

Yet another remark is that in the argument above, the payoff set A played no role. That’s because the proof that the game G′ is decided after the first move of each player is particularly simple: the entire sequence is decided after these moves, and therefore whatever set A we take, we know after those moves which player is destined to win. In other words, the map above from G′ strategies to G strategies simultaneously lifts all the games G(T, A) to clopen games. So it gives us a neat way of proving the axiom of determinacy, using … er … the axiom of determinacy.

#### An attempt to lift that doesn’t work

In the previous example, we showed that if all games are determined, then all games can be lifted to clopen games, and indeed to games that are decided after the first move of each player.

What if we try to prove the much stronger result that every determined game can be lifted? This is stronger because we are not allowed to use the determinacy of *all* sets in the proof, as we did above in an absolutely critical way, but instead just the determinacy of the payoff set in the game we are trying to lift.

Let the game G = G(T, A) be as above. Then the initial data we have consists of T, A, and either a winning strategy σ for Player I or a winning strategy τ for Player II. We now have to devise an auxiliary game G′. I’m just going to write down the obvious thing, with no thought of whether it works — and then show that it doesn’t work.

Let’s define G′ as follows. If Player I has a winning strategy for G, then we pick one, σ say, and let a play of G′ be an arbitrary run of σ. That is, G′ is played on the tree of all paths in T that are consistent with σ, and all paths are considered wins for Player I. So this game is in fact decided before it even starts.

Does the lifting property hold? Well, there is only one possible strategy for Player I in G′, since every point at even distance from the root has just one successor. The obvious strategy to map that to is σ, and then trivially every run of G with Player I playing σ comes from a run of G′.

How about the other way round? A strategy for Player II in the game G′ can be thought of as a sequence s that is consistent with σ. What should we map that to? We need to map it to a strategy τ that guarantees that the resulting sequence will be s. But that’s obviously out of the question: Player I can easily play in a way that doesn’t produce s. (Note that in G there is no reason for Player I to apply the strategy σ. Indeed, she doesn’t even need to apply a winning strategy.)

What went wrong here, when it was all looking so good for Player I? The rough answer is that the game G′ restricts far too much what Player I can do and places no restrictions on Player II. That’s all very well when we are trying to show that Player I has a winning strategy for the set of sequences that can result from a run of G′, but when we try to show the same for Player II, the fact that Player I’s moves are so circumscribed in G′ is a disaster: it means that it is easy for Player I to make moves in G that couldn’t have come from a run of G′.

#### Another attempt to lift that doesn’t work

Suppose we try to imitate Martin’s proof that closed sets can be lifted, but show instead that *all* sets can be lifted. We define the auxiliary game G′ as follows. Player I declares a quasistrategy. Player II then declares either an infinite sequence that’s compatible with that quasistrategy and belongs to A, or a quasistrategy for the tree defined by Player I’s quasistrategy such that all possible outcomes lie in the complement of A. The difference between this and what we did before is that the sequence is infinite, rather than being a finite sequence with all its continuations in A.

Actually, this is rather similar to the first attempt. The differences are that Player I declares a quasistrategy rather than a strategy, and that, instead of sequences, Player II can play quasistrategies that end up entirely in the complement of A. But I think these will turn out to be minor variations, so I won’t discuss this after all — I’m almost certain that this is another example that works, but only if you assume the axiom of determinacy.

#### Determinacy can be thought of as a Ramsey property

There is a way of thinking of determinacy that I find quite helpful. Let us think of the payoff set A as a 2-colouring of [T], the set of infinite paths in T. To make this more vivid, I’ll reword it in colouring terms, so let’s say that we have coloured the paths in T with two colours, red and blue, the red paths being precisely the paths in A.

A typical Ramsey theorem tells us that we can find a monochromatic substructure of some kind — that is, a substructure that is nice in some way and has all its elements of the same colour. Sometimes we consider *off-diagonal* Ramsey theorems, where we want either a red substructure of one kind or a blue substructure of another kind.

Let’s define a I-tree to be a subtree T′ of T that contains the root and has the property that each vertex of T′ at even distance from the root has precisely one successor in T′, whereas for each vertex of T′ at odd distance from the root, the successors in T′ are the same as the successors in T. Every strategy σ for Player I defines a I-tree — the tree of all finite paths that can be obtained if Player I plays with the strategy σ. The strategy is a winning strategy if and only if all infinite paths in the corresponding I-tree are red.
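On a finite tree one can compute the I-tree of a strategy directly and check the “winning iff the whole I-tree is red” characterisation by brute force. The following sketch uses a toy setup of my own (a depth-4 binary tree, with “red” meaning that Player I’s second move copies Player II’s first), not anything from the proof:

```python
from itertools import product

DEPTH = 4  # full binary tree of depth 4; Player I moves at even turns

def i_tree_leaves(sigma):
    """Leaves of the I-tree of sigma: runs where I follows sigma, II is free."""
    leaves = []
    for ii_moves in product((0, 1), repeat=DEPTH // 2):
        pos, pending = (), list(ii_moves)
        for turn in range(DEPTH):
            pos += (sigma(pos),) if turn % 2 == 0 else (pending.pop(0),)
        leaves.append(pos)
    return leaves

# "Red" (payoff) runs: Player I's second move equals Player II's first.
red = lambda run: run[2] == run[1]

# The copying strategy: open with 0, then repeat II's last move.
sigma = lambda pos: pos[-1] if pos else 0

leaves = i_tree_leaves(sigma)
print(len(leaves), all(red(run) for run in leaves))  # -> 4 True
```

So here the copying strategy is winning precisely because every leaf of its I-tree is red.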

Similarly, we can define II-trees, and once we have done so, the statement “G(T, A) is determined” is equivalent to the statement, “Either there is a I-tree all of whose infinite paths are red or there is a II-tree all of whose infinite paths are blue.”

The monochromatic structures we are looking at for this off-diagonal Ramsey theorem are particularly nice sets of paths.

There are quite strong parallels here with the infinite Ramsey theorem of Galvin and Prikry. Let us write [ℕ]^ω for the set of all infinite subsets of ℕ. Then the Galvin–Prikry theorem says (or at least implies) that for every Borel subset B of [ℕ]^ω (in the product topology) there is an infinite subset X of ℕ such that either every infinite subset of X belongs to B or no infinite subset of X belongs to B. As with determinacy, to find any set that fails this theorem one needs to use the axiom of choice (or, to be more accurate, something like a non-principal ultrafilter, whose existence is a strictly weaker assumption than the axiom of choice, but most naturally proved using the axiom of choice).

#### What happens if we try to use strategies instead of quasistrategies?

Going back to Martin’s proof itself, let’s try to understand what would go wrong if the players declared strategies in the places that they actually declare quasistrategies. We have sort of seen this — there are places in the proof where we need to deem a player to have played a certain quasistrategy. But was that just a convenience, or was it essential?

I’m considering the following auxiliary game. Player I’s first move is (n₁, σ), where now σ is a *strategy*, and not just a quasistrategy. Player II either plays u, where u is a finite sequence consistent with σ, all of whose continuations lie in A, or τ, where τ is a strategy inside the tree corresponding to σ.

The first thing to note here is that τ is effectively an infinite sequence, since inside the tree corresponding to σ all of Player I’s moves are forced. So let’s reword this to say that Player II plays either a finite sequence u with all its continuations in A or an infinite sequence s that lies in the complement of A. In both cases, the sequence must be consistent with the strategy σ declared by Player I.

In the first case, the players must play along u and after that are released from all obligations. In the second case, they must play along s.

Now let σ′ be a strategy for Player I in this auxiliary game and let (n₁, σ) be its first move. What infinite sequences can arise? They are all sequences with one of the following two properties:

1. the sequence has an initial segment u consistent with σ such that all continuations of u lie in A, and the rest of the sequence is consistent with the strategy σ′, given the initial moves (n₁, σ) and u;

2. the sequence is consistent with σ and lies in the complement of A.

If Player I has a winning strategy for the first class of sequence, then she plays it, and after reaching a position u such that all continuations lie in A, she does whatever σ′ dictates.

If Player I does not have a winning strategy for the first class of sequence, then in the Martin proof, she would deem Player II to be playing the canonical quasistrategy for the complement of that class. However, that option is not open to us if we aren’t allowed quasistrategies. So she has a problem.

But maybe it isn’t a huge problem, since in G she will be required to play consistently with σ, whatever Player II does. Since σ is a strategy rather than a quasistrategy, we don’t have to make any choices, and therefore don’t have to “deem” Player II to have declared any particular sequence.

What I’m saying here is that in the second case Player I should simply play the strategy σ, but if Player II ever departs from the canonical quasistrategy and gets to a position where Player I can force a finite sequence of the first type, then Player I switches to doing that and continuing with σ′. Otherwise, play continues for ever, consistently with σ and producing an element of the complement of A. So Player I can deem Player II to have declared that element.

So it seems as though quasistrategies are not yet essential. But now let’s think about Player II. If τ′ is a strategy for Player II, then it defines a map from Player I strategies σ to finite sequences with continuations in A and infinite sequences that live in the complement of A, always consistent with σ. Let U be the set of finite sequences that can arise. In Martin’s proof, we then consider the canonical quasistrategy for avoiding U, if Player I has such a thing. Call this Σ. Then Player II’s strategy for G is roughly this. Assume that Player I is playing Σ and do whatever τ′ dictates. If Player I ever departs from Σ, then force a sequence in U.

If we are forced to consider *strategies*, then the above doesn’t make sense: τ′ doesn’t dictate a response to a quasistrategy. So let’s just think about what sequences could possibly arise. For each strategy σ declared by Player I that doesn’t produce some finite sequence in U, τ′ will give rise to an infinite sequence that’s consistent with σ.

I think that yet again we’re back in the situation where Player II does have a winning strategy for the appropriate set of sequences if we assume the axiom of determinacy, but otherwise doesn’t. To see that, just consider the case where A is empty. In this case, we never get the finite sequences — we just get for each strategy σ an infinite sequence consistent with σ, and we want to show that Player II has a winning strategy for the resulting set of sequences.

So I *think* the situation is that we need Player I to be able to play, at the very least, canonical quasistrategies for closed sets. And once that is permitted, I think it forces us to allow Player II to use quasistrategies as well. (I haven’t checked this, but at any rate the analysis for Player I above used in an important way the fact that σ was a strategy and not just a quasistrategy.)

#### Another connection with Ramsey theory

This one’s a bit more of a stretch, but I find it quite useful. It takes a while to explain.

To begin with I want to consider a simple way of reformulating Ramsey statements that is sometimes useful. Let U be a collection of subsets of a set X. We call U an *up-set* if whenever A ∈ U and A ⊆ B we have B ∈ U. Sometimes up-sets are called *monotone* set systems. Given an up-set U we can form an up-set U* by taking the set of all sets S such that S ∩ A ≠ ∅ for every A ∈ U. We call such a set S a *transversal* of U. Note that U** = U, so this is a form of duality. It is trivial that U ⊆ U**. In the other direction, if S ∉ U, then no subset of S can belong to U (since U is an up-set) and therefore the complement of S is a transversal of U that does not intersect S, so S ∉ U**.
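On a finite ground set the duality U** = U can be checked exhaustively. Here is a quick sketch (the ground set {0, 1, 2, 3} and the two generating sets are arbitrary choices of mine):

```python
from itertools import combinations

X = range(4)
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def transversals(U):
    """All sets meeting every member of U (itself an up-set)."""
    return {S for S in subsets if all(S & T for T in U)}

def up_closure(gens):
    """The up-set generated by the given sets."""
    return {S for S in subsets if any(G <= S for G in gens)}

# An up-set generated by two sets, and its double dual.
U = up_closure([frozenset({0, 1}), frozenset({2})])
print(transversals(transversals(U)) == U)  # -> True: U** = U
```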

Now let us take a typical Ramsey statement, such as van der Waerden’s theorem. The statement that for every 2-colouring of ℕ there is a monochromatic arithmetic progression of length k can be reformulated as follows. Let U be the set of all subsets of ℕ that contain an arithmetic progression of length k. If no set in U consists entirely of blue elements, then the set of red elements is a transversal of U, and conversely. Therefore, we need to prove that every transversal of U contains a progression of length k — that is, belongs to U.
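A finite instance of this reformulation can be checked by machine: for progressions of length 3 and two colours the relevant van der Waerden number is 9, so over {1, …, 9} every transversal of the up-set generated by the 3-term progressions must itself contain one. A brute-force sketch:

```python
from itertools import combinations

N = range(1, 10)  # W(3, 2) = 9: 2-colourings of {1,...,9} suffice for 3-APs
aps = [{a, a + d, a + 2 * d} for a in N for d in range(1, 5) if a + 2 * d <= 9]

def contains_ap(S):
    return any(ap <= S for ap in aps)

# S is a transversal of the up-set generated by the 3-APs iff S meets every
# 3-AP, i.e. iff the complement of S is progression-free.  The theorem, in
# transversal form: every such S must itself contain a 3-AP.
ok = all(contains_ap(set(S))
         for r in range(10) for S in combinations(N, r)
         if all(set(S) & ap for ap in aps))
print(ok)  # -> True
```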

This reformulation can be applied to all (sufficiently conventional) Ramsey statements. They tell us that every transversal of a certain up-set must contain a set of a certain kind.

Now let’s consider what it means to lift a game G = G(T, A) not just to a clopen game but to a game G′ whose outcome is decided after the first move of each player. (Recall that we showed that every closed game can be lifted to a clopen game of this kind.) I want to concentrate particularly on the condition that there should be a map σ′ ↦ σ from G′ strategies to G strategies such that every run of G consistent with σ should be the image of some run of G′ consistent with σ′.

I want to go further still and consider the case where the tree of G′ consists of certain paths of the form (a, b, s), where s is a path in T (and a and b can be thought of as “extra information” of some kind). Here a ranges over some index set and b over another.

I haven’t thought whether this is a genuine further restriction or whether it’s more or less a WLOG. But it’s the situation I want to think about.

After the first two moves a and b have been played, the possible further moves can be thought of as a subtree of T. Let us call this subtree T_{a,b}, and let T_a be the union of the trees T_{a,b}. Then we can think of the extra information a as standing for the tree T_a and each b as standing for a subtree T_{a,b} of T_a. So the way that G′ works is this. Player I plays a subtree T_a of T, Player II plays a subtree T_{a,b} of T_a, and after that the game is played in the subtree T_{a,b}.

I stress that I certainly don’t mean that Player I plays an *arbitrary* subtree of T or that Player II plays an *arbitrary* subtree of T_a. Rather, there will be some class of subtrees T_a that Player I is allowed to choose from, and for each a there will be some class of subtrees T_{a,b} that Player II is allowed to choose from. Let us call a subtree that belongs to the class it needs to belong to *valid*.

A small remark here is that we don’t actually need to think about the tree T_a. We could regard Player I as playing the set of trees {T_{a,b} : b} and Player II as choosing a tree from that set. Indeed, sometimes it is clearer to think of things this way.

It will be convenient to adopt the following definition. If T is an infinite rooted tree, then I’ll define a *I-subtree* of T to be a subtree T′ that contains the root such that every vertex of T′ at even distance from the root has exactly one successor in T′ and every vertex at odd distance has the same successors in T′ as it has in T. (Equivalently, it is the tree of finite sequences that could result from Player I playing a certain strategy.) We’ll also define a II-subtree in a similar way.

Now let us look at the lifting property. Let σ′ be a strategy for Player I for G′ and let a be its first move. Then T_a is the subtree chosen by Player I (from the class of valid trees) in which the rest of the game will take place.

What possible sequences can arise? Well, at the next move, Player II plays some b, and σ′ will give Player I a strategy for playing in the tree T_{a,b}. The set of sequences consistent with this strategy is an arbitrary I-subtree of T_{a,b}. Or rather, it is the set of infinite paths of some I-subtree S_b of T_{a,b}. And we need that however Player I chooses these I-subtrees S_b, their union will contain a I-subtree of T.

That is the same as saying the following. Let W be a pruned subtree of T such that for every b, W contains some I-subtree of T_{a,b}. Then W contains a I-subtree of T.

We can think of this as a weak Ramsey statement about the set of trees T_{a,b}. Instead of saying that every transversal of this set contains a I-subtree of T, we are assuming something stronger than the transversality property: we require that W ∩ T_{a,b} is not just non-empty, but contains a I-subtree of T_{a,b}.

I won’t do it here, but if one translates this into a statement about red-blue colourings, it says something of the following flavour: either the blue set contains a structure of one kind or the red set contains *a large subset of* a structure of another kind. It is in that sense that it is a “weak” Ramsey theorem.

This may seem a bit artificial, but actually weak Ramsey statements can be useful, as I once found out myself, when to solve a problem in Banach spaces I formulated and proved a weak Ramsey statement that said something along the following lines: if you 2-colour the sequences (of a certain kind) in a Banach space, then there is a subspace such that all the sequences in that subspace are of one colour, or Player I has a winning strategy in a certain game for producing sequences of the other colour.

So far I’ve analysed the lifting property for Player I strategies. The analysis for Player II strategies is somewhat similar, but different enough that I should give the details.

A strategy for Player II in the game G′ consists in selecting a way of choosing, for every valid subtree T_a, a valid subtree T_{a,b}, and after that a strategy for playing the game defined by T_{a,b}. (Again, I don’t really care what the payoff set is — strategies, quasistrategies, subtrees etc. are all defined without reference to the payoff set.) We can think of that strategy as a II-subtree of T_{a,b}.

The property we need is this. However Player II chooses these II-subtrees, their union will contain a II-subtree of T.

In a moment I’ll state as concisely as I can the two weak Ramsey properties that we require of our sets of valid subtrees in order to have a lift of anything at all. But before I do that, let me now comment on where payoff sets fit in. They add a huge extra constraint: if A is the payoff set, then for every valid subtree T_{a,b} we must have either that every infinite path of T_{a,b} lies in A or that none does. So the question is whether we can combine this constraint with the weak Ramsey constraints. In the next section, I’ll give a couple of simple examples of classes of valid subtrees that satisfy the Ramsey constraint.

#### Summary of the conclusions reached in the previous section

Let T be a rooted pruned tree. Suppose we define an auxiliary tree T′ as follows. Its first layer of vertices consists of labels a that stand for subtrees T_a of T (where by subtree I mean pruned subtree that contains the root). Then each successor b of a is a label that stands for a subtree T_{a,b} of T_a. The descendants of each b are given by a copy of T_{a,b}.

We are interested in the following question. Under what conditions is it possible to map every strategy σ′ for the game played on T′ to a strategy σ for the game played on T such that every run of the latter game consistent with σ is the image (under the obvious map) of a run of the former consistent with σ′? The conclusion I reached was this.

1. For every Player I strategy to be mappable in this sense we need that for every a the set system {T_{a,b} : b} has the following weak Ramsey property: every tree W that contains a I-subtree of T_{a,b} for every b contains a I-subtree of T.

2. For every Player II strategy to be mappable in this sense we need that if for each a we pick some b and a II-subtree of T_{a,b}, then the union of the chosen II-subtrees contains a II-subtree of T.

Here are a couple of extreme examples to illustrate this. First, let’s take the case where the trees T_a are all possible I-subtrees of T and the trees T_{a,b} are all possible infinite paths in T_a. We have looked at this case already, but in a different language.

Trivially, if W is a subtree of T and all the infinite paths in T_a belong to W, then W contains a I-subtree of T, since it actually contains the whole of T_a.

As for the second property, if for each I-subtree T_a we can find an infinite path in T_a that’s a subset of a tree W, then Player I does not have a winning strategy for the set of paths that avoid W, which, if we allow ourselves the axiom of determinacy, tells us that Player II has a winning strategy for keeping the path inside W, which implies that W contains a II-subtree of T and gives us the required property.

So as we saw earlier, to show that this system of trees works requires the axiom of determinacy.

As a second example, let’s take the case where there is only one subtree T_a and it’s the whole of T, and where the subtrees T_{a,b} are all possible II-subtrees of T.

Now the premise of the first weak Ramsey property is that every II-subtree of T contains a I-subtree that is a subset of a certain tree W. But a I-subtree of a II-subtree is just an infinite path, so this says that W contains an infinite path of every II-subtree of T — that is, Player II cannot force the path to avoid W. Determinacy gives us that W contains a I-subtree, as required. Without determinacy we can’t conclude that it does.

The second property says in this case that if we choose a II-subtree of a II-subtree of T, then it must contain a II-subtree of T, which is trivial, since the only II-subtree of a given II-subtree is that II-subtree itself.

#### What happened to the condition that what σ does for the first n moves must depend only on what σ′ does for the first n moves?

I have to admit that I had rather forgotten about that. I now want to think about how much difference it makes if we add it in. In particular, I’d like to think about the example where Player I chooses an arbitrary strategy and Player II chooses an arbitrary sequence consistent with that strategy — the game where we used the axiom of determinacy to prove the lifting property.

Given a Player II strategy τ′ for G′, here’s how we “defined” its image τ. We noted that τ′ gave us a map from Player I strategies σ (for G) to sequences compatible with those strategies, and that the axiom of determinacy implies that the resulting sequences must give us a set for which Player II has a winning strategy (since Player I trivially hasn’t got a winning strategy for the complement). The strategy τ depends only on the map, which depends only on what τ′ does for Player II’s first move. So it seems as though we are safe here.

That’s a relief. The situation for Player I’s strategies is even simpler. Recall that Player I just plays in G the strategy σ that she declares in her first move of G′. So once again, what σ does for the first n moves depends only on what σ′ does for the first move.

#### How do the various conditions sit in tension with each other?

In this section I want to do a rather simple exercise: I want to consider various potential monotonicity statements. Here once again are the properties I am interested in of the sets of subtrees T_{a,b}.

1. For every a the set system {T_{a,b} : b} has the following property: every tree W that contains a I-subtree of T_{a,b} for every b contains a I-subtree of T.

2. If for each a we pick some b and a II-subtree of T_{a,b}, then the union of the chosen II-subtrees contains a II-subtree of T.

I want to answer questions like, “Does it help to have more sets {T_{a,b} : b}?” and “Does it make things easier if the trees are bigger?”

For property 1, since we require it to hold for every set system {T_{a,b} : b}, it is easier if there are fewer such sets. Also, if we add more trees to one of those sets, it makes the premise harder to satisfy, and therefore makes the conclusion more likely to hold. This again is helpful for property 1.

The monotonicity with respect to the trees themselves is a subtler matter. (In fact, I wrote an earlier draft of this post where the subtlety escaped me. I’ve decided in this instance that it is too much to expect someone to read through a whole lot of mistaken text before I come clean and admit that it is mistaken. So what you are getting here is not my actual thought processes but a later tidying up. But had I been more careful in the first place, these might have been my thought processes, I suppose.) The question we are asking is this. For a fixed tree W, what makes it easier for W to contain a I-subtree of T_{a,b} and what makes it harder? Basically, it is easier if Player I has more moves available but *harder* if Player II has more moves available.

That observation motivates the definition of two orderings on trees. Let’s write T₁ ≤_I T₂ if T₁ is a subtree of T₂ with the following property: given any path in T₂, if it ever leaves T₁, then the first point at which it does so is at an odd distance from the root — that is, T₂ offers Player I a move that she does not have in T₁. Similarly, T₁ ≤_II T₂ if the same is true but with “odd” replaced by “even” (or Player I replaced by Player II). I’ll call T₂ *I-larger* (or *II-larger*) than T₁ if T₁ ≤_I T₂ (or T₁ ≤_II T₂).
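For finite trees these two orderings are easy to check mechanically. In the following sketch (the encoding of a tree as a prefix-closed set of move-tuples is my own choice, not notation from the post), a vertex at depth d is a tuple of d moves, so an odd-depth vertex is one reached by a Player I move:

```python
def first_exits(T1, T2):
    """Vertices of T2 outside T1 whose parent lies in T1."""
    return [v for v in T2 - T1 if v[:-1] in T1]

def leq_I(T1, T2):
    """T1 <=_I T2: any branch of T2 first leaves T1 at odd depth,
    i.e. only Player I gains extra moves in passing from T1 to T2."""
    return T1 <= T2 and all(len(v) % 2 == 1 for v in first_exits(T1, T2))

def leq_II(T1, T2):
    """T1 <=_II T2: any branch of T2 first leaves T1 at even depth."""
    return T1 <= T2 and all(len(v) % 2 == 0 for v in first_exits(T1, T2))

# Full binary tree of depth 2, and the subtree in which Player I's
# first move is forced to be 0.
T2 = {(), (0,), (1,), (0, 0), (0, 1), (1, 0), (1, 1)}
T1 = {(), (0,), (0, 0), (0, 1)}

print(leq_I(T1, T2), leq_II(T1, T2))  # -> True False
```

Here the big tree is I-larger than the small one (only Player I’s options were cut down), but not II-larger.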

For property 1 to be more likely to hold, we want the premise to be less likely to hold, so we want W to be less likely to contain a I-subtree of T_{a,b}. Therefore, we prefer the trees T_{a,b} to be I-smaller and II-bigger.

Now let us turn our attention to property 2. This time it is helpful if there are more sets {T_{a,b} : b}. It is also helpful if the sets are smaller, since that makes it harder to create a counterexample to property 2. And it is also helpful if II-subtrees are harder to come by, for which we prefer the trees T_{a,b} to be II-smaller and I-bigger.

In every case, the monotonicity goes in opposite directions for properties 1 and 2. This is reassuring, since otherwise there would probably be some “collapse” where we would say something like, “Without loss of generality all the T_{a,b} are equal to T,” and end up with a trivial definition.

#### Why do we need the lemma that closed sets can be lifted? (Section added later.)

Since putting up this post earlier today I have had the following thought. Every open set is a countable union of basic open sets, and every basic open set is also closed. Since basic open sets are clopen, they can trivially be lifted — to themselves. So why not start the “induction” at the basic open sets, and thereby avoid the need for the fairly complicated lemma that tells us that closed sets can be lifted to clopen sets?

It took me a while to realize the answer to this. It comes in the innocuous-looking final step of the argument for dealing with countable intersections. Recall that the technique was to find a *simultaneous* lifting (or rather -lifting) of all the sets to be intersected, so that they all become clopen. Then we argue that their intersection is at least closed, so it too can be lifted, by the complicated lemma. And that gives us our lift for the original intersection.

So the complicated lemma about lifting closed sets is essential at the point where we argue that an intersection of the clopen sets coming from the simultaneous lifting can be lifted.

I find this a useful point to highlight, since it shows that the lemma (lifting closed sets) is not just there to deal with the base case — it is driving the inductive step as well. So one can view the proof as a multiple (indeed, transfinitely multiple) application of that lemma.

September 20, 2013 at 1:47 am

I’ve really enjoyed this series of posts, but then, I do descriptive set theory. One thing I’m still curious about: what got you looking at Borel determinacy in the first place?

September 24, 2013 at 2:55 pm

Something like the lifting of closed sets must be invoked at every step of the induction, because of Friedman’s theorem on the complexity of Borel determinacy.