## Determinacy of Borel games IV

To recap briefly, we have defined a notion of lifting from one game to another and embarked on a proof that every Borel game can be lifted to a clopen game. The stages of that proof are to show that every closed game can be lifted to a clopen game, and that the set of games that can be lifted is closed under countable unions. It is straightforward to show that it is also closed under taking complements, so that will prove that all Borel games can be lifted to clopen games. As we saw earlier, if a game can be lifted to a determined game, then it is itself determined. Since clopen games are determined, this will be enough.

We are in the middle of the proof that closed games can be lifted to clopen games. We have defined the game to which we will lift, and shown how to map Player I strategies for the auxiliary game $G'$ to strategies for the original game $G$. So the first thing to do in this post is to show how to map Player II strategies for $G'$ to Player II strategies for $G$.

Recall that I defined the game $G'$ as follows. Let $G$ be the game $G(T,A)$ (that is, the game is played on the infinite tree $T$ and $A\subset[T]$ is the set of infinite paths for which Player I wins). Then Player I begins by playing a pair $(x_1,\sigma)$, where $\sigma$ is a quasistrategy for $G$. Then Player II either plays $(x_2,u)$, where $u$ is a finite sequence that is consistent with $\sigma$ and all of whose continuations belong to $A^c$, or he plays $(x_2,\tau)$, where $\tau$ is a quasistrategy for the game restricted to the tree that corresponds to $\sigma$ (which is the tree of all possible finite sequences that can result if Player I plays consistently with $\sigma$), such that all runs consistent with $\sigma$ and $\tau$ result in wins for Player I. In the first case, both players play along the sequence $u$ and then Player I continues playing consistently with $\sigma$. In the second case, Player I continues playing consistently with $\sigma$ and Player II continues playing consistently with $\tau$. Player I wins if and only if the eventual sequence $(x_1,x_2,x_3,\dots)$ belongs to $A$.
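For reference, Player II's two options can be summarized in symbols. The notation here is mine rather than anything official: $T_\sigma$ denotes the tree of positions consistent with $\sigma$, $[u]$ the set of infinite paths extending $u$, and $[\tau]$ the set of runs consistent with $\tau$.

```latex
% Player II's legal first moves in G', in response to (x_1, \sigma):
%
% Option (i): exhibit a position from which Player II has already won
% (every continuation of u avoids A):
(x_2,u):\qquad u \in T_\sigma \text{ finite},\qquad [u] \subseteq A^c.
%
% Option (ii): concede, exhibiting a quasistrategy that traps all play in A:
(x_2,\tau):\qquad \tau \text{ a quasistrategy on } T_\sigma,\qquad
[T_\sigma] \cap [\tau] \subseteq A.
```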

#### Mapping Player II strategies to Player II strategies

Let $\sigma'$ be a Player II strategy for $G'$. We need to find a strategy $\psi(\sigma')$ for $G$, defined in such a way that what it does to sequences of length at most $n$ depends only on what $\sigma'$ does to sequences of length at most $n$, and such that every possible run of the game in which Player II uses the strategy $\psi(\sigma')$ is a sequence that can arise from a run of $G'$ in which Player II uses $\sigma'$.

In the previous post we ended up with a useful way of thinking about this. Let $B(\sigma')$ be the set of all sequences $(x_1,x_2,\dots)$ that could possibly arise if Player II uses the $G'$ strategy $\sigma'$. Then we need to prove that Player I does not have a winning strategy for obtaining a sequence outside $B(\sigma')$.

At first this looks odd, since it doesn’t mention $A$, but the point is that $\sigma'$ isn’t necessarily a winning strategy. The definition of lifting is set up in such a way that if we have a winning strategy in $G'$, then it must map to a winning strategy in $G$. Therefore, we don’t have to think about this when verifying the lifting property.

Why is this necessary for what we need to prove? Well, if Player I plays a strategy $\lambda$ that guarantees to produce a sequence outside $B(\sigma')$, then, by definition, whatever strategy Player II plays, the resulting run of the game $G$ will not be one that results from a play of the game $G'$ with Player II using the strategy $\sigma'$.

An obvious way to go about proving that Player I does not have such a strategy is to prove that Player II has a winning strategy for producing a sequence in $B(\sigma')$, and that is what we shall do.

What is the set $B(\sigma')$? Well, a Player II strategy for $G'$ works as follows. If Player I plays a quasistrategy $\sigma$, then Player II must decide, based on that (and the initial point $x_1$) whether to play a pair of the form $(x_2,u)$ or a pair of the form $(x_2,\tau)$, and having done that he must decide how to play consistently with the choice he has made. So $B(\sigma')$ is the set of all sequences $(x_1,x_2,x_3,\dots)$ such that there exists a quasistrategy $\sigma$ (for $G$) such that $(x_1,\sigma)$ is a valid move for Player I and one of the following two conditions holds. Either (i) $\sigma'$ dictates that Player II’s next move is $(x_2,u)$, where $u$ is some initial segment of $(x_1,x_2,x_3,\dots)$, and the rest of the sequence is played consistently with $\sigma$ and what $\sigma'$ dictates, or (ii) $\sigma'$ dictates that Player II’s next move is $(x_2,\tau)$, and that the rest of the sequence should be consistent with $\sigma$, $\tau$ and what $\sigma'$ dictates.
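In symbols, and with the warning that the bookkeeping of "what $\sigma'$ dictates" is spelt out properly in the prose above rather than in the formula, $B(\sigma')$ is roughly the following set.

```latex
B(\sigma') = \Bigl\{(x_1,x_2,\dots)\in[T] :\ \exists\,\sigma
  \text{ with } (x_1,\sigma) \text{ a legal move, such that either}
  \text{ (i) } \sigma' \text{ replies } (x_2,u) \text{ with } u
  \text{ an initial segment of } (x_1,x_2,\dots), \text{ or}
  \text{ (ii) } \sigma' \text{ replies } (x_2,\tau),
  \text{ the rest of the sequence being consistent with } \sigma
  \text{ and } \sigma', \text{ and also with } \tau \text{ in case (ii)}\Bigr\}.
```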

We run into a serious difficulty here. A natural approach is to define $U$ to be the set of all finite sequences that can result from case (i). That is, the set of all $u$ such that there exists an opening move $(x_1,\sigma)$ in $G'$ that would cause $\sigma'$ to play the move $(x_2,u)$. (Here, $x_2$ is the second term of $u$.) Then we can say what happens if Player II has a winning strategy for producing a sequence in $U$, and what happens if he doesn’t have one.

But suppose he does have one. The idea would be to play the game in a way that guarantees that eventually such a sequence $u$ will be reached, after which he is guaranteed to win the game $G$, and then do pretty much anything. He can then deem Player I to have played the move $(x_1,\sigma)$, where $\sigma$ is a quasistrategy that causes $\sigma'$ to respond with $(x_2,u)$. But the “doing pretty much anything” part is not an option: Player II needs the rest of the sequence to be consistent with $\sigma$, and there is no need for Player I to continue to play consistently with this quasistrategy.

On spotting this difficulty, I realized that I had not properly understood the definition of the game $G'$. I have deliberately not gone back and changed the definition to the correct one, since these mistakes are useful for my purposes: they help to explain why the definition is as it is. The right definition of the game $G'$ is in fact this.

As I’ve already said, Player I’s first move is of the form $(x_1,\sigma)$, where $\sigma$ is a quasistrategy, and Player II’s first move is either of the form $(x_2,u)$ or of the form $(x_2,\tau)$, where all continuations of $u$ belong to $A^c$ and all sequences consistent with $\sigma$ and $\tau$ are in $A$. However, if Player II plays a move of the first kind, then after helping to create the sequence $u$, Player I is not obliged to play consistently with $\sigma$. A move of type $(x_2,u)$ releases Player I from that obligation. However, if Player II plays a move of type $(x_2,\tau)$, then both players must play consistently with the quasistrategies that they have declared.

This adjustment to the definition of $G'$ deals with the difficulty above. If Player II has a winning strategy for obtaining a sequence $u\in U$, then he can deem Player I to have played a move $(x_1,\sigma)$ that would have provoked the response $(x_2,u)$ (given that the strategy $\sigma'$ is being used for $G'$). Once the sequence $u$ has finished, as long as Player II does what $\sigma'$ tells him to do in the game $G'$ (given the moves so far, and assuming that the first two moves were $(x_1,\sigma)$ and $(x_2,u)$), the resulting sequence is indeed one that comes from a run of $G'$ with the strategy $\sigma'$ being employed. The run is precisely the one that continues after $u$ with Player I playing in $G'$ whatever she actually chooses to play in $G$.

What if Player II does not have a winning strategy for obtaining a sequence in $U$? In that case, he must again imagine a game of $G'$ going on on the side. In this game, he will want to deem Player I to have played a move of the form $(x_1,\sigma)$ that will be followed (if he plays the strategy $\sigma'$ in $G'$) by a move of the form $(x_2,\tau)$. Then he will want Player I’s subsequent moves to be consistent with $\sigma$. Of course, they may not be, so if they aren’t, then he will want to get maximum advantage from this somehow.

There is a natural candidate for $\sigma$. If Player II has not got a winning strategy for sequences in $U$, then (by the Gale–Stewart theorem — the result that open games are determined) Player I has a winning strategy for sequences not in $U$. Better still, she has a canonical quasistrategy for this game, namely the quasistrategy “don’t move to a winning position for Player II”. This quasistrategy has a very useful maximality property that we shall exploit, which is that if Player I ever departs from it, then Player II immediately has a winning strategy. Roughly speaking, this means that Player II can deem Player I to be playing according to the canonical quasistrategy, and if she ever makes a move that is inconsistent with that quasistrategy, then Player II can change his mind and deem Player I to have played a quasistrategy that provokes a sequence in $U$.
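The canonical quasistrategy can be described explicitly. Writing $W$ for the set of positions from which Player II has a winning strategy for reaching $U$ ($W$ is my notation, not anything from the posts above), it is the following.

```latex
% From any position p \notin W, Player I may play any move x that keeps the
% position outside W:
\lambda(p) = \{\,x : p^\frown x \notin W\,\}.
% Maximality: if Player I instead plays some x with p^\frown x \in W, then by
% the definition of W, Player II has a winning strategy for reaching U from
% the new position.
```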

Let me say that more precisely. If Player II does not have a winning strategy for the set $U$, then let $\lambda$ be the canonical quasistrategy for Player I in the game defined by $U$ (by which I mean the closed game in which Player I aims to create a sequence with no initial segment in $U$). For as long as Player I plays $G$ consistently with the quasistrategy $\lambda$, Player II will play $G$ as follows: play the game $G'$ as if Player I had begun with the move $(x_1,\lambda)$ (where $x_1$ is the move that Player I begins with in $G$) and play the corresponding points in $G$. Note that the move $(x_1,\lambda)$ will not permit Player II to play a pair of the form $(x_2,u)$ in $G'$: any such $u$ would belong to $U$ by the definition of $U$, but it would also have to be consistent with $\lambda$, which is a quasistrategy that explicitly avoids ever reaching a position in $U$.

If at any stage, Player I plays a move that is not consistent with $\lambda$ (and this could even be the very first move $x_1$), then Player II is in a winning position for creating a sequence in $U$. But that’s great! Player II can revert to the original plan, get to a sequence $u\in U$, deem Player I to have played a move $(x_1,\sigma)$ that would have provoked the move $(x_2,u)$, and so on.

The point here is this. At some point, Player II has to settle on an initial move for Player I in the game $G'$. There is nothing to stop him making a decision about this and later changing his mind. However, he mustn’t do this infinitely many times, since then it is no longer clear that there is a quasistrategy $\sigma$ that can be deemed to be part of Player I’s first move. But in the argument above, Player II changed his mind at most once, and was therefore OK.

I’ve more or less spelt this out already, but let me say explicitly what happens if Player I does play consistently with $\lambda$. In that case, Player II plays a game of $G'$ on the side, responding to the move $(x_1,\lambda)$ and all subsequent moves that Player I makes in $G$ (which can be interpreted as moves in $G'$). Then he plays the corresponding moves in $G$ (where “corresponding” means “the same”, except in the case of the first move $(x_2,\tau)$, which involves forgetting the quasistrategy $\tau$ and just playing the point $x_2$).

#### $k$-liftings

At the end of the previous post, I mentioned that we would need a generalization of the result that we have just finished proving, one that is amenable to diagonalization. We will find that we want to build a single lifting out of an infinite sequence of liftings, and to do that we need those liftings to “settle down”, just as if we want a sequence of 01-sequences to converge in the product topology, we need to be able to say, for every $k$, that all the sequences from some point on agree in their first $k$ terms.

A $k$-lifting of a game $G$ to a game $G'$ is a lifting with the following additional properties, which basically say that $G'$ is the same as $G$ for the first $2k$ moves.

1. The first $2k$ levels of the tree $T'$ on which $G'$ is played are identical to the first $2k$ levels of the tree $T$ on which $G$ is played.

2. For every $x\in T'$ at distance at most $2k$ from the root, $\phi(x)=x$.

3. If $\sigma'$ is a strategy for $G'$, then for the first $2k$ moves $\psi(\sigma')$ must do the same in $G$ as $\sigma'$ does in $G'$.
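In compact notation these conditions read as follows. The shorthand $T\restriction 2k$ for the first $2k$ levels of $T$ is mine, and I am reading condition 3 as applying to the first $2k$ moves.

```latex
% Conditions for a k-lifting
% (\phi maps T' to T, \psi maps G' strategies to G strategies):
T'\restriction 2k = T\restriction 2k, \qquad
\phi(x) = x \quad \text{for all } x \in T' \text{ with } |x| \le 2k, \qquad
\psi(\sigma')(p) = \sigma'(p) \quad \text{for all positions } p
\text{ of length} < 2k.
```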

We have shown that every closed game has a $0$-lifting to a clopen game. We actually need it to have a $k$-lifting for every $k$. This is a straightforward adaptation of what we have already proved, so I won’t say very much about it.

But I will say a little. The game $G'$ is defined as follows. As before, the tree $T'$ is almost the same as $T$, but contains a little more information. However, this time the extra information is provided by the two players not on their first moves, but at moves $2k-1$ and $2k$. (The case discussed earlier was the case $k=1$.) Thus, after the two players have played $(x_1,\dots,x_{2k-2})$, Player I plays a move of the form $(x_{2k-1},\sigma)$, where $\sigma$ is a quasistrategy, and Player II plays a move either of the form $(x_{2k},u)$, where $u$ is a finite sequence that begins with $(x_1,\dots,x_{2k-1})$ and has all its continuations in $A^c$, or of the form $(x_{2k},\tau)$, where $\tau$ is a quasistrategy that will guarantee that Player II loses, assuming that Player I plays consistently with $\sigma$. In the first case, the two players must play along $u$ and thereafter can play however they like, and in the second case they must play consistently with their quasistrategies.

The construction of the map $\psi$ taking $G'$ strategies to $G$ strategies is basically the same as before, except that we play the first $2k-2$ moves of the game before getting started.

#### Limits of $k$-liftings

The rough idea of how to prove that a countable union of “liftable” games is a liftable game is this. Let $T$ be a tree and let $A_1,A_2,A_3,\dots$ be the sets corresponding to a whole lot of games on $T$ that belong to Borel classes for which we have already proved determinacy. We then lift to a tree $T_1$ that makes $A_1$ clopen. The inverse images of the remaining sets under the continuous map that takes $T_1$ to $T$ belong to the same Borel classes as before (since these are preserved by inverse images under continuous maps). We then repeat the process to obtain $T_2$, $T_3$, and so on.

The idea is to create a lifting for which all the games are simultaneously clopen. If we just had a finite union, that would be straightforward: we would just repeat the process $n$ times. The resulting lifting would give us a union of $n$ clopen sets, which would be clopen. (There are little details to check, such as that a composition of liftings is a lifting.) However, we have a countable union, so we get a sequence $G\leftarrow G_1\leftarrow G_2\leftarrow G_3\leftarrow\dots$, where I’m using $\leftarrow$ to mean “can be lifted to”. Therefore, we need to construct some kind of inverse limit of all these lifted games. That’s where $k$-liftings come in.

In this section I’ll show how this inverse limit works. After that, the proof will be almost finished.

Suppose then that we have a sequence $G_0\leftarrow G_1\leftarrow G_2\leftarrow G_3\leftarrow\dots$, where for each $i$, $G_i$ is an $i$-lifting of $G_{i-1}$. (Later we’ll need to add $k$ to all these $i$s, but again I prefer to prove the initial case to keep the notation simpler and wave my hands when it comes to the general case.) We now want to build a game $G_\infty$ that is simultaneously a lifting of all the $G_i$.

Incidentally, the small fact I mentioned earlier — that compositions of liftings are liftings — is indeed easy to prove. Suppose, for example, that $G\leftarrow G'\leftarrow G''$ and $\sigma$ is a strategy for $G''$. Then $\sigma$ maps first to a $G'$ strategy $\tau$ and then to a $G$ strategy $\rho$. Given a run of $G$ where $\rho$ is played, there exists a run of $G'$ where $\tau$ is played that maps to it, and then a run of $G''$ where $\sigma$ is played that maps to that.

By the definition of $k$-liftings, we have a sequence of trees $T\leftarrow T_1\leftarrow T_2\leftarrow\dots$, where for each $i$ the first $2i$ levels of $T_i$ are the same as the first $2i$ levels of $T_{i-1}$. We therefore have an obvious definition of a limiting tree $T_\infty$: its first $2k$ levels are given by the first $2k$ levels of $T_k$. Then the first $2k$ levels of every tree from $T_k$ onwards will agree with the first $2k$ levels of $T_\infty$.

How do we map $T_\infty$ to one of the trees $T_i$? Well, let us write $T^k$ for the restriction of a tree $T$ to the first $2k$ levels. Then we need a way of mapping $T_\infty^k$ to $T_i^k$. This we do by mapping to $T_k^k$ using the “identity” mapping (i.e., using the fact that the first $2k$ levels of $T_\infty$ are the same as the first $2k$ levels of $T_k$) and then composing that with the maps that take $T_k$ to $T_{k-1}$ to $T_{k-2}$ etc. all the way down to $T_i$. (I could summarize what I’ve just said by saying “take the inverse limit in the obvious way”. I probably don’t even need the words “in the obvious way”, but I’m trying to get away with a slightly imprecise understanding of the notion of inverse limits.)
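Schematically, and writing $\phi_j\colon T_j\to T_{j-1}$ for the tree maps of the liftings ($\phi_j$ is my notation), the projection from $T_\infty$ to $T_i$ looks like this on the first $2k$ levels, for any $k\ge i$.

```latex
% The limit tree is defined level by level: T_\infty^k = T_k^k for every k,
% which is consistent because T_{k'}^k = T_k^k whenever k' \ge k.
%
% Projection to T_i, restricted to the first 2k levels:
T_\infty^k = T_k^k
  \xrightarrow{\ \phi_k\ } T_{k-1}^k
  \xrightarrow{\ \phi_{k-1}\ } \cdots
  \xrightarrow{\ \phi_{i+1}\ } T_i^k.
```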

Now suppose we are given a strategy $\sigma$ for $T_\infty$. (Note that I don’t need to say anything about payoff sets — a strategy is defined just in terms of the tree that the game is played on, regardless of which outcomes are judged to be wins for which player.) We need to map it to a strategy for $T$.

Let us write $\sigma^k$ for the part of the strategy that says what $\sigma$ does for the first $2k$ moves of the game (and thus the first $k$ moves of the player whose strategy it is). Then it is straightforward to map $\sigma^k$ to a strategy for the first $2k$ moves of the game $G_i$, whenever $i\geq k$: it just maps to the “same” strategy. And what about when $i<k$? In that case we have a map $\psi_{k,i}$ that takes strategies for $T_k$ to strategies for $T_i$. Moreover, what $\psi_{k,i}(\rho)$ does for the first $2k$ moves depends only on what $\rho$ does for the first $2k$ moves (at last we see this property being used in an absolutely critical way, though it is a pretty natural property anyway), so it makes sense to talk about $\psi_{k,i}(\sigma^k)$.

So the natural definition for a strategy $\tau$ for $G$ built out of the strategy $\sigma$ for $G_\infty$ is as follows. If you want to know what $\tau$ does for the first $2k$ moves of $G$, you take the strategy $\sigma^k$, consider it as a strategy for the first $2k$ moves of $G_k$, and then map it to a strategy $\tau^k$ for the first $2k$ moves of $G$ using the map that lifts $G$ to $G_k$ (which is a composition of maps that lift $G_i$ to $G_{i+1}$).
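Written out (with $\psi_i$ denoting the strategy map of the lifting of $G_{i-1}$ to $G_i$; the notation is mine), the definition is as follows.

```latex
% The first 2k moves of \tau are obtained by pushing \sigma^k down the chain
% of liftings:
\tau^k = (\psi_1\circ\psi_2\circ\cdots\circ\psi_k)(\sigma^k).
% This makes sense because each \psi_i determines the first 2k moves of its
% output from the first 2k moves of its input alone.
```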

In a similar way we can define a strategy $\tau_i$ for each game $G_i$ (just by regarding $G_i$ as the beginning of the sequence and doing exactly what we did above — a $(k+i)$-lifting is in particular a $k$-lifting).

Of course, there is something to check if we define it this way: we need to know that what we claim the strategy does for the first $2k$ moves is consistent with what we claim it does for the first $2j$ moves when $j<k$. In other words, we need to check that $\tau^j$ is the restriction of $\tau^k$ to the first $2j$ moves. But this is indeed the case: to work out $\tau^j$ we take the strategy $\sigma^j$, consider it as a strategy for the first $2j$ moves of $G_j$, and map that to a strategy for the first $2j$ moves of $G$. Instead of the first stage, we could take the strategy $\sigma^k$, consider it as a strategy for the first $2k$ moves of $G_k$, map that to a strategy for the first $2k$ moves of $G_j$, restrict it to a strategy for the first $2j$ moves, and then map it to a strategy for the first $2j$ moves of $G$. The result would be the same as both of the partial strategies that we want to show are equal.
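The check in the previous paragraph amounts to a short chain of equalities. Here $\psi_i$ denotes the strategy map of the lifting of $G_{i-1}$ to $G_i$ and $\restriction 2j$ denotes restriction to the first $2j$ moves (both pieces of notation are mine); the third equality uses the fact that the strategy map of a $j'$-lifting with $j'>j$ does not change the first $2j$ moves, so the inner composite agrees with $\sigma^k$, and hence with $\sigma^j$, there.

```latex
\tau^k \restriction 2j
  = (\psi_1\circ\cdots\circ\psi_k)(\sigma^k)\restriction 2j
  = (\psi_1\circ\cdots\circ\psi_j)
      \bigl((\psi_{j+1}\circ\cdots\circ\psi_k)(\sigma^k)\bigr)\restriction 2j
  = (\psi_1\circ\cdots\circ\psi_j)(\sigma^j)\restriction 2j
  = \tau^j.
```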

The same argument shows that all the strategies $\tau_i$ are well-defined.

How about the lifting property? Let’s suppose we run the game $G$ according to this strategy $\tau$, producing a path $x=(x_1,x_2,x_3,\dots)$ in $T$. We would now like to find a path in $T_\infty$ consistent with $\sigma$ that maps to $x$.

Recall that the first $2k$ levels of $T_\infty$ are the same as the first $2k$ levels of $T_k$, and that for each $k$ the lifting from $T_{k-1}$ to $T_k$ is a $k$-lifting. Therefore, to find the path we seek, we simply lift $x$ to a path in $T_1$ that is consistent with the strategy $\tau_1$, lift that to a path in $T_2$ that is consistent with the strategy $\tau_2$, and so on. The first $2k$ points in the path do not change after we have reached $T_k$, so we end up with a well-defined limiting sequence $(y_1,y_2,y_3,\dots)\in[T_\infty]$. For every $k$, the first $2k$ terms of this sequence are consistent with the strategy $\sigma^k$, which tells us precisely that the sequence as a whole is consistent with $\sigma$.
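Schematically, writing $x^{(i)}$ for the lifted run in $[T_i]$ (so $x^{(0)}=x$; the superscript notation is mine), the stabilization works as follows.

```latex
% Each x^{(i)} is a run of G_i consistent with \tau_i that maps to x^{(i-1)}.
% Since the tree map of the i-lifting is the identity on the first 2i levels,
% lifting does not disturb the first 2i coordinates, so
x^{(i)}_m = x^{(k)}_m \quad \text{for all } i \ge k \text{ and } m \le 2k,
% and hence y_m := x^{(k)}_m (for any k with 2k \ge m) is well defined and
% gives y = (y_1, y_2, \dots) \in [T_\infty].
```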

The above argument may look slightly complicated, but it’s really the definitions that are slightly complicated: there isn’t any point where the proof takes a surprising turn.

An additional remark we need to make is that because we need $k$-liftings in our inductive hypothesis, we need them in the conclusion as well. This can be achieved as follows. What we have shown is that if each $T_{i-1}$ is $i$-lifted to $T_i$, then every $T_i$ is lifted to $T_\infty$. A simple modification to the argument shows that if each $T_{i-1}$ is $(k+i)$-lifted to $T_i$, then every $T_i$ is $k$-lifted to $T_\infty$.

#### Dealing with countable unions

The rest of the argument is now quite easy. Suppose we have a tree $T$ and sets $A_1,A_2,A_3,\dots$ all of which give rise to games $G(T,A_i)$ that can be lifted to clopen games. Suppose also that we know that any inverse image under a continuous map of one of these sets has the same property. (This is true in particular if $A_i$ belongs to some Borel class for which we have proved that all sets in the class can be lifted appropriately.) We then construct a sequence of games $G\leftarrow G_1\leftarrow G_2\leftarrow\dots$ as follows. First we 1-lift $G$ to $G_1$ in such a way that $A_1$ becomes a clopen set. All the other $A_i$ are mapped to continuous inverse images of themselves in the new tree $T_1$. We then 2-lift $G_1$ to $G_2$ in a way that makes $A_2$ (or rather, its inverse image) clopen. Then we 3-lift $G_2$ to $G_3$ to make $A_3$ clopen, and so on. Finally, we let $G_\infty$ be the inverse limit discussed in the previous section.
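In symbols (writing $\pi_j\colon[T_j]\to[T_{j-1}]$ for the maps induced on infinite paths; the notation is mine), the sets made clopen at each stage are the successive pullbacks.

```latex
% After stage 1, A_1 is clopen in [T_1]; the other sets are replaced by their
% inverse images, and so on recursively:
A_i^{(1)} = \pi_1^{-1}(A_i), \qquad
A_i^{(j)} = \pi_j^{-1}\bigl(A_i^{(j-1)}\bigr) \subseteq [T_j].
% After stage i the set A_i^{(i)} is clopen, and it remains clopen under
% further pullbacks, since the maps \pi_j are continuous.
```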

What we have now is a lifting of the original game $G$ to a game $G_\infty$ such that all the inverse images of the sets $A_i$ are clopen. It follows that the union of these inverse images is at least open. Its complement is therefore closed, so we can apply our result for closed games and do one final lift to a game that makes the union clopen. We have therefore lifted the original union in $G$ to a clopen set, which is what we were trying to do.

I didn’t say that last sentence very well, and in particular was mixing up my types: do we lift games, or trees, or sets, or what? But I hope the meaning is clear: as I remarked earlier, liftings don’t depend on payoff sets, so we found a lifting of the original tree $T$ to a new tree $T'$ — where lifting includes the fact that strategies for $T'$ map to strategies for $T$ — such that the inverse image of $\bigcup A_i$ in $T'$ is a clopen set.

Again, we need something a bit stronger: for any $k$ we need to be able to $k$-lift $T$ to some $T'$. But that is straightforward, given the remarks above.

I haven’t actually given the final tiny step of the argument, which is to show that the lifting property is preserved when you take complements. But that follows trivially from the fact that inverse images preserve complements and the fact that complements of clopen sets are clopen.

I was planning to end this series of posts here, but have subsequently had some further thoughts about liftings that shed a little light — for me at least — on the proof. I have collected those into a fifth post, which I will put up in a few days.