Determinacy of Borel games III

My aim in this post (if I have enough space) is to prove that every closed game can be lifted to an open (and therefore, by continuity, which is part of the definition of lifting, clopen) game. Since I am discussing a formal proof, I shall be a little more formal with my definitions than I have been so far. Much of what I write to begin with will be repeating things I have already said in the two previous posts.

Trees and paths

Recall the definition of a pruned tree. This is an infinite rooted tree T such that from every vertex there is at least one directed infinite path. (Less formally, if you are walking away from the root, you never get stuck.) Given such a tree T, we write [T] for the set of infinite directed paths in T. If we are working in \mathbb{N}^{\mathbb{N}}, then the tree T we will work with has finite sequences as its vertices, with each sequence (n_1,\dots,n_k) joined to its extensions (n_1,\dots,n_k,n). Then [T]=\mathbb{N}^{\mathbb{N}}.

If p and q are finite paths in T (I’ll stop using the word “directed” — all paths are supposed to be directed away from the root) — then we write p\leq q to mean that p is an initial segment of q and p<q if p is a proper initial segment. Also, if q is a path that starts where another path p ends, I’ll write p\smallfrown q for the concatenation of p and q. In \mathbb{N}^{\mathbb{N}} paths are represented by integer sequences. In that case, if p=(n_1,\dots,n_k) and q=(m_1,\dots,m_r), then p\smallfrown q denotes the sequence (n_1,\dots,n_k,m_1,\dots,m_r). That is, we view the path q as starting at the sequence (n_1,\dots,n_k). This isn’t exactly consistent with the previous definition, but it is the only definition that makes sense, so no confusion should arise.

If T is a tree and p is a path that starts at the root, then T_p denotes the tree of all q\in T such that p\leq q.

We topologize [T] by taking sets of the form N_p=\{q\in[T]:p\leq q\} as the basic open sets. Thus, a subset A\subset[T] is open if for every x\in A there exists a finite path p such that p\leq x and q\in A for every infinite path q such that p\leq q.
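To make these conventions concrete, here is a tiny Python sketch (purely illustrative, and indexing from 0 rather than 1) of initial segments, concatenation, and membership of a basic open set N_p in the case of \mathbb{N}^{\mathbb{N}}, where an "infinite" sequence is represented by a function from indices to values.

```python
# A minimal sketch of the N^N case: finite paths are tuples of natural numbers,
# and the basic open set N_p consists of all infinite sequences extending p.

def is_initial_segment(p, q):
    """p <= q in the ordering of the post: p is an initial segment of q."""
    return len(p) <= len(q) and q[:len(p)] == p

def concat(p, q):
    """p ⌢ q: view q as continuing from the end of p."""
    return p + q

def in_basic_open_set(p, x):
    """Is the infinite sequence x (a function i -> x(i)) in N_p = {x : p <= x}?
    Only the first len(p) terms of x are ever inspected, which is the
    combinatorial content of the statement that N_p is open."""
    return all(x(i) == p[i] for i in range(len(p)))

# Example: x is the constant-zero sequence, p = (0, 0) and then p = (0, 0, 1).
x = lambda i: 0
print(in_basic_open_set((0, 0), x))     # True
print(in_basic_open_set((0, 0, 1), x))  # False
```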

Games and strategies on trees

Given a pruned tree T and a “payoff set” A\subset [T], we can define a two-player game as follows. The players build up a path that starts at the root, taking turns to decide which the next vertex will be. Thus, if x_0 is the root, then the players take turns choosing x_1,x_2,x_3,\dots subject to the condition that (x_0,x_1,\dots,x_k) should be a path in T for every k. At the end of the game, the players have defined an infinite path x=(x_0,x_1,x_2,\dots). Player I wins if x\in A and otherwise Player II wins.

A game is called open/closed/Borel/etc. if and only if the payoff set A is open/closed/Borel/etc. in [T].

A strategy for Player I is a function defined on the even-length paths in T that takes each such path p to a path q of length one greater such that p<q. A strategy for Player II is similar, but for odd-length paths.

It can be helpful to associate strategies with subtrees of T. A strategy \sigma for Player I can be used to create a subtree S of T that consists of all vertices in T that can be reached by a path consistent with \sigma, together with the edges induced by those vertices. The strategy \sigma is a winning strategy if and only if [S]\subset A — that is, every infinite path consistent with the strategy \sigma is in the payoff set A. (Everything I say about Player I can be easily adapted to corresponding statements about Player II — I won’t keep pointing this out.)

Which subtrees can be obtained in this way? A subtree S is derived from some strategy for Player I if and only if it has the following property. Let us say that a vertex y is a successor of a vertex x if it is a neighbour of x and is further away from the root. Then for each vertex of S at an even distance from the root, there should be exactly one successor in S, and for each vertex at an odd distance from the root, every successor should be in S. To define a strategy \sigma from which S is derived is easy. Given a vertex x in S at even distance from the root, let p be the unique path that ends at x and let \sigma(p) be the unique path q in S that extends p and has length one greater. If p is a path of even length not in S, then \sigma(p) can be defined arbitrarily, but this is not very important, since p cannot be reached if Player I uses the strategy \sigma. Thus, there isn’t exactly a one-to-one correspondence between strategies and certain subtrees, but there is a correspondence between “the parts of strategies that actually matter” and those subtrees.
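Here is a small illustrative sketch in Python of the correspondence just described (my own toy version: I take the binary tree rather than \mathbb{N}^{\mathbb{N}} so that the branching is finite, and I build the subtree only down to a finite depth).

```python
# A toy sketch of the strategy <-> subtree correspondence, for a finitely
# branching tree given by a successor function.  The tree here is the binary
# tree {0,1}^<N, purely for illustration.

def successors(p):
    """All children of the finite path p (here: extend by 0 or 1)."""
    return [p + (b,) for b in (0, 1)]

def subtree_of_strategy(strategy, depth):
    """All paths of length <= depth reachable when Player I follows `strategy`.
    At even-length paths Player I moves (exactly one child is kept); at
    odd-length paths Player II may do anything (all children are kept)."""
    levels = [[()]]
    for _ in range(depth):
        next_level = []
        for p in levels[-1]:
            if len(p) % 2 == 0:                 # Player I to move
                next_level.append(strategy(p))  # exactly one successor
            else:                               # Player II to move
                next_level.extend(successors(p))
        levels.append(next_level)
    return [p for level in levels for p in level]

# Player I's strategy: always play 0.
always_zero = lambda p: p + (0,)
print(subtree_of_strategy(always_zero, 4))
```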

Note that if S_1 and S_2 are subtrees corresponding to strategies \sigma_1 and \sigma_2 for Player I and II, respectively, then S_1\cap S_2 is an infinite path. Indeed, it is precisely the path that results if Player I plays \sigma_1 and Player II plays \sigma_2. This path we denote by \sigma_1\circ\sigma_2.
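And here, continuing the same toy setup, is the observation that two strategies determine the play move by move; the function computes a finite prefix of the path \sigma_1\circ\sigma_2.

```python
# If Player I follows sigma1 and Player II follows sigma2, the play is
# determined move by move; this computes a finite prefix of the resulting path.

def play(sigma1, sigma2, num_moves):
    """Prefix of the run when Player I uses sigma1 and Player II uses sigma2."""
    p = ()
    for _ in range(num_moves):
        p = sigma1(p) if len(p) % 2 == 0 else sigma2(p)
    return p

always_zero = lambda p: p + (0,)
always_one = lambda p: p + (1,)
print(play(always_zero, always_one, 6))   # (0, 1, 0, 1, 0, 1)
```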

A quasistrategy on T is a bit like a strategy except that it doesn’t determine moves uniquely. For example, a quasistrategy for Player I in a closed game is, “If you can move to a position that is not a winning position for Player II, then do so, and otherwise move arbitrarily.” (This is a winning quasistrategy if Player II does not have a winning strategy.) Subtrees are a convenient way of formalizing the notion of a quasistrategy: we can define a quasistrategy for Player I on T to be any subtree S\subset T such that for each vertex of S at even distance from the root at least one successor belongs to S, and for each vertex at odd distance from the root, every successor belongs to S. (Again, it is not too important to know what happens at vertices that cannot be reached if the quasistrategy is applied, but it is easy to invent a definition that does specify this.)

Lifting one game to another

Given a tree T and a payoff set A\subset[T], let us write G(T,A) for the associated game. A lifting of G=G(T,A) is another game G'=G(T',A') together with a map \phi:T'\to T and a map \psi that takes strategies for G' to strategies for G with the following properties.

(i) For every n, \phi takes paths of length n to paths of length n. (In other words, if p=(x_0,x_1,\dots,x_n) is a path in T', then the images \phi(x_0),\phi(x_1),\dots,\phi(x_n) form a path in T, which we will denote by \phi(p). We will also use \phi for the resulting map on infinite paths: that is, we will sometimes regard \phi as a map from [T'] to [T].)

(ii) A'=\phi^{-1}(A). (Here we are already regarding \phi as a map from [T'] to [T].)

(iii) If \sigma is a strategy for G', then \psi(\sigma) is a strategy for the same player, and what \psi(\sigma) does to sequences of length at most n depends only on what \sigma does to sequences of length at most n.

(iv) For every strategy \sigma for G' and every infinite path x in T that is consistent with \psi(\sigma) there exists an infinite path x' in T' that is consistent with \sigma such that \phi(x')=x. (Informally, every run of the game in G with \psi(\sigma) being applied is the image of some run of the game in G' with \sigma being applied.)

We make a couple of remarks about this definition. Note first of all that the map \phi:[T']\to[T] is continuous, since if \phi(y)=x, then if z is any path that agrees with y for the first n steps, \phi(z) will agree with x for the first n steps.

Note too that if \sigma is a winning strategy in the game G', then \psi(\sigma) is a winning strategy in the game G. Indeed, if x is a run of G that is consistent with \psi(\sigma), then there must be a run x' of G' consistent with \sigma such that \phi(x')=x. Since \sigma is a winning strategy, x' will belong to \phi^{-1}(A) or \phi^{-1}(A)^c as appropriate (i.e., according to whether \sigma is a strategy for Player I or Player II), and therefore x will belong to A or A^c as appropriate.

How much structure do liftings have?

Our ultimate aim is going to be to lift an arbitrary Borel game G to a clopen game G'. How easy is this likely to be? Are liftings difficult things to construct?

At first one might think that even if we completely ignore the map \psi that takes strategies to strategies, if A is a rather complicated Borel set it would be hard to find a continuous function \phi such that \phi^{-1}(A) is clopen. However, this is not as hard as it sounds: it is a reasonably straightforward exercise to show that every Borel set is a continuous image of a closed set. (The converse is not true. For example, the non-Borel set described in an earlier post, of all graphs on \mathbb{N} that contain an infinite clique, is, more or less by definition, a continuous image of a closed set.)

Once we have chosen the function \phi, it is tempting to think that there are too few constraints on the function \psi. However, the constraint that what \psi(\sigma) does to sequences of length at most n depends only on what \sigma does to sequences of length at most n restricts the possibilities for \psi quite considerably.

Let’s write \Sigma_I[T] for the set of all infinite subtrees S\subset T that satisfy the conditions described earlier: that each vertex of S at an even distance from the root has exactly one successor in S, while each vertex at an odd distance has all its successors in S. Let us call such trees strategic. Let us also call a finite tree strategic if all its leaves are at the same distance from the root and all vertices apart from the leaves satisfy the conditions of an infinite strategic tree. Thus, finite strategic trees are restrictions of infinite strategic trees to some depth n. Let us write \Sigma_I(T_n) for the set of all strategic trees of depth n in T. If \psi:\Sigma_I[T']\to\Sigma_I[T] is a function that satisfies condition (iii) above, then for each n it induces a function \psi_n:\Sigma_I(T_n')\to\Sigma_I(T_n). The definition of \psi_n is obvious: given a strategic tree S' in T' of depth n, extend it to an arbitrary tree S''\in\Sigma_I[T'] and define \psi_n(S') to be the restriction to the first n levels of \psi(S''). This is well-defined, by condition (iii).

Suppose now that we have a collection of maps \psi_1,\psi_2,\dots with \psi_n:\Sigma_I(T_n')\to\Sigma_I(T_n). If they are all derived from some map \psi:\Sigma_I[T']\to\Sigma_I[T] that satisfies condition (iii), then they have to satisfy a simple compatibility condition. Let us write T|_n for the restriction of a tree T to its first n levels. Then we require that, for any m<n and any strategic tree S'\in\Sigma_I(T_n'), \psi_m(S'|_m)=\psi_n(S')|_m.

Conversely, if we are given such a sequence of maps \psi_1,\psi_2,\dots we can define a map \psi:\Sigma_I[T']\to\Sigma_I[T] that gives rise to all of them in a natural way. (It is clearly some kind of inverse limit, but I haven’t thought carefully about exactly what the category is.) Given a strategic tree S'\in\Sigma_I[T'], we define \psi(S') to be the union of the trees \psi_n(S'|_n). The compatibility conditions ensure that the trees \psi_n(S'|_n) are nested, and since they are strategic it follows that their union is strategic.
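In case the bookkeeping in the last two paragraphs seems vague, here is one way of formalizing it in a purely illustrative Python sketch (my own, not from the formal proof), in which a finite strategic tree is just the set of its paths and an infinite strategic tree is represented lazily by the function n\mapsto S|_n.

```python
# A finite strategic tree is represented as the set of its paths (tuples), and
# an infinite strategic tree is represented lazily by the function
# n -> its restriction to the first n levels.

def restrict(S, m):
    """S|_m: the paths of S of length at most m (the first m levels)."""
    return frozenset(p for p in S if len(p) <= m)

def is_compatible(psi_m, psi_n, S_prime, m):
    """The compatibility condition psi_m(S'|_m) = psi_n(S')|_m, checked for
    one depth-n strategic tree S'."""
    return psi_m(restrict(S_prime, m)) == restrict(psi_n(S_prime), m)

def psi_from_family(psi_family):
    """Given a compatible family, where psi_family[n] maps depth-n strategic
    trees in T' to depth-n strategic trees in T, the induced map psi sends a
    lazy infinite tree S' to the lazy tree n -> psi_family[n](S'(n)); the
    union over n of these finite trees is the psi(S') of the text."""
    return lambda S_prime: (lambda n: psi_family[n](S_prime(n)))
```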

The fact that \psi is an inverse limit shows that it is very far from an arbitrary function from strategies to strategies (or strategic trees to strategic trees). Let us think about how we might build up a sequence \psi_1,\psi_2,\dots satisfying the compatibility conditions. That is, if we have defined \psi_n already, what more do we need to specify in order to define \psi_{n+1}?

To begin at the beginning, the first move of a strategy is just a choice of a successor of the root. Let us write L_n' and L_n for the “level-n” vertices in the trees T' and T, respectively. Then \psi_1 can be thought of as a map from L_1' to L_1.

Once we have decided on \psi_1, the map \psi_2 is also decided, since Player I has no decisions to make in the second move. So the next choice to make is of \psi_3. Informally, \psi_3 is a map from “how Player I makes her first two moves in G'” to “how Player I makes her first two moves in G”. More formally, it is a map from strategic trees of depth 3 to strategic trees of depth 3. To specify a strategic tree S_3' of depth 3 in T' we have to specify a successor x_1' of the root, and for each of its successors we have to specify a further successor. So we can split up the domain of \psi_3 according to the choice of x_1'. For each fixed x_1', the specification of a tree is a function defined on the set of successors of x_1', which takes each successor to one of its successors. Meanwhile, an element of the codomain of this restriction of \psi_3 can be thought of as a function defined on the set of successors of \psi_1(x_1'), which again takes each successor to one of its successors. So this restriction of \psi_3 is an arbitrary function from a huge set of functions to a huge set of functions. And we then need to choose such a function for every x_1'.

Already we see that we are dealing with a vast and unstructured set. Should we worry about this? (The kind of worry I am talking about is that we have so much flexibility that it is hard to see why the theorem we are trying to prove isn’t trivial.) I think not, for the following reason. If we visualize the tree laid out “horizontally” with the root at the left and the levels living in vertical lines, then all the lack of structure is “vertical”, which is somehow appropriate because the set of successors of any given vertex is just a set and has no structure. On the other hand, trees have a lot of “horizontal structure” (for example, the topology is “defined horizontally”), and this is where the restrictions on \psi make themselves felt.

Of course, condition (iii) is not the only condition that a lifting has to satisfy: there is also the all-important condition (iv). However, I will save a similar discussion of that condition for later.

Lifting a closed game to an open game

Now at long last comes the definition of the lifting we shall use. Let T be a pruned tree, let A\subset[T] be a closed set and let G be the game G(T,A). We define a tree T' as follows. Its vertices are certain sequences of the form

((x_1,\zeta),(x_2,\theta),x_3,\dots,x_k)

where (x_1,\dots,x_k) is a path in T (I should put the root x_0 before everything, but I take it as implicit that every path starts at the root and only the subsequent vertices need to be specified) and \zeta and \theta are extra pieces of information that I will describe in a moment. The reason I said “certain sequences” above is that the information \zeta and \theta will restrict what is allowed for the x_i.

The map \phi:T'\to T is the map that takes ((x_1,\zeta),(x_2,\theta),x_3,\dots,x_k) to (x_1,\dots,x_k) (or to x_k if you prefer to think of it like that — there is a one-to-one correspondence between paths in T and their end vertices). In other words, \phi simply forgets the extra information.
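Since \phi does nothing but forget the extra information, it is easy to write down explicitly; here is a throwaway sketch (with \zeta and \theta treated as opaque tags attached to the first two moves) just to fix the format of the vertices of T'.

```python
# The map phi of the lifting simply forgets the extra information.

def phi(path_in_T_prime):
    """((x_1, zeta), (x_2, theta), x_3, ..., x_k)  |->  (x_1, x_2, x_3, ..., x_k)."""
    out = []
    for i, v in enumerate(path_in_T_prime):
        if i < 2:
            out.append(v[0])   # drop zeta / theta from the first two moves
        else:
            out.append(v)
    return tuple(out)

print(phi((('a', 'zeta'), ('b', 'theta'), 'c', 'd')))   # ('a', 'b', 'c', 'd')
```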

What is this “extra information”? In the case of \zeta, it is easy: Player I has to provide not just a successor x_1 of the root in T but also a quasistrategy \sigma for the game G under which x_1 is a permitted first move. We can think of it as follows: Player I plays G but in addition says, “This is my quasistrategy.” If Player I plays the move (x_1,\sigma) as her first move, then all subsequent choices of x_i must result in a sequence compatible with \sigma. In terms of trees, the subtree of T' with (x_1,\sigma) as its root projects to the subtree of T that corresponds to the quasistrategy \sigma and starting move x_1.

The definition of \theta is less obvious, since Player II has two options. He can provide either a sequence u=(x_1,x_2,\dots,x_{2k}) of even length that is consistent with \sigma, such that all extensions of u live in A^c, or a quasistrategy \tau for the game played in the tree that is derived from \sigma (that is, the tree given by all possible runs of the game that are consistent with \sigma). The quasistrategy \tau must have the property that every infinite sequence that can result from it belongs to A. In other words, it must be a losing quasistrategy (in the strong sense that it guarantees a loss) for Player II.

If Player II plays a move of the form (x_2,u), then both players must play along the sequence u until they reach the end of it, and then they must continue consistently with \sigma. If Player II plays a move of the form (x_2,\tau), then both players must play consistently with the quasistrategies \sigma and \tau. (Again, the force of this “must” is that the game is taking place inside a certain subtree. In other words, it’s not that they lose if they don’t play consistently with their quasistrategies: rather, they cannot but play consistently with their quasistrategies.)

Since \phi is such a simple map, it is easy to say who wins the game G': Player I wins if the resulting infinite sequence, ignoring the extra information, belongs to A, and otherwise Player II wins.

Another easy observation is that this game is clopen. To see that it is closed, observe that \phi is obviously continuous and A is closed, so A'=\phi^{-1}(A) is closed. To see that it is open, note that if Player II plays a move of the form (x_2,u), then since the two players continue along u and all continuations of u belong to A^c, Player II wins the game in this case, whereas if Player II plays a move of the form (x_2,\tau), where \tau is a guaranteed-losing quasistrategy, then by definition the sequence that results at the end of the game belongs to A and Player I wins. So the result of the game is entirely decided after Player II’s first move.
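To record the case analysis of the previous paragraph in the most bare-bones way possible: the winner of G' is a function of the kind of first move Player II makes. (This trivial sketch is only meant to display the two-case structure.)

```python
def winner_of_G_prime(second_move_kind):
    """The outcome of G' as a function of Player II's first move alone.
    'u'   : Player II supplied a finite sequence all of whose continuations
            avoid A, so the final sequence lies in A^c and Player II wins.
    'tau' : Player II supplied a quasistrategy all of whose runs lie in A,
            so the final sequence lies in A and Player I wins."""
    if second_move_kind == 'u':
        return 'Player II'
    if second_move_kind == 'tau':
        return 'Player I'
    raise ValueError('the move must be of one of the two permitted kinds')
```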

The fact that the game is decided after Player II’s first move suggests that we have not actually used the assumption that A is closed, since we could ignore the first proof that G' is a closed game and just use the fact that it is decided after Player II’s first move. However, the assumption that A is closed has sneaked in under the radar via the unspoken assertion that Player II can play a move of one of the two types above. Suppose Player I were to play a guaranteed-losing quasistrategy \sigma. Then Player II would not be able to counter with a move of the second type, so would be forced to find a finite sequence all of whose extensions live in A^c. This may not be possible if A isn’t closed.

Who wins the game G'?

This subsection isn’t strictly necessary, but it seems a pretty sensible thing to think about.

If Player I has a winning strategy \sigma for G, then an obvious first move in G' is the move (x_1,\sigma), where x_1 is the first move in G determined by \sigma.

This move automatically results in a win for Player I in G', since there is no sequence u consistent with \sigma, all of whose extensions lie in A^c. (If there were, then \sigma would by definition not be a winning strategy.) So Player II is forced to play a move of the second type and therefore loses.

If Player I plays a move (x_1,\sigma), where \sigma is not a winning quasistrategy (which in particular happens automatically if Player I does not have a winning quasistrategy), then there must be some infinite sequence x consistent with \sigma that lives in A^c. Since A^c is open, there is some initial segment u of x of even length, all of whose continuations lie in A^c. If Player II plays the move (x_2,u), then the two players continue to the end of u, at which point the eventual infinite sequence is guaranteed to belong to A^c, and Player II wins.

What we have observed is that if Player I has a winning strategy for G, then she has a winning strategy for G', and if she does not have a winning strategy for G, then Player II has a winning strategy for G'. The interesting thing to note here is that I did not deduce the latter statement from the determinacy of G. That is, I didn’t say, “If Player I doesn’t have a winning strategy for G, then, since A is closed, Player II must have a winning strategy \tau for G. We use that to create a winning strategy for G' as follows.” Rather, I directly defined a winning strategy for Player II in G' using the fact that A is closed and the quasistrategy \sigma is not winning.

What we are trying to do, however, is the reverse. We want to show that if Player I has a winning strategy for G', then at least somebody has a winning strategy for G, and likewise for Player II. (As I write this, I haven’t yet digested the proof enough to know whether the same player has a winning strategy for G, but my impression is that that isn’t necessarily the case. We shall soon see. [Added later: I now see that that was a silly remark, since I’ve already observed that the lifting property ensures that if \sigma is a winning strategy for G' then \psi(\sigma) is a winning strategy for G for the same player.])

A bad way of converting Player I strategies for G' into Player I strategies for G

What is a strategy \sigma' for Player I like in the game G'? It consists of an initial move (x_1,\sigma), where \sigma is a quasistrategy for G, and then a strategy for playing the remainder of the game consistently with not just \sigma but also with either the sequence u or the quasistrategy \tau given by Player II in his first move.

How can we use this for creating a strategy for G? An obvious first thought is that the strategy should be some strategy \rho that is consistent with the quasistrategy \sigma. (The meaning of “consistent with” is I hope obvious.) Does that work? Certainly, what \rho does to sequences of length at most n depends only on what \sigma' does to sequences of length at most n, since \rho depends only on what \sigma' does in the very first move.

What about the lifting property? Suppose that x=(x_1,x_2,\dots) is a sequence consistent with \sigma. Is there some play of G' consistent with \sigma' that yields the same sequence?

There are two cases to consider here: the case x\in A and the case x\in A^c.

If x\in A, then pick a strategy \tau for Player II such that \rho\circ\tau=x. If Player II plays the move (x_2,\tau) in G', then the result will be that for the remainder of the game the players will produce the sequence x, just as required.

If x\in A^c, then choose an initial segment u of x of even length such that all continuations of u belong to A^c and let Player II play the move (x_2,u) in G'. Then the sequence x is consistent with u and \sigma, so again it results from a play of G'.

So it seems that creating a G strategy from a G' strategy and obtaining the lifting property is almost trivial. The problem is that in the actual proof something more complicated is done, so the above argument has to be wrong somehow. But how? It took me about a day (not of continuous thought, I hasten to add, and not writing anything down, but a day nevertheless) to see my stupid mistake. I’m leaving the above wrong argument in, since I think it helps to understand why the right argument is as it is. You may prefer to test your understanding by working out for yourself what is wrong with the argument rather than just reading my explanation below.

The problem is that to define the G strategy I ignored most of the G' strategy, so it wasn’t actually true that I obtained the lifting property. Recall that a strategy for Player I consists of two things: a first move (x_1,\sigma), where \sigma is some quasistrategy, and then a strategy \mu for playing consistently not just with \sigma but also with Player II’s extra information — either a sequence u that must be followed or a quasistrategy \tau. So if in the game G Player I just plays an arbitrary strategy \rho that is consistent with \sigma, there is no reason to think that this will result from some play of G' with the strategy “Start with \sigma and continue with the strategy \mu(\sigma,u) or \mu(\sigma,\tau).” We need \rho to depend on \mu somehow.

A better way of converting Player I strategies for G' into Player I strategies for G

Let’s have another try. (I admit that for this I am just copying from an existing account of the proof.) Suppose that Player I’s first move in the game G' (according to a given G' strategy \sigma') is (x_1,\sigma), where \sigma is a quasistrategy for G. Now consider the game G restricted to \sigma. That is, we associate with \sigma a tree S in the manner described earlier, and the payoff set is A\cap[S]. This game can be thought of as G with the restriction that Player I is required to play consistently with the quasistrategy \sigma.

Since A is a closed set, so is A\cap[S] in this restricted game. Therefore the game is determined. However, there is a twist. We shall ask for Player I to play the open game with payoff set [S]\setminus A, which is also determined. Or rather, we shall insist on the following: if Player I has a winning strategy for getting into the open set [S]\setminus A, then she must play according to such a strategy.

This may seem a rather bizarre thing for Player I to do: the reason for it is to force the game to reach a sequence u, all of whose continuations belong to [S]\setminus A, which is good for us because we can then deem Player II to have played the sequence u in his first move in the game G'.

We could perhaps summarize this by saying that in the case where Player I has a winning strategy for the “reverse game”, she is not in a very good position in the game G, so instead she plays in a way that will ensure that Player II wins, but will also ensure that the run of the game lifts properly.

Another puzzling feature of the argument so far

The next thing I don’t understand is this. To prove that G is determined, we use the fact that G' is determined. The way we use that is to show that a winning strategy for G' maps to a winning strategy for G for the same player. So it would seem that we don’t need a map from arbitrary G' strategies to G strategies, but just a map from winning G' strategies to winning G strategies.

But a winning strategy for Player I for G' must start with a move (x_1,\sigma) such that \sigma is a winning quasistrategy for G, or else Player II can play a move of the type (x_2,u), where u is a finite sequence, consistent with \sigma, all of whose continuations lie in A^c. And if Player I starts with a winning quasistrategy, and if S is the tree defined above, we have that [S]\subset A, so Player I cannot possibly have a winning strategy for producing a finite sequence with all its continuations in A^c. So that complicated case would appear not to arise, or at least not to arise in any relevant way.

I can’t see anything wrong with this simplification, but I have a guess about why we would be ill advised to make it. The reason is that all we are considering at the moment is the base case of the induction. Later on, we are going to have to look at countable intersections and countable unions. It seems at least possible (and in fact very likely) that we will have to consider images of non-winning strategies at that stage, for example if some of the sets are wins for Player I and some are wins for Player II. So let us persist with the not yet obviously necessary task of constructing the map \psi that is defined on arbitrary strategies.

Converting Player I strategies for G' into Player I strategies for G, continued

So far we have defined what \psi does in the case that Player I has a winning strategy for arriving in [S]\setminus A, where S is the tree that corresponds to the quasistrategy \sigma in Player I’s first move (x_1,\sigma). Or rather, we have said what the important part is. Let me briefly describe this case in full, repeating a few things I’ve already said.

If \lambda is Player I’s strategy for G', then the G strategy \psi(\lambda) is defined as follows. Let (x_1,\sigma) be the first move required by \lambda and let S be the tree corresponding to the G quasistrategy \sigma. If Player I has a winning strategy for the game G(S,[S]\setminus A), then she must choose such a strategy and play it. Once a position u has been reached such that all continuations of u lie in A^c, Player I must continue with whatever \lambda dictates in the case that Player II plays the move (x_2,u) as his first move.

If Player I does not have a winning strategy for the game G(S,[S]\setminus A), then Player II must have a winning strategy for this game, since it is open (in [S]) and therefore determined. In fact, we know that the quasistrategy “always move to a position that is not a winning position for Player I” is a winning quasistrategy for Player II. Let \tau be this quasistrategy. Player I’s strategy for G is as follows. For as long as Player II plays consistently with the quasistrategy \tau, she plays whatever \lambda dictates if Player II’s first move is (x_2,\tau). If Player II ever departs from \tau, then he moves to a position that is losing for him in the game G(S,[S]\setminus A), that is, a winning position for Player I in this game. In that case, Player I goes back to the previous approach: that is, choosing a winning strategy for that game (given the current position), playing it until a finite sequence u is reached, all of whose continuations are in A^c, and continuing as \lambda dictates.
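Putting the two cases together, here is a schematic sketch of the strategy \psi(\lambda), with all the game-theoretic ingredients (the strategy \lambda itself, the winning-position test for the auxiliary open game, and so on) treated as black-box functions that I am assuming rather than defining. It is only meant as a summary of the construction just described, not as a precise formalization.

```python
def psi_of_lambda(lam, wins_from, winning_move, all_continuations_avoid_A, tau):
    """Player I's moves in G under psi(lam), as a function of the current
    position p (a finite even-length path consistent with sigma).

    The helpers are black boxes assumed by this sketch, not defined here:
      lam(simulated_second_move, p): what the G' strategy lam dictates at p,
        once Player II's first move in G' has been fixed as (x_2, u) or
        (x_2, tau);
      wins_from(p): Player I has a winning strategy from p in the auxiliary
        open game on S whose target is the complement of A;
      winning_move(p): a move of such a winning strategy at p;
      all_continuations_avoid_A(p): every continuation of p lies in A^c;
      tau: the canonical quasistrategy for Player II, passed through unexamined.
    """
    state = {'u': None}      # the finite sequence u, once one has been reached

    def next_move(p):
        if state['u'] is not None:            # u locked in: follow lam against
            return lam(('u', state['u']), p)  # the simulated move (x_2, u)
        if wins_from(p):                      # Case 1; note that if Player II
            if all_continuations_avoid_A(p):  # ever departs from tau, he moves
                state['u'] = p                # to a Player-I-winning position,
                return lam(('u', p), p)       # so this branch takes over
            return winning_move(p)            # steer towards a suitable u
        return lam(('tau', tau), p)           # Case 2: follow lam against the
                                              # simulated move (x_2, tau)
    return next_move
```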

How to remember the definition of \psi for Player I strategies

This post has passed the 5000-word limit, but I think I should finish it with a quick verification that the all-important lifting property holds for this strategy. But before I do that, let me try to describe in a less formal way what \psi does to Player I strategies for G'.

The best way to think about it (or rather, the way I find helpful at the time of writing) is to forget all about winning and losing and just concentrate on the lifting property. That is, if we have a G' strategy \lambda for Player I, we don’t try to map it to a G strategy that will maximize Player I’s chances of winning. We don’t care about that, since all we need is that if \lambda is a winning strategy for Player I, then the lifting property will guarantee that \psi(\lambda) is a winning strategy for Player I in the game G. So instead, we focus entirely on the lifting property.

Once we think about things this way, the definition writes itself to some extent. All we have to do is work out what sequences in T can possibly arise during runs of the strategy \lambda and devise a strategy for Player I for creating one of those sequences. In other words, we end up playing a different game — the one where Player I’s aim is to create what one might call a \lambda-compatible sequence.

What are these sequences? There are two kinds, according to the kind of move Player II makes. Again let (x_1,\sigma) be Player I’s first move as dictated by \lambda. One kind is a sequence consistent with \sigma that has an initial segment u, all of whose continuations belong to A^c, and that beyond u continues according to what \lambda dictates when Player II’s first move is (x_2,u). The other kind is a sequence that could result if Player II plays some quasistrategy against \sigma that always yields a sequence in A and Player I does what \lambda dictates in response to that quasistrategy.

Can Player I force the game G to yield a sequence of such a kind? Clearly yes if she has a winning strategy for producing the requisite initial segments u. If she doesn’t, then we would like to find a suitable quasistrategy for Player II, and the most natural one (sometimes called the canonical quasistrategy) is the “avoid losing if you can” quasistrategy that we chose.

As a matter of fact, I think I’ll end the post here, since with the above explanation it becomes clear that (not too surprisingly) the strategy \psi(\lambda) has been defined in a way that makes absolutely sure that the lifting property holds. So the lifting property holds. I don’t feel the need for a more formal proof.

A brief preview of the next post

It remains to say what \psi does to G' strategies for Player II and then to discuss countable unions and intersections. It turns out that the countable unions and intersections are easier to deal with than the base step of the induction (that is, the statement that closed games can be lifted to clopen games). However, there is one aspect of the proof that I have not yet mentioned, since I didn’t want it to be a distraction. It’s that we actually need a very slight further strengthening of the inductive hypothesis. We need not just that the game lifts to a clopen game but that for every k the game G(T,A) lifts to a clopen game G(T',\phi^{-1}(A)) in such a way that the first 2k levels of the tree T' are the same as the first 2k levels of the tree T. The modifications needed are completely straightforward (roughly speaking you don’t introduce the extra information until the first 2k moves are complete) but complicate the notation. So I decided to present the case k=0, which contains the main idea.

The reason for needing the result for general k is that that enables us to do a diagonal construction when we come to look at countable unions and intersections.

One Response to “Determinacy of Borel games III”

  1. Dömötör Pálvölgyi Says:

    It is not very nice in this post that in the “Lifting one game to another section” \sigma is used for G' strategies, while later it is used for G strategies and \sigma' for G' strategies.
