I’m trying to understand the famous result of Donald Martin that Borel games are determined, and by that I mean not just understand the proof line by line, but also understand why the lines are as they are. I have found some nice presentations online, in particular these notes by Shehzad Ahmed and this master’s thesis by Ross Bryant, which have been very helpful, not just in presenting the proofs line by line but also, to some extent, in doing things like explaining certain definitions that would otherwise seem a little strange. (Ahmed’s notes are good for a quick overview and Bryant’s thesis is good if you want full details.) Nevertheless, I think one can go further in that direction and speculate about how Martin came up with his proof. I’m actually talking not about his original 1975 proof but about a simpler argument he discovered about ten years later in a paper that, thanks to the generosity of the AMS, I have not been able to look at. (I am on holiday in France, or else I suppose I could have trudged over to my departmental library, but for some reason I find I can never bear to do that even when I’m in Cambridge.)
Understanding a proof in that kind of detailed way takes some effort, and usually everybody has to make that kind of effort for themselves. However, that ought not to be the case in the internet age: in this post I’m going to try to write an account of the proof in a way that makes it possible for others to understand it properly without making much effort. Whether I’ll succeed I don’t know. [Added later: I have now written three posts, and expect to finish in one more. I have got far enough to be confident that I will actually manage to write that fourth post. So at least I will end up with a reasonable understanding of the proof, even if I don't manage to transmit it to anyone else.]
One thing I believe quite strongly is that it is better to be in a position where you can remember the ideas in a way that makes it easy to reconstruct the details than to be in a position where you have all the details in front of you but are less certain of the underlying ideas. So I may skimp on some of the details, but only if I am confident that they really have been reduced to easy exercises.
What is the statement of the theorem?
This takes some time. I need to say what a Borel set is, what a game is, what it means for a game to be determined, and why not all games are determined. I’ll start with the stuff about games.
What is a game?
Let us write $\mathbb{N}^{\mathbb{N}}$ for the set of all infinite sequences of positive integers. If $A$ is a subset of $\mathbb{N}^{\mathbb{N}}$, then we can define a two-player game in a simple way as follows. Player I chooses a positive integer $n_1$, then Player II chooses a positive integer $n_2$, then Player I chooses a positive integer $n_3$, and so on. At the end of the game, when they have between them chosen an infinite sequence $(n_1, n_2, n_3, \dots)$, Player I wins if the sequence belongs to $A$ and Player II wins if it does not.
To make this definition slightly more intuitive, let me make the simple observation that if you choose a suitable set $A$, then you have the game of chess. Let me do it in a very crude way. Choose an enumeration $P_1, P_2, P_3, \dots$ of all the possible positions in chess. Call a pair of integers $(m, n)$ a legal move for white if $m$ and $n$ are both associated with chess positions in this enumeration, and there is a valid chess move for white that takes the board from position $P_m$ to position $P_n$. Similarly, we can define a legal move for black. Now let $B$ be the set of all finite sequences $(n_1, \dots, n_k)$ with the following properties.
(i) $n_1$ corresponds to a position that can be reached by white from the starting position.
(ii) If all pairs $(n_i, n_{i+1})$ are legal moves, then the sequence corresponds to a game of chess played according to the usual rules and won by white.
(iii) If not all pairs are legal moves, then $k$ is even and the only exception is the pair $(n_{k-1}, n_k)$.
Roughly speaking, the sequence either corresponds to a normal game of chess or to a game of chess that continues as normal until black cheats and white refuses to play on. (Note that chess has a rule that if you revisit the same position three times then it’s a draw. Since there are only finitely many possible positions, there is an upper bound on the number of possible moves in any game of chess.)
Now define $A$ to be the set of infinite sequences that have an initial segment in $B$. If Player I wants to get a sequence to be in $A$, then she must play (integers corresponding to) legal chess moves for white until either she wins the game of chess or Player II cheats — which we count as a win for Player I. Similarly, if Player II wants to get a sequence not to be in $A$, then he must play (integers corresponding to) legal chess moves and must win the corresponding game of chess.
I realize after writing all that that there are of course draws in chess. We can model that by having three sets: a win for Player I, a draw, and a win for Player II. However, in the discussion of abstract games, it is usual to have just a set and its complement, so that one player always wins.
A quick remark about pronouns: it is almost impossible to avoid the he/she question when describing a run of a game, so I’m not going to try. However, since there are two players I’ve decided to make Player I “she” and Player II “he”, which seems rather satisfactory.
Strategies and winning strategies.
A key definition is that of a strategy. Roughly speaking, this means a way of deciding what to do in any given situation. We can formalize that very easily: a strategy for Player I is a function $\sigma$ that maps sequences $(n_1, \dots, n_{2k})$ to positive integers. (We allow $k$ to be zero, so the sequences may be empty.) Player I plays according to the strategy $\sigma$ if at the end of the game we have a sequence $(n_1, n_2, n_3, \dots)$ such that $n_{2k+1} = \sigma(n_1, \dots, n_{2k})$ for every $k$. In other words, Player I uses the function $\sigma$ to decide what positive integer to play, given the sequence so far. A strategy $\tau$ for Player II is defined similarly, but now the function is defined on odd-length sequences. Note that if $\sigma$ is a strategy for Player I and $\tau$ is a strategy for Player II, then the two strategies define a unique infinite sequence — the sequence that results if Player I plays with strategy $\sigma$ and Player II plays with strategy $\tau$. This sequence is denoted by $\sigma * \tau$.
A strategy $\sigma$ is a winning strategy for Player I if $\sigma * \tau \in A$ for every strategy $\tau$ of Player II. Equivalently, $\sigma$ is a winning strategy for Player I if every sequence $(n_1, n_2, n_3, \dots)$ that satisfies the condition $n_{2k+1} = \sigma(n_1, \dots, n_{2k})$ for every $k$ belongs to $A$. Similarly, we can define a winning strategy for Player II.
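To make these definitions concrete, here is a small Python sketch (my own illustration, not from any source; the function and variable names are invented). It represents a strategy as a function from the tuple of moves so far to the next positive integer, and computes a finite prefix of the unique sequence that the two strategies produce when played against each other.

```python
def play(sigma, tau, num_moves):
    """Generate the first num_moves terms of the sequence produced
    when Player I follows sigma and Player II follows tau.

    A strategy is a function from a finite prefix (a tuple of the
    moves made so far) to the next positive integer to play.  sigma
    is consulted on even-length prefixes, tau on odd-length ones."""
    prefix = ()
    for _ in range(num_moves):
        strategy = sigma if len(prefix) % 2 == 0 else tau
        prefix = prefix + (strategy(prefix),)
    return list(prefix)

# Player I always plays 1; Player II plays one more than the last move.
sigma = lambda prefix: 1
tau = lambda prefix: prefix[-1] + 1

print(play(sigma, tau, 6))  # [1, 2, 1, 2, 1, 2]
```

Of course a real strategy is an infinite object; the point of the sketch is only that once both strategies are fixed, the entire run of the game is determined.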
A game is determined if one or other player has a winning strategy.
Why isn’t it obvious that all games are determined?
Most people, when they meet this definition for the first time, have a strong intuition that every game must be determined. If Player I does not have a winning strategy, then trivially, or so it seems, Player II has a way of defeating Player I — that is, a winning strategy. And yet, if you believe in the axiom of choice, the conclusion of this argument is false. So at the very least it can’t be trivially true that every game is determined. So what is wrong with the argument I’ve just given?
To answer that, we need to try to understand as precisely as possible what the argument actually is that underlies our feeling that one or other of the two players must have a winning strategy. Here is one possibility. Let’s suppose that Player I does not have a winning strategy. Then we can define a strategy for Player II as follows. After Player I’s first move, there must be a move for Player II that results in a position from which Player I still doesn’t have a winning strategy, or else Player I trivially would have a winning strategy. (This argument is actually correct, so I make no apology for using the word “trivially”.) But then this observation can be repeated: whatever Player I does next, there must be a move for Player II that results in a position from which Player I does not have a winning strategy. In general, Player II’s strategy can be defined as follows: repeatedly play the least integer that leads to a position from which Player I does not have a winning strategy.
If Player II uses that strategy, then at no stage in the game does Player I have a winning strategy. But does that imply that Player I loses the game? Not necessarily. For example, suppose Player I’s objective is to create the all 1s sequence. Then if Player II decides to cooperate and they both play nothing but 1s, there is no stage at which Player I has a winning strategy, and yet Player I ends up winning.
What the proof does show is that a certain important class of games is determined: the class of open games. Informally, an open game is one with the property that if Player I wins, then there must be a finite stage by which she has already won, in the sense that all extensions of the sequence reached so far are in $A$.
A simple but not too simple example of an open game is given by taking $A$ to be the set of sequences with at least $n_2$ 1s. Player I can win this game easily, by simply choosing 1 every move. Moreover, if Player I wins, then there will be some stage at which at least $n_2$ 1s have been chosen, and after that it doesn’t make any difference what anyone plays. Note, however, that Player II can postpone this moment for as long as he likes, by choosing $n_2$ sufficiently large. Thus, while a win for Player I must happen by some finite stage, there isn’t necessarily a bound on how long it takes to reach that stage.
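To illustrate the finite-stage phenomenon in code, here is a hypothetical Python sketch (my own; it assumes the winning set is the set of sequences containing at least $n_2$ 1s, where $n_2$ is Player II's first move). Player I plays 1 on every move, and we report the move number by which her win is already secured, in the sense that every extension of the prefix then lies in the winning set.

```python
def stage_at_which_win_is_secured(tau):
    """Play Player I's all-1s strategy against Player II's strategy tau.
    Assumed winning set: sequences with at least n2 ones, where n2 is
    Player II's first move.  Return the number of moves made by the
    time the win is guaranteed (every extension is then a win)."""
    prefix = ()
    while True:
        prefix += (1,)            # Player I's move: always 1
        prefix += (tau(prefix),)  # Player II's move
        n2 = prefix[1]
        if prefix.count(1) >= n2:
            return len(prefix)

# Player II postpones the inevitable by opening with a large number.
print(stage_at_which_win_is_secured(lambda p: 1000 if len(p) > 1 else 50))  # 100
```

Player II controls how long this takes, but not whether it happens, which is exactly the open-game behaviour described above.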
Open games are called open games because the winning sets are open in the product topology on $\mathbb{N}^{\mathbb{N}}$, where $\mathbb{N}$ itself has the discrete topology. The basic open sets in this topology can be described as follows. Let $s = (n_1, \dots, n_k)$ be a finite sequence. Then we can define an open set $U_s$ by taking all sequences with $s$ as an initial segment. These are the basic open sets, so an open set is a union of sets of the form $U_s$. If $U$ is an open set and $x$ is an infinite sequence that belongs to $U$, then there must be a finite sequence $s$ such that $x \in U_s$ and $U_s \subseteq U$. That is, there must be some initial segment $s$ of $x$ such that every sequence that starts with $s$ belongs to $U$.
Suppose that the game defined by $A$ is an open game, and that Player I does not have a winning strategy. Then consider what the earlier argument tells us. It allows Player II to play without ever reaching a stage where Player I has a winning strategy. In particular, Player II can play in such a way that it is never the case that every continuation of the sequence so far belongs to $A$ (in which case all possible strategies for Player I would be winning strategies). But if $A$ is open, this means that the sequence reached at the end of the game does not belong to $A$.
Thus, one way of answering the question of why it isn’t obvious that all games are determined is to say that the obvious “proof” that springs to mind shows instead the weaker statement that open games are determined, since these games, by their very definition, have the property that if Player II can manage not to lose at any finite stage, then he wins. The result that open games are determined is due to Gale and Stewart in 1953. The proof is easy, but the paper of Gale and Stewart was also the one where infinite games of this kind were first introduced and where the question of Borel determinacy was first posed.
If you still feel funny about this statement, then a shorter argument against the obviousness may be helpful. It’s that the statement that Player I has a winning strategy for the game defined by $A$ can be written as follows: there exists a strategy $\sigma$ for Player I such that for every strategy $\tau$ for Player II the sequence $\sigma * \tau$ belongs to $A$. If this is false, then we can conclude that for every $\sigma$ there exists $\tau$ such that $\sigma * \tau \notin A$. But for Player II to have a winning strategy we need something potentially far stronger: that $\tau$ can be chosen independently of $\sigma$. It turns out that for some games, declaring your strategy in advance puts you at a huge disadvantage (just as it does if you want to play the who-can-name-the-larger-number game).
How to build a nondetermined game
If you give yourself the luxury of the axiom of choice, then showing that there are nondetermined games is straightforward. Note first that the set of possible strategies has the same cardinality as that of the continuum, since a strategy is a function from a countable set (of all finite positive integer sequences of even length or of all finite positive integer sequences of odd length) to a countable set (of positive integers). Therefore, we can well-order the set of all strategies in such a way that each strategy has fewer than continuum-many predecessors.
We now build a set $A$ in such a way that no strategy is a winning strategy. Let us call a sequence consistent with a strategy $\sigma$ if every other move is played as $\sigma$ dictates that it should be. What we need to do is ensure that for every strategy $\sigma$ for Player I there is a sequence consistent with $\sigma$ that belongs to the complement of $A$, and for every strategy $\tau$ for Player II there is a sequence consistent with $\tau$ that belongs to $A$.
Suppose that we have chosen sequences for every strategy less than $\sigma$ (in our well-ordering). Then we have chosen fewer than continuum many sequences. But there are continuum many sequences consistent with $\sigma$ (since for every other term of the sequence there is a free choice), so amongst those sequences will be one that has not yet been chosen and that is consistent with $\sigma$. We are free to put that sequence into the complement of $A$ or into $A$ according as $\sigma$ is a strategy for Player I or Player II.
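The claim that every other term of a consistent sequence is a free choice is easy to visualize. Here is a Python sketch (my own; Player II's moves are artificially restricted to a small finite menu so that the set of consistent prefixes is finite and can be listed):

```python
from itertools import product

def consistent_prefixes(sigma, num_rounds, choices=(1, 2, 3)):
    """All prefixes of num_rounds (Player I, Player II) move pairs
    consistent with Player I's strategy sigma: Player I's moves are
    forced by sigma, while every other term (Player II's moves) is a
    free choice, here restricted to a finite menu for illustration."""
    prefixes = []
    for replies in product(choices, repeat=num_rounds):
        prefix = ()
        for reply in replies:
            prefix += (sigma(prefix), reply)
        prefixes.append(prefix)
    return prefixes

# Player I always plays 1: the consistent prefixes are exactly those
# with 1 in the odd positions, one for each choice of Player II's moves.
ps = consistent_prefixes(lambda p: 1, 2)
print(len(ps))  # 9
print(ps[0])    # (1, 1, 1, 1)
```

With the full menu $\{1, 2, 3, \dots\}$ available at every other term, the same freedom produces continuum many infinite sequences consistent with any fixed strategy.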
In short, this is a very standard use of the well-ordering principle: because there are lots of sequences consistent with any strategy, and not too many strategies, it is easy to take care of the strategies one by one and produce a game that is not determined.
Is that game really not determined?
Let’s make one more hopeless attempt at proving that all games are determined. Consider the following “strategy” for Player II: whatever strategy $\sigma$ Player I decides on, choose a sequence $x$ consistent with $\sigma$ and not in $A$, and ensure that $x$ is the sequence produced.
The trouble with that idea is that it is not a strategy. What Player II does depends not just on the sequence so far, but on Player I’s strategy, and that is not allowed. This is another way of saying that if you put all your cards on the table right at the beginning, then you put yourself at a big disadvantage in this game.
What are Borel sets like?
I’d better start with the definition, but the main point of this subsection is to give a few examples of sets that are Borel and sets that are not Borel.
A Borel set is anything that you can obtain from the open sets and closed sets using countable intersections and countable unions. I could also mention complements, but it’s an easy exercise to show that if you adopt the above definition, then the Borel sets are closed under taking complements.
Suppose you want to prove that all Borel sets have a certain property. Then an obvious approach is to show that all open sets have that property, all closed sets have that property, and the property is closed under countable intersections and countable unions. For example, one can show in that way that the Borel sets in $\mathbb{R}$ are Lebesgue measurable.
When you meet Borel sets for the first time, it is tempting to think that every Borel set is either an open set, or a countable intersection of open sets, or a countable union of countable intersections of open sets, or … etc. … or the same but starting with closed sets. However, these by no means exhaust all the Borel sets. Let’s say that a Borel set has index $k$ if you can get it from the open or closed sets by at most $k$ alternations of countable intersections and unions. For example, a countable intersection of countable unions of closed sets would have index 2 (or rather, index at most 2). Now suppose that for each $n$ we have a Borel set $B_n$ of index $n$. Then the union $B_1 \cup B_2 \cup B_3 \cup \dots$ is a Borel set and there is no reason to suppose that it has index $k$ for any finite $k$. If you are used to ordinals, you will immediately want to say that this union has index $\omega$ and that the process of generating Borel sets can be continued transfinitely. And indeed it can be shown (and the proof is not very hard — it is not unreasonable to try to prove it for yourself, though it is also not embarrassing to fail) that for every countable ordinal $\alpha$ there is a Borel set of index $\alpha$ that is not of index $\beta$ for any $\beta < \alpha$.
Here’s an example of a Borel set of sequences. Define $A$ to be the set of sequences $(n_1, n_2, n_3, \dots)$ such that $n_k = 1$ for infinitely many $k$. If we write out this definition in full, we see that $x \in A$ if and only if

$\forall m \ \exists k \geq m \quad n_k = 1,$

which says that $A = \bigcap_{m=1}^{\infty} \bigcup_{k=m}^{\infty} A_k$, where $A_k$ is the set of all sequences $x$ with $n_k = 1$.
The sets $A_k$ are open, since if a sequence belongs to $A_k$, then so does any other sequence that agrees with it in the first $k$ entries. So what we have here is a Borel set of index at most 3. In fact, it has index 2, since basic open sets are also closed. (To see this here, note that the complement of $A_k$ is the set of all $x$ such that $n_k \neq 1$, which is just as clearly open.)
Here’s another example: the set of sequences for which at least one positive integer is repeated infinitely many times. The condition for a sequence $x = (n_1, n_2, n_3, \dots)$ to belong to this set is

$\exists n \ \forall m \ \exists k \geq m \quad n_k = n.$
This again translates into a countable union of countable intersections of countable unions of basic open sets.
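Neither of these conditions can be checked by inspecting finitely many terms of an arbitrary sequence, because of the tail quantifier "for every $m$ there exists $k \geq m$". For eventually periodic sequences, however, both become decidable, which gives a cheap way to experiment with them. A Python sketch (my own; a sequence is represented by a finite preperiod followed by a repeating block):

```python
def one_infinitely_often(preperiod, period):
    """Does the sequence preperiod + period + period + ... contain
    the value 1 infinitely often?  Only the repeating block matters:
    the preperiod contributes just finitely many terms."""
    return 1 in period

def some_value_infinitely_often(preperiod, period):
    """Is at least one value repeated infinitely many times?  For an
    eventually periodic sequence with a nonempty repeating block this
    always holds, since every value in the block recurs forever."""
    return len(period) > 0

print(one_infinitely_often([1, 1], [2, 3]))   # False: the 1s stop
print(one_infinitely_often([2], [5, 1]))      # True
print(some_value_infinitely_often([4], [9]))  # True
```

That the second function is trivially true shows that eventually periodic sequences are far too simple to separate these sets; the quantifier complexity only bites on general sequences.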
I don’t have any good examples of Borel sets of large finite index, or of Borel sets of infinite index. It’s not hard to construct them, but it is less easy to imagine them coming up naturally. A rough reason for this is that a natural definition of a Borel set of index $k$ requires $k$ alternations of quantifiers, so a Borel set of index $\omega$, for example, would then require us to have defined natural sets with arbitrarily long alternations of quantifiers.
Just in case anyone objects to what I’ve just said, let me mention a quick way of constructing Borel sets of arbitrarily high index. Let $f$ be a bijection between $\mathbb{N}$ and $\mathbb{N} \times \mathbb{N}$. Given an infinite sequence $x = (n_1, n_2, n_3, \dots)$, let $R_x = \{f(k) : n_k = 1\}$. Then $R_x$ defines a relation on $\mathbb{N}$. If we choose our sequence carefully, this relation may turn out to be a well-ordering. If so, then it is a well-ordering of a countable set, and therefore order-isomorphic to some countable ordinal. If I remember correctly, the set of sequences $x$ such that $R_x$ gives rise to a well-ordering of order type less than $\alpha$ turns out to be a Borel set of index $\alpha$. (If not, then a very similar construction works.)
Given the difficulty in coming up with natural examples of Borel sets of anything more than very small index, it may seem a bit surprising that there are extremely simple and natural definitions of sets that are not Borel sets at all. Here’s one I particularly like. Let $g$ be a bijection between $\mathbb{N}$ and the set of all subsets of $\mathbb{N}$ of size 2. Given a sequence $x = (n_1, n_2, n_3, \dots)$, let $G_x$ be the infinite graph with vertex set $\mathbb{N}$ that includes the edge $g(k)$ if and only if $n_k$ is odd. Now let $W$ be the set of all sequences $x$ such that the graph $G_x$ contains an infinite complete subgraph. Then $W$ is not Borel.
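The encoding step here is easy to mechanize. The following Python sketch is my own (in particular the choice of bijection $g$, which lists the 2-element subsets $\{a, b\}$ with $a < b$ ordered by $b$ and then by $a$); it reads off, from a finite prefix of a sequence, the edges that the prefix commits the graph to containing, using the convention that the edge $g(k)$ is present exactly when the $k$th term is odd.

```python
from itertools import count

def g(k):
    """A bijection from {1, 2, 3, ...} to the 2-element subsets of
    {0, 1, 2, ...}: pairs (a, b) with a < b, ordered by b then by a."""
    i = 0
    for b in count(1):
        for a in range(b):
            i += 1
            if i == k:
                return (a, b)

def edges_of_G(x_prefix):
    """The edges of the graph G_x that are determined by a finite
    prefix of x: the edge g(k) is present exactly when n_k is odd."""
    return [g(k + 1) for k, n in enumerate(x_prefix) if n % 2 == 1]

# A sequence beginning 1, 2, 3, 2 puts in the edges g(1) and g(3).
print(edges_of_G([1, 2, 3, 2]))  # [(0, 1), (1, 2)]
```

The non-Borel set consists of those sequences whose committed edges eventually include every pair from some single infinite vertex set, a condition that no finite prefix can certify.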
Why is $W$ not Borel? I won’t prove that here, but I can at least explain why it isn’t obviously Borel. To do that, let me talk about infinite graphs rather than about their encodings as sequences. How do we define the set of graphs that contain an infinite clique? We say that a graph $G$ contains an infinite clique if and only if there exists an infinite set $X$ of vertices such that for any two distinct $u, v \in X$ the edge $uv$ belongs to $G$. The huge difference between this and the definitions given earlier is that we are quantifying over all infinite subsets of $\mathbb{N}$. In other words, this definition is naturally a second-order definition (roughly, one that quantifies over subsets of $\mathbb{N}$) rather than a first-order definition (roughly, one that quantifies over elements of $\mathbb{N}$).
Of course, it isn’t obvious that a second-order definition isn’t equivalent to some other first-order definition, and indeed for some quite natural second-order definitions that is the case. But by and large one doesn’t expect it. However, there is certainly a sense in which the set above — of graphs that contain an infinite clique — is not particularly complicated: it has a simple and concise definition. It is in fact an example of an analytic set. These can be defined in various equivalent ways, one of which is as projections of Borel sets. The idea is as follows. First, we take a Borel subset of $\mathbb{N}^{\mathbb{N}} \times \mathbb{N}^{\mathbb{N}}$ (which is homeomorphic to $\mathbb{N}^{\mathbb{N}}$). The set we take is a set of pairs $(x, y)$, where $x$ encodes an infinite subset of $\mathbb{N}$ (in any reasonable way — for example, it could be the set of all $k$ such that the $k$th term of $x$ is 1), and $y$ encodes a graph with $\mathbb{N}$ as vertex set. The set of pairs we take is the set of all pairs $(x, y)$ such that the infinite set encoded by $x$ is the vertex set of a clique in the graph encoded by $y$. If we forget the encodings, we are taking pairs $(X, G)$ such that $X$ is infinite and $X$ is the vertex set of a clique in $G$.
The projection of this set on to the second coordinate gives us the set of all $y$ such that $(x, y)$ belongs to the set for at least one $x$. In other words, it is precisely the set of graphs that contain an infinite clique.
Martin’s theorem is, as its name suggests, the statement that all Borel games are determined. Here are a few reasons that the theorem is not trivial.
1. In order to prove the theorem for Borel sets of index $\alpha$, Martin iterates the power-set operation on $\mathbb{N}$ $\alpha$ times. In other words, he uses extremely large cardinals. (Here I do not mean large cardinals in the usual sense — by the standards of set theorists these are tiny cardinals, but by the standards of most mathematicians, even the power set of the power set of the power set of the power set of $\mathbb{N}$ is pretty huge.)
2. The above feature of the proof was not some piece of laziness on Martin’s part: Harvey Friedman had earlier shown that these iterated power sets are necessary. I think what this means is roughly as follows. In order to iterate the power set operation transfinitely many times, you have to use the axiom of replacement many times. Of course, at a successor ordinal, all you need to do is apply the power-set axiom to the previous set, but at a limit stage it is less straightforward. In one way it is easy: you just take the union of all the sets so far. But in order to define this union, you have to take the set of all sets so far, and for that you need to use the axiom of replacement. So my guess is that if you don’t allow yourself the axiom of replacement, you can deduce the existence of the $\alpha$-times iterated power set of $\mathbb{N}$ from the determinacy of Borel games of index $\alpha$. But I haven’t looked into this properly, so my guess may be wrong. [Added later: it is indeed wrong -- see the helpful discussion below that starts with this comment.]
In any case, there is some precise sense in which proving the determinacy of Borel games requires you to iterate the power-set operation uncountably many times (by which I mean countably many times for each index, but there are uncountably many possible indices).
3. The axiom of choice, as already noted, implies the existence of non-determined games. From that one might expect that all “nice” sets can be shown to be determined. The need for largish cardinals to prove Borel determinacy certainly dents one’s confidence in this, but once one goes further and considers analytic sets, and more general sets (I’m referring here to the projective sets, which are like the analytic sets except that now you can have an alternation of several second-order quantifiers), it turns out that whether or not they are determined depends on what axioms you are prepared to accept. For instance, to prove that analytic sets are determined, it is helpful if you can assume the existence of a measurable cardinal, which would count as a “large cardinal” in the set-theorist’s sense (though fairly modest as large cardinals go). It is not necessary to use a measurable cardinal (something called zero sharp does the job), but it becomes necessary if you go slightly further up the projective hierarchy.
4. I mentioned earlier that a standard way of proving that all Borel sets have a certain property is to show that open sets and closed sets have it, and that it is closed under countable intersections and unions. Unfortunately, as is easily seen, determined sets are not closed even under pairwise intersections. All you have to do is take a non-determined set $A$ and let $B = A \cup \{x : n_1 = 1\}$ and $C = A \cup \{x : n_1 \neq 1\}$. Then $B$ and $C$ are obviously determined (since Player I can choose $n_1$ freely) and their intersection is $A$.
Of these difficulties, the fourth is perhaps the least serious. If you want to prove an inductive statement and the obvious inductive hypothesis “doesn’t induct”, that does not mean that the situation is hopeless. It does, however, mean that it would be good to find a stronger hypothesis that implies what you want to prove and “does induct”.
That is what Martin does. However, this post has got quite long, so I think I’ll stop it here and move on to Martin’s proof in the next post.