Group actions II: the orbit-stabilizer theorem

How many rotational symmetries does a cube have? This question can be answered in a number of ways. Perhaps the one that most readily occurs to people is this: each vertex can end up in one of eight places; once you’ve decided where to put it, there are three places you can put one of its neighbours; once you’ve decided where to put that, the rotation is determined, so the total number of rotations is 8\times 3=24.

Here’s another proof. Take one of the faces. It can go to any one of the six faces (including itself), and once you’ve decided which face it will go to, one of the vertices on the face has four places it can go, and once you’ve decided that you’ve fixed the rotation. So the total number of rotations is 6\times 4=24.

And here’s another. Take one of the midpoints of the twelve edges. There are twelve places it can end up, and once you’ve decided where to put it, there are two choices for how you send the two endpoints of the original edge to the endpoints of the new edge. So the total number of rotations is 12\times 2=24.

And here’s yet another. Take a point half way between the centre of one of the faces and one of the vertices. (If the cube is the set of all points (x,y,z) such that x, y and z all lie in the closed interval [-1,1], then we could, for instance, take the point (1/2,1/2,1).) There are 24 places that that point can end up — the 24 points that are half way between the centre of some face and one of the vertices of that face. There is only one rotational symmetry that takes the original point to any given one of those 24 points. Therefore, the number of rotations of the cube is 24\times 1=24.
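
All four counts are easy to verify by brute force. Here is a sketch of such a check (my own illustration, not from the post): the rotations of the cube [-1,1]^3 can be represented as the 3\times 3 signed permutation matrices of determinant +1, and the point (1/2,1/2,1) then has trivial stabilizer, so its orbit accounts for all 24 rotations.

```python
from fractions import Fraction
from itertools import permutations, product

def det3(m):
    # Determinant of a 3x3 matrix given as a tuple of row tuples.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

# Signed permutation matrices: exactly one nonzero entry (+1 or -1) in each
# row and column.  Those of determinant +1 are the rotations of the cube.
rotations = []
for perm in permutations(range(3)):
    for signs in product((1, -1), repeat=3):
        m = tuple(tuple(signs[r] if c == perm[r] else 0 for c in range(3))
                  for r in range(3))
        if det3(m) == 1:
            rotations.append(m)
print(len(rotations))  # 24

def apply(m, v):
    # Multiply the matrix m by the column vector v.
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# The point half way between a face centre and a vertex, as in the post.
p = (Fraction(1, 2), Fraction(1, 2), Fraction(1))
orbit = {apply(m, p) for m in rotations}
stabilizer = [m for m in rotations if apply(m, p) == p]
print(len(orbit), len(stabilizer))  # 24 1
```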

If you understood those arguments, then you basically understand the orbit-stabilizer theorem, even if you think you don’t. To try to convince you of that, I’ll first show how to convert one of the proofs into a proof that explicitly uses the orbit-stabilizer theorem. I’ll then discuss how much you have to remember in order to remember the proof of the orbit-stabilizer theorem. As usual, I shall aim to show that the answer is “very little indeed”. However, routine proofs in abstract algebra have a somewhat different flavour from routine proofs in set theory and analysis, and as this one is a good representative example, it bears discussing in some detail. (Later I may try the same with the first isomorphism theorem, which you should be meeting fairly soon.)

To illustrate how to count symmetries using the orbit-stabilizer theorem, let me convert the 6\times 4 proof that used faces. The obvious way to capture the idea of “which face a particular face goes to” is to let the symmetry group of the cube act on the faces of the cube. Then the set of places that a particular face can go to is nothing other than the orbit of that face. How does the group act on the faces? Well, any rotation of the cube will permute the faces, and that permutation is the one that corresponds to the rotation.

In the earlier version of the argument, we said that our chosen face can go to any one of the six faces. The fancy way of saying that is that the orbit of the chosen face is the set of all faces, which has size 6.

And in the earlier version of the argument, we said that once we had decided which face our chosen face would go to, there were four ways of lining up the vertices. This doesn’t quite correspond to a statement about stabilizers, but let us note for now that if the chosen face maps to itself, then we have four choices. So the stabilizer of our chosen face has size 4. Multiplying those two numbers together gives us 24.

The content of the orbit-stabilizer theorem is basically that if a group G acts on a set X, then once you know how many ways there are of sending an element x of X to itself, you also know how many ways there are of sending it to any other element in its orbit. In the faces-of-the-cube example, knowing that there are four ways of rotating the cube so that a face F ends up where it was before tells us that there are also four ways of sending it to some other face F'. And that is because “all faces look basically the same” from the point of view of the rotation group. In general, “all points of the orbit of x look basically the same” from the point of view of the transformations performed by the elements of G.

I’m going to give a few different proofs of the orbit-stabilizer theorem, not because they are fundamentally different, but because they have certain stylistic features that are worth commenting on. Also, it may be interesting to see how much variety is possible in the presentation of a proof, even when the underlying argument is the same.

First, I’ll remind you of the statement we are trying to prove.

Theorem. Let G be a group that acts on a set X, let x be an element of X, let O_x be the orbit of x and let S_x be the stabilizer of x. Then |O_x||S_x|=|G|.

In words, the size of the group is the size of the orbit times the size of the stabilizer.
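
Before turning to the proofs, here is a quick computational sanity check of the statement (a toy example of my own choosing, not from the post), using the symmetric group S_4 acting on \{0,1,2,3\} by permuting points:

```python
from itertools import permutations

# S_4 acting on {0,1,2,3}; a permutation g is a tuple with g[i] the image of i.
G = list(permutations(range(4)))
x = 0
O_x = {g[x] for g in G}            # the orbit of x: all four points
S_x = [g for g in G if g[x] == x]  # the stabilizer of x: a copy of S_3
print(len(G), len(O_x), len(S_x))  # 24 4 6, and indeed 24 = 4 * 6
```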

Proof 1. For this proof I want to try to capture as closely as possible our intuition in the cube example that the set of rotations that takes one face to another is “just like the set of rotations that fix that face, but over at another place”. However, I’ll argue in general.

What am I trying to show in the abstract? I want to show that if y belongs to the orbit of x, then the set of g\in G such that gx=y is the same size as S_x, which is the set of g\in G such that gx=x. (Here I’m slipping into the condensed notation and writing gx instead of \phi(g,x) or \phi(g)(x). But it’s important to bear in mind that the “product” gx is not the group operation, since it’s only g that belongs to a group. Rather, it is what one might call the group action operation, which has some nice group-like properties such as (gh)x=g(hx) and ex=x.)

What is the most natural way to show that two sets have the same size? It’s to produce a bijection between them. (It isn’t the only way. Another is to calculate their sizes, possibly by completely different methods, and show that you get the same answer in both cases. But that is not an appropriate method here, since we’re not given enough information to calculate anything.)

Let me write S_{xy} for the set of g\in G that take x to y. If I’m given an element of S_x, how do I turn it into an element of S_{xy}? Well, in the faces-of-cube example, what I might do is this. I fix, once and for all, a rotation \rho that takes my initial face to the given face. And then, given a rotation that fixes the initial face, I compose it with \rho to get a rotation that takes the initial face to the given face.

Let’s try that with S_x and S_{xy}. Since y belongs to the orbit of x (by assumption), there is some h such that hx=y. Now let me define a map from S_x to S_{xy} by mapping g to hg. Note that hg takes x first to x (since g\in S_x) and then to y (since h\in S_{xy}). Thus, hg really does belong to S_{xy}.

However, is the map we’ve just defined a bijection? Let’s see if we can prove that it’s an injection and a surjection. No, let’s not do that. Let’s try to find an inverse. Given an element u of S_{xy}, how might we convert it into an element of S_x? There’s only one thing we can even dream of trying at this point. Since u takes x to y and h also takes x to y, h^{-1}u takes x to x, as we want. So the map u\mapsto h^{-1}u takes S_{xy} to S_x. Does it invert the previous map? Yes of course it does.

We’ve shown that for each y\in O_x there are precisely |S_x| elements of G that take x to y. But every element of G takes x to something in the orbit, so |G|=|O_x||S_x|, as we were trying to prove. \square

In case you found that off-puttingly long, here it is stripped of all the accompanying chat.

The real Proof 1. Let y be an arbitrary element of O_x, and let S_{xy}=\{g\in G:gx=y\}. Pick h\in S_{xy} and define a map \phi:S_x\to S_{xy} by \phi:g\mapsto hg. The map \psi:S_{xy}\to S_x defined by \psi:u\mapsto h^{-1}u inverts \phi, so |S_x|=|S_{xy}| for every y\in O_x. But the sets S_{xy} with y\in O_x form a partition of G. It follows that |G|=|S_x||O_x|. \square
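
If you want to watch the bijection of Proof 1 in action, here is a sketch (my own toy example, with S_4 acting on four points standing in for the rotation group; hg denotes the product "first g, then h"):

```python
from itertools import permutations

G = list(permutations(range(4)))    # S_4 acting on {0,1,2,3}
def mul(h, g):
    # The product hg: apply g first, then h.
    return tuple(h[g[i]] for i in range(4))

x, y = 0, 2
S_x  = [g for g in G if g[x] == x]  # the stabilizer of x
S_xy = [g for g in G if g[x] == y]  # the elements taking x to y
h = S_xy[0]                         # some fixed h with hx = y

image = {mul(h, g) for g in S_x}    # the map phi: g -> hg
print(image == set(S_xy))           # True: phi is a bijection onto S_xy
```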

I cheated slightly there by not checking carefully that \phi really does take S_x to S_{xy} and that \psi really does take S_{xy} to S_x. But those facts are easy enough to be left to the reader to do in his/her head.

That proof is so short that one might wonder why another proof could possibly be desirable. Well, it’s mostly an aesthetic point, at least as far as this proof is concerned, but there’s something a bit ugly about choosing that h. Why? Because I didn’t say how to do it. So, for example, the way I chose h for one y might be completely unrelated to the way I chose it for another y. Wouldn’t it be nicer if there were some natural way of defining h in terms of y?

Well, yes it would, but it’s not that easy to do. Just think of the cube example. Suppose I take a face F and I ask you to define, for each other face F', a rotation of the cube that takes F to F'. I want you to do it in a nice systematic way. How might you do it? Well, if F'=F then probably the most natural thing to do is take the identity. How about if F' is one of the neighbouring faces to F? Perhaps the most natural thing there is to take a 90-degree rotation about an axis through the centre of the cube that’s parallel to the edge where F and F' meet. But then what about the face opposite F? There are four ways of rotating the cube so that F lands up at the opposite face, and they are of two kinds: two of them are half turns about axes that go through the centres of two opposite faces, and the other two are half turns about axes that go through the midpoints of two opposite edges. (If you draw a diagram, I hope you’ll be able to see what I’m talking about here. It’s one of those things that may be easier to work out for yourself.) Anyhow, for each of these two types of half turn, there is absolutely nothing to choose between the two half turns of that type. So there just isn’t a natural way to choose a rotation for each face. As mathematicians might say, there isn’t a canonical choice.

So is that the end of the story? Not quite. There’s a useful principle that sometimes applies in this kind of situation. It says this.

  • If you can’t make a canonical choice, then make all choices at once.

This meme was embedded in my brain as a result of a nice Mathoverflow question, though I suppose I was aware of it less consciously before that. Let’s see how it plays out with our proof.

    If I don’t choose just one h\in S_{xy}, then what do I do instead? I choose all h that belong to S_{xy}.

    A problem then arises when I try to define a bijection from S_x to S_{xy}. Never mind, though. Let’s just define a map from S_{xy}\times S_x to S_{xy} by sending (h,g) to hg. Note that this is like what we did before, but we’re doing it for every h and not just one chosen h. And what’s nice about it is that we are now doing precisely the same thing for every y.

    Let’s call our map \phi. Obviously, \phi is not a bijection, but that doesn’t matter. We’re trying to show that it’s a bit like a putting together of |S_{xy}| bijections. So what we’d like to show is that every element of S_{xy} has exactly |S_{xy}| preimages under \phi. That will tell us that |S_{xy}||S_x|=|S_{xy}|^2, which will imply that |S_{xy}|=|S_x|.

So let’s take u\in S_{xy}. How many preimages has it got? Well, a preimage of u is a pair (h,g) such that hg=u. So I want to show that there are precisely |S_{xy}| solutions of the equation hg=u with h\in S_{xy} and g\in S_x. But that’s easy: for each h\in S_{xy} there is precisely one possible g, namely h^{-1}u, which does indeed belong to S_x.

    Let me again give the condensed version, just to convince you that what I’ve written doesn’t add up to a long proof. In fact, I don’t even need to bother to define the function \phi.

    Proof 2. Let y be an arbitrary element of O_x and let S_{xy}=\{g\in G:gx=y\}. For every u,h\in S_{xy} there is exactly one g\in S_x such that hg=u, namely h^{-1}u. It follows that every u\in S_{xy} can be written in |S_{xy}| ways as hg with h\in S_{xy} and g\in S_x. Also, hg\in S_{xy} for every h\in S_{xy} and every g\in S_x. It follows that |S_{xy}||S_x|=|S_{xy}|^2. Therefore, |S_x|=|S_{xy}|. But the sets S_{xy} with y\in O_x form a partition of G. It follows that |G|=|S_x||O_x|. \square
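
The counting in Proof 2 can likewise be checked mechanically; in the same toy example as before (S_4 acting on four points, my choice rather than anything from the post), every u\in S_{xy} arises as a product hg in exactly |S_{xy}| ways:

```python
from collections import Counter
from itertools import permutations

G = list(permutations(range(4)))    # S_4 acting on {0,1,2,3}
def mul(h, g):
    # The product hg: apply g first, then h.
    return tuple(h[g[i]] for i in range(4))

x, y = 0, 2
S_x  = [g for g in G if g[x] == x]  # the stabilizer of x
S_xy = [g for g in G if g[x] == y]  # the elements taking x to y

# Count how often each product hg occurs as (h, g) ranges over S_xy x S_x.
counts = Counter(mul(h, g) for h in S_xy for g in S_x)
print(set(counts) == set(S_xy))     # True: every product hg lies in S_xy
print(set(counts.values()))         # {6}: each u has |S_xy| = 6 preimages
```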

    Here is a variant of the above argument that I like better.

    Proof 3. Let y be an arbitrary element of O_x and let S_{xy}=\{g\in G:gx=y\}. Let u\in S_{xy} and let us see in how many ways we can write u=hg with h\in S_{xy} and g\in S_x. Well, for each h\in S_{xy}, there is exactly one g\in S_x such that hg=u, namely h^{-1}u. So we can write u=hg in |S_{xy}| ways. But also, for each g\in S_x there is exactly one h\in S_{xy} such that hg=u, namely ug^{-1}. This shows that we can write u=hg in |S_x| ways. Therefore, |S_x|=|S_{xy}|. But the sets S_{xy} with y\in O_x form a partition of G. It follows that |G|=|S_x||O_x|. \square

    Or do I like it better? It’s starting to look, with its non-canonical choice of u\in S_{xy}, a bit too like the first argument.

I still don’t feel as though I’ve arrived at the neatest and most symmetrical argument, and I’ve also not satisfied another urge, which is to prove that |G|=|O_x||S_x| by finding a nice bijection between G and O_x\times S_x. As I’ve already discussed, there are good reasons why no canonical bijection of that kind exists, but I might be able to satisfy my urge by finding not a one-to-one correspondence, but something a bit more general and multivalued.

    So let’s take a point (y,g)\in O_x\times S_x. What can we do with it? We can use it to define the element gy of X, but that doesn’t seem very exciting or helpful. What we’d really like is to define an element of G, which earlier we did by fixing some h\in S_{xy} and taking hg. But that felt too non-canonical, so instead we preferred to take all h.

    If we take all h, then that’s saying that we can get from x to y in several different ways. So we seem to get a map from G\times S_x to G, which is very simple: it takes (h,g) to hg. But what’s the point of it? And where does O_x come in?

    One way of making O_x come in is to go one step further and let hg act on x to create hgx=hx. So now we’ve got a map from G\times S_x to O_x, the map that takes (h,g) to hx. How many preimages does an element y have?

    I’m not going to answer that question (though I could) because once we’ve got to the point of mapping G\times S_x to O_x using the map above, we see that g isn’t really playing a role, so why not just focus on the map from G to O_x that takes g (which I was calling h above) to gx? If y\in O_x, then how many preimages does y have?

    I won’t bother to answer that question either: it’s the same as asking how big the set S_{xy} is, and we’ve established already that that is |S_x|, so it wouldn’t be a particularly new proof that resulted. However, one remark that’s worth making is that the set S_{xy} is a left coset of S_x. Probably the proof you were given in lectures was this.

Proof 4. We shall show that there is a bijection between O_x and the set of left cosets of S_x. To do this, we map the left coset gS_x to gx. We must show that this is well-defined. But if gS_x=hS_x, then h^{-1}gS_x=S_x, and therefore, since S_x is a subgroup, h^{-1}g\in S_x. From that it follows that h^{-1}gx=x, so gx=hx. The map is surjective, since every element of O_x is of the form gx for some g\in G, and it is injective because the chain of implications reverses: if gx=hx, then h^{-1}g\in S_x, so gS_x=hS_x.

    Since the number of left cosets of S_x is |G|/|S_x|, we are done. \square
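
Proof 4’s correspondence between left cosets and orbit points can also be watched in a small example (again S_4 acting on four points, my own illustration rather than anything in the post):

```python
from itertools import permutations

G = list(permutations(range(4)))    # S_4 acting on {0,1,2,3}
def mul(h, g):
    # The product hg: apply g first, then h.
    return tuple(h[g[i]] for i in range(4))

x = 0
S_x = [g for g in G if g[x] == x]   # the stabilizer of x

# Send the left coset gS_x (stored as a frozenset) to the orbit point gx.
# The dict comprehension is consistent precisely because the map is well-defined.
coset_to_point = {frozenset(mul(g, s) for s in S_x): g[x] for g in G}
O_x = {g[x] for g in G}
print(len(coset_to_point), len(O_x))        # 4 4: as many cosets as orbit points
print(set(coset_to_point.values()) == O_x)  # True: the map is onto the orbit
```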

    If the phrase “well-defined” worries you in the above argument, then I recommend a post I once wrote about what it means.

    Here’s a different way of writing the above proof, where we think more about equivalence relations than about partitions.

    Proof 5. Here are two equivalence relations on G. For the first, I define g\sim_1h if gx=hx. For the second, I define g\sim_2h if h^{-1}g\in S_x. Note that this is the same as saying that g^{-1}h\in S_x and also the same as saying that gS_x=hS_x.

    Now gx=hx if and only if h^{-1}gx=x if and only if h^{-1}g\in S_x. So the two equivalence relations are the same. The number of equivalence classes for the first relation is obviously |O_x| and the size of each equivalence class for the second relation is obviously |S_x|, so the result is proved. \square


    Let me make one final remark about the orbit-stabilizer theorem. Why, one might ask, is it a useful result? A rather general answer is that it gives us a relationship between three quantities, namely |G|, |S_x| and |O_x|, that allows us to determine any one of them from the other two. Situations crop up quite frequently in group theory where it is not very easy to see what one of these quantities is directly, but quite easy to calculate the other two. The orbit-stabilizer theorem then gives us the hard one too. For example, in the case of counting rotational symmetries of a cube, it isn’t easy to think of all the rotations unless you partition them in some nice way, such as looking at what they do to a particular vertex or face, which, as we saw before, amounts to counting orbits and stabilizers.

    It’s not always |G| that’s the tough one to get a handle on. For example, in question 10 of Examples Sheet 3 it’s the size of the orbit that isn’t obvious. (In general with the bunch of questions around there, I recommend thinking to yourself, “What is the orbit-stabilizer theorem giving me?”)


    21 Responses to “Group actions II: the orbit-stabilizer theorem”

    1. kim Says:

I wasn’t quite sure about the first proof for the rotational symmetries of a cube. Can you confirm that what you are doing is picking a line through one vertex and another vertex opposite that, then twirling it around this axis, and this is what you mean when you say ”there are three places you can put one of its neighbours”?

      thanks

      • gowers Says:

        If I understand you correctly, then yes that is what I was saying. Let x be your starting vertex and let y be another vertex. There’s definitely at least one rotation that takes x to y. Once you’ve found that, you can rotate the cube about the line that joins y to the vertex opposite y (exactly as you say) through any multiple of 120 degrees. That gives you three rotations that take x to y (since a composition of two rotations is a rotation). But the way I was thinking about it, I was describing those three rotations by taking a vertex z adjacent to x and considering where it could go if you already know that x goes to y. Clearly it has to go to one of the three vertices adjacent to y. Also, once you’ve decided where x goes and where z goes then the rotation is completely determined. (If you allow reflections as well, then this is no longer the case: there are two possible transformations that take x to y and z to some given neighbour of y.)

    2. kim Says:

And did you have trouble at first visualising that particular rotation? I had to physically rotate a cube about that axis to convince myself of that symmetry.

    3. nyaya@null.net Says:

      In your first paragraph, you write:

…once you’ve decided where to put it, there are three places you can put one of its neighbours; once you’ve decided where to put that, the rotation is determined…

Now in the image, we can go from (i) to (iii) via a rotation but not to (ii). Is there an easy way to see this without invoking orientations?

      • gowers Says:

        I think you probably have to invoke orientations somehow, even if you disguise them. One way one might disguise them is to say something like this: imagine that you are walking from b to a in the first picture; then c is over to the right; the same must be true after a rotation; so once you’ve fixed a and b then you’ve fixed c and hence the whole linear map. (This is a bit like the right-hand rule applied to 0, a, b and c.)

    4. Joseph Says:

I feel the orbit-stabilizer theorem is an example of a theorem whose proof is more interesting than its statement. The way I remember how to prove it is probably closest to proof 4. I think it is clearer to prove a stronger statement: that the group is partitioned into (equally sized) cosets of the stabiliser, and elements in the same coset send x to the same place, and elements in different cosets send x to different places. This statement is easy to remember and gives one a better insight into the structure of the group action, and the steps of proving the statement are quite easy. If you like you can notice that the stabiliser is a subgroup and then use existing knowledge about cosets of a subgroup (for example that they are all the same size and the set of cosets partition the group).

    5. Anonymous Says:

I’m not sure Nyaya’s picture is correct because the orientation of a and b is not reversed in all 3 pictures. Surely everything has to be reversed when you reflect?

      • Nyaya Says:

I am not sure what you mean. To get (iii) from (i), rotate \pi/4 about the central axis (connecting the midpoints of the top face and the bottom face) and then 2\pi/3 about the diagonal of the cube through a. To get (ii) from (i), rotate \pi/4 about the central axis and reflect in the plane containing ac with normal parallel to db (this is still a reflection… to see this, notice that the det of the operation is (+1)\cdot(-1)=-1).

      • gowers Says:

        The picture looks correct to me too.

    6. David Says:

      Just a few small typos here: in the second proof you write “every u \in S_x” where it really should be “every u in S_xy”. And in the third proof you repeat “with h \in S_xy” twice.

      Many thanks — fixed now.

    7. anonymous Says:

      There is an analogy between the orbit-stabilizer theorem and the following simple observation. If we have two isomorphic structures, there may yet be more than one isomorphism from one to the other. It is possible to view the action of the group on the faces of the cube in this way. To be precise, the number of isomorphisms from a structure to an isomorphic structure is equal to the number of automorphisms of the structure. We can prove this deep fact and the orbit-stabilizer theorem in essentially the same way, using elementary combinatorial arguments about directed graphs.

    8. Ravi Shankar (Oklahoma) Says:

      I wanted to give a simple, elegant application of the Orbit-Stabilizer theorem. This is an alternate proof of Cauchy’s theorem: if a prime ‘p’ divides the order of the finite group G, then G has an element ‘x’ of order p. This proof is due to James McKay (Monthly, 1959).

      Consider the set S, a subset of GxGx…xG, the p-fold Cartesian product of G, where S = {(x_1, x_2, …, x_p): x_1*x_2*…*x_p = 1}. S is non-empty and in fact it is fairly easy to see that |S| = |G|^{p-1}.

Define an action of Z/p on S, where Z/p acts via cyclic permutations (as a subgroup of the symmetric group S_p). Then by the Orbit-Stabilizer theorem (and the fact that p is prime) every orbit of this action on S has size either 1 or p. An orbit of size 1 is a fixed point, and an element of S is fixed by all cyclic permutations if and only if it looks like (x,x,…,x). The collection of fixed points is non-empty; the element (1,1,…,1) is fixed. If there are ‘k’ fixed points, where k is at least 1, and ‘m’ orbits of size p, then we have

      |S| = |G|^{p-1} = k + m*p

The left side is divisible by p, so p divides k; since k is at least 1, this forces k > 1. So there is a non-trivial element (x,x,…,x) in S (in fact at least p-1 such elements) such that x^p = 1, which gives the required element of order p.
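
For what it’s worth, the whole argument can be checked by machine for a small group. Here is a sketch of my own, with G = S_3 and p = 3:

```python
from itertools import permutations, product

G = list(permutations(range(3)))    # S_3, of order 6; p = 3 divides 6
e = tuple(range(3))                 # the identity permutation
def mul(h, g):
    # The product hg: apply g first, then h.
    return tuple(h[g[i]] for i in range(3))

p = 3
# S = {(x_1, x_2, x_3) : x_1 x_2 x_3 = e}; the last coordinate is determined
# by the first two, so |S| = |G|^(p-1) = 36.
S = [t for t in product(G, repeat=p) if mul(mul(t[0], t[1]), t[2]) == e]
print(len(S))                       # 36

# Fixed points of the cyclic Z/p action are the constant tuples (x, x, x).
fixed = [t for t in S if t[0] == t[1] == t[2]]
print(len(fixed))                   # k = 3: divisible by p, so k > 1
print([t[0] for t in fixed if t[0] != e])  # the two elements of order 3
```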

      • gowers Says:

        I was going to write about that argument (which appears in one of the questions on the Cambridge examples sheet mentioned in the post) in a further post on group actions, but in the course of thinking about it, I came to the conclusion that it was a bogus application, in the sense that essentially the same argument can be written out more simply with no mention of the orbit-stabilizer theorem. It goes like this. (Most of it is the same as the argument as you’ve written it.)

        1. The set S has size |G|^{p-1}.

        2. If (x_1,\dots,x_p)\in S and x_1,\dots,x_p are not all equal, then all cyclic permutations of (x_1,\dots,x_p) belong to S and are distinct. (This depends on p being prime.)

        3. On the other hand if x_1=\dots=x_p then all cyclic permutations are trivially equal.

        4. Hence, if we define two elements of S to be equivalent if one is a cyclic permutation of the other, then the equivalence classes all have size 1 or p.

        5. Since (e,e,\dots,e) belongs to S, at least one equivalence class has size 1.

6. Since p divides |S| and the classes of size p account for a multiple of p elements, the number of equivalence classes of size 1 is also a multiple of p. Therefore, there must be another equivalence class of size 1, which implies that there is some g\ne e such that g^p=e.

        In this proof there is no need to mention the \mathbb{Z}_p action, though I suppose implicitly it’s there. Similarly with orbits. But the real point is that there is absolutely no need to use the orbit-stabilizer theorem, since we can see directly what the sizes of the orbits are.

      • gowers Says:

        Having just written that, I read what you wrote more carefully, and I now see that you do in fact use the orbit-stabilizer theorem to save a tiny bit of time, so I take back slightly what I’ve just written. In the argument I suggest above, you have to argue that if the sequence is equal to a cyclic permutation of itself, then we get some r such that x_i=x_{i+r} for every i, and since every element of \mathbb{Z}_p is a multiple of r it follows that all the x_i are equal. Your argument is that the stabilizer is a subgroup of \mathbb{Z}_p and hence has size 1 or p, from which it follows that the orbit has size 1 or p.

        That still leaves me with the feeling that Cauchy’s theorem isn’t really an application of the orbit-stabilizer theorem: it’s more that a small part of the proof can be made slightly quicker (at the cost of making the argument look a bit more mysterious and magical).

      • Ravi Shankar (Oklahoma) Says:

        I suppose one can get away with equivalence classes in this context but here is another example of the utility of the Orbit-Stabilizer theorem. Namely the Sylow theorems: let G be a finite group of order p^a*m, where p does not divide m and p prime.

        Let S denote the collection of all subsets of G of size p^a. S has size p^a*m choose p^a. It is (if I may dare say this to you Tim) a relatively simple exercise in combinatorics to show that p does not divide |S| if p is prime. Now G acts on S by right multiplication. Since p does not divide |S|, there must be an orbit O = G(X) of the G-action of size not divisible by p. But by the Orbit-Stabilizer theorem |O| divides |G| and hence |O| divides m. Let G_X be the stabilizer of the subset X in O. Again by the Orbit-Stabilizer theorem, |G_X| = p^a*k, where k = m/|O|. On the other hand, for an element x in X, the set {xg: g in G_X} is a subset of X so |G_X| is at most |X| = p^a. Hence, G_X is a subgroup of G of size p^a, i.e., we have shown the existence of a Sylow p-subgroup.
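
For a group as small as S_3 (so p = 2, p^a = 2 and m = 3) one can watch this argument produce a Sylow subgroup; here is a sketch of my own:

```python
from itertools import combinations, permutations

G = list(permutations(range(3)))    # S_3, of order 6 = 2 * 3; take p = 2
def mul(h, g):
    # The product hg: apply g first, then h.
    return tuple(h[g[i]] for i in range(3))

# All subsets of G of size p^a = 2; there are C(6,2) = 15, and 15 is odd.
subsets = [frozenset(c) for c in combinations(G, 2)]

def translate(X, g):
    # The action by right multiplication: X -> Xg.
    return frozenset(mul(x, g) for x in X)

# Since p does not divide 15, some orbit has odd size; the stabilizer of any
# of its members then has order p^a = 2, i.e. it is a Sylow 2-subgroup.
for X in subsets:
    orbit = {translate(X, g) for g in G}
    if len(orbit) % 2 == 1:
        stab = [g for g in G if translate(X, g) == X]
        print(len(orbit), len(stab))  # 3 2
        break
```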

One can continue further: if C denotes the set of G-conjugates of G_X (note that we are now considering a different action), then by considering the sub-action of G_X on C, one can show that G_X is the only fixed point (again using the Orbit-Stabilizer theorem) and hence all Sylow p-subgroups are conjugate. This also yields that the number of Sylow p-subgroups is congruent to 1 mod p, again by the O-S theorem, since each orbit has size 1 or a power of p etc.

        Of course there are other proofs of the above theorems but to my taste this seems quite elegant and one can see for instance why the number of Sylow p-subgroups is 1 mod p.

    9. Joseph Says:

      A variant of your proof 2 would give you a bijection, if you think that would be neater. Define a map from S_{xy}\times S_x to S_{xy}\times S_{xy} by sending (h, g) to (h, hg).

      Another thought: You mention that the way to show that two sets are the same size is to find a bijection between them. I was trying to think how other proofs, such as proof 4, show that the two sets are the same size. The way it does it is by showing that the sets are equal to a pair of sets we know to be equal, a subgroup and one of its cosets. However, it is not much different, because we would show a bijection to prove that a subgroup is equal in size to one of its cosets, so all that is different here is that the use of the technique is buried in the proof of a result that we rely on.

      A typo: Where you wrote “What we’d really like is to define an element of g”, I believe this should be “What we’d really like is to define an element of G”.

      Thanks — fixed. (And I like that bijection.)

    10. Joseph Says:

      Suppose \phi_y : S_{xy}\times S_x \to S_{xy}\times S_{xy} is defined as in my previous comment. Notice that the domains and co-domains of these maps are disjoint. The closest thing to a bijection from O_x\times S_x to G is, I believe, \cup_y \phi_y : \cup_y S_{xy}\times S_x \to \cup_y S_{xy}\times S_{xy}.

      To get the theorem from this by counting, we need |S_{xy}| = |S_x| for all y. (Admittedly, once you have that it is hardly worth bothering with the argument we are in the middle of.) Then we proceed as follows:

      |\cup_y S_{xy}\times S_x| = |\cup_y S_{xy}\times S_{xy}|

      \Sigma_y |S_{xy}| |S_x| = |S_x| |\cup_y S_{xy}|

      |O_x| |S_x| |S_x| = |S_x| |G|

      |O_x| |S_x| = |G|

    11. valuevar Says:

      It’s worth pointing out here that the orbit-stabiliser theorem is robust enough that it can be stated and proved for arbitrary subsets of G, rather than for subgroups of G or (as here) G itself. This is something that became very useful to me early on – I recently chose to highlight it as Lemma 3.1 in http://arxiv.org/abs/1109.3550 .

    12. xyz Says:

      Minor typo: “So let’s take $u\in S_x$. How many preimages has it got?”
      Should be: $u\in S_{xy}$.

    13. Group actions IV: intrinsic actions « Gowers's Weblog Says:

      [...] a small list of possible actions to draw on). Just before I finish, I’m going to look back at a comment on my previous post but one on group actions, which I deliberately didn’t read properly because I knew I was going to write this. It will [...]

    14. Dr. Michael Pugh Says:

      Can different groups share stabilizers? I’m asking because I’ve been studying Lie Theory, and I’ve learned how to calculate differential invariants for ODE’s of any order. You substitute these into the ODE and with a little algebra, out pops the invariant solution. It’s very nice. Last night I found a Riccati equation someone had solved using an exponential transformation, and I solved it using a stretching group and got the same answer. When I calculated the stabilizers of the exponential group (using Sophus Lie’s technique) I found it to have the same stabilizers as my stretching group. Does this mean that they are in fact the same group, and does it matter that these are infinite groups, not finite groups?
