Representations of Lie groups

31 January 2022, 16:30 John Huerta

Let G be a Lie group, and g = Lie(G) its Lie algebra. In the first half of this course, we studied representations of finite groups. We now wish to study representations of Lie groups.

Definition. A representation of G is a homomorphism π: G -> GL(V), where V is a finite-dimensional vector space over ℝ or ℂ.

Example. Every Lie group G has a god-given representation on its Lie algebra g, called the adjoint representation of G, and denoted Ad: G -> GL(g). If G is a matrix Lie group, then Ad is given by conjugation of matrices:

Ad_A(X) = AXA⁻¹, for A in G and X in g.

More generally, if G is any Lie group, Ad comes from differentiating the conjugation action of G on itself.
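For a matrix Lie group, this differentiation is a short computation (a quick check, using only the definition of the matrix exponential): differentiate the conjugation B ↦ ABA⁻¹ along the curve exp(tX) through the identity,

Ad_A(X) = d/dt|_{t=0} A exp(tX) A⁻¹ = AXA⁻¹,

recovering the formula above.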

We can also study representations of Lie algebras.

Definition. Let g be a Lie algebra over 𝕂 = ℝ or ℂ. A representation of g is a homomorphism of Lie algebras ∏: g -> gl(V), where V is a finite-dimensional vector space over 𝕂.

Example. Every Lie algebra g has a god-given representation on itself, called the adjoint representation of g, and denoted ad: g -> gl(g). An element X in g acts on Y in g by bracketing: ad_X(Y) = [X, Y].

Exercise. Show that ad is a representation of g for any Lie algebra g. You will need the Jacobi identity!

Now we want to ask, how are representations of a Lie group G related to those of its Lie algebra g = Lie(G)? To understand this, let us reach into our differential geometry toolbox: given any smooth map f: M -> N between manifolds M and N, and a point p of M, we get a linear map between tangent spaces, f': T_p M -> T_{f(p)} N, the pushforward.

Now apply this tool to a homomorphism of Lie groups, φ: G -> H. Let us take the pushforward at the identity:

φ': T_e G -> T_e H.

The second tangent space is at e because φ(e) = e (φ is a group homomorphism!). But we know these tangent spaces by another name: they're the Lie algebras of G and H, respectively. So we get a linear map:

φ': g -> h

Fact. The linear map φ' is a Lie algebra homomorphism.

Thus, from Lie group homomorphisms, we get Lie algebra homomorphisms by pushforward at e. Finally, let us apply this idea to representations. A representation of G is a homomorphism π: G -> GL(V). Taking the pushforward at e, we get a Lie algebra representation π': g -> gl(V).
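One standard fact, stated here without proof, makes the relationship concrete: the representation and its pushforward are intertwined by the exponential map,

π(exp X) = exp(π'(X)),   for all X in g.

In particular, when G is connected, π is completely determined by π', because a connected Lie group is generated by the image of its exponential map.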

Key question: can we go backwards, and get Lie group representations from Lie algebra representations?

It turns out the answer is yes, if we assume that G satisfies a topological condition: G must be "simply connected".

Definition. A topological space X is simply connected if:

  • for any two points x and y, there is a continuous curve connecting them: that is, there is a continuous map c: [0,1] -> X such that c(0) = x and c(1) = y;
  • if c_1 and c_2 are two curves connecting x and y, then c_1 can be continuously deformed into c_2 through a family of curves that connect x and y: we say that c_1 and c_2 are homotopic.
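For instance (standard examples, just for intuition): ℂⁿ and the 3-sphere S³ are simply connected, but the circle S¹ is not: two curves from x to y that go around opposite sides of the circle cannot be deformed into one another without leaving the circle.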

Proposition. If G is a simply-connected Lie group with Lie algebra g = Lie(G), then any finite-dimensional representation ∏: g -> gl(V) of the Lie algebra comes from a unique representation π: G -> GL(V) of the Lie group via pushforward: ∏ = π'.
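The simple connectedness hypothesis really is needed. A standard example (a sketch, using the circle group U(1) of unit complex numbers, which is not part of our main story): the Lie algebra of U(1) is iℝ, and sending i to i/2 defines a perfectly good one-dimensional representation of this Lie algebra. If it came from a representation π of the group, we would need π(e^{iθ}) = e^{iθ/2}, which is not well defined: θ and θ + 2π give the same element of U(1) but different values of e^{iθ/2}. The obstruction is exactly that U(1) ≅ S¹ is not simply connected.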

This proposition is a big help! Representations of Lie algebras are essentially just linear algebra, so they are much easier to study than representations of Lie groups. Easiest of all is linear algebra over the complex numbers ℂ, because then we can always find eigenvalues and try to diagonalize matrices.

In light of this, we will focus on one family of examples. We will study the representations of the complex special linear group, SL_n(ℂ).

Fact. The complex special linear group SL_n(ℂ) is simply connected for all n ≥ 1. Hence the complex, finite-dimensional representations of the Lie group SL_n(ℂ) are the same as those of the Lie algebra sl_n(ℂ).

SL_1(ℂ) is the trivial group. So the first interesting example is SL_2(ℂ), and its Lie algebra sl_2(ℂ). It is this case that we will focus on for this lecture and most of the next.

In any case, the idea is always to try to diagonalize as many matrices as possible:

Definition. A Cartan subalgebra h of sl_n(ℂ) is a maximal abelian subalgebra such that the adjoint action ad_H of any H in h can be diagonalized.


Example. Let h be the diagonal matrices in sl_n(ℂ). This subalgebra is:
  • abelian, because diagonal matrices commute;
  • maximal, because any matrix outside h has a nonzero off-diagonal entry, and so fails to commute with some diagonal matrix;
  • such that each ad_H is diagonalizable: for H the diagonal matrix with entries a_1, a_2, ..., a_n, and E_ij the elementary matrix with 1 in the (i,j) entry and 0 elsewhere, we have ad_H(E_ij) = (a_i - a_j) E_ij. This shows that E_ij is an eigenvector of ad_H with eigenvalue a_i - a_j, and we can use this formula to find a basis of eigenvectors in sl_n(ℂ) (the 2×2 case is worked out below).
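For concreteness, here is the n = 2 instance of that formula: with H the diagonal matrix with entries a_1 = 1 and a_2 = -1,

ad_H(E_12) = (a_1 - a_2) E_12 = 2 E_12,    ad_H(E_21) = (a_2 - a_1) E_21 = -2 E_21,

and ad_H(H) = 0. The eigenvalues of ad_H on sl_2(ℂ) are therefore 2, 0 and -2; these are exactly the brackets [H, E] = 2E and [H, F] = -2F appearing below, since E = E_12 and F = E_21.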

For sl_2(ℂ), there's a Cartan subalgebra h = span{ H }, spanned by just one element:

H = [ 1   0 ]
    [ 0  -1 ]

In fact, we have a standard basis for sl_2(ℂ) including the element H:

E = [ 0   1 ]
    [ 0   0 ]

H = [ 1   0 ]
    [ 0  -1 ]

F = [ 0   0 ]
    [ 1   0 ]

This basis satisfies the important relations:

[H, E] = 2E,    [E, F] = H,    [H, F] = -2F.
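As a quick check, the middle relation follows by direct matrix multiplication:

EF = [ 1   0 ]        FE = [ 0   0 ]
     [ 0   0 ]             [ 0   1 ]

so [E, F] = EF - FE = H. The other two relations can be verified the same way.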

To help us analyze the representations of sl_2(ℂ), we state the following theorem without proof:

Theorem. All complex, finite-dimensional representations of sl_n(ℂ) are completely reducible.

So, any representation can be decomposed into a direct sum of irreducible representations (irreps). From now on, we will assume that the representation V is irreducible. The key result for understanding V is a bit deep:

Deep theorem. For any finite-dimensional complex representation (V, ∏) of sl_2(ℂ), the operator ∏(H) is diagonalizable.

This should be plausible, because H itself is a diagonal matrix, and we already know that ad_H is diagonalizable. The miracle is that ∏(H) is diagonalizable for all ∏.

We use this diagonalizability as follows: decompose the irrep V into eigenspaces:

V = ⊕_a V_a

where the direct sum is over all complex numbers a which are eigenvalues of ∏(H), and each summand V_a is the eigenspace for a. In other words, for all v in V_a, we have Hv = av (really ∏(H)v = av, but I am going to suppress ∏).
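We have already seen one example of such a decomposition: for the adjoint representation of sl_2(ℂ) on itself (∏ = ad, V = sl_2(ℂ)), the eigenvalues of ad_H are 2, 0 and -2, and

sl_2(ℂ) = V_{-2} ⊕ V_0 ⊕ V_2,   with V_2 = span{E}, V_0 = span{H}, V_{-2} = span{F}.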

We know how H acts: it acts on each V_a by the eigenvalue a. Next, we need to know how the other basis elements E and F act:

Proposition. If v is in V_a, then Ev is in V_{a+2} and Fv is in V_{a-2}. (Really ∏(E)v and ∏(F)v, but I am going to suppress ∏.)


The proof of this proposition is so important, we call it the fundamental calculation:

HEv = EHv + [H, E]v = aEv + 2Ev = (a + 2)Ev.

Similarly, HFv = (a - 2)Fv. This is what we wanted to check.
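Written out, the calculation for F is the same two steps, now using [H, F] = -2F:

HFv = FHv + [H, F]v = aFv - 2Fv = (a - 2)Fv.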

So, we have arrived at this picture of the representation V:
V = ... ⊕ V_{a-2} ⊕ V_a ⊕ V_{a+2} ⊕ ...
where:
  • E acts by raising the eigenvalue by 2;
  • F acts by lowering the eigenvalue by 2;
  • H acts by multiplying by the eigenvalue.
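For instance, the defining representation V = ℂ², on which the matrices above act directly, fits this picture: with the standard basis e_1, e_2 we have He_1 = e_1 and He_2 = -e_2, so V = V_{-1} ⊕ V_1; moreover Ee_2 = e_1 and Ee_1 = 0 (E raises by 2), while Fe_1 = e_2 and Fe_2 = 0 (F lowers by 2).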
We're almost in a position to understand V completely. We will do so in the next lecture.