
# Root Systems in Lie Theory: From the Classic Definition to Nowadays

Communicated by *Notices* Associate Editor Han-Bom Moon

## 1. Root Systems: The Origin

The purpose of this article is to discuss the role played by root systems in the theory of Lie algebras and related objects in representation theory, with a focus on their combinatorial description and properties.

### 1.1. Semisimple Lie algebras

The study of Lie algebras began toward the end of the 19th century. They emerged as the algebraic counterpart of a purely geometric object: *Lie groups*, which we can briefly define as groups that admit a differentiable structure such that multiplication and the function that computes inverses are differentiable. Lie algebras appeared as an algebraic structure attached to the tangent space at the identity of such a group.

Initially Lie algebras were considered only over the real or complex numbers, but the abstraction of the definition led to Lie algebras over arbitrary fields: a *Lie algebra* over a field $\Bbbk$ is a $\Bbbk$-vector space $\mathfrak{g}$ together with a bilinear map $[\cdot,\cdot]\colon\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}$ (the *bracket*) which is antisymmetric, $[x,y]=-[y,x]$, and satisfies the Jacobi identity
$$[x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0 \qquad \text{for all } x,y,z\in\mathfrak{g}.$$

There is a subtle difference when the field is of characteristic two: the antisymmetry is replaced by $[x,x]=0$ for all $x\in\mathfrak{g}$ (which implies the former). From now on all Lie algebras considered here are assumed to be finite-dimensional.

An easy example is to pick a vector space $V$ together with the trivial bracket $[x,y]=0$ for all $x,y\in V$; these Lie algebras are called *abelian*.

There is a general way to move from an associative algebra $A$ to a Lie algebra: take $A$ as vector space and set $[a,b]=ab-ba$ for each pair $a,b\in A$. A prominent example of this construction is the *general linear algebra* $\mathfrak{gl}(V)$, which is the set of linear endomorphisms of a finite-dimensional vector space $V$. Other classical examples appear as Lie subalgebras (that is, subspaces closed under the bracket) of $\mathfrak{gl}(V)$:

- The *special linear* Lie algebra $\mathfrak{sl}(V)$: those endomorphisms whose trace is zero; if $V=\Bbbk^n$, then we simply denote $\mathfrak{sl}(V)$ by $\mathfrak{sl}_n(\Bbbk)$, or $\mathfrak{sl}_n$ when the field is clear from the context.

- The *orthogonal* and *symplectic* Lie subalgebras, $\mathfrak{so}(V)$ and $\mathfrak{sp}(V)$ respectively, of those endomorphisms $x$ such that
$$B(x(v),w)=-B(v,x(w)) \qquad \text{for all } v,w\in V,$$
where $B$ is a symmetric, respectively antisymmetric, nondegenerate bilinear form on $V$.

Analogously, we may start with the algebra $\mathfrak{gl}_n(\Bbbk)$ of $n\times n$ matrices and take some subalgebras, such as the subspace of upper triangular matrices, the matrices of trace $0$, or the skew-symmetric matrices, among others.
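The passage from an associative algebra to a Lie algebra can be checked numerically. The following sketch (not from the article) verifies, for random matrices, that the commutator bracket is antisymmetric, satisfies the Jacobi identity, and preserves the trace-zero condition:

```python
# Sketch: the commutator bracket [a, b] = ab - ba on matrices,
# checked numerically with NumPy.
import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    """Lie bracket [a, b] = ab - ba on an associative (matrix) algebra."""
    return a @ b - b @ a

a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))

# Antisymmetry: [a, b] = -[b, a]
assert np.allclose(bracket(a, b), -bracket(b, a))
# Jacobi identity: [a,[b,c]] + [b,[c,a]] + [c,[a,b]] = 0
jac = (bracket(a, bracket(b, c)) + bracket(b, bracket(c, a))
       + bracket(c, bracket(a, b)))
assert np.allclose(jac, 0)
# The trace-zero matrices are closed under the bracket: tr[a, b] = 0
assert abs(np.trace(bracket(a, b))) < 1e-10
```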

Once we have a notion of *algebra*, it is natural to ask for ideals: in the case of Lie algebras, these are subspaces $I\subseteq\mathfrak{g}$ such that $[\mathfrak{g},I]\subseteq I$. This leads us to consider *simple* Lie algebras: those Lie algebras $\mathfrak{g}$ such that $[\mathfrak{g},\mathfrak{g}]\neq 0$ and the unique ideals are the trivial ones, $0$ and $\mathfrak{g}$. In addition, we say that a Lie algebra $\mathfrak{g}$ is *semisimple* if it is isomorphic to a direct sum of simple Lie algebras.

For the rest of this section we fix $\Bbbk=\mathbb{C}$. We know that a Lie algebra $\mathfrak{g}$ is simple if and only if $\mathfrak{g}$ is (isomorphic to) $\mathfrak{sl}_n$, $\mathfrak{so}_n$, $\mathfrak{sp}_{2n}$ (for appropriate values of $n$), or one of a few exceptional examples: $\mathfrak{e}_6$, $\mathfrak{e}_7$, $\mathfrak{e}_8$, $\mathfrak{f}_4$, $\mathfrak{g}_2$. That is, up to 5 exceptions, all the complex simple Lie algebras are subalgebras of matrices. Thus one may wonder whether some properties of the algebras of matrices still hold for simple Lie algebras. We will recall some of them by the end of this section, following [Hum78].

As for associative algebras, we can study modules over Lie algebras. A $\mathfrak{g}$-*module* is a pair $(V,\rho)$, where $V$ is a $\Bbbk$-vector space and $\rho\colon\mathfrak{g}\to\mathfrak{gl}(V)$ is a linear map such that
$$\rho([x,y])=\rho(x)\rho(y)-\rho(y)\rho(x) \qquad \text{for all } x,y\in\mathfrak{g}.$$

For example, the bracket gives an action of $\mathfrak{g}$ on itself, called the *adjoint action*.

For each $x\in\mathfrak{g}$ we look at the *inner derivation*
$$\operatorname{ad}x\colon\mathfrak{g}\to\mathfrak{g}, \qquad (\operatorname{ad}x)(y)=[x,y],$$
associated to the adjoint action. These endomorphisms induce a symmetric bilinear form on $\mathfrak{g}$ called the *Killing form*:
$$\kappa(x,y)=\operatorname{tr}(\operatorname{ad}x\circ\operatorname{ad}y), \qquad x,y\in\mathfrak{g}.$$

The Killing form and the $\mathfrak{g}$-modules give other characterizations of semisimplicity: $\mathfrak{g}$ is semisimple if and only if $\kappa$ is nondegenerate, if and only if every $\mathfrak{g}$-module is semisimple, i.e., every $\mathfrak{g}$-submodule admits a complement which is again a $\mathfrak{g}$-submodule.
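The nondegeneracy criterion can be tested directly in the smallest interesting case. The sketch below (an illustration, not from the article) computes the Killing form of $\mathfrak{sl}_2$ in the basis $\{e,h,f\}$ and checks that it is nondegenerate:

```python
import numpy as np

# Basis of sl2: e, h, f as 2x2 matrices.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, h, f]

def bracket(a, b):
    return a @ b - b @ a

def ad_matrix(x, basis):
    """Matrix of ad x = [x, -] in the given basis of the Lie algebra."""
    mat = np.column_stack([b.ravel() for b in basis])
    cols = []
    for y in basis:
        coeffs, *_ = np.linalg.lstsq(mat, bracket(x, y).ravel(), rcond=None)
        cols.append(coeffs)
    return np.column_stack(cols)

# Killing form kappa(x, y) = tr(ad x . ad y)
kappa = np.array([[np.trace(ad_matrix(x, basis) @ ad_matrix(y, basis))
                   for y in basis] for x in basis])

print(kappa)   # kappa(h,h) = 8, kappa(e,f) = kappa(f,e) = 4, all else 0
assert abs(np.linalg.det(kappa)) > 1e-9   # nondegenerate: sl2 is semisimple
```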

When $\mathfrak{g}$ is one of the Lie algebras of matrices above, the adjoint action of the diagonal matrices is, in fact, diagonalizable. Mimicking this fact we look for subalgebras such that the adjoint action of their elements is diagonalizable, called *toral* subalgebras.

From now on assume that $\mathfrak{g}$ is also semisimple. It can be shown that toral subalgebras are abelian, and we pick a maximal one, $\mathfrak{h}$. Thus $\mathfrak{g}$ decomposes as the direct sum of the $\operatorname{ad}\mathfrak{h}$-eigenspaces:
$$\mathfrak{g}=\bigoplus_{\alpha\in\mathfrak{h}^*}\mathfrak{g}_\alpha, \qquad \mathfrak{g}_\alpha=\{x\in\mathfrak{g} : [h,x]=\alpha(h)x \text{ for all } h\in\mathfrak{h}\}.$$

As $\mathfrak{h}$ is abelian, we have that $\mathfrak{h}\subseteq\mathfrak{g}_0$; one can show that we have an equality: $\mathfrak{g}_0=\mathfrak{h}$. Thus, if we set $\Phi=\{\alpha\in\mathfrak{h}^*\setminus\{0\} : \mathfrak{g}_\alpha\neq 0\}$, then $\Phi$, a finite set called the *root system* of $\mathfrak{g}$, gives a decomposition of $\mathfrak{g}$ into $\operatorname{ad}\mathfrak{h}$-eigenspaces as follows:
$$\mathfrak{g}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Phi}\mathfrak{g}_\alpha.$$

This decomposition is compatible with the bracket,
$$[\mathfrak{g}_\alpha,\mathfrak{g}_\beta]\subseteq\mathfrak{g}_{\alpha+\beta} \qquad \text{for all } \alpha,\beta\in\mathfrak{h}^*,$$

and with the Killing form:
$$\kappa(\mathfrak{g}_\alpha,\mathfrak{g}_\beta)=0 \qquad \text{whenever } \alpha+\beta\neq 0.$$

We can derive that the restriction $\kappa|_{\mathfrak{h}\times\mathfrak{h}}$ is nondegenerate; thus it induces a symmetric nondegenerate bilinear form $(\cdot,\cdot)$ on $\mathfrak{h}^*$.
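For $\mathfrak{sl}_3$ the decomposition can be made completely explicit: the matrix units $E_{ij}$, $i\neq j$, are simultaneous eigenvectors of $\operatorname{ad}\mathfrak{h}$. A small numerical sketch (not from the article; the chosen diagonal element is arbitrary):

```python
import numpy as np
from itertools import permutations

n = 3
def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

def bracket(a, b):
    return a @ b - b @ a

t = np.array([3.0, 1.0, -4.0])   # a trace-zero diagonal, i.e., an element of h
h = np.diag(t)

roots = []
for i, j in permutations(range(n), 2):
    x = E(i, j)
    # [h, E_ij] = (t_i - t_j) E_ij : each E_ij is an eigenvector of ad h
    assert np.allclose(bracket(h, x), (t[i] - t[j]) * x)
    roots.append((i, j))

print(len(roots))   # 6 off-diagonal eigenspaces: the root system of sl3
```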

### 1.2. Root systems for Lie algebras

We may derive strong properties of the root system using the representation theory of $\mathfrak{sl}_2$; we refer to [Bou02, Hum78] for more details.

- (i)
$\mathfrak{h}^*$ is spanned by $\Phi$.

- (ii)
If $\alpha\in\Phi$, then $-\alpha\in\Phi$. Moreover, for each $\alpha\in\Phi$, $\Phi\cap\mathbb{C}\alpha=\{\pm\alpha\}$.

- (iii)
For each $\alpha\in\Phi$ the eigenspace $\mathfrak{g}_\alpha$ is one-dimensional. Moreover, $\mathfrak{s}_\alpha:=\mathfrak{g}_\alpha\oplus[\mathfrak{g}_\alpha,\mathfrak{g}_{-\alpha}]\oplus\mathfrak{g}_{-\alpha}$ is a subalgebra isomorphic to $\mathfrak{sl}_2$. Notice that $\mathfrak{s}_\alpha=\mathfrak{s}_{-\alpha}$.

- (iv)
If $\alpha,\beta\in\Phi$, then $\langle\beta,\alpha^\vee\rangle:=\frac{2(\beta,\alpha)}{(\alpha,\alpha)}\in\mathbb{Z}$ and $\beta-\langle\beta,\alpha^\vee\rangle\alpha\in\Phi$.

- (v)
Let $\alpha,\beta\in\Phi$ be such that $\beta\neq\pm\alpha$. Then there exist $r,q\in\mathbb{Z}_{\geq 0}$ such that
$$\{k\in\mathbb{Z} : \beta+k\alpha\in\Phi\}=\{-r,-r+1,\dots,q-1,q\}.$$
Moreover, $r-q=\langle\beta,\alpha^\vee\rangle$. That is, the *root string* over $\beta$ in the direction of $\alpha$ has no holes.
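Property (v) can be verified exhaustively in a small example. The sketch below (an illustration, not from the article; the coordinates for the type $B_2$ roots are one standard choice) checks, for every pair of roots, that the string is an unbroken interval and that $r-q$ equals the Cartan integer:

```python
from fractions import Fraction

# Root system of type B2 in standard coordinates (an illustrative choice).
Phi = [(1, 0), (-1, 0), (0, 1), (0, -1),
       (1, 1), (1, -1), (-1, 1), (-1, -1)]

def ip(a, b):
    return a[0]*b[0] + a[1]*b[1]

def pairing(beta, alpha):
    """<beta, alpha^vee> = 2 (beta, alpha) / (alpha, alpha)."""
    return Fraction(2 * ip(beta, alpha), ip(alpha, alpha))

for alpha in Phi:
    for beta in Phi:
        if beta in (alpha, (-alpha[0], -alpha[1])):
            continue
        ks = sorted(k for k in range(-5, 6)
                    if (beta[0] + k*alpha[0], beta[1] + k*alpha[1]) in Phi)
        assert ks == list(range(ks[0], ks[-1] + 1))   # the string has no holes
        r, q = -ks[0], ks[-1]
        assert pairing(beta, alpha) == r - q          # r - q = <beta, alpha^vee>

print("root strings in B2 verified")
```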

By (i) there exists a basis $\{\alpha_1,\dots,\alpha_n\}$ of $\mathfrak{h}^*$ contained in $\Phi$. We can check that all the coefficients of any $\beta\in\Phi$, written in terms of $\alpha_1,\dots,\alpha_n$, are rational numbers, so we may consider the $\mathbb{Q}$-linear subspace generated by $\Phi$ and take the extension to $\mathbb{R}$: we get a finite-dimensional $\mathbb{R}$-vector space $E$ which *contains* all the information and the geometry of $\Phi$.

## 2. Classical Root Systems

From the information above one may wonder whether there exists an abstract notion of root system. The answer is *yes*, and we will recall it following [Bou02]; see also [Hum78]. We can classify all finite root systems in terms of so-called finite Cartan matrices. We will also recall a way to come back from (abstract) root systems to complex Lie algebras.

### 2.1. Abstract definition

A *root system* is a pair $(E,\Phi)$, where $E$ is a finite-dimensional $\mathbb{R}$-vector space and $\Phi\subset E$ is a subset such that:

- (RS1) $\Phi$ is finite, spans $E$, and $0\notin\Phi$;
- (RS2) for each $\alpha\in\Phi$ there exists a reflection $s_\alpha$ of $E$ mapping $\alpha$ to $-\alpha$ such that $s_\alpha(\Phi)=\Phi$;
- (RS3) for all $\alpha,\beta\in\Phi$, $s_\alpha(\beta)-\beta$ is an integer multiple of $\alpha$.

In [Hum78] one also requires that $\Phi\cap\mathbb{R}\alpha=\{\pm\alpha\}$ for each $\alpha\in\Phi$. In other references, root systems with this extra property are called *reduced*.

The reflections $s_\alpha$, $\alpha\in\Phi$, are univocally determined, and there exists a symmetric nondegenerate bilinear form $(\cdot,\cdot)$ on $E$ which is moreover invariant under the $s_\alpha$ and positive definite. Now, the reflections are recovered using this form:
$$s_\alpha(x)=x-\frac{2(x,\alpha)}{(\alpha,\alpha)}\,\alpha, \qquad x\in E.$$

Also, the set $\Phi^\vee=\{\alpha^\vee : \alpha\in\Phi\}$, with $\alpha^\vee=\frac{2\alpha}{(\alpha,\alpha)}$, is a root system of $E$. There are four examples of reduced root systems in rank 2: $A_1\times A_1$, $A_2$, $B_2$, and $G_2$, with $4$, $6$, $8$, and $12$ roots, respectively. The third one is depicted in Figure 1.
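These counts can be checked computationally. The sketch below (not from the article) builds each rank-2 system as the closure of $\pm$ its two simple roots under all root reflections, working exactly in simple-root coordinates; the only input is the Gram matrix of the simple roots, for which a standard choice is assumed:

```python
from fractions import Fraction

def closure(gram):
    """Root system generated by two simple roots, in simple-root coordinates,
    as the closure of {±α1, ±α2} under all reflections s_a."""
    def ip(x, y):
        return sum(gram[i][j] * x[i] * y[j] for i in range(2) for j in range(2))
    def reflect(x, a):
        c = 2 * ip(x, a) / ip(a, a)
        return (x[0] - c * a[0], x[1] - c * a[1])
    roots = {(Fraction(1), Fraction(0)), (Fraction(0), Fraction(1))}
    roots |= {(-x, -y) for x, y in roots}
    changed = True
    while changed:
        changed = False
        for a in list(roots):
            for x in list(roots):
                y = reflect(x, a)
                if y not in roots:
                    roots.add(y)
                    changed = True
    return roots

A2 = closure([[2, -1], [-1, 2]])
B2 = closure([[2, -1], [-1, 1]])
G2 = closure([[2, -3], [-3, 6]])
print(len(A2), len(B2), len(G2))   # 6 8 12
```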

Let $\alpha,\beta\in\Phi$ be such that $\beta\neq\pm\alpha$. One may check that $\alpha-\beta\in\Phi$ if $(\alpha,\beta)>0$ (respectively, $\alpha+\beta\in\Phi$ if $(\alpha,\beta)<0$). This is the starting point, together with (RS2) and (RS3), to check that an analogue of (v) holds for (abstract) root systems.

Another key point is the existence of a *base* of a root system. It means a subset $\Delta=\{\alpha_1,\dots,\alpha_n\}\subseteq\Phi$ such that $\Delta$ is a basis of $E$ (as a vector space), and every $\beta\in\Phi$ is written, in terms of $\Delta$, as a linear combination whose coefficients are all nonnegative integers, or all nonpositive integers.

The proof of existence of bases gives the geometric flavor behind root systems. We take a vector $v\in E$ such that the hyperplane orthogonal to $v$ does not contain any root. Indeed, $v$ belongs to $E\setminus\bigcup_{\alpha\in\Phi}H_\alpha$, where $H_\alpha$ is the kernel of $(\cdot,\alpha)$, i.e., the hyperplane orthogonal to $\alpha$; the connected components of $E\setminus\bigcup_{\alpha\in\Phi}H_\alpha$ are called the *Weyl chambers*. Thus $\Phi=\Phi_v^+\sqcup\Phi_v^-$, where
$$\Phi_v^{\pm}=\{\beta\in\Phi : \pm(\beta,v)>0\}.$$

A base is made by those *indecomposable* roots in $\Phi_v^+$: those $\beta\in\Phi_v^+$ which cannot be written as a sum $\beta=\beta_1+\beta_2$ with $\beta_1,\beta_2\in\Phi_v^+$. Moreover, every base can be constructed in this way.

For example, in Figure 1 we take the green hyperplane: the positive roots are the red ones, the negative ones are the blue ones, and the indecomposable positive roots form a base.
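This construction of a base is easy to carry out by machine. A sketch (not from the article; the $B_2$ coordinates and the regular vector are illustrative choices):

```python
# Root system of type B2 in standard coordinates (an illustrative choice).
Phi = [(1, 0), (-1, 0), (0, 1), (0, -1),
       (1, 1), (1, -1), (-1, 1), (-1, -1)]

v = (2, 1)   # a regular vector: (v, alpha) != 0 for every root alpha
assert all(b[0]*v[0] + b[1]*v[1] != 0 for b in Phi)

pos = [b for b in Phi if b[0]*v[0] + b[1]*v[1] > 0]

def decomposable(b):
    """True if b = b1 + b2 for positive roots b1, b2."""
    return any((b1[0] + b2[0], b1[1] + b2[1]) == b for b1 in pos for b2 in pos)

base = sorted(b for b in pos if not decomposable(b))
print(base)   # [(0, 1), (1, -1)] -- the base attached to this Weyl chamber
```

Every other positive root is then a nonnegative integer combination of the base, e.g. $(1,0)=(1,-1)+(0,1)$.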

The *Weyl group* $W$, the group generated by the reflections $s_\alpha$, $\alpha\in\Phi$, permutes the bases (and the Weyl chambers as well), and this action is simply transitive. We check then that any root belongs to a base, and for each base $\Delta$, $W$ is generated by $\{s_\alpha : \alpha\in\Delta\}$ (we reduce the number of generators of $W$ to the rank of the root system). This leads to the study of *groups generated by reflections* and *Coxeter groups* considered in [Bou02], which became an important subject of research on its own, and remains active until now.

### 2.2. The classification

As for algebraic objects, we may ask for *irreducible* root systems: those which cannot split into two orthogonal subsets (otherwise each subset is itself a root system). Every root system $\Phi$ of $E$ decomposes uniquely as a union $\Phi=\Phi_1\cup\dots\cup\Phi_k$ of irreducible root systems $\Phi_i$, corresponding to the subspaces $E_i$ of $E$ spanned by the $\Phi_i$. Thus, in order to classify root systems, we can restrict to the irreducible ones.

Assume now that $\Phi$ is an irreducible root system of rank $n$. Set $A=(a_{ij})$ as the $n\times n$ matrix with entries
$$a_{ij}=\langle\alpha_j,\alpha_i^\vee\rangle=\frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)},$$
where $\Delta=\{\alpha_1,\dots,\alpha_n\}$ is a base. One can check that $A$ is well-defined; i.e., it does not depend on the chosen base. In addition, $A$ is indecomposable: for all $i\neq j$ there exist $i=i_1,i_2,\dots,i_t=j$ such that $a_{i_1i_2}a_{i_2i_3}\cdots a_{i_{t-1}i_t}\neq 0$. Moreover:

- (GCM1)
$a_{ii}=2$ for all $i$,

- (GCM2)
$a_{ij}=0$ if and only if $a_{ji}=0$,

- (GCM3)
$a_{ij}\in\mathbb{Z}_{\leq 0}$ for all $i\neq j$.
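Given a concrete base, the matrix $A$ and these three conditions can be checked mechanically. A sketch (not from the article; the simple roots below are one standard choice for type $B_2$):

```python
from fractions import Fraction

# Simple roots of B2 in standard coordinates (an illustrative choice).
simple = [(1, -1), (0, 1)]

def ip(a, b):
    return a[0]*b[0] + a[1]*b[1]

# a_ij = 2 (alpha_i, alpha_j) / (alpha_i, alpha_i)
A = [[Fraction(2 * ip(ai, aj), ip(ai, ai)) for aj in simple] for ai in simple]
print([[int(x) for x in row] for row in A])   # [[2, -1], [-2, 2]] -- type B2

n = len(A)
assert all(A[i][i] == 2 for i in range(n))                              # (GCM1)
assert all((A[i][j] == 0) == (A[j][i] == 0)
           for i in range(n) for j in range(n))                         # (GCM2)
assert all(A[i][j] <= 0 and A[i][j].denominator == 1
           for i in range(n) for j in range(n) if i != j)               # (GCM3)
```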

Any matrix $A$ satisfying (GCM1)–(GCM3) is called a *generalized Cartan matrix* (GCM) [Kac90]. The information of a GCM is encoded in a graph called the *Dynkin diagram*: it has $n$ vertices, labelled $1,\dots,n$, and for each pair $i\neq j$,

- if $a_{ij}a_{ji}\leq 4$, then we add $\max\{|a_{ij}|,|a_{ji}|\}$ edges between the vertices $i$ and $j$, with an arrow from $j$ to $i$ if $|a_{ij}|>|a_{ji}|$ (respectively from $i$ to $j$ if $|a_{ji}|>|a_{ij}|$); in particular, if $a_{ij}=0$ (so $a_{ji}=0$ as well) then we draw no edges between $i$ and $j$, and if $a_{ij}=a_{ji}=-1$, then we draw just a line;

- if $a_{ij}a_{ji}>4$, then we draw a thick line between $i$ and $j$, labelled with the pair $(|a_{ij}|,|a_{ji}|)$.

For example, the Dynkin diagrams of the rank-2 root systems $A_2$, $B_2$, and $G_2$ consist of two vertices joined by one, two, and three edges, respectively (with an arrow in the last two cases).

One reason to differentiate between the cases $a_{ij}a_{ji}\leq 4$ and $a_{ij}a_{ji}>4$ is that all finite and affine Dynkin diagrams satisfy the first condition, and these are probably the most studied cases. We refer to [Bou02, Hum78] for the definition of affine Dynkin diagrams, while finite ones are depicted in Figure 2, in connection with finite-dimensional complex Lie algebras.

One may define the Weyl group $W(A)$ of a GCM $A$ as the subgroup of $GL_n(\mathbb{R})$ generated by the reflections $s_i$, $s_i(x_j)=x_j-a_{ij}x_i$, where $\{x_1,\dots,x_n\}$ is the canonical basis of $\mathbb{R}^n$: if $A$ is the Cartan matrix of a Lie algebra as above, then the Weyl group of the root system is generated by these $s_i$'s. Analogously, one can define a root system attached to any GCM [Kac90].

Then one can prove that $W(A)$ is finite if and only if the root system of $A$ is finite, which is equivalent to the notion of *finite GCM*. Finite GCMs are parametrized by *finite* Dynkin diagrams, i.e., those in Figure 2.
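The finiteness of $W(A)$ can be tested by brute force for small GCMs, using the reflection formula $s_i(x_j)=x_j-a_{ij}x_i$ above. A sketch (not from the article): generate all products of the $s_i$ as integer matrices and count them.

```python
def weyl_group_order(A, cap=10**6):
    """Order of the Weyl group of a GCM A, whose generators act by
    s_i(x_j) = x_j - a_ij x_i on the canonical basis."""
    n = len(A)
    gens = []
    for i in range(n):
        # Matrix of s_i: identity with row i replaced by delta_ij - a_ij.
        S = [[1 if k == j else 0 for j in range(n)] for k in range(n)]
        for j in range(n):
            S[i][j] = (1 if i == j else 0) - A[i][j]
        gens.append(tuple(map(tuple, S)))
    def mul(P, Q):
        return tuple(tuple(sum(P[i][k]*Q[k][j] for k in range(n))
                           for j in range(n)) for i in range(n))
    group = set(gens)
    frontier = set(gens)
    while frontier and len(group) < cap:   # BFS over words in the generators
        frontier = {mul(g, s) for g in frontier for s in gens} - group
        group |= frontier
    return len(group)

A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
print(weyl_group_order(A3))   # 24: the Weyl group of type A3 is S4
```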

**Figure 2.**

Finite connected Dynkin diagrams.


Up to now we have dealt with three notions:

- (i)
Simple Lie algebras over $\mathbb{C}$,

- (ii)
Irreducible root systems,

- (iii)
Finite Cartan matrices, or the corresponding Dynkin diagrams.

We moved first from (i) to (ii), and then stated a correspondence (ii) $\leftrightarrow$ (iii). Now we need to come back to (i). We can check that $\mathfrak{sl}_{n+1}$ has Cartan matrix of type $A_n$ (see Example 1.2), while matrices of types $B_n$, $C_n$, and $D_n$ appear for orthogonal and symplectic Lie algebras. For each one of the exceptional finite Cartan matrices in Figure 2 we can construct *by hand* a simple Lie algebra with that Cartan matrix. The natural question is whether there exists a *systematic* way to build these Lie algebras. We will recall it in the next subsection, i.e., a correspondence (iii) $\to$ (i).

### 2.3. Back to Lie algebras: Kac-Moody construction

Looking at Example 1.2, the Cartan matrix of $\mathfrak{sl}_n$ can be recovered from the action of the Cartan subalgebra $\mathfrak{h}$ on eigenvectors attached to a base of the root system. In addition, the decomposition into positive and negative roots for the chosen base corresponds in this case to the upper and lower triangular matrices of $\mathfrak{sl}_n$ (recall that $\mathfrak{h}$ is spanned by the set of all the diagonal matrices in $\mathfrak{sl}_n$).

As for associative algebras, we have a notion of a Lie algebra *presented by generators and relations*, as the appropriate quotient of a *free* Lie algebra. We will attach a Lie algebra to each GCM $A$; these algebras were introduced by Serre in 1966 for finite Cartan matrices, and by Kac and Moody in two independent and simultaneous works in the late sixties; see [Kac90] and the references therein. For the sake of simplicity of the exposition we assume that $A$ is invertible.

Let $\widetilde{\mathfrak{g}}(A)$ be the Lie algebra presented by generators $e_i$, $f_i$, $h_i$, $1\leq i\leq n$, and relations
$$[h_i,h_j]=0,\quad [h_i,e_j]=a_{ij}e_j,\quad [h_i,f_j]=-a_{ij}f_j,\quad [e_i,f_j]=\delta_{ij}h_i. \tag{1}$$

Let $\mathfrak{h}$ be the subspace spanned by $h_1,\dots,h_n$, and $\mathfrak{n}^+$, respectively $\mathfrak{n}^-$, the subalgebra generated by $e_1,\dots,e_n$, respectively $f_1,\dots,f_n$. We have the following facts:

- (a)
$\mathfrak{n}^{\pm}$ is a free Lie algebra in $n$ generators.

- (b)
As a vector space, $\widetilde{\mathfrak{g}}(A)=\mathfrak{n}^-\oplus\mathfrak{h}\oplus\mathfrak{n}^+$.

- (c)
The adjoint action of $\mathfrak{h}$ on $\widetilde{\mathfrak{g}}(A)$ is diagonalizable.

- (d)
Among all the ideals of $\widetilde{\mathfrak{g}}(A)$ intersecting $\mathfrak{h}$ trivially, there exists a maximal one $\mathfrak{i}$, which satisfies $\mathfrak{i}=(\mathfrak{i}\cap\mathfrak{n}^+)\oplus(\mathfrak{i}\cap\mathfrak{n}^-)$.
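The defining relations (1) can be verified concretely for the Chevalley generators of $\mathfrak{sl}_3$, where $A$ is the Cartan matrix of type $A_2$. A numerical sketch (an illustration, not from the article):

```python
import numpy as np

def E(i, j, n=3):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

def br(a, b):
    return a @ b - b @ a

# Chevalley generators of sl3 and its Cartan matrix (type A2).
e = [E(0, 1), E(1, 2)]
f = [E(1, 0), E(2, 1)]
h = [E(0, 0) - E(1, 1), E(1, 1) - E(2, 2)]
A = [[2, -1], [-1, 2]]

for i in range(2):
    for j in range(2):
        assert np.allclose(br(h[i], h[j]), 0)                      # [h_i, h_j] = 0
        assert np.allclose(br(h[i], e[j]), A[i][j] * e[j])         # [h_i, e_j] = a_ij e_j
        assert np.allclose(br(h[i], f[j]), -A[i][j] * f[j])        # [h_i, f_j] = -a_ij f_j
        assert np.allclose(br(e[i], f[j]), h[i] if i == j else 0)  # [e_i, f_j] = d_ij h_i
print("relations (1) hold in sl3")
```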