The Symmetric Group Through a Dual Perspective
Communicated by Notices Associate Editor Emilie Purvine
Mathematicians began the study of representation theory over a hundred years ago. Since then it has become a centerpiece technique in fields such as algebra, topology, number theory, geometry, mathematical physics, quantum information theory, and complexity theory. A premise of representation theory is that we can study groups and algebras by how they act on vector spaces.
In this article we take this a step further: to study the actions of a group or algebra, we study what commutes with the action. The collection of all linear transformations that commute with the action is called the commutant or the centralizer. The centralizer is itself an algebra, called the Schur–Weyl dual.
A reason why this has become such an important technique is that it can lead to beautiful connections between seemingly different areas of mathematics. One example of this is the discovery of the Jones polynomial, a one-variable invariant of oriented knots and links [6, 7]. The polynomial was discovered while studying linear functionals on the Temperley–Lieb algebra, an example of a centralizer algebra. Following this work, Jones received the Fields Medal for discovering deep connections between representation theory, topology, and theoretical physics [7].
In this article we will present how centralizer algebras can be used to study the representation theory of the symmetric group. Although we know a lot about its representation theory, there are still open problems that are out of reach such as the Kronecker problem and the restriction problem that we discuss below. Our approach to these problems has been to use centralizer algebras to develop combinatorial tools to study them.
All of the vector spaces in this article are over the complex numbers, and $GL_n(\mathbb{C})$ will denote the general linear group of $n \times n$ invertible matrices with complex entries.
What is a representation?
Representation theory is the art of studying abstract algebraic objects, such as groups and algebras, by understanding how they act on vector spaces.
When acting on a vector space, each element in the algebra or group is represented by a linear transformation (or more concretely, a matrix). The set of resulting linear transformations is a representation of the algebra or group. Moreover, multiplying two elements in the algebra or group corresponds to composition of the corresponding linear transformations. It is common to refer to the vector space together with the action as the representation, or, if the action is understood, to refer to just the vector space as the representation.
In this article we are interested in the representation theory of the symmetric group $S_n$, the group of bijections from the set $\{1, 2, \ldots, n\}$ to itself. There are many ways to represent the elements of $S_n$; in this article we will think of them in cycle notation or as diagrams. For example, a permutation written in cycle notation corresponds to a diagram such as the one in Figure 1.
We will use the following example of a representation to illustrate the ideas introduced in this article. Consider
$$S_3 = \{e, (1\,2), (1\,3), (2\,3), (1\,2\,3), (1\,3\,2)\},$$
where $e$ is the identity and the other elements are written using cycle notation (with one-cycles omitted). The symmetric group $S_3$ acts on the vector space $\mathbb{C}^3$. If we choose the basis $\{e_1, e_2, e_3\}$ of standard column vectors for $\mathbb{C}^3$, then an element $\sigma \in S_3$ acts by $\sigma \cdot e_i = e_{\sigma(i)}$. As an example, $(1\,2\,3) \cdot e_1 = e_2$, $(1\,2\,3) \cdot e_2 = e_3$, and $(1\,2\,3) \cdot e_3 = e_1$. Then each $\sigma \in S_3$ is represented by a permutation matrix:
We will refer to this representation as the permutation representation of $S_3$.
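The permutation representation just described can be checked in a few lines of Python; a minimal sketch (the function names are our own), verifying that multiplying permutations corresponds to multiplying their matrices:

```python
import numpy as np

# The permutation representation of S_3: each permutation sigma sends the
# basis vector e_i to e_{sigma(i)}. Permutations are encoded in one-line
# notation, e.g. the transposition (1 2) is (2, 1, 3).
def perm_matrix(sigma):
    n = len(sigma)
    M = np.zeros((n, n), dtype=int)
    for i, j in enumerate(sigma):
        M[j - 1, i] = 1      # column i is e_{sigma(i+1)}
    return M

def compose(sigma, tau):
    # (sigma tau)(i) = sigma(tau(i)) -- apply tau first, then sigma
    return tuple(sigma[t - 1] for t in tau)

s = (2, 1, 3)   # the transposition (1 2)
t = (2, 3, 1)   # the 3-cycle (1 2 3)

# Multiplying group elements corresponds to multiplying their matrices.
assert (perm_matrix(compose(s, t)) == perm_matrix(s) @ perm_matrix(t)).all()
```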
To define a representation, we need a vector space and an action of the group or algebra on the vector space. Another way to think of a representation is as a group homomorphism to the general linear group, $\rho : G \to GL_n(\mathbb{C})$. For example, the permutation representation of $S_3$ is the homomorphism shown above.
Each finite group or finite-dimensional algebra has infinitely many matrix representations, but we only need a finite number of them to express all of the representations. A common theme in mathematics is to identify the basic building blocks in the theory. For example, in number theory the building blocks are the primes. Applying this idea to representation theory leads to the concept of an irreducible representation, which is a representation that does not contain a proper nonzero subspace that is closed under the action. For instance, the permutation representation mentioned above has a subspace, the span of $e_1 + e_2 + e_3$, which is closed under the action, and hence the permutation representation is not irreducible. This subspace is itself an irreducible representation of $S_3$.
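The invariance of this subspace is easy to verify directly: every permutation matrix fixes the all-ones vector, so its span is closed under the action. A quick Python check over all of $S_3$ (helper names are our own):

```python
import itertools
import numpy as np

def perm_matrix(sigma):
    # column i of the matrix is e_{sigma(i)} (one-line notation, 1-based)
    n = len(sigma)
    M = np.zeros((n, n), dtype=int)
    for i, j in enumerate(sigma):
        M[j - 1, i] = 1
    return M

v = np.array([1, 1, 1])
# Every permutation matrix fixes the span of (1, 1, 1), so this subspace is
# closed under the action and the permutation representation is reducible.
assert all((perm_matrix(s) @ v == v).all()
           for s in itertools.permutations((1, 2, 3)))
```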
In this article, we will restrict our attention to semisimple representations. Just as composite numbers can be written using primes, semisimple representations can be decomposed into direct sums of irreducible ones. Many open problems in combinatorial representation theory ask for algorithms for decomposing representations into irreducible ones.
Representations of the symmetric group
A nontrivial and beautiful fact is that the irreducible representations of a finite group are in bijection with the conjugacy classes of that group. In the case of the symmetric group $S_n$, the conjugacy classes are determined by cycle type (the lengths of the cycles in cycle notation). Since the cycle types of permutations in $S_n$ are exactly the partitions of $n$, the irreducible representations of $S_n$ are also indexed by partitions of $n$.
Recall that a partition $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_\ell)$ of $n$ is a weakly decreasing sequence of nonnegative integers that add up to $n$. We use $|\lambda|$ for the sum $\lambda_1 + \lambda_2 + \cdots + \lambda_\ell$. We will think of a partition as a Young diagram, a left-justified array of boxes with $\lambda_i$ boxes in the $i$-th row. We will use the English convention, in which we write the $\lambda_1$ boxes in the top row, $\lambda_2$ in the second row, and so on.
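The partitions of $n$ are easy to enumerate; a short recursive sketch that lists them as weakly decreasing tuples:

```python
def partitions(n, largest=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    # choose the first (largest) part, then partition the remainder
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

assert list(partitions(3)) == [(3,), (2, 1), (1, 1, 1)]
```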
For instance, $S_3$ has three irreducible representations, which are indexed by the partitions of 3: $(3)$, $(2,1)$, and $(1,1,1)$.
We use $S^\lambda$ to denote the irreducible representation indexed by $\lambda$. One way to describe $S^\lambda$ is by giving a basis and the action of $S_n$ on this basis. Every representation of $S_n$ can be written as a direct sum of irreducible ones. This fact is known as Maschke's theorem and is a property that holds for complex representations of finite groups. For example, the permutation representation of $S_3$ is isomorphic ($\cong$) to the direct sum of two irreducible representations, namely
$$\mathbb{C}^3 \cong S^{(3)} \oplus S^{(2,1)},$$
where $S^{(3)}$ is one-dimensional and $S^{(2,1)}$ is two-dimensional.
One of the downsides of thinking about representations in terms of matrices is that the matrices depend on the basis chosen for the vector space. Changing basis produces an isomorphic representation. Fortunately, complex representations are determined up to isomorphism by their characters. The character of a representation $\rho$ is the function $\chi : G \to \mathbb{C}$ defined by $\chi(\sigma) = \operatorname{trace}(\rho(\sigma))$.
Recall that in linear algebra the trace of a matrix is the sum of its diagonal entries. Using properties of the trace function, we can show that two conjugate elements have the same trace and that isomorphic representations have the same character. Therefore, the essence of the representation of a group can be stored in a vector with entries equal to the trace at a representative of a conjugacy class.
For example, the permutation representation of $S_3$ has
$$\chi(e) = 3, \qquad \chi((1\,2)) = 1, \qquad \chi((1\,2\,3)) = 0.$$
Therefore, we can think of its character as the vector $(3, 1, 0)$, with one value for each conjugacy class.
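For the permutation representation, the trace of a permutation matrix simply counts fixed points, so the character vector can be computed directly; a sketch (the representatives chosen are our own):

```python
# Character of the permutation representation of S_n: the trace of a
# permutation matrix counts the fixed points of the permutation.
def fixed_points(sigma):
    return sum(1 for i, j in enumerate(sigma, start=1) if i == j)

# One representative per conjugacy class of S_3: e, (1 2), (1 2 3),
# in one-line notation.
reps = [(1, 2, 3), (2, 1, 3), (2, 3, 1)]
print([fixed_points(s) for s in reps])  # the character vector [3, 1, 0]
```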
The irreducible complex characters of a finite group can be stored compactly in a square matrix where each row corresponds to an irreducible representation and each column corresponds to a conjugacy class, often indexed by a conjugacy class representative. For instance, the character table of $S_3$ is given in Figure 2.
Writing a representation as a direct sum of irreducibles is the same as writing its character as a sum of irreducible characters. In our running example, the character of the permutation representation is the sum of two irreducible characters:
$$(3, 1, 0) = (1, 1, 1) + (2, 0, -1).$$
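Such a decomposition can be recovered mechanically from the character table using the standard inner product on class functions; a sketch for $S_3$ (the class sizes $1, 3, 2$ correspond to the identity, the transpositions, and the 3-cycles):

```python
# Decompose the permutation character (3, 1, 0) of S_3 into irreducibles via
# <chi, psi> = (1/|G|) * sum over classes of |class| * chi * psi.
# Columns: identity, transpositions, 3-cycles; class sizes 1, 3, 2; |S_3| = 6.
class_sizes = [1, 3, 2]
irreducibles = {
    (3,): [1, 1, 1],        # trivial
    (1, 1, 1): [1, -1, 1],  # sign
    (2, 1): [2, 0, -1],     # standard
}

def multiplicity(chi, psi):
    return sum(z * a * b for z, a, b in zip(class_sizes, chi, psi)) // 6

perm_char = [3, 1, 0]
decomp = {lam: multiplicity(perm_char, chi) for lam, chi in irreducibles.items()}
print(decomp)  # {(3,): 1, (1, 1, 1): 0, (2, 1): 1}
```

The multiplicities confirm that the permutation representation contains the trivial and standard representations once each, and the sign representation not at all.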
Characters contain all essential information about representations; in fact, Frobenius developed the representation theory of finite groups completely in terms of their characters.
The Kronecker product
The tensor product of two representations of any group is also a representation. In the special case of the symmetric group, given two irreducible representations $S^\lambda$ and $S^\mu$, with $\lambda$ and $\mu$ both partitions of $n$, the tensor product $S^\lambda \otimes S^\mu$ has as underlying vector space the tensor product of the vector spaces for $S^\lambda$ and $S^\mu$. If $v \in S^\lambda$ and $w \in S^\mu$, then $\sigma \in S_n$ acts diagonally, i.e., $\sigma \cdot (v \otimes w) = (\sigma \cdot v) \otimes (\sigma \cdot w)$.
The character of $S^\lambda \otimes S^\mu$, written $\chi^\lambda \otimes \chi^\mu$, is the pointwise product of the characters of $S^\lambda$ and $S^\mu$; i.e., for $\sigma \in S_n$,
$$(\chi^\lambda \otimes \chi^\mu)(\sigma) = \chi^\lambda(\sigma)\,\chi^\mu(\sigma).$$
For example, using character vectors with $n = 3$ (see the character table for $S_3$ in Figure 2),
$$\chi^{(2,1)} \otimes \chi^{(2,1)} = (2, 0, -1) \cdot (2, 0, -1) = (4, 0, 1).$$
An interesting challenge is to write a tensor product such as $\chi^{(2,1)} \otimes \chi^{(2,1)}$ as a sum of irreducible characters. In this case, by playing around with the character table in Figure 2, we can see that
$$\chi^{(2,1)} \otimes \chi^{(2,1)} = \chi^{(3)} + \chi^{(1,1,1)} + \chi^{(2,1)}.$$
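Instead of playing around by hand, the same inner-product computation used for the permutation character decomposes the Kronecker product; a sketch for $S_3$:

```python
# Kronecker product of S_3 characters: multiply pointwise, then decompose.
# Columns: identity, transpositions, 3-cycles; class sizes 1, 3, 2; |S_3| = 6.
class_sizes = [1, 3, 2]
irr = {(3,): [1, 1, 1], (1, 1, 1): [1, -1, 1], (2, 1): [2, 0, -1]}

product = [a * b for a, b in zip(irr[(2, 1)], irr[(2, 1)])]
assert product == [4, 0, 1]       # chi^(2,1) tensor chi^(2,1)

def multiplicity(chi, psi):
    return sum(z * a * b for z, a, b in zip(class_sizes, chi, psi)) // 6

print({lam: multiplicity(product, chi) for lam, chi in irr.items()})
# {(3,): 1, (1, 1, 1): 1, (2, 1): 1} -- each irreducible appears once
```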
In general, the coefficients of the irreducible characters are nonnegative integers $g_{\lambda,\mu}^{\nu}$, called Kronecker coefficients, which describe the number of times that the irreducible character $\chi^\nu$ occurs in the decomposition of $\chi^\lambda \otimes \chi^\mu$ when written as a sum of irreducibles:
$$\chi^\lambda \otimes \chi^\mu = \sum_{\nu \vdash n} g_{\lambda,\mu}^{\nu}\, \chi^\nu.$$
In combinatorial representation theory we are interested in finding combinatorial algorithms to compute the coefficients and tie them to an enumerable set of objects. Then we use this set to deduce properties of the coefficients. The following is a well-known open problem in this area:
The Kronecker problem
Find a set of objects depending only on $\lambda$, $\mu$, and $\nu$ with cardinality $g_{\lambda,\mu}^{\nu}$. We call this a combinatorial interpretation.
This problem has motivated decades of research since the early 1900s. Most recently this is due to deep connections with quantum information theory [3] and the central role it plays within Geometric Complexity Theory [11], an approach that seeks to settle the celebrated P versus NP problem, one of the seven Millennium Prize Problems set by the Clay Mathematics Institute.
Why a combinatorial interpretation?
Combinatorial interpretations of multiplicities often lead to the discovery of new properties, a better understanding, and in some cases to proofs of longstanding open problems. For example, Knutson and Tao found a combinatorial model for the multiplicities occurring in the tensor product of representations of the general linear group. They used their model to prove the saturation property of these coefficients, and this led to a proof of Horn's conjecture from 1962 characterizing the spectrum of the sum of two Hermitian matrices [9].
Stability of Kronecker coefficients
The Kronecker product of symmetric group representations satisfies a stability property first discovered by Murnaghan [2, 12]. Murnaghan observed that for sufficiently large $n$, the decomposition of $\chi^\lambda \otimes \chi^\mu$ depends only on the parts of $\lambda$ and $\mu$ after the first, and not on $n$. For example, we always get the following decomposition when $n \geq 4$, $\lambda = (n-1, 1)$, and $\mu = (n-1, 1)$:
$$\chi^{(n-1,1)} \otimes \chi^{(n-1,1)} = \chi^{(n)} + \chi^{(n-1,1)} + \chi^{(n-2,2)} + \chi^{(n-2,1,1)}.$$
In general, we get nonnegative integer coefficients $\overline{g}_{\lambda,\mu}^{\,\nu}$ that depend on three partitions $\lambda$, $\mu$, and $\nu$. These coefficients are called reduced (or stable) Kronecker coefficients.
A duality between $GL_n(\mathbb{C})$ and $S_k$
In general, a representation of $GL_n(\mathbb{C})$ is a homomorphism $\rho : GL_n(\mathbb{C}) \to GL_m(\mathbb{C})$. However, the representation theory of $GL_n(\mathbb{C})$ can get pretty wild. In algebraic combinatorics, we often restrict our attention to polynomial representations. This means that the entries of the matrix $\rho(g)$ are polynomials in the entries of $g$. The irreducible polynomial representations of $GL_n(\mathbb{C})$ are indexed by partitions with at most $n$ parts. The polynomial representations of $GL_n(\mathbb{C})$ were first studied by Issai Schur in his 1901 thesis under the supervision of Frobenius.
By fixing a basis of a three-dimensional vector space, an example of a polynomial representation is given by the following matrix:
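One standard example of this phenomenon (a sketch, not necessarily the matrix displayed in the article) is the action of $GL_2(\mathbb{C})$ on the symmetric square $\operatorname{Sym}^2(\mathbb{C}^2)$: in the basis $\{e_1 e_1,\, e_1 e_2,\, e_2 e_2\}$, every entry of the resulting $3 \times 3$ matrix is a polynomial in the entries of $g$, and the assignment is a homomorphism:

```python
import numpy as np

# Polynomial representation sketch: GL_2 acting on Sym^2(C^2) in the basis
# {e1*e1, e1*e2, e2*e2}. Every entry of rho(g) is a polynomial in the
# entries a, b, c, d of g.
def rho(g):
    a, b = g[0]
    c, d = g[1]
    return np.array([
        [a * a,     a * b,         b * b],
        [2 * a * c, a * d + b * c, 2 * b * d],
        [c * c,     c * d,         d * d],
    ])

g = np.array([[1.0, 2.0], [3.0, 4.0]])
h = np.array([[0.0, 1.0], [1.0, 1.0]])
# rho is a homomorphism: rho(g h) = rho(g) rho(h)
assert np.allclose(rho(g @ h), rho(g) @ rho(h))
```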
In 1927, Schur reformulated his thesis results in what today is known as Schur–Weyl duality. This duality defines a correspondence between irreducible representations of the symmetric group $S_k$ and irreducible, homogeneous, polynomial representations of $GL_n(\mathbb{C})$ of degree $k$. Letting $V = \mathbb{C}^n$, $GL_n(\mathbb{C})$ acts diagonally on $V^{\otimes k}$ (the $k$-fold tensor product of $V$); that is, for $g \in GL_n(\mathbb{C})$ and $v_1 \otimes v_2 \otimes \cdots \otimes v_k$ in $V^{\otimes k}$,
$$g \cdot (v_1 \otimes v_2 \otimes \cdots \otimes v_k) = g v_1 \otimes g v_2 \otimes \cdots \otimes g v_k.$$
Concretely, $g v_i$ is the product of the matrix $g$ with the column vector $v_i$. The symmetric group $S_k$ also has a right action on $V^{\otimes k}$ by permuting the tensor factors. For example, for $\sigma \in S_k$,
$$(v_1 \otimes v_2 \otimes \cdots \otimes v_k) \cdot \sigma = v_{\sigma(1)} \otimes v_{\sigma(2)} \otimes \cdots \otimes v_{\sigma(k)},$$
which can be visualized as in Figure 3.
The basic observation that Schur made is that the diagonal action of $GL_n(\mathbb{C})$ and the permutation action of $S_k$ commute. This implies there is a well-defined action of the direct product $GL_n(\mathbb{C}) \times S_k$ on $V^{\otimes k}$. When we decompose $V^{\otimes k}$ in terms of irreducible representations of $GL_n(\mathbb{C}) \times S_k$, we get
$$V^{\otimes k} \cong \bigoplus_{\lambda} V^{\lambda} \otimes S^{\lambda}, \tag{1}$$
where $V^{\lambda}$ is an irreducible, homogeneous, polynomial representation of $GL_n(\mathbb{C})$ and $\lambda$ runs over all partitions of $k$ with at most $n$ parts. This gives a correspondence between representations of $S_k$ and polynomial representations of $GL_n(\mathbb{C})$.
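The commuting of the two actions can be checked numerically; a sketch for $n = 2$ and $k = 3$ (helper names are our own), where the diagonal action is the Kronecker product $g \otimes g \otimes g$:

```python
import itertools
import numpy as np

# Numerical check of the starting point of Schur-Weyl duality for n = 2,
# k = 3: the diagonal action g ⊗ g ⊗ g of GL_2 on V^{⊗3} commutes with the
# operators permuting the tensor factors.
n, k = 2, 3
rng = np.random.default_rng(0)
g = rng.standard_normal((n, n))
G = np.kron(np.kron(g, g), g)          # diagonal action on V^{⊗k}

def factor_permutation(sigma):
    """Matrix sending e_{i_1}⊗...⊗e_{i_k} to e_{i_{sigma(1)}}⊗...⊗e_{i_{sigma(k)}}."""
    dim = n ** k
    P = np.zeros((dim, dim))
    for idx in itertools.product(range(n), repeat=k):
        src = sum(i * n ** (k - 1 - pos) for pos, i in enumerate(idx))
        permuted = tuple(idx[s - 1] for s in sigma)
        dst = sum(i * n ** (k - 1 - pos) for pos, i in enumerate(permuted))
        P[dst, src] = 1
    return P

for sigma in itertools.permutations(range(1, k + 1)):
    P = factor_permutation(sigma)
    assert np.allclose(G @ P, P @ G)   # the two actions commute
```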
Consider the case when $n = 3$ and $k = 3$. Combinatorics can help to visualize $V^{\otimes 3}$ and how it decomposes following Equation 1. Figure 4 represents basis elements of $V^{\otimes 3}$ as points in three-dimensional space and organizes them so that the irreducible components can be seen. The blue points represent $V^{(3)}$, the red and the green points together represent the two copies of $V^{(2,1)}$, and the yellow points represent $V^{(1,1,1)}$. The Robinson–Schensted algorithm [16] gives a way of making this assignment in general.
A polynomial in $n$ commuting variables $x_1, \ldots, x_n$ is symmetric if any permutation of the variables leaves the polynomial invariant. For every polynomial representation of $GL_n(\mathbb{C})$ there exists a symmetric polynomial $f$ such that if $g \in GL_n(\mathbb{C})$ has eigenvalues $\theta_1, \ldots, \theta_n$, then the character value at $g$ is $f(\theta_1, \ldots, \theta_n)$. In the previous section we saw that for every partition $\lambda$ of $k$ with at most $n$ parts, there exists an irreducible polynomial representation of $GL_n(\mathbb{C})$, which we refer to as $V^{\lambda}$. Schur showed that the character values corresponding to $V^{\lambda}$ are obtained by evaluations of a polynomial $s_\lambda(x_1, \ldots, x_n)$, which is constructed combinatorially as follows:
In the boxes of the Young diagram of $\lambda$, insert numbers from $\{1, \ldots, n\}$ so that the numbers increase weakly along each row from left to right and strictly from top to bottom in each column. Such a filling is called a semistandard Young tableau (SSYT for short).
For example, if $\lambda = (2,1)$, then its Young diagram has two boxes in the first row and one box in the second row. If $n = 3$, these are the possible tableaux:
For each tableau $T$ in part (1), define a monomial $x^T = x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$, where $a_i$ is the number of times that $i$ occurs in $T$. For example, the monomials for the eight SSYT of shape $(2,1)$ with entries in $\{1, 2, 3\}$ are $x_1^2 x_2$, $x_1 x_2^2$, $x_1^2 x_3$, $x_1 x_3^2$, $x_2^2 x_3$, $x_2 x_3^2$, and $x_1 x_2 x_3$, the last occurring twice.
The Schur polynomial $s_\lambda(x_1, \ldots, x_n)$ is defined by summing all of the monomials obtained this way:
$$s_\lambda(x_1, \ldots, x_n) = \sum_{T} x^T,$$
where the sum is over all SSYT $T$ of shape $\lambda$ constructed using the numbers $1, \ldots, n$.
In the case of our running example, we get
$$s_{(2,1)}(x_1, x_2, x_3) = x_1^2 x_2 + x_1 x_2^2 + x_1^2 x_3 + x_1 x_3^2 + x_2^2 x_3 + x_2 x_3^2 + 2 x_1 x_2 x_3.$$
The interested reader may check that this polynomial is symmetric: permuting the indices $1$, $2$, and $3$ in any way gives the same polynomial. In addition, one may verify by listing the tableaux of shape $(1,1,1)$ that $s_{(1,1,1)}(x_1, x_2, x_3) = x_1 x_2 x_3$.
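The two-step construction above can be carried out by brute force; a sketch (helper names are our own) that enumerates SSYT and assembles the Schur polynomial as a map from exponent vectors to coefficients:

```python
from collections import Counter
from itertools import product

# Brute-force enumeration of semistandard Young tableaux: rows weakly
# increase left to right, columns strictly increase top to bottom.
def ssyt(shape, n):
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    for values in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            yield T

def schur(shape, n):
    # Schur polynomial as a Counter: exponent vector (a_1,...,a_n) -> coefficient
    poly = Counter()
    for T in ssyt(shape, n):
        exponents = [0] * n
        for v in T.values():
            exponents[v - 1] += 1
        poly[tuple(exponents)] += 1
    return poly

s21 = schur((2, 1), 3)
assert sum(s21.values()) == 8    # eight SSYT of shape (2,1) with entries <= 3
assert s21[(1, 1, 1)] == 2       # the monomial x1*x2*x3 appears twice
assert s21[(2, 1, 0)] == s21[(0, 1, 2)] == 1   # consistent with symmetry
```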
If $g \in GL_n(\mathbb{C})$ has eigenvalues $\theta_1, \ldots, \theta_n$, then the character value for the representation $V^{\lambda}$, when acted on by $g$, is obtained by substituting $\theta_i$ for $x_i$, for all $i$, in $s_\lambda(x_1, \ldots, x_n)$. The number $s_\lambda(\theta_1, \ldots, \theta_n)$ is the trace of the matrix representing $g$ when $g$ acts on a basis of $V^{\lambda}$. For example, if $g \in GL_3(\mathbb{C})$ has eigenvalues $\theta_1, \theta_2, \theta_3$, then setting $x_1 = \theta_1$, $x_2 = \theta_2$, and $x_3 = \theta_3$ in $s_{(2,1)}$ gives the character of the representation $V^{(2,1)}$ at the matrix $g$. In this case, $s_{(2,1)}(\theta_1, \theta_2, \theta_3)$ is the character value.
Any polynomial representation of $GL_n(\mathbb{C})$ is a representation of any subgroup of $GL_n(\mathbb{C})$. In particular, the symmetric group $S_n$, thought of as the group of $n \times n$ permutation matrices, is a subgroup of $GL_n(\mathbb{C})$. Therefore, for any partition $\lambda$ with at most $n$ parts, $V^{\lambda}$ is a representation of $S_n$. We write $\operatorname{Res} V^{\lambda}$ for this restricted representation. The following is a well-known open problem; for more details and references see [13].
The Restriction Problem: Given an irreducible polynomial representation $V^{\lambda}$ of $GL_n(\mathbb{C})$, give a combinatorial algorithm to compute the coefficients $r_{\lambda,\mu}$ that occur when $V^{\lambda}$ is restricted to the symmetric group in the equation
$$\chi(\operatorname{Res} V^{\lambda}) = \sum_{\mu \vdash n} r_{\lambda,\mu}\, \chi^{\mu}.$$
We can obtain the character of the restricted representation by evaluating $s_\lambda$ only at eigenvalues of permutation matrices.
For example, if $n = 3$, we can restrict $V^{(2,1)}$, with character $s_{(2,1)}(x_1, x_2, x_3)$, to $S_3$. To get the character, for each conjugacy class of $S_3$ choose a representative and compute its eigenvalues. The identity has eigenvalues $1, 1, 1$; a two-cycle has eigenvalues $1, -1, 1$; and $1, \omega, \omega^2$ are the eigenvalues for a three-cycle, where $\omega$ is a primitive third root of unity. Then evaluate: $s_{(2,1)}(1, 1, 1) = 8$, $s_{(2,1)}(1, -1, 1) = 0$, and $s_{(2,1)}(1, \omega, \omega^2) = -1$. Thus, again using the character table in Figure 2, the character of $\operatorname{Res} V^{(2,1)}$ is the vector $(8, 0, -1)$. We can see that $(8, 0, -1) = (1, 1, 1) + (1, -1, 1) + 3\,(2, 0, -1)$, so $r_{(2,1),(3)} = r_{(2,1),(1,1,1)} = 1$ and $r_{(2,1),(2,1)} = 3$.
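Taking $\lambda = (2,1)$ and $n = 3$ as an illustration, the whole computation fits in a short script (a sketch, with helper names our own): evaluate the Schur polynomial at the eigenvalues of one permutation matrix per conjugacy class, then decompose against the $S_3$ character table.

```python
from itertools import product

# Enumerate SSYT of a given shape with entries in {1,...,n} by brute force.
def ssyt(shape, n):
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    for values in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            yield T

def schur_eval(shape, xs):
    # s_lambda evaluated at the point xs: one monomial per tableau
    total = 0
    for T in ssyt(shape, len(xs)):
        term = 1
        for v in T.values():
            term *= xs[v - 1]
        total += term
    return total

w = complex(-0.5, 3 ** 0.5 / 2)                 # primitive third root of unity
eigs = [(1, 1, 1), (1, -1, 1), (1, w, w * w)]   # identity, 2-cycle, 3-cycle
char = [int(round(complex(schur_eval((2, 1), xs)).real)) for xs in eigs]
assert char == [8, 0, -1]

# Decompose against the S_3 character table (class sizes 1, 3, 2; |S_3| = 6).
class_sizes = [1, 3, 2]
irr = {(3,): [1, 1, 1], (1, 1, 1): [1, -1, 1], (2, 1): [2, 0, -1]}
mults = {lam: sum(z * a * b for z, a, b in zip(class_sizes, char, chi)) // 6
         for lam, chi in irr.items()}
print(mults)  # {(3,): 1, (1, 1, 1): 1, (2, 1): 3}
```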
A dual approach to restriction
Schur used the diagonal action of on