Maximal subgroups of free idempotent generated semigroups over the full linear monoid

We show that the rank r component of the free idempotent generated semigroup of the biordered set of the full linear monoid of n x n matrices over a division ring Q has maximal subgroup isomorphic to the general linear group GL_r(Q), where n and r are positive integers with r < n/3.


Introduction
The full linear monoid of all n × n matrices over a field (or, more generally, a division ring) is one of the most natural and well-studied semigroups. This monoid plays a role in semigroup theory analogous to that of the general linear group in group theory, and the study of linear semigroups [28] is important in a range of areas such as the representation theory of semigroups [1], [5, Chapter 5], the Putcha-Renner theory of linear algebraic monoids (monoids closed in the Zariski topology) [30,35,37], and the theory of finite monoids of Lie type [29,31,32,33].
The full linear monoid M_n(Q) (where Q is an arbitrary division ring) is an example of a so-called (von Neumann) regular semigroup. In 1979 Nambooripad published his foundational paper [26] on the structure of regular semigroups, in which he made the fundamental observation that the set of idempotents E(S) of an arbitrary semigroup carries a certain abstract structure of a so-called biordered set (or regular biordered set in the case of regular semigroups). He provided an axiomatic characterisation of (regular) biordered sets in his paper, and later Easdown extended this to arbitrary (non-regular) semigroups [7], showing that each abstract biordered set is in fact the biordered set of a suitable semigroup. Putcha's theory of monoids of Lie type shows that one can view the biordered set of idempotents of a reductive algebraic monoid as a generalised building [30], in the sense of Tits. Thus, in the context of reductive algebraic monoids, a natural geometric structure is carried by the biordered set of idempotents. We shall not need the formal definition of a biordered set here; for more details of the theory of abstract biordered sets we refer the reader to [18].
The study of biordered sets of idempotents of semigroups is closely related to the study of idempotent generated semigroups. Here a semigroup is said to be idempotent generated if every element is expressible as a product of idempotents of the semigroup. Such semigroups are in abundance in semigroup theory. For instance, every non-invertible matrix of M_n(Q) is expressible as a product of idempotent matrices [9,23], and the same result is true for the full transformation semigroup of all maps from a finite set to itself [20]. More recently, in a significant extension of Erdos's result [9], Putcha [34] gave necessary and sufficient conditions for a reductive linear algebraic monoid to have the property that every non-unit is a product of idempotents. Idempotent generated semigroups have received considerable attention in the literature, in part because of the large number of semigroups that occur in nature that have this property, and also because of the universal property that they possess: every semigroup embeds into an idempotent generated semigroup, and if the semigroup is countable (respectively finite) it can be embedded in a countable (respectively finite) semigroup generated by 3 idempotents. For a fixed abstractly defined biordered set E, the collection of all semigroups whose biordered set of idempotents is (biorder) isomorphic to E forms a category when one restricts morphisms to those that are one-to-one when restricted to the set of idempotents. There is an initial object in this category, called the free idempotent generated semigroup over E and denoted IG(E), which thus maps onto every idempotent generated semigroup with biordered set E via a morphism that is one-to-one on idempotents. Clearly an important step towards understanding the class of semigroups with fixed biordered set of idempotents E is to study the free objects IG(E). For semigroup-theoretic reasons, much of the structure of IG(E) comes down to understanding the structure of its maximal subgroups.
Until recently, very little was known about maximal subgroups of free idempotent generated semigroups. In fact, in all known cases all such maximal subgroups had turned out to be free groups, and in [25] it was conjectured that this would always be the case. However, in 2009 Brittenham, Margolis and Meakin [2] gave a counterexample to this conjecture by showing that the free abelian group of rank 2 arises as a maximal subgroup of a free idempotent generated semigroup. The proof in [2] makes use of new topological tools introduced for the study of maximal subgroups of IG(E). In this theory a 2-complex, called the Graham-Houghton 2-complex GH(E), is associated in a natural way to a regular biordered set E (based on work of Nambooripad [26], Graham [12] and Houghton [19]), and the maximal subgroups of IG(E) are the fundamental groups of the connected components of GH(E). The 2-cells of GH(E) correspond to the singular squares of E defined by Nambooripad in [26].
More recently, in [15], an alternative approach to the study of maximal subgroups of free idempotent generated semigroups was introduced. Using Reidemeister-Schreier rewriting methods originally developed in [36], together with methods from combinatorial semigroup theory (that is, the study of semigroups by generators and relations), a presentation for an arbitrary maximal subgroup of IG(E) was given in [15,Theorem 5]. Then applying this result it was shown that, in fact, every abstract group arises as the maximal subgroup of IG(E), for an appropriately chosen biordered set. Moreover, it was shown that every finitely presented group is a maximal subgroup of a free idempotent generated semigroup over a finite biordered set E.
Other recent work in the area includes [6] where free idempotent generated semigroups over bands are investigated, and it is shown that there is a regular band B such that IG(B) has a maximal subgroup isomorphic to the free abelian group of rank 2.
However, the structure of the maximal subgroups of free idempotent generated semigroups on naturally occurring biordered sets, such as the biordered set of the full linear monoid M n (Q) over a division ring Q, remained far from clear. In a recent paper [3] Brittenham, Margolis and Meakin further developed their topological tools to study this problem. The main result of [3] shows that the rank 1 component of the free idempotent generated semigroup of the biordered set of a full matrix monoid of size n × n, n > 2, over a division ring Q has maximal subgroup isomorphic to the multiplicative subgroup of Q. This result provided the first natural example of a torsion group that arises as a maximal subgroup of a free idempotent generated semigroup on some finite biordered set, answering a question raised in [8]. It is remarked in [3] that the methods used there seem difficult to extend to higher ranks. Here we shall extend their result, showing that general linear groups arise as maximal subgroups in higher rank components.
As mentioned above, the free idempotent generated semigroup over E is the universal object in the category of all idempotent generated semigroups whose sets of idempotents are isomorphic to E. The free idempotent generated semigroup over E is the semigroup defined by the following presentation.
IG(E) = ⟨ E | e · f = ef (e, f ∈ E, {e, f} ∩ {ef, fe} ≠ ∅) ⟩.   (1.1)
(It is an easy exercise to show that if, say, fe ∈ {e, f} then ef ∈ E. In the defining relation e · f = ef the left hand side is a word of length 2, and ef is the product of e and f in S, i.e. a word of length 1.) The idempotents of S and of IG(E) are in natural one-one correspondence; in fact they are isomorphic as biordered sets by the foundational result of Easdown [7]. We shall identify the two sets throughout.
We may now state our main result.
Theorem 1. Let n and r be positive integers with r < n/3, let E be the biordered set of idempotents of the full linear monoid M n (Q) of all n × n matrices over an arbitrary division ring Q, and let W be an idempotent matrix of rank r. Then the maximal subgroup of IG(E) with identity W is isomorphic to the general linear group GL r (Q).
Observe here that the condition r < n/3 forces n ≥ 4. Theorem 1 extends the main result of [3] where Theorem 1 is proved in the case r = 1 and n ≥ 3. In particular Theorem 1 shows that arbitrary general linear groups arise as maximal subgroups of naturally occurring biordered sets. An analogous result for the full transformation semigroup T n of all mappings from the set {1, . . . , n} to itself under composition was recently established in [16] where it is shown how the standard Coxeter presentation for the symmetric group S r is encoded by the set of all idempotents with image size r. Our methods do not extend to higher values of r, and the problem of describing the maximal subgroups in those cases remains open (see Section 8 for further discussion of this).
The proof of Theorem 1 is broken down into stages. For each stage of the proof the initial algebraic problem will be recast in purely combinatorial terms. At the heart of the proof will be the detailed analysis of various connectedness conditions satisfied by the structure matrices of the principal factors of the monoid M n (Q). Several different notions of connectedness arise, the first of which will be analysed using a coloured bipartite graph representation, closely related to the Graham-Houghton graphs employed in [2,3], while later connectedness conditions concern graphs obtained in a natural way from occurrences of symbols arising in multiplication tables of semigroups of matrices. The fact that these notions of connectedness are central to the proof reflects the natural geometric and topological structure underlying the problem, as explored in detail in [2,3].
The paper is structured as follows. In Section 2 we give the necessary background on matrix semigroups over division rings and on free idempotent generated semigroups, and then we go on to apply [15,Theorem 5] to write down a presentation for an arbitrary maximal subgroup of a rank r idempotent in IG(E(M n (Q))). The remainder of the paper is concerned with proving that, when r < n/3, the group that this presentation defines is GL r (Q). This proof is broken down into three main steps which are explained in Section 3. We work through the main steps of the proof over Sections 4, 5, 6 and 7. Finally, in Section 8 we discuss some open problems and possible directions for future research.

Preliminaries
Matrix semigroups. Throughout this paper, Q will be an arbitrary fixed division ring, M n (Q) will denote the full linear monoid of all n × n matrices over Q, and we shall use M m×l (Q) to denote the set of all m × l matrices over Q, for positive integers m, l. We let GL n (Q) denote the general linear group of all invertible n × n matrices over Q, which is, of course, the group of units of the monoid M n (Q). We also choose and fix an arbitrary idempotent W of M n (Q) of rank r < n/3. Since we are working over a division ring, which might not be commutative, some care needs to be taken here with notions like the rank of a matrix. Linear combinations of rows will always be taken using left scalar multiplication, and linear combinations of columns will be taken using right scalar multiplication. Thus by the row space Row A of a matrix A we shall mean left row space, by the column space Col A of A we shall mean right column space, and by the rank of a matrix we mean the left row rank of the matrix, which is equal to its right column rank.
With E equal to the biordered set of idempotents of M n (Q) our aim is to prove that the maximal subgroup of the free idempotent generated semigroup IG(E) with identity W is isomorphic to the general linear group GL r (Q). Since any pair of maximal subgroups in the same D-class of a semigroup are isomorphic, by Proposition 1(iii) below it follows that without loss of generality we may take W to be the n × n matrix with the r × r identity matrix I_r in its top left corner and every other entry equal to 0.
In general, important structural information about a semigroup may be obtained by studying its ideal structure. Since their introduction in [17], Green's relations have provided a powerful tool for the investigation of the ideal structure of semigroups. Recall that two elements s and t of a semigroup S are said to be R-related if they generate the same principal right ideal, L-related if they generate the same principal left ideal, and J-related if they generate the same principal two-sided ideal; that is, s R t if and only if sS^1 = tS^1, s L t if and only if S^1 s = S^1 t, and s J t if and only if S^1 s S^1 = S^1 t S^1. In addition, we have the relations H = R ∩ L and D = R ∘ L = L ∘ R, which is the join of R and L in the lattice of equivalence relations on S. Given an element a ∈ S we use R(a, S) to denote its R-class, and similarly we use the notation L(a, S), J(a, S), H(a, S) and D(a, S). Let e ∈ S be an idempotent. The set eSe is a submonoid of S with identity element e, and it is the largest submonoid of S which has e as its identity. The group of units of eSe is the largest subgroup of S with identity e, and is called the maximal subgroup of S containing e. This maximal subgroup is precisely the H-class H(e, S) of S that contains the idempotent e. More background on Green's relations and their importance in semigroup theory may be found, for example, in [22]. One particularly important class is that of semigroups without proper two-sided ideals. A semigroup S is called simple if its only ideal is S itself, and a semigroup with zero 0 ∈ S is called 0-simple if {0} and S are its only ideals (and S^2 ≠ {0}). A semigroup is called completely (0-)simple if it is (0-)simple and has (0-)minimal left and right ideals, under the natural orders on left and right ideals by inclusion.
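Green's relations as defined above can be computed directly by comparing principal ideals. The sketch below is not from the paper: it uses the full transformation monoid on three points (rather than M_n(Q)) purely as a small concrete testbed.

```python
from itertools import product

# All maps {0,1,2} -> {0,1,2}, encoded as tuples of images.
T3 = list(product(range(3), repeat=3))

def compose(f, g):
    # (f * g)(x) = g(f(x)): apply f first, then g.
    return tuple(g[f[x]] for x in range(3))

def principal_right_ideal(s):
    # sS^1 = {s} union {s * t : t in S}
    return frozenset([s] + [compose(s, t) for t in T3])

def principal_left_ideal(s):
    # S^1 s = {s} union {t * s : t in S}
    return frozenset([s] + [compose(t, s) for t in T3])

def R_related(s, t):
    return principal_right_ideal(s) == principal_right_ideal(t)

def L_related(s, t):
    return principal_left_ideal(s) == principal_left_ideal(t)

def H_related(s, t):
    return R_related(s, t) and L_related(s, t)
```

In this monoid two maps are R-related exactly when they have the same kernel, and L-related exactly when they have the same image, which the ideal comparison above recovers without knowing those characterisations in advance.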
Let S be a completely 0-simple semigroup. The Rees theorem [22, Theorem 3.2.3] states that S is isomorphic to a regular Rees matrix semigroup M^0[G; I, Λ; P] over a group G, and conversely that every such semigroup is completely 0-simple. Here G is a group, I and Λ are index sets, P = (p_{λi}) is a regular Λ × I matrix over G ∪ {0} (where regular means that every row and column of the matrix contains at least one non-zero entry) called the structure matrix, and S = M^0[G; I, Λ; P] is the semigroup with elements (I × G × Λ) ∪ {0} and multiplication defined by (i, g, λ)(j, h, µ) = (i, g p_{λj} h, µ) if p_{λj} ≠ 0, and 0 otherwise.
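The Rees matrix multiplication just described can be sketched in a few lines. The structure matrix below is an invented 2 × 2 example over the multiplicative group of non-zero rationals, not an object from the paper; it also illustrates the standard fact that the H-class indexed by (i, λ) contains an idempotent exactly when p_{λi} ≠ 0, namely (i, p_{λi}^{-1}, λ).

```python
from fractions import Fraction

# Structure matrix P[lam][i] over the group (Q*, *); Fraction(0) marks a zero entry.
P = [[Fraction(1), Fraction(2)],
     [Fraction(0), Fraction(3)]]

ZERO = "0"  # the adjoined zero of the Rees matrix semigroup

def mult(a, b):
    # (i, g, lam)(j, h, mu) = (i, g * P[lam][j] * h, mu) if P[lam][j] != 0, else 0.
    if a == ZERO or b == ZERO:
        return ZERO
    i, g, lam = a
    j, h, mu = b
    p = P[lam][j]
    return (i, g * p * h, mu) if p != 0 else ZERO

def idempotent(i, lam):
    # Unique idempotent of the H-class (i, lam), when P[lam][i] != 0.
    p = P[lam][i]
    return (i, 1 / p, lam) if p != 0 else None
```

For instance, with the matrix above the H-class (1, 1) contains the idempotent (1, 1/3, 1), while any product falling on the zero entry P[1][0] collapses to 0.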
The importance of 0-simple semigroups comes from the way in which they may be viewed as basic building blocks of arbitrary semigroups. Indeed, given a J-class J of a semigroup S we can form a semigroup J^0 from J, called the principal factor of S corresponding to J, where J^0 = J ∪ {0} and multiplication * is given by s * t = st if s, t, st ∈ J, and s * t = 0 otherwise. It is well known (see [22]) that J^0 is then either a semigroup with zero multiplication, or J^0 is a 0-simple semigroup. Recall that a semigroup S is called (von Neumann) regular if a ∈ aSa for all a ∈ S. A semigroup is regular if and only if every R-class (equivalently, every L-class) contains at least one idempotent.
Linear semigroups have received a lot of attention in the literature, and much is known about the structure of the full linear semigroup M_n(Q); see [28,30]. Let us now recall some of the fundamental facts regarding M_n(Q) that we shall need in what follows. The semigroup S = M_n(Q) is a (von Neumann) regular semigroup. More than this, it is completely semisimple, meaning that each of its principal factors is a completely 0-simple semigroup, and thus, by the Rees theorem, each principal factor of M_n(Q) is isomorphic to some Rees matrix semigroup over a group.
The set of matrices of a fixed rank r ≤ n forms a J-class in the monoid M_n(Q). In fact, J = D in M_n(Q), and for matrices X, Y ∈ M_n(Q) we have X J Y if and only if rank X = rank Y. The maximal subgroups of the D-class of all matrices of rank r are isomorphic to GL_r(Q). Green's relations R and L in M_n(Q) are described by X R Y if and only if Col X = Col Y, and X L Y if and only if Row X = Row Y. Let D_r be the D-class of M_n(Q) consisting of the rank r matrices, and D_r^0 the corresponding principal factor, which we know is a completely 0-simple semigroup. Following [3] we now write down a natural Rees matrix representation for the principal factor D_r^0. At the heart of our proof will be a detailed analysis of the combinatorial properties of the structure matrix P_r of this Rees matrix semigroup.
Recall that a matrix is said to be in reduced row echelon form (RRE for short) if the following conditions are satisfied:
• all nonzero rows (rows with at least one nonzero element) are above any rows of all zeros;
• the leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it; and
• every leading coefficient is 1 and is the only nonzero entry in its column.
Given an r × q matrix A in RRE form we use LC(A) to denote the subset of {1, . . . , q} indexing the leading columns of the matrix, that is, the columns containing the leading 1s. If A is an r × q matrix in RRE form, and if A has rank r, then all of the rows must be non-zero and the leading coefficient in every row is 1, and therefore A must have exactly r leading columns which are, in order, the transposes of the 1 × r standard basis vectors. Therefore, for every r × q rank r matrix A in RRE form, LC(A) is an r-element subset of {1, . . . , q}. Dually, given the transpose B of a matrix B^T in RRE form, we use LR(B) to denote the set of numbers indexing the leading rows of B.
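The three RRE conditions and the map LC can be checked mechanically; the following sketch (ours, for illustration only) encodes a matrix as a list of rows and returns leading column indices 1-based, matching the convention in the text.

```python
def is_rre(A):
    """Check the three reduced row echelon conditions for a list of rows."""
    pivots = []
    seen_zero_row = False
    for row in A:
        nz = [j for j, x in enumerate(row) if x != 0]
        if not nz:
            seen_zero_row = True
            continue
        if seen_zero_row:                 # nonzero row below a zero row
            return False
        j = nz[0]
        if pivots and j <= pivots[-1]:    # pivot not strictly to the right
            return False
        if row[j] != 1:                   # leading coefficient must be 1
            return False
        if any(A[k][j] != 0 for k in range(len(A)) if A[k] is not row):
            return False                  # pivot must be alone in its column
        pivots.append(j)
    return True

def LC(A):
    """1-based indices of the leading columns of a matrix in RRE form."""
    return [min(j for j, x in enumerate(row) if x != 0) + 1
            for row in A if any(row)]
```

For an r × q rank r matrix in RRE form, LC returns an r-element subset of {1, ..., q}, as in the text.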
Let Y r denote the set of all r × n rank r matrices in RRE form, and let X r denote the set of transposes of elements of Y r . The structure of D 0 r is described in the following theorem (see [28]).
Theorem 2. The principal factor D_r^0 of M_n(Q) is isomorphic to the Rees matrix semigroup M^0(GL_r(Q); X_r, Y_r; P_r) where the structure matrix P_r = (P_r(Y, X)) is defined for Y ∈ Y_r, X ∈ X_r by P_r(Y, X) = Y X if Y X is of rank r, and 0 otherwise.
Given X ∈ X_r and Y ∈ Y_r we shall use R(X) to denote the R-class indexed by X, and L(Y) to denote the L-class indexed by Y. So, the R-classes of D_r are indexed by X_r, the L-classes by Y_r, and the H-class R(X) ∩ L(Y) contains an idempotent if and only if P_r(Y, X) ≠ 0, which is true if and only if Y X has rank r. In this case we use e_{X,Y} to denote the unique idempotent in the group H-class R(X) ∩ L(Y).
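A toy instance of the structure matrix of Theorem 2 can be computed by hand or by machine. The parameters below (n = 2, r = 1, over the two-element field GF(2), which is a field and hence a division ring) are our choice for illustration and lie far below the r < n/3 range of Theorem 1; here Y_1 consists of the three 1 × 2 rank-1 RRE matrices.

```python
# Y_1: all 1x2 rank-1 matrices in RRE form over GF(2); X_1: their transposes.
Y1 = [(1, 0), (1, 1), (0, 1)]
X1 = [tuple(y) for y in Y1]  # column vectors, stored the same way

def P1(y, x):
    # P_1(Y, X) = YX if YX has rank 1 (i.e. is nonzero), and 0 otherwise;
    # over GF(2) the 1x1 product YX is just a bit.
    return (y[0] * x[0] + y[1] * x[1]) % 2

# The full 3x3 structure matrix, rows indexed by Y_1, columns by X_1.
P = [[P1(y, x) for x in X1] for y in Y1]
```

The resulting matrix is regular (every row and column contains a nonzero entry), and its six nonzero entries correspond to the six rank-1 idempotents e_{X,Y} of M_2(GF(2)).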
Free idempotent generated semigroups. Let S be a semigroup, let E = E(S) be the set of idempotents of S, and let IG(E) be the free idempotent generated semigroup over E defined by the presentation (1.1). Let S′ denote the subsemigroup of S generated by E, and let φ : IG(E) → S′ be the natural homomorphism induced by (1.1). Some fundamental basic properties of the semigroup IG(E) are summarised in the following statement.
Proposition 1. With the above notation:
(i) for every e ∈ E, the restriction of φ (to the H-class of e in IG(E)) is a homomorphism onto the maximal subgroup of S′ containing e;
(ii) φ maps the set of idempotents of IG(E) bijectively onto E, and these two sets are isomorphic as biordered sets;
(iii) for every e ∈ E, the map φ induces a bijection between the set of R-classes (respectively L-classes) in the D-class of e in IG(E) and the set of R-classes (respectively L-classes) in the D-class of e in S′.
A presentation for the maximal subgroup of IG(E) containing W . For the remainder of the article E will denote the set of idempotents of the full linear monoid M_n(Q). Our interest is in the maximal subgroup H(W, IG(E)), where IG(E) is defined by the presentation (1.1). Generalising classical results from combinatorial group theory (see [24]), a Reidemeister-Schreier theory for rewriting monoid presentations to obtain presentations for their maximal subgroups was developed in [36]. Applying that general theory, a presentation for a general maximal subgroup in a general free idempotent generated semigroup was given in [15, Theorem 5]. Our first task here will be to specialise that general theory to obtain a presentation for the maximal subgroup of IG(E) containing W. The presentation given in [15, Theorem 5] depends on the choice of (1) a Schreier system, which is a certain set of words from F*, where F is the set of idempotents in the D-class D_r, and (2) a choice function π, called the pivot function, which picks out a single idempotent from each R-class of D_r. Ultimately, the presentation we obtain for H(W, IG(E)), which is stated below in equations (2.6)-(2.9), will have generating symbols F in one-one correspondence with the set F of idempotents in D_r. The relations in the presentation will naturally divide into three families: the first a set of relations determined by the choice of Schreier system, the second family determined by the choice of pivot function π, and finally a family of relations determined by the so-called singular squares of idempotents from D_r (see below for the definition of a singular square).
Let H_W = H(W, IG(E)). For each Y ∈ Y_r let H_Y denote the H-class in the R-class of W in IG(E) indexed by the matrix Y. This makes sense since Y_r indexes the L-classes of D_r in M_n(Q), and so by Proposition 1(iii) the set Y_r also indexes the L-classes of the D-class of W in IG(E). A collection of words ρ_Y, ρ′_Y ∈ F* (Y ∈ Y_r) is said to form a Schreier system of representatives for H_W in IG(E) if the following conditions are satisfied: (S1) for every Y ∈ Y_r the element of IG(E) represented by ρ_Y lies in H_Y, and ρ′_Y represents an inverse for it in the sense that ρ_Y ρ′_Y represents the identity W of H_W; and (S2) every prefix of every ρ_Y (including the empty word) is equal to some ρ_Z.
Since the idempotents F are in natural one-one correspondence with the non-zero entries of the Rees structure matrix P r we now turn to this matrix and shall define our Schreier system using it. It should not be forgotten here that P r is the structure matrix for a Rees matrix semigroup isomorphic to D(W, M n (Q)) 0 but we do not know (or claim) that P r is a structure matrix for D(W, IG(E)) 0 . Here we are just making use of the fact that the natural homomorphism from IG(E) onto M n (Q) induces a bijection between D(W, IG(E)) 0 and D(W, M n (Q)) 0 which is R-class preserving, L -class preserving, and is bijective on the set F of idempotents.
We begin by associating a certain bipartite graph with P_r, and then shall define the Schreier system using a certain subtree of this bipartite graph. The bipartite graph ∆(P_r) has vertex set X_r ∪ Y_r, with an edge joining X ∈ X_r and Y ∈ Y_r precisely when P_r(Y, X) = Y X = I_r, the identity element of GL_r(Q). This graph relates to, but is not the same as, the Graham-Houghton graphs utilised in [2,3].
Note that there are fewer edges in the graph ∆(P r ) than there are idempotents in D r , that is, the edges in this graph just pick out a subset of the idempotents. Now, it is easy to see that every path in ∆(P r ) with initial vertex X ∈ X r and terminal vertex in Y ∈ Y r corresponds to a product of idempotents from F whose product in S is an element in the H -class R(X) ∩ L(Y ). This will be explained in more detail below when we define a Schreier system. The use of bipartite graphs in this way as an approach to the study of products of elements in Rees matrix semigroups is widespread, see for example [12,13,14,19,21].
Given A_1, . . . , A_k, a collection of pairwise disjoint subsets of {1, . . . , n}, we let I(A_1 | · · · |A_k) denote the k × n matrix with 1 in positions (j, a_j) for a_j ∈ A_j and 1 ≤ j ≤ k, and every other entry equal to 0. In particular, given a subset {i_1, . . . , i_r} of {1, . . . , n} with i_1 < · · · < i_r, we use I(i_1 | · · · |i_r) to denote the r × n matrix with 1 in positions (j, i_j) for 1 ≤ j ≤ r, and every other entry equal to 0. So columns i_1, . . . , i_r of I(i_1 | · · · |i_r) together form a copy of the r × r identity matrix, and all the other columns are zero vectors. We call I(i_1 | · · · |i_r) a scattered identity matrix.
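The matrices I(A_1 | · · · |A_k) are straightforward to construct; the sketch below (ours, with 1-based positions as in the text) covers both the general block form and the scattered identity special case, where each block is a singleton.

```python
def I(blocks, n):
    """I(A_1|...|A_k): k x n 0/1 matrix with a 1 in position (j, a)
    for each a in A_j; positions are 1-based as in the text."""
    A = [[0] * n for _ in range(len(blocks))]
    for j, block in enumerate(blocks):
        for a in block:
            A[j][a - 1] = 1
    return A

def scattered_identity(positions, n):
    """I(i_1|...|i_r): the special case where every block is a singleton."""
    return I([[i] for i in positions], n)
```

For example, scattered_identity([2, 5], 6) is the 2 × 6 matrix whose columns 2 and 5 form a copy of the 2 × 2 identity matrix, all other columns being zero.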
Unless otherwise stated, throughout given an r-element subset {i 1 , . . . , i r } of {1, . . . , n} we shall adopt the convention that the elements are ordered so that i 1 < · · · < i r .
We group the vertices of ∆(P_r) together depending on their leading rows or columns. Given X ∈ X_r with LR(X) = {i_1, . . . , i_r} where i_1 < · · · < i_r we shall say that X belongs to the region (i_1 < · · · < i_r). Similarly, given Y ∈ Y_r with LC(Y) = {i_1, . . . , i_r} where i_1 < · · · < i_r we shall say that Y belongs to the region (i_1 < · · · < i_r). Moreover, we let (i_1 < · · · < i_r) × (j_1 < · · · < j_r) denote the subgraph of ∆(P_r) induced by the set of all vertices X ∈ X_r belonging to the region (i_1 < · · · < i_r) together with all vertices Y ∈ Y_r belonging to the region (j_1 < · · · < j_r). In particular, we let ∆(i_1 < · · · < i_r) = (i_1 < · · · < i_r) × (i_1 < · · · < i_r), and call these the diagonal regions.
We define a natural order ≼ on the set of r-element subsets of {1, . . . , n}, with {1, . . . , r} ≼ A for every r-element subset A of {1, . . . , n}. Given A = {a_1, . . . , a_r} ≠ {1, . . . , r} with a_1 < a_2 < · · · < a_r, we let m ∈ {1, . . . , r} be the smallest subscript such that a_m ≠ m, we set (A \ {a_m}) ∪ {a_m − 1} ≼ A, and then we take the reflexive transitive closure to obtain the relation ≼. Clearly this defines a partial order on the r-element subsets of {1, . . . , n} and this order has a unique minimal element {1, . . . , r} which lies below every other element of the poset. This order clearly induces an order on the diagonal regions of the bipartite graph ∆(P_r). Note that the poset of r-element subsets of {1, . . . , n} under the ≼-relation has the property that every non-minimal element p of the poset covers exactly one other element. That is, for every non-minimal p there is precisely one element q of the poset such that q ≺ p and there is no element z satisfying q ≺ z ≺ p. In particular, the Hasse diagram of such a poset is a tree; see Figure 1 for an illustration of this when n = 6 and r = 2.
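The tree property of this poset can be verified computationally. The cover rule coded below (replace the smallest "out of place" element a_m by a_m − 1) is our reading of the definition, stated here as an assumption; it is consistent with the properties asserted in the text: a unique minimum {1, ..., r}, and every non-minimal subset covering exactly one other.

```python
from itertools import combinations

def parent(A):
    """Assumed cover rule: with A = {a_1 < ... < a_r} != {1,...,r} and m the
    smallest subscript with a_m != m, replace a_m by a_m - 1."""
    a = sorted(A)
    m = next(j for j, x in enumerate(a) if x != j + 1)  # 0-based index of a_m
    a[m] -= 1
    return frozenset(a)

# The n = 6, r = 2 case illustrated in Figure 1 of the text.
n, r = 6, 2
subsets = [frozenset(c) for c in combinations(range(1, n + 1), r)]
root = frozenset(range(1, r + 1))
```

Since every non-root subset has exactly one parent and repeatedly taking parents reaches the root, the Hasse diagram is a tree on the C(6, 2) = 15 two-element subsets.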
Each of (T1), (T2) and (T3) is easily seen to define a subset of the edges of ∆(P_r). If one takes just the edges (T1) and (T2) one obtains a bipartite graph whose connected components are connected subgraphs of the regions ∆(i_1 < · · · < i_r). The remaining edges (T3) give exactly one edge connecting every pair of regions that are adjacent under the order ≼. In particular this means that the graph obtained by factoring out T_{n,r} by the equivalence relation given by the diagonal regions is isomorphic to the Hasse diagram of the poset of r-element subsets of {1, . . . , n} under ≼. It follows from these observations that T_{n,r} is a spanning tree for the graph ∆(P_r). An illustration of the spanning tree T_{n,r} = T_{6,2} in the graph ∆(P_r) = ∆(P_2) is given in Figure 2. We shall use the spanning tree T_{n,r} both to define a Schreier system and also to define the pivot function π.
For the Schreier system, first set ρ_{I(1|···|r)} = ρ′_{I(1|···|r)} = ε, the empty word. Then given Y ∈ Y_r with Y ≠ I(1| · · · |r), we take the unique path p in the tree T_{n,r} from I(1| · · · |r)^T ∈ X_r to Y ∈ Y_r, say p = X_0, Y_0, X_1, Y_1, . . . , X_k, Y_k, where X_0 = I(1| · · · |r)^T and Y_k = Y, and we define ρ_Y = e_{X_0,Y_0} e_{X_1,Y_1} · · · e_{X_k,Y_k} ∈ F* and ρ′_Y = e_{X_k,Y_k} · · · e_{X_1,Y_1} e_{X_0,Y_0} ∈ F*. It follows easily that with the above definition this set of ρ_Y, ρ′_Y forms a Schreier system for H(W, IG(E)) in IG(E). For instance, in the example given in Figure 2, the Schreier word for a vertex Y is read off from the idempotents labelling the unique path in the tree from I(1|2)^T to Y. Note that only the edges of types (T1) and (T3) were used to form the Schreier system. We use the remaining edges (T2) from T_{n,r} to define the pivot function π : X_r → Y_r, which we define to map every X in the region ∆(i_1 < . . . < i_r) to the matrix I(i_1 | · · · |i_r). In other words, π is defined in such a way that the pairs (X, π(X)) are precisely the edges of type (T2) from the spanning tree T_{n,r} (and this clearly uniquely determines the function π).
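Reading a Schreier word off the unique tree path can be sketched schematically. The tree and vertex names below are invented for the example (they are not the actual matrices of X_r and Y_r); the point is only the mechanism: the path alternates X- and Y-vertices, ρ_Y collects the consecutive (X_t, Y_t) pairs as idempotent labels, and ρ′_Y is the reverse word.

```python
from collections import deque

def tree_path(adj, source, target):
    """Unique path between two vertices of a tree given as adjacency lists."""
    prev = {source: None}
    q = deque([source])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in prev:
                prev[w] = v
                q.append(w)
    path = [target]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def schreier_words(adj, root, Y):
    """rho_Y as the list of (X_t, Y_t) pairs along the path root, ..., Y,
    standing in for e_{X_0,Y_0} ... e_{X_k,Y_k}; rho'_Y is the reverse."""
    p = tree_path(adj, root, Y)   # p = X_0, Y_0, X_1, Y_1, ..., X_k, Y_k
    rho = [(p[2 * t], p[2 * t + 1]) for t in range(len(p) // 2)]
    return rho, rho[::-1]
```

Prefix-closure (condition (S2)) holds automatically in this scheme, since every prefix of ρ_Y is the word read off the tree path to an intermediate Y-vertex.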
The final concept that we need before we can write down a presentation for the maximal subgroup is the notion of a singular square. An E-square is a sequence (e, f, g, h, e) of elements of E with e R f L g R h L e. Unless otherwise stated, we shall assume that all E-squares are non-degenerate, i.e. the elements e, f, g, h are all distinct. An idempotent t = t^2 ∈ E left to right singularises the E-square (e, f, g, h, e) if te = e, th = h, et = f and ht = g.

Figure 2. A partial view of the graph ∆(P_r), where n = 6 and r = 2, with edges from the spanning tree T_{n,r} indicated. The bold lines represent edges of type (T3); the dashed lines those of type (T2); while the remaining edges are those of type (T1). Note that the edges (I(i|j), I(i|j)^T) are both of type (T1) and (T2). When the spanning tree is quotiented out by the diagonal regions we obtain a graph that is isomorphic to the Hasse diagram illustrated in Figure 1. In particular, the three bold edges between regions in this figure correspond in the obvious natural way to the three bold edges in Figure 1.
Right to left, top to bottom and bottom to top singularisation is defined similarly, and we call the E-square singular if it has a singularising idempotent of one of these types. It is easy to show that if (e, f, g, h, e) is a singular square then {e, f, g, h} forms a 2 × 2 rectangular band (that is, the set {e, f, g, h} is closed under taking products). In other words, for a square to stand a chance of being singular it must be a rectangular band. In [3] it is shown that in the full linear semigroup the converse is also true. Let Σ ⊆ X_r × X_r × Y_r × Y_r be the set of all singular squares of D_r which, by the above result, are precisely the rectangular bands in D_r. In terms of the structure matrix P_r it is easily verified that (X, X′, Y, Y′) is a rectangular band if and only if the equality P_r(Y′, X) P_r(Y, X)^{-1} P_r(Y, X′) P_r(Y′, X′)^{-1} = I_r holds in the group GL_r(Q). (Actually this is a general fact describing 2 × 2 rectangular bands in Rees matrix semigroups.) Following [15, Theorem 5], with the above notation, the maximal subgroup H(W, IG(E)) is defined by the presentation whose generators (2.6) are the symbols F and whose defining relations comprise the three families (2.7), (2.8) and (2.9) determined, respectively, by the Schreier system, the pivot function π, and the singular squares Σ. Let us denote this presentation by P_{r,n}. The rest of the paper will be devoted to the proof that when r < n/3 this presentation actually defines the general linear group GL_r(Q).
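The equivalence between the rectangular band property and the structure-matrix identity can be tested directly in the Rees matrix model. The sketch below uses an invented 2 × 2 structure matrix over the (abelian) multiplicative group of non-zero rationals, so the order of the factors in the identity is immaterial here; the form coded in `condition` is our rendering of the standard 2 × 2 rectangular band criterion, stated as an assumption rather than a quotation of the displayed equality.

```python
from fractions import Fraction as F
from itertools import product

def rees_mult(P, a, b):
    i, g, lam = a
    j, h, mu = b
    p = P[lam][j]
    return (i, g * p * h, mu) if p != 0 else "0"

def idem(P, i, lam):
    # Unique idempotent of the H-class (i, lam), assuming P[lam][i] != 0.
    return (i, 1 / P[lam][i], lam)

def is_rect_band(P, X, Xp, Y, Yp):
    # Direct check: the four idempotents are closed under multiplication.
    E = {idem(P, X, Y), idem(P, Xp, Y), idem(P, Xp, Yp), idem(P, X, Yp)}
    return all(rees_mult(P, a, b) in E for a, b in product(E, repeat=2))

def condition(P, X, Xp, Y, Yp):
    # p_{Y',X} p_{Y,X}^{-1} p_{Y,X'} p_{Y',X'}^{-1} = 1  (abelian toy group)
    return P[Yp][X] / P[Y][X] * P[Y][Xp] / P[Yp][Xp] == 1

P_good = [[F(1), F(2)], [F(2), F(4)]]  # satisfies the criterion
P_bad = [[F(1), F(2)], [F(2), F(5)]]   # fails it
```

With P_good the four idempotents multiply back into themselves, while with P_bad a product such as e_{0,0} e_{1,1} lands outside the square; in both cases the closure check and the structure-matrix condition agree.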

Outline of the proof
The basic idea behind the proof is as follows. We consider two matrices, both with rows indexed by X_r and columns indexed by Y_r. The first matrix is the transpose P_r^T of the Rees structure matrix of D_r^0 defined in Theorem 2. The second is the X_r × Y_r matrix whose non-zero entries are the abstract generators f_{X,Y} from the presentation P_{r,n}. So, we view the set of generators F given in (2.6) as being arranged in a matrix in a natural way, where the entry indexed by the pair (X, Y) is equal to the generator f_{X,Y} and all the other entries are set to 0. We shall carry out a sequence of Tietze transformations to the presentation P_{r,n}, transforming it into a presentation for the general linear group GL_r(Q). One of the key ideas is that we imagine the two X_r × Y_r matrices above laid out side-by-side, and then the Rees structure matrix P_r^T acts as a "guide" pointing out relations between the generators f_{X,Y} that one should be aiming to show hold. The fact that the structure of P_r influences the relations we obtain should not come as a surprise since, firstly, the Schreier system and pivot function have both been defined in terms of P_r, which links entries in P_r with the relations (2.7) and (2.8), and secondly, because all the rectangular bands are singular, the relations (2.9) in the presentation correspond exactly to the singular squares which are seen inside P_r.
The proof breaks down into the following three main steps.
Stage 1: Generators f_{X,Y} corresponding to edges of the graph ∆(P_r).
In Section 4 we prove that for every such generator the relation f_{X,Y} = 1 is a consequence of the relations in the presentation P_{r,n} (see Lemma 2). This is done by first proving the result for pairs corresponding to edges in the spanning tree T_{n,r} of ∆(P_r), using the relations (2.8) and (2.7) and the obvious relationship between the relations (2.7) and edges in the spanning tree T_{n,r}, and then extending this, using the relations (2.9), to arbitrary edges of the bipartite graph ∆(P_r).
Stage 2: Generators f_{A,B} and f_{X,Y} with BA = YX.
In Sections 5 and 6 we prove that for every such pair the relation f_{A,B} = f_{X,Y} is a consequence of the relations in the presentation P_{r,n} (see Lemmas 9 and 10). This is achieved in the following way. We fix some element K ∈ GL_r(Q) and consider all the generators f_{A,B} such that BA = K. Given such a pair of generators f_{A,B}, f_{A′,B′} we say that they are strongly connected if (1) they are in the same row or column (i.e. A = A′ or B = B′) and (2) the pair f_{A,B}, f_{A′,B′} completes to a singular square such that the other pair f_i, f_j of the square are both known, as a consequence of Stage 1, to satisfy f_i = f_j = 1. Then a sequence of generators f_{A,B}, all satisfying BA = K, such that adjacent terms in the sequence are strongly connected, is called a strong path. In this language, in this step of the proof we prove that for every K ∈ GL_r(Q), and for every pair of generators f_{A,B} and f_{A′,B′} with BA = B′A′ = K, there is a strong path connecting f_{A,B} and f_{A′,B′}. Keeping in mind the relations (2.9), this will suffice to show that f_{A,B} = f_{A′,B′} is a consequence of the relations from the presentation P_{r,n}.
Stage 3: Defining relations for GL_r(Q).
By this stage we have transformed P r,n into a presentation whose generators are in natural one-to-one correspondence with the elements of GL r (Q). Using this correspondence, we denote the generating symbols in this new presentation by f A where A ∈ GL r (Q). To complete the proof, in Section 7, we show that for any pair of matrices A, B from GL r (Q) the relation f A f B = f AB is a consequence of the relations (2.9) (see Lemma 15). Combined with Proposition 1(i), the fact that GL r (Q) = H(W, M n (Q)) is a homomorphic image of H(W, IG(E)), this will suffice to show that H(W, IG(E)) is isomorphic to GL r (Q).
Let us conclude this section by making a few comments about our method of proof.
• We recall that, in general, when writing down a Rees matrix representation for a completely 0-simple semigroup the structure matrix P is in no sense unique; see [22,Theorem 3.4.1]. Given an arbitrary completely 0-simple semigroup, and some Rees matrix representation for it, the graph ∆(P ) defined above need not be connected (in fact it will not contain any edges at all if the structure matrix does not contain any 1s) even if the semigroup is idempotent generated. On the other hand, it is always possible to normalise the matrix putting it into a particular form, introduced in [12] and later utilised in [13], called Graham normal form. When the corresponding completely 0-simple semigroup is idempotent generated, if the structure matrix P is in Graham normal form, then ∆(P ) will be connected. It just so happens that the Rees matrix representation we work with in this paper is in Graham normal form, and the decision to work with this particular Rees matrix representation is an important part of the proof, since we need the graph ∆(P r ) to be connected in order to find the spanning tree T n,r which is the starting point of the proof of the main theorem. This suggests that putting the structure matrix into Graham normal form would be a sensible first step when investigating maximal subgroups of free idempotent generated semigroups in general.
• We should note that the proof given of the corresponding result for T n in [16] does not work directly with a natural Rees matrix representation for the principal factor like we do here, but rather a certain auxiliary matrix is defined and used as the "guide" towards the relations one should be aiming to prove hold. When comparing the two proofs, in this aspect the approach here is more straightforward, in particular the long list of label computations needed for the proof in [16] is avoided here, and is replaced by simple computations of entries of the matrix P r , which in each instance just involves multiplying a single pair of matrices together.
In this section we work through Stage 1 of the proof of the main theorem. We continue using the notation and definitions introduced in Section 2. So in particular, P r denotes the structure matrix for the Rees matrix representation for the principal factor D 0 r of M n (Q), ∆(P r ) is the bipartite graph defined in Definition 1 whose edges correspond to occurrences of 1 in P r , and T n,r is the spanning tree of ∆(P r ) spanned by the edges (T1)-(T3). Recall that our aim is to show that for every edge (X, Y ) of ∆(P r ) the relation f X,Y = 1 is a consequence of the presentation P r,n . We begin by dealing with the edges from the spanning tree.

Lemma 1. For every edge (X, Y ) in the spanning tree T n,r the relation f X,Y = 1 is a consequence of the relations (2.7) and (2.8) from presentation P r,n .
Proof. Recall that ∆ = ∆(P r ) has been divided into regions, where ∆(i 1 < · · · < i r ) is the region consisting of all vertices X ∈ X r and Y ∈ Y r such that LC(Y ) = LR(X) = {i 1 , . . . , i r }.
Let Y 0 = I(1|2| · · · |r) and X 0 = Y T 0 . There are three types of edge in the tree T n,r ; types (T1), (T2) and (T3). If (X, Y ) is an edge of type (T2) then f X,Y = f X,π(X) = 1 is a relation in (2.8), so we may suppose that (X, Y ) is not a type (T2) edge. First suppose (X, Y ) is an edge in ∆(1 < · · · < r). In particular this means (X, Y ) is a type (T1) edge. Therefore (X, Y ) = (X 0 , Y ) and since we are assuming (X, Y ) is not type (T2) it follows that Y ≠ Y 0 . Now from the definition of the Schreier system it follows that f X0,Y0 = f X0,Y appears as a relation in (2.7), and so in this case we may deduce f X,Y = f X0,Y = f X0,Y0 = 1. Now let (i 1 < · · · < i r ) ≠ (1 < · · · < r), suppose that the edge (X, Y ) from ∆ ′ satisfies LR(X) ⊆ {i 1 , . . . , i r } and LC(Y ) ⊆ {i 1 , . . . , i r }, that at least one of X or Y belongs to ∆(i 1 < · · · < i r ), and assume inductively that f X ′ ,Y ′ = 1 has already been deduced for the edges of T n,r treated at earlier stages. There are two possibilities: either (X, Y ) is of type (T3) or of type (T1). Suppose that (X, Y ) is of type (T3). From the definition of T n,r and the assumptions on (X, Y ), since (X, Y ) is a (T3) edge the only possibility is that (X, Y ) = (X 1 , Z). From the definition of the Schreier system ρ Y1 = ρ Z e X1,Y1 , which using relations (2.7) and (2.9) gives f X1,Z = 1. (4.1) Next suppose that (X, Y ) is of type (T1). From the definition of T n,r and the assumptions on (X, Y ), it follows in this case that (X, Y ) belongs to the region ∆(i 1 < · · · < i r ) and, since (X, Y ) is assumed not to be type (T2), X = X 1 and Y ≠ Y 1 . Now from the definition of the Schreier system ρ Y = ρ Z e X1,Y , which from relations (2.7) together with (4.1) gives f X,Y = f X1,Y = 1. This completes the inductive step, showing that for every edge (X, Y ) of T n,r satisfying LR(X) ⊆ {i 1 , . . . , i r } and LC(Y ) ⊆ {i 1 , . . . , i r } we may deduce f X,Y = 1, completing the proof of the lemma.
Next we want to extend this result to every edge of the bipartite graph ∆ n,r . From Lemma 1 we already know f X,Y = 1 for every edge (X, Y ) in the spanning tree T n,r . With this initial information, together with the relations (2.9) from the presentation, we shall complete the proof of Stage 1 by proving the following result.
Lemma 2. Let (X, Y ) be an arbitrary edge of the bipartite graph ∆ n,r . Then the relation f X,Y = 1 is a consequence of the relations (2.7)-(2.9).
It will be useful to rephrase the problem in purely combinatorial terms. Let n and r be positive integers with r < n/3, and let ∆ n,r be the bipartite graph given in Definition 1. Now we shall colour the edges of ∆ n,r so that every edge is either red or blue. Initially we colour all edges from the spanning tree T n,r of ∆ n,r blue (i.e. the edges (T1), (T2) and (T3)) and all the other edges red. Our aim is to turn the colour of every edge from red to blue, in the following way. For every square of edges in ∆ n,r , if three of the edges of the square are blue and the fourth is red, then we may transform the fourth edge from red to blue. We call such a transformation an elementary edge colour transformation. The remainder of this section will be concerned with proving the following result.
Proposition 2. Let n and r be positive integers with r < n/3, and let ∆ n,r be the coloured bipartite graph defined above with blue edges for every edge in the spanning tree T n,r and all other edges coloured red. Then every red edge of ∆ n,r may be turned blue by a finite sequence of elementary edge colour transformations.
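As an aside, the closure procedure behind Proposition 2 (repeatedly turning a red edge blue whenever it completes a square whose other three edges are blue) can be sketched generically. The following Python fragment is only a toy illustration on an abstract bipartite graph, not the actual graph ∆ n,r or the tree T n,r ; all names in it are hypothetical. (In ∆ n,r one also needs the squares to be singular, which holds by Theorem 3.)

```python
# Toy sketch of the "elementary edge colour transformation" closure:
# given the edge set of a bipartite graph and an initial set of blue edges,
# repeatedly turn a red edge (x, y) blue whenever there are vertices x2, y2
# such that (x2, y), (x, y2) and (x2, y2) are all blue (a square with three
# blue edges).  This is a generic illustration, not the specific Delta_{n,r}.

def close_blue(edges, blue):
    """edges: set of (x, y) pairs; blue: initial blue subset.  Returns the
    set of edges that can be turned blue by elementary transformations."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for (x, y) in edges - blue:
            # look for (x2, y2) completing a square whose other edges are blue
            if any((x2, y) in blue and (x, y2) in blue
                   for (x2, y2) in blue if x2 != x and y2 != y):
                blue.add((x, y))
                changed = True
    return blue

# Example: complete bipartite graph K_{2,2} with three edges initially blue;
# the single red edge ("x2", "y2") is turned blue by one transformation.
edges = {("x1", "y1"), ("x1", "y2"), ("x2", "y1"), ("x2", "y2")}
blue0 = {("x1", "y1"), ("x1", "y2"), ("x2", "y1")}
print(close_blue(edges, blue0) == edges)  # prints True
```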
Before proving Proposition 2 we now show how Lemma 2 follows from it.
Proof of Lemma 2. If (X, Y ) ∈ T n,r then we are done by Lemma 1. By Proposition 2 every edge (X, Y ) can be reached, and turned blue, by a finite sequence of elementary edge colour transformations. The proof now proceeds by induction on the number of elementary edge colour transformations required to turn the edge (X, Y ) blue. For the inductive step we have vertices X ′ ∈ X r and Y ′ ∈ Y r such that the edges (X ′ , Y ), (X, Y ′ ) and (X ′ , Y ′ ) have all been turned blue, and by induction f X ′ ,Y = 1, f X,Y ′ = 1 and f X ′ ,Y ′ = 1 are all consequences of the relations (2.7)-(2.9). Since Y X = Y ′ X = Y X ′ = Y ′ X ′ = I r by the definition of ∆ n,r , it follows by Theorem 3 and (2.5) that this square is singular. Hence, by applying the relation (2.9) from the presentation P r,n we deduce f X,Y = 1.
After first proving some general lemmas we shall then deduce that Proposition 2 holds for all pairs (n, 1) with n ≥ 4. Then we prove the result for an arbitrary pair (n, r) (with 1 < r < n/3) where we may (and shall) assume inductively that the result holds for the pair (n − 1, r − 1) (which we note still satisfies r − 1 < (n − 1)/3). Induction will be applied by finding a natural copy of the coloured bipartite graph ∆ n−1,r−1 as an induced subgraph of ∆ n,r .

Lemma 3. (i) If X, X ′ ∈ (i 1 < i 2 < · · · < i r ) and (X, Y ), (X ′ , Y ) are edges of ∆ n,r , then once (X, Y ) has been turned blue, (X ′ , Y ) may also be turned blue. (ii) Dually, if Y, Y ′ ∈ (i 1 < i 2 < · · · < i r ) and (X, Y ), (X, Y ′ ) are edges of ∆ n,r , then once (X, Y ) has been turned blue, (X, Y ′ ) may also be turned blue.

Proof. (i) Suppose that X, X ′ ∈ (i 1 < i 2 < · · · < i r ). Then both of the edges (X, I(i 1 |i 2 | · · · |i r )) and (X ′ , I(i 1 |i 2 | · · · |i r )) belong to the tree T and thus are assumed already to be blue edges. Now the subgraph induced by these two edges together with the edges (X, Y ) and (X ′ , Y ) forms a square in ∆ n,r . It is then immediate from the definition of elementary edge colour transformation that once (X, Y ) has been turned blue, (X ′ , Y ) may also be turned blue.
(ii) The proof is dual to that of (i), making use of the fact that, with Y, Y ′ ∈ (i 1 < i 2 < · · · i r ), both of the edges (I(i 1 |i 2 | · · · |i r ) T , Y ) and (I(i 1 |i 2 | · · · |i r ) T , Y ′ ), belong to the spanning tree T .

Lemma 4. Let Γ be the subgraph (i 1 < i 2 < · · · < i r ) × (j 1 < j 2 < · · · < j r ) of ∆ n,r , and suppose that all the edges of Γ belong to a single connected component of Γ. Then every edge of Γ may be turned blue provided at least one edge of Γ has been turned blue.
Proof. Let (X, Y ) and (X ′ , Y ′ ) be edges in Γ. Suppose that the edges (X, Y ) and (X ′ , Y ′ ) are adjacent, i.e. that X = X ′ or Y = Y ′ . Then it follows from Lemma 3 that (X, Y ) can be turned blue if and only if (X ′ , Y ′ ) can be turned blue. But since all the edges of Γ belong to a single connected component, there is a sequence of edges between (X, Y ) and (X ′ , Y ′ ) in which consecutive edges of the sequence are adjacent in Γ. The result is now immediate.
It should be noted that it is not true in general that every region will satisfy the hypotheses of Lemma 4.
Lemma 5. Let (X, Y ) be an edge of ∆ n,r . If (X, Y ) belongs to one of the subgraphs (i), (ii) or (iii), then (X, Y ) can be turned blue.
Proof. Suppose that (X, Y ) belongs to the subgraph (i) and let I 0 = I(i 1 | · · · |i r ). Then the edges (I T 0 , I 0 ), (X, I 0 ) and (I T 0 , Y ), which all belong to the tree T and hence are blue edges, together with the edge (X, Y ) form a square, and hence (X, Y ) can be turned blue. Now suppose that (X, Y ) belongs to the subgraph (ii), which we shall denote here by Γ. We claim that all the edges of Γ belong to a single connected component of Γ. Indeed, for every vertex X ∈ (i 1 < i 2 < · · · < i r ), the edge (X, I(i 1 − 1, i 1 |i 2 | · · · |i r )) (4.2) belongs to Γ since I(i 1 − 1, i 1 |i 2 | · · · |i r )X = I r and, since Γ is bipartite, every other edge of Γ shares a common vertex with an edge of the form (4.2). But the edge (I(i 1 |i 2 | · · · |i r ) T , I(i 1 − 1, i 1 |i 2 | · · · |i r )) from the spanning tree T belongs to (ii), and is blue by assumption, and so it follows from Lemma 4 that, since one edge has been turned blue and all the edges in the region belong to a single connected component, every edge in the subgraph (ii) can be turned blue. Finally suppose that (X, Y ) belongs to the subgraph (iii). This is the most difficult of the three cases since there are no edges from the spanning tree T in this region. By a dual argument to case (ii), since the mapping X ↦ X T is an automorphism of the graph ∆ n,r preserving regions, we conclude that the subgraph (iii) has a connected component that contains all of its edges, and thus, by Lemma 4, it will suffice to show that at least one edge in each subgraph of type (iii) can be turned blue.
But these three edges are from subgraphs of type (ii), (ii) and (i), respectively, and therefore by parts (i) and (ii) we know that all three of these edges may be turned blue, and therefore the edge (4.3) may be turned blue. Next consider the edge (I(i 1 |i 2 | · · · |i r ) T , I(i 1 − 2, i 1 |i 2 | · · · |i r )).
The last of these three edges is (4.4) which we have already shown can be turned blue, while the other two edges belong to subgraphs of types (ii) and (i), respectively, so are also blue. Thus we deduce that the edge (4.5) can be turned blue and, since this edge belongs to the subgraph (iii), this completes the proof in this case.
In this case, let l and k be distinct numbers in {1, . . . , n} each different from all of 1, 2, i 2 , i 3 , . . . , i r . This is possible since n > r+2.
Then the subgraph of ∆ n,r induced by the four vertices

Y r : I(2, k|i 2 |i 3 | · · · |i r ), I(1, l|i 2 |i 3 | · · · |i r )
X r : I(1, k|i 2 |i 3 | · · · |i r ) T , I(2, l|i 2 |i 3 | · · · |i r ) T

is a square, three of whose edges belong to subgraphs of types (i) or (ii) and so are blue edges, which means that the fourth edge (I(1, k|i 2 |i 3 | · · · |i r ) T , I(2, k|i 2 |i 3 | · · · |i r )), which belongs to the subgraph (iii), can be turned blue, completing the proof for this case. (Note that these four matrices are in RRE form regardless of the values of k and l.) We give another general result.
Lemma 6. Let (X, Y ) be an edge of ∆ n,r . If (X, Y ) belongs to the subgraph (i 1 < i 2 < · · · < i r ) × (j 1 < i 2 < · · · < i r ) (4.6) then the edge (X, Y ) can be turned blue.
Proof. We claim that this subgraph has a single connected component containing all of its edges. When i 1 = j 1 , so that the subgraph is a diagonal region, this is immediate from the definition of the spanning tree T . Now suppose i 1 < j 1 < i 2 , the other case being dual. Then for every vertex B in (j 1 < i 2 < · · · < i r ) the edge (I(i 1 , j 1 |i 2 |i 3 | · · · |i r ) T , B) (4.7) belongs to ∆ n,r and, since the graph induced by this region is bipartite, it follows that every edge in this subgraph shares a vertex with one of the edges (4.7), proving the claim. Thus by Lemma 4, for each subgraph of the form (4.6), once we have turned one edge blue we can conclude that every other edge in that subgraph can be turned blue. We prove the lemma by induction on |i 1 − j 1 |. When |i 1 − j 1 | ≤ 1 the result holds by Lemma 5, so suppose otherwise and assume that the result holds for smaller values of |i 1 − j 1 |. Suppose that i 1 < j 1 , so that j 1 > i 1 + 1 (the other case is dual). Now consider the subgraph of ∆ n,r induced by the four vertices

Y r : I(j 1 |i 2 | · · · |i r ), I(i 1 , i 1 + 1|i 2 | · · · |i r )
X r : I(i 1 , j 1 |i 2 | · · · |i r ) T , I(i 1 + 1, j 1 |i 2 |i 3 | · · · |i r ) T .
Three of these edges belong to subgraphs whose edges may be turned blue by induction, thus the remaining edge (I(i 1 , j 1 |i 2 | · · · |i r ) T , I(j 1 |i 2 | · · · |i r )), which belongs to the subgraph (4.6), may also be turned blue, completing the inductive step, and hence the proof of the lemma.
The following result will serve as a family of base cases for the induction proving Proposition 2.

Corollary 1. Proposition 2 holds for every pair (n, 1) with n ≥ 4.

Proof. In this case every edge of ∆ n,1 belongs to a region of the form (4.6) and hence can be turned blue by Lemma 6.
So, from now on in this section we may suppose that r and n are integers satisfying 1 < r < n/3, and we shall assume that Proposition 2 holds for the pair (n − 1, r − 1), and then prove under this assumption that the proposition holds for the pair (n, r). Then by induction, with Corollary 1 dealing with the base cases, this will suffice to prove Proposition 2. These assumptions will remain in place for the rest of this section.
In order to apply our inductive assumption we shall first need to identify a natural subgraph of ∆ n,r which is isomorphic to ∆ n−1,r−1 . Let ∆ ′ n,r denote the subgraph of ∆ n,r induced by the set Y ′ r of all vertices from Y r of the form above, together with the set X ′ r ⊆ X r of transposes of the elements of Y ′ r . Since Y ∈ Y ′ r ⊆ Y r is an r × n rank r matrix in RRE form, it follows that Ȳ ∈ Y n−1,r−1 , where Y n−1,r−1 denotes the set of all (r − 1) × (n − 1) rank r − 1 matrices in RRE form. Conversely, given any (r − 1) × (n − 1) rank r − 1 matrix Ȳ in RRE form, the matrix Y above is clearly then an r × n rank r matrix in RRE form. Thus Y ↦ Ȳ defines, in a natural way, a bijection between the subset Y ′ r of Y r and the set Y n−1,r−1 . The obvious dual statements hold for pairs X ∈ X ′ r and X̄ ∈ X n−1,r−1 , where X n−1,r−1 denotes the set of transposes of elements of Y n−1,r−1 . Therefore we have a natural bijection (4.8). Next we observe that the bijection (4.8) is actually an isomorphism between the subgraph ∆ ′ n,r of ∆ n,r induced by X ′ r ∪ Y ′ r and the graph ∆ n−1,r−1 . This is easily seen. Indeed, for every pair (X, Y ) with X ∈ X ′ r and Y ∈ Y ′ r , we have Y X = I r if and only if Ȳ X̄ = I r−1 , and therefore (X, Y ) is an edge of ∆ ′ n,r if and only if (X̄, Ȳ) is an edge of ∆ n−1,r−1 . Finally, looking at the list of edges (T1), (T2) and (T3) in the definition of the spanning tree, we see that the edge (X, Y ) belongs to the spanning tree T n,r of ∆ n,r if and only if the edge (X̄, Ȳ) belongs to the spanning tree T n−1,r−1 of ∆ n−1,r−1 . In other words, for every edge (X, Y ) of ∆ ′ n,r ⊆ ∆ n,r , (X, Y ) is an initial blue edge of ∆ n,r if and only if (X̄, Ȳ) is an initial blue edge of ∆ n−1,r−1 . (Here, in each case, by an initial blue edge we mean an edge that is blue by virtue of being in the spanning tree.) Now consider an arbitrary edge (X, Y ) of ∆ n,r such that (X, Y ) belongs to ∆ ′ n,r .
Then from the above observations ∆ ′ n,r is an isomorphic copy of ∆ n−1,r−1 , preserving the initial red and blue edge colours, and since by induction we are assuming that Proposition 2 holds for ∆ n−1,r−1 , it follows immediately that using the same sequence of elementary edge colour transformations inside ∆ ′ n,r we can transform (X, Y ) into a blue edge.
Therefore by induction we may assume that every edge (X, Y ) of ∆ n,r in ∆ ′ n,r has already been turned blue. ( †) This assumption will remain in place for the rest of the section.

Lemma 7. Let (X, Y ) be an edge in ∆ n,r . If (X, Y ) belongs to the subgraph (1 < i 2 < · · · < i r ) × (1 < j 2 < · · · < j r ) (4.9) then (X, Y ) can be turned blue.
Proof. Let (X, Y ) be an edge in the subgraph (4.9). Let X ′ be the matrix obtained by replacing the first column of X by the n × 1 vector [1, 0, 0, . . . , 0] T and let Y ′ be the matrix obtained by replacing the first row of Y by the 1 × n vector [1, 0, 0, . . . , 0]. Note that Y ′ is still a matrix in the set Y r (that is, it is still an RRE rank r matrix) and Y ′ belongs to the same region as Y . Similarly X ′ ∈ X r and X ′ belongs to the same region as X. Since Y X = I r , it follows from the way that Y ′ and X ′ have been defined that Y X ′ = I r , Y ′ X = I r and Y ′ X ′ = I r . Therefore the vertices {X, X ′ , Y, Y ′ } induce a square in ∆ n,r . Now the edge (X ′ , Y ′ ) belongs to ∆ ′ n,r and hence may be turned blue by induction ( †). Since (X ′ , Y ′ ) is blue, and X and X ′ belong to the same region, it follows from Lemma 3(i) that (X, Y ′ ) may be turned blue. Dually, since Y and Y ′ belong to the same region, (X ′ , Y ) may be turned blue. Therefore the remaining edge (X, Y ) in the square may be turned blue, completing the proof of the lemma.
Lemma 8. Let (X, Y ) be an edge in ∆ n,r . If (X, Y ) belongs to the subgraph (i 1 < i 2 < · · · < i r ) × (i 1 < j 2 < · · · < j r ) (4.10) then (X, Y ) can be turned blue.
Proof. Let (X, Y ) be an edge in the subgraph (4.10). If i 1 = 1 we are done by Lemma 7, so suppose i 1 > 1. Let X ′ be the matrix obtained by replacing the first row of X by the 1 × r vector [1, 0, 0, . . . , 0], and let Y ′ be the matrix obtained by replacing the first row of Y by the 1 × n vector [1, 0, 0, . . . , 0]. Note that since the i 1 th column of Y is the r × 1 vector [1, 0, 0, . . . , 0] T , and since i 1 > 1, this transformation means that the i 1 th column of Y ′ is the zero vector. Clearly Y ′ ∈ Y r and X ′ ∈ X r . Since i 1 > 1 it follows that Y X ′ = Y X = I r . This in turn, along with the definition of Y ′ , implies Y ′ X ′ = Y X ′ = I r . Therefore each of (X, Y ), (X ′ , Y ) and (X ′ , Y ′ ) is an edge in ∆ n,r (while (X, Y ′ ) is not an edge since Y ′ X ≠ I r ).
Next consider the subgraph of ∆ n,r induced by the four vertices

Y r : Y, Y ′
X r : X ′ , I(1, i 1 |j 2 |j 3 | · · · |j r ) T .

Straightforward computations show that these four vertices form a square in ∆ n,r . In this square, both of the edges (X ′ , Y ′ ) and (I(1, i 1 |j 2 |j 3 | · · · |j r ) T , Y ′ ) may be turned blue by Lemma 7, while the edge (I(1, i 1 |j 2 |j 3 | · · · |j r ) T , Y ) belongs to the subgraph (1 < j 2 < j 3 < · · · < j r ) × (i 1 < j 2 < j 3 < · · · < j r ), and so can be turned blue by Lemma 6. Therefore we deduce that the edge (X ′ , Y ) may be turned blue.
Finally consider the subgraph of ∆ n,r induced by the four vertices

Y r : Y, I(i 1 |i 2 |i 3 | · · · |i r )
X r : X, X ′ .

Again, it is easily verified that this set of vertices induces a square in ∆ n,r . We saw above that the edge (X ′ , Y ) may be turned blue. The edge (X, I(i 1 |i 2 |i 3 | · · · |i r )) belongs to a diagonal region and so may be turned blue by Lemma 5(i), while the edge (X ′ , I(i 1 |i 2 |i 3 | · · · |i r )) belongs to the subgraph (1 < i 2 < · · · < i r ) × (i 1 < i 2 < · · · < i r ) and so may be turned blue by Lemma 6. Since three of the four edges of the square can be turned blue, we deduce that the fourth edge (X, Y ) may be turned blue, completing the proof of the lemma.
We are now in a position to complete the proof of the main result of this section.
Proof of Proposition 2. Let (X, Y ) be an arbitrary edge of ∆ n,r , where (X, Y ) belongs to (i 1 < i 2 < · · · < i r ) × (j 1 < j 2 < · · · < j r ), say. If i 1 = j 1 we are done by Lemma 8, so suppose i 1 > j 1 (the other case may be dealt with using a dual argument). Let X ′ be the matrix obtained by replacing the first column of X by the n × 1 vector [0, 0, . . . , 0, 1, 0, . . . , 0] T with 1 in position j 1 and 0s elsewhere. Note that since row i 1 of X is the 1 × r vector [1, 0, 0, . . . , 0], it follows that row i 1 of X ′ is the zero vector. Clearly since i 1 > j 1 it follows that X ′ ∈ X r . Now consider the subgraph of ∆ n,r induced by the four vertices

Y r : Y, I(j 1 , i 1 |i 2 |i 3 | · · · |i r )
X r : X, X ′ .

From the definition of X ′ it follows that Y X ′ = Y X = I r , and it is then easily checked that these four vertices induce a square in ∆ n,r . The edge (X, I(j 1 , i 1 |i 2 |i 3 | · · · |i r )) belongs to (i 1 < i 2 < · · · < i r ) × (j 1 < i 2 < · · · < i r ) and so may be turned blue by Lemma 6. The edge (X ′ , I(j 1 , i 1 |i 2 |i 3 | · · · |i r )) belongs to (j 1 < i 2 < · · · < i r ) × (j 1 < i 2 < · · · < i r ), a diagonal region, and so may be turned blue by Lemma 5(i). Finally, the edge (X ′ , Y ) belongs to (j 1 < i 2 < · · · < i r ) × (j 1 < j 2 < · · · < j r ) and so may be turned blue by Lemma 8. Since all three of these edges may be turned blue we deduce that the remaining edge (X, Y ) of this square may be turned blue, completing the proof of the proposition.

Remark 1. In fact, although we shall not need this later on, the above proof actually shows that Proposition 2 holds under the weaker assumption that r < n − 2. The assumption r < n − 2 is needed in the proof of Lemma 5, Case 2 of (iii).

Combinatorial properties of multiplication tables
From the explanation of Stage 2 of the proof of our main result given in Section 3 it may be seen that establishing this part of the proof comes down to a combinatorial analysis of the structure matrix P r for the Rees matrix representation of D 0 r . Recall that P r = (P r (Y, X)) is a matrix with rows indexed by Y r , columns by X r , and entries P r (Y, X) = Y X if Y X is of rank r, and 0 otherwise. So the entries of P r come from the set GL r (Q) ∪ {0}. We also view the abstract generators F given in (2.6) as being arranged in a table, also with rows indexed by Y r and columns indexed by X r , where the entry in position (Y, X) is f X,Y if P r (Y, X) ≠ 0 (i.e. if f X,Y ∈ F ) and 0 otherwise. So far, using the defining relations (2.7)-(2.9) from the presentation P r,n we have been making deductions about relations between the symbols f X,Y appearing in this table. The results from the previous section show that we may deduce f X,Y = 1 whenever the corresponding entry P r (Y, X) of P r satisfies P r (Y, X) = Y X = I r . That was Stage 1 of the proof. Now we move on to consider Stage 2 of the proof. In this stage our aim is to prove that for any pair of non-zero entries P r (Y, X) and P r (Y ′ , X ′ ) from the structure matrix P r , if P r (Y, X) = P r (Y ′ , X ′ ) then f X,Y = f X ′ ,Y ′ may be deduced from (2.7)-(2.9).
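To fix ideas, the following Python fragment builds a hypothetical miniature of this set-up: the case r = 1, n = 2 over the two-element field GF(2) (a field being a special case of the division ring Q, and ignoring the standing hypothesis r < n/3), where the structure matrix can be written out in full. All names in the fragment are illustrative.

```python
# Miniature of the structure matrix P_r for r = 1, n = 2 over GF(2).
# Rows are indexed by the 1x2 rank-1 RRE matrices Y, columns by their
# transposes X, and P_1(Y, X) = YX when YX is non-zero (rank 1), else 0.

rre = [(1, 0), (1, 1), (0, 1)]   # all 1x2 rank-1 matrices in RRE form over GF(2)

def p1(Y, X):
    # product of a 1x2 row with a 2x1 column, computed in GF(2);
    # a non-zero value lies in GL_1(GF(2)) = {1}
    return (Y[0] * X[0] + Y[1] * X[1]) % 2

table = {(Y, X): p1(Y, X) for Y in rre for X in rre}

# Each non-zero entry corresponds to a generator f_{X,Y} and to an edge of
# the bipartite graph Delta(P_1).
edges = [pos for pos, val in table.items() if val != 0]
print(len(edges))  # prints 6: six non-zero entries in this 3x3 table
```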
As we did for the bipartite graph ∆ n,r in the previous section, we shall partition the matrix P r into regions, where the region (5.1) is the set of all pairs (Y, X) with Y ∈ Y r , X ∈ X r , LC(Y ) = {i 1 , i 2 , . . . , i r } and LR(X) = {j 1 , j 2 , . . . , j r }. By the entries in the region (5.1) we mean the set of all matrices Y X with rank r where (Y, X) belongs to the region (5.1). In this section we focus our attention just on the region (1 < 2 < · · · < r) × (1 < 2 < · · · < r), (5.2) whose entries are those of the form where A ∈ Mat r×(n−r−1) (Q) and B ∈ Mat (n−r−1)×r (Q). The aim of this section is to prove the following result.

Lemma 9. Let (Y, X) and (Y ′ , X ′ ) be pairs from the region (5.2) with P r (Y, X) = P r (Y ′ , X ′ ) ≠ 0. Then the relation f X,Y = f X ′ ,Y ′ is a consequence of the relations (2.7)-(2.9).
As in the previous section, we shall find it useful to recast this problem in purely combinatorial terms before solving it. We begin by introducing a general framework, and some terminology, for the analysis of combinatorial properties of tables.
Let P = P (B, A) be a matrix with rows indexed by a set B and columns indexed by A, where the entries of P all come from a set L, which we call the set of labels. Then given an element l ∈ L we define a graph, called the λ-graph of l, as follows: the λ-graph of l ∈ L is obtained by removing all entries from the matrix except occurrences of the symbol l, and then drawing an edge between every pair of ls that belong to the same row, or to the same column. Now, one natural source of such matrices is given by the multiplication tables of semigroups, where given a semigroup S we take A = B = L = S and define the entry P (s, t) = st. Let us briefly think about how λ-graphs behave in this situation. If S happens to be a group, S = G, then this matrix is a Latin square and so (unless the group is trivial) for every g ∈ L = G the λ-graph of g will not be connected (in fact it will not have any edges at all). On the other hand, if S is a semigroup with a zero element 0 ∈ S, then since in the multiplication table the row labelled by 0 (and dually the column labelled by 0) contains all zeros, it is clear that in this case the λ-graph of 0 in the multiplication table is connected. Now suppose that S is a monoid with a non-trivial group of units such that the set of non-invertible elements of S forms an ideal of S (for example, the semigroup M n (Q) has this property). Then since here a product st is invertible if and only if both s and t are, by the same reasoning as for groups above, the λ-graphs of the invertible elements s ∈ S (i.e. those elements from the group of units of S) will not be connected. So for such a semigroup the most one could hope for would be for the λ-graphs of every non-invertible element to be connected. As we shall see below, this is exactly what happens in the multiplication table of the semigroup M n (Q). In fact we show rather more than this.

Theorem 4. Let k and m be positive integers with k ≤ m, and let T m,k be the table with rows indexed by Mat k×m (Q), columns indexed by Mat m×k (Q), and with entries the products AB ∈ M k (Q) for A ∈ Mat k×m (Q) and B ∈ Mat m×k (Q). (i) If k < m then for every K ∈ M k (Q) the λ-graph of K in T m,k is connected. (ii) If k = m then for every non-invertible K ∈ M k (Q) the λ-graph of K in T m,k is connected.
It should be noted that, in contrast to the Rees structure matrix P r , in the matrix T m,k the index sets range over all possible matrices A and B, not just those in RRE form, and all products AB are recorded in the table, including those with rank less than k.
Theorem 4 is a general result which is possibly of independent interest. It might be of interest to explore further which semigroups have multiplication tables with this property, and whether there is some general connection between semigroups with this property and those for which the maximal subgroups of IG(E) are well behaved.
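The smallest non-trivial instance of Theorem 4(i) can be checked mechanically: take k = 1, m = 2 over the field GF(2) (a commutative stand-in for the division ring Q). The following Python sketch, with hypothetical names, enumerates the table T 2,1 and verifies that the λ-graph of each label is connected.

```python
# Check of Theorem 4(i) in its smallest non-trivial case: k = 1, m = 2 over
# GF(2).  Rows of T_{2,1} are indexed by 1x2 row vectors A, columns by 2x1
# column vectors B, and the (A, B) entry is the product AB in GF(2).
from itertools import product

vectors = list(product([0, 1], repeat=2))  # the four 1x2 (resp. 2x1) vectors

def entry(A, B):
    return (A[0] * B[0] + A[1] * B[1]) % 2

def lambda_graph_connected(label):
    """Occurrences of `label` in the table, joined when they share a row or
    a column; returns True when that graph is connected."""
    occ = [(A, B) for A in vectors for B in vectors if entry(A, B) == label]
    if not occ:
        return True
    seen, stack = {occ[0]}, [occ[0]]   # simple graph search from one occurrence
    while stack:
        A, B = stack.pop()
        for nxt in occ:
            if nxt not in seen and (nxt[0] == A or nxt[1] == B):
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == len(occ)

# Since k = 1 < 2 = m, Theorem 4(i) predicts both lambda-graphs are connected.
print(lambda_graph_connected(0), lambda_graph_connected(1))  # prints: True True
```

(The λ-graph of 1 here is in fact a 6-cycle, while the zero row of the table connects all occurrences of 0.)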
Before proving Theorem 4 let us see how it can be used to obtain Lemma 9 as a corollary. Clearly we have Y X = I r + Ȳ X̄. Thus for every pair (Y, X) the entry Y X in the region (5.2) determines, and is determined by, the corresponding entry Ȳ X̄ of the table T n−r,r (see (5.3)).

Proof of Lemma 9. Let (Y, X) be an arbitrary pair in the region (1 < 2 < · · · < r) × (1 < 2 < · · · < r) (5.4) such that P r (Y, X) ≠ 0, that is, rank(Y X) = r. It follows from Theorem 4, with k = r and m = n − r > r = k, that the λ-graph of Ȳ X̄ in T m,k = T n−r,r is connected. But then from (5.3) it follows that the λ-graph of Y X in the region (5.4) is connected. Since (Y, X) was arbitrary, we obtain that for every such entry Y X in the region (5.4) the λ-graph of this matrix in the component (5.4) is connected. Now let Y, Y ′ ∈ Y r and X, X ′ ∈ X r with LC(Y ) = LC(Y ′ ) = LR(X) = LR(X ′ ) = {1, 2, . . . , r} and P r (Y, X) = P r (Y ′ , X ′ ) ≠ 0.
If Y = Y ′ , then the square with rows indexed by Y and I(1|2| · · · |r) and columns indexed by X and X ′ , whose row labelled I(1|2| · · · |r) consists of the entries I r , I r , is a singular square by Theorem 3, equation (2.5), and the fact that Y X = Y X ′ , and hence from relation (2.9) we deduce f X,Y = f X ′ ,Y in this case. Dually, if X = X ′ then the square is singular since Y X = Y ′ X, and hence from relation (2.9) we deduce f X,Y = f X,Y ′ in this case. But now, since we know that the λ-graph of Y X in (5.4) is connected, it follows that there is a sequence of entries in (5.4) from P r (Y, X) to P r (Y ′ , X ′ ), all equal to Y X, where adjacent terms in the sequence are either in the same row or column of P r , and thus the corresponding generators are equal by the arguments given in the previous two paragraphs. Therefore, we may deduce f X,Y = f X ′ ,Y ′ as a consequence of the relations from the presentation P r,n .
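As a sanity check, the block identity Y X = I r + Ȳ X̄ used above can be verified numerically. The Python fragment below uses arbitrary illustrative integer matrices (with r = 2 and n − r = 3), not data from the paper, and checks the identity for Y = [I r | Ȳ] and X the column block matrix with I r above X̄.

```python
# Sanity check of the block identity Y X = I_r + Ybar Xbar, for
# Y = [I_r | Ybar] (r x n) and X = [I_r ; Xbar] (n x r), with r = 2, n = 5.
# The numerical values below are arbitrary illustrations.

def matmul(M, N):
    return [[sum(M[i][t] * N[t][j] for t in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def matadd(M, N):
    return [[M[i][j] + N[i][j] for j in range(len(M[0]))] for i in range(len(M))]

r = 2
I_r = [[1, 0], [0, 1]]
Ybar = [[1, 2, 3], [4, 5, 6]]          # r x (n - r)
Xbar = [[7, 8], [9, 10], [11, 12]]     # (n - r) x r

Y = [I_r[i] + Ybar[i] for i in range(r)]   # horizontal block [I_r | Ybar]
X = I_r + Xbar                             # vertical block [I_r ; Xbar]

lhs = matmul(Y, X)
rhs = matadd(I_r, matmul(Ybar, Xbar))
print(lhs == rhs)  # prints True
```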
The rest of this section is concerned with the proof of the above theorem.
Proof of Theorem 4. Let k and m be positive integers with k ≤ m. We prove the result by induction on k + m. When k = m = 1 the result is trivially seen to hold, since in this case T 1,1 is the multiplication table of Q, the only non-invertible element of which is 0, and as already observed above the corresponding λ-graph is connected. Now suppose k + m > 2 and assume inductively that the result holds for all pairs (k ′ , m ′ ) with k ′ ≤ m ′ and k ′ + m ′ < k + m. Let K ∈ M k (Q) be arbitrary.
The table T m,k naturally divides into regions indexed by pairs (α, β), where by definition the (α, β)-region is the set of all pairs (A, B) with prescribed blocks α and β, where α is a column vector and β is a row vector. Note that the region (0, 0) is a natural copy of the table T m−1,k inside T m,k . For part (i), we are given that k < m and must prove that the λ-graph of K in T m,k is connected. We consider two cases.
Case 1: k < m − 1: Let A ∈ Mat k×m (Q) and B ∈ Mat m×k (Q) be arbitrary, and write A and B in block form, where α is a k × 1 column vector, β is a 1 × k row vector, and A 2 , B 2 ∈ M k (Q).
We begin by arguing that without loss of generality we may assume that B 2 ∈ M k (Q) is invertible. Indeed, let U ∈ M k (Q) be an idempotent R-related to A 2 B 2 . Such an idempotent U exists since M k (Q) is regular. Then, since every idempotent is a left identity in its R-class (see [22, Proposition 2]), we obtain a sequence of equalities, in which X ∈ M k (Q) is invertible, and this sequence of equalities defines a path in the λ-graph of AB. Hence we may assume without loss of generality that B 2 is invertible. But then we have found a λ-path into the (0, 0)-region. Recall that the (0, 0)-region is a natural copy of T m−1,k inside the table T m,k . Since k < m − 1 it follows by induction, applying (i), that the λ-graph of AB restricted to the (0, 0)-region is connected. Therefore every occurrence of AB is connected to an occurrence of AB in the (0, 0)-region, while any two occurrences of AB in the (0, 0)-region are joined by a λ-path in the (0, 0)-region by induction. Since the pair A, B was arbitrary, this completes the proof that the λ-graph of K is connected in this case.
Case 2: k = m − 1: Arguing as in the previous case, for every entry in T m,k there is a λ-path to an entry of the form CD, where C, D ∈ Mat (m−1)×(m−1) (Q) and D is invertible. Now there are two cases depending on whether or not C is invertible. If C is not invertible then CD is not invertible and so by induction, applying (ii), the λ-graph of CD in the (0, 0)-region is connected, and the proof is complete as in the previous case.
So we may suppose that both C and D are invertible, and hence so is their product CD. It is easy to see that for any matrix L ∈ M m−1 (Q) appearing in the table T m,k = T m,m−1 and for any pair X and Y of invertible (m − 1) × (m − 1) matrices we have that the λ-graph of L is connected if and only if the λ-graph of XL is connected if and only if the λ-graph of XLY is connected. Indeed, left multiplication by X induces a permutation of the set of matrices M (m−1)×m (Q) which label the rows of the table T m,m−1 ; the same is true for right multiplication by Y on the set of matrices M m×(m−1) (Q) labelling the columns of T m,m−1 . This transformation of the table will result in a table where the entries XLY appear in precisely the positions where the entries L appeared in the original table T m,m−1 . Since permuting rows and columns of the table does not affect λ-connectedness, we have the desired conclusion.
Therefore it will suffice to show that the λ-graph of I m−1 is connected. Of course, within the (0, 0)-region the λ-graph of I m−1 is not connected (since I m−1 belongs to the group of units), and so it will be necessary to move out of that region in order to prove that the λ-graph of I m−1 is connected in T m,m−1 . We shall prove that there is a λ-path connecting the pair ([0|C], (0; C −1 )), where (0; C −1 ) denotes the block column with 0 above C −1 , into the region ([1, 0, 0, . . . , 0], [1, 0, 0, . . . , 0] T ).
Indeed, such a λ-path exists. In conclusion, we have proved that for every occurrence of I m−1 in T m,k there is a λ-path into the (0, 0)-region; that for every occurrence of I m−1 in the (0, 0)-region there is a λ-path to the ([1, 0, 0, . . . , 0], [1, 0, 0, . . . , 0] T )-region; and that in this region every pair of occurrences of I m−1 is connected by a λ-path. Therefore the λ-graph of I m−1 in T m,k is connected, completing the proof of the inductive step for part (i) of the theorem.
For part (ii), we are given that k = m and that K is non-invertible, and again we want to show that the λ-graph of K in T m,k = T m,m is connected.
Consider the entry AB in the multiplication table, where A, B ∈ M m (Q) and AB is not invertible, so that rank(AB) = l < m = k. Therefore AB is in the same D-class as the matrix J which has I l in its top left l × l block and zeros elsewhere. Hence by (2.1) we can write J = X(AB)Y where X and Y are invertible matrices. Since X and Y are invertible, it follows that in T m,k the λ-graph of AB is connected if and only if the λ-graph of XAB is connected, if and only if the λ-graph of XABY = J is connected. So we shall prove instead that the λ-graph of J is connected. Suppose that AB = J where A, B ∈ M m (Q), and write A and B in block form with A 11 and B 11 both l × l matrices. Consequently, there is a λ-path into a region that is a natural copy of T m,l inside T m,k . By induction, since l < k = m, the λ-graph of AB in this copy of T m,l in T m,k is connected, which completes the proof of the inductive step for (ii), and hence also completes the proof of the theorem.
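The reduction J = X(AB)Y used above can be made concrete: Gaussian elimination with tracked row and column operations produces the invertible matrices X and Y explicitly. The following sketch is our own illustration, not part of the paper; it works over the rationals (Python's Fraction) as a stand-in for a general division ring Q, and the function names are made up.

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank_normal_form(M):
    """Return (X, Y, l) with X, Y invertible n x n matrices such that
    X*M*Y equals J: the identity I_l in the top left block, zeros elsewhere."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    X, Y = identity(n), identity(n)
    l = 0
    while True:
        # find a pivot in the untouched lower-right block
        pivot = next(((i, j) for i in range(l, n) for j in range(l, n)
                      if A[i][j] != 0), None)
        if pivot is None:
            return X, Y, l
        pi, pj = pivot
        A[l], A[pi] = A[pi], A[l]          # row swap, tracked in X
        X[l], X[pi] = X[pi], X[l]
        for R in (A, Y):                   # column swap, tracked in Y
            for row in R:
                row[l], row[pj] = row[pj], row[l]
        c = A[l][l]                        # scale the pivot row so the pivot is 1
        A[l] = [x / c for x in A[l]]
        X[l] = [x / c for x in X[l]]
        for i in range(n):                 # clear the pivot column with row ops
            if i != l and A[i][l] != 0:
                f = A[i][l]
                A[i] = [a - f * b for a, b in zip(A[i], A[l])]
                X[i] = [a - f * b for a, b in zip(X[i], X[l])]
        for j in range(l + 1, n):          # clear the pivot row with column ops
            f = A[l][j]
            if f != 0:
                for R in (A, Y):
                    for row in R:
                        row[j] -= f * row[l]
        l += 1

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]                            # a rank 2 matrix
X, Y, l = rank_normal_form(M)
assert l == 2
J = matmul(matmul(X, [[Fraction(x) for x in r] for r in M]), Y)
assert J == [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
```

Because row and column operations are invertible, the accumulated X and Y are automatically in GL n (Q), which is exactly what the argument above requires.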

Strongly connecting the table
In this section we shall complete Stage 2 of the proof of the main theorem by extending Lemma 9 to obtain Lemma 10.
As usual, we first recast this problem combinatorially. Let P = P (B, A) be a matrix with rows indexed by a set B and columns indexed by a set A, where the entries of P all come from a set L ∪ {1}, where 1 is a distinguished symbol not belonging to L. Let l ∈ L and consider the λ-graph of l defined in Section 5. We say that two vertices (B, A) and (B ′ , A ′ ) of the λ-graph of l are connected by a strong edge if either of two conditions holds. A strong path is then a sequence of vertices in which adjacent terms are connected by strong edges, and we say that the λ-graph of l is strongly connected if between any pair of vertices (B, A) and (B ′ , A ′ ) there is a strong path. A strong path of length 3 is illustrated in Figure 3.
Theorem 5. Let r and n be positive integers with r < n/3, let Y r be the set of all r × n rank r matrices over a division ring Q in reduced row echelon form, let X r be the set of transposes of elements of Y r , and let T n,r = T n,r (Y, X) be the matrix with entries Y X ∈ M r (Q) where Y ∈ Y r and X ∈ X r . Then for every matrix K ∈ M r (Q) the λ-graph of K in T n,r is strongly connected with respect to the distinguished entries 1 = I r .

Figure 3. An illustration of the table T n,r from Theorem 5. The regions of the table are indicated, and in each diagonal region the identity entries corresponding to the edges of type (T1) and (T2) from the spanning tree T n,r are indicated. The diagonal regions vary in size, with the bottom right diagonal region ∆(n − r + 1 < · · · < n) having just a single entry. The singular square indicated by the quadruple of shaded squares illustrates the proof of Lemma 11.
Note that T n,r is not exactly the same as the Rees structure matrix P r since T n,r contains all products Y X even if Y X does not have rank r.
The aim of Theorem 5 is to show that for every symbol K appearing in the table, the λ-graph of K is strongly connected. The structure of the proof is outlined in Figure 3, where a strong path of length 3 is indicated. The first and last edges of this path are strong edges because of the singular squares indicated by the quadruples of diamonds and circles, respectively. The remaining edge of this path is a strong edge as a consequence of Lemma 11. In Lemma 12 we prove that the λ-graph of K restricted to the small dark grey region of the table is strongly connected. Then in Corollary 2 we prove that the λ-graph of K restricted to the larger light grey region of the table is strongly connected. This is done by finding a strong path from every occurrence of K in the light grey region to an occurrence of K in the dark grey region. Finally, we complete the proof of Theorem 5 by finding a strong path from an arbitrary occurrence of K into the light grey region.
Before going on to prove Theorem 5 let us see how Lemma 10 may be deduced from it.
Proof of Lemma 10. Let Y, Y ′ ∈ Y r and X, X ′ ∈ X r and suppose that P r (Y, X) = P r (Y ′ , X ′ ) = 0. If (Y, X) and (Y ′ , X ′ ) are connected by a strong edge in the Rees structure matrix P r then applying Lemma 2, equation (2.5) and relation (2.9) we may deduce that f X,Y = f X ′ ,Y ′ . It follows that if (Y, X) and (Y ′ , X ′ ) are connected by a strong path in P r then we may deduce that f X,Y = f X ′ ,Y ′ . But by Theorem 5, (Y, X) and (Y ′ , X ′ ) are connected by a strong path in T n,r and therefore it is immediate from the definitions of T n,r and P r that the same path is also a strong path in P r connecting (Y, X) and (Y ′ , X ′ ), proving the lemma.
The rest of this section will, therefore, be devoted to the proof of Theorem 5. As usual, we partition the table T n,r into regions, where the region (i 1 < · · · < i r ) × (j 1 < · · · < j r ) (6.1) is the set of all pairs (Y, X) with LC(Y ) = {i 1 , . . . , i r } and LR(X) = {j 1 , . . . , j r }.
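For concreteness, the leading-column set LC(Y) of a matrix Y in reduced row echelon form (and dually LR(X) for a transposed matrix X) can be computed as follows. This is our own small sketch; the names leading_columns and leading_rows are ours, and the input is assumed to have no zero rows (Y has full rank r).

```python
def leading_columns(Y):
    """LC(Y): the 1-based index of the first non-zero entry in each row
    of Y, assumed to be in reduced row echelon form with no zero rows."""
    return [next(j for j, x in enumerate(row, start=1) if x != 0)
            for row in Y]

def leading_rows(X):
    """LR(X): the leading entries of the transpose of X, for X the
    transpose of a reduced row echelon matrix."""
    return leading_columns(list(zip(*X)))

Y = [[1, 0, 5, 0],
     [0, 1, 2, 0],
     [0, 0, 0, 1]]                     # RRE form, leading columns 1, 2, 4
assert leading_columns(Y) == [1, 2, 4]
assert leading_rows(list(zip(*Y))) == [1, 2, 4]   # LR of the transpose
```

In the notation of (6.1), the pair (Y, X) then lies in the region LC(Y) × LR(X).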
Lemma 11. If (Y, X) and (Y ′ , X ′ ) belong to the same region of T n,r , with Y X = Y ′ X ′ , and either Y = Y ′ , or X = X ′ , then they are strongly connected in T n,r .
Proof. Suppose that Y = Y ′ and that X and X ′ both belong to the region (i 1 < i 2 < · · · < i r ). Then the square in T n,r with rows labelled by Y and I(i 1 |i 2 | · · · |i r ) shows that (Y, X) and (Y, X ′ ) are strongly connected. The other case is dual, using the column labelled by I(i 1 |i 2 | · · · |i r ) T .
The following result, Lemma 12, extends Lemma 9.
Therefore, to prove Theorem 5 it will be sufficient to show that for every entry (Y, X) in T n,r there is a strong path into the region (1 < 2 < · · · < r) × (1 < 2 < · · · < r), and this is what the rest of the proof will be focused on establishing.

Lemma 13. Let K ∈ M r (Q) be an entry in a region (i 1 < i 2 < · · · < i r ) × (j 1 < j 2 < · · · < j r ) (6.2) where i r ≤ n − r and j r ≤ n − r. Then there is a strong path in T n,r from this entry to an entry determined by matrices Y ∈ M r (Q) and D ∈ GL r (Q). Dually, there is a strong path to an entry of the same form, but where D ∈ M r (Q) and Y ∈ GL r (Q).
Proof. We prove the first statement; the second is proved by a dual argument. The proof has two steps, illustrated in Figures 4 and 5, and proceeds along similar lines to the proof of Theorem 4. We shall construct a path lying entirely in the region (6.2), and consequently, by Lemma 11, this path will automatically be a strong path. For the first step of the proof, we have A, B ∈ M r (Q), and since this semigroup is regular there is an idempotent U with U R AB, so U A R AB, and then by (2.3) there is an invertible matrix C ∈ GL r (Q) satisfying U AC = U AB = AB. Since C is invertible, M r (Q)C = M r (Q), and so there exists a matrix X ∈ M r (Q) such that the equation I r×(n−r) (i 1 | · · · |i r )Q + XC = K is satisfied. Combining these observations, in Figure 4 we construct a strong path in the region (6.2) from the pair ([P |A], (Q; B)) to the pair ([I r×(n−r) (i 1 | · · · |i r )|X], (Q; C)), where (Q; B) denotes the block column with Q above B.
For the second step of the proof, we use a dual argument to find a strong path, in the same region, to an entry determined by a matrix D ∈ GL r (Q). This path is given in Figure 5, where V is an idempotent with V L XC, D ∈ GL r (Q), and XC = XCV = DCV . Here D exists by (2.4) since CV L XC. Then, using the fact that D is invertible, Y ∈ M r (Q) is chosen so that the equation I r×(n−r) (i 1 | · · · |i r )I r×(n−r) (i 1 | · · · |i r ) T + DY = K is satisfied. This completes the proof of the lemma.

Lemma 14.
Let K be an entry of the form (6.3), where i r ≤ n − r, j r ≤ n − r, Y ∈ M r (Q) and X ∈ GL r (Q). Then there is a strong path from (6.3) to an entry K in the region (1 < 2 < · · · < r) × (1 < 2 < · · · < r).
Proof. The proof has two stages, first we find a strong path into the region (i 1 < · · · < i r ) × (1 < · · · < r) (6.4) and then apply Lemma 13 and a dual argument to complete the proof. To simplify notation in the proof let I(i 1 | · · · |i r ) = I r×(n−r) (i 1 | · · · |i r ).
The strong paths into the region (j 1 < · · · < j m−1 < m < j m+1 < · · · < j r ), via the column labelled I(j 1 | · · · |j m−1 |m, j m |j m+1 | · · · |j r ) T , are given in Figures 6 and 7. Here Y 1 , Y 2 ∈ M r (Q) have been chosen in such a way that the appropriate entries in the table are equal to K; such choices for Y 1 and Y 2 are possible since X is invertible. This completes the proof of the first stage, since {j 1 , . . . , j m−1 , m, j m+1 , . . . , j r } ≺ {j 1 , . . . , j m−1 , j m , j m+1 , . . . , j r }, and so by induction there is a strong path from (6.5) to an entry of the form (6.6). Next, by Lemma 13, there is a strong path from (6.6) to an entry (6.7), where Z ′ ∈ GL r (Q) is invertible and X ′ need not be. Then a dual argument to the one above gives a strong path from (6.7) to an entry in the region (1 < · · · < r) × (1 < · · · < r), completing the proof of the lemma.
Combining Lemmas 12, 13 and 14 gives the following result showing that we can strongly connect a large portion of the table T n,r . This portion of the table is represented by the large light grey region in Figure 3.
We are now in a position to complete the proof of the main result of this section, Theorem 5. In the following proof, for an r × n matrix A, with r < n, we shall use A[i] to denote its ith column. Dually, given an n × r matrix B, with r < n, we shall use B[i] for its ith row.
Proof of Theorem 5. By Corollary 2 it suffices to show that there is a strong path from every entry K ∈ M r (Q) in T n,r to an entry in the subtable (6.8). To this end, let A ∈ Y r , B ∈ X r and let K = AB ∈ M r (Q). Moreover, suppose that the non-zero columns of A are those with indices k 1 < k 2 < · · · < k t , where t > r. Since t > r, and the column space of A has dimension r, there exists some s such that column k s can be expressed as a (right) linear combination of the columns {k s+1 , k s+2 , . . . , k t }, say with coefficients λ j ∈ Q. Let B ′ be the n × r matrix obtained from B by replacing row B[k s+j ] by B[k s+j ] + λ j B[k s ] for each j, and replacing row B[k s ] by the zero row. Consider one of the modified rows, and let [b 1 , b 2 , . . . , b r ] denote this row vector. Suppose that b v ≠ 0 for some v. Then either the vth entry of B[k s+j ] is non-zero or the vth entry of B[k s ] is non-zero, which in turn, since B is in RRE form, implies that the leading row j v of B (that is, the first row of B to have a non-zero entry in the vth column) must satisfy j v < k s+j . This argument shows that B ′ is in RRE form, and moreover that LR(B ′ ) = LR(B). Since row k s of B ′ is the zero row, we have AB = AB ′ . Next we claim that above we may also choose k s so that it satisfies k s < i r . Indeed, suppose that k s > i r . Consider column k 1 of A. Certainly we have k 1 ∈ {1, . . . , n − r} since r < n/3. Now let A ′ be the matrix obtained by replacing column A[k s ] of A by a copy of A[k 1 ], leaving all the other columns unchanged. Since k s > i r the matrix A ′ is still in RRE form, and since row k s of B ′ is the zero vector the product is not affected and we have AB = AB ′ = A ′ B ′ . Also A ′ is in the same region as A, so the corresponding λ-path is strong. Now columns k 1 and k s of A ′ are equal, and so column k 1 is a linear combination of the columns indexed by {k 2 , k 3 , . . . , k t }. But k 1 ≤ n − r while i r ≥ n − r + 1 by assumption, and so k 1 < i r . Therefore, running once again through the argument given in the previous paragraph, we may suppose without loss of generality that k s < i r .
So now suppose that k s < i r in A. Take the least v such that i v > k s , which must exist since k s < i r . Then define a matrix A ′′ obtained by replacing column k s of A by a copy of A[i v ] (i.e. the unit vector with zeros everywhere except in position v). Clearly A ′′ is in RRE form. Again we see that AB = AB ′ = A ′′ B ′ , since B ′ [k s ] is the zero vector 0 1×r . Also the edge between AB ′ and A ′′ B ′ is easily seen to be a strong edge by considering the column indexed by the scattered identity matrix I(i 1 , i 2 , . . . , i r ) T , and computing A I(i 1 , i 2 , . . . , i r ) T = A ′′ I(i 1 , i 2 , . . . , i r ) T = I r .
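The column-replacement tricks above both rest on the same elementary fact: if row k s of B ′ is zero, then column k s of A never contributes to the product AB ′ , so that column may be overwritten freely while the product, and hence the table entry, is unchanged. A small sketch over the integers (the helper names and the rough is_rre checker are ours; for simplicity the checker permits zero rows anywhere):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def is_rre(A):
    """Rough reduced-row-echelon check: each leading entry is 1, lies
    strictly to the right of the previous one, and is alone in its column."""
    lead = -1
    for i, row in enumerate(A):
        nz = [j for j, x in enumerate(row) if x != 0]
        if not nz:
            continue                       # zero rows allowed for simplicity
        j = nz[0]
        if row[j] != 1 or j <= lead:
            return False
        if any(A[k][j] != 0 for k in range(len(A)) if k != i):
            return False
        lead = j
    return True

A  = [[1, 0, 3, 0],
      [0, 1, 2, 0]]
B  = [[1, 0],
      [0, 1],
      [0, 0],            # zero row k_s: column 2 of A is irrelevant to A*B
      [4, 5]]
A2 = [[1, 0, 9, 0],      # column 2 of A overwritten arbitrarily
      [0, 1, 7, 0]]
assert is_rre(A) and is_rre(A2)
assert matmul(A, B) == matmul(A2, B)
```

This is exactly why AB = AB ′ = A ′ B ′ = A ′′ B ′ in the argument above: every replacement touches only a column of A whose matching row of B ′ is zero.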

Uncovering the multiplication table of the general linear group
By this stage in the proof we have succeeded in identifying all of the labels in the table with the corresponding elements of the group GL r (Q). So, after performing the identifications f A,B = f X,Y whenever BA = Y X, we obtain a presentation P ′ r,n with generators F ′ = {f U : U ∈ GL r (Q)}. The following easy lemma now completes the proof of our main result by showing that, under our assumption r < n/3, the relations f U f V = f UV hold in this presentation for all U, V ∈ GL r (Q). Proof. We show that such a relation appears among the relations (2.9) by finding an appropriate singular square. Such a singular square is illustrated in Figure 8, completing the proof of the lemma.
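The multiplication table presentation in question has one generator f U for each U ∈ GL r (Q) and one relation f U f V = f UV for each pair of elements. As a toy illustration of the table being encoded (our own example, not part of the proof), one can enumerate GL 2 (F 2 ) by brute force and verify that its multiplication table is closed and that its size matches the order formula (2² − 1)(2² − 2) = 6:

```python
from itertools import product

def gl2_f2():
    """All invertible 2x2 matrices over the two-element field F_2."""
    return [((a, b), (c, d))
            for a, b, c, d in product((0, 1), repeat=4)
            if (a * d - b * c) % 2 == 1]          # non-zero determinant mod 2

def mul(M, N):
    """Matrix product over F_2."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

G = gl2_f2()
assert len(G) == (2 ** 2 - 1) * (2 ** 2 - 2)      # |GL_2(F_2)| = 6
# the multiplication table f_U f_V = f_{UV} stays inside G
assert all(mul(U, V) in G for U in G for V in G)
```

Each entry mul(U, V) of this table corresponds to one relation f U f V = f UV in the multiplication table presentation of the group.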
This completes all the steps of the proof of the main theorem, Theorem 1, as outlined in Section 3.

Concluding remarks
The obvious outstanding question that remains is whether our main result, Theorem 1, is true more generally for rank r components in the range n/3 ≤ r < n − 1. The proof given here certainly does not extend in a straightforward way to higher values of r. In terms of the outline of the proof given in Section 3, Stage 1 of the proof does carry across and holds for all r in the range 1 ≤ r ≤ n − 2. However, for both Stages 2 and 3 of the proof the assumption r < n/3 is used. Let us now see that, as they stand, the results in these sections do not extend to the case n/3 ≤ r < n − 1. Recall that in Stage 2 we show the abstract generators can be identified in such a way as to put them (after this identification) into bijective correspondence with the elements of GL r (Q), and then in Stage 3 the full multiplication table is shown to arise in the Rees structure matrix P r . In this way the proof given here shows that, under the assumption r < n/3, the presentation P r,n given in equations (2.6)-(2.9) contains a copy of the multiplication table presentation of GL r (Q). Clearly a necessary condition for this to be possible is that the set F has size at least equal to the size of GL r (Q). However, without the assumption r < n/3 it is not always true that the size of F is at least that of GL r (Q). This can be seen by a simple counting argument. Indeed, if F q is the finite field with q elements, then the number of R-classes in the D-class D r of M n (F q ) is precisely the number of r-dimensional subspaces of an n-dimensional vector space over F q , which is given by the Gaussian coefficient [n r] q = ((q n − 1)(q n−1 − 1) · · · (q n−r+1 − 1)) / ((q r − 1)(q r−1 − 1) · · · (q − 1)), and the number of idempotents in each R-class of D r is easily seen to be equal to q r(n−r) . On the other hand, the size of the general linear group GL m (F q ) is well known to be given by the formula |GL m (F q )| = (q m − q 0 )(q m − q 1 ) · · · (q m − q m−1 ).
So, for example, if we take n = 7 and r = 7 − 2 = 5 and consider the D-class D 5 of M 7 (F 2 ), then the number of idempotents in D 5 is given by [7 5] 2 · 2 5·2 = 2667 · 1024 = 2,731,008, which is easily checked to be strictly less than the number of elements in the group GL 5 (F 2 ), which, using the above formula, is equal to 9,999,360. Therefore, if Theorem 1 does extend to values n/3 ≤ r < n − 1, then the reason the theorem holds must be different from the reason it holds for low rank r < n/3, namely that the Rees structure matrix P r encodes (in a particular way) the full multiplication table of the general linear group GL r (Q).
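The counting argument above is easy to verify directly. The following sketch is ours (the helper names are made up); it computes the Gaussian coefficient, the resulting idempotent count for D 5 in M 7 (F 2 ), and |GL 5 (F 2 )|:

```python
def gaussian_binomial(n, r, q):
    """Number of r-dimensional subspaces of an n-dimensional space over F_q."""
    num = den = 1
    for i in range(r):
        num *= q ** (n - i) - 1
        den *= q ** (r - i) - 1
    return num // den        # the quotient is always an integer

def gl_order(m, q):
    """|GL_m(F_q)| = (q^m - q^0)(q^m - q^1) ... (q^m - q^(m-1))."""
    result = 1
    for i in range(m):
        result *= q ** m - q ** i
    return result

n, r, q = 7, 5, 2
idempotents = gaussian_binomial(n, r, q) * q ** (r * (n - r))
assert gaussian_binomial(n, r, q) == 2667
assert idempotents == 2731008            # idempotents in D_5 of M_7(F_2)
assert gl_order(r, q) == 9999360         # |GL_5(F_2)|
assert idempotents < gl_order(r, q)      # too few generators for the table
```

The final assertion is exactly the obstruction described above: for n = 7, r = 5 the set F is too small to carry a copy of the multiplication table of GL 5 (F 2 ).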
The main result of [16] for the full transformation monoid T n , and the main result Theorem 1, suggest that it may well be worth investigating maximal subgroups of endomorphism monoids of finite dimensional independence algebras (in the sense of [4,11]) which form a class of monoids generalising both the full transformation monoid and the full linear monoid over a field. In particular this may provide a route to proving a common generalisation of Theorem 1 and the corresponding result for T n established in [16].