Eigenvectors from eigenvalues: A survey of a basic identity in linear algebra

By Peter B. Denton, Stephen J. Parke, Terence Tao, and Xining Zhang

Abstract

If $A$ is an $n\times n$ Hermitian matrix with eigenvalues $\lambda_1(A),\dots,\lambda_n(A)$ and $i,j = 1,\dots,n$, then the $j$th component $v_{i,j}$ of a unit eigenvector $v_i$ associated to the eigenvalue $\lambda_i(A)$ is related to the eigenvalues $\lambda_1(M_j),\dots,\lambda_{n-1}(M_j)$ of the minor $M_j$ of $A$ formed by removing the $j$th row and column by the formula
$$|v_{i,j}|^2 \prod_{k=1;\,k\neq i}^{n}\big(\lambda_i(A)-\lambda_k(A)\big) = \prod_{k=1}^{n-1}\big(\lambda_i(A)-\lambda_k(M_j)\big).$$

We refer to this identity as the eigenvector-eigenvalue identity and show how this identity can also be used to extract the relative phases between the components of any given eigenvector. Despite the simple nature of this identity and the extremely mature state of development of linear algebra, this identity was not widely known until very recently. In this survey we describe the many times that this identity, or variants thereof, have been discovered and rediscovered in the literature (with the earliest precursor we know of appearing in 1834). We also provide a number of proofs and generalizations of the identity.

1. Introduction

If $A$ is an $n\times n$ Hermitian matrix, we denote its $n$ real eigenvalues by $\lambda_1(A),\dots,\lambda_n(A)$. The ordering of the eigenvalues will not be of importance in this survey, but for sake of concreteness let us adopt the convention of nondecreasing eigenvalues:
$$\lambda_1(A) \le \dots \le \lambda_n(A).$$

If $1 \le j \le n$, let $M_j$ denote the $(n-1)\times(n-1)$ minor formed from $A$ by deleting the $j$th row and column from $A$. This is again a Hermitian matrix, and thus has $n-1$ real eigenvalues $\lambda_1(M_j),\dots,\lambda_{n-1}(M_j)$, which for sake of concreteness we again arrange in nondecreasing order. In particular we have the well known Cauchy interlacing inequalities (see, e.g., Reference Wil1963, pp. 103–104)
$$\lambda_i(A) \le \lambda_i(M_j) \le \lambda_{i+1}(A) \tag{1}$$

for $1 \le i \le n-1$.

By the spectral theorem, we can find an orthonormal basis of eigenvectors $v_1,\dots,v_n$, where the $v_i$ are in $\mathbb{C}^n$ and associated to the eigenvalues $\lambda_1(A),\dots,\lambda_n(A)$, respectively. For any $i,j = 1,\dots,n$, let $v_{i,j}$ denote the $j$th component of $v_i$. This survey paper is devoted to the following elegant relation, which we will call the eigenvector-eigenvalue identity, relating this eigenvector component to the eigenvalues of $A$ and $M_j$:

Theorem 1 (Eigenvector-eigenvalue identity).

With the notation as above, we have
$$|v_{i,j}|^2 \prod_{k=1;\,k\neq i}^{n}\big(\lambda_i(A)-\lambda_k(A)\big) = \prod_{k=1}^{n-1}\big(\lambda_i(A)-\lambda_k(M_j)\big). \tag{2}$$

If one lets $p_A(\lambda) := \det(\lambda I_n - A)$ denote the characteristic polynomial of $A$,
$$p_A(\lambda) = \prod_{k=1}^{n} \big(\lambda - \lambda_k(A)\big), \tag{3}$$

where $I_n$ denotes the $n \times n$ identity matrix, and similarly lets $p_{M_j}(\lambda) := \det(\lambda I_{n-1} - M_j)$ denote the characteristic polynomial of $M_j$,
$$p_{M_j}(\lambda) = \prod_{k=1}^{n-1} \big(\lambda - \lambda_k(M_j)\big),$$

then the derivative of $p_A$ at $\lambda = \lambda_i(A)$ is equal to
$$p_A'\big(\lambda_i(A)\big) = \prod_{k=1;\,k\neq i}^{n} \big(\lambda_i(A) - \lambda_k(A)\big),$$

and so Equation 2 can be equivalently written in the characteristic polynomial form
$$|v_{i,j}|^2\, p_A'\big(\lambda_i(A)\big) = p_{M_j}\big(\lambda_i(A)\big). \tag{4}$$

Example 2.

If we set $n = 3$ and
$$A = \begin{pmatrix} 1 & 1 & -1 \\ 1 & 3 & 1 \\ -1 & 1 & 3 \end{pmatrix},$$

then the eigenvectors and eigenvalues are
$$\lambda_1(A) = 0,\ v_1 = \frac{1}{\sqrt{6}}\begin{pmatrix}2\\-1\\1\end{pmatrix};\qquad \lambda_2(A) = 3,\ v_2 = \frac{1}{\sqrt{3}}\begin{pmatrix}1\\1\\-1\end{pmatrix};\qquad \lambda_3(A) = 4,\ v_3 = \frac{1}{\sqrt{2}}\begin{pmatrix}0\\1\\1\end{pmatrix},$$

with minors and eigenvalues given by
$$M_1 = \begin{pmatrix}3&1\\1&3\end{pmatrix},\ \lambda_{1,2}(M_1) = 2, 4;\qquad M_2 = \begin{pmatrix}1&-1\\-1&3\end{pmatrix},\ M_3 = \begin{pmatrix}1&1\\1&3\end{pmatrix},\ \lambda_{1,2}(M_2) = \lambda_{1,2}(M_3) = 2 \mp \sqrt{2};$$

one can observe the interlacing inequalities Equation 1. One can then verify Equation 2 for all $i, j = 1, 2, 3$; for instance, for $i = j = 1$,
$$|v_{1,1}|^2\,\big(\lambda_1(A)-\lambda_2(A)\big)\big(\lambda_1(A)-\lambda_3(A)\big) = \tfrac{2}{3}(0-3)(0-4) = 8 = (0-2)(0-4) = \big(\lambda_1(A)-\lambda_1(M_1)\big)\big(\lambda_1(A)-\lambda_2(M_1)\big).$$

One can also verify Equation 4 for this example after computing
$$p_A(\lambda) = \lambda(\lambda-3)(\lambda-4), \qquad p_A'(0) = 12,\quad p_A'(3) = -3,\quad p_A'(4) = 4.$$

Note that $p_{M_1}(\lambda) + p_{M_2}(\lambda) + p_{M_3}(\lambda) = p_A'(\lambda)$, which is needed for the column normalization; see (x) in the consistency checks below.

Numerical code to verify the identity can be found at Reference Den2019.
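The identity is also easy to test numerically. The following is a minimal NumPy sketch (independent of the code at Reference Den2019; the matrix size and random seed are arbitrary choices) that checks Equation 2 for a random Hermitian matrix:

```python
import numpy as np

# A sketch verifying the eigenvector-eigenvalue identity (2)
# for a random n x n Hermitian matrix.
rng = np.random.default_rng(0)
n = 6
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2                      # random Hermitian matrix

lam, V = np.linalg.eigh(A)                    # lam ascending; columns of V are unit eigenvectors

def minor(A, j):
    """Remove the j-th row and column (0-indexed)."""
    return np.delete(np.delete(A, j, axis=0), j, axis=1)

max_err = 0.0
for i in range(n):
    for j in range(n):
        lhs = abs(V[j, i])**2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
        mu = np.linalg.eigvalsh(minor(A, j))  # eigenvalues of M_j
        rhs = np.prod(lam[i] - mu)
        max_err = max(max_err, abs(lhs - rhs))

print("max deviation in identity (2):", max_err)
```

The deviation is at the level of floating point roundoff.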

Consistency Checks

Theorem 1 passes a number of basic consistency checks.

(i)

Dilation symmetry. If one multiplies the matrix $A$ by a real scalar $c$, then the eigenvalues of $A$ and $M_j$ also get multiplied by $c$, while the coefficients $v_{i,j}$ remain unchanged; both sides of Equation 2 then acquire the same factor $c^{n-1}$, which does not affect the truth of Equation 2. To put it another way, if one assigns units to the entries of $A$, then the eigenvalues of $A$ and $M_j$ acquire the same units, while $|v_{i,j}|^2$ remains dimensionless, and the identity Equation 2 is dimensionally consistent.

(ii)

Translation symmetry. If one adds a scalar multiple $cI_n$ of the identity to $A$, then the eigenvalues of $A$ and $M_j$ are shifted by $c$, while the coefficient $v_{i,j}$ remains unchanged. Thus both sides of Equation 2 remain unaffected by such a transformation.

(iii)

Permutation symmetry. Permuting the eigenvalues of $A$ or $M_j$ does not affect either side of Equation 2 (provided one also permutes the index $i$ accordingly). Permuting the ordering of the rows (and columns) of $A$, as well as the index $j$, similarly has no effect on Equation 2.

(iv)

First degenerate case. If $v_{i,j}$ vanishes, then the eigenvector $v_i$ for $A$ also becomes an eigenvector for $M_j$ with the same eigenvalue $\lambda_i(A)$ after deleting the $j$th coefficient. In this case, both sides of Equation 2 vanish.

(v)

Second degenerate case. If the eigenvalue $\lambda_i(A)$ of $A$ occurs with multiplicity greater than one, then by the interlacing inequalities Equation 1 it also occurs as an eigenvalue of $M_j$. Again in this case, both sides of Equation 2 vanish.

(vi)

Compatibility with interlacing. More generally, the identity Equation 2 is consistent with the interlacing Equation 1: the interlacing forces the right-hand side of Equation 2 to have the same sign as the product $\prod_{k\neq i}(\lambda_i(A)-\lambda_k(A))$ on the left-hand side, which is consistent because the $j$th component of the unit eigenvector $v_i$ has magnitude at most $1$.

(vii)

Phase symmetry. One has the freedom to multiply each eigenvector $v_i$ by an arbitrary complex phase $e^{\sqrt{-1}\theta_i}$ without affecting the matrix $A$ or its minors $M_j$. But both sides of Equation 2 remain unchanged when one does so.

(viii)

Diagonal case. If $A$ is a diagonal matrix with diagonal entries $\lambda_1(A),\dots,\lambda_n(A)$, then $|v_{i,j}|^2$ equals $1$ when $i = j$ and zero otherwise, while the eigenvalues of $M_j$ are formed from those of $A$ by deleting one copy of $\lambda_j(A)$. In this case one can easily verify Equation 2 by hand.

(ix)

Row normalization. As the eigenvectors $v_1,\dots,v_n$ form the columns of a unitary matrix, one must have the identity $\sum_{i=1}^n |v_{i,j}|^2 = 1$ for all $j$. Assuming for simplicity that the eigenvalues $\lambda_1(A),\dots,\lambda_n(A)$ are distinct, this follows easily from the algebraic identity
$$\sum_{i=1}^n \frac{\prod_{k=1}^{n-1}(\lambda_i - \mu_k)}{\prod_{k=1;\,k\neq i}^{n}(\lambda_i - \lambda_k)} = 1$$
for any distinct complex numbers $\lambda_1,\dots,\lambda_n$ and any complex numbers $\mu_1,\dots,\mu_{n-1}$, which can be seen by integrating the rational function $\frac{\prod_{k=1}^{n-1}(z-\mu_k)}{\prod_{k=1}^{n}(z-\lambda_k)}$ along a large circle and applying the residue theorem. See also Remark 9 below.

(x)

Column normalization. As the eigenvectors $v_i$ are unit vectors, one must have $\sum_{j=1}^n |v_{i,j}|^2 = 1$ for all $i$. To verify this, use the translation symmetry (ii) to normalize $\lambda_i(A) = 0$, and then observe (e.g., from Equation 7) that $\sum_{j=1}^n p_{M_j}(0)$ is $(-1)^{n-1}$ times the $(n-1)$st elementary symmetric function of the eigenvalues $\lambda_1(A),\dots,\lambda_n(A)$ and thus equal (since $\lambda_i(A)$ vanishes) to $\prod_{k\neq i}\big(0 - \lambda_k(A)\big) = p_A'(0)$. Comparing this with Equation 4, we obtain $\sum_{j=1}^n |v_{i,j}|^2 = 1$ as desired. Alternatively, one can see from Jacobi’s formula that
$$p_A'(\lambda) = \sum_{j=1}^n p_{M_j}(\lambda),$$
which when combined with Equation 4 also recovers the identity $\sum_{j=1}^n |v_{i,j}|^2 = 1$. Jacobi’s formula gives us the needed relationships between the eigenvalues of $A$ and the eigenvalues of the $M_j$’s. They are
$$\sum_{j=1}^n e_k(M_j) = (n-k)\, e_k(A)$$
for $0 \le k \le n-1$, where $e_k(\cdot)$ denotes the $k$th elementary symmetric polynomial of the eigenvalues of the indicated matrix, e.g., $\sum_{j=1}^n \operatorname{tr}(M_j) = (n-1)\operatorname{tr}(A)$.

(xi)

Relative phase information. As mentioned in (vii) above, the phase of any individual eigenvector $v_i$ is arbitrary; therefore the relative phase between $v_i$ and $v_{i'}$, $i \neq i'$, is arbitrary. However, the relative phases between the components of any single $v_i$, say between $v_{i,j}$ and $v_{i,j'}$ for $j \neq j'$, are not arbitrary. Identity Equation 2 can be used to extract these relative phases as follows: consider a unitary transformation $A \mapsto UAU^*$ of the matrix and of its eigenvectors $v_i \mapsto Uv_i$, where $U$ acts only on the $j$th and $j'$th coordinates, mixing them through a rotation possibly combined with a complex phase. Applying Equation 2 to the original $A$ and to two such unitarily transformed matrices gives us the information needed to extract the relative phase between $v_{i,j}$ and $v_{i,j'}$. Note that $M_j$ and $M_{j'}$ are not invariant under this particular unitary transformation, but the eigenvalues of $A$ and the other minors $M_{j''}$, $j'' \neq j, j'$, are invariant. Furthermore, the unitarity conditions $\sum_{j=1}^n v_{i,j}\overline{v_{i',j}} = 0$ for $i \neq i'$ can also be derived in this fashion. For further discussion on the relative phases, see the beginning of Section 4.

We also note that, since both sides of Equation 2 vary continuously with the matrix $A$, it is possible to treat all eigenvalues as simple via the usual limiting argument.
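Several of the consistency checks above can likewise be confirmed numerically. The sketch below (an illustration, not part of the cited references) recovers the quantities $|v_{i,j}|^2$ from eigenvalue data alone via Equation 2, and then confirms the interlacing Equation 1 and the normalizations (ix), (x):

```python
import numpy as np

# Recover |v_{i,j}|^2 from the eigenvalues of A and its minors M_j alone,
# then check interlacing (1) and the row/column normalizations (ix), (x).
rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n))
A = (X + X.T) / 2

lam = np.linalg.eigvalsh(A)
W = np.empty((n, n))                  # W[i, j] will hold |v_{i,j}|^2
interlaced = True
for j in range(n):
    mu = np.linalg.eigvalsh(np.delete(np.delete(A, j, 0), j, 1))
    interlaced &= bool(np.all(lam[:-1] <= mu + 1e-12) and np.all(mu <= lam[1:] + 1e-12))
    for i in range(n):
        num = np.prod(lam[i] - mu)
        den = np.prod([lam[i] - lam[k] for k in range(n) if k != i])
        W[i, j] = num / den

print("interlacing holds:", interlaced)
print("row sums:", W.sum(axis=0))     # (ix): sum over i of |v_{i,j}|^2 should be 1
print("col sums:", W.sum(axis=1))     # (x): sum over j of |v_{i,j}|^2 should be 1
```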

The eigenvector-eigenvalue identity has a surprisingly complicated history in the literature, having appeared in some form or another (albeit often in a lightly disguised form) in over two dozen references, and being independently rediscovered a half-dozen times, in fields as diverse as numerical linear algebra, random matrix theory, inverse eigenvalue problems, graph theory (including chemical graph theory, graph reconstruction, and walks on graphs), and neutrino physics; see Figure 1. While the identity applies to all Hermitian matrices, and extends in fact to normal matrices and more generally to diagonalizable matrices, it has found particular application in the special case of symmetric tridiagonal matrices (such as Jacobi matrices), which are of particular importance in several fundamental algorithms in numerical linear algebra.

While the eigenvector-eigenvalue identity is moderately familiar to some mathematical communities, it is not as broadly well known as other standard identities in linear algebra, such as Cramer’s rule Reference Cra1750 or the Cauchy determinant formula Reference Cau1841 (though, as we shall shortly see, it can be readily derived from either of these identities). While several of the papers in which some form of the identity was discovered went on to be cited several times by subsequent work, the citation graph is only very weakly connected; in particular, Figure 1 reveals that many of the citations coming from the earliest work on the identity did not propagate to later works, which instead were based on independent rediscoveries of the identity (or one of its variants). In many cases, the identity was not highlighted as a relation between eigenvectors and eigenvalues, but was instead introduced in passing as a tool to establish some other application; also, the form of the identity and the notation used varied widely from appearance to appearance, making it difficult to search for occurrences of the identity by standard search engines. The situation changed after a popular science article by Wolchover Reference Wol2019 reported on the most recent rediscovery Reference DPZ2020, Reference DPTZ2019 of the identity by ourselves. In the wake of the publicity generated by that article, we received many notifications (see Section 5) of the disparate places in the literature where the eigenvector-eigenvalue identity, or an identity closely related to it, was discovered. Effectively, this “crowdsourced” the task of collating all these references together. In this paper, we survey all the appearances of the eigenvector-eigenvalue identity that we are aware of as a consequence of these efforts, as well as provide several proofs, generalizations, and applications of the identity.
Finally, we speculate on some reasons for the limited nature of the dissemination of this identity in prior literature.

2. Proofs of the identity

The identity Equation 2 can be readily established from existing standard identities in the linear algebra literature. We now give several such proofs.

2.1. The adjugate proof

We first give a proof using adjugate matrices, which is a purely “polynomial” proof that avoids any invertibility, division, or nondegeneracy hypotheses in the argument. In particular, as we remark below, it has an extension to (diagonalizable) matrices that take values in arbitrary commutative rings. This argument appears for instance in Reference Par1980, Section 7.9.

Recall that if $B$ is an $n \times n$ matrix, the adjugate matrix $\operatorname{adj}(B)$ is given by the formula
$$\operatorname{adj}(B) := \Big( (-1)^{i+j} \det B_{ji} \Big)_{1 \le i,j \le n}, \tag{5}$$

where $B_{ji}$ is the $(n-1)\times(n-1)$ matrix formed by deleting the $j$th row and $i$th column from $B$. From Cramer’s rule we have the identity
$$\operatorname{adj}(B)\, B = B\, \operatorname{adj}(B) = \det(B)\, I_n.$$

If $B$ is a diagonal matrix with (complex) entries $\lambda_1,\dots,\lambda_n$, then $\operatorname{adj}(B)$ is also a diagonal matrix with $i$th entry $\prod_{k\neq i}\lambda_k$. More generally, if $A$ is a normal matrix with diagonalization
$$A = \sum_{i=1}^{n} \lambda_i(A)\, v_i v_i^*, \tag{6}$$

where $v_1,\dots,v_n$ are an orthonormal basis of eigenvectors of $A$ and $v_i^*$ is the conjugate transpose of $v_i$, then $\operatorname{adj}(A)$ has the same basis of eigenvectors with diagonalization
$$\operatorname{adj}(A) = \sum_{i=1}^{n} \Big(\prod_{k\neq i}\lambda_k(A)\Big)\, v_i v_i^*. \tag{7}$$

If one replaces $A$ by $\lambda I_n - A$ for any complex number $\lambda$, we therefore have⁠Footnote1
$$\operatorname{adj}(\lambda I_n - A) = \sum_{i=1}^{n} \Big(\prod_{k\neq i}\big(\lambda - \lambda_k(A)\big)\Big)\, v_i v_i^*. \tag{8}$$

1

To our knowledge, this identity first appears in Reference Hal1942, p. 157.

If one specializes Equation 8 to the case $\lambda = \lambda_i(A)$ for some $1 \le i \le n$, then all but one of the summands on the right-hand side vanish, and the adjugate matrix becomes a scalar multiple of the rank one projection $v_i v_i^*$,
$$\operatorname{adj}\big(\lambda_i(A)\, I_n - A\big) = \prod_{k\neq i}\big(\lambda_i(A) - \lambda_k(A)\big)\, v_i v_i^*.$$

Extracting out the $jj$ component of this identity using Equation 5, we conclude that
$$p_{M_j}\big(\lambda_i(A)\big) = \prod_{k\neq i}\big(\lambda_i(A) - \lambda_k(A)\big)\, |v_{i,j}|^2, \tag{9}$$

which is equivalent to Equation 2. In fact this shows that the eigenvector-eigenvalue identity holds for normal matrices as well as Hermitian matrices (despite the fact that the minor $M_j$ need not be Hermitian or normal in this case). Of course in this case the eigenvalues are not necessarily real and thus cannot be arranged in increasing order, but the order of the eigenvalues plays no role in the identity Equation 2.

Remark 3.

The same argument also yields an off-diagonal variant
$$v_{i,j}\,\overline{v_{i,j'}}\, \prod_{k\neq i}\big(\lambda_i(A)-\lambda_k(A)\big) = (-1)^{j+j'} \det\big(\lambda_i(A)\, I_{j'j} - M_{j'j}\big) \tag{10}$$

for any $1 \le j, j' \le n$, where $M_{j'j}$ denotes the minor of $A$ formed by deleting the $j'$th row and $j$th column, and $I_{j'j}$ is the corresponding minor of the identity matrix $I_n$. When $j = j'$, this minor $I_{jj}$ is simply equal to $I_{n-1}$ and the determinant can be expressed in terms of the eigenvalues of the minor $M_j$; however when $j \neq j'$ there is no obvious way to express the left-hand side of Equation 10 in terms of eigenvalues of $M_{j'j}$ (though one can still of course write the determinant as the product of the eigenvalues of $\lambda_i(A) I_{j'j} - M_{j'j}$). Another way of viewing Equation 10 is that for every $1 \le i, j' \le n$, the vector with $j$th entry
$$(-1)^{j+j'} \det\big(\lambda_i(A)\, I_{j'j} - M_{j'j}\big)$$

is a nonnormalized eigenvector associated to the eigenvalue $\lambda_i(A)$; this observation appears for instance in Reference Gan1959, pp. 85–86. See Reference Van2014 for some further identities relating the components of the eigenvector to various determinants.

Remark 4.

This remark is due to Vassilis Papanicolaou.⁠Footnote2 The above argument also applies to nonnormal matrices $A$, so long as they are diagonalizable with some eigenvalues $\lambda_1,\dots,\lambda_n$ (not necessarily real or distinct). Indeed, if we let $v_1,\dots,v_n$ be a basis of right eigenvectors of $A$ (so that $Av_i = \lambda_i v_i$ for all $i$), and let $w_1,\dots,w_n$ be the corresponding dual basis⁠Footnote3 of left eigenvectors (so $w_i^* A = \lambda_i w_i^*$, and $w_i^* v_{i'}$ is equal to $1$ when $i = i'$ and $0$ otherwise), then we have the diagonalization
$$A = \sum_{i=1}^n \lambda_i\, v_i w_i^*,$$

3

In the case when $A$ is a normal matrix and the $v_i$ are unit eigenvectors, the dual eigenvector $w_i$ would simply be $v_i$, so that $w_i^*$ is the conjugate transpose of $v_i$.

and one can generalize Equation 8 to
$$\operatorname{adj}(\lambda I_n - A) = \sum_{i=1}^n \Big(\prod_{k\neq i}(\lambda - \lambda_k)\Big)\, v_i w_i^*, \tag{11}$$

leading to an extension
$$p_{M_j}(\lambda_i) = \prod_{k\neq i}(\lambda_i - \lambda_k)\, v_{i,j}\, (w_i^*)_j \tag{12}$$

of Equation 9 to arbitrary diagonalizable matrices. We remark that this argument shows that the identity Equation 12 is in fact valid for any diagonalizable matrix taking values in any commutative ring (not just the complex numbers). The identity Equation 10 may be generalized in a similar fashion; we leave the details to the interested reader.

Remark 5.

As pointed out to us by Darij Grinberg,⁠Footnote4 the identity Equation 4 may be generalized to the nondiagonalizable setting. Namely, one can prove that
$$p_A'(\lambda)\, v_j\, w_j = (w v)\, p_{M_j}(\lambda)$$

for an arbitrary $n \times n$ matrix $A$ (with entries in an arbitrary commutative ring), any $1 \le j \le n$ (with the minor $M_j$ formed from $A$ by removing the $j$th row and column), and any right-eigenvector $v$ (a column vector) and left-eigenvector $w$ (a row vector) with a common eigenvalue $\lambda$, thus $Av = \lambda v$ and $wA = \lambda w$. After reducing to the case $\lambda = 0$, the main step in the proof is to establish two variants of Equation 8 relating the adjugate matrix $\operatorname{adj}(-A)$ to the rank one matrix $vw$. We refer the reader to the comment of Grinberg for further details.

Remark 6.

As observed in Reference Van2014, Appendix A, one can obtain an equivalent identity to Equation 2 by working with the powers $A^m$ in place of $\operatorname{adj}(\lambda I_n - A)$. Indeed, from Equation 6 we have
$$A^m = \sum_{i=1}^n \lambda_i(A)^m\, v_i v_i^*,$$

and hence on extracting the $jj$ component
$$(A^m)_{jj} = \sum_{i=1}^n \lambda_i(A)^m\, |v_{i,j}|^2$$

for all $m \ge 0$ and $1 \le j \le n$. Using Vandermonde determinants (and assuming for sake of argument that the eigenvalues are distinct), one can then solve for the $|v_{i,j}|^2$ in terms of the diagonal entries $(A^m)_{jj}$, $0 \le m \le n-1$, eventually reaching an identity Reference Van2014, Theorem 2 equivalent to Equation 2 (or Equation 10), which in the case when $A$ is the adjacency matrix of a graph can also be expressed in terms of counts of walks of various lengths between pairs of vertices. We refer the reader to Reference Van2014 for further details.
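The moment-based route of Remark 6 can be sketched in a few lines, assuming distinct eigenvalues; the Vandermonde system below is the one described in the text:

```python
import numpy as np

# Recover |v_{i,j}|^2 from the diagonal entries of powers of A by solving
# the Vandermonde system (A^m)_{jj} = sum_i lam_i^m |v_{i,j}|^2, m = 0..n-1.
rng = np.random.default_rng(2)
n = 5
X = rng.standard_normal((n, n))
A = (X + X.T) / 2

lam, V = np.linalg.eigh(A)
j = 0
moments = np.array([np.linalg.matrix_power(A, m)[j, j] for m in range(n)])
Vand = np.vander(lam, n, increasing=True).T   # Vand[m, i] = lam_i^m
w = np.linalg.solve(Vand, moments)            # w[i] should equal |v_{i,j}|^2

print(np.max(np.abs(w - np.abs(V[j, :])**2)))
```

Note that this route is numerically ill-conditioned for large $n$, in contrast to working with the minors directly.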

2.2. The Cramer’s rule proof

Returning now to the case of Hermitian matrices, we give a variant of the above proof of Equation 2 that still relies primarily on Cramer’s rule, but makes no explicit mention of the adjugate matrix. As discussed in the next section, variants of this argument have appeared multiple times in the literature. We first observe that to prove Equation 2 for Hermitian matrices $A$, it suffices to do so under the additional hypothesis that $A$ has simple spectrum (all eigenvalues occur with multiplicity one), or equivalently that
$$\lambda_1(A) < \lambda_2(A) < \dots < \lambda_n(A).$$

This is because any Hermitian matrix with repeated eigenvalues can be approximated to arbitrary accuracy by a Hermitian matrix with simple spectrum, and both sides of Equation 2 vary continuously with $A$ (at least if we avoid the case when $\lambda_i(A)$ occurs with multiplicity greater than one, which is easy to handle anyway by the second degenerate case (v) noted in the introduction).

As before, we diagonalize $A$ in the form Equation 6. For any complex parameter $\lambda$ not equal to one of the eigenvalues $\lambda_i(A)$, the resolvent $(\lambda I_n - A)^{-1}$ can then also be diagonalized as
$$(\lambda I_n - A)^{-1} = \sum_{i=1}^n \frac{v_i v_i^*}{\lambda - \lambda_i(A)}. \tag{13}$$

Extracting out the $jj$ component of this matrix identity using Cramer’s rule Reference Cra1750, we conclude that
$$\frac{\det(\lambda I_{n-1} - M_j)}{\det(\lambda I_n - A)} = \sum_{i=1}^n \frac{|v_{i,j}|^2}{\lambda - \lambda_i(A)},$$

which we can express in terms of eigenvalues as
$$\frac{\prod_{k=1}^{n-1}\big(\lambda - \lambda_k(M_j)\big)}{\prod_{k=1}^{n}\big(\lambda - \lambda_k(A)\big)} = \sum_{i=1}^n \frac{|v_{i,j}|^2}{\lambda - \lambda_i(A)}. \tag{14}$$

Both sides of this identity are rational functions in $\lambda$, and have a simple pole at $\lambda = \lambda_i(A)$ for any given $1 \le i \le n$. Extracting the residue at this pole, we conclude that
$$\frac{\prod_{k=1}^{n-1}\big(\lambda_i(A) - \lambda_k(M_j)\big)}{\prod_{k=1;\,k\neq i}^{n}\big(\lambda_i(A) - \lambda_k(A)\big)} = |v_{i,j}|^2,$$

which rearranges to give Equation 2.

Remark 7.

One can view the above derivation of Equation 2 from Equation 14 as a special case of the partial fractions decomposition
$$\frac{Q(\lambda)}{P(\lambda)} = \sum_{i=1}^n \frac{Q(\lambda_i)}{P'(\lambda_i)\,(\lambda - \lambda_i)}$$

whenever $P$ is a polynomial with distinct roots $\lambda_1,\dots,\lambda_n$ and $Q$ is a polynomial of degree less than that of $P$. Equivalently, this derivation can be viewed as a special case of the Lagrange interpolation formula (see, e.g., Reference AS1964, §25.2)
$$Q(\lambda) = \sum_{i=1}^n Q(\lambda_i) \prod_{k\neq i} \frac{\lambda - \lambda_k}{\lambda_i - \lambda_k}$$

whenever $\lambda_1,\dots,\lambda_n$ are distinct and $Q$ is a polynomial of degree less than $n$.

Remark 8.

A slight variant of this proof was observed by Aram Harrow,⁠Footnote5 inspired by the inverse power method for approximately computing eigenvectors numerically. We again assume simple spectrum. Using the translation invariance noted in consistency check (ii) of the introduction, we may assume without loss of generality that $\lambda_i(A) = 0$. Applying the resolvent identity Equation 13 with $\lambda$ equal to a small nonzero quantity $\varepsilon$, we conclude that
$$(\varepsilon I_n - A)^{-1} = \frac{v_i v_i^*}{\varepsilon} + O(1) \tag{15}$$
as $\varepsilon \to 0$. On the other hand, by Cramer’s rule, the $jj$ component of the left-hand side is
$$\big((\varepsilon I_n - A)^{-1}\big)_{jj} = \frac{p_{M_j}(\varepsilon)}{p_A(\varepsilon)}.$$

Extracting out the top order terms in $\varepsilon$, one obtains Equation 4 and hence Equation 2. A variant of this argument also gives the more general identity
$$\sum_{i':\, \lambda_{i'}(A) = \lambda_i(A)} |v_{i',j}|^2 = \frac{m\, p_{M_j}^{(m-1)}\big(\lambda_i(A)\big)}{p_A^{(m)}\big(\lambda_i(A)\big)} \tag{16}$$
whenever $\lambda_i(A)$ is an eigenvalue of $A$ of some multiplicity $m$. Note that when $m = 1$ we can recover Equation 4 thanks to L’Hôpital’s rule. The right-hand side of Equation 16 can also be interpreted as the residue of the rational function $p_{M_j}(\lambda)/p_A(\lambda)$ at $\lambda_i(A)$.
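The multiplicity-$m$ identity Equation 16 can be spot-checked numerically; the sketch below builds a symmetric matrix with a double eigenvalue (the specific spectrum is an arbitrary choice):

```python
import numpy as np

# Check identity (16): for an eigenvalue lam0 of multiplicity m,
#   sum over that eigenspace of |v_{i,j}|^2 = m * pM^{(m-1)}(lam0) / pA^{(m)}(lam0).
rng = np.random.default_rng(3)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
eigs = np.array([1.0, 1.0, 2.0, 3.0, 4.0])         # eigenvalue 1 has multiplicity m = 2
A = Q @ np.diag(eigs) @ Q.T

lam, V = np.linalg.eigh(A)
m, lam0, j = 2, 1.0, 0

lhs = sum(abs(V[j, i])**2 for i in range(n) if abs(lam[i] - lam0) < 1e-8)

pA = np.poly(lam)                                   # char poly of A from its roots
mu = np.linalg.eigvalsh(np.delete(np.delete(A, j, 0), j, 1))
pM = np.poly(mu)                                    # char poly of M_j
num = np.polyval(np.polyder(pM, m - 1), lam0)       # pM^{(m-1)}(lam0)
den = np.polyval(np.polyder(pA, m), lam0)           # pA^{(m)}(lam0)
rhs = m * num / den

print(abs(lhs - rhs))
```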

An alternate way to arrive at Equation 2 from Equation 14 is as follows. Assume for the sake of this argument that the eigenvalues of $A$ are all distinct from the eigenvalues of $M_j$. Then we can substitute $\lambda = \lambda_k(M_j)$ in Equation 14 and conclude that
$$\sum_{i=1}^n \frac{|v_{i,j}|^2}{\lambda_k(M_j) - \lambda_i(A)} = 0 \tag{17}$$

for $k = 1,\dots,n-1$. Also, since the $v_i$ form an orthonormal basis, we have from expressing the unit vector $e_j$ in this basis that
$$\sum_{i=1}^n |v_{i,j}|^2 = 1. \tag{18}$$

This is a system of $n$ linear equations in the $n$ unknowns $|v_{1,j}|^2,\dots,|v_{n,j}|^2$. For sake of notation let us use permutation symmetry to set $i = n$. From a further application of Cramer’s rule, one can then write
$$|v_{n,j}|^2 = \frac{\det X'}{\det X},$$

where $X$ is the $n \times n$ matrix with $ki$ entry equal to $\frac{1}{\lambda_k(M_j) - \lambda_i(A)}$ when $1 \le k \le n-1$, and equal to $1$ when $k = n$, and $X'$ is the minor of $X$ formed by removing the $n$th row and column. Using the well-known Cauchy determinant identity Reference Cau1841
$$\det\Big(\frac{1}{x_k - y_i}\Big)_{1\le k,i\le m} = \frac{\prod_{1\le k<k'\le m}(x_{k'} - x_k)\, \prod_{1\le i<i'\le m}(y_i - y_{i'})}{\prod_{k=1}^m \prod_{i=1}^m (x_k - y_i)} \tag{19}$$

and inspecting the asymptotics as one of the parameters $x_k$ is sent to infinity (which handles the final row of ones in $X$), we soon arrive at the identities
$$\det X = \frac{\prod_{1\le k<k'\le n-1}\big(\lambda_{k'}(M_j) - \lambda_k(M_j)\big)\, \prod_{1\le i<i'\le n}\big(\lambda_i(A) - \lambda_{i'}(A)\big)}{\prod_{k=1}^{n-1}\prod_{i=1}^{n}\big(\lambda_k(M_j) - \lambda_i(A)\big)}$$

and
$$\det X' = \frac{\prod_{1\le k<k'\le n-1}\big(\lambda_{k'}(M_j) - \lambda_k(M_j)\big)\, \prod_{1\le i<i'\le n-1}\big(\lambda_i(A) - \lambda_{i'}(A)\big)}{\prod_{k=1}^{n-1}\prod_{i=1}^{n-1}\big(\lambda_k(M_j) - \lambda_i(A)\big)},$$

and the identity Equation 2 then follows after a brief calculation.

Remark 9.

The derivation of the eigenvector-eigenvalue identity Equation 2 from Equation 17, as well as the obvious normalization Equation 18, is reversible. Indeed, the identity Equation 2 implies that the rational functions on both sides of Equation 14 have the same residues at each of their (simple) poles, and these functions decay to zero at infinity, hence they must agree by Liouville’s theorem. Specializing Equation 14 to $\lambda = \lambda_k(M_j)$ then recovers Equation 17, while comparing the leading asymptotics of both sides of Equation 14 as $\lambda \to \infty$ recovers Equation 18 (note this also establishes the consistency check (ix) from the introduction). As the identity Equation 17 involves the same quantities as Equation 2, one can thus view Equation 17 as an equivalent formulation of the eigenvector-eigenvalue identity, at least in the case when all the eigenvalues of $A$ and $M_j$ are distinct. The identity Equation 14 (viewing $\lambda$ as a free parameter) can also be interpreted in this fashion.

Remark 10.

The above resolvent-based arguments have a good chance of being extended to certain classes of infinite matrices (e.g., Jacobi matrices), or other Hermitian operators, particularly if they have good spectral properties (e.g., they are trace class). Certainly it is well known that spectral projections of an operator to a single eigenvalue can often be viewed as residues of the resolvent at that eigenvalue, in the spirit of Equation 15, under various spectral hypotheses on the operator in question. The main difficulty is to find a suitable extension of Cramer’s rule to infinite-dimensional settings, which would presumably require some sort of regularized determinant such as the Fredholm determinant. We will not explore this question further here. However, as pointed out to us by Carlos Tomei (personal communication), for reasonable Hermitian infinite matrices such as Jacobi matrices, one can formulate an identity similar to Equation 14 for the upper left coefficient $\big((\lambda - A)^{-1}\big)_{11}$ of the resolvent for $\lambda$ in the upper half-plane, which can for instance be used (in conjunction with the Herglotz representation theorem) to recover the spectral theorem for such matrices.

2.3. Coordinate-free proof

We now give a proof that largely avoids the use of coordinates or matrices, essentially due to Bo Berndtsson.⁠Footnote6 For this proof we assume familiarity with exterior algebra (see, e.g., Reference BM1941, Chapter XVI). The key identity is the following statement.

Lemma 11 (Coordinate-free eigenvector-eigenvalue identity).

Let $T \colon \mathbb{C}^n \to \mathbb{C}^n$ be a self-adjoint linear map that annihilates a unit vector $v$. For each unit vector $e$, let $D(e)$ be the determinant of the quadratic form $(u, u') \mapsto \langle Tu, u' \rangle$ restricted to the orthogonal complement $e^\perp$, where $\langle \cdot, \cdot \rangle$ denotes the Hermitian inner product on $\mathbb{C}^n$. Then one has
$$D(e) = D(v)\, |\langle v, e \rangle|^2 \tag{20}$$

for all unit vectors $e$.

Proof.

The determinant of a quadratic form on an $(n-1)$-dimensional subspace $V$ of $\mathbb{C}^n$ can be expressed as $\langle (\wedge^{n-1} T)\,\omega, \omega \rangle$ for any unit element $\omega$ of the $(n-1)$st exterior power $\wedge^{n-1} V$ (equipped with the usual Hermitian inner product $\langle \cdot, \cdot \rangle$), where the operator $T$ is extended to $\wedge^{n-1}\mathbb{C}^n$ in the usual fashion. If $e$ is a unit vector, then the Hodge dual $*e$ is a unit vector in $\wedge^{n-1}(e^\perp)$, so that we have the identity
$$D(e) = \big\langle (\wedge^{n-1} T)(*e),\, *e \big\rangle. \tag{21}$$

To prove Equation 20, it thus suffices to establish the more general identity
$$\big\langle (\wedge^{n-1} T)(*e),\, *f \big\rangle = D(v)\, \langle v, e \rangle\, \langle f, v \rangle \tag{22}$$

for all $e, f \in \mathbb{C}^n$. If $e$ is orthogonal to $v$, then $*e$ can be expressed as a wedge product of $v$ with an element of $\wedge^{n-2}\mathbb{C}^n$, and hence $(\wedge^{n-1}T)(*e)$ vanishes (as $T$ annihilates $v$), so that Equation 22 holds in this case. If $f$ is orthogonal to $v$, then we again obtain Equation 22 thanks to the self-adjoint nature of $\wedge^{n-1}T$. Finally, when $e = f = v$, the claim follows from Equation 21. Since the identity Equation 22 is sesquilinear in $(e, f)$, the claim follows.

Now we can prove Equation 2. Using translation symmetry we may normalize $\lambda_i(A) = 0$. We apply Lemma 11 to the self-adjoint map $T = A$, setting $v$ to be the null vector $v_i$ and $e$ to be the standard basis vector $e_j$. Working in the orthonormal eigenvector basis $v_1,\dots,v_n$, we have $D(v_i) = \prod_{k\neq i}\lambda_k(A)$; working in the standard basis $e_1,\dots,e_n$, we have $D(e_j) = \det(M_j) = \prod_{k=1}^{n-1}\lambda_k(M_j)$. Finally we have $|\langle v_i, e_j \rangle|^2 = |v_{i,j}|^2$. The claim follows.

Remark 12.

In coordinates, the identity Equation 21 may be rewritten as $D(e) = \langle \operatorname{adj}(T)\, e, e \rangle$. Thus we see that Lemma 11 is basically Equation 8 in disguise. It seems likely that the variant identity in Remark 5 can also be established in a similar coordinate-free fashion.

2.4. Proof using perturbative analysis

Now we give a proof using perturbation theory, which to our knowledge first appears in Reference MD1989. By the usual limiting argument we may assume that $A$ has simple eigenvalues. Let $\varepsilon$ be a small parameter, and consider the rank one perturbation $A + \varepsilon\, e_j e_j^*$ of $A$, where $e_1,\dots,e_n$ is the standard basis. From Equation 3 and cofactor expansion, the characteristic polynomial of this perturbation may be expanded as
$$p_{A + \varepsilon e_j e_j^*}(\lambda) = p_A(\lambda) - \varepsilon\, p_{M_j}(\lambda).$$

On the other hand, from perturbation theory the eigenvalue $\lambda_i(A + \varepsilon e_j e_j^*)$ may be expanded as
$$\lambda_i(A + \varepsilon e_j e_j^*) = \lambda_i(A) + \varepsilon\, |v_{i,j}|^2 + O(\varepsilon^2).$$

If we then Taylor-expand the identity
$$p_{A + \varepsilon e_j e_j^*}\big(\lambda_i(A + \varepsilon e_j e_j^*)\big) = 0$$

and extract the terms that are linear in $\varepsilon$, we conclude that
$$p_A'\big(\lambda_i(A)\big)\, |v_{i,j}|^2 - p_{M_j}\big(\lambda_i(A)\big) = 0,$$

which gives Equation 4 and hence Equation 2.
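The first-order expansion of $\lambda_i(A + \varepsilon e_j e_j^*)$ used in this proof can be confirmed by finite differencing (the step size below is an arbitrary choice):

```python
import numpy as np

# Check that the i-th eigenvalue of A + eps * e_j e_j^* moves at first order
# by eps * |v_{i,j}|^2, as used in the perturbative proof.
rng = np.random.default_rng(4)
n = 5
X = rng.standard_normal((n, n))
A = (X + X.T) / 2

lam, V = np.linalg.eigh(A)
i, j, eps = 2, 1, 1e-6
E = np.zeros((n, n))
E[j, j] = 1.0
lam_pert = np.linalg.eigvalsh(A + eps * E)

derivative = (lam_pert[i] - lam[i]) / eps     # finite-difference approximation
print(abs(derivative - V[j, i]**2))
```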

2.5. Proof using a Cauchy–Binet type formula

Now we give a proof based on a Cauchy–Binet type formula, which is also related to Lemma 11. This argument first appeared in Reference DPTZ2019.

Lemma 13 (Cauchy–Binet type formula).

Let $A$ be an $n \times n$ Hermitian matrix with a zero eigenvalue, thus $Av = 0$ for some unit vector $v$. Then for any $n \times (n-1)$ matrix $B$, one has
$$\det(B^* A B) = (-1)^{n-1}\, p_A'(0)\, \big|\det(B\ v)\big|^2,$$

where $(B\ v)$ denotes the $n \times n$ matrix with right column $v$ and all remaining columns given by $B$.

Proof.

We use a perturbative argument related to that in Section 2.4. Since $Av = 0$, $v^*A = 0$, and $v^*v = 1$, we easily confirm the identity
$$\begin{pmatrix} B^* \\ v^* \end{pmatrix} \big(A + \varepsilon\, vv^*\big) \begin{pmatrix} B & v \end{pmatrix} = \begin{pmatrix} B^* A B + \varepsilon\, B^* v v^* B & \varepsilon\, B^* v \\ \varepsilon\, v^* B & \varepsilon \end{pmatrix}$$

for any parameter $\varepsilon$, where the matrix on the right-hand side is given in block form, with the top left block being an $(n-1) \times (n-1)$ matrix and the bottom right entry being a scalar. Taking determinants of both sides, we conclude that
$$\big|\det(B\ v)\big|^2\, \det\big(A + \varepsilon\, vv^*\big) = \varepsilon\, \det(B^* A B) + O(\varepsilon^2).$$

Since $\det(A + \varepsilon vv^*) = \varepsilon\,(-1)^{n-1} p_A'(0) + O(\varepsilon^2)$, extracting out the $\varepsilon^1$ coefficient of both sides, we obtain the claim.

Remark 14.

In the case when $v$ is the basis vector $e_n$, we may write $A$ in block form as
$$A = \begin{pmatrix} A' & 0 \\ 0 & 0 \end{pmatrix},$$
where $0$ denotes the zero matrix or vector of the appropriate shape, and write
$$B = \begin{pmatrix} C \\ x^* \end{pmatrix}$$
for some $(n-1) \times (n-1)$ matrix $C$ and $(n-1)$-dimensional vector $x$, in which case one can calculate
$$\det(B\ e_n) = \det\begin{pmatrix} C & 0 \\ x^* & 1 \end{pmatrix} = \det C$$

and
$$\det(B^* A B) = \det(C^* A' C) = \det(A')\, |\det C|^2.$$

Since $(-1)^{n-1} p_A'(0) = \det(A')$ in this case, this establishes Lemma 13 in the case $v = e_n$. The general case can then be established from this by replacing $B$ by $UB$ and $A$ by $UAU^*$, where $U$ is any unitary matrix that maps $v$ to $e_n$.

We now prove Equation 4 and hence Equation 2. Using the permutation and translation symmetries, we may normalize $j = n$ and $\lambda_i(A) = 0$. We then apply Lemma 13 with $v := v_i$ and $B := \begin{pmatrix} I_{n-1} \\ 0 \end{pmatrix}$, in which case
$$\det(B^* A B) = \det(M_n) = (-1)^{n-1}\, p_{M_n}(0)$$

and
$$\big|\det(B\ v_i)\big|^2 = |v_{i,n}|^2.$$

Applying Lemma 13, we obtain Equation 4.
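A quick numerical check of Lemma 13, in the form reconstructed above (the spectrum below is an arbitrary choice with a simple zero eigenvalue):

```python
import numpy as np

# Check Lemma 13: for Hermitian A with A v = 0 (unit v) and any n x (n-1)
# matrix B,  det(B^* A B) = (-1)^(n-1) * pA'(0) * |det(B | v)|^2.
rng = np.random.default_rng(5)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.array([0.0, 1.5, -2.0, 3.0, 0.7])       # zero is a simple eigenvalue
A = Q @ np.diag(eigs) @ Q.T
v = Q[:, 0]                                        # unit null vector: A v = 0

B = rng.standard_normal((n, n - 1))
lhs = np.linalg.det(B.T @ A @ B)
pA_prime_0 = np.prod([0.0 - e for e in eigs if e != 0.0])
rhs = (-1)**(n - 1) * pA_prime_0 * np.linalg.det(np.column_stack([B, v]))**2
print(abs(lhs - rhs))
```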

2.6. Proof using an alternate expression for eigenvector component magnitudes

There is an alternate formula for the square $|v_{i,j}|^2$ of the eigenvector component that was introduced in the paper Reference ESY2009, (5.8) of Erdős, Schlein, and Yau in the context of random matrix theory, and then highlighted further in a paper Reference TV2011, Lemma 41 of the third author and Vu; it differs from the eigenvector-eigenvalue formula in that it involves the actual coefficients of $A$ and $M_j$, rather than just their eigenvalues. It was also previously discovered by Gaveau and Schulman Reference GS1995, (2.6). For sake of notation we just give the formula in the $j = n$ case.

Lemma 15 (Alternate expression for $|v_{i,n}|^2$).

Let $A$ be an $n \times n$ Hermitian matrix written in block matrix form as
$$A = \begin{pmatrix} M_n & X \\ X^* & a \end{pmatrix},$$

where $X$ is an $(n-1)$-dimensional column vector and $a$ is a scalar. Let $1 \le i \le n$, and suppose that $\lambda_i(A)$ is not an eigenvalue of $M_n$. Let $u_1,\dots,u_{n-1}$ denote an orthonormal basis of eigenvectors of $M_n$, associated to the eigenvalues $\lambda_1(M_n),\dots,\lambda_{n-1}(M_n)$. Then
$$|v_{i,n}|^2 = \frac{1}{1 + \sum_{k=1}^{n-1} \big(\lambda_k(M_n) - \lambda_i(A)\big)^{-2}\, |u_k^* X|^2}. \tag{23}$$

This lemma is useful in random matrix theory for proving delocalization of eigenvectors of random matrices, which roughly speaking amounts to proving upper bounds on the quantities $|v_{i,j}|$.

Proof.

One can verify that this result enjoys the same translation symmetry as Theorem 1 (see consistency check (ii) from the introduction), so without loss of generality we may normalize $\lambda_i(A) = 0$. If we write $v_i = \begin{pmatrix} w \\ v_{i,n} \end{pmatrix}$ for an $(n-1)$-dimensional column vector $w$, then the eigenvector equation $A v_i = 0$ can be written as the system
$$M_n w + v_{i,n}\, X = 0, \qquad X^* w + a\, v_{i,n} = 0.$$

By hypothesis, $0$ is not an eigenvalue of $M_n$, so we may invert $M_n$ and conclude that
$$w = -v_{i,n}\, M_n^{-1} X.$$

Since $v_i$ is a unit vector, we have $\|w\|^2 + |v_{i,n}|^2 = 1$. Combining these two formulae and using some algebra, we obtain the claim.

Now we can give an alternate proof of Equation 4 and hence Equation 2. By permutation symmetry (iii) it suffices to establish the $j = n$ case. Using limiting arguments as before, we may assume that $A$ has distinct eigenvalues; by further limiting arguments we may also assume that the eigenvalues of $A$ are distinct from those of $M_n$. By translation symmetry (ii) we may normalize $\lambda_i(A) = 0$. Comparing Equation 4 with Equation 23, our task reduces to establishing the identity
$$p_A'(0) = p_{M_n}(0)\, \Big(1 + \sum_{k=1}^{n-1} \lambda_k(M_n)^{-2}\, |u_k^* X|^2\Big).$$

However, for any complex number $\lambda$ not equal to an eigenvalue of $A$ or of $M_n$, we may apply Schur complementation Reference Cot1974 to the matrix
$$\lambda I_n - A = \begin{pmatrix} \lambda I_{n-1} - M_n & -X \\ -X^* & \lambda - a \end{pmatrix}$$

to obtain the formula
$$\det(\lambda I_n - A) = \det(\lambda I_{n-1} - M_n)\, \Big(\lambda - a - X^*\big(\lambda I_{n-1} - M_n\big)^{-1} X\Big),$$

or equivalently
$$p_A(\lambda) = p_{M_n}(\lambda)\, \Big(\lambda - a - \sum_{k=1}^{n-1} \frac{|u_k^* X|^2}{\lambda - \lambda_k(M_n)}\Big),$$

which on Taylor expansion around $\lambda = 0$ using $\frac{1}{\lambda - \lambda_k(M_n)} = -\frac{1}{\lambda_k(M_n)} - \frac{\lambda}{\lambda_k(M_n)^2} + O(\lambda^2)$ gives
$$p_A(\lambda) = p_{M_n}(\lambda)\, \Big(\lambda - a + \sum_{k=1}^{n-1} \frac{|u_k^* X|^2}{\lambda_k(M_n)} + \lambda \sum_{k=1}^{n-1} \frac{|u_k^* X|^2}{\lambda_k(M_n)^2} + O(\lambda^2)\Big).$$

Setting $\lambda = 0$ and using $p_A(0) = 0$ and $p_{M_n}(0) \neq 0$, we conclude that $-a + \sum_{k=1}^{n-1} |u_k^* X|^2/\lambda_k(M_n)$ vanishes. If we then extract the $\lambda^1$ coefficient, we obtain the claim.
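Equation 23 is straightforward to test numerically; the sketch below uses the $j = n$ block decomposition from Lemma 15:

```python
import numpy as np

# Check the alternate expression (23) for |v_{i,n}|^2 in terms of the minor
# M_n, its eigenvectors u_k, and the off-diagonal column X.
rng = np.random.default_rng(6)
n = 5
Y = rng.standard_normal((n, n))
A = (Y + Y.T) / 2

lam, V = np.linalg.eigh(A)
M = A[:n-1, :n-1]                         # M_n: remove last row and column
X = A[:n-1, n-1]                          # off-diagonal column vector
mu, U = np.linalg.eigh(M)

i = 0
weights = (U.T @ X)**2                    # |u_k^* X|^2
pred = 1.0 / (1.0 + np.sum(weights / (mu - lam[i])**2))
print(abs(pred - V[n-1, i]**2))
```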

Remark 16.

The same calculations also give the well-known fact that the eigenvalues of $A$ (other than those shared with $M_n$) are precisely the roots $\lambda$ of the equation
$$\lambda - a = \sum_{k=1}^{n-1} \frac{|u_k^* X|^2}{\lambda - \lambda_k(M_n)}.$$

Among other things, this can be used to establish the interlacing inequalities Equation 1.

2.7. A generalization

The following generalization of the eigenvector-eigenvalue identity was recently observed⁠Footnote7 by Yu Qing Tang (private communication), relying primarily on the Cauchy–Binet formula and a duality relationship Equation 24 between the various minors of a unitary matrix. If $U$ is an $n \times n$ matrix and $I, J$ are subsets of $\{1,\dots,n\}$ of the same cardinality $m$, let $U_{I,J}$ denote the $(n-m) \times (n-m)$ minor formed by removing the rows indexed by $I$ and the columns indexed by $J$, and let $U[I|J]$ denote the $m \times m$ submatrix formed by the rows indexed by $I$ and the columns indexed by $J$.

7

Variants of this identity have also been recently observed in Reference Che2019, Reference Sta2019.

Proposition 17 (Generalized eigenvector-eigenvalue identity).

Let $A$ be a normal matrix diagonalized as $A = UDU^*$ for some unitary $U$ and diagonal $D = \operatorname{diag}(\lambda_1,\dots,\lambda_n)$, let $\lambda \in \mathbb{C}$, and let $I, J \subset \{1,\dots,n\}$ have cardinality $m$. Then
$$\det\big((\lambda I_n - A)_{I,J}\big) = (-1)^{\sigma(I) + \sigma(J)} \sum_{\substack{K \subset \{1,\dots,n\} \\ |K| = m}} \overline{\det\big(U[I|K]\big)}\, \det\big(U[J|K]\big)\, \prod_{k \in K^c} (\lambda - \lambda_k),$$

where $\sigma(I) := \sum_{i \in I} i$ and $K^c$ denotes the complement of $K$ in $\{1,\dots,n\}$, and similarly for $I^c$ and $J^c$.

Note that if we set $I = J = \{j\}$ and $\lambda = \lambda_i$, we recover Equation 2. The identity Equation 10 can be interpreted as the remaining cases $I = \{j\}$, $J = \{j'\}$ of this proposition when $m = 1$.

Proof.

We have
$$\lambda I_n - A = U\, (\lambda I_n - D)\, U^*,$$

and hence by the Cauchy–Binet formula
$$\det\big((\lambda I_n - A)_{I,J}\big) = \sum_{K,L} \det\big(U_{I,K}\big)\, \det\big((\lambda I_n - D)_{K,L}\big)\, \overline{\det\big(U_{J,L}\big)},$$

where $K, L$ range over subsets of $\{1,\dots,n\}$ of cardinality $m$. A computation reveals that the quantity $\det\big((\lambda I_n - D)_{K,L}\big)$ vanishes unless $K = L$, in which case the quantity equals $\prod_{k \in K^c}(\lambda - \lambda_k)$. Thus it remains to show that
$$\det\big(U_{I,K}\big)\, \overline{\det\big(U_{J,K}\big)} = (-1)^{\sigma(I)+\sigma(J)}\, \overline{\det\big(U[I|K]\big)}\, \det\big(U[J|K]\big).$$

Since $|\det U| = 1$, it will suffice to show that⁠Footnote8
$$\det\big(U_{I,K}\big) = (-1)^{\sigma(I)+\sigma(K)}\, \det(U)\, \overline{\det\big(U[I|K]\big)} \tag{24}$$

8

This identity is also a special case of the more general identity of Jacobi Reference Jac1834, which expresses the minors of the inverse $U^{-1}$ in terms of complementary minors of $U$ and is valid for arbitrary invertible matrices $U$, as can be seen after noting that $U^{-1} = U^*$ for unitary $U$.

for any $I, K \subset \{1,\dots,n\}$ of cardinality $m$. By permuting rows and columns (the signs of these permutations account for the factor $(-1)^{\sigma(I)+\sigma(K)}$), we may assume that $I = K = \{1,\dots,m\}$, so that $U[I|K]$ is the top left $m \times m$ block $P$ of $U$ and $U_{I,K}$ is the bottom right $(n-m) \times (n-m)$ block $S$ of $U$. If we split the identity matrix $I_n = \begin{pmatrix} E & F \end{pmatrix}$ into the left $m$ columns $E$ and the right $n-m$ columns $F$ and take determinants of both sides of the identity
$$U^* \begin{pmatrix} E & UF \end{pmatrix} = \begin{pmatrix} P^* & 0 \\ * & I_{n-m} \end{pmatrix},$$

we conclude that
$$\overline{\det(U)}\, \det(S) = \overline{\det(P)},$$

giving the claim.

3. History of the identity

In this section we present, roughly in chronological order, all the references to the eigenvector-eigenvalue identity Equation 2 (or closely related results) that we are aware of in the literature. For the primary references, we shall present the identity in the notation of that reference in order to highlight the diversity of contexts and notational conventions in which this identity has appeared.

The earliest appearance of identities equivalent to Equation 2 that we know of is due to Jacobi Reference Jac1834, §8, (33). In modern notation, Jacobi diagonalizes a symmetric quadratic form $\sum_{a,b=1}^n A_{ab}x_a x_b$ as $\sum_{i=1}^n \lambda_i y_i^2$ for an orthogonal change of variables, and then, for each eigenvalue $\lambda_i$, the cofactors of the form associated to $\lambda_i I_n - A$ are extracted. Noting that the columns of this cofactor matrix are proportional to the eigenvector $v_i$, Jacobi concluded an identity that is essentially Equation 8 (specialized to $\lambda = \lambda_i$) for real symmetric matrices, with a normalizing factor omitted from the left-hand side. In Reference Jac1834, §8, (36) an identity essentially equivalent to Equation 4 for real symmetric matrices is also given.

An identity that implies Equation 2 as a limiting case appears a century after Jacobi in a paper of Löwner Reference Löw1934, (7). In this paper, a diagonal quadratic form
$$K(x) = \sum_{k=1}^n \lambda_k x_k^2$$

is considered, as well as a rank one perturbation
$$\bar K(x) = K(x) + \Big(\sum_{k=1}^n \beta_k x_k\Big)^2$$

for some real numbers $\beta_1,\dots,\beta_n$. The eigenvalues of the quadratic form $\bar K$ are denoted $\mu_1,\dots,\mu_n$. If the eigenvalues $\lambda_1,\dots,\lambda_n$ are arranged in nondecreasing order, one has the interlacing inequalities
$$\lambda_1 \le \mu_1 \le \lambda_2 \le \mu_2 \le \dots \le \lambda_n \le \mu_n$$

(compare with Equation 1). Under the nondegeneracy hypothesis that the $\lambda_1,\dots,\lambda_n$ are distinct, the identity
$$\beta_i^2 = \frac{\prod_{k=1}^{n} (\mu_k - \lambda_i)}{\prod_{k=1;\,k\neq i}^{n} (\lambda_k - \lambda_i)} \tag{26}$$

is established for $i = 1,\dots,n$, which closely resembles Equation 2; a similar formula also appears in Reference Jac1834, §12. The identity Equation 26 is obtained via “Eine einfache Rechnung” (an easy calculation) from the standard relations
$$\sum_{i=1}^n \frac{\beta_i^2}{\mu_k - \lambda_i} = 1$$

for $k = 1,\dots,n$ (compare with Equation 17), after applying Cramer’s rule and the Cauchy determinant identity Equation 19. As such, it is very similar to the proof of Equation 2 in Section 2.2 that is also based on Equation 19. The identity Equation 26 was used in Reference Löw1934 to help classify monotone functions of matrices and has also been applied to inverse eigenvector problems and stable computation of eigenvectors Reference GE1994, Reference Dem1997, pp. 224–226. It can be related to Equation 2 as follows. For sake of notation let us consider Equation 2 for a fixed choice of $i$ and $j$. Let $\varepsilon$ be a small parameter, and consider the perturbation $\varepsilon A + e_j e_j^T$ of the rank one matrix $e_j e_j^T$. Standard perturbative analysis reveals that the eigenvalues of this perturbation consist of eigenvalues of the form $\varepsilon \lambda_k(M_j) + O(\varepsilon^2)$ for $k = 1,\dots,n-1$, plus an outlier eigenvalue at $1 + O(\varepsilon)$. If we let $K, \bar K$ be the quadratic forms associated to $\varepsilon A$, $\varepsilon A + e_j e_j^T$ expressed using the eigenvector basis $v_1,\dots,v_n$ of $A$, so that $\lambda_k = \varepsilon \lambda_k(A)$ and $\beta_k^2 = |v_{k,j}|^2$, the identity Equation 26 becomes
$$|v_{i,j}|^2 = \frac{\prod_{k=1}^{n} \big(\mu_k - \varepsilon \lambda_i(A)\big)}{\prod_{k=1;\,k\neq i}^{n} \big(\varepsilon \lambda_k(A) - \varepsilon \lambda_i(A)\big)},$$
where $\mu_1,\dots,\mu_n$ are now the eigenvalues of $\varepsilon A + e_j e_j^T$. Extracting the leading behavior of both sides of this identity as $\varepsilon \to 0$ using the aforementioned perturbative analysis, we recover Equation 2 after a brief calculation.
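Löwner’s identity Equation 26 can be confirmed numerically for a random diagonal form and rank one perturbation:

```python
import numpy as np

# Check Loewner's identity (26): the weights beta_i^2 of a rank one
# perturbation of a diagonal form are determined by the two spectra.
rng = np.random.default_rng(8)
n = 5
lam = np.sort(rng.standard_normal(n) * 3)   # distinct diagonal entries (generic)
beta = rng.standard_normal(n)
Abar = np.diag(lam) + np.outer(beta, beta)
mu = np.linalg.eigvalsh(Abar)

max_err = 0.0
for i in range(n):
    pred = np.prod(mu - lam[i]) / np.prod([lam[k] - lam[i] for k in range(n) if k != i])
    max_err = max(max_err, abs(pred - beta[i]**2))
print(max_err)
```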

A more complicated variant of Equation 26 involving various quantities related to the Rayleigh–Ritz method of bounding the eigenvalues of a symmetric linear operator was stated by Weinberger Reference Wei1960, (2.29), where it was noted that it can be proven using much the same method as in Reference Löw1934.

The first appearance of the eigenvector-eigenvalue identity in essentially the form presented here that we are aware of was by Thompson Reference Tho1966, (15), which does not reference the prior work of Löwner or Weinberger. In the notation of Thompson’s paper, is a normal matrix, and are the distinct eigenvalues of , with each occurring with multiplicity . To ensure nondegeneracy it is assumed that . One then diagonalizes for a unitary and diagonal , and then sets

for and , where are the coefficients of . The minor formed by removing the th row and column from is denoted ; it has “trivial” eigenvalues in which each with occurs with multiplicity , as well as some “nontrivial” eigenvalues . The equation Reference Tho1966, (15) then reads

for and . If one specializes to the case when is Hermitian with simple spectrum, so that all the multiplicities are equal to , and set and , it is then not difficult to verify that this identity is equivalent to the eigenvector-eigenvalue identity Equation 2 in this simple spectrum case. In the case of repeated eigenvalues, the eigenvector-eigenvalue identity Equation 2 may degenerate (in that the left and right-hand sides both vanish), but the identity Equation 27 remains nontrivial in this case. The proof of Equation 27 given in Reference Tho1966 is written using a rather complicated notation (in part because much of the paper was concerned with more general minors rather than the minors ), but it is essentially the adjugate proof from Section 2.1 (where the adjugate matrix is replaced by the closely related -th compound matrix). In Reference Tho1966, the identity Equation 27 was not highlighted as a result of primary interest in its own right, but was instead employed to establish a large number of inequalities between the eigenvalues and the minor eigenvalues in the Hermitian case; see Reference Tho1966, Section 5.

In a followup paper Reference TM1968 by Thompson and McEnteggert, the analysis from Reference Tho1966 was revisited, restricting attention specifically to the case of an Hermitian matrix with simple eigenvalues , and with the minor formed by deleting the th row and column having eigenvalues . In this paper the inequalities

and

for were proved (with most cases of these inequalities already established in Reference Tho1966), with a key input being the identity

where are the components of the unitary matrix used in the diagonalization of . Note from the Cauchy interlacing inequalities that each of the expressions in braces takes values between and . It is not difficult to see that this identity is equivalent to Equation 2 (or Equation 27) in the case of Hermitian matrices with simple eigenvalues, and the hypothesis of simple eigenvalues can then be removed by the usual limiting argument. As in Reference Tho1966, the identity is established using adjugate matrices, essentially by the argument given in the previous section. However, the identity Equation 30 is only derived as an intermediate step towards establishing the inequalities Equation 28, Equation 29, and is not highlighted as of interest in its own right. The identity Equation 27 was then reproduced in a further followup paper Reference Tho1969, in which the identity Equation 14 was also noted; this latter identity was also independently observed in Reference DH1978.

In the text of Šilov Reference Š1969, Section 10.27, the identity Equation 2 is established, essentially by the Cramer rule method. Namely, if is a diagonal real quadratic form on with eigenvalues , and is a hyperplane in with unit normal vector , and are the eigenvalues of on , then it is observed that

for , which is Equation 2 after changing to the eigenvector basis; identities equivalent to Equation 17 and Equation 4 are also established. The text Reference Š1969 gives no references, but given the similarity of notation with Reference Löw1934 (compare Equation 31 with Equation 26), one could speculate that Šilov was influenced by Löwner’s work.

In a section Reference Pai1971, Section 8.2 of the PhD thesis of Paige entitled “A Useful Theorem on Cofactors”, the identity Equation 30 is cited as “a fascinating theorem … that relates the elements of the eigenvectors of a symmetric matrix to its eigenvalues and the eigenvalues of its principal submatrices”, with a version of the adjugate proof given. In the notation of that thesis, one considers a real symmetric tridiagonal matrix with distinct eigenvalues with an orthonormal basis of eigenvectors . For any , let denote the minor of defined by taking the rows and columns indexed between and , and let denote the associated characteristic polynomial. The identity

is then established for , where is the th component of and . This is easily seen to be equivalent to Equation 2 in the case of real symmetric tridiagonal matrices with distinct eigenvalues. One can then use this to derive Equation 2 for more general real symmetric matrices by a version of the Lanczos algorithm for tridiagonalizing an arbitrary real symmetric matrix, followed by the usual limiting argument to remove the hypothesis of distinct eigenvalues; we leave the details to the interested reader. Returning to the case of tridiagonal matrices, Paige also notes that the method gives the companion identity

for , where are the upper diagonal entries of the tridiagonal matrix ; this can be viewed as a special case of Equation 10. These identities were then used in Reference Pai1971, Section 8 as a tool to bound the behavior of errors in the symmetric Lanczos process.

Paige’s identities Equation 32, Equation 33 for tridiagonal matrices are reproduced in the textbook of Parlett Reference Par1980, Theorem 7.9.2, with slightly different notation. Namely, one starts with an real symmetric tridiagonal matrix , decomposed spectrally as where is orthogonal and . Then for and , the th component of is observed to obey the formula

when is a simple eigenvalue, where is the characteristic polynomial of the minor of formed by taking the rows and columns between and . This identity is essentially equivalent to Equation 32. The identity Equation 33 is similarly reproduced in this notation; much as in Reference Pai1971, these identities are then used to analyze various iterative methods for computing eigenvectors. The proof of the theorem is left as an exercise in Reference Par1980, with the adjugate method given as a hint. Essentially the same result is also stated in the text of Golub and van Loan Reference GVL1983, pp. 432–433 (equation (8.4.12) on page 474 in the 2013 edition), proven using a version of the Cramer rule arguments in Section 2.2. They cite as reference the earlier paper Reference Gol1973, (3.6), which also uses essentially the same proof (see also Reference Gla2004, (4.3.17), who cites Reference BG1978, who in turn cite Reference Gol1973). A similar result was stated without proof by Galais, Kneller, and Volpe Reference GKV2012, equations (6), (7). They provided expressions for both and the off-diagonal eigenvectors as a function of cofactors in place of adjugate matrices. Their work was in the context of neutrino oscillations.
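The tridiagonal specialization is easy to make concrete: the minor of a tridiagonal matrix obtained by deleting row and column j is block diagonal, so its characteristic polynomial factors into those of the leading and trailing blocks, and Equation 2 becomes (in our own conventions, which may differ in sign normalization from Parlett's) v_{i,j}^2 = chi_{1,j-1}(lam_i) chi_{j+1,n}(lam_i) / p'(lam_i), where p is the characteristic polynomial of the full matrix. A hedged numerical sketch:

```python
import numpy as np

# Random 6x6 real symmetric tridiagonal matrix (distinct eigenvalues a.s.).
rng = np.random.default_rng(1)
n = 6
a = rng.standard_normal(n)          # diagonal entries
b = rng.standard_normal(n - 1)      # off-diagonal entries
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
lam, V = np.linalg.eigh(T)

def charpoly(M, x):
    # det(x I - M), with the convention that an empty block contributes 1
    return 1.0 if M.size == 0 else np.linalg.det(x * np.eye(len(M)) - M)

i, j = 2, 3                          # eigenvalue index i, component index j (0-based)
top = charpoly(T[:j, :j], lam[i]) * charpoly(T[j + 1:, j + 1:], lam[i])
dp = np.prod([lam[i] - lam[k] for k in range(n) if k != i])   # p'(lam[i])
assert abs(V[j, i]**2 - top / dp) < 1e-8
```

The helper `charpoly` is our own; the product formula for p' uses the simple-spectrum assumption.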

The identities of Parlett and of Golub and Van Loan are cited in the thesis of Knyazev Reference Kny1986, (2.2.27), again to analyze methods for computing eigenvalues and eigenvectors; the identities of Golub and Van Loan and of Šilov are similarly cited in the paper of Knyazev and Skorokhodov Reference KS1991 for similar purposes. Parlett’s result is also reproduced in the text of Xu Reference Xu1995, (3.19). In the survey Reference CG2002, (4.9) of Chu and Golub on structured inverse eigenvalue problems, the eigenvector-eigenvalue identity is derived via the adjugate method from the results of Reference TM1968, and it is used to solve the inverse eigenvalue problem for Jacobi matrices; the text Reference Par1980 is also cited.

In the paper Reference DLNT1986, p. 210 of Deift, Li, Nanda, and Tomei, the eigenvector-eigenvalue identity Equation 2 is derived by the Cramer rule method, and is used to construct action-angle variables for the Toda flow. The paper cites Reference BG1978, which also reproduces Equation 2 as equation (1.5) of that paper, and in turn cites Reference Gol1973.

In the paper of Mukherjee and Datta Reference MD1989 the eigenvector-eigenvalue identity was rediscovered, in the context of computing eigenvectors of graphs that arise in chemistry. If is a graph on vertices , and is the graph on vertices formed by deleting a vertex , then in Reference MD1989, (4) the identity

is established for , where denotes the characteristic polynomial of the adjacency matrix of evaluated at , and is the coefficient at the th vertex of the eigenvector corresponding to the th eigenvalue, and one assumes that all the eigenvalues of are distinct. This is equivalent to Equation 4 in the case that is an adjacency matrix of a graph. The identity is proven using the perturbative method in Section 2.4, and it appears to have been discovered independently. A similar identity was also noted in the earlier work of Li and Feng Reference LF1979, at least in the case of the largest eigenvalue. In a later paper of Hagos Reference Hag2002, it is noted that the identity Equation 34 “is probably not as well known as it should be”, and it also carefully generalizes Equation 34 to an identity (essentially the same as Equation 16) that holds when some of the eigenvalues are repeated. An alternate proof of Equation 34 was given in the paper of Cvetkovic, Rowlinson, and Simic Reference CRS2007, Theorem 3.1, essentially using the Cramer rule type methods in Section 2.2. The identity Equation 14 is also essentially noted at several other locations in the graph theory literature, such as Reference God1993, Chapter 4, Reference GM1981, Lemma 2.1, Reference God2012, Lemma 7.1, Corollary 7.2, Reference GGKL2017, (2) in relation to the generating functions for walks on a graph, though in those references no direct link to the eigenvector-eigenvalue identity in the form Equation 2 is asserted.
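For the graph-theoretic form, one can check a version of Equation 34 directly on a small example (our notation, not that of Reference MD1989). The sketch below uses the path graph on four vertices, whose adjacency spectrum is simple:

```python
import numpy as np

# Adjacency matrix of the path graph on four vertices 0-1-2-3 (simple spectrum).
A = np.diag(np.ones(3), 1)
A = A + A.T
lam, V = np.linalg.eigh(A)

phi = np.poly(A)            # coefficients of det(xI - A), the characteristic polynomial
dphi = np.polyder(phi)      # its derivative

j = 0                       # delete vertex 0 from the graph
Aj = np.delete(np.delete(A, j, 0), j, 1)
phij = np.poly(Aj)          # characteristic polynomial of the vertex-deleted graph

for i in range(4):
    # (j-th eigenvector component)^2 * phi'(lam_i) = phi_{G - v_j}(lam_i)
    assert abs(V[j, i]**2 * np.polyval(dphi, lam[i]) - np.polyval(phij, lam[i])) < 1e-8
```

This is the derivative form (Equation 4) of the identity, which is how it naturally appears in the graph-spectra literature.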

In Reference NTU1993, Section 2 the identity Equation 2 is derived for normal matrices by the Cramer rule method, citing Reference Tho1969, Reference DH1978 as the source for the key identity Equation 14; the papers Reference Tho1966, Reference TM1968 also appear in the bibliography but were not directly cited in this section. An extension to the case of eigenvalue multiplicity, essentially corresponding to Equation 16, is also given. This identity is then used to give a complete description of the relations between the eigenvalues of and of a given minor when is assumed to be normal. In Reference BFdP2011 a generalization of these results was given to the case of -normal matrices for some diagonal sign matrix ; this corresponds to a special case of Equation 12 in the case where each left eigenvector is the complex conjugate of .

The paper of Baryshnikov Reference Bar2001 marks the first appearance of this identity in random matrix theory. Let be a Hermitian form on with eigenvalues , and let be a hyperplane of orthogonal to some unit vector . Let be the component of with respect to an eigenvector associated to , set , and let be the eigenvalues of the Hermitian form arising from restricting to . Then after Reference Bar2001, (4.5.2) (and correcting some typos) the identity

is established, by an argument based on Cramer’s rule and the Cauchy determinant formula Equation 19, similar to the arguments at the end of Section 2.2, and it appears to have been discovered independently. If one specializes to the case when is a standard basis vector , then is also the component of , and we recover Equation 2 after a brief calculation. This identity was employed in Reference Bar2001 to study the situation in which the hyperplane normal was chosen uniformly at random on the unit sphere. This formula was rederived (using a version of the Cramer rule method in Section 2.2) in the May 2019 paper of Forrester and Zhang Reference FZ2019, (2.7), who recover some of the other results in Reference Bar2001 as well, and study the spectrum of the sum of a Hermitian matrix and a random rank one matrix.
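Baryshnikov's hyperplane form can be verified in the same spirit. In the sketch below (our own construction), an orthonormal basis of the hyperplane is produced by a QR factorization whose first column is the unit normal, and one checks that the squared component of the normal along each eigenvector times the eigenvalue-gap product equals the product of gaps to the restricted eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                    # random real symmetric quadratic form
lam, V = np.linalg.eigh(A)

u = rng.standard_normal(n)
u /= np.linalg.norm(u)               # unit normal of the hyperplane
# QR with u as first column: the remaining columns are an orthonormal basis of u-perp.
Q, _ = np.linalg.qr(np.column_stack([u, rng.standard_normal((n, n - 1))]))
P = Q[:, 1:]
mu = np.linalg.eigvalsh(P.T @ A @ P)  # eigenvalues of A restricted to the hyperplane

for i in range(n):
    c2 = (V[:, i] @ u)**2            # squared component of u along the i-th eigenvector
    lhs = c2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
    assert abs(lhs - np.prod(lam[i] - mu)) < 1e-8
```

Taking u to be a standard basis vector recovers the check of Equation 2.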

In the paper Reference DE2002, Lemma 2.7 of Dumitriu and Edelman, the identity Equation 32 of Paige (as reproduced in Reference Par1980, Theorem 7.9.2) is used to give a clean expression for the Vandermonde determinant of the eigenvalues of a tridiagonal matrix, which is used in turn to construct tridiagonal models for the widely studied -ensembles in random matrix theory.

In the unpublished preprint Reference Van2014 of Van Mieghem, the identity Equation 4 is prominently displayed as the main result, though in the notation of that preprint it is expressed instead as

for any , where is a real symmetric matrix with distinct eigenvalues and unit eigenvectors , is the minor formed by removing the th row and column from , and is the derivative of the (sign-reversed) characteristic polynomial . Two proofs of this identity are given, one being essentially the Cramer’s rule proof from Section 2.2 and attributed to the previous reference Reference CRS2007; the other proof is based on Cramer’s rule and the Desnanot–Jacobi identity (Dodgson condensation); this identity is used to quantify the effect of removing a node from a graph on the spectral properties of that graph. The related identity Equation 23 from Reference TV2011 is also noted in this preprint. Some alternate formulae from Reference VM2011 for quantities such as in terms of walks of graphs are also noted, with the earlier texts Reference God1993, Reference GVL1983 also cited.
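The Desnanot–Jacobi identity underlying Van Mieghem's second proof is itself easy to test numerically. The sketch below (with a hypothetical `minor` helper of our own) checks it for a random matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5))

def minor(M, rows, cols):
    # hypothetical helper: delete the listed rows and columns
    return np.delete(np.delete(M, rows, 0), cols, 1)

# Desnanot-Jacobi: det(A) det(A with first and last rows/columns removed)
#   = det(A^1_1) det(A^n_n) - det(A^1_n) det(A^n_1),
# where A^i_j deletes row i and column j.
lhs = np.linalg.det(A) * np.linalg.det(minor(A, [0, 4], [0, 4]))
rhs = (np.linalg.det(minor(A, [0], [0])) * np.linalg.det(minor(A, [4], [4]))
       - np.linalg.det(minor(A, [0], [4])) * np.linalg.det(minor(A, [4], [0])))
assert abs(lhs - rhs) < 1e-8
```

Iterating this identity is precisely the Dodgson condensation scheme for evaluating determinants.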

The identity Equation 2 was independently rediscovered and then generalized by Kausel Reference Kau2018 as a technique to extract information about components of a generalized eigenmode without having to compute the entire eigenmode. Here the generalized eigenvalue problem

for is considered, where is a positive semidefinite real symmetric matrix, is a positive definite real symmetric matrix, and the matrix of eigenfunctions is normalized so that . For any , one also solves the constrained system

where are the minors of , respectively, formed by removing the th row and column. Then in Reference Kau2018, (18) the Cramer rule method is used to establish the identity

for the component of , where is the notation in Reference Kau2018 for the determinant of . Specializing to the case when is the identity matrix, we recover Equation 2.
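An equivalent form of the generalized identity can be derived by the same adjugate argument used in Section 2.1: with the normalization that the eigenvector matrix satisfies Phi^T M Phi = I, one finds phi_{j,i}^2 det(M) prod_{k != i} (lam_k - lam_i) = det(K_j - lam_i M_j). The sketch below uses our own notation and normalization (which may differ from those of Reference Kau2018) and solves the generalized problem via a Cholesky reduction:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
X = rng.standard_normal((n, n))
K = X @ X.T                          # positive semidefinite "stiffness" matrix
Y = rng.standard_normal((n, n))
M = Y @ Y.T + n * np.eye(n)          # positive definite "mass" matrix

# Solve K Phi = M Phi diag(lam) via a Cholesky reduction, with Phi.T M Phi = I.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
lam, W = np.linalg.eigh(Linv @ K @ Linv.T)
Phi = Linv.T @ W                     # M-orthonormal generalized eigenvectors

j = 1
Kj = np.delete(np.delete(K, j, 0), j, 1)
Mj = np.delete(np.delete(M, j, 0), j, 1)
detM = np.linalg.det(M)
for i in range(n):
    lhs = Phi[j, i]**2 * detM * np.prod([lam[k] - lam[i] for k in range(n) if k != i])
    rhs = np.linalg.det(Kj - lam[i] * Mj)
    assert abs(lhs - rhs) <= 1e-6 * max(1.0, abs(rhs))
```

Setting M to the identity matrix collapses this to the standard eigenvector-eigenvalue identity, consistent with the specialization noted above.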

The eigenvector-eigenvalue identity was discovered by three of us Reference DPZ2020 in July 2019, initially in the case of matrices, in the context of trying to find a simple and numerically stable formula for the eigenvectors of the neutrino oscillation Hamiltonian, which form a separate matrix known as the PMNS lepton mixing matrix. This identity was established in the case by direct calculation. Despite being aware of the related identity Equation 23, the four of us were unable to locate this identity in past literature and wrote a preprint Reference DPTZ2019 in August 2019 highlighting this identity and providing two proofs (the adjugate proof from Section 2.1, and the Cauchy–Binet proof from Section 2.5). The release of this preprint generated some online discussion,⁠Footnote9 and we were notified by Jiyuan Zhang (private communication) of the prior appearance of the identity earlier in the year in Reference FZ2019. However, the numerous other places in the literature in which some form of this identity appeared were not revealed until a popular science article Reference Wol2019 by Wolchover was written in November 2019. This article spread awareness of the eigenvector-eigenvalue identity to a vastly larger audience, and generated a large number of reports of previous occurrences of the identity, as well as other interesting related observations, which we have attempted to incorporate into this survey.

4. Further discussion

The eigenvector-eigenvalue identity Equation 2 only yields information about the magnitude of the components of a given eigenvector , but it does not directly reveal the phase of these components. On one hand, this is to be expected, since (as already noted in the consistency check (vii) in the introduction) one has the freedom to multiply by a phase; for instance, even if one restricts attention to real symmetric matrices and requires the eigenvectors to be real , one has the freedom to replace by its negation , so the sign of each component is ambiguous. However, relative phases, such as the phase of are not subject to this ambiguity. There are several ways to try to recover these relative phases. One way is to employ the off-diagonal analogue Equation 10 of Equation 2, although the determinants in that formula may be difficult to compute in general. For small matrices, it was suggested in Reference MD1989 that the signs of the eigenvectors could often be recovered by direct inspection of the components of the eigenvector equation . In the application in Reference DPZ2020, the additional phase could be recovered by a further neutrino specific identity Reference Tos1991. For more general matrices, one way to retrieve such phase information is to apply Equation 2 in multiple bases. For instance, suppose was real symmetric and the were all real. If one were to apply the eigenvector-eigenvalue identity after changing to a basis that involved the unit vector , then one could use the identity to evaluate the magnitude of . Two further applications of the identity in the original basis would give the magnitude of , and this is sufficient information to determine the relative sign of and . 
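The multiple-bases strategy just described can be sketched concretely. In the code below (our own construction; `comp_sq` is a hypothetical helper, and a simple spectrum is assumed), a 45-degree rotation of the (j,k) coordinate plane makes (v_j + v_k)/sqrt(2) a coordinate in the new basis, so three applications of the identity recover the product v_j v_k = ((v_j + v_k)^2 - v_j^2 - v_k^2)/2 and hence the relative sign:

```python
import numpy as np

def comp_sq(A, lam, i, j):
    # hypothetical helper: |v_{i,j}|^2 via the eigenvector-eigenvalue identity
    mu = np.linalg.eigvalsh(np.delete(np.delete(A, j, 0), j, 1))
    denom = np.prod([lam[i] - lam[t] for t in range(len(lam)) if t != i])
    return np.prod(lam[i] - mu) / denom

rng = np.random.default_rng(4)
n, i, j, k = 5, 2, 0, 3
X = rng.standard_normal((n, n))
A = (X + X.T) / 2
lam, V = np.linalg.eigh(A)

# Rotate the (j,k) coordinate plane by 45 degrees: column j of Q becomes (e_j + e_k)/sqrt(2).
Q = np.eye(n)
s = 1 / np.sqrt(2)
Q[j, j], Q[k, j], Q[j, k], Q[k, k] = s, s, -s, s
B = Q.T @ A @ Q                      # same spectrum; eigenvectors become Q.T v

sum_sq = comp_sq(B, lam, i, j)       # equals (v_j + v_k)^2 / 2
prod_jk = sum_sq - (comp_sq(A, lam, i, j) + comp_sq(A, lam, i, k)) / 2   # equals v_j * v_k
assert abs(prod_jk - V[j, i] * V[k, i]) < 1e-6
```

Note that the sign of v_j v_k read off here is basis-independent, even though the individual signs of v_j and v_k are not.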
We also remark that for real symmetric matrices that are acyclic (such as weighted adjacency matrices of graphs that do not contain loops), one can write down explicit formulae for the coefficients of eigenvectors (and not just their magnitudes) in terms of characteristic polynomials of minors; see Reference BK2016. We do not know of any direct connection between such formulae and the eigenvector-eigenvalue identity Equation 2.

For large unstructured matrices, it does not seem at present that the identity Equation 2 provides a competitive algorithm to compute eigenvectors. Indeed, to use this identity to compute all the eigenvector component magnitudes , one would need to compute all eigenvalues of each of the minors , which would be a computationally intensive task in general; furthermore, an additional method would then be needed to also calculate the signs or phases of these components. However, if the matrix is of a special form (such as a tridiagonal form), then the identity could be of more practical use, as witnessed by the uses of this identity (together with variants such as Equation 33) in the literature to control the rate of convergence for various algorithms to compute eigenvalues and eigenvectors of tridiagonal matrices. Also, as noted recently in Reference Kau2018, if one has an application that requires only the component magnitudes at a single location , then one only needs to compute the characteristic polynomial of a single minor of at a single value , and this may be more computationally feasible.

5. Sociology of science issues

As one sees from Section 3 and Figure 1, there was some partial dissemination of the eigenvector-eigenvalue identity amongst some mathematical communities, to the point where it was regarded as “folklore” by several of these communities. However, this process was unable to raise broader awareness of this identity, resulting in the remarkable phenomenon of multiple trees of references sprouting from independent roots, and only loosely interacting with each other. For instance, as discussed in the previous section, for two months after the release of our own preprint Reference DPTZ2019, we only received a single report of another reference Reference FZ2019 containing a form of the identity, despite some substantial online discussion and the dozens of extant papers on the identity. It was only in response to the popular science article Reference Wol2019 that awareness of the identity finally “went viral”, leading to what was effectively an ad hoc crowdsourced effort to gather all the prior references to the identity in the literature. While we do not know for certain why this particular identity was not sufficiently well known prior to these recent events, we can propose the following possible explanations:

(1)

The identity was mostly used as an auxiliary tool for other purposes. In almost all of the references discussed here, the eigenvector-eigenvalue identity was established only in order to calculate or bound some other quantity; it was rarely formalized as a theorem or even as a lemma. In particular, with a few notable exceptions, such as the preprint Reference Van2014, this identity would not be mentioned in the title, abstract, or even the introduction. In a few cases, the identity was reproven by authors who did not seem to be fully aware that it was already established in one of the references in their own bibliography.

(2)

The identity does not have a standard name, form, or notation, and does not involve uncommon keywords. As one can see from Section 3, the identity comes in many variants and can be rearranged in a large number of ways; furthermore, the notation used for the various mathematical objects appearing in the identity varies greatly depending on the intended application or on the authors involved. Also, none of the previous references attempted to give the identity a formal name, and the keywords used to describe the identity (such as “eigenvector” or “eigenvalue”) are in extremely common use in mathematics. As such, there are no obvious ways to use modern search engines to locate other instances of this identity, other than by manually exploring the citation graph around known references to that identity. Perhaps a “fingerprint database” for identities Reference BT2013 would be needed before such automated searches could become possible.

(3)

The field of linear algebra is too mature, and its domain of applicability is too broad. The vast majority of consumers of linear algebra are not domain experts in linear algebra itself, but instead use it as a tool for a very diverse array of other applications. As such, the diffusion of linear algebra knowledge is not guided primarily by a central core of living experts in the field, but instead relies on more mature sources of authority such as textbooks and lectures. Unfortunately, only a small handful of linear algebra textbooks mention the eigenvector-eigenvalue identity, thus preventing wider dissemination of this identity.

Online discussion forums for mathematics were only partially successful in disseminating this identity. For instance, the 2012 MathOverflow question⁠Footnote10 “Cramer’s rule for eigenvectors”, which inquired as to the existence of an eigenvector identity such as Equation 2, received nearly ten thousand views, but only revealed the related identity in Lemma 15. Nevertheless, this post was instrumental in bringing these four authors together to produce the preprint Reference DPTZ2019, via a comment⁠Footnote11 on a Reddit post by one of us.

It is not fully clear to us how best to attribute authorship for the eigenvector-eigenvalue identity Equation 2. A variant of the identity was observed by Jacobi Reference Jac1834, but not widely propagated. An identity that implies Equation 2 was later given by Löwner Reference Löw1934, but the implication is not immediate, and this reference had only a modest impact on the subsequent literature. The paper of Thompson Reference Tho1966 is the first place we know of in which the identity explicitly appears, and it was propagated through citations into several further papers in the literature. But this did not prevent the identity from then being independently rediscovered several further times, such as in the text Reference GVL1983 (with the latter restricting attention to the case of tridiagonal matrices). Furthermore, we are not able to guarantee that there is not an even earlier place in the literature where some form of this identity has appeared. We propose the name “eigenvector-eigenvalue identity” for Equation 2 on the grounds that it is descriptive, and hopefully it is a term that can be detected through search engines by researchers looking for identities of this form.

Although in this survey we have included approximately 50 references that mention some variant of the eigenvector-eigenvalue identity, in most cases the identity does not explicitly appear in a form such as Equation 2 that specifically links eigenvector component magnitudes of an arbitrary Hermitian matrix to eigenvalues of the matrix and its minors. Exceptions include the papers Reference Tho1966, Reference TM1968, Reference NTU1993, and (in the special case of tridiagonal matrices) Reference GVL1983, Reference Xu1995. To convert the other forms of the identity appearing in the literature to a form similar to Equation 2 requires a small but nonzero amount of additional work (such as a change of basis, passing to a limit, or expressing a characteristic polynomial or determinant in terms of eigenvalues). This may well be an additional factor that has prevented this identity from being more widely known until recently.

Acknowledgments

We thank Asghar Bahmani, Carlo Beenakker, Adam Denchfield, Percy Deift, Laurent Demanet, Alan Edelman, Chris Godsil, Aram Harrow, James Kneller, Andrew Knyazev, Manjari Narayan, Michael Nielsen, Małgorzata Stawiska, Karl Svozil, Gang Tian, Carlos Tomei, Piet Van Mieghem, Brad Willms, Fu Zhang, Jiyuan Zhang, and Zhenzhong Zhang for pointing out a number of references where some variant of the eigenvector-eigenvalue identity has appeared in the literature, or suggesting various extensions and alternate proofs of the identity. We also thank Darij Grinberg, Andrew Krause, Kristoffer Varholm, Jochen Voss and Jim Van Zandt for some corrections to previous versions of this manuscript.

Figures

Figure 1.

The citation graph of all the references in the literature we are aware of (predating the current survey) that mention some variant of the eigenvector-eigenvalue identity. To reduce clutter, transitive references (e.g., a citation of a paper already cited by another paper in the bibliography) are omitted. Note the very weakly connected nature of the graph, with many early initial references not being (transitively) cited by many of the more recent references. Blue references are preprints, green references are books, the brown reference is a thesis, and the red reference is a popular science article. This graph was mostly crowdsourced from feedback received by the authors after the publication of Reference Wol2019. The reference Reference Jac1834 predates all others found by a century!


Mathematical Fragments

Remark 3.

The same argument also yields an off-diagonal variant

for any , where is the minor of the identity matrix . When , this minor is simply equal to and the determinant can be expressed in terms of the eigenvalues of the minor ; however when there is no obvious way to express the left-hand side of 10 in terms of eigenvalues of (though one can still of course write the determinant as the product of the eigenvalues of ). Another way of viewing 10 is that for every , the vector with th entry

is a nonnormalized eigenvector associated to the eigenvalue ; this observation appears for instance in Reference Gan1959, pp. 85–86. See Reference Van2014 for some further identities relating the components of the eigenvector to various determinants.

Remark 4.

This remark is due to Vassilis Papanicolaou.⁠Footnote2 The above argument also applies to nonnormal matrices , so long as they are diagonalizable with some eigenvalues (not necessarily real or distinct). Indeed, if we let be a basis of right eigenvectors of (so that for all ), and let be the corresponding dual basis⁠Footnote3 of left eigenvectors (so , and is equal to when and otherwise), then we have the diagonalization

Footnote 3: In the case when is a normal matrix and the are unit eigenvectors, the dual eigenvector would be the complex conjugate of .

and one can generalize Equation 8 to

leading to an extension

of Equation 9 to arbitrary diagonalizable matrices. We remark that this argument shows that the identity 12 is in fact valid for any diagonalizable matrix taking values in any commutative ring (not just the complex numbers). The identity Equation 10 may be generalized in a similar fashion; we leave the details to the interested reader.

Remark 5.

As pointed out to us by Darij Grinberg,⁠Footnote4 the identity Equation 4 may be generalized to the nondiagonalizable setting. Namely, one can prove that

for an arbitrary matrix (with entries in an arbitrary commutative ring), any (with the minor formed from by removing the th row and column), and any right-eigenvector and left-eigenvector with a common eigenvalue . After reducing to the case , the main step in the proof is to establish two variants of Equation 8, namely that and for all . We refer the reader to the comment of Grinberg for further details.

Remark 8.

A slight variant of this proof was observed by Aram Harrow,⁠Footnote5 inspired by the inverse power method for approximately computing eigenvectors numerically. We again assume simple spectrum. Using the translation invariance noted in consistency check (ii) of the introduction, we may assume without loss of generality that . Applying the resolvent identity Equation 13 with equal to a small nonzero quantity , we conclude that

On the other hand, by Cramer’s rule, the component of the left-hand side is

Extracting out the top order terms in , one obtains Equation 4 and hence Equation 2. A variant of this argument also gives the more general identity

whenever is an eigenvalue of of some multiplicity . Note when we can recover Equation 4 thanks to L’Hôpital’s rule. The right-hand side of 16 can also be interpreted as the residue of the rational function at .

Remark 9.

The derivation of the eigenvector-eigenvalue identity Equation 2 from Equation 17, as well as the obvious normalization Equation 18, is reversible. Indeed, the identity Equation 2 implies that the rational functions on both sides of Equation 14 have the same residues at each of their (simple) poles, and these functions decay to zero at infinity, hence they must agree by Liouville’s theorem. Specializing Equation 14 to then recovers Equation 17, while comparing the leading asymptotics of both sides of Equation 14 as recovers Equation 18 (note this also establishes the consistency check (ix) from the introduction). As the identity Equation 17 involves the same quantities as Equation 2, one can thus view Equation 17 as an equivalent formulation of the eigenvector-eigenvalue identity, at least in the case when all the eigenvalues of are distinct. The identity Equation 14 (viewing as a free parameter) can also be interpreted in this fashion.

Lemma 11 (Coordinate-free eigenvector-eigenvalue identity).

Let be a self-adjoint linear map that annihilates a unit vector . For each unit vector , let be the determinant of the quadratic form restricted to the orthogonal complement , where denotes the Hermitian inner product on . Then one has

for all unit vectors .

Lemma 13 (Cauchy–Binet type formula).

Let be an Hermitian matrix with a zero eigenvalue . Then for any matrix , one has

where denotes the matrix with right column and all remaining columns given by .

Lemma 15 (Alternate expression for ).

Let $A$ be an $n \times n$ Hermitian matrix written in block matrix form as
$$A = \begin{pmatrix} M_n & X \\ X^* & a \end{pmatrix},$$
where $M_n$ is the top left $(n-1) \times (n-1)$ minor, $X$ is an $(n-1)$-dimensional column vector, and $a$ is a scalar. Let $1 \leq i \leq n$, and suppose that $\lambda_i(A)$ is not an eigenvalue of $M_n$. Let $u_1, \dots, u_{n-1}$ denote an orthonormal basis of eigenvectors of $M_n$, associated to the eigenvalues $\lambda_1(M_n), \dots, \lambda_{n-1}(M_n)$. Then
$$|v_{i,n}|^2 = \left(1 + \sum_{k=1}^{n-1} \frac{|u_k^* X|^2}{(\lambda_i(A) - \lambda_k(M_n))^2}\right)^{-1}. \tag{24}$$
Equation (26)
Equation (27)
Equation (28)
Equation (29)
Equation (30)
Equation (31)
Equation (32)
Equation (33)
Equation (34)
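The alternate expression of Lemma 15, as we read it, follows from solving $(A - \lambda_i(A)) v_i = 0$ blockwise and normalizing, and it too admits a quick numerical check. The following sketch is ours (assuming numpy): it splits a random Hermitian matrix into its top-left minor, the last column, and the corner scalar, and verifies the formula for every eigenvector:

```python
import numpy as np

# Illustrative check (our sketch) of the alternate expression:
#   |v_{i,n}|^2 = 1 / (1 + sum_k |u_k* X|^2 / (lambda_i(A) - lambda_k(M_n))^2),
# with A = [[M_n, X], [X*, a]] and lambda_i(A) not an eigenvalue of M_n.
rng = np.random.default_rng(4)
n = 5
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (G + G.conj().T) / 2
M, X = A[: n - 1, : n - 1], A[: n - 1, n - 1]  # minor M_n and column vector X

lam, V = np.linalg.eigh(A)   # eigenpairs of A
mu, U = np.linalg.eigh(M)    # eigenpairs of M_n; columns of U are the u_k

for i in range(n):
    s = np.sum(np.abs(U.conj().T @ X) ** 2 / (lam[i] - mu) ** 2)
    assert abs(abs(V[n - 1, i]) ** 2 - 1.0 / (1.0 + s)) < 1e-8
```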

References

[AS1964]
M. Abramowitz and I. A. Stegun, Handbook of mathematical functions with formulas, graphs, and mathematical tables, National Bureau of Standards Applied Mathematics Series, vol. 55, U.S. Government Printing Office, Washington, D.C., 1964. MR0167642.
[Bar2001]
Yu. Baryshnikov, GUEs and queues, Probab. Theory Related Fields 119 (2001), no. 2, 256–274, DOI 10.1007/PL00008760. MR1818248.
[BFdP2011]
N. Bebiano, S. Furtado, and J. da Providência, On the eigenvalues of principal submatrices of $J$-normal matrices, Linear Algebra Appl. 435 (2011), no. 12, 3101–3114, DOI 10.1016/j.laa.2011.05.033. MR2831599.
[BG1978]
D. Boley and G. H. Golub, Inverse eigenvalue problems for band matrices, Numerical analysis (Proc. 7th Biennial Conf., Univ. Dundee, Dundee, 1977), Lecture Notes in Math., vol. 630, Springer, Berlin, 1978, pp. 23–31. MR0474741.
[BK2016]
A. Bahmani and D. Kiani, An explicit formula for the eigenvectors of acyclic matrices and weighted trees, https://arxiv.org/abs/1609.02399, 2016.
[BM1941]
G. Birkhoff and S. MacLane, A Survey of Modern Algebra, Macmillan Company, New York, 1941. MR0005093.
[BT2013]
S. C. Billey and B. E. Tenner, Fingerprint databases for theorems, Notices Amer. Math. Soc. 60 (2013), no. 8, 1034–1039, DOI 10.1090/noti1029. MR3113227.
[Cau1841]
A. L. Cauchy, Exercices d'analyse et de physique mathématique, vol. 2, Bachelier, 1841, p. 154.
[CG2002]
M. T. Chu and G. H. Golub, Structured inverse eigenvalue problems, Acta Numer. 11 (2002), 1–71, DOI 10.1017/S0962492902000016. MR2008966.
[Che2019]
X. Chen, Note on eigenvectors from eigenvalues, https://arxiv.org/abs/1911.09081, 2019.
[Cot1974]
R. W. Cottle, Manifestations of the Schur complement, Linear Algebra Appl. 8 (1974), 189–211, DOI 10.1016/0024-3795(74)90066-4. MR354727.
[Cra1750]
G. Cramer, Introduction à l'analyse des lignes courbes algébriques, Chez les frères Cramer et C. Philibert, Geneva, 1750, pp. 657–659.
[CRS2007]
D. Cvetković, P. Rowlinson, and S. K. Simić, Star complements and exceptional graphs, Linear Algebra Appl. 423 (2007), no. 1, 146–154, DOI 10.1016/j.laa.2007.01.008. MR2312331.
[DE2002]
I. Dumitriu and A. Edelman, Matrix models for beta ensembles, J. Math. Phys. 43 (2002), no. 11, 5830–5847, DOI 10.1063/1.1507823. MR1936554.
[Dem1997]
J. W. Demmel, Applied numerical linear algebra, SIAM, Philadelphia, PA, 1997, DOI 10.1137/1.9781611971446. MR1463942.
[Den2019]
P. B. Denton, Eigenvalue-eigenvector identity code, https://github.com/PeterDenton/Eigenvector-Eigenvalue-Identity, 2019.
[DH1978]
E. Deutsch and H. Hochstadt, On Cauchy's inequalities for Hermitian matrices, Amer. Math. Monthly 85 (1978), no. 6, 486–487, DOI 10.2307/2320075. MR491766.
[DLNT1986]
P. Deift, L. C. Li, T. Nanda, and C. Tomei, The Toda flow on a generic orbit is integrable, Comm. Pure Appl. Math. 39 (1986), no. 2, 183–232, DOI 10.1002/cpa.3160390203. MR820068.
[DPTZ2019]
P. B. Denton, S. J. Parke, T. Tao, and X. Zhang, Eigenvectors from eigenvalues, https://arxiv.org/abs/1908.03795v1, Aug 2019.
[DPZ2020]
P. B. Denton, S. J. Parke, and X. Zhang, Neutrino oscillations in matter via eigenvalues, Phys. Rev. D 101 (2020), no. 9, 093001, DOI 10.1103/physrevd.101.093001. MR4111110.
[ESY2009]
L. Erdős, B. Schlein, and H.-T. Yau, Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices, Ann. Probab. 37 (2009), no. 3, 815–852, DOI 10.1214/08-AOP421. MR2537522.
[FZ2019]
P. J. Forrester and J. Zhang, Co-rank 1 projections and the randomised Horn problem, https://arxiv.org/abs/1905.05314, May 2019.
[Gan1959]
F. R. Gantmacher, The theory of matrices. Vols. 1, 2 (translated by K. A. Hirsch), Chelsea Publishing Co., New York, 1959. MR0107649.
[GE1994]
M. Gu and S. C. Eisenstat, A stable and efficient algorithm for the rank-one modification of the symmetric eigenproblem, SIAM J. Matrix Anal. Appl. 15 (1994), no. 4, 1266–1276, DOI 10.1137/S089547989223924X. MR1293916.
[GGKL2017]
C. Godsil, K. Guo, M. Kempton, and G. Lippner, State transfer in strongly regular graphs with an edge perturbation, J. Combin. Theory Ser. A, https://www.sciencedirect.com/science/article/abs/pii/S0097316519301621, 2017.
[GKV2012]
S. Galais, J. Kneller, and C. Volpe, The neutrino-neutrino interaction effects in supernovae: the point of view from the matter basis, J. Phys. G 39 (2012), 035201.
[Gla2004]
G. M. L. Gladwell, Inverse problems in vibration, 2nd ed., Solid Mechanics and its Applications, vol. 119, Kluwer Academic Publishers, Dordrecht, 2004. MR2102477.
[GM1981]
C. D. Godsil and B. D. McKay, Spectral conditions for the reconstructibility of a graph, J. Combin. Theory Ser. B 30 (1981), no. 3, 285–289, DOI 10.1016/0095-8956(81)90046-0. MR624545.
[God1993]
C. D. Godsil, Algebraic combinatorics, Chapman and Hall Mathematics Series, Chapman & Hall, New York, 1993. MR1220704.
[God2012]
C. Godsil, When can perfect state transfer occur?, Electron. J. Linear Algebra 23 (2012), 877–890, DOI 10.13001/1081-3810.1563. MR2992400.
[Gol1973]
G. H. Golub, Some modified matrix eigenvalue problems, SIAM Rev. 15 (1973), 318–334, DOI 10.1137/1015032. MR329227.
[GS1995]
B. Gaveau and L. S. Schulman, Limited quantum decay, J. Phys. A 28 (1995), no. 24, 7359–7374. MR1381425.
[GVL1983]
G. H. Golub and C. F. Van Loan, Matrix computations, Johns Hopkins Series in the Mathematical Sciences, vol. 3, Johns Hopkins University Press, Baltimore, MD, 1983. MR733103.
[Hag2002]
E. M. Hagos, Some results on graph spectra, Linear Algebra Appl. 356 (2002), 103–111, DOI 10.1016/S0024-3795(02)00324-5. MR1944680.
[Hal1942]
P. R. Halmos, Finite Dimensional Vector Spaces, Annals of Mathematics Studies, no. 7, Princeton University Press, Princeton, N.J., 1942. MR0006591.
[Jac1834]
C. G. J. Jacobi, De binis quibuslibet functionibus homogeneis secundi ordinis per substitutiones lineares in alias binas tranformandis, quae solis quadratis variabilium constant; una cum variis theorematis de tranformatione et determinatione integralium multiplicium (Latin), J. Reine Angew. Math. 12 (1834), 1–69, DOI 10.1515/crll.1834.12.1. MR1577999.
[Kau2018]
E. Kausel, Normalized modes at selected points without normalization, J. Sound Vibration 420 (2018), 261–268.
[Kny1986]
A. Knyazev, Computation of eigenvalues and eigenvectors for mesh problems: algorithms and error estimates (Russian), Ph.D. thesis, Department of Numerical Mathematics, USSR Academy of Sciences, Moscow, 1986.
[KS1991]
A. V. Knyazev and A. L. Skorokhodov, On exact estimates of the convergence rate of the steepest ascent method in the symmetric eigenvalue problem, Linear Algebra Appl. 154/156 (1991), 245–257, DOI 10.1016/0024-3795(91)90379-B. MR1113145.
[LF1979]
Q. Li and K. Q. Feng, On the largest eigenvalue of a graph (Chinese), Acta Math. Appl. Sinica 2 (1979), no. 2, 167–175. MR549045.
[Löw1934]
K. Löwner, Über monotone Matrixfunktionen (German), Math. Z. 38 (1934), no. 1, 177–216, DOI 10.1007/BF01170633. MR1545446.
[MD1989]
A. K. Mukherjee and K. K. Datta, Two new graph-theoretical methods for generation of eigenvectors of chemical graphs, Proc. Indian Acad. Sci. (Chem. Sci.) 101 (1989), no. 6, 499–517.
[NTU1993]
P. Nylen, T. Y. Tam, and F. Uhlig, On the eigenvalues of principal submatrices of normal, Hermitian and symmetric matrices, Linear and Multilinear Algebra 36 (1993), no. 1, 69–78, DOI 10.1080/03081089308818276. MR1308910.
[Pai1971]
C. C. Paige, The computation of eigenvalues and eigenvectors of very large sparse matrices, Ph.D. thesis, London University Institute of Computer Science, 1971.
[Par1980]
B. N. Parlett, The symmetric eigenvalue problem, Prentice-Hall Series in Computational Mathematics, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1980. MR570116.
[Š1969]
G. E. Šilov, Matematicheskiĭ analiz: Vtoroĭ spetsial′nyĭ kurs [Mathematical analysis: a second special course] (Russian), Izdat. "Nauka", Moscow, 1965. MR0219869.
[Sta2019]
M. Stawiska, More on the eigenvectors-from-eigenvalues identity, https://arxiv.org/abs/1912.06967, 2019.
[Tho1966]
R. C. Thompson, Principal submatrices of normal and Hermitian matrices, Illinois J. Math. 10 (1966), 296–308. MR190151.
[Tho1969]
R. C. Thompson, Principal submatrices. IV. On the independence of the eigenvalues of different principal submatrices, Linear Algebra Appl. 2 (1969), 355–374, DOI 10.1016/0024-3795(69)90037-8. MR245601.
[TM1968]
R. C. Thompson and P. McEnteggert, Principal submatrices. II. The upper and lower quadratic inequalities, Linear Algebra Appl. 1 (1968), 211–243, DOI 10.1016/0024-3795(68)90005-0. MR237532.
[Tos1991]
S. Toshev, On T violation in matter neutrino oscillations, Mod. Phys. Lett. A 6 (1991), 455–460.
[TV2011]
T. Tao and V. Vu, Random matrices: universality of local eigenvalue statistics, Acta Math. 206 (2011), no. 1, 127–204, DOI 10.1007/s11511-011-0061-3. MR2784665.
[Van2014]
P. Van Mieghem, Graph eigenvectors, fundamental weights and centrality metrics for nodes in networks, https://arxiv.org/abs/1401.4580, Jan 2014.
[VM2011]
P. Van Mieghem, Graph spectra for complex networks, Cambridge University Press, Cambridge, 2011. MR2767173.
[Wei1960]
H. F. Weinberger, Error bounds in the Rayleigh–Ritz approximation of eigenvectors, J. Res. Nat. Bur. Standards Sect. B 64B (1960), 217–225. MR129121.
[Wil1963]
J. H. Wilkinson, Rounding errors in algebraic processes, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1963. MR0161456.
[Wol2019]
N. Wolchover, Neutrinos lead to unexpected discovery in basic math, Quanta Magazine, Nov 2019.
[Xu1995]
S. Xu, Theories and Methods of Matrix Calculations (Chinese), Peking University Press, Beijing, 1995.

Article Information

MSC 2020
Primary: 15A18 (Eigenvalues, singular values, and eigenvectors)
Secondary: 15A42 (Inequalities involving eigenvalues and eigenvectors), 15B57 (Hermitian, skew-Hermitian, and related matrices)
Author Information
Peter B. Denton
Department of Physics, Brookhaven National Laboratory, Upton, New York 11973
pdenton@bnl.gov
Stephen J. Parke
Theoretical Physics Department, Fermi National Accelerator Laboratory, Batavia, Illinois 60510
parke@fnal.gov
Terence Tao
Department of Mathematics, University of California, Los Angeles, Los Angeles, California 90095-1555
tao@math.ucla.edu
Xining Zhang
Enrico Fermi Institute & Department of Physics, University of Chicago, Chicago, Illinois 60637
xining@uchicago.edu
Additional Notes

The first author acknowledges the United States Department of Energy under Grant Contract DE-SC0012704 and the Fermilab Neutrino Physics Center.

This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. FERMILAB-PUB-19-377-T.

The third author was supported by a Simons Investigator grant, the James and Carol Collins Chair, the Mathematical Analysis & Application Research Fund Endowment, and by NSF grant DMS-1764034.

Journal Information
Bulletin of the American Mathematical Society, Volume 59, Issue 1, ISSN 1088-9485, published by the American Mathematical Society, Providence, Rhode Island.
Publication History
This article was received on and published on .
Copyright Information
Copyright 2021 American Mathematical Society