Another observation about operator compressions

Let $T$ be a self-adjoint operator on a finite dimensional Hilbert space. It is shown that the distribution of the eigenvalues of a compression of $T$ to a subspace of a given dimension is almost the same for almost all subspaces. This is a coordinate-free analogue of a recent result of Chatterjee and Ledoux on principal submatrices. The proof is based on measure concentration and entropy techniques, and the result improves on some aspects of the result of Chatterjee and Ledoux.


Introduction
Let $T$ be an operator on a (real or complex) $n$-dimensional Hilbert space $H$, and let $E \subseteq H$ be a subspace. The compression of $T$ to $E$ is the operator $T_E = \pi_E T|_E = \pi_E T \pi_E^*$ on $E$, where $\pi_E : H \to E$ is the orthogonal projection. The spectral distribution of a self-adjoint operator $T$ is the probability measure
$$\mu_T = \frac{1}{n} \sum_{i=1}^n \delta_{\lambda_i(T)}$$
on $\mathbb{R}$, where $\lambda_1(T) \ge \cdots \ge \lambda_n(T)$ are the eigenvalues of $T$, counted with multiplicity.
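To make the definitions concrete, the following small numerical sketch (dimensions and variable names are mine, chosen purely for illustration) forms the compression of a symmetric matrix to a Haar-random $k$-dimensional subspace, represented by an orthonormal basis obtained from the QR factorization of a Gaussian matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# A fixed self-adjoint operator T on H = R^n (here a random symmetric matrix).
A = rng.standard_normal((n, n))
T = (A + A.T) / 2

# A Haar-distributed k-dimensional subspace E, represented by an n x k matrix V
# whose columns form an orthonormal basis of E.
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

# The compression T_E = pi_E T pi_E^*, expressed in the chosen basis of E.
T_E = V.T @ T @ V

# T_E is again self-adjoint, with k real eigenvalues; its empirical spectral
# distribution puts mass 1/k at each of them.  By the min-max principle the
# eigenvalues of T_E lie in [lambda_n(T), lambda_1(T)].
eig_T = np.linalg.eigvalsh(T)   # ascending order
eig_E = np.linalg.eigvalsh(T_E)
assert np.allclose(T_E, T_E.T)
assert eig_T[0] - 1e-12 <= eig_E[0] and eig_E[-1] <= eig_T[-1] + 1e-12
```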
The following result shows that for $1 \le k \le n$ and a self-adjoint operator $T$ on an $n$-dimensional Hilbert space $H$, the empirical spectral distribution of the compression $T_E$ is almost the same for almost every $k$-dimensional subspace $E \subseteq H$. The notations $\sigma_k$ and $\rho$ are explained after the statement of the theorem; $d_1$ denotes the Kantorovich-Rubinstein metric on probability measures, also defined below.

Theorem 1. Let $H$ be an $n$-dimensional Hilbert space, $T$ a self-adjoint operator on $H$, and $1 \le k \le n$. Let $E$ be a $k$-dimensional subspace of $H$ chosen at random with respect to the rotationally invariant probability measure on the Grassmann manifold. Let $\mu_E$ be the empirical spectral distribution of the compression of $T$ to $E$, and let $\mu = \mathbb{E}\mu_E$. Then
$$(1) \qquad \mathbb{E}\, d_1(\mu_E, \mu) \le C\, \sigma_k(T)^{2/7} \rho(T)^{5/7} (kn)^{-1/7},$$
and for every $t > 0$,
$$(2) \qquad \mathbb{P}\big[ d_1(\mu_E, \mu) \ge \mathbb{E}\, d_1(\mu_E, \mu) + t \big] \le e^{-c k n t^2 / \sigma_k(T)^2},$$
where $C, c > 0$ are absolute constants.

Here $\rho(T) = \frac{1}{2}(\lambda_1(T) - \lambda_n(T))$ denotes one half the spectral diameter of $T$ (which is different in general from the classical spectral radius); it is easy to check that $\rho(T)$ is the distance of $T$ from the space of real scalar operators with respect to the operator norm. For $1 \le k \le n$,
$$\sigma_k(T) = \min_{c \in \mathbb{R}} \Big( \sum_{i=1}^k s_i(T - cI)^2 \Big)^{1/2},$$
where $s_1(\cdot) \ge \cdots \ge s_n(\cdot) \ge 0$ denote singular values. That is, $\sigma_k(T)$ is the distance of $T$ from the space of real scalar operators with respect to the norm $\|T\|_{(k),2} = \big( \sum_{i=1}^k s_i(T)^2 \big)^{1/2}$.

The space of probability measures (with finite first moment) on $\mathbb{R}$ is equipped with the Kantorovich-Rubinstein or $L_1$-Wasserstein distance $d_1$, which may be equivalently defined in the following three ways:
$$(3) \qquad d_1(\mu, \nu) = \inf_\pi \int |x - y| \, d\pi(x, y) = \sup_f \left| \int f \, d\mu - \int f \, d\nu \right| = \int_{\mathbb{R}} |F_\mu(x) - F_\nu(x)| \, dx.$$
Here $\pi$ varies over all probability measures on $\mathbb{R} \times \mathbb{R}$ with marginals $\mu$ and $\nu$; $f$ varies over all Lipschitz continuous functions $\mathbb{R} \to \mathbb{R}$ with Lipschitz constant at most 1; and $F_\mu$, $F_\nu$ are the cumulative distribution functions of $\mu$, $\nu$. All three characterizations will be used in this note; for the equalities see [10, Chapter 1].

Theorem 1 is a coordinate-free analogue of a recent result of Chatterjee and Ledoux [3], which considered the empirical spectral measure of a random $k \times k$ principal submatrix of a fixed $n \times n$ Hermitian matrix.
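The quantities appearing in the theorem are easy to compute for small matrices. The sketch below (helper names are mine; the scalar $c$ in the definition of $\sigma_k$ is found by a crude grid search, purely for illustration) implements $\rho(T)$, $\sigma_k(T)$, and the $d_1$ distance between two empirical spectral distributions with the same number of atoms, using the fact that for equal-size, equal-weight samples the optimal coupling matches sorted atoms.

```python
import numpy as np

def rho(T):
    # half the spectral diameter: (lambda_1 - lambda_n) / 2
    e = np.linalg.eigvalsh(T)            # eigenvalues in ascending order
    return (e[-1] - e[0]) / 2

def sigma_k(T, k, grid=4001):
    # distance from T to the real scalar operators in the norm
    # ||M||_(k),2 = (sum of the k largest squared singular values)^{1/2};
    # the scalar c is optimized by grid search over [lambda_n, lambda_1]
    n = T.shape[0]
    e = np.linalg.eigvalsh(T)
    def norm_k2(M):
        s = np.linalg.svd(M, compute_uv=False)   # descending singular values
        return np.sqrt(np.sum(s[:k] ** 2))
    cs = np.linspace(e[0], e[-1], grid)
    return min(norm_k2(T - c * np.eye(n)) for c in cs)

def d1(xs, ys):
    # Kantorovich-Rubinstein distance between two empirical measures with the
    # same number of atoms: the optimal coupling matches sorted atoms
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))

T = np.diag([0.0, 1.0, 2.0, 3.0])
assert rho(T) == 1.5
# for k = n, sigma_k is the Frobenius distance to the scalar operators,
# attained at c = mean eigenvalue: sqrt(2.25 + 0.25 + 0.25 + 2.25) = sqrt(5)
assert abs(sigma_k(T, 4) - np.sqrt(5)) < 1e-2
assert d1([0.0, 1.0], [1.0, 2.0]) == 1.0
```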
The approach taken in [3] is rather different from the one taken here; the result of [3] is also stated in terms of the Kolmogorov distance between measures, rather than the Wasserstein distance. See Section 3 below for a more detailed comparison of the results.

Proof of Theorem 1
Throughout this section let $H$ and $T$ be fixed, and let $\mu_E$ and $\mu$ be as defined in the statement of the theorem. For brevity we write $\sigma_k = \sigma_k(T)$ and $\rho = \rho(T)$. The notation $A \lesssim B$ means $A \le cB$, where $c > 0$ is some absolute constant.
Recall that the Grassmann manifold $G_k(H)$ of $k$-dimensional subspaces of $H$ is equipped with a rotation-invariant metric. By the self-adjointness of $T$ and the Cauchy-Schwarz inequality, the map $E \mapsto \mu_E$ is Lipschitz with respect to this metric and $d_1$. Observing that $d_1(\mu_E, \mu_F)$ is invariant under the addition of a real scalar operator to $T$ (both spectra are shifted by the same constant), the lemma is proved.
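The scalar-invariance observation is easy to check numerically: adding $cI$ to $T$ shifts the spectra of both compressions by the same constant $c$, leaving $d_1(\mu_E, \mu_F)$ unchanged. A minimal sketch (random matrices and names of my choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, c = 6, 3, 2.5

A = rng.standard_normal((n, n))
T = (A + A.T) / 2

def rand_subspace():
    # orthonormal basis of a Haar-random k-dimensional subspace
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    return Q

def spec(M, V):
    # spectrum of the compression of M to the subspace spanned by V's columns
    return np.linalg.eigvalsh(V.T @ M @ V)

def d1(xs, ys):
    # d_1 between empirical measures with equally many atoms: match sorted atoms
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))

VE, VF = rand_subspace(), rand_subspace()
before = d1(spec(T, VE), spec(T, VF))
after = d1(spec(T + c * np.eye(n), VE), spec(T + c * np.eye(n), VF))

# adding a real scalar operator to T does not change d_1(mu_E, mu_F)
assert abs(before - after) < 1e-10
```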
The same proof as above can be carried out (and is slightly simpler) with the Kantorovich-Rubinstein distance replaced by the $L_2$-Wasserstein distance, although this observation will not be used here.
The following concentration inequality goes back to Gromov and Milman [5]; see also Section 2.1 of [6], where it is pointed out explicitly that the same result applies in the complex case.

Theorem 3. There are absolute constants $C, c > 0$ such that if $F : G_k(H) \to \mathbb{R}$ is $L$-Lipschitz and $E$ is distributed according to the rotation-invariant probability measure on $G_k(H)$, then for every $t > 0$,
$$\mathbb{P}\big[ |F(E) - \mathbb{E}F(E)| \ge t \big] \le C e^{-cnt^2/L^2}.$$
Observe that (1), Lemma 2, and Theorem 3 together imply (2), so it suffices now to prove (1). Let $E \in G_k(H)$ be distributed according to the rotation-invariant probability measure on $G_k(H)$. For a given function $f : \mathbb{R} \to \mathbb{R}$, define the random variable
$$X_f = \int f \, d\mu_E - \int f \, d\mu.$$
By Lemma 2 and Theorem 3, for Lipschitz continuous functions $f, g$,
$$(4) \qquad \mathbb{P}\big[ |X_f - X_g| \ge t \big] \le C e^{-cknt^2/(\sigma_k^2 |f - g|_L^2)}.$$
The inequality (4) shows that the random process $X_f$, indexed by some family $\mathcal{F}$ of Lipschitz continuous test functions (to be determined), satisfies a subgaussian increment condition with respect to the norm $\|f\|' = \sigma_k |f|_L / \sqrt{kn}$. This raises the possibility of estimating its expected supremum by Dudley's entropy bound [4] (see also [8]):
$$(5) \qquad \mathbb{E} \sup_{f \in \mathcal{F}} X_f \lesssim \int_0^\infty \sqrt{\log N(\mathcal{F}, \|\cdot\|', \varepsilon)} \, d\varepsilon,$$
where $N(\mathcal{F}, \|\cdot\|', \varepsilon)$ is the minimum number of sets of diameter $\varepsilon$ with respect to $\|\cdot\|'$ needed to cover $\mathcal{F}$. Since $\mu_E$ and $\mu$ are supported on $[\lambda_n, \lambda_1]$, in estimating $d_1(\mu_E, \mu)$ via the second representation in (3) one may restrict attention to 1-Lipschitz test functions $f$ with $\|f\|_\infty \le 2\rho$. Thus to prove (1), it suffices to estimate $\mathbb{E} \sup_{f \in \mathcal{F}} X_f$ for $\mathcal{F} = \{f : \|f\|_{C^1} \le 1 + 2\rho\}$. However, since $C^1$ is an infinite-dimensional function space, for this choice of $\mathcal{F}$ the covering numbers $N(\mathcal{F}, \|\cdot\|', \varepsilon)$ in (5) will always be infinite for small $\varepsilon$.
The covering numbers $N(\mathcal{F}, \|\cdot\|_{C^1}, \varepsilon)$ can be estimated using the methods of [9, § 2.7]; see [7] for explicit estimates which, combined with (5) and a linear change of variables, yield the bound (6) on $\mathbb{E}\sup X_f$. The bound (1) is now derived from (6) via a smoothing and scaling argument. Fix $f : \mathbb{R} \to \mathbb{R}$ with $|f|_L \le 1$ and $\|f\|_\infty \le 2\rho$. Let $\varphi : \mathbb{R} \to \mathbb{R}$ be a smooth probability density with finite first absolute moment and $\varphi' \in L_1(\mathbb{R})$. For $t > 0$ define $\varphi_t(x) = \frac{1}{t}\varphi(\frac{x}{t})$, and let $g_t = f * \varphi_t$, so that $g_t$ is smooth, $\|f - g_t\|_\infty \lesssim t$, and the $C^1$ norm of $g_t$ is controlled in terms of $\rho$ and $t$ (7). Integrating against an arbitrary probability measure $\nu$ on $\mathbb{R}$, and applying (6) and (7) with $t$ chosen of the optimal order, one arrives at (8). Now apply (8) with the operator $T$ replaced by $sT$ for $s > 0$. It is easy to check that the Kantorovich-Rubinstein distance $d_1(\mu_E, \mu)$ is homogeneous of degree one with respect to this rescaling, as are $\sigma_k$ and $\rho$. Thus one obtains a family of bounds indexed by $s$, and picking $s$ of the order $(kn)^{1/7} \sigma_k^{2/7} \rho^{5/7}$ now yields (1).
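The phenomenon behind Theorem 1 is easy to observe in simulation. The sketch below is my own illustration, not part of the proof: it takes a GOE-type matrix, samples a few hundred Haar-random subspaces, uses the pooled sample of all compression eigenvalues as a stand-in for $\mu = \mathbb{E}\mu_E$, and computes $d_1$ by integrating $|F_\mu - F_\nu|$ directly. The observed distances are a small fraction of the trivial bound $2\rho(T)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, N = 60, 10, 200

A = rng.standard_normal((n, n))
T = (A + A.T) / 2

def d1(xs, ys):
    # integral of |F_mu - F_nu| over R for two equal-weight atomic measures
    grid = np.sort(np.concatenate([xs, ys]))
    Fx = np.searchsorted(np.sort(xs), grid, side='right') / len(xs)
    Fy = np.searchsorted(np.sort(ys), grid, side='right') / len(ys)
    return np.sum(np.abs(Fx - Fy)[:-1] * np.diff(grid))

# spectra of compressions to N independent Haar-random k-dim subspaces
spectra = []
for _ in range(N):
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    spectra.append(np.linalg.eigvalsh(V.T @ T @ V))

pooled = np.concatenate(spectra)          # proxy for the mean measure mu
mean_d1 = np.mean([d1(s, pooled) for s in spectra])

e = np.linalg.eigvalsh(T)
two_rho = e[-1] - e[0]                    # trivial bound on d_1(mu_E, mu)

# concentration: the typical distance is far below the trivial bound
assert mean_d1 < 0.2 * two_rho
```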

Discussion
In [3], Chatterjee and Ledoux proved a version of Theorem 1 for principal submatrices. Namely, let $H = \mathbb{C}^n$, and suppose $E$ is now uniformly distributed among the $k$-dimensional coordinate subspaces of $\mathbb{C}^n$. Then [3] shows a deviation inequality (9) for $d_\infty(\mu_E, \mu)$, valid for every $t > 0$, and consequently a bound (10) on the typical size of $d_\infty(\mu_E, \mu)$. Here $d_\infty(\mu, \nu) = \|F_\mu - F_\nu\|_\infty$ is the Kolmogorov distance between probability measures $\mu$ and $\nu$ on $\mathbb{R}$.
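In the Chatterjee-Ledoux setting the random subspace is a coordinate subspace, i.e. one samples a random $k \times k$ principal submatrix. The sketch below (my own illustration, not from [3]) draws two independent random principal submatrices of a symmetric matrix and computes the Kolmogorov distance between their empirical spectral distributions, which the concentration result predicts to be small.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 50

A = rng.standard_normal((n, n))
T = (A + A.T) / 2

def d_inf(xs, ys):
    # Kolmogorov distance ||F_mu - F_nu||_inf between two empirical CDFs;
    # both CDFs are step functions, so the supremum is attained at an atom
    grid = np.sort(np.concatenate([xs, ys]))
    Fx = np.searchsorted(np.sort(xs), grid, side='right') / len(xs)
    Fy = np.searchsorted(np.sort(ys), grid, side='right') / len(ys)
    return np.max(np.abs(Fx - Fy))

# two independent uniformly random k-element coordinate subsets, i.e.
# two random k x k principal submatrices of T
idx1 = rng.choice(n, size=k, replace=False)
idx2 = rng.choice(n, size=k, replace=False)
e1 = np.linalg.eigvalsh(T[np.ix_(idx1, idx1)])
e2 = np.linalg.eigvalsh(T[np.ix_(idx2, idx2)])

# the two empirical spectral distributions are nearly the same
dist = d_inf(e1, e2)
assert 0 <= dist < 0.5
```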
It is likely that the methods of this paper could be used to prove a result in the setting of [3], by replacing Theorem 3, which follows from concentration inequalities on the unitary or special orthogonal group, with an appropriate concentration inequality on the symmetric group S n . Furthermore, it may be possible to prove a result in the setting of this paper using methods related to those of [3], such as adapting the approach of Chatterjee in [2]. Below some quantitative comparison will be offered between Theorem 1 and the result of [3], ignoring the fact that the random subspace E has a different distribution in each setting. In particular, the distribution of E is probably responsible for the difference between the subexponential tail decay in (9) and the subgaussian tail decay in (2).
Before discussing more specific quantitative comparisons, we note that the clearest difference between the two results is that ours is coordinate-free. While there are settings in which coordinates have meaning and thus coordinate-oriented results are natural, there are many settings in which there is no clearly preferred basis in which to view an operator. Take, for example, the Laplacian $\Delta$ on the sphere $S^{n-1}$. It has eigenvalues (up to sign convention) $0 > -\lambda_1 > -\lambda_2 > \cdots \to -\infty$, and the corresponding eigenspaces are multidimensional. If one takes $H$ to be the span of the first $m$ eigenspaces, with $T = \Delta|_H$, there is no canonical choice of basis within each eigenspace, and so it seems more natural to consider compressions of $T$ to all subspaces of a given dimension, rather than only to the coordinate subspaces for some choice of basis.
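To see how quickly these eigenspaces grow, one can tabulate the spectrum. The sketch below (helper name mine) lists the eigenvalues $-\ell(\ell + n - 2)$ of the Laplacian on $S^{n-1}$ together with the dimensions of the corresponding spaces of degree-$\ell$ spherical harmonics.

```python
from math import comb

def sphere_laplacian_spectrum(n, lmax):
    """Eigenvalues -l(l + n - 2) of the Laplacian on S^{n-1}, for l = 0..lmax,
    paired with the dimension of the corresponding eigenspace of degree-l
    spherical harmonics: dim H_l = C(n + l - 1, n - 1) - C(n + l - 3, n - 1)."""
    out = []
    for l in range(lmax + 1):
        second = comb(n + l - 3, n - 1) if l >= 2 else 0
        out.append((-l * (l + n - 2), comb(n + l - 1, n - 1) - second))
    return out

# On S^2 (n = 3) this recovers the familiar multiplicities 2l + 1:
assert sphere_laplacian_spectrum(3, 4) == [(0, 1), (-2, 3), (-6, 5), (-12, 7), (-20, 9)]
```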
Comparisons of the results are made somewhat difficult by the fact that the Kantorovich-Rubinstein distance $d_1$ and the Kolmogorov distance $d_\infty$ are not comparable in general. However, since the measures here are all supported in the interval $[\lambda_n(T), \lambda_1(T)]$, from the third representation of $d_1$ in (3) one obtains the estimate
$$(11) \qquad d_1(\mu, \nu) \le 2\rho(T) \, d_\infty(\mu, \nu)$$
in the present context. This estimate is related to a qualitative difference between $d_1$ and $d_\infty$: whereas $d_1$ is homogeneous with respect to a rescaling of the supports of measures (a fact which was exploited in the proof of Theorem 1), $d_\infty$ is invariant under rescaling. Which behavior is more convenient may vary with the context.

Inequality (11) makes some quantitative comparisons between the results of [3] and Theorem 1 possible. Observe that (9) and (10) only yield nontrivial information if $k \gg 1$ (which of course requires $n \gg 1$), whereas under appropriate scaling, Theorem 1 is nontrivial for $n \gg 1$ even if $k$ is small. In particular, (9) and (11) imply that the fluctuations of $d_1(\mu_E, \mu)$ above its mean are of order (ignoring logarithmic factors) at most $k^{-1/2}\rho(T)$, whereas (2) together with the general estimate $\sigma_k(T) \le \sqrt{k}\,\rho(T)$ yields fluctuations of order at most $(kn)^{-1/2}\sigma_k(T) \le n^{-1/2}\rho(T)$.

The issue of the expected distance is more complicated. The general estimate $\rho(T) \le \sigma_k(T)$ and inequalities (10) and (11) imply a bound (12) which is slightly weaker than (1) for $k$ large (in which case the lossy estimates used to arrive at (12) mean that the comparison should probably not be taken too seriously) and significantly weaker for $k$ small. Since the different distributions of $E$ are being ignored here, there is little point in making the comparison very precise. Finally, the comparison of fluctuations highlights that the methods of this paper are more sensitive to the proximity of $T$ to the space of scalar operators.
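Since all the measures involved are supported in an interval of length $2\rho(T)$, integrating $|F_\mu - F_\nu|$ over that interval bounds $d_1$ by $2\rho(T)$ times $d_\infty$. This relation between the two metrics can be sanity-checked numerically on compression spectra (the setup and names are mine):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 40, 8

A = rng.standard_normal((n, n))
T = (A + A.T) / 2
e = np.linalg.eigvalsh(T)
two_rho = e[-1] - e[0]        # length of the interval [lambda_n, lambda_1]

def cdfs(xs, ys):
    grid = np.sort(np.concatenate([xs, ys]))
    Fx = np.searchsorted(np.sort(xs), grid, side='right') / len(xs)
    Fy = np.searchsorted(np.sort(ys), grid, side='right') / len(ys)
    return grid, Fx, Fy

def d1(xs, ys):
    grid, Fx, Fy = cdfs(xs, ys)
    return np.sum(np.abs(Fx - Fy)[:-1] * np.diff(grid))

def d_inf(xs, ys):
    _, Fx, Fy = cdfs(xs, ys)
    return np.max(np.abs(Fx - Fy))

for _ in range(20):
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    W, _ = np.linalg.qr(rng.standard_normal((n, k)))
    sE = np.linalg.eigvalsh(V.T @ T @ V)   # lies in [lambda_n, lambda_1]
    sF = np.linalg.eigvalsh(W.T @ T @ W)
    # d_1 <= 2 rho(T) * d_inf for measures supported in the interval
    assert d1(sE, sF) <= two_rho * d_inf(sE, sF) + 1e-9
```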
If $T$ is a (real) scalar operator then $\mu_E$ is a constant point mass, so it is natural to expect that if $T$ is nearly scalar in some sense then $\mu_E$ will be more tightly concentrated than in general. The results of [3] do not directly reflect this at all, although the estimate (11) allows one to insert this effect by hand when changing metrics. However, $\sigma_k(T)$ provides a sharper measure than $\rho(T)$ of how close $T$ is to scalar, and in some cases the bound $(kn)^{-1/2}\sigma_k(T)$ on the order of the fluctuations may be much smaller than $n^{-1/2}\rho(T)$. This is the case, for example, if $T$ has a large number of tightly clustered eigenvalues and a small number of outliers.
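A toy example of this last point (the construction is mine): when most eigenvalues are clustered at $0$ with a single outlier at $1$, one has $\sigma_k(T) \approx 1$ while $\sqrt{k}\,\rho(T) = \sqrt{k}/2$, so the fluctuation bound $(kn)^{-1/2}\sigma_k(T)$ is noticeably smaller than the bound $n^{-1/2}\rho(T)$ alone would give.

```python
import numpy as np

n, k = 200, 100

# many eigenvalues tightly clustered at 0, one outlier at 1
eigs = np.concatenate([np.full(n - 1, 0.0), [1.0]])
T = np.diag(eigs)

rho = (eigs.max() - eigs.min()) / 2              # = 0.5

def sigma_k(T, k, grid=2001):
    # distance to the scalar operators in the (k,2) Ky Fan norm; for a
    # diagonal T the singular values of T - cI are |lambda_i - c|, and the
    # scalar c is found by grid search (illustration only)
    e = np.linalg.eigvalsh(T)
    cs = np.linspace(e[0], e[-1], grid)
    def norm_k2(c):
        s = np.sort(np.abs(e - c))[::-1]         # descending
        return np.sqrt(np.sum(s[:k] ** 2))
    return min(norm_k2(c) for c in cs)

sk = sigma_k(T, k)

# sigma_k stays near 1 (the outlier), while sqrt(k) * rho = 5
assert sk < 1.01
assert sk < 0.25 * np.sqrt(k) * rho
```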