Spectral and Dynamical Properties of Certain Random Jacobi Matrices with Growing Parameters

In this paper, a family of random Jacobi matrices, with off-diagonal terms that exhibit power-law growth, is studied. Since the growth of the randomness is slower than that of these terms, it is possible to use methods applied in the study of Schr\"odinger operators with random decaying potentials. A particular result of the analysis is the existence of operators with arbitrarily fast transport whose spectral measure is zero dimensional. The results are applied to the infinite Gaussian $\beta$ Ensembles and their spectral properties are analyzed.

The case $a(n) \equiv 1$, $\eta_2 < 0$ (``$\eta_1 = 0$''), and no perturbation off the diagonal, is the extensively studied family of one-dimensional discrete Schrödinger operators with a random decaying potential [3,4,5,13,14,19]. For these Schrödinger operators it has been established that when $\eta_2 < -\frac{1}{2}$, the absolutely continuous spectrum of the Laplacian is almost surely preserved. When $\eta_2 > -\frac{1}{2}$, however, the disorder wins over and the spectrum is pure point, with eigenfunctions that decay at a super-polynomial rate. At the critical point ($\eta_2 = -\frac{1}{2}$) the spectral behavior exhibits a sensitive dependence on the coupling constant: the generalized eigenfunctions decay polynomially, and the spectral measure is pure point or singular continuous according to whether these generalized eigenfunctions are in $\ell^2$ or not (for a comprehensive treatment of discrete Schrödinger operators with random decaying potentials, see Section 8 of [13]).
From this perspective, the extension presented in this paper is in allowing growth of the off-diagonal terms. Intuitively, these terms are responsible for transport, and thus their growth should have an effect on the spectrum similar to that of the decay of the potential (the diagonal terms). Indeed, a particular case of our analysis is that of $\eta_2 = 0$, namely, the diagonal terms are i.i.d. random variables. We show that the critical point here is $\eta_1 = \frac{1}{2}$. Below this value, the spectrum is a.s. pure point, whereas above it the spectral measure is one-dimensional.
More generally, let $\{X_\omega(n)\}_{n=1}^\infty$ be a sequence of i.i.d. random variables, and let $\{Y_\omega(n)\}_{n=1}^\infty$ be another such sequence (the distributions of the $X$'s and the $Y$'s need not be the same). Assume the following is satisfied: the common distribution of $X_\omega(n)$ is absolutely continuous with respect to Lebesgue measure.

Given the quadruple
(1.5)

The assumptions on the parameters defining $J_{\Upsilon,\omega}$ do not exclude the possibility that some of the off-diagonal terms vanish. However, with probability one, this may happen only finitely many times, so that $J_{\Upsilon,\omega}$ has an infinite part with strictly positive off-diagonal entries. In what follows, when we refer to $J_{\Upsilon,\omega}$, we refer to this part.
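The precise model is fixed by (1.5); for orientation only, the sketch below uses a representative instance (our choice, not a quote of (1.5)): $a_\omega(n) = \lambda_1 n^{\eta_1} + n^{\eta_2} Y_\omega(n)$ and $b_\omega(n) = n^{\eta_2} X_\omega(n)$, so the randomness is smaller than the leading growth by a factor $n^{-(\eta_1 - \eta_2)}$. It also checks the remark above: negative or vanishing off-diagonal entries occur only at finitely many sites.

```python
import numpy as np

# Hypothetical concrete instance of the J_{Upsilon,omega} model (the exact
# parametrization is given by (1.5)): off-diagonal terms grow like
# lambda1 * n^{eta1}, with a relatively smaller random part of size n^{eta2};
# the diagonal is a random sequence of the same size n^{eta2}.
def sample_jacobi(N, lam1=1.0, eta1=0.5, eta2=0.0, rng=None):
    rng = np.random.default_rng(rng)
    n = np.arange(1, N + 1, dtype=float)
    X = rng.standard_normal(N)          # diagonal randomness
    Y = rng.standard_normal(N)          # off-diagonal randomness
    b = n**eta2 * X                     # diagonal entries
    a = lam1 * n**eta1 + n**eta2 * Y    # off-diagonal entries
    J = np.diag(b) + np.diag(a[:-1], 1) + np.diag(a[:-1], -1)
    return J, a, b

J, a, b = sample_jacobi(2000, rng=0)
# Since a(n)/n^{eta1} -> lambda1 > 0 a.s., only finitely many off-diagonal
# terms can vanish or turn negative; we keep the infinite part beyond them.
last_bad = int(np.max(np.nonzero(a <= 0)[0])) if np.any(a <= 0) else -1
print("off-diagonal entries <= 0 occur only up to index", last_bad)
```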
We shall prove

Theorem 1.1. For the model above, let $\gamma = \eta_1 - \eta_2$ and let $\Lambda =$

The following holds with probability one:

(1) If $\gamma > \frac{1}{2}$, the spectrum of $J_{\Upsilon,\omega}$ is $\mathbb{R}$ and $\mu_{\Upsilon,\omega}$, the spectral measure of $J_{\Upsilon,\omega}$, is one-dimensional, meaning that it gives no weight to sets of Hausdorff dimension less than 1.
(2) In the case $\gamma = \frac{1}{2}$ the spectrum of $J_{\Upsilon,\omega}$ is $\mathbb{R}$ and we have the following two possibilities: 2 then the spectrum is pure point with eigenfunctions decaying like

Remark. We say that a measure, $\mu$, has exact Hausdorff dimension $\varrho$ if it is supported on a set of Hausdorff dimension $\varrho$ and gives no weight to sets of Hausdorff dimension less than $\varrho$. For more information concerning the decomposition of general measures with respect to their Hausdorff-dimensional properties, consult [15] and references therein.
Remark. In analogy to the Schrödinger case, one would expect absolutely continuous spectrum for $\gamma > 1/2$. Unfortunately, though we believe this to be true, a one-dimensional spectral measure is all we could obtain.
Remark. The requirement that $\{X_\omega(n)\}_{n=1}^\infty$ and $\{Y_\omega(n)\}_{n=1}^\infty$ be identically distributed sequences is not really necessary and is made here only to simplify the discussion.
In resemblance to the Schrödinger case, the proof of this theorem proceeds by analyzing the asymptotics of solutions to the formal eigenfunction equation ``$J\psi = E\psi$''. Namely, we shall analyze the solutions to the difference equation:

By a theorem of Kiselev and Last [12, Theorem 1.2], the results obtained have implications for the quantum dynamics associated with $J$, namely the behavior of a given vector $\psi$ under the operation of the one-parameter unitary group $e^{-itJ}$. More precisely, let $X$ be the position operator defined by $(X\psi)(n) = n\psi(n)$.
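The difference equation referred to here is the standard three-term recurrence for Jacobi matrices, $a(n)\psi(n+1) + b(n)\psi(n) + a(n-1)\psi(n-1) = E\psi(n)$, with the convention $a(0) = 1$ adopted in Section 2. A minimal sketch of the associated transfer matrices, with an illustrative growing $a(n)$:

```python
import numpy as np

def transfer_matrices(a, b, E):
    """One-step transfer matrices S_E(n) for the Jacobi difference equation
       a(n) psi(n+1) + b(n) psi(n) + a(n-1) psi(n-1) = E psi(n),
    with the convention a(0) = 1.  S_E(n) maps (psi(n), psi(n-1)) to
    (psi(n+1), psi(n))."""
    a_prev = 1.0  # a(0) = 1
    mats = []
    for an, bn in zip(a, b):
        mats.append(np.array([[(E - bn) / an, -a_prev / an],
                              [1.0, 0.0]]))
        a_prev = an
    return mats

# The n-step transfer matrix T_E(n) is the ordered product S_E(n)...S_E(1);
# its norm growth governs the solution asymptotics analyzed in Section 2.
a = np.sqrt(np.arange(1.0, 51.0))      # illustrative growing off-diagonal
b = np.zeros(50)
T = np.eye(2)
for S in transfer_matrices(a, b, E=0.5):
    T = S @ T
```

Note that $\det S_E(n) = a(n-1)/a(n)$, so $\det T_E(n) = a(0)/a(n)$ telescopes; this is why growing $a(n)$ permits overall decay of the transfer matrices.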
Theorem 1.2 of [12], which can be seen to hold in our setting, says that

Theorem 1.1 and Proposition 2.7 below lead to

Note that $\eta_1$ may be chosen arbitrarily close to 1 while the spectral measure may have any dimension in $[0, 1)$. Thus, by tuning the parameters, we obtain operators with any local spectral dimension having arbitrarily fast transport.
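A rough numerical probe of this transport (not part of the proof): on a finite truncation of a Jacobi matrix with growing off-diagonal terms, one can evolve $\delta_1$ under $e^{-itJ}$ and track the mean position $\langle \psi_t, X\psi_t \rangle$, the kind of quantity whose growth Theorem 1.2 of [12] ties to transfer-matrix bounds. The choice $a(n) = n^{0.7}$ below is purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

N = 300
n = np.arange(1, N + 1, dtype=float)
a = n ** 0.7                                    # illustrative eta1 = 0.7
J = np.diag(a[:-1], 1) + np.diag(a[:-1], -1)    # free (b = 0) truncation

psi0 = np.zeros(N)
psi0[0] = 1.0                                   # start at site 1

means = []
for t in (0.0, 0.5, 1.0):
    psi_t = expm(-1j * t * J) @ psi0            # psi_t = e^{-itJ} psi0
    means.append(float(n @ (np.abs(psi_t) ** 2)))   # <X> at time t
    print(f"t={t:.1f}  <X> = {means[-1]:.2f}")
```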
As an application of our general analysis, we study the Gaussian $\beta$ ensembles arising naturally in the context of Random Matrix Theory. The eigenvalue distribution functions for the three classical Gaussian ensembles are given by

with $\beta = 1$, $2$ and $4$ for the Gaussian Orthogonal Ensemble, the Gaussian Unitary Ensemble and the Gaussian Symplectic Ensemble, respectively. A family of random matrix ensembles, indexed by $\beta$, having $f_{\beta,N}$ as their eigenvalue distribution function for any positive value of $\beta$, was recently constructed by Dumitriu and Edelman [6]. The matrices in these ensembles are finite random Jacobi matrices with the distribution of the off-diagonal terms depending on $\beta$:

(2) $b_{\beta,\omega}(n)$ are all standard Gaussian variables (that is, with zero mean and variance 1), irrespective of $\beta$ and $n$.
(3) The probability distribution function of $a_{\beta,\omega}(n)$ is given by

(1.9)

In [6], Dumitriu and Edelman showed that the eigenvalue distribution function of the finite matrix, obtained as the restriction of $J_{\beta,\omega}$ to the $N \times N$ upper left corner, is $f_{\beta,N}$ for any $\beta > 0$.
From property (3) above, it follows that

Thus we see that the family $J_{\beta,\omega}$ corresponds to the case $\eta_1 = 1/2$, $\eta_2 = 0$ of the general matrices introduced above. Technically, the following theorem is not a corollary of Theorem 1.1, because of the $O(n^{-1/2})$ term in (1.10) and the $O(n^{-1})$ term in (1.11). The proof of Theorem 1.1, however, is robust with respect to such a change, and we have

Theorem 1.4. For any $\beta$, the spectrum of $J_{\beta,\omega}$ is $\mathbb{R}$ with probability one.
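The tridiagonal model can be sampled directly. The sketch below assumes (consistently with the $\eta_1 = 1/2$ growth (1.10), but not a quote of the density (1.9)) that $a_{\beta,\omega}(n)$ is distributed as $\chi_{\beta n}/\sqrt{2}$, so that $\mathbb{E}\,a_{\beta,\omega}(n)^2 = \beta n/2$:

```python
import numpy as np

def sample_beta_jacobi(N, beta, rng=None):
    """Sample the upper-left N x N corner of the infinite tridiagonal model
    J_{beta,omega}.  Diagonal: standard Gaussians, as in property (2).
    Off-diagonal: assumed here to be chi-distributed with beta*n degrees
    of freedom, scaled by 1/sqrt(2) (the exact density is (1.9))."""
    rng = np.random.default_rng(rng)
    b = rng.standard_normal(N)
    n = np.arange(1, N, dtype=float)
    a = np.sqrt(rng.chisquare(beta * n)) / np.sqrt(2.0)  # a(n) ~ sqrt(beta*n/2)
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# The off-diagonal grows like sqrt(beta*n/2): the eta1 = 1/2, eta2 = 0 case.
M = sample_beta_jacobi(500, beta=2.0, rng=1)
evals = np.linalg.eigvalsh(M)   # eigenvalues of the finite truncation
```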
This result, without the dynamical part, was announced in [2]. We note that the analogous Circular $\beta$ Ensembles can be realized as eigenvalues of truncated CMV matrices. This was shown by Killip and Nenciu [10] and later used by Killip and Stoiciu in their analysis of level statistics for ensembles of random CMV matrices [11]. The bulk spectral properties of the appropriate matrices were analyzed by Simon [21, Section 12.7].
The proof of Theorem 1.1 is given in the next section. Since the proof of the spectral part of Theorem 1.4 is precisely the same, it is not given separately. As noted earlier, the dynamical part of our analysis (Theorem 1.2 and the corresponding statement in Theorem 1.4) follows immediately from Theorem 1.1 and Proposition 2.7, by Theorem 1.2 of [12].
The method we use is a variation on the one used by Kiselev--Last--Simon [13, Section 8] in their analysis of the Schrödinger case described above. A notable difference is the fact that, due to the growth of the $a(n)$, the effective energy parameter, $\frac{E}{a(n)}$, vanishes in the limit. In addition to requiring a modification of the technique of proof (see Lemma 2.4 and Proposition 2.5 below), this leads to the asymptotics of the generalized eigenfunctions being constant in $E$ over $\mathbb{R}$. At the critical point ($\eta_1 - \eta_2 = \frac{1}{2}$), this implies uniformity of the local Hausdorff dimensions of the spectral measure.
A modified Combes--Thomas estimate, for operators with unbounded off-diagonal terms, enters our analysis in the identification of the spectrum of $J_{\Upsilon,\omega}$. Such an estimate may be of independent interest and thus is presented in the Appendix.

Proof of Theorem 1.1
We begin with a simple lemma showing that, in a certain sense, $J_{\Upsilon,\omega}$ is a random, relatively decaying perturbation of $J_{\lambda_1,\eta_1}$.
Lemma 2.1. For any $\varepsilon > 0$ there exists, with probability one, a constant $C = C(\omega, \varepsilon)$ for which

Proof. By (1.3) and Chebyshev's inequality we have, for any $k \in \mathbb{N}$,

By choosing $2k > \varepsilon^{-1}$, we see that (2.1) now follows from Borel--Cantelli. The proof of (2.2) is the same.
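The Chebyshev-plus-Borel--Cantelli step can be spelled out. Assuming, as provided by (1.3), uniform moment bounds $\mathbb{E}\,|X_\omega(n)|^{2k} \le C_k$ (the constant name $C_k$ is ours), the estimate reads:

```latex
\mathbb{P}\bigl( |X_\omega(n)| > n^{\varepsilon} \bigr)
  \;\le\; \frac{\mathbb{E}\,|X_\omega(n)|^{2k}}{n^{2k\varepsilon}}
  \;\le\; \frac{C_k}{n^{2k\varepsilon}},
\qquad\text{and}\qquad
\sum_{n=1}^{\infty} \frac{C_k}{n^{2k\varepsilon}} < \infty
\quad\text{once } 2k > \varepsilon^{-1},
```

so by Borel--Cantelli, $|X_\omega(n)| > n^{\varepsilon}$ for at most finitely many $n$, almost surely.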
As stated in the Introduction, we follow the strategy of [13]. In particular, we will deduce the spectral properties of $J_{\Upsilon,\omega}$ from the asymptotics of the solutions to the corresponding eigenfunction equation.
In order to fix notation, for a given Jacobi matrix and a fixed $E \in \mathbb{R}$, denote by $\psi_E$ a solution to the equation

(2.3)

It is customary to extend this equation to $n = 1$ by defining $a(0) = 1$. Clearly, the space of sequences $\{\psi_E(n)\}_{n=0}^\infty$ solving (2.3) is a two-dimensional vector space, and any such sequence is completely determined by its values at $0$ and $1$. We let $\psi_E^\phi(n)$ stand for the solution of (2.3) satisfying

We note that, formally,

and so

We call the matrices $S_E(n)$ defined above one-step transfer matrices, and for the matrices $T_E(n)$ we use the name $n$-step transfer matrices. Our main technical result is

Theorem 2.2. Let $J_{\Upsilon,\omega}$ be the family of random Jacobi matrices described in the Introduction. Then, for any $E \in \mathbb{R}$, the following holds with probability one:

(1)

The EFGP transform (see [13]) is a useful tool for studying the asymptotic behavior of $T_E(n)$ in the Schrödinger case ($a(n) \equiv 1$). For $a(n) \to \infty$, certain modifications are needed; we proceed to present a version suitable for our purposes. Let $J$ be a Jacobi matrix whose entries are all independent random variables. Let $\tilde{a}(n)$ denote the mean of $a_\omega(n)$ and let $\alpha_\omega(n) = a_\omega(n) - \tilde{a}(n)$, and assume that

and that

with probability one. These properties clearly hold for $J_{\Upsilon,\omega}$ (see Lemma 2.1). In the analysis that follows we keep $E \in \mathbb{R}$ fixed, so we omit it from the notation. Define

Then,

For any $\phi$, define the sequences

so that, from the definition of $\psi_{\omega,\phi}$ ($= \psi_\phi$ for the random Jacobi parameters), we see that

(2.10)

By (2.8) we see that for any $E \in \mathbb{R}$ and sufficiently large $n$, we may define $k_n \in (0, \pi)$ by

so that (using (2.10))
(2.14)

By (2.9), it follows that the right-hand side converges to one with probability 1, uniformly in $\phi$, so that almost surely, for sufficiently large $n$, there are constants $C_1, C_2 > 0$ such that

Now, by a straightforward adaptation of Lemma 2.2 of [13], it follows that for any two angles $\phi_1 \neq \phi_2$ there are constants

Thus, we are led to examine the asymptotic properties of $\log R_{\omega,\phi}(n)$.
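For orientation, in the classical Schrödinger case ($a(n) \equiv 1$) with $E = 2\cos k$, the EFGP radius can be computed directly from a solution via the standard definition $R(n)^2 = \psi(n)^2 + \bigl((\psi(n+1) - \psi(n)\cos k)/\sin k\bigr)^2$; the modified transform above replaces $k$ by the $n$-dependent $k_n$ precisely because $E/a(n) \to 0$ when $a(n)$ grows. A sketch with an illustrative random decaying potential:

```python
import numpy as np

def pruefer_radius(V, E, psi0=0.0, psi1=1.0):
    """EFGP/Pruefer radius R(n) for psi(n+1) + psi(n-1) + V(n) psi(n) = E psi(n)
    with E = 2 cos(k), |E| < 2, and initial data (psi(0), psi(1))."""
    k = np.arccos(E / 2.0)
    psi_prev, psi = psi0, psi1
    R = []
    for v in V:
        psi_next = (E - v) * psi - psi_prev
        # R(n)^2 = psi(n)^2 + ((psi(n+1) - psi(n) cos k) / sin k)^2
        R.append(np.hypot(psi, (psi_next - psi * np.cos(k)) / np.sin(k)))
        psi_prev, psi = psi, psi_next
    return np.array(R)

rng = np.random.default_rng(2)
V = rng.uniform(-0.5, 0.5, 5000) / np.arange(1, 5001) ** 0.5  # ~ n^{-1/2} decay
R = pruefer_radius(V, E=1.0)
```

For $V \equiv 0$ the radius is exactly constant, which is a convenient sanity check on the definition.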

(2.25)
Proof. As in [13], we shall prove the statement by using the recursion relation for $R_\omega(n)^2$ (equation (2.17)). Namely, we shall prove that

(2.26)

converges to the appropriate limit, where

We shall need some estimates on the behavior of $\theta_\omega(n)$. We start with

Lemma 2.4. For any $\varepsilon > 0$, there exists, with probability one, a constant $C = C(\omega, \varepsilon)$ such that

Thus,

$n^{1+\eta_1}$

so we see that there exists, with probability one, a constant $C_5(\omega, \varepsilon)$ for which

This, together with (2.28), implies (2.27) and concludes the proof of the lemma.
The direct consequence of this is

and

Then, for any $\phi$,

almost surely. The same statement holds with $\theta$ replaced by $\bar{\theta}$.
Proof of the Proposition. We shall prove the statement for $\theta$. By summation by parts,

(2.31)

Thus we are led to examine $\sum_{l=1}^{j} \cos(2\theta_{\omega,\phi}(l))$. Assume that $j = 2m$ is even. Then, almost surely, by Lemma 2.4 and by

which holds for sufficiently large $n$. Thus, for any $j$, we get

(2.33)

A simple calculation finishes the proof for $\theta$. The proof for $\bar{\theta}$ follows the same argument, with an additional $|k_{2l} - k_{2l-1}|$ term in (2.32).
to see that

(2.48)

In order to use subordinacy theory, we need the existence of another solution with faster decay at infinity. The following is Lemma 8.7 of [13], formulated for general regular matrices:

(1) $\phi_n$ has a limit $\phi_\infty$ if and only if

Proposition 2.7. Let $J_{\Upsilon,\omega}$ be the family of random Jacobi matrices described in Theorem 2.2, with $\gamma = \frac{1}{2}$. Then, for any $E \in \mathbb{R}$, there exists, with probability one, an initial condition $\Psi_{\phi(\omega)} = \begin{pmatrix} \cos(\phi(\omega)) \\ \sin(\phi(\omega)) \end{pmatrix}$ such that

for almost every $\omega$. By (2.12)--(2.13), where

almost surely, we may apply a finite Taylor expansion to the above (using (2.36), (2.37) and (2.38)) to see that, with probability one, for large enough $n$,

(1) By Theorem 2.2, Fubini's theorem, the fact that the distribution of $X_\omega(n)$ is absolutely continuous with respect to Lebesgue measure, and the theory of rank-one perturbations ([20]), it follows that, with probability one, the spectral measure is supported on the set of energies where, for any $\varepsilon > 0$ and sufficiently large $n$, $\left\| T^E_\omega(n) \right\|^2 \leq n^{-\eta_1+\varepsilon}$. From Corollary 4.4 of [9] it now follows that the spectral measure is continuous with respect to $(1-\varepsilon)$-dimensional Hausdorff measure, for any $\varepsilon > 0$. Thus the spectral measure is one-dimensional. Since $\psi(n) = \left\langle \delta_1, (J - z)^{-1} \delta_n \right\rangle$ solves the eigenvalue equation for $z$ (away from $n = 0$), Theorem A.1 and Wronskian conservation imply that if $z \in \mathbb{R}$ were outside of the spectrum, the transfer matrices would have to exhibit exponential growth. Since this is not the case, it follows that the spectrum is $\mathbb{R}$.
(2) In this case again, the fact that the spectrum is $\mathbb{R}$ follows from the polynomial bound on the transfer matrices in Theorem 2.2 and from Theorem A.1 below. As for the properties of the spectral measure, these follow from Theorem 1.2 in [9], using (2.48), Proposition 2.7 and the theory of rank-one perturbations.

(3) The existence, with probability one, of an exponentially decaying eigenfunction for every $E \in \mathbb{R}$ follows from Theorem 2.2 and Theorem 8.3 of [16]. Fubini's theorem and the theory of rank-one perturbations imply that the spectral measure is supported, with probability one, on the set where these eigenfunctions exist.
Comparing powers of $n$ in the exponent, Theorem A.1 implies that, as long as $\eta_1 > 2\eta_2$, the spectrum fills all of $\mathbb{R}$,
which finishes the proof.