Monotone unitary families

A unitary family is a family of unitary operators $U(x)$ acting on a finite-dimensional Hermitian vector space, depending analytically on a real parameter $x$. It is called monotone if $\frac1i U'(x)U(x)^{-1}$ is a positive operator for each $x$. We prove a number of results generalizing standard theorems on the spectral theory of a single unitary operator $U_0$, which corresponds to the 'commutative' case $U(x)=e^{ix}U_0$. Also, for a two-parameter unitary family -- for which there is no analytic perturbation theory -- we prove an implicit function type theorem for the spectral data, under the assumption that the family is monotone in one of its arguments.


Introduction
Let $U(x)$ be a family of unitary operators on a Hermitian vector space $V$ of dimension $M < \infty$, depending real analytically on $x \in \mathbb{R}$ (or an interval in $\mathbb{R}$). Then
$$D(x) := \frac1i\, U'(x)\, U(x)^{-1} \tag{1}$$
is a symmetric operator for each $x$, and we call the family monotone if $D(x)$ is positive for each $x$. Set
$$Z = \{x : \ker(I - U(x)) \neq \{0\}\}, \qquad W(x) = \ker(I - U(x)).$$
Thus $x \in Z$ iff $U(x)$ has eigenvalue one. A model case for this setup is $U(x) = e^{ix} U_0$ for a unitary $U_0$. Then $W(x)$ is the eigenspace of $U_0$ with eigenvalue $e^{-ix}$. Standard facts from the spectral theory of $U_0$ may be restated in terms of $Z$ and $W(x)$, for example:
• $Z$ is a $2\pi$-periodic sequence, having exactly $M$ terms in each half-open interval of length $2\pi$ (counting 'multiplicities').
• If $I$ is an interval of length less than $2\pi$ then the spaces $W(x)$, $x \in I$, are linearly independent (even pairwise orthogonal).
• If $\|(I - U(x_0))\varphi\| \le \varepsilon\, \|\varphi\|$ for some $\varphi \in V \setminus \{0\}$, $x_0 \in \mathbb{R}$ and $\varepsilon \ge 0$, then $e^{-ix_0}$ lies within distance $\varepsilon$ of an eigenvalue of $U_0$ (and so $\operatorname{dist}(x_0, Z) \le \pi\varepsilon/2$). Furthermore, if $\varepsilon' > \varepsilon$ and $P$ denotes the orthogonal projection onto $\bigoplus_{x : |e^{-ix} - e^{-ix_0}| < \varepsilon'} W(x)$, then
$$\|\varphi - P\varphi\| \le \frac{\varepsilon}{\varepsilon'}\, \|\varphi\|. \tag{2}$$
In this paper we prove generalizations of these facts to arbitrary monotone unitary families; see Theorem 1 in Section 2 and Theorems 2 and 3 in Section 3.
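The model-case facts above are easy to test numerically. The following sketch is a minimal check (the dimension $M$, the random seed, and the particular construction of $U_0$ are illustrative): for $U(x) = e^{ix}U_0$ it verifies that $Z$ has exactly $M$ points per period $2\pi$, each of which gives $U(x)$ the eigenvalue one.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed
M = 4                           # illustrative dimension

# A random unitary U0 (QR of a complex Gaussian matrix, column phases fixed):
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Q, R = np.linalg.qr(X)
U0 = Q * (np.diag(R) / np.abs(np.diag(R)))

# For U(x) = e^{ix} U0: x ∈ Z iff e^{ix}·λ = 1 for some eigenvalue λ of U0,
# i.e. x ≡ -arg λ (mod 2π), so Z has exactly M points in [0, 2π).
lam = np.linalg.eigvals(U0)
Z_period = np.sort((-np.angle(lam)) % (2 * np.pi))
assert len(Z_period) == M

# Each such x indeed gives U(x) an eigenvalue (numerically) equal to one:
for x in Z_period:
    evals = np.linalg.eigvals(np.exp(1j * x) * U0)
    assert np.min(np.abs(evals - 1)) < 1e-10
```
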
The estimates are expressed in terms of uniform bounds on the first and second derivatives of $U$: assume $d_{\min}, d_{\max}, d_2 > 0$ are such that
$$d_{\min} \le D(x) \le d_{\max}, \qquad \|D'(x)\| \le d_2 \qquad \text{for all } x. \tag{3}$$
Of course, such constants always exist locally. However, in the applications of our results (see [2]) it is essential that the estimates are uniform in terms of (3), and do not depend on additional data such as the separation of the elements of $Z$; see the explanation below.
An important difference between a general unitary family and the model case is that $U(x)$ and $U(x')$ do not in general commute for $x \neq x'$. A consequence of this is that, while one may take a logarithm of $U$, i.e. find an analytic family $A(x)$ of symmetric operators such that $U(x) = e^{iA(x)}$, it is usually not true that $D(x)$ equals $A'(x)$, or even that positivity of $D(x)$ implies positivity of $A'(x)$. The opposite implication is true, however; see Section 4.

Finally, we prove a result on two-parameter perturbation theory. Recall the main result of one-parameter perturbation theory (see [4], [3]): for an analytic family $U(x)$ of unitary operators on $V$, there are real analytic functions $\mu_j(x)$, $\varphi_j(x)$ for $j = 1, \ldots, M = \dim V$, having values in $\mathbb{R}$ and $V$ respectively, such that for each $x$ the eigenvalues of $U(x)$ are the $e^{i\mu_j(x)}$, with corresponding orthonormal basis of eigenvectors $\varphi_j(x)$:
$$U(x)\, \varphi_j(x) = e^{i\mu_j(x)}\, \varphi_j(x), \qquad j = 1, \ldots, M. \tag{5}$$
It is well known that the analogous statement for two-parameter families of operators is false in general. However, we prove a related implicit function type theorem for the spectral data of a two-parameter family whose dependence on one of the parameters is monotone. It may be regarded as the natural unitary family generalization of one-parameter perturbation theory; see Theorem 7 in Section 5.

The analytic functions $\mu_j$ and $\varphi_j$ play a central role in the proofs of our theorems. It is essential to control their derivatives. For $\mu_j$ this is easy from (3). However, $\varphi_j$ may vary wildly whenever $e^{i\mu_j}$ is very close to another eigenvalue. Controlling this variation is the main technical problem in the proof of Theorem 3, the generalization of (2). Note that, unlike in the model case, one may not assume that the $e^{i\mu_j(x)}$ for fixed $x$ but varying $j$ are uniformly separated, or equivalently that the elements of $Z$ are uniformly separated.
This can already be seen in the simple example $U(x) = e^{ixL}$, where $L$ is a diagonal matrix with positive diagonal entries that are independent over $\mathbb{Q}$: if $M \ge 2$ then for any $\varepsilon > 0$ there are $x, x' \in Z$ satisfying $0 < |x - x'| < \varepsilon$.
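This phenomenon can be observed numerically. A minimal sketch (the search window $N$ below is illustrative) with $L = \operatorname{diag}(1, \sqrt2)$: here $Z = 2\pi\mathbb{Z} \cup \pi\sqrt2\,\mathbb{Z}$ on the positive axis, and consecutive points of $Z$ come arbitrarily close without coinciding.

```python
import numpy as np

# Z for U(x) = e^{ixL}, L = diag(1, sqrt(2)):
# x ∈ Z iff x ∈ 2π·Z or sqrt(2)·x ∈ 2π·Z.
w = np.sqrt(2.0)
N = 200  # illustrative search window
Z1 = 2 * np.pi * np.arange(1, N)                  # from the diagonal entry 1
Z2 = (2 * np.pi / w) * np.arange(1, int(N * w))   # from the diagonal entry sqrt(2)
Z = np.sort(np.concatenate([Z1, Z2]))
gaps = np.diff(Z)

# Since 1 and sqrt(2) are independent over Q no two points coincide, but the
# gaps become small (e.g. 2π·169 and (2π/√2)·239 are about 0.009 apart):
assert gaps.min() > 0
assert gaps.min() < 0.05
```
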
The problems we study here arose in the context of a singular perturbation problem: in [2] we study the eigenvalues and eigenfunctions of the Laplacian on a space $X_N$ which has a fixed compact part connected by cylindrical necks of length $N > 0$, and in particular their asymptotic behavior as $N \to \infty$. The unitary families arise from the scattering matrix of the limit problem (infinitely long 'necks').

Eigenvalue distribution
Theorem 1. Let $U$ be a monotone unitary family on $\mathbb{R}$. Then $Z \subset \mathbb{R}$ is a discrete subset, and more precisely, for all $A < B$,
$$\Big|\, \#\big(Z \cap [A, B]\big) - \frac{1}{2\pi} \int_A^B \operatorname{tr} D(x)\, dx \,\Big| \le M. \tag{6}$$
In the special case $U(x) = e^{ixL} U_0$, $L > 0$, this implies the asymptotics
$$\#\big(Z \cap [0, B]\big) = \frac{\operatorname{tr} L}{2\pi}\, B + O(1), \qquad B \to \infty.$$
This is the Weyl asymptotics of a quantum graph, see for example [1]. We give a much simpler proof than [1].

First, we differentiate (5) and obtain monotonicity of the functions $\mu_j$ from monotonicity of $U$:
$$\mu_j'(x) = \langle D(x)\, \varphi_j(x), \varphi_j(x) \rangle > 0. \tag{7}$$
Also, for each $x$ we have
$$\sum_{j=1}^M \mu_j'(x) = \operatorname{tr} D(x). \tag{8}$$

Proof of Theorem 1. We have
$$\#\big(Z \cap [A, B]\big) = \sum_{j=1}^M \#\{x \in [A, B] : \mu_j(x) \in 2\pi\mathbb{Z}\}.$$
Since the $\mu_j$ are strictly increasing,
$$\#\{x \in [A, B] : \mu_j(x) \in 2\pi\mathbb{Z}\} = \frac{\mu_j(B) - \mu_j(A)}{2\pi} + R_j$$
with $|R_j| < 1$; summing over $j$ and using (8), which gives $\sum_j \big(\mu_j(B) - \mu_j(A)\big) = \int_A^B \operatorname{tr} D(x)\, dx$, we obtain (6).
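The counting estimate (6) can be checked directly in the commuting special case, where $D(x) \equiv L$. A minimal numerical sketch (the diagonal entries $l_j$, the phases $\theta_j$, and the interval are illustrative):

```python
import numpy as np

# Commuting family U(x) = e^{ixL} U0 with L = diag(l_j) > 0, U0 = diag(e^{iθ_j}):
# here D(x) = L, so (1/2π)∫_A^B tr D dx = (B-A)·tr(L)/(2π).
l = np.array([0.7, 1.3, np.sqrt(2.0), 2.1])   # illustrative
theta = np.array([0.3, 1.1, 2.9, 4.0])        # illustrative
M = len(l)
A, B = 0.0, 200.0

# x ∈ Z iff l_j·x + θ_j ∈ 2π·Z for some j; count the solutions in [A, B]:
count = 0
for lj, tj in zip(l, theta):
    k_min = np.ceil((lj * A + tj) / (2 * np.pi))
    k_max = np.floor((lj * B + tj) / (2 * np.pi))
    count += int(k_max - k_min) + 1

weyl = (B - A) * l.sum() / (2 * np.pi)
assert abs(count - weyl) <= M   # the estimate (6)
```
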

Eigenspaces
In this section we consider monotone unitary families satisfying the estimates (3). By (7) we have
$$d_{\min} \le \mu_j'(x) \le d_{\max} \qquad \text{for all } j \text{ and } x. \tag{9}$$

Theorem 2. Let $U$ be a monotone unitary family satisfying (3), and let $I$ be an interval of length less than $2\pi/d_{\max}$. If $\varphi_x \in W(x)$ for each $x \in I \cap Z$ and
$$\sum_{x \in I \cap Z} \varphi_x = 0, \tag{10}$$
then $\varphi_x = 0$ for all $x \in I \cap Z$; that is, the spaces $W(x)$, $x \in I \cap Z$, are linearly independent.

The following theorem gives a stable version of almost orthogonality.
Theorem 3. Let $U$ be a monotone unitary family satisfying (3). Assume $\varphi \in V \setminus \{0\}$ satisfies
$$\|(I - U(x_0))\varphi\| \le \varepsilon\, \|\varphi\| \tag{11}$$
for some $x_0 \in \mathbb{R}$ and $\varepsilon \ge 0$. Then
$$\operatorname{dist}(x_0, Z) \le \frac{\pi \varepsilon}{2\, d_{\min}}. \tag{12}$$
Furthermore, there is a constant $C$ only depending on $d_{\min}, d_{\max}, d_2, M$ such that the following holds: if $\varepsilon' > C\varepsilon$ and $P_W$ denotes the orthogonal projection onto $\sum_{x \in Z,\, |x - x_0| \le \varepsilon'} W(x)$, then
$$\|\varphi - P_W \varphi\| \le C\, \frac{\varepsilon}{\varepsilon'}\, \|\varphi\|.$$

In the proofs of Theorems 2 and 3 we will need the following estimate, which replaces orthogonality of the eigenspaces of a unitary operator.
Proof of Theorem 3. The first estimate follows easily from the fact that, by the lower bound in (9), an eigenvalue of $U(x_0)$ close to one will turn into an eigenvalue equal to one of $U(x)$, for some $x$ close to $x_0$: let $B(x) = I - U(x)$ and let $\lambda_j(x) = 1 - e^{i\mu_j(x)}$ be the eigenvalues of $B(x)$. The assumption (11) implies that $|\lambda_j(x_0)| \le \varepsilon$ for some $j$, hence $\operatorname{dist}(\mu_j(x_0), 2\pi\mathbb{Z}) \le \frac{\pi}{2}\varepsilon$, and then $\mu_j' \ge d_{\min}$ shows that there is an $x$ satisfying $|x - x_0| \le \pi\varepsilon/2 d_{\min}$ and $\mu_j(x) \in 2\pi\mathbb{Z}$, hence $x \in Z$, so (12) follows.
For $\delta > 0$ let $P_\delta(x)$ denote the orthogonal projection onto the sum of the eigenspaces of $B(x)$ with eigenvalues satisfying $|\lambda_j(x)| \le \delta$. Then $\|B(x_0)\varphi\| \le \varepsilon \|\varphi\|$ implies
$$\|\varphi - P_\delta(x_0)\varphi\| \le \frac{\varepsilon}{\delta}\, \|\varphi\|$$
(see (2), which also applies to normal operators). To make this a good estimate, we want to take $\delta \gg \varepsilon$. Our goal is to replace $P_\delta(x_0)$ by $P_W$ here. The idea is that eigenspaces of $B(x_0)$ with eigenvalue $|\lambda_j(x_0)| \le \delta$ will turn into nullspaces of $B(x)$ for some $x$ within $2\delta/d_{\min}$ of $x_0$, by the first part of this proof. However, the variation of eigenspaces is much less well behaved than the variation of eigenvalues: an eigenspace may change rapidly with $x$ if its eigenvalue is very close to another eigenvalue. Therefore, we need to consider not single eigenspaces but rather clusters of eigenspaces. The variation of eigenspaces is given as follows (see [3]): fix $x$. If $B(x)$ has no eigenvalue on the circle $|\lambda| = \delta$ then, with a prime denoting the derivative in $x$,
$$P_\delta' = \sum_{|\lambda_j| < \delta < |\lambda_k|} \frac{P_j B' P_k + P_k B' P_j}{\lambda_j - \lambda_k}.$$
Here, all quantities are evaluated at $x$, and $P_j$ is the orthogonal projection onto $\operatorname{span} \varphi_j$. Taking norms and using orthogonality of the $P_j$, one obtains from this
$$\|P_\delta'\| \le \frac{C(M)\, \|B'\|}{\min_{|\lambda_j| < \delta < |\lambda_k|} |\lambda_j - \lambda_k|}$$
with a constant $C(M)$ depending only on $M$. We need to choose $\delta$ carefully to make the spectral gap not too small: let $s = (\varepsilon'/\varepsilon)^{1/(M+1)}$ and consider the $M$ disjoint subintervals $[\varepsilon s^k, \varepsilon s^{k+1})$, $k = 1, \ldots, M$, of $(\varepsilon, \varepsilon')$. Since $B(x_0)$ has $M$ eigenvalues and one of them has absolute value $\le \varepsilon$, at least one of these intervals contains no $|\lambda_j(x_0)|$. Assume
$$[\delta, \delta s), \quad \delta = \varepsilon s^k, \text{ contains no } |\lambda_j(x_0)|. \tag{18}$$

and then
for the same $\psi_x$, $\psi_y$ as there. By simple standard calculations this implies an inequality (24) in which the sums are over all $x \in Z$ with $|x - x_0| \le \delta'$, the $\psi_x \in W(x)$ are arbitrary, and $\widetilde{M}$ is the number of summands. Now by A) above, $\widetilde{M} \le M$. Since $\delta < \varepsilon'$, the expression in parentheses is $\ge \frac12$ for sufficiently small $\varepsilon'$, so (24) gives
$$\sum_x \|\psi_x\|_{D_0}^2 \le 2\, \Big\|\sum_x \psi_x\Big\|_{D_0}^2,$$
which with (22) gives
$$\sum_x \|\psi_x\|^2 \le 2\, \frac{d_{\max}}{d_{\min}}\, \Big\|\sum_x \psi_x\Big\|^2.$$
We return to (20).

Monotonicity of U and of its logarithm
Denote by S(V ) and U(V ) the spaces of symmetric resp. unitary operators on V . The map S(V ) → U(V ), A → e iA has non-singular differential everywhere and is surjective, so it is a covering map. Hence any curve U : x → U (x) in U(V ) may be lifted to a curve x → A(x) in S(V ) (that is, U (x) = e iA(x) for all x) and the lift is unique if one prescribes it for one value of x. Furthermore, the lifted curve is analytic if U is.
Proposition 5. Let $x \mapsto A(x)$ be a $C^1$ family of symmetric operators and $U(x) = e^{iA(x)}$. Then
$$\frac1i\, U'(x)\, U(x)^{-1} = \int_0^1 e^{i\tau A} A'\, e^{-i\tau A}\, d\tau. \tag{29}$$
Here, a prime denotes differentiation with respect to $x$, and $U, U', A, A'$ are taken at a fixed $x$.
Proof. Fix $x$, and let $B(t) = A'\, e^{itA}$. The function $W(t) = \frac{\partial}{\partial x} e^{itA(x)}$ solves the ordinary differential equation
$$W'(t) = iA\, W(t) + i\, B(t), \qquad W(0) = 0$$
(the prime now denoting $d/dt$), whose solution is $W(t) = i\, e^{itA} \int_0^t e^{-isA} A'\, e^{isA}\, ds$. Setting $t = 1$, multiplying by $U^{-1} = e^{-iA}$ on the right and substituting $\tau = 1 - s$ gives (29).

Corollary 6. If $A'(x)$ is positive for each $x$ then $U$ is monotone.

Proof. Positivity of $A'$ implies positivity of $e^{i\tau A} A'\, e^{-i\tau A}$ for each $\tau$, so the claim follows from (29).
The converse is not true. As an example, let
$$A_0 = \begin{pmatrix} 0 & i\pi \\ -i\pi & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} -b & 0 \\ 0 & 1 \end{pmatrix}$$
with $0 < b < 1$, and $A(x) = A_0 + xB$. Then
$$e^{i\tau A_0} = \begin{pmatrix} \cos \pi\tau & -\sin \pi\tau \\ \sin \pi\tau & \cos \pi\tau \end{pmatrix}$$
is the rotation by $\pi\tau$, and a short calculation shows that $\int_0^1 e^{i\tau A_0} B\, e^{-i\tau A_0}\, d\tau = \frac{1-b}{2}\, I$ (this is also clear without calculation, since the result must be rotation invariant with trace equal to $\operatorname{tr} B = 1 - b$; in essence, the negative direction of $B$ gets averaged away against the positive direction). Therefore, $U(x) = e^{iA(x)}$ is monotone near $x = 0$, but $A'(0) = B$ is not positive.
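This example can be confirmed numerically. The sketch below (the value $b = 1/2$ and the central-difference step are illustrative) checks that $\frac1i U'(0)U(0)^{-1} = \frac{1-b}{2} I$ is positive although $A'(0) = B$ is not.

```python
import numpy as np
from scipy.linalg import expm

b = 0.5                                              # illustrative, 0 < b < 1
A0 = np.array([[0, 1j * np.pi], [-1j * np.pi, 0]])   # Hermitian; e^{iτA0} is rotation by πτ
B = np.diag([-b, 1.0])

def U(x):
    return expm(1j * (A0 + x * B))

# D(0) = (1/i) U'(0) U(0)^{-1}, with U'(0) approximated by central differences:
h = 1e-5
D0 = ((U(h) - U(-h)) / (2 * h)) @ np.linalg.inv(U(0)) / 1j

# By the averaging formula (29), D(0) = ∫_0^1 e^{iτA0} B e^{-iτA0} dτ = ((1-b)/2)·I > 0 ...
assert np.allclose(D0, ((1 - b) / 2) * np.eye(2), atol=1e-6)
# ... although B = A'(0) has a negative eigenvalue:
assert np.linalg.eigvalsh(B).min() < 0
```
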

Two parameter families
Theorem 7. Let $U(x, y)$ be a unitary operator on a finite-dimensional Hermitian vector space, depending real analytically on $x, y \in \mathbb{R}$. Assume
$$\frac1i\, \frac{\partial U}{\partial x}(x_0, y_0)\, U(x_0, y_0)^{-1} > 0. \tag{30}$$
Then the set $\{(x, y) : U(x, y) \text{ has eigenvalue one}\}$ is, in a neighborhood of $(x_0, y_0)$, a union of real analytic curves $x = x_j(y)$. The corresponding projections $P_j(y)$ onto the eigenspace of $U(x_j(y), y)$ with eigenvalue one are also analytic functions of $y$ for $y \neq y_0$, extending analytically to $y = y_0$, and $\sum_j P_j(y_0)$ is the projection onto $\ker(I - U(x_0, y_0))$.
Note that in general it is not true that the eigenvalues and eigenprojections of $U(x, y)$ may be arranged as real analytic functions of $(x, y)$; see [3], II.6.1. While the example given there (in the analogous case of self-adjoint operators) does not satisfy the positivity assumption (30), it can easily be modified so that it does, by adding a multiple of the identity. Explicitly, one may take
$$A(x, y) = \begin{pmatrix} 3x & y \\ y & x \end{pmatrix}, \qquad U(x, y) = e^{iA(x, y)}, \qquad (x_0, y_0) = (0, 0).$$
Note also that the statement of the theorem reduces to the well-known facts of one-parameter perturbation theory in the case $U(x, y) = e^{ix} U(y)$, for an analytic one-parameter family of unitary operators $U(y)$.
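Both the failure of joint analyticity in this example and the conclusion of Theorem 7 can be seen concretely. A minimal numerical sketch (the sample values of $y$ are illustrative):

```python
import numpy as np

# The example A(x,y) = [[3x, y], [y, x]]: its eigenvalues are 2x ± sqrt(x² + y²).
def A(x, y):
    return np.array([[3.0 * x, y], [y, x]])

for y in (-1.0, -0.5, 0.5, 1.0):
    # Along x = 0 the eigenvalues are ±|y| -- a corner at y = 0, so the
    # eigenvalues are not real analytic in (x, y) at the origin.
    ev = np.sort(np.linalg.eigvalsh(A(0.0, y)))
    assert np.allclose(ev, [-abs(y), abs(y)])
    # Yet {(x,y) : A(x,y) singular} = {3x² - y² = 0} is the union of the two
    # analytic curves x = ±y/√3 through the origin, as in Theorem 7.
    for s in (1.0, -1.0):
        assert abs(np.linalg.det(A(s * y / np.sqrt(3.0), y))) < 1e-12
```
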
We first consider the case $U(0, 0) = I$. Let $A = \frac1i \log U$ near $(x, y) = (0, 0)$. Then the operators $A(x, y)$ are self-adjoint, $A(0, 0) = 0$, and $\partial A/\partial x\,(0, 0) > 0$ since it equals $\frac1i \frac{\partial U}{\partial x} U^{-1}(0, 0)$ by (29), and we need to prove that the set $S = \{(x, y) : A(x, y) \text{ is not invertible}\}$ is a union of real analytic curves as claimed. If $A(x, y) = xA + yB$ is linear in $x, y$ then (since $A > 0$) $A$ and $B$ may be diagonalized simultaneously, hence may be assumed to be diagonal, and then it is obvious that $S$ is a union of lines $x_j(y) = -y b_j / a_j$, where $a_j, b_j$ are the diagonal entries of $A, B$, respectively. In general, write $A(x, y) = xA + yB + C(x, y)$ with $C(x, y) = O(|x, y|^2)$ and, w.l.o.g., $A, B$ diagonal. Then, if the dimension of the vector space is $M$,
$$\det A(x, y) = \prod_{j=1}^M (x a_j + y b_j) + O(|x, y|^{M+1}),$$
and a standard argument (using polar coordinates) shows that the zero set of this function is a union of real analytic curves $x = x_j(y)$, having tangent lines $x a_j + y b_j = 0$ at the origin.
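The linear case in this proof can be checked directly: with diagonal $A = \operatorname{diag}(a_j) > 0$ and $B = \operatorname{diag}(b_j)$, the matrix $xA + yB$ is singular exactly on the lines $x a_j + y b_j = 0$. A minimal sketch (the diagonal entries are illustrative):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])    # illustrative, positive
b = np.array([0.5, -1.0, 2.0])   # illustrative

for y in np.linspace(-1.0, 1.0, 11):
    for j in range(len(a)):
        x = -y * b[j] / a[j]          # the j-th line, x·a_j + y·b_j = 0
        Axy = np.diag(x * a + y * b)  # A(x,y) = xA + yB in the common eigenbasis
        assert abs(np.linalg.det(Axy)) < 1e-12
```
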
Let $C_j(y) = U(x_j(y), y)$ and let $P_j(y)$ be the projection onto $\ker(I - C_j(y))$. Since $C_j$ is analytic in $y$, its eigenprojections are analytic for $y \neq 0$ (but near zero) and extend analytically to $y = 0$ (see [3]), so this is in particular true for $P_j$.