Perturbation theory for normal operators

Let $E \ni x\mapsto A(x)$ be a $\mathscr{C}$-mapping taking values in the unbounded normal operators with common domain of definition and compact resolvent. Here $\mathscr{C}$ stands for $C^\infty$, $C^\omega$ (real analytic), $C^{[M]}$ (Denjoy--Carleman of Beurling or Roumieu type), $C^{0,1}$ (locally Lipschitz), or $C^{k,\alpha}$. The parameter domain $E$ is either $\mathbb R$, $\mathbb R^n$, or an infinite dimensional convenient vector space. We completely describe the $\mathscr{C}$-dependence on $x$ of the eigenvalues and the eigenvectors of $A(x)$. Thereby we extend previously known results for self-adjoint operators to normal operators, partly improve them, and show that they are best possible. For normal matrices $A(x)$ we obtain partly stronger results.


Introduction and main results
The purpose of this paper is to prove the following theorem.
1.1. Theorem. Let x → A(x) be a parameterized family of unbounded normal operators in a Hilbert space H with common domain of definition and with compact resolvent.
(A) If A(x) is C^∞ (resp. C^[M]) in x ∈ R and if the order of contact of any two unequal eigenvalues is finite at each x ∈ R, then the eigenvalues and the eigenvectors of A(x) admit global C^∞ (resp. C^[M]) parameterizations in x. The latter condition is trivially satisfied if C^[M] is quasianalytic.
(B) Assume that C^[M] is quasianalytic. If A(x) is C^[M] in x ∈ R^n, then for each x_0 ∈ R^n and for each eigenvalue z of A(x_0), there exist a neighborhood D of z in C, a neighborhood W of x_0 in R^n, and a finite covering {π_k : U_k → W} of W by composites of finitely many local blow-ups, such that the eigenvalues of A(π_k(y)) in D and the corresponding eigenvectors can be chosen C^[M] in y.
(C) Assume that C^[M] is quasianalytic. If A(x) is C^[M] in x ∈ R^n, then for each x_0 ∈ R^n and for each eigenvalue z of A(x_0), there exists a neighborhood D of z in C such that the eigenvalues of A(x) in D can be parameterized by functions which are locally 'piecewise Lipschitz continuous', i.e., belong to LC^[M]_loc (cf. 6.18). In particular, they are SBV_loc-functions whose classical gradient exists almost everywhere and is locally bounded.
If x_0 ∈ E ∩ U and c : R → E is a C^∞-curve with c(0) = x_0 and c((0, 1]) ⊆ U, then λ ∘ c|_(0,1] is globally Lipschitz on (0, 1]. If E = R, then the eigenvalues admit a C^{0,1}-parameterization in x.
(E) If x → A(x) is C^{1,α} in x ∈ R, for some α > 0, then the eigenvalues admit a C^1-parameterization in x.
(F) If x → A(x) is C^{2,α} in x ∈ R, for some α > 0, then the eigenvalues admit a twice differentiable parameterization in x.
A convenient vector space is a real locally convex vector space E satisfying the following equivalent conditions: Mackey Cauchy sequences converge; C ∞ -curves in E are locally integrable in E; a curve c : R → E is C ∞ if and only if ℓ • c is C ∞ for all continuous linear functionals ℓ. The c ∞ -topology on E is the final topology with respect to all C ∞ -curves. Functions f defined on c ∞ -open subsets of convenient vector spaces E are called C k,α if f • c is C k,α for every C ∞ -curve c. If E is a Banach space, then a C k,α -function is C k and the kth derivative is locally Hölder continuous of order α in the usual sense. This has been proved in [16], see also the lemma in [25]. For the Lipschitz case see [17] and [23, 12.7 and 12.8].
That A(x) is a C^∞, C^[M], or a C^{k,α}-family of unbounded normal operators means the following: There is a dense subspace V of the Hilbert space H such that V is the domain of definition of each A(x), each A(x) has closed graph, and A(x)A(x)* = A(x)*A(x) holds wherever defined. Moreover, we require that x → ⟨A(x)u | v⟩ is C^∞, C^[M], or C^{k,α}, for each u ∈ V and v ∈ H. This implies that x → A(x)u is of the same class as a mapping E → H (where E is either R or R^n or an infinite dimensional convenient vector space) for each u ∈ V, by [23, 2.3] for C^∞, by [27, 4.3, 4.4, 4.5, and 5.1] for C^[M], and by [23, 2.3], [17, 2.6.2] or [16, 4.14.4] for C^{k,α}, because C^{k,α} can be described by boundedness conditions only, and for these the uniform boundedness principle is valid. Note that the real analytic case is included.
If A depends on a single real parameter x, then the eigenvalues of A may be chosen continuously near each (x_0, z), where z is an eigenvalue of A(x_0), see [20, II Thm. 5.2]. The order of vanishing of a continuous function germ f at 0 ∈ R is the supremum of all integers p such that f(x) = x^p g(x), where g is continuous; likewise at any x_0 ∈ R. The order of contact of two continuous function germs is the order of vanishing of their difference.
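The order of vanishing can be probed numerically. The following sketch (the function names and the threshold heuristic are our own, not the paper's) estimates it for f(x) = x³(2 + cos x), which vanishes of order 3 at 0 since g(x) = 2 + cos x is continuous and g(0) = 3 ≠ 0.

```python
import math

def f(x):
    # vanishes of order 3 at x = 0: f(x) = x^3 * g(x) with g(0) = 3
    return x**3 * (2.0 + math.cos(x))

def numeric_order(f, max_p=8, xs=(1e-2, 1e-3, 1e-4)):
    """Crude numeric estimate of the order of vanishing of f at 0: the
    largest p for which f(x)/x^p stays bounded (and non-zero) as x -> 0."""
    for p in range(max_p, -1, -1):
        ratios = [abs(f(x)) / x**p for x in xs]
        if max(ratios) < 10.0 and ratios[-1] > 1e-8:
            return p
    return None

print(numeric_order(f))  # prints 3
```

The quotient f(x)/x³ tends to g(0) = 3, while f(x)/x⁴ blows up, so the estimate returns 3.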
A local blow-up Φ over an open subset U of a C [M] -manifold X means the composite Φ = ι • ϕ of a blow-up ϕ : U ′ → U with center a C [M] -submanifold and of the inclusion ι : U → X.
A sequence of functions λ i is said to parameterize the eigenvalues of A, if, for each z ∈ C, the cardinality |{i : λ i (x) = z}| equals the multiplicity of z as an eigenvalue of A(x).
An SBV-function is a special function of bounded variation, i.e., a function of bounded variation whose distributional derivative has trivial Cantor part, see [14] and [3].
1.4. Explanation of the results and background. The novelty of the results in Theorem 1.1 and of the partly stronger finite dimensional versions of (A)-(F) for normal matrices, which will be shown in the course of the proof of Theorem 1.1, is threefold:
• The results are well-known if all operators A(x) are self-adjoint, at least in some weaker formulation. We show that the assumption of self-adjointness can be replaced by normality, essentially without changing the conclusions (only in (D) we additionally have to assume continuity if dim E > 1).
• We achieve utmost generality, at least for matrices, by working in abstractly defined quasianalytic subclasses of C^∞ which present a minimal setting for our method of proof. For unbounded operators we restrict to C^[M].
• We partly even improve the results for self-adjoint operators and show that they are then best possible.
Let us briefly describe what was previously known. If all operators A(x) are self-adjoint, then (A) is due to Rellich [36] in 1942 for C^ω, to [2] for C^∞, and to [29] for C^{M} (with special M = (M_k)); the normal case follows for C^ω from an observation due to Butler, see [20, II Thm. 1.10] and [5, 3.5.1]. Part (B) is due to [30] for C^ω-families of symmetric matrices and to [29] for unbounded self-adjoint operators; in [29] (see also [34] and [35]) the normal case is treated, but there we had, in addition, to use local power substitutions. In the self-adjoint case, part (C) and part (D) are consequences of [35, 9.6] and [25], and part (E) was proved in [24]. Part (F) was shown in [24] under the assumption that R ∋ x → A(x) is a C^∞-curve (or, more precisely, C^{3n,α}, if the multiplicity of an eigenvalue never exceeds n) of self-adjoint operators. Our proof of (F) works for normal A and needs only the assumption C^{2,α}.
It is somewhat surprising that these results carry over to normal operators. For Hermitian matrices the characteristic polynomial is hyperbolic, i.e., all its roots are real, and the roots of families of hyperbolic polynomials admit 'nice' parameterizations, which are reflected in the regularity properties of the eigenvalues and the eigenvectors. For instance, the roots of a hyperbolic polynomial with coefficients in some quasianalytic class of functions admit parameterizations in the same class after desingularization by means of local blow-ups (of the parameter space), see [35] and [30] for C^ω; and the (increasingly ordered) roots are locally Lipschitz, provided that the coefficients are in C^n, where n is the degree, see [12]. The perturbation theory for complex polynomials is considerably weaker: In general, local power substitutions are needed in order to desingularize, and the roots need not satisfy a local Lipschitz condition, e.g., z² − x = 0, x ∈ R, see [35]. However, not every quasianalytic family of polynomials appears as the characteristic polynomial of a quasianalytic family of normal matrices. In fact, the set of normal complex n × n matrices forms a real (n² + n)-dimensional stratified submanifold of R^{2n²} (the set of all complex n × n matrices), see e.g. [19]. So the normality condition implies perturbation results for operators stronger than predicted by the perturbation theory for polynomials.
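The failure of a Lipschitz condition for the family z² − x = 0 can be made concrete; a small Python sketch with numerics of our own:

```python
import math

# One continuous choice of root of z^2 - x = 0 for x >= 0 is sqrt(x).
# By contrast, the hyperbolic family z^2 - x^2 = 0 has the locally
# Lipschitz (increasingly ordered) roots -|x| and |x|.
def root(x):
    return math.sqrt(x)

# The difference quotients |root(x) - root(0)| / |x - 0| blow up as x -> 0,
# so no Lipschitz constant works on any neighborhood of 0:
for x in (1e-2, 1e-4, 1e-6):
    print(x, root(x) / x)   # 10.0, 100.0, 1000.0
```

The quotient root(x)/x = x^{−1/2} tends to infinity, exactly the phenomenon quoted from [35] above.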
The results in (B) and (C) seem to be new even in the real analytic setting. However, we shall work in a minimal setting making the proofs (in particular, desingularization) work, namely in subclasses of C^∞ which are quasianalytic and have certain stability properties, see Section 2. Only when passing to infinite dimensions will we restrict to the framework of Denjoy-Carleman classes, for which we have developed the required principles of calculus beyond Banach spaces in [26, 27, 28]. One may expect analogous results for any suitable quasianalytic function class.
In (D) we need to assume continuity of x → λ(x) if dim E > 1, since in general there will not exist continuous parameterizations of the single eigenvalues, see Example 8.2. However, it might be that the supplement in (C) is still true without that assumption, i.e., that a C 0,1 -family R n ∋ x → A(x) of normal complex matrices admits a parameterization of its eigenvalues by SBV loc -functions whose classical gradient exists a.e. and is locally bounded, see Question 6.21.
The conclusions in (E) and (F) are optimal in the following sense: There exist C^∞-curves (even non-quasianalytic C^[M]) of real symmetric 2 × 2 matrices whose eigenvalues do not admit a parameterization in C^{1,α} for any α > 0, see the examples in [24] and [29]. We also want to stress that (A), (E), and (F) are no longer true if the parameter domain has more than one dimension: The eigenvalues $\pm\sqrt{x^2+y^2}$ of the real analytic family
$\begin{pmatrix} x & y \\ y & -x \end{pmatrix}$, x, y ∈ R,
are not C^1 at the origin, see Example 8.1. We point out that the assumptions in Theorem 1.1 may be slightly relaxed if all A(x) are m-sectorial operators. In that case it suffices to assume that the associated quadratic forms a(x) have common domain of definition V and that x → a(x)(u) is of the respective class for each u ∈ V, see Remark 7.5.
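Example 8.1, as quoted above, is easy to check numerically; a minimal Python sketch (assuming numpy; the helper name is ours):

```python
import numpy as np

# Example 8.1 quoted above: A(x, y) = [[x, y], [y, -x]] is real analytic
# and symmetric, with eigenvalues -sqrt(x^2+y^2) and +sqrt(x^2+y^2):
# continuous, but no choice of eigenvalue is C^1 at the origin.
def eigs(x, y):
    return np.linalg.eigvalsh(np.array([[x, y], [y, -x]], dtype=float))

lo, hi = eigs(3.0, 4.0)
print(lo, hi)                            # -5.0 5.0 (up to rounding)

# Along the x-axis the top eigenvalue is |x|: a kink at 0, hence not C^1.
print(eigs(-1e-3, 0.0)[1], eigs(1e-3, 0.0)[1])
```

`eigvalsh` returns the eigenvalues in ascending order, so the two branches are the increasingly ordered eigenvalues ∓√(x²+y²).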
The paper is organized as follows: We introduce and describe the classes of smooth functions we shall be working with in Section 2, and polynomials with coefficients in these classes in Section 3. In Section 4 we show that a quasianalytic polynomial is solvable (i.e., admits roots in the same class as the coefficients) along quasianalytic arcs if and only if it is solvable after blowing up (the parameter space). This will be used in the proof of (B). We shall prove (partly stronger) finite dimensional versions of (A)-(F) for normal matrices in Sections 5 and 6. The proof of Theorem 1.1 will finally be completed in Section 7. Several examples in Section 8 will show that the results are best possible in the sense that, generally, the assumptions cannot be weakened and the conclusions cannot be strengthened. In particular, the results are no longer true if A is a family of merely diagonalizable matrices.
Notation. The notation C [M] stands for either C (M) or C {M} with the following restriction: Statements that involve more than one C [M] symbol must not be interpreted by mixing C (M) and C {M} .
For a C^∞ function germ f at a ∈ R^q we denote by f̂_a ∈ F_q its Taylor series at a, where F_q is the ring of formal power series in q variables. We write F_q = K[[x_1, . . . , x_q]] if we want to stress that the coefficients belong to K (where K = R or K = C) and the variables are x_1, . . . , x_q. We also write f̂ = f̂_0. We write ω(F) for the order of F ∈ F_q, i.e., the lowest degree of the non-zero monomials in F, with the convention ω(0) = +∞. For a C^∞ function germ f at 0 we set ω(f) := ω(f̂).
We write |S| for the cardinality of a finite set S and denote by H^q the q-dimensional Hausdorff measure.
L(E, F ) is the space of bounded linear mappings E → F .

Smooth function classes
2.1. Classes of C ∞ -functions. Let us assume that for every open U ⊆ R q , q ∈ N, we have a subalgebra C(U ) of C ∞ (U ) = C ∞ (U, R) so that the following assumptions (C 1 )-(C 5 ) are satisfied.
(C1) C contains the restrictions of polynomial functions. The algebra of restrictions to U of polynomial functions on R^q is contained in C(U).
(C2) C is closed under composition. If ϕ = (ϕ_1, . . . , ϕ_p) : U → V ⊆ R^p is a mapping with all ϕ_i ∈ C(U) and f ∈ C(V), then f ∘ ϕ ∈ C(U).
(C3) C is closed under differentiation. If f ∈ C(U), then the partial derivatives ∂f/∂x_i belong to C(U).
(C4) C is closed under division by a coordinate. If f ∈ C(U) vanishes identically on U ∩ {x_i = a_i}, then f(x) = (x_i − a_i) g(x) with g ∈ C(U).
(C5) C is closed under taking the inverse. Let ϕ : U → V be a C-mapping between open subsets U and V in R^q. Let a ∈ U, ϕ(a) = b, and suppose that the Jacobian matrix (∂ϕ/∂x)(a) is invertible. Then there exist neighborhoods U′ of a, V′ of b, and a C-mapping ψ : V′ → U′ such that ψ(b) = a and ϕ(ψ(y)) = y for all y ∈ V′.
It follows from (C1) and (C2) that ϕ = (ϕ_1, . . . , ϕ_p) is a C-mapping if and only if ϕ_i ∈ C(U), for all 1 ≤ i ≤ p. Property (C5) is equivalent to the implicit function theorem in C: Let U ⊆ R^q × R^p be open, let (a, b) ∈ U, and let f = (f_1, . . . , f_p) : U → R^p with all f_i ∈ C(U) be such that f(a, b) = 0 and the matrix (∂f_i/∂y_j)(a, b) is invertible, where (x, y) denote the coordinates of R^q × R^p. Then there is a neighborhood V × W of (a, b) in U and a C-mapping g : V → W such that g(a) = b and f(x, g(x)) = 0, for x ∈ V.
It follows from (C 5 ) that C is closed under taking the reciprocal: If f ∈ C(U ) vanishes nowhere in U , then 1/f ∈ C(U ).
Frequently, we shall also require the following condition.
(Q) C is quasianalytic. If f ∈ C(U) and for a ∈ U the Taylor series of f at a vanishes (i.e., f̂_a = 0), then f vanishes in a neighborhood of a.
Since {x ∈ U : f̂_x = 0} is closed in U, condition (Q) is equivalent to the following property: If U is connected, then, for each a ∈ U, the Taylor series homomorphism C(U) → F_q, f ↦ f̂_a, is injective.
Occasionally, we will need a further condition.
(C6) C is closed under solving ODEs. Let I ⊆ R be an open interval and let U ⊆ R^q be open. Consider the initial value problem x′(t) = f(t, x(t)), x(t_0) = y, where f : I × U → R^q is a C-mapping. Then the smooth solution x = x(t, y) is of class C wherever it exists.
A complex-valued function f : U → C is said to be a C-function, or to belong to C(U, C), if (Re f, Im f) : U → R² is a C-mapping. It is immediately verified that (C3) and (C4) also hold for complex-valued functions f ∈ C(U, C); as does (Q) if assumed.
Convention. From now on, C shall denote a fixed, but arbitrary, class of C^∞-functions satisfying the conditions (C1)-(C5). We shall write C_Q for a class C which is required to satisfy (Q). It will be explicitly stated when (C6) is assumed.
Note that C might be C ∞ and C Q might be C ω . Here are some more examples.
(1) Denjoy-Carleman classes of Roumieu type: If M = (M_k) is a positive log-convex sequence which is stable under derivation (see (M1) and (M2)), then the Denjoy-Carleman class of Roumieu type C^{M} has the properties (C1)-(C6); see [11, Section 4] for (C1)-(C5) and [21] for (C6). In particular, this is true for all Gevrey classes G^{1+s} = C^{(k!^s)_k}, s ≥ 0. If M = (M_k) additionally satisfies (M4), then C^{M} is quasianalytic (Q). Among the Gevrey classes only G^1 = C^ω has this property. However, by setting
M^{δ,n}_k := (1/k!) ( k · log(k) · ⋯ · log^{n−1}(k) · (log^n(k))^δ )^k,
where log^n denotes the n-fold composition of log, we obtain for each 0 < δ ≤ 1 and each n ∈ N_{>0} a quasianalytic class C^{M^{δ,n}} satisfying all required conditions.
2.3. Resolution of singularities in C_Q. A C-manifold is a C^∞-manifold such that all chart change mappings are of class C. This provides a category C of C-manifolds and C-mappings.
The implicit function property (C 5 ) implies that a smooth (i.e., not singular) subset of a C-manifold is a C-submanifold: Let M be a C-manifold. Suppose that U is open in M , g 1 , . . . , g p ∈ C(U ), and the gradients ∇g i are linearly independent at every point of the zero set X := {x ∈ U : g i (x) = 0 for all i}. Then X is a closed C-submanifold of U of codimension p.
The category C is closed under blowing up with center a closed C-submanifold. We shall use a simple version of the desingularization theorem of Hironaka [18] for C Q -function classes due to Bierstone and Milman [10,11]. We use the terminology therein.
2.4. Theorem ([11, 5.12]). Let M be a C_Q-manifold, X a closed C_Q-hypersurface in M, and K a compact subset of M. Then there is a neighborhood W of K and a surjective mapping ϕ : W′ → W of class C_Q such that:
(1) ϕ is a composite of finitely many C_Q-mappings, each of which is either a blow-up with smooth center (that is nowhere dense in the smooth points of the strict transform of X) or a surjection of the form ⊔_j U_j → ∪_j U_j, where the latter is a finite covering of the target space by coordinate charts.
(2) The final strict transform X′ of X is smooth, and ϕ^{−1}(X) has only normal crossings. (In fact, ϕ^{−1}(X) and det dϕ simultaneously have only normal crossings, where dϕ is the Jacobian matrix of ϕ with respect to any local coordinate system.)
See [11, 5.9 and 5.10] and [10] for stronger desingularization theorems in C_Q.
A real- or complex-valued C_Q-function f on a C_Q-manifold M is said to have only normal crossings if each point in M admits a coordinate neighborhood U with coordinates x = (x_1, . . . , x_q) such that f(x) = x^α g(x) = x_1^{α_1} ⋯ x_q^{α_q} g(x), where g is a non-vanishing C_Q-function on U, and α ∈ N^q. Observe that, if a product of C_Q-functions has only normal crossings, then each factor has only normal crossings.
Let f ∈ C Q (M, C) and let K ⊆ M be compact. Then there exists a neighborhood W of K and a finite covering {π k : U k → W } of W by C Q -mappings π k , each of which is a composite of finitely many local blow-ups, such that, for each k, the function f • π k has only normal crossings. This follows from Theorem 2.4 applied to the real-valued C Q -function |f | 2 = f f and from the previous observation.
By a local blow-up Φ over an open subset U of a C Q -manifold M we mean the composite Φ = ι • ϕ of a blow-up ϕ : U ′ → U with smooth center and of the inclusion ι : U → M .
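A standard illustration of a local blow-up producing normal crossings (a worked example of our own, using the function from Example 8.1 quoted in the introduction): in the chart (x, y) = φ(u, v) = (u, uv) of the blow-up of the origin of R²,

```latex
f(x,y) = x^2 + y^2, \qquad
(f \circ \varphi)(u,v) = u^2 + u^2 v^2 = u^2\,(1 + v^2),
```

which has only normal crossings (here α = (2, 0) and g(u, v) = 1 + v² is non-vanishing). Accordingly, ±u√(1 + v²) are real analytic square roots of f ∘ φ, the mechanism behind blow-up statements such as (B).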
We shall need the following well-known lemma.
2.6. Lemma. Let I ⊆ R be an open interval. Let f j , g j : I → C, 1 ≤ j ≤ n, be C-functions such that |{j : f j (t) = z}| = |{j : g j (t) = z}| for all t ∈ I and z ∈ C. Assume that at each t 0 ∈ I the order of contact of any two elements of {f j } (equivalently {g j }) is finite unless their germs at t 0 coincide. Then {f j } and {g j } differ by a constant permutation.
The assumption on the order of contact is trivially satisfied if the functions are of class C Q .
Proof. Set f = (f_1, . . . , f_n) and g = (g_1, . . . , g_n). By assumption, for each t ∈ I the tuple f(t) is a permutation of g(t). There exists a permutation σ_{t_0} ∈ S_n/(S_n)_{g(t_0)} so that f(t_0) = σ_{t_0}.g(t_0). Set g̃ = σ_{t_0}.g. We claim that f = g̃. This is true locally near t_0, since, by the assumption on the order of contact, the germs at t_0 of any two of the functions either coincide or separate. Assume for contradiction that there exists t_1 ∈ I so that f(t_1) ≠ g̃(t_1). Without loss of generality assume t_0 < t_1 and let s := sup{t ∈ [t_0, t_1] : f = g̃ on [t_0, t]}. But then the C-curve h = f − g̃ is identically 0 on the (non-trivial) interval [t_0, s], and for each ε > 0 there exists t ∈ (s, s + ε) with h(t) ≠ 0. Thus, h must vanish of infinite order at s, which contradicts our assumption.

C-polynomials
3.1. Monic univariate complex polynomials. The space of all monic univariate complex polynomials
(3.2)  P(z) = z^n + ∑_{j=1}^n (−1)^j a_j z^{n−j}
of fixed degree n naturally identifies with C^n (via P ↦ (a_1, . . . , a_n)). It may also be viewed as the orbit space C^n/S_n with respect to the standard action S_n : C^n of the symmetric group S_n on C^n by permuting the coordinates (the roots λ_j of P). The elementary symmetric functions σ_1, . . . , σ_n generate the algebra of symmetric polynomials on C^n, i.e., C[C^n]^{S_n} = C[σ_1, . . . , σ_n]. It follows that the orbit projection C^n → C^n/S_n identifies with the mapping σ = (σ_1, . . . , σ_n) : C^n → C^n, and we have a_j = σ_j(λ_1, . . . , λ_n) (Vieta's formulas). The associated polynomials
(3.3)  ∆_k(λ_1, . . . , λ_n) := ∑_{1 ≤ i_1 < ⋯ < i_k ≤ n} ∏_{p < q} (λ_{i_p} − λ_{i_q})²
are symmetric. Thus there exist unique polynomials ∆̃_k such that ∆_k = ∆̃_k ∘ σ, and so the ∆̃_k are functions of P. The number of distinct roots of P equals the maximal k such that ∆̃_k(P) ≠ 0; it cannot decrease locally in P. If P is any monic polynomial, we denote by a_j(P) its coefficients, so that P takes the form (3.2) with a_j = a_j(P).
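Vieta's formulas are easy to check computationally; a minimal Python sketch (the helper name is our own):

```python
from itertools import combinations
from math import prod

# Vieta: for P(z) = prod_j (z - lam_j), the coefficient a_j equals the
# j-th elementary symmetric function sigma_j of the roots.
def elementary_symmetric(roots, j):
    return sum(prod(c) for c in combinations(roots, j))

roots = [1, 2, 3]
# prod (z - lam_j) expands to z^3 - 6 z^2 + 11 z - 6, i.e. a = (6, 11, 6)
print([elementary_symmetric(roots, j) for j in (1, 2, 3)])  # [6, 11, 6]
```

The signs match the convention of (3.2): the coefficient of z^{n−j} is (−1)^j σ_j.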
The inverse function property (C 5 ) and (C 1 ) imply the following lemma.
3.4. Lemma (Splitting lemma in C, see [35, 3.2]). Let P_0 be a complex polynomial satisfying P_0 = P_1 · P_2, where P_1 and P_2 are monic polynomials without common root. Then for P near P_0 we have P = P_1(P) · P_2(P) for C-mappings of monic polynomials P ↦ P_1(P) and P ↦ P_2(P), defined for P near P_0, with the given initial values. (Here P ↦ P_i(P) is understood as a mapping R^{2n} → R^{2 deg P_i}.)
3.5. C-families of polynomials. By a C-family of polynomials we mean a polynomial
(3.6)  P(x)(z) = z^n + ∑_{j=1}^n (−1)^j a_j(x) z^{n−j},
where the coefficients a_j are complex-valued C-functions defined on a C-manifold M. Let x_0 ∈ M. If P(x_0) has distinct roots ν_1, . . . , ν_m, the Splitting Lemma 3.4 provides a C-factorization P(x) = P_1(x) ⋯ P_m(x) near x_0 such that no two factors have a common root and all roots of P_h(x_0) are equal to ν_h, for 1 ≤ h ≤ m. This factorization amounts to a reduction of S_n : C^n to S_{n_1} × ⋯ × S_{n_m} : C^{n_1} ⊕ ⋯ ⊕ C^{n_m}, where n_h is the multiplicity of ν_h. In this situation we shall write S(P(x_0)) := S_{n_1} × ⋯ × S_{n_m}.
In other words, S(P (x 0 )) is the stabilizer of the ordered n-tuple consisting of the roots of P (x 0 ) with multiplicities. Furthermore, we will remove fixed points of S n1 × · · · × S nm : C n1 ⊕ · · · ⊕ C nm or, equivalently, reduce each factor P h to the case a 1 (P h ) = 0 by replacing z by z − a 1 (P h )/n h . The effect on the roots of P h is a shift by a C-function.
For later reference we state the following result.
3.8. Normal nonflatness. Let I ⊆ R be an open interval and let I ∋ t ↦ P(t) be a C-family of polynomials (3.6). We say that P is normally nonflat at t_0 ∈ I if it has the following property:
(N) Let k be maximal with the property that the germ at t_0 of t ↦ ∆̃_k(P(t)) is not 0. Then t ↦ ∆̃_k(P(t)) is not infinitely flat at t_0.
By (3.3), condition (N) is equivalent to the following: Let λ_j denote the germs at t_0 of a continuous parameterization of the roots of P; such exist by Proposition 3.7. Then the order of contact at t_0 of any two unequal λ_j is finite. Evidently, (N) is satisfied if P is a C_Q-polynomial.
We shall say that P is normally nonflat if (N) holds at each t 0 ∈ I.

3.9. Lemma ([34, 2.1]). Let P be a polynomial (3.6) with coefficients a_j : (R, 0) → C germs at 0 of C-functions, and a_1 = 0. Then, for integers r ≥ 1, the following conditions are equivalent:
(1) ω(a_j) ≥ jr, for all 2 ≤ j ≤ n;
(2) ω(λ_j) ≥ r for all j, for the continuous parameterizations λ_j of the roots of P.
Consequently, if P is normally nonflat at 0 and ω(a_j) = ∞ for all j, then a_j = 0 for all j.
3.10. Proposition (Puiseux's theorem in C). Let P be a polynomial (3.6) with coefficients a_j : (R, 0) → C germs at 0 of C-functions. If P is normally nonflat at 0, then there exist a positive integer γ and germs λ_j : (R, 0) → C of C-functions, 1 ≤ j ≤ n, which parameterize the roots of t ↦ P(t^γ), i.e., P(t^γ)(z) = ∏_{j=1}^n (z − λ_j(t)).
Proof. For C = C^∞ this was proved in [34, 3.2]. The same proof works for general C. See also [31].
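The role of the exponent γ can be seen on the family z² − x mentioned in the introduction (a worked example of our own):

```latex
P(t)(z) = z^2 - t: \quad t \mapsto \sqrt{t} \ \text{is continuous but not
Lipschitz at } t = 0; \quad\text{with } \gamma = 2:\quad
P(t^2)(z) = z^2 - t^2 = (z - t)(z + t),
```

so λ_{1,2}(t) = ±t are real analytic. A single power substitution regularizes the roots here, whereas blow-ups of points in a one-dimensional parameter space are isomorphisms and hence cannot help.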
3.11. Lemma (Glueing local choices of roots). Let R ∋ t → P (t) be a C-curve of polynomials (3.6). If P is normally nonflat and locally admits C-parameterizations of its roots, i.e., for each t 0 ∈ R there exist an open interval I t0 ∋ t 0 and C-functions which represent the roots of P on I t0 , then there exists a global C-parameterization of the roots.
Proof. Let I ⊆ R be a proper open subinterval and let λ j , 1 ≤ j ≤ n, be C-functions which represent the roots of P on I. We show that the C-parameterization λ j can be extended to a larger domain. Let the right (say) endpoint b of I be finite. There exists a C-parameterization µ j of the roots on some open interval I b ∋ b. By Lemma 2.6, we may renumber the µ j so that for all j, λ j = µ j on their common domain I ∩ I b . So together the λ j and the µ j form a C-parameterization of the roots on I ∪ I b .

C Q -polynomials solvable along C Q -arcs
We have shown in [35, 6.7] that a C_Q-polynomial P admits a C_Q-parameterization of its roots after desingularization by means of local blow-ups and local power substitutions. In this section we shall prove that local blow-ups suffice if P is solvable along C_Q-arcs, i.e., if for every C_Q-curve c in the parameter space the roots of P ∘ c admit a C_Q-parameterization. This will be applied to the characteristic polynomial of normal C_Q-matrices in Section 5. It might also be of independent interest.
4.1. Theorem. Let M ∋ x ↦ P(x) be a C_Q-family of polynomials (3.6) which is solvable along C_Q-arcs, and let K ⊆ M be compact. Then there exists a finite covering {π_k : U_k → W} of a neighborhood W of K, where each π_k is a composite of finitely many local blow-ups, such that, for all k, the family of polynomials P ∘ π_k allows a C_Q-parameterization of its roots on U_k.
Proof. Since the statement is local, we may assume without loss of generality that M is an open neighborhood of 0 ∈ R q . We use induction on the cardinality | S(P (0))| of S(P (0)).
If | S(P (0))| = 1, all roots of P (0) are pairwise different. So the statement follows from the C Q -implicit function theorem (C 5 ) or from the Splitting Lemma 3.4.
Suppose that |S(P(0))| > 1. Let ν_1, . . . , ν_m denote the distinct roots of P(0); some of them are multiple (m = 1 is allowed). The Splitting Lemma 3.4 provides a C_Q-factorization P(x) = P_1(x) ⋯ P_m(x) near 0 such that the roots of distinct factors remain separated and P_h(0)(z) = (z − ν_h)^{n_h} for 1 ≤ h ≤ m. We reduce to S_{n_1} × ⋯ × S_{n_m} : C^{n_1} ⊕ ⋯ ⊕ C^{n_m} and we remove fixed points (see 3.5), which preserves solvability along C_Q-arcs. So, if a_{h,j} := a_j(P_h) denote the coefficients of P_h, we may assume that a_{h,1} = 0 for all h. Then all roots of P_h(0) are equal to 0, and hence a_{h,j}(0) = 0, for all 1 ≤ h ≤ m and 1 ≤ j ≤ n_h. If all coefficients a_{h,j} of P_h are identically 0, so are all its roots, and we remove the factor P_h from the product P_1 ⋯ P_m. Thus we can assume that, for each h, not all a_{h,j} vanish identically. Let us define the C_Q-functions
(4.2)  A_{h,j} := a_{h,j}^{n!/j},  1 ≤ h ≤ m, 2 ≤ j ≤ n_h.
By Theorem 2.4, we find a finite covering {π_k : U_k → U} of a neighborhood U of 0 by C_Q-mappings π_k, each of which is a composite of finitely many local blow-ups, such that, for each k, the non-zero A_{h,j} ∘ π_k (for 1 ≤ h ≤ m and 2 ≤ j ≤ n_h) and their pairwise non-zero differences simultaneously have only normal crossings. Let k be fixed and let x_0 ∈ U_k. Then x_0 admits a neighborhood W_k with suitable coordinates in which x_0 = 0 and so that either A_{h,j} ∘ π_k = 0 or
(4.3)  (A_{h,j} ∘ π_k)(x) = x^{α_{h,j}} A^k_{h,j}(x),
where A^k_{h,j} is a non-vanishing C_Q-function on W_k, and α_{h,j} ∈ N^q. The collection of exponents {α_{h,j} : A_{h,j} ∘ π_k ≠ 0, 1 ≤ h ≤ m, 2 ≤ j ≤ n_h} is totally ordered, by Lemma 2.5. Let α denote its minimum; it is attained, say,
(4.4)  (A_{h_0,j_0} ∘ π_k)(x) = x^α A^k_{h_0,j_0}(x), with A^k_{h_0,j_0} non-vanishing.
If α = 0, then not all roots of (P_{h_0} ∘ π_k)(x_0) coincide (since a_{h_0,1} ∘ π_k = 0), and, thus, |S((P ∘ π_k)(x_0))| < |S(P(0))|. Obviously, P ∘ π_k is again solvable along C_Q-arcs. By the induction hypothesis, there exists a finite covering {π_{kl} : W_{kl} → W_k} of W_k (possibly shrinking W_k) of the required type such that, for all l, the family of polynomials P ∘ π_k ∘ π_{kl} allows a C_Q-parameterization of its roots on W_{kl}.
Let us assume that α ≠ 0. Then there exist C_Q-functions Ã^k_{h,j} on W_k (maybe some of them 0) such that, for all 1 ≤ h ≤ m and 2 ≤ j ≤ n_h, A_{h,j} ∘ π_k = x^α Ã^k_{h,j}.
4.5. Claim. α/n! ∈ N^q.
Assume for contradiction that there is an i_0 such that α_{i_0}/n! ∉ N. Let u ∈ W_k be such that u_{i_0} = 0 and u_i ≠ 0 for i ≠ i_0, and let e_{i_0} denote the i_0-th standard unit vector in R^q. Since P ∘ π_k is solvable along C_Q-arcs, the roots of t ↦ (P_h ∘ π_k)(u + t e_{i_0}) are given by C_Q-functions λ_{h,j} near t = 0. By (4.4), there exist h_0 and 2 ≤ j_0 ≤ n_{h_0} so that A^k_{h_0,j_0} is non-vanishing. By (4.2) and (4.3), the coefficients of t ↦ (P_{h_0} ∘ π_k)(u + t e_{i_0}) vanish at t = 0; since α_{i_0} > 0, this implies that λ_{h_0,j}(0) = 0 for all 1 ≤ j ≤ n_{h_0}. Set Q_{h_0}(t) := (P_{h_0} ∘ π_k)(u + t e_{i_0}), let r_{h_0} := min_j ω(λ_{h_0,j}), and set μ_{h_0,j}(t) := t^{−r_{h_0}} λ_{h_0,j}(t); then the μ_{h_0,j} parameterize the roots of the polynomial Q̃_{h_0} with coefficients a_j(Q̃_{h_0}(t)) := t^{−j r_{h_0}} a_j(Q_{h_0}(t)). Since μ_{h_0,j}(0) ≠ 0 for some j, not all coefficients of Q̃_{h_0}(0) vanish. So, for some j_1, we have ω(a_{j_1}(Q_{h_0})) = j_1 r_{h_0}. Comparing the orders of vanishing by means of (4.2) and (4.3) yields α_{i_0}/n! ≤ r_{h_0} and, likewise, α_{i_0}/n! ≥ r_{h_0}. Hence α_{i_0}/n! = r_{h_0} ∈ N, a contradiction. Thus Claim 4.5 is shown.
By (4.2), (4.3), and Claim 4.5, each a_{h,j} ∘ π_k is divisible by x^{jβ}, where β = (β_1, . . . , β_q) := α/n!, and we may define the C_Q-functions
(4.10)  a^k_{h,j} := x^{−jβ} (a_{h,j} ∘ π_k).
Consider the C_Q-family of polynomials P^k_h with coefficients a_j(P^k_h) := a^k_{h,j}. By (4.4), there exist 1 ≤ h ≤ m and 2 ≤ j ≤ n_h such that a^k_{h,j}(x_0) ≠ 0, and, hence, not all roots of P^k_h(x_0) coincide. So for P^k := P^k_1 ⋯ P^k_m we have |S(P^k(x_0))| < |S(P(0))|.
4.11. Claim. P^k is solvable along C_Q-arcs.
Let c : R → W_k be a C_Q-curve. By Lemma 3.11, it suffices to show that the roots of P^k ∘ c locally admit C-parameterizations, and without loss of generality it is enough to show this locally near 0 ∈ R. By Proposition 3.10, there exists γ ∈ N_{>0} such that t ↦ P^k(c(t^γ)) admits a C_Q-parameterization λ_j of its roots near t = 0 ∈ R. Let γ be minimal with that property. For contradiction assume that γ > 1. By (4.10), the roots of P^k and P ∘ π_k differ by the monomial factor m(x) := x^β. Thus, the functions μ_j(t) := m(c(t^γ)) · λ_j(t) form a C_Q-parameterization of the roots of t ↦ P(π_k(c(t^γ))). Since P ∘ π_k is solvable along C_Q-arcs, there exist C_Q-functions ν_j which parameterize the roots of P ∘ π_k ∘ c. Hence, both collections {μ_j} and {t ↦ ν_j(t^γ)} parameterize the roots of t ↦ P(π_k(c(t^γ))), and, after renumbering, we may assume that ν_j(t^γ) = m(c(t^γ)) λ_j(t) for all j, by Lemma 2.6. By (Q) and (C4), the quotients ν_j/(m ∘ c) are C_Q-functions. As they parameterize the roots of P^k ∘ c, the choice of γ was not minimal, a contradiction. This proves Claim 4.11.
Now, by the induction hypothesis, there exists a finite covering {π_{kl} : W_{kl} → W_k} of W_k (possibly shrinking W_k) of the required type such that, for all l, the family of polynomials P^k ∘ π_{kl} admits a C_Q-parameterization λ^{kl}_j of its roots on W_{kl}. Then the C_Q-functions x ↦ m(π_{kl}(x)) · λ^{kl}_j(x) form a choice of the roots of the family P ∘ π_k ∘ π_{kl} on W_{kl}. Since k and x_0 were arbitrary, the assertion of the theorem follows.
Let us call a C_Q-family M ∋ x ↦ P(x) of polynomials (3.6) solvable after blowing up if the conclusion of Theorem 4.1 holds, i.e., for K ⊆ M compact, there exists a finite covering {π_k : U_k → W} of a neighborhood W of K, where each π_k is a composite of finitely many local blow-ups, such that, for all k, P ∘ π_k allows a C_Q-parameterization of its roots.
4.12. Corollary (Solvability along C_Q-arcs and after blowing up are equivalent). A C_Q-family of polynomials (3.6) is solvable along C_Q-arcs if and only if it is solvable after blowing up.
Proof. One direction is shown in Theorem 4.1. For the converse direction let c : R → M be a C_Q-curve. By Lemma 3.11, it suffices to prove that P ∘ c admits C_Q-parameterizations of its roots, locally. Let t_0 ∈ R, set K := {c(t_0)}, and apply the assumption that P is solvable after blowing up. Since C_Q-curves admit liftings over blow-ups, c lifts, locally near t_0, to a C_Q-curve c̃ in some U_k with π_k ∘ c̃ = c. Composing a C_Q-parameterization of the roots of P ∘ π_k with c̃ implies the statement.
Remarks.
(2) Hyperbolic C_Q-polynomials are solvable along C_Q-arcs, see [35, 6.11]. Hyperbolic means that all roots are real at each parameter value. In the next section we will meet another class of polynomials solvable along C_Q-arcs.

5. Smooth perturbation theory for normal matrices
5.1. Lemma. Let P be a polynomial (3.6) with coefficients a_j : (R, 0) → C germs at 0 of C-functions, and assume that P is normally nonflat at 0. If there exist Λ_1, . . . , Λ_n ∈ C[[t]] which represent the roots of the formal polynomial P̂, i.e., P̂(t)(z) = ∏_{j=1}^n (z − Λ_j(t)), then there exist germs λ_1, . . . , λ_n : (R, 0) → C of C-functions such that P(t)(z) = ∏_{j=1}^n (z − λ_j(t)) and λ̂_j = Λ_j for all j.
Proof. In view of the reduction procedure described in 3.5 (which preserves normal nonflatness) we may assume that all roots of P(0) equal 0 and a_1 = 0. Let r := min_{1≤j≤n} ω(Λ_j) ≥ 1. If r = ∞, then all a_j = 0, by Lemma 3.9, and by setting all λ_j = 0 we are done. So we may assume that r < ∞. For each j we have ω(â_j) ≥ jr, thus a_j is divisible by t^{jr}, and, by (C4), there exist C-germs b_j such that a_j(t) = t^{jr} b_j(t). Consider the polynomial Q with coefficients a_j(Q) := b_j. It is easy to see that Q is normally nonflat at 0 and that not all roots of Q(0) coincide. Thus, induction on the cardinality of S(P(0)) proves the statement.
Remark. It is easy to check that the ring of germs at 0 ∈ R of complex-valued C Q -functions is a Henselian excellent discrete valuation ring with maximal ideal m = {h : h(0) = 0} and m-adic completion C[[t]]. Thus, by [4], [33], or [37,Thm. 4.2], it has the Artin approximation property which might be used alternatively to Lemma 5.1 in the quasianalytic case.
Let us introduce notation. We associate with a parameterized family of complex matrices A(x) = (A_{ij}(x))_{1≤i,j≤n} its characteristic polynomial χ(A) := det(A − zI) and set P_A := (−1)^n χ(A). Then P_A is a family of polynomials (3.6) with coefficients a_j(P_A) = Trace(Λ^j A), i.e.,
(5.2) $a_j(P_A) = \operatorname{Trace}(\Lambda^j A) = \sigma_j(\lambda_1, \dots, \lambda_n),$
the j-th elementary symmetric function of the eigenvalues λ_1, . . . , λ_n of A. We say that A is normally nonflat if P_A is normally nonflat.

5.3. Proposition. Let A = (A_{ij})_{1≤i,j≤n} be a C-curve of normal complex matrices, i.e., the entries A_{ij} belong to C(R, C), such that P_A is normally nonflat. Then there exists a global C-parameterization of the eigenvalues and the eigenprojections of A.
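The identity a_j(P_A) = Trace(Λ^j A) says that a_j is the sum of the principal j × j minors of A, equivalently the j-th elementary symmetric function of its eigenvalues. Here is a minimal numerical check of this identity (our own illustration, not part of the paper; the helper `elementary_symmetric` is ad hoc):

```python
import itertools
import numpy as np

def elementary_symmetric(A, j):
    # a_j(P_A) = Trace(Λ^j A) = sum of all principal j×j minors of A
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in itertools.combinations(range(n), j))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# det(zI - A) = z^4 - a_1 z^3 + a_2 z^2 - a_3 z + a_4,
# so the monic coefficients of np.poly(A) are (-1)^j a_j.
coeffs = np.poly(A)
for j in range(1, 5):
    assert np.allclose(coeffs[j], (-1)**j * elementary_symmetric(A, j))
```

The signs reflect the convention P_A = (−1)^n χ(A) = det(zI − A).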
In the real analytic case the local statement of this proposition is (by considering holomorphic extensions) a direct consequence of [20, II Thm. 1.10] which exploits the monodromy of algebraic functions; see also [5, 3.5.1]. An algebraic version for normal matrices over so-called Hermitian discrete valuation rings is due to [1]. Actually, for C Q -curves of normal matrices, the local statement follows from [1], since the germs at 0 ∈ R of complex-valued C Q -functions form a Hermitian discrete valuation ring (as can be checked using Remark 5.1).
Proof. First we treat the eigenvalues. By Lemma 3.11, it suffices to show that there exist C-parameterizations of the eigenvalues, locally near each t_0. Without loss of generality assume that t_0 = 0. In view of Lemma 5.1 it is enough to prove the following claim.

Claim. There exist formal power series Λ_1, . . . , Λ_n ∈ C[[t]] which represent the roots of the formal polynomial $\widehat{P_A}$.
This claim is a consequence of [1], since C[[t]] is a Hermitian discrete valuation ring and $\widehat{P_A} = P_{\widehat A}$, where the matrix $\widehat A(t) = (\widehat{A}_{ij}(t))$ is normal, since Taylor expansion commutes with transposition and conjugation (note that $\overline{\sum_j f_j t^j} = \sum_j \overline{f_j}\, t^j$). Here is a direct proof more adapted to our situation.
Proof of claim. Let s be maximal with the property that the germ at 0 of $\tilde\Delta_s(P_A)$ does not vanish identically. If $\tilde\Delta_s(P_A(0)) \ne 0$, then the Splitting Lemma 3.4 implies the assertion. So let us assume that $\tilde\Delta_s(P_A(0)) = 0$, i.e., generically distinct roots of P_A meet at 0. By Proposition 3.10, there exists a minimal γ ∈ N_{>0} such that
(5.4) $P_A(t^γ)(z) = \prod_{j=1}^{s} \bigl(z - \lambda_j(t)\bigr)^{m_j}$
for generically distinct C-germs λ_j : R, 0 → C. Let θ be a primitive γth root of unity and consider the formal power series
$\widehat{\lambda}_j(t) = \sum_{k \ge 0} \lambda_{j,k}\, t^k.$
By (5.4), the $\widehat{\lambda}_j(θt)$ represent the roots of the formal polynomial $\widehat{P_A}(t^γ)$, likewise with the $\widehat{\lambda}_j(t)$. Since C[[t, z]] is a unique factorization domain, we have:
(5.5) there is a permutation σ such that $\widehat{\lambda}_j(θt) = \widehat{\lambda}_{σ(j)}(t)$ for all j.
We shall show that σ is trivial. Then, in view of (5.5), $\lambda_{j,k}\, θ^k = \lambda_{j,k}$ for all j and all k ∈ N. So $\lambda_{j,k} = 0$ whenever k ∉ γN, and, thus, $\widehat{\lambda}_j(t^{1/γ})$ is a formal power series in t. By (5.4), the formal power series $\widehat{\lambda}_j(t^{1/γ})$, 1 ≤ j ≤ s, represent the distinct roots of $\widehat{P_A}(t)$; they are pairwise distinct by normal nonflatness. The claim follows.

Suppose that σ is non-trivial. Clearly, λ_1, . . . , λ_s parameterize the generically distinct eigenvalues of t → A(t^γ). Let P_1, . . . , P_s denote the respective eigenprojections:
(5.6) $A(t^γ) = \sum_{i=1}^{s} \lambda_i(t)\, P_i(t)$, (5.7) $P_i(t) = \prod_{j \ne i} \frac{A(t^γ) - \lambda_j(t) I}{\lambda_i(t) - \lambda_j(t)}$ (off t = 0).
Normal nonflatness implies that there exist (matrix-valued) C-germs Q_i such that $P_i(t) = t^{-p_i} Q_i(t)$, p_i ∈ N. Since A(t^γ) is normal, the P_i(t) are orthogonal projections and, thus, $\|P_i(t)\| = 1$ off 0; hence each P_i is of class C, by (C4). So we may consider the formal power series (with coefficients n × n matrices)
$\widehat{P}_i(θt) = \sum_{k \ge 0} P_{i,k}\, (θt)^k = \sum_{k \ge 0} P_{i,k}\, θ^k\, t^k,$
and (5.6) and (5.7) imply that $\widehat{P}_i(θt) = \widehat{P}_{σ(i)}(t)$ for all i. If σ is non-trivial, we get in particular $P_{i,0} = P_{j,0}$ for some i ≠ j. The fact that $P_i(t) P_j(t) = 0$ off 0 implies $P_{i,0} P_{j,0} = 0$ and, since P_i is idempotent, we have $(P_{i,0})^2 = P_{i,0}$. Therefore,
$P_{i,0} = (P_{i,0})^2 = P_{i,0} P_{j,0} = 0,$
which contradicts $\|P_i(t)\| = 1$. Hence σ = id and the claim is shown.

Now we treat the eigenprojections.
Let λ_j : R → C, 1 ≤ j ≤ s, be global C-parameterizations of the generically distinct eigenvalues of A and let P_j, 1 ≤ j ≤ s, be the respective eigenprojections. Then each P_i is expressed by (5.7) with γ = 1, and we may conclude, similarly as above, that each eigenprojection is globally of class C. Normal nonflatness implies that points where distinct eigenvalues meet cannot accumulate.

5.8. Theorem. Let M be a C^Q-manifold and let A(x) = (A_{ij}(x))_{1≤i,j≤n} be a family of normal complex matrices with entries A_{ij} in C^Q(M, C). Let K ⊆ M be compact. Then there exists a finite covering {π_k : U_k → W} of a neighborhood W of K, where each π_k is a composite of finitely many local blow-ups, such that, for all k, the family of normal complex matrices A ∘ π_k allows a C^Q-parameterization of its eigenvalues and its eigenvectors on U_k.
If M = R, A is of class C, and P_A is normally nonflat, then there exist global C-parameterizations of the eigenvalues and local C-parameterizations of the eigenvectors of A. If we assume (C6), also the eigenvectors admit a global C-parameterization.
Proof. The proof is subdivided into several claims.

5.9. Claim. The statements about the eigenvalues are true.
For M = R this was shown in Proposition 5.3. Let M be a general C^Q-manifold. Proposition 5.3 implies that the associated C^Q-family of polynomials P_A is solvable along C^Q-arcs. So Theorem 4.1 implies Claim 5.9.

5.10. Claim. Let A = A(x) be a family of normal complex n × n matrices, where the entries A_{ij} are C^Q-functions and the eigenvalues of A admit a C^Q-parameterization λ_j in a neighborhood of 0 ∈ R^q. Then there exists a finite covering {π_k : U_k → U} of a neighborhood U of 0, where each π_k is a composite of finitely many local blow-ups, such that, for all k, A ∘ π_k admits a C^Q-parameterization of its eigenvectors.
We prove Claim 5.10 using induction on |S(P_A(0))|. First consider the following reduction: Let ν_1, . . . , ν_m denote the pairwise distinct eigenvalues of A(0) with respective multiplicities n_1, . . . , n_m. The sets Λ_h := {λ_i : λ_i(0) = ν_h}, 1 ≤ h ≤ m, form a partition of the λ_i such that λ_i(x) ≠ λ_j(x), for x near 0, if λ_i and λ_j belong to different Λ_h. Consider
$V^{(h)}_x := \ker \prod_{\lambda_i \in \Lambda_h} \bigl(A(x) - \lambda_i(x)\, I\bigr).$
(The order of the compositions is not relevant.) Then $V^{(h)}_x$ is the kernel of a C^Q-vector bundle homomorphism B(x) with constant rank (even of constant dimension of the kernel), and thus it is a C^Q-vector subbundle of the trivial bundle U × C^n → U (where U ⊆ R^q is a neighborhood of 0) which admits a C^Q-framing. This can be seen as follows: Choose a basis of C^n such that A(0) is diagonal. By the elimination procedure one can construct a basis for the kernel of B(0). For x near 0, the elimination procedure (with the same choices) then gives a basis of the kernel of B(x). This clearly involves only operations which preserve the class C^Q. The elements of this basis are then of class C^Q in x near 0. Therefore, it suffices to find C^Q-eigenvectors in each subbundle V^{(h)} separately, expanded in the constructed C^Q-frame field. But in this frame field the vector subbundle looks again like a constant vector space. So we may treat each of these parts (A restricted to V^{(h)}, as a matrix with respect to the frame field) separately. For simplicity of notation we suppress the index h.
Let us write a j := a j (P A ). Suppose that all eigenvalues of A(0) coincide and are equal to a 1 (0)/n, according to (5.2). Eigenvectors of A(x) are also eigenvectors of A(x) − (a 1 (x)/n)I (and vice versa), thus we may replace A(x) by A(x) − (a 1 (x)/n)I and assume that a 1 = 0. So A(0) = 0.
If A = 0 identically, we choose the eigenvectors constant and we are done. Note that this proves Claim 5.10, if | S(P A (0))| = 1.
Assume that A ≠ 0. By Theorem 2.4, there exists a finite covering {π_k : U_k → U} of a neighborhood U of 0, where each π_k is a composite of finitely many local blow-ups, such that, for each k, the non-zero entries A_{ij} ∘ π_k of A ∘ π_k and their pairwise non-zero differences A_{ij} ∘ π_k − A_{lm} ∘ π_k simultaneously have only normal crossings.
Let k be fixed and let x_0 ∈ U_k. Then x_0 admits a neighborhood W_k with suitable coordinates in which x_0 = 0 and such that either A_{ij} ∘ π_k = 0 or
$(A_{ij} ∘ π_k)(x) = x^{α_{ij}}\, B^k_{ij}(x),$
where $B^k_{ij}$ is a non-vanishing C^Q-function on W_k, and α_{ij} ∈ N^q. The collection of exponents {α_{ij} : A_{ij} ∘ π_k ≠ 0} is totally ordered, by Lemma 2.5. Let α denote its minimum.
If α = 0, then $(A_{ij} ∘ π_k)(x_0) = B^k_{ij}(x_0) \ne 0$ for some 1 ≤ i, j ≤ n. Since a_1 ∘ π_k = 0, we may conclude that not all eigenvalues of (A ∘ π_k)(x_0) coincide. Thus, |S(P_{A∘π_k}(x_0))| < |S(P_A(0))|, and, by the induction hypothesis, there exists a finite covering {π_{kl} : W_{kl} → W_k} of W_k (possibly shrinking W_k) of the required type such that, for all l, the family of normal matrices A ∘ π_k ∘ π_{kl} allows a C^Q-parameterization of its eigenvectors on W_{kl}.
Assume that α ≠ 0. Then there exist C^Q-functions $A^k_{ij}$ (maybe some of them 0) such that, for all 1 ≤ i, j ≤ n,
$(A_{ij} ∘ π_k)(x) = x^{α}\, A^k_{ij}(x).$
Then $A^k(x) := (A^k_{ij}(x))$ forms a C^Q-family of normal n × n matrices, whose eigenvalues differ from those of (A ∘ π_k)(x) by the monomial factor x^α and admit a C^Q-parameterization. Indeed, the C^Q-functions λ_j ∘ π_k parameterize the eigenvalues of A ∘ π_k and are divisible by x^α; otherwise x → λ_j(π_k(x))/x^α would be an unbounded root of a polynomial with bounded coefficients, a contradiction (see e.g. [35, 2.4]). In view of (5.2), the C^Q-functions x → λ_j(π_k(x))/x^α represent the eigenvalues of A^k.
Eigenvectors of A^k(x) are also eigenvectors of (A ∘ π_k)(x) (and vice versa). As $A^k_{ij}(x_0) \ne 0$ for some i, j and since $a_1(P_{A^k}) = 0$, not all eigenvalues of A^k(x_0) coincide. Hence, $|S(P_{A^k}(x_0))| < |S(P_A(0))|$, and the induction hypothesis implies the statement. The proof of Claim 5.10 is complete.

5.11. Claim. If M = R, A is of class C, and P_A is normally nonflat, then there exist local C-parameterizations of the eigenvectors of A. If we assume (C6), there exists a global C-parameterization of the eigenvectors.
By Claim 5.9 the eigenvalues admit global C-parameterizations λ_j on R, which are unique up to a constant permutation, by Lemma 2.6. The proof of Claim 5.10 works in this case as well: Theorem 2.4 and Lemma 2.5, the only ingredients that need quasianalyticity, are both trivially true, and normal nonflatness is preserved by the reduction process. So there are local C-choices of the eigenvectors. The proof of Claim 5.10 further gives us, for each eigenvalue λ_j : R → C with generic multiplicity n_j, a unique n_j-dimensional C-vector subbundle $V^{(j)}$ of R × C^n whose fiber $V^{(j)}_t$ over t ∈ R consists of eigenvectors for the eigenvalue λ_j(t). By Proposition 5.3, the eigenprojection P_j corresponding to λ_j is C on R and $P_j(t)(C^n) = V^{(j)}_t$. It suffices to prove that each P_j has a transformation function of class C, cf. [20, II §4.2], i.e., there exists a matrix-valued function R ∋ t → U_j(t) such that U_j(t) is invertible for each t, both U_j and $U_j^{-1}$ are C on R, and $U_j(t) P_j(0) U_j(t)^{-1} = P_j(t)$. If {v_i} is a basis of $P_j(0)(C^n)$, then {U_j(t) v_i} is a basis of $P_j(t)(C^n)$.
We construct a transformation function of class C following [20, II §4.2]. Let us suppress the index j. Differentiation of P² = P gives P′P + PP′ = P′, and applying this identity several times yields PP′P = 0. Set
$Q := [P', P] = P'P - P P'.$
By (C5), Q is of class C. By (C6), the linear ODE
(5.12) $X' = Q X$
with initial condition X(0) = I has a unique global solution X = U. Similarly,
(5.13) $Y' = -Y Q$
with initial condition Y(0) = I has a unique global solution Y = V. Since $(VU)' = V'U + VU' = -VQU + VQU = 0$, the product VU is constant and, by the initial conditions, we find that VU = I, thus $U^{-1} = V$. Since $(PU)' = P'U + PU' = (P' + PQ)U = QPU$, PU is a solution of (5.12) with initial condition X(0) = P(0). Since the general solution of (5.12) is X(t) = U(t)X(0), we have U(t)P(0) = P(t)U(t), hence $U(t)P(0)U^{-1}(t) = P(t)$. So U is a transformation function for P. Moreover, U(t) is unitary for each t and hence the eigenvectors may be chosen orthonormal. This is seen as follows, cf. [20, II §6.2]: Normality of A implies P* = P and (P′)* = P′, by differentiation. Thus Q = [P′, P] is skew-Hermitian, and, since U solves (5.12), we find $(U^*)' = -U^* Q$, i.e., U* solves (5.13). Uniqueness implies that $U^* = V = U^{-1}$.
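The ODE construction above can be tested numerically. The following sketch is our own illustration (the projection family P(t) and the RK4 integrator are ad hoc choices, not from the paper): we take the rank-one orthogonal projections P(t) onto span{(cos t, sin t)}, form Q = [P′, P], and integrate (5.12); the computed U(t) is orthogonal and carries P(0) to P(t).

```python
import numpy as np

def P(t):
    # rank-1 orthogonal projection onto span{(cos t, sin t)}
    v = np.array([np.cos(t), np.sin(t)])
    return np.outer(v, v)

def dP(t, h=1e-6):
    # central difference approximation of P'(t)
    return (P(t + h) - P(t - h)) / (2 * h)

def transformation_function(t_end, n_steps=2000):
    # solve X' = Q(t) X, X(0) = I, with Q = [P', P], by classical RK4
    U = np.eye(2)
    h = t_end / n_steps
    def rhs(t, X):
        Q = dP(t) @ P(t) - P(t) @ dP(t)   # Q = [P', P]
        return Q @ X
    t = 0.0
    for _ in range(n_steps):
        k1 = rhs(t, U)
        k2 = rhs(t + h / 2, U + h / 2 * k1)
        k3 = rhs(t + h / 2, U + h / 2 * k2)
        k4 = rhs(t + h, U + h * k3)
        U = U + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return U

t1 = np.pi / 3
U = transformation_function(t1)
assert np.allclose(U @ U.T, np.eye(2), atol=1e-6)      # U is orthogonal
assert np.allclose(U @ P(0) @ U.T, P(t1), atol=1e-5)   # U intertwines P(0) and P(t1)
```

For this particular family one can check by hand that Q is the constant rotation generator, so U(t) is the rotation by angle t, consistent with the numerics.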

6. Lipschitz eigenvalues of normal matrices
There is the following result. In particular, the unordered n-tuple of eigenvalues λ(A) = (λ_1(A), . . . , λ_n(A)) is continuous (even Lipschitz) as a function of the normal matrix A. However, the single eigenvalues in general do not allow continuous parameterizations, see Example 8.2. Continuous parameterizations exist if A is Hermitian (e.g. ordering by size, λ_j(A) ≤ λ_{j+1}(A); see [2, 4.1]) or if A depends on a single real parameter (see Proposition 3.7). We shall show in this section that, if A depends on parameters locally in a Lipschitz way and admits continuous parameterizations λ_j of its eigenvalues, then the λ_j are locally Lipschitz. No such result is true for the eigenvectors, see Section 8.
We will repeatedly use the following fact.
6.2. Lemma ( [22, 4.3]). Let c : (a, b) → X be a continuous curve in a compact metric space X. The set of accumulation points of c(t) as t → a + is connected.
Let us start with the one parameter case.
6.3. Proposition. Let A(t) = (A ij (t)) 1≤i,j≤n be a curve of normal complex matrices, where the entries A ij : R → C are locally Lipschitz. Then the eigenvalues of A admit a parameterization which is locally Lipschitz. Actually, any continuous parameterization of the eigenvalues of A is locally Lipschitz.
Proof. Let s ∈ R be fixed. Let z be an eigenvalue of A(s) of multiplicity m. We choose a simple closed C¹-curve γ in the resolvent set of A(s) enclosing only z among all eigenvalues of A(s). By continuity, see Proposition 3.7, no eigenvalue of A(t) lies on γ, for t near s; see also Lemma 7.1 below. Now,
$P(t, γ) := -\frac{1}{2\pi i} \oint_γ \bigl(A(t) - ζ\bigr)^{-1}\, dζ$
is a locally Lipschitz curve of projections onto the direct sum of all eigenspaces corresponding to eigenvalues of A(t) in the interior of γ, with constant rank (cf. Section 7). For t near s, there are equally many eigenvalues in the interior of γ, and, by Proposition 3.7, we may call them λ_j(t), for 1 ≤ j ≤ m, so that each λ_j is continuous. The image of t → P(t, γ) describes a locally Lipschitz vector subbundle of the trivial bundle R × C^n → R. For each t choose an orthonormal system of eigenvectors v_j(t) of A(t) corresponding to the λ_j(t). They form a (not necessarily continuous) framing. By local triviality of the vector bundle, for each t near s and each sequence t_k → t there is a subsequence (again denoted by t_k) such that v_j(t_k) → w_j(t), where the w_j(t) form an orthonormal system of eigenvectors of A(t)|_{P(t)(C^n)}. Consider
(6.4) $\frac{A(t) - A(s)}{t - s}\, v_j(t) = \frac{z - A(s)}{t - s}\, v_j(t) + \frac{λ_j(t) - z}{t - s}\, v_j(t).$
Now assume that A′(s) exists. For t ≠ s take the inner product of (6.4) with each w_i(s): The first summand vanishes, since all λ_j(s) coincide with z and since the w_i(s) also form an orthonormal system of eigenvectors of A(s)* corresponding to the eigenvalue z̄ (cf. [20, I §6.9]). Applying this with t = t_k and letting k → ∞, we find that the w_i(s) are a basis of eigenvectors of P(s)A′(s)|_{P(s)(C^n)} with eigenvalues $\lim_{k\to\infty} (λ_j(t_k) - z)/(t_k - s)$. We may conclude, by Lemma 6.2, that the right-sided derivative λ_j^{(+)}(s) exists and is one of these eigenvalues. If we take the inner product of (6.4) with w_j(t) (for t near s) and proceed to the limit, then (as the first summand vanishes again for the same reason) we obtain
(6.5) $λ_j^{(+)}(t) = \bigl\langle A'(t)\, w_j(t),\, w_j(t) \bigr\rangle$
for a unit eigenvector w_j(t) of A(t) with eigenvalue λ_j(t). A similar formula holds for the left-sided derivatives λ_j^{(−)}(t).
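Proposition 6.3 can be checked numerically on a concrete curve of normal (non-Hermitian) matrices. In the following sketch (our own illustration; the curve A(t) and the branch-matching helper `matched_eigs` are ad hoc) the difference quotients of the continuous eigenvalue branches stay below max_t ‖A′(t)‖, as the (6.5)-type estimates predict:

```python
import numpy as np

def A(t):
    # smooth curve of normal matrices: rotation-conjugate of diag(e^{it}, 2e^{-it})
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([np.exp(1j * t), 2 * np.exp(-1j * t)])
    return R @ D @ R.T

def matched_eigs(prev, M):
    # continuous branch choice: match eigenvalues of M to the previous values
    ev = np.linalg.eigvals(M)
    if (abs(ev[0] - prev[0]) + abs(ev[1] - prev[1])
            <= abs(ev[1] - prev[0]) + abs(ev[0] - prev[1])):
        return ev
    return ev[::-1]

ts = np.linspace(0.0, 1.0, 2001)
lam = [np.linalg.eigvals(A(ts[0]))]
for t in ts[1:]:
    lam.append(matched_eigs(lam[-1], A(t)))
lam = np.array(lam)

# finite-difference Lipschitz constant of the matched branches
h = ts[1] - ts[0]
lip = np.max(np.abs(np.diff(lam, axis=0))) / h

# a priori bound: |λ'(t)| ≤ ‖A'(t)‖ (spectral norm), approximated by differences
Aprime_norm = max(np.linalg.norm((A(t + 1e-6) - A(t - 1e-6)) / 2e-6, 2) for t in ts)
assert lip <= Aprime_norm + 1e-3
```

Here the branches are e^{it} and 2e^{−it}, so the observed Lipschitz constant is about 2, comfortably below the computed bound on ‖A′(t)‖.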
An inspection of these arguments shows that they hold for any continuous parameterization λ_j of the eigenvalues of A. Hence we have shown:

6.6. Claim. Let λ_j be any continuous parameterization of the eigenvalues of A. If A′(s) exists, then the one-sided derivatives of λ_j exist at s, the left- and right-sided derivatives form the same set with correct multiplicities, namely, the set of eigenvalues of A′(s), and they satisfy a formula of type (6.5). Applying a suitable permutation on one side of s provides a continuous choice of the eigenvalues which is differentiable at s.
Next we claim that each λ_j is locally absolutely continuous. Then λ_j is differentiable almost everywhere and its derivative is locally bounded, by (6.5). Thus λ_j is locally Lipschitz.

6.7. Claim. Any continuous parameterization λ_j of the eigenvalues of A is locally absolutely continuous.
Taking the inner product of (6.4) with w_j(t) leads to
(6.8) $\frac{λ_j(t_k) - λ_j(t)}{t_k - t}\, \bigl\langle v_j(t_k), w_j(t) \bigr\rangle = \Bigl\langle \frac{A(t_k) - A(t)}{t_k - t}\, v_j(t_k),\, w_j(t) \Bigr\rangle$
for sequences t_k → t such that v_j(t_k) → w_j(t). Let I ⊆ R be an open bounded interval, J ⊇ Ī an open neighborhood of the closure Ī, and let C_J denote the Lipschitz constant of A on J (with respect to the operator norm). If t ∈ J and J ∋ t_k → t, t_k ≠ t, then, after passing to a subsequence (again denoted by t_k) so that v_j(t_k) → w_j(t), there is, by (6.8), a k_0 = k_0(t, (t_k)) ∈ N such that
(6.9) $\Bigl| \frac{λ_j(t_k) - λ_j(t)}{t_k - t} \Bigr| \le 2 C_J \quad\text{for all } k \ge k_0.$
Let j be fixed. Consider the continuous functions
$q_k(t) := \frac{λ_j(t + 1/k) - λ_j(t)}{1/k}$
and set C_k := max_{t∈I} |q_k(t)|.
We claim that C_k is bounded in k. Otherwise there exists a subsequence (again denoted by C_k) such that C_k ր ∞. Choose t_k ∈ I such that C_k = |q_k(t_k)|. Since Ī is compact, after passing to a subsequence, t_k → t_∞ ∈ Ī. We may also assume that this convergence is fast, i.e., for all n ∈ N the sequence $k^n (t_k - t_∞)$ is bounded. If t_k = t_∞ constantly, then C_k = |q_k(t_∞)| ≤ 2C_J for sufficiently large k, by (6.9). So we may assume that t_k ≠ t_∞, and consider
(6.10) $q_k(t_k) = \frac{λ_j(s_k) - λ_j(t_∞)}{s_k - t_∞}\, k (s_k - t_∞) - \frac{λ_j(t_k) - λ_j(t_∞)}{t_k - t_∞}\, k (t_k - t_∞), \qquad s_k := t_k + 1/k.$
By (6.9), there is some k_0 ∈ N such that both difference quotients on the right-hand side of (6.10) are bounded by 2C_J for all k ≥ k_0. (Here we pass first to a subsequence of t_k and then in turn to a subsequence of s_k, and set k_0 := max{k_0(t_∞, (t_k)), k_0(t_∞, (s_k))}.) Since the factors k(s_k − t_∞) and k(t_k − t_∞) are bounded, this contradicts the assumption that C_k is unbounded.
Since C_k = max_{t∈I} |q_k(t)| is bounded, the sequence of functions q_k is bounded in L^p(I), for any p ≥ 1. Since L^p(I) is reflexive if 1 < p < ∞, for such p there exists a subsequence (again denoted by q_k) and an element $λ_j' ∈ L^p(I)$ such that $q_k \rightharpoonup λ_j'$ weakly in L^p(I) (see e.g. [13, V Thm. 4.2]). Thus, for a test function φ ∈ C_c^∞(I),
$\int_I λ_j'\, φ\, dt = \lim_{k\to\infty} \int_I q_k\, φ\, dt = \lim_{k\to\infty} \int_I λ_j(t)\, \frac{φ(t - 1/k) - φ(t)}{1/k}\, dt = -\int_I λ_j\, φ'\, dt,$
where we used substitution and assumed that k is sufficiently large so that supp(φ) ± 1/k ⊆ I. This shows that λ_j′ is the weak derivative of λ_j, and, hence, λ_j ∈ W^{1,p}(I). It follows that there is an absolutely continuous function $\tilde{λ}_j$ on I which coincides with λ_j almost everywhere in I, and, thus, on a dense subset of I. By continuity, $λ_j = \tilde{λ}_j$. The proof of Claim 6.7 is complete.
6.11. Proposition. Let A(t) = (A ij (t)) 1≤i,j≤n be a curve of normal complex matrices, where the entries A ij : R → C are C 1 (resp. C 2 ). Then the eigenvalues of A admit a parameterization which is C 1 (resp. twice differentiable).
Proof. The proof is subdivided into several claims. We use the notation of the proof of 6.3.

6.12. Claim. If A is C¹, then the eigenvalues admit a C¹-parameterization.
We use induction on n. Let λ_j be a continuous parameterization of the eigenvalues of A (see Proposition 3.7). If s is such that not all λ_j(s) coincide, then the set {1, . . . , n} decomposes into the subsets {j : λ_j(s) = w}, w ∈ C. For i and j in different (non-empty) subsets, we have λ_i(t) ≠ λ_j(t) for all t in an open interval I_s containing s. As in the proof of 6.3, we may treat distinct subsets separately (by considering A(t)|_{P(t,γ)(C^n)}, where γ encloses exactly one of the distinct eigenvalues of A(s) at a time). By the induction hypothesis, Claim 6.12 holds on I_s.
Let I be an open interval containing only points s, where not all λ j (s) coincide. Let J ⊆ I be a maximal open subinterval on which Claim 6.12 holds. We claim that J = I. Otherwise an endpoint a of J belongs to I and there is a C 1 -parameterization of the eigenvalues on an open interval I a ∋ a. Choosing s ∈ J ∩ I a and permuting one choice of eigenvalues on one side of s in a suitable way (see Claim 6.6), we might extend the C 1 -parameterization beyond a, contradicting maximality of J.
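The need for permuting branches at crossing points is already visible in the simplest example (our own illustration): for A(t) = diag(t, −t) the size-ordered eigenvalues are ∓|t|, which are merely Lipschitz, while after a permutation on one side of 0 the choices λ_1(t) = t and λ_2(t) = −t are real analytic.

```python
import numpy as np

# Size-ordered eigenvalues of A(t) = diag(t, -t) are -|t| <= |t|: Lipschitz but
# not differentiable at the crossing t = 0.  Swapping the two branches on one
# side of 0 yields the real-analytic choices λ1(t) = t, λ2(t) = -t.
for t in np.linspace(-1.0, 1.0, 9):
    ordered = np.linalg.eigvalsh(np.diag([t, -t]))   # ascending order
    assert np.allclose(ordered, [-abs(t), abs(t)])
```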
The set E of points where all eigenvalues coincide is closed, and on its complement (which is a disjoint union of open intervals) we may parameterize the eigenvalues by C¹-functions µ_j. For each isolated point s of E we apply in turn the following arguments: Extending all µ_j to s by the single n-fold eigenvalue of A(s) provides a continuous parameterization near s. By Claim 6.6, we may assume that the µ_j are differentiable at s after applying a suitable permutation to the right of s. We claim that the derivative of each µ_j is continuous at s. Namely, let t_k → s and apply (6.5) to t_k:
(6.13) $µ_j'(t_k) = \bigl\langle A'(t_k)\, w_j(t_k),\, w_j(t_k) \bigr\rangle.$
Choose a subsequence such that the w_j(t_k) converge. Then (6.13) converges to one of the eigenvalues of A′(s). We may conclude, by Lemma 6.2, that the limit lim_{t→s+} µ_j′(t) exists and that it equals one of the eigenvalues of A′(s) (the same for t → s−). By the mean value theorem, applied separately to real and imaginary parts, for suitable θ_1, θ_2 ∈ (0, 1),
$\frac{µ_j(t) - µ_j(s)}{t - s} = (\operatorname{Re} µ_j)'(s + θ_1 (t - s)) + i\, (\operatorname{Im} µ_j)'(s + θ_2 (t - s)),$
and letting t → s we see that µ_j′ is continuous at s. Finally, we extend each µ_j by the single n-fold eigenvalue of A(s) at each accumulation point s of E. By Claim 6.6 and since s is an accumulation point of E, all µ_j′(s) exist and coincide. Let t_k → s. By (6.13), the sequence µ_j′(t_k) is bounded, and, thus, has a convergent subsequence. By passing to a subsequence again so that the w_j(t_k) converge, we find, by (6.13), that µ_j′(t_k) converges to some eigenvalue of A′(s). But the latter all coincide with µ_j′(s), by Claim 6.6. This implies that the µ_j′ are continuous at s. The proof of Claim 6.12 is complete.

6.14. Claim. Assume that A is C². For each s there is a C¹-parameterization of the eigenvalues near s which is twice differentiable at s.
We may assume without loss of generality that s = 0. By the usual reduction procedure (i.e., treating distinct eigenvalues of A(0) separately by restricting to P (t, γ)(C n ) for suitable γ and in turn replacing A by A − (a 1 (P A )/n)I) we may assume without loss of generality that 0 is the only eigenvalue of A(0). Then A(t) = tÃ(t), where t →Ã(t) is a C 1 -curve of normal matrices. By Claim 6.12, there is a C 1 -parameterization µ j of the eigenvalues ofÃ. Then the functions t → tµ j (t) are twice differentiable at 0 and represent the eigenvalues of A.
6.15. Claim. If A is C 2 , then the eigenvalues of A admit a parameterization which is twice differentiable at every point.
We modify the proof of Claim 6.12 and just indicate the necessary changes. Let I be an open interval containing only points s so that not all eigenvalues of A(s) coincide. We show that a twice differentiable parameterization, say µ_j, of the eigenvalues on an open subinterval J ⊆ I can be extended to I. Let a ∈ I denote the right, say, endpoint of J. By induction, there exists a twice differentiable parameterization λ_j of the eigenvalues on an open interval I_a ∋ a. Choose s ∈ J ∩ I_a and let t_k → s. For each k there is a permutation σ ∈ S_n such that µ_j(t_k) = λ_{σ(j)}(t_k) for all j. By passing to subsequences in turn (and Claim 6.6), we can assume that σ does not depend on k and that also
(6.16) $µ_j'(t_k) = λ_{σ(j)}'(t_k) \quad\text{for all } j.$
Then
$\frac{µ_j'(t_k) - µ_j'(s)}{t_k - s} = \frac{λ_{σ(j)}'(t_k) - λ_{σ(j)}'(s)}{t_k - s},$
and, thus, µ_j″(s) = λ_{σ(j)}″(s) for all j. So we can extend µ_j beyond a. Let E denote the set of points s so that all eigenvalues of A(s) coincide. The last paragraph implies the existence of a twice differentiable parameterization of the eigenvalues on the complement of E. By the arguments in the proof of Claim 6.12, we may construct from it a C¹-parameterization µ_j on R which is twice differentiable on the complement of E. Let s ∈ E and t_k → s. Let λ_j be the parameterization of the eigenvalues near s provided by Claim 6.14. After passing to subsequences as above, we have (6.16).
Assume that s is isolated in E. As λ j is twice differentiable at s, we conclude, by Lemma 6.2, that the left-sided and the right-sided second order derivatives of µ j exist at s, and they form the same set of numbers with correct multiplicities.
By applying a permutation to the right of s, we obtain a twice differentiable parameterization of the eigenvalues near s. We treat all isolated points s ∈ E in this way.
If s is an accumulation point of E, then all µ ′ j (s) coincide. Let E ∋ t k → s. In view of (6.16) and by Lemma 6.2, we find that the second order derivatives of the µ j exist at s and they all coincide, by considering second order difference quotients on points in E. The proof is complete.
The following is a modification of [24, Lemma] and can be shown in the same way. For the convenience of the reader, we include a proof.

6.17. Lemma. Let I ⊆ R be an open interval and let µ_j : I → C, 1 ≤ j ≤ N, be a continuous (resp. C¹, resp. twice differentiable) parameterization of an unordered N-tuple. Any continuous (resp. C¹, resp. twice differentiable) functions λ_j, 1 ≤ j ≤ n ≤ N, defined on an open subinterval of I and parameterizing part of the tuple there, can be extended to I and completed by functions λ_j, n < j ≤ N, to a continuous (resp. C¹, resp. twice differentiable) parameterization of the whole tuple on I.
Proof. We use induction on N . Certainly, the assertion is true if N = 1.
For s ∈ I such that not all µ j (s) coincide, the sets {λ j } and {µ j } decompose into subsets so that elements of different subsets do not meet on an open interval I s containing s. By induction, the statement holds on I s .
Suppose that for no t ∈ I all µ_j(t) coincide. Let J be a maximal open subinterval of I for which the statement of the lemma is true with $λ^1_j$ for j > n. We will show J = I. If the right (say) endpoint b of J belongs to I, then the statement holds on an open interval I_b ∋ b with $λ^2_j$ for j > n. Choose s ∈ J ∩ I_b. We claim that there is a permutation σ so that each $λ^1_j$ on {t ∈ J : t ≤ s} can be extended by $λ^2_{σ(j)}$ on {t ∈ I_b : t ≥ s}, contradicting maximality of J. Let t_k → s−. We have $λ^1_j(t_k) = λ^2_{σ(j)}(t_k)$ for a permutation σ which depends on k. By passing to a subsequence, we may assume that σ is independent of k, which shows the claim in the continuous case. For the C¹ and the twice differentiable case, we pass to a subsequence again in order to obtain $(λ^1_j)'(t_k) = (λ^2_{σ(j)})'(t_k)$ and we use the arguments surrounding (6.16).
Let E denote the closed set of all points in I where all µ_j coincide. The complement I \ E is a disjoint union of open intervals, on each of which the lemma holds. Extending the λ_j to s ∈ E by the unique value µ_j(s) provides a continuous extension to I. For the C¹ and the twice differentiable case, we may renumber the λ_j to the right of each isolated point s ∈ E so that they fit together in a C¹ or twice differentiable way (by Lemma 6.2). If s is an accumulation point of E, then all derivatives µ_j′(s) =: µ′(s) coincide. Thus, by Lemma 6.2, each λ_j is differentiable at s with λ_j′(s) = µ′(s), and λ_j′ is continuous at s. If the µ_j are twice differentiable at s, then all µ_j″(s) =: µ″(s) coincide, by considering second order difference quotients on points in E. By Lemma 6.2, we may conclude that each λ_j is twice differentiable at s with λ_j″(s) = µ″(s).
6.19. Theorem. Let A(x) = (A_{ij}(x))_{1≤i,j≤n} be a parameterized family of normal complex matrices. Then:
(1) If A is C^Q on an open set U ⊆ R^q, then for any compact K ⊆ U there exists a relatively compact neighborhood W of K and a parameterization λ_i of the eigenvalues of A on W which belongs to LC^Q, thus, also to SBV. More precisely, the classical gradient ∇λ_i(x) exists for all x ∈ W \ E_{W,λ_i}, and for those x every directional derivative satisfies
$|d_v λ_i(x)| \le \|A'(x)\, v\|, \qquad v ∈ R^q,$
where ‖ · ‖ is the operator norm and A′(x) = dA(x) the Fréchet derivative.
(2) If A is C^{0,1} on an open set U ⊆ R^q, then any continuous eigenvalue λ : V → C of A, defined on an open subset V ⊆ U, is C^{0,1}.

Proof.
(1) By [35, 9.6], there exists a parameterization λ_i of the eigenvalues of A on W which satisfies (L1) and (L2) and such that ∇λ_i ∈ L¹(W); in particular, each λ_i belongs to SBV, also by [35, 9.6]. For x ∈ W \ E_{W,λ_i}, t ∈ R small, and e_j the j-th standard unit vector in R^q, the curve t → λ_i(x + t e_j) represents an eigenvalue of t → A(x + t e_j), and, by Claim 6.6, we have
$|∂_j λ_i(x)| \le \|∂_j A(x)\| \le \|A'(x)\|,$
which implies the statement.
(2) Suppose that λ : V → C is a continuous eigenvalue of A. Let c : R → V be C^∞. Then λ ∘ c parameterizes an eigenvalue of the C^{0,1}-curve of normal matrices A ∘ c. By Lemma 6.17, λ ∘ c can be completed to a continuous parameterization of the eigenvalues of A ∘ c, which is locally Lipschitz by Proposition 6.3, and so λ ∘ c is locally Lipschitz. Since c was arbitrary, we conclude that λ : V → C is C^{0,1} (see 1.2). Let x_0 ∈ U ∩ V̄ and let c : R → U be C^∞ with c(0) = x_0 and c((0, 1]) ⊆ V. We already know that λ ∘ c|_{(0,1]} is locally Lipschitz. Its derivative exists a.e. and is bounded by the Lipschitz constant of A ∘ c|_{[0,1]} (with respect to the operator norm), by Claim 6.6. The assertion follows.
6.20. Remark. If R^q ⊇ U ∋ x → A(x) = (A_{ij}(x))_{1≤i,j≤n} is a C^{0,1}-family of normal complex matrices, then Claim 6.6 actually implies that, whenever a (one-sided) directional derivative of an eigenvalue of A exists, it is uniformly bounded on compact subsets of U.

6.21. Question. Let R^q ⊇ U ∋ x → A(x) = (A_{ij}(x))_{1≤i,j≤n} be a C^{0,1}-family of normal complex matrices. Do the eigenvalues of A admit a parameterization by SBV_loc-functions whose classical gradient exists a.e. and is locally bounded?
) forms a C 0,α -parameterization of the eigenvalues.
(2) follows from (1). Proof. For C ∞ , C {M} , with special M = (M k ), and C 0,α this was proved in [29]. The same proof works for general M = (M k ), C [M] , and C k,α ; for the latter even with the same references. So we just sketch the proof for C [M] : By definition

7. Perturbation theory for unbounded normal operators
By assumption, the function x → ⟨A(x)u, v⟩ is of the respective class C for each u ∈ V and v ∈ H and, thus, x → A(x)u is of the same class as a mapping E → H for each u ∈ V (see 1.2).

7.2. Claim. For each x consider the norm $\|u\|_x^2 := \|u\|^2 + \|A(x)u\|^2$ on V. All these norms ‖ · ‖_x on V are equivalent, locally uniformly in x. We then equip V with one of the equivalent Hilbert norms, say ‖ · ‖_0, and have A(x) ∈ L(V, H) for all x.
By the linear uniform boundedness theorem and by [27, 5.1], we conclude that the mapping E → L(V, H), x → A(x), is C^{[M]}. If for some (x, z) ∈ E × C the bounded operator A(x) − z : V → H is invertible, then this is true locally with respect to the c^∞-topology on the product, which is the product topology, by [23, 4.16]. The resolvent $(x, z) → (A(x) - z)^{-1} ∈ L(H, V)$ is C^{[M]}, since inversion is real analytic on the Banach space L(V, H) and since C^{[M]} ⊇ C^ω is stable under composition [27, 4.11].
Proof of Theorem 1.1. Let x_0 ∈ E and let z be an eigenvalue of A(x_0) of multiplicity N. We choose a simple closed C¹-curve γ in the resolvent set of A(x_0) enclosing only z among all eigenvalues of A(x_0). Since the global resolvent set is open, see Lemma 7.1, no eigenvalue of A(x) lies on γ, for x near x_0. By Lemma 7.1,
$x → P(x) := -\frac{1}{2\pi i} \oint_γ \bigl(A(x) - ζ\bigr)^{-1}\, dζ$
is a C-mapping. Each P(x) is a projection, namely onto the direct sum of all eigenspaces corresponding to eigenvalues of A(x) in the interior of γ, with finite rank. Thus the rank must be constant: It is easy to see that the (finite) rank cannot fall locally, and it cannot increase, since the distance in L(H, H) of P(x) to the subset of operators of rank ≤ N = rank(P(x_0)) is continuous in x and is either 0 or 1. So, for x in a neighborhood U of x_0, there are equally many eigenvalues in the interior of γ, and we may call them λ_j(x) for 1 ≤ j ≤ N (repeated with multiplicity). The family of N-dimensional complex vector spaces U ∋ x → P(x)(H) ⊆ H forms a C Hermitian vector subbundle over U of the trivial bundle U × H → U: For given x, choose v_1, . . . , v_N ∈ H such that the P(x)(v_i) are linearly independent and thus span P(x)(H). This remains true locally in x. We use the Gram–Schmidt orthonormalization procedure (which is C^ω and preserves C) for the P(x)(v_i) to obtain a local orthonormal C-frame of the bundle. Now A(x) maps P(x)(H) to itself and in a local C-frame it is given by a normal N × N matrix parameterized in a C-way by x. Then all local assertions (i.e., in a product neighborhood of (x_0, z)) of the theorem follow: (A) and (B) follow from Theorem 5.8, (C) and (D) from Theorem 6.19, (E) and (F) from Proposition 6.11.
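In finite dimensions the projection P(x) can be computed directly from the contour integral. A minimal numerical sketch (our own illustration; `riesz_projection` is an ad hoc helper) approximates the integral over a circle γ by the trapezoidal rule:

```python
import numpy as np

def riesz_projection(A, center, radius, n_quad=200):
    # P = -(1/2πi) ∮_γ (A - ζI)^{-1} dζ over the circle γ of given center/radius,
    # approximated by the trapezoidal rule (spectrally accurate on circles)
    n = A.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k in range(n_quad):
        theta = 2 * np.pi * k / n_quad
        zeta = center + radius * np.exp(1j * theta)
        dzeta = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n_quad)
        P += np.linalg.solve(A - zeta * np.eye(n), np.eye(n)) * dzeta
    return -P / (2j * np.pi)

A = np.diag([1.0, 1.1, 5.0])          # normal (diagonal), two eigenvalues near 1
P = riesz_projection(A, center=1.0, radius=1.0)
assert np.allclose(P @ P, P, atol=1e-8)        # P is a projection
assert abs(np.trace(P).real - 2) < 1e-8        # rank = number of enclosed eigenvalues
```

For this diagonal A the result is (numerically) diag(1, 1, 0): the projection onto the sum of the eigenspaces for the two eigenvalues enclosed by γ.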
Let us prove (D).
Let λ : U → C be a continuous eigenvalue of A, where U is open in a convenient vector space E, and let c : R → U be a C^∞-curve. We first show that λ ∘ c is locally Lipschitz. Let t ∈ R and x = c(t) ∈ U. By the local result, x ∈ U has an open neighborhood V such that the restriction λ|_V is C^{0,1}. Thus λ|_V ∘ c|_I is locally Lipschitz, where I is the connected component of c^{-1}(V) which contains t. This implies the statement.
For the supplements in (D) we need the following claim.
7.3. Claim. Let t → A(t) be C^{0,1} in t ∈ R, let I ⊆ R be a compact interval, and let t → λ_j(t) be a Lipschitz eigenvalue of t → A(t) defined on a subinterval of I. Then |λ_j(t)| ≤ C for a constant C depending only on I.
By reducing to P(t)A(t)|_{P(t)(H)} as above, we may conclude that (6.8) holds true, and, thus,
$|λ_j'(t)| \le C\, \|v_j(t)\|_{V_t} = C\, \bigl(1 + |λ_j(t)|^2\bigr)^{1/2}$ a.e.
for V_t = (V, ‖ · ‖_t) and for a constant C, since all norms ‖ · ‖_t are uniformly equivalent locally in t, by Claim 7.2. Since t → λ_j(t) is Lipschitz, in particular absolutely continuous, we obtain |λ_j′(t)| ≤ C + C|λ_j(t)| a.e., and Gronwall's lemma (e.g. [15, (10.5)]) implies the claim.
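Explicitly, the Gronwall step may be carried out as follows (a routine computation spelled out for convenience):

```latex
% From |\lambda_j'(t)| \le C + C|\lambda_j(t)| a.e. on I, set u(t) := 1 + |\lambda_j(t)|.
% Then u is absolutely continuous with u'(t) \le C u(t) a.e., hence
\[
  u(t) \le u(t_0)\, e^{C|t - t_0|},
  \qquad \text{i.e.} \qquad
  |\lambda_j(t)| \le \bigl(1 + |\lambda_j(t_0)|\bigr)\, e^{C|t - t_0|} - 1,
\]
% which is bounded on the compact interval I; this proves Claim 7.3.
```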
We consider sequences of C-functions (λ_j)_{j∈α}, indexed by ordinals α and defined on open intervals I_j containing some fixed t_0 ∈ R, which parameterize eigenvalues of A. The set of all such sequences is partially ordered by inclusion of ordinals and then by restriction of the component functions. For each increasing chain the union is again such a sequence. By Zorn's lemma there exists a maximal sequence (λ_j).
In any maximal sequence each component function λ_j is globally defined on R. This is seen as follows: if b < ∞ is the right (say) endpoint of I_j, then, by Claim 7.3 and by Lemma 6.2, the limit lim_{t→b−} λ_j(t) =: z exists and is an eigenvalue of A(b). By the local results, there exist δ, ε > 0 such that all eigenvalues λ of A(t) with |λ − z| < ε and |t − b| < δ admit a parameterization by C-functions μ_j. In case (A), λ_j coincides with some μ_j on their common domain, since unequal eigenvalues have finite order of contact, and hence it admits an extension beyond b. In the other cases consider the λ_j whose graph {(t, λ_j(t)) : t ∈ I_j} has nonempty intersection with the vertical boundary {b − δ, b + δ} × B_ε(z) of the tube (b − δ, b + δ) × B_ε(z) ⊆ R × C. The endpoints of the corresponding intervals I_j decompose (b − δ, b + δ) into finitely many subintervals. We apply Lemma 6.17 on each subinterval; in case (D), where λ_j ∈ C^{0,1}, we use its continuous version. Then we glue at the endpoints of the subintervals in a continuous, C^1, or twice differentiable way, respectively (as before in the proof of Proposition 6.11), to obtain an extension at least of λ_j. In case (D) this extension is C^{0,1}, since we already know that any continuous eigenvalue is C^{0,1}. So the sequence was not maximal, and the assertion follows.
Any maximal sequence (λ_j) parameterizes all eigenvalues of A with the right multiplicities. If not, there exist some t_0 and some eigenvalue z of A(t_0) such that |{j : λ_j(t_0) = z}| is less than the multiplicity of z. By the local results, Lemma 6.17, and the assumption on the order of contact in case (A), we may again conclude that (λ_j) was not maximal, a contradiction.

Now let us treat the eigenvectors. Let λ_j : R → C be a C∞ (resp. C^{[M]}) eigenvalue with generic multiplicity N. By the arguments in the proof of Claim 5.11, we obtain a unique global N-dimensional C∞ (resp. C^{[M]}) vector subbundle of R × H → R whose fiber over t consists of eigenvectors for the eigenvalue λ_j(t). The corresponding C∞ (resp. C^{[M]}) eigenprojection P_j has a transformation function, since the arguments at the end of 5.11 work in Banach spaces, see [41] and [38, 3.4]. So we find global C∞ (resp. C^{[M]}) eigenvectors for each eigenvalue. This completes the proof of Claim 7.4 and the proof of the theorem.

7.5. Remark (m-sectorial operators). The assumptions in Theorem 1.1 may be slightly relaxed if all A(x) are m-sectorial operators. In that case it suffices to assume that the associated quadratic forms a(x) have common domain of definition V and that x → a(x)(u) is of the respective class for each u ∈ V. In the following discussion we use the definitions of [20, VI].
Let E ∋ x → a(x) be a parameterized family of closed sectorial (possibly unbounded) sesquilinear forms in a Hilbert space H so that there is a dense subspace V of H which is the domain of definition of each a(x), i.e., V (a(x)) = V . We say that a(x) is C ∞ , C [M] , or C k,α if x → a(x)(u, v) is C ∞ , C [M] , or C k,α for each u, v ∈ V ; by polarization it is actually enough to require that x → a(x)(u) = a(x)(u, u) is of the respective class for all u ∈ V . Let C stand for C ∞ , C [M] , or C k,α .
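The polarization step invoked above is the standard identity for sesquilinear forms, a(x)(u, v) = (1/4) Σ_{k=0}^{3} i^k a(x)(u + i^k v), valid for forms linear in the first and conjugate-linear in the second argument. A quick numerical check of this identity (illustrative only; the random matrix M is a stand-in for a bounded form, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def a(u, v):
    # Sesquilinear form a(u, v) = <Mu | v>: linear in u, conjugate-linear in v.
    # np.vdot conjugates its first argument, so vdot(v, M @ u) = conj(v).(Mu).
    return np.vdot(v, M @ u)

def a_from_quadratic(u, v):
    # Recover a(u, v) from the quadratic form u -> a(u, u) by polarization:
    # a(u, v) = (1/4) * sum_{k=0}^{3} i^k * a(u + i^k v, u + i^k v)
    return sum((1j ** k) * a(u + (1j ** k) * v, u + (1j ** k) * v)
               for k in range(4)) / 4.0
```

This is why it suffices to require that x → a(x)(u) is of class C for each u ∈ V: the off-diagonal values a(x)(u, v) are finite linear combinations of diagonal ones.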
There is a bijective correspondence a → A_a between the set of all densely defined closed sectorial forms a and the set of all m-sectorial operators A, where a is bounded if and only if A_a is bounded, and a is symmetric (i.e., a(u, v) = a(v, u) for u, v ∈ V) if and only if A_a is self-adjoint (by the first representation theorem [20, VI Thm. 2.1]). Note that an m-sectorial operator is necessarily densely defined and closed.
Thus with the C-family x → a(x) of closed sectorial forms we associate the family x → A_a(x) = A_{a(x)} of m-sectorial operators. If we also assume that A_a(x) is normal for every x and has compact resolvent for every (equivalently, some) x, then the conclusions of Theorem 1.1 hold true for the family x → A_a(x). This follows from the following two claims, which replace Lemma 7.1 and Claim 7.3. The assumption that x → a(x) is a C-family immediately gives that x → b(x)(u, v) is C for each u, v ∈ H. Consider the family of operators B(x) ∈ L(H, H) defined by ⟨B(x)u | v⟩ = b(x)(u, v). By the linear uniform boundedness principle, and since it suffices to use a set of linear functionals which together recognize bounded sets instead of the whole dual (see the references in 1.2), x → B(x) ∈ L(H, H) is C. Replacing u, v by Gu, Gv we obtain

(7.7)  a(x)(u, v) = ⟨B(x)Gu | Gv⟩,  for u, v ∈ V.

So we have

⟨A_a(x)u | v⟩ = a(x)(u, v) = ⟨B(x)Gu | Gv⟩,  for u ∈ D(A_a(x)), v ∈ V,

whence GB(x)Gu exists and equals A_a(x)u, since G is self-adjoint. Since GB(x)G is accretive and A_a(x) is m-accretive, we have GB(x)G = A_a(x), and hence (x, z) → (A_a(x) − z)^{−1} is C near (x_0, z_0). This shows Claim 7.6.
In what follows we assume that the parameter space is E = R and write t for the parameter x.
7.8. Claim. Assume that t → a(t) is C 0,1 . A C 0,1 -eigenvalue t → λ j (t) of t → A a (t) cannot accelerate to ∞ in finite time.
Note that (7.7) implies that a(t) is locally uniformly sectorial. Thus we may assume without loss of generality that s(t) = Re a(t) ≥ 1 near t_0. Since a(t) is closed, the inner product ⟨u | v⟩_t := ⟨u | v⟩ + s(t)(u, v) makes V into a Hilbert space V_t := (V, ⟨ · | · ⟩_t) (see [20, VI Thm. 1.11]). The arguments in the proof of [29, Claim (1)] show that all these norms ‖ · ‖_t are locally uniformly equivalent. By reducing to P_a(t)A_a(t)|_{P_a(t)(H)}, where

P_a(t) = −(2πi)^{−1} ∮_γ (A_a(t) − z)^{−1} dz,

we have (6.8) (with t replaced by t_0) and hence, using (7.7), the claim follows as in the proof of Claim 7.3.

The results are best possible
The condition on the order of contact in (A) cannot be dropped: This follows from the examples in [24] and [29] for C ∞ and for non-quasianalytic C {M} . From the latter one can also deduce a counterexample for non-quasianalytic C (M) .
These examples together with Example 8.1 also show that results of type (C)-(F) are hopeless for the eigenvectors. Moreover, (B) is wrong without desingularization, by Example 8.1.
Result (C) is optimal, since by Example 8.2 the single eigenvalues cannot be chosen continuously in general. By the example in [24], in (E) and (F) the eigenvalues cannot be C^{1,β} for any β > 0, even if t → A(t) is C∞. On the other hand, in our proof the assumption C^{1,α} in (E) (resp. C^{2,α} in (F)) cannot be weakened to C^1, by the "resolvent example" in [24]; but we do not know whether there is a C^1 (resp. C^2) curve of unbounded normal operators with common domain and compact resolvent whose eigenvalues cannot be parameterized C^1 (resp. twice differentiably).

8.1. Example. The symmetric matrix

A(x, y) = ( x   y
            y  −x )

has the eigenvalues ±√(x² + y²). There cannot exist a parameterization of the eigenvectors of A with locally bounded derivatives. Namely, if (u, v)^T denotes an eigenvector of norm 1 for the eigenvalue √(x² + y²), then the partial derivative (u_x, v_x)^T (where it exists) must satisfy

(A(x, y) − √(x² + y²)) (u_x, v_x)^T = (x/√(x² + y²) − A_x(x, y)) (u, v)^T,

obtained by differentiating the eigenvalue equation with respect to x. If (u_x, v_x)^T were bounded near 0, the left-hand side would converge to 0 as x, y → 0, whereas the right-hand side does not, a contradiction. The derivatives λ'_± exist everywhere, but they are discontinuous at 0 if α + β ≤ 4 and even unbounded near 0 if α + β < 4.
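The obstruction for the eigenvectors in this example can also be seen by a holonomy argument: continuing the unit eigenvector of the positive eigenvalue once around the origin returns its negative, so no continuous (let alone differentiable) global choice exists on a punctured neighborhood of 0. The sketch below is illustrative and assumes the concrete realization A(x, y) = ((x, y), (y, −x)), which has the stated eigenvalues ±√(x² + y²).

```python
import numpy as np

def A(x, y):
    # A symmetric realization with eigenvalues ±sqrt(x^2 + y^2)
    # (an assumed concrete form used for this numerical illustration).
    return np.array([[x, y], [y, -x]], dtype=float)

def transport_around_circle(r, steps=720):
    # Continuously transport the unit eigenvector of +sqrt(x^2 + y^2)
    # once around the circle of radius r, fixing signs for continuity.
    start, v = None, None
    for k in range(steps + 1):
        th = 2.0 * np.pi * k / steps
        _, V = np.linalg.eigh(A(r * np.cos(th), r * np.sin(th)))
        u = V[:, 1]                    # eigenvector of the larger eigenvalue
        if v is not None and np.dot(u, v) < 0.0:
            u = -u                     # choose the sign continuously
        if start is None:
            start = u
        v = u
    return start, v

start, final = transport_around_circle(1.0)
```

At angle θ the transported eigenvector is ±(cos(θ/2), sin(θ/2)), so after a full loop it has flipped sign; in particular its derivative along a circle of radius r grows like 1/r, matching the unboundedness argument above.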