Interpolation in de Branges-Rovnyak spaces

A general interpolation problem with operator argument is studied for functions f from the de Branges-Rovnyak space H(K_s) associated with an analytic function s mapping the open unit disk D into the closed unit disk. The interpolation condition is taken in the Rosenblum-Rovnyak form f(A)c = b (with a suitable interpretation of f(A)c) for a given Hilbert-space operator A and two given vectors b and c from the same space.


Introduction
The class of all holomorphic functions mapping the unit disk D into its closure has played a prominent role in function theory and its applications beginning with the work of I. Schur [8]; following now standard terminology, we refer to this class of functions as the Schur class and denote it by S. Since the first results by mathematicians such as Schur, Carathéodory, Fejér, Nevanlinna and Pick in the early part of the last century, with further elaboration for problems involving matrix- and operator-valued Schur classes up to the present, there has emerged a rather complete theory of interpolation problems in the Schur class. Among several alternative characterizations of the Schur class is one in terms of positive kernels and associated reproducing kernel Hilbert spaces: the function s : D → C is in the Schur class S if and only if the associated de Branges-Rovnyak kernel

(1.1)    K_s(z, ζ) = \frac{1 − s(z)\overline{s(ζ)}}{1 − z\overline{ζ}}

is a positive kernel and thereby gives rise to a reproducing kernel Hilbert space H(K_s) (see [5]). On the other hand, the space H(K_s) can be characterized as the operator range

(1.2)    H(K_s) = Ran (I − M_s M_s^*)^{1/2} ⊂ H^2,

where M_s : f ↦ sf is the operator of multiplication by s on the Hardy space H^2 and H(K_s) carries the corresponding range norm.

The purpose of this note is to study an interpolation problem of general type in the Rosenblum-Rovnyak form (see [7])

(1.3)    f(A)c = b,

where the operator A and the vectors c and b are given, with the unknown function f in a de Branges-Rovnyak space H(K_s) rather than in the Schur class S. By way of motivation, we note that if we take the de Branges-Rovnyak data set (A, c, b) built from points z_1, . . . , z_N in the unit disk and complex values w_1, . . . , w_N (with A diagonal), then the interpolation condition (1.3) transcribes to the classical conditions f(z_j) = w_j for j = 1, . . . , N, while, if instead we take (A, b, c) so that A is the lower-triangular Jordan cell of size N × N with eigenvalue z_0 in the unit disk, then the interpolation condition (1.3) transcribes to f^{(j)}(z_0) = w_j for j = 0, 1, . . . , N − 1.
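The positivity of the kernel (1.1) can be observed numerically in a toy case. The following sketch uses assumed sample data (the function s(z) = z/2 and four sample points are not from the paper): it builds the Gram matrix [K_s(z_i, z_j)] and checks that it is positive.

```python
import numpy as np

# Numerical sanity check (toy data, not from the paper): for a Schur-class
# function s, the de Branges-Rovnyak kernel
#     K_s(z, zeta) = (1 - s(z)*conj(s(zeta))) / (1 - z*conj(zeta))
# sampled at finitely many points of the unit disk gives a positive
# semidefinite Gram matrix.  Here s(z) = z/2, which is Schur but not inner.

def s(z):
    return 0.5 * z

def kernel_matrix(points):
    """Gram matrix [K_s(z_i, z_j)] over the given points of the disk."""
    z = np.asarray(points, dtype=complex)
    num = 1.0 - s(z)[:, None] * np.conj(s(z))[None, :]
    den = 1.0 - z[:, None] * np.conj(z)[None, :]
    return num / den

pts = [0.1 + 0.2j, -0.3j, 0.5, -0.4 + 0.1j]
K = kernel_matrix(pts)
eigs = np.linalg.eigvalsh(K)      # K is Hermitian by construction
print(np.all(eigs > 0))           # True: the kernel is positive (here strictly)
```

Since s(z) = z/2 is not inner, the space H(K_s) is infinite-dimensional and the Gram matrix at distinct points is in fact strictly positive definite.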
In the final section of the paper we return to these two concrete examples in a combined form to illustrate the general ideas developed here. If s is inner, or s ≡ 0, so that H(K_s) is contained in H^2 isometrically or just equals H^2, the solution can be obtained by combining the orthogonal projection theorem with the Beurling-Lax theorem on shift-invariant subspaces of H^2. The general case requires somewhat more delicate arguments. In Theorem 2.2 (the main result of this paper) we present a parametrization of all solutions to a general interpolation problem with operator argument, based on isometric multipliers between two de Branges-Rovnyak spaces. The formulas become even more concrete and explicit in the case of finite-dimensional data and an invertible associated Pick matrix. In the last section, we illustrate the general formalism by showing how higher-order multipoint interpolation of classical type fits into the general scheme.

Statement of main result
To formulate the interpolation problem of interest to us we first fix notation and recall some needed definitions. The symbol L(X, Y) stands for the algebra of bounded linear operators mapping a Hilbert space X into another Hilbert space Y, abbreviated to L(X) in case X = Y. If T ∈ L(X) and E ∈ L(X, C), the pair (E, T) is called output-stable if the associated observability operator

(2.1)    O_{E,T} : x ↦ \sum_{n≥0} (E T^n x) z^n

maps X into H^2 and is bounded. The pair is called observable if the operator O_{E,T} : X → H^2 is injective. For an output-stable pair (E, T), we define the tangential functional calculus f ↦ f(T^*)E^* on H^2 by

(2.2)    f(T^*)E^* = \sum_{n≥0} f_n T^{*n} E^*    for f(z) = \sum_{n≥0} f_n z^n ∈ H^2.

The computation

⟨f(T^*)E^*, x⟩_X = \sum_{n≥0} f_n ⟨T^{*n} E^*, x⟩_X = ⟨f, O_{E,T} x⟩_{H^2}

shows that the output-stability of (E, T) is exactly what is needed to verify that the infinite series in the definition (2.2) of f(T^*)E^* converges in the weak topology of X. The same computation shows that tangential evaluation with operator argument amounts to the adjoint of O_{E,T}:

(2.3)    f(T^*)E^* = O_{E,T}^* f.

Evaluation (2.2) certainly applies to functions from de Branges-Rovnyak spaces H(K_s) ⊂ H^2 and suggests the following interpolation problem.
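The tangential evaluation (2.2) can be tried out in a toy case. The data below (real points and a cubic polynomial f) are assumptions for illustration only; real points are used so that adjoints reduce to plain transposes.

```python
import numpy as np

# Toy illustration of the tangential functional calculus (assumed data): for
# T = diag(z1, z2) with real z_i and E = [1 1], the series
# f(T^*)E^* = sum_n f_n T^{*n} E^* from (2.2) reproduces the column of
# values (f(z1), f(z2)).

z = np.array([0.3, -0.5])            # real points in the unit disk
T = np.diag(z)
E = np.array([[1.0, 1.0]])           # E in L(X, C): a row vector
f_coeffs = [2.0, -1.0, 0.5, 3.0]     # f(z) = 2 - z + 0.5 z^2 + 3 z^3

val = sum(fn * np.linalg.matrix_power(T.T, n) @ E.T
          for n, fn in enumerate(f_coeffs))

direct = np.polyval(f_coeffs[::-1], z).reshape(-1, 1)   # column (f(z1), f(z2))
print(np.allclose(val, direct))      # True
```

For a polynomial f the series is finite, so no truncation is needed; for a general f ∈ H^2 the output-stability of (E, T) guarantees weak convergence, as noted above.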
Problem 2.1. Given s ∈ S and given T ∈ L(X) and E, x ∈ L(X, C) so that the pair (E, T) is output-stable and observable, find all functions f ∈ H(K_s) such that

(2.4)    f(T^*)E^* = x^*.

The solvability criterion for Problem 2.1 is given in terms of a positive semidefinite operator P ∈ L(X) which we now introduce. We first apply evaluation (2.2) to the given s ∈ S to define N ∈ L(X, C) by

(2.5)    N^* = s(T^*)E^*.

The pair (N, T) is then output-stable as well, and the associated observability operators are related by

(2.6)    O_{N,T} = M_s^* O_{E,T}.

Consequently, the operator

(2.7)    P = O_{E,T}^* O_{E,T} − O_{N,T}^* O_{N,T} = O_{E,T}^* (I − M_s M_s^*) O_{E,T}

is positive semidefinite, since M_s is a contraction on H^2. Another important property of P (see e.g., [1]) is that it satisfies the Stein equation

(2.8)    P − T^* P T = E^*E − N^*N.

As we will see in Remark 3.1 below, the condition x^* ∈ Ran P^{1/2} is necessary and sufficient for Problem 2.1 to have a solution. In particular, Problem 2.1 always has a solution if P is strictly positive definite, that is, if P^{−1} ∈ L(X). A parametrization of all solutions of the problem in this nondegenerate case is given in Theorem 2.2 below, which is the main result of the paper. To formulate it we need some auxiliary constructions.
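The Stein equation (2.8) can be checked numerically on assumed toy data (s(z) = z/2, two real points; none of this is from the paper), with P computed as the convergent series of the gramian difference.

```python
import numpy as np

# Sketch with assumed toy data: s(z) = z/2, T = diag(z1, z2) with real z_i,
# E = [1 1], so that N^* = s(T^*)E^* has entries z_i/2.  The operator
# P = sum_n T^{*n} (E^*E - N^*N) T^n should then solve the Stein equation
# (2.8) and be positive definite.

z = np.array([0.3, -0.5])
T = np.diag(z)
E = np.array([[1.0, 1.0]])
N = 0.5 * z.reshape(1, -1)           # row vector with entries s(z_i)

W = E.T @ E - N.T @ N                # right-hand side E^*E - N^*N
P = sum(np.linalg.matrix_power(T.T, n) @ W @ np.linalg.matrix_power(T, n)
        for n in range(200))         # series converges since spec(T) lies in D

print(np.allclose(P - T.T @ P @ T, W))    # True: Stein equation holds
print(np.all(np.linalg.eigvalsh(P) > 0))  # True: P is positive definite
```

Strict positivity of P here reflects the fact that s(z) = z/2 is not a Blaschke product of small degree; compare the rank discussion in the final section.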
We first observe that if P is boundedly invertible, then so is the observability gramian O_{E,T}^* O_{E,T} (see formula (2.7)), which in turn implies (see e.g., [3]) that the operator T is strongly stable in the sense that the powers T^n converge to zero in the strong operator topology. We also recall that if T is strongly stable, then the Stein equation (2.8) has a unique solution (given of course by formula (2.7)). Let J be the signature matrix

(2.9)    J = \begin{bmatrix} 1 & 0 \\ 0 & −1 \end{bmatrix},

and let Θ = \begin{bmatrix} a & b \\ c & d \end{bmatrix} be a 2 × 2 matrix function such that for all z, ζ ∈ D,

(2.10)    \frac{J − Θ(z) J Θ(ζ)^*}{1 − z\overline{ζ}} = \begin{bmatrix} E \\ N \end{bmatrix} (I − zT)^{−1} P^{−1} (I − \overline{ζ}T^*)^{−1} \begin{bmatrix} E^* & N^* \end{bmatrix}.

The function Θ is determined by equality (2.10) uniquely up to a constant J-unitary factor on the right. One possible choice of Θ satisfying (2.10) can be built from an injective solution to an associated J-Cholesky factorization problem (such a solution exists due to (2.8)). If spec(T) ∩ T ≠ T (which is the case if, e.g., dim X < ∞), then a function Θ satisfying (2.10) can be taken in the form

(2.11)    Θ(z) = I_2 − (1 − z\overline{μ}) \begin{bmatrix} E \\ N \end{bmatrix} (I − zT)^{−1} P^{−1} (I − \overline{μ}T^*)^{−1} \begin{bmatrix} E^* & −N^* \end{bmatrix},

where μ is an arbitrary point in T \ spec(T). For Θ of the form (2.11), the verification of identity (2.10) is straightforward and relies on the Stein identity (2.8) only. It follows from (2.10) that Θ is J-contractive on D, i.e., that Θ(z)JΘ(z)^* ≤ J for all z ∈ D. A much less trivial fact is that, due to the strong stability of T, the function Θ is J-inner, that is, the nontangential boundary values Θ(t) exist for almost all t ∈ T and are J-unitary: Θ(t)JΘ(t)^* = J. In particular, |det Θ(t)| = 1. Every J-contractive function Θ = \begin{bmatrix} a & b \\ c & d \end{bmatrix} with det Θ ≢ 0 gives rise to the one-to-one linear fractional transform

(2.12)    T_Θ[g] = (a g + b)(c g + d)^{−1}.

In case Θ is a J-inner function satisfying identity (2.10), the transform (2.12) establishes a one-to-one correspondence between S and the set of all Schur-class functions g such that g(T^*)E^* = N^*. Here we define g(T^*)E^* according to definition (2.2), using the fact that as sets we have the inclusion S ⊂ H^2.
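The algebra behind the transform (2.12) and its inverse (2.13) can be verified in isolation. The sketch below uses random coefficients a, b, c, d (an assumption purely for illustration; they do not come from a J-inner Θ) and checks the round trip σ ↦ s ↦ σ.

```python
import numpy as np

# Algebraic sanity check (random coefficients, not a J-inner Theta): the
# linear fractional transform sigma -> s = (a*sigma + b)/(c*sigma + d) of
# (2.12) is inverted by sigma = (d*s - b)/(a - c*s) from (2.13) whenever
# ad - bc != 0, which is the round trip behind s = T_Theta[sigma].

rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=4) + 1j * rng.normal(size=4)
sigma = 0.3 - 0.2j                        # sample parameter value

s_val = (a * sigma + b) / (c * sigma + d)       # forward transform
sigma_back = (d * s_val - b) / (a - c * s_val)  # recovery via (2.13)

print(np.isclose(sigma_back, sigma))      # True
```

The identity (ds − b)/(a − cs) = σ follows by direct computation: both numerator and denominator pick up the common factor (ad − bc)/(cσ + d), which cancels.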
Since the given function s satisfies the latter condition by definition (2.5) of N, it follows that s = T_Θ[σ] for some (uniquely determined) function σ ∈ S which is recovered from s by

(2.13)    σ = (d s − b)(a − c s)^{−1}.

The fact that σ of the form (2.13) belongs to the Schur class can be interpreted as a general version of the Schwarz-Pick lemma. The last needed ingredient is the L(X, C)-valued function

(2.14)    F_s(z) = (E − s(z)N)(I − zT)^{−1},

which also is completely determined from the interpolation data. For the multiplication operator M_{F_s} : x ↦ F_s(·)x we have

(2.15)    M_{F_s} = O_{E,T} − M_s O_{N,T} = (I − M_s M_s^*) O_{E,T},

where the first equality follows from (2.14) and the definitions of the observability operators, and where the second equality is a consequence of (2.6). Formula (2.15) and the range characterization (1.2) of H(K_s) imply that M_{F_s} maps X into H(K_s). Furthermore, it follows from (2.15) and (1.2) that

||F_s x||²_{H(K_s)} = ⟨O_{E,T}^* (I − M_s M_s^*) O_{E,T} x, x⟩_X for every x ∈ X,

which implies, on account of (2.7), that

(2.16)    ||F_s x||²_{H(K_s)} = ⟨P x, x⟩_X for all x ∈ X.

Theorem 2.2. Let us assume that the data set of Problem 2.1 is such that the operator P defined in (2.7) is strictly positive definite. Let Θ = \begin{bmatrix} a & b \\ c & d \end{bmatrix} be a J-inner function satisfying (2.10), and let σ ∈ S and F_s be given as in (2.13) and (2.14).

Then:

(1) All solutions f of Problem 2.1 are parametrized by the formula

(2.17)    f = F_s P^{−1} x^* + (a − cs) h,

where h is a free parameter from the space H(K_σ).

(2) The first term F_s P^{−1} x^* is a particular solution of Problem 2.1 (the one of minimal norm), while the second term runs through the general solution of the homogeneous problem f(T^*)E^* = 0.

(3) The multiplication map h ↦ (a − cs)h is an isometry from H(K_σ) into H(K_s).

The proof
In this section we present the proof of Theorem 2.2. We will make use of the multiplication operators M_{F_s} : X → H(K_s) and M_f : C → H(K_s) for the function F_s defined in (2.14) and for an interpolant f ∈ H(K_s). Since H(K_s) ⊂ H^2, the adjoints of M_{F_s} and M_f can be taken in the metric of H^2 as well as in the metric of H(K_s); these adjoints are not the same unless s is inner. To avoid confusion we will use the notation A^{[*]} for the adjoint of A in the metric of H(K_s). In terms of these adjoints, we have

(3.1)    ⟨M_{F_s}^{[*]} M_{F_s} x, x⟩_X = ||F_s x||²_{H(K_s)} = ⟨P x, x⟩_X for all x ∈ X,

where the first equality is self-evident and the second rephrases (2.16). Furthermore, upon making subsequent use of (2.15), (1.2) and (2.3), we see that

⟨f, F_s x⟩_{H(K_s)} = ⟨f, O_{E,T} x⟩_{H^2} = ⟨f(T^*)E^*, x⟩_X for every f ∈ H(K_s) and x ∈ X,

which shows that the interpolation condition (2.4) can be written as

(3.2)    M_{F_s}^{[*]} f = x^*.

The next theorem characterizes solutions to Problem 2.1 in terms of positive kernels. Characterizations of this type were first applied to interpolation problems by V. Potapov [6].
Theorem 3.2. Let γ ≥ 0. A function f ∈ H(K_s) is a solution of Problem 2.1 satisfying ||f||²_{H(K_s)} ≤ γ if and only if the kernel

(3.3)    K(z, ζ) = \begin{bmatrix} γ & x & \overline{f(ζ)} \\ x^* & P & F_s(ζ)^* \\ f(z) & F_s(z) & K_s(z, ζ) \end{bmatrix}

is positive on D × D, where P and F_s are defined in (2.7) and (2.14).
Proof. Let us assume that f ∈ H(K_s) satisfies ||f||²_{H(K_s)} ≤ γ and meets the interpolation condition (2.4) (or equivalently, condition (3.2)). The operator \mathbf{P} ∈ L(C ⊕ X ⊕ H(K_s)) defined by

(3.5)    \mathbf{P} = \begin{bmatrix} γ & x & M_f^{[*]} \\ x^* & P & M_{F_s}^{[*]} \\ M_f & M_{F_s} & I \end{bmatrix}

is positive semidefinite: taking into account the equalities (3.1) and (3.2), the Schur complement of its (3,3)-block equals

\begin{bmatrix} γ − ||f||²_{H(K_s)} & 0 \\ 0 & 0 \end{bmatrix} ≥ 0.

We next observe that for every vector g ∈ C ⊕ X ⊕ H(K_s) of the form

(3.6)    g = c ⊕ x_0 ⊕ \sum_{i=1}^n K_s(·, z_i) c_i,

the identity (3.7) holds, expressing ⟨\mathbf{P} g, g⟩ as the quadratic form generated by the kernel (3.3) at the points z_1, . . . , z_n. Since \mathbf{P} ≥ 0, the quadratic form on the right-hand side of (3.7) is nonnegative, which proves the positivity of (3.3).
Conversely, let us assume that the kernel (3.3) is positive on D × D. Since the set of vectors of the form (3.6) is dense in C ⊕ X ⊕ H(K_s), the identity (3.7) now implies that the operator \mathbf{P} is positive semidefinite, and therefore the Schur complement of the (3,3)-block in \mathbf{P} is positive semidefinite. On account of (3.5) and (3.1), the (2,2)-entry of this Schur complement equals P − M_{F_s}^{[*]} M_{F_s} = 0; positivity then forces the off-diagonal entry x^* − M_{F_s}^{[*]} f to vanish (which is the interpolation condition (3.2)) and gives γ − M_f^{[*]} M_f ≥ 0, i.e., ||f||²_{H(K_s)} ≤ γ. This completes the proof.
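The proof leans on a generic linear-algebra fact about Schur complements, which can be illustrated on random data (the 5 × 5 matrix below is an assumption, unrelated to the operators of the paper).

```python
import numpy as np

# Generic fact underlying the proof (random data, illustrative only): if a
# Hermitian block matrix [[A, B], [B^*, C]] is positive semidefinite and C is
# invertible, then the Schur complement A - B C^{-1} B^* is positive
# semidefinite as well.

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
G = M @ M.T + 1e-3 * np.eye(5)        # positive definite "big" matrix

A, B, C = G[:2, :2], G[:2, 2:], G[2:, 2:]
schur = A - B @ np.linalg.inv(C) @ B.T

print(np.all(np.linalg.eigvalsh(schur) > 0))   # True
```

In the proof this fact is applied twice: once with the (3,3)-block of \mathbf{P} in the role of C, and once (in the next section) with the invertible block P itself.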
Proof of Theorem 2.2. We now assume that P is strictly positive definite.
Proof of (1): We first assume that f is a solution of Problem 2.1. Then the kernel (3.3) (with γ = ||f||²_{H(K_s)}) is positive by Theorem 3.2. Since P is boundedly invertible, we can take its Schur complement in K to get the equivalent positivity condition

(3.8)    \begin{bmatrix} γ − x P^{−1} x^* & \overline{f(ζ)} − x P^{−1} F_s(ζ)^* \\ f(z) − F_s(z) P^{−1} x^* & K_s(z, ζ) − F_s(z) P^{−1} F_s(ζ)^* \end{bmatrix} ⪰ 0 on D × D.

We first observe that in view of (1.1), (2.14) and (2.10), the (2,2)-entry in (3.8) can be written in the form

(3.9)    K_s(z, ζ) − F_s(z) P^{−1} F_s(ζ)^* = \frac{[1 \; −s(z)] \, Θ(z) J Θ(ζ)^* \, [1 \; −s(ζ)]^*}{1 − z\overline{ζ}}.

We next observe that for σ ∈ S defined as in (2.13),

(3.10)    [1 \; −s(z)] \, Θ(z) = u(z) [1 \; −σ(z)], where u := a − cs.

Substituting (3.10) into (3.9) gives

(3.11)    K_s(z, ζ) − F_s(z) P^{−1} F_s(ζ)^* = u(z) \overline{u(ζ)} K_σ(z, ζ),

which in turn allows us to write (3.8) as

(3.12)    \begin{bmatrix} γ − x P^{−1} x^* & \overline{f(ζ)} − x P^{−1} F_s(ζ)^* \\ f(z) − F_s(z) P^{−1} x^* & u(z) \overline{u(ζ)} K_σ(z, ζ) \end{bmatrix} ⪰ 0.

By [4, Theorem 2.2], the latter inequality means that the function h := (f − F_s P^{−1} x^*)/u belongs to H(K_σ). The desired representation (2.17) for f now follows from the definitions of h and u.
Proof of (2): Upon letting h ≡ 0 in (2.17) we conclude that the function F_s P^{−1} x^* is a particular solution to Problem 2.1. Therefore, the second term on the right side of (2.17) describes the general solution of the homogeneous problem ((a − cs)h)(T^*)E^* = 0; moreover,

⟨(a − cs)h, F_s P^{−1} x^*⟩_{H(K_s)} = ⟨((a − cs)h)(T^*)E^*, P^{−1} x^*⟩_X = 0,

so that the two terms on the right side of (2.17) are orthogonal in H(K_s), which completes the proof of (2).
Proof of (3): As in part (1) we use the notation u := a − cs. Take a function of the form (3.4):

h(z) = \sum_{j=1}^n K_σ(z, z_j) c_j    (z_1, . . . , z_n ∈ D, c_1, . . . , c_n ∈ C).

This function belongs to H(K_σ) and, by the reproducing kernel property,

(3.15)    ||h||²_{H(K_σ)} = \sum_{i,j=1}^n K_σ(z_i, z_j) c_j \overline{c_i}.

We will show that ||uh||_{H(K_s)} = ||h||_{H(K_σ)}. Since the set of all functions of the form (3.4) is dense in H(K_σ) (recall that u ≢ 0), the desired statement will then follow. To complete the proof, use formula (3.11) to get

(3.16)    (uh)(z) = \sum_{j=1}^n u(z) K_σ(z, z_j) c_j = \sum_{j=1}^n \big( K_s(z, z_j) − F_s(z) P^{−1} F_s(z_j)^* \big) \frac{c_j}{\overline{u(z_j)}}.

By the reproducing property and by (2.16),

(3.17)    ||uh||²_{H(K_s)} = \sum_{i,j=1}^n \big( K_s(z_i, z_j) − F_s(z_i) P^{−1} F_s(z_j)^* \big) \frac{c_j}{\overline{u(z_j)}} \cdot \frac{\overline{c_i}}{u(z_i)}.

Since by (3.11), K_s(z_i, z_j) − F_s(z_i) P^{−1} F_s(z_j)^* = u(z_i) \overline{u(z_j)} K_σ(z_i, z_j), it follows that the right-hand side expressions in (3.17) and (3.15) are equal. Thus ||uh||_{H(K_s)} = ||h||_{H(K_σ)}, which completes the proof of (3).

An example
Parametrization (2.17) is especially transparent in case dim X < ∞. Then, with respect to an appropriate basis of X, the output-stable observable pair (E, T) has the following form: T is a block diagonal matrix T = diag{T_1, . . . , T_k} with the diagonal block T_i equal to the upper-triangular n_i × n_i Jordan block with the number z_i ∈ D on the main diagonal, and E is the row vector E = [E_1 ⋯ E_k] with E_i = [1 0 ⋯ 0] ∈ C^{1×n_i}. It is not hard to show that for (E, T) as above and for every function f analytic at z_1, . . . , z_k, evaluation (2.2) amounts to

(4.1)    f(T^*)E^* = Col_{1≤i≤k} Col_{0≤j<n_i} f^{(j)}(z_i)/j!.
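Formula (4.1) can be checked in the smallest nontrivial case. The data below (a single 2 × 2 Jordan block with a real eigenvalue and a cubic f) are assumptions for illustration; real z_0 keeps adjoints as transposes.

```python
import numpy as np

# Check of (4.1) in a 2x2 case (assumed toy data, real z0): for T a single
# upper-triangular Jordan block with eigenvalue z0 and E = [1 0], the series
# sum_n f_n T^{*n} E^* returns the column (f(z0), f'(z0)/1!).

z0 = 0.4
T = np.array([[z0, 1.0],
              [0.0, z0]])             # upper-triangular Jordan block
E = np.array([[1.0, 0.0]])
f_coeffs = [1.0, -2.0, 0.5, 4.0]      # f(z) = 1 - 2z + 0.5 z^2 + 4 z^3

val = sum(fn * np.linalg.matrix_power(T.T, n) @ E.T
          for n, fn in enumerate(f_coeffs))

f = np.polynomial.polynomial.Polynomial(f_coeffs)
expected = np.array([[f(z0)], [f.deriv()(z0)]])
print(np.allclose(val, expected))     # True
```

The computation works because (T^*)^n E^* has entries z_0^n and n z_0^{n−1}, so summing against the Taylor coefficients f_n reproduces f and its derivative at z_0.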
If we specify the entries of the column x^* by letting

x^* = Col_{1≤i≤k} Col_{0≤j<n_i} x_{ij},

then it is readily seen that Problem 2.1 amounts to the following Lagrange-Sylvester interpolation problem:

LSP: Given s ∈ S, k distinct points z_1, . . . , z_k ∈ D and a collection {x_{ij}} of complex numbers, find all functions f ∈ H(K_s) such that

f^{(j)}(z_i)/j! = x_{ij} for j = 0, . . . , n_i − 1; i = 1, . . . , k.

The auxiliary column N^* is now defined from the derivatives of the given function s via formula (4.1), and we define the matrix P as the unique solution of the Stein equation (2.8). This matrix P turns out to be equal to the Schwarz-Pick matrix of s based on the points z_1, . . . , z_k taken with multiplicities n_1, . . . , n_k, which in turn is known to be positive definite unless s is a Blaschke product of degree m < n := n_1 + . . . + n_k, in which case P is positive semidefinite and rank P = m. Thus the case P > 0 handled in Theorem 2.2 is generic if dim X < ∞. In this generic case, all the solutions f to the problem LSP are given by formula (2.17), where now all the ingredients are not only explicit but also computable; for example, Θ is a rational J-inner function which can be taken in the form (2.11). In particular, the unique solution of minimal norm has the form

f_min(z) = (E − s(z)N)(I − zT)^{−1} P^{−1} x^*,

which is quite explicit in terms of the given data, at least if one is willing to compute the inverse P^{−1} of the Schwarz-Pick matrix P. If s is a Blaschke product of degree m < n, the problem LSP has at most one solution. Here is a sketch of the proof. Let m_1, . . . , m_k be any nonnegative integers such that m_1 + . . . + m_k = m = deg s and m_i ≤ n_i for i = 1, . . . , k, and consider the truncated problem with interpolation conditions

f^{(j)}(z_i)/j! = x_{ij} for j = 0, . . . , m_i − 1; i = 1, . . . , k.
Let x̃, Ẽ, Ñ, T̃ and P̃ be the matrices associated with this truncated problem. The m × m matrix P̃ is a principal submatrix of P, and it is invertible since deg s = m. Let Θ̃ = \begin{bmatrix} ã & b̃ \\ c̃ & d̃ \end{bmatrix} be constructed by formula (2.11) (with Ẽ, Ñ, T̃ and P̃ instead of E, N, T and P, respectively). It then turns out that the function σ̃ = (d̃s − b̃)(ã − c̃s)^{−1} is a Blaschke product of degree zero, that is, a unimodular constant. By Theorem 2.2, all solutions of the truncated problem are of the form f = F̃_s P̃^{−1} x̃^* + (ã − c̃s)h̃, where F̃_s(z) = (Ẽ − s(z)Ñ)(I − zT̃)^{−1} and where h̃ is a function from H(K_{σ̃}). Since σ̃ is a unimodular constant, the space H(K_{σ̃}) is trivial and thus the truncated problem has only one solution, F̃_s P̃^{−1} x̃^*. It finally can be shown that if the necessary condition x^* ∈ Ran P^{1/2} holds, then this function also solves the larger problem LSP.
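The rank statement for Blaschke products can be seen numerically in a simple-point case (assumed toy data, not from the paper): for s a Blaschke product of degree m, the kernel matrix over n > m simple points is singular of rank m.

```python
import numpy as np

# Toy illustration of the degenerate case (assumed data): for the Blaschke
# product s(z) = z**2 of degree m = 2, one has
#   K_s(z, zeta) = (1 - (z*conj(zeta))**2) / (1 - z*conj(zeta)) = 1 + z*conj(zeta),
# so the kernel matrix over n = 4 > m distinct simple points is positive
# semidefinite of rank exactly m = deg s.

z = np.array([0.1 + 0.2j, -0.3j, 0.5, -0.4 + 0.1j])
w = z[:, None] * np.conj(z)[None, :]
K = (1.0 - w ** 2) / (1.0 - w)

eigs = np.linalg.eigvalsh(K)
print(int(np.sum(eigs > 1e-10)))      # 2: rank equals the degree of s
print(np.all(eigs > -1e-10))          # True: positive semidefinite
```

The rank drop reflects the fact that H(K_s) is m-dimensional for an inner s of degree m, so at most m kernel functions K_s(·, z_i) can be linearly independent.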
The case where P is not boundedly invertible is much more challenging, especially if dim X = ∞ and, in particular, if the function s belongs to an operator-valued Schur class, in which case the space H(K_s) consists of vector-valued (rather than scalar-valued) functions. This case will be worked out on a separate occasion.