
Mathematics of Computation

Published by the American Mathematical Society since 1960 (published as Mathematical Tables and Other Aids to Computation 1943-1959), Mathematics of Computation is devoted to research articles of the highest quality in computational mathematics.

ISSN 1088-6842 (online) ISSN 0025-5718 (print)


Backward error and conditioning of Fiedler companion linearizations

by Fernando De Terán
Math. Comp. 89 (2020), 1259-1300

Abstract:

The standard way to solve polynomial eigenvalue problems is through linearizations. The family of Fiedler linearizations, which includes the classical Frobenius companion forms, has many interesting properties from both the theoretical and the applied points of view. These properties make the Fiedler pencils a very attractive family of linearizations for use in the solution of polynomial eigenvalue problems. However, their numerical features for general matrix polynomials had not yet been fully investigated. In this paper, we analyze the backward error of eigenpairs and the condition number of eigenvalues of Fiedler linearizations in the solution of polynomial eigenvalue problems. We obtain bounds for: (a) the ratio between the backward error of an eigenpair of the matrix polynomial and the backward error of the corresponding (computed) eigenpair of the linearization, and (b) the ratio between the condition number of an eigenvalue in the linearization and the condition number of the same eigenvalue in the matrix polynomial. A key quantity in these bounds is $\rho$, the ratio between the maximum norm of the coefficients of the polynomial and the minimum of the norms of the leading and trailing coefficients. If the matrix polynomial is well scaled (i.e., all its coefficients have a similar norm, which implies $\rho \approx 1$), then solving the polynomial eigenvalue problem with any Fiedler linearization performs well in terms of backward error and conditioning. In the more general case of badly scaled matrix polynomials, dividing the coefficients of the polynomial by the maximum norm of its coefficients leads to better bounds. In particular, after this scaling, the ratio between the eigenvalue condition numbers in any two Fiedler linearizations is bounded by a quantity that depends only on the size and the degree of the polynomial. We also analyze the effect of parameter scaling in these linearizations, which significantly improves the backward error and conditioning in some cases where $\rho$ is large. Several numerical experiments are provided to support our theoretical results.
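
To make the key quantities concrete, here is a sketch under the assumption (not stated in this abstract) that the matrix polynomial is written as $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$ with $A_k \neq 0$, and that $\|\cdot\|$ denotes the matrix norm used in the paper. The quantity $\rho$ and the coefficient scaling described above would then read
\[
  \rho = \frac{\max_{0 \le i \le k} \|A_i\|}{\min\{\|A_0\|,\, \|A_k\|\}},
  \qquad
  \widetilde{P}(\lambda) = \frac{P(\lambda)}{\max_{0 \le i \le k} \|A_i\|}.
\]
With this normalization, a well-scaled polynomial has $\rho \approx 1$, and all coefficients of $\widetilde{P}$ have norm at most $1$, which is the setting in which the abstract reports the improved bounds.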
Additional Information
  • Fernando De Terán
  • Affiliation: Departamento de Matemáticas, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés, Spain
  • Email: fteran@math.uc3m.es
  • Received by editor(s): February 22, 2019
  • Received by editor(s) in revised form: June 11, 2019
  • Published electronically: October 17, 2019
  • Additional Notes: This work was partially supported by the Ministerio de Ciencia e Innovación of Spain through grant MTM-2009-09281, and by the Ministerio de Economía y Competitividad of Spain through grants MTM-2012-32542, MTM2015-68805-REDT, and MTM2015-65798-P.
  • © Copyright 2019 American Mathematical Society
  • Journal: Math. Comp. 89 (2020), 1259-1300
  • MSC (2010): Primary 15A18, 65F15, 65F35
  • DOI: https://doi.org/10.1090/mcom/3480
  • MathSciNet review: 4063318