
Mathematics of Computation

ISSN 1088-6842 (online); ISSN 0025-5718 (print)



Convergence of an iterative algorithm for solving Hamilton-Jacobi type equations

Authors: Jerry Markman and I. Norman Katz
Journal: Math. Comp. 71 (2002), 77-103
MSC (2000): Primary 93B40, 49N35, 65P10
Published electronically: March 9, 2001
MathSciNet review: 1862989


Solutions of the optimal control and $H_\infty$-control problems for nonlinear affine systems can be found by solving Hamilton-Jacobi equations. In general, however, these first-order nonlinear partial differential equations cannot be solved analytically. This paper studies the rate of convergence of an iterative algorithm which solves these equations numerically for points near the origin. It is shown that the procedure converges to the stabilizing solution exponentially with respect to the iteration variable. Illustrative examples are presented which confirm the theoretical rate of convergence.
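The flavor of such an iterative scheme can be illustrated, in a much simpler setting than the paper's, by successive approximation (a Kleinman-type policy iteration) for the scalar linear-quadratic problem, where the Hamilton-Jacobi-Bellman equation reduces to a quadratic equation for the coefficient $p$ of $V(x) = p x^2$. This sketch is not the authors' algorithm; the function name, parameters, and setup are illustrative assumptions.

```python
import math

# Illustrative sketch (not the algorithm of this paper): policy iteration
# for the scalar LQR problem
#     x' = a*x + b*u,   cost = integral(q*x^2 + r*u^2) dt,
# whose value function V(x) = p*x^2 satisfies the scalar HJB equation
#     2*a*p + q - (b^2/r)*p^2 = 0.
# Each iteration solves a *linear* (Lyapunov) equation for the current
# feedback gain, then updates the gain; the iterates approach the
# stabilizing root of the quadratic.

def hjb_policy_iteration(a, b, q, r, k0, tol=1e-10, max_iter=50):
    """Iterate toward the stabilizing solution p of the scalar HJB equation.

    k0 must be an initially stabilizing feedback gain, i.e. a - b*k0 < 0.
    Returns the final p and the list of iterates.
    """
    assert a - b * k0 < 0, "initial gain must stabilize the system"
    k = k0
    history = []
    for _ in range(max_iter):
        # Lyapunov step: solve 2*(a - b*k)*p + q + r*k^2 = 0 for p
        p = (q + r * k * k) / (2.0 * (b * k - a))
        history.append(p)
        k_next = b * p / r          # improved feedback gain, u = -k*x
        if abs(k_next - k) < tol:
            break
        k = k_next
    return p, history

# Example: a = b = q = r = 1; the exact stabilizing root is p = 1 + sqrt(2).
p, hist = hjb_policy_iteration(1.0, 1.0, 1.0, 1.0, k0=2.0)
errors = [abs(ph - (1.0 + math.sqrt(2.0))) for ph in hist]
```

In this scalar toy problem the error shrinks faster than geometrically from one iteration to the next, mirroring (in a loose, illustrative sense) the exponential convergence to the stabilizing solution established in the paper.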

References

  • 1. Arthur Bryson and Yu-Chi Ho, Applied Optimal Control: Optimization, Estimation, and Control, Blaisdell Pub. Co., Waltham, MA, 1969. MR 56:4953 (rev. printing)
  • 2. Hans Knobloch, Alberto Isidori, and Dietrich Flockerzi, Topics in Control Theory, Birkhäuser Verlag, 1993. MR 95e:93002
  • 3. W. L. Garrard, Additional results on suboptimal feedback control for nonlinear systems, International Journal of Control 10 (1969), no. 6, 657-663. MR 41:5084
  • 4. Y. Nishikawa, A method for suboptimal design of nonlinear feedback systems, Automatica 7 (1971), 703-712. MR 48:2860
  • 5. G. N. Saridis and C.-S. G. Lee, An approximation theory of optimal control for trainable manipulators, IEEE Transactions on Systems, Man, and Cybernetics SMC-9 (1979), no. 3, 152-159. MR 80b:93070
  • 6. Randal Beard, George Saridis, and John Wen, Improving the performance of stabilizing controls for nonlinear systems, IEEE Journal on Control Systems (1996), 27-35.
  • 7. K. A. Wise and J. L. Sedwick, Missile autopilot design using nonlinear $H_\infty$-optimal control, Proceedings of the 13th IFAC Symposium on Automatic Control in Aerospace, 1994.
  • 8. Jerry Markman, Numerical Solutions of the Hamilton-Jacobi Equations Arising in Nonlinear $H_\infty$ and Optimal Control, D.Sc. thesis, Department of Systems Science and Mathematics, Washington University, 1998.
  • 9. Jerry Markman and I. Norman Katz, An iterative algorithm for solving Hamilton-Jacobi equations, SIAM J. Sci. Comput. 22 (2000), 312-329. CMP 2000:15
  • 10. Alberto Isidori, Attenuation of disturbances in nonlinear control systems, in Systems, Models and Feedback (A. Isidori and T. J. Tarn, eds.), Birkhäuser, 1992, pp. 275-300. MR 93f:93047
  • 11. W. A. Coppel, Stability and Asymptotic Behavior of Differential Equations, D. C. Heath and Company, Boston, 1965.
  • 12. S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, New York, 1990. MR 92a:58041
  • 13. Richard Bellman, Stability Theory of Differential Equations, McGraw-Hill, New York, 1953. MR 15:794b
  • 14. A. J. van der Schaft, $L_2$-gain analysis of nonlinear systems and nonlinear state feedback $H_\infty$ control, IEEE Trans. on Automatic Control 37 (1992), 770-784. MR 93e:93027
  • 15. A. M. Stuart and A. R. Humphries, Dynamical Systems and Numerical Analysis, Cambridge University Press, 1996. MR 97g:65009


Additional Information

Jerry Markman
Affiliation: Department of Systems Science and Mathematics, Washington University, Campus Box 1040, One Brookings Drive, St. Louis, Missouri 63130

I. Norman Katz
Affiliation: Department of Systems Science and Mathematics, Washington University, Campus Box 1040, One Brookings Drive, St. Louis, Missouri 63130

Keywords: Hamilton-Jacobi equations, convergence, optimal control
Received by editor(s): December 1, 1998
Received by editor(s) in revised form: February 17, 2000
Published electronically: March 9, 2001
Additional Notes: The results reported here are part of the doctoral dissertation of the first author.
This work was supported in part by the National Science Foundation under grant number DMS-9626202 and in part by the Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory, Air Force Materiel Command, USAF, under agreement number F30602-99-2-0551. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Defense Advanced Research Projects Agency (DARPA), the Air Force Research Laboratory, or the U.S. Government.
Article copyright: © Copyright 2001 American Mathematical Society
