
On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming

Published in Applied Mathematics and Optimization

Abstract

An approximation of the Hamilton-Jacobi-Bellman equation associated with the infinite horizon discounted optimal control problem is proposed. The approximate solutions are shown to converge uniformly to the viscosity solution, in the sense of Crandall-Lions, of the original problem. Moreover, the approximate solutions can be interpreted as value functions of a discrete time control problem, which makes it possible to construct, by dynamic programming, a minimizing sequence of piecewise constant controls.
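
For context, the following sketch (not quoted from the paper; the notation and the specific scheme are assumptions based on the abstract) illustrates the kind of discretization being described. For a discounted infinite horizon problem with dynamics f, running cost l, control set A, and discount rate lambda > 0, the value function formally satisfies a Hamilton-Jacobi-Bellman equation, and a time step h > 0 yields a discrete fixed-point relation:

```latex
% Continuous-time Hamilton-Jacobi-Bellman equation (assumed formulation):
\[
  \lambda u(x) + \sup_{a \in A} \bigl\{ -f(x,a) \cdot Du(x) - \ell(x,a) \bigr\} = 0 .
\]
% One natural time discretization: an Euler step for the dynamics and the
% factor (1 - \lambda h) in place of the continuous discount:
\[
  u_h(x) = \min_{a \in A} \bigl\{ (1 - \lambda h)\, u_h\bigl(x + h\, f(x,a)\bigr) + h\, \ell(x,a) \bigr\} .
\]
```

Because u_h is the value function of a discrete time control problem, it can be computed by dynamic programming (value iteration), and a minimizing control can be read off pointwise. The snippet below is a minimal illustration on a hypothetical one-dimensional example; the dynamics, cost, and grid are invented for the sketch and are not taken from the paper:

```python
import numpy as np

# Hypothetical toy problem: dynamics y' = a, running cost l(x,a) = x^2 + 0.1*a^2,
# controls a in {-1, 0, 1}, discount rate lam, time step h with 0 < lam*h < 1.
lam, h = 1.0, 0.1
beta = 1.0 - lam * h                  # discrete discount factor
xs = np.linspace(-2.0, 2.0, 401)      # spatial grid
controls = np.array([-1.0, 0.0, 1.0])

def f(x, a):
    return a

def running_cost(x, a):
    return x**2 + 0.1 * a**2

u = np.zeros_like(xs)                 # initial guess for u_h on the grid
for _ in range(1000):                 # value iteration (a contraction since beta < 1)
    candidates = []
    for a in controls:
        x_next = np.clip(xs + h * f(xs, a), xs[0], xs[-1])  # Euler step, kept inside the grid
        u_next = np.interp(x_next, xs, u)                    # linear interpolation on the grid
        candidates.append(beta * u_next + h * running_cost(xs, a))
    u_new = np.min(candidates, axis=0)
    if np.max(np.abs(u_new - u)) < 1e-9:
        break
    u = u_new

# A piecewise constant (in time) control is obtained by taking, at each state,
# a control achieving the minimum above and holding it over the step h.
policy = controls[np.argmin(candidates, axis=0)]
```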

References

  1. Bertsekas DP, Shreve SE (1978) Stochastic optimal control: The discrete time case. Academic Press, New York

  2. Capuzzo Dolcetta I, Evans LC (to appear) Optimal switching for ordinary differential equations. SIAM J Control

  3. Capuzzo Dolcetta I, Matzeu M (1981) On the dynamic programming inequalities associated with the deterministic optimal stopping problem in discrete and continuous time. Num Funct Anal Optim 3:425–450

  4. Capuzzo Dolcetta I, Matzeu M, Menaldi JL (to appear) On a system of first order quasi-variational inequalities connected with the optimal switching problem. Systems and Control Letters

  5. Crandall MG, Evans LC, Lions PL (to appear) Some properties of the viscosity solutions of Hamilton-Jacobi equations. Trans Amer Math Soc

  6. Crandall MG, Lions PL (to appear) Viscosity solutions of Hamilton-Jacobi equations. Trans Amer Math Soc

  7. Evans LC (1980) On solving certain nonlinear partial differential equations by accretive operator methods. Israel J Math 36:365–389

  8. Fleming WH, Rishel RW (1975) Deterministic and stochastic optimal control. Springer-Verlag, Berlin-Heidelberg-New York

  9. Gawronski M (1982) Dissertation. Istituto Matematico, Università di Roma, Rome

  10. Goletti F (1981) Dissertation. Istituto Matematico, Università di Roma, Rome

  11. Henrici P (1962) Discrete variable methods in ordinary differential equations. J. Wiley, New York

  12. Lions PL (1982) Generalized solutions of Hamilton-Jacobi equations. Pitman, London

  13. Menaldi JL (1982) Le problème de temps d'arrêt optimal déterministe et l'inéquation variationnelle du premier ordre associée. Appl Math Optim 8:131–158

Additional information

Communicated by A. V. Balakrishnan

Supported in part by a CNR-NATO grant during a visit at the University of Maryland.

Cite this article

Dolcetta, I.C. On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming. Appl Math Optim 10, 367–377 (1983). https://doi.org/10.1007/BF01448394
