
Optimal trajectories associated with a solution of the contingent Hamilton-Jacobi equation


Abstract

In this paper we study the existence of optimal trajectories associated with a generalized solution to the Hamilton-Jacobi-Bellman equation arising in optimal control. In general, we cannot expect such solutions to be differentiable. But, in a way analogous to the use of distributions in PDE, we replace the usual derivatives with “contingent epiderivatives” and the Hamilton-Jacobi equation by two “contingent Hamilton-Jacobi inequalities.” We show that the value function of an optimal control problem verifies these “contingent inequalities.”
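
For orientation, here is a schematic of the objects involved, stated for a control system modeled by a differential inclusion \(\dot x(t) \in F(t, x(t))\); the notation and the exact form of the inequalities are illustrative and may differ from those used in the paper. The contingent epiderivative of \(V\) at \((t,x)\) in the direction \((1,u)\) is

\[
D_\uparrow V(t,x)(1,u) \;=\; \liminf_{\substack{h \to 0^+ \\ u' \to u}} \frac{V(t+h,\, x+hu') - V(t,x)}{h},
\]

and one common way of writing a pair of contingent Hamilton-Jacobi inequalities for a value function \(V\) is

\[
\inf_{u \in F(t,x)} D_\uparrow V(t,x)(1,u) \;\le\; 0,
\qquad
\sup_{u \in F(t,x)} D_\uparrow V(t,x)(1,u) \;\ge\; 0.
\]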

Our approach yields the following three results: (a) The upper semicontinuous solutions to the contingent inequalities are monotone along the trajectories of the dynamical system. (b) With every continuous solution V of the contingent inequalities, we can associate an optimal trajectory along which V is constant. (c) For such solutions, we can construct optimal trajectories through the corresponding optimal feedback.
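
As a schematic rendering of (b) and (c), in the same illustrative notation as above: with a continuous solution \(V\) one may associate the set-valued feedback

\[
G(t,x) \;=\; \bigl\{\, u \in F(t,x) \;:\; D_\uparrow V(t,x)(1,u) \le 0 \,\bigr\},
\qquad
\dot x(t) \in G(t, x(t)),
\]

whose trajectories keep \(t \mapsto V(t, x(t))\) nonincreasing; combined with the monotonicity in (a), this is the mechanism by which V remains constant along the selected trajectory and the trajectory is optimal. The precise construction in the paper may differ.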

Such continuous solutions are also “viscosity solutions” of a Hamilton-Jacobi equation. Finally, we prove a relationship between the superdifferentials of solutions introduced by Crandall et al. [10] and the Pontryagin principle, and discuss the link between viscosity solutions and Clarke's approach to the Hamilton-Jacobi equation.
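
For reference, with the sign conventions of [9] and [10] (stated schematically), the superdifferential of \(V\) at \(x\) is

\[
D^+ V(x) \;=\; \Bigl\{\, p \;:\; \limsup_{y \to x} \frac{V(y) - V(x) - \langle p,\, y - x \rangle}{|y - x|} \;\le\; 0 \,\Bigr\},
\]

with the subdifferential \(D^- V(x)\) defined symmetrically (with a liminf and the opposite inequality); \(V\) is a viscosity solution of \(H(x, V, DV) = 0\) when \(H(x, V(x), p) \le 0\) for every \(p \in D^+ V(x)\) and \(H(x, V(x), p) \ge 0\) for every \(p \in D^- V(x)\).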


References

  1. Aubin, J.-P. and A. Cellina (1984). Differential Inclusions. Springer-Verlag, New York.

  2. Aubin, J.-P. and I. Ekeland (1984). Applied Nonlinear Analysis. Wiley-Interscience, New York.

  3. Bellman, R. E. (1957). Dynamic Programming. Princeton University Press, Princeton, NJ.

  4. Bellman, R. and S. Dreyfus (1962). Applied Dynamic Programming. Princeton University Press, Princeton, NJ.

  5. Cesari, L. (1983). Optimization Theory and Applications. Springer-Verlag, New York.

  6. Clarke, F. H. (1983). Optimization and Nonsmooth Analysis. Wiley-Interscience, New York.

  7. Clarke, F. H. and R. B. Vinter (1983). Local optimality conditions and Lipschitzian solutions to the Hamilton-Jacobi equation, SIAM J. Control Optim., 21(6), pp. 856–870.

  8. Clarke, F. H. and R. B. Vinter (1987). The relationship between the maximum principle and dynamic programming, SIAM J. Control Optim., 25, pp. 1291–1311.

  9. Crandall, M. G. and P. L. Lions (1983). Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc., 277, pp. 1–42.

  10. Crandall, M. G., L. C. Evans, and P. L. Lions (1984). Some properties of viscosity solutions of the Hamilton-Jacobi equation, Trans. Amer. Math. Soc., 282(2), pp. 487–502.

  11. Dreyfus, S. E. (1965). Dynamic Programming and the Calculus of Variations. Academic Press, New York.

  12. Fleming, W. H. and R. W. Rishel (1975). Deterministic and Stochastic Optimal Control. Springer-Verlag, New York.

  13. Frankowska, H. (to appear). Contingent cones to reachable sets of control systems, SIAM J. Control Optim.

  14. Frankowska, H. (to appear). Hamilton-Jacobi equations: viscosity solutions and generalized gradients, J. Math. Anal. Appl.

  15. Frankowska, H. (1987). Local controllability and infinitesimal generators of semigroups of set-valued maps, SIAM J. Control Optim., 25, pp. 412–432.

  16. Frankowska, H. (1987). The maximum principle for an optimal solution to a differential inclusion with end point constraints, SIAM J. Control Optim., 25, pp. 145–157.

  17. Jacobson, D. and D. Mayne (1970). Differential Dynamic Programming, Modern Analytic and Computational Methods in Science and Mathematics. Elsevier, New York.

  18. Leitmann, G. (1982). Optimality and reachability with feedback control. In Dynamical Systems and Microphysics. Academic Press, New York, pp. 119–141.

  19. Lions, P. L. (1982). Generalized Solutions of Hamilton-Jacobi Equations. Pitman, Boston.

  20. Lions, P. L. and P. E. Souganidis (1985). Differential games, optimal control and directional derivatives of viscosity solutions of Bellman's and Isaacs' equations, SIAM J. Control Optim., 23(4), pp. 566–583.

  21. Offin, D. C. (1978). A Hamilton-Jacobi approach to the differential inclusion problem. Master's thesis, University of British Columbia, Vancouver.

  22. Zeidler, E. (1984). Nonlinear Functional Analysis and its Applications, vol. III. Springer-Verlag, New York.

Additional information

Communicated by W. Fleming

Cite this article

Frankowska, H. Optimal trajectories associated with a solution of the contingent Hamilton-Jacobi equation. Appl Math Optim 19, 291–311 (1989). https://doi.org/10.1007/BF01448202
