
The Hamilton-Jacobi Equation, Then and Now

Ryan Hynd

Communicated by Notices Associate Editor Daniela De Silva


1. Introduction

The story of the Hamilton-Jacobi equation begins in classical mechanics as developed in the 19th century. A basic problem within this theory is to describe the motion of a particle with mass $m$ subject to a force $f$ depending on its position. If this particle happens to be moving along the $x$-axis, Newton's second law takes the form

$$m\,\ddot{x}(t) = f(x(t)). \tag{1}$$

Here $t$ denotes time, $x(t)$ is the position of the particle at time $t$, and each dot denotes differentiation with respect to time. Observe that in order to determine the trajectory of the particle, we are left to solve a differential equation for $x$ given initial conditions $x(0)$ and $\dot{x}(0)$.

Figure 1.

The position $x(t)$ of a point mass versus time $t$ on a fixed time interval. This particle moves along the $x$-axis and is subject to a force $f$. We can find $x$ by solving the differential equation 1.
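For readers who would like to experiment, here is a minimal numerical sketch of solving the differential equation 1; the mass, the force law $f(x) = -x$, and the initial conditions below are illustrative choices rather than anything specific to this article.

```python
# Sketch: integrate m*x''(t) = f(x(t)) by rewriting it as the first-order system
# (x, v)' = (v, f(x)/m). The force f(x) = -x, the mass, the time span, and the
# initial conditions are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0                       # mass (assumed)
f = lambda x: -x              # force depending only on position (assumed)

def rhs(t, y):
    x, v = y
    return [v, f(x) / m]      # x' = v,  v' = f(x)/m

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 0.0], max_step=0.01)
print(sol.t[-1], sol.y[0, -1])   # time and position at the end of the interval
```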

An interesting thing happens when we select a potential energy function $V$ for which

$$f(x) = -V'(x).$$

This choice allows us to derive the differential equation 1 through a least action principle. Indeed we may consider the action integral

$$\mathcal{A}(\gamma) := \int_0^t \left( \frac{m}{2}\,\dot\gamma(s)^2 - V(\gamma(s)) \right) ds$$

for a given path $\gamma : [0, t] \to \mathbb{R}$; here and throughout, all paths are assumed to be absolutely continuous. If there is a path $\gamma_*$ such that

$$\mathcal{A}(\gamma_*) \le \mathcal{A}(\gamma)$$

for each path $\gamma$ with $\gamma(t) = \gamma_*(t)$, then $\gamma_*$ solves the Newton's second law differential equation

$$m\,\ddot\gamma_*(s) = f(\gamma_*(s))$$

for $0 \le s \le t$, along with the natural boundary condition

$$\dot\gamma_*(0) = 0.$$

Therefore, action-minimizing paths are natural candidates to describe the motion of a particle with mass $m$ subject to a force $f = -V'$.

Figure 2.

Various trajectories with the same right endpoint as indicated with the gray dot. All of these trajectories are candidates to minimize the action subject to this right endpoint constraint.
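As a rough illustration of this right endpoint problem, we can discretize a path by its values at finitely many times and minimize the resulting discrete action with a generic optimizer. In the sketch below, the potential $V(x) = x^2/2$, the mass, the endpoint data, and the resolution are all assumptions made only for this example.

```python
# Sketch: minimize a discretized action over paths with a fixed right endpoint.
# The potential V, mass m, endpoint (x_end, t_end), and grid size are assumptions.
import numpy as np
from scipy.optimize import minimize

m, t_end, x_end, n = 1.0, 1.0, 1.0, 50
dt = t_end / n
V = lambda x: 0.5 * x**2                     # assumed potential energy

def action(free_vals):
    # path values at times 0, dt, ..., t_end; the right endpoint is pinned to x_end
    gamma = np.append(free_vals, x_end)
    vel = np.diff(gamma) / dt                # forward-difference velocities
    mid = 0.5 * (gamma[:-1] + gamma[1:])     # midpoint positions
    return np.sum((0.5 * m * vel**2 - V(mid)) * dt)

guess = np.linspace(0.0, x_end, n + 1)[:-1]  # straight-line initial guess
best = minimize(action, guess)
print(best.fun, best.x[0])                   # approximate least action and free left endpoint
```

The minimizing values approximate a solution of Newton's second law with the prescribed right endpoint.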

The action integral is also known to satisfy a partial differential equation (PDE). Assume that for each $x \in \mathbb{R}$ and $t > 0$,

$$\gamma_{x,t} : [0, t] \to \mathbb{R}$$

is a path which minimizes $\mathcal{A}$ among all paths which fulfill the right endpoint constraint $\gamma(t) = x$. Further suppose that when the action integral is evaluated along $\gamma_{x,t}$,

$$u(x, t) := \mathcal{A}(\gamma_{x,t}),$$

it is a smooth function. Then direct computation (as explained in lecture 19 of Jac84) shows that $u$ solves what is now known as a Hamilton-Jacobi equation

$$u_t + \frac{1}{2m}\,u_x^2 + V(x) = 0. \tag{3}$$

Here $u_t$ and $u_x$ denote partial differentiation with respect to $t$ and $x$, respectively.

In this article, we will examine some of the mathematical developments concerning Hamilton-Jacobi equations since the mid-20th century. This includes the advent of viscosity solutions, which gives us a way to interpret how functions like $u$ defined above generally solve Hamilton-Jacobi equations. Another key idea we will discuss is the dynamic programming principle from control theory. In addition, we will present research directions involving interacting systems of particles and noncooperative differential games which aim to further expand what we know about Hamilton-Jacobi type equations.

2. Modeling Applications

Before studying the particular equation 3, we will discuss various optimization problems in which Hamilton-Jacobi equations arise. We will not analyze each of the resulting equations separately nor present a theory which encompasses them all. However, the ideas that we will subsequently introduce for equation 3 can be adapted to all of these equations. The purpose of this section is to give some perspective on the applicability of these concepts. We also note that some of the PDEs in the examples below are typically called Hamilton-Jacobi-Bellman equations, as they can be derived with Bellman’s dynamic programming method which we will discuss later in this article.

2.1. Escape of a light ray

Let us first consider a two-dimensional light ray passing through an inhomogeneous medium. We will represent the medium by a bounded open subset $U$ of the $xy$-plane and the light ray by a path

$$\gamma : [0, \infty) \to \mathbb{R}^2.$$

We will also assume that at each point $(x, y)$ in $U$, the inhomogeneity of the medium constrains the speed of light to be no more than $c(x, y)$ for a given positive function $c$.

Figure 3.

A planar domain $U$ which represents an inhomogeneous medium. The dashed curves display possible light ray paths emanating from a common point in $U$. Fermat's principle tells us that light ray paths are among those which minimize their exit time from $U$.

For any path $\gamma$ with $\gamma(0) = (x, y)$ in $U$, we define

$$\tau_\gamma := \inf\{t \ge 0 : \gamma(t) \notin U\}$$

as the first time that $\gamma$ exits $U$, and we set $\tau_\gamma := +\infty$ if no such time exists. According to Fermat's principle, a ray of light assumes a path which takes the least amount of time to exit this region. As a result we will try to find a path which satisfies the speed constraint

$$|\dot\gamma(t)| \le c(\gamma(t))$$

and minimizes $\tau_\gamma$. To this end, we consider

$$u(x, y) := \inf\big\{\tau_\gamma : \gamma(0) = (x, y) \text{ and } |\dot\gamma(t)| \le c(\gamma(t)) \text{ for } t \ge 0\big\}$$

for $(x, y)$ in $U$.

It turns out that $u$ can be interpreted as a solution of the eikonal equation

$$c(x, y)\,|Du(x, y)| = 1 \quad \text{in } U.$$

This PDE may be the first Hamilton-Jacobi equation ever written down Ham28. An important example occurs when

$$c \equiv 1.$$

This corresponds to a homogeneous medium. Here $u(x, y)$ is simply the distance from $(x, y)$ to the boundary of $U$. In particular, the distance to the boundary function is a solution of the PDE

$$|Du| = 1 \quad \text{in } U.$$

We recommend BCD97 and Tra21 for more on the eikonal and other Hamilton-Jacobi equations.
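In the homogeneous case, the claim that the distance to the boundary solves the eikonal equation is easy to test numerically. The sketch below takes $U$ to be the unit disk (an assumed choice of domain) and differentiates the distance function by finite differences at an interior point.

```python
# Sketch: for U = unit disk and c = 1, the distance to the boundary is
# u(x, y) = 1 - sqrt(x^2 + y^2); check |Du| = 1 at a sample interior point.
import numpy as np

u = lambda x, y: 1.0 - np.hypot(x, y)      # distance from (x, y) to the unit circle

h = 1e-6
x, y = 0.3, -0.4                           # interior point away from the origin
ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
print(np.hypot(ux, uy))                    # approximately 1, as the eikonal equation predicts
```

Note that this $u$ fails to be differentiable at the center of the disk, a first hint of why a generalized notion of solution is useful.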

2.2. Optimal production of a commodity

Figure 4.

A graph of the amount of a commodity produced by a hypothetical factory. The boxes represent a changing inventory over time.

A basic problem in economics is to minimize the costs associated with the production of a given commodity on a fixed time horizon $[0, T]$. Let us represent $x(t)$ as the amount of commodity stored in inventory at time $t$, which is driven by a variable production rate $\alpha(t)$; for simplicity, we assume there is a constant demand rate $d$. That is,

$$\dot{x}(t) = \alpha(t) - d \tag{4}$$

for $0 \le t \le T$. We seek to choose $\alpha$ in order to minimize a total cost of the form

$$\int_0^T L\big(x(t), \alpha(t)\big)\,dt,$$

modeling the sum of specific types of holding and operational costs. Here $L$ is a given nonnegative running cost, which we will also use below.

Considering this optimization problem for a given level of commodity $x$ at time $t \in [0, T]$, we are led to the function

$$u(x, t) := \min\left\{\int_t^T L\big(x(s), \alpha(s)\big)\,ds : x(t) = x\right\}.$$

Here the minimum is taken over paths satisfying 4 on $[t, T]$ for a nonnegative production rate $\alpha$. It turns out that $u$ can be interpreted as a solution of the PDE

$$u_t + \min_{a \ge 0}\big\{(a - d)\,u_x + L(x, a)\big\} = 0.$$

It would be interesting to know if we can find $u$ explicitly and if it will tell us something about optimal production rates $\alpha$. Finally, we note that this optimization problem is a particular case of Example 2.1 in section I.2 of FS06.
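To experiment with this production problem, we can simulate the inventory dynamics 4 and compare total costs for a few candidate production rates; the quadratic running cost $L(x, a) = x^2 + a^2$, the horizon, the demand rate, and the initial inventory in the sketch below are purely illustrative assumptions.

```python
# Sketch: simulate x'(t) = alpha(t) - d from equation 4 with forward Euler and
# evaluate a total cost with the assumed running cost L(x, a) = x^2 + a^2.
import numpy as np

T, d, x0 = 1.0, 1.0, 0.5                      # horizon, demand, initial inventory (assumed)
ts = np.linspace(0.0, T, 1001)
dt = ts[1] - ts[0]

def total_cost(alpha):                        # alpha: nonnegative production rates on the grid
    x = x0 + np.cumsum((alpha - d) * dt)      # forward Euler for x' = alpha - d
    return np.sum((x**2 + alpha**2) * dt)     # assumed running cost

for a in [0.0, 0.5, 1.0, 1.5]:                # compare a few constant production rates
    print(a, total_cost(np.full_like(ts, a)))
```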

2.3. Eradicating an infectious disease

There are many optimization problems of interest in epidemiology. We encounter one when considering the following differential equations in a compartmental model:

$$\begin{cases} \dot{S}(t) = -\beta\, S(t)\, I(t) - v(t)\, S(t)\\ \dot{I}(t) = \beta\, S(t)\, I(t) - \gamma\, I(t) \end{cases} \tag{5}$$

for $t \ge 0$. Here $S(t)$ and $I(t)$ represent the respective susceptible and infected components of a population at time $t$ which is subject to an infectious disease, $\beta$ is the transmission rate, $\gamma$ is the recovery rate, and $v(t)$ represents a vaccination rate at time $t$. As we have seen in the present COVID-19 epidemic, there may be logistical constraints in administering vaccines. As a result, we will suppose for simplicity that

$$0 \le v(t) \le 1$$

for all $t \ge 0$.

Figure 5.

Solution of the differential equations 5 for particular choices of $\beta$, $\gamma$, and the initial conditions $S(0)$ and $I(0)$. The graph of $S$ is shown in blue, and the graph of $I$ is shown in red. Here the vaccination rate is $v(t) = 0$ for times before a switching time and $v(t) = 1$ for times afterward.

Let us fix a threshold $\mu > 0$. Our goal is to choose a vaccination rate $v$ so that the time

$$\tau^v := \inf\{t \ge 0 : I^v(t) \le \mu\}$$

it takes for the infected population to drop down to the given threshold $\mu$ is as small as possible. Here and below, we'll write $S^v$ and $I^v$ for the solution of 5 with vaccination rate $v$. The time $\tau^v$ corresponding to a minimizing $v$ is called an eradication time.

This problem was considered by a group of math biologists BBSG17, who showed that a minimizing vaccination rate is always of the form

$$v(t) = \begin{cases} 0, & 0 \le t < t_*\\ 1, & t \ge t_* \end{cases}$$

for a switching time $t_* \ge 0$. While it seems intuitive that $t_* = 0$ for an optimal vaccination rate, there are initial conditions which correspond to positive switching times (Figure 2a of BBSG17). However, it is not well understood how the quadrant of initial conditions $(S(0), I(0))$ is divided between points associated with immediate or delayed switching.
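A rough numerical experiment conveys the role of the switching time. The sketch below integrates the system 5 under a delayed maximal vaccination rate and records how long the infected population takes to fall below the threshold; the parameter values, the threshold, and the switching times tried are all illustrative assumptions.

```python
# Sketch: eradication times for bang-bang vaccination rates v(t) = 0 for t < t_star
# and v(t) = 1 afterward. The parameters beta, gamma, the threshold mu, the initial
# conditions, and the step size are illustrative assumptions.
beta, gamma, mu = 3.0, 1.0, 0.01
s0, i0, dt, t_max = 0.9, 0.1, 1e-3, 50.0

def eradication_time(t_star):
    s, i, t = s0, i0, 0.0
    while i > mu and t < t_max:
        v = 0.0 if t < t_star else 1.0         # bang-bang vaccination rate
        ds = -beta * s * i - v * s             # susceptible dynamics of 5
        di = beta * s * i - gamma * i          # infected dynamics of 5
        s, i, t = s + dt * ds, i + dt * di, t + dt
    return t

for t_star in [0.0, 0.5, 1.0, 2.0]:
    print(t_star, eradication_time(t_star))
```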

In recent work, we studied the least eradication time

$$T(s_0, i_0) := \inf\big\{\tau^v : 0 \le v(t) \le 1 \text{ for all } t \ge 0\big\},$$

regarded as a function of the initial conditions $(S(0), I(0)) = (s_0, i_0)$, and characterized $T$ as a solution of an associated Hamilton-Jacobi equation on the quadrant of initial conditions HIP20. It also turns out that $T$ solves another PDE which allowed us to give a new interpretation of optimal vaccination rates.

2.4. Turbulent flame fronts

In combustion science, the so-called $G$-equation

$$G_t + V^1\, G_x + V^2\, G_y = s_l\,\sqrt{G_x^2 + G_y^2} \tag{6}$$

is used to describe an evolving flame propagation front within a planar region Pet00. In particular, the level set

$$\Gamma_t := \{(x, y) : G(x, y, t) = 0\}$$

models the front for times $t \ge 0$. Here $V^1$ and $V^2$ represent the respective $x$ and $y$ coordinates of the “turbulent” velocity field $V = (V^1, V^2)$ of the flame; the parameter $s_l$ is the “laminar” flame speed.

Figure 6.

Schematic of the flame front determined by the zero level set $\Gamma_t$; note that $\vec{n}$ is the outward unit normal to $\Gamma_t$. This level set is time-dependent and will evolve according to the $G$-equation 6.

The primary assumption used in the derivation of the $G$-equation is that the level set moves in direction $\vec{n}$ with speed

$$V \cdot \vec{n} + s_l.$$

Here $\vec{n}$ is the outward unit normal to $\Gamma_t$, which points in the negative direction of the spatial gradient of $G$ provided the burnt region corresponds to points for which $G$ is positive. A central goal of this mathematical model is to describe the large time limit of the front's average speed of propagation, which is interpreted as the large time flame front speed. To this end, it has been exploited that the $G$-equation is a Hamilton-Jacobi equation NXY09.

In particular, we may represent a solution of 6 as

$$G(x, y, t) = \max\big\{ g(\xi(0), \eta(0)) \big\},$$

where the maximum is taken over controlled paths with $(\xi(t), \eta(t)) = (x, y)$.

Here $g$ is the function whose zero level set describes the front at time $0$, and the paths $\xi, \eta$ satisfy

$$\begin{cases} \dot\xi(\tau) = V^1(\xi(\tau), \eta(\tau)) + s_l\, a_1(\tau)\\ \dot\eta(\tau) = V^2(\xi(\tau), \eta(\tau)) + s_l\, a_2(\tau). \end{cases}$$

The “controls” $a_1, a_2$ are required to observe the constraint

$$a_1(\tau)^2 + a_2(\tau)^2 \le 1$$

for $0 \le \tau \le t$. Consequently, the $G$-equation is indeed a PDE which can be identified as a Hamilton-Jacobi equation.
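When the velocity field $V$ is constant, the representation above can be evaluated directly: the admissible paths ending at $(x, y)$ at time $t$ start exactly in the disk of radius $s_l t$ centered at $(x, y) - tV$, so $G(x, y, t)$ is the maximum of $g$ over that disk. The sketch below exploits this observation; the constant field, the laminar speed, and the initial data $g$ are illustrative assumptions.

```python
# Sketch: evaluate the control representation of the G-equation solution for a
# constant velocity field V. The admissible starting points form the disk of
# radius sl*t centered at (x, y) - t*V, so G(x, y, t) = max of g over that disk.
# V, sl, and g below are illustrative assumptions.
import numpy as np

V = np.array([0.5, 0.0])                  # constant "turbulent" velocity (assumed)
sl = 1.0                                  # laminar flame speed (assumed)
g = lambda x, y: 1.0 - x**2 - y**2        # initial data: burnt region = unit disk

angles = np.linspace(0.0, 2 * np.pi, 400)
radii = np.linspace(0.0, 1.0, 100)

def G(x, y, t):
    cx, cy = x - t * V[0], y - t * V[1]   # center of the disk of starting points
    r = sl * t * radii[:, None]
    xs = cx + r * np.cos(angles)[None, :]
    ys = cy + r * np.sin(angles)[None, :]
    return g(xs, ys).max()

# the front {G = 0} at t = 1 is then the circle of radius 1 + sl*t centered at t*V
print(G(2.5, 0.0, 1.0))                   # approximately 0: on the predicted front
print(G(0.5, 0.0, 1.0))                   # positive: inside the burnt region
```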

3. Dynamic Programming

Let us now return to minimizing the action associated with Newton's second law that we considered in the introduction. As before, $f = -V'$, where $V$ is the potential energy. We will consider paths whose right endpoint is fixed and also assume that $V$ is smooth for convenience. As a result, our problem is to minimize

$$\int_0^t \left( \frac{m}{2}\,\dot\gamma(s)^2 - V(\gamma(s)) \right) ds + g(\gamma(0))$$

among paths $\gamma : [0, t] \to \mathbb{R}$ which satisfy the constraint

$$\gamma(t) = x$$

for a given $x \in \mathbb{R}$ and $t > 0$.

To this end, we'll define the value function

$$u(x, t) := \min\left\{ \int_0^t \left( \frac{m}{2}\,\dot\gamma(s)^2 - V(\gamma(s)) \right) ds + g(\gamma(0)) : \gamma(t) = x \right\}. \tag{7}$$

Notice that we've included a function $g$ into our minimization problem. This is done with a bit of foresight, and we hope the reader will come to appreciate this addition. We also note that this definition of $u$ should be made with an infimum rather than a minimum, since we have not verified the existence of a minimizing path. We won't lose much by this omission, as it is fairly routine to find conditions on $V$ and $g$ so that the minimization problem associated with $u$ has a solution.

3.1. Deriving the Hamilton-Jacobi equation

A key insight made by Bellman is that the values of $u(\cdot, s)$ determine $u(\cdot, t)$ for $0 \le s \le t$ Bel54. Specifically, there is a relationship between the prior optimal values $u(\cdot, s)$ and the current one $u(x, t)$. In general, this is called the dynamic programming principle. For our particular problem, it takes the form

$$u(x, t) = \min\left\{ \int_s^t \left( \frac{m}{2}\,\dot\gamma(\tau)^2 - V(\gamma(\tau)) \right) d\tau + u(\gamma(s), s) : \gamma(t) = x \right\} \tag{8}$$

for $0 \le s \le t$.

Since $\gamma(t) = x$ for any admissible path in 8, we can rewrite the dynamic programming principle above as

$$0 = \min\left\{ \frac{1}{t - s}\int_s^t \left( \frac{m}{2}\,\dot\gamma(\tau)^2 - V(\gamma(\tau)) \right) d\tau + \frac{u(\gamma(s), s) - u(\gamma(t), t)}{t - s} : \gamma(t) = x \right\}. \tag{9}$$

Assuming that $u$ is differentiable at $(x, t)$ and that $\dot\gamma(t)$ exists, we also find

$$\lim_{s \to t^-}\left[ \frac{1}{t - s}\int_s^t \left( \frac{m}{2}\,\dot\gamma(\tau)^2 - V(\gamma(\tau)) \right) d\tau + \frac{u(\gamma(s), s) - u(\gamma(t), t)}{t - s} \right] = \frac{m}{2}\,\dot\gamma(t)^2 - V(x) - \dot\gamma(t)\,u_x(x, t) - u_t(x, t).$$

If we are allowed to interchange this limit and taking the minimum in 9, then

$$0 = \min_{v \in \mathbb{R}}\left\{ \frac{m}{2}\,v^2 - V(x) - v\,u_x(x, t) \right\} - u_t(x, t) = -\left( u_t(x, t) + \frac{1}{2m}\,u_x(x, t)^2 + V(x) \right).$$

As a result, dynamic programming heuristically implies the Hamilton-Jacobi equation 3.
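The dynamic programming principle 8 also suggests a simple numerical scheme: march forward in time and, at each step, minimize over a grid of velocities. The sketch below approximates the value function 7 in this way (a semi-Lagrangian discretization); the potential, the initial cost $g$, the mass, and the grid parameters are illustrative assumptions.

```python
# Sketch: approximate the value function u of 7 by iterating a discrete version of
# the dynamic programming principle 8:
#   u(x, t + dt) ~ min over v of [ dt*(m/2*v**2 - V(x)) + u(x - v*dt, t) ].
# The potential V, initial cost g, mass m, and all grid parameters are assumptions.
import numpy as np

m = 1.0
V = lambda x: 0.5 * x**2           # assumed potential energy
g = lambda x: np.abs(x)            # assumed initial cost

xs = np.linspace(-3.0, 3.0, 301)   # spatial grid
vs = np.linspace(-5.0, 5.0, 201)   # truncated grid of velocities
dt, n_steps = 0.01, 100

u = g(xs)
for _ in range(n_steps):
    # cost of using constant velocity v over one step, for every grid point x
    cand = dt * (0.5 * m * vs[:, None]**2 - V(xs)[None, :]) \
           + np.interp(xs[None, :] - vs[:, None] * dt, xs, u)
    u = cand.min(axis=0)           # Bellman update: minimize over velocities

print(u[150])                      # approximate value u(0, 1)
```

Refining the grids should produce values consistent with the Hamilton-Jacobi equation 3 wherever $u$ is differentiable.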

3.2. Action-minimizing paths

It is also not hard to check that if $\gamma_*$ is an action-minimizing path in 7, $\gamma_*$ is also a minimizer in 8 for any $0 \le s \le t$. Making use of this fact, we can apply the technique used to derive the least action principle to 8 and show

$$m\,\dot\gamma_*(s) = u_x(\gamma_*(s), s), \quad 0 \le s \le t, \tag{10}$$

for any action-minimizing path $\gamma_*$. This conclusion assumes differentiability of the value function along minimizing trajectories. We also note that the necessary condition 10 is a part of a more general set of conditions due to Pontryagin (as detailed in Chapter 1 of FS06).

In view of 7, we see that $u(x, 0) = g(x)$ for all $x \in \mathbb{R}$. Therefore, $u$ is a candidate for a solution of the initial value problem

$$\begin{cases} u_t + \dfrac{1}{2m}\,u_x^2 + V(x) = 0, & x \in \mathbb{R},\ t > 0\\ u(x, 0) = g(x), & x \in \mathbb{R}. \end{cases}$$

Alternatively, if $u$ is a continuously differentiable solution of this initial value problem and $\gamma$ is a path with $\gamma(t) = x$, then