Equi-Lipschitz minimizing trajectories for non coercive, discontinuous, non convex Bolza controlled-linear optimal control problems

This article deals with the Lipschitz regularity of the ``approximate'' minimizers for the Bolza type control functional of the form \[J_t(y,u):=\int_t^T\Lambda(s,y(s), u(s))\,ds+g(y(T))\] among the pairs $(y,u)$ satisfying a prescribed initial condition $y(t)=x$, where the state $y$ is absolutely continuous, the control $u$ is summable and the dynamics is controlled-linear of the form $y'=b(y)u$. For $b\equiv 1$ the above becomes a problem of the calculus of variations. The Lagrangian $\Lambda(s,y,u)$ is assumed to be either convex in the variable $u$ on every half-line from the origin (radial convexity in $u$) or partially differentiable in the control variable, and to satisfy a local Lipschitz regularity condition in the time variable, named Condition (S). It is allowed to be extended valued, discontinuous in $y$ or in $u$, and non convex in $u$.\\ We assume a very mild growth condition, which is fulfilled if the Lagrangian is coercive, as well as in some almost linear cases. The main result states that, given any admissible pair $(y,u)$, there exists a more convenient admissible pair $(\overline y, \overline u)$ for $J_t$ where $\overline u$ is bounded and $\overline y$ is Lipschitz, with bounds and Lipschitz ranks that are uniform with respect to $t,x$ in the compact subsets of $[0,T[\times\mathbb R^n$. The result is new even in the superlinear case. As a consequence, there are minimizing sequences formed by pairs of equi-Lipschitz trajectories and equi-bounded controls.\\ A new existence and regularity result follows without assuming any kind of Lipschitzianity in the state variable.\\ We deduce, without any need of growth conditions, the nonoccurrence of the Lavrentiev phenomenon for a wide class of Lagrangians containing those that satisfy Condition (S), are bounded on bounded sets ``well'' inside the effective domain and are radially convex in the control variable.


Introduction
The main object of the article concerns the existence of "nice" pairs of approximate solutions to an optimal control problem. For the sake of clarity, we motivate the core of the paper by means of the basic problem of the calculus of variations. The classical problem of the calculus of variations consists in minimizing an integral functional \[\min\ I(y):=\int_t^T\Lambda(s,y(s),y'(s))\,ds,\qquad y\in W^{1,1}([t,T];\mathbb R^n),\quad y(t)=x\in\mathbb R^n,\ y(T)=y_T\in\mathbb R^n,\] where Λ is a positive, Lebesgue-Borel Lagrangian. The main ingredients needed to obtain the existence of a solution are summarized in Tonelli's theorem:
• Lower semicontinuity of Λ(s, y, u) with respect to (y, u);
• Convexity of Λ(s, y, u) with respect to u;
• Superlinearity of Λ(s, y, u) with respect to u.
When a minimizer of I exists, a first step towards regularity is looking at its Lipschitzianity.
When Λ is autonomous, superlinearity alone suffices to ensure the Lipschitz continuity of the minimizers (see [2,23,24]). Weaker growth conditions were considered in the last decades, requiring a specific behavior of the Hamiltonian H(s, y, u, p) := p · u − Λ(s, y, u) associated with Λ(s, y, u), as p belongs to the convex subdifferential of u ↦ Λ(s, y, u) and |u| → +∞. The essential idea of using such indirect growth conditions for the purposes of existence and regularity is due to F. Clarke, who introduced Condition (H) in his seminal paper [20] of 1993 for Lagrangians possibly nonautonomous, extended valued, with state and velocity constraints. A few years later, with different methods, A. Cellina and his school began working around a growth condition (G), formulated first in [17] for continuous Lagrangians of the form Λ(s, y, u) = f(y) + g(u) with g ∈ C¹(R) and, in 2003 [15,16], for autonomous and continuous Lagrangians. The growth conditions (H) and (G) will be thoroughly examined below. At this stage we just mention that if Λ is bounded on bounded sets then superlinearity implies Condition (G) and, in the real valued case, the validity of Condition (G) implies that of Condition (H). Conditions (G) and (H) are satisfied by some Lagrangians with almost linear growth, e.g., Λ(u) = |u| − √|u| satisfies (G) as well as (H), and some Lagrangians of the form Λ(s, y, u) = h(s, y)√(1 + |u|²) (in particular Λ(u) = √(1 + |u|²)) satisfy (H), but not (G).
In the real valued autonomous case these weak growth conditions alone (with no need of convexity or continuity assumptions), instead of superlinearity, ensure the Lipschitz regularity of minimizers, as shown by P. Bettiol and C. Mariconda in [8].
In the nonautonomous case, there are examples of Lagrangians that satisfy Tonelli's assumptions but whose minimizers are not Lipschitz. Several regularity results appeared on the subject (see [21,23,33]), each requiring some extra assumptions on the state or velocity variable, e.g., local Lipschitz conditions on the state variable or Tonelli–Morrey type conditions, mostly motivated by the use of the Weierstrass inequality or Clarke's Maximum Principle. A Lipschitz regularity result without additional smoothness or convexity requirements on the state and velocity variables was obtained by P. Bettiol and C. Mariconda in [8,6] under the growth condition (H). The price to pay, with respect to Tonelli's assumptions, is the additional local Lipschitz Condition (S) on the time variable, thoroughly examined in § 3, requiring that s ↦ Λ(s, y, u) is locally Lipschitz and that, for all (y, u),
|D_s Λ(s, y, u)| ≤ κΛ(s, y, u) + A|u| + γ(s) for a.e. s ∈ [t, T], (1.1)
for some κ, A ≥ 0 and γ ∈ L¹([t, T]). Moreover, it turns out, without the need of any growth condition, that Λ is somewhat radially convex in the velocity variable along any given minimizer y*, in the sense that, for a.e. s ∈ [t, T], the map r ∈ ]0, +∞[ ↦ Λ(s, y*(s), r y*′(s)) has a nonempty subdifferential at r = 1, in the sense of convex analysis. The role of radial convexity in Lipschitz regularity was prefigured in [20] by the fact that the velocity constraint is a cone, and was first explicitly formulated for autonomous Lagrangians by C. Mariconda and G. Treu in [31]. The celebrated example by J. M. Ball and V. J. Mizel in [4] of a nonautonomous polynomial Lagrangian that is superlinear and convex in the velocity variable shows that the violation of Condition (S) may lead not only to minimizers that are not Lipschitz, but even to the Lavrentiev phenomenon, i.e., the fact that the infimum of the functional among Lipschitz trajectories is strictly greater than its infimum among absolutely continuous ones. Condition (S) is not new and appeared in several results.
It is sufficient for the validity of the Du Bois-Reymond–Erdmann equation (see [18] for smooth Lagrangians, and [8] for a discussion in the general case) and, if one replaces in Tonelli's assumptions the superlinearity condition with the slower growth (H), it plays an essential role in establishing the Lipschitz continuity of minimizers in F. Clarke and R. Vinter's [23, Corollary 3]. Furthermore, F. Clarke proved in [20] that it provides existence of a minimizer, which is actually Lipschitz. The Lavrentiev phenomenon has been widely reconsidered in the 1980s, a long time after M. Lavrentiev and B. Manià realized (see [29]) that such a pathology could occur. Here again, the autonomous case stands on its own: G. Alberti and F. Serra Cassano proved in [1] that the Lavrentiev phenomenon never occurs if Λ(y, u) is just Borel, possibly extended valued; we refer to [9], [10], [11] for more insights on Lavrentiev's gap. More precisely, if y(·) is an admissible trajectory and s ↦ Λ(y(s), y′(s)) ∈ L¹([t, T]), there is no Lavrentiev gap at y, i.e., there is a sequence (y_j)_j of Lipschitz functions that share the same boundary values with y, converging to y in W^{1,1}([t, T]) and in energy, i.e., I(y_j) → I(y) as j → +∞.
In the nonautonomous case some additional conditions have to be added. To the author's knowledge, the criteria for the avoidance of the Lavrentiev phenomenon either follow trivially from the fact that minimizers exist and are Lipschitz or, as in [28,35,37], they require that Λ is locally Lipschitz or Hölder continuous in the state variable. One of the reasons is that, as was pointed out by D. Carlson in [14], many of the published results can actually be obtained as a consequence of Property (D) introduced by L. Cesari and T. S. Angell in [19]. Regularity conditions on the state variable or convexity in the velocity variable are not satisfied in several problems arising from real life; discontinuous Lagrangians appear for instance in models arising from combustion in non-homogeneous media or light propagation in the presence of layers. Some natural questions arise, and are addressed in the paper:
1. When existence of a minimizer fails because of the lack of continuity of the Lagrangian with respect to the state or velocity variable, can one at least approach the infimum of the functional I through the values of I along "nice" minimizing sequences (say equi-Lipschitz)?
2. May Condition (S) on the time variable replace the customary regularity assumptions in the state variable, in order to prevent the Lavrentiev phenomenon?
Problem 1 was considered by A. Cellina and A. Ferriero in [16] for autonomous, continuous Lagrangians that are convex or differentiable in the velocity variable and satisfy the growth condition (G). While, in the real valued, convex case, this result may be seen as a consequence of [20, Theorem 2] ((G) implies (H), so that minimizers exist and are Lipschitz), new results arise in the differentiable case or in the extended valued framework, when Condition (G) and Condition (H), in its original formulation (as in [8,7,20]), may not even overlap. Most of the present work is based on the intuition that some steps of the proof of the main result in [16] for the basic problem of the calculus of variations could actually be carried out in a more general setting: namely, under a weaker growth condition of type (H) instead of (G), with no continuity assumptions in the state and velocity variables, no convexity in the velocity variable, and in the slightly wider framework of optimal control problems with a controlled-linear dynamics.
In this article we consider the more general Bolza optimal control problem (P_{t,x}) of minimizing an integral functional J_t(y, u) := ∫_t^T Λ(s, y(s), u(s)) ds + g(y(T)) among the absolutely continuous arcs y : [t, T] → Rⁿ that have a prescribed value y(t) = x ∈ Rⁿ at t and are subject to a state constraint and to a control-linear differential equation; here g is positive, possibly extended valued, and U is a cone. If b is the identity matrix, g is the indicator function of a point and S = Rⁿ, problem (P_{t,x}) is the basic problem of the calculus of variations. The same Bolza problem was considered in [7]; the particular form of the dynamics is motivated by the reparametrization techniques used to obtain the results. The results thus apply, for instance, to the class of problems of Grushin type (see [30]) and to control problems related to sub-Riemannian metrics (see [3]). We take nonautonomous Lagrangians which are Lebesgue-Borel measurable and possibly extended valued. We assume that the Lagrangian is measurable, has at least a linear growth from below and satisfies Condition (S). We admit two different types of Lagrangians: those that are radially convex w.r.t. the control variable and those that are partially differentiable w.r.t. the control variable; no kind of lower semicontinuity nor global convexity in the state or control variable is required. The extended valued case needs some extra assumptions. In this situation we impose, moreover, that Λ tends uniformly to +∞ at the boundary of its effective domain Dom(Λ), together with some structure conditions on Dom(Λ) that are satisfied if, for instance, Λ(s, y, u) = a(s, y)L(u) where a is real valued and Dom(L) is star-shaped. In Section 4 we study various "slow" growth conditions and describe how they are related.
When Λ is smooth, Condition (G) imposes that Λ(s, y, u) − u · ∇_u Λ(s, y, u) → −∞ as |u| → +∞, uniformly as s and y vary in bounded sets. The interpretation of (G) can be easily understood by noticing that Λ(s, y, u) − u · ∇_u Λ(s, y, u) is the value of the intersection with the w axis of the tangent hyperplane to v ↦ Λ(s, y, v) at v = u. Condition (G) has been considered in the framework of the autonomous case in [15,16,17] and extended to the nonautonomous case in [8].
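As a sanity check of this tangent-intercept interpretation (the computation is ours), consider the almost linear Lagrangian Λ(u) = u − √u (u > 0) and the Lagrangian Λ(u) = √(1 + u²) recalled in the introduction:

```latex
\[
\Lambda(u)=u-\sqrt{u}:\qquad
\Lambda(u)-u\,\Lambda'(u)
  = u-\sqrt{u}-u\Bigl(1-\frac{1}{2\sqrt{u}}\Bigr)
  = -\frac{\sqrt{u}}{2}\ \longrightarrow\ -\infty
  \quad (u\to+\infty),
\]
so the intercept of the tangent line tends to $-\infty$ and (G) holds, whereas
\[
\Lambda(u)=\sqrt{1+u^2}:\qquad
\Lambda(u)-u\,\Lambda'(u)
  = \sqrt{1+u^2}-\frac{u^2}{\sqrt{1+u^2}}
  = \frac{1}{\sqrt{1+u^2}}\ \longrightarrow\ 0
  \quad (u\to+\infty),
\]
```

so in the second case the intercept stays bounded and (G) fails, in accordance with the discussion above.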
In the smooth setting the original Condition (H), as formulated in [20] for the calculus of variations and in [7,12] for the optimal control problem considered here, requires that, once (y(·), u(·)) is an admissible pair for (P_{t,x}), inequalities of the form (1.3)-(1.4) hold, where κ, A, γ are as in (1.1). At first glance, Condition (H) may appear quite involved, since it relies on the ess inf of a given admissible pair (y, u) and on Φ(y(·), u(·)), a function depending on Condition (S). In the autonomous case it appears to be more flexible, since in that case Φ ≡ 0 (see Figure 1 for the interpretation of Condition (H) in the simple case of a Lagrangian of a positive real control variable). However, as anticipated in [20, Theorem 3] and proved in [7], conditions (1.3)-(1.4) represent merely a violation of the Du Bois-Reymond–Erdmann equation for high values of the velocity/control. With respect to [8,7,12,20] we formulate here Condition (H) in a slightly different way, for several reasons. We take into account that the initial time t and value x may vary. Furthermore, in the extended valued case the formulation given in § 4.3 widens the class of functions that satisfy (1.4) (see Remark 4.14 and Example 7.2); as a byproduct, the validity of (G) now implies that of (H) in any "reasonable" case (Proposition 4.17). At the same time, at least in the real valued and nonautonomous case, our Condition (H) is slightly more restrictive than the original one, due to the presence, in (1.4), of a technical factor 2 in front of Φ(y(·), u(·)); this does not seem, however, to have any consequence in concrete applications.
A new growth condition (M) is introduced in § 4.4, so weak that it is in fact satisfied by any Lagrangian that is bounded on the bounded sets "well-inside" the effective domain (in the sense of Definition 4.15) and radially convex in the control variable. In the real valued, smooth case it simply requires, for suitable c > 0 and ν > 0, the pointwise inequality (1.5). The main result, formulated in Theorem 5.1, considers the two different types of growth (H) or (M):
• If Condition (H) is verified, it states that, whenever (y, u) is admissible for (P_{t,x}), there is an admissible pair (ȳ, ū), where ȳ is Lipschitz and ū is bounded, such that J_t(ȳ, ū) ≤ J_t(y, u).
Moreover, the Lipschitz rank of ȳ and the bound on ū are uniform as t, x vary in compact sets.
• If the less restrictive Condition (M) holds, given η > 0 we still get a pair (ȳ, ū) with the above regularity properties, satisfying J_t(ȳ, ū) ≤ J_t(y, u) + η.
Several examples are provided in § 7 to illustrate the growth conditions involved in the article and the applicability of the results.
In the proof of Theorem 5.1 the Maximum Principle cannot be invoked, due to the lack of Lipschitz continuity of the Lagrangian in the state variable. Instead, we extend the method of [16] to this more general framework in order to build the desired Lipschitz function ȳ via a Lipschitz reparametrization of y. Without entering into the several technical points of the proof, it may be of interest to briefly illustrate the link between reparametrizations and growth conditions. For simplicity, consider the case of the calculus of variations. Let ϕ be a smooth, increasing change of variable on [t, T], y be an admissible trajectory for (P_{t,x}), and set ȳ(s) := y(ϕ⁻¹(s)). Notice that, by taking high values of ϕ′(τ), one lowers the norm of the derivative of ȳ at s = ϕ(τ). The change of variable s = ϕ(τ) yields
∫_t^T Λ(s, ȳ(s), ȳ′(s)) ds = ∫_t^T Λ(ϕ(τ), y(τ), y′(τ)/ϕ′(τ)) ϕ′(τ) dτ.
Supposing Λ smooth, the derivative of µ ↦ Λ(ϕ, y, u/µ)µ at µ = 1 is Λ(ϕ, y, u) − u · ∇_u Λ(ϕ, y, u). The proof consists in finding a suitable increasing and one-to-one change of variable ϕ : [t, T] → [t, T]. By choosing ν, c > 0 as in (1.4) (resp. (1.5)), conditions of type (H) (resp. (M)) allow to compensate the values of the integral in I on the sets where |y′| > ν with those on the sets where |y′| < c, so as to obtain a value lower than I(y) (resp. I(y) + η). The essential ideas of the multiple step proof of Theorem 5.1 are described at the beginning of Section 9 for the convenience of the reader. Many technical issues are actually related to the fact that the Lagrangian is allowed to take the value +∞; we invite the reader interested in the real valued case to consult the simplified version in the announcement of the results given in [27]. It is worth mentioning that, in the proof of Theorem 5.1, the two growth conditions (H) and (M) share most of the arguments; their difference plays a role in just a few of the many steps.
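As a quick numerical illustration of the two computations above (a toy sketch of ours, not part of the paper), one can check the change-of-variable identity and the derivative of µ ↦ µΛ(y, u/µ) at µ = 1 for the simple data Λ(u) = u², y(τ) = τ², ϕ(τ) = τ² on [0, 1], so that ȳ(s) = s:

```python
# Toy check of the reparametrization identity for the calculus of
# variations (b = 1): with ybar(s) = y(phi^{-1}(s)), the change of
# variable s = phi(tau) gives
#   int_0^1 Lam(ybar'(s)) ds = int_0^1 Lam(y'(tau)/phi'(tau)) phi'(tau) dtau.
# Data (our choice): Lam(u) = u^2, y(tau) = tau^2, phi(tau) = tau^2,
# so that ybar(s) = s and ybar'(s) = 1.

Lam = lambda u: u * u

def midpoint(f, a, b, n=20000):
    """Midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

lhs = midpoint(lambda s: Lam(1.0), 0.0, 1.0)                      # I(ybar)
rhs = midpoint(lambda t: Lam((2 * t) / (2 * t)) * (2 * t), 0.0, 1.0)

# Derivative of mu -> mu * Lam(u/mu) at mu = 1 versus Lam(u) - u * Lam'(u).
u, eps = 3.0, 1e-6
g = lambda mu: mu * Lam(u / mu)
numeric = (g(1 + eps) - g(1 - eps)) / (2 * eps)
exact = Lam(u) - u * (2 * u)        # Lam'(u) = 2u for Lam(u) = u^2

print(abs(lhs - rhs) < 1e-6, abs(numeric - exact) < 1e-3)
```

The second check shows concretely that the quantity Λ − u · ∇_u Λ appearing in Condition (G) measures the sensitivity of the integrand to the reparametrization rate µ.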
This fact seems to be a byproduct of the care needed to deal with Condition (H) and was unnoticed in [16], where the authors consider the more restrictive (but easier to handle) growth of type (G). Theorem 5.1 has several consequences. Under Condition (H), Corollary 5.5 yields "nice" minimizing sequences for (P_{t,x}) formed by equi-Lipschitz trajectories and equi-bounded controls as t, x vary in compact sets. This property, which does not need the existence of minimizers, is expected to have a strong impact on the study of the regularity of the value function V(t, x) = inf (P_{t,x}) and will be investigated in [5] (in preparation).
As a further consequence of the main result, we obtain the existence of a solution to the optimal control problem (P_{t,x}) under slow growth conditions when, in addition to the conditions of Theorem 5.1, one imposes some standard lower semicontinuity of Λ(s, y, u) in (y, u), convexity with respect to u, closedness of the state constraint set S, closedness and convexity of the control set. Corollary 6.2 almost overlaps [20, Theorem 3] when the problem concerns the calculus of variations (the major difference lies in the version of Condition (H) mentioned above), but seems to be new in the framework of optimal control problems. Existence for more general controlled differential equations than (1.2) was considered in the autonomous case in [12]. However, though the controlled-linear structure of the system (1.2) might appear restrictive, the novelty with respect to the known literature is represented here by the absence of any kind of local Lipschitz condition on the state variable, and by the fact that Λ may be extended valued. Theorem 5.1 also provides some answers related to Problem 2. The Lavrentiev phenomenon is excluded in Corollary 5.7 for a wide class of Lagrangians, assuming a growth condition of type (M). In particular it is avoided (Corollary 5.9) when Λ is real valued and, moreover: a) Λ satisfies Condition (S); b) Λ is radially convex in the control variable; c) Λ is bounded on bounded sets.
We stress again that, differently from other results on the Lavrentiev phenomenon for optimal control problems (see [12,25,26]), we do not assume any kind of Lipschitz continuity of Λ in the state variable, nor do we make use of the Maximum Principle.

Basic setting and notation
Let 0 ≤ t < T and x ∈ Rⁿ. We consider the Bolza type optimal control problem
min J_t(y, u) := ∫_t^T Λ(s, y(s), u(s)) ds + g(y(T)) (P_{t,x})
subject to
y′(s) = b(y(s))u(s) for a.e. s ∈ [t, T], y(t) = x, y(s) ∈ S for all s ∈ [t, T], u(s) ∈ U for a.e. s ∈ [t, T], (2.1)
with the following basic assumptions.
Basic Assumptions and Notation. The following conditions hold.
• The Lagrangian Λ : [0, T] × Rⁿ × Rᵐ → [0, +∞] is Lebesgue-Borel measurable, i.e., measurable with respect to the σ-algebra generated by products of Lebesgue measurable subsets of [0, T] (for s) and Borel measurable subsets of Rⁿ × Rᵐ (for (y, u));
• The function b : Rⁿ → Rⁿ×ᵐ is a Borel measurable function satisfying a bound involving some θ ≥ 0. We refer to y′ = b(y)u as the controlled differential equation;
• The control u : [t, T] → Rᵐ is measurable;
• The state constraint set S is a nonempty subset of Rⁿ;
• The control set ∅ ≠ U ⊆ Rᵐ is a cone, i.e., if u ∈ U then λu ∈ U whenever λ > 0;
• (Linear growth from below) There are α > 0 and d ≥ 0 satisfying, for a.e. s ∈ [0, T] and every y ∈ Rⁿ, u ∈ U,
Λ(s, y, u) ≥ α|u| − d. (2.2)
We assume that, for a.e. s ∈ [0, T] and every y ∈ Rⁿ, the set {u ∈ Rᵐ : (s, y, u) ∈ Dom(Λ)} is strictly star-shaped in the variable u w.r.t. the origin, i.e., Λ(s, y, u) < +∞, 0 < r ≤ 1 ⇒ Λ(s, y, ru) < +∞.
• The cost function g : S → [0, +∞] is a given positive function, not identically equal to +∞. Notice that we allow g to take the value +∞, so that the class of problems studied here contains those with an end-point constraint.
Remark 2.1. Notice that it is not required that (s, y, 0) ∈ Dom(Λ) for any (s, y).
An admissible pair for (P_{t,x}) is a pair of functions (y, u) : [t, T] → Rⁿ × Rᵐ with u measurable, (y, u) satisfying (2.1) and such that J_t(y, u) < +∞. We assume henceforth that, for each t ∈ [0, T[ and x ∈ S, there exists at least an admissible pair for (P_{t,x}). A minimizing sequence (y_j, u_j)_j for (P_{t,x}) is a sequence of admissible pairs such that J_t(y_j, u_j) → inf (P_{t,x}) as j → +∞. Notice that, in the particular case where the function b is the identity matrix in the controlled differential equation, (P_{t,x}) becomes a problem of the Calculus of Variations.
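For concreteness, here is a minimal numerical sketch (toy data of ours, not from the paper) of an admissible pair and of the value J_t(y, u) in the calculus-of-variations case b = 1, with Λ(s, y, u) = u², g ≡ 0, t = 0, T = 1, x = 0 and the constant control u ≡ 1:

```python
# Discretized evaluation of J_t(y, u) = int_t^T Lam(s, y, u) ds + g(y(T))
# along the controlled differential equation y' = b(y) u, via explicit Euler.
# Toy data (our choice): b = 1 (calculus-of-variations case), Lam = u^2,
# g = 0, u(s) = 1, so y(s) = s and J_0(y, u) = int_0^1 1 ds = 1.

T, t0, x0 = 1.0, 0.0, 0.0
b = lambda y: 1.0                  # scalar "identity matrix"
Lam = lambda s, y, u: u * u
g = lambda yT: 0.0
control = lambda s: 1.0

n = 10000
h = (T - t0) / n
y, J = x0, 0.0
for k in range(n):
    s = t0 + k * h
    u = control(s)
    J += Lam(s, y, u) * h          # running cost (rectangle rule)
    y += b(y) * u * h              # Euler step for y' = b(y) u
J += g(y)                          # end-point cost

print(round(y, 6), round(J, 6))    # y(T) ≈ 1.0, J ≈ 1.0
```

Any other measurable control u and matrix field b can be substituted in this scheme; the pair is admissible as long as the state stays in S, the control stays in the cone U, and the accumulated cost is finite.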
If z ∈ Rᵏ we shall denote by B^k_r(z) (simply B^k_r if z = 0) the closed ball of center z and radius r ≥ 0 in Rᵏ. The norm in L¹ is denoted by ‖·‖₁, and the norm in L∞ by ‖·‖∞. If (s, y, u) ∈ Dom(Λ) we shall denote by dist((s, y, u), ∂Dom(Λ)) the Euclidean distance from (s, y, u) to the boundary of Dom(Λ) in [0, T] × Rⁿ × Rᵐ. We will denote by |·| both the norm in Euclidean spaces and the Lebesgue measure in R; the distinction will be clear from the context.

Assumption (A) and Condition (S)
In what follows, we assume the structure Assumption (A) on Λ(s, y, u) with respect to u (either radial convexity or partial differentiability) and the local Lipschitz Condition (S) on Λ(s, y, u) with respect to s.

Assumption (A)
We assume henceforth the following structure condition on Λ(s, y, ·).

Structure Assumption (A). At least one of the two following assumptions holds:
A_c) (Radial convex case) For a.e. s ∈ [0, T] and every y ∈ Rⁿ, u ∈ U, the map r ∈ ]0, +∞[ ↦ Λ(s, y, ru) is convex; or
A_d) (Partial differentiable case) For a.e. s ∈ [0, T] and every y ∈ Rⁿ, the map Λ(s, y, ·) has the partial derivative D_u Λ(s, y, u) at every u ∈ U.

Condition (S)
We will consider the following local Lipschitz condition on the Lagrangian Λ with respect to the time variable.
Condition (S) was considered in [8,6,20]. It is a nonsmooth extension of the condition that appears in [18] to establish the validity of the Du Bois-Reymond–Erdmann equation in the smooth setting. We show now that Condition (S) is satisfied if s ↦ Λ(s, y, u) fulfills a suitable growth condition. For y, u ∈ Rⁿ we denote by ∂^P_s Λ(s, y, u) the proximal subgradient of τ ↦ Λ(τ, y, u) at τ = s; it coincides with D_s Λ(s, y, u) (resp. the convex subgradient of τ ↦ Λ(τ, y, u) at τ = s) if Λ(·, y, u) is C² (resp. convex). We refer to [22] for more details on the subject.
a) The map s ↦ Λ(s, y, u) is lower semicontinuous for every y ∈ Rⁿ, u ∈ U;
b) There is β ≥ 0 such that, for all s ∈ [0, T], y ∈ Rⁿ, u ∈ U: |∂^P_s Λ(s, y, u)| ≤ β(Λ(s, y, u) + |u| + 1).
• If β > 0, then
In both cases it turns out that Λ satisfies Condition (S).
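As an illustration of the kind of bound required by (1.1) (the example is ours, not taken from the text), consider a product Lagrangian Λ(s, y, u) = a(s)√(1 + |u|²) with a Lipschitz and a(s) ≥ a₀ > 0. Then, for a.e. s,

```latex
\[
|D_s\Lambda(s,y,u)|
  = |a'(s)|\,\sqrt{1+|u|^2}
  \le \frac{\|a'\|_\infty}{a_0}\,a(s)\sqrt{1+|u|^2}
  = \kappa\,\Lambda(s,y,u),
\qquad \kappa := \frac{\|a'\|_\infty}{a_0},
\]
```

so that (1.1) holds with this κ, A = 0 and γ ≡ 0: Condition (S) is satisfied.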

Growth conditions
We introduce here the growth Conditions (G), (H^δ_B), (M^δ_B) that are less restrictive than superlinearity. We leave some examples and proofs to Section 8.

Partial derivatives and subgradients
In what follows we often deal with subdifferentials in the sense of convex analysis.

The Growth Condition (G)
The growth assumptions introduced below involve some uniform limits.
The growth Condition (G) was thoroughly studied by Cellina and his school for autonomous Lagrangians of the calculus of variations that are smooth or convex in the velocity variable. The extension to the radially convex case, recalled here, was considered in [31] in the autonomous case and was introduced in [6,7] for the nonautonomous case. The growth Condition (G) below subsumes the validity of the structure Assumption (A). The partially differentiable case is new.
Growth Condition (G). We say that Λ satisfies (G) if (just) one of the following assumptions holds. Either: Or, alternatively, the following differentiable condition holds.
where h ≥ 0 is Borel and bounded on bounded sets. Then Λ satisfies Condition (G), the limit being uniform for s ∈ [0, 1] and y in bounded sets.
The next proposition was formulated for the autonomous case in [16] under the stronger assumption that Λ(y, u) is either convex or differentiable in u. Its proof is postponed to Section 8. Proposition 4.6 (Condition (G) implies linear growth). Assume that Λ fulfils Condition (G). Then Λ has a linear growth from below, i.e., there are α > 0 and d ∈ R such that (2.2) holds for a.e. s ∈ [0, T ] and every y ∈ R n , u ∈ U.
Superlinearity plays a key role in Tonelli's existence theorem. It has been widely used as a sufficient condition for Lipschitz regularity of minimizers (see [2,23,24]).

Superlinearity. There exists a function Θ : [0, +∞[ → R with lim_{r→+∞} Θ(r)/r = +∞ such that, for a.e. s ∈ [0, T] and every y ∈ Rⁿ, u ∈ U, Λ(s, y, u) ≥ Θ(|u|).
If Λ(s, y, ·) is radially convex, then superlinearity, together with some local boundedness condition, implies the validity of the growth Condition (G). We refer to [8, Proposition 2] for the proof of the following result.

Growth Condition (H^δ_B)
When B is an upper bound for the values of J_t along a prescribed family of admissible pairs, with the initial time t varying in [0, δ], the following quantities c_t(B) and Φ(B) will play a role in the proof of the main results.
Moreover, if Condition (S) holds, we define Φ(B), where κ, A, γ are set equal to 0 if Λ is autonomous.
The next result highlights the roles of Φ(B) and c_t(B) and is a key tool in the proof of Theorem 5.1.
Proof. 1. Condition (2.2) and the fact that 3. It is enough to notice that, from (4.4),
Given B ≥ 0 and δ ∈ [0, T[, the growth Condition (H^δ_B) below subsumes the validity of Assumption (A) as well as of Condition (S). It will be applied in Theorem 5.1 when B is an upper bound for the values of a given set of admissible pairs for problems (P_{t,x}) as t ∈ [0, δ]. In the autonomous case Φ(B) is assumed without restriction to be equal to 0, for then we may take κ = A = 0 and γ ≡ 0 (see Remark 3.2).
if for all K ≥ 0 there are ν > 0 and c > c_δ(B) satisfying (just) one of the following assumptions. Either: Or, alternatively, 3. With the above notation, In particular, Condition (H^δ_B) is satisfied if, for some ν > 0, |u| < ν whenever (s, y, u) ∈ Dom(Λ) and the "inf" term in (4.8), or in (4.9), is not equal to −∞.
whereas, for any c > 0, Notice that Λ does not satisfy Condition (G), since
Remark 4.13. It is useful to clarify the quantities that appear in Condition (H^δ_B).
1. It follows from Remark 4.9 that if B′ > B ≥ 0 then the validity of Condition (H^δ_{B′}) implies that of (H^δ_B).
• (the role of B) In view of Proposition 4.10, the initial assumption on B ensures that, if (y, u) is admissible for (P_{t,x}) (t ≤ δ), then, for K ≥ ‖y‖∞ and c > c_δ(B), the set is nonempty, in such a way that the infimum in the right-hand side of (4.5)-(4.7) is not equal to +∞.
Remark 4.14. Condition (H^δ_B) represents a violation of the Du Bois-Reymond–Erdmann condition for high values of the control variable (see [7]). Condition (H^0_B) was introduced in [20] for a fixed initial time problem t = 0 in the following setting:
• a convex problem of the calculus of variations with a real valued Lagrangian;
• B equal to any upper bound of J_t(y, y′) for a suitable admissible trajectory y.
The present formulation is suitable for classes of admissible trajectories as the initial time varies in an interval [0, δ]. We point out that, with respect to the original version, the term 2Φ(B) in the right-hand side of (4.6) replaces Φ(B), so that our condition, in the nonautonomous case, is slightly more restrictive than [20, Hypothesis (H2)]. We do not have, however, an example where this represents a true drawback. At the same time, the present version enlarges the realm of application in several new aspects: it takes into account the partially differentiable case, variable initial time/position (which involves the choice of B, c_δ(B) and Φ(B) in Definition 4.8) and, in (4.5), (4.7), the additional requirement that the infimum is taken just for the points of the effective domain that satisfy dist((s, y, u), ∂Dom(Λ)) ≥ ρ, which gives more chances for the condition to be satisfied. This fact is clarified in Example 7.2.
The next Proposition 4.17 shows that the infimum in (4.8)-(4.9) is finite under a natural assumption related to the boundedness of Λ on bounded sets that are far away from the boundary of the domain. In this situation, the validity of Condition (G) implies that of Condition (H^δ_B), whatever the choices of B and 0 ≤ δ < T.
as is the case under Assumption h₁) of Theorem 5.1, the notion of "well-inside" coincides with that of relatively compact subset.
Assume that Λ is bounded on the bounded sets that are well-inside Dom(Λ) and that at least one of the following structure conditions holds: a) Λ is radially convex in the control variable as in A_c); or b) Λ(s, y, ·) is partially differentiable as in A_d) and is uniformly Lipschitz for (s, y, u) in each bounded set that is well-inside the domain.
The following properties hold:
(1) It is not restrictive to assume that W_{K,c,ρ} ≠ ∅, otherwise the infimum in (4.10) equals +∞. It follows either from Lemma 4.18 (radially convex case) or from the local Lipschitzianity of Λ well-inside the domain (partially differentiable case) that Q(s, y, u) is bounded above on W_{K,c,ρ}. The local boundedness condition on Λ implies that
(2) In the partially differentiable case we set Q(s, y, u) := D_u Λ(s, y, u); otherwise, let Q(s, y, u) ∈ ∂_r Λ(s, y, ru)|_{r=1} be such that
Then Q is bounded on the bounded sets that are well-inside Dom(Λ).
Proof. Let (s, y, u) ∈ Dom(Λ) with |y| + |u| ≤ C for some C > 0 and dist((s, y, u), ∂Dom(Λ)) ≥ ρ for some ρ > 0. The boundedness assumption on Λ then implies that Q(s, y, u) is bounded above by a constant depending only on C and ρ. Similarly, we deduce a lower bound for Q.
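In the radially convex case, the inequalities behind these bounds can be sketched as follows (our reconstruction of the standard convexity estimate): for Q(s, y, u) ∈ ∂_r Λ(s, y, ru)|_{r=1} and 0 < λ ≤ ρ/(2C),

```latex
\[
\frac{\Lambda(s,y,u)-\Lambda\bigl(s,y,(1-\lambda)u\bigr)}{\lambda}
\ \le\ Q(s,y,u)\ \le\
\frac{\Lambda\bigl(s,y,(1+\lambda)u\bigr)-\Lambda(s,y,u)}{\lambda},
\]
```

since the map r ↦ Λ(s, y, ru) is convex. The points (s, y, (1 ± λ)u) move at most λ|u| ≤ ρ/2 away from (s, y, u), hence they remain in a bounded set at distance at least ρ/2 from ∂Dom(Λ), where Λ is bounded; both difference quotients are then controlled by a constant depending only on C and ρ.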

Growth Condition (M^δ_B)
We introduce here a new condition that turns out to be satisfied by a wide class of Lagrangians. Assume that Λ is radially convex in the control variable as in A_c). Assume, moreover, that: a) Λ is bounded on the bounded sets that are well-inside Dom(Λ). Then Λ satisfies Condition (M^δ_B) for any choice of B ≥ 0 and δ ∈ [0, T[. In particular, the condition is satisfied whenever Λ(s, y, u) is real valued, continuous and radially convex in the control variable.

Nice admissible pairs
Theorem 5.1 is the core of the paper. Some examples that illustrate its applications are postponed to Section 7, whereas Section 9 is entirely devoted to its proof.
Theorem 5.1 (Nice admissible pairs). Suppose that Λ satisfies Assumption (A) and Condition (S). Let δ ∈ [0, T[, δ* ≥ 0, x* ∈ Rⁿ, and let A be a family of admissible pairs for (P_{t,x}), for some t ∈ [0, δ] and x ∈ B^n_{δ*}(x*). Assume that J_t(y, u) ≤ B for some B ≥ 0, whenever (y, u) ∈ A. Unless Λ is real valued assume, moreover, the following condition:
h₂) Λ tends uniformly to +∞ at the boundary of the effective domain.
The following claims hold:
1. Suppose that Λ satisfies Condition (H^δ_B). Then there is a constant K_A such that, for every admissible pair (y, u) ∈ A for (P_{t,x}), there exists an admissible pair (ȳ, ū) for (P_{t,x}) such that:
(a) ȳ = y ∘ ψ, where ψ is a Lipschitz reparametrization of [t, T];
(b) ū is bounded and ȳ is Lipschitz, with ‖ū‖∞ and the Lipschitz rank of ȳ bounded by K_A, and J_t(ȳ, ū) ≤ J_t(y, u), the inequality being strict if u is not bounded.
2. Suppose that Λ satisfies Condition (M^δ_B) and let η > 0. Then the conclusions (a), (b) of Claim (1) remain valid with K_A possibly depending on η and, moreover, J_t(ȳ, ū) ≤ J_t(y, u) + η.

Hypothesis h₂) implies that the effective domain Dom(Λ) is open in [0, T] × Rⁿ × Rᵐ.
3. In the case of a family A reduced to a single admissible pair (y, u) for (P_{t,x}), the requirement of the validity of Condition (H^δ_B) (resp. (M^δ_B)) in Theorem 5.1 may be replaced by that of the validity of (H^t_{J_t(y,u)}) (resp. (M^t_{J_t(y,u)})). The existence of an upper bound B in Theorem 5.1 is ensured, obviously, if the family A is reduced to a singleton {(y, u)}, in which case B = J_t(y, u) is a suitable choice. Some sufficient conditions for the existence of B may be obtained when Λ is real valued.
Lemma 5.3 (A uniform upper bound for the infima of (P_{t,x})). Assume that Λ is finite valued and bounded on bounded sets. Suppose that one of the following two assumptions holds:
1. either b = 1 in the controlled differential equation, S is convex and U = Rᵐ, or
2. the cost function g is real valued, locally bounded and 0 ∈ U.
Proof. We consider separately the cases 1 and 2.
(1) Let $\xi^* \in S$ be such that $g(\xi^*) < +\infty$, and let $y$ be the affine function with $y(t) = x$ and $y(T) = \xi^*$. For every $s \in [t, T]$, $y(s)$ belongs to the segment joining $x$ with $\xi^*$, and $|y'(s)| = |\xi^* - x|/(T - t) \le (|\xi^* - x^*| + \delta^*)/(T - \delta)$. It follows from the boundedness of $\Lambda$ on bounded sets that there is a constant $B$, depending only on $x^*, \delta, \delta^*$, such that $J_t(y, y') \le B$.
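In Case (1) the competitor can be written in closed form. The following display is a sketch of one natural choice of the affine pair, consistent with (though not quoted verbatim from) the proof:

```latex
% Affine trajectory joining x to \xi^*; admissible since b \equiv 1,
% S is convex and U = \mathbb{R}^m:
y(s) = x + \frac{s-t}{T-t}\,(\xi^* - x), \qquad
u(s) = y'(s) = \frac{\xi^* - x}{T-t}, \qquad s \in [t,T].
% Since t \le \delta, we have T - t \ge T - \delta > 0, whence
% |u(s)| \le \frac{|\xi^* - x^*| + \delta^*}{T - \delta},
% a bound depending only on x^*, \delta, \delta^*.
```

Both $y$ and $u$ then range in a bounded set determined by $x^*, \delta, \delta^*$ alone, which is what the boundedness of $\Lambda$ on bounded sets requires.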
(2) Assume that $0 \in U$. The fact that $\Lambda$ is bounded on bounded sets and that $g$ is bounded on $B^n_{\delta^*}(x^*)$ implies that the right-hand side of (5.1) is bounded above by a constant depending only on $x^*, \delta, \delta^*$, whence the claim.

Nice minimizing pairs
As a consequence of Theorem 5.1, we obtain the existence of equi-Lipschitz minimizing sequences under a growth assumption of type (H).

Corollary 5.5 (Nice minimizing pairs).
1. Let $t \in [0, T[$, $x \in \mathbb R^n$ and suppose that $\Lambda$ satisfies Condition $(H^t_{J_t(y,u)})$ for a suitable admissible pair $(y, u)$ for $(P_{t,x})$. Then there are a minimizing sequence $(y_j, u_j)_j$ for $(P_{t,x})$ and a constant $K_{t,x}$ such that $\|y_j\|_\infty \le K_{t,x}$, $\|u_j\|_\infty \le K_{t,x}$ and each $y_j$ is Lipschitz of rank $K_{t,x}$.
2. (Uniformity w.r.t. $t, x, j$) Let $0 \le \delta < T$, $x^* \in \mathbb R^n$ and $\delta^* \ge 0$. Assume that, for some $B \ge 0$ and any $t \in [0, \delta]$, $x \in B^n_{\delta^*}(x^*)$, there is an admissible pair $(y, u)$ for $(P_{t,x})$ satisfying $J_t(y, u) \le B$. Furthermore, suppose that $\Lambda$ satisfies Condition $(H^\delta_B)$. Then the constants $K_{t,x}$ in Claim (1) may be chosen to be uniformly bounded above with respect to $t \in [0, \delta]$, $x \in B^n_{\delta^*}(x^*)$.

Proof. 1. Consider any minimizing sequence $(y_j, u_j)_j$ for $(P_{t,x})$ with $J_t(y_j, u_j) \le J_t(y, u)$ for every $j \in \mathbb N$. The application of Claim (1) of Theorem 5.1 with $\mathcal A = \{(y_j, u_j) : j \in \mathbb N\}$, $B = J_t(y, u)$, $\delta = t$, $\delta^* = 0$ and $x^* = x$ yields the claim.

2. Let $t \in [0, \delta]$, $x \in B^n_{\delta^*}(x^*)$ and consider any minimizing sequence $(y^{t,x}_j, u^{t,x}_j)_j$ for $(P_{t,x})$; we may assume without restriction that $J_t(y^{t,x}_j, u^{t,x}_j) \le B$ for every $j \in \mathbb N$. The application of Claim (1) of Theorem 5.1 with $\mathcal A = \{(y^{t,x}_j, u^{t,x}_j) : j \in \mathbb N,\ t \in [0, \delta],\ x \in B^n_{\delta^*}(x^*)\}$ allows us to conclude.

Remark 5.6. The construction of an equi-Lipschitz minimizing sequence was considered under Condition (G) in [16] for continuous and autonomous Lagrangians of the calculus of variations, under prescribed boundary data and conditions, assuming either convexity or differentiability of $\Lambda(y, y')$ with respect to the ``velocity'' variable $y'$. Corollary 5.5 extends [16, Theorems 1, 2, 3, 4] in several directions.

Avoidance of the Lavrentiev phenomenon
Another consequence of Theorem 5.1 is the avoidance of the Lavrentiev phenomenon under Condition (S) and the weaker growth condition of type (M).
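Recall what nonoccurrence of the Lavrentiev phenomenon means in the present controlled setting: restricting the infimum to pairs with bounded control (equivalently, to Lipschitz trajectories) does not increase the value. Schematically:

```latex
\inf_{(y,u)\ \text{admissible}} J_t(y,u)
\;=\;
\inf_{\substack{(y,u)\ \text{admissible}\\ u \in L^\infty(t,T;\mathbb{R}^m)}} J_t(y,u).
```

This equality is precisely what the minimizing sequences with bounded controls produced below yield.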
Unlike other results in the literature (see [25, 37]), we assume neither continuity of $\Lambda$, nor local Lipschitz continuity of $\Lambda(s, \cdot, u)$, nor global convexity in the control variable, and we make no use of the Maximum Principle. With respect to Corollary 5.5 we lose the equi-boundedness of the minimizing sequences of controls; nevertheless, in Claim (2) we still keep some uniformity with respect to the initial time and state, for any given index $j$ of the sequence.
1. Let $t \in [0, T[$, $x \in \mathbb R^n$ and suppose that $\Lambda$ satisfies Condition $(M^t_{J_t(y,u)})$ for a suitable admissible pair $(y, u)$ for $(P_{t,x})$. Then there is a minimizing sequence $(y^{t,x}_j, u^{t,x}_j)_j$ for $(P_{t,x})$ where, for each $j \in \mathbb N$, $u^{t,x}_j$ is bounded and $y^{t,x}_j$ is Lipschitz.

2. (Uniformity w.r.t. $t, x$) Let $0 \le \delta < T$, $x^* \in \mathbb R^n$, $\delta^* \ge 0$ and assume that $\Lambda$ satisfies Condition $(M^\delta_B)$, with $B$ as in Claim (2) of Corollary 5.5. Then, in Claim (1), one may choose the minimizing sequences in such a way that, for all $j \in \mathbb N$, there is a suitable constant $K_j$ such that $\|u^{t,x}_j\|_\infty \le K_j$ and the Lipschitz rank of $y^{t,x}_j$ is less than $K_j$, as $t$ varies in $[0, \delta]$ and $x$ varies in $B^n_{\delta^*}(x^*)$.
Proof. The proof follows the lines of that of Corollary 5.5, making use of Claim (2) of Theorem 5.1 instead of Claim (1).

1. Consider any minimizing sequence $(y_j, u_j)_j$ for $(P_{t,x})$ with $J_t(y_j, u_j) \le J_t(y, u)$ for every $j \in \mathbb N$. The application of Claim (2) of Theorem 5.1 with $\mathcal A = \{(y_j, u_j) : j \in \mathbb N\}$, $\delta = t$, $B = J_t(y, u)$, $\delta^* = 0$ and $x^* = x$ yields the claim.

2. Let $t \in [0, \delta]$, $x \in B^n_{\delta^*}(x^*)$ and consider any minimizing sequence $(y^{t,x}_j, u^{t,x}_j)_j$ for $(P_{t,x})$; we may assume without restriction that $J_t(y^{t,x}_j, u^{t,x}_j) \le B$ for every $j \in \mathbb N$.

Remark 5.8. Condition (S) plays an essential role here. In the framework of the calculus of variations, the celebrated example by Ball and Mizel in [4] exhibits a Lagrangian $\Lambda(s, y, y')$ that is polynomial, superlinear and convex in $y'$, for which the Lavrentiev phenomenon occurs for some suitable boundary data. As shown in [1], autonomous problems of the calculus of variations (therefore with the dynamics $y' = u$) do not exhibit the Lavrentiev phenomenon, regardless of the growth of the Lagrangian.
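For the reader's convenience, the Ball--Mizel example mentioned above is classically written in the following form (we quote one standard version as an illustration; see [4] for the precise statement and boundary data):

```latex
\Lambda(s,y,y') = \big(y^3 - s^2\big)^2\,|y'|^{14} + \varepsilon\,|y'|^2,
\qquad \varepsilon > 0 \ \text{small},
```

a polynomial Lagrangian, superlinear and convex in $y'$, for which the infimum over Lipschitz trajectories is strictly larger than the infimum over absolutely continuous ones for suitable boundary data.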
Taking into account Proposition 4.24, we deduce, as a particular case of Corollary 5.7, the nonoccurrence of the Lavrentiev phenomenon for a wide class of functionals with a real valued Lagrangian. We stress the fact that, apart from radial convexity in the control variable, no more regularity than measurability is required in the state and control variables.

Corollary 5.9. Suppose that $\Lambda$ is real valued, satisfies Condition (S), is bounded on bounded sets well inside the effective domain, and that:
• $\Lambda$ is radially convex in the control variable, i.e., for a.e. $s \in [0, T]$ and every $y \in \mathbb R^n$, $u \in U$, the map $0 < r \mapsto \Lambda(s, y, ru)$ is convex.
Then the Lavrentiev phenomenon for (P t,x ) does not occur.
Remark 5.10. Example 4.23 shows that there are Lagrangians that are not radially convex in the control variable for which the Lavrentiev phenomenon does not occur.

Existence and regularity of optimal pairs
The following Lipschitz regularity result somewhat extends, and partly overlaps with, those formulated in [8, 7] in the extended valued case. The lower semicontinuity assumption on $u \mapsto \Lambda(s, y, u)$ in [7, Theorem 4.2] is replaced here by hypotheses $h_1)$ and $h_2)$. Also, as explained above, the growth condition $(H^\delta_B)$ considered here is somewhat less restrictive than the one considered previously. At the same time we require here the structure assumption (A), not needed in [8, 7].

Corollary 6.1 (Lipschitz regularity). Suppose that $\Lambda$ satisfies Assumption (A) and Condition (S). Unless $\Lambda$ is real valued, assume hypotheses $h_1)$, $h_2)$ of Theorem 5.1. Let $t \in [0, T[$, $x \in \mathbb R^n$, let $(y^*, u^*)$ be an optimal pair for $(P_{t,x})$ and suppose that $\Lambda$ satisfies Condition $(H^t_{J_t(y^*, u^*)})$. Then $u^*$ is bounded and $y^*$ is Lipschitz.
Proof. If $u^*$ is not bounded, then Claim (1)(c) of Theorem 5.1 provides the existence of an admissible pair $(\overline y, \overline u)$ with $J_t(\overline y, \overline u) < J_t(y^*, u^*)$, a contradiction. Therefore $u^*$ is bounded. The controlled differential equation $y^{*\prime} = b(y^*)u^*$ then implies the Lipschitzianity of $y^*$.
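The last step can be made quantitative. Assuming, as in Corollary 6.2 below, that $b$ is continuous (so that it is bounded on the compact range of $y^*$), one obtains

```latex
|y^{*\prime}(s)| = |b(y^*(s))|\,|u^*(s)|
\;\le\; \Big(\max_{z \in y^*([t,T])} |b(z)|\Big)\,\|u^*\|_\infty
\qquad \text{for a.e. } s \in [t,T],
```

so that $y^*$ is Lipschitz with the explicit rank displayed above.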
The following existence result for the optimal control problem $(P_{t,x})$ follows easily from the previous claims. In the real valued case of the calculus of variations it gives back [20, Theorem 3], though Lagrangians with no linear growth from below fall outside the scope of our method. In the extended valued case of the calculus of variations our Condition (H) brings in some new cases with respect to the one considered in [20] (see Remark 4.14); at the same time, the uniform limit hypothesis $h_2)$ at the boundary of $\mathrm{Dom}(\Lambda)$ is more restrictive than the lower semicontinuity alone required in [20, Theorem 3]. In the framework of optimal control problems, the existence question under the same slow growth condition was considered for autonomous Lagrangians (where $\Phi(B) = 0$) in [12], with a more general controlled differential equation of the form $y' = f(y, u)$, assuming some extra regularity, e.g., local Lipschitz continuity of $\Lambda(y, u)$ and $f(y, u)$ with respect to the $y$ variable, not required here.

Corollary 6.2 (Existence and regularity of a solution to $(P_{t,x})$). Suppose that $\Lambda$ satisfies Assumption (A) and Condition (S). Unless $\Lambda$ is real valued, assume hypotheses $h_1)$, $h_2)$ of Theorem 5.1. Let $t \in [0, T[$, $x \in \mathbb R^n$. Suppose that $\Lambda$ satisfies Condition $(H^t_{J_t(y,u)})$ for some admissible pair $(y, u)$ for $(P_{t,x})$. Moreover, suppose the validity of the following structure conditions:
• For a.e. $s \in [t, T]$ the function $(y, u) \mapsto \Lambda(s, y, u)$ is l.s.c.;
• For a.e. $s \in [t, T]$ and every $y \in \mathbb R^n$ the function $u \mapsto \Lambda(s, y, u)$ is convex;
• The cost function $g$ is l.s.c. and $b$ is continuous;
• The set $U \subseteq \mathbb R^m$ is closed and convex and $S \subseteq \mathbb R^n$ is closed.
Then Problem (P t,x ) admits a solution (y * , u * ), with y * Lipschitz and u * bounded.
Proof. Let $(y_j, u_j)_j$ be a minimizing sequence for $(P_{t,x})$. We may assume, from Claim (1) of Corollary 5.5, that the $y_j$ are equi-Lipschitz and equi-bounded and that the controls $u_j$ are uniformly bounded. Ascoli's theorem implies that, up to a subsequence, $y_j$ converges uniformly to a Lipschitz function $y^*$ on $[t, T]$; the closedness of $S$ implies that $y^*([t, T]) \subseteq S$. By the reflexivity of $L^2[t, T]$ we may also assume that $u_j$ converges weakly in $L^2$ to a function $u^*$; Mazur's lemma shows that $u^*$ is bounded and, due to the closedness and convexity of $U$, that $u^*(s) \in U$ for a.e. $s$. We may then invoke a standard integral semicontinuity theorem (see, for instance, [22, Theorem 6.38]) to deduce that $J_t(y^*, u^*) \le \liminf_{j \to +\infty} J_t(y_j, u_j)$.

There remains only to verify that $y^*$ is the state trajectory corresponding to $u^*$, which is a standard matter. We proceed by following, for instance, the proof of [22, Theorem 23.11]. It is enough to show that, for any measurable subset $A$ of $[t, T]$, $\int_A y^{*\prime}(s)\,ds = \int_A b(y^*(s))u^*(s)\,ds$. The equality holds when $y^*$ and $u^*$ are replaced by $y_j$ and $u_j$, respectively. To obtain the desired conclusion, it suffices to justify passing to the limit as $j \to +\infty$. By weak convergence and by the dominated convergence theorem, we have $\int_A y_j'(s)\,ds \to \int_A y^{*\prime}(s)\,ds$ as $j \to +\infty$.
We also know that, as $j \to +\infty$, $\int_A b(y^*(s))u_j(s)\,ds \to \int_A b(y^*(s))u^*(s)\,ds$, since $b(y^*(\cdot))$ is bounded. By Hölder's inequality, $\big|\int_A \big(b(y_j(s)) - b(y^*(s))\big)u_j(s)\,ds\big| \le \|b(y_j) - b(y^*)\|_{L^2}\,\|u_j\|_{L^2} \to 0$ as $j \to +\infty$: indeed the first factor tends to 0 by dominated convergence, and the second is uniformly bounded since the sequence $(u_j)_j$ is bounded in $L^2[t, T]$. Therefore $\int_A b(y_j(s))u_j(s)\,ds \to \int_A b(y^*(s))u^*(s)\,ds$, and the result follows.
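Summing up, the limits established in the proof combine, for every measurable $A \subseteq [t, T]$, into the following chain (our schematic rendering of the argument):

```latex
\int_A y^{*\prime}(s)\,ds
= \lim_{j\to+\infty} \int_A y_j'(s)\,ds
= \lim_{j\to+\infty} \int_A b(y_j(s))\,u_j(s)\,ds
= \int_A b(y^*(s))\,u^*(s)\,ds,
```

and since $A$ is arbitrary, $y^{*\prime} = b(y^*)u^*$ a.e. on $[t, T]$, i.e., $y^*$ is indeed the trajectory driven by $u^*$.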

Examples
We consider here some examples related to the growth conditions introduced above and some Lagrangians to which our results may be applied.

Growth Conditions
the inequality being strict if $\max \partial L(u) < \min \partial L(v)$. Fix $c > 0$. The fact that $L$ is not eventually affine implies that there exists $\nu > 0$ such that $\max \partial L(c) < \min \partial L(\nu)$; (4.5) then follows. More generally, if $L : \mathbb R \to \mathbb R$ is radially convex and $c > 0$, (4.5) is fulfilled whenever (see Figure 5) $\max\{P(\nu), P(-\nu)\} < \min\{P(c), P(-c)\}$.

Example 7.2. This example illustrates the role of the condition $\mathrm{dist}((s, y, u), \partial \mathrm{Dom}(\Lambda)) \ge \rho$ in (4.5)-(4.7). The function considered there fulfills Condition $(H^\delta_2)$ for all $\delta \in [0, 1[$.

Example 7.3 (Lack of boundedness on bounded sets: a case where (G) holds but $(H^\delta_B)$ does not). We show here that the local boundedness assumption in Proposition 4.17 is crucial in order to obtain the conclusion.
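As a minimal illustration of the subdifferential condition above (our own toy example, not taken from the text): take $L(u) = u^2$, which is nowhere affine. Then $\partial L(u) = \{2u\}$ and, for any fixed $c > 0$, every $\nu > c$ works:

```latex
\max \partial L(c) = 2c \;<\; 2\nu = \min \partial L(\nu)
\qquad \text{for every } \nu > c,
```

so the required gap between the slopes at $c$ and at $\nu$ is available at every scale.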

Applications of the main result: real valued case
The next Example 7.5, a discontinuous version of [20, Example 4.3], exhibits a real valued Lagrangian of the calculus of variations that satisfies the assumptions of Theorem 5.1 and to which the previous results of the literature (e.g., [13], [16], [20]) do not apply.
• From Definition 4.8 we obtain a bound valid for all $(s, y, u)$. Therefore, for any $c \in \mathbb R$ and $K \ge 0$, Condition $(H^0_B)$ is satisfied whenever (7.2) and (7.3) hold for some $c$. Either $\phi \equiv 0$, in which case any $c$ satisfies (7.3), or (7.2) and (7.3) are satisfied whenever (7.4) holds. Now, $B$ is bounded above by $2(m_\phi + \|\phi\|_\infty)\sqrt{1 + |\zeta|^2}$. Thus, a sufficient condition for the validity of (7.4) is the existence of a suitable $c$; this is ensured at least for sufficiently small values of $\lambda = \|\phi\|_\infty$, i.e., whenever $\phi$ is sufficiently close to being constant.

Applications of the main result: extended valued case
In Example 7.6 we exhibit a nonautonomous, extended valued Lagrangian $\Lambda(s, y, u)$ that satisfies the assumptions of Theorem 5.1 while being neither regular in the state variable nor convex in the control variable.
$+\infty$ otherwise (see (7.5)); the graph of $L$ is depicted in Figure 7. The domain of $L$ is star-shaped with respect to the origin. The function $L$ has the following properties.
• L tends uniformly to +∞ at the boundary of its effective domain.
On each of the above sets we obtain estimates from below. Indeed, let $(u_1, u_2) \in U^3_\varepsilon$ and $|a| \in \big[\varepsilon, \tfrac{1}{\sqrt 2}\big]$ be such that the corresponding equality holds. We assume that $a > 0$, the case $a < 0$ being similar. If $a \le \varepsilon|\log \varepsilon|$ the estimate is immediate. If $a \ge \varepsilon|\log \varepsilon|$ then, using the fact that $a \le \tfrac{1}{\sqrt 2}$, we obtain the value of $m(a, \varepsilon)$, which, together with (7.9), gives (7.8).
• $L$ is superlinear; indeed this can be checked for $u = (u_1, u_2) \in \mathrm{Dom}(L)$ with $u_2 > |u_1|$.
• $L$ is bounded on the bounded sets that are well inside the domain. Indeed the domain of $L$ is open and $L$ is bounded on the relatively compact subsets of the domain.
• $L$ is not continuous in the interior of its domain, and thus $L$ is not convex.

Now, let $\phi : [0, 1] \to ]0, +\infty[$ be Lipschitz and let $a : \mathbb R^2 \to \mathbb R$ be any measurable function that is bounded below by a positive constant and bounded on bounded sets. Define $\Lambda$ accordingly, where $L$ is defined in (7.5). The above discussion shows that:
• The domain of $\Lambda$ is a product, in the sense of Hypothesis $h_1)$;
• $\Lambda$ tends uniformly to $+\infty$ at the boundary of the effective domain, in the sense of Hypothesis $h_2)$;
• $0 < r \mapsto \Lambda(s, y, ru)$ is convex for every $(s, y, u) \in \mathrm{Dom}(\Lambda)$;
• $\Lambda$ is superlinear and $(s, y, u) \in \mathrm{Dom}(\Lambda)$ whenever $s \in [0, 1]$, $y \in \mathbb R^n$ and $u \in \mathbb R^m$ with $|u| \le 1/2$, where $c := c_\phi \min \phi$.

Therefore $\Lambda$ fulfills the assumptions of Theorem 5.1; it may be discontinuous in the variables $y, u$, and is nonconvex in $u$.

Growth conditions in more depth
We give here the proofs of some results formulated in Section 4.
Again, Lemma 4.18 and the assumption that $\Lambda$ is bounded on bounded sets that are well inside the effective domain imply that there is a constant $c_2(K)$ satisfying the required estimate, proving the validity of ii) of Condition $(M^\delta_B)$.

Proof of the main result
This section is devoted to the proof of Theorem 5.1. Many technical points derive from the fact that the Lagrangian is allowed to take the value $+\infty$. Due to its length, it is convenient to first illustrate the main arguments.

Proof. We fix $\delta \in [0, T[$, $x^* \in \mathbb R^n$, $t \in [0, \delta]$, $x \in B^n_{\delta^*}(x^*)$ and consider an admissible pair $(y, u)$ for $(P_{t,x})$. We build a suitable admissible pair $(\overline y, \overline u)$ with $J_t(\overline y, \overline u) \le J_t(y, u)$ if $(H^\delta_B)$ holds, and $J_t(\overline y, \overline u) \le J_t(y, u) + \eta$, where $\eta > 0$ is arbitrary, if $(M^\delta_B)$ holds:
• Let $c$ be the constant that appears in the growth conditions. For $\nu \ge c$, the ``excess'' function $\varepsilon_\nu$ tends to 0 as $\nu \to +\infty$.
• There are $\mu \in ]0, 1[$ and $\rho > 0$ such that the required inequality holds for a.e. $s$ in a non-negligible set $\Omega$ and a.e. $\tilde s \in [t, T]$. We use here the fact that $c > c_\delta(B)$ (see Proposition 4.10). In the extended valued case this is where Hypotheses $h_1)$ and $h_2)$ play a role, in which case Steps v)-vi) of the proof are rather technical.
Finally, though $S_\nu, \Sigma_\nu, \varepsilon_\nu$ might depend on the chosen admissible pair $(y, u)$, the constant $\nu$ (and therefore the bound on $\|\overline u\|_\infty$ and the Lipschitz constant of $\overline y$) depends in fact only on $\delta, B, \delta^*, x^*$ (and possibly on $\eta$ if one assumes just Condition $(M^\delta_B)$ instead of $(H^\delta_B)$).
For $0 < \rho \le \rho'$ we thus obtain the desired estimate; the claim follows.

From Step vi) and the fact that $U$ is a cone we obtain the required identity, and thus the conclusion follows from (9.6).

It follows from Step vi) and the fact that $U$ is a cone that the claimed identity holds, where the last equality is a consequence of Remark 4.1. The claim follows from (9.6). The convergence then holds uniformly with respect to $t \in [0, \delta]$ and $x \in B^n_{\delta^*}(x^*)$; indeed, this follows from Step i).

ix) Choice of $\nu \ge \overline\nu$ and of $\Sigma_\nu \subseteq \Omega$.
Notice that $\varphi$ depends on both $y, u$ and is well defined since $S_\nu \cap \Sigma_\nu$, a subset of $S_\nu \cap \Omega$, is negligible. Clearly $\varphi$ is strictly increasing and, from Steps viii) and ix), its image is $[t, T]$; thus $\varphi : [t, T] \to [t, T]$ is bijective. Let us denote by $\psi$ its inverse, which is absolutely continuous and even Lipschitz.

xii) Set $\overline u(s) := \dfrac{u(\psi(s))}{\varphi'(\psi(s))}$, $\overline y := y \circ \psi$. Then $(\overline y, \overline u)$ is admissible and $\overline y(T) = y(T)$.
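That the reparametrized pair of Step xii) is admissible is a chain-rule computation; with $\psi = \varphi^{-1}$, so that $\psi'(s) = 1/\varphi'(\psi(s))$ a.e., one has

```latex
\overline y\,'(s) = y'(\psi(s))\,\psi'(s)
= b\big(y(\psi(s))\big)\,u(\psi(s))\,\frac{1}{\varphi'(\psi(s))}
= b(\overline y(s))\,\overline u(s)
\qquad \text{for a.e. } s \in [t,T],
```

while $\overline y(t) = y(\psi(t)) = y(t) = x$ and $\overline y(T) = y(T)$, so both the initial condition and the endpoint cost $g(y(T))$ are preserved.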
xiii) $\overline u$ is bounded and $\overline y$ is Lipschitz; if Condition $(H^\delta_B)$ holds, the bound on $\overline u$ and the Lipschitz rank of $\overline y$ depend just on $\delta, B, \delta^*, x^*$; otherwise they might depend also on $\eta$. It is convenient to write explicitly the function $\overline u(s)$, which is given by
\[\overline u(s) = \begin{cases} \dfrac{u(\psi(s))}{\mu} & \text{if } \psi(s) \in \Sigma_\nu,\\ u(\psi(s)) & \text{otherwise.} \end{cases}\]
We consider the two following frameworks.
The conclusion follows from the choice of ν in (9.8).
Remark 9.1 (Explicit bounds and Lipschitz ranks). In the real valued case, the knowledge of $\nu$ and $c$ in Condition $(H^\delta_B)$ corresponding to the value $K$ provided in Step ii) of the proof of Theorem 5.1 allows one to give an explicit bound on $\|\overline u\|_\infty$ and $\|\overline y\,'\|_\infty$ (thus on $K_{\mathcal A}$ in Claim (1) of Theorem 5.1). Indeed, referring to the proof of the theorem:

• From Step xiii), $\|\overline u\|_\infty \le \nu$ and $\|\overline y\,'\|_\infty \le \theta(1 + K)\nu$, where, from Step ix), one may take $\theta$ as in (9.17), in terms of $c$, $\nu$, $2R/\varepsilon^*$, and, from Step i), $R = (B + dT)/\alpha$;
• In the real valued case it is enough, from Step vi), to take $\mu$ equal to any real number such that $c_\delta(B)/c < \mu < 1$;
• The proof of Step vi) shows that $m$ in (9.17) is given by the corresponding formula, where $\Delta$ is any real number in $\big]\frac{c_\delta(B)}{\mu c}, 1\big[$.
Remark 9.2. The proof of Theorem 5.1 shows that one could replace, in the growth Conditions $(H^\delta_B)$, $(M^\delta_B)$ and in Hypothesis $h_2)$, the Euclidean distance $\mathrm{dist}((s, y, v), \partial \mathrm{Dom}(\Lambda))$ with the pseudo-distance
\[\mathrm{dist}_c((s, y, v), \partial \mathrm{Dom}(\Lambda)) := \inf\{|v' - v| : (s, y, v') \in \partial \mathrm{Dom}(\Lambda)\}.\]
Indeed, in this case $\mathrm{dist}_c((s, y, v), \partial \mathrm{Dom}(\Lambda)) \ge \rho > 0$ whenever $(s, y, v') \in \mathrm{Dom}(\Lambda)$ for every $v' \in B^m_\rho(v)$, so that from Hypothesis $h_1)$ it follows that
\[\mathrm{dist}_c((s, y, v), \partial \mathrm{Dom}(\Lambda)) \ge \rho \ \Longrightarrow\ \mathrm{dist}_c((s', y, v), \partial \mathrm{Dom}(\Lambda)) \ge \rho \quad \forall s' \in [0, T],\]
an essential property in Step vi) of the proof of Theorem 5.1. There are advantages and drawbacks in replacing the Euclidean distance $\mathrm{dist}$ with the larger $\mathrm{dist}_c$. In doing so, the infima in the growth conditions become smaller, so that $(H^\delta_B)$, $(M^\delta_B)$ become more restrictive (i.e., are satisfied by a smaller class of Lagrangians), whereas Hypothesis $h_2)$ becomes less restrictive (i.e., is satisfied by a wider class of Lagrangians). However, the notion of being well inside the domain for $\mathrm{dist}_c$ (Definition 4.15) no longer corresponds to the notion of relatively compact subset.