A finite difference approach to the infinity Laplace equation and tug-of-war games

We present a modified version of the two-player "tug-of-war" game introduced by Peres, Schramm, Sheffield, and Wilson. This new tug-of-war game is identical to the original except near the boundary of the domain $\partial \Omega$, but its associated value functions are more regular. The dynamic programming principle implies that the value functions satisfy a certain finite difference equation. By studying this difference equation directly and adapting techniques from viscosity solution theory, we prove a number of new results. We show that the finite difference equation has unique maximal and minimal solutions, which are identified as the value functions for the two tug-of-war players. We demonstrate uniqueness, and hence the existence of a value for the game, in the case that the running payoff function is nonnegative. We also show that uniqueness holds in certain cases for sign-changing running payoff functions which are sufficiently small. In the limit $\epsilon \to 0$, we obtain the convergence of the value functions to a viscosity solution of the normalized infinity Laplace equation. We also obtain several new results for the normalized infinity Laplace equation $-\Delta_\infty u = f$. In particular, we demonstrate the existence of solutions to the Dirichlet problem for any bounded continuous $f$ and continuous boundary data, as well as the uniqueness of solutions to this problem in the generic case. We present a new elementary proof of uniqueness in the case that $f>0$, $f<0$, or $f\equiv 0$. The stability of the solutions with respect to $f$ is also studied, and an explicit continuous dependence estimate from $f\equiv 0$ is obtained.


Introduction
In this article, we use a finite difference approximation to study the normalized infinity Laplace partial differential equation (1.1) −∆_∞ u = f in Ω. We refer to Aronsson, Crandall, and Juutinen [2] for more background and details. Recently, Peres, Schramm, Sheffield, and Wilson [18] showed that equation (1.1) also arises in the study of certain two-player, zero-sum, stochastic games. They introduced a random-turn game called ε-tug-of-war, in which two players try to move a token in an open set Ω toward a favorable spot on the boundary ∂Ω. During each round of the game, a fair coin is tossed to determine which player may move the token, and the winner may move it a maximum distance of ε > 0 from its previous position. The payoff is determined by a running payoff function f and a terminal payoff function g. We describe tug-of-war in more detail in Section 2.
In [18] it was shown that under the hypothesis that the running payoff function f is positive, negative, or identically zero, this game has an expected value. Moreover, they showed that as ε → 0, the expected value function converges to the unique viscosity solution u of the equation −∆_∞ u = f with u = g on ∂Ω. The probabilistic methods employed in [18] yielded new results and a better understanding of the PDE (1.1). Connections between stochastic games and the infinity Laplace equation have also been investigated by Barron, Evans, and Jensen [4].
In this paper, we use PDE techniques to study the value functions for tug-of-war games and the solutions of equation (1.1). By changing the rules of tug-of-war when the token is very near the boundary, we obtain a new game whose value functions better approximate solutions of (1.1). In particular, the upper and lower value functions of this modified game, which we call boundary-biased ε-step tug-of-war, are continuous. In fact, if f ≡ 0 and g is Lipschitz, then the (unique) value function is a minimizing Lipschitz extension of g to Ω̄, for each ε > 0. Furthermore, the upper and lower value functions are equal, and hence the game possesses a value, under more general hypotheses than is presently known for standard ε-tug-of-war.
In contrast to [18], we make little use of probabilistic methods in this paper. Instead, we study the difference equation (1.2) −∆_∞^ε u = f in Ω, which is derived from the dynamic programming principle. The finite difference operator −∆_∞^ε is defined in (2.13), below. We show that if f is continuous, bounded, and does not change sign in Ω, then (1.2) possesses a unique solution subject to any given continuous boundary data g. It follows that the boundary-biased game has a value in this case. Furthermore, we show that for each bounded, continuous f, any sequence of solutions of (1.2) converges (after possibly taking a subsequence) to a solution of the continuum equation (1.1) as ε → 0. In the case f ≡ 0, Oberman [16] and Le Gruyer [13] obtained similar existence, uniqueness, and convergence results for a similar difference equation on a finite graph. We also mention the recent work of Charro, García Azorero, and Rossi [6, 5], who have used a similar approach to demonstrate the existence of solutions to mixed Dirichlet-Neumann boundary value problems for the infinity Laplace equation with f ≡ 0.
Our analysis of (1.2) yields several new results concerning the continuum equation. For any f ∈ C(Ω) ∩ L^∞(Ω) and g ∈ C(∂Ω), we show that the Dirichlet problem (1.3) −∆_∞ u = f in Ω, u = g on ∂Ω, possesses unique maximal and minimal viscosity solutions. Existence has been previously shown only for f satisfying f > 0, f < 0, or f ≡ 0, in which case we also have uniqueness. The latter uniqueness result appears in [18] as well as in the paper of Lu and Wang [15]. The case f ≡ 0 is Jensen's famous result [12], and other proofs in this case have appeared in [3, 2, 9].
Here we give a new, elementary proof of uniqueness under the assumption that f > 0, f < 0, or f ≡ 0. Unlike previous proofs, our argument does not use deep viscosity solution theory or probabilistic techniques. Instead, our proof is based on the simple observation that by modifying a solution of the PDE (1.1) by "maxing over ε-balls," we obtain a subsolution of the finite difference equation (1.2), possibly with small error.
It is known that there may exist multiple viscosity solutions of the boundary value problem (1.3) in the case that f changes sign (see [18, Section 5.4]). However, in this article we demonstrate that uniqueness holds for generic f. That is, for fixed f ∈ C(Ω) ∩ L^∞(Ω) and g ∈ C(∂Ω), we show that the problem (1.3), with f replaced by f + c, has a unique solution for all but at most countably many c ∈ R. See Theorem 2.16 below. This result provides an affirmative answer to a question posed in [18].
Other new theorems obtained in this paper include a result regarding the stability of solutions of (1.3) and an explicit continuous dependence estimate from f ≡ 0.
In Section 2, we review our notation and definitions, describe the tug-of-war games in more detail, state our main results, and give an outline of the rest of the paper.

Preliminaries and main results
Notation. Throughout this article, Ω denotes a bounded, connected, and open subset of R^n, f denotes an element of C(Ω) ∩ L^∞(Ω), g denotes an element of C(∂Ω), and ε denotes a small number satisfying 0 < ε < 1. At various points we impose additional hypotheses on Ω, f, g, and ε.
If x, y ∈ R^n, we denote the usual Euclidean inner product by ⟨x, y⟩, and use |x| to denote the Euclidean length of x. If E ⊆ R^n, we denote the closure of E by Ē. The set of upper semicontinuous functions on Ω̄ is denoted by USC(Ω̄), and likewise the set of lower semicontinuous functions is denoted by LSC(Ω̄). We denote the n-by-n identity matrix by I_n. If x ∈ R^n, then x ⊗ x denotes the matrix (x_i x_j).
For x ∈ Ω̄ and r > 0, we write Ω(x, r) := {y ∈ Ω : d(x, y) < r} and Ω̄(x, r) := {y ∈ Ω̄ : d(x, y) ≤ r}, where d(x, y) denotes the path distance between x and y in Ω̄, and we set Ω_r := {x ∈ Ω : dist(x, ∂Ω) > r}. We require that the boundary ∂Ω of Ω be sufficiently regular that d(x, y) → 0 whenever |x − y| → 0, for x, y ∈ Ω̄. The open ball with respect to Euclidean distance is denoted B(x_0, r). We usually refer to B(x_0, r) only when x_0 ∈ Ω_r, in which case B(x_0, r) = Ω(x_0, r). It is somewhat inconvenient to work with the path distance, but it is needed in Section 3 to handle difficulties which appear near the boundary of the domain.
If K is a compact subset of Ω̄ and h : K → R is continuous, we define the modulus ω_h of h on K to be the least nondecreasing, concave function ω : [0, ∞) → [0, ∞) with ω(0) = 0 such that |h(x) − h(y)| ≤ ω(|x − y|) for all x, y ∈ K. It is easy to check that |h(x) − h(y)| ≤ ω_h(|x − y|) ≤ ω_h(d(x, y)), and that ω_h is continuous, nondecreasing, and concave on [0, ∞), with ω_h(0) = 0. In particular, ω_h(s + t) ≤ ω_h(s) + ω_h(t) for all s, t ≥ 0. We call any function ω with the properties above a modulus of continuity for h.
The infinity Laplace equation. We recall the notion of a viscosity solution of the (normalized) infinity Laplace equation. For a C^2 function ϕ defined in a neighborhood of x ∈ R^n, we define the operators ∆_∞^+ ϕ(x) := |Dϕ(x)|^{−2} ⟨D^2ϕ(x) Dϕ(x), Dϕ(x)⟩ if Dϕ(x) ≠ 0, and ∆_∞^+ ϕ(x) := max_{|v|=1} ⟨D^2ϕ(x) v, v⟩ if Dϕ(x) = 0, together with ∆_∞^− ϕ(x), defined analogously with the maximum replaced by a minimum. The function ∆_∞^+ ϕ is upper semicontinuous, ∆_∞^− ϕ is lower semicontinuous, and the two are equal (and hence continuous) on the set {x ∈ Ω : Dϕ(x) ≠ 0}.
Definition 2.1. An upper semicontinuous function u ∈ USC(Ω) is a viscosity subsolution of the normalized infinity Laplace equation (2.2) −∆_∞ u = f in Ω if, for every polynomial ϕ of degree 2 and x_0 ∈ Ω such that u − ϕ has a strict local maximum at x_0, we have (2.3) −∆_∞^+ ϕ(x_0) ≤ f(x_0). Likewise, a lower semicontinuous function u ∈ LSC(Ω) is a viscosity supersolution of (2.2) if, for every polynomial ϕ of degree 2 and x_0 ∈ Ω such that u − ϕ has a strict local minimum at x_0, we have (2.4) −∆_∞^− ϕ(x_0) ≥ f(x_0). We say that u is a viscosity solution of (2.2) if u is both a viscosity subsolution and a viscosity supersolution of (2.2).
If we strengthen our definitions of viscosity subsolution/supersolution by requiring (2.3)/(2.4) to hold whenever ϕ ∈ C 2 and u − ϕ has a (possibly not strict) local maximum/minimum at x 0 , then we obtain equivalent definitions.
If u ∈ C(Ω) is a viscosity subsolution (supersolution) of (2.2), then we often write (2.5) −∆_∞ u ≤ f (respectively, −∆_∞ u ≥ f) in Ω. We emphasize that the differential inequality (2.5) is to be understood only in the viscosity sense.
Remark 2.2. In this paper, the symbol −∆_∞ always denotes the normalized or 1-homogeneous infinity Laplacian operator (2.6) −∆_∞ u := −|Du|^{−2} ⟨D^2u Du, Du⟩. In the PDE literature, it is more customary to reserve −∆_∞ for the operator −⟨D^2u Du, Du⟩. We break from this convention since the normalized infinity Laplacian operator is more natural from the perspective of tug-of-war games, and is therefore the focus of this article. We also point out that there is no difference between the two resulting equations (in the viscosity sense) when the right-hand side f ≡ 0. We henceforth drop the modifier normalized and refer to (2.6) as the infinity Laplacian and the equation −∆_∞ u = f as the infinity Laplace equation.
Tug-of-war. Let us briefly review the notion of two-player, zero-sum, random-turn tug-of-war games, which were first introduced by Peres, Schramm, Sheffield, and Wilson [18]. Fix a number ε > 0. The dynamics of the game are as follows. A token is placed at an initial position x_0 ∈ Ω. At the kth stage of the game, Player I and Player II select points x_k^I and x_k^II, respectively, each belonging to a specified set A(x_{k−1}, ε) ⊆ Ω̄. The game token is then moved to x_k, where x_k is chosen randomly so that x_k = x_k^I with probability P = P(x_{k−1}, x_k^I, x_k^II) and x_k = x_k^II with probability 1 − P, where P is a given function. After the kth stage of the game, if x_k ∈ Ω, then the game continues to stage k + 1. Otherwise, if x_k ∈ ∂Ω, the game ends and Player II pays Player I the amount (2.7) g(x_k) + Σ_{j=1}^{k} ½ q(ε, x_{j−1}, x_j)^2 f(x_{j−1}), where q is a given function. We call g the terminal payoff function and f the running payoff function. Of course, Player I attempts to maximize the payoff, while Player II attempts to minimize it. A strategy for Player I is a mapping σ_I from the set of all possible partially played games (x_0, x_1, . . . , x_{k−1}) to moves x_k^I ∈ A(x_{k−1}, ε), and a strategy for Player II is defined in the same way.
Given a strategy σ_I for Player I and a strategy σ_II for Player II, we denote by F_I(σ_I, σ_II) and F_II(σ_I, σ_II) the expected value of the expression (2.7) if the game terminates with probability one, and this expectation is defined in [−∞, ∞]. Otherwise, we set F_I(σ_I, σ_II) = −∞ and F_II(σ_I, σ_II) = +∞. (If the players decide to play in a way that makes the probability of the game terminating less than 1, then we penalize both players an infinite amount.) The value of the game for Player I is the quantity sup_{σ_I} inf_{σ_II} F_I(σ_I, σ_II), where the supremum is taken over all possible strategies for Player I and the infimum over all possible strategies for Player II. It is the minimum amount that Player I should expect to win at the conclusion of the game. Similarly, the value of the game for Player II is inf_{σ_II} sup_{σ_I} F_II(σ_I, σ_II), which is the maximum amount that Player II should expect to lose at the conclusion of the game. We denote the value for Player I as a function of the starting point x ∈ Ω by V_I^ε(x), and similarly the value for Player II by V_II^ε(x). We extend the value functions to ∂Ω by setting V_I^ε = V_II^ε := g on ∂Ω. The tug-of-war game studied in [18], which in this paper we call standard ε-step tug-of-war, is the game described above for¹ A(x, ε) = Ω̄(x, ε), P = 1/2, and q(ε, x, y) = ε.
In other words, the players must choose points in the ε-ball centered at the current location of the token, a fair coin is tossed to determine where the token is placed, and Player II accumulates a debt to Player I which is increased by ½ε^2 f(x_{k−1}) after the kth stage.
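To make the dynamics concrete, the following small simulation (our own illustration, not code from the paper) plays standard ε-step tug-of-war on Ω = (0, 1) with f ≡ 0, g(0) = 0, and g(1) = 1, where Player I always pulls toward 1 and Player II toward 0. With both players pulling, the token performs a fair ±ε random walk, so by the classical gambler's ruin computation the estimated value from the midpoint should be close to 1/2.

```python
import random

# Standard eps-step tug-of-war on (0, 1) with f = 0, g(0) = 0, g(1) = 1.
# Both players "pull": the coin-toss winner moves the token a distance
# eps = 1/k toward his or her preferred endpoint. Positions are tracked
# as integer multiples of eps to avoid floating-point drift.
random.seed(1)
k = 10  # eps = 1/k

def play(j):
    """Play one game from position j*eps; return the terminal payoff g."""
    while 0 < j < k:
        j += 1 if random.random() < 0.5 else -1  # fair coin toss
    return 1.0 if j == k else 0.0

trials = 4000
est = sum(play(k // 2) for _ in range(trials)) / trials
print(est)  # close to 0.5, the value of the game from x0 = 1/2
```

Pulling strategies are not optimal in general, but for this symmetric example they already reproduce the expected value.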
According to the dynamic programming principle, the value functions for Player I and Player II for standard ε-step tug-of-war satisfy the relation (2.8) 2u(x) − max_{Ω̄(x,ε)} u − min_{Ω̄(x,ε)} u = ε^2 f(x) for x ∈ Ω. If we divide the left side of (2.8) by ε^2, we have a good approximation to the negative of the second derivative of u in the direction of Du(x), provided that u is smooth, Du(x) ≠ 0, and dist(x, ∂Ω) ≥ ε (indeed, see Lemma 4.2 below). Thus we might expect that in the limit as ε → 0, the value functions for Players I and II converge to a solution of the boundary-value problem −∆_∞ u = f in Ω, u = g on ∂Ω. This is indeed the case for certain running payoff functions f, as was shown in [18].
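The approximation property just described is easy to check numerically in one dimension. The snippet below (an illustration under our own choice of test function, u = sin) compares the second difference (2u(x) − max u − min u)/ε^2, with the maximum and minimum taken over the ε-ball, against −u''(x) at a point where u'(x) ≠ 0.

```python
import math

# Check that (2u(x) - max u - min u)/eps^2, with max/min over the
# eps-ball around x, approximates -u''(x) for smooth u with u'(x) != 0.
u = math.sin
x, eps = 0.5, 1e-3
pts = [x + eps * t / 1000.0 for t in range(-1000, 1001)]  # sample the ball
umax, umin = max(map(u, pts)), min(map(u, pts))
approx = (2 * u(x) - umax - umin) / eps ** 2
exact = math.sin(x)  # -u''(x) = sin(x) for u = sin
print(abs(approx - exact))  # small; the two agree up to O(eps^2)
```

Since u' ≠ 0 on the sampled ball, the maximum and minimum are attained at the endpoints x ± ε, and the expression reduces to the usual centered second difference.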
One of the goals of the present work is to develop PDE methods for tug-of-war games and the infinity Laplace equation. Some of the difficulties in the analysis of standard ε-step tug-of-war are due to the discontinuity of the value functions. To understand this phenomenon, we study the following simple example.
Example 2.4. Let ε = 1/k for some positive integer k ≥ 2, and consider standard ε-step tug-of-war played on the unit interval Ω = (0, 1), with vanishing running payoff function and terminal payoff function g given by g(0) = 0 and g(1) = 1. It is easy to see that the value function V^k must be constant on the intervals I_j := ((j − 1)/k, j/k) for each j = 1, . . . , k. Denote its value on the interval I_j by v_j, and write v = (v_1, . . . , v_k). The dynamic programming relation (2.8) now yields the system (2.11) v_1 = ½ v_2, v_j = ½ v_{j−1} + ½ v_{j+1} for j = 2, . . . , k − 1, and v_k = ½ v_{k−1} + ½.
The system (2.11) has the unique solution v_j = j/(k + 1) for j = 1, . . . , k. Thus the value function for this standard tug-of-war game is a step function which approximates the continuum solution, given by V(x) = x.
¹ The game described in [18] actually requires the players to select points in the slightly smaller set A(x, ε) = Ω(x, ε) ∪ {y ∈ ∂Ω : d(x, y) < ε} when the current position of the token is x. Technical difficulties arise in some of the probabilistic arguments in [18] if the players are allowed to move to points in {y ∈ Ω̄ : d(x, y) = ε}. This small difference does not concern us here.
This one-dimensional example can be lifted to higher dimensions by considering Ω = B(0, 2) \ B̄(0, 1) and setting the terminal payoff function to 1 on ∂B(0, 2) and to 0 on ∂B(0, 1). It is clear that the value function for standard 1/k-step tug-of-war is then x ↦ V^k(|x| − 1) for 1 < |x| < 2.
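The system in Example 2.4 can be checked directly. The sketch below (our own code, using the conventions of the example: the minimizing player can exit at 0 from I_1, and the maximizing player can exit at 1 from I_k) runs the fixed-point iteration suggested by (2.8) and verifies that the limit satisfies the dynamic programming relation; the computed values come out equally spaced between the two boundary payoffs.

```python
# Value iteration for Example 2.4: v[j] is the value on the interval
# I_j = ((j-1)/k, j/k); v[0] = g(0) = 0 and v[k+1] = g(1) = 1 play the
# role of the boundary payoffs reachable from I_1 and I_k.
k = 10
v = [0.0] * (k + 2)
v[k + 1] = 1.0

for _ in range(20000):  # iterate v_j <- (v_{j-1} + v_{j+1}) / 2
    for j in range(1, k + 1):
        v[j] = 0.5 * (v[j - 1] + v[j + 1])

# residual of the dynamic programming relation at the limit
residual = max(abs(v[j] - 0.5 * (v[j - 1] + v[j + 1])) for j in range(1, k + 1))
gaps = [v[j + 1] - v[j] for j in range(k + 1)]  # equal spacing of the values
```

The iteration is monotone and converges geometrically; this is a toy instance of the Perron-type constructions used later in the paper.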
Boundary-biased tug-of-war. In this article, we study a slight variant of standard ε-step tug-of-war, which we call boundary-biased ε-step tug-of-war. This is the game described in the previous section, where we set A(x, ε) = Ω̄(x, ε) and q(ε, x, y) = ρ_ε(x, y), with ρ_ε(x, y) := ε if y ∈ Ω and ρ_ε(x, y) := min{ε, d(x, y)} if y ∈ ∂Ω, and where the probability P is modified near the boundary, as we now describe. The dynamics of the boundary-biased game and the accumulation of the running payoff are no different from those of the standard game while the token lies in the set Ω_ε, since there ρ_ε(x, y) = ε and P = 1/2. The distinction between the games occurs near the boundary, where the boundary-biased game gives a player who wishes to terminate the game by exiting at a nearby boundary point a larger probability of winning the coin toss, if the other player selects a point in the domain Ω or a boundary point farther away. The payoff has also been altered slightly from the standard game, so that small jumps to the boundary do not accrue as much running payoff. Boundary-biased ε-step tug-of-war is indeed only a slight variant of standard ε-step tug-of-war. In fact, by combining results in this paper with those in [18], we can show that the value functions for the two games differ by O(ε).
We consider boundary-biased tug-of-war in this article precisely because its value functions are more regular, as we will see below. In particular, they are continuous, and uniformly bounded and equicontinuous along sequences ε_j ↓ 0. These properties allow us to adapt techniques from viscosity solution theory.
The analogue of (2.8) for boundary-biased ε-step tug-of-war, derived from the dynamic programming principle, is the equation (2.12) S_ε^− u(x) − S_ε^+ u(x) = ε f(x) for x ∈ Ω, where S_ε^+ u(x) := max_{y ∈ Ω̄(x,ε)} (u(y) − u(x))/ρ_ε(x, y) and S_ε^− u(x) := max_{y ∈ Ω̄(x,ε)} (u(x) − u(y))/ρ_ε(x, y). Let us introduce the notation (2.13) ∆_∞^ε u(x) := (S_ε^+ u(x) − S_ε^− u(x))/ε. We may write (2.12) as (2.14) −∆_∞^ε u = f in Ω. We call the operator ∆_∞^ε the finite difference infinity Laplacian and the equation (2.14) the finite difference infinity Laplace equation.
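For f ≡ 0, the finite difference equation says that the maximal upward and downward slopes S_ε^± u balance at every interior point, and this characterization can be iterated numerically. The following one-dimensional sketch (our own construction; the grid, the bisection solver, and the use of the distance weight ρ = d(x, y) for nearby boundary points are ours) illustrates why the ρ_ε-weighting matters: with it, the discrete solution on (0, 1) with g(0) = 0 and g(1) = 1 reproduces the linear infinity harmonic function essentially exactly, with no boundary layer of the kind seen in Example 2.4.

```python
# Solve S^- u = S^+ u (i.e. -Delta_eps u = 0) on a 1D grid in (0, 1),
# with slopes weighted by rho = eps for interior points and rho = d(x, y)
# for boundary points closer than eps. Boundary data: u(0) = 0, u(1) = 1.
def solve(n=20, sweeps=800):
    h = 1.0 / n
    eps = 3 * h
    xs = [i * h for i in range(n + 1)]
    u = [0.0] * (n + 1)
    u[n] = 1.0

    def balanced(i):
        # candidate points y in the eps-ball around xs[i], with weights rho
        cand = []
        for j in range(n + 1):
            if j != i and abs(xs[j] - xs[i]) <= eps + 1e-12:
                d = abs(xs[j] - xs[i])
                rho = d if j in (0, n) and d < eps else eps
                cand.append((u[j], rho))
        lo = min(w for w, _ in cand)
        hi = max(w for w, _ in cand)
        for _ in range(50):  # bisect for the value t with S^+(t) = S^-(t)
            t = 0.5 * (lo + hi)
            s_plus = max((w - t) / r for w, r in cand)
            s_minus = max((t - w) / r for w, r in cand)
            if s_plus > s_minus:
                lo = t
            else:
                hi = t
        return 0.5 * (lo + hi)

    for _ in range(sweeps):          # Gauss-Seidel style sweeps
        for i in range(1, n):
            u[i] = balanced(i)
    return xs, u

xs, u = solve()
err = max(abs(ui - xi) for ui, xi in zip(u, xs))
```

One can check by hand that the linear function is an exact fixed point of this update at every node, including those within ε of the boundary, which is exactly the effect of the distance weighting.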
Remark 2.5. Let us briefly mention that the value functions for Players I and II are bounded: this is easy to see by adopting a strategy of "pulling" toward a specified point on the boundary. This strategy forces the game to terminate after at most Cε −2 expected steps (we refer to [18] for details).
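In one dimension the ε^{-2} bound is transparent: if both players pull toward opposite endpoints of (0, 1), the token performs a fair ±ε random walk, and the expected number of steps to exit from position jε is j(k − j) ≤ ¼ε^{-2}, where ε = 1/k. The snippet below (our own sanity check, not from the paper) verifies that these exit times satisfy the associated dynamic programming relation.

```python
# Expected exit times T_j (in steps) of a fair +-1 walk on {0, ..., k},
# started at j and absorbed at 0 and k: the classical formula is
# T_j = j(k - j), so T_j <= k^2/4, i.e. at most eps^{-2}/4 steps.
k = 50  # corresponds to eps = 1/k
T = [j * (k - j) for j in range(k + 1)]

# One step costs 1, then the walk moves to j-1 or j+1 with probability 1/2:
residual = max(abs(T[j] - (1 + 0.5 * (T[j - 1] + T[j + 1])))
               for j in range(1, k))
print(residual)  # 0: T_j = j(k - j) solves the recursion exactly
```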
Our approach is to study the finite difference equation (2.14) directly, using PDE methods. While we use the underlying tug-of-war games for intuition and motivation, we make no probabilistic arguments in this paper (with the exception of the proof of Lemma 3.7). Several of our analytic techniques are suggested by probabilistic arguments in [18]; see, for example, the discussion preceding Lemma 4.1.
Main results. Our first theorem establishes comparison for solutions of the finite difference infinity Laplace equation. In order to state this result, we require the following definition.
Definition 2.6. We say that u ∈ USC(Ω̄) has a strict ε-local maximum at x_0 ∈ Ω if there is a nonempty closed set F ⊆ Ω containing x_0 such that u is constant on F and u(y) < max_F u for every y ∈ Ω̄ \ F with dist(y, F) ≤ ε. We say that u has a strict ε-local maximum in Ω if it has a strict ε-local maximum at some point of Ω, and that u has a strict ε-local minimum if −u has a strict ε-local maximum.
Theorem 2.7. Assume that u, −v ∈ USC(Ω̄) satisfy the inequality −∆_∞^ε u ≤ f ≤ −∆_∞^ε v in Ω. Suppose also that u has no strict ε-local maximum, or v has no strict ε-local minimum, in Ω. Then (2.16) max_Ω̄ (u − v) = max_∂Ω (u − v).
We show in Lemma 3.2 that u ∈ USC(Ω̄) and −∆_∞^ε u ≤ 0 in Ω imply that u has no strict ε-local maximum in Ω. By symmetry, we deduce that v ∈ LSC(Ω̄) and −∆_∞^ε v ≥ 0 in Ω imply that v has no strict ε-local minimum in Ω. From these observations we immediately obtain the following corollary.
Corollary 2.8. Assume that f ≥ 0 or f ≤ 0 in Ω, and that u, −v ∈ USC(Ω̄) satisfy the inequality −∆_∞^ε u ≤ f ≤ −∆_∞^ε v in Ω. Then (2.16) holds.
Our next main result establishes the existence of solutions. In fact, we show that the Dirichlet problem for the finite difference infinity Laplace equation possesses unique maximal and minimal solutions, which are the value functions for Players II and I, respectively, for boundary-biased tug-of-war.
Theorem 2.9. For each ε > 0, there exist solutions u_ε, ū_ε ∈ C(Ω̄) of the problem (2.18) −∆_∞^ε u = f in Ω, u = g on ∂Ω, with the property that if w : Ω̄ → R is any bounded function satisfying the inequalities −∆_∞^ε w ≤ f (−∆_∞^ε w ≥ f) in Ω and w ≤ g (w ≥ g) on ∂Ω, then w ≤ ū_ε (w ≥ u_ε) on Ω̄. Moreover, u_ε is the value function for Player I, and ū_ε is the value function for Player II, for the corresponding boundary-biased ε-step tug-of-war game.
It is not known whether standard ε-step tug-of-war possesses a value if f ≥ 0, f ≢ 0, and inf f = 0, or if f fails to be uniformly continuous. In contrast, according to Corollary 2.8 and Theorem 2.9, if f ∈ C(Ω) ∩ L^∞(Ω) is nonnegative or nonpositive, then the problem (2.18) has a unique solution u_ε = ū_ε, which is the value of the corresponding boundary-biased ε-step tug-of-war game. Theorem 2.7 provides uniqueness in even greater generality: if u_ε ≢ ū_ε, then u_ε has a strict ε-local minimum and ū_ε has a strict ε-local maximum.
This latter result has an interesting probabilistic interpretation. In [18], it was shown that nonuniqueness of solutions may arise from the necessity of guaranteeing termination of the game. If Player I must select his strategy first, then he must ensure that the game terminates after finitely many steps with probability 1, and likewise if Player II chooses her strategy first, she must ensure termination. In certain cases, the player selecting first may be required to adopt a strategy which gives up favorable positions in order to ensure termination of the game. One might suspect that unless there is a good reason for each player to keep the token away from the boundary (e.g., the value functions have a strict ε-local maximum or minimum), this situation does not arise, since one of the players would ensure termination simply by playing optimally. In the latter case, we expect the value functions for the two players to be equal. Theorem 2.7 is a formal justification of this intuition.
One might also suspect that if the terminal payoff function g has large oscillation relative to ‖f‖_{L^∞(Ω)}, then the players should aim for a favorable spot on the boundary rather than concern themselves with accumulating running payoff. Thus perhaps in this case the value functions for the players have no strict ε-local extrema, and hence the game has a value. As a further application of Theorem 2.7, we obtain the following uniqueness result for sign-changing but small f and nonconstant g, which rigorously justifies this informal heuristic. Theorem 2.10. Assume that Ω is convex and that for each x_0 ∈ ∂Ω and r > 0, the function g is not constant on ∂Ω ∩ B(x_0, r). For each ε > 0 there exists a constant δ > 0 such that if ‖f‖_{L^∞(Ω)} ≤ δ, then the boundary-value problem (2.18) has a unique solution.
We give two proofs of the following result, which asserts that as ε → 0 solutions of the finite difference infinity Laplace equation converge to a solution of the continuum infinity Laplace equation. It is an analogue of the last statement in Theorem 2.3 for the value functions of boundary-biased tug-of-war. Our result is more general, as we impose no assumptions on f ∈ C(Ω).
Theorem 2.11. Assume only that f ∈ C(Ω), and that {ε_j}_{j=1}^∞ is a sequence of positive numbers converging to 0 as j → ∞, such that for each j the function u_j ∈ C(Ω) satisfies the inequality −∆_∞^{ε_j} u_j ≤ f in Ω. Suppose also that there exists a function u ∈ C(Ω) such that u_j → u locally uniformly in Ω as j → ∞. Then u is a viscosity subsolution of the inequality −∆_∞ u ≤ f in Ω.
We now turn to results for the continuum equation, which we obtain with the help of the results above and an interesting relationship between solutions of the continuum and discrete equations (see Proposition 5.3 below).
From Proposition 4.4 below, we see that if ε_j ↓ 0, then a sequence {u_j} ⊆ C(Ω̄) of solutions of (2.18) is uniformly equicontinuous. Such a sequence {u_j} is also uniformly bounded. In particular, the Arzelà–Ascoli theorem implies that every sequence {ε_j} has a subsequence along which the maximal solutions of (2.18) for ε = ε_j converge uniformly on Ω̄ to some function u. According to Theorem 2.11, the limit function u is a viscosity solution of (2.22) −∆_∞ u = f in Ω, u = g on ∂Ω. In particular, the boundary-value problem (2.22) possesses a solution for any given f ∈ C(Ω) ∩ L^∞(Ω) and g ∈ C(∂Ω). This result appears to be new for the normalized infinity Laplacian, as all previous existence results of which we are aware (see for example [15, Theorems 4.1 and 4.2], in addition to Theorem 2.3 above) have required f > 0, f < 0, or f ≡ 0 in Ω.
The following stability result is a generalization of [15, Theorem 1.9]. The latter result imposes the additional assumption that f and f_k be positive, negative, or identically zero in Ω.
Theorem 2.13. Assume that f_k → f uniformly on Ω and g_k → g uniformly on ∂Ω. Suppose that for each k, the function u_k ∈ C(Ω̄) is a viscosity solution of the problem −∆_∞ u_k = f_k in Ω, u_k = g_k on ∂Ω. Then there exist a subsequence {u_{k_j}} and a solution u ∈ C(Ω̄) of (2.22) such that u_{k_j} → u uniformly on Ω̄ as j → ∞.
With the help of Theorem 2.13 we obtain the following existence result, which is an improvement of Corollary 2.12.
Recently, Lu and Wang [14] found another proof of Theorem 2.14 using a different approach.
Our next result asserts that uniqueness occurs in the generic case, which gives an affirmative answer to the first open question posed in Section 8 of [18]. It is easily deduced from Theorem 2.14 and Proposition 5.8, below. Theorem 2.16. There exists an at most countable set N ⊆ R such that the problem −∆_∞ u = f + c in Ω, u = g on ∂Ω, has a unique solution for every c ∈ R \ N.
Via similar arguments we obtain the corresponding statement for the discrete infinity Laplace equation.
Theorem 2.17. There exists an at most countable set N_ε ⊆ R such that the problem −∆_∞^ε u = f + c in Ω, u = g on ∂Ω, has a unique solution for every c ∈ R \ N_ε.
Examples are presented in [18, Section 5] of functions f for which the boundary value problem (2.22) has infinitely many solutions. The functions f given in these examples change sign in Ω. This non-uniqueness phenomenon is not well understood. It is even unknown whether uniqueness holds for (2.22) under the assumption that f ≥ 0. The most general uniqueness result is the following theorem, which first appeared in [15].
Theorem 2.18. Assume that f > 0, f < 0, or f ≡ 0 in Ω, and that u, −v ∈ USC(Ω̄) satisfy −∆_∞ u ≤ f ≤ −∆_∞ v in Ω, in the viscosity sense. Then max_Ω̄ (u − v) = max_∂Ω (u − v).
The uniqueness of infinity harmonic functions with given boundary data is due to Jensen [12], and new proofs and extensions have appeared in the papers of Barles and Busca [3], Aronsson, Crandall, and Juutinen [2], Crandall, Gunnarsson, and Wang [9], Peres, Schramm, Sheffield, and Wilson [18], and Lu and Wang [15]. With the exception of [18], which used probabilistic methods, all of the papers mentioned above use deep results in viscosity solution theory (as presented for example in [10]) as well as Aleksandrov's theorem on the twice differentiability of convex functions.
Recently, the authors [1] discovered a new proof of the uniqueness of infinity harmonic functions which does not invoke the uniqueness machinery of viscosity solution theory or Aleksandrov's theorem. Here we generalize the argument presented in [1] to give a new PDE proof of Theorem 2.18. Our argument uses only results for the finite difference infinity Laplace equation, and Proposition 5.3, below.
The next theorem is an explicit estimate of the difference between an infinity harmonic function and a solution of the infinity Laplace equation with small right-hand side, relative to fixed boundary data. Our argument is a combination of the methods we develop here with the "patching lemma" of Crandall, Gunnarsson, and Wang [9, Theorem 2.1].
Theorem 2.19. Suppose that u, v ∈ C(Ω̄) satisfy −∆_∞ u = 0 and −∆_∞ v = f in Ω, with u = v on ∂Ω. There exists a constant C > 0, depending only on diam(Ω), such that max_Ω̄ |u − v| ≤ C max{ ‖f‖_{L^∞(Ω)}^{1/3}, ‖f‖_{L^∞(Ω)} }. As an application of Theorem 2.19, we deduce an upper bound for the expected duration of a game of boundary-biased tug-of-war.
Corollary 2.20. For any given δ, ε > 0, in boundary-biased ε-step tug-of-war with no running payoff, Player I has a strategy that achieves an expected payoff of at least V_I^ε(x_0) − δ, for any initial point x_0 ∈ Ω, and for which the expected number of stages it takes for the game to terminate is less than Cδ^{−3}ε^{−2}. The constant C > 0 depends only on the oscillation max_∂Ω g − min_∂Ω g of the boundary data and the domain Ω.
The connection between Theorem 2.19 and Corollary 2.20 follows from an observation of Peres, Pete, and Somersille [17, Proposition 7.1], who proved Corollary 2.20 with upper bound C(δ)ε^{−2}, using a stability result of Lu and Wang [15]. We do not give the proof of Corollary 2.20 here, and instead refer the reader to the discussion in [17].
Let us mention that we can generalize Corollary 2.20 to any running payoff function f ∈ C(Ω) ∩ L^∞(Ω) for which we have u_ε → u. In this case, we deduce an upper bound of the form C(δ, f, g)ε^{−2} on the expected number of stages before termination, in boundary-biased ε-step tug-of-war, for some fixed Player I strategy that is expected to win at least V_I^ε − δ.
Overview of this article. Section 3 is devoted to the study of the finite difference equation −∆_∞^ε u = f, where we prove Theorems 2.7, 2.9, and 2.10 and study the regularity of solutions. In Section 4 we give our first proof of Theorem 2.11. In Section 5, we apply techniques developed in the previous sections to the continuum equation −∆_∞ u = f. We give a second proof of Theorem 2.11 and a new elementary proof of the uniqueness of infinity harmonic functions (a special case of Theorem 2.18), and prove Theorems 2.13, 2.14, 2.16, and 2.17. Section 6 is devoted to the relationship between continuous dependence of solutions of the finite difference equation and uniqueness for the continuum equation. There we complete the proof of Theorem 2.18 and prove Theorem 2.19, as well as obtain explicit estimates for the rate of convergence, as ε → 0, of solutions of the finite difference equation to those of the continuum equation. In Section 7 we highlight some interesting open problems.

The finite difference infinity Laplacian
In this section, we study the solutions u of the difference equation (3.1) −∆_∞^ε u = f in Ω, subject to the Dirichlet boundary condition (3.2) u = g on ∂Ω. Remark 3.1. We employ the following simple observation many times in this section. If u, v : Ω̄ → R are bounded functions and x_0 ∈ Ω is such that (u − v)(x_0) = max_Ω̄ (u − v), then S_ε^+ u(x_0) ≤ S_ε^+ v(x_0) and S_ε^− u(x_0) ≥ S_ε^− v(x_0). Generalizing an argument of Le Gruyer [13], who established the uniqueness of solutions of a difference equation on a finite graph, we prove Theorem 2.7.
Proof of Theorem 2.7. Assume that u, −v ∈ USC(Ω̄) satisfy the inequality (3.3) −∆_∞^ε u ≤ f ≤ −∆_∞^ε v in Ω, and suppose that (2.16) fails. By symmetry, we need to show only that u has a strict ε-local maximum in Ω. Define the set E := {x ∈ Ω̄ : (u − v)(x) = max_Ω̄ (u − v)}. The set E is nonempty, closed, and contained in Ω. Let l := max_E u. Since u is upper semicontinuous, the set F := {x ∈ E : u(x) = l} is nonempty and closed. From Remark 3.1 and the inequality (3.3), we see that (3.4) S_ε^+ u(x) = S_ε^+ v(x) and S_ε^− u(x) = S_ε^− v(x) for every x ∈ E. We claim that every point of F is a strict ε-local maximum of u. We need to show that u(y) < l for every y ∈ F_ε \ F, where F_ε := {y ∈ Ω̄ : dist(y, F) ≤ ε}. Suppose on the contrary that there is a point y ∈ Ω̄ \ F such that dist(y, F) ≤ ε and u(y) ≥ l. It follows that y ∉ E, by the definition of F. Pick x ∈ F with y ∈ Ω̄(x, ε). By reselecting y, if necessary, we may assume that (3.5) u(y) − u(x) = ρ_ε(x, y) S_ε^+ u(x) = ρ_ε(x, y) S_ε^+ v(x). Since y ∉ E, we see that v(y) − v(x) > u(y) − u(x), and hence S_ε^+ v(x) ≥ (v(y) − v(x))/ρ_ε(x, y) > (u(y) − u(x))/ρ_ε(x, y). This contradicts the second equality of (3.5), completing the proof.
In order to obtain Corollary 2.8 from Theorem 2.7, we need the following lemma.
Lemma 3.2. Suppose that u ∈ USC(Ω̄) satisfies the inequality (3.6) −∆_∞^ε u ≤ 0 in Ω. Then u has no strict ε-local maximum in Ω. Proof. Suppose on the contrary that u has a strict ε-local maximum at x_0 ∈ Ω. Select a nonempty closed set E ⊆ Ω which contains x_0 and for which u is constant on E, with u(y) < max_E u for every y ∈ Ω̄ \ E with dist(y, E) ≤ ε. Then E is nonempty, closed, and for any y ∈ ∂E we have S_ε^+ u(y) ≤ 0 < S_ε^− u(y), a contradiction to (3.6).
By an argument similar to the proofs of Theorem 2.7 and Lemma 3.2, we obtain the following proposition.
If u : Ω̄ → R is a bounded function, its upper semicontinuous envelope u^* is defined by u^*(x) := lim_{r↓0} sup{u(y) : y ∈ Ω̄, |y − x| ≤ r}. The lower semicontinuous envelope of u is u_* := −(−u)^*. The function u^* is upper semicontinuous, u_* is lower semicontinuous, and u_* ≤ u ≤ u^*.
Lemma 3.4. Suppose h ∈ USC(Ω̄) and u : Ω̄ → R are bounded from above and satisfy the inequality (3.10) −∆_∞^ε u ≤ h in Ω. Then u^* also satisfies (3.10). If in addition u ≤ g on ∂Ω, then u^* ≤ g on ∂Ω.
Proof. Fix x ∈ Ω, and let {x_k}_{k=1}^∞ ⊆ Ω be a sequence converging to x as k → ∞ and for which u(x_k) → u^*(x). We claim that (3.11) −∆_∞^ε u^*(x) ≤ h(x). Fix δ > 0, and select for each k a point y_k ∈ Ω̄(x_k, ε) such that (3.12) (u(x_k) − u(y_k))/ρ_ε(x_k, y_k) ≥ S_ε^− u(x_k) − δ. By taking a subsequence, we may assume that y_k → y ∈ Ω̄(x, ε). In the case that (3.13) ρ_ε(x_k, y_k) → ρ_ε(x, y) as k → ∞, we may pass to the limit k → ∞ in (3.12). Sending δ → 0, we obtain (3.11) provided that (3.13) holds. On the other hand, suppose that (3.13) fails. Then y ∈ ∂Ω with d(x, y) < ε, and y_k ∈ Ω(x, ε) \ ∂Ω for infinitely many k. By taking a subsequence, assume that y_k ∈ Ω(x, ε) \ ∂Ω for all k. Thus ρ_ε(x, y_k) = ρ_ε(x_k, y_k) = ε for every k, and we may again pass to the limit k → ∞ in (3.12). Sending δ → 0, we obtain (3.11) also in the case that (3.13) fails.
As g is continuous and u ≤ g on ∂Ω, by taking a subsequence we may assume that x_k ∈ Ω. Moreover, (3.15) and u ≤ g on ∂Ω imply that S_ε^− u(x_k) → ∞ as k → ∞. By the continuity of g, there exist 0 < δ ≤ ε and a large positive integer K such that S_ε^+ u(x_k) ≤ max{ (sup_Ω̄ u − u(x_k))/δ, 0 } for all k ≥ K. Since u is bounded above, the expression on the right side of the above inequality is finite. We deduce that S_ε^− u(x_k) − S_ε^+ u(x_k) → ∞ as k → ∞. Since h is bounded above, we have a contradiction to (3.10). Thus (3.15) is impossible. It follows that u^* ≤ g.
Proof. Suppose on the contrary that u ≢ u_*. Then there exists x_0 ∈ Ω such that (3.16) γ := (u − u_*)(x_0) = max_Ω̄ (u − u_*) > 0. Since the function u − u_* is upper semicontinuous, the set E := {u − u_* ≥ γ} = {u − u_* = γ} is closed. Since u_* is lower semicontinuous, by relabeling x_0, if necessary, we may assume that u_*(x_0) = min_E u_*. According to (3.16), this quantity is greater than γ/ε. Since u_* is lower semicontinuous, we may select y ∈ Ω̄(x_0, ε) such that (3.17) holds. Using (3.17) and u_*(y) < u_*(x_0), we see that y ∉ E. Thus u(y) − u_*(y) < γ. But this implies that (u(x_0) − u(y))/ρ_ε(x_0, y) exceeds S_ε^− u(x_0), which contradicts the definition of S_ε^− u(x_0). We now construct explicit supersolutions of the finite difference infinity Laplace equation, which we find useful below. Lemma 3.6. Denote in Ω.
Considering the possible values for δ_1 and recalling (3.19), we deduce an estimate for S^+_ε ϕ(x). To get the corresponding inequality for S^-_ε ϕ(x), we choose z_2 ∈ Ω̄(x, ε) along some minimal-length path between x_0 and x. Combining the estimates for S^+_ε ϕ(x) and S^-_ε ϕ(x) and dividing by ε, we obtain the lemma.
In the following lemma, we compare subsolutions of the finite difference equation to the value function for Player II for boundary-biased tug-of-war. This is the only place we employ probabilistic methods in this article.
Lemma 3.7. Assume that u : Ω̄ → R is a bounded function satisfying the inequality −∆^ε_∞ u ≤ f in Ω, with u ≤ g on ∂Ω. Then u ≤ V_II, where V_II is the value function for Player II with respect to boundary-biased tug-of-war, with running payoff function f and terminal payoff function g.
Proof. By replacing u with u^* and applying Lemma 3.4, we may assume that u ∈ USC(Ω̄). Let us suppose that Player I adopts the strategy of choosing, whenever the game token is at x ∈ Ω, a point x_I = z(x) ∈ Ω̄(x, ε) which nearly attains the supremum of u over Ω̄(x, ε). Then, regardless of the strategy of Player II, the resulting sequence of random variables {M_k} is a submartingale with respect to all prior events. In particular, if the game terminates with probability 1 and the random variable T is the stage at which the game ends, then the Markov property yields an estimate for u(x_0); the final inequality in this estimate follows from the fact that u ≤ g on ∂Ω. If the game does not terminate with probability 1, then u(x_0) is certainly less than the expected cost for Player II, as the latter is +∞. Thus, no matter which strategy Player II chooses to employ, her expected cost is always at least u(x_0). It follows that u ≤ V_II.
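The submartingale argument above can be illustrated numerically. The following sketch simulates ε-tug-of-war on the interval (0, 1) with simple pulling strategies for both players; the ε²-weighting of the running payoff and the specific strategies are assumptions of this illustration, not part of the proof.

```python
import random

def tug_of_war_value(x0, eps, g, f=lambda x: 0.0, n_games=20000, seed=0):
    """Monte Carlo estimate of a tug-of-war value on the interval (0, 1).

    Player I (the maximizer) always pulls right and Player II always pulls
    left -- reasonable for a nondecreasing terminal payoff g, and an
    assumption of this sketch.  The running payoff f is accumulated with
    weight eps^2 per round, one common normalization of the game.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_games):
        x, payoff = x0, 0.0
        while 0.0 < x < 1.0:
            payoff += eps * eps * f(x)
            # a fair coin toss decides who moves the token (distance <= eps)
            x = min(x + eps, 1.0) if rng.random() < 0.5 else max(x - eps, 0.0)
        total += payoff + g(x)
    return total / n_games
```

With f ≡ 0 and g(x) = x, the token performs a fair ±ε random walk, so the position is a martingale and the estimate is close to x0, in agreement with the infinity harmonic function u(x) = x.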
We now prove Theorem 2.9, using a simple adaptation of Perron's method.
Proof of Theorem 2.9. Our candidate for a maximal solution is

u(x) := sup { w(x) : w is bounded, −∆^ε_∞ w ≤ f in Ω, and w ≤ g on ∂Ω }.

According to Lemma 3.6, the admissible set is nonempty and u is bounded below. Also, by varying the parameters δ > 0 and c > 0 in Lemma 3.6, we see that u = g on ∂Ω, and that u is continuous at each boundary point x ∈ ∂Ω. We may also use Lemma 3.6 together with Remark 3.1 to see that u is bounded above.
Let us verify that u satisfies the inequality

(3.20) −∆^ε_∞ u ≤ f in Ω.

Fix x ∈ Ω and δ > 0, and select a function w such that −∆^ε_∞ w ≤ f in Ω and w ≤ g on ∂Ω, and for which w(x) ≥ u(x) − δ. Comparing S^±_ε u(x) with S^±_ε w(x) and sending δ → 0, we obtain (3.20). According to Lemma 3.4 and the definition of u, we have u ≥ u^*, and thus u = u^* ∈ USC(Ω̄). We now check that u satisfies the inequality (3.21).
It is easy to check that, by the upper semicontinuity of u, S^-_ε w(y) = S^-_ε u(y). Thus w satisfies the inequality −∆^ε_∞ w ≤ f in Ω, and w ≤ g on ∂Ω. Since w(x) > u(x), we obtain a contradiction to the definition of u. Thus u satisfies (3.21). According to Lemma 3.5, u ∈ C(Ω̄).
We have shown that u is a solution of −∆^ε_∞ u = f in Ω. By construction, u is maximal. By Lemma 3.7, u ≤ V_II. Since V_II is also a bounded, measurable solution, we have V_II^* ≤ u by Lemma 3.4 and the definition of u. Hence u = V_II, and the proof is complete.
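The maximal solution can also be approximated numerically by iterating the dynamic programming principle directly, rather than via the Perron construction. The sketch below does this on a uniform grid of [0, 1], taking ε equal to the grid spacing to sidestep boundary truncation issues; the ε²/2 weighting of the running term is an assumed normalization, not taken from this article.

```python
import numpy as np

def solve_fd_infinity_laplace(f, g_left, g_right, eps=0.05, h=0.05,
                              tol=1e-10, max_iter=200_000):
    """Fixed-point iteration for the scheme
        u(x) = (1/2)(max_{|y-x|<=eps} u + min_{|y-x|<=eps} u) + (eps^2/2) f(x)
    on a uniform grid of [0, 1] with Dirichlet data g_left, g_right."""
    x = np.arange(0.0, 1.0 + h / 2, h)
    n = len(x)
    k = int(round(eps / h))        # eps-neighborhood radius in grid points
    u = np.linspace(g_left, g_right, n)
    fx = f(x)
    for _ in range(max_iter):
        u_new = u.copy()
        for i in range(1, n - 1):
            lo, hi = max(0, i - k), min(n, i + k + 1)
            w = u[lo:hi]
            u_new[i] = 0.5 * (w.max() + w.min()) + 0.5 * eps * eps * fx[i]
        if np.max(np.abs(u_new - u)) < tol:
            return x, u_new
        u = u_new
    return x, u
```

For f ≡ 2 and zero boundary data, the iterates increase monotonically from the zero subsolution to a symmetric discrete solution lying within O(ε) of the continuum solution x(1 − x); the O(ε) discrepancy is concentrated near the interior maximum, where the gradient degenerates.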
We next prove estimates for solutions of the finite difference equation.

Lemma 3.8. Suppose that u : Ω̄ → R is bounded and satisfies (3.22).
Then there is a constant C, depending only on diam(Ω), such that the corresponding estimate holds for every x ∈ Ω and x_0 ∈ ∂Ω.
Using that ω_g is a modulus of continuity for g and recalling (2.1), it is straightforward to check that ϕ ≥ g on ∂Ω. By Corollary 2.8 and Lemma 3.6, we have ϕ ≥ u in Ω. In particular, the estimate of the lemma follows.

The next lemma uses a marching argument to obtain an interior continuity estimate. By "marching," we mean an iterated selection of points x_{k+1} which achieve S^+_ε u(x_k). This is analogous to following the gradient flow lines of a subsolution of the continuum equation. See [8, Section 6] for details.

Lemma 3.9. Suppose that u : Ω̄ → R is bounded and satisfies (3.22). Then there exists a constant C depending only on diam(Ω) such that (3.23) holds for every x, y ∈ Ω_r and r ≥ ε.
Proof. Suppose that (3.23) fails for C = c. Then we may assume there is an r ≥ ε and x_0 ∈ Ω_r satisfying (3.24). Assume first that u is continuous. Having chosen x_0, x_1, …, x_k ∈ Ω, select x_{k+1} ∈ Ω̄(x_k, ε) such that (3.25) holds. We halt this process at k = N, where x_N ∈ ∂Ω or εN ≥ diam(Ω). Notice that whenever x_k ∈ Ω, the slopes along the march are nondecreasing, and hence for each 1 ≤ k ≤ N such that x_k ∈ Ω we obtain a lower bound on u(x_k) − u(x_0). We claim that if c > 0 is large enough relative to diam(Ω), then x_N ∈ ∂Ω. Indeed, suppose that εN ≥ diam(Ω) and x_N ∈ Ω. Then, if c is large enough relative to diam(Ω), we derive a contradiction to the estimate for sup_Ω |u| deduced from (3.22). Thus we may assume that x_N ∈ ∂Ω and ε(N − 1) ≤ diam(Ω), and from (2.1) we deduce a bound near the boundary. Using Lemma 3.8, we obtain (3.27). Combining (3.27) with a calculation similar to (3.26), using εN ≤ diam(Ω), and recalling (3.24), we obtain a contradiction if c is large enough relative to diam(Ω). This completes the proof of (3.23) in the case that u is continuous. If u is not continuous, then we may only choose points x_{k+1} which approximately attain (3.25). However, since at most ⌈ε⁻¹ diam(Ω)⌉ selections are required, we may take the errors arbitrarily small.
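The marching construction can be made concrete in one dimension. The sketch below, an illustration rather than the proof's construction, assumes a grid function with ε equal to the grid spacing; it repeatedly steps to a point achieving the maximum defining S^+_ε u and records the slope gained at each step.

```python
import numpy as np

def march_to_boundary(u, i0, k):
    """From grid index i0, repeatedly step to an index in the k-point
    neighborhood achieving the maximum of u (realizing S^+_eps u), and
    record the gain in u at each step."""
    n = len(u)
    path, gains = [i0], []
    i = i0
    while 0 < i < n - 1:
        lo, hi = max(0, i - k), min(n, i + k + 1)
        j = lo + int(np.argmax(u[lo:hi]))
        if j == i:                 # local maximum: S^+_eps u(x_i) = 0
            break
        gains.append(u[j] - u[i])  # proportional to S^+_eps u(x_i)
        path.append(j)
        i = j
    return path, gains
```

For the convex grid function u(x) = x², a subsolution when f ≡ 0, the march from the center reaches the boundary in at most ⌈ε⁻¹ diam(Ω)⌉ steps and the recorded slopes are nondecreasing, mirroring the argument above.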
Combining the two previous lemmas, we obtain a global continuity estimate with respect to ρ_ε(x, y).
Proof. Fix x, y ∈ Ω and set r := ρ_ε(x, y)^{1/2}. If x, y ∈ Ω_r, then (3.23) implies the desired bound for some C depending only on diam(Ω). If x, y ∈ Ω̄ \ Ω_{2r}, then (3.22) and the triangle inequality imply the bound for another C depending only on diam(Ω). If x ∈ Ω_{2r} and y ∈ Ω̄ \ Ω_r, then choose z ∈ Ω_{2r} \ Ω_r on a minimal path between x and y. Since z lies on a minimal path between x and y, we can combine the two previous estimates to obtain (3.29).
In the case that f ≡ 0 and g is Lipschitz, the proofs of the estimates above yield a bit more. In particular, we show that in this case the solution of the finite difference problem is a minimizing Lipschitz extension of g to Ω̄.
Proposition 3.11. Assume g ∈ C(∂Ω) is Lipschitz with constant K with respect to the path metric d. Then the unique solution of the problem (3.30) is also Lipschitz with constant K with respect to the path metric d.
Observe that ũ is Lipschitz with constant K with respect to the path metric d. By Corollary 2.8, it is enough to show that ũ is a subsolution of (3.30). By Corollary 2.8 and Lemma 3.6, we know that u ≤ ϕ in Ω, where ϕ is as in Lemma 3.6 and x_0 ∈ ∂Ω is arbitrary. In particular, ũ ≤ g on ∂Ω.
If we repeat the marching argument of Lemma 3.9, using the inequality u ≤ ϕ in place of (3.22) when deriving (3.28), we obtain an estimate for S^+_ε ũ(x). By a symmetric calculation, we obtain the corresponding estimate for S^-_ε ũ(x), and thus ũ is a subsolution of (3.30).
We conclude this section by proving Theorem 2.10. Our proof is a compactness argument, using Theorem 2.7 and the following lemma.
If the first equality in (3.33) fails for y_+, then |y_0 − y_+| < ε and y_+ ∈ Ω. But then we can find a point z ∈ B(x_0, r) with |z − y_+| < ε, and we deduce that εS^+_ε u(z) ≥ u(y_+) − u(z) = u(y_+) − u(y_0) = εα, a contradiction to the definition of r. If the second equality in (3.33) fails for y_+, then by a similar argument we find a point z ∈ B(x_0, r) with S^+_ε u(z) ≥ α, a contradiction. Using equation (3.32) and very similar arguments, we establish (3.33) for y_−. According to (3.33), both y_+ and y_− lie on the ray from x_0 through y_0, and |y_0 − y_±| = min{ε, dist(y_0, ∂Ω)}. Due to the convexity of Ω, there is only one such point, and thus y_+ = y_−. This obviously contradicts the assumption that α > 0.
Proof of Theorem 2.10. Suppose on the contrary that for some ε > 0 there is a sequence δ_j ↓ 0 and a sequence {f_j} ⊆ C(Ω) ∩ L^∞(Ω) such that ‖f_j‖_{L^∞(Ω)} ≤ δ_j, and the boundary-value problem (3.34) has more than one solution. In particular, by Theorems 2.7 and 2.9, there exists a solution u_j of (3.34) with a strict ε-local maximum point x_j. We may further assume that x_j → x_0 as j → ∞. According to Lemma 3.8, we have the estimate ‖u_j‖_{L^∞(Ω)} ≤ C independently of j. Thus we may define limit functions u and v in Ω. Moreover, by the estimate (3.22), it is clear that u(y_k) → u(y) and v(y_k) → v(y) whenever y_k → y ∈ ∂Ω. According to Lemma 3.4, the corresponding inequalities hold. Since S^+_ε u_j(x_j) = 0 for all j, we deduce that S^-_ε u(x_0) = S^+_ε u(x_0) = 0. Thus u is constant on the set Ω̄(x_0, ε). Since u = g on ∂Ω and we assumed that g is not constant on any neighborhood of ∂Ω, we deduce that x_0 ∈ Ω_ε. Applying Lemma 3.12, we deduce that u is constant on Ω̄, which contradicts our hypothesis on g.

4. Convergence
In this section, we give our first proof of Theorem 2.11. While the second proof in Section 5 is much faster, this first proof has several advantages. For example, it includes an explicit derivation of the continuum infinity Laplace equation from the finite difference equation by Taylor expansion. Also, it is a PDE analogue of the probabilistic proof of convergence appearing in [18], and is thus of independent interest.
Demonstrating the convergence, as ε → 0, of the value functions for ε-step tug-of-war to a viscosity solution of the infinity Laplace equation is tricky in the case that f ≡ 0, due to the singularity of the infinity Laplacian. In [18], this difficulty was overcome using an interesting probabilistic argument. The authors introduced a slightly modified tug-of-war game in which one of the players is given more options, and thus a small advantage. The value functions of these favored tug-of-war games are shown to be very close to those of standard tug-of-war, and also possess a monotonicity property along dyadic sequences {2^{-k} ε}_{k=1}^∞. This monotonicity property was used by the authors of [18] to prove convergence.
From our point of view, the source of this monotonicity property is a discrete version of the increasing slope estimate (the continuum version of which is Lemma 5.2, below). Observe that if u ∈ USC(Ω) satisfies −∆^ε_∞ u ≤ f, and y ∈ Ω̄(x, ε) attains the maximum defining S^+_ε u(x), then S^+_ε u(y) is bounded below by S^+_ε u(x), up to an error involving f. This simple fact is the basis for the following analytic analogue of the monotonicity properties possessed by the favored tug-of-war games.
We will consider the cases p ≠ 0 and p = 0 separately. In the former case, |Dϕ| is bounded away from zero near the origin. Since u_j → u locally uniformly, for large enough j we may select x_j ∈ B̄(0, ε_j) at which the relevant maximum is attained. Using Lemma 4.2 and the upper semicontinuity of ∆^+_∞ ϕ near 0, we deduce that (4.6) holds in the case that p ≠ 0. We now consider the more subtle case that p = Dϕ(0) = 0. We must show that (4.7) holds. Fix a large positive integer k ≥ 1. For each j, define the rescaled functions v_j and f_j, where R := 2^k − 1. According to Lemma 4.1, for every j we have the corresponding finite difference inequality. It is clear that v_j → u and f_j → f locally uniformly in Ω as j → ∞. Thus we may select a sequence x_j → x_0 such that, for large enough j, the function v_j − ϕ attains its maximum in the ball B̄(x_j, 2^k ε_j) at x_j.
We claim that we may choose z_j such that

(4.9) |z_j| ≥ 1 − 2^{-k}

for sufficiently large j ≥ 1.
If ϕ has no strict local maximum in B(x_j, 2^k ε_j), then we may choose |z_j| = 1. If ϕ does have a strict local maximum in B(x_j, 2^k ε_j), then observe that M is negative definite and the strict local maximum must be at the origin. Thus

(4.10) x_j + 2^k ε_j z_j = 0,

and u also has a strict local maximum at the origin. For large enough j, this implies that v_j(x) = u(0) for every x ∈ B̄(0, Rε_j). These facts imply that v_j − ϕ cannot have a local maximum in the ball B(0, Rε_j). Recalling (4.10), we see that (4.9) holds. Having demonstrated (4.9), we calculate and rearrange to obtain −⟨M z_j, z_j⟩ ≤ f_j(x_j).
By passing to limits along a subsequence and using (4.9), we deduce the corresponding inequality for the limit. Sending k → ∞, we obtain (4.7), and the proof is complete.
Remark 4.3. In the recent paper by Charro, García Azorero, and Rossi [6], a proof that the value functions for standard ε-step tug-of-war converge as ε → 0 to the unique viscosity solution of (2.22) is attempted in the case that f ≡ 0, and with mixed Dirichlet-Neumann boundary conditions. The authors mistakenly state that the value functions for standard ε-step tug-of-war are continuous, an assumption which is needed in their proof. However, the argument in [6] can be repaired with straightforward modifications. In fact, in a recent preprint, Charro, García Azorero, and Rossi [5] correct this mistake and extend the results in [6].
The following proposition is needed in order to obtain Corollary 2.12.
Proposition 4.4. Let {ε_j}_{j=1}^∞ be a sequence of positive real numbers converging to 0, and suppose that for each j, the function u_j ∈ C(Ω) is a solution of (2.20) with u_j = g on ∂Ω. Then the sequence {u_j} is uniformly equicontinuous.
Corollary 2.12 now follows immediately from Theorems 2.9 and 2.11, Proposition 4.4, and the Arzelà-Ascoli theorem.

5. Applications to the Infinity Laplace Equation
In this section we study solutions of the (continuum) infinity Laplace equation. Before beginning, we extend our definition of viscosity subsolution. In Definition 2.1 we defined −∆_∞ u ≤ f only for continuous f. However, the definition makes no use of the continuity of f, and thus −∆_∞ u ≤ h makes sense for any h : Ω → R. We use this for h ∈ USC(Ω) below.
The following lemma establishes that solutions of the infinity Laplace equation "enjoy comparisons with quadratic cones," and is a special case of [15, Theorem 2.2]. A slightly less general observation appeared in [18]. The notion of comparisons with cones was introduced in the work of Crandall, Evans, and Gariepy [7] for infinity harmonic functions. Our proof is new, and much simpler than the one given in [15].
Let x_0 ∈ R^n and a, b, c ∈ R, and let ϕ denote the corresponding quadratic cone. Proof. Without loss of generality, we assume that a = 0 and x_0 = 0. We argue by contradiction, and assume that (5.3) holds but (5.4) fails. We may select c̃ slightly greater than c such that if we set φ̃(x) := b|x| − (1/2)c̃|x|², then the function u − φ̃ attains its maximum at a point x_1 ∈ V. Notice that the gradient of φ̃ at a point x ≠ 0 is given by Dφ̃(x) = (b − c̃|x|) x/|x|, and for x ≠ 0, D²φ̃(x) has eigenvalues −c̃ with multiplicity 1 and associated eigenvector x, and b|x|⁻¹ − c̃ with multiplicity n − 1. We break our argument into multiple cases and deduce a contradiction in each of them.
By selecting α small enough, we may assume that u − ψ_{α,β} has a maximum at some x_2 ∈ Ω. If x_2 can be selected such that |x_2| ≠ r + β, then the preceding discussion yields the contradiction −∆^+_∞ ψ_{α,β}(x_2) = c̃ > c. If every point x of local maximum of u − ψ_{α,β} in V satisfies |x| = r + β, then we may make β slightly larger. Now the difference u − ψ_{α,β} must have a maximum in the set {|x| < r + β}, completing the proof.
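The eigenvalue computation for D²φ̃ used above can be checked numerically. The following sketch (with arbitrary illustrative values of b and c̃, and an arbitrary base point on the unit sphere, all assumptions of the illustration) builds a central finite difference Hessian and compares its spectrum to the predicted eigenvalues −c̃ and b|x|⁻¹ − c̃.

```python
import numpy as np

def num_hessian(f, x, h=1e-5):
    """Central finite difference Hessian of f : R^n -> R at the point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

b, c = 2.0, 0.7   # illustrative values of b and c-tilde
phi = lambda x: b * np.linalg.norm(x) - 0.5 * c * np.dot(x, x)
x0 = np.array([0.6, 0.8])          # a point with |x0| = 1
eigs = np.sort(np.linalg.eigvalsh(num_hessian(phi, x0)))
# predicted spectrum at |x0| = 1: -c (radial) and b - c (tangential)
```

With these values the computed eigenvalues are approximately −0.7 and 1.3, matching the radial eigenvalue −c̃ and the tangential eigenvalue b|x|⁻¹ − c̃.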
Following Crandall [8], we introduce the increasing slope estimates (5.7) and (5.8) for subsolutions of the infinity Laplace equation. Our proof follows the argument given in [8] for the case f ≡ 0. Before stating the estimates, we introduce some notation. Let us define the local Lipschitz constant L(u, x) of a function u : Ω → R at a point x ∈ Ω by

L(u, x) := lim_{r↓0} Lip(u, B(x, r)) = inf { Lip(u, B(x, r)) : 0 < r < dist(x, ∂Ω) }.
If, in addition, y ∈ B̄(x, r) is such that u(y) = sup_{B̄(x,r)} u, then (5.8) holds.

Proof. Let r, x, and y be as above, and define the comparison function ϕ. We may assume the quantity on the right-hand side of (5.8) is positive, as otherwise (5.8) is trivial. Since u touches ϕ from below at y and ϕ is increasing as we move towards ∂B(x, |y − x|), we have L(u, y) ≥ |Dϕ(y)|. Thus (5.8) holds.
The following proposition is essential to our theory for the continuum equation. It shows that subsolutions of the continuum equation are essentially subsolutions of the finite difference equation, possibly with a small error. This proposition bears a resemblance to Lemma 4.1, but as we see below, it is much stronger.
Combining (5.11), (5.12), and (5.13), the last of which follows from the increasing slope estimates (5.7) and (5.8), we deduce that −∆^ε_∞ u^ε(x_0) ≤ h^{2ε}(x_0).

As a first application of Proposition 5.3, we give a second proof of Theorem 2.11 which is more efficient than the proof in the previous section. We follow the outline of the first proof, but manage to avoid the dyadic sequences and the appeal to Lemma 4.2. The idea is to perturb the test function rather than the sequence {u_j}, in the spirit of Evans [11], and then invoke Proposition 5.3.
Second proof of Theorem 2.11. Suppose ε_j ↓ 0, u_j → u locally uniformly in Ω, and each u_j ∈ USC(Ω) satisfies −∆^{ε_j}_∞ u_j ≤ f in Ω_{ε_j}. Suppose ϕ ∈ C^∞(Ω) is such that u − ϕ has a strict local maximum at a point x_0 ∈ Ω. We must show that (5.14) holds. Since ϕ is a viscosity supersolution of the perturbed equation, we may assume there exist x_j ∈ Ω_{2ε_j} such that x_j → x_0 and the corresponding maxima are attained. Passing to the limit j → ∞, using the upper semicontinuity of x ↦ ∆^+_∞ ϕ(x), we obtain (5.14).
As a second application of Proposition 5.3, we give a simple proof of Jensen's theorem on the uniqueness of infinity harmonic functions.
Theorem 5.4 (Jensen [12]). Suppose that u, −v ∈ USC(Ω̄) satisfy −∆_∞ u ≤ 0 ≤ −∆_∞ v in Ω. Then

max_Ω̄ (u − v) = max_∂Ω (u − v).

By Proposition 5.3, u and v are sub- and supersolutions, respectively, of the finite difference equation, to which the comparison principle applies; passing to the limit ε → 0 and using the upper semicontinuity of u and −v yields the theorem.

We need the following continuity estimates for solutions of the continuum infinity Laplace equation.
The inequality (5.19) immediately gives the estimate (5.20) for some constant C depending only on diam(Ω). Since u is bounded, the increasing slope estimate (5.7) combined with (5.5) and (5.6) implies that u is locally Lipschitz, and (5.21) holds for each r > 0. Combining (5.20) and (5.21) yields (5.22) for every r > 0. It is now easy to combine (5.19) and (5.22) to obtain (5.18).
Remark 5.7. The above lemma contains the continuum analogues of Lemma 3.8 and Lemma 3.10, but does not contain the analogue of Lemma 3.9. We could obtain this by studying the gradient flow lines, as in [8, Section 6], using the increasing slope estimate (5.8). It can also be obtained by passing to the limit ε → 0 in the estimate (3.23), in the case that we have uniqueness for the continuum equation.
We now obtain Theorem 2.13 from Proposition 5.3 and Theorem 2.11.
The next proposition is well-known (see, e.g., [15, Theorem 3.1]). Our proof, which is very similar to the proof of Theorem 5.4 above, is based on Proposition 5.3 and does not invoke deep viscosity solution machinery.
Remark 5.9. Yu [19] has improved Proposition 5.8 by showing that, under the same hypotheses, u ≤ v and u ≢ v. It follows immediately that the set {u < v} is dense in Ω. Moreover, by modifying the argument in [19], it can be shown that u(x) < v(x) whenever L(u, x) > 0 or L(v, x) > 0. This argument seems to break down at points x for which L(u, x) = L(v, x) = 0, due to the singularity of the infinity Laplacian. The question of whether we have u < v in Ω, in general, under the hypotheses of Proposition 5.8, appears to be open.
Using Theorem 2.13 and Proposition 5.8, we now prove Theorem 2.14.
Proof of Theorem 2.14. For each k ≥ 1, let u_k ∈ C(Ω̄) be the solution, given by Corollary 2.12, of the corresponding approximating equation. By applying the estimate in Lemma 5.6 and passing to a subsequence, we may assume that there exists a function u ∈ C(Ω̄) such that u_k → u uniformly on Ω̄ as k → ∞. According to Theorem 2.13, the function u is a solution of (2.22). According to Proposition 5.8, any subsolution v of (2.22) with v ≤ g on ∂Ω satisfies u_k ≥ v, and hence u ≥ v in Ω. That is, u = ū is the maximal solution of (2.22). In a similar way, we can argue that (2.22) possesses a minimal solution u̲.
Proof of Theorem 2.16. For each c ∈ R, let ū_c and u̲_c denote the maximal and minimal solutions of the problem (5.23), which exist by Theorem 2.14. Let N := {c ∈ R : ū_c ≢ u̲_c} be the set of points for which the problem (5.23) has two or more solutions. The function I^+, defined in terms of ū_c, is increasing, and hence can have only countably many points of discontinuity. Thus N is at most countable.
Proof of Theorem 2.17. The argument is nearly identical to the proof of Theorem 2.16. We simply replace the mentions of Theorem 2.14 and Proposition 5.8 by Theorem 2.9 and Remark 3.1, respectively.
6. Uniqueness, continuous dependence, and rates of convergence

In this section, we study the relationship between uniqueness for the continuum infinity Laplace equation and continuous dependence of solutions of the finite difference equation. As a corollary of this study, we obtain explicit estimates of the rate at which solutions of the finite difference equation converge to solutions of the continuum equation.
Adapting the proof of Theorem 5.4, we show that continuous dependence at f for the finite difference equation implies uniqueness for the continuum equation with f on the right-hand side. Arguing as before, we see from Proposition 5.3 that (6.1) holds. By hypothesis, we have the corresponding estimate. Since u − v is upper semicontinuous and H_f(0) = 0, sending ε → 0 yields the proposition.
Remark 6.2. A compactness argument combined with Theorem 2.11 easily yields a converse to Proposition 6.1.
We now consider a simple continuous dependence result.
We remark that the proof below refers to a lemma of Crandall, Gunnarsson, and Wang [9]. This is the only place where our presentation fails to be self-contained.
Proof of Theorem 2.19. We assume first that 0 < u_γ < 1 for all 0 ≤ γ < 1/8. Fix 0 < γ < 1/8.

Proof. The proof is very similar to the proof of Proposition 6.4, above. The only difference is that the hypothesis that |f| > a is not needed, since for f ≡ 0 we also have f^{2ε} ≡ 0.

7. Open Problems
Let us discuss some open problems which the authors find interesting. We begin by repeating a question posed in [18].
Open Problem 7.1. If f ≥ 0 or f ≤ 0 in Ω, does the Dirichlet problem −∆_∞ u = f in Ω, u = g on ∂Ω, have a unique solution?
The authors believe the answer to Open Problem 7.1 is Yes. Motivated by Theorem 2.7, we offer the following conjecture.
Conjecture 7.2. Assume that the functions u, −v ∈ USC(Ω̄) and f ∈ C(Ω) satisfy −∆_∞ u ≤ f and −∆_∞ v ≥ f in Ω. Suppose also that u has no strict local maximum, or v has no strict local minimum, in Ω. Then

max_Ω̄ (u − v) = max_∂Ω (u − v).

Notice that if Conjecture 7.2 is true, then the answer to Open Problem 7.1 is Yes. Why are we unable to prove Conjecture 7.2? As explained in Section 6, this question is related to the study of the dependence of the solutions ū_ε and u̲_ε of the finite difference equation on the function f and the parameter ε > 0. If we could understand this dependence, we believe that we could prove Conjecture 7.2.
Another related question is the following.
Open Problem 7.3. In the case that the continuum problem does not have uniqueness, is it true that u̲_ε → u̲ and ū_ε → ū as ε → 0?
Notice that if Open Problem 7.3 could be resolved in the affirmative, then it would follow that for any f for which we have uniqueness for the finite difference problem for all ε > 0, we also have uniqueness for the continuum problem. Recalling Corollary 2.8, this would imply that the answer to Open Problem 7.1 is Yes, and we would also deduce a continuum analogue of Theorem 2.10.
Open Problem 7.4. Is the dependence of the right-hand side of the estimate (2.27) on the parameter γ optimal?

Open Problem 7.5. Lemma 3.10 provides a continuity estimate for solutions of the finite difference equation on "scales larger than ε." This can be improved in the case that f ≡ 0, in which case we have a uniform continuity estimate on "all scales," as shown in Proposition 3.11. Is the hypothesis f ≡ 0 necessary, or should we expect a nonzero running payoff function to create oscillations in the value functions on small scales which cannot be controlled?